Frontiers of WWW Research and Development -- APWeb 2006: 8th Asia-Pacific Web Conference, Harbin, China, January 16-18, 2006, Proceedings (Lecture Notes in Computer Science, 3841) 9783540311423, 3540311424

This book constitutes the refereed proceedings of the 8th Asia-Pacific Web Conference, APWeb 2006, held in Harbin, China, in January 2006.


Language: English. Pages: 1248 [1244]. Year: 2006.


Table of contents :
Frontmatter
Keynote Papers
Applications Development for the Computational Grid
Strongly Connected Dominating Sets in Wireless Sensor Networks with Unidirectional Links
Mobile Web and Location-Based Services
The Case of the Duplicate Documents Measurement, Search, and Science
Regular Papers
An Effective System for Mining Web Log
Adapting K-Means Algorithm for Discovering Clusters in Subspaces
Sample Sizes for Query Probing in Uncooperative Distributed Information Retrieval
The Probability of Success of Mobile Agents When Routing in Faulty Networks
Clustering Web Documents Based on Knowledge Granularity
XFlat: Query Friendly Encrypted XML View Publishing
Distributed Energy Efficient Data Gathering with Intra-cluster Coverage in Wireless Sensor Networks
QoS-Driven Web Service Composition with Inter Service Conflicts
An Agent-Based Approach for Cooperative Data Management
Transforming Heterogeneous Messages Automatically in Web Service Composition
User-Perceived Web QoS Measurement and Evaluation System
An RDF Storage and Query Framework with Flexible Inference Strategy
An Aspect-Oriented Approach to Declarative Access Control for Web Applications
A Statistical Study of Today's Gnutella
Automatically Constructing Descriptive Site Maps
TWStream: Finding Correlated Data Streams Under Time Warping
Supplier Categorization with K-Means Type Subspace Clustering
Classifying Web Data in Directory Structures
Semantic Similarity Based Ontology Cache
In-Network Join Processing for Sensor Networks
Transform BPEL Workflow into Hierarchical CP-Nets to Make Tool Support for Verification
Identifying Agitators as Important Blogger Based on Analyzing Blog Threads
Detecting Collusion Attacks in Security Protocols
Role-Based Delegation with Negative Authorization
Approximate Top-k Structural Similarity Search over XML Documents
Towards Enhancing Trust on Chinese E-Commerce
Flexible Deployment Models for Location-Aware Key Management in Wireless Sensor Networks
A Diachronic Analysis of Gender-Related Web Communities Using a HITS-Based Mining Tool
W3 Trust-Profiling Framework (W3TF) to Assess Trust and Transitivity of Trust of Web-Based Services in a Heterogeneous Web Environment
Image Description Mining and Hierarchical Clustering on Data Records Using HR-Tree
Personalized News Categorization Through Scalable Text Classification
The Adaptability of English Based Web Search Algorithms to Chinese Search Engines
A Feedback Based Framework for Semi-automic Composition of Web Services
Fast Approximate Matching Between XML Documents and Schemata
Mining Query Log to Assist Ontology Learning from Relational Database
An Area-Based Collaborative Sleeping Protocol for Wireless Sensor Networks
F@: A Framework of Group Awareness in Synchronous Distributed Groupware
Adaptive User Profile Model and Collaborative Filtering for Personalized News
Context Matcher: Improved Web Search Using Query Term Context in Source Document and in Search Results
Weighted Ontology-Based Search Exploiting Semantic Similarity
Determinants of Groupware Usability for Community Care Collaboration
Automated Discovering of What is Hindering the Learning Performance of a Student
Sharing Protected Web Resources Using Distributed Role-Based Modeling
Concept Map Model for Web Ontology Exploration
A Resource-Adaptive Transcoding Proxy Caching Strategy
Optimizing Collaborative Filtering by Interpolating the Individual and Group Behaviors
Extracting Semantic Relationships Between Terms from PC Documents and Its Applications to Web Search Personalization
Detecting Implicit Dependencies Between Tasks from Event Logs
Implementing Privacy Negotiations in E-Commerce
A Community-Based, Agent-Driven, P2P Overlay Architecture for Personalized Web
Providing an Uncertainty Reasoning Service for Semantic Web Application
Indexing XML Documents Using Self Adaptive Genetic Algorithms for Better Retrieval
GCC: A Knowledge Management Environment for Research Centers and Universities
Towards More Personalized Web: Extraction and Integration of Dynamic Content from the Web
Supporting Relative Workflows with Web Services
Text Based Knowledge Discovery with Information Flow Analysis
Short Papers
Study on QoS Driven Web Services Composition
Optimizing the Data Intensive Mediator-Based Web Services Composition
Role of Triple Space Computing in Semantic Web Services
Modified ID-Based Threshold Decryption and Its Application to Mediated ID-Based Encryption
Materialized View Maintenance in Peer Data Management Systems
Cubic Analysis of Social Bookmarking for Personalized Recommendation
MAGMS: Mobile Agent-Based Grid Monitoring System
A Computational Trust Model for Semantic Web Based on Bayesian Decision Theory
Efficient Dynamic Traffic Navigation with Hierarchical Aggregation Tree
A Color Bar Based Affective Annotation Method for Media Player
Robin: Extracting Visual and Textual Features from Web Pages
Generalized Projected Clustering in High-Dimensional Data Streams
An Effective Web Page Layout Adaptation for Various Resolutions
XMine: A Methodology for Mining XML Structure
Multiple Join Processing in Data Grid
A Novel Architecture for Realizing Grid Workflow Using Pi-Calculus Technology
A Chord-Based Novel Mobile Peer-to-Peer File Sharing Protocol
Web-Based Genomic Information Integration with Gene Ontology
Table Detection from Plain Text Using Machine Learning and Document Structure
Efficient Mining Strategy for Frequent Serial Episodes in Temporal Database
Efficient and Provably Secure Client-to-Client Password-Based Key Exchange Protocol
Effective Criteria for Web Page Changes
WordRank-Based Lexical Signatures for Finding Lost or Related Web Pages
A Scalable Update Management Mechanism for Query Result Caching Systems at Database-Driven Web Sites
Building Content Clusters Based on Modelling Page Pairs
IRFCF: Iterative Rating Filling Collaborative Filtering Algorithm
A Method to Select the Optimum Web Services
A New Methodology for Information Presentations on the Web
Integration of Single Sign-On and Role-Based Access Control Profiles for Grid Computing
An Effective Service Discovery Model for Highly Reliable Web Services Composition in a Specific Domain
Using Web Archive for Improving Search Engine Results
Closed Queueing Network Model for Multi-tier Data Stream Processing Center
Optimal Task Scheduling Algorithm for Non-preemptive Processing System
A Multi-agent Based Grid Service Discovery Framework Using Fuzzy Petri Net and Ontology
Modeling Identity Management Architecture Within a Social Setting
Ontological Engineering in Data Warehousing
Mapping Ontology Relations: An Approach Based on Best Approximations
Building a Semantic P2P Scientific References Sharing System with JXTA
Named Graphs as a Mechanism for Reasoning About Provenance
Discovery of Spatiotemporal Patterns in Mobile Environment
Visual Description Conversion for Enhancing Search Engines and Navigational Systems
Reusing Experiences for an Effective Learning in a Web-Based Context
Special Sessions on e-Water
Collaboration Between China and Australia: An e-Water Workshop Report
On Sensor Network Segmentation for Urban Water Distribution Monitoring
Using the Shuffled Complex Evolution Global Optimization Method to Solve Groundwater Management Models
Integrating Hydrological Data of Yellow River for Efficient Information Services
Application and Integration of Information Technology in Water Resources Informatization
An Empirical Study on Groupware Support for Water Resources Ontology Integration
Ontology Mapping Approach Based on OCL
Object Storage System for Mass Geographic Information
The Service-Oriented Data Integration Platform for Water Resources Management
Construction of Yellow River Digital Project Management System
Study on the Construction and Application of 3D Visualization Platform for the Yellow River Basin
Industry Papers
A Light-Weighted Approach to Workflow View Implementation
RSS Feed Generation from Legacy HTML Pages
Ontology Driven Securities Data Management and Analysis
Context Gallery: A Service-Oriented Framework to Facilitate Context Information Sharing
A Service-Oriented Architecture Based Macroeconomic Analysis \& Forecasting System
A Web-Based Method for Building Company Name Knowledge Base
Demo Sessions
Healthy Waterways: Healthy Catchments -- An Integrated Research/Management Program to Understand and Reduce Impacts of Sediments and Nutrients on Waterways in Queensland, Australia
Groundwater Monitoring in China
The Digital Yellow River Programme
Web Services Based State of the Environment Reporting
COEDIG: Collaborative Editor in Grid Computing
HVEM Grid: Experiences in Constructing an Electron Microscopy Grid
WISE: A Prototype for Ontology Driven Development of Web Information Systems
DSEC: A Data Stream Engine Based Clinical Information System
SESQ: A Novel System for Building Domain Specific Web Search Engines
Digital Map: Animated Mode
Dynamic Voice User Interface Using VoiceXML and Active Server Pages
WebVine Suite: A Web Services Based BPMS
Adaptive Mobile Cooperation Model Based on Context Awareness
An Integrated Network Management System
Ichigen-San: An Ontology-Based Information Retrieval System
A Database Monitoring and Disaster Recovery System
IPVita: An Intelligent Platform of Virtual Travel Agency
LocalRank: A Prototype for Ranking Web Pages with Database Considering Geographical Locality
Automated Content Transformation with Adjustment for Visual Presentation Related to Terminal Types
Backmatter

Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board

David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, New York University, NY, USA
Doug Tygar, University of California, Berkeley, CA, USA
Moshe Y. Vardi, Rice University, Houston, TX, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

3841

Xiaofang Zhou, Jianzhong Li, Heng Tao Shen, Masaru Kitsuregawa, Yanchun Zhang (Eds.)

Frontiers of WWW Research and Development – APWeb 2006
8th Asia-Pacific Web Conference
Harbin, China, January 16-18, 2006
Proceedings


Volume Editors

Xiaofang Zhou, Heng Tao Shen
The University of Queensland, School of Information Technology and Electrical Engineering
Brisbane, QLD 4072, Australia
E-mail: {zxf, shenht}@itee.uq.edu.au

Jianzhong Li
Harbin Institute of Technology, Department of Computer Science and Engineering
92 West DaZhi St., Harbin, China
E-mail: [email protected]

Masaru Kitsuregawa
The University of Tokyo, Kitsuregawa Laboratory, IIS 3rd Div.
4-6-1 Komaba, Meguro-ku, Tokyo 135–8505, Japan
E-mail: [email protected]

Yanchun Zhang
Victoria University, School of Computer Science and Mathematics
Melbourne City MC, VIC 8001, Australia
E-mail: [email protected]

Library of Congress Control Number: 2005938105
CR Subject Classification (1998): H.3, H.4, H.5, C.2, K.4
ISSN 0302-9743
ISBN-10 3-540-31142-4 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-31142-3 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media (springer.com)

© Springer-Verlag Berlin Heidelberg 2006
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
SPIN: 11610113 06/3142 543210

Message from the General Chair

It is our great pleasure to welcome you to Harbin for the 8th Asia Pacific Web Conference (APWeb 2006). The winter snow festival was one of the reasons for choosing January, and the beautiful Songhuajiang riverside as the setting. Since its start in 1998, APWeb has been a premier conference on theoretical and practical aspects of Web engineering in the Asia Pacific region. Previous APWeb conferences were held in Beijing (1998), Hong Kong (1999), Xi’an (2000), Changsha (2001), Xi’an (2003), Hangzhou (2004) and Shanghai (2005).

Program Co-chairs Jianzhong Li and Xiaofang Zhou put in a lot of effort in the difficult and highly competitive selection of research papers. APWeb 2006 attracted more than 400 papers from 23 countries and regions. Industrial Program Chair Xiaowei Kan, Panel and Tutorial Chair Athman Bouguettaya, and Demo Chairs Yoshiharu Ishikawa and Haiyang Wang all contributed significantly to make an attractive program. We thank David Abramson, Ding-Zhu Du, Ling Liu and Justin Zobel for their keynotes as the highlights of the conference.

So many people worked hard to make APWeb 2006 successful. The excellent conference and banquet venues were managed by Local Arrangement Chair Hong Gao. Publicity Chairs Chengfei Liu and Ge Yu promoted this conference. Treasurer Qing Li played an important role in financial management.

This year, in addition to the conference, we held various workshops. Four workshops on emerging topics were organized; they were selected and coordinated by Workshop Chair Jeffrey Yu. The Workshop on Metropolis/Enterprise Grid and Applications (MEGA) was organized by Minglu Li. The Workshop on Sensor Network (IWSN) was run by Xiaohua Jia, Jinbao Li, and Yingshu Li. The Workshop on Web-based Internet Computing for Science and Engineering (ICSE) was run by Jack Dongarra and Jun Ni, and the Workshop on XML Research and Applications (XRA) was organized by Wei Wang and Raymond Wong.

Publication Chair Heng Tao Shen did a great job of putting together an extensive volume, which contains more than 1000 pages. So many individuals contributed toward the conference, especially Xiaofang Zhou, who took care of all aspects of the conference. We hope that you enjoy these proceedings.

November 2005

Masaru Kitsuregawa APWeb 2006 General Chair

Message from the Program Co-chairs

This volume contains papers selected for presentation at the 8th Asia Pacific Web Conference (APWeb 2006), which was held in Harbin, China, January 16-18, 2006. APWeb 2006 received 413 submissions. After a thorough review process for each submission by the Program Committee (with 148 PC members!) and specialists recommended by Program Committee members, APWeb accepted 56 regular papers and 42 short papers (the acceptance rates are 14% and 11%, respectively).

This volume also includes invited keynote papers, presented by four leading experts at APWeb 2006: David Abramson (Monash University), Ding-Zhu Du (University of Minnesota), Ling Liu (Georgia Institute of Technology) and Justin Zobel (Royal Melbourne Institute of Technology). Other papers in this volume include selected papers for special sessions on ICT advances for water resources management organized by Yanchun Zhang (Victoria University of Technology), industry papers organized by Xiaowei Kan (Ricoh Co., Ltd.), and demo papers organized by Yoshiharu Ishikawa (Tsukuba University) and Haiyang Wang (Shandong University). Four workshops were held in conjunction with APWeb 2006. The workshop papers were compiled in a separate volume of proceedings, also published by Springer in its Lecture Notes in Computer Science series.

The conference received financial support from the National Natural Science Foundation of China, the Australian Research Council Research Network in Enterprise Information Infrastructure (EII), Harbin Institute of Technology, Heilongjiang University, Hohai University and the Yellow River Conservation Commission. We, the conference organizers, also received help and logistic support from the University of Queensland, Harbin Institute of Technology, Heilongjiang University, City University of Hong Kong, the Web Information Systems Engineering Society (WISE Society), and the Conference Management Toolkit Support Team at Microsoft. We are grateful to Hong Gao, Winnie Cheng, Miranda Lee, Xin Zhan, Wenjie Zhang, Yanchun Zhang, Qing Li, Rikun Wang, Ken Deng, Helen Huang, Sai Sun and other people for their great effort in supporting the conference organization.

Finally, we would like to take this opportunity to thank all Program Committee members and external reviewers for their expertise and help in evaluating papers, and to thank all authors who submitted their papers to this conference.

November 2005

Jianzhong Li and Xiaofang Zhou APWeb 2006 Program Committee Co-chairs

Organization

Conference Organization

General Chair
  Masaru Kitsuregawa, Tokyo University, Japan

Program Committee Co-chairs
  Jianzhong Li, Harbin Institute of Technology, China
  Xiaofang Zhou, University of Queensland, Australia

Workshop Chair
  Jeffrey X. Yu, Chinese University of Hong Kong, China

Tutorial and Panel Chair
  Athman Bouguettaya, Virginia Tech., USA

Publication Chair
  Heng Tao Shen, University of Queensland, Australia

Organization Chair
  Hong Gao, Harbin Institute of Technology, China

Publicity Co-chairs
  Chengfei Liu, Swinburne University of Technology, Australia
  Ge Yu, Northeastern University, China

Industry Chair
  Xiaowei Kan, Ricoh, Japan

Demo Co-chairs
  Yoshiharu Ishikawa, Tsukuba University, Japan
  Haiyang Wang, Shandong University, China

Treasurer
  Qing Li, City University of Hong Kong, China

APWeb Steering Committee
  Xiaofang Zhou (Chair), University of Queensland, Australia
  Xuemin Lin, University of New South Wales, Australia
  Hongjun Lu, Hong Kong University of Science and Technology, China
  Jeffrey Xu Yu, Chinese University of Hong Kong, China
  Yanchun Zhang, Victoria University, Australia


Program Committee Toshiyuki Amagasa, Japan Masatoshi Arikawa, Japan James Bailey, Australia Ken Barker, Canada Djamal Benslimane, France Sourav Saha Bhowmick, Singapore Ulrik Brandes, Germany Stephane Bressan, Singapore Wentong Cai, Singapore Jiannong Cao, Hong Kong Jinli Cao, Australia Wojciech Cellary, Poland Kuo-Ming Chao, UK Somchai Chatvichienchai, Japan Akmal B. Chaudhri, UK Guihai Chen, China Hanxiong Chen, Japan Jian Chen, China Ke Chen, UK Yi-Ping Phoebe Chen, Australia Zheng Chen, China David Cheung, Hong Kong Bin Cui, Singapore Qianni Deng, China Gill Dobbie, New Zealand Marie-Christine Fauvet, France Ling Feng, Netherlands Hong Gao, China Yongsheng Gao, Australia Gia-Loi L. Gruenwald, USA Theo Haerder, Germany Hai Jin, China Jun Han, Australia Xiangjian He, Australia Jingyu Hou, Australia Hui-I Hsiao, USA Joshua Huang, Hong Kong Patrick C. K. Hung, Canada Weijia Jia, Hong Kong Yutaka Kidawara, Japan Markus Kirchberg, New Zealand Hiroyuki Kitagawa, Japan Huaizhong Kou, China

Shonali Krishnaswamy, Australia Yong-Jin Kwon, Korea Zoé Lacroix, USA Alberto H. F. Laender, Brazil Chiang Lee, Taiwan Thomas Y. Lee, USA Chen Li, USA Jiuyong (John) Li, Australia Lee Mong Li, Singapore Qing Li, Hong Kong Xue Li, Australia Weifa Liang, Australia Ee Peng Lim, Singapore Xuemin Lin, Australia Tok Wong Ling, Singapore Hong-Cheu Liu, Australia Huan Liu, USA Jianguo Lu, Canada Qing Liu, Australia Qiong Luo, Hong Kong Wei-Ying Ma, China Hong Mei, China Weiyi Meng, USA Xiaofeng Meng, China Mukesh Mohania, India Atsuyuki Morishima, Japan Shinsuke Nakajima, Japan Wee Keong Ng, Singapore Anne Hee Hiong Ngu, USA Jun Ni, USA Jian-Yun Nie, Canada Jian Pei, Canada Zhiyong Peng, China Pearl Pu, Switzerland Depei Qian, China Gitesh Raikundalia, Australia Keun Ho Ryu, Korea Shazia Sadiq, Australia Monica Scannapieco, Italy Edwin Sha, USA Fei Shi, USA Hao Shi, Australia Timothy K. Shih, Taiwan


Dawei Song, UK William Song, UK Kian Lee Tan, Singapore Changjie Tang, China Egemen Tanin, Australia Kerry Taylor, Australia Weiqin Tong, China Farouk Toumani, France Alexei Tretiakov, New Zealand Millist Vincent, Australia Bing Wang, UK Guoren Wang, China Haixun Wang, USA Hua Wang, Australia Jianyong Wang, China


Shengrui Wang, Canada Wei Wang, Australia Baowen Xu, China Kevin Xu, Australia Jian Yang, Australia Ge Yu, China Osmar R. Zaiane, Canada Chengqi Zhang, Australia Kang Zhang, USA Shichao Zhang, Australia Baihua Zheng, Singapore Lizhu Zhou, China Neng-Fa Zhou, USA Hong Zhu, UK

Additional Reviewers Aaron Harwood Adam Jatowt Alexander Liu Ashutosh Tiwari Benoit Fraikin Bilal Choudry Bing Xie Bo Chen Ce Dong Changgui Chen Changxi Zheng Xinjun Chen Chiemi Watanabe Christian Mathis Christian Pich Clement Leung Daniel Fleischer Debbie Zhang Dengyue Li Derong Shen Dhaminda Abeywickrama Diego Milano Elvis Leung Eric Bae Eric Lo Faizal Riaz-ud-Din

Fabio De Rosa Faten Khalil Guanglei Song H. Jaudoin Hai He Helga Duarte Herve Menager Hiroyasu Nishiyama Ho Wai Shing Hongkun Zhao Houtan Shirani-Mehr Huangzhang Liu I-Fang Su Jürgen Lerner Jacek Chmielewski Jarogniew Rykowski Jialie Shen Jiaqi Wang Jiarui Ni John Horwood Ju Wang Julian Lin Julien Ponge Jun Kong Jun Yan Kasumi Kanasaki

Kaushal Parekh Kevin Chen Klaus-Dieter Schewe Kok-Leong Ong Lance R Parsons Lars Kulik Lei Tang Li Lin Liangcai Shu Lijun Shan Liping Jing Longbing Cao Magdiel F. Galan Maria Vargas-Vera Massimo Mecella Michal Shmueli-Scheuer Minghui Zhou Minhao Yu Mohamed Bouguessa Mohamed Medhat Gaber Mohammed Eunus Ali Nitin Agarwal Niyati Parikh Norihide Shinagawa Noureddine Abbadeni Pan Ying


Phanindra Dev Deepthimahanthi Philipp Dopichaj Philippe Mulhem Qiankun Zhao Quang Vinh Nguyen Rares Vernica Sai Moturu Shang Gao Shanika Karunasekera Shui Yu Shunneng Yung Somnath Shahapurkar Suan Khai Chong

Surendra Singhi SuTe Lei Sven Hartmann Thomas Schank Tian Yu Tze-Cheng Hsu Vincent D’Orangeville Wang Daling Wang ShuHong Willy Picard Xavier Percival Xianchao Zhang Xiang Li Xiangquan Chen

Xun Yi Yanchang Zhao Yanchun Zhang Yiyao Lu Yoshiharu Ishikawa Yu Li Yu Qian Yu Suzuki Yuan-Ke Hwang Yu-Chi Chung Yuhong Feng Zenga Shan Zhihong Chong Zili Zhang

Table of Contents

Keynote Papers

Applications Development for the Computational Grid
David Abramson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1

Strongly Connected Dominating Sets in Wireless Sensor Networks with Unidirectional Links Ding-Zhu Du, My T. Thai, Yingshu Li, Dan Liu, Shiwei Zhu . . . . . . .

13

Mobile Web and Location-Based Services Ling Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

25

The Case of the Duplicate Documents Measurement, Search, and Science Justin Zobel, Yaniv Bernstein . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

26

Regular Papers

An Effective System for Mining Web Log
Zhenglu Yang, Yitong Wang, Masaru Kitsuregawa . . . . . . . . . . . . . . . .

40

Adapting K-Means Algorithm for Discovering Clusters in Subspaces Yanchang Zhao, Chengqi Zhang, Shichao Zhang, Lianwei Zhao . . . . .

53

Sample Sizes for Query Probing in Uncooperative Distributed Information Retrieval Milad Shokouhi, Falk Scholer, Justin Zobel . . . . . . . . . . . . . . . . . . . . . . .

63

The Probability of Success of Mobile Agents When Routing in Faulty Networks Wenyu Qu, Hong Shen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

76

Clustering Web Documents Based on Knowledge Granularity Faliang Huang, Shichao Zhang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

85

XFlat: Query Friendly Encrypted XML View Publishing Jun Gao, Tengjiao Wang, Dongqing Yang . . . . . . . . . . . . . . . . . . . . . . . .

97

Distributed Energy Efficient Data Gathering with Intra-cluster Coverage in Wireless Sensor Networks Haigang Gong, Ming Liu, Yinchi Mao, Lijun Chen, Li Xie . . . . . . . . .

109


QoS-Driven Web Service Composition with Inter Service Conflicts Aiqiang Gao, Dongqing Yang, Shiwei Tang, Ming Zhang . . . . . . . . . . .

121

An Agent-Based Approach for Cooperative Data Management Chunyu Miao, Meilin Shi, Jialie Shen . . . . . . . . . . . . . . . . . . . . . . . . . . .

133

Transforming Heterogeneous Messages Automatically in Web Service Composition Wenjun Yang, Juanzi Li, Kehong Wang . . . . . . . . . . . . . . . . . . . . . . . . .

145

User-Perceived Web QoS Measurement and Evaluation System Hongjie Sun, Binxing Fang, Hongli Zhang . . . . . . . . . . . . . . . . . . . . . . . .

157

An RDF Storage and Query Framework with Flexible Inference Strategy Wennan Shen, Yuzhong Qu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

166

An Aspect-Oriented Approach to Declarative Access Control for Web Applications Kung Chen, Ching-Wei Lin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

176

A Statistical Study of Today’s Gnutella Shicong Meng, Cong Shi, Dingyi Han, Xing Zhu, Yong Yu . . . . . . . . .

189

Automatically Constructing Descriptive Site Maps Pavel Dmitriev, Carl Lagoze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

201

TWStream: Finding Correlated Data Streams Under Time Warping Ting Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

213

Supplier Categorization with K -Means Type Subspace Clustering Xingjun Zhang, Joshua Zhexue Huang, Depei Qian, Jun Xu, Liping Jing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

226

Classifying Web Data in Directory Structures Sofia Stamou, Alexandros Ntoulas, Vlassis Krikos, Pavlos Kokosis, Dimitris Christodoulakis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

238

Semantic Similarity Based Ontology Cache Bangyong Liang, Jie Tang, Juanzi Li, Kehong Wang . . . . . . . . . . . . . .

250

In-Network Join Processing for Sensor Networks Hai Yu, Ee-Peng Lim, Jun Zhang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

263

Transform BPEL Workflow into Hierarchical CP-Nets to Make Tool Support for Verification Yanping Yang, Qingping Tan, Yong Xiao, Feng Liu, Jinshan Yu . . . .

275


Identifying Agitators as Important Blogger Based on Analyzing Blog Threads Shinsuke Nakajima, Junichi Tatemura, Yoshinori Hara, Katsumi Tanaka, Shunsuke Uemura . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

285

Detecting Collusion Attacks in Security Protocols Qingfeng Chen, Yi-Ping Phoebe Chen, Shichao Zhang, Chengqi Zhang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

297

Role-Based Delegation with Negative Authorization Hua Wang, Jinli Cao, David Ross . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

307

Approximate Top-k Structural Similarity Search over XML Documents Tao Xie, Chaofeng Sha, Xiaoling Wang, Aoying Zhou . . . . . . . . . . . . .

319

Towards Enhancing Trust on Chinese E-Commerce Zhen Wang, Zhongwei Zhang, Yanchun Zhang . . . . . . . . . . . . . . . . . . . .

331

Flexible Deployment Models for Location-Aware Key Management in Wireless Sensor Networks Bo Yu, Xiaomei Cao, Peng Han, Dilin Mao, Chuanshan Gao . . . . . .

343

A Diachronic Analysis of Gender-Related Web Communities Using a HITS-Based Mining Tool Naoko Oyama, Yoshifumi Masunaga, Kaoru Tachi . . . . . . . . . . . . . . . .

355

W3 Trust-Profiling Framework (W3TF) to Assess Trust and Transitivity of Trust of Web-Based Services in a Heterogeneous Web Environment Yinan Yang, Lawrie Brown, Ed Lewis, Jan Newmarch . . . . . . . . . . . . .

367

Image Description Mining and Hierarchical Clustering on Data Records Using HR-Tree Cong-Le Zhang, Sheng Huang, Gui-Rong Xue, Yong Yu . . . . . . . . . . .

379

Personalized News Categorization Through Scalable Text Classification Ioannis Antonellis, Christos Bouras, Vassilis Poulopoulos . . . . . . . . . .

391

The Adaptability of English Based Web Search Algorithms to Chinese Search Engines Louis Yu, Kin Fun Li, Eric G. Manning . . . . . . . . . . . . . . . . . . . . . . . . .

402

A Feedback Based Framework for Semi-automic Composition of Web Services Dongsoo Han, Sungdoke Lee, Inyoung Ko . . . . . . . . . . . . . . . . . . . . . . . .

414


Fast Approximate Matching Between XML Documents and Schemata Guangming Xing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

425

Mining Query Log to Assist Ontology Learning from Relational Database Jie Zhang, Miao Xiong, Yong Yu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

437

An Area-Based Collaborative Sleeping Protocol for Wireless Sensor Networks Yanli Cai, Minglu Li, Min-You Wu . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

449

F@: A Framework of Group Awareness in Synchronous Distributed Groupware Minh Hong Tran, Yun Yang, Gitesh K. Raikundalia . . . . . . . . . . . . . . .

461

Adaptive User Profile Model and Collaborative Filtering for Personalized News Jue Wang, Zhiwei Li, Jinyi Yao, Zengqi Sun, Mingjing Li, Wei-ying Ma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

474

Context Matcher: Improved Web Search Using Query Term Context in Source Document and in Search Results Takahiro Kawashige, Satoshi Oyama, Hiroaki Ohshima, Katsumi Tanaka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

486

Weighted Ontology-Based Search Exploiting Semantic Similarity Kuo Zhang, Jie Tang, MingCai Hong, JuanZi Li, Wei Wei . . . . . . . .

498

Determinants of Groupware Usability for Community Care Collaboration Lu Liang, Yong Tang, Na Tang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

511

Automated Discovering of What is Hindering the Learning Performance of a Student Sylvia Encheva, Sharil Tumin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

521

Sharing Protected Web Resources Using Distributed Role-Based Modeling Sylvia Encheva, Sharil Tumin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

532

Concept Map Model for Web Ontology Exploration Yuxin Mao, Zhaohui Wu, Huajun Chen, Xiaoqing Zheng . . . . . . . . . . .

544

A Resource-Adaptive Transcoding Proxy Caching Strategy Chunhong Li, Guofu Feng, Wenzhong Li, Tiecheng Gu, Sanglu Lu, Daoxu Chen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

556


Optimizing Collaborative Filtering by Interpolating the Individual and Group Behaviors Xue-Mei Jiang, Wen-Guan Song, Wei-Guo Feng . . . . . . . . . . . . . . . . . .

568

Extracting Semantic Relationships Between Terms from PC Documents and Its Applications to Web Search Personalization Hiroaki Ohshima, Satoshi Oyama, Katsumi Tanaka . . . . . . . . . . . . . . .

579

Detecting Implicit Dependencies Between Tasks from Event Logs Lijie Wen, Jianmin Wang, Jiaguang Sun . . . . . . . . . . . . . . . . . . . . . . . .

591

Implementing Privacy Negotiations in E-Commerce S¨ oren Preibusch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

604

A Community-Based, Agent-Driven, P2P Overlay Architecture for Personalized Web Chatree Sangpachatanaruk, Taieb Znati . . . . . . . . . . . . . . . . . . . . . . . . . .

616

Providing an Uncertainty Reasoning Service for Semantic Web Application Lei Li, Qiaoling Liu, Yunfeng Tao, Lei Zhang, Jian Zhou, Yong Yu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

628

Indexing XML Documents Using Self Adaptive Genetic Algorithms for Better Retrieval K.G. Srinivasa, S. Sharath, K.R. Venugopal, Lalit M. Patnaik . . . . . .

640

GCC: A Knowledge Management Environment for Research Centers and Universities Jonice Oliveira, Jano Moreira de Souza, Rodrigo Miranda, S´ergio Rodrigues, Viviane Kawamura, Rafael Martino, Carlos Mello, Diogo Krejci, Carlos Eduardo Barbosa, Luciano Maia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

652

Towards More Personalized Web: Extraction and Integration of Dynamic Content from the Web Marek Kowalkiewicz, Maria E. Orlowska, Tomasz Kaczmarek, Witold Abramowicz . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

668

Supporting Relative Workflows with Web Services Xiaohui Zhao, Chengfei Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

680

Text Based Knowledge Discovery with Information Flow Analysis Dawei Song, Peter Bruza . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

692


Short Papers

Study on QoS Driven Web Services Composition
Yan-ping Chen, Zeng-zhi Li, Qin-xue Jin, Chuang Wang . . . . . . . . . . .

702

Optimizing the Data Intensive Mediator-Based Web Services Composition Yu Zhang, Xiangmin Zhou, Yiyue Gao . . . . . . . . . . . . . . . . . . . . . . . . . .

708

Role of Triple Space Computing in Semantic Web Services Brahmananda Sapkota, Edward Kilgarriff, Christoph Bussler . . . . . . .

714

Modified ID-Based Threshold Decryption and Its Application to Mediated ID-Based Encryption Hak Soo Ju, Dae Youb Kim, Dong Hoon Lee, Haeryong Park, Kilsoo Chun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

720

Materialized View Maintenance in Peer Data Management Systems Biao Qin, Shan Wang, Xiaoyong Du . . . . . . . . . . . . . . . . . . . . . . . . . . . .

726

Cubic Analysis of Social Bookmarking for Personalized Recommendation Yanfei Xu, Liang Zhang, Wei Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

733

MAGMS: Mobile Agent-Based Grid Monitoring System Anan Chen, Yazhe Tang, Yuan Liu, Ya Li . . . . . . . . . . . . . . . . . . . . . . .

739

A Computational Trust Model for Semantic Web Based on Bayesian Decision Theory Xiaoqing Zheng, Huajun Chen, Zhaohui Wu, Yu Zhang . . . . . . . . . . . .

745

Efficient Dynamic Traffic Navigation with Hierarchical Aggregation Tree Yun Bai, Yanyan Guo, Xiaofeng Meng, Tao Wan, Karine Zeitouni . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

751

A Color Bar Based Affective Annotation Method for Media Player Chengzhe Xu, Ling Chen, Gencai Chen . . . . . . . . . . . . . . . . . . . . . . . . . .

759

Robin: Extracting Visual and Textual Features from Web Pages Mizuki Oka, Hiroshi Tsukada, Kazuhiko Kato . . . . . . . . . . . . . . . . . . . .

765

Generalized Projected Clustering in High-Dimensional Data Streams Ting Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

772

An Effective Web Page Layout Adaptation for Various Resolutions Jie Song, Tiezheng Nie, Daling Wang, Ge Yu . . . . . . . . . . . . . . . . . . . .

779


XMine: A Methodology for Mining XML Structure Richi Nayak, Wina Iryadi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

786

Multiple Join Processing in Data Grid Donghua Yang, Qaisar Rasool, Zhenhuan Zhang . . . . . . . . . . . . . . . . . .

793

A Novel Architecture for Realizing Grid Workflow Using Pi-Calculus Technology Zhilin Feng, Jianwei Yin, Zhaoyang He, Xiaoming Liu, Jinxiang Dong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

800

A Chord-Based Novel Mobile Peer-to-Peer File Sharing Protocol Min Li, Enhong Chen, Phillip C-y Sheu . . . . . . . . . . . . . . . . . . . . . . . . .

806

Web-Based Genomic Information Integration with Gene Ontology Kai Xu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

812

Table Detection from Plain Text Using Machine Learning and Document Structure Juanzi Li, Jie Tang, Qiang Song, Peng Xu . . . . . . . . . . . . . . . . . . . . . . .

818

Efficient Mining Strategy for Frequent Serial Episodes in Temporal Database Kuo-Yu Huang, Chia-Hui Chang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

824

Efficient and Provably Secure Client-to-Client Password-Based Key Exchange Protocol Jin Wook Byun, Dong Hoon Lee, Jong-in Lim . . . . . . . . . . . . . . . . . . . .

830

Effective Criteria for Web Page Changes Shin Young Kwon, Sang Ho Lee, Sung Jin Kim . . . . . . . . . . . . . . . . . . .

837

WordRank-Based Lexical Signatures for Finding Lost or Related Web Pages Xiaojun Wan, Jianwu Yang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

843

A Scalable Update Management Mechanism for Query Result Caching Systems at Database-Driven Web Sites Seunglak Choi, Sekyung Huh, Su Myeon Kim, Junehwa Song, Yoon-Joon Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

850

Building Content Clusters Based on Modelling Page Pairs Christoph Meinel, Long Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

856

IRFCF: Iterative Rating Filling Collaborative Filtering Algorithm Jie Shen, Ying Lin, Gui-Rong Xue, Fan-De Zhu, Ai-Guo Yao . . . . . .

862


A Method to Select the Optimum Web Services Yuliang Shi, Guang’an Huang, Liang Zhang, Baile Shi . . . . . . . . . . . . .

868

A New Methodology for Information Presentations on the Web Hyun Woong Shin, Dennis McLeod, Larry Pryor . . . . . . . . . . . . . . . . . .

874

Integration of Single Sign-On and Role-Based Access Control Profiles for Grid Computing Jongil Jeong, Weehyuk Yu, Dongkyoo Shin, Dongil Shin, Kiyoung Moon, Jaeseung Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

880

An Effective Service Discovery Model for Highly Reliable Web Services Composition in a Specific Domain Derong Shen, Ge Yu, Tiezheng Nie, Yue Kou, Yu Cao, Meifang Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

886

Using Web Archive for Improving Search Engine Results Adam Jatowt, Yukiko Kawai, Katsumi Tanaka . . . . . . . . . . . . . . . . . . . .

893

Closed Queueing Network Model for Multi-tier Data Stream Processing Center YuFeng Wang, HuaiMin Wang, Yan Jia, Bixin Liu . . . . . . . . . . . . . . .

899

Optimal Task Scheduling Algorithm for Non-preemptive Processing System Yong-Jin Lee, Dong-Woo Lee, Duk-Jin Chang . . . . . . . . . . . . . . . . . . . .

905

A Multi-agent Based Grid Service Discovery Framework Using Fuzzy Petri Net and Ontology Zhengli Zhai, Yang Yang, Zhimin Tian . . . . . . . . . . . . . . . . . . . . . . . . . .

911

Modeling Identity Management Architecture Within a Social Setting Lin Liu, Eric Yu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

917

Ontological Engineering in Data Warehousing Longbing Cao, Jiarui Ni, Dan Luo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

923

Mapping Ontology Relations: An Approach Based on Best Approximations Peng Wang, Baowen Xu, Jianjiang Lu, Dazhou Kang, Jin Zhou . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

930

Building a Semantic P2P Scientific References Sharing System with JXTA Yijiao Yu, Hai Jin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

937


Named Graphs as a Mechanism for Reasoning About Provenance E. Rowland Watkins, Denis A. Nicole . . . . . . . . . . . . . . . . . . . . . . . . . . .

943

Discovery of Spatiotemporal Patterns in Mobile Environment Vu Thi Hong Nhan, Jeong Hee Chi, Keun Ho Ryu . . . . . . . . . . . . . . . .

949

Visual Description Conversion for Enhancing Search Engines and Navigational Systems Taro Tezuka, Katsumi Tanaka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

955

Reusing Experiences for an Effective Learning in a Web-Based Context Elder Bomfim, Jonice Oliveira, Jano M. de Souza . . . . . . . . . . . . . . . . .

961

Special Sessions on e-Water

Collaboration Between China and Australia: An e-Water Workshop Report
Ah Chung Tsoi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

967

On Sensor Network Segmentation for Urban Water Distribution Monitoring Sudarsanan Nesamony, Madhan Karky Vairamuthu, Maria Elzbieta Orlowska, Shazia Wasim Sadiq . . . . . . . . . . . . . . . . . . . .

974

Using the Shuffled Complex Evolution Global Optimization Method to Solve Groundwater Management Models Jichun Wu, Xiaobin Zhu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

986

Integrating Hydrological Data of Yellow River for Efficient Information Services Huaizhong Kou, Weimin Zhao . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

996

Application and Integration of Information Technology in Water Resources Informatization
Xiaojun Wang, Xiaofeng Zhou . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1004

An Empirical Study on Groupware Support for Water Resources Ontology Integration
Juliana Lucas de Rezende, Jairo Francisco de Souza, Elder Bomfim, Jano Moreira de Souza, Otto Corrêa Rotunno Filho . . . . . . . . . . . . . . 1010

Ontology Mapping Approach Based on OCL
Pengfei Qian, Shensheng Zhang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1022


Object Storage System for Mass Geographic Information
Lingfang Zeng, Dan Feng, Fang Wang, Degang Liu, Fayong Zhang . . . 1034

The Service-Oriented Data Integration Platform for Water Resources Management
Xiaofeng Zhou, Zhijian Wang, Feng Xu . . . . . . . . . . . . . . . . . . . . . . . . 1040

Construction of Yellow River Digital Project Management System
Houyu Zhang, Dutian Lu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1046

Study on the Construction and Application of 3D Visualization Platform for the Yellow River Basin
Junliang Wang, Tong Wang, Jiyong Zhang, Hao Tan, Liupeng He, Ji Cheng . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053

Industry Papers

A Light-Weighted Approach to Workflow View Implementation
Zhe Shan, Yu Yang, Qing Li, Yi Luo, Zhiyong Peng . . . . . . . . . . . . . . 1059

RSS Feed Generation from Legacy HTML Pages
Jun Wang, Kanji Uchino, Tetsuro Takahashi, Seishi Okamoto . . . . . . . 1071

Ontology Driven Securities Data Management and Analysis
Xueqiao Hou, Gang Hu, Li Ma, Tao Liu, Yue Pan, Qian Qian . . . . . . 1083

Context Gallery: A Service-Oriented Framework to Facilitate Context Information Sharing
Soichiro Iga, Makoto Shinnishi, Masashi Nakatomi, Tetsuro Nagatsuka, Atsuo Shimada . . . . . . . . . . . . . . . . . . . . . . . . . . . 1096

A Service-Oriented Architecture Based Macroeconomic Analysis & Forecasting System
Dongmei Han, Hailiang Huang, Haidong Cao, Chang Cui, Chunqu Jia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1107

A Web-Based Method for Building Company Name Knowledge Base
Zou Gang, Meng Yao, Yu Hao, Nishino Fumihito . . . . . . . . . . . . . . . . 1118


Demo Sessions

Healthy Waterways: Healthy Catchments – An Integrated Research/Management Program to Understand and Reduce Impacts of Sediments and Nutrients on Waterways in Queensland, Australia
Eva G. Abal, Paul F. Greenfield, Stuart E. Bunn, Diane M. Tarte . . . . 1126

Groundwater Monitoring in China
Qingcheng He, Cai Li . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1136

The Digital Yellow River Programme
Qingping Zhu, Wentao Liu . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1144

Web Services Based State of the Environment Reporting
Yu Zhang, Steve Jones, Lachlan Hurse, Arnon Accad . . . . . . . . . . . . . 1152

COEDIG: Collaborative Editor in Grid Computing
Hyunjoon Jung, Hyuck Han, Heon Y. Yeom, Hee-Jae Park, Jysoo Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1155

HVEM Grid: Experiences in Constructing an Electron Microscopy Grid
Hyuck Han, Hyungsoo Jung, Heon Y. Yeom, Hee S. Kweon, Jysoo Lee . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1159

WISE: A Prototype for Ontology Driven Development of Web Information Systems
Lv-an Tang, Hongyan Li, Baojun Qiu, Meimei Li, Jianjun Wang, Lei Wang, Bin Zhou, Dongqing Yang, Shiwei Tang . . . . . . . . . . . . . . . 1163

DSEC: A Data Stream Engine Based Clinical Information System
Yu Fan, Hongyan Li, Zijing Hu, Jianlong Gao, Haibin Liu, Shiwei Tang, Xinbiao Zhou . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1168

SESQ: A Novel System for Building Domain Specific Web Search Engines
Qi Guo, Lizhu Zhou, Hang Guo, Jun Zhang . . . . . . . . . . . . . . . . . . . . . 1173

Digital Map: Animated Mode
Kai-Chi Hung, Kuo-Hung Huang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1177

Dynamic Voice User Interface Using VoiceXML and Active Server Pages
Rahul Ram Vankayala, Hao Shi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1181


WebVine Suite: A Web Services Based BPMS
Dongsoo Han, Seongdae Song, Jongyoung Koo . . . . . . . . . . . . . . . . . . 1185

Adaptive Mobile Cooperation Model Based on Context Awareness
Weihong Wang, Zheng Qin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1189

An Integrated Network Management System
Zongshui Xiao, Jun Chen, Ruting Guo . . . . . . . . . . . . . . . . . . . . . . . . . 1193

Ichigen-San: An Ontology-Based Information Retrieval System
Takashi Hattori, Kaoru Hiramatsu, Takeshi Okadome, Bijan Parsia, Evren Sirin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1197

A Database Monitoring and Disaster Recovery System
Xiaoguang Hong . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1201

IPVita: An Intelligent Platform of Virtual Travel Agency
Qi Sui, Hai-yang Wang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1205

LocalRank: A Prototype for Ranking Web Pages with Database Considering Geographical Locality
Jianwei Zhang, Yoshiharu Ishikawa, Sayumi Kurokawa, Hiroyuki Kitagawa . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1209

Automated Content Transformation with Adjustment for Visual Presentation Related to Terminal Types
Hiromi Uwada, Akiyo Nadamoto, Tadahiko Kumamoto, Toru Hamabe, Makoto Yokozawa, Katsumi Tanaka . . . . . . . . . . . . . . . 1214

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1219

Applications Development for the Computational Grid

David Abramson

Faculty of Information Technology, Monash University, Clayton, Vic, Australia, 3800
[email protected]
http://www.csse.monash.edu.au/~davida

Abstract. The Computational Grid has promised a great deal in support of innovative applications, particularly in science and engineering. However, developing applications for this highly distributed, and often faulty, infrastructure can be demanding. Often it can take as long to set up a computational experiment as it does to execute it. Clearly we need to be more efficient if the Grid is to deliver useful results to applications scientists and engineers. In this paper I will present a raft of upper middleware services and tools aimed at solving the software engineering challenges in building real applications.

1 Introduction

e-Science, enabled by the emerging Grid computing paradigm [28], tightly couples scientists, their instruments (e.g., telescopes, synchrotrons, and networks of sensors), massive data storage devices and powerful computational devices. This new discipline allows scientists to interact efficiently and effectively with each other, their instruments and their data, even across geographic separations, thereby ameliorating the tyranny of distance that often hinders research. Data can be captured, shared, interpreted and manipulated more efficiently and more reliably and on a far greater scale than previously possible. Data can be presented for interpretation in new ways using scientific visualization techniques and advanced data mining algorithms. These new technologies enable new insights to be derived and exploited. The data may also drive simulation models that support prediction and “what-if” analyses. The models and their results may be archived for later use and analysis, and shared securely and reliably with scientific collaborators across the globe. The resulting network of people and devices is empowered to interact more productively and to undertake experiments and analyses that are otherwise impossible.

In spite of tremendous advances in middleware and internet software standards, creating Grid applications that harness geographically disparate resources is still difficult and error-prone. Programmers are presented with a range of middleware services, a raft of legacy software tools that do not address the distributed nature of the Grid, and many other incompatible development tools that often deal with only part of the Grid programming problem. So, a scientist might start with an idea for an innovative experiment but quickly become distracted by technical details that have little to do with the task at hand. Moreover, the highly distributed, heterogeneous and unreliable nature of the Grid makes software development extremely difficult.

If we are to capitalize on the enormous potential offered by Grid computing, we must find more efficient and effective ways of developing Grid-based applications.

2 Software Engineering for the Grid

A critical ingredient for success in e-Science is appropriate Grid-enabled software which, to date, has lagged behind the high-performance computers, data servers, instruments and networking infrastructure. All software follows a lifecycle, from development through execution, and back again (Figure 1). Grid software is no exception, although there are sufficient differences in the details of the various phases of the lifecycle to make traditional tools and techniques inappropriate. For example, traditional software development tools rarely support the creation of virtual applications in which the components are distributed across multiple machines. In the Grid, these types of virtual applications are the norm. Likewise, traditional methods of debugging software do not scale to the size and heterogeneity of infrastructure found in the Grid. Here, we identify four distinct phases of importance: development, deployment, testing and debugging, and execution.

Fig. 1. The Grid software lifecycle: development, deployment, testing and debugging, and execution

2.1 Development

Initially, software is developed using the most appropriate tools and programming languages for the task at hand. The process involves the specification, coding and compilation of the software. In the Grid, there is a very strong focus on building “virtual applications”, or workflows, that consist of a number of interoperating components distributed across multiple resources. Grid workflows are powerful because they support the integration of computations, data, scientific instruments and visualization software while leveraging multiple Grid resources. Grid workflows have been specified for many different scientific domains including physics [31], gravitational wave physics [25], geophysics [40], astronomy [15] and bioinformatics [36].
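To make the workflow idea concrete, here is a minimal sketch in plain Python of a three-stage workflow whose tasks have data dependencies and are each nominally bound to a different resource. The task names, host names and the `run_on` stub are purely illustrative assumptions, not the API of any particular workflow system discussed in this paper.

```python
# Minimal sketch of a Grid workflow: tasks with data dependencies, each
# nominally bound to a different remote resource. Task names, host names
# and the run_on() stub are hypothetical placeholders for real middleware.

from graphlib import TopologicalSorter  # Python 3.9+

def run_on(resource, task):
    # Placeholder: a real engine would stage input data, submit the task
    # to `resource` via the middleware, and monitor it to completion.
    print(f"running {task} on {resource}")

# Each task lists the tasks it depends on.
workflow = {
    "fetch_observations": [],                      # data archive / instrument
    "simulate":           ["fetch_observations"],  # compute cluster
    "visualize":          ["simulate"],            # visualization server
}

placement = {
    "fetch_observations": "archive.example.org",
    "simulate":           "cluster.example.org",
    "visualize":          "viz.example.org",
}

# Run tasks in dependency order; a real workflow engine would also execute
# independent tasks concurrently and recover from failed resources.
for task in TopologicalSorter(workflow).static_order():
    run_on(placement[task], task)
```

Even this toy example shows the defining feature of such virtual applications: the components live on different machines, and every binding between a task and a resource is a potential point of failure.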

Accordingly, there have been many projects to date that support Grid workflows, to name a few: Triana [47], Taverna [37][42], Kepler [35], GrADS [17] and P-Grade [33]. A specialized form of workflow allows the creation of “parameter sweeps”, where a computational model is run repeatedly with different input parameters. Using this approach it is possible to explore different design options, and perform more robust science than previously possible. A number of systems support parameter sweep workflows, including APST [21], the NASA IPG (Information Power Grid) parameter process specification tool [50] and our own Nimrod/G [1][4][5][7]. Apart from these specific environments, programmers can adopt any one of a number of techniques for building distributed applications. These might build on standard languages like Java, and may use special message passing libraries like MPICH/G. Further, tools like Ninf-G [46] and NetSolve [22] provide powerful remote procedure call systems that can invoke arbitrary procedures as well as specific services such as linear algebra. Finally, Web Services provide a generic and standards-based mechanism for building large distributed applications, and these underpin the newly developed Web Services Resource Framework (WSRF) [24].

2.2 Deployment

Traditionally, deployment is often combined with development, and may involve little more than copying the executable program to some partition in the file system, and perhaps registering the software with the operating system. The difficulty of deployment is often underestimated because modern installation techniques, such as those used by Microsoft and Apple, appear to simplify the process enormously. However, in a Grid environment, deployment is much more complex because of the need to install the code on a range of heterogeneous machines, geographically distributed and in different administrative domains. In the Grid, deploying an application means building and installing the software on a range of different platforms, taking account of issues such as different instruction sets, operating systems, file system structures and software libraries. To date, this phase is often performed manually, which is both error-prone and does not scale to large Grids. For example, in order to deploy an application across 500 computers, a user would typically need to log into each of the 500 sequentially, compiling, linking and installing the software. Our own experiences at deploying a quantum chemistry package over a handful of resources have identified this as a serious bottleneck [43][44].

Surprisingly, there has been little work on the deployment problem, and none of the current middleware projects addresses deployment across heterogeneous resources. Some researchers have suggested solving the problem by taking a system-centric view, as is done in systems that configure a homogeneous compute cluster [18][14][29][30][13]. In this model, an application image is produced for the entire Grid, and then copied from a central resource. However, this central approach is in conflict with the philosophy of the Grid, which favours a decentralized approach. Moreover, it does not handle significant heterogeneity because each resource could, in the worst case, require a tailored version of the software. One of the few systems that views deployment in a decentralized way is GridAnt [48].
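The scaling problem just described is easy to see even in a naive automation of the manual process: one build per resource, with a platform-specific recipe. The hosts, platforms and build commands below are invented for illustration and are not taken from GridAnt or any other system cited here; this is a sketch of the problem, not a proposed solution.

```python
# Naive deployment sketch: build and install an application on each
# resource, choosing a build recipe per platform. Host names, paths and
# build commands are hypothetical; real Grid deployment must also cope
# with authentication, firewalls and partial failures.

import subprocess

RESOURCES = [
    {"host": "cluster1.example.org", "platform": "linux-x86"},
    {"host": "sgi.example.org",      "platform": "irix-mips"},
    {"host": "power.example.org",    "platform": "aix-power"},
]

BUILD_RECIPES = {
    "linux-x86": "./configure && make && make install PREFIX=$HOME/app",
    "irix-mips": "./configure CC=cc && make && make install PREFIX=$HOME/app",
    "aix-power": "./configure CC=xlc && make && make install PREFIX=$HOME/app",
}

failures = []
for res in RESOURCES:
    cmd = ["ssh", res["host"], BUILD_RECIPES[res["platform"]]]
    # Each remote build can fail for a platform-specific reason, which is
    # exactly why manual, sequential deployment does not scale.
    if subprocess.run(cmd).returncode != 0:
        failures.append(res["host"])

print("deployment failed on:", failures or "none")
```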


2.3 Testing and Debugging Testing and debugging software is already challenging on a local workstation. In the Grid this phase is particularly difficult, because the software must be tested and debugged on a range of different platforms, distributed geographically and in different administrative domains. Traditional testing and debugging tools are unable to provide the support required. At present, the only feasible way of debugging a piece of Grid software is for the programmer to log into the remote system and run a conventional debugger such as gdb [41]. This technique does not scale to large Grids, and is not practical for workflows that are distributed across multiple resources. Debugging is a serious challenge in the Grid, partly because an application must often execute on a range of different platforms. In this environment, it is not uncommon for a program to work correctly on one machine, but fail in subtle ways when the software is ported or moved to another platform. Traditional debugging techniques usually force the programmer to debug the ported application from scratch, making the task complex and time consuming.

2.4 Execution This phase typically means scheduling and coordinating execution, using a variety of resources concurrently. Of the phases discussed to date, execution has attracted the most attention. There are many different ways of starting, scheduling and controlling the execution of Grid software, ranging from a direct interface to the middleware through to sophisticated scheduling and orchestration systems. Web Services and Globus [27] provide rudimentary mechanisms for starting jobs on a given remote resource, and these services can be used to build more complex multi-resource schedulers. For example, Kepler contains methods for scheduling and controlling the execution of a workflow and uses the Globus GRAM interface to execute the various workflow actors [12]. Other middleware, such as Condor-G [26][34] and APST [21], use sophisticated scheduling algorithms to enforce quality of service metrics. For example, APST can minimize the total execution time of a workflow. Systems such as Cactus [11] provide techniques for migrating computations across Grid resources, so that the computation adapts to variability in resource availability.

3 Grid Middleware Figure 2 shows a traditional software hierarchy. Here, e-Science applications use services that are exposed by both the platform infrastructure and middleware such as Globus and Unicore [39]. In our experience, whilst powerful, these services are typically too low level for many e-Science applications. As a result, there is a significant ‘semantic gap’ between them, because the application needs are not matched by the underlying middleware services. Moreover, they do not support the software lifecycle, thereby making software development difficult and error-prone. To solve these problems, we propose a new hierarchy as shown in Figure 3. The existing middleware is renamed lower middleware, and an upper middleware layer is inserted. This upper middleware layer is designed to narrow the semantic gap between existing middleware and applications. Importantly, it hosts a range of interoperating tools that will form the e-Scientist's workbench, thus supporting the major phases of the software development lifecycle as well as the applications themselves.


Fig. 2. Traditional software hierarchy

Fig. 3. New software hierarchy

4 Upper Middleware and Tools Our research group has built a number of software tools that address some of the challenges cited in Section 2, as shown in Figure 4. In particular, Nimrod and GriddLeS target software development; Guard focuses on debugging; Grid Work Bench and DistAnt target deployment; and Nimrod, GriddLeS, Active Sheets, REMUS and the Nimrod Portal all focus on execution. In this section we provide a very brief overview of these tools.

4.1 Development Nimrod/G and GriddLeS [6][8] address some of the challenges in the creation of Grid software. Nimrod/G manages the execution of studies with varying parameters across distributed computers. It takes responsibility for the overall management of an experiment as well as the low-level issues of distributing files to remote systems,


Fig. 4. Monash Grid Tools

performing the remote computations, and gathering the results. When users describe an experiment to Nimrod/G, a declarative plan file is developed that describes the parameters, their default values, and the commands needed to perform the work. Apart from this high-level description, users are freed from much of the complexity of the Grid. As a result, Nimrod/G has been very popular among application scientists. Nimrod/O is a variant of Nimrod/G that performs a guided search of the design space rather than exploring all combinations. Nimrod/O allows users to phrase questions such as: “What set of design parameters will minimize (or maximize) the output of my model?” If the model computes metrics such as cost and lifetime, it is then possible to perform automatic optimal design. A commercial version of Nimrod, called EnFuzion, has been produced [16]. GriddLeS, on the other hand, provides a very flexible input-output model that makes it possible to build workflows from legacy applications (written in Fortran, C, etc.), thereby leveraging the enormous amount of scientific software that already exists. GriddLeS allows existing programs to transparently access local and remote files, as well as data that is replicated across multiple servers using Grid middleware such as the Storage Resource Broker [38] and the Globus Replica Location Service [23]. It also allows workflows to pipe data from one application to another without any changes to the underlying data access model. In order to support scientific workflows, we have coupled GriddLeS with the Kepler workflow system. Kepler is an active open source, cross-project, cross-institution collaboration to build and evolve a scientific workflow system on top of the Ptolemy II system. Kepler allows scientists from multiple domains to design and execute scientific workflows. It includes two dataflow-based computation models, Process Networks (PN) and Synchronous Data Flow (SDF), and these can be used to define the “orchestration semantics” of a workflow. Simply by changing these models, one can change the scheduling and overall execution semantics of a workflow. By combining Kepler and GriddLeS, a user has significant flexibility in choosing the way data is transferred between the individual components, and this can be done without any changes to the application source.
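As a toy illustration of the kind of experiment described by a plan file, the following sketch enumerates the full cross-product of some parameter values and launches one model run per combination. The parameter names, values and command are invented for the example, and the file staging, scheduling and economy features that Nimrod/G provides are deliberately omitted.

import itertools
import subprocess

# Hypothetical sweep: every combination of these values is one model run.
parameters = {
    "reynolds": [1000, 5000, 10000],
    "angle":    [0.0, 2.5, 5.0, 7.5],
    "mesh":     ["coarse", "fine"],
}

def run_job(combo):
    """Run one instance of the model for a single parameter combination."""
    args = [f"--{name}={value}" for name, value in combo.items()]
    # A real sweep would stage the job to a remote Grid resource; this
    # sketch simply runs a local executable called ./model.
    subprocess.run(["./model"] + args, check=True)

names = list(parameters)
for values in itertools.product(*(parameters[n] for n in names)):
    run_job(dict(zip(names, values)))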


4.2 Deployment We are currently developing a few different tools to solve the deployment problem, specifically DistAnt and GWB. DistAnt provides an automated application deployment system with a user-oriented approach [30]. It is targeted at users with reasonable knowledge of the application they are deploying, but strictly limited grid computing knowledge, resource information and, importantly, resource authority. DistAnt provides a simple, scalable and secure deployment service and supports a simple procedural deployment description. DistAnt supports application deployment over heterogeneous grids by virtualizing certain grid resource attributes to provide a common application deployment gateway, deployment description, file system structure and resource description. To manage remaining resource heterogeneity, DistAnt supports sub-grids, which divide an unmanageable heterogeneous grid into manageable sets of like resources, categorized by resource attributes that can be queried. Sub-grids provide a framework to execute environment-specific remote build routines, compile an application over a set of resource platforms and redistribute binaries to the entire grid. DistAnt also supports definition and deployment of application dependencies. DistAnt enables deployment of a complex native application over an uncharacterized heterogeneous grid, assuming nothing about grid resources. Furthermore, integration of DistAnt into Nimrod/G provides an overall environment enabling grid-scale application development, deployment and execution. In addition to DistAnt, we are building a rich interactive development environment (IDE), called Grid Work Bench (GWB). GWB is based on the public domain platform Eclipse [32], and supports the creation, management, distribution and debugging of Grid applications. GWB provides specific functionality to help programmers manage the complexity and heterogeneity of the Grid.

4.3 Testing and Debugging The Guard debugger targets the process of testing and debugging in the Grid [2][3]. Specifically, it solves some of the problems discussed in Section 2.3 concerning programs that fail when they are ported from one Grid resource to another. We use a new methodology called relative debugging, which allows users to compare data between two programs being executed. Relative Debugging is effectively a hybrid test-and-debug methodology. While traditional debuggers force the programmer to understand the expected state and internal operation of a program, relative debugging makes it possible to trace errors by comparing the contents of data structures between programs at run time. In this way, programmers are less concerned with the actual state of the program. They are more concerned with finding when, and where, differences occur between the old and new code. The methodology requires users to begin by observing that two programs generate different results. They then move back iteratively through the data flow of the codes, to determine the point at which different answers appear. Guard supports the execution of both sequential and parallel programs on a range of platforms. It also exists for a number of different development environments. Because Guard uses a client-server architecture, it is possible to run a debug client on one Grid resource and have it debug an application running on another one, removing the need for users to log into the target system.


Further, Guard uses a platform neutral data representation called AIF [49], which means the client and debug servers can run on different types of architecture. We are concurrently developing a WSRF compliant debug service that will allow high level tools like the GRB to debug applications across multiple Grid resources. This debug service will interoperate with the Globus GRAM interface, thus jobs launched by the GRAM can be debugged in a secure and efficient way using the additional interface.

4.4 Execution Nimrod provides significant support during the execution of parameter sweeps, including a sophisticated scheduler that enforces real-time deadlines. This scheduler allows users to specify soft real-time deadlines that are enforced by trading units in a computational economy [19][20]. Using this approach the system can provide a quality of service that is proportional to the amount of currency a user wishes to expend on an experiment – in short, the more a user pays, the more likely they are to meet their deadline at the expense of another user. The Nimrod scheduler supports two types of inter-task constraints, namely parallel and sequential dependencies. Parallel tasks are executed concurrently provided there are sufficient computational resources. Typically, these tasks pertain to different parameter values in a parameter sweep and can be executed in parallel. However, it is possible to specify special sequential parameters (called seqameters, as opposed to parameters) that force the order of the execution to be sequential. This means that one task may be dependent on the output from another, and its execution can be stalled until the data is available. The Nimrod Portal and Active Sheets address the execution phase of the life cycle. The Nimrod Portal allows users to create Nimrod experiments from a web interface. It supports the creation of the plan files discussed above using a graphical user interface, the management of the test bed (and associated Globus issues such as certificate management), and control of the experiment as it executes. Active Sheets [10] allows users to set up and execute an experiment from a familiar spreadsheet interface. Individual cells can invoke Nimrod/G to perform one simulation run; multiple data-independent cells can be used to specify an entire “what if” experiment. Because the system is embedded in Microsoft Excel, all normal data manipulation and charting tools are available for post analysis (a feature that is popular with users). REMUS is an execution environment that helps users build complex Grid applications across firewalls and different administrative domains [45]. REMUS provides mechanisms that reroute traffic through approved channels without compromising the security of any site. It effectively handles heterogeneity in security mechanisms, allowing applications to communicate when there is no common security framework.
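Returning briefly to the relative debugging methodology of Section 4.3, the fragment below is a minimal, stand-alone illustration of its core step: comparing the same data structure from a reference run and a ported run and reporting where they first diverge. It is not Guard itself; the array name, tolerance and injected error are invented, and Guard performs such comparisons between live processes, potentially on different architectures.

import numpy as np

def compare(name, reference, suspect, tol=1e-9):
    """Report where two versions of the 'same' array first diverge."""
    reference, suspect = np.asarray(reference), np.asarray(suspect)
    if reference.shape != suspect.shape:
        print(f"{name}: shapes differ {reference.shape} vs {suspect.shape}")
        return False
    diff = np.abs(reference - suspect)
    if np.all(diff <= tol):
        return True
    idx = np.unravel_index(np.argmax(diff > tol), diff.shape)  # first element over tolerance
    print(f"{name}: first divergence at {idx}: "
          f"{reference[idx]} vs {suspect[idx]} (|diff| = {diff[idx]:.3g})")
    return False

# The same field computed by the original code and by a (buggy) ported version.
original = np.linspace(0.0, 1.0, 8)
ported = original.copy()
ported[5] += 1e-3            # a subtle porting error
compare("pressure_field", original, ported)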

5 Conclusion In this paper we have provided a very brief overview of the challenges in building software for the Grid. We have focused on four phases of the software lifecycle, namely development, deployment, testing and debugging, and execution.


We have shown that it is possible to split Grid middleware into two layers: one that addresses low-level issues and a higher, application-focused layer. This latter layer can support software tools that make the software development task easier. We have discussed a number of tools developed by the author that simplify the software development task.

Acknowledgments The author wishes to acknowledge a number of people who have contributed to the work described in this document, including Shahaan Ayyub, Rajkumar Buyya, Phillip Chan, Clement Chu, Colin Enticott, Jagan Kommineni, Donny Kurniawan, Slavisa Garic, Jon Giddy, Wojtek Goscinski, Tim Ho, Andrew Lewis, Tom Peachey, Jeff Tan and Greg Watson. The projects are supported by a variety of funding agencies, including the Australian Research Council, the Australian Department of Communications, Arts and Information Technology (DCITA), the Australian Department of Education, Science and Technology (DEST), Microsoft, IBM and Hewlett Packard.

References
1. Abramson D, Lewis A, Peachey T, Fletcher, C., "An Automatic Design Optimization Tool and its Application to Computational Fluid Dynamics", SuperComputing 2001, Denver, Nov 2001.
2. Abramson D., Foster, I., Michalakes, J. and Sosic R., "Relative Debugging: A new paradigm for debugging scientific applications", Communications of the Association for Computing Machinery (CACM), Vol. 39, No 11, pp 67-77, Nov 1996.
3. Abramson D., Foster, I., Michalakes, J. and Sosic R., "Relative Debugging and its Application to the Development of Large Numerical Models", Proceedings of IEEE Supercomputing 1995, San Diego, December 95. Paper on CD, no pages.
4. Abramson D., Sosic R., Giddy J. and Hall B., "Nimrod: A Tool for Performing Parametrised Simulations using Distributed Workstations", The 4th IEEE Symposium on High Performance Distributed Computing, Virginia, August 1995.
5. Abramson, D., Lewis, A. and Peachey, T., "Nimrod/O: A Tool for Automatic Design Optimization", The 4th International Conference on Algorithms & Architectures for Parallel Processing (ICA3PP 2000), Hong Kong, 11-13 December 2000.
6. Abramson, D. and Kommineni, J., "A Flexible IO Scheme for Grid Workflows", IPDPS04, Santa Fe, New Mexico, April 2004.
7. Abramson, D., Giddy, J. and Kotler, L., "High Performance Parametric Modeling with Nimrod/G: Killer Application for the Global Grid?", International Parallel and Distributed Processing Symposium (IPDPS), pp 520-528, Cancun, Mexico, May 2000.
8. Abramson, D., Kommineni, J. and Altinas, I., "Flexible IO services in the Kepler Grid Workflow Tool", to appear, IEEE Conference on e-Science and Grid Computing, Melbourne, Dec 2005.
9. Abramson, D., Roe, P., Kotler L. and Mather, D., "ActiveSheets: Super-Computing with Spreadsheets", 2001 High Performance Computing Symposium (HPC'01), Advanced Simulation Technologies Conference, April 22-26, 2001, pp 110-115, Seattle, Washington (USA).


10. Abramson, D., Roe, P., Kotler L. and Mather, D., "ActiveSheets: Super-Computing with Spreadsheets", 2001 High Performance Computing Symposium (HPC'01), Advanced Simulation Technologies Conference, April 22-26, 2001, pp 110-115, Seattle, Washington (USA).
11. Allen G. and Seidel, E., "Collaborative Science: Astrophysics Requirements and Experiences", in The Grid: Blueprint for a New Computing Infrastructure (2nd Edition), Ed: Ian Foster and Carl Kesselman, pp. 201-213, 2004.
12. Altintas, I., Berkley, C., Jaeger, E., Jones, M., Ludäscher B. and Mock, S., "Kepler: Towards a Grid-Enabled System for Scientific Workflows", in the Workflow in Grid Systems Workshop in GGF10 - The 10th Global Grid Forum, Berlin, March 2004.
13. Anderson P. and Scobie, A., "LCFG: The Next Generation", UKUUG Winter Conference, 2002.
14. Anderson, P., Goldsack, P. and Paterson, J., "SmartFrog meets LCFG - Autonomous Reconfiguration with Central Policy Control", 2002 Large Installations Systems Administration Conference, 2003.
15. Annis, J., Zhao, Y. et al., "Applying Chimera Virtual Data Concepts to Cluster Finding in the Sloan Sky Survey", Technical Report GriPhyN-2002-05, 2002.
16. Axceleon Inc, http://www.axceleon.com
17. Berman et al., "The GrADS Project: Software Support for High-Level Grid Application Development", International Journal of High Performance Computing Applications, Winter 2001 (Volume 15, Number 4), pp. 327-344.
18. Bruno, G., Papadopoulos P. and Katz, M., "Npaci rocks: Tools and techniques for easily deploying manageable linux clusters", Cluster 2001, 2001.
19. Buyya, R., Abramson, D. and Giddy, J., "Nimrod/G: An Architecture of a Resource Management and Scheduling System in a Global Computational Grid", HPC Asia 2000, May 14-17, 2000, pp 283-289, Beijing, China.
20. Buyya, R., Abramson, D. and Venugopal, S., "The Grid Economy", Special Issue on Grid Computing, Proceedings of the IEEE, Manish Parashar and Craig Lee (editors), Volume 93, Issue 3, pp 698-714, IEEE Press, New York, USA, March 2005.
21. Casanova H. and Berman, F., "Parameter Sweeps on The Grid With APST", chapter 26, Wiley Publisher, Inc., 2002. F. Berman, G. Fox, and T. Hey, editors.
22. Casanova H. and Dongarra, J., "Netsolve: A Network Server for Solving Computational Science Problems", Supercomputing Applications and High Performance Computing, vol. 11, no. 3, pp. 212-223, 1997.
23. Chervenak, A.L., Palavalli, N., Bharathi, S., Kesselman C. and Schwartzkopf, R., "Performance and Scalability of a Replica Location Service", Proceedings of the International IEEE Symposium on High Performance Distributed Computing (HPDC-13), June 2004.
24. Czajkowski, K., Foster, I., Frey, J. et al., "The WS-Resource Framework, Version 1.0", 03/05/2004, http://www.globus.org/wsrf/
25. Deelman, E., Blackburn, K. et al., "GriPhyN and LIGO, Building a Virtual Data Grid for Gravitational Wave Scientists", 11th Intl Symposium on High Performance Distributed Computing, 2002.
26. Douglas Thain, Todd Tannenbaum, and Miron Livny, "Condor and the Grid", in Fran Berman, Anthony J.G. Hey, Geoffrey Fox, editors, Grid Computing: Making The Global Infrastructure a Reality, John Wiley, 2003.
27. Foster I. and Kesselman C., "Globus: A Metacomputing Infrastructure Toolkit", International Journal of Supercomputer Applications, 11(2): 115-128, 1997.
28. Foster, I. and Kesselman, C. (editors), The Grid: Blueprint for a New Computing Infrastructure, ISBN: 1558609334, 2nd edition, Morgan Kaufmann, USA, November 18, 2003.


29. Goldsack, P., Guijarro, J., Lain, A. et al., "SmartFrog: Configuration and Automatic Ignition of Distributed Applications", HP Labs, UK, 2003, http://www.hpl.hp.com/research/smartfrog/
30. Goscinski, W. and Abramson, D., "Distributed Ant: A System to Support Application Deployment in the Grid", Fifth IEEE/ACM International Workshop on Grid Computing, 2004.
31. GriPhyN, www.griphyn.org
32. http://www.eclipse.org
33. Kacsuk, P., Cunha, J.C., Dózsa, G., Lourenco, J., Antao, T. and Fadgyas, T., "GRADE: A Graphical Development and Debugging Environment for Parallel Programs", Parallel Computing Journal, Elsevier, Vol. 22, No. 13, Feb. 1997, pp. 1747-1770.
34. Litzkow, M., Livny, M. and Mutka, M., "Condor - A Hunter of Idle Workstations", Proceedings of the 8th International Conference on Distributed Computing Systems, San Jose, Calif., June 1988.
35. Ludäscher, B., Altintas, I., Berkley, C., Higgins, D., Jaeger-Frank, E., Jones, M., Lee, E., Tao J. and Zhao, Y., "Scientific Workflow Management and the Kepler System", Concurrency and Computation: Practice & Experience, Special Issue on Scientific Workflows, 2005.
36. NPACI, "Telescience", https://gridport.npaci.edu/Telescience/
37. Oinn, T., Addis, M., Ferris, J., Marvin, D., Senger, M., Greenwood, M., Carver, T., Glover, K., Pocock, M., Wipat, A. and Li, P., "Taverna: A tool for the composition and enactment of bioinformatics workflows", Bioinformatics Journal 20(17), pp 3045-3054, 2004, doi:10.1093/bioinformatics/bth361.
38. Rajasekar, A., Wan, M., Moore, R., Schroeder, W., Kremenek, G., Jagatheesan, A., Cowart, C., Zhu, B., Chen S. and Olschanowsky, R., "Storage Resource Broker - Managing Distributed Data in a Grid", Computer Society of India Journal, Special Issue on SAN, Vol. 33, No. 4, 2003, pp. 42-54.
39. Romberg, M., "The UNICORE Architecture: Seamless Access to Distributed Resources", Proceedings of the Eighth IEEE International Symposium on High Performance Computing, Redondo Beach, CA, USA, August 1999, pp. 287-293.
40. Southern California Earthquake Center's Community Modeling Environment, http://www.scec.org/cme/
41. Stallman, R., Debugging with GDB - The GNU Source Level Debugger, Edition 4.12, Free Software Foundation, January 1994.
42. Stevens, R., Tipney, H.J., Wroe, C., Oinn, T., Senger, M., Lord, P., Goble, C.A., Brass A. and Tassabehji M., "Exploring Williams-Beuren Syndrome Using myGrid", Proceedings of 12th International Conference on Intelligent Systems in Molecular Biology, 31st Jul-4th Aug 2004, Glasgow, UK, published Bioinformatics Vol. 20 Suppl. 1, 2004, pp 303-310.
43. Sudholt, W., Baldridge, K., Abramson, D., Enticott, C. and Garic, S., "Parameter Scan of an Effective Group Difference Pseudopotential Using Grid Computing", New Generation Computing 22 (2004), 125-135.
44. Sudholt, W., Baldridge, K., Abramson, D., Enticott, C. and Garic, S., "Application of Grid computing to parameter sweeps and optimizations in molecular modeling", Future Generation Computer Systems, 21 (2005), 27-35.
45. Tan, J., Abramson, D. and Enticott, C., "Bridging Organizational Network Boundaries on the Grid", IEEE Grid 2005, Seattle, Nov 2005.
46. Tanaka, Y., Takemiya, H., Nakada, H. and Sekiguchi, S., "Design, implementation and performance evaluation of GridRPC programming middleware for a large-scale computational grid", Fifth IEEE/ACM International Workshop on Grid Computing, pp. 298-305, 2005.


47. Taylor, I., Wang, I., Shields, M. and Majithia, S., "Distributed computing with Triana on the Grid", Concurrency and Computation: Practice and Experience, 17(1-18), 2005.
48. von Laszewski, G., Alunkal, B., Amin, K., Hampton, S. and Nijsure, S., "GridAnt: Client-side Workflow Management with Ant", 2002, http://wwwunix.globus.org/cog/projects/gridant/
49. Watson, G. and Abramson, D., "The Architecture of a Parallel Relative Debugger", 13th International Conference on Parallel and Distributed Computing Systems - PDCS 2000, August 8-10, 2000.
50. Yarrow, M., McCann, K., Biswas, R. and Van der Wijngaart, R., "An Advanced User Interface Approach for Complex Parameter Study Process Specification on the Information Power Grid", Proceedings of the 1st Workshop on Grid Computing (GRID 2002), Bangalore, India, Dec. 2000.

Strongly Connected Dominating Sets in Wireless Sensor Networks with Unidirectional Links

Ding-Zhu Du¹, My T. Thai¹, Yingshu Li², Dan Liu¹, and Shiwei Zhu¹

¹ Department of Computer Science and Engineering, University of Minnesota, 200 Union Street, Minneapolis, MN 55455, USA
{dzd, mythai, danliu, zhu}@cs.umn.edu
² Department of Computer Science, Georgia State University, 34 Peachtree Street, Atlanta, GA 30303, USA
[email protected]

Abstract. A Connected Dominating Set (CDS) can serve as a virtual backbone for a wireless sensor network since there is no fixed infrastructure or centralized management in wireless sensor networks. With the help of CDS, routing is easier and can adapt quickly to network topology changes. The CDS problem has been studied extensively in undirected graphs, especially in unit disk graphs, in which each sensor node has the same transmission range. However, in practice, the transmission ranges of all nodes are not necessarily equal. In this paper, we model a network as a disk graph where unidirectional links are considered and introduce the Strongly Connected Dominating Set (SCDS) problem in disk graphs. We propose two constant approximation algorithms for the SCDS problem and compare their performance through theoretical analysis.
Keywords: Strongly Connected Dominating Set, Disk Graph, Wireless Sensor Network, Virtual Backbone, Directed Graph.

1 Introduction

Recent advances in technology have made possible the creation of Wireless Sensor Networks (WSNs). WSNs can be used in a wide range of potential applications both in the military and in people's daily lives. In WSNs, there is no fixed or pre-defined infrastructure. Nodes in WSNs communicate via a shared medium, either through a single hop or multiple hops. Although there is no physical backbone infrastructure, a virtual backbone can be formed by constructing a Connected Dominating Set (CDS). Given an undirected graph G = (V, E), a subset V′ ⊆ V is a CDS of G if for each node u ∈ V, u is either in V′ or there exists a node v ∈ V′ such that uv ∈ E, and the subgraph induced by V′, i.e., G(V′), is connected. The nodes in the CDS are called dominators, other nodes are called dominatees. With the help of CDS, routing is easier and can adapt quickly to network topology changes. To reduce the traffic during communication and simplify the connectivity management, it is desirable to construct a Minimum CDS (MCDS).


Fig. 1. A Disk Graph Representing a Network

The CDS problem has been studied intensively in Unit Disk Graphs (UDG), in which each node has the same transmission range. The MCDS problem in UDG has been shown to be NP-hard. To build a CDS, most of the current algorithms first find a Maximal Independent Set (MIS) I of G and then connect all the nodes in I to form a CDS. The independent set I is a subset of V such that for any two nodes u, v ∈ I, uv ∉ E. In other words, the nodes in I are pairwise non-adjacent. A maximal independent set is an independent set to which no more nodes can be added while retaining the non-adjacency property. The most relevant related work using this scheme are [3] and [4]. In [3], Wan et al. proposed the first distributed algorithm with a performance ratio of 8. Later, Li et al. proposed a better algorithm with a performance ratio of (4.8 + ln 5) by constructing a Steiner tree when connecting all the nodes in I [4]. However, in practice, the transmission ranges of all the nodes are not necessarily equal. In this case, a wireless ad hoc network can be modelled using a directed graph G = (V, E). The nodes in V are located in a Euclidean plane and each node vi ∈ V has a transmission range ri ∈ [rmin, rmax]. A directed edge (vi, vj) ∈ E if and only if d(vi, vj) ≤ ri, where d(vi, vj) denotes the Euclidean distance between vi and vj. Such graphs are called Disk Graphs (DG). An edge (vi, vj) is unidirectional if (vi, vj) ∈ E and (vj, vi) ∉ E. An edge (vi, vj) is bidirectional if both (vi, vj) and (vj, vi) are in E, i.e., d(vi, vj) ≤ min{ri, rj}. In other words, the edge (vi, vj) is bidirectional if vi is in the disk Dj centered at vj with radius rj and vj is in the disk Di centered at vi with radius ri. Figure 1 gives an example of a DG representing a network. In Figure 1, the dotted circles represent the transmission ranges, the directed edges represent the unidirectional links in G, and the undirected edges represent the bidirectional links. In this paper, we study the Strongly Connected Dominating Set (SCDS) problem in directed disk graphs. Given a directed graph G = (V, E), a subset S ⊆ V is a Dominating Set (DS) of G if for any node u ∈ V, u ∈ S or there exists v ∈ S such that (v, u) ∈ E. S is strongly connected if for every pair u and v ∈ S, there exists a directed path from u to v in the directed graph induced by S, i.e., G(S). Formally, the SCDS problem can be defined as follows:

Definition 1. Strongly Connected Dominating Set (SCDS) Problem: Given a directed disk graph G = (V, E), find a subset S ⊆ V with minimum size, such that the subgraph induced by S, called G(S), is strongly connected and S forms a dominating set in G.
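For concreteness, a directed disk graph can be generated directly from node coordinates and per-node transmission ranges, following the definition above. The sketch below uses made-up coordinates and ranges, and also classifies each edge as unidirectional or bidirectional.

import math

# Hypothetical nodes: name -> (x, y, transmission range r)
nodes = {
    "a": (0.0, 0.0, 3.0),
    "b": (2.5, 0.0, 1.0),
    "c": (5.0, 0.5, 2.0),
}

def distance(u, v):
    (x1, y1, _), (x2, y2, _) = nodes[u], nodes[v]
    return math.hypot(x1 - x2, y1 - y2)

# Directed edge (u, v) exists iff d(u, v) <= r_u.
edges = {(u, v) for u in nodes for v in nodes
         if u != v and distance(u, v) <= nodes[u][2]}

for (u, v) in sorted(edges):
    kind = "bidirectional" if (v, u) in edges else "unidirectional"
    print(f"({u}, {v}) is {kind}")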


The SCDS problem is NP-hard since the MCDS problem in UDG is NP-hard and UDG is a special case of DG. Note that in directed graphs, an MIS is not necessarily a DS. Hence we cannot find an MIS and connect it to construct a SCDS. Instead, we need to find a DS directly and then connect it to form a SCDS. Based on this approach, we present two constant approximation algorithms for computing a minimum SCDS in DG, called the Connected Dominating Set using a Breadth First Search tree (CDS-BFS) algorithm and the Connected Dominating Set using Minimum Steiner Nodes (CDS-MSN) algorithm. The main difference between these two algorithms is the construction used to connect the obtained dominating set. To guarantee that the graph G has a solution for the SCDS problem, we assume that G is a strongly connected graph. The remainder of this paper is organized as follows. Section 2 describes the related research work on the CDS problem, both in undirected and directed graphs. The CDS-BFS algorithm and its theoretical analysis are discussed in Section 3. Section 4 presents the CDS-MSN algorithm, shows its performance ratio, and discusses its improvement over the previous algorithm. Finally, Section 5 ends the paper with conclusions and some future work.

2 Related Work

The CDS problem in wireless sensor networks has been studied extensively in undirected graphs. Algorithms that construct a CDS can be divided into two categories based on their algorithm designs: centralized algorithms and decentralized algorithms. The centralized algorithms usually yield a smaller CDS with a better performance ratio than that of decentralized algorithms. The decentralized algorithms can be further divided into two categories: distributed algorithms and localized algorithms. In distributed algorithms, the decision process is decentralized. In the localized algorithms, the decision process is not only distributed but also requires only a constant number of communication rounds. Based on the network models, these algorithms can be classified into two types: directed graphs and undirected graphs. For undirected graphs, they can be further divided into three categories: general undirected graphs, unit disk graphs, and disk graphs. When modelling a network as a general undirected graph G, the algorithm's performance ratio is related to Δ, where Δ is the maximum degree of G. When modelling a network as a unit disk graph, the performance ratio is constant due to the special geometric structure of UDG. In directed graphs, to our knowledge, there is only one work on finding a SCDS [10]. In [10], the authors presented a localized algorithm to construct a SCDS using the marking process. The authors did not present an analysis of the performance ratio. In undirected graphs, several approaches have been studied in the recent research literature. In [6], Guha and Khuller first proposed two polynomial time algorithms to construct a CDS in a general undirected graph G. These algorithms are greedy and centralized. The first one has an approximation ratio of 2(H(Δ) + 1) where H is the harmonic function. The idea of this algorithm is to build a spanning tree


T rooted at the node that has the maximum degree and grow T until all nodes are added to T. The non-leaf nodes in T form a CDS. In particular, all the nodes in a given network are white initially. The greedy function that the algorithm uses to add the nodes into T is the number of the white neighbors of each node or a pair of nodes. The one with the largest such number is marked black and its neighbors are marked grey. These nodes (black and grey nodes) are then added into T. The algorithm stops when no white node exists in G. The second algorithm is an improvement of the first one. This algorithm consists of two phases. The first phase is to construct a dominating set and the second phase is to connect the dominating set using a Steiner tree algorithm. With such improvement, the second algorithm has a performance ratio of H(Δ) + 2. In [7], Ruan et al. introduced another centralized and greedy algorithm of which the approximation ratio is (2 + ln Δ). For the localized algorithms, Wu and Li [8] proposed a simple algorithm that can quickly determine a CDS based on the connectivity information within the 2-hop neighbors. This approach uses a marking process. In particular, each node is marked true if it has two unconnected neighbors. All the marked nodes form a CDS. The authors also introduced some dominant pruning rules to reduce the size of the CDS. In [3], the authors showed that the performance ratio of [8] is within a factor of O(n), where n is the number of the nodes in a network. For UDG, most of the proposed algorithms are distributed algorithms, of which the main approach is to find a Maximal Independent Set (MIS) and then to connect this set. Note that in an undirected graph, an MIS is also a dominating set (DS). In [3], the authors proposed a distributed algorithm for the CDS problem in UDG. This algorithm consists of two phases and has a constant approximation ratio of 8. The algorithm first constructs a spanning tree. Then each node in the tree is examined to find an MIS for the first phase. All the nodes in the MIS are colored black. At the second phase, more nodes are added (colored blue) to connect those black nodes. Later, Cardei et al. presented another 2-phase distributed algorithm for a CDS in UDG [5]. This algorithm has the same performance ratio of 8. However, the improvement over [3] is the message complexity. The root does not need to wait for the COMPLETE message from the furthest nodes. Recently, Li et al. proposed another distributed algorithm with a better approximation ratio, which is (4.8 + ln 5) [4]. This algorithm also has two phases. At the first phase, an MIS is found. At the second phase, a Steiner tree algorithm is used to connect the MIS. The Steiner tree algorithm takes into consideration the property that any node in UDG is adjacent to at most 5 independent nodes. For the localized algorithms, in [9], Alzoubi et al. proposed a localized 2-phase algorithm with the performance ratio of 192. At the first phase, an MIS is constructed using the one-hop neighbors information. Specifically, once a node knows that it has the smallest ID within its neighbors, it becomes a dominator. At the second phase, the dominators are responsible for identifying a path to connect the MIS. In [2], Li et al. proposed another localized algorithm with the performance ratio of 172. This localized algorithm has only 1 phase. A node marks itself as a dominator if it can cover the most white nodes compared to its 2-hop neighbors.


For undirected disk graphs, Thai et al. recently have proposed three centralized algorithms with constant approximation ratios, which can be implemented as distributed algorithms [1]. These algorithms use a similar approach as in UDG, that is, to find an MIS and then to connect it. However, the authors in [1] took the different transmission ranges of nodes in networks into consideration. None of the above work has studied the SCDS problem in directed disk graphs. The SCDS problem in directed disk graphs is very practical since nodes in wireless ad hoc networks usually have different transmission ranges. Hence a node u in a given network can communicate directly with node v but node v might not be able to communicate directly with node u. Motivated by this, we study the SCDS problem and present two approximation algorithms in the next two sections.

3 The CDS-BFS Algorithm

In this section, we introduce the Connected Dominating Set using Breadth First Search tree (CDS-BFS) algorithm to construct a SCDS of a directed disk graph G = (V, E). We then analyze its performance ratio based on the geometric characteristics of disk graphs. Let us first begin this section with some graph theoretic notations that are used throughout this paper. For an arbitrary vertex v ∈ V, let N−(v) be the set of its in-coming neighbors, i.e., N−(v) = {u | (u, v) ∈ E}. Let N−[v] = N−(v) ∪ {v} be the set of closed in-coming neighbors of v. Likewise, let N+(v) be the set of its out-going neighbors, i.e., N+(v) = {u | (v, u) ∈ E}, and let N+[v] denote the set of the closed out-going neighbors of v.

3.1 Algorithm Description

The CDS-BFS algorithm has two stages. At the first stage, we find the Dominating Set (DS) S of G using a greedy method shown in Algorithm 1. Specifically, as described in Algorithm 1, at each iteration, we find a node u which has the largest transmission range in V and color it black. Remove the closed out-going neighbors of u from V, i.e., V = V − N+[u]. Note that a node u is added to S if and only if the constructed S so far does not dominate u yet. Clearly, the set of black nodes S forms a DS of G. At the second stage, two Breadth First Search (BFS) trees are constructed to connect S. Let s denote a node with the largest transmission range in S and vi, i = 1...p, be the other nodes in S. Let Tf(s) = (Vf, Ef) denote a forward tree rooted at s such that there exists a directed path from s to all vi, i = 1...p. Also let Tb(s) = (Vb, Eb) denote a backward tree rooted at s such that for any node vi, i = 1...p, there exists a directed path from vi to s. Note that the graph H that is the union of two such trees is a feasible solution to our SCDS problem. In other words, the graph H contains all the nodes in S and H is strongly connected. First, construct a BFS tree T1 of G rooted at s. Let Lj, j = 1...l, be the set of nodes at level j in T1, where l is the depth of T1. Note that L0 = {s}.


Algorithm 1 Find a Dominating Set
INPUT: A directed disk graph G = (V, E)
OUTPUT: A dominating set S
S = ∅
while V ≠ ∅ do
    Find a node u ∈ V with the largest radius ru and color u black
    S = S ∪ {u}
    V = V − N+[u]
end while
Return S
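A direct transcription of Algorithm 1 into Python might look as follows. The graph representation (a dictionary of out-going neighbour sets plus per-node radii) is an assumption of this sketch, not something prescribed by the paper.

def greedy_dominating_set(out_neighbors, radius):
    """Greedy dominating set of a directed disk graph (Algorithm 1).

    out_neighbors: dict mapping node -> set of out-going neighbours N+(v)
    radius:        dict mapping node -> transmission range r_v
    """
    remaining = set(out_neighbors)       # nodes not yet dominated
    dominators = []                      # the black nodes S, in selection order
    while remaining:
        # Pick the not-yet-dominated node with the largest transmission range.
        u = max(remaining, key=lambda v: radius[v])
        dominators.append(u)
        # Remove u and every node it dominates (its closed out-neighbourhood).
        remaining -= (out_neighbors[u] | {u})
    return dominators

# Tiny hand-made example.
out_neighbors = {1: {2, 3}, 2: {1}, 3: {4}, 4: {3}}
radius = {1: 3.0, 2: 1.0, 3: 2.0, 4: 2.0}
print(greedy_dominating_set(out_neighbors, radius))   # -> [1, 4]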

At each level j, let Sj be the black nodes in Lj, i.e., Sj = Lj ∩ S, and S̄j be the non-black nodes in Lj, i.e., S̄j = Lj − Sj. We construct Tf(s) as follows. Initially, Tf(s) has only one node s. At each iteration j, for each node u ∈ Sj, we find a node v such that v ∈ N−(u) ∩ Lj−1. If v is not black, color it blue. In other words, we need to find a node v such that v is an in-coming neighbor of u in G and v is in the previous level of u in T1. Add v to Tf(s), where v is the parent of u. This process stops when j = l. Next, we need to identify the parents of all the blue nodes. Similarly, at each iteration j, for each blue node u ∈ S̄j, find a node v ∈ N−(u) ∩ Sj−1 and set v as the parent of u in Tf(s). If no such black v exists, select a blue node in N−(u) ∩ S̄j−1. Thus Tf(s) consists of all the black and blue nodes and there is a directed path from s to all the other nodes in S. Now, we need to find Tb(s). First, construct a graph G′ = (V, E′) where E′ = {(u, v) | (v, u) ∈ E}. In other words, we reverse all the edges in G to obtain G′. Next, we build the second BFS tree T2 of G′ rooted at s. Then follow the above procedure to find a Tf′(s) such that there exists a directed path from s to all the other nodes in S. Then reverse all the edges in Tf′(s) back to their original directions, and we have Tb(s). Hence H = Tf(s) ∪ Tb(s) is the strongly connected subgraph where all the nodes in H form a SCDS. The construction of the CDS-BFS tree is described in Algorithm 2.

3.2 Theoretical Analysis

Lemma 1. For any two black nodes u and v in S, d(u, v) > rmin.

Proof. This is trivial. Without loss of generality, assume that ru ≥ rv ≥ rmin. Algorithm 1 would mark u as a black node before v. Assume that d(u, v) ≤ rmin; then v ∈ N+(u). Hence v cannot be black, contradicting our assumption.

Lemma 2. In a directed disk graph G = (V, E), the size of any dominating set S is upper bounded by

2.4(k + 1/2)²·opt + 3.7(k + 1/2)²

where k = rmax/rmin and opt is the size of the optimal solution of the SCDS problem.
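As a purely illustrative instance (the value k = 2 is chosen arbitrarily and does not come from the paper), the bound of Lemma 2 evaluates to

2.4(2 + 1/2)²·opt + 3.7(2 + 1/2)² = 2.4·6.25·opt + 3.7·6.25 = 15·opt + 23.125,

so in such a network the greedy dominating set is at most 15 times the size of an optimal SCDS, plus a constant.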


Algorithm 2 CDS-BFS
INPUT: A directed disk graph G = (V, E)
OUTPUT: A Strongly Connected Dominating Set C
Find a DS S using Algorithm 1
Choose node s ∈ S such that rs is maximum
Construct a BFS tree T1 of G rooted at s
Construct a tree Tf(s) such that there exists a directed path in Tf(s) from s to all other nodes in S, as follows:
for j = 1 to l do
    Lj is the set of nodes in T1 at level j
    Sj = Lj ∩ S; S̄j = Lj − Sj; Tf(s) = {s}
    for each node u ∈ Sj do
        select v ∈ (N−(u) ∩ Lj−1) and set v as a parent of u; if v is not black, color v blue
    end for
end for
for j = 1 to l do
    for each blue node u ∈ S̄j do
        if N−(u) ∩ Sj−1 ≠ ∅ then
            select v ∈ (N−(u) ∩ Sj−1) and set v as a parent of u
        else
            select v ∈ (N−(u) ∩ S̄j−1) and set v as a parent of u
        end if
    end for
end for
Reverse all edges in G to obtain G′
Construct a BFS tree T2 of G′ rooted at s
Construct a tree Tf′(s) such that there exists a directed path in Tf′(s) from s to all other nodes in S
Reverse all edges back to their original directions; Tf′(s) becomes Tb(s), in which there exists a directed path from all other nodes in S to s
H = Tf(s) ∪ Tb(s)
Return all nodes in H
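A compact Python sketch of the same idea is given below. Rather than reproducing the two-pass parent selection of Algorithm 2 exactly, it uses a simplified variant: for every black node it keeps the nodes on its BFS path back to the root, which likewise yields a tree rooted at s spanning S with some blue connectors; the backward tree is obtained by running the same routine on the edge-reversed graph. The adjacency representation is an assumption of the sketch.

from collections import deque

def bfs_parents(out_neighbors, s):
    """Parent of every node reachable from s in a BFS tree rooted at s."""
    parent, queue = {s: None}, deque([s])
    while queue:
        u = queue.popleft()
        for v in out_neighbors[u]:
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def forward_tree_nodes(out_neighbors, s, S):
    """Nodes of a tree rooted at s that reaches every node of S.

    Members of S are the black nodes; every other kept node is a blue connector.
    """
    parent = bfs_parents(out_neighbors, s)
    keep = {s}
    for u in S:
        while u not in keep:          # walk BFS ancestors back towards the root
            keep.add(u)
            u = parent[u]
    return keep

def reverse_graph(out_neighbors):
    rev = {v: set() for v in out_neighbors}
    for u, outs in out_neighbors.items():
        for v in outs:
            rev[v].add(u)
    return rev

def cds_bfs_nodes(out_neighbors, s, S):
    """Node set of the SCDS: union of the forward and backward trees."""
    return (forward_tree_nodes(out_neighbors, s, S)
            | forward_tree_nodes(reverse_graph(out_neighbors), s, S))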

Proof. Due to space limitation, the detailed proof is omitted. The basic idea of this proof is as follows. From Lemma 1, the set of all the disks centered at the nodes in S with radius rmin/2 are disjoint. Thus the size of any DS is bounded by the maximum number of disks with radius rmin/2 packing in the area covered by the optimal SCDS.

Theorem 1. The CDS-BFS algorithm produces a SCDS with size bounded by

12(k + 1/2)²·opt + 18.5(k + 1/2)²

where k = rmax/rmin.

Proof. Let C denote the SCDS obtained from the CDS-BFS algorithm. Let BTf and BTb be the blue nodes in Tf(s) and Tb(s) respectively. We have:


|C| = |BTf| + |BTb| + |S| ≤ 5|S|
|C| ≤ 5[2.4(k + 1/2)²·opt + 3.7(k + 1/2)²]
|C| ≤ 12(k + 1/2)²·opt + 18.5(k + 1/2)²

Corollary 1. If the maximum and minimum transmission ranges are bounded, then the CDS-BFS algorithm has an approximation factor of O(1).

4 The CDS-MSN Algorithm

In the previous section, we used the breadth first search tree to construct the tree interconnecting all the black nodes in S. This scheme is simple and fast. However, we can reduce the size of the obtained SCDS further by reducing the number of the blue nodes which are used to connect all the black nodes. In other words, we need to construct a tree with the minimum number of blue nodes to interconnect all the black nodes. The problem can be formally defined as follows:

Definition 2. Directed Steiner tree with Minimum Steiner Nodes (DSMSN): Given a directed graph G = (V, E) and a set of nodes S ⊆ V called terminals, construct a directed Steiner tree T rooted at s ∈ V such that there exists a directed path from s to all the terminals in T and the number of the Steiner nodes is minimum.

Note that a Steiner node is a node in T but not a terminal. In the SCDS problem context, Steiner nodes are also the blue nodes. Once we solve the DSMSN problem, we can use this solution to solve the SCDS problem. Initially, all the nodes in S are black and the other nodes in V are white. First, let us introduce the following definitions.

Definition 3. Spider: A spider is defined as a directed tree having at most one white node of out-degree more than two, with all other nodes being either black or blue. Such a white node is called the root of the spider. A v-spider is a spider rooted at a white node v. Each directed path from v to a leaf is called a leg. Note that all the nodes in each leg except v are either blue or black.

Definition 4. Contracting Operation: Let U be the set of out-going neighbors of all the black and blue nodes in a v-spider. To contract a v-spider, for each white node u ∈ U, create a directed edge (v, u). We then delete all the black and blue nodes in the v-spider and color v blue.

Figure 2 shows an example of a spider contracting operation. To solve the DSMSN problem, we repeatedly find a v-spider such that this spider has the maximum number of black and blue nodes and then contract this spider. The details of this algorithm are described in Algorithm 3. The correctness of Algorithm 3 is obvious. Since this algorithm is a solution of the DSMSN problem, we are now ready to introduce the CDS-MSN algorithm.

Fig. 2. A Spider Contracting Operation

Algorithm 3 DSMSN(G, s, S)
INPUT: A graph G = (V, E), a root s, a set of black nodes S
OUTPUT: A tree T rooted at s interconnecting all nodes in S
T = ∅
while the number of black and blue nodes in G > 1 do
    Find a white node v such that the v-spider has the largest number of black and blue nodes
    Contract this v-spider and update G
end while
Construct T from the set of black and blue nodes
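The greedy choice inside Algorithm 3, scoring candidate spider roots and picking the best one, can be sketched as follows. The adjacency structure and colour map are assumptions of this sketch, at least one white node is assumed to remain, and the contraction bookkeeping (re-wiring edges, deleting the contracted nodes and colouring the root blue) is omitted.

def spider_size(out_neighbors, color, v):
    """Number of black/blue nodes reachable from the white node v along
    paths whose nodes, other than v itself, are all black or blue."""
    seen = set()
    stack = [u for u in out_neighbors[v] if color[u] in ("black", "blue")]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        stack.extend(w for w in out_neighbors[u]
                     if w not in seen and color[w] in ("black", "blue"))
    return len(seen)

def best_spider_root(out_neighbors, color):
    """The white node whose spider covers the most black and blue nodes."""
    whites = [v for v in out_neighbors if color[v] == "white"]
    return max(whites, key=lambda v: spider_size(out_neighbors, color, v))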

4.1 Algorithm Description

The CDS-MSN algorithm consists of two stages. Similar to the CDS-BFS algorithm, the CDS-MSN constructs the dominating set S using Algorithm 1 at the first stage. At the second stage, the DSMSN algorithm as shown in Algorithm 3 is deployed to find a strongly connected dominating set. Choose s ∈ S such that rs is the largest. Note that all the nodes in S except s are black and the other nodes in V are white. Similar to the CDS-BFS algorithm, we need to construct Tf(s) and Tb(s). Let S′ = S − {s}. Tf(s) is constructed by calling algorithm DSMSN(G, s, S′). Next, construct a graph G′ = (V, E′) such that an edge (u, v) is in E′ if and only if the edge (v, u) is in E. Then we call the algorithm DSMSN(G′, s, S′) to obtain a tree Tf′(s). Then reverse all the edges in Tf′(s) back to their original directions, and we have a tree Tb(s). The union of these two trees is our solution to the SCDS problem. The main steps of the CDS-MSN algorithm are described in Algorithm 4.

4.2 Theoretical Analysis

Lemma 3. Given a directed disk graph G = (V, E), for any arbitrary node v ∈ V, we have |N+(v) ∩ S| ≤ (2k + 1)², where k = rmax/rmin.


Algorithm 4 CDS-MSN
INPUT: A directed graph G = (V, E)
OUTPUT: A strongly connected dominating set C
Find a DS S using Algorithm 1
Choose node s ∈ S such that rs is maximum
S′ = S − {s}
All nodes in S′ are black, others are white
Tf(s) = DSMSN(G, s, S′)
Reverse all edges in G to obtain G′
All nodes in S′ are black, others are white
Tf′(s) = DSMSN(G′, s, S′)
Reverse all edges in Tf′(s) to obtain Tb(s)
H = Tf(s) ∪ Tb(s)
Return all nodes in H

Proof. Recall that N+(v) is the set of out-going neighbors of v and S is a dominating set of G. Let v be a node with the largest transmission range. From Lemma 1, we have d(u, v) > rmin for u, v ∈ S. Hence the size of N+(v) ∩ S is bounded by the maximum number of disjoint disks with radius rmin/2 packing in the disk centered at v with radius rmax + rmin/2. We have:

π(rmax + rmin /2)2 ≤ (2k + 1)2 π(rmin /2)2

Let T ∗ be an optimal tree when connecting a given set S and C(T ∗ ) is the number of the Steiner nodes in T ∗ . Also let B be a set of blue nodes in T where T is the solution of the DSMSN problem obtained from Algorithm 3, we have the following lemma: Lemma 4. The size of B is at most (1 + 2 ln(2k + 1))C(T ∗ ) Proof. Let n = |S| and p = |B|. Let Gi be the graph G at the iteration i after a spider contracting operation. Let vi , i = 1...p be the blue nodes in the order of appearance in Algorithm 3 and let ai be the number of the black and blue nodes in Gi . Also let C(Ti∗ ) be the optimal solution of Gi . If n = 1, then the lemma is trivial. Assume that n ≥ 2, thus C(T ∗ ) ≥ 1. Since at each iteration i, we pick a white node v such that the v-spider has the maximum number of black and blue ai nodes, the number of black and blue nodes in v-spider must be at least C(T ∗ . i ) Thus we have: ai+1 ≤ ai −

ai ai ≤ ai − C(Ti∗ ) C(T ∗ )

Note that ap = 1 hence ap ≤ C(T ∗ ). Also, initially, a0 = n > C(T ∗ ). Then there exists h, 1 ≤ h ≤ p such that ah ≥ C(T ∗ ) and ah+1 < C(T ∗ ). Thus we have:


ah ≤ ah−1 − ah−1/C(T*) ≤ ah−1(1 − 1/C(T*)) ≤ ah−2(1 − 1/C(T*))² ≤ ...

ah ≤ a0(1 − 1/C(T*))^h ≤ a0 e^(−h/C(T*))

The last step uses the fact that −ln(1 − x) ≥ x. Therefore,

C(T*) ≤ ah ≤ a0 e^(−h/C(T*))

h/C(T*) ≤ ln(a0/ah) ≤ ln(n/C(T*)) ≤ 2 ln(2k + 1)

The last step uses Lemma 3. We conclude that

|B| = p ≤ h + ah+1 ≤ (1 + 2 ln(2k + 1))C(T*)

Theorem 2. The CDS-MSN algorithm produces a SCDS with size bounded by

(2.4(k + 1/2)² + 2 + 4 ln(2k + 1))·opt + 3.7(k + 1/2)²

where k = rmax/rmin.

Proof. Let C denote the SCDS obtained from the CDS-MSN algorithm. Let BTf and BTb be the blue nodes in Tf(s) and Tb(s) respectively. From Lemmas 3 and 4, we have:

|C| = |S| + |BTf| + |BTb| ≤ 2.4(k + 1/2)²·opt + 3.7(k + 1/2)² + 2(1 + 2 ln(2k + 1))·opt

|C| ≤ (2.4(k + 1/2)² + 2 + 4 ln(2k + 1))·opt + 3.7(k + 1/2)²

Corollary 2. If the maximum and minimum transmission ranges are bounded, then the CDS-MSN algorithm has an approximation factor of O(1).

5 Conclusions

In this paper, we have studied the Strongly Connected Dominating Set (SCDS) problem in directed disk graphs where both unidirectional and bidirectional links are considered. Directed disk graphs can be used to model wireless sensor networks where nodes have different transmission ranges. We have proposed two approximation algorithms and shown that the obtained SCDS is within a constant factor of the optimal SCDS. The main approach in our algorithms is to construct a dominating set and then connect it. Through the theoretical analysis, we have shown that using a Steiner tree with the minimum number of Steiner nodes to interconnect the dominating set can help to reduce the size of the SCDS. In order for a node u to send data using the SCDS C, u must not only be dominated by some node in C but also have an out-going neighbor in C. Thus we are interested in studying this problem in the future.


References
1. M. T. Thai, F. Wang, D. Liu, S. Zhu, and D.-Z. Du, "Connected Dominating Sets in Wireless Networks with Different Transmission Ranges", Manuscript, 2005.
2. Y. Li, S. Zhu, M. T. Thai, and D.-Z. Du, "Localized Construction of Connected Dominating Set in Wireless Networks", NSF International Workshop on Theoretical Aspects of Wireless Ad Hoc, Sensor and Peer-to-Peer Networks, 2004.
3. P.-J. Wan, K. M. Alzoubi, and O. Frieder, "Distributed Construction on Connected Dominating Set in Wireless Ad Hoc Networks", Proceedings of the Conference of the IEEE Communications Society (INFOCOM), 2002.
4. Y. Li, M. T. Thai, F. Wang, C.-W. Yi, P.-J. Wan, and D.-Z. Du, "On Greedy Construction of Connected Dominating Sets in Wireless Networks", Special issue of Wireless Communications and Mobile Computing (WCMC), 2005.
5. M. Cardei, M.X. Cheng, X. Cheng, and D.-Z. Du, "Connected Domination in Ad Hoc Wireless Networks", Proceedings of the Sixth International Conference on Computer Science and Informatics (CSI), 2002.
6. S. Guha and S. Khuller, "Approximation Algorithms for Connected Dominating Sets", Algorithmica, vol. 20, pp. 374-387, 1998.
7. L. Ruan, H. Du, X. Jia, W. Wu, Y. Li, and L.-I. Ko, "A Greedy Approximation for Minimum Connected Dominating Sets", Theoretical Computer Science, 2005. To appear.
8. J. Wu and H. Li, "On Calculating Connected Dominating Sets for Efficient Routing in Ad Hoc Wireless Networks", Proceedings of the Third International Workshop on Discrete Algorithms and Methods for Mobile Computing and Communications, 1999.
9. K.M. Alzoubi, P.-J. Wan, and O. Frieder, "Message-Optimal Connected Dominating Sets in Mobile Ad Hoc Networks", Proceedings of the ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), 2002.
10. F. Dai and J. Wu, "An Extended Localized Algorithm for Connected Dominating Set Formation in Ad Hoc Wireless Networks", IEEE Transactions on Parallel and Distributed Systems, vol. 15, no. 10, October 2004.

Mobile Web and Location-Based Services

Ling Liu
College of Computing, Georgia Institute of Technology, Atlanta, Georgia, USA
[email protected]

Abstract. The Web today, powered by Web server, application server technology, and Web services, is the lingua franca of the bulk of contents out on the Internet. As computing and communications options become ubiquitous, this Internet access capability is being embedded in billions of wireless devices such as PDAs, cellular phones, and computers embedded in vehicles. The Mobile Web is extending the Web through mobile information access, with the promise of greater information access opportunity, richer and device-spanning Web experiences, due to continuous availability and location awareness. In addition, advances in positioning technologies, mobile hardware, and the growing popularity and availability of mobile communications have made many devices location-aware. Location-based information management has become an important problem in mobile computing systems. Furthermore, the computational capabilities in mobile devices continue to rise, making mobile devices increasingly accessible. However, much research effort to date has been devoted to location management in centralized location monitoring systems. Very few have studied the distributed approach to real-time location management. We argue that for mobile applications that need to manage a large and growing number of mobile objects, the centralized approaches do not scale well in terms of server load and network bandwidth, and are vulnerable to single point of failure. In this keynote, I will describe the distributed location service architecture, and discuss some important opportunities and challenges of mobile location-based services (LBSs) in future computing environments. I will first review the research and development of LBSs in the past decade, focusing on system scalability, robustness, and performance measurements. Then I will discuss some important challenges for wide deployment of distributed location-based services in mission-critical applications and future computing environments. Not surprisingly, the mobile web and location-aware computing will drive the merger of the wireless and wired Internet world, creating a much larger industry than today's predominantly wired Internet industry.


The Case of the Duplicate Documents Measurement, Search, and Science Justin Zobel and Yaniv Bernstein School of Computer Science & Information Technology, RMIT University, Melbourne, Australia

Abstract. Many of the documents in large text collections are duplicates and versions of each other. In recent research, we developed new methods for finding such duplicates; however, as there was no directly comparable prior work, we had no measure of whether we had succeeded. Worse, the concept of “duplicate” not only proved difficult to define, but on reflection was not logically defensible. Our investigation highlighted a paradox of computer science research: objective measurement of outcomes involves a subjective choice of preferred measure; and attempts to define measures can easily founder in circular reasoning. Also, some measures are abstractions that simplify complex real-world phenomena, so success by a measure may not be meaningful outside the context of the research. These are not merely academic concerns, but are significant problems in the design of research projects. In this paper, the case of the duplicate documents is used to explore whether and when it is reasonable to claim that research is successful.

1 Introduction Research in areas such as the web and information retrieval often involves identification of new problems and proposals of novel solutions to these problems. Our investigation of methods for discovery of duplicate documents was a case of this kind of research. We had noticed that sets of answers to queries on text collections developed by TREC often contained duplicates, and thus we investigated the problem of duplicate removal. We developed a new algorithm for combing for duplicates in a document collection such as a web crawl, and found that our method identified many instances of apparent duplication. While none of these documents were bytewise identical, they were often nearly so; in other cases, the differences were greater, but it was clear that the documents were in some sense copies. However, this research outcome potentially involved circular reasoning. The existence of the problem is demonstrated by the solution, because, in large collections, manual discovery of duplicates is infeasible; and the success of the solution is indicated by the extent of the problem. That is, our algorithm succeeded on its own terms, but there was no evidence connecting this success to any external view of what a duplicate might be. We are, potentially, being misled by the use of the word “duplicate”, which seems to have a simple natural interpretation. But this belies the complexity of the problem. Duplicates arise in many ways – mirroring, revision, plagiarism, and many others – and a pair of documents can be duplicates in one context but not in others.


This issue is perhaps more easily understood in an abstract case. Suppose a researcher develops an algorithm for locating documents that are grue (where grue is a new property of documents that the researcher has decided to investigate) and documents are defined as being grue if they are located by the algorithm. Or suppose the researcher develops an algorithm that, on some test data, scores highly for grueness when compared to some alternative algorithms. We can say that these algorithms succeed, but, without an argument connecting grueness to some useful property in the external world, they are of little interest. Such problems are an instance of a widespread issue in computer science research: the paradox of measurement. We measure systems to objectively assess them, but the choice of measure – even for simple cases such as evaluating the efficiency of an algorithm – is a subjective decision. For example, information retrieval systems are classically measured by recall and precision, but this choice is purely a custom. In much research there is no explicit consideration of choice of measure, and measures are sometimes chosen so poorly that a reader cannot determine whether the methods are of value. Thus appropriate use of measures is an essential element of research. An algorithm that is convincingly demonstrated to be efficient or effective against interesting criteria may well be adopted by other people; an algorithm that is argued for on the basis that it has high grueness will almost certainly be ignored. Problems in measurement are a common reason that research fails to have impact. Researchers need, therefore, to find a suitable yardstick for measurement of the success of their solution. Yardsticks rely on assumptions that have no formal justification, so we need to identify criteria by which the value of a yardstick might be weighed. In this paper, we explore these issues in the context of our research into duplicates. We pose criteria for yardsticks and how they might be applied to duplicate detection. Our investigation illustrates that strong, sound research not only requires new problems and novel solutions, but also requires an appropriate approach to measurement. As we noted elsewhere, “many research papers fail to earn any citations. A key reason, we believe, is that the evidence does not meet basic standards of rigor or persuasiveness” (Moffat and Zobel, 2004). Consideration of these issues – which concern the question of what distinguishes applied science from activities such as software development – can help scientists avoid some of the pitfalls encountered in research and lead to work of greater impact.

2 Discovery of Duplicate Documents In 1993, not long after the TREC newswire collections were first distributed, we discovered passages of text that were copied between documents. This posed questions such as: how much plagiarism was there in the collections? How could it be found? The cost of searching for copies of a document is O(n), but naïvely the cost of discovery of copies, with no prior knowledge of which documents are copied, is O(n²). We developed a sifting method for discovery of duplicates, based on lossless identification of repeated phrases of length p. In this method, the data is processed p times, with non-duplicated phrases of increasing length progressively eliminated in each pass: a phrase of, say, four words cannot occur twice if one of its component phrases of length three only occurs once.

In our recent refinement of this method (Bernstein and Zobel, 2004), a hash table of, say, one billion 2-bit slots is used to identify phrase frequency, allowing false positives but no false negatives. When all p-word repeating phrases have been identified, these are processed to identify pairs of documents that share at least a specified amount of text. However, in our experiments we observed a vast number of cases of reuse of text, due to factors such as publication of the same article in different regions on different days. Cases of plagiarism – if there were any – were hidden by the great volume of other material. Moreover, the method did not scale well. In 2003 we returned to work on this problem, inspired by issues in management of large corporate document repositories, where it is common for documents such as policies and manuals to be present many times in multiple official versions, and for authors to have their own inconsistent versions. These documents represent corporate memory, yet management of them in practice may be highly chaotic; duplicate detection is a plausible method of helping to bring order to such collections. We refined our original sifting method and proposed metrics for measuring the degree of duplication between two documents. Using the TREC .gov crawls, we found, disturbingly, that our metric for measuring duplication led to a smooth, undifferentiated range of scores: there was no obvious threshold that separated duplicates from non-duplicates. We had naïvely assumed that pairs would either be largely copied, with say 70% of their material in common, or largely different, with say no more than 20% in common. This assumption was entirely wrong. And again, we failed to find the kinds of duplicates we were seeking. Amongst the millions of documents there were millions of pairs (a collection of a million documents contains half a trillion potential pairs) with a reasonable amount of text in common. The diversity of kinds of duplication, rather than algorithmic issues, was the main obstacle to success. For web data, potential sources of duplicates include: – Mirrors. – Crawl artifacts, such as the same text with a different date or a different advertisement, available through multiple URLs. – Versions created for different delivery mechanisms, such as HTML and PDF. – Annotated and unannotated copies of the same document. – Policies and procedures for the same purpose in different legislatures. – Syndicated news articles delivered in different venues. – “Boilerplate” text such as licence agreements or disclaimers. – Shared context such as summaries of other material or lists of links. – Revisions and versions. – Reuse and republication of text (legitimate and otherwise). At the same time as our original work, fingerprinting methods for duplicate detection were being developed by other groups (Manber, 1994, Brin et al., 1995, Heintze, 1996, Broder, 1997, Chowdhury et al., 2002, Fetterly et al., 2003). Several groups developed methods that broadly have the same behaviour. Some phrases are heuristically selected from each document and are hashed separately or combined to give representative keys. Two documents that share a key (or a small number of keys) are deemed to be duplicates. As most phrases are neglected, the process is lossy, but it is relatively easy to scale and is sufficient to detect pairs of documents that share most of their text.
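To make the counting pass concrete, here is a minimal sketch in Python of the 2-bit saturating-counter idea described above. The table size, phrase length, and hash function are illustrative choices of ours, not the authors' implementation; the only property preserved is that hash collisions can cause false positives but never false negatives.

    import hashlib
    from itertools import islice

    NUM_SLOTS = 1 << 20                    # illustrative; the text suggests ~10^9 slots
    table = bytearray(NUM_SLOTS // 4)      # 2 bits per slot, four slots packed per byte

    def slot_of(phrase):
        digest = hashlib.md5(" ".join(phrase).encode()).digest()
        return int.from_bytes(digest[:8], "big") % NUM_SLOTS

    def get(slot):
        return (table[slot // 4] >> ((slot % 4) * 2)) & 0b11

    def bump(slot):
        v = get(slot)
        if v < 3:                          # saturating counter: 0, 1, 2, "3 or more"
            table[slot // 4] &= ~(0b11 << ((slot % 4) * 2)) & 0xFF
            table[slot // 4] |= (v + 1) << ((slot % 4) * 2)

    def phrases(words, p):
        return zip(*(islice(words, i, None) for i in range(p)))

    def count_pass(docs, p=8):
        # First pass: bump a counter for every p-word phrase in every document.
        for words in docs:
            for ph in phrases(words, p):
                bump(slot_of(ph))

    def repeated_phrases(words, p=8):
        # Keep phrases whose counter reached at least 2; collisions may introduce
        # false positives, but a phrase that genuinely repeats is never missed.
        return [ph for ph in phrases(words, p) if get(slot_of(ph)) >= 2]

    docs = ["the quick brown fox jumps over the lazy dog again today".split(),
            "we saw the quick brown fox jumps over the lazy dog yesterday".split()]
    count_pass(docs, p=5)
    print(repeated_phrases(docs[0], p=5))  # the shared five-word phrases survive

In the sifting method proper, this pass is repeated for increasing phrase lengths, discarding phrases whose shorter components did not repeat.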

Our sifting method can be seen as lossless but costly fingerprinting, and it is an easy step to regard the work as comparable. But closer inspection of the past work reveals that the groups were all working on different problems. – Manber (1994) used fingerprints to find similar files on a filesystem. Datasets used were compilations of online documentation such as README files. Documents were distinguished as being “similar” if the proportion of identical fingerprints between the documents exceeded a threshold, for example 50%. Manber reports the number of clusters of “similar” documents found by his technique, but does not report on any investigation of the nature of the similarities found by the system. – Brin et al. (1995) investigated fingerprinting in the context of copyright protection in digital libraries. The dataset used for experimentation was a small collection of academic articles. These articles were manually grouped into “related” documents and the scores between these were compared to the scores between unrelated documents. The conclusion was that there was a large difference between the scores. – Heintze (1996) investigated the characteristics of different fingerprinting schemes. The dataset was a small collection of technical reports. The experiments compare various fingerprint selection schemes with full fingerprinting, in which every fingerprint is stored. The findings are that sensitivity of the algorithm is not heavily affected by increasing the selectivity of fingerprint selection. – Broder (1997) used fingerprinting to find documents that are “roughly the same”, based on resemblance and containment, defined by a count of the volume of text two documents share. The motivation is management of web data. The dataset was a large crawl of web documents. Results focused on the runtime of the algorithm, with a brief mention of the number of identical and “similar” documents found. – Chowdhury et al. (2002) identify documents that are identical after removal of common terms. The motivation is improving search-engine performance. The datasets used are a set of web documents from the Excite@Home crawl thought to have duplicates within the collection, a subset of the TREC LATimes collection with known duplicates seeded into the collection, TREC disks 4 and 5, and WT2G. Synthetic “duplicates” were created by permuting existing documents. Success was measured by the proportion of known duplicates discovered by various methods. – Fetterly et al. (2003) used a variant of fingerprinting known as super-shingling to analyze large web collections for “near-duplicates” with a 90% likelihood of two fingerprints matching between documents that are 95% similar. Similarity is defined by whether the fingerprints match. The motivation is improved crawling. The results were that they found large numbers of clusters of documents that shared fingerprints. – Our work (Bernstein and Zobel, 2004) concerned detection of co-derivative documents, that is, documents that were derived from each other or from some other document. We used a test collection composed of documentation from distributions of RedHat Linux, and verified the detected duplicates for a sample of query documents. Measures were analogous to recall and precision. Experimental findings were that our technique was reasonably accurate at finding co-derived documents. There are good reasons to want to identify duplicates. They may represent redundant information; intuitively, there seems no reason to store the same information multiple

times, and it is rarely helpful to have multiple copies of a document in an answer list. Elimination of duplicates may have benefits for efficiency at search time. In a web collection, the presence of duplicates can indicate a crawler failure. Knowledge of duplication can be used for version management or file system management, and can plausibly be used to help identify where an item of information originated (Metzler et al., 2005). And copies of information may be illegitimate. However, in much of the prior work in the area, the different kinds of duplication, and the different ways in which knowledge of duplication might be used, were jumbled together. There was no consideration of whether the information about duplicates could be used to solve a practical problem and, fundamentally, in none of these papers was there a qualitative definition of what a duplicate was. Without such a definition, it is not clear how the performance of these systems might be measured, or how we could evaluate whether they were doing useful work. Over the next few sections we explore the difficulties of measurement in the context of research, then return to the question of duplicate detection.

3 Research and Measurement Successful research leads to change in the practice or beliefs of others. We persuade people to use a new algorithm, or show that an existing belief is wrong, or show how new results might be achieved, or demonstrate that a particular approach is effective in practice. That is, research is valuable if the results have impact and predictive power. Research is typically pursued for subjective or context-dependent reasons – for example, we find the topic interesting or look into it because we have funding for investigation of a certain problem. However, research outcomes are expected to be objective, that is, free from the biases and opinions of the researcher doing the work. If a hypothesis is objectively shown to be false, then it is false, no matter how widely it is believed or how true it had seemed to be; and, if there is solid evidence to support a hypothesis, then probably it should be believed, even if it seems to contradict intuition. That is, we say the hypothesis is confirmed, meaning that the strength of belief in the hypothesis is increased. For research to be robust and to have high impact, three key elements must be present. First, the hypothesis being investigated must be interesting – that is, if it is confirmed, then it will alter the practice and research of others. Second, there must be a convincing way of measuring the outcomes of the research investigation. Third, according to this measure the hypothesis should be confirmed. In this paper, we call the thing being measured a system and the measure a yardstick. Examples of systems include a search engine, a sorting algorithm, and a web crawler; these are bodies of code that have identifiable inputs and are expected to produce output meeting certain criteria. Examples of yardsticks include computation time on some task, number of relevant documents retrieved, and time for a human to complete a task using a system. Without measurement, there are no research outcomes. Nothing is learnt until a measurement is taken. The onus is on the researcher to use solid evidence to persuade a skeptical reader that the results are sound; how convincing the results are will partly depend on how they are measured. “A major difference between a ‘well-developed’ sci-

ence such as physics and some of the less ‘well-developed’ sciences such as psychology or sociology is the degree to which things are measured” (Roberts, 1979, page 1). How a system is measured is a choice made by the researcher. It is a subjective choice, dictated by the expected task for which the system will be used or the expected context of the system. For example, will the system be run on a supercomputer or a palmtop? Will the user be a child or an expert? Will the data to be searched be web pages or textbooks? There is no authority that determines what the yardstick for any system should be. For measurement of a research outcome such as an interface, this observation is obvious; what may be less obvious is that the observation also applies to algorithmic research. Consider empirical measurement of the efficiency of some algorithm whose properties are well understood, such as a method for sorting integers. The efficiency of an algorithm is an absolutely fundamental computer science question, but there are many different ways to measure it. We have to choose test data and specify its properties. We then have to make assumptions about the environment, such as the volume of data in relation to cache and memory and the relative costs of disk, network, processor, and memory type. There is no absolute reference that determines what is a reasonable “typical” amount of buffer memory for a disk-based algorithm should be, or whether an algorithm that uses two megabytes of memory to access a gigabyte of disk is in any meaningful way superior to one that is faster but uses three megabytes of memory. Complexity, or asymptotic cost, is widely accepted as a measurement of algorithmic behaviour. Complexity can provide a clear reason to choose one algorithm over another, but it has significant limitations as a yardstick. To begin with, “theoretical results cannot tell the full story about real-world algorithmic performance” (Johnson, 2002). For example, the notional cost of search of a B-tree of n items is O(log n), but in practice the cost is dominated by the effort of retrieval of a single leaf node from disk. A particular concern from the perspective of measurement is that complexity analysis is based on subjective decisions, because it relies on assumptions about machine behaviour and data. Worst cases may be absurd in practice; there may be assumptions such as that all memory accesses are of equal cost; and average cases are often based on simplistic models of data distributions. Such issues arise in even elementary algorithms. In an in-memory chained hash table, for example, implemented on a 2005 desktop computer, increasing the number of slots decreases the per-slot load factor – but can increase the per-key access time for practical data volumes (Askitis and Zobel, 2005). While a complexity analysis can provide insight into behaviour, such as in comparison of radixsort to primitive methods such as bubblesort, it does not follow that such analysis is always sufficient. First, “only experiments test theories” (Tichy, 1998). Second, analysis is based on assumptions as subjective as those of an experiment; it may provide no useful estimate of cost in practice; and it is not the answer to the problem of the subjectivity of measurement. Philosophical issues such as paradoxes of measurement are not merely academic concerns, but are significant practical problems in design of research projects. 
We need to find a basis for justification of our claims about research outcomes, to guide our work and to yield results that are supported by plausible, robust evidence.


4 Choosing a Yardstick Identification of what to measure is a key step in development of an idea into a concrete research project. In applied science, the ultimate aim is to demonstrate that a proposal has utility. The two key questions are, thus, what aspect of utility to measure and how to measure it. We propose that principles for choice of a process of measurement – that is, choice of yardstick – be based on the concept of a warrant. Booth et al. (1995) define a warrant as an assumption that allows a particular kind of evidence to be used to support a particular class of hypothesis. An example from Booth et al. is: Hypothesis. It rained last night. Evidence. The streets are wet this morning. This argument may seem self-supporting and self-evident. However, the argument relies on an implied warrant: that the most likely cause of wet streets is rain. Without the warrant, there is nothing to link the evidence to the hypothesis. Crucially, there is nothing within either the hypothesis or the evidence that is able to justify the choice of warrant; the warrant is an assertion that is external to the system under examination. The fact that the warrants under which an experiment are conducted are axiomatic can lead to a kind of scientific pessimism, in which results have no authority because they are built on arbitrary foundations. With no criteria for choosing amongst warrants, we are in the position of the philosopher who concludes that all truths are equally likely, and thus that nothing can be learnt. However, clearly this is unhelpful: some warrants do have more merit than others. The issue then becomes identification of the properties a good set of warrants should have. The answer to the question “what should we measure?” we refer to as the qualitative warrant, and the answer to the question “how should we measure it?” we refer to as the quantitative warrant, that is, the yardstick. These assertions are what links the measurement to the goal of demonstrating utility. We propose a set (not necessarily exhaustive) of four properties that are satisfied by a good qualitative warrant, and of three properties that are satisfied by a good yardstick: – Applicability. A qualitative warrant should reflect the task or problem the system is designed to address. For example, it would (usually) be uninteresting to measure a user interface based on the number of system calls required to render it. – Power. The power of a qualitative warrant is the degree to which it makes a meaningful assertion about utility. Intuitively, a qualitative warrant is not powerful if its negation results in a new warrant that seems equally reasonable. For example, the warrant “a system is useful if it discards documents that are of uncertain relevance” is not powerful, because its negation, “a system is useful if it retains documents that are of uncertain relevance”, also seems reasonable. In contrast, the warrant “an algorithm is useful if it can sort integers faster than any known algorithm” is powerful because its negation, “an algorithm is useful if it cannot sort integers faster than other algorithms”, is absurd. – Specificity. Evaluation of a system cannot be meaningful if we are not specific about what we are trying to measure. An example is a warrant such as “a system is useful if it allows users quick access to commonly-used functions”. While at first

glance this may seem reasonable, the question of which functions are commonly used is likely to depend on the task and the kind of user. – Richness. The utility of many systems depends on more than just one dimension of performance. For example, we would like an information retrieval system to be both fast and effective. The speed of an IR system can be the basis of a qualitative warrant that is both applicable and powerful; however, it misses a key aspect of IR system performance. Hence, we say that the warrant lacks richness. The quantitative warrant is effectively dictated by the choice of yardstick used to measure the system. A good yardstick should have the following properties: – Independence. A yardstick needs to be independent of the solution; it should not rely in a circular way on the system being measured, but should instead be defined in terms of some properties that would still be meaningful even if the system did not exist. If we claim that a method is useful because it finds grue documents, and that documents are grue if they are found by the method, then the “grueness” yardstick is meaningless. Ethical issues are also relevant; a researcher should not, for example, choose a yardstick solely on the basis that it favours a particular system. – Fidelity. Because the yardstick is used to quantify the utility of the system under investigation, there needs to be fidelity, that is, a strong correspondence between the outcome as determined by the yardstick and the utility criterion it is attempting to quantify. Many yardsticks must reduce a complex process to a simple quantifiable model, that is, “most representations in a scientific context result in some reduction of the original structure” (Suppes et al., 1994). Success by a yardstick lacking fidelity will not be meaningful outside the context of the research. – Repeatability. We expect research results to be predictive, and in particular that repeating an experiment will lead to the same outcomes. The outcomes may vary in detail (consider a user experiment, or variations in performance due to machines and implementation) but the broad picture should be the same. Thus the yardstick should measure the system, not other factors that are external to the work.
The yardstick “makes use of a wider range of instructions” is independent and repeatable, but entirely lacks fidelity: measures by this yardstick will bear little correspondence to utility as defined by our qualitative warrant.


Some potential criteria that could be used to justify a yardstick are fallacies or irrelevancies that do not stand scrutiny. For example, the fact that a property is easy to measure does not make the measure a good choice. A yardstick that has been used for another task may well be applicable, but the fact that it has been used for another task carries little weight by itself; the rationale that led to it being used for that task may be relevant, however. Even the fact that a yardstick has previously been used for the same task may carry little weight – we need to be persuaded that the yardstick was well chosen in the first place. An underlying issue is that typically yardsticks are abstractions of semantic properties that are inherently not available by symbolic reasoning. When a survey is used to measure human behaviour, for example, a complex range of real-world properties is reduced to numerical scores. Confusion over whether processes are “semantic” is a failing of a range of research activities. Symbolic reasoning processes cannot be semantic; only abstract representations of real-world properties – not the properties themselves, in which the meaning resides – are available to computers. Note too that, as computer scientists, we do not write code merely to produce software, but to create a system that can be measured, and that can be shown to possess a level of utility according to some criterion. If the principal concern is efficiency, then the code needs to be written with great care, in an appropriate language; if the concern is whether the task is feasible, a rapid prototype may be a better choice; if only one component of a system is to be measured, the others may not need to be implemented at all. Choice of a yardstick determines which aspects of the system are of interest and thus need to be implemented.

5 Measurement in Information Retrieval In algorithmic research, the qualitative warrants are fairly straightforward, typically concerning concrete properties such as speed, throughput, and correctness. Such warrants can be justified – although usually the justification is implicit – by reference to goals such as reducing costs. Yardsticks for such criteria are usually straightforward, as the qualitative warrants are inherently quantifiable properties. In IR, the qualitative warrant concerns the quality of the user search experience, often in terms of the cost to the user of resolving an information need. Yardsticks are typically based on the abstractions precision and recall. The qualitative warrant satisfies the criteria of applicability, power, and richness. Furthermore, the IR yardsticks typically demonstrate independence and repeatability. However, the qualitative warrant is not sufficiently specific. It is difficult to model user behaviour when it has not been specified what sort of user is being modelled, and what sort of task they are supposed to be performing. For example, a casual web searcher does not search in the same way as a legal researcher hoping to find relevant precedents for an important case. Even if the qualitative warrant were made more specific, for example by restricting the domain to ad-hoc web search, the fidelity of many of the current yardsticks can be brought into question. Search is a complex cognitive process, and many factors influence the degree of satisfaction a user has with their search experience; many of these factors are simplified or ignored in order to yield a yard-

stick that can be tractably evaluated. It is not necessarily the case that the user will be most satisfied with a search that simply presents them with the greatest concentration of relevant documents. To the credit of the IR research community, measurement of effectiveness has been the subject of ongoing debate; in some other research areas, the issue of measurement is never considered. In particular, user studies have found some degree of correlation between these measures and the ease with which users can complete an IR task (Allan et al., 2005), thus demonstrating that – despite the concerns raised above – the yardsticks have at least limited fidelity and research outcomes are not entirely irrelevant. Yardsticks drive the direction of research; for example, the aim of a great deal of IR research is to improve recall and precision. To the extent that a yardstick represents community agreement on what outcome is desirable, letting research follow a yardstick is not necessarily a bad thing. However, if the divergence between yardsticks and the fundamental aim of the research is too great, then research can be driven in a direction that is not sensible. We need to be confident that our yardsticks are meaningful in the world external to the research.
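Because precision, recall, and (later in this paper) mean average precision carry much of the argument, a small worked example may be useful. The definitions below are the standard ones; the toy ranking and relevance judgements are hypothetical.

    def precision_recall(retrieved, relevant):
        # Standard set-based definitions.
        hits = len(set(retrieved) & set(relevant))
        return hits / len(retrieved), hits / len(relevant)

    def average_precision(ranked, relevant):
        # Mean of the precision values at each rank where a relevant document
        # appears; MAP is this quantity averaged over a set of queries.
        hits, total = 0, 0.0
        for rank, doc in enumerate(ranked, start=1):
            if doc in relevant:
                hits += 1
                total += hits / rank
        return total / len(relevant) if relevant else 0.0

    ranked = ["d1", "d2", "d3", "d4", "d5"]     # hypothetical result list for one query
    relevant = {"d2", "d5"}
    print(precision_recall(ranked, relevant))   # (0.4, 1.0)
    print(average_precision(ranked, relevant))  # (1/2 + 2/5) / 2 = 0.45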

6 Measurement of Duplicate Discovery Methods In some of the work on duplicate discovery discussed earlier, the qualitative warrant is defined as (we paraphrase) “system A is useful if it is able to efficiently identify duplicates or near-duplicates”. However, anything that is found by the algorithms is deemed to be a duplicate. Such a yardstick clearly fails the criteria set out earlier. It is not independent, powerful, or rich. It provides no guidance for future work, or any information as to whether the methods are valuable in practice. The question of whether these methods are successful depends on the definition of “duplicate”. When the same page is crawled twice, identical but for a date, there are still contexts in which the two versions are not duplicates – sometimes, for example, the dates over which a document existed are of interest. Indeed, almost any aspect of a document is a reasonable target of a user’s interest. It is arguable whether two documents are duplicates if the name of the author has changed, or if the URL is different. Are a pair “the same” if one byte is changed? Two bytes? That is, there is no one obvious criterion for determining duplication. Again, we argue that the warrant is not specific enough. A pair of documents that are duplicates in the context of, say, topic search may not be duplicates in the context of, say, establishing which version is most up-to-date. As in IR, there need to be clear criteria against which the assessment is made, in which the concept of task or utility is implicitly or explicitly present. For duplication, one context in which task can be defined is that of search. Consider some of the ways in which a document might address an information need: – As a source of new information. – As confirmation of existing knowledge. – As a means of identifying the author, date, or context. – As a demonstration of whether the information is from a reputable provider.

That is, a pair of documents can only be judged as duplicates in the context of the use that is being made of them.


To establish whether our SPEX method for duplicate discovery was effective, we explored several search-oriented varieties of duplication, using human experiments to calibrate SPEX scores against human judgements (Bernstein and Zobel, 2005). The first kind of duplication was retrieval equivalence: a pair of documents is retrieval equivalent if they appear identical to a typical search engine. This definition can be validated mechanically, by parsing the documents according to typical search engine rules to yield a canonical form. A pair is deemed to be retrieval equivalent if their canonical forms are bytewise identical. However, even retrieval equivalent documents may not be duplicates in the context of some search tasks. Two mirrors may hold identical documents, but we may trust one mirror more than another; removal of either document from an index would be a mistake. Knowledge of duplication can affect how such answers are presented, but does not mean that they can be eliminated. The second kind of duplication we considered was content equivalence. In an initial experiment, we identified document pairs where SPEX had returned a high match score, and asked test subjects to assess the pairs against criteria such as “effectively duplicated”. However, our subjects differed widely in their interpretation of this criterion. For some, a minor element such as date was held to indicate a substantive difference; for others it was irrelevant. We therefore refined these criteria, to statements such as “differences between the documents are trivial and do not differentiate them with respect to any reasonable query” and “with respect to any query for which both documents may be returned by a plausible search, the documents are equivalent; any query for which the documents are not equivalent would only return one or the other”. We called this new criterion conditional equivalence. We could define our warrants for this task as follows: Qualitative warrant. The SPEX system is useful if it accurately identifies pairs of documents that can be considered to be duplicates in a web search context. Quantitative warrant. The extent to which pairs of documents identified by a system are judged by a human to be duplicates in a web search context is a good estimator of whether the system accurately identifies duplicates. Superficially, retrieval and content equivalence, and the sub-classes of content equivalence, may seem similar to each other, but in a good fraction of cases documents that were duplicates under one criterion were not duplicates under another. An immediate lesson is that investigation of duplicate discovery that is not based on a clear definition of task is meaningless. A more positive lesson is that these definitions provide a good yardstick; they meet all of the criteria listed earlier. Using these yardsticks, we observed that there was a clear correlation between SPEX scores and whether a user would judge the documents to be duplicated. This meant that we could use SPEX to measure the level of duplication – from the perspective of search! – in test collections. Our experiments used the GOV1 and GOV2 collections, two crawls of the .gov domain created for TREC. GOV1 is a partial crawl of .gov from 2002, with 1,247,753 documents occupying 18 gigabytes. GOV2 is a much more complete crawl of .gov from 2004, with 25,205,179 documents occupying 426 gigabytes. On the GOV1 collection, we found that 99,227 documents were in 22,870 retrievalequivalent clusters. 
We found a further 116,087 documents that participated in content-equivalence relationships, and that the change in definition from content-equivalence

to conditional equivalence led to large variations in the numbers of detected duplicates. On the GOV2 collection, we found a total of 6,943,000 documents in 865,362 retrieval-equivalent clusters – more than 25% of the entire collection. (Note that, prior to distribution of the data, 2,950,950 documents were removed after being identified as duplicates by MD5.) Though we were unable to scan the entire GOV2 collection for content-equivalence, we believe that a similar proportion again is content-equivalent, as was the case for the GOV1 collection. These results indicate that there are many pairs of documents within these collections that are mutually redundant from a user perspective: if a user were to see one document in relation to a particular query, there may be several other documents that would no longer be of interest to them. This observation provides empirical support to the questioning of the notion of independent relevance. The results suggest that the volume of retrieval- and content-equivalent documents in the collection may be so great that the assumption of independent relevance is significantly affecting the fidelity of the IR yardsticks. To investigate this further, we experimented with the runs submitted for the TREC 2004 terabyte track, consisting of result sets for 50 queries on the GOV2 collection. In our first experiment, we modified the query relevance assessments so that a document returned for a particular query on a particular system would be marked as not relevant if a document with which it was content-equivalent appeared earlier in the result list. This partially models the notion of relevance as dependent on previously seen documents. The result was significant: under this assumption, the MAP of the runs in the top three quartiles of submission dropped by a relative 20.2% from 0.201 to 0.161. Interestingly, the drop in MAP was greater for the more successful runs than for the less successful runs. While ordering between runs was generally preserved, it seems that the highestscoring runs were magnifying their success by retrieving multiple copies of the same relevant document, an operation that we argue does nothing to improve the user search experience in most cases. These experiments allowed us to observe the power that measurement and yardsticks have in influencing the direction of research. Consider two examples. The first example is that, in defining an appropriate measure of the success of our system, we were forced to re-evaluate and ultimately redefine our task. We had originally intended to simply measure the occurrence in collections of documents that were content-equivalent with a view to removing them from the collection. Our user experiments quickly showed us that this approach was unrealistic: even minor differences between documents had the potential to be significant in certain circumstances. The concept of conditional equivalence, in which documents were equivalent with respect to a query, proved to be far more successful. This meant that it was unsuitable to simply remove documents from the collection; rather, duplicate removal was much better performed as a postprocessing step on result lists. This lesson, learnt in the process of defining a yardstick, has practical effects on the way in which duplication should be managed in search engines. The second example concerns the fidelity of measures based on the assumption of independence of relevance. 
We have shown that, based on user experiments, our software can reliably identify pairs of documents that are conditionally equivalent, and that

lifting the general assumption of independent relevance can have a significant impact on the reported effectiveness of real search systems. Furthermore, postprocessing result lists in order to remove such equivalent documents, while significantly increasing MAP from the lower figure, failed to restore the MAP of runs to its original level. The consequence of this is that the current TREC assessment regime discourages the removal of duplicate documents from result lists. This demonstrates the power of yardsticks, and the dangers if they are poorly chosen. Because yardsticks are the measured outcomes of research, it is natural for research communities to have as their goal improvement in performance according to commonly accepted yardsticks. Given an insufficiently faithful yardstick it is likely, or perhaps inevitable, that the research activity may diverge from the practical goals that the research community had originally intended to service.
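The re-assessment rule used in the first experiment above can be stated compactly. The sketch below is our Python illustration of that rule, not the evaluation code used for the TREC runs; the equivalence classes and relevance judgements are hypothetical.

    def adjust_relevance(ranked_docs, relevant, equiv_class):
        # A retrieved document is marked not relevant if a document from the same
        # content-equivalence class appeared earlier in the result list.
        seen_classes = set()
        adjusted = []
        for doc in ranked_docs:
            cls = equiv_class.get(doc, doc)    # documents default to a singleton class
            adjusted.append((doc, doc in relevant and cls not in seen_classes))
            seen_classes.add(cls)
        return adjusted

    # Hypothetical run: d1 and d3 are content-equivalent copies of one relevant page.
    equiv_class = {"d1": "page-A", "d3": "page-A"}
    relevant = {"d1", "d3", "d7"}
    print(adjust_relevance(["d1", "d2", "d3", "d7"], relevant, equiv_class))
    # [('d1', True), ('d2', False), ('d3', False), ('d7', True)]

MAP is then recomputed over the adjusted judgements, which is the kind of recalculation behind the relative 20.2% drop reported above.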

7 Conclusions Careful consideration of how outcomes are to be measured is a critical component of high-quality research. No researcher, one presumes, would pursue a project with the expectation that it will have little impact, yet much research is unpersuasive and for that reason is likely to be ignored. Each paper needs a robust argument to demonstrate that the claims are confirmed. Such argument rests on evidence, and, in the case of experimental research, the evidence depends on a system of measurement. We have proposed seven criteria that should be considered when deciding how research outcomes should be measured. These criteria – applicability, power, specificity, richness, independence, fidelity, and repeatability – can be used to examine yardsticks used for measurement. As we have argued in the case of IR research, even widely accepted yardsticks can be unsatisfactory. In the case of the duplicate documents, our examination of the problems of measurement reveals one plausible reason why some prior work has had little impact: the yardsticks are poor or absent, and consequently the work is not well founded. We applied the criteria to evaluation of our new yardsticks for duplicate detection, and found that the concept of “duplicate” is surprisingly hard to define, and in the absence of a task is not meaningful. Almost every paper on duplication concerns a different variant and our user studies show that slightly different definitions of “duplicate” lead to very different results. Duplicates can be found, but there is no obvious way to find specific kinds of duplicates – previous work was typically motivated by one kind of duplication but measured on all kinds of duplication. Our examination of yardsticks not only suggests future directions for research on duplicate detection, but more broadly suggests processes that researchers should follow in design of research projects. Acknowledgements. This work was supported by the Australian Research Council.

References Allan, J., Carterette, B. and Lewis, J. (2005), When will information retrieval be “good enough”?, in “Proc. ACM-SIGIR Ann. Int. Conf. on Research and Development in Information Retrieval”, ACM Press, New York, NY, USA, pp. 433–440.


Askitis, N. and Zobel, J. (2005), Cache-conscious collision resolution in string hash tables, in “Proc. String Processing and Information Retrieval Symposium (SPIRE)”. To appear.
Bernstein, Y. and Zobel, J. (2004), A scalable system for identifying co-derivative documents, in A. Apostolico and M. Melucci, eds, “Proc. String Processing and Information Retrieval Symposium (SPIRE)”, Springer, Padova, Italy, pp. 55–67.
Bernstein, Y. and Zobel, J. (2005), Redundant documents and search effectiveness, in “Proc. ACM Ann. Int. Conf. on Information and Knowledge Management (CIKM)”. To appear.
Booth, W. C., Colomb, G. G. and Williams, J. M. (1995), The Craft of Research, U. Chicago Press.
Brin, S., Davis, J. and García-Molina, H. (1995), Copy detection mechanisms for digital documents, in M. Carey and D. Schneider, eds, “Proc. ACM-SIGMOD Ann. Int. Conf. on Management of Data”, ACM Press, San Jose, California, United States, pp. 398–409.
Broder, A. Z. (1997), On the resemblance and containment of documents, in “Compression and Complexity of Sequences (SEQUENCES’97)”, IEEE Computer Society Press, Positano, Italy, pp. 21–29.
Chowdhury, A., Frieder, O., Grossman, D. and McCabe, M. C. (2002), “Collection statistics for fast duplicate document detection”, ACM Transactions on Information Systems (TOIS) 20(2), 171–191.
Fetterly, D., Manasse, M. and Najork, M. (2003), On the evolution of clusters of near-duplicate web pages, in R. Baeza-Yates, ed., “Proc. 1st Latin American Web Congress”, IEEE, Santiago, Chile, pp. 37–45.
Heintze, N. (1996), Scalable document fingerprinting, in “1996 USENIX Workshop on Electronic Commerce”, Oakland, California, USA, pp. 191–200.
Johnson, D. S. (2002), A theoretician’s guide to the experimental analysis of algorithms, in M. Goldwasser, D. S. Johnson and C. C. McGeoch, eds, “Proceedings of the 5th and 6th DIMACS Implementation Challenges”, American Mathematical Society, Providence.
Manber, U. (1994), Finding similar files in a large file system, in “Proc. USENIX Winter 1994 Technical Conference”, San Francisco, CA, USA, pp. 1–10.
Metzler, D., Bernstein, Y., Croft, W. B., Moffat, A. and Zobel, J. (2005), Similarity measures for tracking information flow, in “Proc. ACM Ann. Int. Conf. on Information and Knowledge Management (CIKM)”. To appear.
Moffat, A. and Zobel, J. (2004), What does it mean to ‘measure performance’?, in X. Zhou, S. Su, M. P. Papazoglou, M. E. Orlowska and K. Jeffrey, eds, “Proc. International Conference on Web Information Systems”, Springer, Brisbane, Australia, pp. 1–12. Published as LNCS 3306.
Roberts, F. S. (1979), Measurement Theory, Addison-Wesley.
Suppes, P., Pavel, M. and Falmagne, J.-C. (1994), “Representations and models in psychology”, Annual Review of Psychology 45, 517–544.
Tichy, W. F. (1998), “Should computer scientists experiment more?”, IEEE Computer 31(5), 32–40.

An Effective System for Mining Web Log Zhenglu Yang, Yitong Wang, and Masaru Kitsuregawa Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-Ku, Tokyo 153-8305, Japan {yangzl, ytwang, kitsure}@tkl.iis.u-tokyo.ac.jp

Abstract. The WWW provides a simple yet effective medium for users to search, browse, and retrieve information on the Web. Web log mining is a promising tool to study user behaviors, which could further benefit web-site designers with better organization and services. Although there are many existing systems that can be used to analyze the traversal paths of web-site visitors, their performance is still far from satisfactory. In this paper, we propose an effective Web log mining system that consists of data preprocessing, sequential pattern mining, and visualization. In particular, we propose an efficient sequential mining algorithm (LAPIN WEB: LAst Position INduction for WEB log), an extension of our previous LAPIN algorithm, to extract user access patterns from traversal paths in Web logs. Our experimental results and performance studies demonstrate that LAPIN WEB is very efficient and outperforms the well-known PrefixSpan by up to an order of magnitude on real Web log datasets. Moreover, we also implement a visualization tool to help interpret mining results as well as predict users’ future requests.

1 Introduction The World Wide Web has become one of the most important media to store, share and distribute information. At present, Google is indexing more than 8 billion Web pages [1]. The rapid expansion of the Web has provided a great opportunity to study user and system behavior by exploring Web access logs. Web mining, which discovers and extracts interesting knowledge/patterns from the Web, can be classified into three types based on the data on which mining is performed: Web Structure Mining, which focuses on hyperlink structure; Web Contents Mining, which focuses on page contents; and Web Usage Mining, which focuses on Web logs. In this paper, we are concerned with Web Usage Mining (WUM), which is also named Web log mining. The process of WUM includes three phases: data preprocessing, pattern discovery, and pattern analysis [14]. During the preprocessing phase, raw Web logs need to be cleaned, analyzed and converted before further pattern mining. The data recorded in server logs, such as the user IP address, browser, and viewing time, are available to identify users and sessions. However, because some page views may be cached by the user browser or by a proxy server, we should know that the data collected by server logs are not entirely reliable. This problem can be partly solved by using some other kinds of usage information such as cookies. After each user has been identified, the entry for each user must be divided into sessions. A timeout is often used to break the entry into sessions. The following are some preprocessing tasks [14]: (a) Data Cleaning: The server log is

examined to remove irrelevant items. (b) User Identification: To identify different users by overcoming the difficulty produced by the presence of proxy servers and caches. (c) Session Identification: The page accesses must be divided into individual sessions according to different Web users. The second phase of WUM is pattern mining, and research in data mining, machine learning, and statistics is mainly focused on this phase. As for pattern mining, it could be: (a) statistical analysis, used to obtain useful statistical information such as the most frequently accessed pages; (b) association rule mining [12], used to find references to a set of pages that are accessed together with a support value exceeding some specified threshold; (c) sequential pattern mining [13], used to discover frequent sequential patterns, which are lists of Web pages ordered by viewing time, for predicting visit patterns; (d) clustering, used to group together users with similar characteristics; (e) classification, used to group users into predefined classes based on their characteristics. In this paper, we focus on sequential pattern mining for finding interesting patterns based on Web logs. Sequential pattern mining, which extracts frequent subsequences from a sequence database, has attracted a great deal of interest during the recent surge in data mining research because it is the basis of many applications, such as Web user analysis, stock trend prediction, and DNA sequence analysis. Much work has been carried out on mining frequent patterns, for example in [13] [16] [10] [7] [4]. However, all of these works suffer from the problems of having a large search space and of being ineffective in handling long patterns. In our previous work [18], we proposed a novel algorithm that greatly reduces the search space. Instead of searching the entire projected database for each item, as PrefixSpan [7] does, we only search a small portion of the database by recording the last position of each item in each sequence (LAPIN: LAst Position INduction). While support counting is usually the most costly step in sequential pattern mining, the proposed LAPIN improves performance significantly by avoiding costly scanning and comparisons. In order to meet the special features of Web data and Web logs, we propose LAPIN WEB by extending our previous work. In the pattern analysis phase, which mainly filters out uninteresting rules, we implement a visualization tool to help interpret mined patterns and predict users’ future requests. Our contributions in this paper can be summarized as follows: 1) we propose an effective Web log mining system that deals with log preprocessing, sequential pattern mining, and result visualization; 2) we propose an efficient sequential pattern mining algorithm by extending previous LAPIN techniques; 3) we implement a visualization tool to interpret mining results and predict users’ future behavior. Experimental results on real datasets demonstrate the effectiveness of the whole system as well as the high performance of the proposed mining algorithm, which outperforms an existing algorithm by up to an order of magnitude. The remainder of this paper is organized as follows. We present related work in Section 2. In Section 3, we introduce our Web log mining system, including the preprocessing, pattern discovery, and pattern analysis parts. Experimental results and performance analysis are reported in Section 4. We conclude the paper and provide suggestions for future work in Section 5.
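To illustrate the last-position idea informally: an item can extend an occurrence of a pattern that ends at position pos of a sequence only if the item's last position in that sequence lies after pos, so a precomputed last-position table can replace a scan of the projected suffix. The sketch below is a simplified Python illustration of this pruning test, with invented data; it is not the LAPIN or LAPIN WEB implementation.

    # Toy session database: each session is a list of page identifiers.
    sessions = [["a", "b", "c", "a", "c"],
                ["b", "a", "c", "b"],
                ["a", "c", "b", "c"]]

    # One pass per sequence builds the last-position table.
    last_pos = []
    for seq in sessions:
        table = {}
        for i, item in enumerate(seq):
            table[item] = i                 # later occurrences overwrite earlier ones
        last_pos.append(table)

    def can_extend(seq_id, item, pos):
        # The prefix occurrence ending at `pos` can grow by `item` iff `item`
        # still occurs strictly after `pos` in this sequence.
        return last_pos[seq_id].get(item, -1) > pos

    def support_of_extension(prefix_positions, item):
        # prefix_positions maps sequence id -> position of the prefix's first match.
        return sum(can_extend(sid, item, pos) for sid, pos in prefix_positions.items())

    # The prefix <a> first matches at positions 0, 1 and 0 of the three sessions.
    prefix_positions = {0: 0, 1: 1, 2: 0}
    print(support_of_extension(prefix_positions, "c"))  # 3: <a, c> occurs in every session
    print(support_of_extension(prefix_positions, "a"))  # 1: only the first session has a later "a"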


2 Related Work Commonly, a mining system includes three parts, as mentioned in Section 1: data preprocessing, pattern discovery, and pattern analysis. In this section, we first introduce some related work in data preprocessing and then focus on pattern mining and pattern analysis. Data preprocessing: Because of proxy servers and Web browser caches, it is difficult to get an accurate picture of web-site accesses. Web browsers store pages that have been visited, and if the same page is requested, the Web browser will directly display the page rather than sending another request to the Web server, which makes the user access stream incomplete. By using the same proxy server, different users leave the same IP address in the server log, which makes user identification rather difficult. [14] presented solutions to these problems by using Cookies or Remote Agents. Moreover, in the same paper, the authors presented several data preparation techniques to identify Web users, i.e., path completion and the use of site topology. To identify user sessions, a fixed time period, say thirty minutes [14] [3], is used as the threshold between two sessions. Sequential pattern mining: Srikant and Agrawal proposed the GSP algorithm [16], which iteratively generates candidate k-sequences from frequent (k-1)-sequences based on the anti-monotone property that all the subsequences of a frequent sequence must be frequent. Zaki proposed SPADE [10] to elucidate frequent sequences using efficient lattice search techniques and simple join operations. SPADE divides the candidate sequences into groups by items, and transforms the original sequence database into a vertical ID-List database format. SPADE counts the support of a candidate k-sequence generated by merging the ID-Lists of any two frequent (k-1)-sequences with the same (k-2)-prefix in each iteration. Ayres et al. [4] proposed the SPAM algorithm, which uses SPADE’s lattice concept, but represents each ID-List as a vertical bitmap. SPADE and SPAM can be grouped as candidate-generation-and-test methods. On the other hand, Pei et al. proposed a pattern growth algorithm, PrefixSpan [7], which adopts a projection strategy to project sequences into different groups called projected databases. The PrefixSpan algorithm recursively generates a projected database for each frequent k-sequence to find the frequent (k+1)-sequences. A comprehensive performance study showed that PrefixSpan, in most cases, outperforms former apriori-based algorithms [8]. However, PrefixSpan still needs to scan large projected databases, and it does not work well for dense datasets, such as DNA sequences, which are a very important application. These observations motivated our work in [18], which proposed an efficient sequential pattern mining algorithm, LAPIN, based on the idea of using the last position of each item to judge whether a k-length pattern can grow to a (k+1)-length pattern. LAPIN improves performance significantly by largely reducing the search space required. In this paper, we propose another pattern mining algorithm that combines the merits of both LAPIN and PrefixSpan to meet the special requirements of Web logs, which are very sparse. Visualization tools: Pitkow et al. proposed WebViz [9] as a tool for Web log analysis, which provides a graphical view of a web-site’s local documents and access patterns. By incorporating the Web-Path paradigm, Web masters can see the documents in their


web-site as well as the hyperlinks travelled by visitors. Spiliopoulou et al. presented the Web Utilization Miner (WUM) [11] as a mining system for the discovery of interesting navigation patterns. One of the most important features of WUM is that, using the WUM mining language MINT, a human expert can dynamically specify the interestingness criteria for navigation patterns. To discover the navigation patterns satisfying the expert criteria, WUM exploits an Aggregation Service that extracts information from the Web access log and retains aggregated statistical information. Hong et al. proposed WebQuilt [5] as a Web logging and visualization system that helps Web design teams run usability tests and analyze the collected data. To overcome many of the problems with server-side and client-side logging, WebQuilt uses a proxy to log the activity. It aggregates logged usage traces and visualizes them in a zooming interface that shows the Web pages viewed. It also shows the most common paths taken through the web-site for a given task, as well as the optimal path for that task.

3 Web Log Mining System

We designed a Web log mining system for sequential pattern mining and its visualization, as shown in Fig. 1. The input of the system is Web log files, and the output is visualized patterns or text reports. As mentioned in Section 1, the whole system includes:

• Data Preprocessing. This is the phase where data are cleaned from noise by overcoming the difficulty of recognizing different users and sessions, in order to be used as input to the next phase of pattern discovery. The data preprocessing phase involves data cleaning, user identification, and session identification.

• Pattern Mining. While various mining algorithms could be incorporated into the system to mine different types of patterns, currently we have only implemented sequential pattern mining on Web log data. We plan to add other parts in future work.

Log File → Preprocess (Data Cleaning, User Identification, Session Identification) → Pattern Discovery (Sequential Pattern Mining) → Pattern Analysis → Text Reports / Visualization

Fig. 1. Web Log Mining System Structure


• Pattern Analysis. In this phase, the mined patterns, which are typically very numerous, need to be evaluated by end users in an easy and interactive way: a text report and a visualization tool.

We will discuss each part in more detail in the following subsections.

3.1 Data Preprocessing

The raw Web log data is usually diverse and incomplete, and difficult to use directly for further pattern mining. In order to process it, we need to:

1) Data Cleaning. In our system, we use server logs in Common Log Format. We examine Web logs and remove irrelevant or redundant items such as image, sound, and video files, which can be downloaded without an explicit user request. Other removed items include HTTP errors, records created by crawlers, etc., which cannot truly reflect users' behavior.

2) User Identification. To identify users, one simple method is to require the users to identify themselves, by logging in before using the web-site or system. Another approach is to use cookies to identify the visitors of a web-site by storing a unique ID. However, these two methods are not general enough because they depend on the application domain and the quality of the source data, so in our system we only provide them as an option; further detail should be implemented according to the application domain. We have implemented a more general method to identify users based on [14], with three criteria: (1) a new IP indicates a new user; (2) the same IP but a different Web browser, or a different operating system, in terms of type and version, means a new user; (3) if the topology of the site is available and a request for a page originates from the same IP address as other already visited pages, and no direct hyperlink exists between these pages, this indicates a new user (optional).

3) Session Identification. Identifying user sessions is also very important because it largely affects the quality of the pattern discovery result. A user session can be defined as a set of pages visited by the same user within the duration of one particular visit to a web-site. According to [2] [6], a set of pages visited by a specific user is considered a single user session if the pages are requested at a time interval not larger than a specified time period. In our system, we set this period to 30 minutes.

3.2 Sequential Pattern Mining

Problem Definition. A Web access sequence, s, is denoted as ⟨i1, i2, . . . , ik⟩, where ij is a page item for 1 ≤ j ≤ k. The number of page items in a Web access sequence is called the length of the sequence. A Web access sequence of length l is called an l-sequence. A sequence, sa = ⟨a1, a2, . . . , an⟩, is contained in another sequence, sb = ⟨b1, b2, . . . , bm⟩, if there exist integers 1 ≤ i1 < i2 < . . . < in ≤ m, such


that a1 = bi1, a2 = bi2, . . . , an = bin. We call sa a subsequence of sb, and sb a supersequence of sa. Given a Web access sequence s = ⟨i1, i2, . . . , il⟩ and a page item α, s ◦ α denotes that s concatenates with α as a Sequence Extension (SE): s ◦ α = ⟨i1, i2, . . . , il, α⟩. If s′ = p ◦ s, then p is a prefix of s′ and s is a suffix of s′. A Web access sequence database, S, is a set of tuples ⟨uid, s⟩, where uid is a user id and s is a Web access sequence. A tuple ⟨uid, s⟩ is said to contain a sequence β if β is a subsequence of s. The support of a sequence β in a sequence database S is the number of tuples in the database containing β, denoted as support(β). Given a user-specified positive integer ε, a sequence β is called a frequent Web access sequential pattern if support(β) ≥ ε.

For sequential pattern mining in the pattern discovery phase, the objective is to find the complete set of Web access sequential patterns of database S in an efficient manner. Let our running database be the sequence database S shown in Table 1 with min_support = 2. We will use this sample database throughout the paper. Here, we propose an efficient sequential pattern mining algorithm to mine Web logs by extending our previous work, LAPIN [18]. Let us first briefly introduce the idea of LAPIN.

LAPIN Algorithm. For any time series database, the last position of an item is the key used to judge whether or not the item can be appended to a given prefix (k-length) sequence (assumed to be s). For example, in a sequence, if the last position of item α is smaller than, or equal to, the position of the last item in s, then item α cannot be appended to s as a (k+1)-length sequence extension in the same sequence.

Example 1. When scanning the database in Table 1 for the first time, we obtain Table 2, which is a list of the last positions of the 1-length frequent sequences in ascending order. Suppose that we have a prefix frequent sequence ⟨a⟩, and its positions in Table 1 are 10:1, 20:6, 30:5, where uid:pid represents the sequence ID and the position ID. Then, we check Table 2 to obtain the first indices whose positions are larger than ⟨a⟩'s, resulting in 10:1, 20:3, 30:2, i.e., (10: b_last = 3, 20: b_last = 7, and 30: d_last = 6). We start from these indices to the end of each sequence, and increment the support of each passed item, resulting in ⟨a⟩:2, ⟨b⟩:2, ⟨c⟩:2, and ⟨d⟩:2, from which we can determine that ⟨aa⟩, ⟨ab⟩, ⟨ac⟩ and ⟨ad⟩ are the frequent patterns.

From the above example, we can see that the main difference between LAPIN and most existing algorithms is the search space. PrefixSpan scans the entire projected database to find the frequent patterns. SPADE temporally joins the entire ID-Lists of the candidates to obtain the frequent patterns of the next layer. LAPIN can obtain the same result by scanning only part of the search space of PrefixSpan and SPADE, which, indeed, are

Table 1. Sequence Database

UID  Sequence
10   acbcdadc
20   bcbcbab
30   dbcbadca


Table 2. Item Last Position List

UID  Last Position of Different Items
10   b_last = 3, a_last = 6, d_last = 7, c_last = 8
20   c_last = 4, a_last = 6, b_last = 7
30   b_last = 4, d_last = 6, c_last = 7, a_last = 8

the last positions of the items. The full justification and more detail about LAPIN can be found in [18].

However, we cannot get the best performance by directly applying LAPIN to Web log mining, because of the different properties of the datasets. Compared with the general transaction data sequences that are commonly used, Web logs have the following characteristics: (a) no two items/pages are accessed at the same time by the same user; (b) they are very sparse, which means that there are a huge number of unique items and little item repetition in one user sequence; (c) user preference should be considered during the mining process. Based on the above points, we extended LAPIN to LAPIN WEB by: (1) dealing only with the Sequence Extension (SE) case, with no Itemset Extension (IE) case; (2) using sequential search instead of binary search; in more detail, LAPIN WEB does not use binary search in the item position list, but uses a pointer+offset sequential search strategy, which is similar to that used in PrefixSpan; (3) incorporating user preference into the mining process to make the final extracted patterns more reasonable.

LAPIN WEB: Design and Implementation. We used a lexicographic tree [4] as the search path of our algorithm. Furthermore, we adopted a lexicographic order, which was defined in the same way as in [17]. This uses the Depth First Search (DFS) strategy. For Web logs, because it is impossible for a user to click two pages at the same time, the Itemset Extension (IE) case of common sequential pattern mining does not exist in Web log mining. Hence, we only deal with the Sequence Extension (SE) case. The pseudo code of LAPIN WEB is shown in Fig. 2. In Step 1, by scanning the DB once, we obtain all the 1-length frequent patterns. Then we sort and construct the SE item-last-position list in ascending order based on each 1-length frequent pattern's last position, as shown in Table 2.

Definition 1 (Prefix border position set). Given two sequences, A = A1A2 . . . Am and B = B1B2 . . . Bn, suppose that there exists C = C1C2 . . . Cl for l ≤ m and l ≤ n, and that C is a common prefix of A and B. We record the positions of the last item Cl in A and B, respectively, e.g., Cl = Ai and Cl = Bj. The position set (i, j) is called the prefix border position set of the common prefix C, denoted as Sc. Furthermore, we denote Sc,i as the prefix border position of sequence i. For example, if A = ⟨abc⟩ and


B = ⟨acde⟩, then we can deduce that one common prefix of these two sequences is ⟨ac⟩, whose prefix border position set is (3, 2), i.e., the positions of the last item c in A and B. In function Gen_Pattern, to find the prefix border position set of the k-length α (Step 3), we first obtain the sequence pointer and offset of the last item of α, and then perform a sequential search in the corresponding sequence for the (k-1)-length prefix border position. This method is similar to the pseudo-projection in PrefixSpan, which is efficient for sparse datasets.

Definition 2 (Local candidate item list). Given two sequences, A = A1A2 . . . Am and B = B1B2 . . . Bn, suppose that there exists C = C1C2 . . . Cl for l ≤ m and l ≤ n, and that C is a common prefix of A and B. Let D = (D1, D2, . . . , Dk) be a list of items, such as those appended to C, such that C′ = C ◦ Dj (1 ≤ j ≤ k) is a common sequence of A and B. The list D is called the local candidate item list of the prefix C′. For example, if A = ⟨abce⟩ and B = ⟨abcde⟩, we can deduce that one common prefix of these two sequences is ⟨ab⟩, and ⟨abc⟩, ⟨abe⟩ are common sequences of A and B. Therefore, the item list (c, e) is called the local candidate item list of the prefixes ⟨abc⟩ and ⟨abe⟩.

Step 4, shown in Fig. 2, is used to find the frequent SE (k+1)-length patterns based on the frequent k-length pattern and the 1-length candidate items. Commonly, support counting is the most time-consuming part of the entire mining process. In [18], we found that the LCI-oriented and Suffix-oriented approaches have their own advantages for different types of datasets.

LAPIN WEB Algorithm:
Input: A sequence database, and the minimum support threshold, ε
Output: The complete set of sequential patterns
Function: Gen_Pattern(α, S, CanIs)
Parameters: α = length-k frequent sequential pattern; S = prefix border position set of the (k-1)-length sequential pattern; CanIs = candidate sequence extension item list of the length-(k+1) sequential pattern
Goal: Generate the (k+1)-length frequent sequential patterns

Main():
1. Scan DB once to do:
   1.1 Bs ← Find the frequent 1-length SE sequences
   1.2 Ls ← Obtain the item-last-position list of the 1-length SE sequences
2. For each frequent SE sequence αs in Bs
   2.1 Call Gen_Pattern(αs, 0, Bs)

Function Gen_Pattern(α, S, CanIs)
3. Sα ← Find the prefix border position set of α based on S
4. FreItems,α ← Obtain the SE item list of α based on CanIs and Sα
5. For each item γs in FreItems,α
   5.1 Combine α and γs as SE, resulting in θ, and output
   5.2 Call Gen_Pattern(θ, Sα, FreItems,α)

Fig. 2. LAPIN WEB Algorithm pseudo code
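To make Example 1 concrete, the following Python sketch (ours, not the authors' C++ implementation) builds the item-last-position lists of Table 2 from the sequence database of Table 1 and then counts the sequence-extension items for the prefix ⟨a⟩ by scanning each list only from the first entry whose position exceeds the prefix border position. A binary search is used here for brevity, whereas LAPIN WEB itself uses the pointer+offset sequential search described above.

```python
from bisect import bisect_right

db = {10: "acbcdadc", 20: "bcbcbab", 30: "dbcbadca"}   # Table 1

def last_position_lists(db):
    """For each sequence, list (item, last position) in ascending order
    of position (1-based), as in Table 2."""
    lists = {}
    for uid, seq in db.items():
        last = {}
        for pos, item in enumerate(seq, start=1):
            last[item] = pos                  # later occurrences overwrite earlier ones
        lists[uid] = sorted(last.items(), key=lambda kv: kv[1])
    return lists

def count_extensions(prefix_border, lpl):
    """Count candidate SE items, scanning each last-position list only from
    the first entry whose position is larger than the prefix border position."""
    support = {}
    for uid, border in prefix_border.items():
        positions = [pos for _, pos in lpl[uid]]
        start = bisect_right(positions, border)
        for item, _ in lpl[uid][start:]:
            support[item] = support.get(item, 0) + 1
    return support

lpl = last_position_lists(db)
print(lpl[10])                                  # [('b', 3), ('a', 6), ('d', 7), ('c', 8)]
# Prefix <a> first occurs at positions 10:1, 20:6, 30:5 (Example 1).
print(count_extensions({10: 1, 20: 6, 30: 5}, lpl))
# {'b': 2, 'a': 2, 'd': 2, 'c': 2} -> <aa>, <ab>, <ac>, <ad> are frequent for min_support = 2
```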


Input: Sα = prefix border position set of the length-k frequent sequential pattern α; BVs = bit vectors of the ITEM_IS_EXIST_TABLE; Ls = SE item-last-position list; CanIs = candidate sequence extension items; ε = user-specified minimum support
Output: FreItems = local frequent SE item list

1.  For each sequence, F, according to its priority (descending)
2.    Sα,F ← obtain the prefix border position of F in Sα
3.    if (Size_local_cand_item_list > Size_suffix_sequence)
4.      bitV ← obtain the bit vector of Sα,F indexed from BVs
5.      For each item β in CanIs
6.        Suplist[β] = Suplist[β] + bitV[β];
7.      CanIs,p ← obtain the candidate items based on the prior sequence
8.    else
9.      Ls,F ← obtain the SE item-last-position list of F in Ls
10.     M = Find the corresponding index for Sα,F
11.     while (M < Ls,F.size)
12.       Suplist[M.item]++;
13.       M++;
14.     CanIs,p ← obtain the candidate items based on the prior sequence
15. For each item γ in CanIs,p
16.   if (Suplist[γ] ≥ ε)
17.     FreItems.insert(γ);

Fig. 3. Finding the SE frequent patterns

Based on this discovery, during the mining process we dynamically compare the suffix sequence length with the local candidate item list size and select the appropriate search space, building a single general framework. In other words, we combine the two approaches, LAPIN LCI and LAPIN Suffix, to improve efficiency at the price of low memory consumption. The pseudo code of the frequent pattern finding process is shown in Fig. 3. From a system administrator's view, the logs of special users (i.e., domain experts) are more important than other logs and thus should always be given higher priority, as shown in Fig. 3 (Step 1). The appended candidate items are also judged based on this criterion (Step 7 and Step 14).

3.3 Pattern Visualization

We can see from the pattern mining process that, for a given support, a great number of patterns are usually produced, so an effective method to filter out and visualize the mined patterns is necessary. In addition, web-site developers, designers, and maintainers also need to understand the efficiency of their site: what kinds of visitors are trying to do something and how they are doing it. Towards this end, we developed a navigational behavior visualization tool based on Graphviz (http://www.research.att.com/sw/tools/graphviz). At present, our prototype system has only implemented basic sequential pattern discovery as the main mining task, which requires a relatively simple user-computer interface and visualization. As more functions are added and experiments are done, we will make the tool more convenient for users.

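As a sketch of how such a visualization can be generated with Graphviz (this is not the authors' tool; the page names and support values below are made up for illustration), mined traversal paths can be written out as a DOT graph whose edge thickness encodes the support value:

```python
# Hypothetical mined 2-length traversal patterns: (from_page, to_page) -> support (%).
paths = {("start", "/loginout.jsp"): 12.4,
         ("/loginout.jsp", "end"): 11.8,
         ("start", "/index.jsp"): 9.3}

def to_dot(paths):
    """Emit a DOT digraph; edge penwidth and label encode the support value."""
    lines = ["digraph traversal {", "  rankdir=LR;"]
    for (src, dst), sup in paths.items():
        lines.append(f'  "{src}" -> "{dst}" [label="{sup}%", penwidth={max(1, sup / 3):.1f}];')
    lines.append("}")
    return "\n".join(lines)

with open("traversal.dot", "w") as f:
    f.write(to_dot(paths))
# Render with: dot -Tpng traversal.dot -o traversal.png
```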


4 Performance Study

In this section, we describe our experiments and evaluations conducted on real-world datasets. We performed the experiments on a 1.6 GHz Intel Pentium(R) M PC with 1 GB of memory, running Microsoft Windows XP. The core of the LAPIN WEB algorithm is written in C++. When comparing the efficiency of LAPIN WEB and PrefixSpan, we turned off the output of the programs to make the comparison equitable.

4.1 Real Data

We consider that results on real data are more convincing in demonstrating the efficiency of our Web log mining system. Two datasets are used in our experiments, DMResearch and MSNBC. DMResearch was collected from the web-site of the China Data Mining Research Institute (http://www.dmresearch.net), from Oct. 17, 2004 to Dec. 12, 2004. The log is large, about 56.9M, and includes 342,592 entries and 8,846 distinct pages. After applying the data preprocessing described in Section 3.1, we identified 12,193 unique users, and the average session length per user is 28. The second dataset, MSNBC, was obtained from the UCI KDD Archive (http://kdd.ics.uci.edu/databases/msnbc/msnbc.html). This dataset comes from Web server logs for msnbc.com and news-related portions of msn.com on Sep. 28, 1999. There are 989,818 users and only 17 distinct items, because these items are recorded at the level of URL category, not at page level, which greatly reduces the dimensionality. The 17 categories are "frontpage", "news", "tech", "local", "opinion", "on-air", "misc", "weather", "health", "living", "business", "sports", "summary", "bbs", "travel", "msn-news", and "msn-sports". Each category is associated with a category number, an integer starting from "1". The statistics of these datasets are given in Table 3.

Table 3. Real Dataset Characteristics

Dataset     # Users  # Items  Min. len.  Max. len.  Avg. len.  Total size
DMResearch  12193    8846     1          10745      28         56.9M
MSNBC       989818   17       1          14795      5.7        12.3M

4.2 Comparing PrefixSpan with LAPIN WEB

Fig. 4 shows the running time and searched space comparison between PrefixSpan and LAPIN WEB. Fig. 4 (a) shows the performance comparison between PrefixSpan and LAPIN WEB for the DMResearch data set. From Fig. 4 (a), we can see that LAPIN WEB is much more efficient than PrefixSpan. For example, at support 1.3%, LAPIN WEB (runtime = 47 seconds) is more than an order of magnitude faster than PrefixSpan (runtime = 501 seconds). This is because the searched space of PrefixSpan (space = 5,707M) was much larger than that of LAPIN WEB (space = 214M), as shown in Fig. 4 (c).


PrefixSpan vs. LAPIN WEB on the DMResearch and MSNBC datasets: (a) running time comparison (DMResearch), (b) running time comparison (MSNBC), (c) searched space comparison (DMResearch), (d) searched space comparison (MSNBC); x-axis: minimum support (%), y-axes: running time (s) and searched space (M).

Fig. 4. Real datasets comparison

Fig. 4 (b) shows the performance comparison between PrefixSpan and LAPIN WEB for the MSNBC data set. From Fig. 4 (b), we can see that LAPIN WEB is much more efficient than PrefixSpan. For example, at support 0.011%, LAPIN WEB (runtime = 3,215 seconds) is about five times faster than PrefixSpan (runtime = 15,322 seconds). This is because the searched space of PrefixSpan (space = 701,781M) was much larger than that of LAPIN WEB (space = 49,883M), as shown in Fig. 4 (d). We have not compared PrefixSpan and LAPIN WEB with respect to user preference, because the former has no such function.

4.3 Visualization Result

To help web-site developers and Web administrators analyze the efficiency of their web-site by understanding what visitors are doing on the site and how they are doing it, we developed a navigational behavior visualization tool. Fig. 5 and Fig. 6 show the visualization results of traversal paths for the two real datasets, respectively. Here, we set the minimum support to 9% for DMResearch and 4% for MSNBC. The thickness of an edge represents the support value of the corresponding traversal path. The numeric value to the right of a traversal path is the support value of the corresponding path. The "start" and "end" nodes are not actual pages belonging to the site; they are other sites placed somewhere on the internet, and indicate the entry and exit doors to and from the site. From the figures, we can easily see that the most traversed edges, the thick ones, connect the pages "start" → "\loginout.jsp" → "end" in Fig. 5, and "start" → "frontpage" → "end" in Fig. 6. Similar interesting traversal paths can also be understood, and


Fig. 5. DMResearch visualization result

Fig. 6. MSNBC visualization result

used by web-site designers to make improvements to the link structure as well as the document content, maximizing the efficiency of visitor paths.

5 Conclusions

In this paper, we have proposed an effective framework for a Web log mining system that helps web-site designers understand user behavior by mining Web log data. In particular, we propose an efficient sequential pattern mining algorithm, LAPIN WEB, by extending our previous work LAPIN with special consideration of Web log data. The proposed algorithm improves mining performance significantly by scanning only a small portion of the projected database, and extracts more reasonable Web usage patterns. Experimental results and evaluations performed on real data demonstrate that LAPIN WEB is very effective and outperforms existing algorithms by up to an order of magnitude. The visualization tool can further be used to make the final patterns easy to interpret and thus improve the presentation and organization of a web-site. Our Web log mining framework is designed in such a way that it can easily be extended by incorporating other new methods or algorithms to make it more functional and adaptive. We are now considering other pattern mining algorithms mentioned earlier, such as clustering and association rule mining. Moreover, we are planning to build more sophisticated visualization tools to interpret the final results.


References
1. Google Website. http://www.google.com.
2. K. Wu, P.S. Yu, and A. Ballman, "Speedtracer: A Web usage mining and analysis tool," IBM Systems Journal, 37(1), pp. 89-105, 1998.
3. H. Ishikawa, M. Ohta, S. Yokoyama, J. Nakayama, and K. Katayama, "On the Effectiveness of Web Usage Mining for Page Recommendation and Restructuring," In 2nd Annual International Workshop of the Working Group "Web and Databases" of the German Informatics Society, Oct. 2002.
4. J. Ayres, J. Flannick, J. Gehrke, and T. Yiu, "Sequential Pattern Mining using A Bitmap Representation," In 8th ACM SIGKDD Int'l Conf. Knowledge Discovery in Databases (KDD'02), pp. 429-435, Alberta, Canada, Jul. 2002.
5. J.I. Hong and J.A. Landay, "WebQuilt: A Framework for Capturing and Visualizing the Web Experience," In 10th Int'l Conf. on the World Wide Web (WWW'01), pp. 717-724, Hong Kong, China, May 2001.
6. J. Pei, J. Han, B. Mortazavi-Asl, and H. Zhu, "Mining access patterns efficiently from web logs," In Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD'00), pp. 396-407, Kyoto, Japan, 2000.
7. J. Pei, J. Han, M. A. Behzad, and H. Pinto, "PrefixSpan: Mining Sequential Patterns Efficiently by Prefix-Projected Pattern Growth," In 17th Int'l Conf. of Data Engineering (ICDE'01), Heidelberg, Germany, Apr. 2001.
8. J. Pei, J. Han, B. Mortazavi-Asl, J. Wang, H. Pinto, Q. Chen, U. Dayal, and M.C. Hsu, "Mining Sequential Patterns by Pattern-growth: The PrefixSpan Approach," IEEE Transactions on Knowledge and Data Engineering, 16(11), pp. 1424-1440, Nov. 2004.
9. J. Pitkow and K. Bharat, "WebViz: A Tool for World-Wide Web Access Log Analysis," In 1st Int'l Conf. on the World Wide Web (WWW'94), Geneva, Switzerland, May 1994.
10. M. J. Zaki, "SPADE: An Efficient Algorithm for Mining Frequent Sequences," Machine Learning, Vol. 40, pp. 31-60, 2001.
11. M. Spiliopoulou and L.C. Faulstich, "WUM: A Web Utilization Miner," In EDBT Workshop on the Web and Data Bases (WebDB'98), Valencia, Spain, 1998. Springer Verlag.
12. R. Agrawal and R. Srikant, "Fast Algorithms for Mining Association Rules," In 20th Int'l Conf. on Very Large Databases (VLDB'94), pp. 487-499, Santiago, Chile, Sep. 1994.
13. R. Agrawal and R. Srikant, "Mining sequential patterns," In 11th Int'l Conf. of Data Engineering (ICDE'95), pp. 3-14, Taipei, Taiwan, Mar. 1995.
14. R. Cooley, B. Mobasher, and J. Srivastava, "Data Preparation for Mining World Wide Web Browsing Patterns," J. Knowledge and Information Systems, 1(1), pp. 5-32, 1999.
15. R. Kosala and H. Blockeel, "Web Mining Research: A Survey," SIGKDD Explorations, ACM Press, 2(1), pp. 1-15, 2000.
16. R. Srikant and R. Agrawal, "Mining sequential patterns: Generalizations and performance improvements," In 5th Int'l Conf. Extending Database Technology (EDBT'96), pp. 13-17, Avignon, France, Mar. 1996.
17. X. Yan, J. Han, and R. Afshar, "CloSpan: Mining closed sequential patterns in large datasets," In 3rd SIAM Int'l Conf. Data Mining (SDM'03), pp. 166-177, San Francisco, CA, May 2003.
18. Z. Yang, Y. Wang, and M. Kitsuregawa, "LAPIN: Effective Sequential Pattern Mining Algorithms by Last Position Induction," Technical Report (TR050617), Info. and Comm. Eng. Dept., Tokyo University, Japan, Jun. 2005. http://www.tkl.iis.u-tokyo.ac.jp/~yangzl/Document/LAPIN.pdf

Adapting K-Means Algorithm for Discovering Clusters in Subspaces

Yanchang Zhao¹, Chengqi Zhang¹, Shichao Zhang¹, and Lianwei Zhao²

¹ Faculty of Information Technology, University of Technology, Sydney, Australia
{yczhao, chengqi, zhangsc}@it.uts.edu.au
² Dept. of Computer Science, Beijing Jiaotong University, Beijing 100044, China
[email protected]

Abstract. Subspace clustering is a challenging task in the field of data mining. Traditional distance measures fail to differentiate the furthest point from the nearest point in very high dimensional data space. To tackle the problem, we design minimal subspace distance which measures the similarity between two points in the subspace where they are nearest to each other. It can discover subspace clusters implicitly when measuring the similarities between points. We use the new similarity measure to improve traditional k-means algorithm for discovering clusters in subspaces. By clustering with low-dimensional minimal subspace distance first, the clusters in low-dimensional subspaces are detected. Then by gradually increasing the dimension of minimal subspace distance, the clusters get refined in higher dimensional subspaces. Our experiments on both synthetic data and real data show the effectiveness of the proposed similarity measure and algorithm.

1 Introduction

As a main technique for data mining, clustering is confronted with increasingly high dimensional data. The dimension of data can be hundreds or thousands in the fields of retail, bioinformatics, telecom, etc., which brings the "curse of dimensionality". It not only makes the index structure less efficient than a linear scan, but also questions the meaningfulness of looking for the nearest neighbor [5], which in turn makes it ineffective to discover clusters in the full dimensional space. The key point is that traditional distance measures fail to differentiate the nearest neighbor from the farthest point in very high-dimensional space. One solution is to measure the distance in subspaces, but it is not easy to select the appropriate subspaces. Fern et al. proposed random projection, choosing subspaces randomly and then combining the results of several random projections in an ensemble way [3]. Procopiuc et al. chose the subspaces where a random group of points lie in a w-width hyper-rectangular box [7]. Agrawal et al. [2] proposed to discover the subspaces in an APRIORI-like way. To tackle the above problem, we design a new similarity measure, minimal subspace distance, for measuring the similarities between points in high dimensional space and discovering subspace clusters. The new measure defines the minimal l-D distance between two points as the minimum of their distances in all l-D subspaces


Algorithm: k-means
Input: The number of clusters k and a dataset
Output: A set of clusters that minimizes the squared-error criterion.
1. Select k objects as initial cluster centers;
2. Assign each data object to the nearest center;
3. Update the cluster center as the mean value of the objects for each cluster;
4. Repeat steps 2 and 3 until the centers do not change or the criterion function converges.

Fig. 1. K-means algorithm

and thus discovers implicitly the subspace of clusters while computing similarities. Based on our new similarity measure, k-means algorithm is improved for discovering subspace clusters in high dimensional space. Our experiments on both synthetic data and real-life data show the effectiveness of the proposed similarity measure and algorithm.

2 K-Means Algorithm

The k-means algorithm is one of the most well-known and widely used partitioning methods for clustering. It works in the following steps. First, it selects k objects from the dataset, each of which initially represents a cluster center. Each object is assigned to the cluster to which it is most similar, based on the distance between the object and the cluster center. Then the means of the clusters are computed as the new cluster centers. The process iterates until the criterion function converges. A typical criterion function is the squared-error criterion, defined as

E = \sum_{i=1}^{k} \sum_{p \in C_i} |p - m_i|^2    (1)

where E is the sum of squared error, p is a point, and m_i is the center of cluster C_i. The k-means algorithm is given in Figure 1. For a detailed description of k-means clustering, please refer to [4].
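For reference, a minimal Python sketch of the procedure in Figure 1 and the criterion of Eq. (1) is given below; it is a generic k-means implementation, not the subspace-adapted algorithm of Section 3.

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Plain k-means on an (n, d) array X: random initial centers, assign, update, repeat."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Step 2: assign each point to its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each center as the mean of its members.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    # Squared-error criterion E of Eq. (1).
    E = sum(np.sum((X[labels == j] - centers[j]) ** 2) for j in range(k))
    return labels, centers, E
```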

3 Adapting K-Means Algorithm for Subspace Clustering

In this section, a new similarity measure, minimal subspace distance, is proposed to discover clusters in subspaces. Based on the new similarity measure, the k-means algorithm is then adapted for discovering subspace clusters.

3.1 Motivation

Euclidean distance is the most commonly used distance measure in the field of data mining. However, the difference between the nearest point and the farthest one becomes less discriminating as the dimensionality increases [5]. The same holds for the Minkowski distance (Lp-norm, p = 2, 3, . . .), with the exception of the Manhattan distance (p = 1).
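A minimal sketch of the minimal subspace distance introduced above, under our reading of the definition (the minimum of the Euclidean distances over all axis-parallel l-dimensional subspaces, which reduces to taking the l smallest coordinate-wise squared differences):

```python
import numpy as np

def minimal_subspace_distance(p, q, l):
    """Minimal l-D distance between p and q: the smallest Euclidean distance over
    all subspaces spanned by l of the original dimensions, i.e. the sum of the
    l smallest coordinate-wise squared differences."""
    diffs = np.sort((np.asarray(p) - np.asarray(q)) ** 2)
    return float(np.sqrt(diffs[:l].sum()))

# Two points that agree closely in dimensions 0 and 1 but not elsewhere:
p = np.array([1.0, 2.0, 9.0, 7.0])
q = np.array([1.1, 2.1, 0.0, 0.0])
print(minimal_subspace_distance(p, q, 2))   # ~0.14, their distance in the best 2-D subspace
```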


Aggarwal et al. suggested using fractional distance metrics (i.e., Lp-norm with 0 < p < 1).

Curves: number of unique terms found, number of significant terms found (tf.idf > 0.5), and recall, plotted against the number of sampled documents; convergence points are marked.

Fig. 1. Recall and total numbers of distinct terms for samples of the UDC-1 (left) and UDC-2 (right) collections

The rate at which new unique terms are found slows as the number of sampled documents increases. The slope of each curve is large at 300 documents, the recommended size for query-based sampling [Callan et al., 1999]. As sampling continues, the slope becomes flatter. Based on previous work [Williams and Zobel, 2005], continued sampling will always find new words, but the rate will decrease. Note that the rate for significant terms drops more rapidly than that for terms. A key contribution of this paper is that convergence to a low rate of vocabulary increase is indicative of good coverage of the vocabulary by the sampled documents. In other words, query sampling reaches good coverage of the collection vocabulary when the slope becomes less than a certain threshold; empirical tests of this hypothesis are discussed below.

In these charts, when the trends for the number of unique terms start smoothing, the curves for the number of significant terms found are nearly flat, which means that by continuing sampling we are unlikely to receive many new significant terms, and it is unlikely to be efficient to keep probing. The recall curve confirms that the number of new significant terms hardly increases after sampling a certain number of documents. The recall value for a sample of 300 documents is less than 15%, while for summaries including more than 2000 documents this value is greater than 45% (three times more) in both graphs. These trends strongly indicate that a sample size of 300 documents is insufficient for making effective summaries. As the slopes for significant terms are not negligible after sampling 300 documents, the risk of losing significant terms is high at this point.

Figure 2 shows similar trends for the DATELINE managed collections. Again, the samples made from 300 documents do not appear to be a good representation of the collection language model. Curiously, although we were expecting the graphs to get smooth sooner than for the previous collections (because the documents should have similar topics), the results are very similar. The reason might be that all the collections so far are based on the TREC newswire data and contain similar documents. Trends for discovery of new terms and recall values for summaries obtained by sampling our WEB collection are shown in Figure 3. As the collection is significantly larger, we


Fig. 2. Recall and total numbers of distinct terms for samples of the DATELINE 325 (left) and 509 (right) collections


Fig. 3. Recall and total numbers of distinct terms for samples of the WEB collection

extended our range of sampling to 6000 documents. The slope is sharply upward not only after sampling 300 documents, but also at all other points below 1000. At this point, the curve for significant terms is already fairly smooth. In other words, we are unlikely to keep receiving significant terms at the previous rate by continuing to probe. Interestingly, while the system has downloaded less than 2% of the total documents, the trend for discovering new terms is already getting smooth. Recall values start converging after downloading nearly 900 documents. Based on these experiments, we conclude that:

– Hypothesis 1 is clearly confirmed, since the accumulation of new vocabulary never stops completely.
– Hypothesis 2 is confirmed, because collections of significantly different sizes show similar rates of vocabulary growth. For example, DATELINE 325 and DATELINE 509 produced similar trends, although they are very different in size.
– Hypothesis 3 is confirmed; if probing is halted after sampling 300 documents, the risk of losing significant terms is high.

4 Distributed Retrieval with Variable-Sized Samples

Given that a sample size of 300 is inadequate, but that some condition is needed to terminate sampling, we need to investigate when sampling should cease. In this section, we test the effect of varying the sample size on retrieval effectiveness. Table 2 shows the mean average precision (MAP) for different sample sizes. We use the TITLE field of TREC topics 51-150 as queries. Values for precision at 10 and 20 documents retrieved are provided because these include the documents that users are most likely to look at [Jansen et al., 2000]. Cutoff values represent the number of collections that will be searched for each query. The results show that, by using samples of more than 300 documents, the overall performance increases. The previously recommended number of 300 documents is not in general a sufficient sample size.

Previous work uses ctf as an indication of vocabulary coverage, and shows that curves become smooth after downloading a limited number of documents from a collection [Callan et al., 1999; Callan and Connell, 2001]. However, our results show that ctf is not an indication of achieving good vocabulary coverage. Terms that are more frequent in the collection are more likely to be extracted by query probing. Once the system finds such a term, the ctf ratio increases more than when the system finds a word with lower frequency. However, these terms are not necessarily more important than the other terms [Luhn, 1958] in the collection, and indeed are unlikely to be significant in queries; downloading them does not mean that the coverage of the vocabulary is sufficient.

Given that 300 documents is insufficient, and that the appropriate number is not consistent from collection to collection, the question is: how big a sample should be chosen from a given collection? We propose that an appropriate method is to keep sampling until the rate of occurrence of new unique terms (the slope in the previous figures) becomes less than a predefined threshold. Specifically, we propose that query probing stop when, for η subsequent samples, the rate of growth in vocabulary becomes less than a threshold τ.

Table 2. The impact of changing sample size on effectiveness

Testbed  Summary Size  Cutoff  MAP     P@10    P@20
SYM236   300           10      0.0133  0.1465  0.1256
SYM236   700           10      0.0370  0.2765  0.2474
SYM236   900           10      0.0326  0.2510  0.2260
SYM236   300           20      0.0222  0.1616  0.1506
SYM236   700           20      0.0533  0.2806  0.2587
SYM236   900           20      0.0506  0.2888  0.2536
UDC39    300           10      0.0611  0.2653  0.2566
UDC39    900           10      0.0739  0.2878  0.2724
UDC39    1500          10      0.0773  0.2959  0.2867
UDC39    300           20      0.0881  0.2949  0.2765
UDC39    900           20      0.0972  0.3051  0.2867
UDC39    1500          20      0.1016  0.2969  0.2878
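The stopping rule proposed above can be sketched as follows. This is an illustrative reading rather than the authors' implementation: the batch size, the placeholder sample_batch(n) routine (assumed to return the set of terms seen in n newly sampled documents), and the definition of the growth rate as the number of new terms relative to the current vocabulary size are all assumptions.

```python
def adaptive_probe(sample_batch, eta=3, tau=0.02, batch_size=300, max_docs=6000):
    """Keep issuing probe queries; stop after eta consecutive batches whose
    relative vocabulary growth is below tau."""
    vocab, sampled, low_growth_runs = set(), 0, 0
    while sampled < max_docs:
        new_terms = sample_batch(batch_size) - vocab
        growth = len(new_terms) / max(1, len(vocab))
        vocab |= new_terms
        sampled += batch_size
        low_growth_runs = low_growth_runs + 1 if growth < tau else 0
        if low_growth_runs >= eta:
            break
    return vocab, sampled
```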


Table 3. Effectiveness of a central index of all documents of SYM236 or UDC39

Relevant Retrieved  MAP     P@10    P@20    R-Precision
8776                0.1137  0.2939  0.2760  0.1749

Table 4. Effectiveness of two DIR systems using both samples of 300 documents and adaptive sample sizes, for SYM236 (η = 3, τ = 2%)

Cutoff  Relevant Retrieved  MAP       P@10      P@20      R-Precision
Samples of 300 documents
1       158                 0.0023    0.0682    0.0435    0.0063
10      1396                0.0133    0.1465    0.1256    0.0429
20      2252                0.0222    0.1616    0.1506    0.0616
50      3713                0.0383    0.1628    0.1676    0.0926
118     4800                0.0515    0.1430    0.1395    0.1032
Adaptive samples
1       527                 0.0075    0.1454    0.1244    0.0168
10      2956                0.0327    0.2510    0.2199    0.0772
20      4715                0.0532**  0.2724    0.2372    0.1135*
50      6813                0.0823**  0.2796**  0.2633**  0.1506**
118     7778                0.0936**  0.2388**  0.2327**  0.1604**

Based on the empirical experiments discussed in the previous section, we suggest initial parameter choices of η = 3 and τ = 2%; that is, probing stops once three consecutive probes all show a growth rate of less than 2%. These convergence points are indicated by arrows in the previous figures. In our approach, these points indicate when sampling is "enough". According to the observations, "enough" varies drastically from collection to collection. Increasing the value of η or decreasing τ delays reaching the stopping condition and increases the number of samples that should be gathered from the collection.

SYM236. The performance of a central index for document retrieval for both collections is shown in Table 3. Since both testbeds include exactly the same documents, the central index for both of them is the same. We used the values in this table as the baseline. Central indexes are usually reported as being more effective than distributed systems [Craswell et al., 2000]. The first column is the number of relevant documents retrieved for TREC topics 51-150; the last column is the precision of the system after as many documents have been retrieved as there are relevant documents in the collection. A comparison of the effectiveness of two systems using the traditional and adaptive query-based sampling techniques is shown in Table 4. The upper part of the table gives the values obtained with the traditional method, while the lower part gives the same measures for our adaptive method. For cutoff = 1, only the best collection, the one whose sampled lexicon has the greatest similarity to the query, will be searched. For cutoff = 118, half of the collections will be searched. It can be seen that our method outperforms the traditional query probing technique in all of the measures and for all cutoff values (some of the collections in this testbed have very few documents, fewer than 20; we did not use query probing for those collections and use the whole collection as its summary in both methods).


Table 5. Summary of sampling for SYM236 and UDC39, using adaptive and traditional sampling

Testbed  Method                        Documents  Unique Terms  Min   Max
SYM236   Traditional (300 documents)   37,200     831,849       300   300
SYM236   Adaptive (τ = 2%, η = 3)      163,900    1,565,193     500   2700
SYM236   Adaptive (τ = 1%, η = 3)      321,300    2,083,700     500   3200
UDC39    Traditional (300 documents)   11,700     624,765       300   300
UDC39    Adaptive (τ = 2%, η = 3)      80,800     1,289,607     1400  2800

Table 6. Effectiveness of two DIR systems using both samples of 300 documents and adaptive sample sizes, for UDC39 (η = 3, τ = 2%)

Cutoff  Relevant Retrieved  MAP       P@10      P@20      R-Precision
Samples of 300 documents
1       1132                0.0161    0.2061    0.1658    0.0351
10      5551                0.0611    0.2653    0.2566    0.1273
20      7320                0.0881    0.2949    0.2765    0.1610
30      7947                0.0969    0.2735    0.2622    0.1705
Adaptive samples
1       1306                0.0178    0.2173    0.1699    0.0403*
10      6342                0.0764**  0.2959**  0.2837**  0.1465**
20      7826                0.1017**  0.3051    0.2969**  0.1730**
30      8280                0.1089**  0.3051**  0.2837**  0.1790**

Sanderson and Zobel [2005] demonstrated that a significant improvement in performance requires statistical tests. We applied the t-test for comparing the outputs of the traditional and adaptive systems. Values shown with an asterisk (*) are significantly different at P < 0.05, while those with double asterisks (**) differ significantly at P < 0.01. Table 5 gives more information about the number of terms and documents that have been sampled using the traditional and adaptive techniques. The smallest and largest samples in each testbed are specified in the last two columns. It is clear that our new approach collects a much more comprehensive set of terms and documents during sampling, and that different collections require samples of greatly varying size.

UDC39. Similar experiments using the UDC39 testbed are shown in Table 6. The same query set is used for experiments on this testbed. Table 6 confirms that our new method outperforms the traditional query-based sampling approach; furthermore, our approach is more effective than a central index in many cases. Central index performance has often been viewed as an ideal goal in previous work [Craswell et al., 2000].


Table 7. Effectiveness of adaptive sampling on SYM236 with η = 3 and τ = 1%

Cutoff  Relevant Retrieved  MAP     P@10    P@20    R-Precision
1       512                 0.0075  0.1392  0.1052  0.0169
10      3191                0.0365  0.2510  0.2281  0.0837
20      4837                0.0580  0.2816  0.2526  0.1176
50      6947                0.0858  0.2796  0.2643  0.1536
118     7803                0.0938  0.2398  0.2352  0.1606

Developing a distributed system that outperforms the central index in all cases is still one of the open questions in distributed information retrieval, but this has been reported as achievable [French et al., 1999]. According to these results, the performance of our DIR system was greater than the central index for cutoffs 10, 20, and 30 for precision-oriented metrics. For cutoff = 10, for example, the system only searches the top 10 collections for each query. This means that it searches only about a quarter of the collections and documents used by the central index, but shows greater effectiveness. Again, values flagged with (*) and (**) indicate statistical significance under the t-test.

Changing η and τ. In the results discussed above, we used values for η and τ obtained from our initial experiments. Decreasing η or increasing τ leads to faster termination of query probing, with less effective summaries. In Table 7, we have decreased the threshold τ to 1%, thus increasing the sample sizes, for SYM236. In most cases, the effectiveness is greater than for the same parameters in Table 4, which uses the old τ and η values. Although the results are better, they are more costly. Table 5 shows that the number of documents sampled with τ = 1% is about twice that with τ = 2%. The results for UDC39 were also tested and found to be similar (but are not presented here).

5 Conclusions

We have proposed a novel sampling strategy for query probing in distributed information retrieval. In almost all previous work on query probing, the sample size was 300 documents; we have shown that such small samples lead to a considerable loss of effectiveness. In contrast to these methods, our system adaptively decides when to stop probing, according to the rate at which new unique terms are received. Our results indicate that once the rate of arrival of new terms has become constant, relatively few new significant terms (those of high impact in retrieval) are observed. We compared our new approach and the traditional model for query-based sampling on two different testbeds. We found that collections have different characteristics, and that the sample size will vary between collections. The effectiveness of the new approach was not only significantly better than the fixed-size sampling approach, but also outperformed a central index in some cases. While the use of larger samples leads to greater initial costs, there is a significant benefit in effectiveness for subsequent queries.


References
R. A. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, 1999.
P. Bailey, N. Craswell, and D. Hawking. Engineering a multi-purpose test collection for web retrieval experiments. Inf. Process. Manage., 39(6):853–871, 2003.
J. Callan and M. Connell. Query-based sampling of text databases. ACM Trans. Inf. Syst., 19(2):97–130, 2001.
J. Callan, Z. Lu, and W. B. Croft. Searching distributed collections with inference networks. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 21–28, Seattle, Washington, 1995. ACM Press.
J. Callan, M. Connell, and A. Du. Automatic discovery of language models for text databases. In Proceedings of the 1999 ACM SIGMOD International Conference on Management of Data, pages 479–490, Philadelphia, Pennsylvania, 1999. ACM Press.
N. Craswell, P. Bailey, and D. Hawking. Server selection on the world wide web. In Proceedings of the Fifth ACM Conference on Digital Libraries, pages 37–46, San Antonio, Texas, 2000. ACM Press.
D. D'Souza, J. Thom, and J. Zobel. Collection selection for managed distributed document databases. Inf. Process. Manage., 40(3):527–546, 2004a.
D. D'Souza, J. Zobel, and J. Thom. Is CORI effective for collection selection? An exploration of parameters, queries, and data. In P. Bruza, A. Moffat, and A. Turpin, editors, Proceedings of the Australian Document Computing Symposium, pages 41–46, Melbourne, Australia, 2004b.
J. French, A. L. Powell, J. Callan, C. L. Viles, T. Emmitt, K. J. Prey, and Y. Mou. Comparing the performance of database selection algorithms. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 238–245, Berkeley, California, 1999. ACM Press.
L. Gravano, H. Garcia-Molina, and A. Tomasic. GlOSS: text-source discovery over the internet. ACM Trans. Database Syst., 24(2):229–264, 1999.
L. Gravano, P. G. Ipeirotis, and M. Sahami. QProber: A system for automatic classification of hidden-web databases. ACM Trans. Inf. Syst., 21(1):1–41, 2003.
P. Ipeirotis. Classifying and Searching Hidden-Web Text Databases. PhD thesis, Columbia University, USA, 2004.
P. G. Ipeirotis and L. Gravano. When one sample is not enough: improving text database selection using shrinkage. In Proceedings of the 2004 ACM SIGMOD International Conference on Management of Data, pages 767–778, Paris, France, 2004. ACM Press.
B. J. Jansen, A. Spink, and T. Saracevic. Real life, real users, and real needs: a study and analysis of user queries on the web. Inf. Process. Manage., 36(2):207–227, 2000.
H. P. Luhn. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159–165, 1958.
W. Meng, C. Yu, and K. Liu. Building efficient and effective metasearch engines. ACM Comput. Surv., 34(1):48–89, 2002.
A. L. Powell and J. French. Comparing the performance of collection selection algorithms. ACM Trans. Inf. Syst., 21(4):412–456, 2003.
M. Sanderson and J. Zobel. Information retrieval system evaluation: Effort, sensitivity, and reliability. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 162–169, Salvador, Brazil, 2005. ACM Press.


L. Si and J. Callan. Relevant document distribution estimation method for resource selection. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 298–305, Toronto, Canada, 2003. ACM Press.
H. E. Williams and J. Zobel. Searchable words on the web. International Journal of Digital Libraries, 5(2):99–105, 2005.
B. Yuwono and D. L. Lee. Server ranking for distributed text retrieval systems on the internet. In Proceedings of the Fifth International Conference on Database Systems for Advanced Applications (DASFAA), pages 41–50, Melbourne, Australia, 1997. World Scientific Press.

The Probability of Success of Mobile Agents When Routing in Faulty Networks

Wenyu Qu and Hong Shen
Graduate School of Information Science, Japan Advanced Institute of Science and Technology

Abstract. The use of mobile agents has become an accessible technology in recent years. It is expected to be easier to build robust and fault-tolerant distributed systems with mobile agents, since they are capable of reacting dynamically to unfavorable situations and events. In this paper, we consider the problem of using mobile agents for routing in faulty networks. We propose two mobile agent-based routing models and compare their probability of success (the probability that an agent can find the destination).
Keywords: Mobile agents, faulty networks, routing, probability of success.

1 Introduction

A mobile agent is a program entity that is capable of migrating autonomously from node to node and acts on behalf of a user to perform intelligent decision-making tasks. In an information network, when a mobile agent is encapsulated with a task, it can be dispatched to a remote node. Once the agent has completed its tasks, the summary report of its trip is sent back to the source node. Since there are very few communications between the agent and the source node during the searching process, the network traffic generated by mobile agents is very light. The potential benefits of using mobile agents include reducing network load, overcoming network latency, encapsulating protocols, executing asynchronously and autonomously, adapting to the environment dynamically, etc. [11]. It has drawn a great deal of attention in both academia and industry [3, 11, 19, 20].

Routing is an important issue for network management. Mobile agent-based network routing is a recently proposed method for use in large dynamic networks [5, 7, 13, 15, 17, 21]. In an agent-based network, agents can be generated from every node in the network, and each node in the network provides an execution environment to mobile agents. A node which generates mobile agents is called the server of these agents. Once a request for sending a packet is received from a server, the server will generate a number of mobile agents. These agents will then move out from the server to search for the destination. Once a mobile agent finds the destination, the information will be sent back to the server along the same path. When all (or some of) the mobile agents come back, the server will


determine the optimal path and send the packet to the destination along the optimal path. At the same time, the server will update its routing table. In this paper, we describe a general mobile agent-based routing model and classify it into two cases based on the reaction capability of mobile agents to a system failure. To compare their performance, we analyze the probability of success of mobile agents. Our contributions are summarized as follows:

– A general agent-based routing model is described and classified into two cases based on the reaction of mobile agents to a system failure: MWRC and MSRC.
– The probability of success is analyzed for each case, which serves as an important measure for monitoring network performance.

Our paper is organized as follows. Section 2 discusses related work. Section 3 describes our model. Section 4 introduces the notations used in this paper and presents the analytical results for mobile agents. Section 5 concludes the paper.

2 Related Work

A mobile agent is an autonomous object that is able to migrate autonomously from node to node in a computer network. Usually, the main task of a mobile agent is determined by the user's application, which can range from E-shopping and distributed computation to real-time device control. In recent years, a number of research institutions and industrial entities have been engaged in developing supporting systems for this technology [11, 23]. In [11], several merits of mobile agents are described, including network load and latency reduction, protocol encapsulation, adaptation, heterogeneity, robustness, and fault-tolerance. Successful examples of using mobile agents can be found in [10, 12].

Network routing is a problem in network management. Ant routing is a recently proposed mobile agent based network routing algorithm for use in these environments [21, 22]. The continuing investigation and research of naturally occurring social systems offer the prospect of creating artificial systems that are controlled by emergent behavior and promise to generate engineering solutions to distributed systems management problems such as those in communication networks [5, 17]. Real ants are able to find the shortest path from a food source to the nest without using visual cues. They can also adapt to changes in the environment, for example finding a new shortest path once the old one is no longer feasible due to a new obstacle [2, 9]. In the ant routing algorithm described in [7, 18], artificial ants are agents which concurrently explore the network from node to node and exchange collected information when they meet each other. They independently choose the node to move to by using a probabilistic function, which was proposed there to be a function of the connectivity of each node. Artificial ants probabilistically prefer nodes that are directly connected. Initially, a number of artificial ants are placed on randomly selected nodes. At each time step they


move to new nodes and select useful information. When an ant has completed its task, it will send a message back to the server. In [4], Brewington et al. formulated a method of mobile agent planning, analogous to the travelling salesman problem [8], to decide the sequence of nodes to be visited by minimizing the total execution time until the destination is found. In the preliminary work of this paper [16], the probability of success of mobile agents was analyzed; that model can be seen as a special case of the one in this paper.

3

Mobile Agent-Based Routing Model

Assume that in a network with n nodes, agents can be generated from every node, and each node provides mobile agents with an execution environment. A node which generates mobile agents is called the server of these agents. Initially, there are a number of requests for sending packets in the network, and a number of mobile agents are generated for each request. At any time t, the expected number of requests received from one node is m. Once a request arrives, k agents are created and dispatched into the network. These agents traverse the network from the server to search for the destination. Once an agent reaches a node, it checks whether the node is its destination. If so, the agent returns to the server with information about the searched path; otherwise, it selects a neighboring node to move on to. The server compares all the collected paths and picks the optimal one; the packet is then sent to the destination along the optimal path, and at the same time the server updates its routing table. To prevent the user from waiting too long, an agent dies if it cannot find its destination within a given time bound, which we call the agent's life-span limit. As we know, any component of the network (machine, link, or agent) may fail at any time, preventing mobile agents from continuing their trip, so mobile agents have to adapt dynamically to the environment during their trip. In this paper, we study two cases based on the mobile agents' reaction to a system failure: in the first, a mobile agent dies if it is subject to a failure; in the second, if a mobile agent is subject to a failure, it returns to the previous node, reselects another neighboring node of the previous node, and moves to it. Obviously, there is a trade-off between these two cases. Since mobile agents are generated frequently, there will be many agents running in the network. If the death rate is high (as in the first case), the number of agents working for a request is small, and we cannot obtain a high probability of success. On the other hand, if there are too many mobile agents running in the network (as in the second case), they consume too much computational resource, which degrades network performance due to the limited network resources and may ultimately block the entire network. In the following, we analyze both the number of mobile agents and the probability of success in order to evaluate the network performance for both cases.
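To make the two reaction strategies concrete, here is a minimal Monte-Carlo sketch of the model just described. The graph, the parameter values, the helper names, and the way a failed move is charged against the life-span limit are our own illustrative assumptions, not details taken from the paper.

```python
import random

def request_succeeds(adj, server, dest, k, d, p, reselect):
    """One request: k agents, life-span limit of d jumps, per-node failure
    probability p.  reselect=False models the first case (the agent dies on a
    failure); reselect=True models the second case (the agent stays put and
    selects another neighbouring node)."""
    for _ in range(k):                          # agents work independently
        node, jumps = server, 0
        while jumps <= d:
            if node == dest:
                return True                     # at least one agent succeeded
            nxt = random.choice(adj[node])      # pick a neighbouring node
            jumps += 1
            if random.random() < p:             # the move hits a failure
                if not reselect:
                    break                       # case 1: this agent dies
                continue                        # case 2: reselect a neighbour
            node = nxt
    return False

def probability_of_success(adj, server, dest, k, d, p, reselect, trials=5000):
    wins = sum(request_succeeds(adj, server, dest, k, d, p, reselect)
               for _ in range(trials))
    return wins / trials

# toy 4-node ring: 0-1-2-3-0
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(probability_of_success(ring, server=0, dest=2, k=3, d=5, p=0.2, reselect=False))
print(probability_of_success(ring, server=0, dest=2, k=3, d=5, p=0.2, reselect=True))
```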


4


Mathematical Analysis

Suppose that the network topology is a connected graph, so that there is at least one path (direct or indirect) between any two nodes. The matrix Φ = (ϕ_ij)_{n×n} is the connectivity matrix of the graph, i.e., if there is a direct link between node i and node j, then ϕ_ij = ϕ_ji = 1; otherwise, ϕ_ij = 0. Let ϕ_j be the j-th column vector of Φ, so Φ = (ϕ_1, ϕ_2, ···, ϕ_n). Define c_j = ‖ϕ_j‖_1 = Σ_{i=1}^{n} |ϕ_ij|, σ_1 = max_{1≤j≤n} c_j, σ_n = min_{1≤j≤n} c_j, and let C = diag(c_1, c_2, ···, c_n) be a diagonal matrix. It is easy to see that c_j is the number of neighboring nodes of the j-th node including itself, and ‖Φ‖_1 = max_{1≤j≤n} ‖ϕ_j‖_1 = σ_1.

For a network with n nodes (i.e., n_1, n_2, ···, n_n), every node can be the destination of a request, and each node has an independent error rate. Let X_i be a binary-valued variable defined as

X_i = 1 if the agent dies in the i-th node due to a failure, and X_i = 0 otherwise,

with probability Pr{X_i = 1} = p. The parameter p thus measures the incidence of failure in the network. We say a node is down if it is out of work; otherwise, it is up. Once a point-to-point request¹ is made, a number of agents are generated and dispatched into the network. Once an agent reaches an up node, it finds its destination locally with probability 1/n. If the agent cannot find its destination there, it selects a neighboring node and moves on.

4.1

Case 1

For the first case, assume that the probability of jumping to any neighboring node or dying in the current node is the same. Regarding the probability that an agent can find the destination in d jumps, we have the following theorem:

Theorem 1. The probability P*(n, d, p, k) that at least one agent among the k agents can find the destination in d jumps satisfies

P*(n, d, p, k) = 1 − (1 − a(1 − τ^d)/(1 − τ))^k,    (1)

where a = (1 − p)/n, b = E[1/c_i], and τ = (1 − a)(1 − b).

Proof. The theorem can be proved similarly to that in [16].

The value of b depends on the probability distribution of c_i. For example, if the c_i (1 ≤ i ≤ n) are independent and uniformly distributed, we have

b = E[1/c_i] = (1/(n − 1)) ∫_1^n (1/c_i) dc_i = ln n / (n − 1).

From Theorem 1, it is easy to estimate P*(n, d, p, k) as follows.

¹ For point-to-multipoint requests, the idea is intrinsically the same.


Corollary 1. The probability P*(n, d, p, k) satisfies the following inequalities:

1 − ((1 − a)/(1 + aσ_n − a))^k ≤ lim_{d→∞} P*(n, d, p, k) ≤ 1 − ((1 − a)/(1 + aσ_1 − a))^k,

where a = (1 − p)/n, and σ_1 and σ_n are the maximum and minimum of the c_i.

Proof. The first inequality follows from the fact that

P(d) = a(1 − a)^{d−1} ∏_{i=1}^{d−1} (c_{J_i} − 1)/c_{J_i} ≤ a(1 − a)^{d−1} ∏_{i=1}^{d−1} (σ_1 − 1)/σ_1 = a(1 − a)^{d−1} ((σ_1 − 1)/σ_1)^{d−1}.

Similarly, the second inequality can be proved.

The probability that an agent can find its destination is decided by the connectivity of the network and the parameters k and d, which coincides with practice. From the theorem above, we can easily see that the probability that none of the k agents can find the destination is less than ((1 − a)/(1 + aσ_1 − a))^k, and the probability that all k agents can find the destination is less than (a·σ_1/(1 + aσ_1 − a))^k.
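As a quick numerical illustration of (1) and of Corollary 1, the following small routine evaluates P*(n, d, p, k) for the uniform-degree case b = ln n/(n − 1); the parameter values used in the calls are illustrative only.

```python
import math

def p_success(n, d, p, k):
    """P*(n, d, p, k) from (1) with b = E[1/c_i] = ln(n)/(n-1) (uniform c_i)."""
    a = (1.0 - p) / n
    b = math.log(n) / (n - 1)
    tau = (1.0 - a) * (1.0 - b)
    return 1.0 - (1.0 - a * (1.0 - tau ** d) / (1.0 - tau)) ** k

def limit_bounds(n, p, k, sigma_1, sigma_n):
    """Lower/upper bounds on lim_{d->inf} P* from Corollary 1."""
    a = (1.0 - p) / n
    lo = 1.0 - ((1.0 - a) / (1.0 + a * sigma_n - a)) ** k
    hi = 1.0 - ((1.0 - a) / (1.0 + a * sigma_1 - a)) ** k
    return lo, hi

print(p_success(n=10000, d=3000, p=0.01, k=40))        # grows towards its limit with d
print(limit_bounds(n=10000, p=0.01, k=40, sigma_1=50, sigma_n=5))
```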

[Plot: P*(n, d, p, k) over d for MWRC where c_i satisfies the uniform distribution (n = 10000); curves for (k = 40, p = 0.01), (k = 40, p = 0.25), (k = 20, p = 0.01) and (k = 20, p = 0.25).]

Fig. 1. The changes of P*(n, d, p, k) over d where c_i satisfies the uniform distribution

It is easy to see that P*(n, d, p, k) is an increasing function of d with a loose upper bound of 1. When p = 0, P*(n, d, p, k) still does not reach 1 no matter how long the agent searches, because there remains a possibility that the agent dies before it finds its destination. From the figure, it can also be seen that P*(n, d, p, k) is an increasing function of k and a decreasing function of p.

4.2

Case 2

For the second case, since an agent does not die until its life-span limit expires, the probability of success equals r/n, where r is the number of nodes that the agent has entered and checked. Denote the i-th node that an


agent enters by h_i, the number of neighboring nodes of the i-th node by c_i, and the number of neighboring nodes that the agent has selected by v_i (i.e., the agent fails to enter the first v_i − 1 selected nodes and can only enter the v_i-th selected node). Regarding the average number of nodes selected, we have the following result.

Lemma 1. The average number of neighboring nodes selected by an agent at each node is E(v_i) = (1 − E[p^{c_i}] − E[c_i p^{c_i}])/(1 − p).

Proof. The probability that an agent can enter the first selected node, h_i^1, in NB(i) equals 1 − p, and the probability that it can enter the second selected node equals p(1 − p). By recursion, the probability that the agent enters the v_i-th node equals p^{v_i−1}(1 − p). Therefore, the average number of nodes the agent selects in NB(i) satisfies

E(v_i | NB(i)) = Σ_{v_i=1}^{c_i} v_i p^{v_i−1}(1 − p) = ((1 − p)/p) · (c_i p^{c_i+2} − (c_i + 1) p^{c_i+1} + p)/(1 − p)^2 = (1 − p^{c_i} − c_i p^{c_i})/(1 − p).

Thus, the average number of nodes the agent selects at each node during its trip satisfies

E(v_i) = E[E(v_i | NB(i))] = (1 − E[p^{c_i}] − E[c_i p^{c_i}])/(1 − p).

Hence, the lemma is proven.

Regarding the estimation of r, we have the following result.

Lemma 2. Let r be the number of nodes that the agent visits; then the average number of nodes that an agent enters satisfies

E(r) = ⌊d / (2E(v_i) − 1)⌋,

where ⌊x⌋ denotes the greatest integer less than or equal to x (i.e., x − 1 < ⌊x⌋ ≤ x).

ψ̄(X) = {t_i ∈ T : |X ∩ I_λ(t_i)| / |I_λ(t_i)| > 0}    (8)

Suppose X is a term set expressing a vague concept; ψ_(X) is the core connotation of the concept and ψ̄(X) is the extension of the concept. The occurrence frequency of "zero-valued" similarity can be greatly lessened by using the upper approximation of the concept, expressed with paragraph-level granularity knowledge.


4.3 Improved TFIDF Weighing System in EVSM Model

We produce an improved TFIDF weighing system based on the traditional TFIDF weighing system of VSM [5, 12]. Let p_i = {t_1, t_2, …, t_n} be a paragraph of a web document and p'_i = {t_1, t_2, …, t_m} its upper approximation. Let w_ij denote the weight of term t_j in paragraph p'_i, given by the improved TFIDF formula (9), and let w'_ij be the normalized value of w_ij:

w'_ij = w_ij / Σ_{t_j ∈ d'_i} w_ij    (10)

To demonstrate the use of the EVSM framework, we detail the process of converting a web document with the following example.

Example 2. Let the paragraph collection be P = {p1, p2, p3, p4, p5, p6, p7} and the term collection be T = {t1, t2, t3, t4, t5}; the frequency data are listed in Table 1. Let the threshold λ equal 4. With equations 3, 4 and 5, the upper approximations of the paragraphs p_i (i = 1, 2, …, 7) can be computed as below:

ψ̄(p1) = ψ̄(p2) = ψ̄(p4) = {t1, t2, t3, t4, t5}
ψ̄(p3) = ψ̄(p5) = ψ̄(p6) = {t1, t2, t4, t5}
ψ̄(p7) = {t3, t4, t5}

We weigh the paragraph p1 with the traditional TFIDF and its upper approximation p'_1 with the improved TFIDF; the results are listed in Table 2.

Table 1. Sample Paragraph-Term Frequency Array

paragraph \ term   t1   t2   t3   t4   t5
p1                  0    0    5    2    3
p2                  8    0    3    0    7
p3                  3    7    0    4    5
p4                  6    5    4    0    2
p5                  7    2    0    5    5
p6                  1    6    0    6    4
p7                  0    0    1    4    0

Table 2. VSM and EVSM

Improved TFIDF weight (EVSM)
term   Non-normalization   Normalization
t1     0.093               0.015
t2     0.143               0.023
t3     1.731               0.281
t4     2.089               0.339
t5     2.104               0.342

Traditional TFIDF weight (VSM)
term   Non-normalization   Normalization
t1     0                   0
t2     0                   0
t3     0.731               0.25
t4     1.089               0.37
t5     1.104               0.38
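To illustrate how a paragraph is extended by its upper approximation before weighting, here is a rough sketch that assumes the standard tolerance-class construction (terms co-occurring in at least λ paragraphs form one class); equations (3)-(5), which define the classes precisely, are not reproduced in this excerpt, so the details below are our assumptions.

```python
from collections import defaultdict

def tolerance_classes(freq, lam):
    """I_lambda(t): t together with every term co-occurring with t in at least
    `lam` paragraphs (assumed tolerance relation)."""
    terms = list(freq)
    cooc = defaultdict(int)
    for t1 in terms:
        for t2 in terms:
            cooc[(t1, t2)] = sum(1 for p in freq[t1]
                                 if freq[t1][p] > 0 and freq[t2][p] > 0)
    return {t: {u for u in terms if cooc[(t, u)] >= lam} | {t} for t in terms}

def upper_approximation(paragraph_terms, classes):
    """psi_bar(p): all terms whose tolerance class overlaps the paragraph (cf. (8))."""
    return {t for t, cls in classes.items() if cls & paragraph_terms}

# freq[t][p] = occurrences of term t in paragraph p (excerpt of Table 1):
freq = {
    "t1": {"p1": 0, "p2": 8, "p3": 3, "p4": 6, "p5": 7, "p6": 1, "p7": 0},
    "t3": {"p1": 5, "p2": 3, "p3": 0, "p4": 4, "p5": 0, "p6": 0, "p7": 1},
    "t5": {"p1": 3, "p2": 7, "p3": 5, "p4": 2, "p5": 5, "p6": 4, "p7": 0},
}
classes = tolerance_classes(freq, lam=4)
print(upper_approximation({"t3", "t5"}, classes))   # p1 is extended beyond its own terms
```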

4.4 Evaluation on Paragraph Granularity's Representing Ability

In order to label a document according to the paragraph clustering results, it is necessary to develop appropriate metrics to evaluate a paragraph's ability to represent its parent web document's topic. For measuring this representative ability, we extract three important attributes from each paragraph: Paragraph Position, Term Repeating Rate and Paragraph Relative Length.

1) Paragraph Position. We classify all paragraphs of a web document into three types by their position: the first paragraph is of type Front, the last paragraph is of type End, and all other paragraphs are of type Middle. On this basis, we present a strategy to determine the weight of the position attribute of a paragraph. Let p_i be a paragraph of web document d, and let |d| denote the total number of paragraphs of the document; p_i.PP denotes the position weight of paragraph p_i.

2) Paragraph Relative Length. According to article structure theory, generally speaking, the more detailed a paragraph's description is, the more important the paragraph is to its parent web document. So we give the following definition (Paragraph Relative Length, abbreviated as PRL):

p_i.PRL = p_i.Length / d.Length    (11)

3) Term Repeating Rate. The article structure principle holds that some terms occur very frequently in certain positions to emphasize a viewpoint of the author. We define the Term Repeating Rate (TRRate) by the following formula:

p_i.TRRate = Σ_{term_j ∈ p_i.Term} freq(term_j) / p_i.Length    (12)

From the above three measures, we can define the weight w_pi of the paragraph as below:

w_pi = PW · p_i.PP + LW · p_i.PRL + TRW · p_i.TRRate,  s.t.  PW + LW + TRW = 1,

where PW, LW and TRW respectively denote the contributions of the attributes Paragraph Position, Paragraph Relative Length and Term Repeating Rate to the paragraph's representative ability. The concrete values of PW, LW and TRW can be given manually by domain experts or determined automatically by the computer.
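A tiny sketch of this weighted combination follows; the 0.4/0.3/0.3 split and the Front-position weight of 1.0 are illustrative assumptions, since the concrete values are left to experts or to the system.

```python
def paragraph_weight(pp, prl, trrate, pw=0.4, lw=0.3, trw=0.3):
    """w_pi = PW*PP + LW*PRL + TRW*TRRate with PW + LW + TRW = 1."""
    assert abs(pw + lw + trw - 1.0) < 1e-9
    return pw * pp + lw * prl + trw * trrate

# a Front paragraph (assumed position weight 1.0) covering 20% of the document,
# with a term repeating rate of 0.15:
print(paragraph_weight(pp=1.0, prl=0.2, trrate=0.15))
```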

5 Algorithm Design

LabelDocument Algorithm. For simplicity, the main procedure is described as follows: first, score each paragraph by the attributes Paragraph Position, Paragraph Relative Length and Term Repeating Rate; second, assign the document to the optimal topic cluster by its membership value, motivated by the high-voting principle of multi-database mining [13].

Algorithm LabelDocument
Input: web document d = (title, p_1, p_2, …, p_n), topic set

T = (T_1, T_2, …, T_n)
Output: label of web document d
Method:
(1) for each p_i ∈ d do
        compute w_pi
    end for
(2) for each T_j ∈ T do
        if title ∈ T_j then w_j = TW;
        for each p_i ∈ d do
            if p_i ∈ T_j then w_j = w_j + PSW * w_pi;
        end for
    end for
(3) label = arg max_{T_j ∈ T} (w_j)
    return label.


Algorithm WDCBKG
Input: web document collection D, number of clusters K, term tolerance threshold λ, minimal frequency threshold β, change rate ε
Output: K web document clusters T_1, T_2, …, T_K
(1) preprocess the web document collection and convert it into paragraph vectors under the guidance of the EVSM data model.
(2) cluster the paragraphs with k-means.
(3) label the web documents with LabelDocument.
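Under the assumption that an EVSM vectorizer, the Section 4.4 paragraph weight and a title test are available as callables, the whole pipeline can be sketched as follows; scikit-learn's KMeans stands in for the k-means step, and TW and PSW are the title and paragraph weights of LabelDocument.

```python
from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def wdcbkg(documents, K, lam, evsm_vectorize, paragraph_weight, title_in_topic,
           TW=1.0, PSW=0.5):
    """Sketch of WDCBKG: EVSM paragraph vectors -> k-means on paragraphs ->
    LabelDocument voting.  evsm_vectorize must return fixed-length numeric
    vectors; the three callables are stand-ins, not reproduced here."""
    # (1) convert every paragraph of every document into an EVSM vector
    para_vecs = [evsm_vectorize(para, lam)
                 for doc in documents for para in doc["paragraphs"]]
    # (2) cluster the paragraphs with k-means
    topics = KMeans(n_clusters=K, n_init=10).fit_predict(np.array(para_vecs))
    # (3) label every document by weighted voting of its paragraphs
    labels, cursor = [], 0
    for doc in documents:
        score = Counter()
        for j in range(K):
            if title_in_topic(doc["title"], j):
                score[j] += TW
        for para in doc["paragraphs"]:
            score[topics[cursor]] += PSW * paragraph_weight(para)
            cursor += 1
        labels.append(score.most_common(1)[0][0])
    return labels
```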

6 Experiments

We have illustrated the use of the proposed techniques in the previous sections. Our experiments were conducted on a Dell Workstation PWS650 with 2 GB main memory and the Windows 2000 OS.

6.1 Dataset Selection

To evaluate our proposed algorithm WDCBKG, we downloaded 15013 web documents from sub-directories of Yahoo! News. The document distribution is listed in Table 3.

Table 3. Distribution of the web document collection

Group NO   Label        Number of Web Documents
1          Sports       2566
2          Health       2641
3          Technology   2309
4          Business     2470
5          Politics     2163
6          Label        2566

6.2 Experimental Results

In this section, we evaluate the performance of the approach. The following experiments were conducted on a Dell Workstation PWS650 with 2 GB main memory and the Windows 2000 OS. We assess our proposed approach from three aspects, as follows.

1) Performance of clustering results. We use the F-measure, the harmonic mean of precision and recall, to evaluate the clustering results, comparing the WDCBKG algorithm with the VSM_Kmeans algorithm. We randomly select 10 groups of web documents from the document collection, each of size 10000, and cluster each group with VSM_Kmeans and WDCBKG respectively. Table 4 shows the results of the two algorithms. From Table 4 we can see that, compared to VSM_Kmeans, the performance of WDCBKG is greatly improved.

2) Scalability. We conduct a group of experiments on data sets of different sizes. From Figure 2 we can see that the performance of the clustering results from


EVSM_WDCBKG does not decrease as the size of the experimental data set increases but remains stable, fluctuating from 0.75 to 0.81. Consequently, as far as data set size is concerned, our approach is scalable.

3) Sensitiveness to the tolerance threshold parameter. The tolerance threshold parameter is rather important to WDCBKG. From our EVSM model it is not difficult to deduce that an inadequate tolerance threshold can decrease the performance of the clustering results: on the one hand, a too small tolerance threshold adds noise data when representing the clustering objects; on the other hand, a too large tolerance threshold makes EVSM tend towards VSM. Both cases lead to worse performance. Figure 3 shows that our experimental results correspond to this deduction: when the tolerance threshold equals 5 the performance is best, whereas when it equals 2, 3 or 8 the performance is worst.

Table 4. The comparison of clustering results of WDCBKG and VSM_Kmeans (F-measure of VSM_Kmeans and WDCBKG for groups 1-10)

Fig. 2. Scalability of WDCBKG

Fig. 3. Sensitiveness to the tolerance threshold
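For completeness, one common way to compute the clustering F-measure used above is sketched below; the paper does not spell out which exact variant it uses, so this per-class best-match, size-weighted version is an assumption.

```python
def f_measure(true_labels, cluster_ids):
    """Clustering F-measure: for every true class take the best harmonic mean
    of precision and recall over all clusters, weighted by class size."""
    classes, clusters, n = set(true_labels), set(cluster_ids), len(true_labels)
    total = 0.0
    for c in classes:
        in_class = {i for i, t in enumerate(true_labels) if t == c}
        best = 0.0
        for k in clusters:
            in_cluster = {i for i, g in enumerate(cluster_ids) if g == k}
            overlap = len(in_class & in_cluster)
            if overlap == 0:
                continue
            prec, rec = overlap / len(in_cluster), overlap / len(in_class)
            best = max(best, 2 * prec * rec / (prec + rec))
        total += len(in_class) / n * best
    return total

print(f_measure(["sport", "sport", "health", "health"], [0, 0, 0, 1]))
```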

7 Summary

In this paper, we have studied the intrinsic limitations of the Vector Space Model and proposed a new representation model, named EVSM, that is based on knowledge granularity and the article structure principle. To evaluate our approach, we have conducted a number of experiments. The experimental results show that, in terms of both clustering quality and scalability, WDCBKG outperforms VSM_Kmeans; in a word, our algorithm is effective, efficient and promising.


References
1. A.L. Hsu, S.K. Halgamuge: Enhancement of topology preservation and hierarchical dynamic self-organising maps for data visualization. International Journal of Approximate Reasoning, 32(2-3), 2003, pp. 259-279.
2. B. Liu, Y. Xia, P.S. Yu: Clustering through decision tree construction. In SIGMOD-00, 2000.
3. C. Hung, S. Wermter: A dynamic adaptive self-organising hybrid model for text clustering. In Proceedings of the Third IEEE International Conference on Data Mining (ICDM'03), Melbourne, USA, November 2003, pp. 75-82.
4. C. Hung, S. Wermter: A time-based self-organising model for document clustering. In Proceedings of the International Joint Conference on Neural Networks, Budapest, Hungary, July 2004, pp. 17-22.
5. C.L. Ngo, H.S. Nguyen: A tolerance rough set approach to clustering web search results. In PKDD 2004, pp. 515-517.
6. J. Han, M. Kamber: Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers, 2000.
7. J. Yoon, V. Raghavan, V. Chakilam: BitCube: clustering and statistical analysis for XML documents. In Thirteenth International Conference on Scientific and Statistical Database Management, Fairfax, Virginia, July 18-20, 2001.
8. M. Kryszkiewicz: Properties of incomplete information systems in the framework of rough sets. In: L. Polkowski, A. Skowron (eds.), Rough Sets in Data Mining and Knowledge Discovery. Berlin: Springer-Verlag, 1998, pp. 422-450.
9. M. Kryszkiewicz: Rough set approach to incomplete information systems. Information Sciences, 112, 1998, pp. 39-49.
10. Z. Pawlak: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht.
11. Z. Pawlak: Granularity of knowledge, indiscernibility and rough sets. In Proceedings of the 1998 IEEE International Conference on Fuzzy Systems, pp. 106-110.
12. G. Salton, M.J. McGill: Introduction to Modern Information Retrieval. McGraw-Hill, 1983.
13. S. Zhang: Knowledge Discovery in Multi-databases by Analyzing Local Instances. PhD Thesis, Deakin University, 2001.
14. V. Poe, P. Klauer, S. Brobst: Building a Data Warehouse for Decision Support. Prentice Hall PTR, 2nd edition.
15. Y. Yang, J.O. Pedersen: A comparative study on feature selection in text categorization. In Proceedings of the Fourteenth International Conference on Machine Learning. San Francisco: Morgan Kaufmann, 1997, pp. 412-420.
16. Y.Y. Yao: Information granulation and rough set approximation. International Journal of Intelligent Systems, 16, 2001, pp. 87-104.
17. Y.Y. Yao: Granular computing for the design of information retrieval support systems. In: Information Retrieval and Clustering, W. Wu, H. Xiong, S. Shekhar (eds.), Kluwer Academic Publishers, 2003.
18. Y.Y. Yao: A partition model of granular computing. Transactions on Rough Sets I, 2004, pp. 232-253.
19. L.A. Zadeh: Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems, 19, pp. 111-127.
20. L.A. Zadeh: Some reflections on soft computing, granular computing and their roles in the conception, design and utilization of information/intelligent systems. Soft Computing, 2, pp. 23-25.
21. W. Zheng: Architecture for Paragraphs (in Chinese). Fujian People's Press, 1984.

XFlat: Query Friendly Encrypted XML View Publishing∗ Jun Gao, Tengjiao Wang, and Dongqing Yang The School of Electronic Engineering and Computer Science, Peking University, 100871 Beijing, China {gaojun, tjwang, dqyang}@db.pku.edu.cn

Abstract. The security of published XML data receives high attention due to the sensitive nature of the data in some areas. This paper proposes an XML view publishing method called XFlat. Compared with other methods, XFlat focuses on the efficiency of query evaluation over the published XML view. XFlat decomposes an XML tree into a set of sub-trees, each with the same accessibility on every node for all users, and encrypts and stores each sub-tree in a flat, sequential way. This storage strategy avoids nested encryption during view construction and nested decryption during query evaluation. In addition, we discuss how to generate the user-specific schema and how to minimize the total space cost of the XML view, taking into account the size of the relationships among the sub-trees. The experimental results demonstrate the effectiveness and efficiency of our method.

1 Introduction

With XML becoming the standard for information exchange and representation over the Internet, more and more large corporations and organizations publish their data in the form of XML. This trend also raises the challenge of how to protect the security of the published data, especially when the data are sensitive. Different from XML security research at the server side, XML security in the publishing scenario poses more challenges, since the data owner loses control over the data after they are published. In general, the security of a published XML view relies on cryptographic technology to combine the access control specifications into the published XML view; a user must hold the correct key before visiting the accessible parts of the XML document assigned by the access control specifications. The naive method for this problem is to generate the accessible XML sub-tree for each user separately, encrypt the sub-trees and publish them together. The main problem of this method lies in repetitive encryption and the extra space cost when sub-trees can be accessed by multiple users. The method in [2] considers the multiple access specification rules as a whole, and the encryption proceeds bottom-up: for example, if the accessibility of a sub-tree t1 differs from that of its parent node, t1 is encrypted first and then replaced by the cipher text. The process repeats

Project 2005AA4Z3070 supported by the national high-tech research and development of China, Project 60503037 supported by NSFC, Project 4062018 supported by BNSF.



until the whole tree has been processed. This method suffers from several limitations. For example, XPath evaluation over the secured XML view needs to decrypt the encrypted sub-trees in a nested way, which incurs a high evaluation cost. Another problem is that the method does not provide a user-specific schema, while the study in [1] shows that exposure of the full document schema may lead to information leakage. In order to handle these problems in published XML views and to overcome the limitations of current methods, this paper proposes a method called XFlat to publish a query friendly XML view. In summary, our contributions are as follows:

– We propose a method to generate the encrypted XML view. XFlat decomposes an XML tree into sub-trees with the same accessibility on each node within a sub-tree, and encrypts and stores them in the final XML view in a flat manner, so that nested encryption in view construction as well as nested decryption in query evaluation can be avoided. XFlat also supports a user-specific schema of the published XML view for each user.
– We propose a method to evaluate queries over the encrypted XML view. XFlat exploits the user-specific schema and the flat structure of the XML view to support a decryption-on-demand query evaluation strategy, and hence speeds up XML queries over the encrypted XML view.
– We prove that our method meets the security requirement and implement experiments to demonstrate that XFlat outperforms other methods in view generation and query evaluation.

The rest of the paper is organized as follows: Section 2 describes preliminary knowledge; Section 3 proposes the published XML view generation method; Section 4 discusses the method to evaluate queries over the XML view; Section 5 shows the experimental results; Section 6 reviews the related work; Section 7 concludes the paper and discusses future work.

2 Preliminaries

2.1 XML Security Access Specification

This paper adopts the basic idea of access control specifications on elements/attributes via XPath expressions [4, 10]. The access control specifications also support the overriding and inheritance of accessibility: the accessibility of an element node depends on an explicit assignment or, if not assigned, on the accessibility of the nearest ancestor.

Definition 1. An access control specification is a 5-tuple of the form (subject, object, condition, action, sign), where subject is a user to whom the authorization is granted, object is an XPath expression (discussed in 2.2) over the XML, condition takes the form [Q], where Q is a predicate expression over the object XPath expression, action is read/write, and the sign (+, −) of the authorization can be positive (allow access) or negative (forbid access).
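As a concrete reading of Definition 1, an access control specification can be represented as a simple record; the field values below are hypothetical examples in the spirit of the rules used later in the paper.

```python
from dataclasses import dataclass

@dataclass
class AccessSpec:
    """The 5-tuple of Definition 1; example values are illustrative only."""
    subject: str    # user (or group) the authorization is granted to
    object: str     # XPath expression selecting the protected elements
    condition: str  # predicate [Q] over the object expression ("" if none)
    action: str     # "read" or "write"
    sign: str       # "+" allows access, "-" forbids it

rules = [
    AccessSpec("A", "/", "", "read", "+"),
    AccessSpec("A", "/report/customer", '[location != "south"]', "read", "-"),
]
```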


Fig. 1. the framework of XML View Generation and Query Evaluation

2.2 Overview of the Published XML View

Fig. 1 illustrates the framework of view generation and query evaluation. In our approach, a secure published XML view is generated from an XML document, a DTD, and a set of users, each user with several access control specification rules and a key. The published XML view is composed of three layers, namely the schema layer containing the user-specific schemas, the metadata layer containing the relationships among the sub-trees, and the encrypted data layer for the sub-trees. After an encrypted XML view is published, an authorized user needs to obtain the corresponding keys securely. When user U wants to query the XML view, U submits the keys first; the system receives the keys, decrypts and exposes the schema information to U. User U formulates an XPath query based on the schema information, and the XPath is evaluated over the decrypted accessible parts of the XML view. The problem solved in this paper can be described as follows: given an XML instance I conforming to DTD D and a set of users U = {u1, …, un}, where each user ui is assigned a set of access rules Ai (Definition 1) and a key ki (1 ≤ i ≤ n), how do we generate an encrypted XML view which meets the following requirements: the security of the protected data, a specific schema for each user, and minimized space cost and query evaluation cost over the published XML view?

3 XML Security View Generation

The whole process of publishing the secure XML view is illustrated with the following example.

Example 1. Consider that each insurance company needs to generate a report and publish it to a well-known server every month. A fragment of the DTD and the related XML of the report is illustrated in Fig. 2. A document conforming to the DTD consists of a list of Customers (Customer*), each customer with child nodes describing its Name, Location and Categories of insurance. A Category of insurance contains information on the incoming, descriptions of the Category and a list of sub-Categories of insurance.


Fig. 2. Example of Fragment of Insurance DTD

Different groups of users have different privileges for accessing the different nodes in the tree. For example, staff in the government need to check the financial situation of companies, staff in other companies want to share some common data, and customers want to know the reputation of each insurance company.

Example 2. Access control policies for users in group A and in group B over the insurance report of Fig. 2 can be specified as follows:

Rule 1: (A, /, , read, +); // users in group A can access the root element report;
Rule 2: (A, /report/customer, /location != "south", read, -); // users in group A cannot access element customer when the location is not the "south" part of the city. Notice that rule 2 overrides rule 1 on customer when the condition of rule 2 is satisfied.
Rule 3: (B, /, , read, +);
Rule 4: (B, /report/customer/Category, //incoming > 1000$, read, -);

3.1 The LST and the User Specific Schema

The Local Similarity Sub-Tree

Given an XML document tree T and a set of access control specifications, we can determine the accessibility of each node n, denoted Acc(n). In order to exploit this region similarity when handling the XML document tree, we decompose the XML tree into a set of sub-trees in which every node has the same accessibility. Formally, such a sub-tree can be defined as:

Definition 2. (Local similarity sub-tree) A local similarity sub-tree T1 = (N1, E1, R1) for user u is a sub-tree of the XML document tree T = (N, E) such that user u can access each node n1 ∈ N1, and there is no other element node n3 ∈ N, n3 ∉ N1, that is connected with a node in N1 and has the same accessibility Acc(n3) as Acc(n1). Each node n1 ∈ N1 is assigned an id which is unique in the whole tree and generated randomly for the purpose of security; the whole sub-tree is also assigned a randomly generated unique id. A local similarity sub-tree is shortened to LST in the following. R1 denotes a set of relationships between LSTs of the form link(nid, tid), where nid is the id of a node in this LST and tid is the id of the LST T2 which is nearest under T1. T2 is called a child LST of T1.


Fig. 3. The LST for the User in Group B

Taking the XML in Fig. 2 and the access control specifications in Example 2, the LSTs for the user in group B are as described in Fig. 3. Three LSTs are obtained from the original XML tree in total. The link relationships from LST L1 to the other LSTs are (3, L2) and (3, L3). The LSTs for each user can be generated during one DLR traversal of the whole XML tree. In the first visit of each node n in this traversal, we determine Acc(n). If Acc(n) is the same as Acc(p), where p is the parent node of n, then nodes n and p belong to the same LST; otherwise, node n is marked as the root of a new LST. In the second visit of each node n, we remove the whole sub-tree rooted at node n as one LST if node n is marked as a root and Acc(n) is true. In this process, we also generate the random unique id for each node and for each LST.
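The following is a minimal sketch of this idea for a single user; the tree encoding, the `acc` map and the grouping of nodes by their nearest marked root are our own assumptions about details the text leaves implicit.

```python
import uuid

def decompose_into_lsts(tree, acc):
    """tree[n] lists the children of node n, acc[n] is Acc(n) for this user,
    node 0 is the document root.  Returns {lst_id: [node ids]} for the
    accessible regions; ids are random, as required for security."""
    roots, lst_of = {}, {}
    def visit(n, parent_acc, current_root):
        if acc[n] != parent_acc or current_root is None:
            current_root = n                     # n starts a new region
            roots[n] = uuid.uuid4().hex          # random unique LST id
        lst_of[n] = current_root
        for child in tree.get(n, []):
            visit(child, acc[n], current_root)
    visit(0, None, None)
    lsts = {}
    for n, r in lst_of.items():
        if acc[r]:                               # keep only accessible regions
            lsts.setdefault(roots[r], []).append(n)
    return lsts

# toy tree: 0 -> 1,2 ; 2 -> 3 ; the user may read nodes 0, 1 and 3 but not 2
print(decompose_into_lsts({0: [1, 2], 2: [3]},
                          {0: True, 1: True, 2: False, 3: True}))
```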

Virtual Accessible Tree and User Specific Schema

Definition 3. The virtual accessible tree T1 = (N1, E1) for user u in the XML document T = (N, E) is constructed as follows: for each LST L = (N2, E2, R2), we add an edge from the node denoted by nid to the root node of the LST denoted by tid, according to each link(nid, tid) in R2. The resulting structure is called the virtual accessible tree. Given an LST L1, if there is no LST L2 which is the parent of L1, we call L1 a top LST. If there is more than one top LST, we add a virtual LST L3 with links to all top LSTs, and L3 is called the root LST; otherwise, the only top LST is called the root LST.

The virtual accessible tree for the user in group B is illustrated in Fig. 4. The nodes in one circle belong to one LST, and the dotted lines among the trees represent the relationships among the sub-trees. As pointed out in [1], a user-specific schema helps the user formulate a query and reduces the possibility of information leakage. Since the access control policy in XFlat is XPath-based rather than DTD-based [1], and the interaction between XPath and DTD is in general undecidable, we generate the user-specific DTD not only from the full document DTD and the access control specifications, but also from the current XML document instance. In order to capture the accessibility of an element type in the DTD, we give the following definition.

Definition 4. (The production rule with accessibility on sub-element types) Given an element production rule P = A→α in the DTD, where α is a regular expression over the sub-elements, if each sub-element type e in the production rule α is signed with an


Fig. 4. the Virtual Accessible Tree

accessibility mark Y, N or C, denoted by accessOnType(e), where Y stands for accessible, N for inaccessible and C for conditionally accessible, we call P a production rule with accessibility on sub-element types.

For each element node n in the XML tree, we can determine the production rule A→α for p, where node p is the parent node of n and A is the element type of p. The accessibility mark on element type e for node n can then be obtained with the following rules, where the left column denotes the current accessibility mark of element type e and the top row denotes the accessibility of the current node n.

Table 1. The accessibility transition matrix

                      Acc(n) = Y            Acc(n) = N
accessOnType(e) = Y   accessOnType(e) = Y   accessOnType(e) = C
accessOnType(e) = C   accessOnType(e) = C   accessOnType(e) = C
accessOnType(e) = N   accessOnType(e) = C   accessOnType(e) = N

Fig. 5. The DTD fragment for the User in Group A and B

After generating the production rules with accessibility on sub-element types, we derive the user-specific schema recursively. That is, for each element type e in a production rule P, we retain element e in P if accessOnType(e) = Y; we replace element e with (e|ε) in P if accessOnType(e) = C; and we recursively find the first accessible element types under e and replace e with those accessible elements if accessOnType(e) = N.

Example 3. The user-specific DTDs for the users in group A and group B (of Example 2) are described in Fig. 5. Taking the left figure as an example, customer can be empty for users in group A: if no customer lives in the "south" part of the city, the validated XML view contains only the single element Report.
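A small sketch of this rewriting step is given below; the dictionary encoding of the DTD, the textual "(e|)" rendering of (e|ε) and the non-recursive DTD assumption are our own conventions.

```python
# Table 1 as a lookup: (current accessOnType(e), Acc(n)) -> new accessOnType(e)
TRANSITION = {
    ("Y", True): "Y", ("C", True): "C", ("N", True): "C",
    ("Y", False): "C", ("C", False): "C", ("N", False): "N",
}

def update_access_on_type(current, acc_n):
    """One application of the transition matrix while scanning the instance."""
    return TRANSITION[(current, acc_n)]

def rewrite_element(e, access_on_type, dtd):
    """Recursive rewriting of Section 3.1: keep e, make it optional, or splice
    in its first accessible descendants.  Assumes a non-recursive DTD fragment."""
    mark = access_on_type[e]
    if mark == "Y":
        return [e]
    if mark == "C":
        return [f"({e}|)"]                    # e may be empty for this user
    return [x for child in dtd.get(e, [])     # mark == "N": skip e itself
            for x in rewrite_element(child, access_on_type, dtd)]
```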


3.2 The Merging LST

It might seem that we could handle the case of multiple users simply by extending the LST of Definition 2 to require the same accessibility of each node for all users. However, an increasing number of users and related access specification rules leads to more and smaller LSTs, and hence to a larger set of relationships between LSTs. In order to balance the redundant XML space against the growing size of the relationships between LSTs, we propose a merging cost model for two LSTs.

Definition 5. (Merging cost model) Given an LST L1 = (N1, E1, R1) for user u1 and an LST L2 = (N2, E2, R2) for user u2, the benefit of merging L1 and L2 is defined as the size of {n | n ∈ N1 and n ∈ N2}, denoted Benefit(L1, L2). The cost of merging is defined as the size of {n | Acc(n) ≠ Acc(m), where m is the parent node of n}, denoted Cost(L1, L2). The relative benefit of merging L1 and L2 is Benefit(L1, L2)/Cost(L1, L2).

Intuitively, the cost of the merging can be regarded as the number of newly generated sub-trees, and the benefit as the number of nodes that belong to both LSTs. With the merging cost model, we can generate the LSTs for multiple users while taking the space cost of the final XML view into account: for each LST L1 of user u1 and each LST L2 of user u2, we calculate the benefit and cost of merging L1 and L2, and if the relative benefit exceeds a given threshold, we merge the two LSTs. We repeat the merging process until no two LSTs can be merged.
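Below is a small sketch of Definition 5; the per-user accessibility encoding and the parent map are assumptions we introduce to make the criterion computable, since the paper states it only in prose. The greedy loop simply repeats `should_merge` over all LST pairs until no pair qualifies.

```python
def merge_benefit(nodes1, nodes2):
    """Benefit(L1, L2): nodes shared by both LSTs (Definition 5)."""
    return len(nodes1 & nodes2)

def merge_cost(nodes1, nodes2, parent, acc):
    """Cost(L1, L2): nodes in the merged region whose accessibility vector
    differs from their parent's, i.e. the number of new sub-trees the merge
    would create.  acc[n] is the tuple of per-user accessibility of node n."""
    merged = nodes1 | nodes2
    return sum(1 for n in merged
               if parent.get(n) is not None and acc[n] != acc[parent[n]])

def should_merge(nodes1, nodes2, parent, acc, threshold=0.5):
    cost = merge_cost(nodes1, nodes2, parent, acc)
    return cost > 0 and merge_benefit(nodes1, nodes2) / cost >= threshold
```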

Fig. 6. The Merged LST in the XML

Taking the XML in Fig. 2 and the access specifications in Example 2, the merged LSTs for the two users can be described as in Fig. 6, where k1 is the key owned by the staff in group A and k2 is owned by the user in group B. The relative benefit of merging the LSTs rooted at node 1 for the two groups is 7/2. If the threshold for the relative merging benefit is set to 0.5, four LSTs are obtained from the original XML tree in total.

If one LST can be accessed by more than one user, which key should be used to encrypt the content of the LST? Without loss of generality, suppose authorized users A and B are assigned keys ka and kb respectively. We adopt the idea of an intermediate key [2] to solve the problem: the system generates a key k which is used to encrypt the sub-tree, and at the same time k is encrypted into ca


with key ka and into cb with key kb. ca and cb are called intermediate keys and are distributed with the published XML file. User A with ka can decrypt ca to obtain key k and then decrypt the encrypted LST; user B with kb proceeds similarly.

3.3 The Whole Framework of the View Generation

The final published XML view is composed of three layers, namely the schema layer, the metadata layer and the encrypted data layer. The schema of the published XML view can be described as:

PublishedView :- Schema*, Metadata*, EncryptedData
Schemas :- user, EncryptedSchema*
Metadata :- user, EncryptedLink*, Intermediatekey
EncryptedLink :- FromSubTreeID, FromID, ToSubTreeID
EncryptedData :- SubTree*
SubTree :- TreeID, CipherText;

Fig. 7. The Schema Fragment of the Published XML View

The schema layer, enclosed by <Schemas>, contains the specific schema for each user, enclosed by <EncryptedSchema>. The schema is generated with the method in Section 3.1 and is protected by encryption in the final XML view. The metadata layer, enclosed by <Metadata>, contains the relationships among the encrypted LSTs and the intermediate keys for the LSTs that can be accessed by the user encoded by <user>. The link information is also encrypted for security: the relation among LSTs is established from the node enclosed by <FromID> in the LST enclosed by <FromSubTreeID> to the root node of the LST enclosed by <ToSubTreeID>. The intermediate key for each LST is enclosed by <Intermediatekey>. The encrypted data layer, enclosed by <EncryptedData>, contains all encrypted LSTs; the encrypted data of one sub-tree are enclosed by <CipherText>. In order to distinguish the different sub-trees for the metadata layer, we assign each LST a randomly generated unique ID, enclosed by <TreeID>.
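The sketch below assembles the metadata and encrypted-data layers with intermediate keys, roughly following the structure of Fig. 7; the `encrypt` stand-in, the field layout and the TreeID stored alongside each intermediate key are illustrative assumptions (the paper itself uses AES-128).

```python
import os

def encrypt(key: bytes, data: bytes) -> bytes:
    """Stand-in for the symmetric cipher (AES in the paper); NOT real crypto."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def publish_view(lsts, user_keys):
    """lsts maps a random tree id to (serialized sub-tree, links, authorized
    users); user_keys maps a user to her long-term key.  Schema layer omitted."""
    metadata, encrypted_data = [], []
    for tree_id, (subtree, links, readers) in lsts.items():
        k = os.urandom(16)                               # fresh key for this LST
        encrypted_data.append({"TreeID": tree_id,
                               "CipherText": encrypt(k, subtree).hex()})
        for user in readers:
            metadata.append({"user": user,
                             "Intermediatekey": encrypt(user_keys[user], k).hex(),
                             "EncryptedLink": links,
                             "TreeID": tree_id})
    return {"Metadata": metadata, "EncryptedData": encrypted_data}
```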

4 Query Evaluation over the Encrypted XML View

Given an encrypted published XML view, an authorized user can query the XML view with the given key. Different from existing methods, the query evaluation needs to consider the underlying flat structure of LSTs. The basic query evaluation method over a view generated by XFlat proceeds in a top-down fashion: the system accepts an XPath query and the key, decrypts the root LST and evaluates the XPath in it. If data in another LST L1 are needed during the evaluation, we locate L1 through the metadata layer of the XML view, decrypt it, and evaluate the rest of the XPath in L1. This process continues until the XPath has been fully processed. The method thus supports a decryption-on-demand strategy: not all LSTs need to be decrypted during query evaluation. It works efficiently on the child axis {/} of XPath. However, the basic method incurs unnecessary


decryption cost in the case of non-deterministic operators in the XPath, such as the ancestor-descendant axis {//} or the wildcard {*}. The structural constraints in the user-specific schema can be used to optimize the XPath and reduce the search space. With the top-down evaluation strategy, the key problem is to remove the non-deterministic operators from the XPath and reduce the cost of unnecessary decryption of LSTs. We use an idea similar to [11, 13] to handle this problem: both the XPath and the DTD are translated into tree automata, and we define a product operation over the XPath tree automaton and the DTD automaton. The result is a tree automaton with an explicit element in each state transition rule, which can be regarded as the optimized form of the XPath in the presence of the DTD. With the user-specific schema taken into account, the XPath evaluation proceeds as illustrated in Fig. 8: we decrypt the schema information with the key and represent the schema as a tree automaton; we compute the product of the tree automaton for the XPath and the tree automaton for the DTD; the resulting tree automaton is evaluated directly on the decrypted LSTs; if another LST is needed in the evaluation, we locate that LST through the metadata layer of the view and continue evaluating the tree automaton in the decrypted LST. If a terminal state of the tree automaton is reached, the XML meets the requirement of the XPath, and the current nodes are returned to the user.
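As a rough illustration of the decryption-on-demand walk (without the automata machinery), consider the sketch below; `decrypt` and `match_children` are stand-ins for the cryptographic layer and for matching one explicit step against a node's children, and the view layout mirrors the metadata/TreeID structure described above.

```python
def evaluate(view, user_key, steps, decrypt, match_children):
    """Follow a '/'-only path step by step, decrypting an LST only when the
    walk actually enters it.  `steps` is the XPath already rewritten into
    explicit child steps."""
    root_id = view["root_tree_id"]
    decrypted = {root_id: decrypt(root_id, user_key)}    # only the root LST first
    frontier = [(decrypted[root_id], root_id)]
    for step in steps:
        next_frontier = []
        for node, tree_id in frontier:
            for child, child_tree in match_children(node, step, view, tree_id):
                if child_tree not in decrypted:          # fetch + decrypt lazily
                    decrypted[child_tree] = decrypt(child_tree, user_key)
                next_frontier.append((child, child_tree))
        frontier = next_frontier
    return [node for node, _ in frontier]
```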

[Fig. 8 flow: given the key and an XPath, decrypt the schema; optimize the XPath into the optimized form; decrypt the root LST; evaluate the optimized form against the LST; if needed, locate the next LST from the metadata and decrypt it.]

Fig. 8. the Framework of the Security Published XML View

5 The Analysis of XFlat

5.1 Security Discussion

Dan Suciu et al. define a security property for encrypted XML views in [2]; we discuss the security of the XML view generated by XFlat under the same criteria.

Property 1. Suppose t is an XML document, P is a set of access control specifications, and t0 is the XML view generated by XFlat. Then, for a user si with key ki: (1) there is an efficient way to reproduce t from t0; (2) if ki is not the correct key, the user has to guess the missing key. We omit the detailed proof due to limited space.

Since XFlat encrypts and stores sub-trees in a sequential way, someone may argue that it is possible for an attacker to replace or remove one LST in the XML view. In the case of


the replacement of an encrypted sub-tree, the replacement will cause a decryption failure, since the key used to encrypt the sub-tree is an intermediate key generated by the system. In the case of the removal of an encrypted sub-tree, the corresponding encrypted sub-tree can no longer be located from the randomly generated id reference stored in the metadata layer, which also raises an exception. In summary, query evaluation over an XML view generated by XFlat terminates abnormally upon the replacement or removal of any part of the final XML view.

5.2 Performance Study

We generate test data sets with XMark (http://monetdb.cwi.nl/) and with the XML generator using the NASA DTD (http://www.cs.washington.edu/research/xmldatasets), and we generate the XPath set with the XPath generator [12]. The experiments run on a Dell Optiplex 260 with a 2 GHz CPU and 512 MB RAM. The programming language is Java on Windows 2000 with JDK 1.3.1. We conduct extensive experiments on encrypted XML view generation and on query evaluation over the encrypted published XML view. The XML data sets are generated by XMark with factors from 0.001 to 0.005. We generate 10, 20, 30, 40 and 50 XPaths using the XPath generator with d = 0.05, w = 0.05, p = 0.05, where d, w and p denote the probabilities of {//, *, []} in the final XPath. The access control specifications are constructed from the generated XPath set; every 5 XPaths are assigned to one separate user. The condition of each access control specification is set to null, and the sign of each specification is set to permission or denial randomly. The XPath evaluation set is selected from the generated XPath set. The encryption method used is AES with a key size of 128 bits (http://www.bouncycastle.org/). We mainly compare XFlat with the nested method, focusing on the space and time cost of XML view construction and the query evaluation time over the encrypted XML views generated by the different methods. Since the results on the NASA data set show the same trend as those on the XMark data, only the results on XMark are reported.

The left part of Fig. 9 shows the time cost of XML view generation for the nested encryption method and the XFlat (merged) method. Since the length of the encrypted string and the time for initializing the AES engine in the nested method are much higher

[Fig. 9 plots: left, time (s) of view construction vs. the size of XML (KB, 100-600) for the Merge and Nested methods; right, size (KB) of the generated view vs. the number of access rules (10-50) for the Merge and Nested methods.]

Fig. 9. The Time Cost in View Construction


[Fig. 10 plots: time (s) of XPath evaluation vs. the size of XML (KB, 100-600) on the left and vs. the number of access rules (10-50) on the right, for XFlatNoDTD, XFlatDTD and Nested.]

Fig. 10. the Time Cost of the XPath Evaluation

than those in XFlat, the merged method outperforms the nested method. The right part shows the space cost of the generated XML view. Although XFlat introduces the cost of the metadata between the LSTs, the size of the XML view generated by XFlat is nearly the same as or less than that of the view generated by the nested method. This is because the number of encryption operations in the nested method is higher than in XFlat, and AES pads the original plaintext up to a certain size. The experiments also show that the space cost of the view generated by the nested method is lower than that of XFlat when fewer access rules are used in view generation, due to the smaller number of encryption operations.

Fig. 10 shows the query evaluation cost over the views generated by XFlat and by the nested method. We observe that XFlat with DTD consideration is much faster than the nested method and than XFlat without DTD: with the DTD taken into account, fewer LSTs need to be decrypted in the evaluation process than in the other methods.

6 Related Work

Most XML security research efforts focus on security techniques at the server side, where any access to the data must go through the security layer of the data server. Among these efforts, access-control models for XML data are studied in [1, 3, 4, 5]. How to express access control specifications in XML is discussed in XACL [6]. Query evaluation over secured XML documents is studied in [3]. The granularity of access, access-control inheritance, overriding, and conflict resolution have been studied for XML in [4, 5]. In particular, [1] proposes a method to optimize query evaluation over an XML security view: the user-specific DTD is derived from the access control specifications, and the user's query is rewritten against the original full document schema in PTIME [1]. However, that work is also done on the server side, which means [1] faces different problems from XFlat. For example, no query rewriting is needed on an XFlat-generated view, since the accessible view for a user can be merged dynamically. In addition, the access control specification is XPath-based in XFlat rather than DTD-based as in [1].

Access control on published XML views can be implemented by cryptographic techniques [2, 8], which ensure that published data can only be accessed by the


authorized user with keys. These works include an extension of XQuery to express access control specifications and a protected tree model for secure XML documents. The main problem is that tree-structured published data can entail nested encryption, which leads to a high query evaluation cost.

XPath evaluation is studied with or without a DTD in [9, 11]. A bottom-up evaluation strategy is discussed in [10], and a top-down evaluation strategy in the presence of a schema is studied in [11, 13]: the XPath and the schema can be expressed as some kind of automata, and the product of the automata can be used as the optimized form of the XPath. However, the XPath evaluation in our paper runs at the granularity of sub-trees.

7 Conclusion

In this paper, a method called XFlat is proposed to implement access control specifications over published XML documents. XFlat not only guarantees the security of the XML view, but also improves the efficiency of query evaluation over the view. Experimental results illustrate the effectiveness of our method. Future work includes detecting information inference over the published XML view.

References
1. W.F. Fan, C.Y. Chan, M.N. Garofalakis: Secure XML querying with security views. In Proc. of SIGMOD 2004.
2. G. Miklau, D. Suciu: Controlling access to published data using cryptography. In Proc. of VLDB 2003, pp. 898-909.
3. S. Cho, S. Amer-Yahia, L. Lakshmanan, D. Srivastava: Optimizing the secure evaluation of twig queries. In Proc. of VLDB 2002.
4. E. Damiani, S. De Capitani di Vimercati, S. Paraboschi, P. Samarati: A fine-grained access control system for XML documents. TISSEC 5(2), 2002, pp. 169-202.
5. E. Bertino, S. Castano, E. Ferrari: Securing XML documents with Author-X. IEEE Internet Computing 5(3), 2001, pp. 21-32.
6. S. Hada, M. Kudo: XML access control language: provisional authorization for XML documents. http://www.trl.ibm.com/projects/xml/axcl/xacl-spec.html
7. K. Aoki, H. Lipmaa: Fast implementations of AES candidates. In the 3rd AES Candidate Conference, NIST, 2000, pp. 106-120.
8. J. Feigenbaum, M.Y. Liberman, R.N. Wright: Cryptographic protection of databases and software. In Distributed Computing and Cryptography, 1991, pp. 161-172.
9. G. Gottlob, C. Koch, R. Pichler: Efficient algorithms for processing XPath queries. In Proc. of VLDB 2002, pp. 95-106.
10. J. Clark: XML Path Language (XPath), 1999. Available from the W3C, http://www.w3.org/TR/XPath.
11. M.F. Fernandez, D. Suciu: Optimizing regular path expressions using graph schemas. In Proc. of ICDT 1998, pp. 14-23.
12. C. Chan, P. Felber, M. Garofalakis, R. Rastogi: Efficient filtering of XML documents with XPath expressions. In Proc. of ICDE 2002, pp. 235-244.
13. J. Gao, D.Q. Yang, S.W. Tang, T.J. Wang: XPath logical optimization based on DTD. Journal of Software, 2004, 15(12), pp. 1860-1868.

Distributed Energy Efficient Data Gathering with Intra-cluster Coverage in Wireless Sensor Networks* Haigang Gong, Ming Liu, Yinchi Mao, Lijun Chen, and Li Xie State Key Laboratory for Novel Software Technology, China Department of Computer Science and Technology, Nanjing University [email protected]

Abstract. A wireless sensor network consists of a large number of small sensors with low-power transceivers, and can be an effective tool for gathering data in a variety of environments. The collected data must be transmitted to the base station for further processing. Since the network consists of sensors with limited battery energy, the method for data gathering and routing must be energy efficient in order to prolong the lifetime of the network. LEACH and HEED are two elegant energy-efficient protocols for maximizing the lifetime of a sensor network. In this paper, we present CoDEED, a distributed energy-efficient protocol. CoDEED clusters sensor nodes into groups and builds a routing tree among the cluster heads, in which only the root node communicates with the base station directly. In addition, CoDEED introduces the idea of area coverage to reduce the number of working nodes within a cluster in order to prolong network lifetime. Simulation results show that CoDEED performs better than LEACH and HEED.

1 Introduction

Recent advances in wireless communications and micro-electro-mechanical systems have motivated the development of extremely small, low-cost sensors that possess sensing, signal processing and wireless communication capabilities. Hundreds or thousands of these inexpensive sensors work together to build a wireless sensor network (WSN), which can be used to collect useful information (e.g., temperature, humidity) from a variety of environments. The collected data must be transmitted to the remote base station (BS) for further processing. WSNs have been envisioned to have a wide range of applications in both military and civilian domains [1]-[3], such as battlefield surveillance, machine failure diagnosis, and chemical detection. The main constraint of sensor nodes is their low, finite battery energy, which limits the lifetime and the quality of the network. Since sensor nodes are often left unattended, e.g., in hostile environments, which makes it difficult or impossible to recharge or replace their batteries, the protocols running on sensor networks must consume the resources of the nodes efficiently in order to achieve a longer network lifetime. There are several

This work is partially supported by the National Natural Science Foundation of China under Grant No.60402027; the National Basic Research Program of China (973) under Grant No.2002CB312002.



energy-efficient protocols proposed for wireless sensor networks [4]-[9], aiming to maximize the lifetime of the system under different circumstances. In this work, we present CoDEED, a distributed energy-efficient data gathering protocol. CoDEED clusters sensor nodes into groups and builds a routing tree among the cluster heads, in which only the root node communicates with the base station directly. In addition, CoDEED introduces the idea of area coverage to reduce the number of working nodes within a cluster in order to prolong network lifetime. The rest of the paper is organized as follows. Section 2 reviews the related work. Section 3 formulates the system model and the problem. Section 4 describes the CoDEED protocol in detail. Section 5 discusses the simulation results. Section 6 concludes the paper and presents future research directions.

2 Related Work

The goal of an energy-efficient protocol is the efficient transmission of all the data to the base station so that the lifetime of the network is maximized. Direct transmission is a simple approach to data gathering in which each node transmits its own data directly to the base station. However, if the base station is far away, the cost of sending data to it becomes too large and the nodes die quickly. In order to solve this problem, several approaches have been proposed [4]-[9].

LEACH [4]-[5] is one of the most popular data gathering protocols for sensor networks. The idea is to form clusters of the sensor nodes based on the received signal strength and to use local cluster heads as routers to the base station. This saves energy since only the cluster heads, rather than all sensor nodes, communicate with the base station. The algorithm runs periodically, and the probability of becoming a cluster head in each period is chosen to ensure that every node becomes a cluster head at least once within 1/P rounds, where P is the desired percentage of cluster heads and a round is defined as the process of gathering all the data from the sensor nodes to the base station. This ensures that the energy dissipation is balanced among all nodes. LEACH also employs data fusion, defined as the combination of several unreliable data measurements to produce a more accurate signal by enhancing the common signal and reducing the uncorrelated noise, to save energy by reducing the amount of data transmitted in the system. LEACH achieves up to an 8x improvement compared to the direct transmission approach. However, LEACH uses single-hop routing, where each node transmits directly to the cluster head and the base station; therefore, it is not applicable to networks deployed in large regions.

PEGASIS [6] takes this further and reduces the number of nodes communicating directly with the base station to only one, by forming a chain passing through all nodes where each node receives from and transmits to the closest possible neighbor. The data are collected starting from each endpoint of the chain until the randomized head node is reached, and are fused each time they move from node to node. The designated head node is responsible for transmitting the final data to the base station. PEGASIS achieves a better performance than LEACH by between 100% and 300% in


terms of network lifetime. However, PEGASIS introduces excessive delay for distant nodes on the chain, and every node needs to know the location of its one-hop neighbor, so the cost of building the chain is huge in large networks.

Like LEACH, HEED [10] is another cluster-based energy-efficient data gathering protocol, presented by O. Younis et al. HEED (Hybrid Energy-Efficient Distributed clustering) periodically selects cluster heads according to a hybrid of the node's residual energy and a secondary parameter, such as the node's proximity to its neighbors or the node degree. HEED achieves a fairly uniform cluster head distribution across the network. The authors prove that HEED can asymptotically almost surely guarantee connectivity of the clustered network, and simulation results demonstrate that it is effective in prolonging the network lifetime and supporting scalable data aggregation.

Unlike the above protocols, PEDAP [7] is a centralized protocol. PEDAP assumes that the base station knows the locations of all nodes in the network. With this information, the base station builds a minimum spanning tree that achieves a minimum-energy-consuming system and sends each node the required routing information. Hence, the setup cost of periodically establishing the scheme is very small compared to distributed protocols. However, PEDAP is not fault tolerant and, as a centralized algorithm, clearly has poor scalability.

3 System Model and Problem Statement

3.1 Wireless Channel Model

In a wireless channel, electromagnetic wave propagation can be modeled as falling off as a power-law function of the distance between transmitter and receiver. Two wireless channel models are proposed in [11]: the free space model and the multi-path fading model. If the distance between transmitter and receiver is less than a certain crossover distance (d0), the free space model is used (d^2 attenuation); if the distance is greater than d0, the multi-path fading model is used (d^4 attenuation).

3.2 Wireless Radio Model

We use the same radio model as in [5] for the radio hardware energy dissipation: the transmitter dissipates energy to run the radio electronics and the power amplifier, and the receiver dissipates energy to run the radio electronics. To transmit a k-bit message over a distance d, the radio expends energy as in (1).

E_Tx(k, d) = k * E_elec + k * e_fs * d^2,   d < d_0
E_Tx(k, d) = k * E_elec + k * e_amp * d^4,  d >= d_0        (1)

And to receive this message, the radio expends energy as in (2):

E_Rx(k) = k * E_elec        (2)
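As a quick sanity check of this radio model, the following Python sketch evaluates equations (1) and (2) together with the data-fusion cost described in the next paragraph. The numeric constants (E_elec, e_fs, e_amp, E_DA) are illustrative assumptions only, not the parameter values used in the paper's simulations.

import math

E_ELEC = 50e-9       # J/bit, electronics energy (assumed)
E_FS   = 10e-12      # J/bit/m^2, free-space amplifier energy (assumed)
E_AMP  = 0.0013e-12  # J/bit/m^4, multi-path amplifier energy (assumed)
D0     = math.sqrt(E_FS / E_AMP)  # crossover distance where the two amplifier terms are equal
E_DA   = 5e-9        # J/bit/signal, data-fusion energy (assumed)

def tx_energy(k_bits, d):
    """Energy to transmit a k-bit message over distance d, eq. (1)."""
    if d < D0:
        return k_bits * E_ELEC + k_bits * E_FS * d ** 2
    return k_bits * E_ELEC + k_bits * E_AMP * d ** 4

def rx_energy(k_bits):
    """Energy to receive a k-bit message, eq. (2)."""
    return k_bits * E_ELEC

def fusion_energy(k_bits, m_signals):
    """Energy to aggregate m k-bit signals into one representative k-bit signal."""
    return m_signals * E_DA * k_bits

example_round_cost = tx_energy(4000, 50.0) + rx_energy(4000) + fusion_energy(4000, 10)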

Eelec, the electronics energy, depends on factors such as the digital coding, modulation, and filtering of the signal before it is sent to the transmit amplifier, while the amplifier energy, e_fs*d^2 or e_amp*d^4, depends on the distance to the receiver. In addition, like LEACH and PEGASIS, CoDEED employs data fusion and aggregation to reduce the amount of data to deliver. EDA denotes the energy consumed by data fusion; for example, aggregating M k-bit signals into a single representative k-bit signal consumes energy M * EDA * k.

3.3 Problem Statement

The key idea in designing an energy-efficient protocol for a sensor network is to minimize the total energy consumed by the system in a round while balancing the energy consumption among all sensor nodes. Clustering sensor nodes is an effective technique for achieving this goal. A clustering-based data gathering protocol should meet the following requirements:

1) All wireless communication in the sensor network, both intra-cluster and inter-cluster, ought to use the free space channel model, avoiding the d^4 attenuation of distant transmissions. In LEACH, the cluster heads broadcast their messages over the whole network, which consumes energy drastically. Meanwhile, the cluster heads are elected randomly and may be located on the edge of the network; in Fig. 1a, nodes A-D are the cluster heads, and nodes A and C lie on the edge of the network, so their distant cluster members must dissipate more energy to communicate with them.

2) The protocol should be completely distributed, self-organizing and scalable, with sensor nodes making their decisions independently based on local information. PEGASIS must maintain global information to build an optimal chain, which is not scalable in large sensor networks; the same holds for PEDAP.

3) The cluster heads should be well distributed over the sensor field, as in HEED. In LEACH, the randomly selected cluster heads may be adjacent to each other, like nodes B and C in Fig. 1a, which increases the probability of inter-cluster interference.

4) Not all cluster members should work simultaneously. Sensors are usually deployed densely (up to 20 nodes/m3 [18]). In such a high-density network with energy-constrained sensors, it is neither necessary nor desirable to have all nodes work at the same time; turning some nodes off does not affect the system function as long as there are enough working nodes. If all sensor nodes work together, an excessive amount of energy is wasted, the collected data are highly correlated and redundant, and excessive packet collisions occur.

CoDEED is a distributed, clustering-based data gathering protocol with intra-cluster coverage. The area covered by a cluster is bounded by the cluster radius r: the nodes located within r of a cluster head can be members of that cluster. r is less than d0/2, which ensures that communication between adjacent cluster heads satisfies the free space model (requirement 1). The cluster heads selected by CoDEED are well distributed, which reduces inter-cluster interference (requirement 3). After the selection of cluster heads, CoDEED builds a routing tree among the cluster heads based on local information, in which only the root node communicates with the base station directly (requirement 2). Finally, within each cluster CoDEED selects just enough active nodes to ensure coverage of the cluster according to a coverage algorithm, called intra-cluster coverage (requirement 4). Fig. 1b shows CoDEED clustering.

Fig. 1. Clustering of LEACH (a) and CoDEED (b)

Fig. 2. Competing for cluster head

Assume that N nodes are dispersed randomly in a field, and that the following assumptions hold:

- Nodes are location-unaware.
- All nodes have the same capabilities and are capable of data fusion.
- Power control is available; intra-cluster and inter-cluster communication use different power levels.
- Nodes are left unattended after deployment.
- Nodes periodically sense the environment and always have data to send in each round of communication.

4 CoDEED Protocol Design

The operation of CoDEED is divided into rounds, as in LEACH. Each round includes a set-up phase and a working phase. In the set-up phase, clusters are organized, active nodes are selected (intra-cluster coverage) and the routing tree is constructed. In the working phase, data are gathered from the nodes to the base station. For convenience, some symbols are defined as follows:

Pinit: The initial percentage of cluster heads; it has no direct impact on the final number of cluster heads. The authors of [10] define the minimum number of cluster heads needed to cover the sensor field as 2A/(√27·r^2), so Pinit = 2A/(N·√27·r^2), where A denotes the area of the sensor field, N is the number of nodes and r is the cluster radius.

PC: The probability of becoming a cluster head, PC = Pinit * Ecur/Emax, where Ecur is the current energy of the node and Emax is its initial energy.
Ethreshold: The threshold energy of a node. When Ecur is less than Ethreshold, the node is not capable of being a cluster head.
SC: The set of candidate heads. When a CANDIDATES message is received, the sender is added to SC; when a CANCEL message is received, the sender is deleted from SC. The SC of a candidate head includes itself.
SH: The set of final cluster heads. When a HEAD message is received, the sender is added to SH.
RSS: The received signal strength of the signal broadcast by the base station. In the free space channel model, the distance to the transmitter can be estimated from the received signal strength.
Bids: The bid used when competing for cluster head.

// Cluster formation
1.  electable = FALSE
2.  PC = max(Pinit * Ecur/Emax, Pinit * Ethreshold/Emax)
3.  DO
4.    IF SH = empty AND SC = empty AND Ecur > Ethreshold
5.      electable = TRUE
6.    IF electable AND random(0, 1) < PC
7.      Broadcast(myID, Bids, CANDIDATES) within radius r
8.      Add myID to SC
9.      isCandidate = TRUE
10.   Wait T
11.   IF SC != empty
12.     electable = FALSE
13.   IF isCandidate
14.     IF myID = nodeID with max Bids in SC
15.       Broadcast(myID, HEAD)
16.       break
17.   Wait T
18.   IF isCandidate
19.     IF SH != empty
20.       Broadcast(myID, CANCEL)
21.       isCandidate = FALSE
22.   ELSE
23.     IF PC = 1
24.       Broadcast(myID, HEAD)
25.   P = PC
26.   PC = min(PC * 2, 1)
27. WHILE (P < 1)
28. IF SH != empty
29.   headID = nodeID with max Bids in SH
30.   Send(myID, JOIN) to headID
31. ELSE
32.   Broadcast(myID, HEAD)
// Selection of active nodes
33. IF is non-Cluster Head
34.   IF the number of 1-hop neighbors > Ncov AND random(0, 1) < 1 - 1/Ncov
35.     set node SLEEP
36.   ELSE
37.     set node ACTIVE and notify Cluster Head
// Routing tree building
38. IF isClusterHead
39.   Broadcast(myID, W, WEIGHT) within radius 2r
40.   Wait T
41.   ParentNode = neighbor head that sent the max WEIGHT, if that weight exceeds its own
42.   Send(myID, CHILD) to ParentNode
43. IF isClusterHead
44.   Broadcast TDMA schedule to active member nodes
// Working phase
45. IF isClusterHead
46.   Collect data from its members
47. ELSE
48.   send data to Cluster Head in its time slot

Fig. 3. CoDEED protocol pseudo-code

4.1 Cluster Formation

The pseudo-code for clustering is given in Fig. 3. Each node runs the algorithm independently to decide whether to be a cluster head or a cluster member. Only the nodes whose Ecur is greater than Ethreshold and whose SC and SH are empty are qualified to compete for cluster head. A node becomes a candidate head with probability PC and broadcasts a CANDIDATES message within cluster radius r; the message contains the node ID and its Bids for the election (Fig. 3, lines 6-9). PC is not allowed to fall below a certain threshold, e.g. Pinit * Ethreshold/Emax (line 2), in order to terminate the algorithm in O(1) iterations. After time T, which should be long enough to receive messages from any neighbor within cluster radius r, if a candidate node's SC includes only itself, meaning that no other node within its cluster radius r is competing for cluster head, it broadcasts a HEAD message to claim to be a final cluster head. Otherwise, candidate nodes select the node with the maximum Bids in SC. If this node is itself, meaning that it wins the competition, it broadcasts a HEAD message (lines 13-16). If not, it waits a further time T for the possible HEAD message from the node with the maximum Bids in its SC. After this time T, a candidate that has received a HEAD message, and is therefore covered by a final cluster head, broadcasts a CANCEL message, and the nodes located within range r of it delete its node ID from their SC (lines 18-24). A non-candidate node loses the right to compete when it receives a CANDIDATES message, and it then waits for the subsequent HEAD message. If it receives a HEAD message within time T, it does not compete any more. If it receives a CANCEL message, it deletes the node that sent the message from its SC, and if SC becomes empty it regains the right to compete and doubles its PC (lines 4-5). After the iterations, if SH is empty, which means that no cluster head covers the node, it broadcasts a HEAD message to become a cluster head itself; otherwise it sends a JOIN message to the node with the maximum Bids in its SH (lines 28-32). For example, as shown in Fig. 2, n1 and n2 are within cluster radius r of each other, and so are n2, n3 and n4. They broadcast CANDIDATES messages within radius r at the same time. Assume the Bids of the four nodes satisfy n1 > n2 > n3 > n4. Then n1 becomes a final cluster head and broadcasts a HEAD message. n2 loses the competition with n1 and broadcasts a CANCEL message. n3 and n4 receive the CANCEL message from n2 and delete n2 from their SC, and n3 will compete with n4 for cluster head in the next iteration. In addition, the nodes that lie in the shadowed area delete n2 from their SC and will compete for cluster head again in the next iteration, since their SC is now empty. Bids may be the node's current energy or another parameter such as the node degree or the node's communication overhead. For simplicity, we choose the residual energy of a node as its bid when contending for cluster head.

4.2 Selection of Active Nodes

Selection of active nodes within a cluster is related to studies of the coverage problem in WSNs, which has been studied in recent years [12]-[17]. In most cases, "coverage" means area coverage, i.e., every point in the monitored field should be sensed by at least one sensor. When the coverage ratio falls below some predefined value, the WSN can no longer function normally. The authors of [17] argue that it is hard to guarantee full coverage of a randomly deployed area even if all sensors are on duty, and that small sensing holes are not likely to influence the effectiveness of the sensor network and are acceptable for most application scenarios. According to their analysis, a node can sleep randomly when it has 4 or more neighbors while still maintaining more than 90% coverage of the monitored field. We introduce this idea into clusters, calling it "intra-cluster coverage": only some active nodes are selected within each cluster while enough coverage of the cluster is maintained. As in Fig. 3, cluster members learn their 1-hop neighbors within the sensing radius rs when they send JOIN messages to their cluster head. After clustering, if the number of a member's 1-hop neighbors is greater than a threshold Ncov, it goes to sleep with probability 1 - 1/Ncov, which means the more neighbors it has, the higher its probability of sleeping (lines 33-37); otherwise it becomes active and notifies its cluster head. Ncov is related to the quality of coverage required by the specific application: following [17], if the application requires 95% area coverage, Ncov is set to 6; if 90% coverage is sufficient, Ncov is set to 4. Using intra-cluster coverage has two advantages. The first is to reduce energy consumption in each round by turning the radios of redundant nodes off, so that network lifetime is prolonged. The second is to reduce TDMA scheduling overhead. Once clusters are formed, every cluster head broadcasts a TDMA schedule packet that contains the IDs of its members and the time slot allocated to each member. When node density is high, the number of cluster members grows, so the TDMA schedule packet becomes longer and consumes more energy to transmit and receive. With intra-cluster coverage, however, the TDMA schedule packet does not become too long, because the number of active nodes varies only slightly as node density grows.
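The sleep decision itself is a one-line rule per member node. The sketch below restates it in Python for clarity; the neighbor counts are hypothetical placeholders, and Ncov comes from the coverage level chosen by the application as described above.

import random

def choose_state(num_one_hop_neighbors, n_cov):
    """Intra-cluster coverage rule: redundant members sleep with probability 1 - 1/Ncov."""
    if num_one_hop_neighbors > n_cov and random.random() < 1.0 - 1.0 / n_cov:
        return "SLEEP"
    return "ACTIVE"   # active nodes notify their cluster head

# Ncov = 6 targets roughly 95% area coverage, Ncov = 4 roughly 90% (per [17]).
states = [choose_state(n, 6) for n in (2, 5, 9, 12)]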
4.3 Construction of Routing Tree
After clustering, each cluster head broadcasts a WEIGHT message within radius 2r, which contains its node ID and weight W. A cluster head compares its own weight with the weights contained in the WEIGHT messages received from its neighboring cluster heads. If its own weight is smaller, it selects the neighbor with the largest weight as its parent and sends a CHILD message to notify the parent node (Fig. 3, lines 39-43). Finally, a routing tree is constructed whose root node has the largest weight among all cluster heads. Notably, a cluster head may not receive any WEIGHT or CHILD messages when nodes are sparsely distributed, which occurs when most nodes have died in the later phase of the network lifetime. If it does not receive any message within a specified time, it communicates with the base station directly. After routing tree construction, cluster heads broadcast a TDMA schedule to their active member nodes to prepare for data gathering. For example, as shown in Fig. 2, nodes A-E are cluster heads with their weights in parentheses. B receives WEIGHT messages from A, C, D and E and selects node A as its parent. Similarly, nodes D and E choose B as their parent, while C chooses A as its parent. Node A receives WEIGHT messages from nodes B and C, but their weights are less than its own, so A becomes the root node that communicates with the base station, and the routing tree is built. We define the weight W of node i as Wi = RSSi * Ecur/Emax. After the deployment of the sensors, the base station broadcasts a probing message to all sensors, and each sensor acquires its RSS from the received signal strength. RSS remains constant during the network lifetime unless the base station changes its location or the sensor nodes are mobile. Clearly, a node that is closer to the base station and has more residual energy has a higher weight and is therefore more likely to become the root node of the routing tree.

4.4 Working Phase

Data gathering begins after the cluster heads broadcast their TDMA schedules to their active member nodes. The active member nodes in each cluster send their aggregated data to their cluster head in the allocated TDMA slots. Once the cluster head has received all the data, it performs data fusion to enhance the common signal and reduce the uncorrelated noise among the signals. If it has no child in the routing tree, the resulting data are sent to its parent immediately; otherwise it waits for the data from its children. A parent receives its children's data, performs data fusion once more, and sends the aggregated data on to its own parent. Finally, the root node sends the gathered data to the base station, the network enters the next round, and the operation described above is repeated. To reduce clustering overhead, each round may include several data cycles, where a data cycle is defined as gathering the data sensed by all nodes to the base station once. Clustering is triggered every L data cycles, where L is a predefined parameter.
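The working phase is essentially a convergecast over the routing tree. The recursive sketch below illustrates the flow of one data cycle; the tree structure, the fusion function and the sample values are simplified assumptions, not the paper's simulation code.

def fuse(signals):
    # Placeholder for data fusion: many k-bit signals -> one representative k-bit signal.
    return sum(signals) / len(signals)

class ClusterHead:
    def __init__(self, member_data, children=None):
        self.member_data = member_data   # data received from active members in TDMA slots
        self.children = children or []   # child cluster heads in the routing tree

    def gather(self):
        # Fuse the members' data, then fuse again with whatever the children report.
        local = fuse(self.member_data)
        reports = [child.gather() for child in self.children]
        return fuse([local] + reports) if reports else local

# The root collects the whole tree and would forward the result to the base station.
leaf = ClusterHead([1.0, 1.2, 0.9])
root = ClusterHead([1.1, 1.3], children=[leaf])
packet_for_base_station = root.gather()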

Table 1. Simulation Parameters

Parameters                   Value
Sensing field                200m x 200m
Node numbers
Cluster radius r
Sensing radius rs
Sink position                (100, 300)
Initial energy
Data packet size
Broadcast packet size
Ethreshold
Eelec
efs
eamp
EDA
Threshold distance d0
Data cycles per round (L)

Table 2. Average number of active nodes per cluster

Node numbers    LEACH    HEED    CoDEED
100             20       4.76    4.76
200             18       7.69    7.53
400             20       13.3    11.8
600             24       20.3    13.4
800             29       23.5    13.7
1000            31.6     29.4    13.1
1200            34.7     36.4    13.6

5 Performance Evaluation

To evaluate the performance of CoDEED, we simulated the LEACH and HEED protocols as baselines. The simulation parameters are listed in Table 1. Ncov is set to 6 to ensure 95% coverage, which meets the requirements of most WSN applications [17]. The sensing field is 200m x 200m and the sink is fixed at (100, 300). The number of nodes varies from 200 to 1200, representing deployments of different node density. We observe the performance of the three cluster-based protocols in terms of network lifetime and protocol overhead under different node densities. Network lifetime has two definitions: First Node Dies (FND), the time when the first node in the network dies, and Last Node Dies (LND), the time when the last node dies.

5.1 Simulation Results

Table 2 shows the average number of active nodes per cluster under different node densities. LEACH behaves differently from the other two protocols because the number of clusters in LEACH is calculated optimally for each node count. Both HEED and CoDEED are cluster-based protocols in which the size of a cluster is bounded by the cluster radius r, so the number of clusters in HEED and CoDEED depends on the sensing field rather than on the number of nodes in the network. When node density is low, the average number of active nodes per cluster in HEED is the same as in CoDEED, because all nodes must be active to ensure coverage of the cluster. As node density grows from low to high, the number of active nodes per cluster in HEED increases linearly, whereas in CoDEED it grows much more slowly and remains roughly constant once the node density is high enough. As seen in Table 2, when there are 600 nodes in the field, the number of active nodes per cluster is 20.3 for HEED compared with 13.4 for CoDEED. With fewer active nodes per cluster, CoDEED achieves lower protocol overhead and longer network lifetime.

Fig. 4. Protocol overhead under different node density (protocol overhead in J vs. number of nodes in the network)

Fig. 4 shows the protocol overhead of the three protocols. For all three, protocol overhead includes the energy consumed by clustering and by broadcasting the TDMA schedule; for the latter two protocols it additionally includes the energy dissipated in constructing the routing tree. HEED consumes more energy on protocol overhead than CoDEED as node density increases. The reason is that the TDMA schedule packet of HEED grows longer and longer as node density rises, which consumes more energy to transmit and receive. The energy consumed for TDMA scheduling in CoDEED remains the same when node density is high, and its overhead increases only due to cluster formation and routing tree construction. Hence the overall overhead of CoDEED increases more slowly than that of HEED.

Fig. 5 and Fig. 6 show the FND and LND network lifetime under different node densities. LEACH performs the worst of the three protocols due to its unbalanced energy consumption. As node density increases, the FND of LEACH and HEED decreases while their LND increases. CoDEED behaves like HEED when the node density is lower than 0.01 nodes/m2, but when the density rises above 0.01 nodes/m2, the FND and LND of CoDEED increase drastically compared with LEACH and HEED. When 1200 nodes are deployed in a field of 200m x 200m (corresponding to a node density of 0.03 nodes/m2), the LND of CoDEED is about 400 times that of LEACH and 4 times that of HEED. This is because intra-cluster coverage reduces the number of working nodes in each round; more nodes are asleep, so more energy is preserved and the network lifetime is prolonged.

Fig. 5. FND of protocols (network lifetime in rounds vs. number of nodes in the network)

Fig. 6. LND of protocols (network lifetime in rounds vs. number of nodes in the network)

6 Conclusions and Future Work

In this paper, we present CoDEED, a distributed energy-efficient data gathering protocol with intra-cluster coverage. CoDEED clusters sensor nodes into groups and builds a routing tree among the cluster heads in which only the root node communicates with the base station directly. In addition, CoDEED uses the idea of area coverage to reduce the number of working nodes within each cluster in order to prolong network lifetime. Simulation results show that CoDEED far outperforms LEACH. Compared with HEED, CoDEED performs almost the same when node density is low, but it performs far better when the node density rises above 0.01 nodes/m2.

Reference [1] D. Estrin and R. Govindan, J. Heidemann, and S. Kumar, “Next century challenges: scalable coordination in sensor networks”, in Proc. of MobiCOM '99, August 1999. [2] M. Tubaishat and S. Madria, “Sensor networks: an overview”, IEEE Potentials, 22(2), 20–23, 2003. [3] W. R. Heinzelman, A. Chandrakasan, and H. Balakrishnan, “ Energy-efficient communication protocol for wireless microsensor networks”, in Proc. of 33rd Annual Hawaii International Conference on System Sciences, Hawaii, January 2000.

[4] W. R. Heinzelman, et al. “An Application -Specific Protocol Architecture for Wireless Microsensor Networks”, IEEE Transactions on Wireless Communications, vol. 1, no. 4, Oct. 2002 [5] S. Lindsey, et al. “Pegasis: Power efficient gathering in sensor information systems”, in Proc. of IEEE Aerospace Conference, March 2002 [6] Huseyin Ozgur Tan et al. “Power Efficient Data Gathering and Aggregation in Wireless Sensor Networks”, SIGMOD Record, Vol. 32, No. 4, December 2003 [7] S. Bandyopadhyay, et al. “An Energy- Efficient Hierachical Clustering Algorithm for Wireless Sensor Networks”, in Proc. of IEEE INFOCOM, April 2003 [8] Manjeshwar A, et al. “TEEN: A routing protocol for enhanced efficiency in wireless sensor networks”, in Proc. of PDPS’01, IEEE Computer Society, 2001 [9] R. Williams. “The geometrical foundation of natural structure: A source book of design”, Dover Pub. Inc., New York, pp. 51-52, 1979. [10] O. Yonis, et al. “HEED: A Hybrid, Energy-Efficient, Distributed Clustering Approach for Ad-hoc Sensor Networks”, IEEE Transactions on Mobile Computing, volume 3, issue 4, Oct-Dec, 2004 [11] T. Rappaport. “Wireless Communications: Principles and Practice”, Prentice-Hall Inc., New Jersey, 1996 [12] Chi-Fu Huang, et al. “The Coverage Problem in a Wireless Sensor Network”, in Proc. of WSNA’03, September 19, 2003, San Diego, California, USA. [13] D. Tuan and N. D. Georganas, “A Coverage-preserving node scheduling scheme for large wireless sensor networks,” in Proceedings of First ACM International Workshop on Wireless Sensor Networks and Applications, pp 32-41, 2002. [14] F. Ye, G. Zhong, S. Lu, and L. Zhang, “PEAS: A robust energy conserving protocol for long-lived sensor networks,” in Proceedings of the 23nd International Conference on Distributed Computing Systems (ICDCS), 2003 [15] H. Zhang and J.C. Hou, “Maintaining scheme coverage and connectivity in large sensor networks,” in Proceedings of NSF International Workshop on Theoretical and Algorithmic Aspects of Sensor, Ad Hoc wireless, and Peer-to-Peer Networks, 2004. [16] X. Wang, G. Xing, Y. Zhang, C. Lu, R. Pless, and C.D. Gill, “Integrated Coverage and Connectivity Configuration in Wireless Sensor Networks,” in Proceedings of the First International Conference on Embedded Networked Sensor Systems, pp 28-39, ACM Press, 2003. [17] Y. Gao, K. Wu, and F. Li, “Analysis on the redundancy of wireless sensor networks,” in Proceedings of the 2nd ACM international conference on Wireless sensor networks and applications (WSNA 03), September 2003, San Diego, CA. [18] E. Shih, S. Cho, N. Ickes, R. Min, A. Sinha, A. Wang, A. Chandrakasan, “Physical Layer Driven Protocol and Algorithm Design for Enery-Efficient Wireless Sensor Networks,” ACM SIGMOBILE Conference on Mobile Computing and Networking, July 2001, ROME, Italy.

QoS-Driven Web Service Composition with Inter Service Conflicts

Aiqiang Gao(1), Dongqing Yang(1), Shiwei Tang(2), and Ming Zhang(1)

(1) School of Electronics Engineering and Computer Science, Peking University, Beijing, 100871, China
    {aqgao, ydq, mzhang}@db.pku.edu.cn
(2) [email protected]

Abstract. Web service composition provides a way to build value-added services and web applications by integrating and composing existing web services. In a two-stage approach to web service composition, an abstract specification is synthesized in the first phase, and an executable composition process is generated in the second phase by selecting and binding a concrete web service for each abstract task. However, the selection of a web service for one task is not a stand-alone operation, as there may be compatibility conflicts between this service and the services chosen for other tasks. This paper gives a method for dynamic web service selection in the presence of inter service dependencies and conflicts. First, a method based on Integer Programming is discussed to implement the process of dynamic service selection. Then, inter service conflicts are explored, expressed formally, and accommodated into the IP-based method. By combining domain-specific service conflicts into a two-stage approach, the method in this paper provides a unified approach for dynamic service selection that can integrate both QoS constraints and other domain-specific constraints. Experiments show that this method is effective and efficient.

1 Introduction

The emerging paradigm of web services promises to bring to distributed computation and services the flexibility that the web has brought to the sharing of documents (see [1]). Web service composition ([1,2]) is to build value-added services and web applications by integrating and composing existing elementary web services. Because there are usually multiple alternative services for one task, it is time consuming to build composite web services based on this large service base. So, it is promising to adopt a two-stage approach. In the first phase, abstract specifications are defined; concrete web services are selected and bound according to QoS optimization for each abstract service in the second phase. In this paper, it is assumed that a composite web service defined using web service type (or abstract web service) is already present. Then the problem is to transform this abstract specification of composite service into an executable
process by selecting and binding a concrete web service (or web service instance) for each task. However, the selection of a web service for one abstract task is not a stand-alone operation, as there may be dependencies on previously chosen services and compatibility conflicts between those chosen services. This paper will discuss the impact of inter service conflicts on the process of service selection. Firstly, an Integer Programming based method is introduced for dynamic service selection([14]). It is defined on the base of QoS criteria with the objective to optimize overall quality parameter. Then, inter service incompatibility is formally expressed and accommodated into this method by representing conflicts between services as domain specific constraints of the programming model. Though only incompatible web service pairs are discussed, this approach is general and can be extended to specify other domain specific constraints. Experiments are conducted to evaluate the effectiveness and efficiency of those methods. This paper is organized as follows: Section 2 gives a description for web service and composite web service; Section 3 builds an integer programming problem for web service composition with QoS optimization; Section 4 presents inter service conflicts formally; Section 5 shows some experimental results to illustrate the effectiveness and efficiency of this method; Section 6 reviews related works and Section 7 concludes this paper and discusses future work.

2 Web Service and Composite Web Service

2.1 Web Service Description

For an application to use a web service, its programmatic interfaces must be precisely described. WSDL [12] is an XML grammar for specifying properties of a web service such as what it does, where it is located and how it is invoked. A WSDL document defines services as collections of network endpoints, or ports. In WSDL, the abstract definition of endpoints and messages is separated from their concrete network deployment or data format bindings. This allows the reuse of abstract definitions: messages, which are abstract descriptions of the data being exchanged, and port types that are abstract collections of operations. The concrete protocol and data format specifications for a particular port type constitute a reusable binding. A port is defined by associating a network address with a reusable binding, and a collection of ports defines a service([2]). Though the interfaces are essential to automatic composition and verification of composite web services, QoS properties of web services provide supports for dynamic service selection and composition. QoS model for web service in this paper includes four criteria: reliability, cost, response time and availability([14]). Reliability: the probability that a service request is responded successfully. Its value is computed using: Num(success)/Num(all), where Num(success) records the number of successful invocation and Num(all) the number of all invocation. Cost: The cost cost(s, op) is the cost that a service requester has to pay for invoking the operation op.

Response time: response(s) measures the expected delay between the moment when a request is sent and the moment when the results are received.
Availability: The availability availability(s) of a service s is the probability that the service is accessible.
So, the quality vector for an operation op of a service s is defined as quality(s, op) = (cost(s, op), response(s, op), reliability(s), availability(s)).

2.2 Web Service Composition

Web service composition is the process of building value-added services and web applications by integrating and composing existing elementary web services. In contrast to a web service instance that defines the concrete network address and programmatic interfaces of one single web service,web service type specifies the requirements for a class of web service instances. For web service composition, it is promising to define an abstract process using web service type and then bind a service instance for each task. It is assumed in this paper that the abstract representation of composite web service is already present. The task nodes are defined using web service types, independent of any web service instances, any of which can be candidates for executing this task. The standards for representation of composite web service have been proposed for years, such as BPEL4WS [9], WS-Choreography[10], OWL-S[11] etc. The constructs and composition patterns in those standards can be summarized using workflow patterns discussed by [3]. The usually used patterns are sequential, conditional choice (exclusive), parallel and iterative.

3 Integer Programming for Dynamic Web Service Selection

In this section, the concepts of Integer Programming are first introduced. Then a zero-one IP model for web service composition is defined according to [14].

3.1 Concepts of Integer Programming

Definition 1. (Integer Programming) In [4], the author states "Any decision problem (with an objective to be maximized or minimized) in which the decision variables must assume nonfractional or discrete values may be classified as an integer optimization problem." For the sake of standardization, the linear integer problem is written as:

Maximize (or minimize)  z = Σ_{j=1}^{n} c_j x_j
subject to              Σ_{j=1}^{n} a_ij x_j + s_i = b_i,
                        s_i ≥ 0, x_j ≥ 0, x_j integer,
                        i ∈ {1, 2, ..., m}, j ∈ {1, 2, ..., n}

Definition 2. (Zero-One Integer Programming) Zero-one integer programming is a special case of the integer problem in which the variables take the values 0 or 1. It is also called binary integer programming.

The Integer Programming model provides a formalism for a class of decision problems. For the problem of dynamic service selection, the decision about whether one concrete web service should be selected to execute one abstract task is a binary variable with value 0 or 1, and the objective of dynamic service selection is to optimize some utility while keeping the user-defined constraints satisfied. Thus, it is natural to map the problem of dynamic service selection to a zero-one integer programming problem. The mapping is discussed in the following subsection.

3.2 Integer Programming for Dynamic Web Service Selection

To define a linear IP model, three inputs should be provided: a set of decision variables, an objective function and a set of constraints, where both the objective function and the constraints must be linear. The output of an IP problem is the maximum (or minimum) value of the objective function and the values of the decision variables.

(1) Decision Variables. For every web service sij that belongs to service class i, an integer variable yij is defined. yij is 1 if web service sij is selected for executing task i, and 0 if sij is not selected.

(2) Objective Function. The objective function reflects some benefit or utility for service requesters. It is defined as a weighted sum of the quality dimensions that are favorable to the requesters; such quality criteria are cost (negated), availability and reliability. Response time is excluded from the objective function definition because it is more natural to treat it as an end-to-end constraint. To express the objective function, the quality matrix for a service type i is first presented. Since it is already determined which operation of a web service will be invoked, the quality matrix can be generated by merging the quality vectors of all candidate services. The matrix for i is an Mi x 4 matrix, where Mi is the number of candidate services. Each row corresponds to one candidate web service, with the columns corresponding to response time, cost, availability and reliability, respectively. The matrix is of the form:

QosMatrix_i = [ t_11   c_12   a_13   r_14
                t_21   c_22   a_23   r_24
                ...    ...    ...    ...
                t_Mi1  c_Mi2  a_Mi3  r_Mi4 ]

Then the matrix is normalized so that its entries fall into [0,1]. Based on this matrix, the objective function is defined for the whole composite web service. For nodes without control constructs, the contribution to the objective function is defined using formula (1), where Σ_{j=1}^{Mi} QosMatrix_i(j, k) × y_ij can be seen as the k-th dimension of the composite construct. Thus, formula (1) is the weighted sum of the quality dimensions; w_k is the weight of the k-th dimension, with Σ_{k=2}^{L} w_k = 1.

objfun_i = Σ_{k=2}^{L} Σ_{j=1}^{Mi} w_k × QosMatrix_i(j, k) × y_ij        (1)
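To make formula (1) concrete, the short sketch below computes one task's contribution to the objective for a construct-free node. The weights and the already-normalized matrix are made-up values, used only to illustrate the indexing convention (the paper's 1-based columns 2-4, i.e. cost, availability and reliability, become 0-based columns 1-3 here).

# Hypothetical normalized QoS matrix for one task: rows = candidates,
# columns = (response, cost, availability, reliability), all scaled to [0, 1].
qos = [
    [0.4, 0.8, 0.9, 0.7],
    [0.6, 0.5, 0.7, 0.9],
]
weights = {1: 0.4, 2: 0.3, 3: 0.3}   # w_k for the cost, availability, reliability columns
y = [0, 1]                            # candidate 2 selected for this task

objfun_i = sum(
    w * qos[j][k] * y[j]
    for k, w in weights.items()
    for j in range(len(qos))
)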

For conditional, parallel and iterative cases, the contribution to the objective function is defined in the following way. First, the QoS matrices for all components that participate in the composite construct are built. Then each quality dimension is computed for the composite construct. And again, the weighted sum of the quality dimensions is computed as its contribution to the objective function. Conditional and parallel constructs are used to illustrate this process. Suppose there are P branches in a conditional construct, with probabilities p_1, p_2, ..., p_P, and that the numbers of candidates for those P service types are M_1, M_2, ..., M_P, respectively. Then each of the four dimensions mentioned above is computed using formula (2):

Σ_{i=1}^{P} p_i × ( Σ_{j=1}^{Mi} y_ij × QosMatrix_i(j, k) )        (2)

where k denotes the k-th dimension (k ∈ {1, 2, 3, 4}). With these dimensions defined, the weighted sum can be computed.

For parallel constructs, suppose there are P parallel tasks and that the numbers of candidates for those P service types are M_1, M_2, ..., M_P, respectively. Different from the conditional case, the definition of quality for a parallel construct is not the same for all dimensions. The quality of each branch is defined as Σ_{j=1}^{Mi} QosMatrix_i(j, k) × y_ij. For the time dimension, the overall time is max_{1≤i≤P} { Σ_{j=1}^{Mi} QosMatrix_i(j, 1) × y_ij }. Because this definition cannot be used in the objective function directly, one preferred branch x is picked to represent the time dimension. The cost dimension is handled in the same way as in the conditional case. For the reliability and availability dimensions, the overall quality is defined as Π_{i=1}^{P} Σ_{j=1}^{Mi} (y_ij × QosMatrix_i(j, k)). However, this formula cannot be used in a linear IP model, so the definition for these two dimensions is changed into formula (3):

Π_{i=1}^{P} Π_{j=1}^{Mi} QosMatrix_i(j, k)^{y_ij}        (3)

Then the logarithm function ln is applied to formula (3), which results in formula (4):

Σ_{i=1}^{P} Σ_{j=1}^{Mi} (y_ij × ln(QosMatrix_i(j, k)))        (4)

Again, the weighted sum of these quality dimensions is computed as the construct's contribution to the objective function. Finally, the objective function for the composite web service is computed by summing over all constructs, as in formula (5):

Σ_{i=1}^{N} objfun_i        (5)

(3) Constraints

(a) Exclusive allocation constraint: for each task i, only one of the Mi candidates can be selected as the execution service. This is expressed as Σ_{j=1}^{Mi} y_ij = 1.

(b) Response time constraint: the end-to-end execution time is required to stay within some given limit.

(c) Other constraints: other user-specified constraints can also be included in this method.
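Putting the pieces of Section 3 together, the following sketch shows how such a 0-1 model could be assembled with the open-source PuLP library (the paper itself uses the lp_solve package). The task/candidate data, weights and response-time bound are hypothetical, and the objective keeps only the simple weighted-sum term for construct-free tasks.

from pulp import LpProblem, LpVariable, LpMaximize, lpSum

# Hypothetical normalized QoS data: qos[i][j] = (response, cost, availability, reliability),
# with the cost column already normalized so that larger means cheaper.
qos = {0: [(0.4, 0.8, 0.9, 0.7), (0.6, 0.5, 0.7, 0.9)],
       1: [(0.3, 0.6, 0.8, 0.8), (0.7, 0.9, 0.6, 0.7)]}
w = (0.4, 0.3, 0.3)        # weights for cost, availability, reliability
response_bound = 1.0       # assumed end-to-end response-time limit

prob = LpProblem("service_selection", LpMaximize)
y = {(i, j): LpVariable(f"y_{i}_{j}", cat="Binary")
     for i in qos for j in range(len(qos[i]))}

# Objective: weighted sum of cost, availability, reliability over the selected services.
prob += lpSum(w[k - 1] * qos[i][j][k] * y[i, j]
              for i in qos for j in range(len(qos[i])) for k in (1, 2, 3))

# (a) exactly one candidate per task; (b) end-to-end response-time bound.
for i in qos:
    prob += lpSum(y[i, j] for j in range(len(qos[i]))) == 1
prob += lpSum(qos[i][j][0] * y[i, j]
              for i in qos for j in range(len(qos[i]))) <= response_bound

prob.solve()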

4 Expressing Inter Service Compatibility Conflicts

In Section 3, the process of dynamic service selection was defined on the assumption that the selection of a web service for one task is independent of the others. However, this is not necessarily the case, because there may be conflicts of interest between activities. If such a conflict exists, the dynamic selection method discussed in the last section cannot be applied directly. This section first expresses the conflicts that may exist between service instances and then accommodates those constraints into the IP approach for dynamic service selection.

4.1 Inter Service Conflicts

Each web service may be a global model that invokes a set of web service following some composition constructs according to [9] and [3]. Let A denotes the set of activities in the abstract flow definition. A = {a1 , a2 , · · · , an } is the set of n activities. W S = W S1 ∪ W S2 ∪ · · · ∪ W Sn is the set of all usable web services, where W Si is the set of candidate web services for activity ai with cardinality Mi . Some of the relationships among these elements are illustrated here. C : W S → A. Given a web service, C(ws) returns a set of activities that ws is capable to execute. For example, C(wsi )={ai1 , ai2 , · · · , aik } means that there are at least k operations defined by web service wsi . M : A → W S is a 1-to-1 mapping that returns the web service wsij (wsij ∈ W Si ) that is assigned to execute activity ai .

E: A → WS is a mapping that returns the set of web services that are not allowed to execute activity ai by any means, E(ai) = {wsi1, wsi2, ..., wsik}.
INCOM is a binary relation defined on WS. <wsik, wsjl> belongs to INCOM if and only if wsik ∈ WSi, wsjl ∈ WSj, and wsik and wsjl are an incompatible web service pair for activities ai and aj. Here, it is assumed that the incompatible web service pairs have been identified by considering interface conflicts or semantic heterogeneities between the inputs/outputs of web services.

Example 1. An example taken from [8] is used to illustrate the concepts discussed above. Suppose that a retailer sends an order for three electronic parts to a distributor: item 1, item 2 and item 3. The distributor has a set of preferred suppliers from whom she orders the parts; she needs to retrieve each item from the suppliers and assemble the final result for the requestor. Say suppliers A, B and C can supply item 1, suppliers D, E and F can supply item 2, and suppliers G, H and I can supply item 3. Say further that there are some incompatibilities between the technologies of the suppliers. The incompatible sets might look like (A,E), (B,F), (E,I) and (C,G), meaning that if supplier A is chosen to supply item 1, then E should not be chosen to supply item 2, and so on. The definitions corresponding to this example are summarized here. A = {a1, a2, a3}, where a1, a2, a3 correspond to the supplier services for item 1, item 2 and item 3, respectively. WS = {ws11, ws12, ws13} ∪ {ws21, ws22, ws23} ∪ {ws31, ws32, ws33}; these web services correspond to the suppliers mentioned above, that is, {A,B,C}, {D,E,F} and {G,H,I}. For each web service ws ∈ WS = {A, B, C, D, E, F, G, H, I}, the activity it is capable of executing is: C(ws11) = a1, C(ws12) = a1, C(ws13) = a1, C(ws21) = a2, C(ws22) = a2, C(ws23) = a2, C(ws31) = a3, C(ws32) = a3, C(ws33) = a3. For this example, mapping E is not defined. Mapping M is the result of service selection, which will be given after the service selection method is discussed. According to the incompatible pairs between the suppliers, the relation INCOM is defined as: <ws11, ws22>, <ws12, ws23>, <ws22, ws33>, <ws13, ws31>.

4.2 Accommodating Inter Service Conflicts in IP Model

Inter service conflicts discussed in last subsection can be expressed in the IP problem for web service composition. In Section 3, it is mentioned that a 0-1 integer variable yij is defined for each single web service wsij that is a candidate for task i. Based on those variables, the inter service relationships can be expressed formally.

Set A is parsed from the definition of the composite web service. Set WS can be retrieved from the web service registry, where the information about web service interfaces and QoS is stored. Mapping C represents the functionalities a web service can fulfill; it can also be retrieved from the web service registry. If mapping E is defined and E(ai) = {wsi1, wsi2, ..., wsik}, then the variables corresponding to the web services wsij ∈ E(ai) are set to zero. Inter service incompatibilities are expressed as constraints of the IP problem instance over their corresponding variables: if <wsik, wsjl> is in INCOM, then this incompatible pair is encoded as yik + yjl ≤ 1, which excludes the possibility that wsik and wsjl are both picked for executing ai and aj simultaneously. If some web service is assigned to a given task beforehand, its variable is set to 1 and the variables of the other candidates are set to 0; in this case, the number of allocation constraints is one less than in the normal case.

Example 2. For the supplier example, the incompatible service pairs are expressed as y11 + y22 ≤ 1, y12 + y23 ≤ 1, y22 + y33 ≤ 1, y13 + y31 ≤ 1.

Besides the constraints discussed above, there may be other constraints that are specific to particular application domains. The approach is general and can be extended to accommodate those constraints.
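Continuing the hypothetical PuLP sketch from Section 3, each INCOM pair becomes one extra linear constraint; nothing else in the model changes. The self-contained fragment below builds the three supplier tasks of Example 1 with 0-based indices and adds the Example 2 conflicts; the QoS-based objective from the earlier sketch would be added before solving.

from pulp import LpProblem, LpVariable, LpMaximize, lpSum

# Three supplier tasks with three candidates each (0-based indices: A-C, D-F, G-I).
prob = LpProblem("supplier_example", LpMaximize)
y = {(i, j): LpVariable(f"y_{i}_{j}", cat="Binary") for i in range(3) for j in range(3)}
for i in range(3):
    prob += lpSum(y[i, j] for j in range(3)) == 1      # one supplier per item

# Example 2's incompatible pairs (A,E), (B,F), (E,I), (C,G) as linear constraints.
incom = [((0, 0), (1, 1)), ((0, 1), (1, 2)), ((1, 1), (2, 2)), ((0, 2), (2, 0))]
for (i, k), (j, l) in incom:
    prob += y[i, k] + y[j, l] <= 1   # never select both members of a conflicting pair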

5 Experiments

Experiments are performed to evaluate the effectiveness and performance of the methods discussed in Section 3 and 4. Experiments settings are: Pentium 4 1.5GHZ with 512M RAM, Windows 2000, jdk1.4.2. The package for computing integer programming model is lp solve. (http://groups.yahoo.com/group/ lp solve/) Abstract specifications of composite web services are defined using web service type that can be implemented by a collection of web service instances. Both web service types and web service instances are stored in Xindice (a native XML database,http://xml.apache.org/xindice/), where the set of service instances corresponding to a web service type are registered with it. Some columns like service identifier and service name are parsed and stored in Mysql to facilitate the process of identifying all candidate web services for a given service type. The QoS data are generated according to random variables following Gaussian distribution. The reason for selecting Gaussian distribution to describe quality dimensions is based on the consideration that Gaussian distribution can describe the overall characteristics of a set of individuals. The supplier example is defined as a composite web service with a sequence of three atomic web services. QoS data are generated and associated with each web service instance. Table 1 is the data for the supplier example. After representing this example as an programming problem and solving it, the solution is: B for item1 ,E for item 2 and G for item 3, which satisfies all constraints.

Table 1. QoS data for the supplier example

wsij   availability   reliability   price   response
A      0.985          0.994         96      1042
B      0.994          0.9987        90      837
C      0.992          0.985         76      1295
D      0.984          0.981         92      1038
E      0.983          0.994         73      1193
F      0.994          0.988         78      1157
G      0.996          0.989         67      840
H      0.993          0.999         82      1140
I      0.993          0.991         83      1171
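Because the supplier example is tiny (3 x 3 x 3 = 27 combinations), the selection can also be checked by brute force, as in the sketch below. It uses the Table 1 data and the Example 2 conflicts; the weights and the scoring rule (higher availability/reliability, lower price and response preferred) are illustrative assumptions, so the best assignment found need not coincide with the paper's reported solution.

from itertools import product

qos = {  # availability, reliability, price, response (Table 1)
    "A": (0.985, 0.994, 96, 1042), "B": (0.994, 0.9987, 90, 837), "C": (0.992, 0.985, 76, 1295),
    "D": (0.984, 0.981, 92, 1038), "E": (0.983, 0.994, 73, 1193), "F": (0.994, 0.988, 78, 1157),
    "G": (0.996, 0.989, 67, 840),  "H": (0.993, 0.999, 82, 1140), "I": (0.993, 0.991, 83, 1171),
}
candidates = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]   # item 1, 2, 3
conflicts = {frozenset(p) for p in [("A", "E"), ("B", "F"), ("E", "I"), ("C", "G")]}

def score(assignment, w=(0.35, 0.35, 0.2, 0.1)):   # assumed weights and normalizations
    avail = sum(qos[s][0] for s in assignment) / 3
    rel = sum(qos[s][1] for s in assignment) / 3
    price = sum(qos[s][2] for s in assignment)
    resp = sum(qos[s][3] for s in assignment)
    return w[0] * avail + w[1] * rel - w[2] * price / 300 - w[3] * resp / 4000

feasible = [c for c in product(*candidates)
            if not any(frozenset(pair) in conflicts
                       for pair in [(c[0], c[1]), (c[0], c[2]), (c[1], c[2])])]
best = max(feasible, key=score)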

Then, experiments are performed to evaluate the performance of this method. Constraints corresponding to inter service conflicts are added to the programming model. Fig. 1. shows the computational time for this method, where the case with conflicts is shown in left side and the case without conflicts in right side. The number of web services involved in service conflicts is equal to the number of tasks. The conflict between task i and j is generated by randomly picking two candidate web services, with one for task i and another for task j.This figure shows that the computing time increases as the number of tasks increases for both cases, in linear approximately. It also reveals that the computing time is acceptable with the problem size in experiments. For example, it costs almost 1.6982 seconds and 1.4521 seconds for cases with and without conflicts when there are 100 tasks and 50 candidates for each task on average,respectively. The difference of computational performance between cases with and without conflicts is illustrated by Fig. 2. On average, it takes 13% more time for cases when conflicts are present than case without conflicts. To explore the impact of the number of inter service conflicts on computational performance, experiments are conducted with the number of conflicts set to

Fig. 1. Performance of IP methods for service composition. The left side is for problem with inter service constraints and the right side is for problem without conflicts.

Fig. 2. Comparison of computing time with and without conflicts. The left side is with 20 candidates per service type on average and the right side is 50 candidates on average.

one, two, three and four times the number of tasks. Fig. 3 shows the results of this set of experiments. As expected, the time for reading data is not affected, which is confirmed by the left side of the figure. The right side shows that as the number of conflicts increases, it takes more time to create a programming instance and solve it. This is reasonable, because a larger instance takes more time and memory to create, which in turn increases the time needed to compute the instance and obtain a solution.

Fig. 3. Impact of number of conflicts on computational performance. The left side is for data reading step and the right side is for model creating and solving time.

6 Related Work

The field of modeling web services and their interaction has received considerable attention in recent years. A variety of formalisms have been proposed for formal model of web services composition [1,15], automatic composition, analysis and verification [16,17,18,19]. Besides automatic composition and verification, the topic of QoS-driven web service composition begins to be attended. Related works about this topic are

covered in [14,13,21]. Other related work on QoS has been done in the area of workflow such as METEOR [20] and its following METEOR-S [21]. This work is mostly motivated by [14], in which Integer Programming is proposed for QoS-aware service composition. All quality dimensions are used for defining both constraints and objective function. While in this paper, the execution time is excluded from the objective function. The reason is that it is more natural to be considered as an end-to-end constraint as far as user favorable function is concerned. This paper also illustrates the process of defining objective function when there are composite constructs. Moreover, this paper focuses on the impacts of inter service conflicts on the process of service selection. In [14], the assumption holds that the selection of service instances for a task is independent of others. However, the independent assumption is unnecessarily true. This paper presents the mechanism for representing conflicts of interest between services. In [8], semantic web service composition with inter service dependencies is discussed.This paper represents inter service constraints in a more formal way than [8]. By expressing those constraints in an integer programming instance, this work can integrate constraints checking with web service selection. And this paper can select alternative web services based on QoS critiria, where the first service is chosen when multiple services are available. Web Service Endpoint Language (WSEL) is proposed by IBM to describe endpoint properties of web services([7]). Work [6] extends WSEL to specify the conflict of interest as one of the endpoint properties. This paper extends it with a general and formal means for expressing domain constraints. Also,different from [6], where the conflict of interest is represented in the format of first order predicate calculus, this paper can represent the conflict of interest in a more concise way by using linear constraints of 0-1 programming model.

7 Conclusion

In this paper, web service composition is considered as a two-phase process. The abstract specification of a composite web service prescribes the relationships between the component web services, and the process of service selection and binding is represented as a 0-1 integer programming problem. However, due to the existence of inter service conflicts and dependencies, the selection of a web service for a task may depend on the selections made for other tasks. This work presents an approach for representing inter service conflicts and dependencies, which are then accommodated in the IP model. The approach acts as a unified process for expressing domain constraints and quality-of-service constraints. Experimental results show that this method is effective and efficient.

Acknowledgement

This work is supported by the National Natural Science Foundation of China under Grant No. 90412010 and the ChinaGrid project of the Ministry of Education, China.

References 1. R. Hull, M. Benedikt, V. Christophides, and J. Su.E-services: A look behind the curtain. In Proc. ACM Symp.on Principles of Database Systems, 2003. 2. Aphrodite Tsalgatidou,Thomi Pilioura,An Overview of Standards and Related Technology in Web Services,Distributed and Parallel Databases,12,125-162,2002 3. W.M.P. van der Aalst,A.H.M. ter Hofstede,B.Kiepuszewski, and A.P.Barros. Workflow Patterns. Distributed and Parallel Databases,14(1):5-51,2003 4. Hamdy A. Taha, Integer Programming Theory, Applications, and Computations, Academic Press, 1975 5. Jinglong Shu,Renkai Wen,Theory of Linear Programming and the Application of its Model, Science Press:Beijing,China, 2003(in Chinese) 6. Patrick C.K.Hung, Specifying Conflict of Interest in Web Services Endpoint Language (WSEL),ACM SIGecom Exchange, Vol.3,No.3,August 2002 7. Leymann F.2001 Web Services Flow Language (WSFL 1.0). IBM Corporation 8. Kunal Verma, Rama Akkiraju, Richard Goodwin, Prashant Doshi, Juhnyoung Lee, On Accommodating Inter Service Dependencies in Web Process Flow Composition, Proceedings of the AAAI Spring Symposium on Semantic Web Services, March, 2004, pp. 37-43 9. Business Process Execution Language for Web Services, version 1.1, http://www.ibm.com/developerworks/library/ws-bpel/ 10. WS Choreography Model Overview, http://www.w3.org/TR/ws-chor-model/, 2004 11. OWL-S, http://www.daml.org/services/owl-s/1.1/ 12. W3C, ”Web Services Description Language (WSDL) Version 2.0”, W3C Working Draft, March 2003. (See http://www.w3.org/TR/wsdl20/.) 13. Tao Yu and Kwei-Jay Lin, Service Selection Algorithms for Web Services with Endto-End QoS Constraints, In: Proc. of the IEEE Intl. Conference on E-Commerce Technology,2004,129 - 136 14. Liangzhao Zeng, Boualem Benatallah, Anne H.H. Ngu, Marlon Dumas, Jayant Kalagnanam, Henry Chang, QoS-Aware Middleware for Web Services Composition, IEEE transactions on Software Engineering , 2004,30(5):311-327 15. Richard Hull,Jianwen Su. Tools for Design of Composite Web Services, In Proc. Int. SIGMOD 2004 16. A. Deutsch, L. Sui, and V. Vianu. Specification and verification of data-driven web services. In Proc. ACM Symp.on Principles of Database Systems, 2004. 17. Xiang Fu, Tevfik Bultan,Jiangwen Su, Analysis of Interacting BPEL Web Services ,in Proc.Int.World Wide Web Conf., 2004 18. S.Narayanan and S.McIlraith, Simulation,verification and automated composition of web services. In Proc.Int.World Wide Web Conf.,2002 19. D. Berardi, D. Calvanese, G. De Giacomo, M. Lenzerini and M. Mecella,Synthesis of Composite e-Services based on Automated Reasoning,AAAI 2004(www.aaai.org) 20. Jorge Cardoso,Quality of service and semantic composition of workflows.Ph.D Thesis,University of Georgia,2002 21. Rohit Aggarwal, Kunal Verma, John A. Miller and William Milnor, ”Constraint Driven Web Service Composition in METEOR-S,” Proc. of the 2004 IEEE Intl. Conference on Services Computing (SCC’04), pp. 23-32.

An Agent-Based Approach for Cooperative Data Management

Chunyu Miao(1), Meilin Shi(1), and Jialie Shen(2)

(1) Department of Computer Sci. and Eng., Tsinghua University, 100084 Beijing, China
    {miaocy, shi}@csnet4.cs.tsinghua.edu.cn
(2) School of Computer Sci. and Eng., University of New South Wales, 2052 Sydney NSW, Australia
    [email protected]

Abstract. As more and more real applications embrace middleware as a vehicle for conveying their data, efficient and effective interoperation between databases with different storage formats and geographical locations has become an extremely important research topic. A large number of systems have been proposed and developed; however, they mainly suffer from low accuracy and slow query response. This paper presents AgDBMS, a new database middleware system specifically designed for effective data management in distributed and heterogeneous environments. Distinguished from previous approaches, our system seamlessly integrates advanced agent technology to facilitate the processes of data source discovery, result integration, query processing and performance monitoring within a layered structure. In this study, we present the architecture and implementation details of AgDBMS. Comprehensive experimental results using scientific data and queries demonstrate its advantages in robustness, efficiency and scalability for various distributed queries.

Keywords: Agent Technology, Cooperative Data Management, Multi Database.

1 Introduction In many real life applications, data from diverse source may be represented with different formats and stored in different geographical locations. The effective support of cooperative data management becomes an essential requirement for numerous distributed database applications, including GIS, E-Health and Remote Sensing. The main goal for the database middleware systems is to smoothly integrate data from multiple sources to facilitate efficient and effective information retrieval tasks. Such systems provide one or more integrated schemas, and able to transform from different sources to answer queries against this schema. The basic requirement of database middleware system includes, 1) Since data comes from many different servers, it might be able to access to a broad range of data sources transparently, 2) It should have sufficient query processing power to handle complex operations, and to compensate for the limitations of less sophisticated sources, and 3) Some transformation operations require that data from different sources is interrelated in a single query. It is to optimise and execute queries over diverse data sources, communicates with wrappers for the various data sources involved X. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 133–144, 2006. c Springer-Verlag Berlin Heidelberg 2006 

The middleware optimises and executes queries over diverse data sources, communicates with the wrappers of the various data sources involved in a query, and compensates for the limitations of less sophisticated sources. In general, two kinds of data transformation are involved: first, the wrappers map data from each source's model to the middleware model; second, the middleware integrates the data under the integrated schema. Federated database systems and information integration systems are distinctive examples of database middleware systems, and several commercial and experimental database middleware systems are now available. The problem they try to solve is that a group of decentralised users, connected by a computer network, must accomplish a specific task cooperatively. Cooperative applications not only face the environments mentioned above but especially need cooperation across systems. In other words, database-supported Computer Supported Cooperative Work (CSCW) systems need to cooperate and intercommunicate with or without users' intervention, and CSCW users need to intercommunicate with each other. Although users can detect modifications of data after the fact, it is important to know that data is being modified and to alert other users automatically once the modification is finished. Furthermore, the system should allow several users to modify the same data at the same time. The ultimate goal of this cooperation is automated data management: the middleware system should automatically collect data from the underlying databases, analyse it, extract the relevant parts and send the final result to the user. During this process, the underlying databases cooperate with each other for the same purpose; we call this the cooperation of databases. In this paper, we present an agent-based architecture, called AgDBMS, for cooperative data management. Distinguished from previous approaches, our system seamlessly integrates advanced agent technology to facilitate data source discovery, result integration, query customisation and performance monitoring within a layered structure. The architecture contains a query specification and task definition tool, an agent community and a data access module, and exhibits great flexibility. Based on the requirements of query tasks, AgDBMS selects agents with matching capabilities; query tasks are executed by the relevant agents, which are dynamically coordinated by the system in order to provide the desired results efficiently. Agents are used not only to encapsulate data but also to advertise query specifications and to search for suitable data sources. In the following, we present the architecture and implementation details of AgDBMS. Furthermore, comprehensive experimental results with scientific data demonstrate its advantages in flexibility, scalability, efficiency and effectiveness for a wide range of queries. The rest of the paper is organised as follows: Section 2 covers related work and background knowledge. Section 3 presents the architecture of AgDBMS and details of its individual components. Section 4 describes the system implementation. Section 5 gives a detailed analysis of results from a suite of comprehensive experiments over the SEQUOIA 2000 Storage Benchmark. Finally, Section 6 concludes the paper and indicates future directions for this research.

2 Related Work

Section 2.1 introduces preliminary knowledge about agent technology. Section 2.2 presents related techniques for data integration in heterogeneous database systems.

2.1 Agent Technology

Recently, agent technology has emerged as an important paradigm for managing, accessing and organising various kinds of distributed applications. Agents are sophisticated software entities with a high degree of autonomy [16,17]. In general, an agent can operate without human direction and interaction, and can be integrated into an existing application or framework in order to provide new functionality or to optimise the execution of existing functions according to predefined requirements. Furthermore, agents can communicate with each other to complete certain tasks cooperatively. In terms of agent structure, there is no standard in agent theory about what the general components of an agent should be; however, an agent needs at least one information attitude, to maintain information about the environment, and one pro-attitude, to guide its actions. The BDI model has been a classical template for many agent architectures. In this framework, an agent has three primary attitudes, Belief, Desire and Intention, which represent the informational, motivational and deliberative states of the agent. For AgDBMS, JACK Intelligent Agents [4] was chosen as the foundation of the current implementation.

2.2 Heterogeneous Data Integration

The need for effective heterogeneous data integration is widely acknowledged in many applications, with large amounts of data currently spread across the Internet in a wide variety of formats [14,15]. To develop advanced techniques for effectively integrating heterogeneous systems, many research projects have addressed translation and semantic integration of distinct data collections, and numerous systems have been proposed and developed to provide semantic brokering. Brokering over heterogeneous information sources has been implemented in the CORBA Trading Object Service [2]. In [3], an agent-based framework with syntactic brokering functions was proposed. To address the problem of semantic representation, several data middleware systems have been developed, including TSIMMIS [6], InfoMaster [7] and Information Manifold [8]. More recently, Nodine et al. [10] proposed the InfoSleuth framework, an agent-based system for information retrieval and discovery in dynamic and open environments. Other projects focus on application-specific functionality and efficient processing of queries with user-defined operators. Haas et al. [11] developed a novel scheme to query data efficiently from various locations. In [12], Mayr et al. examined to what extent known techniques for expensive server-site UDFs and techniques from distributed query processing apply; based on the results, efficient execution techniques for client-site UDFs and optimisation algorithms for queries with such client-site extensions were developed. More recently, Rodriguez-Martinez and Roussopoulos developed the MOCHA system to support large-scale queries over distributed data sources; it has been shown to scale well to large environments and a wide range of query types [9]. Boucelma et al. proposed an extension of MOCHA's framework, which has been applied to GIS data [13].
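To make the BDI structure described in Section 2.1 concrete, the following is a generic, hypothetical sketch of a BDI-style agent skeleton in Java. It is not the JACK API used in AgDBMS; all class and method names are illustrative assumptions.

import java.util.*;

// A generic BDI-style agent skeleton: beliefs hold the informational state,
// desires the motivational state, and intentions the deliberative state.
public abstract class BdiAgent<Belief, Desire, Intention> {

    protected final Set<Belief> beliefs = new HashSet<>();              // informational state
    protected final Set<Desire> desires = new HashSet<>();              // motivational state
    protected final Deque<Intention> intentions = new ArrayDeque<>();   // deliberative state

    // Update beliefs from a percept of the environment (the "information attitude").
    protected abstract void perceive(Object percept);

    // Commit to some desires, turning them into intentions (the "pro-attitude").
    protected abstract void deliberate();

    // Execute the next committed intention, possibly communicating with other agents.
    protected abstract void execute(Intention intention);

    public final void step(Object percept) {
        perceive(percept);
        deliberate();
        if (!intentions.isEmpty()) {
            execute(intentions.poll());
        }
    }
}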

Fig. 1. Structure of the Agent-based Cooperative Database Management System (AgDBMS): a user interface with the query specification and task definition tool, an agent community (discovery/execution, observation, integration and access agents), and the database access layer over databases DB1..DBn

3 AgDBMS: An Agent-Based Cooperative Data Management System

In this section, we present the agent-based cooperative data management system. Section 3.1 gives an introduction to the logical layout of the system, and the remaining subsections describe its individual components in detail.

3.1 The Logical Architecture

The logical structure of the AgDBMS system, proposed to facilitate cooperative data management in heterogeneous distributed environments, is illustrated in Figure 1. The system uses a composite structure and consists of three distinct components: the query specification and task definition tool, the agent community and the data access module. The main functionality of the query specification and task definition tool is to provide a user-friendly interface and assistance scheme to help users define various kinds of queries and tasks. Through the interface, users can input SQL statements or natural-language-like queries; they can also specify priorities or various control parameters for different retrieval tasks. The agent community is a group of agents that provide supportive services for the data query and source discovery processes over multiple database systems; it contains three kinds of agents with different functionality: the integration agent, the discovery agent and the observation agent. The data access module provides the fundamental means for performing data acquisition and query optimisation. In this paper, we are mainly interested in query processing. The use of agents to perform the various retrieval steps is presented in the next sections.

3.2 Access Agent (ACA)

An access agent performs tasks provided by the process agents.

In general, an access agent maintains various information, including the task identification (the port on which the access agent listens), the data format of its database and its capacity to complete tasks. Access agents also provide the mechanisms by which users and other agents can retrieve data and extend the transformation of their databases. Since the data is typically heterogeneous, the access agent must perform some schema and data transformation. We call the common data model of AgDBMS the Canonical/Common Data Model (CDM); it is also used to transmit data between agents. The schemas of the individual databases are merged into the global schema via an agent registration step. First, agents model their data in the CDM. Then, agents provide an interface definition that describes the behaviour of these data models. The interface is described in XML, which is common and can be easily extended. Between the interface and the modelling level, agents can control the properties and types of the data and the relationships between them. Furthermore, agents need to handle concurrency control, access control and cooperative control (together with other agents). For relational databases, the generation of an access agent can be performed automatically, which makes writing an agent as easy as possible.

3.3 Integration Agent (IA)

The integration agent is used to gather the data collected by the access agents; it can divide, unite and converge information into one result. The formats of similar data coming from individual access agents may not be the same, e.g. a date may be expressed in a long or short format, and so on. Furthermore, the data may be exactly the same, partly the same, similar, or even conflicting. While tasks like converting dates are probably straightforward, some tasks can be very complex, such as figuring out that two articles written by different authors say "the same thing". In AgDBMS, we use relatively simple integration rules based on patterns, because rule-based integration can perform information processing and merging tasks efficiently. The generation of integration agents can be complicated and time-consuming, so we try to generate the code involved automatically; this significantly facilitates the task of implementing a new agent. Integration agents can interact with each other, that is to say, one integration agent can act as another integration agent's data source. In order to exchange large amounts of data, a cache is used to hold the responses of integration agents; the cache in an agent can be refreshed by a clock, by the user's order or by the agent itself automatically.

3.4 Discovery/Execution Agent (DEA)

The Discovery/Execution Agent (DEA) is responsible for converting the user's specification of query tasks into real query execution. To this end, it has a number of functionalities, detailed below.

– The user's query task is assigned to the relevant access agents via the DEA. This is done by making a query to the access agents and finding the relevant available ones. If numerous choices of data source are available for a particular query, the DEA informs the user who raised that task to make the final decision. It then connects to the chosen access agent and sends the query specification to it.
– The DEA manages the execution flow of the tasks according to the predefined specification. It controls the tasks' parameters to achieve the desired goal, and it can coordinate with other agents to complete tasks efficiently.

– Once a task is assigned to an access agent, the DEA is initiated and monitors the progress of the query process. Various event messages from the monitoring agents are received to make the user aware of the status of the query process.

3.5 Observation Agent (OA)

In a networked and distributed environment, many factors influence the query process. The main functionality of the observation agent (OA) is to guarantee the success of the data query procedure. It monitors and controls the execution of a given query at each local database selected by the DEA. It also manages the execution of tasks according to CE rules (Conditional Execution rules) to avoid long waiting periods, and coordinates the DEA and ACAs to complete the query task. It can enable, disable, suspend or resume tasks according to the CE rules for a certain database. The status of each task is also reported back to the user in real time, allowing users to respond quickly to emergency cases. All the above procedures are carried out by sending messages of certain types.
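As an illustration of how an OA might apply one CE rule, the following is a minimal, hypothetical Java sketch: if a query at a local database does not finish within the waiting-time limit configured for that database, the task is suspended and the status is reported. The class and method names are assumptions made for illustration, not the AgDBMS implementation.

import java.util.concurrent.*;

// One CE rule: maximum waiting time allowed for queries at a given database.
public class ObservationAgent {

    static final class CeRule {
        final String database;
        final long maxWaitMillis;
        CeRule(String database, long maxWaitMillis) {
            this.database = database;
            this.maxWaitMillis = maxWaitMillis;
        }
    }

    private final ExecutorService pool = Executors.newCachedThreadPool();

    // Monitor a query task submitted to a local database under a CE rule.
    public String monitor(CeRule rule, Callable<String> queryTask) {
        Future<String> future = pool.submit(queryTask);
        try {
            // Report success in real time if the task finishes within the limit.
            return "OK: " + future.get(rule.maxWaitMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true);                       // suspend the long-waiting task
            return "SUSPENDED: " + rule.database + " exceeded " + rule.maxWaitMillis + " ms";
        } catch (InterruptedException | ExecutionException e) {
            return "FAILED: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        ObservationAgent oa = new ObservationAgent();
        CeRule rule = new CeRule("DB1", 2000);
        // A stand-in for a query executed by an access agent.
        String status = oa.monitor(rule, () -> { Thread.sleep(500); return "42 rows"; });
        System.out.println(status);
        oa.pool.shutdown();
    }
}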

4 System Implementation

The prototype architecture of AgDBMS, illustrated in Figure 2, has been fully implemented with Java and Oracle 9. Users can define and configure query tasks with specific parameters and input them via a web- or GUI-based interface provided by the system. The communication channel connecting the various kinds of agents is built on the Java Shared Data Toolkit (JSDT) [5]. In the current implementation, three types of data storage format are considered: XML, plain text and relational data.

Fig. 2. Architecture of the AgDBMS prototype

An access agent consists of three components: the listener/broadcaster, the data wrapper and the execution engine. The listener/broadcaster listens for query requests and regularly broadcasts data information to the Observation Agent (OA) and the Discovery/Execution Agent (DEA). The data wrapper allows the system to transform the local data format into the universal one; IBM's XMLWrapper is used as the foundation of the current implementation. The execution engine is the local query engine. In addition to a listener, the Observation Agent (OA) has three components: the rule registry, the execution optimiser and the performance monitor. The rule registry stores predefined rules for data discovery and querying; in the current implementation it is a database built on Oracle 9. The performance monitor watches and controls the quality of the data retrieval process based on the conditions stored in the rule registry. The execution optimiser ensures that a query process results in optimised feedback and a prompt response. The Discovery/Execution Agent has two distinct parts: the query engine and the data source registry. The query engine allows users to execute queries. The data source registry is a metadata repository that contains essential information about each database currently available, including the size of the data, the format of the data and other essential parameters of the various data sources; this information is updated regularly using a polling scheme. The User Interaction Agent consists of the Monitor Interface and the Query Definition Tool. The Monitor Interface provides a set of user-friendly visual toolkits that allow the system's users to obtain the status of query execution from the Observation Agent. The Query Definition Tool provides users with a flexible environment to define queries, check query results and customise query parameters.
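A minimal sketch of how such a polled data source registry might look in Java is given below. The DataSourceInfo fields and the describe() call are illustrative assumptions; in AgDBMS this metadata is broadcast by the access agents' listeners rather than pulled through this interface.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// A metadata repository that is refreshed regularly with a polling scheme.
public class DataSourceRegistry {

    public static final class DataSourceInfo {
        public final String name;
        public final String format;        // e.g. "relational", "XML", "plain text"
        public final long dataSizeBytes;
        public DataSourceInfo(String name, String format, long dataSizeBytes) {
            this.name = name; this.format = format; this.dataSizeBytes = dataSizeBytes;
        }
    }

    public interface DataSource {
        String name();
        DataSourceInfo describe();          // essential parameters of the source
    }

    private final Map<String, DataSourceInfo> registry = new ConcurrentHashMap<>();
    private final ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();

    // Poll each registered source periodically and refresh its metadata.
    public void startPolling(Iterable<DataSource> sources, long periodSeconds) {
        poller.scheduleAtFixedRate(() -> {
            for (DataSource s : sources) registry.put(s.name(), s.describe());
        }, 0, periodSeconds, TimeUnit.SECONDS);
    }

    public DataSourceInfo lookup(String name) { return registry.get(name); }

    public void shutdown() { poller.shutdown(); }
}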

5 Performance Evaluation

To illustrate the advantages of our system, a comprehensive experimental study has been carried out. In the following sections, we first introduce the test data and the corresponding queries, together with the experimental setup in terms of database configuration and evaluation metric (Section 5.1). Sections 5.2 and 5.3 then present testing results that illustrate the effectiveness, scalability and robustness of the system.

5.1 Experimental Setup and Benchmark

The data and related queries from the SEQUOIA 2000 Benchmark [1] are used to test the AgDBMS framework. In our test setup, data is stored on different servers spread across the network. The specifications of the schemas and queries are shown in Table 1 and Table 2. The test set contains three relations and five different queries, some of which include complicated operators. All test machines run Linux with 256 MB RAM and a 500 MHz Intel Pentium 4 CPU. The source data is in three different formats: relational database, XML and plain text. The goal of the study is to find out how effective the framework is and how its different components influence performance. To achieve this, a specific metric, called Query Response Time (QRT), was designed to measure the effectiveness of AgDBMS from various angles. QRT is the total response time for the user to obtain a result after sending a query; it includes CPU, agent communication and I/O costs.
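As a simple illustration, QRT for one query can be measured at the client as the wall-clock time between issuing the query and receiving the complete result. The QueryClient interface below is a hypothetical stand-in for the AgDBMS user interface and agent community.

public class QrtProbe {

    interface QueryClient { String execute(String query) throws Exception; }

    // Returns the total response time in seconds; the elapsed interval
    // implicitly covers CPU, agent communication and I/O costs.
    public static double measureQrtSeconds(QueryClient client, String query) throws Exception {
        long start = System.nanoTime();
        client.execute(query);
        return (System.nanoTime() - start) / 1e9;
    }
}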

Table 1. The schemas of the SEQUOIA 2000 Benchmark

Relation                                                    Data Size
Polygons(landuse:Integer, location:Polygon)                 20.4 MB
Graphs(identifier:Integer, graph:Polyline)                  45 MB
Rasters(time:Integer, band:Integer, location:Rectangle,     230 MB
        data:Raster, lines:Integer, samples:Integer)

Table 2. The SEQUOIA 2000 Benchmark queries

Query    Query Statement
Query1   Select landuse, Size(location) From Polygons Where TotalArea(location) ≤ S;
Query2   Select identifier, NumVertices(graph) From Graphs Where ArcLength(graph) ≤ S;
Query3   Select G1.identifier From Graphs G1, Graphs G2 Where NumVertices(G1.graph) == NumVertices(G2.graph) AND G1.identifier != G2.identifier;
Query4   Select time, band, location From Rasters Group by time;

5.2 Effectiveness

To study the performance of querying over a single data source, Queries 1, 2, 3 and 4 from Table 2 were applied to measure the effect of aggregates, selections and projections in terms of query time. The experimental results are illustrated in Figure 3(a). From the figure, we can easily see that query times increase with the size of the data volume; for example, query processing over the table Rasters takes more time to complete than over the other relations. On the other hand, scalability is particularly important for a data management system in a distributed and heterogeneous environment, because such a system can potentially contain different numbers of agents for query and data discovery. As the number of agents increases, query performance may degrade due to the communication cost between agents or agent communities. In this experiment, we compare the query response time (QRT) of the AgDBMS system with different numbers of agents, varying the number of agents from 5 to 50. Figure 3(b) shows the experimental results. If the overhead related to agent communication during data retrieval were an obstacle to scalability, the QRT would be expected to get rapidly worse as the number of agents increased. However, as the results in Figure 3(b) show, the QRT tends to level off.

Fig. 3. Effectiveness of AgDBMS: (a) performance vs. various query types; (b) performance vs. number of agents

Fig. 4. Performance of AgDBMS with different numbers of queries per second

In addition to the above, we also investigate how AgDBMS performs under a large number of simultaneous queries. During the evaluation, we vary the number of queries issued to the system each second and compare the query times. From the results summarised in Figure 4, we can conclude that for AgDBMS there is no significant increase in average query response time when the query frequency increases; e.g. in the case of Query 1, AgDBMS needs around 2.86 seconds to handle 30 queries per second, which is only a 10% increase over the case of 5 queries per second.

5.3 Robustness

Robustness is another important measure for a database middleware system, because a perfect networking or communication environment cannot always be expected. In the experiment testing AgDBMS's robustness against network failure, the number of agents in each community is set to 20 and the number of data sources to 10. The query frequency was set to 5 per second; this experimental condition ensures that the system operates in an unsaturated state. In order to study the robustness of AgDBMS, a certain number of communication channels between agents are randomly cut off and the query response time (QRT) serves as the measurement.

Fig. 5. Robustness of AgDBMS against channel failure

The numbers of failed channels used are 100, 200, 300, 400 and 500. The experimental result is shown in Figure 5. Intuitively, as the failure number goes up, we would expect a longer query response time; however, we did not observe this situation. Indeed, the results show that AgDBMS is robust against communication channel distortion, and there is no significant increase in communication cost when a substantial part of the channels is not available. This is because the agent-based technology provides a more reliable query process and is capable of intelligently handling unexpected situations.

5.4 Scalability

Scalability is particularly important for large middleware systems, because such systems can potentially contain huge volumes of data and information. As the number of data objects increases, the performance of a system may degrade due to noise and other factors. In this experiment, we make an analytic study of the system throughput and response time of the AgDBMS system using different sizes of data. We measure how AgDBMS performs under different data sizes by randomly picking 5MB, 10MB, 50MB, 100MB and 200MB from the Raster data subset of the SEQUOIA 2000 Benchmark; 20% of the data are used for queries.

Fig. 6. Effect of scalability: (a) average throughput; (b) average response time

Figure 6 shows the experimental results of the AgDBMS system; we find that AgDBMS is very robust with respect to the volume of data. There is no significant throughput or response-time degradation for larger datasets. This is because AgDBMS uses the agent-based technique to find the most suitable data sources in a dynamic environment, and this approach significantly improves the response time and throughput of the whole system. From the above, we can see that AgDBMS emerges as a robust and effective middleware technique with the scalability to accommodate large volumes of data.

6 Conclusion and Future Work

Modern database systems are often required to process data intelligently from different sources and in various formats. In this paper, we presented a novel data-oriented middleware solution, called AgDBMS, to the problem of effective data management in distributed and heterogeneous environments. Compared with previous approaches, advanced agent technology is smoothly integrated into our system to facilitate the processes of data source discovery, result integration, query processing and performance monitoring. A set of extensive experiments was carried out to study the effectiveness, robustness and scalability of the proposed system, and the results demonstrate its advantages on a real dataset. There is a great deal of future research stemming from this paper. In ongoing work, we plan to extend the test data to other domains. Also, the current experimental results were obtained with four predefined query types; it would be interesting to evaluate the framework with other kinds of queries. Furthermore, there are many places where AgDBMS performance could be tuned in a real database environment; in particular, indexing and agent communication efficiency could be greatly increased if good heuristics were developed to determine when to dynamically change parameters based on user query demand. Finally, the implementation of different object types, how to distribute them throughout the database, and the corresponding cost model need to be investigated.

References

1. M. Stonebraker, "The SEQUOIA 2000 Storage Benchmark", ACM SIGMOD Conference, 1993.
2. OMG, OMG Trading Object Service Specification, Technical Report 97-12-02, Object Management Group, http://www.omg.org/corba, 1997.
3. H. Nwana, D. Ndumu, L. Lee, and J. Collis, ZEUS: A Tool-Kit for Building Distributed Multi-Agent Systems, Applied Artificial Intelligence J., vol. 13, no. 1, pp. 129-186, 1999.
4. P. Busetta, R. Ronnquist, A. Hodgson, and A. Lucas, JACK Intelligent Agents - Components for Intelligent Agents in Java, AgentLink News Letter, January 1999.
5. Justin Couch, Java 2 Networking, McGraw-Hill, 1999.
6. H. Garcia-Molina et al., The TSIMMIS Approach to Mediation: Data Models and Languages, J. Intelligent Information Systems, vol. 8, no. 2, 1997.
7. M.R. Genesereth, A. Keller, O.M. Duschka, Infomaster: An Information Integration System, Proc. ACM SIGMOD Conference, 1997.

8. A. Levy, D. Srivastava, and T. Kirk, Data Model and Query Evaluation in Global Information Systems, J. Intelligent Information Systems, vol. 5, no. 2, 1995.
9. Manuel Rodriguez-Martinez, Nick Roussopoulos, MOCHA: A Self-Extensible Database Middleware System for Distributed Data Sources, ACM SIGMOD Conference, 2000.
10. Marian Nodine, William Bohrer, Anne H.H. Ngu, Semantic Brokering over Dynamic Heterogeneous Data Sources in InfoSleuth, ICDE Conference, 1999.
11. Laura M. Haas, Donald Kossmann, Edward L. Wimmers, Jun Yang, Optimizing Queries Across Diverse Data Sources, VLDB Conference, 1997.
12. Tobias Mayr, Praveen Seshadri, Client-Site Query Extensions, ACM SIGMOD Conference, 1999.
13. Omar Boucelma, Mehdi Essid, Zoe Lacroix, A WFS-Based Mediation System for GIS Interoperability, ACM GIS, 2002.
14. M. Lenzerini, Data Integration: A Theoretical Perspective, ACM PODS Conference, 2002.
15. A.Y. Levy, A. Rajaraman, and J.J. Ordille, Querying Heterogeneous Information Sources Using Source Descriptions, Proc. of the 22nd Int. Conf. on Very Large Data Bases (VLDB'96), 1996.
16. M.J. Wooldridge, An Introduction to Multiagent Systems, John Wiley & Sons, 2002.
17. M.J. Wooldridge and N.R. Jennings, Intelligent Agents: Theory and Practice, The Knowledge Engineering Review, vol. 10, pp. 115-152, 1995.

Transforming Heterogeneous Messages Automatically in Web Service Composition

Wenjun Yang, Juanzi Li, and Kehong Wang

Department of Computer Science and Technology, Tsinghua University, Beijing, China, 100084
{ywj, ljz, wkh}@keg.cs.tsinghua.edu.cn

Abstract. When composing web services, establishing the data flow is one of the most important steps. However, no adequate solution has been proposed for a fundamental problem in this step: how to link two services with heterogeneous message types. As a result, many available service candidates have to be abandoned in current web service composition systems because the types of their inputs are not compatible with those of the request messages. This paper presents a new solution for linking heterogeneous messages automatically when composing web services. It converts request messages to the format of the current service's input. As the transformation operation is deployed as a third-party web service, this solution can be integrated into current composition systems seamlessly. The available information in the message schemas is fully utilized for automated message schema matching, and the XSLT scripts used to convert the data are auto-generated according to the message schema matching rules. This solution has been applied in SEWSIP, a prototype semantic-based web service composition system, and the related experiments show good results.

1 Introduction

Web service composition refers to the process of combining several Web services to provide a value-added service. It has generated considerable interest in recent years, and many technologies have been brought into this research community for automatically discovering distributed services and automatically generating service processes. Unfortunately, one problem is still not well solved when establishing the data flow of a service process: how to link two or more sequential services that have heterogeneous message types. Since web services are developed independently, it is common that the output message type of the previous service is not compatible with the input message type of the following service. When executing such a process, the process engine aborts abnormally because of the incompatible message types. Most web service composition systems avoid this problem by selecting only service candidates whose inputs are identical in structure to the request messages; however, they then have to abandon many available service candidates because of incompatible message types when discovering and selecting concrete web services. To the best of our knowledge, no previous study has sufficiently investigated the problem of message heterogeneity.

Three questions arise for the message heterogeneity problem: (1) how to formalize the problem (since it involves multiple cases of heterogeneity of hierarchical messages); (2) how to solve the problem and seamlessly integrate the solution into current web service composition systems; (3) how to make an implementation. (1) We formalize the problem as XML transformation based on schemas. Messages in web services are formatted in XML and their schemas are defined in WSDL documents; thus, XML transformation technology can help to remove all cases of message heterogeneity. (2) We propose to transform heterogeneous messages with a message transformation component and deploy it as a web service. This transformation service can be inserted as an activity into composite processes where two or more linked services have incompatible message types. In this way, our solution can be seamlessly integrated into current composition systems. (3) We implement message transformation automatically on the basis of automated schema matching, fully utilizing the available information in the message schemas. The remainder of this paper is organized as follows. In Section 2, we introduce related work. In Section 3, we formalize the problem. In Section 4, we describe the mechanism for adapting heterogeneous messages. In Section 5, we present the algorithms for automated schema matching. Section 6 gives our experimental methods and results, and Section 7 gives the concluding remarks.

2 Related Work

2.1 Message-Level Composition

Although current research on service composition mainly focuses on composition possibility and process optimization, a few papers refer to message-level composition. However, that work aims to find services with compatible message types and does not take the problem of message heterogeneity into consideration. For example, B. Medjahed discussed the composability of web services at multiple levels, including message similarity [1]. E. Sirin defines a generic message match, denoting the match between services where one service's output type is a subclass of the other service's input type [2]. Meteor-S offers a graphical user interface for process designers to link messages among services [3]. These approaches avoid the problem by filtering out service candidates with incompatible message types.

2.2 Web Service Flow Languages

Here we analyze how the two main web service flow languages, OWL-S and BPEL4WS, support heterogeneous message transformation.

OWL-S. OWL-S [4] is defined as an OWL-based Web service ontology, aiming to facilitate the automation of Web service tasks, including automated Web service composition.
Messages are represented as a set of concepts in OWL-S, so they can be matched with each other in terms of the semantics of the concepts. This is useful for service matchmaking but irrelevant to the problem of message heterogeneity.

BPEL4WS. BPEL4WS [5] provides a language for the formal specification of business processes and business interaction protocols. It supports assigning message data in parts with the help of the XPath language. This is effective when both messages are locally compatible; however, it becomes tedious to assign data element by element when serious heterogeneity exists in the messages. Therefore, the better solution is to transform the heterogeneous messages on a third-party side.

2.3 Message Heterogeneity in Distributed Systems

A similar problem exists in distributed systems, where distributed components communicate with one another via messages. Several message communication technologies, such as IIOP in CORBA and the Java Message Service in J2EE, have been developed. However, the problem is easily solved in that field because the messages are represented as objects and the types of the messages are known by the other side before invocation. If the type of a received object is not the same as the required one, the distributed system throws an error without considering compensation. Thus, the solutions in traditional distributed systems cannot be applied to web service composition systems.

3 Problem Analysis

We analyze the problem of message heterogeneity in web service composition with a simple example. As shown in Fig. 1, two services can be composed at the function level to offer comprehensive functionality, and the output message of the first service is supposed to supply data to the input message of the second. However, the two messages have different element names and different element orders in their structure. Besides name heterogeneity and structure heterogeneity, the content of one input element may be a concatenation of the contents of more than one request element.

Fig. 1. The problem of heterogeneous messages

Moreover, one service's input data is commonly supplied by the outputs of multiple services or by user-given parameters, and each source corresponds to one schema with its own namespace. Thus, a great diversity of message heterogeneity increases the difficulty of this problem. Fortunately, messages are represented in XML format and their schemas can be retrieved from the WSDL documents, so XML transformation technology can be used to remove the heterogeneity of messages when composing web services. We take the request messages as the Source Data, the schemas of the request messages as the Source Schemas, and the schema of the input message as the Target Schema. The problem is then equivalent to transforming the source XML data conforming to the source schemas into result XML data conforming to the target schema.

4 Solution for Transforming Heterogeneous Messages

In this section, we first present a new automatic message transforming component in Section 4.1. Then we discuss how to integrate it into current web service composition systems in Section 4.2.

4.1 Message Transforming Component

Although there are many XML transforming components, they do not support automated transformation and/or do not consider the message match characteristics described in Section 5.1. We therefore developed a new automated message transformation component based on the match algorithm over message schemas; its framework is shown in Fig. 2.

Fig. 2. Framework of the message transformation component

The component takes the source schemas, the target schema and the source data as input, matches the source schemas to the target schema by applying the match algorithm, then generates XSLT scripts according to the match rules, and finally interprets the XSLT scripts and outputs result XML data conforming to the target schema. The framework of this component contains four parts: the Schema Matcher, the XSLT Generator, the XSLT Engine Wrapper and the Web Service Interface. The Schema Matcher accepts the source schemas and the target schema as input and applies the match algorithm to generate match rules; it parses the schemas into hierarchical structures and searches for match pairs between the source schema trees and the target schema tree. The match algorithm is described in detail in Section 5. The XSLT Generator reads the match rules and automatically generates the transformation script. Using a transformation script has the advantage of being easy to read and debug; we choose the XSLT language because it is popular and powerful enough, and several stable XSLT engines exist, such as Xalan and XT. As the entire set of match rules is translated into XSLT scripts, the XSLT Engine completes the transformation process: it takes the XSLT scripts and the source data as input, interprets the XSLT and generates the result XML document conforming to the target schema.

4.2 Integration into Web Service Composition Systems

To seamlessly integrate the message transforming component into current composition systems, we deploy it as a web service, named the Message Transformation Service. When the message sources are not compatible with the current service in the process flow, the message transformation service can be inserted ahead of the current service to smooth out the heterogeneity of the messages (see Fig. 3). The message transformation service is stateless and can be invoked multiple times in one process wherever heterogeneous messages exist. It can also return the generated XSLT scripts for testing in a debugging environment.
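As a concrete illustration of the XSLT Engine Wrapper role, the following Java sketch applies an already generated XSLT script to source message data using the standard JAXP transformation API. The embedded stylesheet and message are hypothetical examples; the real component may wrap a specific engine such as Xalan or XT.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XsltEngineWrapper {

    // Apply a generated XSLT script to the source data and return the result XML.
    public static String transform(String xsltScript, String sourceXml) throws Exception {
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsltScript)));
        StringWriter out = new StringWriter();
        transformer.transform(new StreamSource(new StringReader(sourceXml)),
                              new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        // A hypothetical match rule rendered as XSLT: copy <city> into <cityName>.
        String xslt =
            "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "  <xsl:template match='/request'>"
          + "    <input><cityName><xsl:value-of select='city'/></cityName></input>"
          + "  </xsl:template>"
          + "</xsl:stylesheet>";
        String source = "<request><city>Harbin</city></request>";
        System.out.println(transform(xslt, source));
    }
}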

Fig. 3. Invoking the transformation service in a process

If the service process is manually designed, the process designers decide where this service should be inserted in the process. If the process is automatically generated, the decision is made by the schema comparator, a tool developed in the message transformation component to compute the compatibility of two schemas; its core technology is also the match algorithm over message schemas. The message transforming component also offers a graphical user interface for process designers to revise the match rules after automated matching, but this can only be applied in systems that support interaction with users.

For automated composition systems, the correctness of the transformation results depends heavily on the accuracy of the match between schemas. In Section 5, we discuss the match algorithm in detail.

5 Message Schema Match

XML schema match takes two XML schemas, called the source schema and the target schema, as input and produces a mapping between their elements that correspond semantically to each other. Much previous work on automated schema match has been done in the context of data integration and information retrieval. However, more can be done than simply applying previous techniques, because message schema match has its own distinct characteristics. In this section, we first introduce these characteristics in Section 5.1. Then, in Section 5.2, we discuss how to fully utilize the available information in the schemas to obtain match candidates in terms of the similarity of elements. Finally, a match algorithm is presented that selects the correct match candidate for each element in the target schema.

5.1 Message Schema Match Characteristics

Since service messages are defined with the W3C XML Schema language, message schema match is essentially XML schema match. However, message schema match has several distinct characteristics that affect the design of the match algorithm.

– Web service messages usually have a simple structure. A service message is mainly used to carry business data or operation parameters, so it is usually defined as a flat structure or a shallow hierarchy for easy reading and portable transfer.
– Multiple source schemas match a single target schema. Recall that a service's input data can be supplied by the outputs of multiple services or by user-given parameters, and each source corresponds to one schema with its own namespace. The cardinality at the schema level is therefore n:1 here, rather than 1:1 as in traditional schema match.
– Message schema match is driven by the target schema. Traditional schema match pursues a high match rate between two schemas and does not distinguish the source schema from the target one. Message schema match, however, serves message transformation: the generated match rules must guarantee that the transformation result conforms to the target schema. Thus message schema match aims to find the correct source elements for each target element.

5.2 Match Approaches

The available information for message schema match includes the usual properties of schema elements, such as name, description, data type, cardinality and schema structure. The match approaches find multiple match candidates, and the degree of similarity of each candidate is estimated by a normalized numeric value in the range 0-1.

Formally expressed, assume S_i is the set of elements in the i-th source schema, S is the union of all source elements (S = S_1 \cup S_2 \cup \ldots \cup S_n), s_{ij} is the j-th source element in the set S_i, T is the target element set, and t_k is the k-th target element in T. Then the match candidates can be represented as:

match(t_k) = \{ \langle s_{ij}, t_k \rangle \mid similarity(s_{ij}, t_k) > threshold,\ s_{ij} \in S,\ t_k \in T \}.

Here we only consider the cardinality of 1:1 at the element level, since this case is the most common in message match; we will extend to the 1:n and n:1 cases in future work. In the rest of this section, we discuss how to utilize the available information in the schemas to compute the similarity of two elements.

5.2.1 Name Matching

Name matching computes the similarity of schema elements based on their names. Several methods can be used to measure the similarity of two names, including VSM [6] and machine learning [7]. Because messages usually have a flat structure or a simple hierarchy, we utilize a dictionary, which gives better results. Element names name_1 and name_2 are divided into several words, represented respectively by \langle w_{11}, w_{12}, \ldots, w_{1m} \rangle and \langle w_{21}, w_{22}, \ldots, w_{2n} \rangle. The name similarity of the two elements is:

NameSim(name_1, name_2) = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} WordSim(w_{1i}, w_{2j})}{m \times n}.

We select WordNet [8] as the dictionary to compute the similarity of two words. Assume s is the nearest common ancestor of senses s_1 and s_2 in WordNet; then

sim_d(w_1, w_2) = \max_{s_{1i} \in s(w_1),\ s_{2j} \in s(w_2)} sim_d(s_{1i}, s_{2j}), \qquad sim_d(s_1, s_2) = \frac{2 \times \log p(s)}{\log p(s_1) + \log p(s_2)}.

In this way, the similarity of two words is computed from their semantic distance in WordNet. If w_{1i} and w_{2j} are equal, the result is 1; the more synonymous two words are, the closer the result is to 1.
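For illustration only, a minimal Java sketch of the NameSim computation follows. The wordSim function is assumed to be a WordNet-based measure as described above; the trivial stand-in and the camelCase/underscore tokenisation are assumptions made just to keep the example runnable.

import java.util.function.BiFunction;

public class NameMatcher {

    // NameSim: average of WordSim over all pairs of words from the two names.
    static double nameSim(String name1, String name2,
                          BiFunction<String, String, Double> wordSim) {
        String[] ws1 = split(name1);
        String[] ws2 = split(name2);
        double sum = 0;
        for (String w1 : ws1)
            for (String w2 : ws2)
                sum += wordSim.apply(w1, w2);
        return sum / (ws1.length * ws2.length);
    }

    // Break an element name like "cityName" or "city_name" into words.
    static String[] split(String name) {
        return name.split("(?<=[a-z])(?=[A-Z])|_|\\s+");
    }

    public static void main(String[] args) {
        BiFunction<String, String, Double> wordSim =
                (a, b) -> a.equalsIgnoreCase(b) ? 1.0 : 0.0;   // stand-in for WordNet
        System.out.println(nameSim("cityName", "city_name", wordSim));  // prints 0.5
    }
}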

5.2.2 Constraint Filtering

Constraint filtering adjusts the similarity of schema elements according to element constraints, including data types, value ranges, optionality, cardinality, etc. Generally speaking, if two elements describe the same thing, their definitions probably contain compatible constraints; for example, two corresponding data elements are likely to be defined with string or string-compatible types and with the same cardinality (for instance, zero or more, 0..∞, as children of a container element). Element constraints cannot be used alone for matching, because there may be many irrelevant elements with the same constraints, but they can be combined with other match approaches (e.g., name matching) to filter the match candidates. If the two elements in a match have incompatible constraints, the similarity of this match should be weakened. For instance, a match between an inner (structural) element and a data element is unreasonable, since they have incompatible data types. We use the following formula to adjust the similarity:

similarity(e_1, e_2) = similarity(e_1, e_2) \times \prod_{i=1}^{n} f_i(e_1, e_2).

The function f_i corresponds to the i-th constraint and is defined as:

f_i(e_1, e_2) = \begin{cases} 1, & \text{if } e_1 \text{ and } e_2 \text{ are compatible for the } i\text{-th constraint} \\ \alpha_i, & \text{otherwise.} \end{cases}

Here α_i is a constant between 0 and 1 for the i-th constraint; it implicitly denotes the importance of that constraint. These constants are initially assigned manually and adjusted dynamically according to the match results. Currently two kinds of constraints, data type and cardinality, are mainly considered, because their constants are far less than 1 after training on the corpus. Thus, even if two elements have high name similarity, the pair is still likely to be dropped from the match candidate set if their data types are incompatible.

5.3 Match Algorithm

After applying the match approaches to each target element t_k, we obtain the match candidate set match(t_k). However, this is not yet the final goal of message schema match: exactly one candidate should be selected for each target element from the corresponding candidate set, to be used as a match rule for transforming messages. Recall that one characteristic of message schema matching is that multiple source schemas try to match one target schema; in the correct match results, one source schema usually "occupies" one local part of the target schema. Even in the case of a single source schema, candidate selection can be decided by context information. An element's nearby elements are called its context elements, including its parent, siblings and children. For an arbitrary match candidate between s_{ij} and t_k, if the context elements of s_{ij} and t_k match well, then this candidate is likely to be the correct one. Let C(e) denote the context elements of e; then the context match can be measured with the context match rate:

crate(\langle s, t \rangle) = \frac{|\{ \langle s', t' \rangle \mid s' \in C(s),\ t' \in C(t),\ \langle s', t' \rangle \in match(t') \}|}{|C(t)|}.

For example, two different source elements may be mapped to the same target element with the same similarity under the match approaches above; if the parent of one of them is also mapped to the parent of the target element, that match pair has a higher context match rate and is consequently more likely to be the correct one.

We designed the match algorithm shown in Fig. 4.

_____________________________________________________________________
Step 1: get the match candidate set for each target element
  Input:  S --- all source elements, T --- all target elements
  Output: mTable --- an instance of a hash table
  foreach (t in T)
    foreach (s in S)
      sname = NameSim(s.name, t.name);                 // name match
      f = f_datatype(s, t) * f_cardinality(s, t);      // constraint filtering
      similarity(s, t) = sname * f;
      if similarity(s, t) > threshold
        { Add <s, t> to match(t); }
    Add the pair (t, match(t)) to mTable.

Step 2: select the correct match candidate
  Input:  mTable --- obtained from Step 1
  Output: rules --- vector containing the picked match candidates
  Traverse T top-down; let t denote the current traversed element.
    retrieve match(t) from mTable;
    foreach (<s, t> in match(t))
      c = crate(<s, t>);
    add to rules the match candidate with the biggest value of c.
_____________________________________________________________________

Fig. 4. Message schema match algorithm

In this algorithm, Step 1 finds all the match candidates between the source schemas and the target schema. Step 2 traverses the target schema top-down and, for each target element, selects the pair with the biggest context match rate.
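A compact Java sketch of this two-step procedure is given below. The Element class, the placeholder similarity measures and the threshold value are illustrative assumptions standing in for the measures of Section 5.2.

import java.util.*;

public class SchemaMatcher {

    static final double THRESHOLD = 0.5;              // assumed candidate cut-off

    static final class Element {
        String name;
        String dataType;
        Element parent;
        final List<Element> children = new ArrayList<>();
    }

    static final class Pair {
        final Element source, target;
        Pair(Element s, Element t) { source = s; target = t; }
    }

    // Step 1: collect the match candidate set match(t) for every target element.
    Map<Element, List<Pair>> collectCandidates(List<Element> sources, List<Element> targets) {
        Map<Element, List<Pair>> mTable = new HashMap<>();
        for (Element t : targets) {
            List<Pair> candidates = new ArrayList<>();
            for (Element s : sources) {
                double sim = nameSim(s.name, t.name) * constraintFactor(s, t);
                if (sim > THRESHOLD) candidates.add(new Pair(s, t));
            }
            mTable.put(t, candidates);
        }
        return mTable;
    }

    // Step 2: traverse the target schema top-down and pick, for each target
    // element, the candidate with the highest context match rate.
    List<Pair> selectRules(List<Element> targetsTopDown, Map<Element, List<Pair>> mTable) {
        List<Pair> rules = new ArrayList<>();
        for (Element t : targetsTopDown) {
            Pair best = null;
            double bestRate = -1;
            for (Pair p : mTable.getOrDefault(t, Collections.emptyList())) {
                double c = crate(p, mTable);
                if (c > bestRate) { bestRate = c; best = p; }
            }
            if (best != null) rules.add(best);
        }
        return rules;
    }

    // Context match rate: fraction of t's context elements whose candidate set
    // contains a source element drawn from s's context.
    double crate(Pair p, Map<Element, List<Pair>> mTable) {
        Set<Element> sourceCtx = new HashSet<>(context(p.source));
        List<Element> targetCtx = context(p.target);
        if (targetCtx.isEmpty()) return 0;
        int hits = 0;
        for (Element t2 : targetCtx)
            for (Pair q : mTable.getOrDefault(t2, Collections.emptyList()))
                if (sourceCtx.contains(q.source)) { hits++; break; }
        return (double) hits / targetCtx.size();
    }

    // Context elements: parent, siblings and children.
    List<Element> context(Element e) {
        List<Element> ctx = new ArrayList<>(e.children);
        if (e.parent != null) {
            ctx.add(e.parent);
            for (Element sibling : e.parent.children)
                if (sibling != e) ctx.add(sibling);
        }
        return ctx;
    }

    // Placeholders for the measures of Section 5.2 (NameSim, constraint filtering).
    double nameSim(String n1, String n2) { return n1.equalsIgnoreCase(n2) ? 1 : 0; }
    double constraintFactor(Element s, Element t) { return Objects.equals(s.dataType, t.dataType) ? 1 : 0.6; }
}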

6 Experiments and Evaluation

This solution has been implemented in SEWSIP, a prototype semantic-based web service composition system [9], which publishes the message transformation service at http://keg.cs.tsinghua.edu.cn/sewsip/services/msg/. We experimented on this prototype to verify the capability of our solution.

6.1 Data Sets

We tried to collect web service descriptions (including WSDL documents and web service text descriptions) from as many sources as possible for our experiments.

Four approaches were applied for service collection. (1) We looked up UDDI centers to find web services, including the IBM center (http://www-3.ibm.com/services/uddi/find), the Microsoft center (http://uddi.microsoft.com/search/), et al. (2) We searched for Web services in Web service search engines such as http://www.salcentral.com/search.aspx. (3) We downloaded web service descriptions from service collection websites such as http://www.xmethods.net/. (4) We searched for Web services at Google (http://www.google.com) with "wsdl" or "asmx" as the file suffix. We mainly collected services in the domains of weather, address lookup, news booking and hotel booking. These kinds of services define message types with different levels of complexity: weather services have the simplest message types, commonly containing only a few parameters, while hotel booking services have the most complex message types, with more than ten parameters. Finally, 118 available WSDL documents in the 4 domains were selected, and we publish them at http://keg.cs.tsinghua.edu.cn/sewsip/rawdata/ws/. We classified them manually in terms of their service descriptions and operation names. Table 1 shows classification statistics on the data sets.

Table 1. Statistics on data sets

No   Data Set         Number   Percentage (%)
1    weather          31       26.27
2    address lookup   35       29.66
3    news booking     24       20.34
4    hotel booking    28       23.73

6.2 Experiment Methods and Evaluation Measures

The first experiment aims to show the significance of the solution to the problem of message heterogeneity. For each domain, we extracted keywords from the WSDL documents in that domain as the function description, and selected a typical input message type as the request message of the domain. First, we searched for services in the data sets using the function description as the only criterion, and then using both the function description and the request message type as criteria. We compared the results to see how utilizing message types as a search criterion affects the search results. We measure the results with the searching rate, defined as c = S / T, where S denotes the number of selected services and T denotes the total number of services in the corresponding domain. The second experiment evaluates the message schema match algorithm. We arbitrarily grouped pairs of WSDL documents from the same domain and manually annotated the match pairs for each group. Then we applied the match algorithm to their input message schemas to see whether they could be matched correctly. First, we applied the name approach alone to the data set. Then we combined the name approach, the description approach and constraint filtering. Finally, we applied the comprehensive match algorithm described in Fig. 4. We measure the results with precision and recall:

Precision = \frac{|m_a \cap m_m|}{|m_a|}, \qquad Recall = \frac{|m_a \cap m_m|}{|m_m|},

where m_a denotes the automatically generated match pairs and m_m denotes the manually annotated match pairs.

6.3 Experiment Results

Table 2 shows the results of the first experiment. The columns represent, respectively, the domain names, the number of selected services using both function and message types (N1), the searching rate using both function and message types (C1), the number of selected services using the function description only (N2), the searching rate using the function description only (C2), and the difference (C2 - C1).

Table 2. Results for searching services

No   Data Set         N1   C1 (%)   N2   C2 (%)   C2-C1 (%)
1    weather          20   64.5     31   100.0    +35.5
2    address lookup   18   51.4     33   94.3     +42.9
3    news booking     12   50.0     20   83.3     +33.3
4    hotel booking     5   17.9     25   89.3     +71.4

Table 3 shows the results of the second experiment. The columns represent, respectively, the data sets, the results for the name approach, the results for combining the name approach and constraint filtering (hybrid approaches), and the results for the full match algorithm.

Table 3. Results for matching message schemas

                    name approach        hybrid approaches    match algorithm
Data set            Precision  Recall    Precision  Recall    Precision  Recall
weather             96.1       97.4      100.0      100.0     100.0      100.0
address lookup      80.0       82.5      82.4       85.1      82.4       85.1
news booking        74.3       71.2      78.7       76.2      81.2       80.4
hotel booking       72.8       74.5      76.5       77.1      85.2       87.4

6.4 Discussion

(1) The results of Experiment 1 indicate that using message types as one of the search criteria seriously reduces the searching rate: more than half of the services have to be abandoned because of incompatible message types. We can also see from the changes in C1 that the more complex the message types are, the fewer results can be found.

Therefore, the solution that removes message heterogeneity significantly enhances the number of service candidates, especially when the message types are complex. (2) The results of Experiment 2 show that the match algorithm does not greatly improve precision and recall compared with the hybrid approaches when the message types are simple. The algorithm selects match candidates according to the context match rate, which builds on the match similarity produced by the hybrid approaches; when the message structure is simple, little context information is available to distinguish the correct match pair from the others.

7 Conclusion and Future Work

In this paper, we have investigated the problem of message heterogeneity in web service composition. We have defined the problem as XML transformation based on schemas and have proposed a solution that integrates the message transformation operation into current web service composition systems. Fully utilizing the available information in message schemas and taking their characteristics into account, we implement automated message schema matching, so that heterogeneous messages can be transformed automatically and dynamically. As future work, we plan to further improve the accuracy of the schema match: the 1:n and n:1 match patterns will be supported, and instance-level data will be utilized for schema matching.

References

[1] B. Medjahed. Semantic Web Enabled Composition of Web Services. PhD Dissertation, Virginia Polytechnic Institute and State University, Virginia, USA, 2004.
[2] E. Sirin, J. Hendler and B. Parsia. Semi-automatic Composition of Web Services Using Semantic Descriptions. Web Services: Modeling, Architecture and Infrastructure workshop in conjunction with ICEIS 2003, 2002.
[3] K. Sivashanmugam, J. Miller, A. Sheth, and K. Verma. Framework for Semantic Web Process Composition. Technical Report 03-008, LSDIS Lab, Computer Science Dept., UGA, http://lsdis.cs.uga.edu/lib/download/TR03-008.pdf.
[4] D. Martin, A. Ankolekar, M. Burstein, et al. OWL-S 1.1. http://www.daml.org/services/owl-s/1.1/.
[5] T. Andrews, F. Curbera, H. Goland, et al. Business Process Execution Language for Web Services (V1.1). ftp://www6.software.ibm.com/software/developer/library/ws-bpel11.pdf.
[6] J. Madhavan, P. Bernstein, K. Chen, et al. Corpus Based Schema Matching. In Proc. of the IJCAI-03 Workshop on Information Integration on the Web (IIWeb-03), 2003.
[7] A. Doan, J. Madhavan, P. Domingos, and A. Halevy. Learning to Map Between Ontologies on the Semantic Web. In Proceedings of the World-Wide Web Conference (WWW-2002), pages 662-673. ACM Press, 2002.
[8] WordNet. http://www.cogsci.princeton.edu/wn/.
[9] W. J. Yang, J. Z. Li and K. H. Wang. Interactive Service Composition in SEWSIP. Accepted to the IEEE International Workshop on Service-Oriented System Engineering (SOSE 2005), 2005.

User-Perceived Web QoS Measurement and Evaluation System Hongjie Sun, Binxing Fang, and Hongli Zhang National Computer Information Content Security Key Library, Harbin Institute of Technology, Harbin 150001, China

Abstract. Quality of service (QoS) is so important for content providers that they constantly face the challenge of adapting their web servers to support rapid growth and customers' demand for more reliable and differentiated services. A web QoS measurement and evaluation system (WQMES) was designed and implemented based on in-depth research on the key techniques of web QoS. The prototype implementation of WQMES and its performance evaluation criteria based on performance aggregation are introduced in detail. Our contribution is presenting a single, quantitative result that combines several web performance metrics. Experimental results indicate that the scalable WQMES can perform real-time detection of web QoS from the end user's perspective. The performance aggregation approach is a brand-new idea with definite practicability.

1 Introduction

The amount of web traffic in networks grows at a fast pace as more businesses use the web to provide customers with information about their products and services. Web technology is the foundation of a wonderful communications medium; it provides a very convenient way to access remote information resources. It is essential that the web's performance keep up with increased demands and expectations. Because of the web's popularity, many web-based applications run into performance bottlenecks that drastically decrease the throughput and the usability of the content delivered over the web. Web QoS refers to the capability of a web site to provide better service to end users over various technologies. The degree of satisfaction of the user is generally expressed in non-technical terms. Users are not concerned with how a particular service is provided by the web site, or with any aspect of the network's internal design, but only with the resulting end-to-end service quality. A user's perception of web service quality is defined in terms of system response delay, service availability, and presentation quality. Response delay is the most important issue and has several definitions depending on the type of web service envisaged. Typically, response delay is expressed as D_response = D_DNS + D_connection + D_server + D_transmission. Selvridge et al. found that long delays increase user frustration and decrease task success and efficiency [1]. In another study, Bhatti et al. found that users tolerate different levels of delay for different tasks and regard the quality as 'high' for delays ranging from 0


through 5 seconds, 'average' in the interval 5 through 11 seconds, and 'low' for delays longer than 11 seconds; users who experience long delays are likely to abort active sessions prematurely [2]. For service providers on the Internet, high availability to users is crucial: among providers of the same content, the more available one is the more attractive to customers. Presentation quality is concerned with the user's terminal. QoS is a crucial factor in the success of a web application on the market. The user's satisfaction with a web site's response quality influences how long the user stays at the site and determines the user's future visits. Performance measurement is an important way of evaluating this quality. A basic measure of web service quality is how long it takes to search for and retrieve information from the web site. However, due to the many uncertainties of the Internet and of web users, performance measurement of web applications is more difficult than traditional client/server measurement. A web server is forced to delay the response to a request when some resources necessary for processing the request are busy. Three possible bottleneck resources have been identified: HTTP protocol processing, reading data from disk, and transmission of data on the network downlink. How to construct a web performance measurement system with reliable performance evaluation criteria is therefore important for both content providers and final users. We present a user-perceived web QoS measurement and evaluation system (WQMES) based on an active probing technique, and offer performance evaluation criteria based on performance aggregation to assess web QoS. Content providers can use this method to estimate the QoS of their web sites, and thus establish a suitable distributed deployment of web sites for customers and make appropriate decisions for optimizing site performance. The rest of the paper is organized as follows: Section 2 provides a brief overview of the literature related to web performance measurement and evaluation; Section 3 presents the implementation of WQMES and the related techniques; Section 4 presents the experiments and results; the last section gives a brief summary of this research.

2 Related Work

Users want to access content from the most appropriate service site without prior knowledge of the server location or network topology. Researchers in academia and industry have responded to this trend both by developing optimizations for servers and by developing mechanisms to test their performance. QoS issues in web services have to be evaluated from the perspective of the providers of web services and from the perspective of the users of web services. Andresen et al. [3] propose a server-side scheme that attempts to provide a solution based on server utilization. The technique uses a combination of DNS rotation and HTTP URL redirection for load balancing. SPAND determines network characteristics by making shared passive measurements from a collection of hosts and uses this information for server selection, routing client requests to the server with the best observed response time in a geographically distributed web server cluster [4]. Sandra et al. use tcping to probe median bandwidth


and median latency to perform client-side selection [5]. A client-based approach has the advantage that the client-side selection scheme has an overall network vision, in terms of congestion of the links involved in the data transfer between server and client, that coincides with the end user's experience of service quality [6]. Krishnamurthy et al. measured end-to-end web performance at 9 client sites based on the PROCOW infrastructure [7]. Li and Jamin use a measurement-based approach to provide proportional bandwidth allocation to web clients by scheduling requests within a web server [8]. Their approach is not able to guarantee a given request rate or response time, may be suitable only for static content, and has not been evaluated on trace-based workloads. Shen et al. define a performance metric called quality-aware service yield as a composition of throughput, response time, etc., and propose application-level request scheduling algorithms to optimize the yield criterion. They use the system to enable service differentiation in the Ask Jeeves search engine. Their main focus is to provide guarantees on target values of the performance metric [9]. There are also many tools developed for web performance measurement, such as WebBench and Httperf. In most of the works mentioned above, measurement focuses on the analysis of a single metric; we have not seen a combination of multiple metrics. We propose a performance evaluation criterion based on performance aggregation to analyze web QoS. The advantage of this approach is that many different metrics can be combined into one convenient quantitative result for evaluating performance. This paper uses WQMES to measure and evaluate the performance of four web sites from the end user's perspective.

3 WQMES

In this section, the architecture and implementation of our prototype WQMES are presented, together with a detailed introduction to its related technologies.

3.1 System Design and Implementation

There are two popular techniques for measuring web performance. The first approach, active probing, uses machines at fixed points in the Internet to periodically request one or more URLs from a target web service, record end-to-end performance characteristics, and report a time-varying summary back to the web service. The second approach, web page instrumentation, associates code with target web pages. The code, after being downloaded into the client browser, tracks the download time for individual objects and reports performance back to the web site. WQMES uses the first approach, active probing. We send a TCP SYN packet to connect to the web site; if the connection succeeds, a GET command is sent to retrieve the first-level HTML page from the web server. From the downloaded page we extract all the sublinks inside the page and fetch them through a thread pool with multiple threads.
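As a rough illustration of this probing step, the sketch below measures the TCP connection delay and the first-page response delay and extracts the sublinks of the page. It is our own minimal Java sketch, not the paper's implementation: the class name, the regular-expression link extraction, and the five-second timeouts are illustrative assumptions, and a full TCP connect is used to approximate the SYN-based check.

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Minimal active probe: measures TCP connection delay and first-page
 *  response delay, then extracts sublinks for further fetching. */
public class WebProbe {

    /** TCP connection delay in milliseconds, or -1 on failure (counted as a loss). */
    static long connectionDelay(String host, int port, int timeoutMs) {
        long start = System.currentTimeMillis();
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return System.currentTimeMillis() - start;
        } catch (IOException e) {
            return -1;
        }
    }

    /** Downloads the first-level HTML page (assumes an http URL) and returns its sublinks. */
    static List<String> fetchSubLinks(String pageUrl, long[] responseDelayOut) throws IOException {
        long start = System.currentTimeMillis();
        HttpURLConnection conn = (HttpURLConnection) new URL(pageUrl).openConnection();
        conn.setConnectTimeout(5000);   // five-second wait per connection, as in the paper
        conn.setReadTimeout(5000);
        StringBuilder html = new StringBuilder();
        try (Scanner in = new Scanner(conn.getInputStream(), "UTF-8")) {
            while (in.hasNextLine()) html.append(in.nextLine()).append('\n');
        }
        responseDelayOut[0] = System.currentTimeMillis() - start;
        // crude sublink extraction via href attributes
        List<String> links = new ArrayList<>();
        Matcher m = Pattern.compile("href=[\"']([^\"']+)[\"']", Pattern.CASE_INSENSITIVE)
                           .matcher(html);
        while (m.find()) links.add(m.group(1));
        return links;
    }
}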


Fig. 1. The implementation architecture of WQMES (the figure shows the Manager Module, Graph Visualizing Module, Performance Evaluation Module, Data Analysis & Processing module, and the Probing Engine with its thread pool, packet sender/receiver, and buffer, all connected through the Database)

WQMES consists of five function modules, shown in Figure 1: (1) Manager Module: responsible for customizing the task set (including probing rules, the data set, assessment criteria, analyzing rules, etc.) and sending commands to the function modules. (2) Graph Visualizing Module: fetches the corresponding data from the Database according to the user's instruction and uses the graphical interface to show the performance evaluation result. (3) Performance Evaluation Module: applies the performance evaluation criteria to the processed data according to the assessment rules to obtain a quantitative evaluation result, then stores the result in the Database. (4) Data Analysis & Processing Module: fetches the raw data from the Database according to the evaluation conditions, abstracts and formats the data, then stores the result in the Database. (5) Probing Engine: executes performance probing with multiple threads using active probing, based on rules including the probing interval, packet size, and destination data set. A thread-pool concurrency model, a variation of thread-per-request, is used in the Probing Engine. The Data Distributing component maintains the thread pool by pre-spawning a fixed number of threads at start-up to service all incoming requests. Probing requests can execute concurrently until the number of simultaneous requests exceeds the number of threads in the pool; at that point, additional requests must be queued until a thread becomes available.
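The fixed-size pool with queueing of excess requests described above can be sketched with java.util.concurrent; ProbingEngine is a hypothetical name, and the sketch reuses the illustrative WebProbe class from Section 3.1 rather than the paper's actual engine.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/** Sketch of the probing engine's thread-pool model: a fixed number of
 *  pre-spawned worker threads; requests beyond that number are queued
 *  until a thread becomes free. */
public class ProbingEngine {
    private final ExecutorService pool;

    public ProbingEngine(int threads) {
        this.pool = Executors.newFixedThreadPool(threads);  // pre-sized pool with a queue
    }

    /** Submit one probing task per sublink of the target page. */
    public void probeSubLinks(List<String> subLinks) {
        for (String link : subLinks) {
            pool.submit(() -> {
                long[] delay = new long[1];
                try {
                    WebProbe.fetchSubLinks(link, delay);
                    // store (link, delay[0]) into the probing data set here
                } catch (Exception e) {
                    // record as an unsuccessful sublink download (affects utilization)
                }
            });
        }
    }

    public void shutdown() { pool.shutdown(); }
}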


3.2 Performance Evaluation Criteria Based on Performance Aggregation

Web QoS criteria include conventional metrics such as throughput, delay, loss, and jitter, as well as newer QoS criteria based on utilization, reliability, and security. Four performance metrics are used here: delay, delay jitter, loss rate, and utilization. We give a brief introduction to them. Delay: web delay corresponds to the response delay, which includes DNS delay, connection delay, server delay, and network transmission delay. Delay jitter: the variation of the web response delay. Loss rate: the fraction of unsuccessful connections to the web site during a specified time interval. Utilization: here we use the probability of successfully downloading sublinks to denote it. Aggregation is an action combining contents from a number of different data sources. There are many web performance metrics, some of which are correlated, and until now there has been no rule for combining these metrics into one single quantitative result for evaluating web QoS. We propose the concept of performance aggregation, combining four of these metrics to reflect the QoS of four web sites from the end user's perspective. The aim of performance aggregation for web QoS is to satisfy a given QoS rule and combine all the single metrics using a mathematical formula into a single quantitative result. We give a precise definition of performance aggregation as follows. Definition 1. Let $X=(Y_i)_{i\in V}$ be the set of all web performance metrics obtained from measurement, where $Y_i=(y_{i,j})_{j\in R_i}$ is the $i$-th measured performance metric, $R_i$ indexes its measurements $y_{i,1},\ldots,y_{i,n}$, and $V=\{1,\ldots,M\}$ is the set of performance metrics (we assume there are $M$ of them). Performance aggregation consists of an operation $\otimes$ over the measurements of each $Y_i$ and an operation $\oplus$ over the $M$ metrics in $X$. We express the performance aggregation criterion as follows:

$$PAC = (\oplus Y_i)_{i \in V} = \bigl(\oplus(\otimes\, y_{i,j})\bigr)_{i \in V,\ j \in R_i} \qquad (1)$$

Different metrics have different measurement sets; for instance, $R_{loss}=\{0,1\}$ indicates whether a packet is lost. Let $V=\{\mathrm{delay},\mathrm{loss},\mathrm{jitter},\mathrm{utilization}\}$, and instantiate $\oplus$ as a weighted sum and $\otimes$ as an average. Formula (2) gives the performance aggregation criterion for web site $i$ during period $j$.

$$pac_{i,j} = \alpha\,\frac{1}{L}\sum_{k=1}^{L} D_{i,j,k} \;+\; \beta\,\frac{1}{L}\sum_{k=1}^{L} \bigl|D_{i,j,k}-M_{i,j}\bigr| \;+\; \gamma\,\frac{1}{L}\sum_{k=1}^{L} L_{i,j,k} \;+\; \phi\,\frac{1}{L}\sum_{k=1}^{L} \bigl(1-U_{i,j,k}\bigr) \qquad (2)$$

where $D_{i,j,k}$ is the delay to web site $i$ during probing period $j$ for the $k$-th probe ($i=1,\ldots,M$; $j=1,\ldots,N$; $k=1,\ldots,L$); $M_{i,j} = \frac{1}{L}\sum_{k=1}^{L} D_{i,j,k}$ is the average delay to web site $i$ during probing period $j$; $L_{i,j,k}$ is the loss indicator for the $k$-th probe to web site $i$ during period $j$, with $L_{i,j,k}=0$ if the site is reached and $L_{i,j,k}=1$ if the probe is lost; $U_{i,j,k}$ is the fraction of successfully downloaded sublinks of web site $i$ during probing period $j$ for the $k$-th probe; and $\alpha$, $\beta$, $\gamma$, and $\phi$ are performance coefficients that can be adjusted according to different kinds of performance requirements. In order to compare the performance of different web sites, we transform formula (2) into the relative performance evaluation formula (3).

$$rpac_{i,j} = \frac{1/pac_{i,j}}{\sum_{i=1}^{M} 1/pac_{i,j}}, \qquad i = 1,\ldots,M \qquad (3)$$

where $M$ is the total number of web sites. In formula (2), the performance result is directly proportional to the metrics; in formula (3), the performance is inversely related to the metrics, which better reflects the relative performance of different web sites. We use the relative performance metric to compare performance across web sites. The performance aggregation criterion thus provides an integrated solution that covers several performance metrics, with the aim of giving a single, intuitive, quantitative result that distinguishes the performance of different web sites from an end user's viewpoint.
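A minimal sketch of how formulas (2) and (3) could be computed is given below, assuming per-probe arrays of delay, loss indicator, and utilization for one site; the reciprocal normalization in rpac follows the reconstruction of formula (3) above, and the class and method names are our own illustration, not the paper's code.

/** Sketch of the aggregation formulas: formula (2) combines mean delay,
 *  mean absolute jitter, loss rate and (1 - utilization) into one weighted
 *  score; formula (3) normalizes the reciprocals so that a higher rpac
 *  means a better-performing site. */
public class Aggregation {

    // formula (2): weighted sum of per-period averages for one web site
    static double pac(double[] delay, double[] loss, double[] util,
                      double alpha, double beta, double gamma, double phi) {
        int L = delay.length;
        double meanDelay = 0, jitter = 0, meanLoss = 0, meanUnutil = 0;
        for (double d : delay) meanDelay += d / L;
        for (double d : delay) jitter += Math.abs(d - meanDelay) / L;
        for (double l : loss)  meanLoss += l / L;          // 0 = reached, 1 = lost
        for (double u : util)  meanUnutil += (1 - u) / L;  // u = fraction of sublinks fetched
        return alpha * meanDelay + beta * jitter + gamma * meanLoss + phi * meanUnutil;
    }

    // formula (3): relative (reciprocal-normalized) scores across the M web sites
    static double[] rpac(double[] pacValues) {
        double sumInv = 0;
        for (double p : pacValues) sumInv += 1.0 / p;
        double[] r = new double[pacValues.length];
        for (int i = 0; i < pacValues.length; i++) r[i] = (1.0 / pacValues[i]) / sumInv;
        return r;
    }
}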

4 Experiments and Results

We use WQMES to measure and evaluate web performance. Probing packets were sent from one source to several image (mirror) sites of a main web site. We use www.onlinedown.net as the main destination web site. Within the main web site, four image sites were selected as destinations: Beijing, Nanjing, Wuxi, and Guangzhou. Our focus is on validating WQMES, so we relabel Beijing, Nanjing, Wuxi, and Guangzhou as A, B, C, and D out of order. Our study is based on continuous measurement over a 6-hour period from 12:00 to 18:00 on June 29, 2005, using WQMES to measure and evaluate web performance for the four image sites. Unregulated active measurement traffic can cause an unpredictable negative impact on the actual application traffic. In order not to disturb Internet traffic, probing packets are sent following a Poisson process with a mean interval of 10 minutes. Three threads run at the same time for the sublinks within one image site, and only the first-level HTML page is retrieved each time. Each connection waits at most five seconds. Performance data was collected over the 6-hour period and was analyzed and processed with the performance evaluation criteria to compare the web performance of the four image sites. Figure 2 shows the connection delay comparison of the four web sites; from it we conclude DC_D > DC_C > DC_B > DC_A. Figure 3 depicts the response delay comparison: DR_C > DR_D > DR_B > DR_A. We find that the connection time accounts for only a small proportion of the response time; the response time is influenced more by the transmission time, the waiting time for each link, the number of threads running at the same time for sublinks, and the volume of the page.


Fig. 2. Connection delay comparison


Fig. 3. Response delay comparison

The values of loss and utilization change so little that we do not give comparison charts for them. Figure 4 shows the normalized form of the relative performance aggregation criterion (NRPAC) for A, B, C, and D, computed with formula (3) under the setting α=0.4, β=0.2, γ=0.2, and φ=0.2. From Figure 4 we conclude NRPAC_A > NRPAC_B > NRPAC_C > NRPAC_D. We found that NRPAC is influenced mostly by the response delay; loss, jitter, and utilization also have an influence on it. WQMES tests the web sites with a simple active probing technique, and the web sites are measured from the end user's viewpoint. We stress, however, that the point of our analysis is not to make general claims about certain web sites being better or worse than others, but rather to show the utility of WQMES.

Fig. 4. NRPAC comparison

5 Conclusion

In this article, we proposed WQMES for web QoS measurement and evaluation. We used WQMES to measure the performance of four web sites from the end-user viewpoint; the experimental results show that probing and evaluating different web sites with the same contents yields different answers. We find that the connection delay accounts for only a small proportion of the response delay. The response delay is influenced more by the transmission delay, the waiting time for each link, the number of threads running at the same time for sublinks, and the volume of the page. User-perceived web QoS is influenced mostly by the response delay; loss, jitter, and utilization have an influence on it too. WQMES can be used by


Internet customers or Internet content providers to track web service behavior and response delay at the application level. It can also be used for server placement and selection, etc. Performance aggregation is an interesting and fresh means, a new conceptual model for analysing and quantifying user-perceived web QoS. The results show that our methodology is effective in measuring and evaluating web QoS.

Acknowledgements. This work has been supported by the National Science Foundation of China under grant No. 60203021 and the National "863" High-Tech Program of China under grant No. 2002AA142020.

References
1. P.R. Selvridge, B. Chaparto and G.T. Bender, "The world wide waiting: Effects of delays on user performance", in Proceedings of the IEA 2000/HFES 2000 Congress, 2000.
2. N. Bhatti, A. Bouch and A. Kuchinsky, "Integrating user-perceived quality into web server design", in Proceedings of the 9th International World Wide Web Conference, pp. 1-16, Elsevier, May 2000.
3. Andresen, Yang and Ibarra, "Toward a scalable distributed WWW server on workstation clusters", JPDC: Journal of Parallel and Distributed Computing, Vol. 42, 1997.
4. S. Seshan, M. Stemm and R. Katz, "SPAND: Shared Passive Network Performance Discovery", USENIX Symposium on Internet Technologies and Systems, 1997.
5. S.G. Dykes, K.A. Robbins and C.L. Jeffery, "An Empirical Evaluation of Client-side Server Selection Algorithms", IEEE INFOCOM, vol. 3, pp. 1362-1370, March 2000.
6. C. Marco, G. Enrico and L. Willy, "Quality of Service Issues in Internet Web Services", IEEE Transactions on Computers, vol. 51, no. 6, pp. 593-594, June 2002.
7. K. Balachander and A. Martin, "PRO-COW: Protocol compliance on the web", Technical Report 990803-05-TM, AT&T Labs, August 1999.
8. J. Bruno, J. Brustoloni, E. Gabber et al., "Disk Scheduling with Quality of Service Guarantees", in Proceedings of the IEEE ICMCS Conference, Florence, Italy, June 1999.
9. K. Shen, H. Tang, T. Yang and L. Chu, "Integrated resource management for cluster-based internet services", in Proceedings of the 5th USENIX Symposium on Operating Systems Design and Implementation, Boston, MA, Dec. 2002.
10. M. Andreolini, E. Casalicchio, M. Colajanni and M. Mambelli, "QoS-aware switching policies for a locally distributed Web system", Proc. of the 11th Int'l World Wide Web Conference, Honolulu, Hawaii, May 2002.
11. R. Fielding, J. Gettys, J. Mogul, et al., "Hypertext Transfer Protocol - HTTP/1.1", IETF RFC 2616, June 1999.
12. S. Dykes, K. Robbins and C. Jeffery, "An empirical evaluation of client-side server selection algorithms", in Proceedings of INFOCOM'00, pp. 1361-1370, March 2000.
13. G. Jin, B. Tierney, "Netest: A Tool to Measure Maximum Burst Size, Available Bandwidth and Achievable Throughput", Proceedings of the 2003 International Conference on Information Technology Research and Education, Aug. 10-13, 2003, Newark, New Jersey, LBNL-48350.
14. C. Fraleigh, S. Moon, B. Lyles, C. Cotton, M. Khan, D. Moll, R. Rockell, T. Seely, C. Diot, "Packet-Level Traffic Measurements from the Sprint IP Backbone", IEEE Network, 2003.
15. C. Demichelis and P. Chimento, "RFC 3393: IP packet delay variation metric for IP performance metrics (IPPM)", November 2002.


16. U. Hofmann, I. Miloucheva, F. Strohmeier and T. Pfeiffenberger, "Evaluation of architectures for QoS analysis of applications in Internet environment", The 10th International Conference on Telecommunication Systems, Modeling and Analysis, Monterey, CA, USA, October 3-6, 2002.
17. T. Ngo-Quynh, H. Karl, A. Wolisz and K. Rebensburg, "Using only Proportional Jitter Scheduling at the boundary of a Differentiated Service Network: simple and efficient", to appear in 2nd European Conference on Universal Multiservice Networks ECUMN'02, April 8-10, 2002, Colmar, France.

An RDF Storage and Query Framework with Flexible Inference Strategy Wennan Shen and Yuzhong Qu Department of Computer Science and Engineering, Southeast University, Nanjing 210096, P.R. China {wnshen, yzqu}@seu.edu.cn

Abstract. In the Semantic Web, RDF (Resource Description Framework) and RDF Schema are commonly used to describe metadata. There is a great deal of RDF data on the current web; therefore, efficient storage and retrieval of large RDF data sets is required. So far, several RDF storage and query systems have been developed. According to the inference strategy they use, they can be classified into two categories: one exclusively uses the forward chaining strategy; the other exclusively uses the backward chaining strategy. In most cases, the query performance of the former is superior to that of the latter. However, in some cases, the disadvantage of larger storage space may at some point outweigh the advantage of faster querying. Further, the existing systems that exclusively use the forward chaining strategy have not yet presented a good solution to the deletion operation. In this paper, we design an RDF storage and query framework with a flexible inference strategy, which can combine forward and backward chaining inference strategies. In addition, a new solution to the deletion operation is given within our framework. The feasibility of our framework is illustrated by preliminary experiments.

1 Introduction

The Web is a huge collection of interconnected data. Managing and processing such information is difficult because the Web lacks semantic information. The Semantic Web has emerged as the next generation of the World Wide Web, and it is envisioned to build an infrastructure of machine-readable semantics for the data on the Web. In the Semantic Web [14], RDF [12] (Resource Description Framework) and RDF Schema [3] are commonly used to describe metadata. The Resource Description Framework (RDF) is the first W3C recommendation for enriching information resources of the Web with metadata descriptions. Information resources are, for example, web pages or books. Descriptions can be characteristics of resources, such as the author or content of a website. We call such descriptions metadata. The atomic constructs of RDF are statements, which are triples consisting of the resource being described, a property, and a property value. A collection of RDF statements can intuitively be understood as a graph: resources are nodes and statements are arcs connecting the nodes. The RDF data model has no mechanism to define names for properties or resources. For this purpose, RDF Schema is needed to define resource types and


property names. Different RDF schemas can be defined and used for different application areas. RDF Schema [3] is a semantic extension of RDF. It provides mechanisms for describing groups of related resources and the relationships between these resources. RDF Schema statements are valid RDF statements because their structure follows the syntax of the RDF data model. There is a great deal of RDF data on the current web; therefore, efficient storage and retrieval of large RDF data sets is required. So far, several RDF storage and query systems have been developed. According to the inference strategy they use, they can be classified into two categories: one exclusively uses the backward chaining strategy, such as Jena [17]; the other exclusively uses the forward chaining strategy, such as RStar [15] and Sesame [5]. An inference engine that uses the forward chaining strategy is triggered when triples are inserted into the RDF storage; the triples generated by the inference engine are inserted into the storage together with the original triples. This unavoidably increases the need for disk space. However, the task of processing a query is reduced to a simple lookup without inference. On the contrary, a backward chaining inference engine is triggered when the query is evaluated. The main advantage of backward chaining inference is the decrease in required storage size and data import time, while the main disadvantage is the decrease in query processing performance. In addition, a forward chaining based system needs a truth maintenance system (TMS) to maintain consistency as well as to make derivations available. Consider a situation in which a triple is inserted into the RDF storage and matches the premise part of a rule used in the inference engine; the rule is fired, and consequently additional triples generated by the rule are inserted into the RDF storage. If at some later time the triple needs to be deleted from the RDF storage, then in order to maintain the consistency of the storage, the triples derived from it should also be deleted. To cope with this scenario, a TMS that records the justifications of triples should be built into a forward chaining based system. As far as backward chaining based systems are concerned, this is not a problem, since the insertion operation cannot result in additional derived triples. Many performance tests have been conducted on current RDF storage and query systems [15,9]; the results show that forward chaining based systems are superior to backward chaining based systems. However, in [4], the authors indicated that when the RDF data consists exclusively of a large class or property hierarchy that is both broad and deep, or when the complexity of the model theory and the expressiveness of the modeling language increase (for example when moving from RDF Schema to OWL [16]), the disadvantage of larger storage space may at some point outweigh the advantage of faster querying. Based on the above considerations, we feel that the inference strategy employed by existing systems is not flexible enough for semantic web applications. Further, the existing systems that exclusively use the forward chaining strategy have not yet presented a good solution to the deletion operation. Therefore, we design an RDF storage and query framework with a flexible inference strategy, which can combine forward and backward chaining inference strategies. In addition, a new solution to the deletion operation is given within our framework.
The feasibility of our framework is illustrated by preliminary experiments.


2 An RDF Storage and Query Framework

2.1 Overview

Fig. 1 shows an RDF storage and query framework with a flexible inference strategy. There are three functions for the end user: inserting data, deleting data, and querying data. Two kinds of inference engines, namely a forward chaining inference engine and a backward chaining inference engine, are designed for data insertion and data query respectively. The framework has an inference rule controller to control the rules used in the inference engines. As mentioned in the previous section, a forward chaining inference engine has a special need for a truth maintenance system. Therefore, a truth maintenance system is built into the framework to maintain consistency as well as to make derivations available. The TMS controller determines whether or not the truth maintenance system should be called. The key issues related to these components are addressed in the following subsections.

Fig 1. An RDF storage and query framework (the figure shows the RDF storage, the TMS system and its TMS controller, the forward and backward chaining inference engines, the inference rule controller, and the RDF query language processor, serving the inserting, deleting, and querying data functions)

2.2 Inference Rule Controller and Inference Engines

As discussed in Section 1, the forward chaining and backward chaining inference strategies each have their own strengths. Therefore, the framework uses a mixed strategy that combines both. There are two inference engines in the framework: a forward chaining inference engine for data insertion and a backward chaining inference engine for data query. The rules used in each inference engine are controlled by the inference rule controller. Applications can configure the inference engines through the controller according to their own characteristics. To insert a triple into the RDF storage, the data insertion function sends the triple to the forward chaining inference engine. The rules in the forward chaining inference engine may be fired by triples inferred by the rules in the backward chaining inference engine. In order to make the query results complete, the forward chaining inference


engine makes inferences based on the current RDF storage state and the rules in both engines, and then inserts both the inferred triples and the original triples into the RDF storage, except for triples directly inferred by the rules in the backward chaining engine; meanwhile, it inserts the dependencies of the triples into the TMS system. The whole procedure runs iteratively. To query information from the RDF storage, the backward chaining inference engine receives a query from the RDF query language processor and then draws conclusions in terms of the current RDF storage state and the rules that the inference rule controller specifies.

2.3 TMS Controller and TMS System

A forward chaining inference system has a special need for a truth maintenance system, and most TMS systems are associated with forward chaining inference [6,7,8]. There are two related reasons for this need: one is to keep the RDF storage consistent, and the other is to help deal with the deletion operation. The TMS system in this framework records the justifications for each triple inferred by an RDFS rule. When a triple is removed from the RDF storage, any justifications in which it plays a part are also removed. The triples justified by a removed justification are checked to see whether they are still supported by other justifications; if not, these triples are also removed. Sometimes such a TMS system is too expensive to use, and it is not needed for some applications. Consequently, applications can choose whether or not to use the TMS system through the TMS controller component.

2.4 RDF Storage

Most existing RDF storage systems use relational or object-relational database management systems as backend stores [1,2]. This is a straightforward approach, since it is natural to represent RDF triples in a relational table of three columns and relational DBMSs have been well studied. Other components access the RDF storage through standard SQL statements. For the forward chaining inference engine, if the triples in the current storage match the premise part of an RDFS rule, the rule is fired, the newly derived triples are recorded into the storage, and the justifications for these derived triples are inserted into the TMS system; the same actions are then applied to the derived triples until no additional triples are generated. For the backward chaining inference engine, if the search target matches the conclusion part of an RDFS rule, the storage is searched; triples matching the premise part of the rule are added to the result set, and the premise part of the rule is then taken as a sub-target, to which the same actions are applied.

2.5 RDF Query Language Processor

Several languages for querying RDF data have been proposed and implemented, some in the form of traditional database query languages (e.g. SQL, OQL), others based on logic and rule languages. Judging from the impact of SQL on the database community, standardization of an RDF query language will definitely help the adoption of RDF query engines, make the development of applications a lot easier, and will thus help the Semantic Web in general [10]. W3C set up the RDF Data Access Working Group


(DAWG) in February 2004. DAWG is devoted to developing specifications for an RDF query language and access control protocol. SPARQL is an RDF query language developed by DAWG according to the technology requirements and design objectives referred to above. The RDF query language processor receives a request in a specific RDF query language form, and analyzes and checks whether the submitted query accords with the syntax of the query language. A valid query is parsed and transformed into an intermediate form, which is then sent to the backward chaining inference engine.
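Before moving on, the following sketch illustrates the forward-chaining insertion with TMS recording described in Section 2.4, using rdfs9 as the only rule. The Triple and Justification records and the in-memory sets are our own illustrative choices (the paper's engine works over a relational store), and this simple version omits the cyclic-dependency guard introduced in Section 3.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

/** Illustrative forward-chaining loop: newly inserted triples are matched
 *  against the rule, derived triples are added to the store, and each
 *  derivation is recorded as a justification (inf, dep1, dep2, rule). */
public class ForwardChainer {
    public record Triple(String s, String p, String o) {}
    public record Justification(Triple inf, Triple dep1, Triple dep2, String rule) {}

    public final Set<Triple> store = new HashSet<>();        // T: all triples
    public final Set<Justification> tms = new HashSet<>();   // S: justifications

    public void insert(Triple explicitTriple) {
        Deque<Triple> agenda = new ArrayDeque<>();
        agenda.add(explicitTriple);
        tms.add(new Justification(explicitTriple, null, null, null));  // explicit triple
        while (!agenda.isEmpty()) {                                    // iterate to closure
            Triple t = agenda.poll();
            if (!store.add(t)) continue;                               // already known
            // rdfs9: (u rdfs:subClassOf x), (v rdf:type u)  =>  (v rdf:type x)
            for (Triple other : Set.copyOf(store)) {
                Triple derived = null, d1 = null, d2 = null;
                if (t.p().equals("rdfs:subClassOf") && other.p().equals("rdf:type")
                        && other.o().equals(t.s())) {
                    derived = new Triple(other.s(), "rdf:type", t.o()); d1 = t; d2 = other;
                } else if (t.p().equals("rdf:type") && other.p().equals("rdfs:subClassOf")
                        && t.o().equals(other.s())) {
                    derived = new Triple(t.s(), "rdf:type", other.o()); d1 = other; d2 = t;
                }
                if (derived != null) {
                    // note: the paper's insertion algorithm (Table 3) additionally
                    // guards against cyclic justifications at this point
                    tms.add(new Justification(derived, d1, d2, "rdfs9"));
                    agenda.add(derived);
                }
            }
        }
    }
}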

3 A New Solution to the Deletion Operation

The existing systems that exclusively use the forward chaining strategy have not yet presented a good solution to the deletion operation. RStar [15] does not provide a deletion operation. In order to deal with the cyclic dependency problem in the TMS system, Sesame [5] gives a complex algorithm in which each deletion operation requires recalculating the closure of the TMS system, so it is not applicable to applications with large TMS systems. Therefore, in this section, we give a new solution to this problem, which consists of two algorithms: an insertion algorithm and a deletion algorithm. The insertion algorithm copes with the cyclic dependency problem, while the deletion algorithm is relatively simple. We first give three definitions.

Definition 1: Dependency between rules: Let rule 1 be a11, a12 → b1 and rule 2 be a21, a22 → b2. If some conclusion of rule 1 matches some premise of rule 2, then we say that rule 2 depends on rule 1.

Definition 2: Dependency between triples: If triple 3 can be inferred from triple 1 and triple 2 through a certain rule, then we say that triple 3 depends on triple 1 and triple 2.

Definition 3: A justification in the TMS system has the form (inf, dep1, dep2, rule), where inf is the triple justified by the justification, dep1 and dep2 are the triples justifying inf, and rule is the RDFS rule that produces the justification. When dep1 = null and dep2 = null, inf is an explicit triple.

3.1 Dependency Between RDFS Entailment Rules

The RDF Semantics [11] is a specification of a model-theoretic semantics for RDF and RDF Schema, and it presents a set of entailment rules. In [13], the author characterizes these rules as follows:
− Type Rules assign default ("root") types for resources (rules rdf1, rdfs4a and rdfs4b).
− Subclass Rules generate the transitive closures of subclass (rules rdfs8, rdfs9, rdfs10).
− Subproperty Rules generate the transitive closure resulting from subproperty (rules rdfs5, rdfs6, rdfs7).


− Domain/Range Rules infer resource types from domain and range assignments (rules rdfs2 and rdfs3).

The RDF Semantics specification was published on February 10, 2004. It added rules related to rdfs:ContainerMembershipProperty (rdfs12) and rdfs:Datatype (rdfs13). Table 2 shows the dependencies between the RDFS entailment rules according to the RDF Semantics specification. In the table, the rules in the horizontal direction are triggering rules, and those in the vertical direction are triggered rules. If one rule depends on another, the corresponding cell is marked with *. For example, the definitions of rdfs3 and rdfs9 are given in Table 1. We can see that rdfs9 depends on rdfs3, so we place a * in row 9, column 3. Table 1. Definitions of rdfs3 and rdfs9

rdfs3: aaa rdfs:range xxx, uuu aaa vvv → vvv rdf:type xxx
rdfs9: uuu rdfs:subClassOf xxx, vvv rdf:type uuu → vvv rdf:type xxx

Table 2. Dependency between RDFS entailment rules

(Table 2 is a 14 x 14 matrix over the rules rdf1, rdfs2, rdfs3, rdfs4a, rdfs4b, rdfs5, rdfs6, rdfs7, rdfs8, rdfs9, rdfs10, rdfs11, rdfs12, and rdfs13, in which a * in a cell indicates that the rule in that row depends on the rule in that column; the individual cell entries could not be recovered from the extracted text and are not reproduced here.)
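As an illustration of Definition 1 and of how the cells of Table 2 could be derived mechanically, the sketch below checks whether one rule depends on another by matching the conclusion patterns of the first against the premise patterns of the second, treating the three-letter terms (aaa, uuu, vvv, xxx) as variables. The Rule and Pattern types are our own simplification and ignore consistent variable bindings.

import java.util.List;

/** Rule B depends on rule A if some conclusion pattern of A can match
 *  some premise pattern of B (a '*' cell in Table 2). */
public class RuleDependency {
    record Pattern(String s, String p, String o) {}
    record Rule(String name, List<Pattern> premises, List<Pattern> conclusions) {}

    static boolean isVariable(String term) {
        return term.matches("[a-z]{3}");          // aaa, uuu, vvv, xxx, ...
    }

    static boolean termsMatch(String a, String b) {
        return isVariable(a) || isVariable(b) || a.equals(b);
    }

    static boolean patternsMatch(Pattern c, Pattern p) {
        return termsMatch(c.s(), p.s()) && termsMatch(c.p(), p.p()) && termsMatch(c.o(), p.o());
    }

    static boolean dependsOn(Rule later, Rule earlier) {
        for (Pattern c : earlier.conclusions())
            for (Pattern p : later.premises())
                if (patternsMatch(c, p)) return true;
        return false;
    }

    public static void main(String[] args) {
        Rule rdfs3 = new Rule("rdfs3",
            List.of(new Pattern("aaa", "rdfs:range", "xxx"), new Pattern("uuu", "aaa", "vvv")),
            List.of(new Pattern("vvv", "rdf:type", "xxx")));
        Rule rdfs9 = new Rule("rdfs9",
            List.of(new Pattern("uuu", "rdfs:subClassOf", "xxx"), new Pattern("vvv", "rdf:type", "uuu")),
            List.of(new Pattern("vvv", "rdf:type", "xxx")));
        System.out.println(dependsOn(rdfs9, rdfs3));  // true: rdfs9 depends on rdfs3
    }
}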

3.2 Cyclic Dependency of Rules

When computing justifications in the TMS system, cyclic dependencies [4] may occur. The following two examples illustrate the problem.

Example 1:
1. (uuu, rdf:type, rdfs:Resource) (explicit)
2. (rdf:type, rdfs:domain, rdfs:Resource) (explicit)

Example 2:
1. (rdfs:subClassOf, rdfs:domain, rdfs:Class) (explicit)
2. (uuu, rdfs:subClassOf, rdfs:Resource) (explicit)
3. (uuu, rdf:type, rdfs:Class) (derived)

In Example 1, a justification (1, 1, 2, rdfs2) is added into the TMS system according to rdfs2; that is, triple 1 is justified by itself. In Example 2, the following


justifications, (3, 1, 2, rdfs2) and (2, 3, null, rdfs8), are added into the TMS system according to rdfs2 and rdfs8 respectively. This means that triple 2 depends on triple 3, which is itself justified by triple 2. Both examples contain cyclic dependencies. Cyclic dependencies cause a problem: when deleting a triple, if the TMS system still contains a justification for it, it cannot be deleted. Therefore, in Example 1, triple 1 cannot be deleted because it is justified by itself. In Example 2, it seems that triple 2 cannot be deleted because the TMS system contains a justification that justifies it; however, that justification says that triple 3 depends on triple 2, so the deletion can in fact be carried out.

3.3 Algorithm

Two algorithms, an insertion algorithm and a deletion algorithm, are presented in Table 3 and Table 4 respectively. The following sets are used in the two algorithms. S: the set of justifications in the TMS system. T: the set of triples, including both explicit and derived triples. A: the set of triples to be inserted into the RDF storage. D: the set of triples to be deleted from the RDF storage. V: the set of triples that depend on the currently inserted triple. I: the set of triples inferred by the currently inserted triple.

Table 3. Insertion algorithm

Step 1. For each triple t in set A, insert (t, null, null, null) into S, then determine whether t is in set T; if yes, delete t from A, otherwise let V be empty and bind V to t. Go to Step 2.
Step 2. Determine whether A is empty; if yes, terminate, otherwise select a triple t2 from A and go to Step 3.
Step 3. Insert t2 into T and compute the set I of t2; meanwhile, bind t2's V to each triple in I and record the dependencies. Go to Step 4.
Step 4. Determine whether I is empty. If yes, go to Step 2, otherwise select a triple t4 from I and go to Step 5.
Step 5. Insert the dependent triples (produced in Step 3) of t4 into t4's V, then determine whether t4 is in set T; if yes, add the justification of t4 to S when set V does not contain t4 (this check eliminates the cyclic dependency), otherwise add t4 to A. Go to Step 4.

Table 4. Deletion algorithm

Step 1. For each triple d in D, if d is explicit, mark d as derived; otherwise, delete d from D. Go to Step 2.
Step 2. Set a variable named removed to false. Go to Step 3.
Step 3. If D is empty or removed is false, terminate. Otherwise, go to Step 4.
Step 4. For each triple t in D, if for every justification s = (fs, d1s, d2s, rule) in S, fs is not equal to t, then delete t from D and T and set removed to true; then for each justification q = (fq, d1q, d2q, rule) in S, if d1q equals t or d2q equals t, delete q from S, and if fq is derived, add fq to D. Go to Step 3.
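The deletion algorithm of Table 4 might be rendered as follows, reusing the Triple and Justification records from the forward-chaining sketch at the end of Section 2. The outer loop is our reading of Steps 2-4 (repeat while some triple was removed in the previous pass); a triple is dropped only when no remaining justification supports it.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;

/** Illustrative rendering of the deletion algorithm (Table 4). */
public class Deleter {
    public static void delete(ForwardChainer fc, ForwardChainer.Triple target) {
        Deque<ForwardChainer.Triple> pending = new ArrayDeque<>();
        pending.add(target);
        // Step 1: drop the explicit justification so the triple becomes "derived"
        fc.tms.removeIf(j -> j.inf().equals(target) && j.dep1() == null && j.dep2() == null);
        boolean removed = true;
        while (removed && !pending.isEmpty()) {
            removed = false;
            for (Iterator<ForwardChainer.Triple> it = pending.iterator(); it.hasNext(); ) {
                ForwardChainer.Triple t = it.next();
                boolean stillJustified = fc.tms.stream().anyMatch(j -> j.inf().equals(t));
                if (stillJustified) continue;            // some other derivation supports t
                it.remove();
                fc.store.remove(t);
                removed = true;
                // invalidate justifications that used t and re-check their conclusions
                for (Iterator<ForwardChainer.Justification> js = fc.tms.iterator(); js.hasNext(); ) {
                    ForwardChainer.Justification j = js.next();
                    if (t.equals(j.dep1()) || t.equals(j.dep2())) {
                        js.remove();
                        pending.add(j.inf());
                    }
                }
                break;  // restart the scan because pending and tms have changed
            }
        }
    }
}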


4 Experiment

We have developed a prototype system of the presented framework. In order to evaluate the feasibility of our framework, we conducted an experiment on the Wordnet data set. Wordnet is a lexical resource that defines terms as well as their descriptions and semantic relations between them. In our experiment, we chose the Wordnet 1.6 schema (wordnet-20000620.xml) and the set of nouns (wordnet-20000620.xml). The experiment was run on a 2.0 GHz PC with 512 MB of physical memory. The operating system is Windows XP Professional and the backend database is MySQL 4.1.12. The full set of RDFS rules is highly redundant, and the features of some rules, such as rdfs1, rdfs4a, and rdfs4b, are rarely used. In this experiment, we take rdfs2, rdfs3, rdfs5, rdfs7, rdfs9, and rdfs11 into account. At first, we configure the forward chaining inference engine with all of these rules. Table 5 shows the number of triples inferred by each rule. We can see that the number of triples inferred by rdfs3 accounts for more than half of all inferred triples. Table 5. The number of triples inferred by each rule with the first configuration

Rule:              rdfs2   rdfs3     rdfs5   rdfs7   rdfs9     rdfs11
Inferred triples:  0       122678    0       0       110554    1

Table 6. The number of triples inferred by each rule with the second configuration

Rule:              rdfs2   rdfs3     rdfs5   rdfs7   rdfs9     rdfs11
Inferred triples:  0       0         0       0       110554    1

Then, we configure the forward chaining inference engine with rdfs2, rdfs5, rdfs7, rdfs9, and rdfs11, and the backward chaining inference engine with rdfs3. Table 6 shows the number of triples inferred by each rule. The following are the two query examples used in our experiment; Query 2 involves rdfs3, while Query 1 does not.

Query 1: return the comment of the verb in Wordnet.
PREFIX wn: <http://www.cogsci.princeton.edu/~wn/schema/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
select ?comment where { wn:Verb rdfs:comment ?comment }

Query 2: return the type of the word in the form of "learning".
PREFIX wn: <http://www.cogsci.princeton.edu/~wn/schema/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
select ?type where { ?ID wn:wordForm 'leaning'. ?ID rdf:type ?type }

We evaluate the system under the first and second configurations with the query samples above; the query results generated by both configurations are the same. As demonstrated in Table 7, for queries involving rdfs3, e.g. Query 2, the first configuration is superior to the second. However, for queries that do not involve rdfs3, e.g. Query 1, the advantage of the second configuration is obvious. In addition, the second configuration needs less storage space. The experiment illustrates that our framework is feasible.


Table 7. Comparative result of query time with different configurations

Configuration           Query 1 (seconds)   Query 2 (seconds)
First configuration     1.130               1.412
Second configuration    0.812               1.627

5 Conclusion

In this paper, we present an RDF storage and query framework with a flexible inference strategy, which can combine forward and backward chaining inference strategies. In addition, a new solution to the deletion operation is given within our framework. The feasibility of our framework is illustrated by preliminary experiments. This work is preliminary research on combining the two inference strategies. More experiments are needed to figure out which kinds of configurations can best benefit from our framework, and automatic or semi-automatic configuration would be very valuable for exploiting the practical usage of our framework. These will be our future work.

Acknowledgments The work is supported in part by National Key Basic Research and Development Program of China under Grant 2003CB317004, and in part by the Natural Science Foundation of Jiangsu Province, China, under Grant BK2003001. We would like to thank Dr. Yuqing Zhai and Dr. Yangbing Wang for their suggestions on this paper.

References
1. Beckett, D.: Scalability and Storage: Survey of Free Software / Open Source RDF Storage Systems. Latest version is available at http://www.w3.org/2001/sw/Europe/reports/rdf_scalable_storage_report/
2. Beckett, D., Grant, J.: Mapping Semantic Web Data with RDBMSes. Latest version is available at http://www.w3.org/2001/sw/Europe/reports/scalable_rdbms_mapping_report/
3. Brickley, D., Guha, R.V.: RDF Vocabulary Description Language 1.0: RDF Schema. W3C Recommendation. Latest version is available at http://www.w3.org/TR/rdf-schema/
4. Broekstra, J., Kampman, A.: Inferencing and Truth Maintenance in RDF Schema. In: Workshop on Practical and Scalable Semantic Systems (2003)
5. Broekstra, J., Kampman, A., Harmelen, F.: Sesame: A Generic Architecture for Storing and Querying RDF and RDF Schema. In: Proc. of the 1st International Semantic Web Conference (2002) 54-68
6. Doyle, J.: A Truth Maintenance System. Artificial Intelligence (1979) 231-272
7. Doyle, J.: The Ins and Outs of Reason Maintenance. In: 8th International Conference on Artificial Intelligence (1983) 349-351
8. Finin, T., Fritzson, R., Matuszek, D.: Adding Forward Chaining and Truth Maintenance to Prolog. In: Artificial Intelligence Applications, Proceedings of the Fifth Conference (1989) 123-130


9. Guo, Y.B., Pan, Z.X., Heflin, J.: An Evaluation of Knowledge Base Systems for Large OWL Datasets. In: Proceedings of the 3rd International Semantic Web Conference. Lecture Notes in Computer Science 3298, Springer (2004)
10. Haase, P., Broekstra, J., Eberhart, A., Volz, R.: A Comparison of RDF Query Languages. In: Proceedings of the Third International Semantic Web Conference, Hiroshima, Japan. Lecture Notes in Computer Science 3298, Springer-Verlag (2004)
11. Hayes, P.: RDF Semantics. W3C Recommendation 10 February 2004. Latest version is available at http://www.w3.org/TR/rdf-mt/
12. Klyne, G., Carroll, J.: Resource Description Framework (RDF): Concepts and Abstract Syntax. W3C Recommendation. Latest version is available at http://www.w3.org/TR/rdf-concepts/
13. Lassila, O.: Taking the RDF Model Theory Out for a Spin. In: Horrocks, I., Hendler, J. (eds.): Proceedings of the First International Semantic Web Conference, ISWC 2002, Sardinia, Italy. Lecture Notes in Computer Science 2342 (2002) 307-317
14. Lee, T.B., Hendler, J., Lassila, O.: The Semantic Web. Scientific American, vol. 184 (2001) 34-43
15. Ma, L., Su, Z., Pan, Y., Zhang, L., Liu, T.: RStar: An RDF Storage and Query System for Enterprise Resource Management. In: Proceedings of the Thirteenth ACM Conference on Information and Knowledge Management, Washington, D.C., USA (2004) 484-491
16. McGuinness, D.L., Harmelen, F.V.: OWL Web Ontology Language Overview. W3C Recommendation. Latest version is available at http://www.w3.org/TR/owl-features/
17. Wilkinson, K., Sayers, C., Kuno, H., Reynolds, D.: Efficient RDF Storage and Retrieval in Jena2. In: Proc. of the 1st International Workshop on Semantic Web and Databases (2003) 131-151

An Aspect-Oriented Approach to Declarative Access Control for Web Applications Kung Chen and Ching-Wei Lin Department of Computer Science, National Chengchi University, Wenshan, Taipei 106, Taiwan {chenk, g9232}@cs.nccu.edu.tw

Abstract. This paper presents an aspect-oriented approach to declarative access control for Web applications that can not only realize fine-grained access control requirements but also accomplish this with very little runtime overhead. We devise a translation scheme that automatically synthesizes the desired aspect modules from access control rules in XML format and properly designed aspect templates. The generated aspect modules are then compiled and integrated into the underlying application using standard aspect tools. At runtime, this aspect code is executed to enforce the required access control without any runtime interpretation overhead. Future changes to the access control rules can also be effectively realized through these mechanisms without actual coding.

1 Introduction

The principal difficulty in designing a security concern such as access control into an application system is that it is a system-wide concern that permeates all the different modules of an application. Although there is a generic need to enforce access control for protected resources, the specific constraint for granting access to each individual resource may not be the same. Hence, in current practice it is very common to see the code for implementing access control scattered over the whole system and tangled with the functional code. This is not only error-prone but also makes it difficult to verify its correctness and perform the needed maintenance; Web applications are no exception. Indeed, "broken access control" is listed as the second critical Web application security vulnerability on the OWASP top ten list [13]. Instead of programmatic approaches, a better way to address this problem is declarative access control, where access control logic is completely decoupled from the application code and is accomplished without actual coding [14]. This will not only improve the application's modularity but also make the task of enforcing comprehensive access control more tractable. In the past, the typical approach to declarative access control has been to adopt a policy-driven, centralized authorization engine [2][14]. However, such approaches are often criticized for a lack of expressiveness in access control requirements and low runtime efficiency due to policy interpretation. Specifically, Web application developers often have to handle


the difficult cases of data-content-related access control. For example, in a B2B e-commerce site, users from any registered organization have the privilege to execute the "viewOrder" function, but they are allowed to view only orders of their own organization. This is called instance-level access control [8]. Furthermore, within a data record, certain fields, such as the credit card number, may have to be excluded from screen view to protect the user's privacy. We refer to this as attribute-level access control. Such fine-grained constraints are beyond the scope of popular declarative mechanisms such as JAAS [16][6]. This paper presents an aspect-oriented approach that can not only address many fine-grained access control constraints but also accomplish this in a declarative manner without incurring extra runtime overhead. Aspect-oriented programming (AOP) [10] uses separate modules, called aspects, to encapsulate system-wide concerns such as access control. Our previous work [3] demonstrated the feasibility of implementing fine-grained access control for Struts-based Web applications [1] using AspectJ [11]. Here we extend it and devise a translation scheme that automatically synthesizes the desired aspect modules from access control rules defined in centrally managed XML configuration files, using properly designed aspect templates. The generated access control aspect modules are then compiled and woven into designated functional modules of the underlying application using standard aspect tools. At runtime, this aspect code is executed like common functional code to enforce the required access control, with no runtime interpretation overhead. Furthermore, management and maintenance tasks are greatly simplified, since future changes in access control rules can also be effectively realized through these mechanisms without actual coding. In short, our approach can be characterized by central management and distributed enforcement. The rest of this paper is organized as follows. Section 2 gives a brief introduction to AOP and describes related work. Section 3 outlines our approach to declarative access control. Section 4 describes our access control rules and aspect templates. Section 5 presents the design and implementation of our translation scheme. Section 6 concludes and sketches future work.

2 Background and Related Work In this section, we highlight the basics of AOP and review the relevant features of AspectJ. In addition, we also describe related work. 2.1 AOP and AspectJ In AOP, a program consists of many functional modules, e.g. classes in OOP, and some aspects that capture concerns that cross-cut the functional modules, e.g. security. The complete program is derived by some novel ways of composing functional modules and aspects. This is called weaving in AOP [10]. Weaving results in a program where the functional modules impacted by the concern represented by the aspect are modified accordingly. In languages such as AspectJ, the weaver tool is tightly integrated into the compiler and performs the weaving during compilation.


To facilitate the weaving process, a set of program join points is introduced to specify where an aspect may cross-cut the other functional modules in an application. Typical join points in AspectJ are method execution and field access. A set of join points related by a specific concern are collected into a pointcut. Code units called advice in an aspect are tagged with a pointcut and determine how the application should behave at those crosscutting points. There are three kinds of advice: before, after, and around. The before advice and the after advice are executed before and after the intercepted method, respectively. The case for the around advice is more subtle. Inside the around advice, we can choose to resume the intercepted method by calling the special built-in method proceed(), or simply bypass its execution. The following aspect in AspectJ illustrates the power of around advice. It states that, when the update method of class Customer is about to execute, control is transferred to the around advice. If the particular constraint is false, the intercepted method will be aborted; otherwise, it will be resumed by calling proceed().

public aspect AccessControlPrecheck {
    pointcut pc(Data d):
        execution(public void Customer.update(Data d)) && args(d);

    void around(Data d) : pc(d) {
        if (constraint(d)) proceed(d);   // granted, resume execution
        else forwardToExceptionHandler("AccessDenied");
    } // end around
}

Note that args(d) captures the argument(s) passed to the intercepted method. Furthermore, AspectJ also allows aspect inheritance, abstract aspects, and abstract pointcuts. We can write an aspect with abstract pointcuts or abstract methods; a sub-aspect then extends the abstract aspect and defines the concrete pointcuts and methods.

2.2 Related Work

Applying AOP to security concerns was pioneered by [4][5]. They also sketched how to build frameworks in AspectJ for handling access control. However, they did not focus on Web applications, and neither did they look into access control modeling in detail as we do. Their proposed aspects check the constraint before the attempted access; in contrast, we have both pre-checking and post-filtering aspects that cover fine-grained constraints. Furthermore, we have devised a translation scheme to automatically synthesize access control aspects. Designing proper access control mechanisms for distributed applications has always been an active topic. A good survey of both declarative and programmatic mechanisms can be found in [2]. A strong appeal for declarative security mechanisms is presented in [14]; the authors also proposed a centrally managed framework called GAMMA, but they did not address data-level access control and neither did they use AOP. Sun's J2EE [17] and JAAS [16] also include a primitive form of declarative access control; however, one still needs to write tangled code to handle fine-grained constraints. Our work bears a closer relationship to that of Goodwin et al. [8]. First, they used four-tuple access control rules [user group, action, resource, relationship], where


a set of predefined relationships, as opposed to our general constraints, is defined for each resource type. Their major concern is instance-level constraints; attribute-level constraints are not covered, though. Second, they also adopt an MVC-like architecture for Web applications: the controller intercepts all user commands and queries a centralized manager to make the authorization decision, hence some runtime interpretation overhead is incurred by the manager. Furthermore, they did not consider different authentication types, and neither did they use AOP in their framework.

3 Overview of Our Approach

Figure 1 illustrates the system architecture and mechanisms of our approach. We worked towards declarative access control for Web applications from two opposite ends and managed to meet in the middle. At one end, the objective is to accommodate requirements. We use a flexible modeling scheme based on the user-function-data relationship that can satisfy a wide range of access control requirements at various granularity levels, including both the instance and attribute levels. A high-level form of access control rules is derived from it. At the other end, since access control is a system-wide crosscutting concern, we must impose considerable architectural discipline on Web applications to lay a good foundation for enforcing the required access control modularly. In particular, we follow the well-accepted Model-View-Controller (MVC) [7] architectural pattern and adopt the popular Apache Struts framework [1] to structure our Web applications.

Fig. 1. System architecture and mechanisms for declarative access control

Next, we apply AOP to devise a declarative implementation scheme that bridges these two ends. We developed our implementation scheme in two stages. In the first stage, we did an in-depth analysis of the structures of aspect code that we developed


manually for implementing the form of access control rules we employed. These aspects are classified into a few forms according to their internal structures. Basically, each access control aspect is divided into two parts: a generic part realized by an abstract aspect and a rule-specific part realized by a concrete aspect. Indeed, these abstract aspects provide a solid basis for building a declarative mechanism. In the second stage, we focus on how to automatically synthesize aspect code from access control rules. Given the abstract aspects derived in the previous stage, we only need to generate the parts that are rule-specific. Thus we prepared aspect templates based on the derived aspect code structure to assist code generation. On the source side, in addition to the access control rules, we provide an application specification file that links the logical entities (data objects and operations) referenced in the rules to the real entities defined in the underlying application. Following current practices of Web application development, we define both input files in XML format and treat them as configuration files, one of each type for every application. Together with the pre-defined aspect templates, the two XML configuration files are processed by a rule translator into concrete access control aspects. The generated aspect modules are then compiled and woven into the designated functional modules of the underlying Web application using standard aspect tools. At runtime, the aspects are executed like common functional code to enforce the required access control.

Our approach has the following features. First, all the access control rules of an application are kept in a configuration file, making the management and maintenance tasks easier. Second, the enforcement of access control is consistently applied to every designated functional unit using aspect technologies, without installing a centralized authorization engine. Third, the code that implements the required access control is automatically synthesized, compiled and linked to functional modules without actual coding. Future changes in access control rules can also be effectively realized through these mechanisms in a declarative way. Furthermore, there is no runtime overhead due to access control policy interpretation. The main runtime overhead is that incurred by aspect weaving and advice calls; yet, according to [9], such overhead in AspectJ is very low in general.

4 Access Control Rules and Aspect Templates

This section describes the structure of our access control rules and aspect templates. Both are revised from our previous work [3], where more details can be found.

4.1 Access Control Rules

Since RBAC [15], many approaches have been proposed to model access control requirements for application purposes. Here we take a simple yet generic approach that can support a wide range of access control requirements. We model the interaction between a user and a Web application as a sequence of access tuples of three elements, <user, function, data>, indicating a user's request to execute the function on a specific type of data object(s). The access control rules of an application


determine which access tuples are allowed and which must be denied. They are derived from the application's access control requirements. In designing the form of our access control rules, we focus on the functionalities of an application and specify the access control rules in a function-oriented manner. Furthermore, as authentication is required prior to authorization, we also make the authentication type part of the rule; the type can be id/password (PWD), digital certificate (DC), or any other supported method of user identification. Specifically, the access control rules take the following form:

Rule: <funName, authType, constraint [, attributeActions]>

Here funName is the name of a function whose access needs to be restricted, authType is the required user authentication type, and the constraint is a Boolean expression which must evaluate to true to grant the attempted access. The attributeActions component is optional. When present, it specifies the attribute-level access constraints and actions we impose on the designated function. It takes the following form:

constraint1 → action1; … ; constraintn → actionn

where the constraints are also Boolean expressions and the typical action is field masking, mask(specified_attributes). Clearly, the more data entities we can refer to in the constraint expression, the wider the scope of access control requirements we can support. For generic purposes, we take an object-based approach to specifying the constraints and supply five generic objects, User, Form, Data, Cxt, and App, with various attributes that the constraint expression can refer to. The specific set of attributes for each object depends on the individual application's needs. Conceptually, the Form object and the Data object serve as the input and output of a function to execute, respectively. Typical attributes of the User object include the user's name and roles in an organization. The attributes of the Form object include the arguments passed to the protected function, while the attributes of the Data object refer to the data fields returned after executing the designated function. As will be shown later, the presence of the Data object in a constraint expression calls for fine-grained access control. In addition, the context object (Cxt) provides methods to retrieve the date/time and location of any attempted access; this is the most often used contextual information for access control. The application object (App) is global to an application and stores various parameters related to access control. For example, certain functions are accessible only during working days and from specific machines; the definitions of working days and designated machine addresses will be attributes of the App object.

Example: the following is a set of access control constraints and corresponding rules for an online shopping system. ("&&" is the and operator, and "||" the or operator.)

C1: All registered (authenticated) users can create orders, but only VIP customers can create orders whose total amount exceeds $100,000.
R1:
C2: Only sales managers authenticated through digital certificates can delete orders.


R2:
C3: Registered customers can list (view) only their own orders. Moreover, the credit card number should be excluded from the display.
R3:
C4: Unclassified orders can be printed in batch mode by sales staff from dedicated machines during working days.
R4:

This form of access control rules is quite flexible and can model a multitude of requirements, from simple RBAC to sophisticated instance- and attribute-level constraints. For example, by referring to the attributes of Data, rules R3 and R4 require that unauthorized data instances be filtered out before being returned to the user.

4.2 Aspect Templates

Each access control rule will be realized by two types of aspects: authentication aspects and access control aspects. Here we focus only on access control aspects, since authentication aspects are simpler and thus omitted. The access control aspect code is divided into two parts: a generic part realized by abstract aspects and a rule-specific part realized by concrete aspects. The generic part captures the common code patterns one would develop manually to enforce a rule. After some analysis, we identified three typical generic aspects, namely Precheck, PostfilterSingle, and PostfilterCollection. The availability of the data entities referenced in the constraint expression of a rule, such as user roles, function arguments and data contents, distinguishes these generic aspects. The pre-checking aspect handles the case where the constraint expression involves only the User and Form objects, whose attributes are all available before executing the protected function. In contrast, the post-filtering aspects are used for the cases where the constraint expression also involves attributes of the Data object, which are available only after executing the protected function.

Listing 1: An aspect template example

public aspect <AspectName> extends PostfilterCollection {
  pointcut pc(..): <pointcut> && args(..);

  // utility functions
  protected boolean funConstraint(HttpServletRequest request) {

  } // pre-checking

  protected boolean dataConstraint(HttpServletRequest request, Object uniData) {
  } // post-filtering

  protected void attributeAction(HttpServletRequest request, Object uniData) {
  } // field masking


  protected String getErrorMessage() {
    return "Error !! Not enough authority. Access denied";
  }

  protected Collection getCollection(HttpServletRequest request) {
    return (Collection) <...>;
  }
}

The rule-specific part of an aspect includes the authentication type, the pointcut definitions, the constraint expression to evaluate, and the optional removal of unauthorized data contents. Authentication type specifies which authentication aspects to use. The choice of pointcuts is crucial to obtaining all the various data entities we need to evaluate the access control constraints. As discussed in [3], we choose the execute method of user action classes as the targets for access control aspect weaving. The other rule-specific parts will be generated by the rule translator and put into a concrete aspect inheriting from one of the generic aspects described above. Basically, the code to be generated is the set of abstract methods declared in the generic aspects. To facilitate the translation, we have prepared three aspect templates that will be expanded to rule-specific concrete aspects. For example, Listing 1 shows the aspect template corresponding to the PostfilterCollection aspect.
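For concreteness, a concrete aspect that such a translator might emit for a rule in the style of R3 (registered customers may list only their own orders, with the credit card number masked) could look roughly like the sketch below. The aspect name, the Struts action class ListOrderAction, the helper classes Library, User and Data, and the request attribute "orders" are hypothetical placeholders standing in for the bindings taken from the application specification file; this is not the exact code produced by the authors' tool.

import java.util.Collection;
import javax.servlet.http.HttpServletRequest;
import org.apache.struts.action.ActionForward;

public aspect OrderListAccessControl extends PostfilterCollection {

  // Concrete pointcut: intercept the execute method of the (hypothetical)
  // Struts action class that implements the protected "list order" function.
  pointcut pc(HttpServletRequest request):
    execution(public ActionForward ListOrderAction.execute(..))
    && args(*, *, request, *);

  protected boolean funConstraint(HttpServletRequest request) {
    // pre-checking: the caller must hold the Customer role
    return Library.contains(User.getAttr(request, "Roles"), "Customer");
  }

  protected boolean dataConstraint(HttpServletRequest request, Object uniData) {
    // post-filtering: keep only order instances owned by the requesting user
    return Library.equals(User.getAttr(request, "name"),
                          Data.getAttr(uniData, "owner"));
  }

  protected void attributeAction(HttpServletRequest request, Object uniData) {
    // attribute-level action: mask the credit card number field
    Data.mask(uniData, "CreditCardNo");
  }

  protected Collection getCollection(HttpServletRequest request) {
    // hypothetical binding: the action exposes its result under "orders"
    return (Collection) request.getAttribute("orders");
  }
}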

5 Synthesizing Access Control Aspects

This section describes the design and implementation of our translation scheme for synthesizing access control aspects. Due to space limitations, interested readers are referred to [12] for the complete schema and examples of synthesized aspect code.

5.1 XML Schema for Access Control Rules and Application Specification

The translation tasks are greatly facilitated by an application specification file that supplies the definitions of the real entities referenced in the access control rules. We now describe the XML schemas for these two input files to our translator.

5.1.1 Access Control Rules

In designing the XML schema for specifying the access control rules, we have, as much as possible, followed the structure of the rule format described in Section 4.1. A major deviation is taking out the authentication type item and grouping all the access control rules by it. In other words, all access control rules with the same authentication type requirement are grouped together. This is also compliant with security practices, for different authentication types imply different security levels. Figure 2 highlights the structure of the XML schema for specifying the access control rules. To distinguish the high-level access control rules specified in Section 4.1 from their XML counterparts, we refer to them as abstract rules. Each abstract rule corresponds to an EnforcePoint element in our schema. Abstract rules requiring the same authentication type are grouped into a composite element called EnforceDomain.


Fig. 2. The partial structure of the XML schema for access control rules

Inside an EnforcePoint, we have a sequence of Rule elements and an AttributeAction element. Note that here the Rule element corresponds to the constraint expression in an abstract rule, and, to prepare for future extension, we also allow more than one Rule element for an EnforcePoint. Furthermore, as stated earlier, the attributes of the Data object referenced in the constraint expression are only available after executing the protected function, so we have divided the constraint expression in an abstract rule into two constraint elements, namely FunConstraint and DataConstraint. An AttributeAction element applies a specified action to a group of attributes if the given constraint is true. For example, the mask action sets the specified attributes to "***". Inside the constraint elements, in addition to the five generic objects and some simple operators, we provide a special object, _Library, that supplies the various operations, such as equals and contains, that one needs to specify the constraint expressions. The exact definitions of those operations are provided in the application specification file. Listing 2 shows the configuration of the abstract rule R3 of Section 4.1.

Listing 2: An example of access control configuration in XML format

PWD

// “or” is also supported

_Library.contains($User.getAttr(“Roles”),”Customer”)


_Library.equals($User.getAttr("name"), $Data.getAttr("owner"))



true

CreditCardNo
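Assuming the element names described above (EnforceDomain, EnforcePoint, Rule with FunConstraint and DataConstraint, and AttributeAction), the values shown for Listing 2 plausibly fit into a structure roughly like the following sketch. The exact tag and attribute syntax, and the EnforcePoint name listOrder, are assumptions for illustration rather than the authors' literal schema.

<EnforceDomain authType="PWD">
  <EnforcePoint name="listOrder">
    <Rule>  <!-- "or" between rules is also supported -->
      <FunConstraint>
        _Library.contains($User.getAttr("Roles"), "Customer")
      </FunConstraint>
      <DataConstraint>
        _Library.equals($User.getAttr("name"), $Data.getAttr("owner"))
      </DataConstraint>
    </Rule>
    <AttributeAction action="mask">
      <Constraint>true</Constraint>
      <Attribute>CreditCardNo</Attribute>
    </AttributeAction>
  </EnforcePoint>
</EnforceDomain>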



5.1.2 Application Specifications

The main purpose of an application specification file is to map the generic objects User, Form, Data, Cxt, and App, and the other operations, to the real entities in the underlying application. Figure 3 outlines the structure of the XML schema for application specification files. We group the required mappings into four sections: AuthTypeMapping, EnforcePointMapping, AttributeGroupMapping, and FunGroupMapping. The AuthTypeMapping handles the binding of the User object, which provides user-related attributes for access control purposes. Since different authentication types need different user account objects, the mapping of the User object is associated with the AuthTypeMapping element. The FunGroupMapping specifies the bindings for the operations referenced in the constraint expressions through the _Library object.

Fig. 3. The partial structure of the XML schema for application specification


The EnforcePointMapping is the main mapping element. It is composite and has three sub-elements: VarGroup, EnforcePoints, and MethodSignature. The VarGroup element specifies the bindings for the five generic objects. Since they differ in applicable scope, there are three occurrences of the VarGroup element in the schema: one under EnforcePointMapping for specifying global objects, such as Cxt and App; another under EnforcePoint for specifying local objects, such as Form and Data; and the last one, not shown in Fig. 3, under AuthTypeMapping for specifying the User object. Every EnforcePoint element here corresponds to an EnforcePoint element in the access control configuration file, and thus there may be many EnforcePoint instances. Each EnforcePoint may have its own bindings for the Form and Data objects, which are mapped to the corresponding inputs and outputs of the associated function. As for the MethodSignature element, it is used for specifying the bindings for the argument objects passed to the EnforcePoints. Currently, they simply follow the signature of the execute method defined in the Struts framework. In the future, they can be changed to whatever the host Web application framework requires. Furthermore, to reuse attribute mappings among objects, the complete bindings of a generic object are divided into two stages. First, a generic object is mapped to one or more AttributeGroup elements using a Var element. Second, the detailed attribute mappings for every AttributeGroup are collected under the AttributeGroupMapping element. The following configuration sketches a mapping between a Var element and an AttributeGroup element, where an attribute group called order is assigned to the Data object.



...





...
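A hedged sketch of what this Var-to-AttributeGroup configuration might look like; the tag and attribute names are assumed for illustration, and only the attribute group name order comes from the description above.

<VarGroup>
  <Var name="Data">
    <AttributeGroup name="order"/>
  </Var>
  ...
</VarGroup>

<AttributeGroupMapping>
  <AttributeGroup name="order">
    <!-- detailed attribute mappings for the order group go here -->
    ...
  </AttributeGroup>
</AttributeGroupMapping>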

5.2 Rule Translator

The rule translator is responsible for synthesizing the aspect code from the two XML configuration files described above, using the pre-defined aspect templates. Besides parsing the XML files and cross-checking their data contents for binding correctness, the translator needs to perform aspect template selection and code generation for each EnforcePoint element. Both tasks depend mainly on the Rule element in an EnforcePoint. Recall that we have three aspect templates based on the three generic aspects: Precheck, PostfilterSingle and PostfilterCollection. For example, if no data-related constraint expressions are present in the Rule element, then the translator will select the template based on the Precheck aspect; otherwise it will select one of the post-filtering aspect templates, depending on whether a


collection requirement is specified in the AttributeGroup element associated with the Data object. Once the proper aspect template is selected, the remainder of the work is to synthesize the constraint evaluation code based on the binding specifications given in the application specification file.
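The selection step just described can be summarized by a small decision routine; the enum and method names below are illustrative, not the translator's actual API.

enum AspectTemplate { PRECHECK, POSTFILTER_SINGLE, POSTFILTER_COLLECTION }

static AspectTemplate selectTemplate(boolean hasDataConstraint, boolean dataIsCollection) {
  // No Data-related constraint: every attribute needed to evaluate the rule
  // is available before the protected function runs, so pre-checking suffices.
  if (!hasDataConstraint) {
    return AspectTemplate.PRECHECK;
  }
  // Otherwise choose a post-filtering template, depending on whether the
  // AttributeGroup bound to the Data object declares a collection result.
  return dataIsCollection ? AspectTemplate.POSTFILTER_COLLECTION
                          : AspectTemplate.POSTFILTER_SINGLE;
}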

6 Conclusions and Future Work

In this paper, we have presented an aspect-oriented approach to declarative access control for Struts-based Web applications. Our access control modeling scheme can satisfy a wide range of requirements with different granularity. By employing aspect-oriented technology, we have obtained a highly modular implementation of fine-grained access control, and the aspect code for enforcing access control is automatically synthesized. We argue that our scheme achieves a proper balance between expressiveness of access control and runtime efficiency. We plan to further explore this line of study along two directions. First, we shall extend the Rules element. Currently, each EnforcePoint allows only one Rule element. It would be more convenient if the security administrator could specify the constraints using multiple rules. This is doable, but will complicate the code generation step a little and may have some minor impact on runtime performance. Second, we shall extend the aspect template set to cover more sophisticated application scenarios. It is conceivable that the three aspect templates cannot handle all application scenarios.

Acknowledgements. This work was supported in part by the National Science Council, Taiwan, R.O.C. under grant number NSC-94-2213-E-004-012.

References
[1] The Apache Struts Web Application Framework: http://struts.apache.org/
[2] Beznosov, K. and Deng, Y., Engineering Application-level Access Control in Distributed Systems, in Handbook of Software Engineering and Knowledge Engineering, Vol. 1, 2002.
[3] Chen, K. and Huang, C.H., A Practical Aspect Framework for Enforcing Fine-Grained Access Control in Web Applications, First Information Security Practice and Experience Conference (ISPEC 2005), LNCS 3439, pp. 156-167.
[4] De Win, B., Piessens, F., Joosen, W. and Verhanneman, T., On the importance of the separation-of-concerns principle in secure software engineering, Workshop on the Application of Engineering Principles to System Security Design, 2002.
[5] De Win, B., Vanhaute, B. and De Decker, B., Security Through Aspect-Oriented Programming, Advances in Network and Distributed Systems Security, Kluwer Academic, pp. 125-138, 2001.
[6] Fonseca, C.A., Extending JAAS for Class Instance-Level Authorization. IBM DeveloperWorks, April 2002. http://www-106.ibm.com/developerworks/java/library/jjaas/
[7] Gamma, Helm, Johnson and Vlissides, Design Patterns. Addison-Wesley, 1995.


[8] Goodwin, R., Goh, S.F., and Wu, F.Y., Instance-level access control for business-to-business electronic commerce, IBM Systems Journal, vol. 41, no. 2, 2002.
[9] Hilsdale, E. and Hugunin, J., Advice Weaving in AspectJ, Proceedings of the 3rd International Conference on Aspect-Oriented Software Development, Lancaster, 2004, pp. 26-35.
[10] Kiczales, G., Lamping, J., Menhdhekar, A., Maeda, C., Lopes, C., Loingtier, J.-M., and Irwin, J., Aspect-Oriented Programming, in ECOOP '97, LNCS 1241, pp. 220-242.
[11] Kiczales, G., Hilsdale, E., Hugunin, J., Kersten, M., Palm, J., and Griswold, W.G., Getting Started with AspectJ, Communications of the ACM, vol. 44, no. 10, pp. 59-65, Oct. 2001.
[12] Lin, C.W., An Aspect-Oriented Approach to Fine-Grained Access Control for Web Applications, M.S. Thesis, National Chengchi University, July 2005.
[13] Open Web Application Security Project: The Top Ten Most Critical Web Application Security Vulnerabilities. http://www.owasp.org/documentation/topten
[14] Probst, S. and Kueng, J., The Need for Declarative Security Mechanisms, Proceedings of the 30th EUROMICRO Conference (EUROMICRO'04), IEEE, Aug. 2004.
[15] Sandhu, R., Coyne, E., Feinstein, H., and Youman, C., Role-Based Access Control Models, IEEE Computer, 29(2):38-47, 1996.
[16] Sun Microsystems, Java Authentication and Authorization Service (JAAS), http://java.sun.com/products/jaas/index.jsp
[17] Sun Microsystems, Java 2 Platform, Enterprise Edition (J2EE), http://java.sun.com/j2ee/

A Statistical Study of Today's Gnutella

Shicong Meng, Cong Shi, Dingyi Han, Xing Zhu, and Yong Yu

APEX Data and Knowledge Management Lab, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P.R. China
{bill, cshi, handy, redstar, yyu}@apex.sjtu.edu.cn

Abstract. As a developing P2P system, Gnutella has upgraded its protocol to 0.6, which significantly changed the characteristics of its hosts. However, little previous work has given a wide-scale study of the new version of Gnutella. In addition, various kinds of P2P models are used to evaluate P2P systems or mechanisms, but the reliability of some hypotheses used in these models has not been carefully studied or proved. In this paper, we try to remedy this situation by performing a large-scale measurement study on Gnutella with the help of some new crawling approaches. In particular, we characterize Gnutella by its queries, shared files and peer roles. Our measurements show that the assumption that query arrival follows a Poisson distribution may not be true in Gnutella, and that most peers tend to share files of very few types, even when MP3 files are excluded. We also find that many ultrapeers in Gnutella are not well selected. The statistical data provided in this paper can also be useful for P2P modeling and simulation.

1 Introduction

The prosperity of P2P applications such as Napster[1], Gnutella[2], KaZaA[3][4] and BitTorrent[5][6] has created a flurry of recent research activity in this field. Today, data packets of various P2P applications are transmitted across the Internet. However, compared with other popular systems, Gnutella has a public protocol specification and attracts much attention from P2P researchers. Furthermore, Gnutella upgraded its protocol to improve performance. Many new phenomena could emerge during this change. Unfortunately, little previous work has performed a full-scale study on today's Gnutella. In addition, the reliability of some widely used assumptions has not been carefully studied or proved yet. Previous research[7] stated that P2P traffic does not obey power laws, although power-law distributions are widely used for simulation. This result could also lead people to doubt whether other assumptions, like Poisson arrival of queries[8], fit the real P2P environment. In this paper, we try to address these questions by performing a detailed measurement study on today's Gnutella. By the word "today", we mean that our work is different from others which focus on the previous version of Gnutella, Gnutella 0.4. With the data collected by our measurements, we seek to present a statistical study of some significant aspects of Gnutella, including queries, shared files,


peer roles, etc. By applying some innovative crawling approaches, we gain a remarkable performance improvement. For example, our crawler can theoretically collect all the submitted queries in the entire Gnutella network, which has never been accomplished before. To ensure the accuracy of our statistical data, a large population of peers is studied. Several novel findings can be learnt from our measurement results. First, Gnutella query arrival does not obey a Poisson distribution. Although a Two Poisson distribution has a better fitness, it still implies that new models should be proposed to fit the actual distribution of Gnutella query arrival. Secondly, the types of files shared by a single peer are very limited. More than one third of peers only share files of one type, and this is also true when MP3 files are left out. Thirdly, peer role selection in Gnutella does not perform well, which tremendously affects the performance of the entire system. Our statistical data could also be a foundation for future research on P2P modeling and simulation. The remainder of this paper is organized as follows: Section 2 summarizes the related work. In Section 3, we describe the design of the crawler and the measurement methodology. Section 4 provides data analysis, which contains the discussion of several phenomena. Conclusions and future work are given in Section 5.

2 Related Work

Previous measurement and analysis work can be divided into two types according to the approaches used to collect statistical information. One is crawling the P2P network with one or several P2P crawlers. Adar and Huberman[9] measured the Gnutella system and found a significant fraction of "free riders", which download files from other peers but do not share any files. Saroiu et al.[10] measured Napster and Gnutella peers in order to characterize peers by network topology, bottleneck bandwidth, network latency, shared files, etc. Markatos's work[11] also belongs to this kind. Different from the above work, he utilized crawlers located on different continents and showed that traffic patterns are very dynamic even over several time scales. However, all the above measurements were performed before 2003 and focused on Gnutella protocol 0.4, which has now been replaced by protocol 0.6. Some of their results might not provide up-to-date information about the Gnutella network. There are also some measurements performed under Gnutella 0.6. Klemm's work[12] characterized the query behavior in Gnutella 0.6. Another similar work, introduced by Kwok and Yang[13], also presented a study on Gnutella 0.6. They studied the duplication, length and content of Gnutella queries. Unfortunately, although their work gave an investigation of Gnutella queries, many important characteristics of the new Gnutella such as file distribution and peer roles were not studied in their papers. The other kind of measurement research[7] focuses on analyzing NetFlow[14] information provided by ISPs' routers. The representative work of this kind was


presented by Sen and Wang[7], which studied several P2P systems based on NetFlow data analysis. Their work provided a global view of P2P traffic, which may be hard to achieve with the first approach. However, this kind of measurement only observes flow-level statistics and might not well exhibit other characteristics such as peer behavior and peer shared files. Our measurement is based on the first approach. Unlike previous ones, our measurements are large-scale, because we use multiple crawlers with different roles and trace millions of peers. Since our crawlers are designed for Gnutella 0.6, they are able to provide the most up-to-date data. Thus we can avoid most of the problems mentioned above. Nevertheless, all this related work provides us with experience and fundamental data.

3 Statistical Information Collecting

3.1 Gnutella Protocol

As [10] introduced, without the centralized servers which are used in Napster[1][15] and OpenNap[16][17], Gnutella peers construct an overlay network by maintaining point-to-point connections with a set of neighbors. The Gnutella protocol specifies four message types: ping, pong, query and queryhit messages. Ping and pong messages are used to maintain overlay connectivity as well as to discover other peers. To locate a file, a peer initiates a controlled flooding by sending a query message to all of its neighbors (i.e. directly connected peers), which then forward the message to their neighbors. If a peer has one or more files that match the query, it responds with a queryhit message. In order to improve the scalability of the Gnutella network, Gnutella upgraded its protocol from 0.4 to 0.6, which is now the predominant protocol in Gnutella. In the new protocol, peers are divided into ultrapeers and leafpeers[12]. Peers with a high-bandwidth Internet connection and high processing power run in ultrapeer mode. On the contrary, less powerful peers run in leaf mode. A leafpeer keeps only a small number of connections open to ultrapeers. Only ultrapeers connect with each other. Ultrapeers handle all the routing and forwarding work for leafpeers. This has the effect of making the Gnutella network scale, by reducing the number of peers on the network involved in message handling and routing, as well as reducing the actual traffic among them. A query message is forwarded to all connected ultrapeers, but is only forwarded to the leafpeers that have a high probability of responding. This is achieved by sending a Query Routing Table (QRT)[2] from a leafpeer to its connected ultrapeer. This QRT contains the hashed values of the names of the files shared by the leafpeer, and it plays the part of a "Bloom Filter"[18]. When an ultrapeer has to decide whether it should forward a query to a certain leafpeer, it looks the query words up in the leafpeer's QRT. Thus an ultrapeer can filter queries and only forward them to the leafpeers most likely to have a match.
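A minimal sketch of the idea behind this QRT-based forwarding decision is given below. The bit-table layout and the hash function are simplifications for illustration; they are not the actual Gnutella Query Routing Protocol format or hash.

import java.util.BitSet;

final class SimpleQrt {
  private final BitSet table;
  private final int size;

  SimpleQrt(int size) {
    this.size = size;
    this.table = new BitSet(size);
  }

  private int slot(String keyword) {
    // simplified stand-in for the QRP hash function
    return Math.abs(keyword.toLowerCase().hashCode() % size);
  }

  // Leafpeer side: mark the keywords of shared file names.
  void addKeyword(String keyword) {
    table.set(slot(keyword));
  }

  // Ultrapeer side: forward a query only if every query word may match;
  // like a Bloom filter, this allows false positives but no false negatives.
  boolean mayMatch(String[] queryWords) {
    for (String w : queryWords) {
      if (!table.get(slot(w))) return false;
    }
    return true;
  }
}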

3.2 Crawling Gnutella 0.6

Crawler Design. Faced with the fact that Gnutella has a large population of concurrent users, we find that acquiring information about the entire Gnutella network, or even a large part of it, is very difficult with limited resources. To improve crawling performance, we divide the crawling task into several aspects. For each one, a group of specialized crawlers is built to fulfill the task. We will introduce the design of these crawlers in the rest of this section.

Fig. 1. A Brief Topology of the Crawling System (discovery, probing, and query logging crawlers; peer filter; seed selector; peer address, status, and query sets; GWebCache; Gnutella network)

Peer Discovery. In order to obtain data from a large set of Gnutella peers, the crawler has to discover as many peers as possible. Our crawler finds peers in three ways. The first is through the Gnutella handshaking protocol, which can be used to exchange a few IP addresses of peers. However, this discovery method seems to be relatively slow compared with some vendors' handshaking protocols, like LimeWire's handshaking. The other two are sending HTTP requests to a GWebCache[2] for a list of ultrapeer addresses and using Crawler Ping[2]. By comparing these approaches, we find that vendor handshaking performs the best.

Peer status probing. Peer status data is essential for the characterization of peers. After a discovery crawler records the address of a newly found peer, a probing crawler will handle the status probing work. In our measurements, peer status includes latency, bandwidth, shared file number, shared file size, daily online time, etc. All these parameters are obtained through Gnutella-defined messages, except for the first two, which are gathered by TCP/IP layer measurements. Previous work[10] provided a detailed introduction to these measurements.

Query logging. Compared with peer status probing, query logging collects data in a passive way. To perform large-scale query collecting, we also designed a special approach that utilizes the new Gnutella protocol. Early work introduced by Markatos[11] performed some studies on a relatively small set of queries in Gnutella 0.4, where efficiently collecting queries with few duplications is difficult. In Gnutella 0.6, only an ultrapeer forwards a query, with the help of the Query Routing Table, which contains information about the files shared by its leafpeers. At


first glance, this fact would limit the queries received by a leafpeer crawler to those relevant to its shared files. Recent work, such as that presented by Klemm[12], tries to avoid this limitation by setting up ultrapeer crawlers and collecting queries submitted by their leafpeers over a relatively long period. However, this approach can only record a very limited subset of the overall queries. In our measurements we run a number of leafpeer crawlers acting as query listeners. To break the limitation brought by the QRT, each crawler sends a query routing table that makes its connected ultrapeers believe that the crawler has any file that others may ask for. This is done by sending a fake query routing table that contains all the possible combinations of letters and digits with a length of three. Once an ultrapeer receives such a query routing table, it will forward any query it receives to our crawlers. Thus, our crawlers could theoretically record all the query messages of the entire Gnutella network, which has never been achieved before. Fig. 1 shows the topology of our crawling system. The Peer Filter in this figure takes charge of checking whether a newly found peer has already been discovered, as well as other filtering tasks. The Seed Selector provides the most valuable peers to the crawlers; e.g., if peer A is found to have lots of connected leafpeers, it will prefer to send A's address to the query logging crawler rather than the addresses of other peers. All crawlers are implemented in Java based on the open source LimeWire client, except for a script used for measuring the latency and bandwidth of peers. We use 8 IBM PCs for the purpose of crawling and 2 servers installed with SQL Server for data storage. Our crawlers captured a total of nine days of peer activity in three different periods, each lasting about three days, all from Tuesday to Thursday in the first three weeks of April, 2005. During the experiment, we discovered 3,595,203 Gnutella peers. About 19% of the overall population, more precisely 683,334 peers, were running in ultrapeer mode.
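A sketch of how such a saturating table could be produced: enumerate every three-character combination of letters and digits, each of which would then be hashed into the crawler's query routing table. The real crawler manipulates LimeWire's own query routing table, whose API is not shown here.

import java.util.ArrayList;
import java.util.List;

final class FakeQrtKeywords {
  // All three-character combinations of lower-case letters and digits:
  // 36^3 = 46,656 keywords, enough to make an ultrapeer believe the
  // crawler may have a match for any query.
  static List<String> allThreeCharKeywords() {
    String alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";
    List<String> keys = new ArrayList<>(alphabet.length() * alphabet.length() * alphabet.length());
    for (char a : alphabet.toCharArray())
      for (char b : alphabet.toCharArray())
        for (char c : alphabet.toCharArray())
          keys.add("" + a + b + c);
    return keys;
  }
}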

4 Data Analysis

4.1 Gnutella Query Arrival

Our statistical data contains 70,876,100 query messages collected from 218,842 ultrapeers in about 40 hours. In the following discussion, we try to characterize the distribution of query arrival in the Gnutella network with the above data. Previous work introduced by Pandurangan[19] studied the problem of maintaining an N-node P2P network as nodes join and depart according to a Poisson process. In another study[8], a Poisson distribution is used to generate queries for simulation purposes. However, it is interesting to explore what kind of distribution Gnutella query arrival really follows. To avoid the impact of incidental phenomena, our measurements collect query messages forwarded by a large number of ultrapeers. The query arrival information we gathered actually records the query arrival of a great many leafpeers that connect to these ultrapeers. Thus we believe our data is capable of describing the real situation of query arrival in the Gnutella network. We start by estimating how precisely a Poisson model can fit the actual distribution of query arrival.

Fig. 2. One Poisson Fitting. (a) One Poisson Fitting with 15 Minutes as Time Unit. (b) One Poisson Fitting with 3 Minutes as Time Unit.

Using maximum likelihood estimation, we construct many instances of the query arrival model with different time units in the Poisson distribution. We first divide the queries into time units and calculate the number of queries in each time unit. Then we obtain the frequencies with which different query numbers appear per time unit. Based on these frequencies, we can obtain a Poisson fitting using maximum likelihood estimation. From Fig. 2(a) and 2(b), where circles denote the observed frequency of a certain received query number in different time units (15 minutes and 3 minutes) and the curve is the probability estimated by the Poisson distribution, we can see that the Poisson model does not fit the actual distribution well. Since there are two obvious crests of the frequency in the figures, we alternatively use a Two Poisson distribution to model query arrival. The Two Poisson model is a simple example of a Poisson mixture:

Pr2P(x) = α·π(x, θ1) + (1 − α)·π(x, θ2)    (1)

As equation (1) shows, we use the method of moments[20] to fit the three parameters of the Two Poisson model: θ1, θ2 and α. When the time unit is set to 3 minutes, we find that α = 0.4037, θ1 = 136.4097 and θ2 = 41.2252 yield a much better fitness compared with the Poisson distribution. Fig. 3(a) and 3(b) illustrate that the Two Poisson distribution outperforms the Poisson distribution. We also use RMS (root mean square) errors, calculated by equation (2), to make a comparison between these two models:

err = √( Σ (est − obs)² )    (2)

In Table 1, TU and ON are abbreviations for time unit and observation number respectively, Err is the calculated RMS error, and DR is the drop rate of the Err of Two Poisson against that of One Poisson. From this table, we can see that the Two Poisson model obviously has a better fitness than the One Poisson model. Why Two Poisson? One possible reason is that the peer population is different in different regions; e.g., Asia has many fewer users than America.

Fig. 3. Two Poisson Fitting. (a) Two Poisson Fitting with 15 Minutes as Time Unit. (b) Two Poisson Fitting with 3 Minutes as Time Unit.

Table 1. RMS (Root Mean Square) Errors of One Poisson and Two Poisson

One Poisson          Two Poisson
TU   ON    Err       TU   ON    Err       DR
15   154   0.1888    15   154   0.1846     2.22%
 5   168   0.1741     5   168   0.1578     9.36%
 3   149   0.1824     3   149   0.1463    19.80%

Users of these two regions usually do not connect to the Gnutella network at the same time because they live in different time zones, which can have a time difference of 14 hours. Thus the peer population in different time periods, say 12 hours, could roughly equal either the peer population of Asia or that of America. This fact could further cause two most common frequencies of query submission, because the query submission rate is obviously proportional to the peer population. To prove this hypothesis, we use the keywords in queries to trace the changes in the Gnutella peer population. Fig. 4 shows the change in the Gnutella query number over 24 hours of April 6th, 2005, where the dashed line stands for the number of non-English queries, such as Chinese and Japanese queries, the solid line denotes the number of English queries, and the dash-dot line shows the number of overall queries. The number of overall queries dropped significantly at around 19:00 CST, which indicates that lots of English-speaking users go offline at this time. Right after that, the number of non-English queries starts to grow and reaches its peak value at 21:00 CST. However, since non-English-speaking users are not the predominant population in Gnutella, the overall query number still stays at a relatively low, but stable, level. Thus Fig. 4 clearly shows that different peer populations between regions could cause a Two Poisson distribution of query arrival, because there are roughly two most common frequencies of query arrival. However, Two Poisson still does not have a satisfying fitness against the actual query arrival distribution. Thus, a more in-depth study of the query arrival model will be part of our future work.

Fig. 4. The Number of Queries in Different Languages in 24 Hours of April 6th, 2005

In addition, the Poisson model may work well when query arrival is observed over a relatively short time (≤ 6 h), because the size of the population is not likely to change significantly during such a period.
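To make the model comparison of Section 4.1 concrete, the sketch below evaluates the Two Poisson mixture of equation (1) and the error measure of equation (2) against observed frequencies. The parameter values are the ones fitted above for the 3-minute time unit; the observed-frequency array is hypothetical input, and this is not the authors' actual analysis code.

final class PoissonFit {
  static double logFactorial(int x) {
    double s = 0.0;
    for (int i = 2; i <= x; i++) s += Math.log(i);
    return s;
  }

  // Poisson probability, computed in log space for numerical stability.
  static double poisson(int x, double theta) {
    return Math.exp(x * Math.log(theta) - theta - logFactorial(x));
  }

  // Equation (1): mixture of two Poisson distributions.
  static double twoPoisson(int x, double alpha, double theta1, double theta2) {
    return alpha * poisson(x, theta1) + (1 - alpha) * poisson(x, theta2);
  }

  // Equation (2): RMS error between estimated probabilities and observed frequencies.
  static double rmsError(double[] observed, double[] estimated) {
    double sum = 0.0;
    for (int i = 0; i < observed.length; i++) {
      double d = estimated[i] - observed[i];
      sum += d * d;
    }
    return Math.sqrt(sum);
  }
  // e.g. twoPoisson(x, 0.4037, 136.4097, 41.2252) for the 3-minute time unit
}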

4.2 Shared Files

During the measurements we found that many peers refuse to respond with their shared file information. As a result, we gathered shared file information covering 30,044,693 files from 514,562 Gnutella peers, which is about 15% of the total discovered population. However, since these peers are randomly distributed among the population, our statistical data can still reflect the real distribution of shared files in Gnutella. Distinguished by file extension, there are altogether 307 different file types in the data set. For the files that we collected, the total file size is about 122 TB, and the size of the files shared by Gnutella peers could be as large as 813 TB by a conservative estimation. As little previous work has given the overall distribution of Gnutella shared files a thorough examination, we provide this distribution as a starting point. Fig. 5(a) is the distribution of Gnutella files with respect to the number of files of each type.

Fig. 5. Distribution of Gnutella Shared Files. (a) Distribution of File Types with Regard to File Number: Audio 79%, Program 6%, Video 5%, Document 4%, Picture 4%, Archive 2%. (b) Distribution of File Types with Regard to File Size: Video 45.69%, Audio 45.41%, Archive 5.95%, Program 2.53%, Picture 0.23%, Document 0.19%.


It shows that audio files, or more precisely MP3 files, dominate the proportion of all the files. Fig. 5(b) is another distribution of Gnutella files, which considers the total file size of each type. We can see that audio files and video files are about the same size in total, each occupying around 45% of the overall shared contents. As far as we know, the distribution of shared files on a single peer has not been carefully studied. However, information about this distribution is quite important for the study of peer behavior and search optimization. For the rest of this section, we try to answer the question "Do peers tend to share files of various types, or just a few types?" by examining the distribution of file types on single peers. To perform this analysis, we use the entropy of shared file types on these individual peers as a tool to explore the purity of their shared files. This entropy is defined as follows:

Entropy = − Σ_{i=0}^{N−1} p_i · log2(p_i),   p_i = n_i / N    (3)

where n_i is the number of files of type i and N is the overall number of files on the observed peer. The benefit of using entropy is that we can examine shared file types and the number of files of each type at the same time. Moreover, entropy quantifies the purity of peers' shared files in a reasonable way; e.g., if a peer shares various types of files while most of the files belong to one type, this peer can still be considered to have high purity of its shared files. As Fig. 6(a) shows, 142,019 peers out of the total 514,562 peers have a zero entropy regarding file types. This suggests that a large number of peers (41.3%) only share files of one type. Furthermore, it is obvious that most peers have small entropies (78.3% of peers have entropies less than 1). However, since most files shared in Gnutella are MP3 files and many peers may share only MP3 files, the above results could be affected by this phenomenon. Thus we exclude MP3 files and analyze the entropy of file types again. As Fig. 6(b) shows, we can still find that many peers (37.3%) share files of one type and that the entropies of 62.6% of peers are less than 1, although the corresponding proportions drop a little. According to this result, we believe peers tend to share files of very limited types. Most peers have a few dedicated interests, and peers which randomly share files are rare.
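A small sketch of the entropy computation of equation (3), applied to the per-type file counts of a single peer; the map input is hypothetical.

import java.util.Map;

final class SharedFileEntropy {
  // Entropy of a peer's shared file types: p_i = n_i / N per equation (3).
  static double entropy(Map<String, Integer> filesPerType) {
    int total = 0;
    for (int n : filesPerType.values()) total += n;
    double h = 0.0;
    for (int n : filesPerType.values()) {
      if (n == 0) continue;
      double p = (double) n / total;
      h -= p * Math.log(p) / Math.log(2); // log base 2
    }
    return h; // 0.0 when all shared files have the same type
  }
}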

4.3 Peer Role Selection

One of the great changes from Gnutella 0.4 to 0.6 is that the 0.6 protocol defines two different peer roles, ultrapeer and leafpeer. An autonomously selected ultrapeer devotes some of its bandwidth to forwarding query messages and plays an important part in the performance of the system. Although a sophisticated peer role selection algorithm should at least be based on a peer's bandwidth and latency, we find that the result of clustering based on these two parameters does not match the actual peer roles very well. We randomly pick 1,000 peers, 176 ultrapeers and 824 leafpeers, from our measurement data, since too many peers would make the figures unreadable and these peers are adequate to state our point.

Fig. 6. Entropy of Shared Files on Individual Peers. (a) Number of Peers with Different Entropy (Including MP3 Files). (b) Number of Peers with Different Entropy (Excluding MP3 Files).

Note that the ultrapeer-to-leafpeer ratio is about 1:4, which is the same as the ratio in the Gnutella network. As Fig. 7(a) illustrates, peers are clustered by the K-Means algorithm into two classes. Circles represent peers that have high bandwidth and low latency; dots stand for peers that have poor Internet connections. Fig. 7(b) shows the real roles of these peers, where circles and dots denote ultrapeers and leafpeers, respectively. However, compared with Fig. 7(a), quite a few ultrapeers fall in the low-bandwidth, high-latency region. Obviously, such peers are not suitable to be ultrapeers. To be precise, we find that 20.45% of the ultrapeers in Fig. 7(b) are represented by dots in Fig. 7(a), which means that they are not selected properly. In addition, we also find that only 55.70% of ultrapeers have daily online time above the average. Notice that the performance of the Gnutella network largely depends on the performance of its "core", a network constituted by the connected ultrapeers, because this core is in charge of almost all the message routing and forwarding work. Thus the selection of ultrapeers becomes very important for the prosperity of Gnutella. To bring an intuitive understanding of how badly poor role selection could influence system performance, we examine the difference in system throughput between ideal and real role selection in the form of the overall traffic volume of peers. We first find the poorly selected peers according to the clustering result. Then we estimate the total improvement in traffic volume when giving these peers the right roles. In particular, we calculate the traffic volume based on the following equation:

TrafficVolume = Σ_i (B_i × DOT_i)    (4)

where peer i belongs to the set of poorly selected peers mentioned above, and B_i and DOT_i are the corresponding bandwidth and daily online time of peer i. We find that even for the 31 mis-selected ultrapeers in Fig. 7(b), the total daily traffic volume could increase by 3.23 GB if these peers were assigned the right role. Notice that this is only a conservative estimate, because well-selected ultrapeers can also accommodate more leafpeers. Modeling the performance of the Gnutella system is actually part of our ongoing work.
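A tiny sketch of the estimate of equation (4); the two arrays hold the measured bandwidth and daily online time of the poorly selected peers, and the layout is illustrative rather than the authors' actual data structures.

final class TrafficVolumeEstimate {
  // Equation (4): sum of B_i x DOT_i over the poorly selected peers.
  static double trafficVolume(double[] bandwidth, double[] dailyOnlineTime) {
    double total = 0.0;
    for (int i = 0; i < bandwidth.length; i++) {
      total += bandwidth[i] * dailyOnlineTime[i];
    }
    return total;
  }
}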

Fig. 7. Peer Role Selection. (a) Peer Clustering by Latency and Bandwidth (log-log scale). (b) Peer Classification by Latency and Bandwidth (log-log scale).

There could be many reasons for the inaccurate peer role selection. The local information used for the selection could be misleading, or the selection algorithm could be poorly implemented. However, we did find that some popular Gnutella clients implement a relatively simple ultrapeer selection mechanism; e.g., LimeWire permits a peer to run in ultrapeer mode if it has a public IP address.

5 Conclusions and Future Work

In this paper we presented a measurement study performed on Gnutella 0.6. We collected parameters covering different aspects, from the TCP/IP layer to the P2P overlay. Based on the results, we studied query and file distribution in Gnutella 0.6 and looked into the results of Gnutella peer role selection. Several conclusions emerged from the results of our measurements. First, the assumption that query arrival follows a Poisson distribution may not be true in Gnutella. Although more accurate models should be proposed, we found that Two Poisson fits the query arrival much better than One Poisson, which is widely used in models and simulations. Second, most files shared in Gnutella are MP3 files, and more than one third of peers share files of only one type; this is also true when MP3 files are left out. Third, many ultrapeers in Gnutella are not well selected. We showed that this phenomenon could seriously lower system performance. The statistical data provided in this paper is also useful for P2P modeling and for simulating real, up-to-date P2P environments. As part of ongoing work, we are carrying out more detailed research on peer behavior and the Gnutella query distribution. We are also in the process of building file spreading models for Gnutella.

References
1. Lui, S.M., Kwok, S.H.: Interoperability of peer-to-peer file sharing protocols. ACM SIGecom Exchanges 3 (2002) 25–33
2. RFC-Gnutella: RFC-Gnutella 0.6. http://rfc-gnutella.sourceforge.net/developer/ (2003)


3. KaZaA: Kazaa web site. http://www.kazaa.com/ (2001)
4. Good, N., Krekelberg, A.: Usability and privacy: a study of Kazaa p2p file-sharing. In Cockton, G., Korhonen, P., eds.: CHI, ACM (2003) 137–144
5. Qiu, D., Srikant, R.: Modeling and performance analysis of BitTorrent-like peer-to-peer networks. In Yavatkar, R., Zegura, E.W., Rexford, J., eds.: SIGCOMM, ACM (2004) 367–378
6. Bharambe, A.R., Herley, C., Padmanabhan, V.N.: Some observations on BitTorrent performance. In Eager, D.L., Williamson, C.L., Borst, S.C., Lui, J.C.S., eds.: SIGMETRICS, ACM (2005) 398–399
7. Sen, S., Wang, J.: Analyzing peer-to-peer traffic across large networks. IEEE/ACM Transactions on Networking (TON) 12 (2004) 219–232
8. Lv, Q., Cao, P., Cohen, E., Li, K., Shenker, S.: Search and replication in unstructured peer-to-peer networks. In: SIGMETRICS, ACM (2002) 258–259
9. Adar, E., Huberman, B.A.: Free riding on Gnutella. First Monday 5 (2000)
10. Saroiu, S., Gummadi, P.K., et al.: Measuring and analyzing the characteristics of Napster and Gnutella hosts. Multimedia Syst. 9 (2003) 170–184
11. Markatos, E.P.: Tracing a large-scale peer to peer system: An hour in the life of Gnutella. In: CCGRID, IEEE Computer Society (2002) 65–74
12. Klemm, A., Lindemann, C., Vernon, M.K., et al.: Characterizing the query behavior in peer-to-peer file sharing systems. In: Internet Measurement Conf. (2004) 55–67
13. Kwok, S.H., Yang, C.C.: Searching the peer-to-peer networks: the community and their queries. Journal of the American Society for Information Science and Technology 55 (2004) 783–793
14. Cisco: White paper - NetFlow services and applications. http://www.cisco.com/warp/public/cc/pd/iosw/ioft/neflct/tech/nappswp.htm
15. Napster: Napster web site. http://www.napster.com/ (2000)
16. Wikipedia: OpenNap from Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/OpenNap (2005)
17. Asvanund, A., Clay, K., Krishnan, R., Smith, M.D.: An empirical analysis of network externalities in peer-to-peer music-sharing networks. Info. Sys. Research 15 (2004) 155–174
18. Bloom, B.H.: Space/time trade-offs in hash coding with allowable errors. Comm. of the ACM 13 (1970) 422–426
19. Pandurangan, G., Raghavan, P., Upfal, E.: Building low-diameter p2p networks. In: Proc. of the 42nd IEEE Symposium on Foundations of Computer Science (2001) 492
20. Harter, S.: A probabilistic approach to automatic keyword indexing: Part I. On the distribution of specialty words in technical literature. Journal of the American Society for Information Science 26 (1975) 197–206

Automatically Constructing Descriptive Site Maps

Pavel Dmitriev and Carl Lagoze

Cornell University, Department of Computer Science, Ithaca, NY, 14853, USA
{dmitriev, lagoze}@cs.cornell.edu
http://www.cs.cornell.edu

Abstract. Rapid increase in the number of pages on web sites, and widespread use of search engine optimization techniques, lead to web sites becoming difficult to navigate. Traditional site maps do not provide enough information about the site, and are often outdated. In this paper, we propose a machine learning based algorithm, which, combined with natural language processing, automatically constructs high quality descriptive site maps. In contrast to the previous work, our approach does not rely on heuristic rules to build site maps, and does not require specifying the number of items in a site map in advance. It also generates concise, but descriptive summaries for every site map item. Preliminary experiments with a set of educational web sites show that our method can construct site maps of high quality. An important application of our method is a new paradigm for accessing information on the Web, which integrates searching and browsing.

1 Introduction

Recent research indicates that the Web is continuing to grow rapidly. However, the number of web sites has not increased much over time [8]. Thus, the growth is mostly due to the increase in the number of pages on web sites. According to the OCLC web survey [7], the average number of pages on a public web site was already 441 in 2002. Such an increase in the complexity of web sites inevitably makes them more and more difficult to navigate.

It is not only the growth in the number of pages that complicates navigation on a web site. The dominance of search engines as the primary method of accessing information on the web discourages web site developers from paying enough attention to making their web sites easy to navigate. In addition, developers employ tricks to raise the ranking of their sites by search engines, making navigation even more difficult [9].

The traditional way to help users navigate through a web site is a site map. A site map is a web page that contains links to all the main sections of the web site and, possibly, a concise description of what each section is about. A site map is usually created and maintained by the web site administrator. In reality, however, many web sites do not have site maps, and those that do usually lack descriptions of sections and are often outdated. This is not surprising, since site maps have to be created and maintained manually. Thus, there is a strong need for a technique capable of automatically generating accurate and descriptive site maps.


The task of automatically creating a site map can be thought of as consisting of several steps. First, important sections on a web site must be identified1. Then, the sections must be combined into a hierarchy, taking into account user-defined constraints, such as the desired depth of the site map, maximum number of items in the site map, etc. Next, anchortext must be generated for every item. Finally, summaries for the sections need to be generated, which accurately describe contents of the sections. There are several important problems one needs to solve in order to successfully implement the above steps. There are a variety of structural and content features of web pages that determine important sections on web sites, such as content, anchortext, link patterns, URL structure, common look-and-feel. Simple heuristics that use only one or two of these features are not sufficient to identify web site sections [3]. A more complex approach is required, which would combine all these diverse features in an appropriate way. Combining the sections into a hierarchy is difficult, too, since the link structure of the site usually allows for multiple ways to do it, and user-defined constraints may require merging some of the sections. Finally, generating the titles and summaries must be done in such a way that the user has enough information to decide whether he or she should navigate to the section, while keeping the summaries small to allow skimming through the overall structure of the web site. In this paper we propose a method for automatically constructing descriptive site maps. In contrast to the previous work, which used a heuristic-based approach [6], our work is based on a new semi-supervised clustering algorithm [3]. Given several sample site maps, a learning algorithm decides on the best way to combine multiple diverse features of web pages into a distance measure for the clustering algorithm, which is used to identify web site sections. Note that, in contrast to [6], our approach does not require specifying the number of clusters/sections in advance – the clustering algorithm decides on the optimal number of clusters automatically. The resulting clusters are then processed to identify leader pages, and iteratively merged into a hierarchy that satisfies user-defined constraints. Finally, titles and summaries are generated for site map items using multi-document summarization techniques. An important application of our algorithm is a new paradigm for accessing information on the Web, which integrates searching and browsing. Currently, when the user clicks on a link on a search engine results page, they get to a page that is totally out of context. There is generally no way to find out the function of this page in the web site as a whole, or navigate to related pages or other sections of the site. This is because web sites are generally designed with the assumption that the user will always start navigation from the root page of the site. Search engines, on the other hand, bring users to a “random” page on the site. To solve this problem, search engines could use our algorithm to build a site map for every site in their index. This can be done offline and in incremental fashion. Then, the site map, as well as various statistics about the site, can be presented to users when they navigate from search engine to the page. The rest of the paper is organized as follows. The next section presents an overview of related work. Section 3 describes our algorithm for constructing descriptive site maps. 
Section 4 discusses our experiments, and Section 5 concludes the paper.

1 Note that this is different from simply identifying important pages. Important pages are the root pages of the sections; however, an assignment of each regular page to a root page must be computed as well, since it is essential for generating a high quality summary of the section.


2 Related Work

Research relevant to our work can be divided into two categories: recovering the internal structure of a web site and automatic site map construction, and multi-document summarization. Below we give an overview of work in each of these areas.

Recovering Web Site Structure. Eiron and McCurley [4] propose a heuristic approach to identifying important sections on a web site, which they call compound documents (cDocs). Their approach relies heavily on the path component of the URL. Every outlink on a page can be classified as an up, down, inside, or across link, according to whether it links to a page in an upper, lower, same, or unrelated directory. Eiron and McCurley observe that, since cDocs are usually authored by a single person and created over a short period of time, they tend to contain the same (or very similar) set of down, inside, and across links. Another heuristic they use is based on the observation that outlinks of the members of a cDoc often have the same anchor text, even if their targets are different. Finally, they assume that a cDoc cannot span multiple directories, with the exception of the leader page of a cDoc being in one directory and all other pages being in a subdirectory. These heuristics are manually combined to identify cDocs on web sites.

Li et al. [6] propose a heuristic approach to identifying logical domains – information units similar to cDocs – on web sites. Their heuristics are based on observations about what leader pages of logical domains tend to look like. They use a number of heuristics exploiting file name, URL structure, and link structure to assign a leader score to every page on the site. The top k pages are picked as leader pages of logical domains, and every other page is assigned to one of the leader pages based on the longest common substring of the URL, making adjustments to ensure that all pages of a logical domain are accessible from the leader page.

The approach of [6] was also applied to automatic site map construction [2]. First, logical domains are identified using the approach described above. Then, for every pair of parent and child logical domain leader pages, the l pages best describing the association between the two leader pages are chosen using a PageRank-style algorithm. The site map is then constructed by finding shortest paths between all selected pages. The approach does not include generating anchortexts or summaries.

Both of the above approaches are based on a number of pre-defined heuristics, and thus fail to account for the variety of content, style, and structural conventions existing in different communities on the Web. In addition, the latter approach requires specifying the values of the parameters k and l in advance. Trying to overcome the limitations of the methods mentioned above, we proposed in our previous work [3] a new approach to finding compound documents. This work uses an extension of that approach to identify important sections on web sites. We describe our approach in detail in Section 3.

Summarizing Multiple Documents. The task of multi-document summarization is to produce, given a set of documents, a coherent summary of a specified size describing the contents of all documents at once. The approaches to multi-document summarization can be split into two classes: those that produce summaries consisting of whole


sentences (or large sentence fragments) extracted from the original documents, and those that produce summaries consisting of new, automatically generated sentences based on semantic information extracted from the documents. Although there has recently been some work in the latter area, the proposed methods are computationally intensive and, to our knowledge, do not perform significantly better than approaches from the simpler former class. Thus, we concentrate only on the former class here.

Most of the approaches in the former class consist of three main stages. On the feature extraction stage, a number of features relevant to estimating the significance of each sentence are extracted; on the sentence ranking stage, every sentence is assigned a significance score based on the extracted features; on the summary generation stage, a summary is generated from highly ranked sentences, paying particular attention to avoiding adding redundant information to the summary. Due to space limitations, we do not discuss how each of these stages is implemented in existing systems. The interested reader is referred to the DUC web site, http://duc.nist.gov, for more information.

Our approach to generating summaries follows the standard framework outlined above. We describe the particular methods used in Section 3.

3 Constructing Descriptive Site Maps

Our approach to constructing descriptive site maps consists of three stages. On the compound documents identification stage, a web site is split into a set of cDocs, each cDoc representing an important section of the site; on the site map construction stage, the cDocs are combined into a hierarchy to produce a site map according to user-defined criteria; on the anchortext and summary generation stage, anchortext is generated for every item in the site map, and a summary is extracted for every leaf item. We describe each of these stages in detail in the subsequent sections.

Finding Compound Documents. This stage is implemented using the system for automatic identification of compound documents we developed in our previous research [3], extended with additional functionality. We represent a web site2 as a graph with nodes corresponding to pages, and edges corresponding to hyperlinks. The process of finding cDocs consists of two phases. On the training phase (Fig. 1, left), a human labels cDocs on several web sites. Then, a vector of features3 is extracted for every hyperlink, and a logistic regression model is trained to map vectors of feature values to weights on the corresponding edges of the graph. On the working phase (Fig. 1, right), given a new web site, we repeat the process of extracting the features from the pages, and transforming them

2 In this paper, by a web site we mean all pages under the same domain name.
3 The features we use are content similarity, anchor text similarity, title similarity, number of common inlinks, number of common outlinks, number of outlinks with common anchor text (the outlinks linking to different pages, but having identical anchor text), and whether the hyperlink points to the same directory, upper directory, lower directory, or other directory with respect to the path component of the URL.



Fig. 1. Training phase (left), and working phase (right) of the process of identifying compound documents. Solid edges are positive, and dotted edges are negative training examples.

into vectors of real values. Then, we use the logistic regression model to compute weights on the edges. Finally, a graph clustering algorithm is applied to the weighted graph. The clusters are the cDocs we are looking for.

We make a few notes about our clustering algorithm. The algorithm is an extension of the classical agglomerative clustering algorithm. It starts with every page being a singleton, and then repeatedly merges the clusters, picking edges one by one in the order of decreasing weight. However, there is a notable difference from the classical case. Our algorithm usually stops before all the pages have been merged into a single cluster, since it may decide not to merge the two clusters under consideration on a particular step, based on the average weights of edges within the clusters and the weight of the edge connecting them. The threshold controlling this process is learned during the training phase.

In this work, we extended the clustering algorithm described above with the ability to satisfy the alldiff constraint, which requires, for a given set of pages, that no two of them belong to the same cluster. To satisfy it, every time we want to merge two clusters, we have to check that the merge is allowed. Note that this only introduces a constant overhead on every step of the clustering process4. We also note that this approach can be applied to enforce the alldiff constraint in the classical agglomerative algorithm as well, making it produce, instead of a single cluster, a number of clusters equal to the number of pages in the alldiff set. We apply the alldiff constraint to all pages with filenames index. or default.. The intuition (confirmed experimentally) is that two pages having such filenames are unlikely to belong to the same cDoc. Moreover, as we discuss in the next section, such pages are very likely to be leader pages of their cDocs.

Constructing Site Maps. To construct a site map we need, for every cDoc, to (1) find a leader page, i.e., the most natural entry point into the cDoc, (2) combine all cDocs

4 To see this, let every page keep a pointer to its clusterID, and let every clusterID have a mark indicating whether it contains a page from the alldiff set. Then, to check the constraint we simply check whether both clusterIDs are marked, and, when two clusters are merged, we only need to update the clusterID and the mark fields.


on a site into a single hierarchy, and (3) apply user-defined constraints to the hierarchy to produce a site map satisfying the user's requirements. Below we describe how each of these steps is implemented.

Identifying leader pages in cDocs is important, because these are the pages that items in the site map should link to. We use two simple but reliable heuristics to identify the leader page in a cDoc5. First, the filename of every page is examined. If a page with filename index. or default. is found, it is chosen to be the leader page. Note that there can be at most one such page in a cDoc, due to the alldiff constraint described earlier. Second, for cDocs that do not contain pages with the above filenames, the page with the greatest number of external inlinks (i.e., inlinks from other cDocs) is chosen to be the leader page.

After the leader pages are identified, the cDocs are combined into a hierarchy to produce a site map. The cDoc containing the root page of the web site is taken to be the root node of the site map. Then, all outlinks from the pages of the root cDoc are examined, and all cDocs whose leader pages these outlinks point to are taken to be the descendants of the root node in the site map. New nodes are then examined in breadth-first order, and processed according to the same procedure.

Finally, user-defined constraints are applied to the site map produced in the previous step. Currently, the user can specify three types of constraints in our system: the minimum number of pages a cDoc must have to be present in a site map, the maximum depth, or number of levels, in the site map, and the maximum number of leaf items in the site map. These constraints allow the user to balance the comprehensiveness and readability of the site map. All three constraints are enforced in a similar manner, by iteratively traversing the site map in a depth-first order and merging the nodes that violate the constraints into their parent nodes. Table 1 shows the values for these constraints used in our experiments.

Generating Summaries and Anchortexts. To generate summaries and anchortexts, we follow the standard multi-document summarization framework described in Section 2, i.e., the following three steps are performed: feature extraction, sentence ranking, and summary and anchortext generation. On the feature extraction stage, anchortexts, titles, and content are extracted for every site map item (referred to as texts in the remainder of the paper). In addition, the centroid, as well as the k most frequent keywords, are extracted for every text. After that, every text is processed with the Automated English Sentence Segmenter6 to identify sentence boundaries. Filtering is then applied to mark sentences not likely to be useful for the summary (the ones containing email addresses, phone numbers, copyright statements, etc.). These sentences will only be considered if a summary of the required size could not be constructed from unmarked sentences.

5 Our original plan was to train a classifier using the various features that could potentially be useful in determining the leader page of a cDoc. However, experiments with the heuristic approach showed that there is no need for that. Indeed, the heuristic approach showed 100% accuracy at identifying leader pages on all cDocs from 5 randomly chosen sites from our dataset.
6 http://www.answerbus.com/sentence/
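To make the site map construction stage more concrete, the following is a minimal sketch of the leader-page heuristics and the breadth-first hierarchy assembly described above. The cDoc structure, the link maps, and the helper names are illustrative assumptions, not the authors' implementation, and the pruning under the Table 1 constraints is left out.

    from collections import deque

    def leader_page(cdoc, external_inlinks):
        """Leader page heuristic: prefer an index./default. page (at most one exists
        per cDoc because of the alldiff constraint); otherwise take the page with
        the largest number of inlinks from other cDocs."""
        for url in cdoc.pages:
            name = url.rsplit('/', 1)[-1].lower()
            if name.startswith('index.') or name.startswith('default.'):
                return url
        return max(cdoc.pages, key=lambda u: external_inlinks.get(u, 0))

    def build_site_map(cdocs, root_page, outlinks, external_inlinks):
        """Breadth-first hierarchy assembly: the cDoc containing the site's root page
        becomes the root node; any cDoc whose leader page is linked from a node's
        pages becomes that node's child."""
        leaders = {leader_page(c, external_inlinks): c for c in cdocs}
        owner = {page: c for c in cdocs for page in c.pages}

        root = owner[root_page]
        children = {id(root): []}
        seen = {id(root)}
        queue = deque([root])
        while queue:
            node = queue.popleft()
            for page in node.pages:
                for target in outlinks.get(page, ()):
                    child = leaders.get(target)   # only links to leader pages count
                    if child is not None and id(child) not in seen:
                        seen.add(id(child))
                        children[id(node)].append(child)
                        children[id(child)] = []
                        queue.append(child)
        return root, children

Enforcing the user-defined constraints of Table 1 would then amount to a depth-first pass over the resulting tree that merges undersized or too-deep nodes into their parents.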


For sentence ranking, we experimented with two approaches: rank the sentences according to the similarity of the sentence to the centroid of the text, and rank the sentences according to the similarity of the sentence to the k most frequent keywords of the text.

Once the ranking is computed, we can generate anchortexts and summaries for our site map. We tried three approaches to anchortext generation: the highest ranked sentence from the text consisting of anchortexts of the leader page of the item, the highest ranked sentence from the text consisting of titles of all pages of the item, and the title of the leader page of the item. For summary generation, we tried eight approaches, which differ in the type of text they used and in the ranking method. We applied the two ranking methods mentioned above to the text of the leader page, the text of all pages, and the text of the first sentences of all pages. In addition, we tried a hybrid approach, taking the text of the leader page and using the centroid and keywords of all pages to rank the sentences.

  Filter out all sentences of length more than a threshold;
  Pick the highest ranked sentence and include it in the summary;
  Until (the summary contains the desired number of sentences) do {
    Pick the next highest ranked sentence;
    If ((similarity of the sentence picked to the existing summary < t1) &&
        (its similarity to the centroid/keywords > t2))
    Then include the sentence in the summary;
    Else drop the sentence and continue;
  }

Fig. 2. Summary generation algorithm

Table 1. Parameters of the algorithm

Name | Description | Value
Min. # pages in an item | The minimum number of pages a site map item has to contain | 3
Max. depth of a site map | The maximum height of a site map tree | 3
Max. # leaf items | The maximum number of leaf items in a site map | 30
Max. summary length | The maximum # sentences in the summary of a site map node | 3
Max. sentence length | The maximum # characters a sentence may have to be included in the summary (longer sentences can still be included, but must be cut up to 150 characters) | 150
Sentence similarity threshold | The largest similarity a new sentence may have to the existing summary to be added to the summary | 0.5
Sentence relevance threshold | The smallest similarity a new sentence must have with the centroid/keyword list to be added to the summary | 0.1
# keywords | Number of words used to generate the list of most frequent terms | 5


The summary generation algorithm used in all these cases is shown in Figure 2. It has a number of parameters. The desired number of sentences in the summary is a parameter set by the user, which lets them control the size of the generated summary. The sentence similarity threshold, t1, is used to avoid including in the summary two sentences that are very similar. The lower the value of this threshold, the stricter the requirement that a new sentence included in the summary must be substantially different from the current summary. Finally, the sentence relevance threshold, t2, ensures that the sentence picked, while being substantially different from the rest of the summary, is still relevant to the centroid or contains some of the frequent keywords. Table 1 shows all parameters used by our algorithm.
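As an illustration only, a rough Python rendering of the loop in Fig. 2 could look like the sketch below; similarity stands in for whatever cosine-style measure over term vectors is used for ranking, and the default values simply mirror Table 1 rather than the authors' code.

    def generate_summary(ranked_sentences, centroid, similarity,
                         max_sentences=3, max_len=150, t1=0.5, t2=0.1):
        """Greedy summary construction in rank order: skip sentences that are too
        similar to the summary built so far (>= t1) or not similar enough to the
        centroid/keyword list (<= t2)."""
        # over-long sentences are only used, truncated, if nothing else is available
        candidates = [s for s in ranked_sentences if len(s) <= max_len] or \
                     [s[:max_len] for s in ranked_sentences]
        if not candidates:
            return []
        summary = [candidates[0]]                 # highest ranked sentence
        for sentence in candidates[1:]:
            if len(summary) >= max_sentences:
                break
            if (similarity(sentence, " ".join(summary)) < t1 and
                    similarity(sentence, centroid) > t2):
                summary.append(sentence)
        return summary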

4 Experimental Results

For the purpose of evaluating our algorithm, we used the same dataset that we used in [3] to evaluate our system for finding compound documents. This dataset consists of 50 web sites on educational topics. However, since many of the web sites in that dataset were too small to make site map construction for them interesting, we ended up using only 20 web sites. We are currently working on preparing a significantly larger dataset, which we will use for a more thorough evaluation. It is difficult to formally evaluate the quality of site maps, since there are too many subjective factors involved in deciding how good a site map is. Therefore, we conduct a separate evaluation of every component of the system. Figure 3 can give the reader an idea of what a typical site map generated by our system looks like.

Compound Documents Identification. Since the quality of the system for identifying compound documents was extensively evaluated in [3], and we use the same dataset to evaluate our system, we do not repeat the details of the evaluation here. The key results of [3] were that the system could correctly identify most of the compound documents, that a small number of training sites (typically 6) was enough to train a good logistic regression model, and that the system was relatively insensitive to the choice of training examples. Here we used 10 sites for training and the other 10 for testing.

Leader Page Selection. As we mentioned in Section 3, our heuristic approach to leader page identification produced 100% accuracy when evaluated on all the compound documents from the 5 evaluation sites.

Site Map Construction. We compare the site map produced from automatically identified compound documents to the one produced from gold-standard compound documents. Out of the 5 evaluation sites, site maps for 2 sites were identical, site maps for another 2 sites were very similar, with differences only in a few leaf-layer nodes, and for one of the sites the site map produced from the automatically identified cDocs was substantially different (and of lower quality) than the one produced from the gold-standard cDocs. The difference, however, could be minimized by setting different values for the maximum number of allowed leaf nodes and maximum depth of


The Prime Pages (prime number research, records and resources) o Lists of primes ƒ Single primes Keywords: prime, certificate, primes, bit, random. A 36,007 bit "Nearly Random" Prime A 36,007 bit "nearly random" prime This 10,839 digit prime does not have a nice short description!... While new records with cyclotomy are being boiled, I have another kind of large prime for you: a 36007 bits almost random, proved prime.... Mersenne Glossary Prime Curios!... http://www.utm.edu/research/primes/lists/single_primes/ - 7pages ƒ Lists of small primes (less than 1000 digits) Keywords: primes, first, digits, prime, twin. Move up one level] From other sites All the primes below 100,711,433 (5.8 million primes) All the primes below 2,000,000,009 (98 million primes)... Prime Lists FAQ e-mail list Titans Prime Links Submit primes What is small? Depends on your context, but since this site focuses on titanic prime... Lists of small primes (less than 1000 digits) Lists of small primes (Another of the Prime Pages' resources) Home Search Site Largest The 5000... http://www.utm.edu/research/primes/lists/small/ - 4pages ƒ Primes just less than a power of two Keywords: prime, bits, primes, 69, 45. Pages: 8-100 bits, 101-200 bits, 201-300 bits, 301-400 bits. n ten least k's for which 2n-k is prime.... Prime Lists FAQ e-mail list Titans Prime Links Submit primes Here is a frequently asked question at the Prime Pages: I am working on an algorithm and... Prime Lists FAQ email list Titans Prime Links Submit primes When designing algortihms, sometimes we need a list of the primes just less than a power... http://www.utm.edu/research/primes/lists/2small/ - 5pages o Modular restrictions on Mersenne divisors Keywords: prime, mod, p-1, theorem, proof. Let p be a prime and a any integer, then ap = a (mod p).... 1 (mod p)). Finally, multiply this equality by p-1 to complete the proof.... Let p be a prime which does not divide the integer a, then ap-1 = 1 (mod p).... http://www.utm.edu/research/primes/notes/proofs/MerDiv.html - 5pages o Proofs that there are infinitely many primes Keywords: prime, primes, primality, theorem, test. Prime Lists FAQ e-mail list Titans Prime Links Submit primes Euclid may have been the first to give a proof that there are infintely many primes.... Prime Page References Prime Page References (references for the Prime Pages) Home Search Site Largest Finding How Many?... Notice that different a's can be used for each prime q.) Theorem 2 can be improved even more: if F... http://www.utm.edu/research/primes/notes/proofs/infinite/index.html - 17pages

Fig. 3. A fragment of a site map for www.utm.edu/research/primes/index.html built using text of all pages and keyword-based ranking

the site map7. Overall, the performance of the system on this step, though not as good as on the previous step, is still very good.

Anchortext Generation. Contrary to our expectations, the approach using anchortexts performed rather poorly, while the two approaches using titles showed reasonably good (and in most cases identical) results. It turned out that anchortexts within a single site often do not contain descriptive information at all, referring with the same word, such as "introduction" or "index", to almost all leader pages. In many cases

7 In general, we found that the optimal values for the parameters directing the site map generation process vary from one web site to another. Currently, we simply let the user specify these values. In the future we plan to investigate ways of selecting them automatically, depending on, for example, the number of pages or the number of cDocs on the web site.


anchortexts repeat titles of the pages they point to, and often they contain pictures, rather than words. This is particularly interesting because anchortexts have been found very effective in web information retrieval [1], as well as information retrieval in intranets [5], exactly because of their descriptive properties: the home page of Google does not contain the phrase “search engine”, but anchortexts often do. We attribute the difference in the results between our own and previous experiments to the difference in the nature of the datasets. Previous work studied the public web, or a large corporate intranet as a whole. To our knowledge, this work is the first to analyze anchortexts within individual web sites. Our results suggest that anchortexts within the same web site are mostly useless. This suggests that search engines might be able to improve the quality of their results by ignoring all anchortexts that come from the same site as the page itself.

Fig. 4. Summary generation results using text centroids (left) and keywords (right)

Summary Generation. Again, 5 sites were picked randomly, which resulted in 58 items with summaries. The algorithm described in Section 3 was used to generate summaries in eight different ways. Then, we evaluated the summaries generated using each of the methods according to the following question: "does the summary give the user a good idea of the topic/function/goal (depending on the type of the item) of the site map item it describes?" The possible answers are: (a) gives a very good idea; (b) gives a pretty good idea, however, some important information is missing; (c) gives a rough idea of it; (d) there are some clues of it in the summary, but they are not very easy to see; (e) the summary contains no useful information8. The results are shown in Figure 4.

8 This style of summary evaluation is one of the most commonly used in the summarization community. In DUC 2004, 7 questions similar to ours were used to compare generated and ideal summaries. In our case, however, we do not have ideal summaries, so we cannot follow the same approach. In addition, the purpose of our summaries is different from that of DUC-style summaries. Rather than summarizing all the important points discussed on the pages belonging to the item, we want our summaries to give the user an idea of the topic, function, or goal of this particular section of the web site, so that they can decide whether to navigate to that section. Thus, we use a single question reflecting that.


good summaries (answer a or b). The method using all pages of an item produced more summaries of very high quality than the other methods. However, it also produced a larger number of poor summaries. The reason is that if multiple topics are discussed in a particular section of the web site, and this whole section corresponds to a single site map item, then the method tends to generate a summary describing only the most frequently mentioned topic.

Somewhat surprising was the very good performance of the methods using the text of the leader page. We think that, to a large extent, this is due to the nature of our dataset. On educational web sites people provide good leader pages that give a comprehensive description of the contents of their sections. We plan to conduct more experiments to see whether this remains true for other domains. The poor performance of the methods using first sentences of pages is mainly due to the poor quality of the first sentences themselves. While in regular text first sentences often provide an overview of the section, on the Web they tend to contain lists of links to other sections of the web site, or header information irrelevant to the main content of the page. We conclude that such methods are not suitable for summary generation on the Web. Finally, we did not observe a significant difference in quality between summaries generated using text centroids and keywords9.

Overall, the experiments showed that our method could generate high quality site maps. We believe that better parameter tuning can improve the results even further. We also plan to conduct a larger scale experiment to verify that these results hold on a larger number of web sites across multiple domains.

5 Conclusion and Future Work

In this paper we described a system for automatic construction of descriptive site maps. The system is based on a combination of machine learning and clustering algorithms, which are used for automatic identification of web site sections. This framework provides a natural way of combining multiple structural and content features of web pages, and allows for automatic selection of the optimal number of sections. The sections are then combined to produce a site map, and multi-document summarization techniques are used to generate anchortexts and summaries for the items. Experimental results on a set of educational web sites show that our method can generate high quality site maps. We believe that our work provides a good example of how Web content and structure mining, combined with natural language processing, can be used to solve an important practical problem.

In the future we plan to extend this work in several directions. First, we will investigate the problem of automatic parameter selection for site map and summary generation. An important aspect of this problem is automatic evaluation of the quality of a summary and of the overall quality of a site map. Second, we plan to evaluate our system on

9 We note, however, that the users who saw our results generally preferred the way we present summaries generated using keyword centroids (see Figure 3). Probably, this is due to the fact that these summaries look similar to the snippets from a search engine results page. In fact, we intentionally tried to make the summaries look similar to Google's search results.


a larger number of web sites from multiple domains. Finally, we will explore new browsing paradigms resulting from integrating our approach with a search engine.

Acknowledgments

This work is supported by the National Science Foundation under Grants No. 0227648, 0227656, and 0227888.

References

1. Anchor Text Optimization. http://www.seo-gold.com/tutorial/anchor-text-optimization.html
2. Candan, K.C., Li, W.-S.: Reasoning for Web Document Associations and its Applications in Site Map Construction. Data and Knowledge Engineering 43(2), November 2002
3. Dmitriev, P., Lagoze, C., Suchkov, B.: As We May Perceive: Inferring Logical Documents from Hypertext. 16th ACM Conference on Hypertext and Hypermedia, September 2005
4. Eiron, N., McCurley, K.S.: Untangling compound documents on the web. 14th ACM Conference on Hypertext and Hypermedia, August 2003
5. Eiron, N., McCurley, K.S.: Analysis of anchor text for web search. 26th ACM SIGIR Conference, July 2003
6. Li, W.-S., Kolak, O., Vu, Q., Takano, H.: Defining logical domains in a Web site. 11th ACM Conference on Hypertext and Hypermedia, May 2000
7. OCLC Web Characterization Project. http://wcp.oclc.org/
8. O'Neill, E.T., Lavoie, B.F., Bennett, R.: Trends in the Evolution of the Public Web, 1998-2002. D-Lib Magazine 9(4), April 2003
9. Wall, D.: How to Steal to the Top of Google. http://www.seochat.com/c/a/Google-Optimization-Help/

TWStream: Finding Correlated Data Streams Under Time Warping

Ting Wang

Department of Computer Science, University of British Columbia
[email protected]

Abstract. Consider the problem of monitoring multiple data streams and finding all correlated pairs in real time. Such correlations are of special interest for many applications; e.g., the prices of two stocks may demonstrate quite similar rise/fall patterns, which provides the market trader with an opportunity for arbitrage. However, the correlated patterns may occur on any unknown scale, with arbitrary lag, or even out of phase, which blinds most traditional methods. In this paper, we propose TWStream, a method that can detect pairs of streams whose subsequences are correlated under elastic shift and arbitrary lag in the time axis. Specifically, (1) to accommodate varying scale and arbitrary lag, we propose to use the geometric time frame in conjunction with a piecewise smoothing approach; (2) to detect unsynchronized correlation, we extend the cross correlation to support time warping, which is considerably more robust than Euclidean-based metrics. Our method has a sound theoretical foundation, and is efficient in terms of both time and space complexity. Experiments on both synthetic and real data demonstrate its effectiveness and efficiency.

1 Introduction

Advances in hardware technology have made it possible to automatically collect data in a stream-like manner. Typical applications of data streams include sensor networks, financial data analysis, and moving object tracking. The processing and mining of data streams have attracted intensive research recently. Some extensively studied problems include summarization [19], clustering [11], similarity search [9], etc.

In this paper, we investigate another interesting problem, namely generalized correlation detection, that is, to monitor multiple streaming time series1 and detect all correlated pairs in real time. The correlation is meant in a general sense: two series are considered similar if a significant part of them (subsequences) demonstrates highly similar rise/fall patterns, neglecting shifts and lags in the time axis. Fig. 1(a) shows two sample time series X and Y, whose correlated parts (subsequences of the same length) are highlighted. The one of Y is lagging

1 Following, we will use series and sequence interchangeably.


its counterpart of X by about 3000 time units. Fig. 1(b) illustrates two ways of matching the subsequences and computing the correlations. The left plot is the traditional one-by-one alignment method. Since the patterns of these two subsequences are not synchronized, it reports a low similarity. The time warping shown in the right plot, in contrast, allows for flexible shifts in the time axis and produces a more intuitive result. It can be expected that the correlation based on the match produced by time warping is more accurate than the canonical one.


Fig. 1. Two correlated time series under time warping. (a) two subsequences (highlighted) of the time series are correlated, with lag of about 3000 time units. (b) two possible ways of matching the subsequences, without/with time warping.

Detecting such generalized correlations in data streams is challenging for several reasons: (1) Streams grow continuously in length at a high rate. It is impractical to store voluminous historical data in memory, so the naïve algorithm of sequentially comparing each pair of subsequences is unacceptable. (2) We cannot assume any knowledge regarding the length of the subsequences to be compared, which is usually unavailable a priori. (3) The computation of time warping is expensive. Consequently, employing time warping for each pair of subsequences is prohibitive.

In this paper, we aim to overcome all the problems above in detecting generalized correlations in data streams. To be specific, our main contributions are as follows:

– We propose the concept of generalized correlation, which, to the best of our knowledge, subsumes all the existing definitions of the correlation between two streams. We combine the concepts of cross correlation and time warping, and propose a similarity measure much more robust than the canonical correlation.
– We present a method called TWStream2 that captures generalized correlations in data streams in real time. The algorithm handles semi-infinite, high-rate data streams incrementally and efficiently, and incurs negligible error.
– Experiments on both synthetic and real life data are performed. Our method runs 10^5 times faster than the naïve one, while the relative error is typically around 1%.

2 “TWStream” stands for Time Warping Stream.


The remainder of the paper is organized as follows: Section 2 gives a brief survey of the related work on stream processing and mining. The details of our TWStream framework are presented in Section 3. We give a theoretical analysis of the accuracy and complexity of our approach in Section 4. Section 5 reviews the experimental results and Section 6 concludes this paper.

2 Related Work

Recently the processing and mining of streaming data have attracted intensive research. Some well studied problems include summarization [19], clustering [11] and similarity search [9]. We focus on the work on detecting correlations between streams.

Yi et al. [18] propose a method to analyze co-evolving time sequences by modelling the problem as a multi-variate linear regression. Zhu et al. [19] propose StatStream for monitoring multiple time series. It divides a user-specified window of the most recent data into a set of basic windows, and maintains DFT coefficients for each basic window. It allows batch updates and efficient computation of inner products. However, it cannot detect the correlation between two subsequences with a lag larger than the basic window. Meanwhile, setting the length of the sliding window requires a priori knowledge, which influences the sensitivity of the algorithm to a large extent. Very recently, Sakurai et al. [16] proposed BRAID for detecting correlations between streams with arbitrary lags. They use geometric probing and smoothing to approximate the exact correlation. However, their method compares two series on the whole sequence level, and will clearly miss all the subsequence correlations. Moreover, most of these methods are based on the classical correlation, and fail to detect out-of-phase correlations. To the best of our knowledge, no existing algorithm satisfies the requirements listed in the introduction.

3 TWStream Method

3.1 Preliminaries

Cross Correlation. The streaming time series is of the model X = {x_i} (1 ≤ i ≤ t), where x_t represents the most recent data point, and t increases per time unit. The cross correlation between two series X and Y is defined as:

$$\rho = \frac{\sum_{i=1}^{t}(x_i - \bar{x})(y_i - \bar{y})}{\sigma(x)\,\sigma(y)} = \sum_{i=1}^{t}\hat{x}_i \hat{y}_i \qquad (1)$$

where $\bar{x}$ and $\sigma(x)$ are the average and standard deviation of X respectively, and $\hat{x}_i$ is the z-norm of $x_i$. The same notations apply to Y. The symbols used in this paper are listed in Fig. 2(a). Note that the cross correlation is the inner product of z-norms; therefore ρ can be computed from the Euclidean distance of the normalized series:

$$d^2(\hat{X}, \hat{Y}) = \sum_{i=1}^{t}(\hat{x}_i - \hat{y}_i)^2 = \sum_{i=1}^{t}\hat{x}_i^2 + \sum_{i=1}^{t}\hat{y}_i^2 - 2\sum_{i=1}^{t}\hat{x}_i \hat{y}_i = 2 - 2\rho(X, Y) \qquad (2)$$
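As a quick numerical check of the relationship behind Eqs. (1)-(2), the snippet below z-normalizes two series to zero mean and unit Euclidean norm, so that their inner product equals the Pearson correlation and d²(X̂, Ŷ) = 2 − 2ρ holds; it is only an illustration of the identity, not part of TWStream.

    import numpy as np

    def znorm(x):
        """Scale a series to zero mean and unit Euclidean norm, so that the
        inner product of two z-normed series is their correlation."""
        x = np.asarray(x, dtype=float)
        centered = x - x.mean()
        return centered / np.linalg.norm(centered)

    rng = np.random.default_rng(0)
    x, y = rng.normal(size=500), rng.normal(size=500)
    xh, yh = znorm(x), znorm(y)

    rho = float(xh @ yh)                       # Eq. (1): inner product of z-norms
    dist_sq = float(np.sum((xh - yh) ** 2))    # Eq. (2): squared Euclidean distance
    assert abs(dist_sq - (2 - 2 * rho)) < 1e-9
    assert abs(rho - np.corrcoef(x, y)[0, 1]) < 1e-9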


Symbol | Definition
X | streaming time series
x[i : j] | subsequence from index i to j inclusive
Head(X) | the first element of X
Rest(X) | the rest of X without Head(X)
⟨⟩ | empty sequence
d | Euclidean distance function
w0 | basic window size
s^j_i | the ith piecewise aggregate (PA) of level j

Fig. 2. (a) List of symbols (b) Local Dynamic Time Warping

From this fact we can conclude that cross correlation is simply another similarity measure directly based on the Euclidean distance, and consequently suffers from the same problems as Euclidean-based metrics.

Local Dynamic Time Warping. It has been forcefully argued that Euclidean distance is a brittle similarity measure [1], while Dynamic Time Warping (DTW) is much more robust than Euclidean-based metrics. Intuitively, time warping allows flexible shifts of the time axis, and matches the rise/fall patterns of two sequences as much as possible. The formal definition of the DTW distance between X and Y is given by:

$$d_{warp}(\langle\rangle, \langle\rangle) = 0, \qquad d_{warp}(X, \langle\rangle) = d_{warp}(\langle\rangle, Y) = \infty \qquad (3)$$

$$d_{warp}(X, Y) = d(Head(X), Head(Y)) + \min \begin{cases} d_{warp}(X, Rest(Y)) \\ d_{warp}(Rest(X), Y) \\ d_{warp}(Rest(X), Rest(Y)) \end{cases} \qquad (4)$$

The computation of the DTW distance defines a time warping path in the matrix composed of the entries (i, j), each corresponding to an alignment between x_i and y_j. This path represents an optimal mapping between X and Y, as shown in Fig. 2(b). Under time warping, X is transformed into a new sequence X' = {x'_k} (1 ≤ k ≤ K), where x'_k corresponds to the kth element of the path. To prevent pathological paths, where a relatively small section of one series is mapped onto a relatively large portion of the other, we adopt Local Dynamic Time Warping (LDTW) [20], which restricts the warping path to a beam of width (2k+1) along the diagonal path, as shown in Fig. 2(b). It is trivial to prove that the computation of LDTW is of complexity O(kw), given w as the length of the sequence.

Generalized Correlation. The correlated parts of two streams can occur on any unknown scale, with arbitrary lag, or even out of phase. To accommodate such 'any-scale', 'any-time' and 'any-shape' correlation, we introduce the concept of generalized correlation (GC), which combines cross correlation with time warping, and measures the correlation on the subsequence level. In the following we give the formal definition of GC between two sequences X = {x_i} and Y = {y_i} (1 ≤ i ≤ t). Without loss of generality, we assume the correlated pattern in Y is lagging its counterpart in X.


Definition 1. Given streams X and Y, their generalized correlation is a function of index i and scale w, which determine x[i : i + w − 1] and y[t − w + 1 : t] as the subsequences to be compared. Let x' = {x'_k} and y' = {y'_k} (1 ≤ k ≤ K) be the transformed sequences of x[i : i + w − 1] and y[t − w + 1 : t] under time warping; then the GC ρg(i, w) is defined as follows:

$$\rho_g(i, w) = \frac{\sum_{k=1}^{K}(x'_k - \bar{x}')(y'_k - \bar{y}')}{\sigma(x')\,\sigma(y')} \qquad (5)$$

where $\bar{x}'$ and $\sigma(x')$ represent the average and standard deviation of x' respectively; the same notations apply to y'. Note that time warping favors positive correlation; in order to detect high negative correlation, we can transform {x_j} (i ≤ j ≤ i + w − 1) into its symmetric form {x^s_j}, where $x^s_j = \frac{2}{w}\sum_{k=i}^{i+w-1} x_k - x_j$, and follow the same procedure above to compute GC.

The problem we are to solve is: at any time point t, for each pair of streams X and Y, compute and report the GC value for any combination of scale w and index i in real time.
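For reference, a direct (non-streaming) rendering of Definition 1 might look as follows: a Sakoe-Chiba-style band of half-width k plays the role of the LDTW beam, the warping path is recovered by backtracking, and the correlation is taken over the aligned pairs. Indices are 0-based here, and all names are illustrative; the streaming algorithm of Section 3.2 exists precisely to avoid this brute-force computation.

    import numpy as np

    def ldtw_path(a, b, k):
        """DTW restricted to a band of half-width k around the diagonal;
        returns the optimal alignment path as a list of (i, j) index pairs."""
        assert len(a) == len(b)
        n = len(a)
        D = np.full((n + 1, n + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(1, i - k), min(n, i + k) + 1):
                cost = (a[i - 1] - b[j - 1]) ** 2
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        # backtrack from (n, n) to (1, 1) along the cheapest predecessors
        path, i, j = [], n, n
        while (i, j) != (1, 1):
            path.append((i - 1, j - 1))
            steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
            i, j = min(steps, key=lambda ij: D[ij])
        path.append((0, 0))
        return path[::-1]

    def generalized_correlation(x, y, i, w, t, k=8):
        """rho_g(i, w): correlate x[i : i+w] against the most recent window of y
        after aligning them with band-constrained time warping."""
        a = np.asarray(x[i:i + w], dtype=float)
        b = np.asarray(y[t - w:t], dtype=float)
        pairs = ldtw_path(a, b, k)
        xa = np.array([a[p] for p, q in pairs])
        ya = np.array([b[q] for p, q in pairs])
        return float(np.corrcoef(xa, ya)[0, 1])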


Fig. 3. (a) A set of sliding windows (highlighted) containing the piecewise aggregates of the most recent 2j w0 (w0 = 4) data points respectively. The update is incremental and hierarchical from lower level to higher one. (b) A typical scheme of interpolation. The curve passing the leftmost snapshots of each level represents the least time limit.

3.2 Overview

The exact implementation can be derived directly from the definition of GC: at time point t, we compute time warping for each pair of x[i : i + w − 1] and y[t − w + 1 : t] for all combinations of i and w, and calculate the correlation between the transformed sequences. However, such a brute-force technique requires more than O(t) space and O(t^2) computation time. In the following we introduce our TWStream algorithm, based on four observations, which gains a significant improvement over the naïve method.

Geometric Time Frame. Given the scale w, instead of matching the pair x[i : i + w − 1] and y[t − w + 1 : t] for all i, we take snapshots of x[i : i + w − 1] at particular values of i of geometric orders (i is called the index of the snapshot). Specifically, snapshots are classified into orders ranging from 0


Algorithm TWStream
  Input: new data for all series at time t
  Output: detected GC, index and scale
  for each series X do
    {add new data to the hierarchy}
    AddNewElement(X);
  end
  for each pair of series X and Y do
    {update the snapshots of X}
    UpdateSnapshots(X);
    if output is required then
      {calculate the GC value}
      CalGC(X, Y);
      output result if any;
    end
  end

(a) TWStream

Algorithm UpdateSnapshots(X)
  for j = 0 to log2 t do
    if t mod 2^j = 0 and t/2^j > w0 then
      k = t - 2^j * w0;
      if k mod 2^(j+1) = 0 then
        add PA(2^j * w0, k+1) to level j;
      end
      if k > α * 2^(j+1) then
        remove the snapshot with the least index i on level j;
      end
    else
      break;
    end
  end

(b) UpdateSnapshots(X)

Fig. 4. The pseudocode of TWStream algorithm (part 1)

to log2 t. The indices i of the jth order satisfy (1) i mod 2^j = 0 and (2) i mod 2^(j+1) ≠ 0, that is, they occur at time intervals of 2^(j+1). For each level j, only the last α snapshots are stored. For example, suppose t = 20 and α = 1; the snapshots of i = 19, 18, 12, 8 and 16 are taken, corresponding to levels 0, 1, 2, 3 and 4 respectively. Based on the GC values computed for these specific i, the correlation coefficients for the remaining indices can be obtained by interpolation.

The justification for the geometric time frame is: (1) In processing streaming data, we give more importance to recent data. In the geometric time frame, for more recent data, there is a shorter distance between successive snapshots, and more points to interpolate, to achieve better accuracy; (2) It achieves a dramatic reduction in the time and space complexity, since we only need to store and match O(log t) subsequences, instead of O(t). We will prove in Section 4 that this approximation introduces negligible error.

Piecewise Smoothing. To support time windows of varying scales, we can store snapshots of different sizes. Suppose the basic (minimum) window size is w0; we require the window size to follow a geometric progression, i.e. 2^j·w0 (0 ≤ j ≤ log2 t), to reduce the time and space complexity. Based on the GC values computed for snapshots of these specific scales, correlations for other scales can be estimated by interpolation. Nevertheless, under this approximation, the space and time requirements still grow linearly with time t, since the maximum window size is proportional to the length of the sequence. We propose to use piecewise smoothing (or piecewise aggregate approximation [13]) to solve this problem. To be specific, for a time window of size 2^j·w0, instead of operating on the original series, we keep its piecewise aggregate (PA) of order j. That is, we divide the time window into w0 non-overlapping short windows of size 2^j, and compute the mean of each short window as its PA. Formally, let $s^0 = \{s^0_i\}$ $(1 \le i \le 2^j w_0)$ be the original series in the time window; its PA of order j is $s^j = \{s^j_i\}$ $(1 \le i \le w_0)$, where $s^j_i = \sum_{k=(i-1)2^j+1}^{i \cdot 2^j} s^0_k / 2^j$.
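The piecewise aggregate itself is a one-liner with array reshaping. The toy check below (an illustration, not the paper's code) also confirms that a level-j aggregate equals the mean of the two level-(j−1) aggregates covering the same span, which is what the incremental update described next exploits.

    import numpy as np

    w0 = 4

    def piecewise_aggregate(window, j):
        """PA of order j: split the window into chunks of length 2**j
        and keep each chunk's mean."""
        window = np.asarray(window, dtype=float)
        assert len(window) % (2 ** j) == 0
        return window.reshape(-1, 2 ** j).mean(axis=1)

    window = np.arange(2 ** 3 * w0, dtype=float)   # a level-3 window: 32 raw points
    pa3 = piecewise_aggregate(window, 3)           # 4 aggregates of 8 points each
    pa2 = piecewise_aggregate(window, 2)           # 8 aggregates of 4 points each
    # Hierarchical refinement: s^j_k = (s^{j-1}_{2k-1} + s^{j-1}_{2k}) / 2
    assert np.allclose(pa3, pa2.reshape(-1, 2).mean(axis=1))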


We use PA(w, i) to denote the PA for a time window with scale w and index i. This approximation reduces the space required for storing a snapshot of any size to a constant w0. Moreover, the time complexity of matching two subsequences is also reduced to w0. We will show the theoretical justification for piecewise smoothing in Section 4.

Incremental Update. The improvements above significantly reduce the time and space complexity; however, they alone are not sufficient to constitute a streaming algorithm. The problem of efficient update remains unsolved. Here, we propose an incremental update strategy to achieve constant update time per time unit. For both series X and Y, we maintain a set of hierarchical sliding windows, which contain the PAs for the most recent 2^j·w0 (0 ≤ j ≤ log2 t) data points respectively. The windows are organized into a hierarchy where the window size doubles as level j increases. Fig. 3(a) illustrates this hierarchy, in which the sliding windows are highlighted. At time t, we incrementally update the PA of each level using the PA of the lower level, formally $s^j_k = (s^{j-1}_{2k-1} + s^{j-1}_{2k})/2$. Since the sliding window of level j is updated every $2^j$ time units, on average the complexity of the update per time unit is constant ($\sum_{j=0}^{\infty} 1/2^j \approx 2$). Note that the set of hierarchical sliding windows serves different purposes for X and Y. For Y, it contains the 'queries', which will be used to find correlated subsequences in X, while for X, the PAs in the sliding window are added as snapshots to update the 'database'.

Filtering. The last but not least observation is that LDTW is a relatively expensive operation. If the user desires only the report of GC values higher than a threshold λ, then instead of computing time warping for each pair, we can filter out those pairs whose cross correlation is lower than a threshold λ' (positively correlated with λ). The cross correlation of two sequences can be computed efficiently if their sufficient statistics (sum, sum of squares, inner product) are available. The maintenance of sufficient statistics can be seamlessly incorporated into our framework. We omit this part due to the space limit; more details can be found in [16].

Algorithm AddNewElement(X)
  s^0_t = new data;
  for j = 1 to log2 t do
    if t mod 2^j = 0 then
      k = t / 2^j;
      s^j_k = (s^{j-1}_{2k-1} + s^{j-1}_{2k}) / 2;
    else
      k = t / 2^(j-1);
      remove s^{j-1}_{k-w0};
      break;
    end
  end

(a) AddNewElement(X)

Algorithm CalGC(X, Y)
  for j = 0 to log2 t do
    if t mod 2^j = 0 then
      q = sliding window of scale 2^j * w0 of Y;
      for each snapshot s of scale 2^j * w0 in X do
        compute GC for s and q;
      end
    else
      break;
    end
  end
  interpolate the GC curve surface;
  report (GC, w, i) with GC above threshold λ;

(b) CalGC(X, Y )

Fig. 5. The pseudocode of TWStream algorithm (part 2)
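The filtering step mentioned above can rely on running sufficient statistics, in the spirit of [16, 19]; the class below is an assumed helper (not code from the paper) that maintains sums, sums of squares, and the inner product for one pair of aligned windows, so that their cross correlation can be checked in O(1) before the more expensive LDTW is attempted.

    import math

    class CorrelationFilter:
        """Running sufficient statistics (count, sums, sums of squares, inner
        product) for a pair of aligned windows; corr() is O(1), and points can
        be added or evicted as the windows slide."""
        def __init__(self):
            self.n = 0
            self.sx = self.sy = self.sxx = self.syy = self.sxy = 0.0

        def _update(self, x, y, sign):
            self.n += sign
            self.sx += sign * x
            self.sy += sign * y
            self.sxx += sign * x * x
            self.syy += sign * y * y
            self.sxy += sign * x * y

        def add(self, x, y):
            self._update(x, y, +1)

        def remove(self, x, y):
            self._update(x, y, -1)

        def corr(self):
            cov = self.sxy - self.sx * self.sy / self.n
            vx = self.sxx - self.sx ** 2 / self.n
            vy = self.syy - self.sy ** 2 / self.n
            return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0

    # Only candidate pairs passing the cheap test would be handed to LDTW, e.g.:
    #   if filt.corr() >= lambda_prime:
    #       rho_g = generalized_correlation(...)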


3.3 Algorithm

Based on the observations above, we propose TWStream, an algorithm that captures correlated streams under time warping. TWStream maintains the snapshots of different granularity for the 'base' series X, and uses the PAs of the most recent data of Y as 'queries' to detect correlation. For the snapshots of X (w = 2^j·w0, i in the geometric time frame), the correlations are computed exactly. The GC values for other combinations of (w, i) can be approximated by interpolation from the values of their neighbors. Fig. 3(b) illustrates a typical interpolation scheme: on each level of scale, α snapshots are kept, which form an α × log2 t 'grid'. The curve passing through the kth (1 ≤ k ≤ α) snapshot of every level is called the kth interpolation curve. The leftmost (k = 1) interpolation curve represents a least time limit. The GC values for all the combinations of (w, i) on its right side can either be computed or approximated. In this scheme, more recent time gets better accuracy, since GCs for snapshots of 'finer' levels are available for interpolation.

The detailed TWStream algorithm is presented in Fig. 4 and Fig. 5. For each incoming data point, it first incrementally updates the PAs in the hierarchical sliding windows (AddNewElement). It then adds the newly generated PAs as snapshots to the proper levels, and deletes the stale ones (UpdateSnapshots). Finally, the GC values are computed for the snapshots of the base series, and approximated by interpolation for other combinations of (w, i) (CalGC).
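Putting the pieces together, the per-tick control flow of Fig. 4 could be organized as in the sketch below; add_new_element, update_snapshots, and cal_gc are placeholder names for the routines just described, and the series objects are assumed wrappers around the hierarchical sliding windows and snapshots, so this is an architectural sketch rather than a reference implementation.

    def twstream_tick(t, new_points, series, pairs, lam=0.8):
        """One time step of the TWStream loop (cf. Fig. 4): push the new value of
        every stream into its hierarchical sliding windows, refresh the geometric
        time frame snapshots of each base series, then estimate GC for every
        monitored (base, query) pair and report values above the threshold."""
        for name, value in new_points.items():
            series[name].add_new_element(value, t)      # update PAs level by level

        reports = []
        for x_name, y_name in pairs:                    # X is the base, Y the query
            series[x_name].update_snapshots(t)          # add new / expire stale snapshots
            for scale, index, gc in series[x_name].cal_gc(series[y_name], t):
                if gc >= lam:                           # interpolated GC surface, thresholded
                    reports.append((x_name, y_name, scale, index, gc))
        return reports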

4 Theoretical Analysis

In this section, we present a theoretical analysis of the accuracy and complexity of our approach, and provide the justification for the approximations we made in TWStream.

4.1 Accuracy

The experimental results show that the two approximations, the geometric time frame and piecewise smoothing, introduce negligible error in estimating GC values. In the following, we provide the theoretical justification.

Lemma 1. Let h ($2^j \le h < 2^{j+1}$) be an arbitrary time window, and let t* be the index nearest to (t − h) within the geometric time frame; then $|(t - h) - t^*| \le 2^{\,j - \log_2(\alpha - 1)}$.

Proof. For each level k, the geometric time frame stores the last α (α ≥ 2) snapshots, with an interval of $2^{k+1}$ between two successive indices, which covers a time window of $2^{k+1}(\alpha - 1)$. Given that $2^j \le h < 2^{j+1}$, let k* be the smallest k satisfying $2^{k+1}(\alpha - 1) \ge h$; we have $k^* \le j - \log_2(\alpha - 1)$. So (t − h) will fall in one interval of level k*, of length $2^{k^*+1}$, and the nearest snapshot is within half that interval, i.e. $|(t - h) - t^*| \le 2^{k^*} \le 2^{\,j - \log_2(\alpha - 1)}$.

Thus for any user-specified index (t − h), we can find a snapshot within a radius less than the time window h, which means that we can approximate the GC value


for any time instance by careful interpolation, though more recent time (small h) gets better accuracy. We can also enhance the approximation by setting a large α. Meanwhile, it can be proved that there exists a lower bound (w0/2) for α, which guarantees no loss of the most recent information.

The second error source is the approximation of piecewise smoothing. However, if the sequence is low-frequency dominant, the error introduced by piecewise smoothing is small, and can even be zero. It has been proved in [16] that for sequences with low frequencies, smoothing introduces only a small error in computing cross correlation, while Keogh et al. [12] show that dynamic time warping after smoothing (PDTW) is a tight approximation of the DTW of the original series.

Lemma 2. Piecewise smoothing introduces small errors in estimating the generalized correlation, given that the sequences are low-frequency dominant.

Proof. Combining the two facts above, the conclusion is straightforward.


Fig. 6. GC estimation for synthetic series (a) two synthetic series (sines/cosines) (b) two snapshots of the GC values computed by exact implementation and TWStream, one for fixed scale (w = 512) and the other along one interpolation curve

4.2 Complexity

TWStream is efficient in terms of both time and space complexity. Specifically, the space required to store the snapshots is O(log t) and the amortized time for update per time unit is O(1). If output is required, the time required for the computation of GC values and interpolation is O(log t).

– For each series of length t, TWStream has to keep snapshots for α log t different indices, and w0 space is required to store the PA for each index. Thus the space complexity of TWStream is O(log t).
– For each series, we maintain the sliding windows for the most recent data at log t different levels. The sliding window at level j is updated every 2^j time units, so on average the time complexity of updating per time unit is constant, because $\sum_{j=0}^{\log_2 t} 1/2^j \approx 2$.


– For the same reason, on average, out of the log t sliding windows only one 'query' sequence q is produced per time unit, which is matched against the (α log t) snapshots of the same scale. The time required to compute GC for q and the snapshots is O(k·w0·α·log t), including computing the LDTW path and the cross correlation on the transformed sequences. The interpolation is based on an α × log t 'grid', so its complexity is O(log t) if typical interpolation methods (e.g., bilinear) are used. Hence the time required for the computation of GC values and interpolation is O(log t).
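As a quick sanity check of the amortized-update argument in the second bullet, the one-line computation below (a trivial illustration, not part of the algorithm) shows that the per-tick update cost stays below 2 regardless of how many levels the stream accumulates.

# Level j is refreshed every 2**j time units, so the expected number of
# sliding-window updates per arriving point is sum(1 / 2**j) over all levels.
levels = 20                                 # enough for streams up to 2**20 points
amortized = sum(1 / 2 ** j for j in range(levels + 1))
print(round(amortized, 6))                  # -> 1.999999..., i.e. O(1) per time unit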


Fig. 7. GC estimation for real-life series (a) two real life series (NYSE stock data). (b) two snapshots of GC values computed by exact implementation and TWStream, one for fixed scale (w = 512), and the other along one interpolation curve.

5 Empirical Analysis

To evaluate the effectiveness and efficiency of our approach, we performed experiments on both synthetic and real data. We compared TWStream with the exact implementation, aiming to answer the following questions: (1) How well does the estimation of TWStream approach the exact correlation? (2) How does time warping influence its sensitivity to correlation? (3) How does the computation time of TWStream scale with the sequence length t? All the experiments were performed on a Pentium IV 2.67 GHz PC with 512 MB memory, running Linux. The synthetic dataset is a sine/cosine mixture consisting of two sequences of length 65,536, as shown in Fig. 6(a); each series is composed of a mixture of sine/cosine waves of different frequencies. The real-life series come from the intraday trade and quote data provided by the NYSE Trade and Quote database; we chose two sequences of length 31,200 for our experiment, as shown in Fig. 7(a). The default setting of the parameters is: w0 = 32, α = 16, and correlation threshold λ = 0.8.

5.1 Effectiveness

Fig.6 and Fig.7 show the estimation of TWStream for synthetic and real data respectively. In each case, we randomly take two snapshots from the interpolated


(a)
Data   Time    Exact I   Exact II   TWStream   Error (%)
Real    5120    2016      2016       2017        0.05
Real    8192    1792      1682       1849        3.18
Real   16384    1765      1696       1778        0.73
Real   20480     972       948        960        1.23
Syn     4096    1744      1511       1752        0.46
Syn     7168    1960      1843       1949        0.56
Syn     8196    1893       919       1866        1.43
Syn    14336     588         0        608        3.40

(b) [log-log plot: processing time per time unit (ms) versus length of sequences, for Exact I and TWStream]

Fig. 8. (a) Number of high correlations detected by three methods: exact implementation with time warping (Exact I), exact without time warping (Exact II), and TWStream. (b) Scalability of two methods, Exact I and TWStream, in terms of sequence length t.

surface, one for fixed scale (scale = 512) and the other along one interpolation curve, as plotted in Fig. 6(b) and Fig. 7(b). The dotted line represents the GC values computed by the exact implementation, while TWStream computes correlations for snapshots and approximates the missing values by interpolation. It is evident that in both cases TWStream tightly approximates the exact method.

For both data sets, at different time points, we measured the number of high correlations (larger than λ) detected by three methods: the exact method with time warping (Exact I), the exact implementation without time warping (Exact II), and TWStream. The results are listed in Fig. 8(a). They show clearly that TWStream detects high correlations as effectively as Exact I most of the time, and the relative error is typically around 1%. We also measured the influence of time warping on the algorithm's sensitivity to high correlation. As can be seen in Fig. 8(a), the number of detected high correlations using time warping (Exact I) is significantly larger than without time warping (Exact II), which indicates that time warping makes the method more sensitive to out-of-phase correlations that can hardly be detected by canonical correlation.

5.2 Efficiency

Fig. 8(b) illustrates how the wall-clock processing time of TWStream and Exact I varies as the length of the sequence grows. The computation time of the exact implementation increases nearly quadratically with the sequence length. In contrast, the increase in the processing time of TWStream is barely noticeable. This confirms our theoretical analysis: TWStream requires constant update time, and the computation of GCs and the interpolation have complexity O(log t), which explains the insignificant increase. Typically, TWStream performs roughly 10^5 times faster than the exact method when the sequence length reaches 10^6.

6 Conclusion

We tackled the problem of monitoring multiple data streams and finding correlated pairs in real time. The correlated patterns can occur on any unknown scale,


with arbitrary lag, or even out of phase. We proposed the concept of generalized correlation to capture such ‘any-scale’, ‘any-time’ and ‘any-shape’ correlations. In our method TWStream, we use careful approximations and smoothing to achieve a good balance between scalability and accuracy. The experiments on both synthetic and real data confirmed the theoretical analysis: our approach worked as expected, detecting the generalized correlations with high accuracy and low resource consumption.

References

1. Aach, J., Church, G.: Aligning Gene Expression Time Series with Time Warping Algorithms. Bioinformatics 17:495-508, 2001.
2. Agrawal, R., Faloutsos, C., Swami, A.: Efficient Similarity Search In Sequence Databases. In Proc. of FODO Conf., 1993.
3. Bulut, A., Singh, A.: A Unified Framework for Monitoring Data Streams in Real Time. In Proc. of ICDE Conf., 2005.
4. Chan, K., Fu, A.: Efficient Time Series Matching by Wavelets. In Proc. of ICDE Conf., 1999.
5. Chan, K., Fu, A., Yu, C.: Haar Wavelets for Efficient Similarity Search of Time-Series: With and Without Time Warping. IEEE Transactions on Knowledge and Data Engineering, 15(3):686-705, 2003.
6. Domingos, P., Hulten, G.: Mining High-Speed Data Streams. In Proc. of SIGKDD Conf., 2000.
7. Ganti, V., Gehrke, J., Ramakrishnan, R.: Mining Data Streams under Block Evolution. SIGKDD Explorations, 3(2):1-10, 2002.
8. Faloutsos, C., Ranganathan, M., Manolopoulos, Y.: Fast Subsequence Matching in Time-Series Databases. In Proc. of ACM SIGMOD Conf., 1994.
9. Gao, L., Wang, X.: Continually Evaluating Similarity-Based Pattern Queries on a Streaming Time Series. In Proc. of ACM SIGMOD Conf., 2002.
10. Geurts, P.: Pattern Extraction for Time Series Classification. In Proc. of PKDD Conf., 2001.
11. Guha, S., Meyerson, A., Mishra, N., Motwani, R., O'Callaghan, L.: Clustering Data Streams: Theory and Practice. IEEE TKDE, 15(3):515-528, 2003.
12. Keogh, E.: Exact Indexing of Dynamic Time Warping. In Proc. of VLDB Conf., 2002.
13. Keogh, E., Chakrabarti, K., Pazzani, M., Mehrotra, S.: Dimensionality Reduction for Fast Similarity Search in Large Time Series Databases. Knowledge and Information Systems, 3(3):263-286, 2000.
14. Keogh, E., Chakrabarti, K., Mehrotra, S., Pazzani, M.: Locally Adaptive Dimensionality Reduction for Indexing Large Time Series Databases. In Proc. of ACM SIGMOD Conf., 2001.
15. Korn, F., Jagadish, H., Faloutsos, C.: Efficiently Supporting Ad Hoc Queries in Large Datasets of Time Sequences. In Proc. of ACM SIGMOD Conf., 1997.
16. Sakurai, Y., Papadimitriou, S., Faloutsos, C.: BRAID: Stream Mining through Group Lag Correlations. In Proc. of ACM SIGMOD Conf., 2005.
17. Yi, B., Faloutsos, C.: Fast Time Sequence Indexing for Arbitrary Lp Norms. In Proc. of VLDB Conf., 2000.
18. Yi, B., Sidiropoulos, N., Johnson, T., Jagadish, H., Faloutsos, C., Biliris, A.: Online Data Mining for Coevolving Time Sequences. In Proc. of ICDE Conf., 2000.


19. Zhu, Y., Shasha, D.: Statistical Monitoring of Thousands of Data Streams in Real Time. In Proc. of VLDB Conf., 2002.
20. Zhu, Y., Shasha, D.: Warping Indexes with Envelope Transforms for Query by Humming. In Proc. of ACM SIGMOD Conf., 2003.

Supplier Categorization with K-Means Type Subspace Clustering

Xingjun Zhang1,3, Joshua Zhexue Huang2, Depei Qian1, Jun Xu2, and Liping Jing2

1 Computer School, The Beihang University, Beijing, China
{xjzhang, dpqian}@xjtu.edu.cn
2 E-Business Technology Institute, The University of Hong Kong, Hong Kong, China
{jhuang, fxu, lpjing}@eti.hku.hk
3 Department of Computer Science, Xi'an Jiaotong University, Xi'an, China
[email protected]

Abstract. Many large enterprises work with thousands of suppliers to provide raw materials, product components and final products. Supplier relationship management (SRM) is a business strategy to reduce logistic costs and improve business performance and competitiveness. Effective categorization of suppliers is an important step in supplier relationship management. In this paper, we present a data-driven method to categorize suppliers from the suppliers' business behaviors, which are derived from a large number of business transactions between suppliers and the buyer. A supplier's business behavior is described by the set of product items it has provided in a given time period, the amount of each item in each order, the frequencies of orders, as well as other attributes such as product quality, product arrival time, etc. Categorization of suppliers based on business behaviors is a problem of clustering high dimensional data. We used the k-means type subspace clustering algorithm FW-KMeans to solve this high dimensional, sparse data clustering problem. We have applied this algorithm to a real data set from a food retail company to categorize over 1000 suppliers based on 11 months of transaction data. Our results have produced better groupings of suppliers, which can enhance the company's SRM. Keywords: Clustering, Feature Weighting, Supplier Categorization.

1 Introduction

Supplier categorization refers to the process of dividing suppliers of an organization into different groups according to the characteristics of the suppliers so each group of suppliers can be managed differently within the organization. 

This work was part-supported by the Intel University Research HPC Program and the EIES Science Foundation Project of Xi’an Jiaotong University.



Supplier categorization is an important step in supplier relationship management (SRM), a business strategy to reduce logistic costs and improve business performance. Many large enterprises such as retail chain stores, manufacturers and government bodies work with thousands of suppliers to provide raw materials, middle-stage products, final products and services. Management of these suppliers not only affects the logistics of a company but has a direct impact on its business performance [1].

A common method adopted in many organizations is to classify suppliers based on the product categories which suppliers can provide or on demographical characteristics of suppliers such as company size and location. These static characteristics do not always reflect the business reality between suppliers and the buyer, for example, the quality of the products supplied, the product delivery time, and costs. Therefore, categorization of suppliers based on static characteristics results in some suppliers being mismanaged, with the consequence of increasing logistic costs and damaging the entire business performance of an organization.

A better approach to this problem is to categorize suppliers based on dynamic business behaviours. In this way, the suppliers with similar business behaviours can be grouped together and managed in the same way. We call it a data-driven categorization method; it is based on examining suppliers' real performance to classify suppliers into groups [2]. Suppliers' business behaviours can be described as the set of product items provided in a given time period, the amount of each item in each order, the frequencies of orders, as well as other attributes such as product quality, product arrival time, etc. These kinds of information are derived from real business transactions over time and represented as a set of behaviour attributes. By grouping suppliers based on behaviour attribute values, suppliers with similar business behaviour patterns in the same time period can be grouped together. As the business behaviour patterns of some suppliers may change over time, this data-driven categorization approach is dynamic and can better reflect real business situations.

One technical challenge of this data-driven approach is that the space of behaviour attributes can become very large in order to represent supplier behaviours well. For example, if we consider the number of unique product items provided by all suppliers as behaviour attributes, this number can easily exceed several thousand for many large buyers. Therefore, this is a high dimensional data clustering problem. Another challenge is data sparsity. Take the product item attributes as an example: in business reality, any single supplier usually supplies only a small subset of product items. When the set of product items is used as behaviour attributes to describe suppliers, each supplier has values for only a few attributes and leaves many attributes empty. This phenomenon represents a serious sparsity problem in the data. Therefore, data-driven supplier categorization is a large, high dimensional, sparse data clustering problem, which needs special clustering techniques to handle effectively [3].

In this paper, we present a subspace clustering method for data-driven supplier categorization. We use the recently developed clustering algorithm


FW-KMeans [4][5][6] to categorize suppliers from business behaviours represented in a large, high dimensional, sparse data matrix. FW-KMeans is a k-means type subspace clustering algorithm that identifies clusters in subspaces by automatically assigning large weights to the attributes that form the subspaces in which the clusters occur. The most remarkable characteristic of the algorithm is that it can handle highly sparse data. We have previously applied FW-KMeans to text data, where it outperformed other k-means clustering algorithms. This work is the first application of FW-KMeans to dynamically categorizing suppliers from high dimensional, sparse behaviour data, whose distribution differs from that of text data.

We have applied our data-driven supplier categorization approach to a real dataset from a food product retail chain company. This company operates over 330 outlets in a large Chinese city, selling both Chinese and Western food products. It works with more than a thousand suppliers around the world, who supply over seven thousand different product items to its shops. To improve its supplier relationship management and reduce logistics costs, this company wanted to readjust its current supplier groupings based on dynamic business behaviour patterns. We applied the FW-KMeans clustering algorithm to its supplier transaction data and produced satisfactory results for the company.

Some important results of this work include the following:
• We propose a data-driven supplier categorization approach to categorize suppliers from dynamic business behaviours, rather than the static demographical data commonly used;
• We propose to use the k-means type subspace clustering algorithm FW-KMeans to solve the high dimensional, sparse data clustering problem;
• We have applied this approach to a real data set to demonstrate its usefulness in supplier relationship management.

The rest of the paper is organized as follows. Section 2 describes the data-driven supplier categorization. Section 3 presents the method of subspace clustering with the feature weighting k-means algorithm. Following the discussion of our method, we present, in Section 4, a real case study of using FW-KMeans to categorize suppliers for a food product retail chain company. Finally, we draw some conclusions and point out our future work in Section 5.

2 Data-Driven Supplier Categorization

Supplier management manages the business with suppliers, including supplier selection, purchase orders (PO), product delivery, quality control and payment. Effective supplier management can reduce logistic costs and directly affects a company's bottom line. In a company, suppliers are usually divided into groups managed by different portfolio managers. Suppliers in each group are categorized by the product categories they provide and by demographical characteristics such as company size and location.


The above static categorization of suppliers does not really reflect the real business behaviours of the suppliers in each group. For example, two suppliers in the same group that provide similar product categories and have similar demographical characteristics may conduct business in very different ways: one supplier may always delay deliveries, while the other can provide only a subset of products in good quality. Although these two suppliers are in the same group, they should be treated differently. Identification of different business behaviours of suppliers requires dynamic analysis of business data to discover the business behaviour of each supplier and to categorize them according to the real business behaviours. This process is called data-driven supplier categorization, and it can help better understand suppliers, adjust the supplier grouping, and make supplier management more efficient [7].


Fig. 1. BI Component Architecture

Figure 1 shows the system architecture for data-driven supplier categorization and management. We can consider this as a component of the enterprise business intelligence solution system. The outputs of this component are used by supplier relationship management or supply chain planning and optimization. The supplier categorization component integrates two categorization results, the static categorization based on supplier’s product data and demographic data, and the dynamic categorization based on the data of business transactions committed by suppliers. To conduct dynamic categorization, the large volume of transaction data needs to be converted into behaviour data in the data pre-processing steps. The business behaviour of a supplier at a given time period can be described by a set of behaviour attributes, such as the set of product items purchased in the past 6 months, the quantity and amount of each item, the delivery of each purchase and the quality of the products in each delivery, etc. Each supplier is represented as a set of values of these behaviour attributes. The entire set of suppliers is represented as an N x P matrix where N is the total number of suppliers and P is the total number of behaviour attributes used to represent behaviours of all suppliers. Each element of the matrix is the value of a particular attribute for a particular supplier.


When the supplier behaviour matrix is created, the dynamic categorization of suppliers is treated as a clustering problem, i.e., clustering suppliers based on the behaviour attribute values. Given a matrix and a clustering algorithm, this can be easily done. However, the real characteristics of the behaviour matrix present a number of challenges to the clustering algorithms. (1) The matrix can be very large, with thousands of suppliers and behaviour attributes; this is a very high dimensional data clustering problem. (2) The matrix may contain different types of data, depending on the selected behaviour attributes. (3) The matrix is very sparse, i.e., different suppliers have values in different subsets of behaviour attributes. For example, if we use the product items as behaviour attributes, different suppliers supplied different kinds of products to the company, e.g., some supplying food products and some supplying toys. In this case, the suppliers providing food products have no values for the toy product item attributes, while the suppliers supplying toy products have no values for the food product item attributes. This highly sparse matrix requires the clustering algorithm to be able to find clusters in subspaces of the behaviour attribute space. Therefore, in this work we use the newly developed k-means type subspace clustering algorithm FW-KMeans, which we discuss in the following section.

3 Feature Weighting K-Means Subspace Clustering Algorithm

The FW-KMeans algorithm was developed to cluster text data with high dimensionality and sparsity [6]. In text clustering, documents are represented using a bag-of-words representation [8]. In this representation (also known as the vector space model, VSM), a set of documents is represented as a set of vectors X = {X1, X2, ..., Xn}. Each vector Xj is characterized by a set of m terms or words, (t1, t2, ..., tm). Here, the terms can be considered as the features of the vector space and m as the number of dimensions, i.e., the total number of terms in the vocabulary. Assuming that several categories exist in X, each category of documents is characterized by a subset of terms in the vocabulary that corresponds to a subset of features in the vector space. In this sense, we say that a cluster of documents is situated in a subspace of the vector space. The supplier behaviour data can be represented in the same way: each supplier is equivalent to a document and each behaviour corresponds to a term in the vocabulary, Table 1. An example of supplier behavior data matrix

      i0   i1   i2   i3   i4    C
s0     1    1    1    0    0    C0
s1     1    1    0    0    0    C0
s2     0    1    1    0    0    C0
s3     0    0    1    1    1    C1
s4     0    0    1    1    1    C1
s5     0    0    0    1    1    C1


which is equivalent to the set of behaviour attributes. Table 1 shows a simple example of the supplier behaviour data matrix. There are 6 suppliers divided into two groups. Each column i is a behaviour attribute for a product item. A value of 1 indicates that the company bought that product from the supplier, while 0 means that the product item was not bought. The real supplier behaviour data matrix can be much larger than this simple one. We can see that there are many zero entries in the table. In clustering we need to focus on the attributes with non-zero values, instead of the entire attribute space. How to find clusters from the non-zero attributes while ignoring the attributes with zero values is a problem of subspace clustering that can be solved by the feature weighting k-means algorithm [4].

The feature weighting k-means finds a weight for each feature in each cluster. Let Λ = (Λ1, Λ2, ..., Λk) be the set of weight vectors for all clusters, and Λl = (λ_{l,1}, λ_{l,2}, ..., λ_{l,m}) be the weights for the m features of the lth cluster. During the k-means clustering process, FW-KMeans automatically calculates the feature weights, which produces a k × m weight matrix; that is, in each cluster m weights are assigned to the m features. The weights can be used to determine which attributes are important in discovering the clusters. To calculate the weights in the k-means clustering process, we minimize the following cost function:

F(W, Z, Λ) = Σ_{l=1}^k Σ_{j=1}^n Σ_{i=1}^m w_{l,j} λ_{l,i}^β [d(z_{l,i}, x_{j,i}) + σ]   (1)

subject to

  Σ_{l=1}^k w_{l,j} = 1,  1 ≤ j ≤ n
  w_{l,j} ∈ {0, 1},  1 ≤ j ≤ n, 1 ≤ l ≤ k                                  (2)
  Σ_{i=1}^m λ_{l,i} = 1,  0 ≤ λ_{l,i} ≤ 1,  1 ≤ l ≤ k

where k (≤ n) is a known number of clusters, β is an exponent greater than 1 [5], W = [w_{l,j}] is a k × n integer matrix, Z = [Z1, Z2, ..., Zk] ∈ R^{k×m} represents the k cluster centers, and d(z_{l,i}, x_{j,i}) (≥ 0) is a distance or dissimilarity measure between object j and the centroid of cluster l on the ith feature. Usually, we use the Euclidean distance:

d(x_{j,i}, z_{l,i}) = (x_{j,i} − z_{l,i})^2   (3)

The value of the parameter σ affects the feature weighting process. If σ is much larger than d(z_{l,i}, x_{j,i}), the weights will be dominated by σ and λ_{l,i} will approach 1/m, which makes the clustering process revert to standard k-means. If σ is too small, the gap between the weights of the zero-dispersion features and those of the other important features becomes large, thereby undermining the importance of the other features. To balance these effects, we calculate σ from the average dispersion of the entire data set over all features as follows:

σ = [Σ_{j=1}^{n̂} Σ_{i=1}^m d(x_{j,i}, o_i)] / (n̂ · m)   (4)


where o_i is the mean value of feature i over the entire data set. In practice we use a sample instead of the entire data set to calculate σ (a 5% sample is used, following sampling theory [9]); n̂ is the number of objects in the sample. Experimental results in [4] have shown that this choice of σ is reasonable for producing satisfactory clustering results and identifying the important features of clusters.
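A minimal NumPy sketch of one feature-weighting k-means iteration of the kind FW-KMeans performs is given below. It is not the authors' implementation: the assignment, center and weight updates use the generic closed forms for this family of weighted objectives (see [4][5] for the exact formulas), σ is estimated from a 5% sample as described above, and all parameter defaults are illustrative assumptions.

import numpy as np

def fw_kmeans(X, k, beta=1.5, n_iter=30, sample_frac=0.05, rng=None):
    """Sketch of feature-weighting k-means; returns labels, centers Z, weights L."""
    rng = np.random.default_rng() if rng is None else rng
    n, m = X.shape
    # sigma: average dispersion of a small sample around the global feature means
    sample = X[rng.choice(n, max(1, int(sample_frac * n)), replace=False)]
    sigma = np.mean((sample - X.mean(axis=0)) ** 2)

    Z = X[rng.choice(n, k, replace=False)].astype(float)   # initial cluster centers
    L = np.full((k, m), 1.0 / m)                            # uniform feature weights

    for _ in range(n_iter):
        # 1. assign each object to the cluster with the smallest weighted cost
        cost = np.stack([((L[l] ** beta) * ((X - Z[l]) ** 2 + sigma)).sum(axis=1)
                         for l in range(k)])                # shape (k, n)
        labels = cost.argmin(axis=0)
        # 2. recompute cluster centers
        for l in range(k):
            members = X[labels == l]
            if len(members):
                Z[l] = members.mean(axis=0)
        # 3. recompute the per-cluster feature weights (small dispersion -> large weight)
        for l in range(k):
            members = X[labels == l]
            if not len(members):
                continue
            D = ((members - Z[l]) ** 2 + sigma).sum(axis=0)  # per-feature dispersion, > 0
            inv = (D[:, None] / D[None, :]) ** (1.0 / (beta - 1.0))
            L[l] = 1.0 / inv.sum(axis=1)                     # weights sum to 1 per cluster
    return labels, Z, L

The returned weight matrix L plays the role described above: within each cluster, the features with the largest weights indicate the subspace in which that cluster is formed.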

4 Supplier Categorization with FW-KMeans

In this section, we discuss the use of FW-KMeans to categorize suppliers from a real data set taken from a food retail chain company and present the results. We first present the supplier transaction data and its conversion into behaviour descriptions of suppliers. Then we show the clustering results and discuss their validation.

4.1 Transaction Data

A supplier's business behaviour in a given time period is recorded in a set of business transactions that occurred at different time points. A business transaction records a purchase of food products from a set of product items. The most relevant data attributes in a transaction include the amount, quantity and price of each product item and the time and date on which the transaction was committed. In this experiment, we extracted transaction data of 974 suppliers over a 10.5-month period, from 1 January 2004 to 16 November 2004. Below is the summary of the data [10].

Table 2. Summary of transaction data

Total number of transactions:      3,945,190 Records
Gross amount:                      1,235,581,986 HKD
Total number of items:             16,323 Items
Total number of purchased items:   7,441 Items
Number of item categories:         100 ItemCats
Total number of suppliers:         974 Suppliers

The company has a total of 2,399 suppliers, which are grouped into four broad categories: active external suppliers, inactive external suppliers, internal suppliers with supplier codes 4XXXXX, and internal suppliers with supplier codes 9XXXXX. We were only interested in the 974 active external suppliers. Each record of the raw transaction data contains a large amount of information. The focus of our study is to classify suppliers according to the actual trade data using our clustering algorithm. We pre-processed the transaction data and converted it into a behaviour attribute representation so that the suppliers could be clustered according to behaviour patterns. The behaviour data matrix has 100 columns and 974 rows. Each column represents one item category and each row represents one supplier. In the behaviour


data, we only consider the items each supplier has provided. Each cell of the matrix is the purchase amount (in monetary units) for one supplier on one item category. Figure 2 shows the data matrix. A black dot in the figure denotes that a certain quantity of the given item category was purchased from the supplier, while a blank cell indicates that the supplier did not supply that category. We can see that the data is very sparse.


Fig. 2. Matrix Map Image of Trans. Data

4.2 Supplier Categorization with FW-KMeans

In this section, we discuss the use of the FW-KMeans clustering algorithm to categorize the 974 suppliers based on their product item supplying patterns. We first converted the transaction data into a supplier and behaviour attribute matrix. The behaviour attributes are the 100 top product items purchased, and the elements of the matrix are the purchase amounts, normalized as the proportion of the actual purchase amount over the total purchase amount from the supplier. This makes two suppliers of different sizes with the same purchasing pattern very similar. In this way, the suppliers in a cluster have the same set of purchased items, and the purchase amount for each item accounts for the same portion of total purchases.

The company had already categorized its existing suppliers based on their static information. Because of the dissimilarity of purchase patterns among suppliers of the same category, the current supplier classification introduces considerable bias into supplier relationship management. For instance, suppliers in the same class are treated in the same way although their dynamic business
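As a hedged illustration of the pre-processing step just described, the pandas sketch below pivots toy transaction records into a supplier-by-item-category matrix and row-normalizes it; the column names and sample values are made up and do not reflect the company's real schema.

import pandas as pd

# Toy stand-in for the raw transactions (supplier, item category, purchase amount).
trans = pd.DataFrame({"supplier_id": ["s0", "s0", "s1", "s1", "s1"],
                      "item_category": ["frozen seafood", "canned food",
                                        "canned food", "drinks", "drinks"],
                      "amount": [120.0, 30.0, 80.0, 10.0, 10.0]})

matrix = trans.pivot_table(index="supplier_id", columns="item_category",
                           values="amount", aggfunc="sum", fill_value=0.0)
# Normalize each row to proportions of the supplier's total purchases, so that
# small and large suppliers with the same buying pattern look alike.
matrix = matrix.div(matrix.sum(axis=1), axis=0)
print(matrix)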


behaviours were very different. The purpose of re-categorizing suppliers based on real business behaviours was to adjust the existing categorization and to offer suppliers with different business behaviours different relationship management strategies.


Fig. 3. Comparing Supplier Categories vs. Supplier Clusters

We used FW-KMeans to cluster the supplier and behaviour attribute matrix. The experiment was conducted on a machine with a Pentium(R) 1.8 GHz CPU and 512 MB RAM. The parameter β was set to 1.5 and k was set to 60, because the number of supplier classes in the current categorization is 60. Figure 3 shows the relationships between the result clusters and the company's current supplier classes. The spectrum (colour) values correspond to the result clusters, the x-axis shows the company's supplier classes, and the y-axis shows the count of suppliers.

From the chart, we can see that suppliers in some existing classes were assigned to different clusters. This indicates that suppliers in the same class had very different business behaviours although their static information is similar. From this figure we can observe three different types of current supplier classes. The first type consists of classes in which suppliers had similar supplying patterns, such as class 1 and class 42; for these classes, no adjustment is needed. The second type consists of classes that were merged into single clusters, which indicates that the current classification is too fine for the suppliers in these classes. Examples of this type are classes 9 and 12, which should be merged into one class because their suppliers have similar supplying patterns. The third type consists of classes whose suppliers were divided into different clusters, such as classes 7, 8 and 60; these classes should be divided into subclasses because suppliers with different supplying patterns should be managed differently. This result demonstrates that some of the current supplier classes were well developed but others should be adjusted in order to better utilize product purchasing resources and improve the management of supplier relationships in the company. This kind of information was observed for the first time by the company's buying staff and gave them considerable insight into their current practice of managing suppliers.

4.3 Validation of Cluster Analysis

To validate the clustering results produced by the FW-KMeans algorithm, we sorted the suppliers by cluster ID and visualized the product supplying patterns of the different clusters. Figure 4 shows part of the visualization result.


Fig. 4. Clusters 5, 40, 44 and 54 are described by attributes (11, 24, 31, 18, 20, 21, 22, 23, 25, 26, 27, 29, 30, 33, 36, 37, 38), (44, 45, 47, 52, 54, 55), (78, 80, 86, 89, 92, 94, 98, 100) and (1, 3, 4, 5, 10, 12, 14, 15, 16, 17, 18, 30, 35), respectively

From the figure, we can clearly see that the suppliers within each of clusters 5, 40, 44 and 54 indeed had similar product supplying patterns, which implies that the clustering algorithm was effective in clustering such highly sparse data and solving the dynamic supplier categorization problem. We used the weights produced by the FW-KMeans algorithm to identify the important item categories in each cluster. However, because of the special case of zero dispersion in certain features, the largest weights may point to item categories that were not supplied by any supplier of the cluster; in this case we ignored those item categories, since their large weights did not carry the semantics of the product items that we considered important. The remaining highly weighted items are the ones that contributed most to the semantics of the cluster, so we used them to characterize the suppliers.

Figure 5 plots the weights of these items in different clusters. The horizontal axis is the index of the items and the vertical lines indicate the values of the weights. It is clear that each cluster has its own subset of key items, because the lines do not overlap much across clusters. Cluster 5 and cluster 54 overlap somewhat, because the goods supplied by the two groups of suppliers are close to each other. The items corresponding to the clusters are listed in Figure 6. The suppliers of cluster 5 mainly supplied frozen seafood products, while the suppliers of clusters 40, 44 and 54 mainly supplied canned food, alcoholic beverages and frozen meat, respectively.

In comparison to the company's current supplier classes, the supplier categorization generated by our algorithm has two merits. Firstly, it captures the buying patterns, which can be used to manage suppliers so as to enlarge


Fig. 5. Weights of the items in clusters 5, 40, 44, 54


Fig. 6. Item category names corresponding to clusters 5, 40, 44, 54

company’s profits. The key rule of the supplier categorization is the item categories, but the main items of a supplier may not be important to the company. Also, the importance of a supplier to the company and the importance of the company to a supplier are all buried in the buying patterns. Corresponding to the clustering results, this work can be done smoothly. Secondly, we can see from above clustering results, the item categories information is also included in our categorization.

5 Conclusions and Future Work

Dynamic categorization of suppliers based on business behaviors can give a clear insight to the business with suppliers, therefore being an important way to improve the supplier relationship management in a company. The data-driven method presented in this paper is an important step toward an efficient and dynamic categorization of the large number of suppliers in large organizations.


We have discussed the problems of dynamic supplier categorization and demonstrated our method of using the k-means type subspace clustering algorithm to effectively cluster the high dimensional, highly sparse supplier behaviour data. The results produced from a real data set have shown that the clusters produced by the algorithm are indeed useful in real business. The use of data mining techniques in supplier relationship management is still at an early stage, and many real business problems remain to be identified and addressed with data mining solutions. This work represents one step forward in this direction. Our current work is to implement a real system that incorporates the methodology discussed in this paper for use in a real company environment.

References

1. M. Bensaou: "Portfolios of buyer-supplier relationships," Sloan Management Review, vol. 40, no. 4, p. 35, 1999.
2. F. Olsen and M. Ellram: "A portfolio approach to supplier relationships," Industrial Marketing Management, vol. 26, no. 2, pp. 101-113, 1997.
3. L. Parsons, E. Haque, and H. Liu: "Subspace clustering for high dimensional data: a review," SIGKDD Explorations, vol. 6, no. 1, pp. 90-105, 2004.
4. L. Jing, M. Ng, J. Xu, and Z. Huang: "Subspace clustering of text documents with feature weighting k-means algorithm," PAKDD, 2005.
5. Y. Chan, K. Ching, K. Ng, and Z. Huang: "An optimization algorithm for clustering using weighted dissimilarity measures," Pattern Recognition, vol. 37, no. 5, pp. 943-952, 2004.
6. L. Jing, M. Ng, J. Xu, and Z. Huang: "On the performance of feature weighting k-means for text subspace clustering," WAIM, 2005.
7. L.-E. Gadde and I. Snehota: "Making the most of supplier relationships," Industrial Marketing Management, no. 29, pp. 305-316, 2000.
8. R. Baeza-Yates and B. Ribeiro-Neto: "Modern Information Retrieval," Addison Wesley, 1999.
9. P. Hague and P. Harris: "Sampling and Statistics," Kogan Page, 1993.
10. ETI: "Final report for business intelligence case study to x's limited," Technical report, 2005.

Classifying Web Data in Directory Structures

Sofia Stamou1, Alexandros Ntoulas2, Vlassis Krikos1, Pavlos Kokosis1, and Dimitris Christodoulakis1

1 Computer Engineering and Informatics Department, Patras University, 26500 Patras, Greece
{stamou, krikos, kokosis}@ceid.upatras.gr, [email protected]
2 Computer Science Department, University of California, Los Angeles (UCLA), USA
[email protected]

Abstract. Web Directories have emerged as an alternative to the Search Engines for locating information on the Web. Typically, Web Directories rely on humans putting in significant time and effort into finding important pages on the Web and categorizing them in the Directory. In this paper, we experimentally study the automatic population of a Web Directory via the use of a subject hierarchy. For our study, we have constructed a subject hierarchy for the top level topics offered in Dmoz, by leveraging ontological content from available lexical resources. We first describe how we built our subject hierarchy. Then, we analytically present how the hierarchy can help in the construction of a Directory. We also introduce a ranking formula for sorting the pages listed in every Directory topic, based on the pages' quality, and we experimentally study the efficiency of our approach against other popular methods for creating Directories.

1 Introduction

Web Directories have emerged as an alternative to the well-established Web Search Engines for locating information on the Web. Typically, a Web Directory, e.g. the Dmoz Directory [2], organizes Web pages in a subject hierarchy and allows users to locate interesting information by navigating through the hierarchy. Despite the simplicity of navigating the contents of Web Directories, their editing and maintenance are tedious and time-consuming, since the task of assigning Web pages to topic Directories relies exclusively on the indispensable effort of human editors. However, the sheer quantity of information available on the Web restrains the exhaustive investigation of each and every Web page before these are assigned to topical categories. To make things worse, the staggering rate of the Web's evolution [22] overwhelms humans with the amount of data that they need to painstakingly examine and categorize within the Directories' contents. Clearly, if we could help Web editors automate their task we would save a lot of time for a number of people.

One way to alleviate the problem of categorizing Web pages inside a Directory's topics is to employ machine learning techniques in order to build a classifier, which will then assign every Web page to a topic. However, this approach requires a considerable number of training examples to build accurate classifiers, and might prove


inefficient for Web scale classification. This is due to the Web’s dynamic nature, which imposes the need for re-training the classifier (possibly on a new dataset) every time a change is made. In this paper, we present an alternative approach for the effective population of Web Directories, which does not require training and, therefore, it can cope easily with changes on the Web. The only input that our method requires is a subject hierarchy that one would like to use and a collection of Web pages that one would like to assign to the hierarchy’s subjects. Besides the automatic population of Web Directories, our approach offers an efficient way of ordering the Web pages inside the Directory’s topics, by ranking the pages based on how “descriptive” they are of the category they are assigned to. At a high level our method proceeds as follows: First, we leverage ontological content from freely available resources created by the Natural Language Processing community, in order to build a subject hierarchy. Then, given a collection of Web pages, we pre-process them in order to extract the words that “best” communicate every page’s theme, the so-called thematic words. We use the pages’ thematic words and the hierarchy to compute one or more subjects to assign to every page. Moreover, we employ a ranking algorithm, which measures the pages’ closeness to the subjects, as well as the semantic correlations among the pages in the same subject, and sorts the pages listed in each Directory topic, so that pages of good quality appear earlier in the results. In Section 5, we experimentally evaluate the performance of our approach in categorizing a sample of nearly 320,000 Web pages and we compare it to the performance of other classification schemes. Obtained results show that the categorization accuracy of our automatic classification method is comparable to the accuracy of machine learning classification techniques. However, in our classification method, no training set is required. We start our discussion by presenting the subject hierarchy that we developed for the top level topics used in a popular Web Directory, i.e. Dmoz. Then, in Section 3, we describe how we identify thematic words inside every Web page and we show how we employ thematic words to assign Web pages to the hierarchy’s subjects. In Section 4, we introduce a ranking formula, which sorts the pages listed in every Directory topic by prioritizing pages of higher classification accuracy. Our experimental results are presented in Section 5 and we conclude our work in Sections 6 and 7.

2 A Subject Hierarchy for the Web

Web Directories offer a browsable topic hierarchy that is used for organizing Web pages into topics. Currently, topic hierarchies are constructed and maintained by human editors, who manually locate interesting Web pages. Based on the pages' content, the editors find the best fit for each page among the hierarchy's topics. Apparently, the manual construction of Web Directories is tedious and may suffer from inconsistencies. To overcome the difficulties associated with editing Web Directories, we built a topic hierarchy, which we use for automatically categorizing Web pages. Our hierarchy essentially integrates domain information from the Suggested Upper Merged Ontology (SUMO) [3] and the MultiWordNet Domains (MWND) [1] into WordNet 2.0 [4]. Since a fraction of WordNet's hierarchies is already annotated with


domain information, our task was essentially to anchor a domain label to the remaining hierarchies. To that end, we firstly anchored to those WordNet hierarchies that are uniquely annotated in either SUMO or MWND their corresponding domain labels. In selecting a domain label, for the hierarchies that are assigned a different domain between SUMO and MWND, we merged those hierarchies together and we picked the domains of the merged hierarchies’ parent nodes. Merging was generally determined by the semantic similarity that the concepts of the distinct hierarchies exhibit, where semantic similarity is defined as a correlation of: (i) the length of the path that connects two concepts in the shared hierarchy and (ii) the number of common concepts that subsume two concepts in the hierarchy [25]. Lastly, we attached to each of the hierarchy’s lower level concepts those WordNet hierarchies that encounter a specialization (is-a) relation to it. A detailed description of the process we followed for building our hierarchy can be found in [11]. To demonstrate the usefulness of our hierarchy in a real-world setting, we augmented the hierarchy with topics that are currently used by Web cataloguers for classifying Web data. For that purpose, we explored the first level topics in the Dmoz Directory and, using WordNet, we selected the Dmoz topics that are super-ordinates of the merged hierarchies’ root concepts. The selected Dmoz topics (shown in Table 1) were incorporated, through the is-a relation, in our hierarchy and formed the hierarchy’s first level topics. Table 1. The hierarchy’s first-level topics Topics in our Hierarchy Arts Health Sports News Games Society Science Computers Reference Home Shopping Recreation Business

At the end of this merging process, we came down to a hierarchy of 489 concepts that are organized into 13 topics. The resulting hierarchy is a directed acyclic graph where each node represents a concept, denoted by a unique label, and linked to other concepts via a specialization (is-a) link. The maximum depth of the hierarchy’s graph is 4 and the maximum number of children concepts (i.e. branching factor) from a node is 26. An important note here is that our hierarchy can be tailored to accommodate any first level topics that one would like to use, as long as these are represented in WordNet. In addition, the hierarchy could be used in a multilingual setting, through the use of aligned WordNets [29].

3 Finding Web Pages’ Thematic Words The main intuition in our approach for categorizing Web pages is that topic relevance estimation of a page relies on the page’s lexical coherence, i.e. having a substantial

Classifying Web Data in Directory Structures

241

portion of words associated with the same topic. To capture this property, we adopt the lexical chaining approach and, for every page, we generate a sequence of semantically related terms, known as lexical chain. The computational model we used for generating lexical chains is presented in the work of [6] and it generates lexical chains 1 in a three-step process: (i) select a set of candidate terms from the page, (ii) for each candidate term, find an appropriate chain relying on a relatedness criterion among members of the chains, and (iii) if it is found, insert the term in the chain. The relatedness factor in the second step is determined by the type of WordNet links that connect the candidate term to the terms stored in existing chains. We then disambiguate the words inside every generated lexical chain, using the scoring function f introduced in [27], which indicates the possibility that a word relation is a correct one. Given two words, w1 and w2, their scoring function f via a relation r, depends on the words’ association score, their depth in WordNet and their respective relation weight. The association score (Assoc) of the word pair (w1, w2) is determined by the words’ corpus co-occurrence frequency and it is given by: Assoc(w1, w2) =

log (p (w1, w2) + 1) . Ns (w1) < Ns (w2)

(1)

where, p(w1,w2) is the corpus co-occurrence probability of the word pair (w1,w2) and Ns(w) is a normalization factor, which indicates the number of WordNet senses that word w has. Given a pair (w1,w2), their DepthScore expresses the words’ position in WordNet hierarchy and is defined as: DepthScore (w1, w2) = Depth(w1) 2 < Depth(w2) 2 .

(2)

where, Depth(w) is the depth of word w in WordNet. Semantic relation weights (RelationWeight) have been experimentally fixed to 1 for reiteration, 0.2 for synonymy and hyper/hyponymy 0.3 for antonymy, 0.4 for mero/holonymy and 0.005 for siblings. The scoring function f of w1 and w2 is defined as: f s(w1, w2, r) = Assoc (w1, w2) < DepthScore(w1, w2) < Re lationWeight (r) .

(3)

The score of the lexical chain Ci that comprises w1 and w2, is calculated as the sum of the score of each relation rj in Ci. Formally: Score(Ci ) =

¦

f s (w j 1, w j 2 , r j ) .

r j in C j

(4)

To compute a single lexical chain for every downloaded Web page, we segment the latter into shingles [8], and for every shingle, we generate scored lexical chains, as described before. If a shingle produces multiple chains, the lexical chain of the highest score is considered as the most representative chain for the shingle. In this way, we eliminate chain ambiguities. We then compare the overlap between the elements of all shingles’ lexical chains consecutively. Elements that are shared across chains are deleted so that lexical chains display no redundancy. The remaining elements are merged together into a single chain, representing the contents of the entire page, and a new Score(Ci) for the resulting chain Ci is computed. 1

Candidate terms are nouns, verbs, adjectives or adverbs.

242

S. Stamou et al.

3.1 Categorizing Web Pages In order to assign a topic to a Web page, our method operates on the page’s thematic words. Specifically, we map every thematic word of a page to the hierarchy’s topics and we follow the hierarchy’s hypernymic links of every matching topic upwards until we reach a root node. For short documents with very narrow subjects this process might yield only one matching topic. However, due to both the great variety of the Web data and the richness of the hierarchy, it is often the case that a page contains thematic words corresponding to multiple root topics. To accommodate multiple topic assignment, a Relatedness Score (RScore) is computed for every Web page to each of the hierarchy’s matching topics. This RScore indicates the expressiveness of each of the hierarchy’s topics in describing the pages’ content. Formally, the RScore of a page represented by the lexical chain Ci to the hierarchy’s topic Dk is defined as the product of the chain’s Score(Ci) and the fraction of the chain’s elements that belong to topic Dk. We define the Relatedness Score of the page to each of the hierarchy’s matching topics as: RScore (i, k) =

Score(Ci )



# of Ci elements of Dk matched # of Ci elements

.

(5)

The denominator is used to remove any effect the length of a lexical chain might have on RScore and ensures that the final score is normalized so that all values are between 0 and 1, with 0 corresponding to no relatedness at all and 1 indicating the category that is highly expressive of the page’s topic. Finally, a Web page is assigned to the topical category Dk for which it has the highest relatedness score of all its RScores above a threshold ȉ, with T been experimentally fixed to ȉ= 0.5. The page’s indexing score is: IScore (i, k) = max RScore (i, k).

(6)

Pages with chain elements matching several topics in the hierarchy, and with relatedness scores to any of the matching topics below T, are categorized in all their matching topics. By allowing pages to be categorized in multiple topics, we ensure there is no information loss during the Directories’ population and that pages with short content (i.e. short lexical chains) are not unquestionably discarded as less informative.

4 Organizing Web Pages in Directories Admittedly, the relatedness score of a page to a Directory topic does not suffice as a measurement for ordering the pages that are listed in the same Directory topic. This is because RScore is not a good indicator of the amount of content that these pages share. Herein, we report on the computation of semantic similarities among the pages that are listed in the same Directory topic. Semantic similarity is indicative of the pages’ correlation and helps us determine the ordering of the pages that are deemed related to the same topic. To estimate the semantic similarity between a set of pages, we compare the elements in a page’s lexical chain to the elements in the lexical chains of the other pages in a Directory topic. Our intuition is that the more elements the chains of two pages

Classifying Web Data in Directory Structures

243

have in common, the more correlated the pages are to each other. To compute similarities between pages, Pi and Pj that are assigned to the same topic, we first need to identify the common elements between their lexical chains, represented as PCi and PCj respectively Then, we use the hierarchy to augment the elements of the chains PCi and PCj with their synonyms. Chain augmentation ensures that pages of comparable content are not regarded unrelated if their lexical chains contain distinct but semantically equivalent elements (i.e. synonyms). The augmented elements of PCi and PCj respectively, are defined as: AugElements ( PCi ) = Ci

* Synonyms (Ci ) and * Synonyms (C j )

(7)

AugElements( PC j ) = C j

where, Synonyms (Ci) denotes the set of the hierarchy’s concepts that are synonyms to any of the elements in Ci and Synonyms (Cj) denotes the set of the hierarchy’s concepts that are synonyms to any of the elements in Cj. The common elements between the augmented lexical chains PCi and PCj are determined as: ComElements(PCi , PC j ) = AugElements(PCi )

 AugElements(PC j ) .

(8)

We formally define the problem of computing pages’ semantic similarities as follows: if the lexical chains of pages pi and pj share elements in common, we produce the correlation look up table with tuples of the form . The similarity measurement between the lexical chains PCi, PCj of the pages Pi and Pj is given by:

σ s (PCi , PC j )

=

2



ComElements (PCi , PC j )

AugElements (PCi ) + AugElements (PC j )

.

(9)

where, the degree of semantic similarity is normalized so that all values are between zero and one, with 0 indicating that the two pages are totally different and 1 indicating that the two pages talk about the same thing. 4.1 Ranking Web Pages in Directory Topics We sort the pages assigned to a Directory topic, in terms of a DirectoryRank (DR) metric, which estimates the “importance” of pages in a Directory. DirectoryRank is inspired by, and thus resembles, the PageRank measure [23] in the sense that the importance of a page is high if it is somehow connected to other important pages, and that important pages are valued more highly than less important ones. While PageRank realizes the connection between pages in terms of their in/out-going links to other pages, DirectoryRank defines the connection between pages in terms of their semantic coherence to other pages in the Directory, this is; it estimates the importance of pages from their degree of semantic similarity to other important pages. Intuitively, an important page in a Directory topic, is a page that has a high relatedness score to the Directory’s topic and that is semantically close (similar) to many other pages in that topic. DR defines the quality of a page to be the sum of its topic relatedness score and its overall similarity to the fraction of pages with which it correlates in the given topic. This way, if a page is highly related to topic D and also corre-

244

S. Stamou et al.

lates highly with many informative pages in D, its DR score will be high. Formally, consider that page pi is indexed in Directory topic Tk with some RScore (pi, Tk) and let p1, p2, …, pn be pages in Tk with which pi semantically correlates with scores of ıs (PC1, PCi), ıs (PC2, PCi),…, ıs (PCn, PCi), respectively. Then, the DirectoryRank (DR) of pi is given by: DR ( pi , Tk ) = RScore ( pi , Tk ) + [σs (PC , PCi ) + σs (PC2, PCi ) + ...... + σs (PCn, PC )] / n . i 1

(10 )

where n corresponds to the total number of pages in topic Tk with which pi semantically correlates. High DR values imply that: (i) there are some “good quality” sources among the data stored in a Directory, and that (ii) more users are likely to visit them while browsing the Directory’s contents. Summarizing, the DirectoryRank metric determines the ranking order of the pages associated with a Directory and serves towards giving higher rankings to the more “important” pages of the Directory.
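The computation behind Equations (7)-(10) can be summarized in a short sketch. The Python snippet below is an illustrative rendering only, not the authors' implementation; the data structures (plain sets and lists) and the function names are assumptions made for the example.

def augment_elements(chain, synonyms):
    """Eq. (7): augment a page's lexical chain with hierarchy synonyms."""
    augmented = set(chain)
    for element in chain:
        augmented |= synonyms.get(element, set())
    return augmented

def semantic_similarity(chain_i, chain_j, synonyms):
    """Eq. (8)-(9): normalized overlap of the augmented lexical chains."""
    aug_i = augment_elements(chain_i, synonyms)
    aug_j = augment_elements(chain_j, synonyms)
    common = aug_i & aug_j                                  # Eq. (8)
    if not aug_i and not aug_j:
        return 0.0
    return 2.0 * len(common) / (len(aug_i) + len(aug_j))    # Eq. (9)

def directory_rank(rscore, similarities):
    """Eq. (10): topic relatedness plus average similarity to correlated pages."""
    if not similarities:
        return rscore
    return rscore + sum(similarities) / len(similarities)

# Example: rank a page against two other pages of the same Directory topic.
synonyms = {"car": {"automobile"}}
chains = {"p1": {"car", "vehicle"}, "p2": {"automobile", "road"}, "p3": {"car"}}
sims_p1 = [semantic_similarity(chains["p1"], chains[p], synonyms) for p in ("p2", "p3")]
print(directory_rank(0.6, sims_p1))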

5 Evaluation of Automatic Categorization

To study the effectiveness of our method in automatically assigning Web pages into a subject hierarchy, we ran an experiment in which we compared the efficiency of our method in categorizing Web pages in the Dmoz topics to the efficiency of a Naïve Bayesian classifier in categorizing the same set of pages in the same topics.

5.1 Experimental Setup

In selecting our experimental data, we wanted to pick a useful yet representative sample of Dmoz's content. By useful, we mean that our sample should comprise Web pages with textual content and not only links, frames or audiovisual data. By representative, we mean that our sample should span those Dmoz categories whose topics are among the top-level topics in our subject hierarchy. To obtain such a sample, we downloaded a set of 318,296 Web pages listed in the 13 Dmoz topics that are represented in our hierarchy. We parsed the downloaded pages and generated their shingles, after removing HTML markup. Pages were then tokenized, part-of-speech tagged, lemmatized and submitted to our classification system, which, following the process described above, computed and weighted a single lexical chain for every page. To compute lexical chains, our system relied on a resources index, which comprised (i) the 12.6M WordNet 2.0 data for determining the semantic relations that exist between the pages' thematic words, (ii) a 0.5GB compressed TREC corpus from which we extracted a total of 340MB binary files for obtaining statistics about word co-occurrence frequencies, and (iii) the 11MB top-level concepts in our hierarchy. Table 2 shows some statistics of our experimental data. Our system generated and scored simple and augmented lexical chains for every page and, based on a combined analysis of this information, it indicates the most appropriate topic in the hierarchy in which to categorize each of the pages. To measure our system's effectiveness in categorizing Web pages, we experimentally studied its performance against the performance of a Naïve Bayes classifier, which has proved to be efficient for Web-scale classification [14].

Table 2. Statistics on the experimental data

Category      # of documents   Average # of shingles
Arts          28,342           30
Sports        20,662           13
Games         11,062           17
Home          6,262            11
Shopping      52,342           12
Business      60,982           16
Health        23,222           25
News          9,462            37
Society       28,662           45
Computers     35,382           25
Reference     13,712           33
Recreation    8,182            19
Science       20,022           32
Total         318,296

In particular, we trained a Bayesian classifier by performing a 70/30 split on our experimental data and used 70% of the downloaded pages in each Dmoz topic as a learning corpus. We then tested the performance of the Bayesian classifier in categorizing the remaining 30% of the pages in the most suitable Dmoz category. For evaluating the classification accuracy of both the Bayesian and our classifier, we used the Dmoz categorizations as a comparison testbed, i.e. we compared the classification delivered by each of the two classifiers to the classification done by the Dmoz cataloguers for the same set of pages. Although our experimental pages are listed in all sub-categories of the Dmoz top-level topics, for the experiment presented here we focus on classifying the Web pages only into the top-level topics.

5.2 Discussion of the Experimental Results

The overall accuracy results are given in Table 3, whereas Table 4 compares the accuracy rates for each category between the two classifiers. Since our classifier allows pages with low RScores to be categorized in multiple topics, in our comparison we explored only the topics of the highest RScores. Note also that we ran the Bayesian classifier five times on our data, every time on a random 70/30 split, and we report on the best accuracy rates among all runs for each category. The overall accuracy rates show that our method has improved classification accuracy compared to Bayesian classification. The most accurate categories in our classification method are Arts and Society, which give 90.70% and 88.54% classification accuracy respectively.

Table 3. Overall accuracy results from both classifiers

Classifier   Accuracy   Standard Error Rate
Bayesian     65.95%     0.06%
Ours         69.79%     0.05%


Table 4. Comparison of average accuracy rates between categories for the two classifiers

Category      Bayesian classifier   Our classifier
Arts          67.18%                90.70%
Sports        69.71%                75.15%
Games         60.95%                64.51%
Home          36.56%                40.16%
Shopping      78.09%                71.32%
Business      82.30%                70.74%
Health        64.18%                72.85%
News          8.90%                 55.75%
Society       61.14%                88.54%
Computers     63.91%                74.04%
Reference     20.70%                69.23%
Recreation    54.83%                62.38%
Science       49.31%                71.90%

The underlying reason for the improved accuracy of our classifier in those topics is the fact that our hierarchy is rich in semantic information for those topics. This argument is also attested by the fact that for the topics Home and News, for which our hierarchy contains a small number of lexical nodes, the classification accuracy of our method is relatively low, i.e., 40.16% and 55.75% respectively. Nevertheless, even in those topics our classifier outperforms the Bayesian classifier, which gives for the above topics a classification accuracy of 36.56% and 8.90%. The most straightforward justification for the Bayesian classifier's low accuracy in the topics Home and News is the limited number of pages that our collection contains about those two topics. This is also in line with the observation that the Bayesian classifier outperforms our classifier when (i) dealing with a large number of documents, and/or (ii) dealing with documents comprising specialized terminology. The above is attested by the improved classification accuracy of the Bayesian classifier for the categories Business and Shopping, which both have many documents and whose documents contain specialized terms (e.g. product names) that are under-represented in our hierarchy. A general conclusion we can draw from our experiment is that, given a rich topic hierarchy, our method is quite promising in automatically classifying pages and incurs little overhead for Web-scale classification. While there is much room for improvement and further testing is needed before judging the full potential of our method, based on our findings we argue that the current implementation of our system could serve as a Web cataloguers' assistant by delivering preliminary categorizations for Web pages. These categorizations could then be further examined by human editors and reordered when necessary. Finally, in our approach, we exploit the pages' classification probability (i.e. RScore) so that, upon ranking, pages with higher RScores are prioritized over less related pages. This, in conjunction with the pages' semantic similarities, forms the basis of our ranking formula (DirectoryRank). An early study of the potential of DirectoryRank can be found in [28].
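For readers who want to reproduce a comparable baseline, the following Python sketch shows a Naive Bayes text-classification experiment with a 70/30 split. It uses scikit-learn as a stand-in implementation (the paper does not name a library), and the documents and labels below are tiny placeholders rather than the Dmoz data or our feature pipeline.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder corpus: page texts and their (human-assigned) top-level categories.
docs = ["museum exhibition painting", "football match score", "opera and theatre news"]
labels = ["Arts", "Sports", "Arts"]

# 70/30 split of the categorized pages.
X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.3, random_state=0)

vectorizer = CountVectorizer()
clf = MultinomialNB()
clf.fit(vectorizer.fit_transform(X_train), y_train)

predicted = clf.predict(vectorizer.transform(X_test))
print("accuracy:", accuracy_score(y_test, predicted))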

6 Related Work

The automated categorization of Web documents into pre-defined topics has been investigated in the past. Previous work mainly focuses on using machine learning


techniques to build text classifiers. Several methods have been proposed in the literature for the construction of document classifiers, such as decision trees [5], Support Vector Machines [13], Bayesian classifiers [24], and hierarchical text classifiers [19], [11], [9], [20], [26], [12], [21], [7], [17]. The main commonality in previous methods is that their classification accuracy depends on a training phase, during which statistical techniques are used to learn a model based on a labeled set of training examples. This model is then applied for classifying unlabeled data. While these approaches provide good results, they are practically inconvenient for Web data categorization, mainly because it is computationally expensive to continuously gather training examples for the ever-changing Web. The feature that distinguishes our approach from other text classification techniques is that our method does not require a training phase, and therefore it is convenient for Web-scale classification. An alternative approach to categorizing Web data involves the use of the Web pages' hyperlinks and/or anchor text in conjunction with text-based classification methods [10], [15], [16]. The main intuition in exploiting hypertext for categorizing Web pages relies on the assumption that both the links and the anchor text of Web pages communicate information about the pages' content. But again, classification relies on a training phase, in which labeled examples of anchor text from links pointing to the target documents are employed for building a learning model. This model is subsequently applied to the anchor text of unlabeled pages and classifies them accordingly. Finally, the objective in our work (i.e. populating Web Directories) could be addressed from the agglomerative clustering perspective, a technique that treats the generated clusters as a topical hierarchy for clustering documents [18]. Agglomerative clustering methods build the subject hierarchy at the same time as they generate the clusters of the documents; therefore, the subject hierarchy might differ between successive runs of such an algorithm. In our work, we preferred to build a hierarchy by using existing ontological content, rather than to rely on newly generated clusters, for which we would not have perceptible evidence to support their usefulness for Web data categorization. However, it would be interesting in the future to take a sample of categorized pages and explore it using an agglomerative clustering module.

7 Concluding Remarks

We have presented a method which uses a subject hierarchy to automatically categorize Web pages into Directory structures. Our approach extends beyond data classification and addresses issues pertaining to the Web pages' organization within Directories and the quality of the categorizations delivered. We have experimentally studied the effectiveness of our approach in categorizing a fraction of Web pages into topical categories, by comparing its classification accuracy to the accuracy of a Bayesian classifier. Our findings indicate that our approach has a promising potential in facilitating current tendencies in editing and maintaining Web Directories. However, in this work, we leave open for future investigation issues such as ranking pages within Directories, users' perception of our system's performance, etc. It is our hope, though, that our approach will pave the way for future improvements in populating Web Directories and in handling the proliferating Web data. We now discuss a number of advantages that our approach entails and which we believe could be fruitfully explored by others. The implications of our findings apply

primarily to Web cataloguers and catalogue users. Since cataloguers are challenged by the prodigious volume of Web data that they need to process and categorize into topics, it is of paramount importance that they are equipped with a system that carries out a preliminary categorization of pages on their behalf. We do not imply that humans do not have a critical role to play in populating Directories; rather, we see their indispensable involvement as lying in the evaluation and improvement of the automatically produced categorizations, rather than in the scanning of the numerous pages enqueued for categorization. In essence, we argue that our approach compensates for the rapidly evolving Web by offering Web cataloguers a preliminary categorization for the pages that they have not processed yet. On the other side of the spectrum, end users are expected to benefit from the Directories' updated content. Given that users get frustrated when they encounter outdated pages every time they access Web catalogs to find new information that interests them, it is vital that the Directories' contents are up-to-date. Our model ensures that this requirement is fulfilled, since it runs fast and scales with the evolving Web, enabling immediacy of new data.

References

1. MultiWordNet Domains. http://wndomains.itc.it/
2. Open Directory Project. http://dmoz.com
3. Sumo Ontology. http://ontology.teknowledge.com/
4. WordNet 2.0. http://www.cogsci.princeton.edu/~wn/
5. Apte C., Damerau F. and Weiss S.M. 1994. Automated learning of decision rules for text categorization. In ACM Transactions on Inf. Systems, 12(3): 233-251.
6. Barzilay R. and Elhadad M. 1997. Lexical chains for text summarization. Master's Thesis.
7. Boyapati V. 2002. Improving text classification using unlabeled data. In Proceedings of SIGIR Conference: 11-15.
8. Broder A.Z., Glassman S.C., Manasse M. and Zweig G. 1997. Syntactic clustering of the web. In Proceedings of the 6th WWW Conference: 1157-1166.
9. Chakrabarti S., Dom B., Agraval R. and Raghavan P. 1998(a). Scalable feature selection, classification and signature generation for organizing large text databases into hierarchical topic taxonomies. In VLDB Journal, 7: 163-178.
10. Chakrabarti S., Dom B. and Indyk P. 1998(b). Enhanced hypertext categorization using hyperlinks. In Proceedings of ACM SIGMOD Conference.
11. Stamou S., Krikos V., Kokosis P. and Christodoulakis D. 2005. Web directory construction using lexical chains. In Proceedings of the 10th NLDB Conference.
12. Chen H. and Dumais S. 2000. Bringing order to the web: Automatically categorizing search results. In Proceedings of the SIGCHI Conference: 145-152.
13. Christianini N. and Shawe-Taylor J. 2000. An introduction to support vector machines. Cambridge University Press.
14. Duda R.O. and Hart P.E. 1973. Pattern Classification and scene analysis. Wiley & Sons.
15. Furnkranz J. 1999. Exploring structural information for text classification on the WWW. In Intelligent Data Analysis: 487-498.
16. Glover E., Tsioutsiouliklis K., Lawrence S., Pennock M. and Flake G. 2002. Using web structure for classifying and describing Web pages. In Proc. of the 11th WWW Conference.
17. Huang C.C., Chuang S.L. and Chien L.K. 2004. LiveClassifier: Creating hierarchical text classifiers through Web corpora. In Proceedings of the 13th WWW Conference: 184-192.


18. Kaufman L. and Rousseeuw P.J. 1990. Finding groups in data: An introduction to cluster analysis. New York: Wiley & Sons.
19. Koller D. and Sahami M. 1997. Hierarchically classifying documents using very few words. In Proceedings of ICML Conference: 170-178.
20. Mladenic D. 1998. Turning Yahoo into an automatic web page classifier. In the 13th European Conference on Artificial Intelligence: 473-474.
21. Nigam K., McCallum A.K., Thrun S. and Mitchell T.M. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2-3): 103-134.
22. Ntoulas A., Cho J. and Olston Ch. 2004. What's new on the Web? The evolution of the Web from a search engine perspective. In Proceedings of the 13th WWW Conference: 1-12.
23. Page L., Brin S., Motwani R. and Winograd T. 1998. The PageRank citation ranking: Bringing order to the web. (http://dbpubs.stanford.edu:8090/pub/1999-66)
24. Pazzani M. and Billsus D. 1997. Learning and revising user profiles: The identification of interesting Web sites. In Machine Learning Journal, 23: 313-331.
25. Resnik Ph. 1999. Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. In Journal of Artificial Intelligence Research, 11: 95-130.
26. Ruiz M.E. and Srinivasan P. 1999. Hierarchical neural networks for text categorization. In Proceedings of SIGIR Conference: 281-282.
27. Song Y.I., Han K.S. and Rim H.C. 2004. A term weighting method based on lexical chain for automatic summarization. In Proceedings of the 5th CICLing Conference: 636-639.
28. Krikos V., Stamou S., Ntoulas A., Kokosis P. and Christodoulakis D. 2005. DirectoryRank: Ordering pages in web directories. In Proceedings of the 7th ACM International Workshop on Web Information and Data Management (WIDM), Bremen, Germany.
29. WordNets in the world. Available at http://www.globalwordnet.org/gwa/wordnet_table.htm

Semantic Similarity Based Ontology Cache

Bangyong Liang, Jie Tang, Juanzi Li, and Kehong Wang

Knowledge Engineering Group, Department of Computer Science, Tsinghua University, 100084 Beijing, China
{liangby97, j-tang02}@mails.tsinghua.edu.cn

Abstract. This paper addresses the issue of ontology caching on the semantic web. The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation. Ontologies serve as the metadata for defining information on the semantic web, and ontology-based semantic information retrieval (semantic retrieval) is becoming more and more important. Much research and industrial work has so far been devoted to semantic retrieval, and ontology-based retrieval improves the performance of search engines and web mining. In semantic retrieval, the great number of accesses to ontologies usually makes the ontology servers very inefficient. To address this problem, it is necessary to cache concepts and instances while the ontology server is running. Existing caching methods from the database community can be used for the ontology cache; however, they are not sufficient for dealing with the problem. In database caching, usually the most frequently accessed data are cached and the recently less frequently accessed data are removed from the cache. In contrast, in an ontology base, data are organized as objects and relations between objects. A user may request one object and then request another object through one of its relations; the user may also request a similar object that has no relation to the first one. Ontology caching must therefore consider more factors and is more difficult. In this paper, ontology caching is formalized as a problem of classification. In this way, ontology caching becomes independent of any specific semantic web application. An approach is proposed using machine learning methods. When an object (e.g. a concept or an instance) is requested, we view its similar objects as candidates, and a classification model is then used to predict whether each of these candidates should be cached or not. Features for the classification model are defined. Experimental results indicate that the proposed method can significantly outperform the baseline methods for ontology caching. The proposed method has been applied to a research project called SWARMS.

1 Introduction

The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation [1]. In recent years, the semantic web has made significant progress, in particular through the development of infrastructure such as ontology languages like RDF, DAML+OIL and OWL, and ontology editors like Protégé [2].


The ontology-based semantic web has many applications, for example semantic retrieval, intelligent reasoning, and social network analysis. In these applications, there are a large number of requests to the ontology base; a request includes accesses to concepts or instances. Such frequent accesses to the ontology base make the ontology server very inefficient. In order to achieve highly efficient responses from the ontology server, it is necessary to cache the ontology data. This is exactly the problem addressed in this paper.

In the database community, the problem of data caching has been intensively investigated. The methodologies proposed can be used for ontology caching; however, they are not sufficient for dealing with the problem. In database caching, usually the most frequently accessed data are cached and the recently less frequently accessed data are removed from the cache. In contrast, in an ontology base, data are organized as objects and relations among objects. A user may request one object and then request another object through one of its relations; the user may also request a similar object that has no relation to the first one. Ontology caching must therefore consider more factors and is more difficult. Unfortunately, despite the importance of the problem, ontology caching has received little attention in the research community. To the best of our knowledge, no previous study has sufficiently investigated the problem.

Three questions arise for ontology caching: (1) how to formalize the problem (since it involves many different factors at different levels and appears to be very complex); (2) how to solve the problem in a principled approach; and (3) how to make an implementation.

(1) We formalize ontology caching as a classification problem. Specifically, we use a classification model to determine whether the candidates should be cached or not. (2) We propose to conduct ontology caching using machine learning methods. In the approach, when a user requests an object, we take its similar objects as candidates and use a classification model to predict whether they should be cached or not. (3) It turns out that some of the tasks in the approach can be accomplished with existing methodologies, but some cannot. Caching the currently accessed entity follows the caching principle that "the same data is likely to be accessed again in the near future", so it can be accomplished with existing cache methodologies. However, selecting for the cache the entities that have relations with the currently accessed entity cannot be solved with existing technologies, because an entity that has never been accessed before may still have relations with the currently accessed entity; whether to cache it depends on the importance of the relation, not on the access history. To solve this problem, we implement a classification model which considers not only the access history but also the relations to the currently accessed entity, in order to determine whether an object that has relations to the currently accessed one should be cached.

We tried to collect data from as many sources as possible for experimentation. Since our project SWARMS uses the software domain for its demo, we collected the data from SourceForge and transformed it into instances of our software ontology (http://www.schemaweb.info/schema/SchemaDetails.aspx?id=235).
Our experimental results indicate that the proposed classification methods perform significantly better than the baseline methods for caching.


The rest of the paper is organized as follows. In Section 2, we introduce related work. In Section 3, we formalize the problem of ontology caching. In Section 4, we describe our approach to the problem and in Section 5, we explain one possible implementation. Section 6 gives our experimental results. We make concluding remarks in Section 7.

2 Related Works

There have been some industry tools that investigate ontology caching methods. For example, Jena [3] is an ontology access tool which supports parsing of RDF, DAML+OIL and OWL files. However, the cache method in Jena is simple: it just creates an ontology model in memory and loads all the concepts and instances into the model. Keller and Basu propose a predicate-based client-side caching scheme that aims to reduce query-response times and network traffic between the clients and the server [4]. The clients try to answer queries locally from the cached tuples using associated predicate descriptions. In this approach, queries executed at the server are used to load the client cache, and predicate descriptions derived from these queries are stored at both the client and the server to examine and maintain the contents of the cache. Lee and Chu propose a semantic caching method for web databases via query matching techniques [5]. The proposed scheme utilizes query naturalization to cope with schematic, semantic and querying-capability differences between the wrapper and the web source; further, a semantic knowledge-based algorithm is used to find the best-matched query from the cache. Context awareness in database caching has also been noted by Davidson [6] and Hanson [7]. There has been some other related work on data caching. For example, mediator systems, which usually play the role of content translation systems, need data caching methods for efficiency. A semantic cache for the ontology-based mediator system named YACOB is proposed by Marcel et al. [8]. YACOB's objective is to overcome performance issues in ontology-based mediator systems, which provide domain knowledge for the integration of heterogeneous web sources. The semantic region [9] was introduced by Dar et al. Semantic regions, like pages, provide a means for the cache manager to aggregate information about multiple tuples. Unlike pages, however, the size and shape (in the semantic space) of regions can change dynamically.

3 Ontology Cache as Classification

Ontology caching is an important subject on the semantic web. A large number of applications need ontology caching as a fundamental component. Unfortunately, despite the importance of the problem, ontology caching has received little attention. The problem of ontology caching is quite different from that of database caching. We here give a case study to show the problem of ontology caching and its main difference from database caching. Figure 1 shows the metadata definition in an ontology of a university department. In the ontology, the concepts “Associate Professor” and “Full Professor” are subclasses of the concept “Professor”. The concepts “Master Student”


and “Phd Student” are subclasses of the concept “Student”. The concept “Professor” has both properties and relations. The properties include “Name” and “Age”, and the relations are “HasStudent” and “WorksInProject”. The difference between properties and relations is that the values of properties are data types (for example, string, integer, etc.), whereas the values of relations are instances. For example, in Figure 1, an instance of the concept “Professor” shows the values of the properties and relations.

Fig. 1. The ontology of a department in a university

Fig. 2. An example of two users visiting an ontology

Then, let us consider two users visiting the ontology base, as shown in Figure 2. User 1 visits the instance “Pro1” and gets the value of the property “Name”. User 2 visits the instance “Pro1”, gets its students using the “HasStudent” relation, and gets the students' names and ages. If a plain data cache method is used to perform ontology caching, then when User 1 visits “Pro1”, the data cache method does not care about the semantics between concepts and instances in the ontology and will not load the students into the cache. Thus, data caching cannot simply be applied to ontology caching without change: an ontology cache must take the semantic relations between concepts and instances into account, and this makes it different from a data cache. This paper proposes a method based on the semantic similarity among concepts and instances to determine whether a concept or instance should be cached. We formalize ontology caching as follows: when a user accesses an entity in the ontology, the accessed entity is denoted by ec, and the set of candidate entities to be cached after ec is accessed is denoted by Candidate(ec). A classification model is used to process each entity in Candidate(ec) and decides whether that entity should be cached or not.
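A minimal Python sketch of this formalization is given below. The entity store, the candidate generator and the classifier interface are illustrative assumptions, not the actual SWARMS components.

class OntologyCache:
    def __init__(self, classifier, capacity=800):
        self.classifier = classifier      # predicts "cache" / "not cache" per candidate
        self.capacity = capacity
        self.entries = {}

    def candidates(self, entity, ontology):
        """Candidate(e_c): entities related to or similar to the accessed one (assumed API)."""
        return ontology.related(entity) | ontology.similar(entity)

    def on_access(self, entity, ontology):
        self.entries[entity.id] = entity           # the accessed entity itself
        for cand in self.candidates(entity, ontology):
            if self.classifier.should_cache(entity, cand):
                self.entries[cand.id] = cand
        while len(self.entries) > self.capacity:   # simplistic eviction placeholder
            self.entries.pop(next(iter(self.entries)))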


4 Our Approach: Semantic Similarity Based Ontology Caching

We propose SSOC (Semantic Similarity based Ontology Caching) for the task of ontology caching. Figure 3 shows the flow.

Fig. 3. Semantic similarity based ontology cache flow (the ontologies and the ontology access logs are preprocessed into features; similarity calculation produces entity similarities, which feed cache model training; user feedback flows back into the training)

The “preprocess” step takes the ontologies and the ontology access logs as its input and outputs the features of the entities in the ontologies. The features are then used as the input for similarity calculation, and the entity similarities are generated. After the training process, the cache model is learned with machine learning methods. The process is detailed in the following sections.

4.1 Semantic Similarity

Semantic similarity is the measurement of the similarity between two entities in an ontology. We define the similarity measure as a real-valued function

$Sim(x, y): S^2 \rightarrow [0, 1]$

where S is the set of entities in the ontology. The function Sim(x, y) satisfies reflexivity and symmetry:

$Sim(x, x) = 1$ (Reflexivity)
$Sim(x, y) = Sim(y, x)$ (Symmetry)

The similarity between two concepts or two instances is calculated from hierarchy similarities, property similarities, label similarities and access similarities. Hierarchy similarities have different meanings for concepts and for instances. Denote the hierarchy similarity of two entities as Simh. For two concepts, represent all of concept A's parent concepts as P(A) and all of concept B's parent concepts as P(B); the hierarchy similarity of the two concepts is:

$Sim_{ch}(A, B) = \frac{|P(A) \cap P(B)|}{|P(A) \cup P(B)|}$

For two instances, the hierarchy similarity is calculated as follows:
1. Denote the concept of instance A as C(A) and the concept of instance B as C(B).
2. Use the concept hierarchy similarity to compute the similarity between instances A and B:

$Sim_{ih}(A, B) = \frac{|P(C(A)) \cap P(C(B))|}{|P(C(A)) \cup P(C(B))|}$

Concepts have properties, which may have data values or point to other concepts. Denote the property similarity of two entities as Simp. Two kinds of properties should be noted: properties with a type constraint and properties with a cardinality constraint. Let Pd(A) be the type-constrained properties of concept A, Pc(A) be the cardinality-constrained properties of concept A, and Pall(A) be all the properties of concept A. The property similarity of two concepts is:

$Sim_{cp}(A, B) = \frac{|(P_d(A) \cap P_d(B)) \cup (P_c(A) \cap P_c(B))|}{|P_{all}(A) \cup P_{all}(B)|}$

The property similarity of two instances is different from that of concepts, because instances have values for their properties; only the same properties with the same value should be considered. Here we define the “same value” for the two kinds of properties: 1) for properties with data values, the values are the same when the two values are equal (for string values, this means the two strings are identical in all characters); 2) for properties pointing to another instance, the values are the same when the two instances are the same, i.e., they are one instance or have the “sameIndividualAs” relation between them. Denote such same-valued properties of instances A and B as Ps(A, B). The property similarity of the two instances is defined as:

$Sim_{ip}(A, B) = \frac{|P_s(A, B)|}{|P(A) \cup P(B)|}$

Both concepts and instances have labels that represent the textual names of the entities. Denote the label similarity of two entities as Siml. Although a textual name can be any symbol, the sharable and reusable nature of ontologies makes the textual names of entities very close to natural language words, and the textual name of an entity is usually made up of one or more words. To calculate the label similarity of two entities, WordNet, a popular electronic dictionary, is used to calculate the similarities between words. Denote entity A's textual name as name1 and entity B's textual name as name2. For two words w1 and w2, the similarity is:

$Sense(s_1, s_2) = \frac{2 \times \log p(s)}{\log p(s_1) + \log p(s_2)}$

where $p(s)$ is the probability of a sense, estimated from $count(s)$; $w_1 \in s_1$, $w_2 \in s_2$; and $s$ is a sense node in WordNet that is the common hypernym of $s_1$ and $s_2$. Summing up, the textual name similarity measure is defined as:

$Sim_l(name_1, name_2) = \frac{\sum_{i=1}^{n}\sum_{j=1}^{m} Sense(w_{1i}, w_{2j})}{n \times m}$

where n is the word count of name1 and m is the word count of name2.

The access log is very important for caching. Denote the access similarity of two entities as Sima. The access similarity can be obtained from statistics on the access log, as follows. In a period of time, a client may access many concepts and instances; define the set of entities accessed by a client as S(client1) = {C1, C2, ..., Cn}. If the set contains both entity A and entity B, entities A and B are similar with respect to this client's accesses. There are three kinds of clients with different access patterns for entities A and B:

C1(A, B): A ∈ S(C1) and B ∈ S(C1)
C2(A, B): (A ∈ S(C2) and B ∉ S(C2)) or (A ∉ S(C2) and B ∈ S(C2))
C3(A, B): A ∉ S(C3) and B ∉ S(C3)

The access similarity of entities A and B is strengthened as the number of C1 clients increases and reduced as the number of C2 clients increases; the number of C3 clients provides little information. Therefore, the access similarity of A and B is:

$Sim_a(A, B) = \frac{count(C_1)}{count(C_1) + count(C_2)}$

where Count(Cx) is the number of clients of type Cx. The four semantic similarities make up the whole similarity; here a linear function is used to combine them. Let Sim1 = Simh, Sim2 = Simp, Sim3 = Siml and Sim4 = Sima. The formula of the similarity function is:

$Sim(x, y) = \sum_{k=1}^{4} x_k \, Sim_k(x, y)$

If the similarity of x and y is bigger than a given value, it is predicted that when one of them is accessed, the other will also be accessed in the near future; so when one of them is accessed, the other will be loaded into the cache. Therefore, the cache model is defined by the following rule: x is accessed, y is loaded into the cache iff $Sim(x, y) > \lambda$.
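The following Python sketch illustrates this caching rule with the four sub-similarities combined linearly. The entity representation (dictionaries of sets) and the Jaccard-style helpers are assumptions made for the example, and the label similarity is stubbed rather than computed from WordNet.

def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def hierarchy_sim(x, y):   # Sim_h: overlap of parent concepts
    return jaccard(x["parents"], y["parents"])

def property_sim(x, y):    # Sim_p: overlap of (shared-value) properties
    return jaccard(x["properties"], y["properties"])

def label_sim(x, y):       # Sim_l: placeholder for the WordNet-based label similarity
    return 1.0 if x["label"] == y["label"] else 0.0

def access_sim(c1, c2):    # Sim_a = count(C1) / (count(C1) + count(C2))
    return c1 / (c1 + c2) if (c1 + c2) else 0.0

def should_cache(x, y, weights, lam, c1=0, c2=0):
    sims = [hierarchy_sim(x, y), property_sim(x, y), label_sim(x, y), access_sim(c1, c2)]
    return sum(w * s for w, s in zip(weights, sims)) > lam

a = {"parents": {"Professor"}, "properties": {"Name", "Age"}, "label": "pro1"}
b = {"parents": {"Professor"}, "properties": {"Name"}, "label": "pro2"}
print(should_cache(a, b, weights=[0.4, 0.3, 0.2, 0.1], lam=0.3, c1=5, c2=3))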

4.2 Semantic Similarity Based Cache Model

The next task is to determine the parameters xk, k = 1, 2, 3, 4, and λ. A search algorithm should be applied to obtain approximately optimal values of these parameters. The search algorithm should have the following features: 1) it can find approximately optimal solutions for the parameters; 2) it must converge within a definite number of iteration steps.


After xk, k = 1, 2, 3, 4, and λ are determined, the cache model is generated. The cache model is the final output of the approach flow.

4.3 The Approach Flow

The preprocess component gives the feature representation of each entity in the ontology base. After similarity calculation, every sub-similarity of each pair of entities is calculated and ready for training. In the initial step, the “cache” and “not cache” labels are assigned randomly to the similarities. User feedback is used to adjust the model periodically.

5 Implementation

We consider one implementation of the proposed approach. We employ a unified machine learning approach for caching prediction. The whole algorithm includes the training, caching and feedback processes. The training process constructs the model, and the model stores the cache information in the format (Entity A, {B, C, D, ...}), where the set of entities indicates all entities which should be cached when entity A is accessed. The feedback process alters the cache result and refines the training corpus. Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics [10]. The improved genetic algorithm fulfills the requirements on the search algorithm stated in Section 4.2 and is selected to train the parameters of the cache model. The training process includes two steps. Firstly, the preprocessing process analyzes the concepts, instances and the access log of the ontology server; the features for concepts and instances are computed and discretized for the improved genetic algorithm. Secondly, each feature vector must be labeled as cached or not. At the beginning of the algorithm, we use a random method to assign the labels: for every feature vector, we generate fifteen random numbers in [0, 1], and if more of the numbers fall in [0.5, 1] than in [0, 0.5], the feature vector is marked as cached. The input of the algorithm is a series of predefined results, in the following format:

Table 1. The training data

Simh   Simp   Siml   Sima   Cache or not
0.75   0.54   0.81   0.37   Y
0.32   0.27   0.34   0.90   N
0.48   0.65   0.56   0.73   Y
…      …      …      …      …

The feedback results are used to adjust the parameter values. The feedback results can be the approval or decline of the existing training data. An example is as follows:

Table 2. The feedback of declining an existing training data item

Simh   Simp   Siml   Sima   Cache or not
0.32   0.27   0.34   0.90   N -> Y

After the training data is ready, the improved genetic algorithm is used to search for the optimal values of xk and λ. The training data is the input to the improved genetic algorithm and the result is the values of xk and λ. The individual representation, crossover, mutation and stop conditions are the important parts of the algorithm. The individuals are represented in the following format:

$\{x_1, x_2, x_3, x_4, \lambda\}, \quad 0 \le x_k, \lambda \le 1$

There are many crossover methods in genetic algorithms; here, arithmetical crossover is used. For two individuals

$s_1 = \{x_1^{(1)}, x_2^{(1)}, x_3^{(1)}, x_4^{(1)}, \lambda^{(1)}\}, \quad s_2 = \{x_1^{(2)}, x_2^{(2)}, x_3^{(2)}, x_4^{(2)}, \lambda^{(2)}\}$

generate a random number α in [0, 1]. The crossover operation generates two new individuals:

$s_1^{n} = \{\alpha x_1^{(1)} + (1-\alpha) x_1^{(2)}, \ldots, \alpha\lambda^{(1)} + (1-\alpha)\lambda^{(2)}\}, \quad s_2^{n} = \{\alpha x_1^{(2)} + (1-\alpha) x_1^{(1)}, \ldots, \alpha\lambda^{(2)} + (1-\alpha)\lambda^{(1)}\}$

The mutation is described as follows. Denote the individual selected for mutation as $s_m = \{x_1, x_2, x_3, x_4, \lambda\}$ and randomly select one of its elements to mutate, say x3. Denote x3's minimal possible value as $x_3^{min}$ and its maximal possible value as $x_3^{max}$, generate a random number α, and denote x3's value after mutation as $x_3'$. Then:

$x_3' = \begin{cases} Random(x_3^{min}, x_3), & 0 \le \alpha \le 0.5 \\ Random(x_3, x_3^{max}), & 0.5 < \alpha \le 1 \end{cases}$

Random(x, y) represents a random number ranging from x to y. So after mutation, sm becomes $s_m' = \{x_1, x_2, x_3', x_4, \lambda\}$. There are two conditions for ending the search process. The first is that the fitness of an individual is higher than a given value. The fitness of an individual is defined as follows. A row of the training data, denoted by $\{sim_1, sim_2, sim_3, sim_4, c\}$, is classified correctly if one of the following conditions is matched:

1) $\sum_k x_k \, sim_k \ge \lambda$ AND $c = Y$,  2) $\sum_k x_k \, sim_k < \lambda$ AND $c = N$

Denote the number of correctly classified rows of the training data as $Count_{correct}$ and the total number of rows as $Count_{row}$. The fitness is defined as

$fitness = \frac{Count_{correct}}{Count_{row}}$

If the best fitness among the individuals is bigger than $fitness_{expect}$,


the algorithm can stop. Currently, $fitness_{expect}$ is given manually. The second condition is that the number of generations has reached a given limit:

$gen > gen_{expect}$

So the stop condition can be summarized as:

$fitness_{best} > fitness_{expect}$ or $gen > gen_{expect}$

The trained model decides which entities should be loaded into the cache when a certain entity is accessed, and when the server is running, the model drives the cache in processing the clients' requests. The feedback process is used to adjust the model: the cache results and the access information are shown to administrators for evaluation, and the administrator can decide whether the result for each cache element is correct or incorrect. The result is added to the corpus, the model can be rebuilt by running the training process again, and the new model can be used to refresh the cache.
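The search procedure can be sketched as a plain genetic algorithm over individuals {x1, x2, x3, x4, λ}, with arithmetical crossover, single-gene mutation and the fitness and stop conditions defined above. The Python code below is a simplified stand-in for the improved genetic algorithm, with the three known rows of Table 1 as a toy training set.

import random

TRAINING = [([0.75, 0.54, 0.81, 0.37], "Y"),
            ([0.32, 0.27, 0.34, 0.90], "N"),
            ([0.48, 0.65, 0.56, 0.73], "Y")]

def fitness(ind):
    xs, lam = ind[:4], ind[4]
    correct = sum(1 for sims, c in TRAINING
                  if (sum(x * s for x, s in zip(xs, sims)) >= lam) == (c == "Y"))
    return correct / len(TRAINING)

def crossover(s1, s2):                       # arithmetical crossover
    a = random.random()
    return ([a * u + (1 - a) * v for u, v in zip(s1, s2)],
            [a * v + (1 - a) * u for u, v in zip(s1, s2)])

def mutate(ind):                             # mutate one randomly chosen gene
    i = random.randrange(5)
    a = random.random()
    ind = list(ind)
    ind[i] = random.uniform(0.0, ind[i]) if a <= 0.5 else random.uniform(ind[i], 1.0)
    return ind

def train(pop_size=20, fitness_expect=0.95, gen_expect=200):
    pop = [[random.random() for _ in range(5)] for _ in range(pop_size)]
    for gen in range(gen_expect):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > fitness_expect:          # first stop condition
            break
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            c1, c2 = crossover(*random.sample(parents, 2))
            children += [mutate(c1), mutate(c2)]
        pop = parents + children[: pop_size - len(parents)]
    return max(pop, key=fitness)                      # best {x1..x4, lambda}

print(train())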

6 Experiments and Evaluations

The cache approach has been used in the project SWARMS (http://keg.cs.tsinghua.edu.cn/project/pswmp.htm), which is a tool for exploring domain knowledge. The ontology for the software domain was designed by referencing the SourceForge web site, a well-known web site for open source projects. We crawled 1180 software projects and extracted them into instances, which were then put into our ontology server. The ontology server in the experiment contains 140 concepts, 116 relations and 6700 instances. We use two kinds of measures. One measure is the time consumed by a sequence of queries: we define the time consumed for processing n queries as Tn, and for a specific n, the lower Tn, the better the performance of the algorithm. The other measure is the hit rate of a sequence of queries, defined as Hn. For a number of queries n, if the number of hits is t, Hn is computed as:

$H_n = \frac{t}{n} \quad (t \le n)$

For the access-pattern simulation, the entities are divided into two groups (referred to below as the round-shape group and the diamond-shape group); for each query a random number is generated, and if it is larger than P, an entity in the diamond-shape group is randomly selected for access. When P = 0.5, the entities are accessed with equal probability. We take the cache size as 800 entities, so the ratio of the cache size to the whole entity set is:

$Ratio = \frac{Cache\ size}{Whole\ size} = \frac{800}{6700 + 140} \approx 0.12$

Firstly, we analyze the entities with equal access probability. When P = 0.5, the comparison of time consumed and hit rate with the cache enabled and not enabled is as follows:

Fig. 5. The comparison of time consumed (seconds) and hit rate against queries count, with the cache enabled and not enabled, when P = 0.5

When P = 0.8, the ratio of the round-shape entities to the whole entity set is set to 0.2. The comparison of time consumed and hit rate with the cache enabled and not enabled is as follows:

Fig. 6. The comparison of time consumed (seconds) and hit rate against queries count, with the cache enabled and not enabled, when P = 0.8

When a portion of the entities is accessed with higher probability, which is reasonable in the real world, the cache algorithm achieves better performance. If all entities are accessed with the same probability, the algorithm is not as effective, because the underlying idea of the algorithm is to group the entities into semantic regions. A region is essentially a subgraph of the whole ontology graph; when every entity is accessed with the same probability, a semantic region cannot stay in the cache for long. For this reason, the cache algorithm is not as effective when P = 0.5.
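A toy simulation, not the paper's experiment, can illustrate this point: with a fixed-size cache of recently used entities, a skewed workload that sends most requests to a small hot group yields a much higher hit rate than a near-uniform workload. All numbers below are assumptions chosen only to roughly match the experimental scale (about 6,840 entities, 800 cache slots, a 20% hot group).

import random
from collections import OrderedDict

def simulate(p_hot, hot_size=1368, total=6840, cache_size=800, queries=5000):
    hot = list(range(hot_size))
    cold = list(range(hot_size, total))
    cache, hits = OrderedDict(), 0
    for _ in range(queries):
        entity = random.choice(hot) if random.random() < p_hot else random.choice(cold)
        if entity in cache:
            hits += 1
            cache.move_to_end(entity)
        else:
            cache[entity] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict the least-recently used entity
    return hits / queries

# p_hot = 0.2 makes every entity roughly equally likely (the hot group is 20% of all
# entities); p_hot = 0.8 corresponds to the skewed workload.
print("near-uniform:", simulate(0.2), " skewed:", simulate(0.8))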

7 Conclusions

In this paper, we have investigated the problem of ontology caching. We have defined the problem as one of classification and proposed a machine learning based approach to the task. Using an improved genetic algorithm, we have been able to implement the approach. Experimental results show that our approach can significantly outperform baseline methods for ontology caching. The approach has been applied to a research project called SWARMS. As future work, we plan to further improve the accuracy of caching. We also want to apply the caching method to other semantic web applications.

References

[1] Berners-Lee, T., Fischetti, M. and Dertouzos, M.L. Weaving the Web: The Original Design and Ultimate Destiny of the World Wide Web. Harper San Francisco, 1999.
[2] Calvo, F. and Gennari, J.H. Interoperability of Protege 2.0 beta and OilEd 3.5 in the Domain Knowledge of Osteoporosis. EON 2003.
[3] Carroll, J.J. Unparsing RDF/XML. Proceedings of the 11th International Conference on World Wide Web, 2002.
[4] Keller, A.M. and Basu, J. A predicate-based caching scheme for client-server database architecture. The VLDB Journal (1996) 5: 35-47.
[5] Lee, D. and Chu, W.W. Towards Intelligent Semantic Caching for Web Sources. Journal of Intelligent Information Systems (2001) 17: 23-45.


[6] Davidson, J. Natural language access to databases: user modeling and focus. Proceedings of the CSCSI/SCEIO Conference, Saskatoon, Canada, May 1982.
[7] Hanson, E.N., Chaabouni, M., Kim, C.H. and Wang, Y.W. A predicate matching algorithm for database rule systems. Proceedings of the ACM SIGMOD International Conference on Management of Data, Atlantic City, NJ, May 1990.
[8] Marcel, K., Kai-Uwe, S., Ingolf, G. and Hagen, H. Semantic Caching in Ontology-based Mediator Systems. Proc. of Berliner XML Tage, 2003, 155-169.
[9] Dar, S., Franklin, M.J., Jónsson, B.T., Srivastava, D. and Tan, M. Semantic Data Caching and Replacement. In Proceedings of the 22nd Int. Conf. on Very Large Data Bases (VLDB'96), September 1996, 330-341.
[10] Zhu, J. Non-classical Mathematics for Intelligent Systems. HUST Press, 2001.

In-Network Join Processing for Sensor Networks

Hai Yu, Ee-Peng Lim, and Jun Zhang

Center for Advanced Information Systems, Nanyang Technological University, Singapore
[email protected]
{aseplim, jzhang}@ntu.edu.sg

Abstract. Recent advances in hardware and wireless technologies have led to sensor networks consisting of large numbers of sensors capable of gathering and processing data collectively. Query processing on these sensor networks has to consider various inherent constraints. While simple queries such as select and aggregate queries in wireless sensor networks have been addressed in the literature, the processing of join queries in sensor networks remains to be investigated. In this paper, we present a synopsis join strategy for evaluating join queries in sensor networks with communication efficiency. In this strategy, instead of directly joining two relations distributed in a sensor network, synopses of the relations are first joined to prune those data tuples that do not contribute to the join results. We discuss various issues related to the optimization of synopsis join. Through experiments, we show the effectiveness of the synopsis join techniques in terms of communication cost for different join selectivities and other parameters.

1 Introduction

The emergence of wireless technologies has enabled the development of tiny, low-power, wireless sensors capable of sensing physical phenomena such as temperature, humidity, etc. Sensor networks have been adopted in various scientific and commercial applications [1, 2, 3]. Data collection in a sensor network is achieved by modeling it as a distributed database where sensor readings are collected and processed using queries [4, 5, 6]. In this paper, we address in-network join query processing in sensor networks. Join is an important operation in sensor networks for correlating sensor readings, since a single sensor reading may not provide enough information to represent a meaningful event or entity. Consider a sensor network covering a road network. Each sensor node can detect the IDs of vehicles in close vicinity, record the timestamps at which the vehicles are detected, and keep the timestamped records for a fixed duration, say 1 hour. Suppose NR and NS represent two sets of sensor nodes located at two regions of a road segment, Region1 and Region2, respectively. To gather the necessary data for determining the speeds of vehicles traveling between the two regions, the following join query can be expressed:

SELECT s1.vehId, s1.time, s2.time
FROM s1, s2
WHERE s1.loc IN Region1 AND s2.loc IN Region2 AND s1.vehId = s2.vehId

Fig. 1. General Strategies: (a) Naive Join, (b) Sequential Join, (c) Centroid Join

To evaluate the above query, sensor readings from Region1 and Region2 need to be collected and joined on the vehId attribute. We focus on addressing join scenarios in which the join selectivity is so low that it is not cost-effective to ship source tuples to the sink for the join. Therefore, efficient in-network join algorithms are required.

2 Preliminaries

Consider a sensor network consisting of N sensor nodes. We assume there are two virtual tables in the sensor network, R and S, containing sensor readings distributed over the sensors. Each sensor reading is a tuple with two mandatory attributes, timestamp and sensorID, indicating the time and the sensor at which the tuple is generated. A sensor reading may contain other attributes that are measurements generated by a sensor or multiple sensors, e.g., temperature. We are interested in the evaluation of static one-shot binary equi-join (BEJ) queries in sensor networks. A BEJ query for sensor networks is defined as follows.

Definition 1. Given two sensor tables R(A1, A2, ..., An) and S(B1, B2, ..., Bm), a binary equi-join (BEJ) is

$R \bowtie_{A_i = B_j} S \quad (i \in \{1, 2, \ldots, n\},\; j \in \{1, 2, \ldots, m\})$

where Ai and Bj are two attributes of R and S respectively, which have the same domain.

We assume that R and S are stored in two sets of sensor nodes NR and NS located in two distinct regions known as R and S, respectively. A BEJ query can be issued from any sensor node, called the query sink, which is responsible for collecting the join result. Due to limited memory, the query sink cannot perform the join by itself; a set of nodes, referred to as join nodes, is required to process the join collaboratively. The join processing can be divided into three stages: query dissemination, join evaluation, and result collection. In the query dissemination stage, the sink sends a BEJ query to one of the NR (and NS) nodes using a location-based routing protocol such as GPSR [7].


Once the first NR (and NS) node receives the query, the node broadcasts the query among the NR (NS) nodes. The query dissemination cost is therefore $O(\sqrt{N} + |N_R| + |N_S|)$. When sensors receive the query, they send their local data to the join nodes, which are either determined in the query dissemination phase or adaptively selected according to the network conditions in the join evaluation phase. Once the join results are ready, the query sink collects the join results from the join nodes (note that although the query sink may not be able to evaluate the join, it is able to consume the join results, since it can retrieve and process them tuple by tuple with little memory usage). Our objective is to minimize the total communication cost for processing a given BEJ query in order to prolong the sensor network lifetime. In addition, the join scheme has to ensure that the memory space needed by the join operation on each join node does not exceed the available memory space. In the next section, we present several general strategies for performing in-network joins. In Section 4 we describe a synopsis join strategy in which unnecessary data transmission is reduced by an additional synopsis join process.

3 General Strategies

In general strategies, join nodes NF are selected to join the tuples of R and S, without attempting to first filter out tuples that are not involved in the join results (referred to as non-candidate tuples). When a join query is issued, a join node selection process is initiated to find a set of join nodes NF to perform the join. R tuples are routed to a join region F in which the join nodes NF reside. Each join node nf ∈ NF stores a horizontal partition of the table R, denoted as Rf. S tuples are transmitted to and broadcast in F. Each join node nf receives a copy of S and processes the local join Rf ⋈ S. The query sink obtains the join results by collecting the partial join results at each nf. Note that although S could be large, the local join Rf ⋈ S at nf can be performed in a pipelined manner to avoid memory overflow [8]. The selection of NF is critical to the join performance. Join node selection involves selecting the number of nodes in NF, denoted by |NF|, and the location of the join region F. To avoid memory overflow, assuming R is evenly distributed over NF, |NF| should be at least |R|/m, where |R| denotes the number of tuples in R and m denotes the maximum number of R tuples a join node nf can store. Depending on the location of the join region, we have at least three join strategies, namely naive join, sequential join, and centroid join (see Figure 1).

Naive Join. In naive join, sensor nodes around the sink are selected as the join nodes NF, so that the cost of routing join results to the sink is minimized (Figure 1(a)). The communication cost involves routing tables R and S to the join region F, broadcasting S to the join nodes, and sending the join results from NF to the sink, as shown in Equation 1. Naive join establishes a baseline


for the performance of all join strategies, since any join strategy should at least perform better than naive join in terms of total communication cost in order to be a reasonable join strategy.

$C_{naive\text{-}join} = |S| \cdot dist(S, F) + |S| \cdot |N_F| + \sum_{n_f \in N_F} \left( |R_f| \cdot dist(R, n_f) + |R_f \bowtie S| \cdot dist(n_f, sink) \right)$  (1)

Here, dist(A, B) refers to the hop distance between A and B: if A and B are two nodes, dist(A, B) is the average hop distance of the routes selected by the routing protocol; if A and B are two regions, dist(A, B) is the average hop distance between any pair of nodes from A and B; and if A is a region and B is a node, dist(A, B) refers to the average hop distance between B and all nodes in A.

Sequential Join. Sequential join minimizes the cost of routing and distributing R tuples to the join region by selecting the nodes NR as NF (see Figure 1(b)). In this case, R tuples remain in their respective nodes. S tuples are routed to the region R and broadcast to all nodes in NR. Each node ni ∈ NR performs the local join Ri ⋈ S, where Ri is the local table stored at ni. Join results are delivered to the sink as shown in Figure 1(b). The communication cost of this strategy is:

$C_{seq\text{-}join} = |S| \cdot dist(R, S) + |S| \cdot |N_R| + \sum_{n_i \in N_R} |R_i \bowtie S| \cdot dist(R, sink)$  (2)

Centroid Join. Centroid join selects an optimal join region within the triangle formed by R, S and the sink, such that the total communication cost is minimized (see Figure 1(c)). The communication cost is shown in Equation 3. Path-Join [9] is an example of this strategy, which tries to find an optimal join region by minimizing a target cost function. Note that naive join and sequential join are special cases of centroid join.

$C_{cen\text{-}join} = \sum_{n_j \in N_S} |S_j| \cdot dist(n_j, F) + |S| \cdot |N_F| + \sum_{n_i \in N_R} |R_i| \cdot dist(n_i, F)$  (3)

The above three strategies can be further optimized for BEJ queries. A hash-based join can be applied in which both R and S are partitioned into a number of disjoint sub-tables, each with a join attribute value range. Each node nf in NF (or a subset of NF, if one node does not have enough memory space for handling the join) is dedicated to joining the two subsets Rv and Sv with the same join value range v. In this way, tuples with the same join attribute value are always joined at the same join node, and the broadcasting of S in NF can be avoided. The major problem associated with general strategies is the communication overhead for transmitting non-candidate tuples of R and S, especially for queries with low join selectivity.
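To get a feel for how the strategies compare, one can plug representative numbers into Equations (1)-(3). The Python sketch below simplifies the summations by assuming even partitioning and average hop distances, and all parameter values are made up for illustration; it is not an implementation of the actual routing or join machinery.

def naive_join_cost(R, S, n_f, d_S_F, d_R_F, d_F_sink, join_out):
    # Eq. (1): route S to F, broadcast S in F, route R partitions, ship results to sink.
    return S * d_S_F + S * n_f + R * d_R_F + join_out * d_F_sink

def sequential_join_cost(R, S, n_r, d_R_S, d_R_sink, join_out):
    # Eq. (2): route S to region R, broadcast S among N_R, ship results to sink.
    return S * d_R_S + S * n_r + join_out * d_R_sink

def centroid_join_cost(R, S, n_f, d_S_F, d_R_F):
    # Eq. (3): route both tables to the centroid region F and broadcast S there.
    return S * d_S_F + S * n_f + R * d_R_F

R, S, join_out = 5000, 5000, 50   # table sizes in tuples; low join selectivity
print("naive     :", naive_join_cost(R, S, n_f=50, d_S_F=30, d_R_F=25, d_F_sink=2, join_out=join_out))
print("sequential:", sequential_join_cost(R, S, n_r=100, d_R_S=15, d_R_sink=25, join_out=join_out))
print("centroid  :", centroid_join_cost(R, S, n_f=50, d_S_F=10, d_R_F=10))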

[Fig. 2. Synopsis Join Strategy: regions R and S connected by the paths P_R and P_S to the synopsis join region L, the final join region F, and the query sink]

[Fig. 3. An example of synopsis: (a) the original table R of vehicle readings (Timestamp, Vehicle-Type, Speed (km/h)); (b) the synopsis S(R), a histogram of tuple counts per join attribute value Vehicle-Type (car: 3, bus: 1, lorry: 2)]

4 Synopsis Join Strategy

The synopsis join strategy prunes non-candidate tuples and joins only the candidate tuples. The key to the pruning process is to keep its cost overhead as low as possible. The synopsis join strategy comprises three phases: synopsis join, notification transmission, and final join.

4.1 Synopsis Join

The synopsis join phase performs an inexpensive join over synopses, aiming at reducing the number of R and S tuples that must be transmitted for the final join. It comprises two steps: synopsis generation and synopsis join.

Synopsis Generation. A synopsis is a digest of a relation that can represent the relation in operations such as aggregation or join. We denote by S(R) the synopsis of a table R. A synopsis can take any form, such as a histogram or wavelet, and is generally smaller than the corresponding table. In this paper, we adopt simple histograms as synopses, where a synopsis consists of the join attribute values of a table and their frequencies. For example, consider the sensor table R shown in Figure 3(a) and let the join attribute be Vehicle-Type. The corresponding synopsis S(R) has two attributes: the join attribute value, whose domain is the set of possible values of Vehicle-Type, and the number of tuples for each Vehicle-Type, as shown in Figure 3(b).

In synopsis generation, each sensor generates a synopsis of its local table. Consider a relation R distributed among the N_R sensor nodes. Each node n_i ∈ N_R stores a local table R_i that is part of R. n_i generates a local synopsis S(R_i) by extracting the join column A_J of R_i and computing the frequencies of the distinct values in A_J. Assuming uniform data distribution, we can derive |S(R_i)| as:

|S(R_i)| = |A_J| \left( 1 - \left( 1 - \frac{1}{|N_R|} \right)^{|R|/|A_J|} \right).   (4)
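As a rough illustration (assuming a local table stored as a list of records, and uniform data distribution for the size estimate), a local histogram synopsis and the estimate of Equation (4) could be computed as follows. This is our own sketch, not the authors' implementation.

```python
# Sketch: build a histogram synopsis of a local table over the join attribute,
# and estimate its expected size with Equation (4).
from collections import Counter

def local_synopsis(local_table, join_attr):
    """S(R_i): pairs of (join-attribute value, frequency) for the local table."""
    return Counter(row[join_attr] for row in local_table)

def expected_synopsis_size(R_size, domain_size, num_nodes):
    """Equation (4): expected number of distinct join values seen at one of
    num_nodes nodes, assuming tuples are uniformly distributed."""
    tuples_per_value = R_size / domain_size
    return domain_size * (1.0 - (1.0 - 1.0 / num_nodes) ** tuples_per_value)

if __name__ == "__main__":
    Ri = [{"vehicle": "car"}, {"vehicle": "car"}, {"vehicle": "bus"}, {"vehicle": "lorry"}]
    print(local_synopsis(Ri, "vehicle"))           # Counter({'car': 2, 'bus': 1, 'lorry': 1})
    print(expected_synopsis_size(R_size=2000, domain_size=3, num_nodes=870))
```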


Synopsis Join. In this step, a set of synopsis join nodes N_L in the synopsis join region L is selected to join the synopses of R and S and thereby determine the candidate tuples in R and S (see Figure 2). Once the N_L nodes are determined, the local synopses are routed to N_L for the synopsis join. For BEJ queries, each synopsis join node n_l ∈ N_L is assigned a range v of join attribute values using a geographic hash function [10], so that only synopses with join attribute values in v are transmitted to n_l. For a node n_i ∈ N_R, the local synopsis S(R_i) is divided into |N_L| partitions; a partition S_l^v(R_i), containing a synopsis of the tuples with join attribute values in v, is sent to the node n_l maintaining the range v.

Consider the example shown in Figure 3. Suppose there are two synopsis join nodes, n_l1 and n_l2; n_l1 is dedicated to the join attribute value car, while n_l2 handles bus and lorry. When a sensor n_i1 generates a local synopsis such as the one in Figure 3(b), it divides the synopsis into two partitions: one partition S_1(R_{ni1}) contains the tuples t1 and t3, whose join attribute value is car, and the other partition S_2(R_{ni1}) contains the tuples t2 and t4, whose join attribute values are bus and lorry. S_1(R_{ni1}) and S_2(R_{ni1}) are then sent to n_l1 and n_l2, respectively, for the synopsis join.

The synopsis join nodes perform the synopsis join as synopses from the N_R and N_S nodes arrive. We denote by S_l(R_i) a synopsis from a node n_i ∈ N_R received by a synopsis join node n_l. The synopsis join operation performed at n_l is defined as follows:

\left( \bigoplus_{n_i \in N_R} S_l(R_i) \right) \bowtie \left( \bigoplus_{n_j \in N_S} S_l(S_j) \right).   (5)

The operator ⊕ is a merge function that takes multiple synopses as input and produces a new synopsis. In particular, for our histogram synopses, ⊕ accumulates the frequency values of input tuples that share the same join attribute value; the output of ⊕ is therefore the accumulation of the input histograms.

Synopsis Join Node Selection. The number of synopsis join nodes is determined by the sizes of the local synopses the N_L nodes receive. Specifically, if a node's memory space is m_s (the number of synopsis tuples that fit into a node), the number of synopsis join nodes is determined as follows:

|N_L| = \frac{1}{m_s} \sum_{n_i \in N_R} |S(R_i)|.   (6)
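A minimal sketch of the merge operator ⊕ and the synopsis join performed at a node n_l, assuming histogram synopses represented as value→frequency maps (our own simplification, not the authors' code):

```python
# Sketch of the synopsis-join step at a synopsis join node n_l: the merge
# operator accumulates frequencies per join value (the ⊕ of Equation 5), and
# the join of the merged histograms keeps only values present on both sides.
from collections import Counter

def merge(synopses):
    """⊕: accumulate the per-value frequencies of several histogram synopses."""
    merged = Counter()
    for s in synopses:
        merged.update(s)
    return merged

def synopsis_join(r_synopses, s_synopses):
    """Join the merged R-side and S-side histograms; a value is a candidate
    join value only if its frequency is non-zero on both sides."""
    r_hist, s_hist = merge(r_synopses), merge(s_synopses)
    return {v: (r_hist[v], s_hist[v]) for v in r_hist if v in s_hist}

if __name__ == "__main__":
    r_side = [Counter({"car": 2, "bus": 1}), Counter({"car": 1, "lorry": 2})]
    s_side = [Counter({"car": 3}), Counter({"lorry": 1, "van": 2})]
    print(synopsis_join(r_side, s_side))   # {'car': (3, 3), 'lorry': (2, 1)}
```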

The locations of the N_L nodes are selected so that the communication cost for routing the local synopses is minimized. The communication cost of sending local synopses from the N_R and N_S nodes to the N_L nodes can be expressed as:

\sum_{n_l \in N_L} \sum_{n_i \in N_R} |S_l(R_i)| \cdot dist(n_l, n_i) + \sum_{n_l \in N_L} \sum_{n_j \in N_S} |S_l(S_j)| \cdot dist(n_l, n_j).   (7)

Assuming the synopsis join region L is small, we can simplify the above equation to:

C_{synopsis-routing} = |P_R| \sum_{n_i \in N_R} |S(R_i)| + |P_S| \sum_{n_j \in N_S} |S(S_j)|,   (8)


where |P_R| (or |P_S|) is dist(R, L) (or dist(S, L)); P_R (or P_S) denotes the path connecting the centers of R (or S) and L. Given the above equation, the optimal set of synopsis join nodes that minimizes C_{synopsis-routing} is located on the line connecting R and S, so that |P_R| + |P_S| = dist(R, S). Assuming \sum_{n_i \in N_R} |S(R_i)| > \sum_{n_j \in N_S} |S(S_j)|, C_{synopsis-routing} is minimized when |P_R| is zero and |P_S| is dist(R, S). Hence,

min(C_{synopsis-routing}) = dist(R, S) \sum_{n_j \in N_S} |S(S_j)|.   (9)

Therefore, the optimal set of synopsis join nodes N_L is chosen from the nodes in N_R that are nearest to N_S, assuming the size of the region R is small compared to the distance between R and S. Optimal selection of N_L for arbitrary R and S regions is part of our future work.

4.2 Notification Transmission

Each sensor node in N_R and N_S needs to be notified of which of its tuples are candidate tuples. To achieve this, a synopsis join node n_l stores the ID of the sensor each local synopsis originates from. For each join attribute value a, it identifies the two sets of sensors N_R^a and N_S^a storing tuples with join attribute value a, and selects a final join node n_f to join these tuples, such that the communication cost of sending the data tuples with join attribute value a from N_R^a and N_S^a to n_f, and of sending the results from n_f to the sink, is minimized. Therefore, n_f is the node that minimizes the cost function in Formula 10:

\sum_{n_i \in N_R^a} |R_i^a| \cdot dist(n_i, n_f) + \sum_{n_j \in N_S^a} |S_j^a| \cdot dist(n_j, n_f) + \left( \sum_{n_i \in N_R^a} |R_i^a| \right) \cdot \left( \sum_{n_j \in N_S^a} |S_j^a| \right) \cdot dist(n_f, sink),   (10)

where |R_i^a| and |S_j^a| denote the number of R tuples in n_i and S tuples in n_j with join attribute value a, respectively.

In order to simplify the problem, the weighted centers of the sensors in N_R^a and in N_S^a are derived. The weighted center c of a set of sensors N storing a table T is defined in Equation 11, where T_i refers to the table stored in node n_i and loc(n) refers to the location of a node n:

loc(c) = \frac{1}{\sum_{n_i \in N} |T_i|} \sum_{n_i \in N} |T_i| \cdot loc(n_i).   (11)

With Formula 11, the weighted centers c_r and c_s of the sensors in N_R^a and N_S^a, respectively, can be computed. Since \sum_{n_i \in N_R^a} |R_i^a| = |R^a| and \sum_{n_j \in N_S^a} |S_j^a| = |S^a|, we can rewrite Formula 10 as:

|R^a| \cdot dist(c_r, n_f) + |S^a| \cdot dist(c_s, n_f) + |R^a| \cdot |S^a| \cdot dist(n_f, sink).   (12)


Formula 12 is minimized when n_f is the generalized Fermat point [11] of the triangle formed by c_r, c_s, and the sink. Note that there may not be a sensor located exactly at the derived generalized Fermat point g; GPSR is used to select the node nearest to g as the final join node n_f. The same operation is performed for all join attribute values handled by n_l.

When the synopsis join is completed, n_l obtains for each sensor node n_i a set of (a, n_f) pairs, meaning that the tuples stored in n_i with join attribute value a are to be sent to n_f for the final join. The set of pairs is sent to n_i in a notification message. A notification message can be broken up into multiple messages if it cannot fit into one network packet. The communication cost for notification transmission is similar to Equation 9:

C_{notification} = dist(R, S) \sum_{n_i \in N_R \cup N_S} |d_i|,   (13)

where |d_i| denotes the total size of the notification messages sensor n_i receives.
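The final-join-node selection can be sketched as follows, assuming node locations are known coordinates. The weighted centers follow Equation (11); the generalized Fermat point of (c_r, c_s, sink) with weights |R^a|, |S^a|, and |R^a|·|S^a| is approximated here with a Weiszfeld-style iteration — the paper only cites [11] for the Fermat point, so the iteration is our own illustrative choice, and all coordinates and sizes below are made up. GPSR would then route to the sensor nearest the computed point.

```python
# Sketch of final-join-node selection for one join value a: compute the
# weighted centers c_r and c_s (Equation 11), then approximate the weighted
# geometric median (generalized Fermat point) of (c_r, c_s, sink).
import math

def weighted_center(nodes):
    """Equation (11): nodes is a list of ((x, y), table_size) pairs."""
    total = sum(size for _, size in nodes)
    x = sum(size * loc[0] for loc, size in nodes) / total
    y = sum(size * loc[1] for loc, size in nodes) / total
    return (x, y)

def generalized_fermat_point(points, weights, iterations=200):
    """Weighted geometric median of the given points (Weiszfeld iteration)."""
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py) or 1e-9   # avoid division by zero
            num_x += w * px / d
            num_y += w * py / d
            denom += w / d
        x, y = num_x / denom, num_y / denom
    return (x, y)

if __name__ == "__main__":
    c_r = weighted_center([((10.0, 2.0), 40), ((12.0, 3.0), 60)])
    c_s = weighted_center([((90.0, 4.0), 30), ((88.0, 2.0), 20)])
    sink = (99.5, 99.5)
    Ra, Sa = 100, 50
    print(generalized_fermat_point([c_r, c_s, sink], [Ra, Sa, Ra * Sa]))
```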

4.3 Final Join

Upon receiving a notification message from a synopsis join node, each node in N_R or N_S sends the candidate tuples whose join attribute values are specified in the notification message to a final join node n_f. In the final join stage, a group of final join nodes N_F is selected to join the candidate tuples sent from R and S, as shown in Figure 2. The final join node n_f performs the join R^v ⋈ S^v and sends the join results to the query sink. If n_f does not have enough memory space, it requests its neighbors to help with the join operation.

5 Experiments

In this section, we evaluate the performance of the synopsis join strategy and compare it with the general join strategies, i.e., naive join, sequential join, and centroid join. Throughout the experiments, performance is measured by the total number of messages incurred by each join strategy. The control messages for synchronization and coordination among the sensors are negligible compared to the heavy data traffic caused by the large tables. More realistic simulation and experiments will be included in our future work.

We varied the following parameters: join selectivity, network density, node memory capacity, and synopsis size. Join selectivity δ is defined as |R ⋈ S| / (|R| · |S|). The join attribute values are uniformly distributed within the domain of the attribute. Network density affects the number of neighboring nodes within the communication range of a sensor node; we varied the communication radius of the sensors to achieve different network densities. The size of the synopsis is determined by the data width of the join attribute. If the synopsis size is small, the number of messages needed for routing the synopses to the synopsis join nodes is small; if it is large, we expect high communication overhead due to the transmission of the synopses.


Experiment Setup. We created a simulation environment with 10,000 sensor nodes uniformly placed in a 100 × 100 grid. Each grid cell contains one sensor node located at its center. The sink is located at the top-right corner of the area, with coordinates (0.5, 0.5). The regions R and S are located at the bottom-right and bottom-left corners of the network region, respectively, each covering 870 sensor nodes. Table R consists of 2000 tuples, while S consists of 1000 tuples. R and S tuples are uniformly distributed in R and S, respectively. We assume a dense network with GPSR as the routing protocol. The number of hops required to route a message from a source node to a destination node is approximated using the distance between the two sensors and the communication radius. This simplification enables analysis of the network traffic under ideal conditions with no message loss. In addition, the overhead of the GPSR perimeter mode is avoided under the dense-network assumption. Simulations and experiments under real conditions using GPSR are part of our future work. We set the message size to 40 bytes, which equals the size of a data tuple. A tuple in the join result is 80 bytes, since it is the concatenation of two data tuples.

Join Strategies. We evaluate and compare the performance of five join strategies: naive join, centroid join, sequential join, optimal join, and synopsis join. The optimal join provides a lower bound on the total communication cost of the join operation. It assumes that the query sink has complete knowledge of the distribution of R and S. Hence, unlike centroid join, only candidate tuples are transmitted for the final join at N_F. Similar to the final join phase of the synopsis join strategy, for each join attribute value a, an optimal node n_f is selected such that the total cost of routing R^a and S^a to n_f and routing the result R^a ⋈ S^a to the sink is minimized. The cost is expressed in Equation 14. Since for any join strategy the transmission of the candidate tuples and of the join results cannot be avoided, the optimal join provides a lower bound on the total number of messages. Note that this assumption is impractical in a real environment.

C_{optimal-join} = \sum_{n_f \in N_F} \big( |R^a| \cdot dist(R, n_f) + |S^a| \cdot dist(S, n_f) + |R^a \bowtie S^a| \cdot dist(n_f, sink) \big).   (14)

5.1 Performance Evaluation

Performance vs. Join Selectivity. Figure 4(a) shows the total communication cost for different join selectivities while keeping the memory capacity, communication radius, and synopsis size fixed at 250 × 40 bytes, 2 units, and 10 bytes, respectively. As shown in the figure, sequential join performs worse than all the others due to the high cost of broadcasting S to all nodes in N_R; we therefore exclude it from subsequent experiments. As expected, optimal join outperforms all other strategies for all selectivities. When the selectivity is lower than 0.001, synopsis join outperforms naive join and centroid join. This is because non-candidate tuples can be determined in the synopsis join stage, and only a small portion of the data are transmitted during the final join.

[Fig. 4. Experimental Results: total number of messages for (a) Impact of Selectivity, (b) Impact of Network Density (communication radius), (c) Impact of Memory Capacity (node load, tuples/node), and (d) Impact of Synopsis Size, comparing naive-join, synopsis-join, centroid-join, optimal-join, and (in (a)) sequential-join]

On the other hand, when selectivity is high, almost all data tuples are involved in the result. With large join result sizes, the final join nodes are centered around the sink, which explains why naive, centroid, and optimal join have the same communication cost. Moreover, synopsis join incurs unnecessary communication in sending the synopses, making it less desirable for high-selectivity joins. Although there is an overhead in using synopsis join when selectivity is high, it accounts for a small portion of the total communication cost: the overhead at selectivity 0.1 is only 7%. Only when the selectivity is 0.005 or 0.01 does the synopsis overhead account for a significant portion (20%–30%) of the total cost. Many queries have small selectivities, for which synopsis join is more suitable. Consider our BEJ query example in Section 1: the BEJ query joining on the Vehicle-ID attribute has a maximum join selectivity of 0.0005, which favors synopsis join.

Impact of Network Density. Figure 4(b) shows the scalability of the join strategies with varied network density. In this experiment, sensors have a memory capacity of 250 × 40 bytes; the join selectivity and synopsis size are 0.0001 and 10 bytes, respectively. As the network becomes denser, the total communication costs of all strategies decrease. This is expected because, with a larger communication range, fewer hops are needed to send a message across the network.

Impact of Memory Capacity. Figure 4(c) shows the total communication cost with different memory capacities. In this experiment, the communication


radius is 2 units, the synopsis size is 10 bytes, and the selectivity is 0.0001. The communication costs of all strategies do not change much as the memory capacity increases. A change in the memory capacity only affects the number of join nodes (and, for synopsis join, the number of synopsis join nodes). When the memory capacity is larger, fewer join nodes are selected (from 8 join nodes down to 1 in our experiment setup), and fewer messages are required for sending the result tuples to the sink; however, there is no reduction in the cost of sending data from R and S to the join nodes. Therefore, the total communication cost is not reduced much.

Impact of Synopsis Size. Figure 4(d) shows the total communication cost with varied synopsis sizes and join selectivities. The memory capacity is 250 × 40 bytes and the communication radius is 2 units. As shown in Figure 4(d), with this experiment setup, the smaller the synopsis size, the better the performance of synopsis join: small synopses result in lower communication overhead during the synopsis join stage. Synopsis join is therefore most beneficial when the join attribute width is small compared to the data tuple size. We also observe that synopsis join performs slightly worse than centroid join when the synopsis size is greater than 30 bytes, indicating that the overhead of sending the synopses exceeds the cost savings in data tuple transmission.

6 Related Work

The popular aggregation-tree-based techniques for solving in-network aggregate queries [5, 6, 12] typically use an aggregation tree to progressively reduce data by merging partial results from child nodes so as to generate new results. The same data reduction technique cannot be directly applied to in-network join queries.

Several solutions have been proposed to handle joins in sensor networks. TinyDB [13] supports only simple joins in a local node, or between a node and the global data stream; join operations across arbitrary pairs of sensors are not supported. Chowdhary et al. [9] proposed a path-join algorithm to select an optimal set of join nodes to minimize the transmission cost involved in the join. Our technique differs from path-join by pre-filtering non-candidate tuples using the synopsis join. Ahmad et al. [14] proposed a join algorithm that utilizes the data and space locality in a network; their focus is on optimizing the output delay instead of the communication cost. Recently, Abadi et al. [15] designed techniques to perform event detection using distributed joins. The technique joins sensor data with external static tables and does not address the problem of joining in-network sensor readings. Also related is the work of Bonfils et al. [16] addressing the problem of optimal operator placement in a sensor network; the join is limited to a single node, which is prohibitive for large data tables.

7 Conclusions

In this paper, we present a synopsis join strategy for the efficient processing of BEJ queries in sensor networks. Unlike the general strategies, the synopsis join strategy executes a synopsis join step before performing the final join. The synopsis join step joins synopses generated by the sensors to filter out non-candidate data tuples and avoid unnecessary data transmission. As part of the synopsis join strategy, we have developed methods for determining the optimal set of synopsis join nodes and final join nodes, and we have performed a cost analysis of synopsis join. Our preliminary experiments show that synopsis join performs well for joins with low selectivity and does not incur much overhead for high join selectivity.

References
1. Mainwaring, A., Culler, D., Polastre, J., Szewczyk, R., Anderson, J.: Wireless sensor networks for habitat monitoring. In: Proceedings of WSNA'02. (2002)
2. Estrin, D., Govindan, R., Heidemann, J.S., Kumar, S.: Next century challenges: Scalable coordination in sensor networks. In: Proceedings of MobiCom. (1999)
3. Estrin, D., Govindan, R., Heidemann, J.S., eds.: Special Issue on Embedding the Internet, Communications of the ACM. Volume 43. (2000)
4. Bonnet, P., Gehrke, J.E., Seshadri, P.: Towards sensor database systems. In: Proceedings of MDM, Hong Kong (2001)
5. Madden, S., Franklin, M.J., Hellerstein, J.M., Hong, W.: TAG: A Tiny AGgregation service for ad-hoc sensor networks. In: Proceedings of OSDI'02. (2002)
6. Yao, Y., Gehrke, J.E.: The cougar approach to in-network query processing in sensor networks. SIGMOD Record 31(3) (2002) 9–18
7. Karp, B., Kung, H.T.: GPSR: Greedy perimeter stateless routing for wireless networks. In: Proceedings of MobiCom'00, Boston, USA (2000)
8. Lu, H., Carey, M.J.: Some experimental results on distributed join algorithms in a local network. In: Proceedings of VLDB, Stockholm, Sweden (1985)
9. Chowdhary, V., Gupta, H.: Communication-efficient implementation of join in sensor networks. In: Proceedings of DASFAA, Beijing, China (2005)
10. Ratnasamy, S., Karp, B., Li, Y., Yu, F., Estrin, D., Govindan, R., Shenker, S.: GHT: A geographic hash table for data-centric storage. In: Proceedings of WSNA, Atlanta, USA (2002) 56–67
11. Greenberg, I., Robertello, R.A.: The three factory problem. Mathematics Magazine 38(2) (1965) 67–72
12. Nath, S., Gibbons, P.B., Seshan, S., Anderson, Z.R.: Synopsis diffusion for robust aggregation in sensor networks. In: Proceedings of SenSys '04, ACM Press (2004)
13. Madden, S.: The Design and Evaluation of a Query Processing Architecture for Sensor Networks. PhD thesis, UC Berkeley (2003)
14. Ahmad, Y., Cetintemel, U., Jannotti, J., Zgolinski, A.: Locality aware networked join evaluation. In: Proceedings of NetDB'05. (2005)
15. Abadi, D., Madden, S., Lindner, W.: REED: Robust, efficient filtering and event detection in sensor networks. In: Proceedings of VLDB. (2005)
16. Bonfils, B.J., Bonnet, P.: Adaptive and decentralized operator placement for in-network query processing. In: Proceedings of IPSN. (2003)

Transform BPEL Workflow into Hierarchical CP-Nets to Make Tool Support for Verification

Yanping Yang, Qingping Tan, Yong Xiao, Feng Liu, and Jinshan Yu
School of Computer Science, National University of Defense Technology, Changsha 410073, P.R. China
[email protected]

Abstract. The availability of a wide variety of Web services over the Internet offers opportunities for providing new value-added services by composing existing ones. Service composition poses a number of challenges. A composite service can be very complex in structure, containing many temporal and data-flow dependencies between the component services. It is therefore highly desirable to be able to validate that a given composite service is well formed: proving that it will not deadlock or livelock and that it respects the sequencing constraints of the constituent services. In this paper, we propose an approach to composition analysis and verification based on Colored Petri nets (CP-nets), an extension of Petri nets with a sound mathematical semantics and a number of existing analysis tools. We provide translation rules from a Web composition language into CP-nets and a technique to effectively analyze and verify the resulting net with respect to several behavioral properties. Our translation technique is essentially independent of the language in which the composition is described. As an example, to show the effectiveness of our technique, we pick BPEL and translate BPEL specifications into CP-nets in a constructive way. These nets are analyzed and verified as prototypes of the specification.

1 Introduction

Web services represent autonomous services with clear service definitions. Interaction with web services is possible through their service definitions, which can be made available in the Web Services Description Language (WSDL) [1]. Individual service definitions may represent limited business functionality. However, it is possible to compose functionality offered by different individual services, likely from different service providers, into a composite service represented as a business process workflow. Accordingly, a current trend is to express the logic of a composite web service using a business process modeling language tailored for web services. A landscape of such languages, including the Business Process Execution Language for Web Services (WSBPEL, or BPEL) [2], the Business Process Modeling Language (BPML) [3], and the Web Service Choreography Interface (WSCI) [4], has emerged and is continuously being enriched with new proposals from different vendors and coalitions. Practical experience indicates that the definition of real-world Web services compositions is a complex and error-prone process. However, all these proposals still remain at the


descriptive level, without providing any kind of mechanism or tool support for verifying the composition specified in the proposed notations. Therefore, there is growing interest in verification techniques that enable designers to test and repair design errors even before the service actually runs, or that allow designers to detect erroneous properties (such as deadlock and livelock) and formally verify whether the service process design has certain desired properties (such as consistency with the conversation protocols of partner services).

In this paper, we are interested in the extent to which Colored Petri Net (CP-net) [5] analysis and verification techniques can be used as a basis for raising the reliability of Web services composition. As an example, to show the effectiveness of our technique, we pick BPEL and translate specifications written in it into CP-nets in a constructive way. The nets are analyzed and verified as prototypes of the specification by existing specialized CP-net tools such as Design/CPN [6] and CPN Tools [7], two outstanding tools with a variety of analysis techniques and computing facilities for CP-nets. In this way, we make tool support available for analyzing and verifying BPEL compositions.

2 Related Works

In the literature, a number of approaches to verifying programs have been proposed. They can be divided into two basic categories: one can translate a program into the input language of an existing verification tool, or one can develop a new tool that can handle the program directly. For a discussion of the advantages and disadvantages of the two approaches, the reader can refer to [20]. Next, we discuss the verification tools for Web Services composition that are most closely related to ours.

Most existing approaches to verifying business processes are based on model checking techniques [8-12]. In [8], Nakajima describes how to use the SPIN model checker to verify web service orchestration. In order to do the verification using SPIN, business processes are first translated into Promela, the specification language provided by SPIN. The language used to compose Web Services is the Web Services Flow Language (WSFL) [21], which is one of BPEL's predecessors. In [9], Karamanolis and his group translate business processes into FSP processes and use the LTSA toolkit [22] for model checking. The LTSA toolkit allows the user to specify properties in terms of deterministic FSP processes. Similarly, Foster and his group [10] describe a BPEL plug-in for the LTSA toolkit; they translate BPEL programs into FSP processes and subsequently use the LTSA toolkit to verify them. In [11], Koehler, Kumaran and Tirenni model business processes as nondeterministic automata with state variables and transition guards. These automata are subsequently translated into the input language of the model checker NuSMV [23]. Koehler et al. show how NuSMV can be exploited to detect termination of business processes. In [12], Koshkina shows how to exploit an existing verification tool, CWB [24], which supports techniques such as model checking, preorder checking, and equivalence checking, to model and verify Web Services composition. Similarly, in [13], Schroeder presents a translation of business processes into CCS; the existing verification tool CWB can then be used for verification.


Using Petri nets to model and verify business processes is another choice. For work on modeling business processes by means of Petri nets, we refer the reader to van der Aalst [14], Martens [15], Narayanan [16] and Stahl [17]. In [14], workflow nets, a class of Petri nets, were introduced for the representation and verification of workflow processes. In [15], Axel Martens translates BPEL into a Petri net semantics. Due to the mapping into Petri nets, several analysis methods become applicable to BPEL process models: verification of the usability of one Web service, verification of the compatibility of two Web services, automatic generation of an abstract process model for a given Web service, and verification of simulation and consistency. All presented algorithms are implemented within the prototype Wombat4ws [25]. In [16], Narayanan and his group take the DAML-S ontology for describing the capabilities of Web services and define the semantics for a relevant subset of DAML-S in terms of a first-order logical language. With the semantics in hand, they encode service descriptions in the Petri net formalism and provide decision procedures for Web service simulation, verification, and composition. In [17], Christian Stahl and his group translate a BPEL business process into a pattern-based Petri net semantics; they then use the tool LoLA [19] for validating the semantics as well as for proving relevant properties of the particular process. In [18], Adam and his group develop a Petri net-based approach that uses several structural properties for identifying inconsistent dependency specifications in a workflow, testing for its safe termination, and checking the feasibility of its execution for a given starting time when temporal constraints are present. However, the approach is restricted to acyclic workflows.

We believe that using the colored tokens of CP-nets to model the different message and event types of a business process is more natural. Our verification of descriptions written in BPEL is therefore based on CP-nets. Table 1 shows a comparison of the existing approaches to Web services composition workflow verification.

Table 1. Comparison of verification approaches

Approach      Composition Spec.           Formal Model                 Formal Tools
Koshkina      BPEL                        Labelled Transition System   CWB
Foster        BPEL                        FSP-processes                LTSA toolkit
Karamanolis   Abstract business process   FSP-processes                LTSA toolkit
Nakajima      WSFL                        Promela                      SPIN
Koehler       Abstract business process   Nondeterministic Automata    NuSMV
Stahl         BPEL                        Petri net                    LoLA
Martens       BPEL                        Petri net                    Wombat4ws
Narayanan     DAML-S                      Petri net                    KarmaSIM
Our work      BPEL                        CP-nets                      CPN tools


3 Analysis and Verification Approach

Fig. 1 illustrates our verification approach. Web services composition processes are translated into CP-nets, the input of Design/CPN or CPN Tools. The formalization mainly concerns the translation of the composition specification into CP-net models. This is particularly important in discussions with Web services modelers unfamiliar with CP-nets.

Fig. 1. Verification and analysis approach

CP-nets were formulated by Jensen [5] as a formally founded, graphically oriented modeling language. CP-nets are useful for specifying, designing, and analyzing concurrent systems. In contrast to ordinary Petri nets, CP-nets provide a very compact way of modeling complex systems, which makes CP-nets a powerful language for modeling and analyzing industrial-sized systems. This is achieved by combining the strengths of Petri nets with the expressive power of high-level programming languages: Petri nets provide the constructs for specifying synchronization of concurrent processes, and the programming language provides the constructs for specifying and manipulating data values. Practical use of CP-nets has been facilitated by tools that support the construction and analysis of systems by means of CP-nets. In this paper, we use CPN Tools to illustrate our work.

The properties of CP-nets to be checked include boundedness, deadlock-freedom, liveness, fairness, home properties, and application-specific properties. The application-specific properties are expressed as reachability properties of CP-nets. All of these properties have a specific meaning in verifying Web services composition (cf. Table 2).

CP-net models can be structured hierarchically, which is particularly important when dealing with CP-net models of large systems. The basic idea underlying hierarchical CP-nets is to allow the modeler to construct a large model from a number of smaller CP-nets called pages. These pages are then related to each other in a well-defined way. In a hierarchical CP-net, it is possible to relate a so-called substitution transition (and its surrounding places) to a separate CP-net called a subpage. A subpage provides a more precise and detailed description of the activity represented by the transition. Each subpage has a number of port places, which constitute the interface through which the subpage communicates with its surroundings. To specify the relationship between a substitution transition and its subpage, we must describe how the port places of the subpage are related to the so-called socket places of the substitution transition. This is achieved by providing a port assignment. When a port place is assigned to a socket place, the two places become identical.


Table 2. Behavior Properties of CP-nets

– Reachability. Original meaning: the possibility of reaching a given state. In verification: whether it is possible for a process to achieve the desired result.
– Boundedness. Original meaning: the maximal and minimal numbers of tokens that may be located on the individual places in the markings. In verification: if a place is a CP, the number of tokens it contains is either 0 or 1, otherwise this indicates errors; if a place is an MP, boundedness can be used to check whether the buffer overflows or not.
– Dead transitions. Original meaning: transitions that will never be enabled. In verification: there are no activities in the process that cannot be realized; if initially dead transitions exist, then the composition process was badly designed.
– Dead markings. Original meaning: markings having no enabled binding element. In verification: the final state of a process instance is one of the dead markings; if the number of dead markings reported by the state space analysis tool is more than expected, then there must be errors in the design.
– Liveness. Original meaning: a set of binding elements remaining active. In verification: it is always possible to return to an activity if we wish; for instance, this might allow us to rectify previous mistakes.
– Home. Original meaning: markings to which the CP-net can always return. In verification: it is always possible to return to a previous state; for instance, to compare the results of applying different strategies to solve the same problem.
– Fairness. Original meaning: how often the individual transitions occur. In verification: fairness properties can be used to show the number of executions in each process; we can find dead activities that will never be executed.
– Conservation. Original meaning: tokens are never destroyed. In verification: certain tokens are never destroyed; hence, resources are maintained in the system.

4 Transformation BPEL into CP-Nets Constructs

The aim of this section is to provide a transformation from BPEL to hierarchical CP-nets. The overall transformation idea can be summarized as follows:

– The whole process is represented by a hierarchical CP-net. The interaction among partners is modeled in the supernet, and the internal flow of each partner is represented by a subnet. The supernet interacts with the subnets through the corresponding substitution transitions and their socket–port pairs.
– Messages are represented by tokens. Different message types can be represented by product types over the types of the message's component parts.
– A BPEL activity is usually mapped to a CP-net transition. We do so for several reasons. First, mapping activities onto places poses the following problem: if a place represents a subactivity state, then when the actor returns, the subactivity will start again, and it would be impossible to represent the leaving point from which a subactivity continues. Second, the hierarchical modeling technique works by means of substitution transitions, so if a transition represents a subactivity, there always remains the possibility of decomposing it into various actions (other transitions) and resting points (places) that enable interruptions and returns. Third, modeling subactivities by transitions allows us to model the data flow in the places of the subactivity flow more clearly.
– The control-flow relations between activities are captured by the CP-net token firing rules, the arc inscriptions, and the transition guard expressions.

Each element of a BPEL process is called an activity; this is the vital concept for verifying the composed business process. Next, we translate the activities of BPEL into hierarchical CP-net constructs.

4.1 Activity Transformation

BPEL activities may be primitive or complex. Atomic activities represent the basic unit of behavior of a Web service. The most important atomic activities are actions dealing with messages, corresponding to the execution of operations defined in static service definition languages such as WSDL. They can be associated with one of the following types of WSDL operations:
(1) One-way action: performed when receiving a message; no response is needed.
(2) Request-response action: performed when receiving a message and sending a response back to the sender.
(3) Notification action: performed when sending a message to another service.
(4) Solicit-response action: performed when sending a message to another service and waiting for a response.

Complex activities are recursively composed of other activities, and BPEL supports the definition of the following kinds of composition:
(1) Sequence: this construct contains one or more activities that are performed sequentially, in the order in which they are listed within the element.
(2) Switch: this construct supports conditional behavior. The activity consists of an ordered list of one or more conditional branches.
(3) While: this construct supports repeated performance of a specified iterative activity until the given Boolean while condition no longer holds true.
(4) Pick: this construct awaits the occurrence of one of a set of events and then performs the activity associated with the event that occurred.
(5) Flow: this construct allows one or more activities to be performed concurrently. A flow completes when all of the activities in the flow have completed.

The transformation of atomic and complex activities into CP-nets is illustrated by the examples in Table 3.

Transform BPEL Workflow into Hierarchical CP-Nets to Make Tool Support Table 3. Activity Transformation Examples

One-way

Request-response

Notification

Solicit-response

Sequence

Switch

While

Pick

While

281
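To make the mapping concrete, the following is a small illustrative sketch (not the authors' tool, and independent of CPN Tools) of the structural idea: each activity of a BPEL sequence becomes a transition, and control flow is encoded by places linking consecutive transitions. Only the sequence construct is shown, and all class, function, and activity names are our own assumptions.

```python
# Illustrative sketch of the transformation idea for the BPEL <sequence>
# construct: activities map to transitions, control flow to connecting places.

class CPNet:
    def __init__(self):
        self.places, self.transitions, self.arcs = [], [], []

    def add_place(self, name):
        self.places.append(name)
        return name

    def add_transition(self, name):
        self.transitions.append(name)
        return name

    def add_arc(self, src, dst):
        self.arcs.append((src, dst))

def translate_sequence(net, activities, start_place, end_place):
    """Map a BPEL <sequence> onto a chain: place -> transition -> place -> ..."""
    current = start_place
    for i, activity in enumerate(activities):
        t = net.add_transition(activity)
        net.add_arc(current, t)
        nxt = end_place if i == len(activities) - 1 else net.add_place(f"p_{activity}_done")
        net.add_arc(t, nxt)
        current = nxt
    return net

if __name__ == "__main__":
    net = CPNet()
    net.add_place("start")
    net.add_place("end")
    translate_sequence(net, ["receive_order", "invoke_shipping", "reply_invoice"],
                       "start", "end")
    print(net.transitions)
    print(net.arcs)
```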


5 A Worked Example

Fig. 2 is a schematic illustration of the example from the specification document [2]. The example scenario is a process for handling a purchase order within a virtual enterprise comprising a Customer, an InvoiceProvider, a ShippingProvider and a SchedulingProvider. On receiving the purchase order from a customer, the process initiates three tasks concurrently: calculating the final price for the order, selecting a shipper, and scheduling the production and shipment for the order. While some of the processing can proceed concurrently, there are control and data dependencies between the three tasks. In particular, the shipping price is required to finalize the price calculation, and the shipping date is required for the complete fulfillment schedule. When the three tasks are completed, invoice processing can proceed and the invoice is sent to the customer. The corresponding hierarchical CP-net formalization of the Purchase Order process is illustrated in Fig. 3.

Fig. 2. Purchase Order process

Fig. 3. Purchase Order SuperPage

6 Conclusions

In this paper, we introduce an approach to verify and analyze Web services composition. We pick BPEL, which is the de facto industry standard for Web services composition specification, and present the transformation algorithms from BPEL to CP-net constructs in a constructive way. The generated CP-net models can be analyzed, verified, and simulated as prototypes of the BPEL processes by many existing and specialized analysis and verification tools. As future work, back-annotation techniques from CP-nets are being considered.

References
[1] http://www.w3.org/TR/wsdl
[2] http://www128.ibm.com/developerworks/library/ws-bpel/
[3] http://www.bpmi.org/bpml-spec.esp
[4] http://ifr.sap.com/wsci/specification/wsci-spec-10.htm
[5] K. Jensen, "Colored Petri Nets: Basic Concepts, Analysis Methods and Practical Use", Volumes 1, 2 and 3, second edition, 1996.
[6] http://www.daimi.au.dk/designCPN/
[7] http://www.daimi.au.dk/CPNtools/
[8] S. Nakajima, "Verification of Web service flows with model-checking techniques," presented at the First International Symposium on Cyber Worlds, 2002.
[9] C. Karamanolis, D. Giannakopoulou, J. Magee, and S.M. Wheater. "Model checking of workflow schemas". In Proceedings of the 4th International Enterprise Distributed Object Computing Conference, pages 170–179, Makuhari, Japan, September 2000. IEEE.
[10] H. Foster, S. Uchitel, J. Magee, and J. Kramer, "Model-based verification of web service composition," presented at the 18th IEEE International Conference on Automated Software Engineering, 2003.
[11] J. Koehler, G. Tirenni, and S. Kumaran. "From business process model to consistent implementation: a case study for formal verification methods", the 6th International Enterprise Distributed Object Computing Conference (EDOC02), Lausanne, September 2002. IEEE CS, pages 96–106.
[12] M. Koshkina. "Verification of business processes for web services". Master's thesis, York University, 2003.


[13] M. Schroeder. Verification of business processes for a correspondence handling center using CCS. In A.I. Vermesan and F. Coenen, editors, Proceedings of European Symposium on Validation and Verification of Knowledge Based Systems and Components, pages 1–15, Oslo, June 1999. Kluwer.
[14] W.M.P. van der Aalst. "Verification of workflow nets". In P. Azema and G. Balbo, editors, Proceedings of the 18th International Conference on Applications and Theory in Petri Nets, volume 1248 of Lecture Notes in Computer Science, pages 407–426, Toulouse, June 1997. Springer-Verlag.
[15] A. Martens. "Distributed Business Processes -- Modeling and Verification by help of Web Services". PhD thesis, Humboldt-Universität zu Berlin, July 2003. Available at http://www.informatik.hu-berlin.de/top/download/documents/pdf/Mar03.pdf.
[16] S. Narayanan and S. McIlraith, "Analysis and simulation of Web services," Computer Networks, vol. 42, pp. 675–693, 2003.
[17] Christian Stahl. "Transformation von BPEL4WS in Petrinetze". Diplomarbeit, Humboldt-Universität zu Berlin, April 2004.
[18] Adam, N., Alturi, V. & Huang, W.-K. (1998), "Modeling and Analysing of Workflows Using Petri Nets", Journal of Intelligent Information Systems 10(2), 131–158.
[19] Karsten Schmidt. LoLA --- a low level analyser. In Nielsen, M. and Simpson, D., editors, International Conference on Application and Theory of Petri Nets, LNCS 1825, page 465. Springer-Verlag, 2000.
[20] W. Visser, K. Havelund, G. Brat, S. Spark, and F. Lerda. Model checking programs. Automated Software Engineering, 10(2):203–232, April 2003.
[21] http://www.ibm.com/software/solutions/webservices/pdf/WSFL.pdf
[22] http://www.doc.ic.ac.uk/jnm/book/ltsa/LTSA.html
[23] http://nusmv.irst.itc.it/
[24] http://homepages.inf.ed.ac.uk/perdita/cwb
[25] http://www.informatik.hu-berlin.de/top/wombat/

Identifying Agitators as Important Blogger Based on Analyzing Blog Threads

Shinsuke Nakajima (1), Junichi Tatemura (2), Yoshinori Hara (3), Katsumi Tanaka (4,5), and Shunsuke Uemura (1)

(1) Graduate School of Information Science, Nara Institute of Science and Technology, 8916-5 Takayama-cho, Ikoma, Nara 630-0101, Japan  {shin, uemura}@is.naist.jp
(2) NEC Laboratories America, Inc., 10080 North Wolfe Road, Suite SW3-350, Cupertino, CA 95014, USA  [email protected]
(3) Internet Systems Research Laboratories, NEC Corporation, 8916-47 Takayama-cho, Ikoma, Nara, Japan  [email protected]
(4) Dept. of Social Informatics, Kyoto University, Yoshida Honmachi, Sakyo-ku, Kyoto 606-8501, Japan  [email protected]
(5) National Institute of Information and Communications (NICT), 3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0289, Japan

Abstract. A blog (weblog) lets people promptly publish content (such as comments) relating to other blogs through hyperlinks. This type of web content can be considered a conversation rather than a collection of archived documents. To capture 'hot' conversation topics from blogs and deliver them to users in a timely manner, we propose a method of discovering bloggers who take important roles in conversations. We characterize bloggers based on their roles in previous blog threads (a blog thread is a set of blog entries comprising a conversation). We define agitators as bloggers who have a great influence on the discussion among bloggers, and we consider that such bloggers are likely to be useful in identifying hot conversations. In this paper, we discuss models of blogs and blog thread data, methods of extracting blog threads, and the discovery of important bloggers.

1 Introduction

The broadband infrastructure and ubiquitous computing have created an environment in which people are continually online on the WWW. Given this environment, people are increasingly publishing their reactions (e.g., comments and opinions) to current events (e.g., news). Users may state their opinion of a current news article, followed by other users who react to their opinions by stating differing opinions. In this sense, the web can be seen as a place for conversation rather than for archived documents. Triggered by an event, a hot conversation may quickly propagate from one site to another through the web.


A weblog, or blog for short, is a tool or web site that enables people to publish content promptly. In the blog world, people are not only content consumers but also content suppliers. According to the "Glossary of Internet Terms [1]": "A blog is basically a journal that is available on the web. The activity of updating a blog is 'blogging' and someone who keeps a blog is a 'blogger.' Blogs are typically updated daily using software that allows people with little or no technical background to update and maintain the blog. Postings on a blog are almost always arranged in chronological order with the most recent additions featured most prominently."

A blog entry, a primitive entity of blog content, typically has links to web pages or other blog entries, creating a conversational web through multiple blog sites. Since conventional search engines treat the web as a snapshot of hyperlinked documents, they are not very effective for capturing conversational web content such as blogs. A new approach is required for the timely delivery of hot conversations spanning multiple blogs on the web.

To capture potentially hot conversations on the web, we propose a method for discovering bloggers who take important roles in these conversations. This information can then be used to acquire important hot conversations. A blog site is usually created by a single owner/blogger and consists of his or her blog entries, each of which usually has a permalink URL to enable direct access to the entry. Blog readers can discover bloggers' characteristics (e.g., their interests, role in the community, etc.) by browsing their past blog entries. If readers know the characteristics of a particular blog, they can expect similar characteristics to appear in future entries in that blog. Our goal is to develop a method of capturing hot conversations by automating readers' processes for characterizing and monitoring blogs.

In our method [2], an important blogger is defined on the basis of his or her role in a blog thread, i.e., a set of blog entries comprising a conversation on a specific topic. We think it is likely that bloggers take various roles in a thread, including acting as an agitator who stimulates the discussion and has a great influence on it. We believe such important bloggers are useful for identifying hot conversations.

We first describe related work, then describe blogs and a model of blog thread data, followed by the extraction of blog threads. This is followed by a discussion of how important bloggers are identified and the evaluation of our method. We end with a summary and an outline of our plans for future work.

2 Related Work

Recently, the number of research activities related to blog search [3], blog communities [4][5], and so on has rapidly increased. Several groups have conducted research on blogspaces.

Kumar et al. studied the burstiness of blogspace [6]. They examined 25,000 blog sites and 750,000 links to the sites. They focused on clusters of blogs


connected via hyperlinks, named blogspaces, and investigated the extraction of blog communities and the evolution of those communities. Gruhl et al. studied the diffusion of information through blogspace [7]. They examined 11,000 blog sites and 400,000 links in the sites, and tried to characterize macro topic-diffusion patterns in blogspaces and micro topic-diffusion patterns between blog entries. They also tried to model topic diffusion by means of criteria called Chatter and Spikes. Adar et al. studied the implicit structure and dynamics of blogspace [8]. They also examined both the macro and micro behavior of blogspace; in particular, they focused not only on the explicit link structure but also on the implicit routes of transmission for finding blogs that are sources of information. However, their purpose was not to acquire important web content.

There are numerous reports of studies on topic detection and tracking [9]. In this paper, we adopt a similar technique to discriminate agitators. Allan et al. studied first story detection (FSD) [10]. According to this study, when a new story arrives, its feature set is compared to those of all past stories; if it is sufficiently different, the story is marked as a first story, and otherwise it is not. Though these previous studies of FSD are relevant to us, we cannot adopt their methods directly, since blog threads include relations between entries based on replyLinks, which differ from a simple news stream.

3 Blogs and Blog Thread Data Model

A blog is a website that anybody can easily update and use to express his or her own opinions in a public space. To put it another way, blogs are a storehouse of information that reflects "public opinion". Although there is a lot of trivial information in blogspace, there is also a lot of important information. Before defining our model of blog thread data, let us give the definitions of blog sites and blog entries; examples are shown in Figure 1.

site = (siteURL, RSS, blogger+, siteName, entry+)
entry = (permaLink, blogger, time, title?, description, comment*)
comment = (blogger, permaLink, content, time)


Fig. 1. Example of blog site and blog entry


A blog site has a site URL, an RSS (Really Simple Syndication / RDF Site Summary) feed, a site name, and entries, and is managed by one or more bloggers. A blog entry has a permalink for access, a publication time, a title, and an entry description. A comment includes the content of the comment and the time when it was written. A blogger posts a blog entry identified by a permalink.

replyLink = (e_i, e_j), (e_i → e_j)
trackbackLink = (e_i, e_j), (e_i → e_j)
sourceLink = (e_i, w_i), (e_i → w_i)

where e_i, e_j ∈ E, E is the set of blog entries, and w_i ∈ W, W is the set of Web pages other than blog entries. replyLinks and sourceLinks are hyperlinks to other blog entries or web pages contained in the description of a blog entry. We do not include automatically added hyperlinks, such as links to next or previous entries, or other links unrelated to the content of the blog entry. This is because we want to remove link noise such as the fact that all blogspot.com pages point to blogger.com, etc. The method for removing link noise is described in Section 4.2. A trackbackLink is a special case of a replyLink: for trackbackLink = (e_i, e_j), there is not only a replyLink (e_i → e_j) but also a link (e_j → e_i) to indicate the existence of the replyLink.

An example of a blog thread is shown in Fig. 2. We define a blog thread as follows. A blog thread is composed of entries connected via replyLinks into a discussion among bloggers, with one exception: as Fig. 2 indicates, sets of entries that are not connected to each other via replyLinks are considered the same thread if they refer to the same website via a sourceLink. Comments attached to a blog entry are not used in extracting the blog thread, because we want to identify important bloggers by analyzing the blog thread itself. A blog thread is thus a directed connected graph, defined as follows:

thread := (V, L), V = W ∪ E, L = L_s ∪ L_r

where W is a set of websites and E is a set of blog entries.


Fig. 2. Example of blog thread

L_s ⊆ {(e, e′) | e ∈ E, e′ ∈ W},   L_r ⊆ {(e, e′) | e ∈ E, e′ ∈ E}

L_s corresponds to the set of sourceLinks, and L_r corresponds to the set of replyLinks. Ideally, the entries in a blog thread should share common topics. However, topics will sometimes change; in future research, we will pursue splitting blog threads to accommodate this.
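As an illustration only (the field names follow the definitions above, but the representation is our own simplification, not the authors' system), the entities could be modeled as follows:

```python
# A small data-model sketch of the entities defined above: entries, comments,
# and a thread as a directed graph of replyLinks (L_r) and sourceLinks (L_s).
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Comment:
    blogger: str
    permalink: str
    content: str
    time: str

@dataclass
class Entry:
    permalink: str
    blogger: str
    time: str
    title: str = ""
    description: str = ""
    comments: List[Comment] = field(default_factory=list)

@dataclass
class Thread:
    entries: Set[str] = field(default_factory=set)                     # permalinks of blog entries (E)
    websites: Set[str] = field(default_factory=set)                    # referenced non-blog URLs (W)
    reply_links: Set[Tuple[str, str]] = field(default_factory=set)     # L_r
    source_links: Set[Tuple[str, str]] = field(default_factory=set)    # L_s

if __name__ == "__main__":
    t = Thread()
    t.entries.update({"http://blogA/e1", "http://blogB/e2"})
    t.websites.add("http://news.example/article")
    t.reply_links.add(("http://blogB/e2", "http://blogA/e1"))
    t.source_links.add(("http://blogA/e1", "http://news.example/article"))
    print(t)
```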

4 Extraction of Blog Threads

4.1 Crawling Through Blog Entries

First, our system crawls through blog entries to extract blog threads. The system adds unregistered RSS feeds to the RSS list by crawling through public OPML files. It then crawls through the RSS feeds registered on the RSS list and registers the title, permalink, and entry date listed in each RSS as entry data that has not yet been fetched. RSS is an extension of the RDF (Resource Description Framework) language and an XML application that conforms to the W3C's RDF specification; these days, most blog sites syndicate their content to subscribers by means of RSS. OPML (Outline Processor Markup Language) is an XML format for outlines. Our system had registered about 1,000,000 RSS feeds (i.e., blog sites) and over 15,000,000 entries as of April 1, 2005.

4.2 Extraction of Blog Threads

Before extracting blog threads, we first need to extract the hyperlinks from the descriptions of blog entries to discover possible connections between an entry and other web pages (including blog entries). Therefore, we have to be able to recognize the scope of the description of an entry based on an analysis of the HTML source. However, since each blog site server has its own tag structure, we need to set up parsing rules for each target blog site server. We therefore limited our target sites to 25 well-known blog-hosting sites and some other key sites, and set up the appropriate rules for them.

To extract hyperlinks from blog entries, the system crawls through the permalinks of entries and obtains the entries. The descriptions of the entries are then extracted from the HTML text by analyzing the tag structure, and hyperlinks are extracted from each description and added to the list of links. In this way, the system obtains the link list. Recall that a blog thread is a set of entries connected to each other via replyLinks or referring to a common web page via a sourceLink (see Fig. 2). The procedure for extracting blog threads is given below.

(1) The system judges whether each hyperlink in the link list is a replyLink or a sourceLink by checking whether the destination URL of the hyperlink appears in the entry list.
(2) If it is a replyLink, the departure and destination URLs of the replyLink are checked to see whether or not they are registered in the existing thread data.


If they are, they are added to the existing thread data; if not, they become elements of a new thread. (3) If it is a sourceLink, the departure URL of the sourceLink is checked to see whether or not it is registered in the existing thread data. The destination URL of the sourceLink is then checked to see whether it matches the URL of a Web page referred to by an entry already registered in an existing thread. If such a thread exists, the link is added to that thread's data; if not, it becomes an element of a new thread. The extracted thread data represents sets of entries, each with a date and link data. Consequently, the system can analyze both the time-series behaviour of the entries in a thread and the link structure of the thread.
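The following Python sketch illustrates the essence of steps (1)-(3) above. It is our own simplification (hypothetical names, no noise removal), grouping links into threads keyed by the entries or source pages they connect; a full implementation would also merge two existing threads when a new link bridges them, which is omitted here for brevity.

```python
def extract_threads(link_list, entry_set):
    """link_list: iterable of (src_url, dst_url); entry_set: known entry permalinks."""
    threads = []  # each thread: {"entries": set(), "sources": set()}

    def find_thread(url):
        for th in threads:
            if url in th["entries"] or url in th["sources"]:
                return th
        return None

    for src, dst in link_list:
        if src not in entry_set:
            continue  # links must depart from a known blog entry
        is_reply = dst in entry_set                  # step (1): replyLink vs. sourceLink
        th = find_thread(src) or find_thread(dst)    # steps (2)/(3): look for existing thread
        if th is None:
            th = {"entries": set(), "sources": set()}
            threads.append(th)
        th["entries"].add(src)
        if is_reply:
            th["entries"].add(dst)    # step (2): connect entries via a replyLink
        else:
            th["sources"].add(dst)    # step (3): connect entries via a common source page
    return threads
```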

5 Discovering Agitator as Important Blogger

5.1 Agitator in a Blog Thread

An Agitator often stimulates the discussion in a blog thread so that it becomes more active. Thus, we may be able to predict whether a blog thread will grow by watching an Agitator's entries. If the system statistically judges that a thread frequently grows just after a particular blogger has published an entry, then that blogger is determined to be an Agitator. We focus on how to define and find agitators because we are interested in discovering important, popular blog conversations on specific topics; we believe such conversations can be discovered by watching the agitators on those topics.

5.2 Discriminants for Agitator

In this section, we introduce three aspects that characterize a blogger as an agitator. Given a blog thread, the following aspects discriminate an entry ex from the other entries in the blog thread; we can then characterize the bloggers who publish such entries by aggregating the values over multiple blog threads.
– Aspect 1: link-based discriminant. An entry by an agitator is characterized by the number of links it receives from other entries; that is, an agitator is a blogger who is popular with other bloggers on a topic. ex, an entry by an agitator, is identified based on the following discriminant: kx > θ1, where kx is the number of entries in threadi that have a replyLink to ex.
– Aspect 2: popularity-based discriminant. An entry by an agitator is characterized by a dramatic increase in the popularity of the thread, shown by the number of entries published just after the agitator's entry.

Fig. 3. Feature of agitator in time-series data for popularity of blog thread (popularity, i.e. the number of blog entries, plotted against time; the figure marks an agitator's entry and a tangential line at the point where the thread's growth accelerates)

As shown in Fig. 3, the time-series data for the popularity of entries in a blog thread reflect periods of stagnation and periods of increased activity. We therefore consider that an agitator's entry appears between the end of a stagnant period and the beginning of a period of increased activity. ex, an entry by an agitator, is identified using the following discriminant: (lx/mx) > θ2, where lx is the number of entries in threadi published in the t days after ex and mx is the number of entries in threadi published in the t days before ex. (lx/mx) therefore corresponds to an approximation of a second derivative value.
– Aspect 3: topic-based discriminant. Entries by an agitator often differ in content from the entries of threadi published before ex, and often resemble in content the entries of threadi published after ex; in other words, an agitator may have a big impact on the topic of the blog thread. ex, an entry by an agitator, is identified based on the following discriminant:

Similarity( (1/n) · Σ_{i=x+1..x+n} ei , ex ) − Similarity( ex , (1/n) · Σ_{i=x−1..x−n} ei ) > θ3,

where ex−n is the feature vector of the nth latest entry in threadi published before ex, and ex+n is the feature vector of the nth earliest entry in threadi published after ex. In this paper, the feature vector of each entry is computed from TF (term frequency) values, and the similarity between feature vectors is computed as the cosine correlation. We believe that such points of topic change can be used as a basis for determining agitators.
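To make the three discriminants concrete, here is a small Python sketch (our own illustration; the field names, the crude tokenizer, and the default parameters are hypothetical) that computes kx, lx/mx, and the cosine-based topic-change score for one entry of a thread.

```python
import math
from collections import Counter

def tf_vector(text):
    return Counter(text.lower().split())   # crude TF feature vector

def cosine(u, v):
    num = sum(u[w] * v[w] for w in u if w in v)
    den = math.sqrt(sum(c * c for c in u.values())) * math.sqrt(sum(c * c for c in v.values()))
    return num / den if den else 0.0

def mean_vector(vectors):
    total = Counter()
    for v in vectors:
        total.update(v)
    return Counter({w: c / len(vectors) for w, c in total.items()}) if vectors else Counter()

def discriminants(entries, reply_links, x, t_days=3, n=3):
    """entries: list of dicts sorted by date, each {"url", "date" (datetime.date), "text"};
    reply_links: set of (src_url, dst_url); x: index of the candidate entry ex."""
    ex = entries[x]
    # Aspect 1: kx = number of entries in the thread with a replyLink to ex
    kx = sum(1 for e in entries if (e["url"], ex["url"]) in reply_links)
    # Aspect 2: lx / mx, entries published within t days after / before ex
    lx = sum(1 for e in entries if 0 < (e["date"] - ex["date"]).days <= t_days)
    mx = sum(1 for e in entries if 0 < (ex["date"] - e["date"]).days <= t_days)
    growth = lx / mx if mx else float("inf")
    # Aspect 3: Similarity(mean of n entries after, ex) - Similarity(ex, mean of n before)
    after = [tf_vector(e["text"]) for e in entries[x + 1 : x + 1 + n]]
    before = [tf_vector(e["text"]) for e in entries[max(0, x - n) : x]]
    vx = tf_vector(ex["text"])
    topic_change = cosine(mean_vector(after), vx) - cosine(vx, mean_vector(before))
    return kx, growth, topic_change
```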

6 Evaluation of Method for Identifying Agitator

6.1 Observation of Discriminants for Identifying Agitator

We examine the discriminants for the agitator aspects (Section 5.2) against real blog data. Figures 4, 5 and 6 show data from a real thread (A).

Fig. 4. Example of link graph of a blog thread (A) (circles: blog entries numbered 1-18; horizontal axis: publication date from Sep. 20 to Oct. 15)

Fig. 5. Time-series data of popularity and second derivative value of thread (A) (solid line: number of blog entries in the thread; dashed line: approximate second derivative value lx/mx, see Aspect 2 in Section 5.2)

In Fig. 4, each circle corresponds to a blog entry and each arrow to a replyLink between entries. The numbers in the circles order the entries from oldest to newest. The horizontal axis indicates the publication date of each entry (all dates are in 2004); the vertical positions are arbitrary. There are some replyLinks from older entries to newer entries in Fig. 4: bloggers can generally edit their entries without affecting the publication date, so an older entry can acquire a link to a newer entry, and we treat such hyperlinks as replyLinks as well. As shown in Fig. 4, entry No. 5 of thread (A) seems important in the thread, as it is cited by nine other entries and is accompanied by an increase in thread activity.

Fig. 6. Time-series data of popularity and degree of topic change of thread (A) (number of blog entries in the thread and the degree of topic change, see Aspect 3 in Section 5.2)

Thus, the entry might indicate an agitator based on aspect 1. However, many in-links do not always indicate an agitator: people also use replyLinks when disagreeing with another blogger. Link analysis alone is therefore not sufficient to determine whether a blogger is an agitator.

Next, let us discuss aspect 2 with regard to the time-series data shown in Fig. 5. The solid line in Fig. 5 corresponds to the time-series popularity data of thread (A), and the black circle corresponds to entry No. 5. The dashed line corresponds to (lx/mx), the approximation of a second derivative value of the popularity, as explained in Section 5.2. In Fig. 5, the values of (lx/mx) are high just before the popularity increases drastically. It is important to detect when a thread becomes hot by objective values rather than by subjective means.

Next, we discuss aspect 3. Fig. 6 shows the time-series data of popularity and the degree of topic change of thread (A). The dashed line in Fig. 6 corresponds to the left-hand side of the discriminant for aspect 3, i.e. the degree of topic change; a high value indicates a likely topic change. According to Fig. 6, the degree of topic change becomes high when entry No. 5 is published. Hence, the blogger of entry No. 5 in thread (A) appears to be an agitator candidate, since the entry satisfies aspects 1, 2 and 3. It turns out that this blogger is a well-known blogger who writes about information technology topics; the blog site is "http://blog.japan.cnet.com/umeda/".

Although we have discussed how to determine agitator candidates, it is impossible to judge whether a blogger is an important blogger based on the analysis of a single thread. It is therefore necessary to analyse multiple threads in order to discover important bloggers, and we need to develop a system that discovers important bloggers through multiple-thread analysis. In addition, we have to consider how to remove noise from blog data, which contains many artifacts such as malformed HTML files, hyperlinks to non-existent URLs, and advertisement links unrelated to entry content.

6.2 Evaluation of Discriminants for Identifying Agitators

The process of identifying agitators based on the three discriminants of Section 5.2 should be applied to multiple threads in order to avoid misidentification. Our method must also achieve a high recall rate, since the system's goal is to identify hot conversations using the content of bloggers recognized as agitators. Consequently, we evaluate the recall of our system. We use six one-month periods of blog data from November 2004 to April 2005 and investigate whether sites determined to be agitator candidates in past months are judged to be agitator candidates again. The evaluation procedure is as follows.

1. Extract the set of blog threads in each period. (We use threads that have more than 10 entries for the evaluation.)
2. Calculate the agitator score of each entry in the extracted threads based on the three discriminants.
3. Rank blog sites (= bloggers) by their agitator scores in each period, and judge the top 10%, 20%, or 30% of blog sites to be agitator candidates.
4. Acknowledge as agitators the candidate sites that satisfy one of the following conditions:
   – the site is judged an agitator candidate twice in the past 3 months;
   – the site is judged an agitator candidate twice in the past 2 months.
5. Calculate the recall rate of the agitators by investigating whether the acknowledged agitators of past periods are judged to be agitator candidates again.

Table 1. Blog data for the evaluation

           blog entries   blog threads (over 10 entries)   bloggers (blog sites)
Nov. 2004  1,414,209      80                               168
Dec. 2004  1,332,242      108                              267
Jan. 2005  1,237,820      54                               111
Feb. 2005  1,936,088      150                              591
Mar. 2005  1,907,145      264                              335
Apr. 2005  1,876,620      200                              933

Table 2. Number of acknowledged agitators

          Aspect 1 (link)       Aspect 2 (popularity)   Aspect 3 (topic)
          3 months  2 months    3 months  2 months      3 months  2 months
Top 10%   22        12          10        4             0         0
Top 20%   73        37          21        9             13        4
Top 30%   94        51          45        20            39        19


Table 3. Recall rate of each discriminant

                      Aspect 1 (link)       Aspect 2 (popularity)   Aspect 3 (topic)
                      3 months  2 months    3 months  2 months      3 months  2 months
recall rate           28.7%     25.5%       17.8%     20.0%         10.3%     15.8%
number of agitators   94        51          45        20            39        19

Table 1 shows the number of blog entries, the number of target blog threads (those with more than 10 entries), and the number of blog sites appearing in the target threads. Table 2 shows the number of acknowledged agitators. "Top 10%" denotes sites ranked in the top 10% of each agitator score; such sites are considered agitator candidates. "3 months" denotes sites acknowledged as agitators because they were candidates at least twice in the past 3 months. As Table 2 shows, the system cannot obtain enough acknowledged agitators using only the top 10% or 20%; it should therefore use at least the top 30% of sites when picking agitator candidates.

Table 3 shows the recall rates for aspects 1, 2 and 3, with the system using the top 30% of sites as agitator candidates. As Table 3 indicates, none of the recall rates is particularly high. Nevertheless, we believe that the system can find the blog entries of important bloggers more easily than a system that does not consider acknowledged agitators; in other words, acknowledged agitators help in discovering hot conversations. There is still room to improve our discriminants. Aspects 1, 2 and 3 yield both common and distinct agitator candidates: some aspect-1 candidates appear to be blog sites resembling original news sources, some aspect-2 candidates appear to be opinion leaders of blog communities, and some aspect-3 candidates appear to be yet another kind of blog site. In short, the three aspects differ slightly from one another, so it may be possible to identify different types of agitators by using each discriminant individually.
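As an illustration only (not the authors' code), the following Python sketch mirrors the acknowledgement conditions and the recall computation of Section 6.2: sites in the top fraction of an agitator score become candidates, sites that are candidates at least twice in a recent window are acknowledged, and recall is the fraction of acknowledged agitators that appear as candidates again in the evaluation period. All site names and scores are hypothetical.

```python
def candidates(scores, top_fraction=0.3):
    """scores: {site: agitator_score} for one period -> set of candidate sites."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[: max(1, int(len(ranked) * top_fraction))])

def acknowledged(candidate_sets, min_hits=2):
    """candidate_sets: list of candidate sets for the past months (the window)."""
    return {s for s in set().union(*candidate_sets)
            if sum(s in cs for cs in candidate_sets) >= min_hits}

def recall(acknowledged_sites, current_candidates):
    if not acknowledged_sites:
        return 0.0
    return len(acknowledged_sites & current_candidates) / len(acknowledged_sites)

# Example with hypothetical scores for three past months and one evaluation month.
past = [candidates({"siteA": 9, "siteB": 5, "siteC": 2, "siteD": 1}),
        candidates({"siteA": 7, "siteC": 6, "siteB": 2, "siteD": 1}),
        candidates({"siteB": 8, "siteA": 4, "siteD": 3, "siteC": 1})]
ack = acknowledged(past)                        # sites that were candidates >= 2 times
now = candidates({"siteA": 6, "siteD": 5, "siteB": 1, "siteC": 1})
print(ack, recall(ack, now))
```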

7 Conclusions

We proposed a method for identifying agitators as important bloggers. The results of this study can be summarized as follows:
1. We described a method for extracting blog threads and extracted threads from real blog data (more than 15,000,000 entries) registered in our system.
2. We described a method for identifying agitators as important bloggers by establishing discriminants for them.
3. We evaluated our method for identifying agitators; the results indicate that the three proposed discriminants can be used to identify agitators and thereby discover currently popular conversations.


In future work, we plan to improve our discriminants for identifying agitators and to develop a system for discovering hot conversations on the Web based on the blog content of agitators.

Acknowledgements This research was partly supported by a grant for Scientific Research (17700132) from the Ministry of Education, Culture, Sports, Science and Technology of Japan.


Detecting Collusion Attacks in Security Protocols

Qingfeng Chen1, Yi-Ping Phoebe Chen1, Shichao Zhang2, and Chengqi Zhang2

1 School of Information Technology, Deakin University, Melbourne, VIC 3128, Australia
{qingfeng.chen, phoebe}@deakin.edu.au
2 Faculty of Information Technology, University of Technology Sydney, PO Box 123, Broadway NSW 2007, Australia
{zhangsc, chengqi}@it.uts.edu.au

Abstract. Security protocols have been widely used to safeguard secure electronic transactions. We usually assume that principals are credible and shall not maliciously disclose their individual secrets to someone else. Nevertheless, it is impractical to completely ignore the possibility that some principals may collude in private to achieve a fraudulent or illegal purpose. Therefore, it is critical to address the possibility of collusion attacks in order to correctly analyse security protocols. This paper proposes a framework by which to detect collusion attacks in security protocols. The possibility of security threats from insiders is especially taken into account. The case study demonstrates that our methods are useful and promising in discovering and preventing collusion attacks.

1 Introduction

With its rapid growth, electronic commerce (e-commerce) has come to play a central role in the global economy. However, its vast growth potential is weakened by security concerns. For example, a customer's transaction record can be maliciously intercepted and revealed by computer or network hackers. Threats to the security of electronic transactions can be classified as internal or external. The internal threat is clearly a danger, but most companies are more concerned about the external threat: many companies feel reasonably safe that the internal threat can be controlled through corporate policies and internal access control, and hence focus on unknown outside users who may gain unauthorised access to the corporation's sensitive assets. Although it is usually very hard for a single principal to break through the protective barriers surrounding secure messages, several principals may put their respective secrets together to launch a collusion attack.

As a fundamental measure for fulfilling corporate security objectives, security protocols have been commonly treated as a requisite of e-commerce systems. However, their design is a difficult and error-prone task, and subtle flaws have been found in a number of security protocols that were previously believed to be secure [1]. Subsequently, there have been many remarkable efforts in the analysis of security protocols, developing methodologies, theories, logics, and other supporting tools [1, 3]. These efforts are effective in overcoming weaknesses and reducing redundancies at the design stage of protocols. Among them, theorem proving [1] and model checking [5] have been regarded as two of the most efficient approaches for analysing security protocols. However, the possibility of internal threats, as mentioned above, has been greatly underestimated and ignored: traditional approaches unrealistically assume that no principal can access secrets that exceed his/her usual legal authority. A user who attempts to obtain unauthorised data by colluding with other principals might discover secrets which would otherwise remain protected. For example, principals A, B and C in Figure 1 can collude with each other to generate the message {m1, m2, m3} even though none of them previously knew this message individually. Therefore, detecting collusion attacks is critical to a reliable analysis of security protocols.

There have been many efforts to ensure that digital data is secure within the context of collusion. A general fingerprinting solution used to detect any unauthorised copy is presented in [6], and a novel collusion-resilient mechanism using pre-warping was proposed to trace an illegal un-watermarked copy [7]. However, no work has been conducted to detect collusion attacks in security protocols. The possibility that a collusion attack will occur is, in fact, determined by the extent of message sharing. Therefore, we can identify workable datasets by identifying frequent itemsets in transaction databases using data mining algorithms [2]. The obtained frequent itemsets can then be used to search for collusion attacks according to a user's request.

This paper proposes a framework by which to formalise collusion attacks and identify them within security protocols. Frequent itemsets that may launch attacks are extracted from transaction databases, which reduces the search space. In particular, they are converted into the form of Prolog predicates in order to match the rules in the established knowledge base. The case study demonstrates that our approach can complement the traditional analysis of security protocols.

The remainder of this paper is organised as follows. Section 2 presents basic concepts and notations. Section 3 presents a detection model, including the formalisation of collusion attacks. A case study is given in Section 4. Section 5 concludes the paper.

2 Basic Concepts

Suppose A denotes atom symbols. Let L be proposition formulae formed in the usual way from a collection of atom symbols and logical connectives such as ∧, ¬ and →. Let X, Y ∈ A be principals such as Sender, Receiver and Third Party. Let α, β, γ and m be secure messages and φ, ϕ and ψ be formulae. Let CA be a Certificate Authority, Z be an attacker, k be a key and Cert(X)CA be X's certificate signed by CA. Kp(X) and K−1(X) represent the public/private key pair of X, respectively; S(m, K−1(X)) represents the message m signed using K−1(X); E(m, k) represents the message m encrypted by k. The processing of a secure message in an e-commerce system usually consists of four steps: generation, sending, receiving and authentication. Messages are transmitted as either plaintext or ciphertext.

The following rules are derived from BAN logic [1] and describe the fundamental operations on secure messages.

(1) Generation Rule. If message m is generated by X, X knows m.

    generate(X, m)
    --------------
    know(X, m)

(2) Delivery Rule. If X knows message m and sends m to receiver Y, Y knows the message m.

    know(X, m) ∧ send(X, Y, m)
    --------------------------
    know(Y, m)

(3) Public Key Rule. If Y knows message m, Y can sign this message using his/her private key.

    know(Y, m)
    ----------------------
    know(Y, S(m, K−1(Y)))

(4) Encryption Rule. If Y knows message m and a key k, Y can encrypt this message using k.

    know(Y, m) ∧ know(Y, k)
    -----------------------
    know(Y, E(m, k))

(5) Belief Rule. X generates message m and sends it to Y. If Y sees this message and m is fresh, Y believes X in the message m.

    send(X, Y, m) ∧ know(Y, X, m) ∧ fresh(m)
    ----------------------------------------
    believe(Y, X, m)

This rule indicates that principal Y believes that the message m from X is not a replay. A timestamp is usually used to ensure the freshness of secure messages [3].

(6) Certificate Rule 1. If CA2's certificate is signed with CA1's private key, and Y verifies CA2's public key using CA1's public key, Y believes CA2's public key.

    signsc(CA1, CA2, Cert(CA2)CA1) ∧ verify(Y, CA2, Kp(CA2))
    --------------------------------------------------------
    believe(Y, CA2, Kp(CA2))

(7) Certificate Rule 2. If X's certificate is signed with the CA's private key, and Y verifies X's public key, Y believes X's public key.

    signsp(CA, X, Cert(X)CA) ∧ verify(Y, X, Kp(X))
    ----------------------------------------------
    believe(Y, X, Kp(X))

The first four rules describe the generation, transmission and basic encryption of messages. The remaining rules validate belief in message freshness and in the validity of principals' public keys. More rules can be found in [4]; they are not included here owing to space limitations.


A collusion attack usually involves an attacker Z, a group of participants P = {P1, …, Pn}, and a collusion attack threshold k, 1 ≤ k ≤ n. Figure 1 presents an instance of how A, B and C collaborate to disclose a secret to Z. Note that not all combinations of k principals are able to obtain the secret. The roles of the principals are as follows:

− Participant P: a user who participates in the transaction using electronic transaction protocols.
− Attacker Z: a user who intends to collect messages from various participants in order to launch an attack.
− Threshold k: the minimum number of participants who can achieve a collusion attack.

Fig. 1. A collusion attack handled by A, B and C

Definition 2.1. The access structure Γ of the group P = {P1, …, Pn} denotes the sets of principals who may jointly recover secret s by putting their individual secrets together. MX denotes the set of secure messages of X. A(k, n) is the threshold scheme that allows the secret to be recovered if the currently active subgroup A ⊂ P has k < n principals.

Γ = {A | α1 ∧ … ∧ αk → s, αi ⊂ MXi, Xi ∈ A, 1 ≤ k ≤ n}

where αi represents a subset of the messages in MXi.

Example 2.1. Let MTom = {Jim, order}, MBob = {order, one, textbook} and MAlice = {biology, textbook} be the principals' secure messages. Nobody can obtain a complete understanding of this order alone, but "Jim orders one biology textbook" can be derived by integrating {Jim} ⊂ MTom, {order, one} ⊂ MBob and {biology, textbook} ⊂ MAlice. Hence, we have Γ = {Tom, Bob, Alice} and k = 3.

From this observation, the above instance cannot give rise to a collusion attack, because all principals must participate in generating the order; Tom and Bob, for instance, may cooperate to obtain "Jim orders one textbook" but can never learn that it is a biology textbook. According to Definition 2.1, a collusion attack must satisfy three prerequisites.

(1) α1 ∧ … ∧ αk → s, αi ⊂ MXi, Xi ∈ A;

(2) 1 ≤ k ≤ n; and
(3) every αi must belong to more than one principal.

If αi belongs to a single participant Xi only, it is not difficult to confirm that Xi participated in the attack. However, some secrets may be shared among several principals, and in that case it is hard to determine who should be responsible for the disclosure of the secret. This endangers transaction security, as attacker Z is able to obtain secret s without passing the usual authentication process.

3 A Framework to Detect Collusion Attacks

As mentioned above, each αi used to launch an attack must belong to more than one principal. Hence, it is necessary to discover the αi that are shared by more than one principal. The secure messages of each principal can be viewed as a transaction database, so the detection of the αi can be converted into the identification of frequent k-itemsets (2 ≤ k ≤ n). Three steps are used to detect collusion attacks:

1. identify frequent k-itemsets from the transaction databases of the principals;
2. construct a knowledge base of inference rules; and
3. detect collusion attacks by matching the frequent itemsets against the knowledge base.

3.1 Identifying Frequent Itemsets

Let I = {i1, …, in} be a set of items and D be a collection of transactions, called the transaction database. Each transaction T ∈ D consists of a collection of items. Let A ⊆ I be an itemset. We say that a transaction T contains A if A ⊆ T. An itemset A in a transaction database D has a support, denoted supp(A):

supp(A) = |TA| / |D|    (1)

where TA denotes the transactions in D that contain itemset A. An itemset A in D is called a frequent itemset if its support is equal to or greater than the minimum support minsupp designated by the user or by experts. Details can be found in the support-confidence framework [2]. In this paper, the Frequent Pattern (FP) tree algorithm is used to identify frequent itemsets from transaction data; unlike traditional data mining, however, we do not focus on discovering valid rules of interest. From the above, minsupp needs to be specified so that frequent itemsets can be identified. According to the prerequisites of collusion attacks mentioned above, each message subset αi must belong to at least two principals. Suppose there are n (n ≥ 3) principals in a transaction. Then,

minsupp = m / n    (2)


Here, n must be equal to or greater than 3, since it is impractical for a collusion attack to be carried out in a transaction that involves only two principals; in other words, such an attack would not be difficult to detect if it really happened. On the other hand, m ≥ 2 can be tuned by users according to their security demands: the larger its value, the higher the minimum support and hence the stricter the sharing requirement on the identified frequent itemsets.

Example 3.1. Suppose there are four principals, P1, P2, P3, and P4, in a transaction T. Their datasets are {α, β, γ, μ}, {α, β, μ}, {β, γ, ν} and {α, γ}, respectively. Let m = 2. According to formula (2), minsupp = 2/4 = 0.5. Then supp(α) = 3/4 = 0.75 > minsupp, supp(β) = 3/4 = 0.75 > minsupp, supp(γ) = 3/4 = 0.75 > minsupp, supp(μ) = 2/4 = 0.5 ≥ minsupp and supp(ν) = 1/4 = 0.25 < minsupp. Thus, the frequent 1-itemsets are {α}, {β}, {γ} and {μ}. In the same way, we can identify frequent 2-itemsets, such as {α, β} with supp(α ∪ β) = 2/4 = 0.5 ≥ minsupp.

3.2 Dealing with Knowledge and Facts

This section describes how to construct a knowledge base and how to manipulate the facts derived from transaction databases. For brevity, we assume that communication channels and keys are secure and reliable. Additionally, secure messages are assumed to be fresh, and there is a correct association between public keys and principals. Consequently, belief in message freshness and in the validity of principals' public keys is not discussed below.

A knowledge base comprises knowledge that is specific to the domain of application, including facts about the domain and rules that describe relations or phenomena in the domain. The inference rules of the knowledge base capture the basic manipulations of secure messages in security protocols. Facts are general knowledge that is commonly accepted, for example "Alice knows her own public/private keys". Suppose R denotes the set of inference rules of a knowledge base. Then R = {rule1, rule2, …, rulen}, where each rule is of the form

rulei = (N, [Conditionij], Conclusioni), 1 ≤ i ≤ n, 1 ≤ j,

where Conditionij is a set of simple assertions linked by logic connectives, Conclusioni is a simple assertion without logic connectives, and N is the rule name. The assertions in rules can be terms that contain variables.

Example 3.2. The generation rule and delivery rule of Section 2 can be written as rule1 = (1, [generate(X, m)], know(X, m)) and rule2 = (2, [know(X, m), send(X, Y, m)], know(Y, m)), respectively.

Each transaction database comprises a collection of secure messages from a corresponding principal. As mentioned above, we aim to identify frequent itemsets in these transaction databases; the detection of collusion attacks is then implemented by matching the derived frequent itemsets against the knowledge base.

Suppose the transaction database T contains m principals. Then

T = {MP1, …, MPm}

where each MPi (1 ≤ i ≤ m) denotes the set of secure messages of principal Pi.
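As an illustration of Section 3.1 (our own naive enumeration, not the FP-tree algorithm the paper uses), the following Python sketch takes such a transaction database T and computes its frequent itemsets for a given m, reproducing Example 3.1.

```python
from itertools import combinations

def frequent_itemsets(T, m=2):
    """T: list of per-principal message sets; returns {frozenset(itemset): support}."""
    n = len(T)
    minsupp = m / n                       # formula (2)
    items = set().union(*T)
    frequent = {}
    for k in range(1, len(items) + 1):
        found = False
        for cand in combinations(sorted(items), k):
            cand = frozenset(cand)
            supp = sum(1 for M in T if cand <= M) / n   # formula (1)
            if supp >= minsupp:
                frequent[cand] = supp
                found = True
        if not found:
            break   # no frequent k-itemsets => no larger ones either (Apriori property)
    return frequent

# Example 3.1: four principals, m = 2, minsupp = 0.5
T = [{"α", "β", "γ", "μ"}, {"α", "β", "μ"}, {"β", "γ", "ν"}, {"α", "γ"}]
result = frequent_itemsets(T)   # contains {α}, {β}, {γ}, {μ}, {α, β}, ...
```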

Example 3.3. Based on Example 2.1, we have T = {MTom, MBob, MAlice} = {{Jim, order}, {order, one, textbook}, {biology, textbook}}.

3.3 Detecting Collusion Attacks

The established knowledge base and the derived frequent itemsets are the foundation for detecting potential collusion attacks in security protocols. In this paper, the intrinsic inference mechanism of Prolog is used to manipulate the knowledge base and the frequent itemsets. The frequent itemsets therefore need to be converted into predicates that conform to Prolog. In addition, the host names of secure messages are required in order to identify the principals who may be involved in collusion attacks.

Definition 3.1. Suppose Fk = {{α1, …, αk} | supp(α1 ∪ … ∪ αk) ≥ minsupp, αi ∈ T, 1 ≤ i ≤ k} denotes the set of frequent k-itemsets of transaction T. Let P = {P1, …, Pn} be the principals who participate in this transaction. Then

know(Pj, {α1, …, αk}) iff {α1, …, αk} is a frequent itemset of Pj    (3)

In this definition, the predicate know(Pj, {α1, …, αk}) denotes that principal Pj knows the message {α1, …, αk}. According to formula (2), {α1, …, αk} must be known by more than one principal.

Example 3.4. As mentioned in Example 3.1, α, β, γ and μ are frequent 1-itemsets and α ∪ β is a frequent 2-itemset. Hence, after conversion we have know(P1, α), know(P1, β), know(P1, γ), know(P1, μ), know(P2, α), know(P2, β), know(P1, α ∪ β) and know(P2, α ∪ β).

The converted frequent itemsets can be collected interactively via a user interface; the fact database is emptied before each collection. Once a user submits a detection request, we need a reasoning procedure to efficiently manipulate the knowledge base and the derived frequent itemsets. For if-then rules, there are two basic ways of reasoning [8]:

− backward chaining, and
− forward chaining.

Backward chaining starts with a hypothesis and works backwards, according to the rules in the knowledge base, towards easily confirmed findings; forward chaining works in the opposite direction. Backward chaining is chosen as the reasoning method in our detection model: it searches backwards from the goal we want to verify towards the data. The detection starts with a pre-defined suspect secure message that may be subject to a collusion attack. If the goal is eventually proven, a collusion attack is found; otherwise, if the goal cannot be proven from the existing information, we conclude that no collusion attack has occurred in the current transactions.
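The following Python sketch (our own illustration, not the paper's Prolog implementation) shows the flavour of this backward-chaining check over rules in the (N, [conditions], conclusion) form of Example 3.2 and know facts derived from frequent itemsets; the encoding is simplified and variable unification is omitted.

```python
def backward_chain(goal, facts, rules, depth=10):
    """Return True if `goal` can be derived from `facts` using `rules`.
    facts: set of ground terms, e.g. ("know", "P1", frozenset({"RegFormReq", "k1"}));
    rules: list of (name, [conditions], conclusion) over ground terms (no unification)."""
    if depth == 0:
        return False
    if goal in facts:
        return True
    for _name, conditions, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(c, facts, rules, depth - 1) for c in conditions):
            return True
    return False

# Toy encoding of the encryption rule for one principal and one suspect message.
msg = frozenset({"RegFormReq"})
facts = {("know", "P1", msg), ("know", "P1", "k1")}
rules = [("encryption", [("know", "P1", msg), ("know", "P1", "k1")],
          ("know", "P1", ("E", msg, "k1")))]
print(backward_chain(("know", "P1", ("E", msg, "k1")), facts, rules))  # True
```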


4 A Case Study

To illustrate the application of our approach, an instance of an online transaction extracted from the SET protocol [9] is presented below; thanks to the extensibility of the knowledge base, other security protocols can be analysed in the same way. The example concerns a registration form request handled by a cardholder C, whose aim is to obtain a valid registration form from a certificate authority (CA) in order to complete registration. Once the registration form is obtained, it is not difficult to initiate a certificate request and gain valid certificates issued by the CA. As described in [9], the transmitted secure messages comprise the primary account number (PAN), the registration form request (RegFormReq), a symmetric key k1 and the public key-exchange key of the CA (Kpb(CA)). Only the CA, C and the Issuer know the PAN, which is effectively obfuscated using a blinding technique. Suppose there are four principals in this process. The set of secure messages of each principal can be regarded as a transaction database, for example MP1 = {PAN, RegFormReq, k1, Kpb(CA)} and MP4 = {null, null, null, Kpb(CA)}. Table 1 presents the secure messages in the transaction databases.

Table 1. Secure messages in registration form request

Principal  PAN   RegFormReq   k1    Kpb(CA)
P1         PAN   RegFormReq   k1    Kpb(CA)
P2         PAN   RegFormReq   k1    Kpb(CA)
P3         PAN   RegFormReq   null  Kpb(CA)
P4         null  null         null  Kpb(CA)

The primary work is to identify frequent itemsets from Table 1. Let m = 2. According to formula (2), minsupp = 2/4 = 0.5. As a result, the frequent itemsets can be derived using the Frequent Pattern (FP) tree algorithm [2]:

− frequent 1-itemsets: {PAN}, {RegFormReq}, {k1}, {Kpb(CA)};
− frequent 2-itemsets: {PAN, RegFormReq}, {PAN, k1}, {PAN, Kpb(CA)}, {RegFormReq, k1}, {RegFormReq, Kpb(CA)}, {k1, Kpb(CA)};
− frequent 3-itemsets: {PAN, RegFormReq, k1}, {PAN, RegFormReq, Kpb(CA)}, {RegFormReq, k1, Kpb(CA)};
− frequent 4-itemsets: {PAN, RegFormReq, k1, Kpb(CA)}.

After obtaining these frequent itemsets, it is necessary to convert them into predicates that conform to Prolog. {PAN}, for example, is transformed into know(P1, PAN), know(P2, PAN) and know(P3, PAN). The knowledge base, consisting of inference rules and facts, can be constructed via the user interface mentioned in Section 3. Once these steps are completed, the user can submit a detection request:

?- Detection(E(RegFormReq, k1))

Backward-chaining search is applied here: the detection model attempts to find rules in the knowledge base that match the goal to be verified. The detection system finally returns "true" for Detection(E(RegFormReq, k1)), since {RegFormReq, k1} is a frequent itemset and satisfies the encryption rule of the knowledge base. Finally, an early warning of a collusion attack is sent to the user. In the same way, the user can send another request:

?- Detection(S(<k1, PAN>, Kpb(CA)))

For this request, the detection system needs to match two frequent itemsets, {k1, PAN} and {Kpb(CA)}. It ascertains that the transaction contains a potential collusion attack, because both {k1, PAN} and {Kpb(CA)} are frequent itemsets and satisfy the public key rule of the knowledge base. The user can, of course, submit other detection requests, which are not discussed here.

5 Conclusion

Security protocols play a nontrivial role in guaranteeing secure e-commerce, and many approaches have been developed to validate them by detecting potential flaws. Despite widespread analysis with formal methods, collusion attacks, a hidden and hazardous security issue, have been greatly neglected. This paper presents a novel data-mining-based model for detecting collusion attacks in security protocols. In particular, the set of secure messages of each principal is viewed as a transaction database, so that detection can be converted into the identification of frequent itemsets in transaction databases. The case study demonstrates that our approach is useful in the analysis of security protocols.

Acknowledgement

This work is partially supported by ARC Discovery grants (DP0559251, DP0449535, DP0559536 and DP0667060) from the Australian Research Council and China NSF research grants (60496327, 60463003).

References

1. Burrows M., Abadi M., Needham R., "A Logic of Authentication", ACM Transactions on Computer Systems, 8(1), pp. 18-36, February 1990.
2. Chengqi Zhang and Shichao Zhang, "Association Rule Mining: Models and Algorithms", LNAI 2307, Springer-Verlag, Germany, 2002.
3. Denning D. and Sacco G., "Timestamps in Key Distribution Protocols", Communications of the ACM, 24(8), pp. 533-536, August 1981.
4. Qingfeng Chen, Chengqi Zhang and Shichao Zhang, "ENDL: A Logical Framework for Verifying Secure Transaction Protocols", Knowledge and Information Systems, 7(1), pp. 84-109, 2005.
5. Heintze N., Tygar J., Wing J., and Wong H., "Model Checking Electronic Commerce Protocols", Proceedings of the 2nd USENIX Workshop on Electronic Commerce, pp. 147-164, Oakland, California, November 1996.


6. Boneh D. and Shaw J., "Collusion-Secure Fingerprinting for Digital Data", IEEE Transactions on Information Theory, 44(5), pp. 1897-1905, September 1998.
7. Celik M. U., Sharma G. and Tekalp A. M., "Collusion-Resilient Fingerprinting Using Random Pre-Warping", Proceedings of the IEEE International Conference on Image Processing, pp. 509-512, 2003.
8. Bratko I., "Prolog Programming for Artificial Intelligence", Addison-Wesley, 1990.
9. SET Secure Electronic Transaction Specification, Book 1: Business Description, Version 1.0, May 31, 1997.

Role-Based Delegation with Negative Authorization

Hua Wang1, Jinli Cao2, and David Ross3

1 Department of Maths & Computing, University of Southern Queensland, Toowoomba QLD 4350, Australia
[email protected]
2 Department of Computer Science & Computer Engineering, La Trobe University, Melbourne, VIC 3086, Australia
[email protected]
3 Engineering Faculty, University of Southern Queensland, Toowoomba QLD 4350, Australia
[email protected]

Abstract. The role-based delegation model (RBDM), built on role-based access control (RBAC), has proven to be a flexible and useful access control model for information sharing in distributed collaborative environments. Authorization is an important functionality for RBDM in distributed environments, where a conflict may arise when one user grants the permission of a role to a delegated user and another user grants the corresponding negative permission to the same delegated user. This paper analyses role-based group delegation features that have not been studied before, and provides an approach to the conflicting problem by adopting negative authorization. We first present the granting and revocation delegation models, and then discuss user delegation authorization and the impact of negative authorization on role hierarchies.

1 Introduction

Delegation is the process whereby an active entity grants access resource permissions to another entity in a distributed environment. Delegation is recognised as vital in a secure distributed computing environment [Abadi et al. 1993; Barka and Sandhu 2000a]. However, a security conflict may arise when one user grants the permission of a role to a delegated user and another user rejects that permission for the same delegated user. The most common delegation types include user-to-machine, user-to-user, and machine-to-machine delegation. They all have the same consequence, namely the propagation of access permissions. Propagation of access rights in decentralized collaborative systems presents challenges for traditional access mechanisms, because authorization decisions are made based on the identity of the resource requester; unfortunately, access control based on identity may be ineffective when the requester is unknown to the resource owner [Wang et al. IEEE03]. Recently, several distributed access control mechanisms have been proposed: Lampson et al. [1992] present an example of how a person can delegate his or her authority to others; Blaze et al. [1999] introduced trust management for decentralized authorization; Abadi et al. [1993] showed how to express delegation with an access control calculus; and Aura [1999] described a delegation mechanism to support access management in a distributed computing environment. None of these papers analyses the conflicting security problem.

The National Institute of Standards and Technology developed a role-based access control (RBAC) prototype [Feinstein, 1995] and published a formal model [Ferraiolo et al. 1992]. RBAC enables managing and enforcing security in large-scale and enterprise-wide systems, and many enhancements of RBAC models have been developed in the past decade. In RBAC models, permissions are associated with roles, users are assigned to appropriate roles, and users acquire permissions through roles. Users can easily be reassigned from one role to another, roles can be granted new permissions, and permissions can easily be revoked from roles as needed. RBAC therefore provides a means for empowering individual users through role-based delegation in distributed collaboration environments. However, there is little work on delegation with RBAC. This paper analyses a role-based delegation model based on RBAC and provides a solution to the conflicting problem by adopting negative authorization.

The remainder of this paper is organized as follows. Section 2 presents related work on delegation models and RBAC; it shows that neither group-based delegation with RBAC nor negative authorization for delegation models has been analysed in the literature. Section 3 proposes a delegation framework that includes group-based delegation, and discusses granting authorization with prerequisite conditions as well as revocation authorization. Section 4 provides an approach to the conflicting problem by adopting negative authorization and briefly discusses how to use negative authorization in the delegation framework. Section 5 concludes the paper and outlines our future work.

2 Related Work

The concept of delegation is not new in authorization [Aura 1999; Barka and Sandhu 2000a; Wang et al. IEEE03; Wang et al. ACSC05], but role-based delegation has received attention only recently [Barka and Sandhu 2000a, 2000b; Zhang et al. 2001, 2002]. Aura [1999] introduced key-oriented discretionary access control systems that are based on delegation of access rights with public-key certificates. A certificate denotes a signed message that includes both the signature and the original message; with the certificate, the issuer delegates the rights R to someone. These systems emphasize decentralization of authority and operations, but their approach is a form of discretionary access control. Hence, they can neither express mandatory policies such as the Bell-LaPadula model [1976], nor verify that someone does not hold a certificate. Furthermore, some important policies, such as separation-of-duty policies, cannot be expressed with certificates alone; an additional mechanism is needed to maintain previously granted rights, and the histories must be updated in real time when new certificates are issued. Delegation is also applied in decentralized trust management [Blaze et al. 1999; Li et al. 2000]. Blaze et al. [1999] identified the trust management problem as a distinct and important component of security in network services, and Li et al. [2000] developed a logic-based knowledge representation for authorization with tractable trust management in large-scale, open, distributed systems. Delegation was used to address the trust management problem, including formulating security policies and security credentials, determining whether particular sets of credentials satisfy the relevant policies, and deferring trust to third parties. Other researchers have investigated machine-to-machine and human-to-machine delegation [Wang et al. WISE01; Abadi et al. 1993]. For example, Wang et al. [WISE01] proposed a secure, scalable anonymity payment protocol for Internet purchases that works through an agent which provides a more anonymous certificate and improves the security of consumers. The agent certifies re-encrypted data after verifying the validity of the content from consumers; this is a human-to-machine delegation that can provide new certificates. However, many important role-based concepts, such as role hierarchies, constraints and revocation, were not addressed. Zhang et al. [2001, 2002] proposed a rule-based framework for role-based delegation including the RDM2000 model. RDM2000 is based on the RBDM0 model, a simple delegation model supporting only flat roles and single-step delegation; furthermore, it does not support group-based delegation.

This paper focuses exclusively on a role-based delegation model that supports group-based delegation and provides a solution to the conflicting problem with negative authorization. We extend our previous work and propose a delegation framework including delegation granting and revocation models and group-based delegation. To provide sufficient functionality, we also analyse how changes to original role assignments impact delegation results. This kind of group-based delegation and negative authorization within a delegation framework has not been studied before.

3 Delegation Framework

In this section, we propose a role-based delegation model called RBDM which supports role hierarchies and group delegation by introducing the delegation relation.

3.1 Basic Elements and Components

RBAC involves individual users being associated with roles, and roles being associated with permissions (each permission is a pair of an object and an operation); a role thus serves to associate users with permissions. A user in this model is a human being. A role is a job function or job title within the organization, associated with authority and responsibility. As shown in Figure 1, the relationships between users and roles, and between roles and permissions, are many-to-many.


Many organizations prefer to centrally control and maintain access rights, not so much at the system administrator's personal discretion but rather in accordance with the organization's protection guidelines [David et al. 1993]. RBAC is being considered as part of the emerging SQL3 standard for database management systems, based on its implementation in Oracle 7 [Sandhu 1997], and many practical RBAC applications have been implemented [Barkley et al. 1999; Sandhu 1998].

Fig. 1. RBAC relationship (many-to-many user-role assignment (UA) between USERS and ROLES, and many-to-many permission-role assignment (PA) between ROLES and PERMISSIONS, where each permission pairs an OPERATION with an OBJECT; ROLES form a senior-junior hierarchy)

A session is a mapping between a user and possibly many roles; for example, a user may establish a session by activating some subset of his or her assigned roles. A session is always associated with a single user, and each user may establish zero or more sessions. There may be hierarchies among roles: senior roles are shown at the top of the hierarchy and inherit permissions from junior roles. Let x > y denote that x is senior to y, with the obvious extension to x ≥ y. Role hierarchies provide a powerful and convenient means to enforce the principle of least privilege, since only the permissions required to perform a task are assigned to the role. Although the concept of a user can be extended to include intelligent autonomous agents, machines, or even networks, we limit a user to a human being in our model for simplicity. Figure 2 shows the role hierarchy of RBAC for an example problem-oriented system (POS) that has two projects, and Table 1 gives an example of user-role assignment in POS. There are two sets of users associated with a role r: original users are those who are assigned to the role r, and delegated users are those to whom the role r is delegated. The same user can be an original user of one role and a delegated user of another role, and it is also possible for a user to be both an original user and a delegated user of the same role. For example, if Christine delegates her role HO1


Fig. 2. Role hierarchy in POS (roles: Director (DIR), Project 1 Head Officer (HO1), Project 2 Head Officer (HO2), Collaborator 1 (Co1), Collaborator 2 (Co2), Report 1 (Re1), Report 2 (Re2), Analysis Project (AP), Assessment Project (AsP), Community Service (CS))

Table 1. User-Role relationship

RoleName  UserName
DIR       Tony
HO1       Christine
HO2       Mike
Co1       Richard
Re1       John
CS        Ahn

to Richard, then Richard is both an original user (explicitly) and a delegated user (implicitly) of role Co1, because role HO1 is senior to role Co1. The original user assignment (UAO) is a many-to-many assignment relation between original users and roles. The delegated user assignment (UAD) is a many-to-many assignment relation between delegated users and roles. The RBDM model has the following components:

1. U, R, P and S are sets of users, roles, permissions, and sessions, respectively.
2. UAO ⊆ U × R is a many-to-many original user to role assignment relation.
3. UAD ⊆ U × R is a many-to-many delegated user to role assignment relation.
4. UA = UAO ∪ UAD. Users: R ⇒ 2^U is a function mapping each role to a set of users, Users(r) = {u | (u, r) ∈ UA}, where UA is the user-role assignment.
5. Users(r) = Users_O(r) ∪ Users_D(r), where Users_O(r) = {u | ∃r′ ≥ r, (u, r′) ∈ UAO} and Users_D(r) = {u | ∃r′ ≥ r, (u, r′) ∈ UAD}.
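To make these components concrete, here is a small Python sketch (our own, with hypothetical names; only the hierarchy edges suggested by the paper's examples are assumed) of UAO, UAD and the Users function over the POS roles.

```python
# Partial role hierarchy of Fig. 2: senior role -> immediately junior roles (assumed edges).
JUNIORS = {"DIR": {"HO1", "HO2"}, "HO1": {"Co1", "Re1"}, "HO2": {"Co2", "Re2"}}

def is_senior_or_equal(x, y):
    """x >= y in the role hierarchy (reflexive transitive closure of JUNIORS)."""
    if x == y:
        return True
    return any(is_senior_or_equal(j, y) for j in JUNIORS.get(x, set()))

UAO = {("Tony", "DIR"), ("Christine", "HO1"), ("Mike", "HO2"),
       ("Richard", "Co1"), ("John", "Re1"), ("Ahn", "CS")}   # Table 1
UAD = {("Richard", "HO1")}   # e.g. Christine has delegated HO1 to Richard
UA = UAO | UAD

def users(r, assignment):
    return {u for (u, r2) in assignment if is_senior_or_equal(r2, r)}

def users_all(r):
    return users(r, UAO) | users(r, UAD)

# Members of Co1 include Richard (explicitly and via the delegated HO1),
# Christine (HO1 >= Co1) and Tony (DIR >= Co1).
print(users_all("Co1"))
```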

3.2 Role-Based Delegation

The scope of our model is user-to-user delegation supporting role hierarchies and group delegation. We consider only regular role delegation in this paper, even though it is possible and desirable to delegate an administrative role. A delegation relation (DELR) exists in the role-based delegation model; it involves three elements: original user assignments UAO, delegated user assignments UAD, and constraints. The motivation behind this relation is to capture the relationships among the different components involved in a delegation. In a user-to-user delegation, there are five components: a delegating user, a delegating role, a delegated user, a delegated role, and associated constraints. For example, ((Tony, DIR), (Christine, DIR), Friday) means that Tony, acting in role DIR, delegates role DIR to Christine on Friday. We assume each delegation is associated with zero or more constraints. The delegation relation supports partial delegation in a role hierarchy: a user who is authorized to delegate a role r can also delegate a role r′ that is junior to r. For example, ((Tony, DIR), (Ahn, Re1), Friday) means that Tony, acting in role DIR, delegates the junior role Re1 to Ahn on Friday. A delegation relation is a one-to-many relationship on user assignments. It consists of original user delegation (ORID) and delegated user delegation (DELD). Figure 3 illustrates the components and their relations in the role-based delegation model.

Fig. 3. Role-based delegation model (ORID relates UAO to UAD and DELD relates UAD to UAD; both are subject to constraints)

From the above discussion, the following components are formalized:

1. DELR ⊆ UA × UA × Cons is a one-to-many delegation relation. A delegation can be represented by ((u, r), (u′, r′), Cons) ∈ DELR, which means that the delegating user u with role r delegates role r′ to user u′, who satisfies the constraint Cons.
2. ORID ⊆ UAO × UAD × Cons is an original user delegation relation.
3. DELD ⊆ UAD × UAD × Cons is a delegated user delegation relation.
4. DELR = ORID ∪ DELD.


In some cases, we may need to define whether or not a user can delegate a role to a group, and how many times, i.e. up to what maximum delegation depth. We analyze only one-step group delegation in this paper, which means the maximum delegation path is 1. The new relation for group delegation is defined as the delegation group relation (DELGR), which involves original user assignments UAO, delegated user assignments UAD, delegated group assignments GAD, and constraints. In a user-to-group delegation, there are five components: a delegating user (or a delegated user), a delegating role, a delegated group, a delegated role, and associated constraints. For example, ((Tony, DIR), (Project 1, DIR), 1:00pm–3:00pm Monday) means that Tony, acting in role DIR, delegates role DIR to all people involved in Project 1 during 1:00pm–3:00pm on Monday. A group delegation relation is a one-to-many relationship on user assignments. It consists of original user group delegation (ORIGD) and delegated user group delegation (DELGD). Figure 4 illustrates the components and their relations in the role-based group delegation model.

Fig. 4. Role-based group delegation model (ORID and ORIGD relate UAO to UAD and GAD respectively; DELD and DELGD relate UAD to UAD and GAD; all are subject to constraints)

We provide the following elements and functions for group delegation:

1. G is a set of users (a group).
2. DELGR ⊆ UA × GA × Cons is a one-to-many delegation relation. A group delegation can be represented by ((u, r), (G, r′), Cons) ∈ DELGR, which means that the delegating user u with role r delegates role r′ to group G, which satisfies the constraint Cons.
3. ORIGD ⊆ UAO × GAD × Cons is a relation between an original user and a group.
4. DELGD ⊆ UAD × GAD × Cons is a relation between a delegated user and a group.
5. DELGR = ORIGD ∪ DELGD.

4 Delegation Authorization

This section analyses delegation authorization and provides an approach to the conflicting security problem using negative authorization.

4.1 Authorization Models

The delegation authorization goal imposes restrictions on which role can be delegated to whom. We partially adopt the notion of a prerequisite condition from Wang et al. [ADC03] to introduce delegation authorization into the delegation framework. A prerequisite condition CR is an expression using the Boolean operators ∧ and ∨ on terms r and r̄, where r is a role, ∧ means "and" and ∨ means "or"; for example, CR = r1 ∧ r2 ∨ r3. The following relation authorizes user-to-user delegation in this framework: Can_delegate ⊆ R × CR × N, where R, CR and N are sets of roles, prerequisite conditions, and maximum delegation depths, respectively. For the group-based delegation of the previous section, N = 1. The meaning of (r, cr, n) ∈ Can_delegate is that a user who is a member of role r (or of a role senior to r) can delegate role r (or a role junior to r) to any user whose current role entitlements satisfy the prerequisite condition CR, without exceeding the maximum delegation depth n. There are related subtleties that arise in RBDM concerning the interaction between the delegation and revocation of user-to-user delegation membership and the role hierarchy.
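As an informal illustration (ours, not the paper's formalism), the following Python sketch evaluates a prerequisite condition such as CR = r1 ∧ r2 ∨ r3 against a user's current roles and checks a Can_delegate tuple; the role names and the condition encoding are hypothetical.

```python
def satisfies(cr, user_roles):
    """cr: prerequisite condition in disjunctive form, e.g. [["r1", "r2"], ["r3"]]
    meaning (r1 AND r2) OR r3; a term "~r" means the user must NOT hold role r."""
    def term_ok(t):
        return (t[1:] not in user_roles) if t.startswith("~") else (t in user_roles)
    return any(all(term_ok(t) for t in conj) for conj in cr)

def can_delegate(role, cr, depth, delegatee_roles, requested_depth=1):
    """Models (r, cr, n) ∈ Can_delegate for a prospective delegatee."""
    return satisfies(cr, delegatee_roles) and requested_depth <= depth

# Example: role Co1 may be delegated to users who hold Re1 and do not hold CS.
print(can_delegate("Co1", [["Re1", "~CS"]], 1, {"Re1"}))        # True
print(can_delegate("Co1", [["Re1", "~CS"]], 1, {"Re1", "CS"}))  # False
```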

Definition 1. A user-to-user delegation revocation is a relation Can-revoke ⊆ R × 2^R, where R is a set of roles. The meaning of Can-revoke(x, Y) is that a member of role x (or a member of a role senior to x) can revoke the delegation relationship of a user from any role y ∈ Y, where Y defines the range of revocation. Table 2 gives the Can-revoke relation for the role hierarchy of Figure 2.

Table 2. Can-revoke

RoleName  Role Range
HO1       [Co1, CS]

There are two kinds of revocation [Wang et al. ADC03]: weak revocation and strong revocation.

that U is an implicit member of role x if for some x > x, (U, x ) ∈ U A. Weak revocation only revokes explicit membership from a user and do not revoke implicit membership. On the other hand, strong revocation requires Table 2. Can-revoke RoleName Role Range HO1 [Co1, CS]

Role-Based Delegation with Negative Authorization

315

revocation of both explicit and implicit membership. Strong revocation of U's membership in x requires that U be removed not only from explicit membership in x, but also from explicit (or implicit) membership in all roles senior to x. Strong revocation therefore has a cascading effect upwards in the role hierarchy. For example, suppose there are two delegations ((Tony, DIR), (Ahn, AP), Friday) and ((John, Re1), (Ahn, AP), Friday), and Tony wants to remove the membership of AP from Ahn on Friday. With weak revocation, the first delegation relationship is removed, but the second delegation is not, so Ahn is still a member of AP. With strong revocation, both delegation relationships are removed and hence Ahn is no longer a member of AP.

4.2 An Approach for the Conflicting Problem

In the real world of access control, there are two well-known decision policies [Bertino et al. 1997]: (a) the closed policy, which allows an access if there exists a corresponding positive authorization and denies it otherwise; and (b) the open policy, which denies an access if there exists a corresponding negative authorization and allows it otherwise. The closed policy is widely applied in centralized management systems. In a decentralized environment, however, the closed policy has a major problem: the absence of a given authorization for a given user does not prevent this user from receiving that authorization later on. Bertino et al. [1997] proposed explicit negative authorizations as blocking authorizations: whenever a user receives a negative authorization, his positive authorizations become blocked. Negative authorization is typically discussed in the context of access control systems that adopt the open policy. The introduction of negative authorization brings with it the possibility of conflicts in authorization, an issue that needs to be resolved in order for the access control model to give a conclusive result. A full classification of the conflicts brought about by negative authorization is beyond the scope of this paper. Negative authorization is rarely mentioned in the RBAC literature, mainly because RBAC models such as RBAC96 and the proposed NIST standard model are based on positive permissions that confer on their holders the ability to do something. As previously discussed, a delegation relation ((u1, r1), (u′, r′), Cons1) ∈ DELR means that the delegating user u1 with role r1 delegated role r′ to user u′, who satisfies the constraint Cons1. What happens if there is another delegation ((u2, r2), (u′, ¬r′), Cons2) ∈ DELR, meaning that user u2 with role r2 refuses to delegate role r′ to user u′ under constraint Cons2? We analyse a solution to this conflict problem using the role hierarchy. We may use one of the following policies:
1. Denial takes precedence (DTP): negative authorizations are always adopted when a conflict exists.
2. Permission takes precedence (PTP): positive authorizations are always adopted when a conflict exists.


These two policies alone are too simple for enterprise collaboration, since neither provides an efficient solution. In an enterprise environment the role hierarchy is a very important feature, since a senior role has all permissions of its junior roles; that is, a senior role is more powerful than a junior role. Therefore, the effect of a negative authorization should differ depending on whether it comes from a senior or a junior role. A practical solution for the above conflict is: (1) role r′ can be delegated to user u′ if r1 is senior to r2; (2) role r′ cannot be delegated to user u′ if r1 is junior to r2. For security reasons, we suggest using the DTP policy for two roles with no hierarchy relationship when a conflict happens. We summarize the above discussion of the conflict problem (a sketch of this decision rule is given below):
1. Role r′ can be delegated to user u′ if r1 is senior to r2.
2. Role r′ cannot be delegated to user u′ if either r1 is junior to r2 or there is no hierarchy relationship between r1 and r2.
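A minimal sketch of this decision rule follows. It assumes the role hierarchy is available as a transitively closed senior-to-junior relation; the relation contents, function names and example roles are our own illustration rather than part of the paper's formal model.

# Role hierarchy as a set of (senior, junior) pairs, assumed transitively closed.
HIERARCHY = {("DIR", "AP")}

def senior_to(r1: str, r2: str) -> bool:
    """True if role r1 is strictly senior to role r2."""
    return (r1, r2) in HIERARCHY

def resolve_conflict(r1: str, r2: str) -> bool:
    """Decide whether the positive delegation (issued under r1) wins over the
    conflicting negative delegation (issued under r2).

    Returns True  -> role r' may be delegated to user u'
            False -> delegation is denied (junior or unrelated roles: DTP).
    """
    if senior_to(r1, r2):
        return True          # positive authorization from a senior role wins
    # junior role, or no hierarchy relationship: denial takes precedence
    return False

# Example: a DIR-issued delegation overrides an AP-issued rejection.
print(resolve_conflict("DIR", "AP"))   # True
print(resolve_conflict("AP", "DIR"))   # False
print(resolve_conflict("AP", "Re1"))   # False (no hierarchy relationship)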

5 Conclusions and Future Work

This paper has discussed a role-based delegation model and negative authorization as a solution to conflict problems that can easily arise in distributed environments. We have analysed not only the delegation framework, including delegation authorization and revocation with constraints, but also group-based delegation. To provide a practical solution for the conflict problem, we have analysed role hierarchies, the relationship between senior and junior roles, and positive and negative authorizations. The work in this paper significantly extends previous work in several aspects, for example group-based delegation and negative authorizations; it also opens a new direction on negative authorizations. Future work includes developing algorithms based on the framework and solution proposed in this paper, and a delegation revocation model that includes constraints.

References 1. Abadi, M., Burrows, M., Lampson, B., and Plotkin, G. 1993. A calculus for access control in distributed systems. ACM Trans. Program. Lang. Syst. 15, 4(Sept.), 706-734. 2. Al-Kahtani, E. and Sandhu, R. 2004. Rule-Based RBAC with Negative Authorization, 20th Annual Computer Security Applications Conference, Tucson, Arizona, 405-415. 3. Aura, T. 1999. Distributed access-rights management with delegation certificates. Security Internet programming. J. Vitec and C. Jensen Eds. Springer, Berlin, 211235. 4. Barka, E. and Sandhu, R. 2000. A role-based delegation model and some extensions. In Proceeings of 16th Annual Computer Security Application Conference, Sheraton New Orleans, December, 2000a, 168-177.


5. Barka, E. and Sandhu, R. 2000. Framework for role-based delegation model. In Proceedings of 23rd National Information Systems Security Conference, Baltimore, October 16-19, 2000b, 101-114. 6. Barkley J. F., Beznosov K. and Uppal J. 1999, Supporting Relationships in Access Control Using Role Based Access Control, Fourth ACM Workshop on RoleBased Access Control, 55-65. 7. Bell D.E., La Padula L.J. 1976. Secure Computer System: Unified Exposition and Multics Interpretation, Technical report ESD-TR-75-306, The Mitre Corporation, Bedford MA, USA. 8. Bertino, E. P. Samarati, P. and S. Jajodia, S. 1997. An Extended Authorization Model for Relational Databases, In IEEE Transactions On Knowledge and Data Engineering, Vol. 9, No. 1, 145-167. 9. Blaze, M. Feigenbaum, J., Ioannidis, J. and Keromytis, A. 1999. The role of trust management in distributed system security. Security Internet Programming. J. Vitec and C. Jensen, eds. Springer, Berlin, 185-210. 10. David F. F., Dennis M. G. and Nickilyn L. 1993. An examination of federal and commercial access control policy needs, In NIST NCSC National Computer Security Conference, Baltimore, MD, 107-116. 11. Feinstein, H. L. 1995. Final report: NIST small business innovative research (SBIR) grant: role based access control: phase 1. Technical report. SETA Corporation. 12. Ferraiolo, D. F. and Kuhn, D. R. 1992. Role based access control. The proceedings of the 15th National Computer Security Conference, 554-563. 13. Lampson, B. W., Abadi, M., Burrows, M. L., and Wobber, E. 1992. Authentication in distributed systems: theory and practice. ACM Transactions on Computer Systems 10 (4), 265-310. 14. Li, N. and Grosof, B. N. 2000. A practically implementation and tractable delegation logic. IEEE Symposium on Security and Privacy. May, 27-42. 15. Sandhu, R. 1997. Rational for the RBAC96 family of access control models. In Proceedings of 1st ACM Workshop on Role-based Access Control, 64-72. 16. Sandhu R. 1998. Role activation hierarchies, Third ACM Workshop on RoleBased Access Control, Fairfax, Virginia, United States, ACM Press, 33-40. 17. Sandhu R. 1997. Role-Based Access Control, Advances in Computers, Academic Press, Vol. 46. 18. Wang, H., Cao, J., and Zhang, Y. 2005. A flexible payment scheme and its role based access control, IEEE Transactions on Knowledge and Data Engineering(IEEE05), Vol. 17, No. 3, 425-436. 19. Wang, H., Cao, J., Zhang, Y., and Varadharajan, V. 2003. Achieving Secure and Flexible M-Services Through Tickets, In B.Benatallah and Z. Maamar, Editor, IEEE Transactions Special issue on M-Services. IEEE Transactions on Systems, Man, and Cybernetics. Part A(IEEE03), Vol. 33, Issue: 6, 697- 708. 20. Wang, H., Zhang, Y., Cao, J., and Kambayahsi, J. 2004. A global ticket-based access scheme for mobile users, special issue on Object-Oriented Client/Server Internet Environments. Information Systems Frontiers, Vol. 6, No. 1, pages: 35-46. Kluwer Academic Publisher. 21. Wang, H., Cao, J., and Zhang, Y. 2002. Formal Authorization Allocation Approaches for Role-Based Access Control Based on Relational Algebra Operations, The 3nd International Conference on Web Information Systems Engineering (WISE’2002), Singapore, pages: 301-310. 22. Wang, H., Sun, L., Zhang, Y., and Cao, J. 2005. Authorization Algorithms for the Mobility of User-Role Relationship, Proceedings of the 28th Australasian Computer Science Conference (ACSC05), Australian Computer Society, 167-176.


23. Wang, H., Cao, J., Zhang, Y. 2003. Formal authorization approaches for permission-role assignment using relational algebra operations, Proceedings of the 14th Australasian Database Conference(ADC03), Adelaide, Australia, Vol. 25, No.1, 125-134. 24. Wang, H., Cao, J., Zhang, Y. 2001. A Consumer Anonymity Scalable Payment Scheme with Role Based Access Control, Proceedings of the 2nd International Conference on Web Information Systems Engineering (WISE01), Kyoto, Japan, 73-72. 25. Yao, W., Moody, K., and Bacon, J. 2001. A model of OASIS role-based access control and its support for active security. In Proceedings of ACM Symposium on Access Control Models and Technologies (SACMAT), Chantilly, VA, 171-181. 26. Zhang, L., Ahn, G., and Chu, B. 2001.A Rule-based framework for role-based delegation. In Proceedings of ACM Symposium on Access Control Models and Technologies (SACMAT 2001),Chantilly, VA, May 3-4, 153-162. 27. Zhang, L., Ahn, G., and Chu, B. 2002. A role-based delegation framework for healthcare information systems. In Proceedings of ACM Symposium on Access Control Models and Technologies (SACMAT 2002). Monterey, CA, June 3-4, 125-134.

Approximate Top-k Structural Similarity Search over XML Documents

Tao Xie, Chaofeng Sha, Xiaoling Wang, and Aoying Zhou

Department of Computer Science and Engineering, Fudan University, Shanghai 200433, China
{txie, cfsha, wxling, ayzhou}@fudan.edu.cn

Abstract. With the development of XML applications, such as digital libraries, XML subscribe/publish systems, and other XML repositories, top-k structural similarity search over XML documents is attracting more and more attention. In previous work, the similarity of two XML documents is measured using the edit distance defined between XML trees. Since the computation of edit distance is time-consuming, some recent work calculates the edit distance over structural summaries to improve performance. However, most existing algorithms for calculating the edit distance between trees ignore the fact that nodes in a tree may be of different significance, and the same edit operation cost is assumed, inappropriately, for all nodes in an XML document tree. This paper addresses this problem by proposing a summary structure that makes the tree-based edit distance more rational; furthermore, a novel weighting scheme is proposed to indicate that some nodes are more important than others with respect to structural similarity. We introduce a new cost model for computing the structural distance that takes node weight information into account. Compared with former techniques, our approach can approximately answer top-k queries efficiently. We verify the approach through a series of experiments, and the results show that using weighted structural summaries for top-k queries is efficient and practical.

1 Introduction

Nowadays, the Extensible Markup Language (XML) is becoming pervasive in more and more applications, such as digital libraries, XML subscribe/publish systems, and other XML repositories. The structural similarity between individual XML documents can be used to determine the category of a new document with respect to a collection of pre-categorized documents. When XML documents are modelled as rooted ordered labeled trees, similar XML documents mostly have similar tree structures, so this problem can be transformed into computing the structural similarity among trees. Unfortunately, comparing XML trees is an expensive operation, so elegant and efficient algorithms are needed. There are some related works on similarity computation on trees [1, 3, 4, 5, 6, 7]. The common measure they use to describe the difference between trees


is edit distance. One tree can be changed into another through a sequence of edit operations (insert, delete or relabel). Each edit operation is assigned a non-negative cost, and the sum of the costs of all operations in a sequence is called the cost of the edit sequence. For a given transformation there may exist an infinite number of such sequences; therefore, the edit distance between two trees is defined as the cost of the edit sequence with the minimum cost. The algorithm in [1] is so far the best one for computing tree edit distance; we call it the ZS algorithm in the rest of this paper. Because this method must match nearly every pair of nodes in the two trees, the computation is time-consuming and not acceptable in real-world applications. Recently, some approximate algorithms for XML document similarity search have been proposed to make tree edit distance computation practical and efficient. These works mostly focus on modifying the allowed edit operations or setting restrictions on them. Different from those works, [2] proposes another approach: reducing nesting and repetition to obtain the summary structure of a tree before applying the edit distance algorithm. Generally, the summarized tree has far fewer nodes than the original tree, so this approach reduces the computation cost dramatically. On the other hand, most existing work [1, 2, 6] assigns the same cost to each unit edit operation on all nodes. However, it can easily be observed that different nodes in an XML tree may be of totally different significance. For instance, in Fig. 1, node SigmodRecord is intuitively more significant than the author nodes, because SigmodRecord represents a larger/higher category and more semantic information than author. As a result, taking these weight characteristics of the structure into account helps to reach a better understanding of structural similarity.


Fig. 1. An XML tree example

This paper is devoted to addressing the aforementioned problem to some extent. The contributions of the work reported here can be summarized as follows:
1. Different from existing works, we introduce a weight factor for structural similarity and propose two different weighting methods


for nodes according to the levels at which they appear. Experiments show that the weight information is helpful for similarity search.
2. Based on node weights, an algorithm for obtaining the summary structure from the original XML tree is presented. Our approach differs from [2] in that we accumulate the weight information when we drop repeated nodes, which makes our approach approximate the original tree edit computation well and precisely.
3. We present a new cost model for edit operations on weighted nodes and a new distance definition, WD, which is a weighted distance and is used as the measure for top-k similarity search.
We conduct a series of experiments on synthetic data as well as a real data set. The experimental results show that, for top-k structural similarity search, our approach achieves acceptable results with high performance. The rest of this paper is organized as follows. Section 2 reviews related work on structural similarity over XML documents. Section 3 introduces two weight functions on XML trees. Section 4 introduces our improved summary algorithm, in which weight information is accumulated when repeated nodes are removed; the summary distance between two trees and the edit distance algorithm based on the new cost model are also defined in Sect. 4. Section 5 presents the experiments and the analysis of the experimental results. Finally, Sect. 6 gives some concluding remarks.

2 Related Work

Structural similarity between two XML documents has been studied in the literature [2, 9, 12, 13]. Most of these studies are based on similarity search over trees after modelling XML documents as rooted ordered labeled trees. The edit distance [1] measure is an intuitive way to describe the distance between two trees. However, the computation of the edit distance between two trees is of quite high complexity. For instance, the time complexity of the ZS algorithm in [1] is O(|M||N| depth(M) depth(N)), and the time complexity of Chawathe's algorithm in [10] is O(|M||N|), where |M| and |N| are the numbers of nodes in the two trees and depth(M) and depth(N) are their heights. To improve the efficiency of these algorithms, [2, 8, 9] consider adding restrictions on the allowed edit operations, in contrast to [1], which allows insert and delete operations anywhere in the trees. [8] proposes Restricted Top Down Mapping (RTDM) for transforming one tree into another, and it only looks for identical subtrees at the same level. This modification results in better performance, although the worst-case time complexity is still O(|M||N|). [9] adds restrictions on both insert and delete operations, i.e., they are only allowed on leaf nodes, and introduces new InsertTree and DeleteTree operations. These two restrictions reduce unnecessary mapping operations between node pairs compared with [1, 10]; however, the overall complexity is still O(|M||N|). [2] adopts another approach, which is based on the structural summary obtained by reducing nesting and


repetition nodes. But it loses some structural description information, such as the repetition count. In addition to edit distance, representing XML with vectors or features [11, 12] is an alternative approach originating from the data mining and information retrieval fields. It runs fast but only approximates the real edit distances, so such an approach can only be used in filter-and-refine applications. In a word, previous research computes the structural similarity of XML documents either using tree edit distance or using a simplified representation, such as a structural summary or a feature vector. However, computing tree edit distance with the ZS approach is time-consuming, so it is not practical for similarity search over large XML data repositories. On the other hand, simplified approaches ignore some information about the XML structure which might be important for searching. The work reported here designs new techniques based on tree edit distance to calculate the structural similarity of XML documents.

3 Preliminary

In this section, we give some formal definitions concerning XML documents and the edit distance on XML trees. We then introduce two weight functions on the XML document tree: the first is linear in the level and the second is exponential in the level. These weight functions will be used for distance computation in Sect. 4.

3.1 Problem Statement

We represent an XML document as a rooted ordered labeled tree, as in Fig. 1, which shows an XML tree for the SIGMOD Record dataset [14]. Each node in the tree stands for an element in the XML document. With this tree model, the key problem of answering top-k queries in XML repositories can be transformed into similarity computation over trees. Edit distance [1] is one of the best-established measures of the distance between two trees in the literature, and our approach is based on edit distance as well. We first introduce the edit operations. Generally, there are three kinds of edit operations: insert, delete and relabel. (1) Insert means to insert a node; (2) delete means to delete a node while keeping its subtrees; (3) relabel means to change a node's label. Given two trees, we can transform one into the other by performing a sequence of such edit operations on the original tree. Such a sequence can be represented by a graphical mapping between the two trees. For example, the mapping in Fig. 2 shows the way to transform the left tree into the right one. Dotted lines map the nodes in the left tree to the nodes in the right one. Two nodes connected by such a dotted line are checked to see whether they have the same labels; if not, a relabel operation is performed. For those nodes in the left (right) tree which are not reached by the dotted lines, delete (insert) operations are needed. Since a node can be mapped to any node in the other tree, or to none, there is a large number of possible mappings between two trees.

Fig. 2. Edit operations on two trees

With a cost model assigning a cost to each edit operation, each mapping is associated with an overall cost of transforming one tree into the other. Thus, the edit distance problem is actually equivalent to finding a mapping between the trees with minimal cost. Such a computation is time-consuming, and previous works [1, 10] address this problem using dynamic programming algorithms. We will introduce a new approach for computing the edit distance over XML documents based on structural summaries in Sect. 4. Before introducing the new cost model and weight definition, we first introduce two weight functions.

3.2 Linear Weight Function

Consider a rooted labeled tree T. Let depth(T) (≥ 1) denote its depth and level(n) denote the level of node n (level(root) = 0). The linear weight function weight(n) can be defined for a node n as:

weight(n) = 1 − level(n)/depth(T)    (1)

This formula builds a relation between the weight of a node and its level. The weights of all nodes fall in the range (0, 1]: the weight of the root node is 1, and the weights of all other nodes are smaller than 1 but greater than 0. This weight is easy to calculate in linear time, by traversing the tree from the root and visiting all nodes. This weight function will be used for distance computation in Sect. 4. Figure 3 shows an example XML tree and the linear weights of the nodes at each level.
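As a small illustration, the following sketch computes the linear weights of equation (1) in a single traversal. The Node class here is our own minimal stand-in for a parsed XML element tree and is not part of the paper; depth(T) is taken to be the number of levels, so a tree with the levels of Fig. 3 has depth 3.

class Node:
    """A minimal stand-in for a node of a rooted ordered labeled tree."""
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []
        self.weight = 0.0

def tree_depth(node, level=0):
    """Number of levels below and including node (root at level 0)."""
    if not node.children:
        return level + 1
    return max(tree_depth(child, level + 1) for child in node.children)

def assign_linear_weights(root):
    """weight(n) = 1 - level(n) / depth(T), as in equation (1)."""
    d = tree_depth(root)
    stack = [(root, 0)]
    while stack:
        node, level = stack.pop()
        node.weight = 1.0 - level / d
        stack.extend((child, level + 1) for child in node.children)

# Example with the levels of Fig. 3 (article at level 0, author at level 2).
article = Node("article", [Node("title"), Node("authors", [Node("author")])])
assign_linear_weights(article)
print(article.weight)   # 1.0; title and authors get 2/3, author gets 1/3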

Exponential Weight Function

The second weight function is similar to [13]. Assume that the depth of a tree T is d, i.e., depth(T) = d. We assign a weight of γ^d to the root node, where γ is a parameter that can be adjusted to reflect the importance of level information in the tree. The children of the root node, that is, the nodes on level 1, are assigned a weight of γ^(d−1). Following this rule, the nodes at level i get a weight of γ^(d−i), and the resulting weights of all nodes fall in the interval [γ, γ^d]. We define the exponential weight function as follows:

weight(n) = γ^(depth(T) − level(n))    (2)

Fig. 3. Linear weight function

Fig. 4. Exponential weight function

It can be seen that this weight function treats nodes on different levels very differently. Given the tree in Fig. 3 and assuming γ = 2, we obtain the exponential weights of the nodes shown in Fig. 4. Both the linear and the exponential weight functions are easy to compute. The motivation for these weights is to reflect a node's significance or relevance from the semantic viewpoint: for example, edit operations at higher levels of the tree should have more influence than edit operations on leaf nodes. Previous work on tree edit distance ignores this difference and treats all edit operations with the same uniform cost. In Sect. 5, we show experimental results obtained using the weight functions defined here.
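A corresponding sketch for the exponential weight of equation (2), reusing the hypothetical Node class, tree_depth helper and article tree from the previous snippet:

def assign_exponential_weights(root, gamma=2.0):
    """weight(n) = gamma ** (depth(T) - level(n)), as in equation (2)."""
    d = tree_depth(root)
    stack = [(root, 0)]
    while stack:
        node, level = stack.pop()
        node.weight = gamma ** (d - level)
        stack.extend((child, level + 1) for child in node.children)

assign_exponential_weights(article, gamma=2.0)
print(article.weight)   # 2 ** 3 = 8.0 for the root when depth(T) = 3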

4 Structural Summary and Distance Computation

In this section, we present the algorithm for building the structural summary using the weights defined in the last section. We also introduce the new cost model for the XML summary distance.

4.1 Structural Summary

From the above discussion, it can be seen that computing the edit distance of two original trees is extremely expensive. Therefore, we simplify the trees before applying the distance algorithm. Note that XML documents have many identical sub-structures, that is, many nodes share the same path from the root. Ignoring position information (i.e., “/a[1]/b[1]” and “/a[1]/b[2]” are regarded as the same path), we can treat such nodes as belonging to an equivalence class and use one node to represent them. An example is given in Fig. 5: the original tree T1 and its summary structure after simplification are shown there. The change of the
weight information of the two trees is listed in the right-hand columns. Take node A as an example, where A is the fifth child of the root node R: there is already a node A with the same path R/A appearing before it, so we add its children to that node A before deleting it. In Fig. 5 we use the linear weight function. For the second node A, which is to be deleted, we add its weight 2/3 to the remaining node A, so the new weight of node A is 2/3 + 2/3 = 4/3. With such a simplification, the number of nodes is reduced from 16 to 11, and consequently the computation cost of the tree distance calculation is reduced.

Fig. 5. Tree T1

Fig. 6. Tree T2

We combine the two procedures, one for simplifying the original tree and the other for assigning weights, in our Algorithm 1. When we calculate the summary structure of a tree T rooted at node n, the weight information for n's children is initialized first; then n's child nodes with the same label are combined. If a child node has sibling nodes with the same label occurring after it, those repeated siblings are deleted and their weights are added to it. After this procedure, there are no repeated labels in the level directly below n. For each remaining child node of n, the procedure is invoked recursively.

4.2 Summary Distance Based on New Cost Model

In this section we introduce a new cost model based on node weights and the simplified XML tree. Based on the ZS tree distance, the cost model is redefined as follows:

1. insert: γ(∧ → ν) = weight(ν)
2. delete: γ(υ → ∧) = weight(υ)
3. relabel: γ(υ → ν) = |weight(υ) − weight(ν)| if υ.label = ν.label, and |weight(υ) + weight(ν)| if υ.label ≠ ν.label

where γ denotes the cost of an edit operation, υ → ∧ means deleting node υ from the source tree, and ∧ → ν means inserting node ν into the source tree. The explanations of the insert and delete costs are obvious.


Algorithm 1 getSummary
Input: Node n, the root of the input XML tree
Output: A summary structure of the input tree rooted at n
 1: assign weights to node n and its children;
 2: for all nodes p ∈ n.L do
 3:   for all nodes q ∈ n.L do
 4:     if ((p.label == q.label) && (q.seq > p.seq)) then
 5:       p.weight += q.weight;
 6:       add q's children to p;
 7:       delete q;
 8:     end if
 9:   end for
10: end for
11: for all nodes q ∈ n.L do
12:   getSummary(q);
13: end for
14: return;
Note: n.L is the list of child nodes of n; p.seq is the index of node p in its parent n's child list L.
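For concreteness, the following is one possible Python rendering of Algorithm 1, reusing the hypothetical Node class from the weight-function sketches above and assuming that node weights have already been assigned:

def get_summary(n):
    """Merge same-label children of n, accumulating their weights, then recurse."""
    i = 0
    while i < len(n.children):
        p = n.children[i]
        j = i + 1
        while j < len(n.children):
            q = n.children[j]
            if q.label == p.label:             # a later sibling with the same label
                p.weight += q.weight           # accumulate its weight into p
                p.children.extend(q.children)  # keep its children for the recursion
                del n.children[j]              # remove the duplicate sibling
            else:
                j += 1
        i += 1
    for child in n.children:
        get_summary(child)
    return n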

For the relabel operation, even though the labels of υ and ν may be the same, the numbers of original nodes they represent can differ, so the difference of their weights is used as the relabel cost. If their labels are different, we assign the sum of their weights to the operation. Based on the ZS edit distance algorithm, we redefine the edit distance as a new summary distance on weighted trees.

Definition 1. Given two trees T1 and T2, let S(T1) and S(T2) be their associated summary structures, let ED(T1, T2) denote the edit distance between two trees, and let weight(T) denote the sum of all node weights in tree T. The weighted summary distance WD of the two trees is defined as:

WD(T1, T2) = ED(S(T1), S(T2)) / (weight(T1) + weight(T2))    (3)
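To make the cost model concrete, here is a small sketch of the three operation costs and of WD computed from an already available edit distance value. The function names are our own, and the edit distance between the two summaries is assumed to be supplied by a full ZS-style routine, which is not reproduced here.

def insert_cost(v_weight):
    return v_weight

def delete_cost(u_weight):
    return u_weight

def relabel_cost(u_label, u_weight, v_label, v_weight):
    # Same label: the two summary nodes may still represent different numbers
    # of original nodes, so charge the weight difference; otherwise charge the sum.
    if u_label == v_label:
        return abs(u_weight - v_weight)
    return u_weight + v_weight

def weighted_summary_distance(ed_of_summaries, total_weight_t1, total_weight_t2):
    """Equation (3): WD(T1, T2) = ED(S(T1), S(T2)) / (weight(T1) + weight(T2))."""
    return ed_of_summaries / (total_weight_t1 + total_weight_t2)

# Worked example from the text: ED(S(T1), S(T2)) = 10/3,
# weight(T1) = 21/3 and weight(T2) = 17/3, giving 10/38.
print(weighted_summary_distance(10 / 3, 21 / 3, 17 / 3))   # about 0.263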

To illustrate the computation of the edit distance with our new cost model, Fig. 6 gives another tree, T2, whose summary structure can be obtained using the algorithm in Sect. 4.1; weight information is assigned using the linear weight function. We now calculate the cost of transforming the summary structure of tree T1 in Fig. 5 into the summary structure of tree T2 in Fig. 6. The operation sequence is: (relabel(B, B), 4/3 − 2/3), (relabel(C, D), 2/3 + 2/3), (delete(I), 1/3), (delete(Q), 1/3), (insert(M), 1/3) and (delete(M), 1/3), where (insert(A), γ) stands for inserting a node A with cost γ. The sum of the costs of the whole sequence is relabel(B, B) + relabel(C, D) + delete(I) + delete(Q) + insert(M) + delete(M) = 2/3 + 4/3 + 1/3 + 1/3 + 1/3 + 1/3 = 10/3. Note that with the traditional cost model, edit operations like relabel(B, B) would have cost 0. In our algorithm,
we treat such a node as representing a set of equivalent nodes, so a set of difference operations must still be charged even though the labels are the same. From weight(T1) = 21/3 and weight(T2) = 17/3, we get the final summary distance between T1 and T2 as (10/3)/(21/3 + 17/3) = 10/38. This value describes the difference between the two trees. In fact, since the sum of the two weights is an upper bound on the edit distance between their summary structures, the summary distance always lies in [0, 1].

4.3 Answering Top-k Queries

To answer top-k queries, the steps of our algorithm are as follows (a small sketch of the ranking step is given after this list):
1. Construct tree models for all XML documents.
2. Obtain the summary structure of each tree using our algorithm getSummary(); the node weights are initialized at the same time.
3. Use the WD distance formula to calculate the distances between the summary structure of the input tree and that of every other tree in the data set, and store these distances in an array.
4. Sort the distance array in ascending order and return the first k (possibly more than k) distances with the corresponding documents.
Because this procedure is quite simple, we omit further details due to lack of space.
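A minimal sketch of steps 3 and 4, assuming the summaries and total weights have already been computed (steps 1 and 2) and that a weighted tree edit distance routine is supplied by the caller; all parameter names here are our own illustration.

from typing import Any, Callable, List, Tuple

def top_k_similar(query_summary: Any,
                  query_weight: float,
                  corpus: List[Tuple[Any, float, Any]],
                  edit_distance: Callable[[Any, Any], float],
                  k: int = 10) -> List[Tuple[float, Any]]:
    """corpus entries are (summary, total_weight, document); edit_distance is
    a tree edit distance routine, e.g. a weighted ZS implementation."""
    scored = []
    for summary, weight, doc in corpus:
        ed = edit_distance(query_summary, summary)
        wd = ed / (query_weight + weight)        # equation (3)
        scored.append((wd, doc))
    scored.sort(key=lambda pair: pair[0])        # ascending: most similar first
    return scored[:k]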

5 Experiments

The ZS edit distance algorithm of [1] and our weighted distance algorithm are implemented and tested on synthetic and real data sets. All experiments are conducted on a platform with a Pentium IV 3.2 GHz CPU and 2 GB of RAM. The synthetic data is a set of XML documents generated from DTDs [14, 16] using IBM's XMLGenerator [15]. The real data is obtained from the XML repository [14]. In our experiments, the top-k answers are compared with those obtained using the real edit distance, and the default value of k is set to 10. To measure the quality of our results, a definition of precision is given: assuming that we get m answers with the ZS algorithm and n answers with the proposed algorithm, and that q of our n answers are among the m, we use q/n to describe the precision of our result. Because we only focus on how many answers in our k results are correct, we do not use the recall metric as in [2], which would be calculated as q/m.

5.1 Time Performance

The time performance of our algorithm is tested as the input tree size increases, on a synthetic dataset generated from SigmodRecord.dtd using XMLGenerator. This dataset contains 1000 documents whose average node number is 35 and whose maximum is 78. We vary the input tree's size and observe the time taken by the two algorithms. Using CPU time as the measure, we obtain the time
costs of the two algorithms shown in Fig. 7, where the time includes the cost of constructing trees for all XML documents as well as the calculation of the distance array. We take the time of the distance calculation as the cost of answering a top-k query. For each tree size, 10 queries are processed and the average calculation time is used for both algorithms. Figure 7 shows that, as the number of nodes in the input tree increases, the time cost of the ZS distance algorithm increases rapidly. In our approach, however, we calculate the distance between the summary structures of the two trees instead, and the summary size usually does not change. Let S(M) denote the number of nodes in the summary structure of tree M; our distance algorithm then has time complexity O(|S(M)||S(N)| depth(M) depth(N)). The only influence of the input tree's size on our algorithm is a tiny time difference in the stage of computing the summary structure of the input tree, which can be ignored compared with the expensive distance computation.

Fig. 7. Time performance with increasing tree size

Fig. 8. Time performance with increasing dataset size

Figure 8 shows how the algorithms perform as the size of the dataset increases. The test data comes from three categories, each of which contains 1000 documents. We take documents from these three collections to form data sets of sizes 300, 600, 1500 and 3000. For top-k queries, in both algorithms, the time increases linearly with the number of documents in the data set, but the time cost of our algorithm is far less than that of the ZS algorithm.

5.2 Precision Evaluation

Extensive experiments are conducted on real datasets to compare the precision of our approach with the algorithm in [1]. From the SIGMOD Record data we take 51 XML documents with root element issue, and from the Mondial data we take 231 XML documents rooted at Country. The average node number of the former dataset is 87 and that of the latter is 57. Ten queries are processed on each dataset. Then, using our definition of precision, we compare the two algorithms in terms of both time and precision. The results obtained with the linear weight function are shown in Table 1. It can be seen that our approach achieves more precise results, and its time costs
are far less than those of the ZS algorithm. For the SIGMOD Record dataset, the reduction in time reaches 98%. The reason is that these documents have a large number of repeated elements, so the summary structures contain far fewer nodes than the original trees. For the Mondial data, whose node numbers do not decrease as much after transformation as those of the SIGMOD Record dataset, the time reduction is not as high. However, the closer the transformed structure is to the original one, the more precise the results; hence the resulting precision on the Mondial data is higher than that on the SIGMOD Record data.

Table 1. Time and precision comparison with ZS algorithm

                    ZS algorithm        Our algorithm
                    Time(ms)    m       Time(ms)   n     q     precision
Sigmod (51 docs)    18490       108     344        103   85    0.825
Mondial (231 docs)  6592        192     707        112   107   0.955

In the above experiments we assume a unit cost in the ZS algorithm; that is, we compare our results, which contain level information, with ZS results that contain no level information. In the next experiment, the weight information is added to the ZS edit operations and the comparison is repeated. Table 2 shows the results of comparing our algorithm with the weighted ZS algorithm. The results show higher precision than in the comparison with the unweighted ZS algorithm, which confirms our observation that weight information is helpful for structural similarity computation. Furthermore, we test the two weight functions on these datasets and find that both are helpful, with the exponential one achieving better performance on the Sigmod and Mondial datasets.

Table 2. Time and precision comparison with weighted ZS algorithm

Linear Weight Function          m     n     q     precision
SigmodRecord (51 docs)          103   103   91    0.883
Mondial (231 docs)              132   112   109   0.973
Exponential Weight Function
SigmodRecord (51 docs)          103   103   93    0.912
Mondial (231 docs)              131   112   111   0.991

6 Conclusion

Structural similarity search in XML documents has been gaining more and more attention from the related communities. Previous work was mainly based on tree edit distance and ignored the level information of nodes. From a semantic point of view, we regard nodes at higher levels as more significant, and the
edit operations on them should cost more. In this paper, we propose two weight functions to attach weight information to nodes. Moreover, we propose a weighted distance between the summary structures of two trees, calculated with our new cost model. When obtaining the summary structure of a tree, our approach reduces the tree size while keeping our distance close to the original distance. Experimental results show that our approach works well for answering top-k queries. For future work, we plan to study structural similarity further and to apply it to XML clustering and XML retrieval.

Acknowledgement

This work is partially supported by NSFC under grants No. 60496325, 60228006 and 60403019.

References 1. Zhang, K., Shasha, D.: Simple fast algorithms for the editing distance between trees and related problems. SIAM J. Comput.18 (1989) 1245–1262 2. Dalamagas, T., Cheng, T., Winkel, K., Sellis, T.: Clustering XML Documents using Structural Summaries. In EDBT Workshops (2004) 547–556 3. Tai, K: The Tree-to-Tree Correction Problem. J. of the ACM. 26 No. 3. (1979) 422–433 4. Shasha, D., Wang, J., Zhang, K., Shih, F.: Exact and approximate algorithms for unordered tree matching. IEEE rans. Sys. Man. Cyber. 24 (1994) 668–678 5. Selkow, S.: The tree-to-tree editing problem. Information Processing Letters. 6 184–186 6. Zhang, K.: A constrained editing distance between unordered labeled trees. Algorithmica 15 (1996) 205–222 7. Zhang, K., Statman, R., Shasha, D.: On the editing distance between unordered labeled trees. Inf. Process. Lett. 42 (1992) 133–139 8. Castro, D., Golgher, P., Silva, A., Laender, A.: Automatic web news extraction using tree edit distance. In WWW (2004) 502–511 9. Nierman, A., Jagadish, H.: Evaluating structural similarity in XML documents. In WebDB (2002) 61–66 10. Chawathe, S: Comparing hierarchical data in extended memory. In VLDB (1999) 90–101 11. Kailing, K., Kriegel, H., Sch¨ onauer, S., Seidl, T.: Efficient Similarity Search for Hierarchical Data in Large Databases. In EDBT (2004) 676–693 12. Yang, R., Kalnis, P., Tung, K.: Similarity Evaluation on Tree-structured Data. In SIGMOD (2005) 754–765 13. Bertino, E., Guerrini, G., Mesiti, M.: Measuring the Structural Similarity among XML Documents and DTDs. (2001) http://www.disi.unige.it/person/MesitiM 14. http://www.cs.washington.edu/research/xmldatasets 15. http://www.alphaworks.ibm.com/tech/xmlgenerator 16. http://www.xmlfiles.com

Towards Enhancing Trust on Chinese E-Commerce

Zhen Wang1, Zhongwei Zhang1, and Yanchun Zhang2

1 Department of Mathematics and Computing, University of Southern Queensland, Toowoomba, QLD 4350, Australia
2 School of Computer Science and Mathematics, Victoria University, Melbourne, VIC 8001, Australia

Abstract. E-Commerce has become much more popular in western countries, where the development of E-Commerce systems is relatively mature. While the technology and the social credit environment are well developed there, E-Commerce is relatively new and receives less acceptance within commercial industries in China. Building trust has been identified as one of the major concerns in E-Commerce. In this paper, we develop a computational model which may be used to improve the trust-building relationship among consumers, retailers and authorities. This model considers a number of factors, including direct experiences, customer recommendations, and authority certification, so that the parties involved in E-Commerce activities can confidently establish and reliably enhance trustworthiness. We also conduct a case study on how to improve the consumer-retailer trust relationship in an E-Commerce application by use of the trust model, on one of the functional electronic storefronts in China that takes trustworthiness into consideration. In addition, the findings of this research will also be helpful to national policy makers for the legislation of Chinese E-Commerce activities.

Keywords: Trust, E-Commerce, Trust-building, Computational model, Chinese E-Commerce, E-Commerce legislation.

1 Introduction

“With the rapid development of Internet technology, the landscape of exchanging information and doing business has been completely changed” [15]. As a new way of doing business, E-Commerce is increasingly affecting, changing and even replacing the traditional commerce approach. People are accepting and using E-Commerce more than ever before. Without any doubt, many network applications, from the initial door-to-door model to the B2B, B2C, and C2C models, have a profound influence on the global economy. However, most of the development of E-Commerce has been achieved within a handful of countries, particularly the US, Japan and some European countries [5]. Since the advent of the Internet in China in the late 1990s, various applications have appeared, stirring thousands of ambitious young Chinese people to establish their own businesses on-line with a dream of getting wealthy overnight. Over the past few years, almost 70% of the growth in Internet users worldwide occurred in China alone [5]. However, in terms of E-Commerce development, there is still a gap between China and the developed countries [4]. With
the increased popularity of the Internet and the continuous improvement of Internet technologies, the development of E-Commerce has just made an impressive start. What hinders the development of Chinese E-Commerce can be attributed to various problems and barriers. Some of these problems, such as payment methods, distribution problems, and security issues, have been partially alleviated. Unfortunately, consumers' lack of trust towards E-Retailers has been identified as the biggest barrier that needs to be overcome in the long term. In this paper, we identify the challenges in developing Chinese E-Commerce and propose a model that cooperates with current E-Commerce systems as a means of enhancing trustworthiness. The paper is organized into seven sections. Section 2 introduces the current standing of E-Commerce, and Section 3 briefly reviews the challenges in developing E-Commerce and examines the "trust problem", which is identified as the major obstacle that hinders the spread of E-Commerce in China. In Section 4, we propose a computational model that can be used to address the trust problem. Section 5 discusses how to apply the model to improve trustworthiness, and Section 6 gives a case study. Section 7 concludes the paper by discussing further directions for improving trust between E-Commerce users.

2 Current Chinese E-Commerce Standing

2.1 Issues in E-Commerce

Trust has always been the main concern among most Chinese E-Commerce companies [6]. On one hand, people are impressed by doing business on-line; on the other hand, they are still worried about using E-Commerce widely. Consequently, a gap has appeared between on-line retailers' interest in attracting shoppers to their electronic storefronts and many consumers' lack of trust in those activities [2]. Nevertheless, research [18,19] has shown that the trust problem is not only a technical problem but more of a social problem, caused by various reasons. First and foremost, for historical reasons people may lack confidence in others, and therefore it is quite difficult for them to put trust in someone else at the beginning. Secondly, the commercial law and regulations for E-Commerce cannot fully protect the interests of consumers and merchants, so E-Commerce users lack the confidence to take the risk of dealing with an unfamiliar party. Thirdly, the social credit system and payment system in China are still not strong enough for carrying out complex E-Commerce transactions, so on-line business fraud may happen and harm consumers. Last but not least, Internet security and cryptography techniques, which can increase consumers' confidence in on-line activity, are still under development. Due to all these issues, E-Commerce specialists commonly regard the bottleneck of Chinese E-Commerce as a trust problem more than anything else.

2.2 Opportunities of Chinese E-Commerce

In 2005, Chinese E-Commerce experienced an upsurge in B2C markets. With the recognition of the trust problem, many retailers are experimenting with various
trust-building strategies to establish trustworthiness towards E-Retailers. Establishing trust between E-Commerce users is a long-term process. Apart from more education and training in the long term, we can resort to technology to enhance consumer trust in an unfamiliar E-Retailer. One effective method is to participate in third-party assurance programs, which assist consumers in assessing the level of trust they should place in an E-Commerce transaction. Merchants who agree to meet a third-party assurer's standards, by either using the assurer-certified technology or agreeing to be bound in some way by the assurer's procedures or oversight, are registered by the assurer and permitted to display an identifying logo or assurance seal on their website. Consumers can then see specific validation of the merchant's good standing with the assurer, or additional disclosures related to the merchant's business practices or history. Some theories suggest that trust in an E-Retailer can be defined as consumers' willingness to accept vulnerability in an on-line transaction based on their positive expectations regarding the E-Retailer's future behavior. By evaluating a party's past behavior and tracking its activities, the party's future behavior can be predicted and a trust level can be derived, so that consumers can make purchasing decisions according to the trust level of the E-Retailer. In this paper, a third-party assurance program that can cooperate with the current E-Commerce system is proposed. By tracking the E-Retailer's activity records and collecting consumer experiences and recommendations, the computational model is capable of estimating the level of trust that a consumer can place in an unfamiliar E-Retailer. More detail about the model is given in Section 4.

3 Challenges in Developing Chinese E-Commerce

The Chinese E-Market is undeniably of great potential. E-Markets are beneficial not only for the national economy but also for the global economy, especially since China has entered the World Trade Organization (WTO). However, in terms of developing E-Commerce, there is a gap between China and the developed countries. The primary cause is that the advent of the information age has had a great impact on the economy of industrialized countries, while the social environment in China has not yet become used to its coming. This can be noticed in various aspects, including the information infrastructure, barriers in the social environment, the degree of technology innovation, the level of awareness, and trustworthiness. Technically, the improvement is obvious, but it is difficult to catch up in the short term with respect to some social problems.

3.1 Informationization Gap

The current social environment in China is yet to match the rapid informationization. The understanding of the relation and interaction between industrialization and informationization is inadequate. In addition, the importance and urgency of informationization in social and economic development have not been fully appreciated. Apart from that, in the long term, the awareness and acceptance of informationization are still developing, both theoretically and practically. Furthermore, a global information infrastructure is being built up, and China is no exception. However, the popularity of information and electronic facilities in China
is still behind that of the USA, even though 70% of the growth in Internet users occurred in China [3]. That is mainly due to the huge population of China and the imbalanced development of different regions. In terms of the innovation of information technology, China is catching up with the developed countries progressively, although the exploration and utilization of information resources and services are not yet adequate.

3.2 Social Barriers in E-Commerce

In China, E-Commerce activities are impeded by some social barriers as well. For instance, the law and regulations are not flawless [16]. E-Commerce gives a full impetus to social and economic development; in the meantime, it raises some new problems. So we need to address the inadequacy of traditional policy and laws in the new circumstances, and recommend a new and effective law and regulation system. E-Commerce may cause multi-faceted problems, such as the legitimacy and authentication of E-currency, E-contracts, and E-bills. Taxation is another problem: how to collect business tax and customs duties; whether new taxes and collection methods are needed for some intangible products (e.g. software, electronic audio and video); how to secure the market and prevent monopoly, especially telecommunication monopoly; how to protect privacy and Intellectual Property (IP); and how to manage and control the export and import of intangible products [17]. There is also a lack of E-Commerce standards. Standardization and legislation are a difficult but important task in developing E-Commerce. For instance, before the E-Signature Standard was introduced in China, there was no particular standard to normalize behavior on the Internet; therefore, it was not possible to set up a standard business environment compatible with international standards, which has a huge impact on the integration of an individual country into the global economy [20]. Another problem associated with E-Commerce is the social credit system and payment system. The present E-Commerce can only be regarded as quasi-E-Commerce, since the credit system lacks trust and effective monitoring and payment mechanisms. The Internet commodity transaction centers fulfil only parts of the E-Commerce process, by no means the full process from pre-purchase to post-purchase. The current distribution system for Chinese E-Commerce is also not yet satisfactory [21]. For instance, the delivery of products has yet to be separated from manufacturing and commercial enterprises. The incomplete distribution system still responds passively to production and sales departments, where different processing links such as warehousing, transporting and loading function as independent entities, so that the distribution problem has not been solved well at present. In summary, these informationization gaps and social barriers indeed hinder the development of Chinese E-Commerce, but many people consider the lack of consumer trust in E-Commerce merchants, technology, and the social, financial and legal infrastructures of the E-Commerce environment as the most concerning issue affecting development in an individual country, since most traditional cues for assessing trust in the physical world are not available on-line.


3.3 Trust Issue

Trust is a catalyst for human cooperation [7], and it has received considerable attention in the business and social science literature. Lack of trust can result in time and resources being wasted on protecting ourselves against possible harm, and thereby clogs up the economy. A consumer's trust in an E-Retailer can be defined as the consumer's willingness to accept vulnerability in an on-line transaction based on positive expectations regarding the E-Retailer's future behavior. Factors that affect consumers' trust in E-Commerce include security risks, privacy risks, and a lack of reliability in E-Commerce processes in general. As pointed out by Nielsen [8], real trust builds through a company's actual behavior towards its customers over time, and it is seen to be difficult to build and easy to lose [9]. Theoretically, there are three trust-building processes, summarized in [2]:
1. Knowledge-based trust is a form of trust that develops over time as one party learns about the other's intentions, capabilities, and activity experiences. Examples are recommendations, evaluations, and reputations.
2. Institution-based trust relies on the creation of a "trust infrastructure" of socially recognized third-party intermediaries that certify the trustworthiness of parties in a commercial exchange, or actually enforce trustworthy behaviour on the part of one or both partners. Examples are certificates and membership of an association.
3. Trust transfer happens when one party ascribes trustworthiness to an unfamiliar exchange partner based on that partner's association with a trusted party.
In the next section, we describe a computational model which can be used to facilitate the trust-building process in E-Commerce, particularly for the Chinese E-Commerce market.

4 Computational Model of Trust

4.1 Relevant Works

According to the context and purpose for which trust is established, the technologies and strategies used to build trust vary. Numerous models of the way in which trust is established and maintained in an E-Commerce setting have been proposed and experimented with. The Cheskin Research and Studio Archetype/Sapient eCommerce Trust Study describes trust as a dynamic process that deepens or retreats as a function of experience [10]. Nielsen's model points out that real trust builds through a company's actual behavior towards its customers over time. Egger and de Groot's model of trust for E-Commerce (MoTEC) defines four components of trust consideration [11]:
– Factors affecting trust before the site is accessed, including brand reputation, previous off-line experiences with the merchant, and differences between individuals in their general propensity to trust.
– Web interface properties such as the graphical user interface and usability.
– Information content, including the information provided by the merchant about services, privacy policy, and practices.


– Relationship management, such as customer services and post-purchase communication.
Since trust must be based on experience over time, establishing an initial trust level can be a major challenge with potential customers. It is obvious that without initial trust a merchant cannot build a good transaction history, and without a good transaction history consumers may not build trust in the merchant. Therefore, a trust model that can build the initial trust of a merchant by collecting off-line experiences, reputation, system guarantees and testimonials, and that can track transaction histories, post customers' evaluations and other information, will be valuable for establishing trust between consumers and merchants.

4.2 ERC2G Model

The computational model is based on the idea of reputation accumulation from Role Playing Games (RPG) and the idea of reputation systems, combined with the concept of mathematical trust models [12,13,14]. Within an RPG, the idea is for a player to be given an initial value of reputation; during play, the level of reputation changes depending on the player's activities. For example, if the player steals something from others, the reputation level drops; if the player helps somebody, the reputation grows reasonably. In addition, the reputation level affects the player's future activities, such as the permission to perform some tasks, relationships with others, and the final result of the game. In E-Commerce, establishing trust is the way to build a good reputation and relationships with both customers and business partners: positive activities increase that level, while negative ones destroy reputation immediately. In the case of a reputation system, distributed and aggregated feedback about a participant's behavior can be gathered, and these past experiences with a remote transaction partner are projected into the future, giving a measure of their trustworthiness. For instance, the first Web sites to introduce reputation schemes were on-line auction sites such as eBay. The collected information, records and specifications are translated into numerical values with different weights, and influence the value of the "trust level" directly.

Figure 1 represents the ERC2G computational trust model, which utilizes multiple information sources and assigns different weights to them. As shown in the model, the top trust level is the output of the model, influenced by five categories of information sources: (1) direct experience; (2) experiences and recommendations of other parties; (3) certification and evaluation schemes; (4) digital credentials; and (5) system guarantees. When a customer intends to buy a product or service provided by an E-Commerce system, the model computes a trust value by combining the values from the different information sources with their weights (a small sketch of this weighted combination is given at the end of this subsection). The information sources can be gathered from the service itself, such as successful transactions and customer evaluations, from off-line reputation and certificates issued by other parties or associations, or even from system guarantees such as the adoption of new security techniques.

Fig. 1. Computational Trust Model

Since the computational model depends on several information sources, the following need to be considered:
1. Accuracy of information: Much of the information needed to compute the trust level can be gathered from various sources, as mentioned before. This information could be accurate or could be


Fig. 1. Computational Trust Model

designed to mislead the user into falsely trusting the merchant. Therefore, the user needs assurance of the accuracy of the supplied information, so that trust is placed only in parties that can be trusted.

2. Complexity of information. As the number of parties involved in a transaction grows, the number of indirect interactions and the depth of the interaction levels may increase. In addition, as the complexity of a composite service increases, the path that a trust information request traverses down, and that the trust information traverses back, gets deeper.

3. Anonymity of information. In most Web services the identities of constituent services may not be known to the user, which can become a threat since some of the constituents might behave maliciously towards a specific user. In that case the user may want to request more information about the constituents, and the trust model will be influenced by the willingness of the service to supply that information. Similarly, when a consumer's information needs to be passed to a constituent service, the user has to be notified of that interaction.

4. Dynamic behavior. This can be considered the major feature of the computational trust model, because it determines the trust level more dynamically than other trust-establishing methods. In order to keep the trust level up to date, any change of value from an information source must influence the trust value immediately and accurately. Web services adopting the computational trust model should be able to change dynamically in many ways that affect the trust values of different users, without changing any other interaction details.


5. Context. The information required to build trust depends on the context. For example, when the context of trust is the reliability of a service, this may require information about the reliability of all related services. When the context of trust is the security of credit-card transactions performed by a service, it requires information pertaining to the set of services that handle credit-card transactions. This means that when a trust-related request is made, the service must be able to understand the context, or at least the information required and the users to whom that information pertains.

We have briefly introduced the concept of the computational trust model; in Section 5 we discuss in detail how to use the model in an E-Commerce system.
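The ERC2G model described above combines five weighted information sources, but the paper does not give a concrete aggregation formula. The sketch below is therefore only a plausible reading of the model: a weighted sum of normalized scores for direct experience, third-party recommendations, certification, digital credentials and system guarantees. The weights, score names and sample values are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of an ERC2G-style trust computation (illustrative only).
# Assumes each information source has already been normalized to a score in [0, 1].

SOURCE_WEIGHTS = {                 # hypothetical weights, not taken from the paper
    "direct_experience":   0.35,
    "recommendations":     0.25,
    "certification":       0.15,
    "digital_credentials": 0.10,
    "system_guarantees":   0.15,
}

def trust_level(scores: dict) -> float:
    """Combine per-source scores into a single trust level in [0, 1]."""
    total = 0.0
    for source, weight in SOURCE_WEIGHTS.items():
        total += weight * scores.get(source, 0.0)   # missing evidence counts as 0
    return total

if __name__ == "__main__":
    e_retailer = {
        "direct_experience": 0.8,      # e.g. ratio of successful transactions
        "recommendations": 0.6,        # e.g. averaged partner testimonials
        "certification": 1.0,          # e.g. holds a valid third-party certificate
        "digital_credentials": 0.5,
        "system_guarantees": 0.7,      # e.g. adoption of new security techniques
    }
    print(f"trust level = {trust_level(e_retailer):.2f}")
```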

5 The Computational Model in an E-Commerce Scenario

The computational trust model forms the basis of the server-side programs of a third party that tracks activities, maintains trust levels, and assures the trustworthiness of registered E-Retailers to their consumers. Figure 2 illustrates the complete scenario of cooperation with an E-Commerce system as a means of enhancing trust.

Fig. 2. Scenario of Trust Model in E-Commerce System

A merchant with a functional E-Commerce system usually already has some system guarantees, positive off-line experiences, and reputation before participating in third-party assurance. The third-party trust system therefore estimates an initial trust value by evaluating existing evidence of support, testimonials from customers, and certificates held by the E-Retailer. If the initial trust value is acceptable to both the E-Retailer and the third-party authority, their assurance agreement becomes effective, and the trust level is activated and made available for viewing by customers of that E-Retailer.


During assurance, any new valid evidence can increase the trust level of the E-Retailer, such as enhancements of system security, an association's evaluation, and so on. As shown in Figure 2, the interaction between the consumer, the E-Commerce system and the trust system consists of 14 steps in general. Initially, the consumer searches for product information by sending a request; the server-side database is then searched and the specific product information is returned if found. At this stage, the consumer is permitted to request the trust level and past history of the E-Retailer. The request is sent to the trust system located at the third-party server side; once authorization and authentication succeed, the corresponding trust level of the E-Retailer and a related history summary are shown to the consumer. Based on this trust information, the consumer can make a purchase decision. The process of making an order, issuing an invoice and processing payment is unchanged after joining the assurance program, but once the transaction completes, the transaction result is sent to the trust system immediately. In addition, if customers are satisfied with the service provided by the E-Retailer, they are welcome to submit a customer evaluation, and these evaluations also influence the trust level of the E-Retailer. Furthermore, by accessing the third party's Web site, E-Retailers can write testimonials or recommendations for their business partners. In future work, more information sources that may affect an E-Retailer's trustworthiness will be incorporated into the trust system, with the aim of evaluating the trust level more comprehensively. The next section presents a case study of implementing an E-Commerce system with trust assurance for EI Computer.
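The 14-step interaction is described only in prose and in Figure 2. The sketch below compresses it into a minimal message sequence between consumer, E-Commerce system and third-party trust system; the class, method and threshold names are invented for illustration and do not correspond to components named in the paper beyond the general roles.

```python
# Illustrative walkthrough of the consumer / E-Commerce / trust-system interaction.
# All names and numbers are hypothetical; this is not the paper's implementation.

class TrustSystem:
    def __init__(self):
        self.history = {}      # retailer -> list of reported events
        self.level = {}        # retailer -> current trust level in [0, 1]

    def register(self, retailer: str, initial_level: float) -> None:
        self.level[retailer] = initial_level
        self.history[retailer] = []

    def get_trust(self, retailer: str):
        return self.level[retailer], list(self.history[retailer])

    def report(self, retailer: str, event: str, delta: float) -> None:
        self.history[retailer].append(event)
        self.level[retailer] = max(0.0, min(1.0, self.level[retailer] + delta))

def purchase_flow(trust: TrustSystem, retailer: str) -> None:
    level, history = trust.get_trust(retailer)               # consumer checks trust first
    print(f"{retailer}: trust={level:.2f}, {len(history)} past events")
    if level >= 0.5:                                          # consumer's own threshold (assumed)
        trust.report(retailer, "successful transaction", +0.02)
        trust.report(retailer, "positive customer evaluation", +0.01)

if __name__ == "__main__":
    trust = TrustSystem()
    trust.register("EI Computer", initial_level=0.6)          # initial value agreed with the assurer
    purchase_flow(trust, "EI Computer")
    print(trust.get_trust("EI Computer"))
```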

6 Trust Building in the E-Commerce System EI Computer

We describe a case study of assisting trust building in an E-Commerce system for EI Computer (an electronic storefront selling electronic hardware), whose implementation relies on XML, JSP, JDBC and servlet techniques.

6.1 The Structure of the System

The system is a distributed, three-tier, Web-based application. The client tier is the user's Web browser. The browser displays static XHTML documents and dynamically created XHTML (such as forms) that allow the user to interact with the server tier. The server tier consists of JSPs and servlets that act on behalf of the client; they perform tasks such as generating the list of products by connecting to the database and retrieving data, getting information from the client and storing it in the session or the database, adding items to the shopping cart, viewing the shopping cart, and processing the final order. The database tier uses the comdb database created with IBM Cloudscape 10 (the Apache Derby open-source database from IBM), and PreparedStatement (a Java library class for creating prepared-statement objects for database queries) is used for batch processing. Figure 3 illustrates the interactions between application components. In the diagram, names without extensions represent servlet aliases. For example, registration is the alias


Fig. 3. Structure of EI Computer web application

for the servlet RegistrationServlet. After the application is deployed, a user can visit the Web application by entering a valid URL, which requests the index page of the application (index.html). The user can either log in as a member or register as a new member; otherwise access to the main page is not allowed. When the user clicks the "login" button after entering a member ID and password, the XHTML form passes the information to the authorization entity, which validates the member's identity by comparing it against the login table of comdb. If the user is authorized, the product category list is shown; otherwise access fails and the user is asked to log in again or register. If the user chooses to register as a new member, the registration entity, containing an application form linked from the storefront, becomes accessible. The action of this form is to invoke the ProcessReg entity with the information the user entered as parameters. Successful registration issues a member ID to the applicant, who then returns to the storefront to log in. After authorization, the user can view all available products by following the link for a specific category. This invokes the Catalog entity, which interacts with the comdb database to create the table of products in the current category dynamically. From this document, the user can specify quantities of items and add them to the shopping cart. Adding a specific product invokes the ShopCart entity, which adds the product information to the user session and returns a document containing the contents of the cart with a subtotal for each product; alternatively, the user can follow the link to the ViewCart entity, which shows the items in the cart. At this stage, the user can continue shopping (Category entity) or proceed to check out (Order entity). In the latter case, the user is presented with a form for delivery and credit-card information. The user then submits the form to invoke the ProcessOrder entity, which completes the transaction by sending a confirmation document to the user, and


the order information is added to the orders table of comdb. In addition, the user session is cleared automatically.

6.2 Trust Assurance Within EI Computer

To establish trust with consumers, EI Computer registered for the assurance services provided by a third party called eTrust (a third-party assurer), as shown in Figure 3. The trust system of eTrust is also implemented as a distributed, three-tier, Web-based application. The management of EI Computer may provide evidence to eTrust in either paper or electronic format to form an initial trust value. If the initial value is accepted by both the merchant and eTrust (which only assures E-Retailers that meet its requirements), an agreement becomes effective and a user ID is issued to EI Computer. During assurance, EI Computer can log in with this user ID and password; the login information is passed to the eTrust authorization entity, which validates the user's identity against the account table of trustdb. After logging in successfully, the user can check the current trust level and the history records maintained by the eTrust system, provide new evidence, and recommend business partners. All these processes are carried out through the database, JSPs and servlets. When a consumer requests the trust level of EI Computer, the requestTrust servlet is called with the authorized information of the E-Retailer passed as parameters. Once authorization succeeds, the needed trust information is retrieved from trustdb, and a trust level is computed and shown to the consumer by the showTrust entity.

7 Conclusion and Future Work

In this paper, we surveyed the E-Commerce activities currently ongoing in China and closely analyzed the gap between informationization and the barriers facing Chinese E-Commerce systems. Among the topics involved in building a positive, effective and satisfying E-Commerce system, "trust" is a major issue that has not received adequate attention in past decades. We have proposed a computational trust model (ERC2G) that can foster a trust relationship between consumers and merchants, and illustrated how to apply the model in a Chinese E-Commerce system. Through the case study of EI Computer, we briefly discussed the implementation of E-Commerce systems with trust enhancement in mind. Establishing trust, however, is a complex process involving many information sources, so continuously improving and enhancing trustworthiness will be the main objective of our future work.

References
1. Xiaoqi, L., China's E-Commerce Report, Economic Science Press, China, translated from Chinese, 2003.
2. Kimery, K. M., McCord, M., Third-party assurance: Mapping the road to trust in e-retailing, Journal of Information Technology Theory and Application, 2002.


3. Wu, Y., Initiate the new age of Chinese E-Commerce, High-tech Bureau of High Technology, Ministry of Science and Technology, 2002.
4. King, R. A., Chinese E-Commerce coming to sense, People's Daily, http://english.people.com.cn/english/200103/22/eng20010322 65691.html, 2001.
5. UNCTAD Secretariat, E-Commerce and Development Report 2004, United Nations, New York and Geneva, 2004.
6. Dockhorn, D., China to Transform eCommerce Global Economy in the New Millenium, Mad Penguin, 2005.
7. Patton, M. A., Josang, A., Technologies for Trust in Electronic Commerce, 2004.
8. Nielsen, J., Trust or Bust: Communicating Trustworthiness in Web Design, Jakob Nielsen's Alertbox, http://www.useit.com/alertbox/990307.html, 1999.
9. Nielsen, J., Molich, R., Snyder, C., Farrell, S., E-Commerce User Experience, Technical report, Nielsen Norman Group, 2000.
10. Cheskin Research, Trust in the Wired Americas, Cheskin Research, http://www.sapient.com/cheskin/, 2000.
11. Egger, F., de Groot, B., Developing a Model of Trust for Electronic Commerce, Proceedings of the 9th International World-Wide Web Conference, 2000.
12. Zimmermann, P., The Official PGP User's Guide, MIT Press, 1995.
13. Maurer, U., Modelling a Public-Key Infrastructure, Proceedings of the Fourth European Symposium on Research in Computer Security, 1999.
14. Josang, A., An Algebra for Assessing Trust in Certification Chains, The Internet Society, 1999.
15. Li, H. X., A study on Chinese E-Commerce current situation and developing strategies, Huazhong Normal University, 2001.
16. Li, Y. F., Current status, problems and recommendations of Chinese E-Commerce legislation, CHINA.COM.CN, 2002.
17. Li, L., Global communities and E-Commerce legislation activities, International Trade Organization, 2000.
18. Konrad, K., Barthel, J., Fuchs, G., Trust and Electronic Commerce - More than a Technical Problem, Proceedings of the 18th IEEE Symposium on Reliable Distributed Systems, October 19-22, Lausanne, Switzerland, 1999.
19. Braczyk, H.-J., Barthel, J., Fuchs, G., Konrad, K., Trust and Socio-Technical Systems. In: G. Müller, K. Rannenberg (eds.): Multilateral Security in Communications, Vol. 3: Technology, Infrastructure, Economy, Addison-Wesley-Longman, 1999.
20. Shen, J. M., Digital Signature and Chinese E-Commerce Security, CCID Net, http://tech.ccidnet.com/pub/article/c782 a189499 p1.html, 2004.
21. Li, L., International organizations and relevant legislation activities, E-Commerce World, Vol. 12, pp. 34-36, 2000.

Flexible Deployment Models for Location-Aware Key Management in Wireless Sensor Networks Bo Yu1 , Xiaomei Cao2 , Peng Han1 , Dilin Mao1 , and Chuanshan Gao1 1

Department of Computer Science and Engineering, Fudan University, 200433, China {boyu, 031021071, dlmao, cgao}@fudan.edu.cn 2 National Laboratory of Novel Software Technology, Nanjing University, 210093, P.R. China [email protected]

Abstract. Location-aware key management schemes, which take deployment information as a priori knowledge, can effectively strengthen the resilience of Wireless Sensor Networks against attacks. However, they also raise several new challenges such as deployment flexibility, inter-group connectivity, and security resilience. Aiming at these challenges, we propose three flexible deployment models and a corresponding key management scheme in this paper. We provide analytical evaluations of the proposed models and key management scheme. The results indicate that our approaches achieve good performance in inter-group connectivity and deployment flexibility, as well as resilience in security.

1 Introduction

Recent advances in nano-technology make it possible to develop low-power, battery-operated sensor nodes that are tiny and cheap and can be deployed over a wide area. Security mechanisms that provide confidentiality and authentication are critical for the operation of many sensor applications. However, individual sensor nodes have limited power, computation, memory, and communication capabilities, which makes it infeasible to use traditional public-key cryptosystems. Key management has therefore become a challenging problem in wireless sensor networks. A number of key management schemes for wireless sensor networks have been proposed recently [2,3,4,5,6]. Most of the existing schemes are based on random key pre-distribution, a probabilistic approach for setting up session keys between neighboring nodes. Random key pre-distribution schemes have a common weakness: they are vulnerable to selective node attacks and node replication attacks [10]. These attacks can be effectively prevented by location-aware key management [8,9,10]. However, there are still several challenges in location-aware key management, such as connectivity between groups, deployment flexibility, and security resilience. Existing schemes require the deployment information a priori before deployment, which makes them rather inflexible in large-scale applications. Deployment fields in these schemes usually


are partitioned into grids, so inter-grid connectivity has become a tough problem in these approaches. In this paper, we propose three flexible deployment models and a corresponding key management scheme. Our deployment models are the blend grid model, the border grid model and the scattering grid model, and we present the corresponding key management scheme for them. To the best of our knowledge, our scheme is the first attempt to discuss the correlation between deployment models and key management. The main contributions of this paper are as follows:
– Flexible deployment models and the corresponding key management scheme.
– Improved connectivity in group-based deployment and strengthened resilience against attacks.
– A thorough theoretical analysis of our scheme and the existing location-aware key management schemes.
The rest of the paper is organized as follows. Section 2 states several challenging problems in location-aware key management schemes. We present the three deployment models and the corresponding key management scheme in Section 3. Section 4 gives the analysis and a comparison between our approaches and the existing schemes. Section 5 describes related work on security in sensor networks. Finally, we conclude in Section 6.

2 Problem Statement

2.1 The Attack Model

An inside attacker might be a compromised sensor node, which might be reprogrammed to launch undetectable attacks, e.g. selective forwarding, sybil attacks and wormhole attacks. Protection against some of these attacks is the security goal of secure routing algorithms; Karlof and Wagner [1] discuss countermeasures against routing attacks. However, some of these attacks require both secure routing and key management to defend against. In particular, we define two kinds of attacks against key management as follows:
1. Selective node attack: by analyzing the deployment pattern of the network, the attacker selectively compromises a subset of nodes to maximize the compromised knowledge at the least cost.
2. Node replication attack: the attacker utilizes the compromised information to fabricate similar nodes and deploy them into any part of the network. Several papers [8,9,10] have pointed out the danger of node replication.
These two types of attacks can lead to several other critical attacks such as sybil attacks and wormhole attacks. Security schemes against these attacks are receiving more and more attention.

2.2 Challenges

The potential of node location information for defending against node attacks is attracting more attention, for example in Liu's [4], Du's [5] and Huang's [10] work. Location-based key management schemes effectively strengthen the network against selective node attacks and node replication attacks. However, these schemes all have the precondition that deployment information must be known before deployment, which is hard to fulfill in many cases. We therefore face a dilemma: deployment flexibility or network security. Furthermore, these schemes are usually based on group deployment, and they use quite different methods to connect nodes in adjacent groups, which leads to quite different performance in topology and in security. In this paper, we propose three deployment models and a corresponding key management scheme aimed at these three goals: deployment flexibility, inter-group connectivity, and security resilience.

3 Deployment Models for Location-Aware Key Management

In this section, we present three deployment models and their corresponding key management scheme, deferring the analysis to the next section.

3.1 Motivation

In the existing location-based key management schemes, any sensor node in one grid can set up a pairwise key with another node within the same grid as well as with a node in a neighboring grid. Nodes in the same grid do not differ in the way keys are pre-distributed and installed on them. However, analysis shows that most of the nodes within one grid only communicate with neighbors in the same grid, especially when the grid width is much greater than the signal range of a node. We plot the analysis result in Figure 1. One can see that as the ratio of grid width to node signal range increases beyond about 6.5, more than 50% of the nodes in one grid have only inner-grid communications. Here we define two types of secret information pre-installed in sensor nodes before deployment:
– Inter-grid secret keys, which are used to set up a pairwise key between two inter-grid neighboring nodes.
– Inner-grid secret keys, which are used to set up a pairwise key with an inner-grid neighbor node.
From Figure 1, it is intuitive that not all sensor nodes need to carry the inter-grid secret keys: most sensor nodes only communicate with inner-grid neighbors. If unnecessary secret keys are not installed in sensor nodes, the benefits are at least two-fold. First, and most importantly, resilience against node capture is strengthened, since less information is exposed to the adversary when a node is compromised. Second, more memory is freed for other purposes.
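The formula behind Figure 1 is not given in the paper. Under the assumption that nodes are placed uniformly at random in a square grid of width w and that a node communicates only within signal range r, the fraction of nodes whose entire range stays inside their own grid is ((w - 2r)/w)^2. The sketch below evaluates this geometric estimate; it reproduces the roughly 50% figure near a ratio of 6.5, but it is our own reading, not the authors' derivation.

```python
# Hedged sketch: fraction of nodes with only inner-grid communication,
# assuming uniform node placement in a square grid of width w and signal range r.
# A node communicates only inside its grid if it lies at least r away from every border.

def inner_only_fraction(ratio: float) -> float:
    """ratio = grid width / node signal range."""
    if ratio <= 2.0:
        return 0.0                        # every node can reach a border
    return ((ratio - 2.0) / ratio) ** 2   # area of the inner (w-2r) x (w-2r) square over w^2

if __name__ == "__main__":
    for ratio in (2, 4, 6.5, 10, 20):
        print(f"w/r = {ratio:>4}: fraction = {inner_only_fraction(ratio):.2f}")
```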


Fig. 1. Fraction of nodes only having inner-grid communications

Fig. 2. Deployment Model

3.2 Deployment Model

In this section, we propose three deployment models for our key management schemes: blend grid model, border grid model, scattering grid model. The former two are designed for manual deployment or controlled autonomous deployment. The last one is designed for large-scale autonomous deployment such as a helicopter high in the sky dropping sensor nodes group by group. 1. Blend Grid Model First, we divide the deployment area into square grids, as shown in Figure 2(a). Then we classify the sensor nodes into two types: gateway nodes and inner nodes. In blend grid model, gateway nodes and inner nodes are blended and randomly uniformly deployed into the target field. The inner nodes only hold inner-grid secret keys and can only set up a pair-wise session key with nodes in the same grid. The gateway nodes hold both inner-grid secret keys and inter-grid secret keys, so they can set up a pair-wise key with both nodes in the same grid and gateway nodes in the adjacent grids. In this way, the purpose of reducing unnecessary secret keys in sensor memory is accomplished. Communications between nodes are restricted by grids. Node replication attacks could be prevented. On the other hand, compared with the existing deployment models in [8,9,10], blend grid model doesn’t need to know the exact geographical shape of the target field. Both flexible deployment and network security are achieved.


Fig. 3. Deployment distribution on one grid

2. Border Grid Model. After introducing the blend grid model, it is intuitive that gateway nodes should be deployed along the borderlines of grids. If deployed along the borderlines, gateway nodes play their role as gateways for setting up inter-grid communications more effectively. Figure 2(b) shows an example partition of a target field using the border grid model. In many applications, node positions can be pre-computed before deployment, and nodes are finally placed at the expected positions manually or by autonomous vehicles such as mobile robots or unmanned helicopters, so the border grid model is well supported by these deployment methods.

3. Scattering Grid Model. The scattering grid model is designed for large-scale applications in which a great number of nodes are scattered by a helicopter high in the sky at predetermined scattering positions. We suppose that, through some special mechanical equipment, gateway nodes can be scattered farther away from the scattering point than inner nodes. In this way, gateway nodes have more chances to communicate with other gateway nodes. Because nodes are scattered around the scattering point, which is the center of a grid, the grids in the scattering grid model do not have clear borderlines, as shown in Figure 2(c) and Figure 2(d). Figure 2(c) is a sample of squarely arranged grids; Figure 2(d) is a sample of hexagonally arranged grids. There might be a few more uncovered areas with hexagonal grids than with square grids.

In this paper, we model the sensor deployment distribution as a Gaussian (Normal) distribution, which is widely studied and used in practice; Du's work [5] also takes the Gaussian distribution as the pdf (probability density function) for node deployment. We assume that the overall deployment distribution for any node k in grid G_{i,j} follows a two-dimensional Gaussian distribution:

f_{overall}(x, y) = \frac{1}{2\pi\sigma^2} e^{-\frac{(x - j_x)^2 + (y - j_y)^2}{2\sigma^2}}

where (j_x, j_y) is the coordinate of the scattering point. Since the nodes in one grid are divided into gateway nodes and inner nodes, we also have

\iint f_{overall}(x, y)\, dx\, dy = \iint f_{gateway}(x, y)\, dx\, dy + \iint f_{inner}(x, y)\, dx\, dy = 1

where f_{gateway}(x, y) is the pdf for gateway nodes and f_{inner}(x, y) the pdf for inner nodes. We define f_{gateway}(x, y) as follows:

f_{gateway}(x, y) = \alpha \cdot \frac{1}{2\pi\sigma^2} e^{-\frac{\left(\sqrt{(x - j_x)^2 + (y - j_y)^2} - r\right)^2}{2\sigma^2}}

where r is the expected distance at which most of the gateway nodes lie from the scattering point, and \alpha is the coefficient that decides the fraction of gateway nodes among all nodes. Then we have

f_{inner}(x, y) = f_{overall}(x, y) - f_{gateway}(x, y)

We provide example plots of f_{overall}(x, y), f_{gateway}(x, y) and f_{inner}(x, y) in Figure 3. The gateway pdf shows a basin-shaped mesh, which means that most of the gateway nodes are distributed along a circle toward the outer part of the grid, while the inner nodes are relatively close to the scattering point.
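To make the deployment pdfs concrete, the sketch below evaluates f_overall and f_gateway along a radial slice of one grid, using the ring-shaped form of f_gateway given above. The values of sigma, r and alpha are illustrative assumptions, so this is a numerical illustration of the shape of the densities rather than the authors' model.

```python
# Numerical sketch of the deployment pdfs along a radial slice of one grid.
# sigma, r and alpha are illustrative values, not taken from the paper.
import numpy as np

SIGMA, R, ALPHA = 10.0, 25.0, 0.04   # spread, gateway ring radius, gateway coefficient
JX, JY = 0.0, 0.0                    # scattering point (grid center)

def f_overall(x, y):
    d2 = (x - JX) ** 2 + (y - JY) ** 2
    return np.exp(-d2 / (2 * SIGMA ** 2)) / (2 * np.pi * SIGMA ** 2)

def f_gateway(x, y):
    # ring-shaped density peaking at distance R from the scattering point
    d = np.hypot(x - JX, y - JY)
    return ALPHA * np.exp(-((d - R) ** 2) / (2 * SIGMA ** 2)) / (2 * np.pi * SIGMA ** 2)

def f_inner(x, y):
    return f_overall(x, y) - f_gateway(x, y)   # remaining (inner-node) density

if __name__ == "__main__":
    for x in (0.0, 10.0, 20.0, 25.0):
        print(f"d={x:5.1f}  overall={f_overall(x, 0.0):.2e}  gateway={f_gateway(x, 0.0):.2e}")
```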

3.3 Key Management

Our scheme doesn’t rely on specific key establishment algorithms and can choose either of the existing schemes as the basic pairwise key establishment algorithm, which keep our scheme scalable to specific applications. The key point is how to use the basic algorithms to deal with gateway nodes and inner nodes. For simplicity for discussion, we take Du’s symmetric-matrix-based algorithm [5] as our basic key establishment algorithm. In Du’s scheme, a key space is defined as a tuple (D, G), where D is a random (λ + 1) × (λ + 1) symmetric matrix over a finite field GF (q). We say a node picks a key space (D, G) if the node carries the secret information generated from (D, G). For detail, please refer to [5]. We present our key management scheme based on Du’s scheme as follows: Pre-distribution. We call all the sensor nodes to be deployed in one grid one group. A group of nodes includes both inner nodes and gateway nodes. Group is the minimal deployment unit. First, the key server generates ωb global key spaces j , j = 1, . . . , ωb . We select τb distinct key spaces from the ωb spaces for each Sglobal gateway nodes in all groups. We use Du’s multiple-space method to strengthen the gateway nodes resilience against attacks. Then we suppose that there are m inner nodes and n gateway nodes in one group. For any group Gi , where i is the i group id, the off-line key server generates ωg inner key spaces Sinner = (Di , G). i All nodes in group G including m inner nodes and n gateway nodes choose τg i key spaces from Sinner .


Key Establishment. After deployment, each node needs to set up a pairwise session key with its neighbors. If two nodes belong to the same group, they can easily set up a pairwise session key, since they share a common inner key space. If two nodes are gateway nodes and share at least one global key space, they can also set up a session key. If two nodes fall into neither of these two cases, they need to find a multi-hop key path to establish a session key; multi-hop key path establishment is widely used in random key pre-distribution schemes, e.g. Eschenauer's [2], Liu's [4], and Huang's [10]. Key management should also include other mechanisms such as key update, node addition and revocation; our approaches for these mechanisms can be similar to those of existing schemes.
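As a concrete illustration of the pre-distribution step, the sketch below assigns key-space identifiers to inner and gateway nodes and checks whether two nodes can establish a direct pairwise key. It deliberately abstracts away Du's matrix computation (each key space is represented only by an id), and the parameter values and data structures are illustrative, not the authors' implementation.

```python
# Hedged sketch of the pre-distribution and direct key-establishment check.
# Key spaces are abstracted to integer ids; Du's (D, G) matrices are omitted.
import random

OMEGA_B, TAU_B = 50, 4   # global key spaces for gateway nodes (illustrative values)
OMEGA_G, TAU_G = 6, 2    # inner key spaces per group (illustrative values)

def make_node(group_id: int, is_gateway: bool) -> dict:
    """Pick tau distinct key-space ids for one node, as in the pre-distribution step."""
    node = {
        "group": group_id,
        "inner": set(random.sample(range(OMEGA_G), TAU_G)),   # picks from the group's inner spaces
        "global": set(),
    }
    if is_gateway:
        node["global"] = set(random.sample(range(OMEGA_B), TAU_B))  # picks from the global spaces
    return node

def can_establish_direct_key(a: dict, b: dict) -> bool:
    """Direct key is possible if the nodes are in the same group and share an inner space,
    or if both carry global spaces and share one; otherwise a multi-hop key path is needed."""
    if a["group"] == b["group"] and a["inner"] & b["inner"]:
        return True
    return bool(a["global"] & b["global"])

if __name__ == "__main__":
    random.seed(0)
    inner_1 = make_node(group_id=1, is_gateway=False)
    gate_1 = make_node(group_id=1, is_gateway=True)
    gate_2 = make_node(group_id=2, is_gateway=True)
    print(can_establish_direct_key(inner_1, gate_1))  # same group: needs a shared inner space
    print(can_establish_direct_key(gate_1, gate_2))   # adjacent groups: needs a shared global space
```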

4 Analysis

First we list below the notations which appear in the following analysis:
ω_b  number of candidate global key spaces for gateway nodes
τ_b  number of global key spaces picked by one gateway node
ω_g  number of candidate key spaces for nodes in one group
τ_g  number of key spaces picked by one node in the group
m   number of inner nodes in one group
n   number of gateway nodes in one group
f_{j_x,j_y}(x, y)  probability density function, where (j_x, j_y) is the center of a grid
r   communication range of a sensor node
R(x, y)  signal-covered area of a sensor node located at (x, y)

It is easy to deduce the probability that any two neighboring nodes share at least one key space:

Pr(\omega, \tau) = 1 - \frac{\binom{\omega - \tau}{\tau}}{\binom{\omega}{\tau}} = 1 - \frac{((\omega - \tau)!)^2}{(\omega - 2\tau)!\,\omega!}    (1)
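To make Equation (1) concrete, the following sketch evaluates the formula as written above for a few ω–τ combinations that appear elsewhere in the paper (e.g. ω = 6, τ = 2 for inner spaces and ω = 50, τ = 4 for global spaces); the chosen sample values are only for illustration.

```python
# Evaluate Pr(omega, tau) = 1 - C(omega - tau, tau) / C(omega, tau)   (Equation 1).
from math import comb

def pr_share(omega: int, tau: int) -> float:
    """Probability that two nodes, each picking tau of omega key spaces, share at least one."""
    if 2 * tau > omega:
        return 1.0                       # overlapping picks are unavoidable
    return 1.0 - comb(omega - tau, tau) / comb(omega, tau)

if __name__ == "__main__":
    for omega, tau in ((6, 2), (8, 2), (50, 4), (50, 8)):
        print(f"Pr(omega={omega}, tau={tau}) = {pr_share(omega, tau):.3f}")
```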

Our following analysis is also based on Equation 1. The pdfs (probability density functions) for our three deployment models used in the following analysis are omitted due to page limits.

4.1 Connectivity Analysis

1. Connectivity within the same group. Key establishment within the same group includes inner-inner, inner-gateway, and gateway-gateway key establishment, where inner-inner refers to two inner nodes within the same group, and inner-gateway and gateway-gateway likewise refer to two nodes in the same group. Since all nodes in one group hold the same key spaces, there is no difference in key


Fig. 4. Probability of sharing at least one key space with the help of at most one middle node, when each node randomly chooses τ spaces from ω spaces

establishment among these three types. The probability that any two neighboring nodes in the same group set up a session key with the help of at most one middle node is

Pr_{innergroup} = 1 - (1 - Pr^2(\omega_g, \tau_g))^{q-1} \cdot (1 - Pr(\omega_g, \tau_g))    (2)

where

q = (m + n) \iint_{(x,y) \in R} f_{j_x, j_y}(x, y)\, dx\, dy

is the number of neighbors of the current node and f_{j_x,j_y}(x, y) is the pdf for our deployment models. We plot a probability distribution example in Figure 4, where q is assumed to be 4, i.e. each node has 4 neighbors. From the example, one can see that only 2 out of 6 spaces (τ = 2, ω = 6) are needed for each node to make the probability of sharing at least one space more than 95%. Memory usage can be saved while security is still guaranteed; the security analysis is left to Section 4.2.

2. Connectivity between two adjacent groups. Connectivity between adjacent groups has been a challenging problem for group-based deployment. In our scheme, gateway nodes are especially designed to connect nodes in two adjacent groups, so we mainly discuss the connectivity of gateway nodes. A gateway node can set up a session key with another gateway node in the adjacent group in three ways: directly, with the help of a middle gateway node within the same group, or with the help of a middle gateway node in the adjacent group. We can deduce the probability of two gateway nodes setting up a session key with the help of at most one middle node:

Pr_{intergroup} = 1 - (1 - Pr(\omega_g, \tau_g) Pr(\omega_b, \tau_b))^{q_1} \cdot (1 - Pr^2(\omega_b, \tau_b))^{q_2} \cdot (1 - Pr(\omega_b, \tau_b))    (3)


Fig. 5. Probability of two nodes setting up a session key. The two nodes belong to two adjacent groups and are deployed near the borderline.



where

q_1 = n \iint_{(x,y) \in R(x_0, y_0)} f_{j_{x1}, j_{y1}}(x, y)\, dx\, dy, \qquad q_2 = n \iint_{(x,y) \in R(x_0, y_0)} f_{j_{x2}, j_{y2}}(x, y)\, dx\, dy

Here (j_{x1}, j_{y1}) and (j_{x2}, j_{y2}) are the center points of the adjacent grids and f_{j_x,j_y}(x, y) is the pdf for our deployment models. Pr_{intergroup} is a function of ω_b, τ_b, ω_g, τ_g, and n. We compare our blend grid model, border grid model, Huang's scheme and Du's scheme in Figure 5, all of which are assumed to follow a uniform distribution. We suppose that there are 100 nodes deployed in each grid, and ω_b = 50, τ_b = 8, ω_g = 6, τ_g = 2, r = 25, m : n = 9 : 1. From the figure, one can see that our border grid model shows the best performance as the grid width increases, because the gateway nodes, which are especially designed for setting up inter-group session keys, are deployed along the borderlines. Our blend grid model shows poor performance, because the fraction of gateway nodes is too small (10% on average), and it is hard for two nodes belonging to two adjacent groups to find a common gateway node to set up a session key.
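Putting Equations (1)–(3) together, the sketch below implements the connectivity probabilities as plain functions of the parameters. The neighbor counts q, q1 and q2 are passed in directly rather than computed by integrating the deployment pdfs, and the sample values only mirror the parameters quoted above; the whole sketch is an illustrative reading of the formulas, not the authors' evaluation code.

```python
# Hedged sketch of Equations (2) and (3): connectivity with at most one middle node.
# q, q1, q2 (expected neighbor counts) are supplied directly instead of integrating the pdfs.
from math import comb

def pr_share(omega: int, tau: int) -> float:
    """Equation (1): probability of sharing at least one key space."""
    if 2 * tau > omega:
        return 1.0
    return 1.0 - comb(omega - tau, tau) / comb(omega, tau)

def pr_inner_group(omega_g: int, tau_g: int, q: float) -> float:
    """Equation (2): two same-group neighbors connect directly or via one middle node."""
    p = pr_share(omega_g, tau_g)
    return 1.0 - (1.0 - p * p) ** (q - 1) * (1.0 - p)

def pr_inter_group(omega_b, tau_b, omega_g, tau_g, q1: float, q2: float) -> float:
    """Equation (3): two gateway nodes of adjacent groups connect directly or via one middle gateway."""
    pb = pr_share(omega_b, tau_b)
    pg = pr_share(omega_g, tau_g)
    return 1.0 - (1.0 - pg * pb) ** q1 * (1.0 - pb * pb) ** q2 * (1.0 - pb)

if __name__ == "__main__":
    print(f"inner-group: {pr_inner_group(6, 2, q=4):.3f}")
    print(f"inter-group: {pr_inter_group(50, 8, 6, 2, q1=3, q2=3):.3f}")  # q1, q2 are made-up counts
```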

4.2 Security Analysis

Our key management scheme doesn’t reply on specific key establishment algorithms, but for simplicity for discussion, we take Du’s algorithm [5] as the key establishment algorithm. Du’s algorithm has a nice security threshold, say T, that if more than T nodes are compromised, the rest nodes of the network are in great danger, if less, no nodes will be threatened. It is intuitive that for inner nodes in our deployment model, we should choose a small τ , as long as T is greater than the number of inner nodes in one grid, m. So the memory usage is saved, and even if a great number of nodes in one grid are compromised, the rest nodes either in the same grid or in different grids will not be in danger. However, as for gateway nodes, because all the gateway nodes share common key spaces, resilience against attacks


Fig. 6. Probability of at least one space being broken when the adversary has captured x nodes, for several ω–τ combinations (τ=5,ω=50; τ=4,ω=50; τ=3,ω=50; τ=2,ω=6; τ=2,ω=8; τ=2,ω=10)


Fig. 7. Probability of compromised links among uncompromised nodes. Du1 refers to Du’s key establishment algorithm [5]. Du2 refers to Du’s key management scheme using deployment knowledge [9].

is important for gateway nodes. We should therefore choose an ω–τ combination that achieves better resilience, as long as inter-group connectivity is still guaranteed. Figure 6 depicts several ω–τ combinations that can help us decide the values of ω, τ, m, n for inner nodes and gateway nodes; the figure is based on Du's analysis [5]. From the figure, one can see that when ω = 6 and τ = 2, T ≈ 100, which indicates that m, the number of inner nodes in one grid, should be less than 100. One can also see that when ω = 50 and τ = 4, T ≈ 420. So, for example, we can set ω = 50, τ = 4 for the global key spaces of gateway nodes and ω = 6, τ = 2 for the inner key spaces of inner nodes in each grid. If we suppose m : n = 9 : 1, the number of overall grids is n_g = 420/(100/9) ≈ 37.8, and the secure threshold for the overall network is T_{network} = 10 × 420 = 4200, which is 10 times that of Du's scheme [5]. This means that if the adversary randomly chooses nodes to compromise, it has to compromise at least 4200 nodes to break one key space.


We plot the analysis results of Du's key establishment scheme [5], Du's location-aware key management scheme [9], Huang's scheme [10], and our scheme in Figure 7. As shown in the figure, our scheme shows a nice threshold property: the adversary has to compromise as many as 4000 nodes before links between uncompromised nodes are compromised.

5 Related Work

Eschenauer and Gligor [2] present a key management scheme for sensor networks based on probabilistic key pre-deployment. Chan et al. [3] extend this scheme and present three new mechanisms for key establishment based on the framework of probabilistic key pre-deployment. Du [5] proposes a symmetric-matrix-based key establishment scheme, and later proposes a method [9] to improve the Eschenauer-Gligor scheme using a priori deployment knowledge. Liu [8] also proposes a location-based key management scheme based on partitioned grids and bivariate polynomials. Huang [10] uses location information and extends Du's work [5]; Huang's key management scheme shows quite good resilience against node replication attacks. Recently, there have been a number of studies investigating the implementation of PKC (Public Key Cryptography) in sensor networks [12,13]. Watro [12] presents TinyPK, which allows authentication and key agreement between a sensor network and a third party. Du [13] proposes a scheme to authenticate public keys with symmetric key operations.

6 Conclusion and Future Work

Due to constraints on power, communication, and computation capabilities, key management has become a hot topic in sensor network security. In this paper, we presented three deployment models and the corresponding key management scheme. To the best of our knowledge, our scheme is the first attempt to discuss the correlation between deployment models and key management. Our analysis shows that our approaches are flexible in deployment, resilient in security, and support high inter-group (inter-grid) connectivity.

References 1. C. Karlof and D. Wagner, Secure Routing in Wireless Sensor Networks: Attacks and Countermeasures. First IEEE International Workshop on Sensor Network Protocols and Applications, May 2003. 2. L. Eschenauer and V. Gligor, A Key-Management Scheme for Distributed Sensor Networks. In Proc. Of ACM CCS 2002. 3. Haowen Chan, Perrig A., Song D., Random key predistribution schemes for sensor net-works, Security and Privacy, 2003. Proceedings. 2003 Symposium on , 11-14 May 2003 Pages:197 - 213.


4. D. Liu and P. Ning, Establishing Pairwise Keys in Distributed Sensor Networks. 10th ACM Conference on Computer and Communications Security (CCS 03), Washington DC, October, 2003. G/A. 5. Wenliang Du, Jing Deng, Yunghsiang S. Han, and Pramod Varshney, A Pairwise Key Pre-distribution Scheme for Wireless Sensor Networks. In Proceedings of the 10th ACM Conference on Computer and Communications Security (CCS), Washington DC, October 27-31, 2003. 6. Di Pietro R., Mancini L.V., Mei A.,Efficient and resilient key discovery based on pseudo-random key pre-deployment. Parallel and Distributed Processing Symposium, 2004. Proceedings. 18th International 26-30 April 2004 Page(s):217. 7. Sencun Zhu, Shouhuai Xu, Sanjeev Setia, and Sushil Jajodia. Establishing Pairwise Keys For Secure Communication in Ad Hoc Networks: A Probabilistic Approach. In Proc. of the 11th IEEE International Conference on Network Protocols (ICNP’03), Atlanta, Georgia, November 4-7, 2003. 8. Donggang Liu, Peng Ning, Location-Based Pairwise Key Establishments for Static Sensor Networks, in 2003 ACM Workshop on Security in Ad Hoc and Sensor Networks (SASN ’03), October 2003. 9. Wenliang Du, Jing Deng, Yunghsiang S. Han, Shigang Chen and Pramod Varshney, A Key Management Scheme for Wireless Sensor Networks Using Deployment Knowledge. In Proceedings of the IEEE INFOCOM’04, March 7-11, 2004, Hongkong. Pages 586-597. 10. D. Huang, M. Mehta, D. Medhi, L. Harn, Location-aware Key Management Scheme for Wireless Sensor Networks, in Proceedings of 2004 ACM Workshop on Security of Ad Hoc and Sensor Networks (SASN ’04, in conjunction with ACM CCS2004), pages 29-42, Oct. 25, 2004. 11. P. Corke, S. Hrabar, R. Peterson, D. Rus, S. Saripalli, G. Sukhatme, Autonomous deployment and repair of a sensor network using an unmanned aerial vehicle, Robotics and Automation, 2004. Proceedings. ICRA ’04. 2004 IEEE International Conference on Volume 4, April 26-May 1, 2004 Page(s):3602 - 3608 Vol.4. 12. R. Watro, D. Kong, S. fen Cuti, C. Gardiner, C. Lynn, and P. Kruus, TinyPK: Securing Sensor Networks with Public Key Technology in ACM SASN, Washington, DC, Oct. 2004. 13. Wenliang Du, Ronghua Wang, and Peng Ning. An Efficient Scheme for Authenticating Public Keys in Sensor Networks. In Proceedings of the 6th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc), May 25-28, 2005, Urbana-Champaign, Illinois, USA.

A Diachronic Analysis of Gender-Related Web Communities Using a HITS-Based Mining Tool Naoko Oyama1, Yoshifumi Masunaga2, and Kaoru Tachi1 1

Institute for Gender Studies, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku,Tokyo, 112-8610 Japan {oyama, tachi}@cc.ocha.ac.jp http://www.igs.ocha.ac.jp 2 Department of Information Science, Ochanomizu University, 2-1-1 Otsuka, Bunkyo-ku, Tokyo, 112-8610 Japan [email protected]

Abstract. Web mining is one of the most important research issues in modern information technology. Among Web mining techniques, Web structure mining is essential for finding Web communities, and a variety of mining algorithms and tools have been developed for this purpose. In this research, we focus on the HITS-based Web structure mining tool "Companion-", which is able to provide diachronic as well as synchronic analysis of Web communities. The capability of diachronic analysis is essential, particularly when a mining tool is applied to investigate a Web community—like the gender-related community—that changes shape dramatically over time according to socio-cultural and political changes. In this paper we first observe how changes in real-world communities, i.e. organizations and their members, can be reflected on the Web as changes in Web communities. Second, we analyze gender-related Web communities to discover new trends or movements that are hard to identify without a mining tool capable of diachronic analysis. Last, we investigate a case that calls for the deciphering capability of miners.

1 Introduction

In the present era, a wide variety of organizations and individuals have come to transmit information through the Internet. With many new Web sites constructed every day, Web content has become more diverse and the Web's structure ever more complex. The staggering volume of information on the Web makes it difficult to find truly useful information. Therefore, Web mining is one of the most important research issues in modern information technology. In general, there are three categories of Web mining: Web content mining, Web usage mining and Web structure mining. Automatic discovery of content patterns in Web documents, mining of Web access logs, and Web community extraction based on Web link structure analysis are typical research topics for these three categories, respectively. This paper pays special attention to Web structure mining, taking the gender-related community as an interesting application domain.


As its name indicates, the Web's structure is a product of "nonlinearly tangled links that are provided like a cobweb between sites." Information on the links between sites is written into Web pages as strings of characters through HTML, the standard Web language. When these recognizable characters are analyzed, the relationships between Web sites become visible. Therefore, analysis of the linkage structure makes it possible to discover the internally fabricated "Web community." In this paper, we refer to a Web community as a collection of Web pages created by individuals and/or organizations with a common interest in a specific topic. To find Web communities, some link analysis techniques consider the Web as a graph, with Web pages as nodes and hyperlinks as edges, and automatically identify such Web communities by extracting distinctive graph structures [1, 2, 3]. Among the many graph-based Web community extraction algorithms, HITS-based mining algorithms seem promising, where HITS stands for Hyperlink-Induced Topic Search. They are based on the notions of authorities and hubs, where an authority is a page with good content on a topic that is pointed to by many good hub pages, and vice versa. HITS is an algorithm that extracts authorities and hubs from a given subgraph of the Web by an efficient iterative calculation [1]. In this paper we use a HITS-based Web structure mining tool named Companion- [3], which is based on the related-page algorithm Companion [2]. Companion- is capable of diachronic analysis as well as synchronic analysis of Web communities. This capability is essential when applying a mining tool to investigate a Web community—like the gender-related community—that changes shape dramatically over time due to socio-cultural and political changes. In the real world, gender awareness has been growing rapidly in recent years due to the developing movements toward gender mainstreaming in many countries [4]. Gender-related Web sites follow this trend: their number is increasing rapidly and the structure of their linkage is changing diachronically. In this paper, we argue that by analyzing gender-related communities constructed on the Web, it is possible to capture precisely how society and people's consciousness regarding gender mainstreaming have been changing over time. We used Companion- to analyze five consecutive years of Web archives collected at the Kitsuregawa Laboratory of the Institute of Industrial Science, University of Tokyo, up to February 2003. The results of the analysis demonstrate how the concept of gender and the gender-related community have been transformed diachronically. We first observe how changes in the real world, i.e. the world made up of individuals and organizations, can be reflected on the Web as changes in a Web community. Second, we analyze the gender-related Web community to discover a new trend or movement that is hard to identify without using a mining tool with diachronic analysis capability. Last, we investigate a case that calls for the deciphering capability of miners. That is, when miners first look at some phenomena recognized in the Web community, they may find them difficult to interpret; however, after careful examination, something new is discovered in the real world. The authors are aware of little related work; the closest related articles concern the digital divide or studies on the gender gap in computer and Internet use.
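The internals of Companion- are not given in this paper, but the underlying HITS idea can be illustrated in a few lines: hub and authority scores are updated alternately from the link graph and normalized until they converge. The sketch below is a generic HITS iteration over a toy adjacency list and is not the Companion- implementation; the graph and page names are invented for illustration.

```python
# Minimal HITS iteration (generic sketch, not Companion- itself).
# graph maps each page to the set of pages it links to.

def hits(graph: dict, iterations: int = 50):
    pages = set(graph) | {q for targets in graph.values() for q in targets}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # authority score: sum of hub scores of the pages that link to it
        auth = {p: sum(hub[q] for q in pages if p in graph.get(q, ())) for p in pages}
        # hub score: sum of authority scores of the pages it links to
        hub = {p: sum(auth[q] for q in graph.get(p, ())) for p in pages}
        for scores in (auth, hub):                       # normalize to avoid overflow
            norm = sum(v * v for v in scores.values()) ** 0.5 or 1.0
            for p in scores:
                scores[p] /= norm
    return auth, hub

if __name__ == "__main__":
    toy_graph = {"hub1": {"siteA", "siteB"}, "hub2": {"siteA", "siteC"}, "siteA": set()}
    authorities, hubs = hits(toy_graph)
    print(max(authorities, key=authorities.get))  # "siteA" attracts the most hub links
```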


2 Web Archives and a Web Mining Tool

2.1 Web Archives

In order to extract and analyze a Web community, it is necessary to collect the relevant Web pages with a search robot. As our analysis source, we adopted Japanese Web archives (in the jp-domain), collected annually at the Kitsuregawa Laboratory of the Institute of Industrial Science, University of Tokyo, for the years 1999-2003. These particular years are meaningful for the analysis of gender-related Web communities in Japan because the Basic Law for a Gender-equality Society was approved in 1999. Table 1 summarizes the statistical figures of these Web archives. The total number of Web pages includes the number of Web pages linked to the jp-domain. Seed pages need to be identified in order to operate Companion-, and the corresponding column gives the total number of these seed pages. Acknowledging the possibility of a seed page being artificially induced in terms of the users' extracted community, we defined a seed page as a Web page to which more than three in-links are connected. The total number of communities counts all the communities identified based on the seed pages.

Table 1. Statistical figures of the Web archives used in this research

Collected year | Collected period | Total number of Web pages | Total number of URLs | Total number of links | Total number of seed pages | Total number of communities
1999 | July - August  | 17M | 34M | 120M | 657K  | 79K
2000 | June - August  | 17M | 32M | 112M | 737K  | 88K
2001 | Early October  | 40M | 76M | 331M | 1404K | 156K
2002 | Early February | 45M | 84M | 375M | 1511K | 170K
2003 | Early February | 34M | 82M | 338M | 1487K | 181K

(M: million, K: kilo)

2.2 Companion-: A HITS-Based Web Mining Tool

The mining tool we utilized for discovering and examining Web communities is Companion- [3]. Companion- is a HITS-based Web mining tool built on Companion [2], and its features are twofold: the "Web Community Browser" makes synchronic analysis possible, while the "Community Evolution Viewer" makes diachronic analysis possible. The results of diachronic analysis can be seen as the evolution of a Web community [5]. As mentioned earlier, diachronic analysis capability is essential, particularly when a mining tool is applied to investigate a Web community—like the gender-related community—that changes shape dramatically over time. This is the reason why we adopted Companion- for our research. Fig. 1 depicts the gender-related Web community as of February 2003 created by the "Web Community Browser," where the keyword "gender" (in Japanese) is specified in the left-hand window of the screen, 21 gender-related communities are shown in the middle of the screen, and the community ID, its name and other useful information are given in the right-hand window of the screen.


Fig. 1. A synchronic analysis result by “Web Community Browser”

The Viewer can display two modes, "Main History" and "Detailed History." By using these separate displays effectively, we are able to examine a community more thoroughly. Fig. 2 depicts part of the development process of gender-related Web communities over five years, tracing back from the Web archive of February 2003; it was created by the "Community Evolution Viewer" in Main History mode. In the Viewer, a comparison of the Web sites in the present year, the previous year and the following year is displayed using four colors so as to visualize differences. Each of the five pillars is divided into left-hand and right-hand sub-pillars, so that sites moved to other communities, sites newly appearing, sites moved from other communities, and sites disappearing at the next observation are identified, in addition to the sites that have continued and those for which there is no change. Because Companion- is a HITS-based mining tool, the presence of an authority that is linked to by many Web sites, as well as hubs with large numbers of Web links, plays a crucial role when extracting a Web community. Web communities are identified as collective entities gathered around common interests by grouping the seed pages. These extracted communities are then given "headline tags" according to the frequency of occurrence of keywords derived from the anchor texts corresponding to the URLs of the Web pages. The following sections show concrete examples of communities and their identification.


Fig. 2. A diachronic analysis result by “Community Evolution Viewer”

3 A Diachronic Analysis of Gender-Related Web Communities

3.1 Purpose of Analysis

For the purpose of elucidating how the concept of gender has been transforming, the following two approaches are necessary for the analysis of gender-related Web communities:
(1) A synchronic analysis: in other words, a snapshot approach that captures the mutual relationships among the communities at particular moments in time.
(2) A diachronic analysis: in other words, an approach examining how a Web community has been transforming over time.
In this investigation, we employ approach (2) because the time-sensitive nature of gender studies in Japan is the focus of the analysis. (Our preliminary research employing approach (1) has already been reported [6].)


In addition, in analyzing Web communities, we need to pay special attention to the relationship between communities in the real world where individuals or organizations are actors/subjects and the constructed communities on the Web, which constitute the accumulated information originally sent from these actors/subjects in the real world. Therefore, it is inevitable to consider the questions as follows: (a) Reflection of real-world communities in the Web: Can we observe a reflection of the real-world community on the Web? Are the key phenomena observed in the real-world community also recognizable in its Web community? (b) Discovery by Web community mining: What does the analysis of Web communities offer? Does it reveal something new that has not been considered or discussed seriously in real-world communities? (c) The way of deciphering: Are some phenomena recognized in a Web community hard to interpret at a glance? Are their limitations with respect to the Viewer’s analysis capability or the “deciphering” ability of miners? 3.2 Reflection of the Real-World Communities in the Web We examine question (a) by applying it to the recognizable case of gender-related communities in the real world. For example, we observed how the Web community has been responding to the real world by particularly focusing on the “women’s center” community (a group keyword of woman, center, male and female, and participation was given as the headline tag: identification 35392 as of February 2003) considering the relationship between the terminology of gender and the development of the gender community. Fig. 3 shows the development process of women’s centers in five years, tracing back from the Web archive of February 2003, and the collective shape of the related Web pages, which can be extracted by inserting the keyword “gender” in Japanese to the Viewer. The development process of community is identifiable when we trace back through time (beginning from February 2003) by counting the number of URLs. The line thickness shows how many communities have been moved. Twenty-one communities emerged as of February 2003 when the key-word was “gender.” The communities in which the term “gender” appears frequently are placed in the upper part of the display window. The development of this Web community perfectly corresponds to the reality in Japan, where women’s centers have sprung up everywhere from the municipal to the prefecture level, inspired by their mutual experience to build a coalition among themselves. In each municipal entity, spaces/bases for women had started to be established approximately ten years ago; however, due to the approval of the Basic Law for a Gender-equality Society in June 1999, the incorporation of sections for gender policy enforcement into “women’s centers” as working-level sections was accelerated. Many centers changed their names to “women’s center,” and they opened their own Web sites in order to express their strong support for gender equality. According to the list of the “related facilities for gender equality” to check whether these places have their own Web sites, we perceived that the display of URLs was rare for “fujin kaikan (ladies’ houses),” but seen for almost all the “jyosei center (women’s centers).” Also, nicknames in hiragana or katakana were given to the centers to express their uniqueness. The nicknames and the names of regions can be


Fig. 3. Progress of “Women’s Center” community (In “Detailed History” mode): Integration and division


found in the collective figure of key words. We also observed a tendency of unification in the term that describes “women”, changing from “fujin” to “jyosei”, though “fujin” did appear once in 2000. Although some people complained about the discomfort among men using facilities that have “women” in their names, most people agreed to use the name of “women’s center” since gender equality in society has not been fully realized. As of the year 2000, prefecture-funded women’s centers were under construction in ten prefectures, and only seventeen prefectures (out of a total of 47 prefectures in Japan) did not have women’s centers [7]. The women’s centers have begun to rename themselves as “danjo kyodosankaku (gender-equality) centers.” Some researchers are concerned about this tendency because this name change is coupled with negative realities such as budget cuts for the centers and for women’s education; taken together, the term “danjo” (men and women) in the name may imply that the “program for the promotion of women’s status” has lost its legitimacy [8]. As we have explained, the Viewer that connects communities through common URLs demonstrates a significant ability to recognize changes to Web site titles. The “women’s center” community records the lowest rank out of 21 communities extracted by the keyword “gender.” This is because the term “gender” did not frequently appear on the headline tag of the community or on the strings of characters contained in the anchor links. Although the “Basic Law for a Gender-equality Society” envisioned the institutionalization of the concept of “gender,” this newly imported concept was eliminated from the final version of the law. In the process of discussing the law, the fundamental principle of “being free from the bounds of gender” was rewritten and replaced by the phrase, “regardless of sex difference”; the rephrasing of the basic philosophy of the law effectively stabilized the dualism of male and female sex in society [9]. This influence seems the most distinctive in the case of public organizations among the gender-related communities. 3.3 Discovery by Web Community Mining We also examined question (b). We have explained before that “Main History,” the fundamental constituent of the “Community Evolution Viewer,” represents horizontally the quantitative changes of the composition of URLs over time. Almost all the community Web site constituents are increasing yearly, but the scales of some communities have quantitatively diminished according to the examination done in February 2003. Two sexual harassment communities, for instance, show this horizontal change. Fig. 4 depicts the phenomenon perceived by the research. The way of developing the community in the past coincides with that of other communities at four observation times. That means although it appears as if the number of the Web sites has decreased, the community has merely divided based on its contents. In the past, the contents were not very specialized/differentiated, so that the community was conceived of as one community. The “Detailed History” mode, which clearly shows the transfer of Web sites, can provide a picture of how the community was divided; as of February 2003, the community recognized as one before was differentiated into two communities: on the one


Fig. 4. Discovery of two different communities with the identical root community track

On the one hand, there is the community of campus sexual harassment sites and similar organizations' sites; on the other, personal Web sites that show an interest in the issue of sexual harassment. It might be difficult to explain why some Web sites are categorized as the latter. In fact, these Web sites were eliminated from the official sites of the university organizations. From a different point of view, it may also be explained that these Web sites became more focused and specialized in terms of themes and topics. Analyzing the Web communities exposes the "reality" of the division within the real-world community. This is the reality that the Web mining tool poses to us: deciphering the result produced by Companion- should be evaluated as a legitimate methodological "discovery" in the process of developing Web mining tools.

3.4 The Way of Deciphering

Finally, we examined question (c). The headline tags attached to the groups of the connected communities vary when observed through the Community Evolution Viewer. For example, Fig. 5 demonstrates that the groups of a community have different tags in the developmental process. When we scrutinize the process more carefully, changes in the word order or the contents of the tags can also be recognized. Furthermore, it is worth asking why, in spite of looking into the more detailed list of the keywords appearing on the headline tags shown horizontally as history, we are not able to find the keyword "gender." Why does this happen? Is this a matter that needs to be discussed?


Fig. 5. Progress of a gender-related community without the keyword “Gender”

This may be interpreted as an inevitable result of a particular feature of the Viewer: an examination system that considers the number of common URLs as a primary resource and focuses on the commonality of the communities in history. Instead, we argue that this unique feature of the Viewer provides a new possibility for analyzing Web communities. Fig. 5 shows the transformation of the Science Fiction (SF) fans' community, starting from the archive of the year 2003. The keyword "gender" emerges in the archive as of February 2003. The immediate cause of the emergence is considered to be the Web site of an organization called the "Gender SF Research Group," which seeks "free thinking that is not confined to the conventional binary system of sex/gender," joining the SF community. As the case of the "Gender SF Research Group" shows, new members can bring different implications to Web communities. Therefore, it is a significant contribution to be able to grasp the preferences of a community, to read how the community has changed, and to predict the next new realm of a community's interest. Moreover, for gender researchers, the transformation history shown in Fig. 5 is a precious resource because it makes them re-acknowledge the social, cultural, and political environment in which the keyword "gender" emerged.

3.5 Validity of Mining Results by Companion-

Companion- is a modified version of Companion [2], and provides better precision than HITS [1] and Companion. Note that all three, i.e. HITS, Companion and Companion-, are based on related-page algorithms. Although they all provide synchronic analysis capability, only Companion- provides diachronic analysis capability, i.e. the ability to extract the evolution of Web communities from a temporal series of Web archives.


Therefore, the results obtained by using the Viewer cannot be obtained by HITS or Companion unless they are provided with an interface for diachronic analysis. The validity of the mining results obtained by Companion- may change depending on the concept of community that we adopt. We carefully examined the 21 gender-related Web communities created by the "Web Community Browser" using the archive of the year 2003, and found that most pairs of members in a community do not have significant links between them. This observation is understandable given the intrinsic nature of a HITS-based mining algorithm. In other words, a Web community induced by a HITS-based mining tool is meaningful when one intends to group a set of Web pages together as a community in which each member is equally important as an "authority" on a specific topic. There are people who want to link (or even back-link) their Web pages together to promote fellow feeling. For such people, using HITS to enumerate all members of a Web community may be problematic, because the Web communities that one is interested in may be overshadowed by a more dominant Web community. To resolve this problem, a Web community identification algorithm based on a maximum flow framework was introduced [10]. It defines a community to be a set of Web pages that link (in either direction) to more Web pages inside the community than to pages outside of it. Therefore, a maximum-flow Web mining tool may be more profitable if we want to identify Web communities that are more tightly coupled to each other than to non-members. We intend to construct a "portal site for gender studies" in our institute based on this research. For this purpose our HITS-based experiment seems promising, because each Web site of a community should be equally authoritative on the same topic.
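To make the contrast above concrete, the following is a minimal sketch of the basic HITS hub/authority iteration that underlies related-page algorithms such as Companion-; it is an illustrative reconstruction only, not the authors' implementation, and the graph representation and parameter names are assumptions.

```python
# Minimal sketch of the HITS hub/authority iteration (illustrative only, not
# the Companion- implementation used in this study). The Web graph is assumed
# to be given as a dict mapping each page to the list of pages it links to.
def hits(graph, iterations=50):
    pages = set(graph) | {q for targets in graph.values() for q in targets}
    auth = {p: 1.0 for p in pages}
    hub = {p: 1.0 for p in pages}
    for _ in range(iterations):
        # Authority score of q: sum of hub scores of pages linking to q.
        new_auth = {p: 0.0 for p in pages}
        for p, targets in graph.items():
            for q in targets:
                new_auth[q] += hub[p]
        # Hub score of p: sum of authority scores of pages p links to.
        new_hub = {p: sum(new_auth[q] for q in graph.get(p, ())) for p in pages}
        # Normalise so scores stay comparable across iterations.
        a_norm = sum(v * v for v in new_auth.values()) ** 0.5 or 1.0
        h_norm = sum(v * v for v in new_hub.values()) ** 0.5 or 1.0
        auth = {p: v / a_norm for p, v in new_auth.items()}
        hub = {p: v / h_norm for p, v in new_hub.items()}
    return auth, hub
```

Because every page in a community extracted this way is scored as an "authority" on the shared topic, small mutually linked groups are easily overshadowed by a dominant community, which is precisely the limitation that motivates the maximum-flow alternative of [10].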

4 Concluding Remarks

By analyzing the Web communities constructed around the keyword "gender" on the Web, we have attempted to gain new conceptions of societal gender phenomena that are available for observation only on the Web. Using Companion- as a Web mining tool, the research examines the development of gender-related communities extracted through the search term "gender." This method enabled us to examine the gender communities subjectively as well as objectively and, in fact, to perceive a heretofore unrecognized phenomenon. Moreover, since the developmental process of the Web communities was examined thoroughly by gender specialists, the present analysis provides a positive example of the meaningful nature of such findings. In other words, we were able to verify that the texturized displays, which seem nonsensical at first glance, become a vital and challenging space to be deciphered (i.e., analyzed) through the eyes of specialists in a particular field. The research concretely demonstrates the effectiveness and impact of collaborating with researchers of specific fields who are familiar with the reality of the contents in facilitating the development of appropriate Web mining tools. This kind of dialog with researchers can only make the analysis tool more sophisticated. If the functions of the mining tools improve, there is certainly the possibility of finding even more interesting types of results from their analyses.


At the same time, we have cautioned that if particular tendencies or characteristics of the analysis tool are not seriously considered, the examination may be flawed. Reflection upon the "analysis tool" itself is always necessary for such examinations.

Acknowledgements

The authors are thankful to Professors Masaru Kitsuregawa and Masashi Toyoda of the Institute of Industrial Science, the University of Tokyo, who gave us active support and help in this research. This research was partly supported by a Grant-in-Aid for Scientific Research of MEXT of Japan in the Category of Scientific Research (B) (2) (Grant number 15300031) on "Construction of a Portal Site for Gender Studies using a Diachronic Analysis Method of Web Communities" (2003-2006).

References

1. Kleinberg, J. M.: Authoritative Sources in a Hyperlinked Environment. Journal of the ACM, Vol. 46, No. 5 (1999) 604-632
2. Dean, J. and Henzinger, M. R.: Finding Related Pages in the World Wide Web. Computer Networks, Vol. 31, No. 11-16 (1999) 1467-1479
3. Toyoda, M. and Kitsuregawa, M.: Creating a Web Community Chart for Navigating Related Communities. Proc. of Hypertext 2001 (2001) 103-112
4. Tachi, K.: Re-thinking of Gender Concept. (in Japanese) Journal of Gender Studies 1 (18th Issue), Institute for Gender Studies, Ochanomizu University (1998) 81-95
5. Toyoda, M. and Kitsuregawa, M.: Extracting Evolution of Web Communities from a Series of Web Archives. Proc. of Hypertext 2003 (2003) 28-37
6. Masunaga, Y. and Oyama, N.: Community Analysis and Portal Site Construction for Gender-related Websites: Aspect of Globalization Observed in the Web Communities. (in Japanese) Research Report on Globalization and Gender Model, Ochanomizu University (2002) 101-122
7. Osawa, M. (ed.): Women's Policy in the 21st Century and the Basic Law for a Gender-equality Society. (in Japanese) Gyosei, Tokyo (2000)
8. Takemura, K. (ed.): Post Feminism. (in Japanese) Sakuhin-sha, Tokyo (2003)
9. Tachi, K.: The Basic Law for a Gender-equality Society and Gender Concept. (in Japanese) Journal of Women's Council Kanagawa, Vol. 35, Women's Council Kanagawa, Japan (2004) 10-11
10. Flake, G., Lawrence, S. and Giles, C. L.: Efficient Identification of Web Communities. Proc. of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (2000) 150-160

W3 Trust-Profiling Framework (W3TF) to Assess Trust and Transitivity of Trust of Web-Based Services in a Heterogeneous Web Environment

Yinan Yang1, Lawrie Brown1, Ed Lewis1, and Jan Newmarch2

1 School of IT&EE, UNSW@ADFA, Canberra, Australia
[email protected], {l.brown, e.lewis}@adfa.edu.au
2 School of Network Computing, Monash University, Melbourne, Australia
[email protected]

Abstract. The growth of eCommerce is being hampered by a lack of trust between providers and consumers of Web-based services. While researchers in many disciplines have addressed Web trust issues, a comprehensive approach has yet to be established. This paper proposes a conceptual trust-profiling framework built on a range of new user-centred trust measures. W3TF is a generic form of trust assessment that can help build user confidence in an eCommerce environment. It incorporates existing measures of trust (such as Public Key Infrastructure), takes account of consumer perceptions by identifying trust attributes, and uses Web technology (in the form of metadata) to create a practical, flexible and comprehensive approach to trust assessment.

1 Introduction

The meaning of trust in the context of eCommerce is still evolving, along with the Web environment and technologies [3, 11, 13, 14]. Traditional trust relationships between business parties are based on legitimate physical identities such as a shopfront or business premises. This physical manifestation is in contrast to the eCommerce environment of the Web, where business providers and consumers identify each other by some electronic means such as their websites, email addresses, a public key or certificate. Recent surveys have shown that one of the biggest concerns for Internet consumers (Web users) is a lack of trust in websites [9, 1].

Many researchers identify the credibility of a website as a very important factor that consists of two key components: trustworthiness and expertise [4, 5]. The first dimension of credibility is defined by terms such as well-intentioned, truthful and unbiased, capturing the perceived goodness or morality of the source. The other dimension of credibility is defined by terms such as knowledgeable, experienced or competent, capturing the perceived knowledge and skill of the source. Shneiderman et al. identified two principles and associated guidelines to enhance cooperative behaviours and to win user/customer loyalty [17]. A number of trust factors were identified, such as assurances, references, certifications from third parties, and guarantees of privacy and security. These identified trust factors are also more or less agreed upon among researchers of empirical trust studies and surveys [2, 7, 16].


These concerns have been addressed using different approaches by Jøsang et al [12] who focused on a particular mathematical modeling approach to trust, and recently by Herzberg and Jbara [8] who also focused on a practical technique for presenting a trust measure in a user’s web browser. Electronic (Digital) security technology plays an important role in establishing trust in an eCommerce environment [12]. It also provides a tangible perception of trust for online consumers. From the viewpoint of security communities, online trust can be secured through public-key cryptography, which has been used for antispoofing, authentication, authorisation, non-repudiation, and secure data communications. The major PKI models adopted by industry are primarily hierarchically structured to form a vertical trust environment [18]. However, the Web provides an unrestricted or unlimited number of hypertext links (that is, hyperlinked webdocuments) to form a horizontal referral environment. The combination of horizontal and vertical environments gives rise to a heterogeneous environment. Measurement of trust in this heterogeneous environment requires a different approach from those already established [6, 10]. These distinguishing characteristics of the general operation of eCommerce pose a challenge for online consumers to gather sufficient information in a heterogeneous environment on which to base trust assessments. The novel contribution of this paper is to develop a generic trust-profiling framework to assess the trustworthiness of webdocument(s). It will do this by translating identified trust criteria into trust metadata that are assigned to proposed trust categories and trust domains, which can then be evaluated using various calculations, and the result distributed to Web users or other applications or trust systems.

Fig. 1. Hypothetical ACME Travel online-service provider’s operational environment


We then present an example application of the proposed W3 trust-profiling framework for the fictitious Acme Travel, which promotes its holiday packages on the Web (Figure 1). Its webdocuments have a number of external hyperlinks to other business partners, professional associations and certificate authorities, and its website is referenced by peer professional associations and certificate authorities as evidence of its validity.

2 Brief Description of the W3TF

The proposed Web trust-profiling framework (W3TF) is a generic trust-profiling framework for evaluating the trust and transitivity of trust of online services in a heterogeneous environment [19], where transitivity of trust concerns how the trust value of a webdocument can influence or be influenced by another hyperlinked webdocument (or node). It proposes two main trust assessments, standalone and hyperlinked trust assessments, based on different types of webdocument content and relationships, as illustrated in the Acme Travel example. All hyperlinked webdocuments combine a horizontal Web referral environment and a hierarchical PKI environment to form the heterogeneous environment identified by W3TF. Each trust assessment has a number of components and is based on various types of trust information, which can be extracted from various sources and then classified into various trust categories. Trust assessments are then carried out using the various trust categories with their associated trust domains. Trust information is represented by a proposed initial set of trust metadata [20]. Figure 2 is a diagrammatic conceptual view of the proposed W3 trust-profiling framework.

Fig. 2. Conceptualised trust-profiling framework of W3TF

A website may have an arbitrary number of standalone webdocuments and external hyperlinks. The service provider, its business associates and partners can alter their webcontents, and their hyperlinks to other webdocuments, at will.


Hence, there are very few restrictions on, or standards for, the changes that service providers can make to their webdocuments. In the W3TF, all internal hyperlinks and webdocuments sharing the same DNS name are known as 'standalone webdocuments'; all external hyperlinks and associated webdocuments residing on different websites are considered 'hyperlinked webdocuments'. The trust profile of a webdocument is the result of a combination of both standalone and hyperlinked trust assessments, which can be stored in a trust database for future reference.
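As an illustration of this distinction, the sketch below separates a page's hyperlinks into standalone (same DNS name) and hyperlinked (external) webdocuments. It is only a simplified reconstruction under our own naming; W3TF itself does not prescribe this code.

```python
# Illustrative sketch (not part of the W3TF specification): split a page's
# hyperlinks into 'standalone' (same DNS name) and 'hyperlinked' (external).
from urllib.parse import urljoin, urlparse

def classify_links(page_url, hrefs):
    base_host = urlparse(page_url).hostname
    standalone, hyperlinked = [], []
    for href in hrefs:
        absolute = urljoin(page_url, href)      # resolve relative links
        host = urlparse(absolute).hostname
        if host is None or host == base_host:
            standalone.append(absolute)         # same website / DNS name
        else:
            hyperlinked.append(absolute)        # resides on a different website
    return standalone, hyperlinked
```

For the hypothetical Acme Travel page, for example, links to the BlueSky Airline or Bayview Restaurant sites would fall into the second list and so be handled by the hyperlinked trust assessment.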

3 Trust Assessment on Standalone Webdocument

Standalone trust assessment is the analysis of the trust information in a webdocument without considering hyperlinks, hyperlinked webdocuments and their webcontents. Before trust assessment starts, the necessary trust information is extracted and categorised into predefined trust categories for 'cross-examining' against the trust criteria in each trust category [21]. Standalone trust assessment is carried out based on the following initial three trust categories:

• Category A relates to the contents of the webdocument that provide information about an online service provider and their business. This self-declared information is placed in a webdocument by individual providers and might include details about primary and secondary businesses and contact information. Possible sources of information include the HTML document contents and HTTP protocol metadata.
• Category B relates to affiliation and compliance, such as membership of business and professional associations, reputation, policies, and legal status, which can be sourced from a third party. Each claim must be verified with peak bodies or a trusted third party.
• Category C relates to relationships between an online service provider and a PKI certificate (PKI cert) authority. Each PKI cert must be verified with the PKI certificate issuer, which is a third party.

Category A metadata becomes part of the online-service Web referral trust domain. Metadata for categories B and C are classified in the evidence-of-approval trust domain. The collective trust values of the metadata of each category represent the overall trust value of a webdocument in a heterogeneous Web environment. Trust assessment of a standalone webdocument is based on a parallel assessment of both trust domains, as shown in Figure 2. The number of categories can be extended to incorporate other forms of trust information as required.

Each trust category comprises a number of trust attributes, and each attribute is represented by certain trust metadata. One way to assess the level (or relative degree) of the overall trust value of a webdocument is by using the trust weighting of the proposed trust metadata through the contribution from each trust category. Each trust category has a set of predefined trust attributes. A trust attribute acts as an atom of trust. Each trust attribute carries some 'weight' of trust value, which allows interpretation of the trust perspective of a webdocument.


The trust value is weighed from each trust attribute of the category with a consideration given to elements of uncertainty. Then the collective trust value of each category contributes to the overall trust value. However, before applying any theories and formulae, each trust assessment component must be formalised and their interrelationships denoted in a symbolic and generic form, to which various calculations can then be applied for weighing trust attributes and estimating a trust value of webdocuments. The collective trust weights of each of the categories A, B and C are combined to contribute to the overall trust value of the targeted webdocument.
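One simple way to realise such a weighting is sketched below: each category contributes the weighted fraction of its predefined trust attributes that could be extracted and verified for the page. The category weights and the linear combination are illustrative assumptions of ours; W3TF deliberately leaves the exact formula open.

```python
# Sketch of a standalone assessment: each category contributes the weighted
# fraction of its predefined trust attributes present on the webdocument.
# The weights and attribute sets below are illustrative assumptions only.
CATEGORY_WEIGHTS = {"A": 0.4, "B": 0.3, "C": 0.3}

def category_score(present, predefined):
    # Fraction of the category's predefined trust attributes found/verified.
    return len(set(present) & set(predefined)) / len(predefined)

def standalone_trust(page_attributes, category_attributes):
    # page_attributes: category -> attributes extracted from the webdocument
    # category_attributes: category -> predefined trust attribute set
    return sum(weight * category_score(page_attributes.get(cat, []),
                                       category_attributes[cat])
               for cat, weight in CATEGORY_WEIGHTS.items())
```

With a 16-attribute Category A, for instance, a page exposing five of those attributes would score 5/16 = 0.3125 for that category, matching the P calculation used in the illustrative example of Section 5.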

4 Transitivity of Trust

In the hyperlinked Web referral environment, transitivity of trust is the central thread in trust assessments [22]. It concerns the extent to which the trust value of a webdocument influences or is affected by hyperlinked webdocuments (or nodes). The purpose of transitivity of trust is to achieve scalable trust, which allows a certain level of trust to travel to a number of nodes (or entities) while still maintaining that level of trust within a specific time frame. Transitivity of trust assessment aims to ensure (or maintain) the measurement of a trust relationship among the maximum number of hyperlinked webdocuments by identifying any penalty factors in the online-service Web referral environment, e.g. online service spam behaviour. Each hyperlink and hyperlinked webdocument must be able to demonstrate a need (or justification) for the existence of a relationship between the targeted webdocument and the hyperlinked webdocument.

Transitivity of trust assessment includes relevance assessment and subordinate assessment, which examine different penalty factors and fading factors in the different trust domains of the hyperlinked webdocuments. As part of a transitivity of trust assessment, pruning is used to reduce the unrestricted set of hyperlinked webdocuments to a manageable size, using relevance assessment to arrive at a relevance tree on which a trust profile of the targeted webdocument can be based. The proposed method of evaluating trust for hyperlinked webdocuments uses a transitivity of trust assessment that includes:

• relevance assessment: of the business relationship between two hyperlinked webdocuments;
• subordinate assessment: of the trust implications and the influence of hyperlinked webdocuments;
• fading (and penalty) factor analysis: of elements that will reduce online trust as it travels between hyperlinked webdocuments; and
• pruning arrangements in the Web referral environment: of possible ways to ensure a reasonably and manageably sized tree for real-time trust assessment.

All hyperlinked webdocuments belonging to other websites will be assessed in hyperlinked trust assessments.


4.1 Relevance Assessment

Relevance assessment analyses evidence of business relationships between hyperlinked webdocuments to ensure that the purpose of each business relationship is to fulfil business requirements. Relevance assessment measures the relevance of online service(s) between the targeted webdocument and a hyperlinked webdocument. The targeted webdocument's primary service acts as a benchmark against which other hyperlinked webdocuments are compared or matched, thereby providing an indication of whether there is a relevance relationship between the targeted webdocument and the hyperlinked webdocument.

Relevance assessment serves two purposes. First, it ensures that all hyperlinked webdocuments have some kind of relevance relationship with the targeted webdocument in the online-service domain. Second, it provides a mechanism to prune the number of hyperlinked webdocuments down to a more manageable size according to the requirements or definition of the online-service domain. The result of this process is described as a relevance tree. In a relevance tree, each node is considered as both standalone and hyperlinked to the 'targeted' node, unless the relevance tree has only one node. So trust evaluation is based on a standalone assessment followed by a hyperlinked assessment. In a relevance tree, all nodes except the targeted webdocument (node) are labelled as hyperlinked webdocuments with a relevance relationship with the targeted webdocument, and are therefore labelled as subordinate nodes of the targeted node. Subordinate assessment provides additional trust evaluations for hyperlinked webdocuments. These assessments can be used to analyse the transitivity of trust and demonstrate the influence of hyperlinked webcontents on the trust value of a webdocument.

4.2 Penalty and Fading Factor Analysis

The role of penalty or fading factor analysis is to examine elements of uncertainty in each trust domain. These elements of uncertainty can be seen as potential barriers to achieving scalable trust in a heterogeneous Web environment. A number of uncertainty elements in each domain, both tangible (i.e. facts) and intangible (such as user confidence based on practical experience or perceptions), can be identified. These elements of uncertainty are defined as penalty factors for the online-service Web referral trust domain, and as fading factors for the evidence-of-approval trust domain. Both factors can reduce the weight of trust during trust and transitivity of trust assessment.

Penalty factors for the online-service domain are determined through relevance assessment of the Primary Service between the targeted webdocument and its hyperlinked webdocuments. If the degree of relevance is less than, say, 50%, then the hyperlinked webdocument is tagged as irrelevant and the targeted page is recorded as having a penalty factor. If a targeted webdocument has more irrelevant links, then it will have more penalty factors. This penalty will reduce the trust value of Category A of the targeted webdocument. The trust value of Category A contributes to the overall trust value of the webdocument, and the trust value of each node in the relevance tree in turn influences or determines the trust value of the targeted webdocument. So the more penalty factors in Category A, the lower the trust value of the category.


The fading factor analysis in Category B is based on the result of verifying each claim that is linked to a trusted third party (TTP). Additional fading factors are accumulated for each negative verification result (either unverifiable claims or false claims) in the trust category. So the more fading factors in Category B, the lower the trust value of the category. In Category C, fading factor analysis is based on the length of the hierarchical certification path needed to reach the root certificate authority (CA). The root CA is the most trustworthy during the certification process according to the X.509 PKI standard [18]. At the end of the verification and validation processes, 'approval trees' (for example, the number of hops to the trusted third party in Category B; the number of entities in a chain of certificates for Category C) are constructed. These trees are used for calculating fading factors in each category. The more hops, the more fading factors are accumulated. However, in practice there is often a direct hyperlink (a single step) between the subject webpage and the trusted third party website used to verify professional affiliation in Category B; the same frequently applies to the PKI chain of trust in Category C. This notion of a 'fading factor' by back-propagation, as mentioned in [15], led to the proposed W3 trust-profiling framework. However, the W3TF has consolidated and extended the use of the factor through the new concept of transitivity of trust in trust domains and essential trust evaluation processes.

4.3 Subordinate Assessment

The proposed subordinate assessment analyses the trust implications and influence of hyperlinked webdocument(s). In a relevance tree, each webdocument (that is, node) often has hyperlinked webdocuments. These hyperlinked webdocuments can be described as 'child' (or subordinate) nodes of the parent node. A webdocument may have a number of child nodes, which also have their own child nodes, which can be treated as 'grandchild' nodes. The structure of family generations (that is, parents, children and grandchildren) is used to express the tree structure of hyperlinked pages in a trust domain. Subordinate assessment analyses how each hyperlinked webdocument's trust value affects its parent node and, hence, the overall trust value of the targeted webdocument. This analysis is incorporated into a tree pruning process, such as depth-bounded/breadth-bounded pruning techniques.

4.4 Total Trust Assessment

Total trust assessment combines the trust values of all hyperlinked webdocuments in the relevance tree. It is a recursive process. Each hyperlinked webdocument in the relevance tree is cross-examined by both standalone and hyperlinked trust assessments. This process is repeated until all hyperlinked nodes in the original relevance tree have been examined. An initial trust profile is the result of a total trust assessment of a webdocument when performed for the very first time. This initial trust profile can be stored in a trust profile database for future reference. The total trust assessment of the targeted webdocument is based on combining the trust assessments of standalone webdocuments residing on the same website (Categories A, B, C of each webdocument) and hyperlinked webdocuments (with their associated subordinate nodes) in the relevance tree, and normalising the result.


In other words, the result of the total trust assessment is not only based on the standalone webdocument's trust assessment, but also takes account of the subordinate assessment of all hyperlinked webdocuments.
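To make the interplay of relevance assessment, pruning and penalty factors concrete, the following sketch builds a relevance tree using a simple token-overlap relevance measure and the 50% threshold suggested above. The data structures, the similarity measure and the depth bound are our own assumptions, not part of the published framework.

```python
# Illustrative sketch of relevance-based pruning. Assumptions: a webdocument
# is a dict with a 'primary_service' string, get_links(doc) returns its
# hyperlinked webdocuments, and relevance is a simple token-overlap measure.
def relevance(target_service, candidate_service):
    t = set(target_service.lower().split())
    c = set(candidate_service.lower().split())
    return len(t & c) / len(t | c) if (t | c) else 0.0

def build_relevance_tree(doc, get_links, threshold=0.5, depth_bound=3):
    node = {"doc": doc, "children": [], "penalties": 0}
    if depth_bound == 0:
        return node
    for child in get_links(doc):                      # hyperlinked webdocuments
        r = relevance(doc["primary_service"], child["primary_service"])
        if r < threshold:
            node["penalties"] += 1                    # irrelevant link: penalty factor
        else:
            subtree = build_relevance_tree(child, get_links, threshold, depth_bound - 1)
            subtree["relevance"] = r                  # R value of this subordinate node
            node["children"].append(subtree)
    return node
```

The recorded penalty count would then lower the Category A value of the parent node, while the per-child relevance values feed into the subordinate assessment described in Section 4.3.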

5 Illustrative Example of a W3TF Transitivity of Trust Evaluation

In a heterogeneous Web environment, the trust-profiling process starts with identifying the targeted webdocument on a website where a trust profile is required, along with the webdocuments that are hyperlinked to this starting point. Acme Travel's trust profile is based on trust assessments of the targeted webdocument as well as of the BlueSky Airline, Bayview Restaurant, Comfy Accommodation and Cairns Travel Bureau webdocuments (Figure 1). After the relevance assessment is carried out on all hyperlinked webdocuments starting from Acme Travel's targeted webdocument, a relevance tree is generated from a graph of all hyperlinked webdocuments. Based on this relevance tree (Figure 3), each node is subjected to standalone trust assessment and subordinate assessment. The result of the recursive trust assessment is the trust profile of Acme Travel.

The proposed W3TF evaluation process is a recursive one, which combines standalone and hyperlinked trust assessments of a webdocument and its hyperlinked webdocuments. After the standalone trust assessment is done on the trust categories of a webdocument, transitivity of trust is assessed on hyperlinked webdocuments according to the different types of inter-relationships. The result of both trust assessments is the trust profile of the webdocument for which a trust assessment was required.

Fig. 3. Symbolic notation of the relevance tree of Acme Travel

The evaluation process thus combines the following elements used in the standalone and the hyperlinked trust assessments. To apply a mathematical formula for weighing and combining trust values for trust-profiling evaluation [22], a symbolic notation is necessary (Figure 3). The results of this assessment are summarised in Table 1. P represents the trust assessment resulting from the combination of values of the trust metadata in category A of the online-service domain of a standalone webdocument.


In generic terms, a suitable value of P can be obtained from a function based on the number of attributes present:

• Count the number of attributes present; for example, 5.
• Divide by the total number of attributes in Category A; that is, 5/16.
• Assign the result of the above calculation to the P value; that is, P = 0.3125.

Based on this, and on hypothesized page contents for each (see [19]), the P values for Comfy, Bayview, BlueSky and Acme are 0.9375, 1, 0.9375 and 0.6 respectively.

Q represents the trust assessment resulting from the combination of professional affiliations (Category B) and a chain of certificates (Category C); that is, the result of verification of the evidence-of-approval domain. Including consideration of the fading factors, the Q values for Comfy, Bayview, BlueSky and Acme are 0.7, 0.7, 0.8 and 0.8 respectively.

R represents the relevance assessment resulting from measuring the relevance of online service(s) between a hyperlinked webdocument and the targeted webdocument. The target Acme webdocument has a default R value of 1. For the other pages, key Category A attributes (for example, Primary-Service) are compared with the target webdocument to determine their degree of relevance. This comparison gives R values for Comfy, Bayview, and BlueSky of 0.5, 1.0 and 0.6 respectively.

S represents the result of subordinate assessment, which is based on the trust assessment of the other standalone webdocument(s) in the relevance tree. A standalone webdocument (e.g. the targeted webdocument) often has hyperlinks to other webdocument(s), each of which has a relevant trust value of S_1, S_2, ..., S_n. That is, S = s(S_1, S_2, ..., S_n), where S_i = s(TT_i, R_i) for some functions (see the description of TT below). S is the contribution to this document from its children's total trust values and associated relevance values (R) in the relevance tree. In general, S is the sum of the combination of the total trust values of the child nodes (TT_child) and the relevance values (R_child) of the direct-subordinate nodes; that is,

S_i = s(TT, R) = \sum (TT_{child} \times R_{child}) / (\text{no. of children}) .

If there is no child node, then S = 0, this being the special case for leaf nodes of a tree. Based on this, the S values for Bayview and Acme are 0.2895 and 0.4519.

TOT represents the assessment of transitivity of trust, which concerns how the trust value of a webdocument is influenced or affected by hyperlinked webdocuments. It is desirable to be able to achieve scalable trust on the Web, which allows a certain level of trust to travel to a number of nodes (or entities) and still be able to maintain a certain level of trust within a specific period.

TT denotes the result of the overall trust assessment of a webdocument. It combines the values of categories A, B and C of the standalone webdocument and its associated subordinate nodes in the relevance tree, and normalises the result. TT can be measured in a number of possible ways, which include extracting trust attributes, weighing each trust category and combining the results of all trust assessment components, including P, Q and S, for each webdocument. TT is required for every node in a relevance tree. In generic terms, a value of TT can be obtained from a function based on the total trust value of each hyperlinked webpage, which can be expressed in the following way: TT = tt(P, Q, S). For a leaf node with no hyperlinked child node, TT = tt(P, Q). Computing TT is a recursive process.


The TT value of Acme is based on the total trust value of each node in the relevance tree (Figure 3). The following formula is used for the TT value: TT = tt(P, Q, S) = (P + Q + S) / 3, so

TT of Comfy = (0.9375 + 0.8 + 0) / 3 = 0.579
TT of Bayview = (1 + 0.35 + 0.2895) / 3 = 0.5465
TT of BlueSky = (0.9375 + 0.85 + 0) / 3 = 0.5958
TT of Acme = (0.6 + 0.85 + 0.4519) / 3 = 0.7666

That is, the total trust value of the targeted page Acme is about 76.6%, as shown in Table 1.

Table 1. Total Trust calculation of Acme

Node ID   P (Cat-A)   Q (Cat-B)   Q (Cat-C)   R     S        TT
Comfy     0.9375      0.7         0.9         0.5   0        0.579
Bayview   1           0.7         0           1     0.2895   0.5465
BlueSky   0.9375      0.8         0.9         0.6   0        0.5958
Acme      0.6         0.8         0.9         1     0.4519   0.766

The total trust value of Acme's holiday package webpage is thus about 76%; it combines the values of the two trust domains, including the associated fading factors, with the standalone trust values and the subordinate values from the relevance tree. The results can either be self-stored or stored in a third party's trust database for historical information, and displayed to end users through a front-end client interface.

In brief, the proposed trust evaluation model performs the following functions in its different components:

1. Input component: identifies sources of trust information, assigns the default weight for each trust attribute according to its category, and draws a graph by following each external hyperlink from Category A of the targeted webpage, Acme;
2. Trust metadata construction component: constructs the relevance tree based on relevance assessment (i.e. assessing fading factors) according to the Primary Service of Category A of Acme;
3. Trust evaluation component: calculates the P, Q, S and TT values, including the fading factors in Categories B and C of each node in the relevance tree;
4. Trust metadata reconstruction component: updates the trust metadatabase with the associated trust profiles for future reference; and
5. Output component: prepares different formats of the total trust value of the targeted webpage, Acme, which can be read either by devices or by Web users.

Full details of the trust evaluation and possible formulae are provided in [19].
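The worked example can be followed, in outline, with the recursive sketch below, which implements TT = (P + Q + S)/3 with S = sum(TT_child x R_child)/(number of children). The node representation and function names are our own, the Category B/C values are assumed to already include the fading factors, and the shape of the relevance tree (Comfy under Bayview; Bayview and BlueSky under Acme) is inferred from the reported S values rather than taken directly from Figure 3.

```python
# Sketch of the recursive total trust computation over a relevance tree.
# Node fields: P (Category A), q_b and q_c (Categories B and C, after fading
# factors), and children as (R, child_node) pairs. Names are illustrative.
def total_trust(node):
    Q = (node["q_b"] + node["q_c"]) / 2          # evidence-of-approval domain
    children = node.get("children", [])
    if children:
        # Subordinate assessment: S = sum(TT_child * R_child) / no. of children
        S = sum(total_trust(child) * R for R, child in children) / len(children)
    else:
        S = 0.0                                  # leaf node
    return (node["P"] + Q + S) / 3

comfy   = {"P": 0.9375, "q_b": 0.7, "q_c": 0.9}
bayview = {"P": 1.0,    "q_b": 0.7, "q_c": 0.0, "children": [(0.5, comfy)]}
bluesky = {"P": 0.9375, "q_b": 0.8, "q_c": 0.9}
acme    = {"P": 0.6,    "q_b": 0.8, "q_c": 0.9,
           "children": [(1.0, bayview), (0.6, bluesky)]}

for name, node in [("Comfy", comfy), ("Bayview", bayview),
                   ("BlueSky", bluesky), ("Acme", acme)]:
    print(name, round(total_trust(node), 4))
# Comfy, Bayview and BlueSky reproduce the values in Table 1; Acme evaluates
# to about 0.634 under these assumptions.
```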

6 Conclusion

The proposed W3 trust-profiling framework (W3TF) combines efforts by Web research communities with associated issues from the wider Web trust spectrum, including government and industry, to present a promising approach for online trust assessment and a sound foundation from which further studies might be built.


The W3TF is versatile. It can be expanded to accommodate new trust attributes, categories and domains, and trust can be 'weighed' (and therefore evaluated) by using various mathematical formulae based on different theories and policies. Clearly, further work is required to validate the practical implementation of this framework. This work would involve deploying a prototype implementation of the framework, investigating other possible sources of trust attributes, and evaluating various models for combining the trust attributes into a final overall value.

In a heterogeneous Web environment, transitivity of trust can be achieved through a combination of standalone and hyperlinked trust assessments and an appropriately constructed relevance tree. W3TF provides a mechanism for the evaluation of trust and transitivity of trust through trust metadata and associated trust categories, relevance assessment, subordinate assessment, fading factor analysis, and trust weighting to allow evaluation of different trust domains. Based on this trust profile, we believe that online consumers can make more informed decisions and, consequently, that their confidence would be improved.

References

[1] France Belanger, Varadharajan Sridhar, & Craig Van Slyke. Comparing the Influence of Perceived Innovation Characteristics and Trustworthiness Across Countries. Proceedings of the International Conference on Electronic Commerce Research (ICECR-5), Nov 2002.
[2] Cheskin & Studio Archetype/Sapient, San Francisco. eCommerce Trust Study, Jan 1999. www.cheskin.com/think/studies/ecomtrust.html
[3] iTrust. Aspects of Trust, The iTrust working group. http://www.itrust.uoc.gr/dyncat.cfm?catid=37, cited 28 Oct 2005.
[4] B. Fogg, J. Marshall, O. Laraki, A. Osipovich, C. Varma, N. Fang, J. Paul, A. Rangnekar, J. Shon, P. Swani, M. Treinen. What Makes Web Sites Credible? A Report on a Large Quantitative Study. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Seattle, USA, pp 61-68, 2001. ISBN:1581133278.
[5] BJ Fogg, Jonathan Marshall, Tarmi Karmeda, Joshua Solornon, Akshay Rangnekar, John Boyd, & Bonny Brown. Web Credibility Research: a Method for Online Experiments and Early Study Results, 2001. http://www.webcredibility.org/studies/WebCred Fogg CHI 2001 short paper.PDF
[6] Gorsch, D. Internet Limitations, Product Types, and the Future of Electronic Retailing. Proceedings of the 1st Nordic Workshop on Electronic Commerce, Halmstad University: Viktoria Institute, May 2001.
[7] T. Grandison and M. Sloman. A Survey of Trust in Internet Applications. IEEE Communications Surveys and Tutorials, Fourth Quarter 2000, Vol 3, No 4, IEEE. www.comsoc.org/livepubs/surveys/public/2000/dec/grandison.html
[8] Amir Herzberg and Ahmad Jbara. Reestablishing Trust In the Web. Dr. Dobb's Journal, Oct 2005.
[9] Donna L. Hoffman, Thomas P. Novak, Marcos Peralta. Building Consumer Trust Online. Communications of the ACM, Vol 42, No 4, pp 80-85, Apr 1999. ISSN: 0001-0782.


[10] Head, M.M., Yuan, Y., Archer, N. Building Trust in E-Commerce: A Theoretical Framework. Proceedings of the Second World Congress on the Management of Electronic Commerce, MCB Press, Jan 2001.
[11] S. Jones. TRUST-EC: Requirements for Trust and Confidence in E-Commerce. Workshop on Requirements for Trust and Confidence in E-Commerce, Luxembourg, CEC, 1999.
[12] Audun Jøsang, Ingar Glenn Pedersen and Dean Povey. PKI Seeks a Trusting Relationship. In Ed Dawson, Andrew Clark, Colin Boyd (eds), Information Security and Privacy: Proceedings of ACISP 2000, Lecture Notes in Computer Science, Vol 1841, pp 191-205, Springer-Verlag, 2000. http://security.dstc.edu.au/papers/pkitrust.pdf
[13] Peter Keen. Electronic Commerce and the Concept of Trust, 1999. http://www.peterkeen.com/recent/books/extracts/ecr1.htm, cited 28 Oct 2005.
[14] Luis F. Luna-Reyes, Anthony M. Cresswell, George P. Richardson. Knowledge and the Development of Interpersonal Trust: a Dynamic Model. Proceedings of the 37th Hawaii International Conference on System Sciences (HICSS '04), Track 3, p30086a, 2004.
[15] Massimo Marchiori. The Limits of Web Metadata, and Beyond. The World Wide Web Consortium (W3C), MIT Laboratory for Computer Science, USA, 1998. http://www7.scu.edu.au/programme/fullpapers/1896/com1896.htm
[16] Princeton Survey Research Associates. A Matter of Trust: What Users Want From Web Sites, Jan 2002. http://www.consumerwebwatch.org/news/report1.pdf
[17] Ben Shneiderman. Designing Trust into Online Experiences. Communications of the ACM, Vol 43, No 12, pp 57-59, Dec 2000. ISSN:0001-0782.
[18] ITU-T Recommendation X.509, Information Technology - Open Systems Interconnection - The Directory: Authentication Framework. International Telecommunication Union, Jun 1997. ISBN: 0733704263.
[19] Yinan Yang. W3 Trust-Profiling Framework (W3TF) to Assess Trust and Transitivity of Trust of Web-based Services in a Heterogeneous Web Environment. PhD Thesis, School of Information Technology and Electrical Engineering, University of New South Wales, ADFA, Canberra, Australia, Aug 2004.
[20] Y. Yang, L. Brown, J. Newmarch and E. Lewis. Trust Metadata: Enabling Trust and a Counterweight to Risks of E-Commerce. Proceedings of the Asia Pacific World Wide Web Conference, pp 197-203, 1999.
[21] Y. Yang, L. Brown, J. Newmarch, E. Lewis. A Trusted W3 Model: Transitivity of Trust in a Heterogeneous Web Environment. Proceedings of the Fifth Australian World Wide Web Conference, Queensland, pp 59-73, Apr 1999. ISBN:1863844554.
[22] Y. Yang, L. Brown, E. Lewis, and J. Newmarch. W3 Trust Model: a Way to Evaluate Trust and Transitivity of Trust of Online Services. Proceedings of the Internet Computing Conference, Las Vegas, USA, Jun 2002.

Image Description Mining and Hierarchical Clustering on Data Records Using HR-Tree

Cong-Le Zhang, Sheng Huang, Gui-Rong Xue, and Yong Yu

Apex Data and Knowledge Management Lab, Department of Computer Science and Engineering, Shanghai JiaoTong University, Shanghai, 200030, P.R. China
{zhangcongle, grxue, shuang, yyu}@apex.sjtu.edu.cn

Abstract. Since we can hardly obtain semantics from the low-level features of an image, it is much more difficult to analyze images than textual information on the Web. Traditionally, the textual information around an image is used to represent the high-level features of the image. We argue that such a "flat" representation cannot describe images well. In this paper, Hierarchical Representation (HR) and the HR-Tree are proposed for image description. Salient phrases in the HR-Tree further distinguish an image from others sharing the same ancestor concepts. First, we design a method to extract the salient phrases for the images in data records. Then HR-Trees are built using these phrases. Finally, a new hierarchical clustering algorithm based on the HR-Tree is proposed for convenient user browsing. We demonstrate some HR-Trees and clustering results in the experimental section. These results illustrate the advantages of our methods.

1 Introduction

With the rapid growth of the World Wide Web (WWW), images have been playing an increasingly important role in representing web pages. For the information retrieval task, there is an increasing need for automatic approaches to analyze images, such as image description mining and image clustering. As shown in Figure 1, many images and much textual information are organized into regular structures, e.g. product lists, sports and entertainment news, etc., which are called data records in [11, 18]. Describing and clustering these images can provide many value-added services (e.g. comparative shopping, convenient browsing).

Traditional information retrieval technologies can hardly be applied to images directly, since the low-level features of images, such as pixels, color and brightness, are far from semantic meaning. Fortunately, images in web pages are usually associated with context information. Mining and reorganizing images' context information has become a popular approach for converting images into some form of semantic representation in recent years. Several approaches [1, 2] have been reported for image retrieval and clustering. In traditional Content-Based Image Retrieval (CBIR) approaches [1, 4, 5], the similarity between two images is measured using color, brightness, texture, etc. Other approaches [7, 9, 10] exploited the textual information surrounding the images, taking the entire context as the image representation. We call this method "flat representation". However, the context may include noise. As shown in Figure 1, the phrase "Product Number" surrounding all images is useless for distinguishing the different images.


We notice that in Figure 1, "Apple" is an abstract description: since both products are Apple products, it distinguishes these two images from other brands' products. The detailed information, like "iMac G5 Desktop" and "iBook G4 Notebook", further distinguishes image 1 from image 2. If we extract such terms to describe the image, we can build a "hierarchical representation" and improve hierarchical image clustering.

Fig. 1. A simple example of images in web pages; the blue area is a "block"

We call the terms that contribute most to the description of an image salient phrases. Using salient phrases, we propose the "Hierarchical Representation (HR) Tree" to describe images in web pages (see Fig. 2). Our basic idea is that the sub-phrases of one concept distinguish the image from others that share the same ancestor concepts. Based on HR-Trees, a hierarchical image clustering algorithm is proposed. The HR-Tree is naturally associated with hierarchical clustering, since sub-clusters should group around more concrete, more distinguishable concepts than their parent collection does; meanwhile, phrases in the HR-Tree are established by following the same strategy. A hierarchical representation is more suitable for users to browse. Besides, each sub-cluster is labeled with some readable phrases, which enables users to identify the group they are interested in at a glance. All of these make the results of image clustering more natural and friendly.

Fig. 2. An example 3-level Hierarchical Representation Tree for the two images in Fig. 1

In this paper, we focus on images in data records. We evaluate the salience of each phrase in describing images using some characteristics of the data record structure. To show the feasibility and advantages of our models, we conduct experiments on image description and clustering. Good performance is achieved due to the stronger ability of the HR-Tree to represent images.


The major contributions of our paper are: 1) an algorithm to mine the text associated with images in web pages; 2) effective methods to extract salient phrases for the images in data records; 3) an improved hierarchical representation model using the HR-Tree; and 4) a hierarchical clustering algorithm based on the HR-Tree. The rest of the paper is organized as follows. Some related work is introduced in Section 2. How to acquire candidate textual information for images is discussed in Section 3. The Hierarchical Representation and corresponding algorithms are given in Section 4. In Section 5, HR-Trees are applied to clustering images. Section 6 presents the experimental results. Finally, we conclude the paper and discuss future work.

2 Related Work

The related work covers the following topics: image retrieval and clustering; data record mining; keyword extraction; and web document clustering.

Several image retrieval systems and methods have been proposed. Traditionally, image search [1] and clustering [3] used image content to analyze its semantics. Some systems [4, 5] based on CBIR were designed and implemented. The problem remains that it is very hard to learn the semantic meaning of an image from low-level visual features [2]. Some commercial image search engines like Image Google [10] and AltaVista [9] use the text extracted from the HTML page to describe the image. Such a flat representation does not emphasize important phrases and does not reveal the structural relationship between concepts, which is exploited by our approach to improve image clustering.

Data records are an important resource on the Web, and many works have discussed the problem of mining data records from web pages. Some approaches [12, 13] proposed wrapper induction, which extracts rules learned from manually labeled pages and data records. Automatic methods [16, 17] discovered patterns or grammars from initial pages. Both need manual effort. MDR [11] and DEPTA [17] identify each data record by analyzing HTML tags or DOM trees. Our mining algorithm has some similarities to them, while we utilize the characteristics of records to mine image information. In addition, we consider the page linked to the image.

It has been pointed out that document summarization can improve web page classification [5]. This has some relationship with our work, since both consider the selection of "important" words; the difference is that we extract salient words or phrases to describe and cluster images in data records. Topic finding [14] and key phrase extraction [15] provide methods to evaluate the importance of a word in a document. Zeng [19] ranked salient phrases in snippets using several commonly used metrics. The idea of feature calculation is important for our work; functions to calculate features are re-designed and tested to fit well with our model.

Some research [3, 7] discussed issues in image clustering. All of these works use the text surrounding images as the text feature; they combine this feature with some others to represent images and then perform clustering. Traditional document clustering was applied, and the resulting clusters were unreadable for users. Recently, some works [19, 20] proposed new methods to identify sets of documents that share common phrases, but these works perform flat clustering. Our contribution in this paper is to combine this idea with the HR-Tree and propose a hierarchical image clustering method.


3 Mining Textual Information in Data Records

Data records are regular structural objects [2, 11], e.g. product lists, service lists, news, sports and entertainment information. In this paper, our study focuses on product lists; the model can easily be extended to other kinds of data record sets. Before explaining the idea of hierarchical representation (HR), we briefly introduce our method to mine the text for the images in data records. We call these texts "image documents". We notice two kinds of good candidates: 1) information in the block, which we call "block text" (in our work, a "block" means a piece or group of information, e.g. the blue area in Figure 1); and 2) the description text in the linked page, if such a link exists for an image, which we call text in the "description page".

The "block text" is similar to the context of the image mentioned in previous works; moreover, we consider the data record structure of the web page. For an image, the context in the same block as the image is treated as valid. We mine block text by analyzing HTML tags and the DOM tree. We focus on two kinds of tags: the image tag, which indicates the occurrence of an image; and block-level tags, which indicate the occurrence of a "block". After throwing away images with a high ratio between width and height and those of small size (they are usually noisy images), we build the image-block tree, which is a variant of the DOM tree, as shown in Figure 3. We call the collection of several sub-trees representing an image an image region. One image region may contain several images. Our first job is to identify such regions.

Fig. 3. An example of Image-Block Tree; the red area is the first image region

According to the work of [18], adjacent image regions have the same parent and the same, continuous structure. We have to determine: 1) where the first image region begins, and 2) how many nodes one image region has. Our solution is to try to find an image region starting from each node s sequentially, trying combinations of one node, two nodes, ..., k nodes, and checking whether the structure repeats. We use the smallest such s as the start position and the corresponding k as the repeat cycle. The complexity of this algorithm is small; the proof is presented in [11]. If one image region contains several images, we split the text in the image region according to the image-block tree and assign a segment to each image [18].

Besides block text, we trace the link from the image to the description page. We notice that in the description page: 1) the nodes describing an image are always siblings in the DOM tree, and 2) the useful information is similar to the block text we have already mined. Thus, we first mine these sub-trees according to 1) and then compare them with the block text.


We obtain the sub-tree most similar to the block text and then add the contents of that sub-tree to the image document.
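A sketch of the search for the first image region and its repeat cycle is given below. The representation of each child node as a structure "signature" string and the helper names are assumptions made for illustration; they are not the authors' implementation.

```python
# Illustrative sketch: find the start position s and repeat cycle k of an
# image region among the children of a node in the image-block tree. Each
# child is abstracted as a structural "signature" string (an assumption).
def find_image_region(child_signatures, max_cycle=5):
    n = len(child_signatures)
    for s in range(n):                        # smallest start position first
        for k in range(1, max_cycle + 1):     # try 1-node, 2-node, ... combinations
            pattern = child_signatures[s:s + k]
            repeats, pos = 0, s
            while pos + k <= n and child_signatures[pos:pos + k] == pattern:
                repeats += 1
                pos += k
            if repeats >= 2:                  # the structure repeats: data records
                return s, k                   # start position and repeat cycle
    return None

# Example: two header children followed by three structurally identical records.
print(find_image_region(["HEAD", "NAV", "IMG+TD", "IMG+TD", "IMG+TD"]))  # (2, 1)
```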

4 Hierarchical Representation of One Image

We use an n-gram scheme to obtain raw phrases from the image document. In this section, we convert the flat image representation into a hierarchical image representation by building the HR-Tree. The traditional method treats the entire context as the description of an image. As mentioned in the introduction, we notice that the importance of phrases should not be considered in isolation, but within some environment. For an image of an "Apple G4 Notebook" in a collection of electronic products of different brands, "Apple" distinguishes this image from other brands' products; thus, "Apple" is salient in this collection. Then consider the sub-collection full of Apple products: "Notebook" further distinguishes this image from other Apple products such as desktops, iPods, etc., so "Notebook" is salient in the sub-collection. Since all images there contain "Apple", "Apple" is no longer salient. In the sub-sub-collection full of Apple notebooks, "G4" is salient. In this paper, we call "Apple" and "Notebook" the ancestor phrases of "G4", and we call the environment, i.e. a group of images, a collection.

Based on this observation, we propose the Hierarchical Representation tree (HR-Tree) to describe the image, which will benefit hierarchical clustering (see Section 5). The HR-Tree of an image I is composed of salient phrases. A salient phrase p is good at distinguishing I in a specific environment in which all images share the ancestor phrases of p. The route to construct the tree is a kind of breadth-first search. The key point of the idea is to rebuild the image collection, i.e. the environment, at each step of tree building. Figure 2 shows the 3-level HR-Trees for the images in Figure 1. The zero level is the image. The first-level phrases are salient in describing the image in the initial collection C. Each i-th level phrase is salient in a sub-collection of C whose images share all of its ancestor phrases. Below is our algorithm to 1) rank salient phrases for the image in a certain environment, and 2) build the HR-Tree recursively based on 1).

Rank Salient Phrases. According to the characteristics of the information in data records, we propose methods to rank salient phrases for an image in a collection C with ancestor phrases p_1, p_2, ..., p_k. According to [15, 19], we turn the problem into feature calculation. In the rest of this section, we denote the candidate phrase as p and the image document as I. We propose four features: TFIDF, LEN, DAP and NR.

TFIDF: This property is calculated with the traditional meaning of Term Frequency / Inverted Document Frequency. Our idea is that a good candidate phrase often binds to one image (its TF is high) but appears less in other places (its IDF is high):

S_1(p) = TF \times IDF = \frac{freq(p, I)}{Size(I)} \times \left( -\log_2 \frac{df(p)}{N} \right)    (1)


Here, freq(p, I) is the number of times p occurs in I, Size(I) is the number of words in the image document, df(p) is the number of image documents containing p, and N is the size of the image collection.

LEN (Phrase Length): This is the number of words n in a phrase, used to emphasize long phrases. For example, "cell phone" is better than "cell" or "phone".

    S_2(p) = LEN = n        (2)

DAP (Distance from Ancestor Phrases): In a sub-collection where all image documents share the phrases p_1, p_2, …, p_k, if the phrase p occurs near them, we assume it is more likely to be a concept closely associated with these shared concepts and to further distinguish the image. The calculation formula is given below. If p occurs more than once in the image document, we take the maximum result as the value.

    S_3(p) = DAP = 1 − ( Σ_{i=1}^{k} D(p, p_i) ) / ( k × Size(I) )        (3)

Here, D(p, p_i) is the distance from p to p_i, and the result lies between 0 and 1. There is one special case, k = 0, i.e. ranking phrases in the first level. In this case we use the first occurrence, i.e. one minus the number of words that precede the phrase's first appearance divided by the number of words in the document.

NR (Near Reminder): Phrases adjacent to high-frequency words often signal important information. For example, in a notebook collection, we need the memory and hard-drive sizes to describe the image, and high-frequency words such as "RAM" and "Hard" are good reminders. We consider one adjacent word on each side:

    S_4(p) = NR = ( 1 / |l(p) ∪ r(p)| ) × Σ_{t ∈ l(p) ∪ r(p)} ( freq(t, C) / Size(C) )        (4)

Here, freq(t, C) is the number of times t occurs in the collection, and l(p) ∪ r(p) is the set of phrases immediately to the left and right of p. The performance and combination of the features are examined in the experimental section.

Build HR-Tree: Below is the basic algorithm for describing the images in an initial collection C, based on the ranking algorithm above. P is the set of ancestor phrases and φ is the empty set.

1. Main(C)
2.   For each image I:
3.     Put k sub-trees = HRTs(C, I, φ) under the root Image;

1. HRTs(C, I, P):
2.   Extract the k most salient phrases p_1, p_2, …, p_k for I in the current environment
3.   For each phrase p_i:
4.     Rebuild the sub-collection C_i ⊂ C such that every I_j ∈ C_i contains p_i
5.     Put trees T_1, T_2, …, T_k = HRTs(C_i, I, P ∪ {p_i}) under p_i
6.   Return the k trees rooted at p_1, p_2, …, p_k.

In our algorithm, the calculation and extraction of salient phrases is based on the salience ranking algorithm. When we apply HR-Trees to cluster images, the initial collection simply consists of the images to be clustered. The algorithm is recursive: each node in the final tree is calculated under a new environment whose images share its ancestor phrases, following the definition of the Hierarchical Representation. One critical issue is whether one phrase is allowed to appear in more than one place in an HR-Tree. According to the algorithm, the same phrase in two places describes the image in different environments; even the same phrase may thus play different roles in different places, so multiple occurrences of one phrase are allowed in the same HR-Tree. The HR-Tree is an improved model for describing images. Some parameters of this model must be chosen when it is applied to clustering, including when to stop extending the HR-Tree and the number of branches under each tree node. These issues are discussed in Section 5.
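A minimal sketch, in Python, of the recursive HR-Tree construction described above. For brevity the salience score here is approximated by TFIDF alone (the paper combines TFIDF, LEN, DAP and NR), and the example image documents are purely illustrative:

```python
import math
from collections import Counter

def rank_salient_phrases(image_doc, collection, ancestors, top_k=3):
    """Rank candidate phrases of `image_doc` within `collection`.
    Simplified stand-in: TFIDF only.  `image_doc` is a list of phrases,
    `collection` a list of such phrase lists (the current environment)."""
    n = len(collection)
    df = Counter()
    for doc in collection:
        df.update(set(doc))
    tf = Counter(image_doc)
    scores = {}
    for p in set(image_doc) - set(ancestors):
        idf = -math.log2(df[p] / n) if df[p] else 0.0
        scores[p] = (tf[p] / len(image_doc)) * idf
    return [p for p, _ in sorted(scores.items(), key=lambda x: -x[1])[:top_k]]

def build_hr_tree(image_doc, collection, ancestors=(), top_k=3, depth=3):
    """Recursive HRTs procedure: extract the top-k salient phrases, rebuild the
    sub-collection of images sharing each phrase, and recurse."""
    if depth == 0 or len(collection) <= 1:
        return {}
    tree = {}
    for p in rank_salient_phrases(image_doc, collection, ancestors, top_k):
        sub = [doc for doc in collection if p in doc]   # rebuild the environment
        tree[p] = build_hr_tree(image_doc, sub, ancestors + (p,), top_k, depth - 1)
    return tree

# Usage: one HR-Tree per image in the initial collection C.
C = [["apple", "g4", "notebook", "1.25ghz"],
     ["apple", "ipod", "20gb"],
     ["sony", "plasma", "television", "37-in"]]
hr_trees = [build_hr_tree(doc, C) for doc in C]
```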

5 Clustering Images Using HR-Tree

In this section, we apply the HR-Tree to cluster images. In information retrieval, clustering groups search results into different clusters and enables users to identify the information they need at a glance. Given that each phrase in an HR-Tree is salient for describing the image in some collection, we propose a new hierarchical clustering algorithm based on HR-Trees. Our assumption is that if the HR-Trees of m images contain an identical phrase p, and that phrase is extracted under the same collection C_0 in all m trees, then p is a good phrase to describe these images in that environment, and these m images are very likely to belong to one group, named p. Our algorithm performs soft clustering, and each resulting cluster is labeled with some phrases.

The detailed algorithm for clustering an initial image collection C_0 into hierarchical clusters is as follows. First, we cluster the initial collection C_0 into first-level clusters C_1, C_2, …, C_k: a) using C_0 as the initial collection, we run the HR-Tree construction algorithm to build trees of height two; the algorithm stops after obtaining the phrases in the first level. b) We check each first-level phrase and assign it to the images whose HR-Trees contain it. After that, the image groups around the k phrases with the most image assignments are selected to form k clusters, each named with a salient phrase. Second, we divide one image cluster into subgroups, each of which represents a more concrete concept. If we want to further cluster a collection C_x named q,


we extend the phrase q in all HR-Trees to the second level and repeat step b) above, except that the checked range is now these extended second-level phrases. The resulting clusters are second-level sub-clusters named with two phrases: q and the new corresponding phrase. In order to cluster an i-th level sub-cluster named q_1, q_2, …, q_i into (i+1)-th level ones, we extend the path q_1, q_2, …, q_i in these HR-Trees to the (i+1)-th level,

and repeat step b). (See the left part of Fig. 6 for an example of extending HR-Tree phrases.) In addition, we combine two sub-clusters if they overlap too much, and eliminate those that contain very few images. For the clustering algorithm, not all HR-Tree nodes have to be extended; branches are created only when necessary, which makes the algorithm efficient. The advantage of our algorithm is that the results are more natural, since the hierarchical representation matches human browsing habits better, and giving each cluster a readable name makes browsing friendlier for the user.
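A sketch of the first-level clustering step described above, assuming the HR-Trees have been built as nested dictionaries keyed by phrase (as in the earlier sketch); refining a cluster repeats the same assignment one level deeper:

```python
from collections import defaultdict

def first_level_clusters(hr_trees, k=5):
    """Soft clustering: assign each image to every first-level phrase of its
    HR-Tree, then keep the k phrases with the most image assignments.
    `hr_trees` is a list of nested dicts, one per image (index = image id)."""
    assignments = defaultdict(set)
    for img_id, tree in enumerate(hr_trees):
        for phrase in tree:                          # first-level phrases only
            assignments[phrase].add(img_id)
    ranked = sorted(assignments.items(), key=lambda kv: -len(kv[1]))[:k]
    return {phrase: imgs for phrase, imgs in ranked}  # cluster name -> image ids

def refine_cluster(hr_trees, member_ids, q, k=5):
    """Sub-cluster the images of cluster q using the phrases one level below q."""
    assignments = defaultdict(set)
    for img_id in member_ids:
        for phrase in hr_trees[img_id].get(q, {}):
            assignments[phrase].add(img_id)
    ranked = sorted(assignments.items(), key=lambda kv: -len(kv[1]))[:k]
    return {(q, phrase): imgs for phrase, imgs in ranked}
```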

6 Experiments

We conducted several experiments to test the effectiveness of our methods. First, we crawled web pages containing images and data records from several web sites (see Table 1). After parsing these pages, we extracted images and the corresponding candidate information using our mining algorithm (see Section 3). These data were then used to build HR-Trees and to test the effectiveness of the different features in ranking salient phrases (see Section 4). We illustrate some example HR-Trees to show their descriptive ability, and then perform hierarchical clustering on the images and give some demonstrations to show the advantage of the HR-Tree.

6.1 Experimental Result on the Mining Algorithm

We test our mining algorithm using the data sets in Table 2. We selected 15 hot queries of 3 different types (Table 1) and submitted them to 3 web sites to acquire product images. We randomly selected some images from the automatically crawled data and mined their descriptions manually, in order to make the comparison reliable. We then compared the image documents obtained manually with those produced automatically by our algorithm (see the Cor/Wr/Miss column in Table 2). The two wrong results in the first row stem from images linking to pages on other web sites; this could be solved by requiring that linked pages be on the same web site.

Table 1. 15 queries of 3 groups used to crawl data from web sites. Return Page Num is the number of pages we crawled from one web site for each query.

Type          | Queries                                            | Return Page Num
Brand         | Apple; IBM; Lenovo; Hp; Acer                       | 10
Product Name  | Television; Cell Phone; Camera; Notebook; Printer  | 5
General Term  | Movie; Tea; Flower; Travel; Shoe                   | 10


Table 2. The test data set and some statistics. URL is the web site from which we crawled images; Total Img is the number of product images crawled from that site; Test Imgs Num is the number of randomly selected images whose image documents were mined manually; Cor/Wr/Miss gives, for our mining algorithm, the number of test images correctly or wrongly connected to their image documents, or whose image documents are incomplete (Miss); Avg Len is the average length of the mined text.

URL                 | Total Img | Test Imgs Num | Cor/Wr/Miss | Avg Len
shopping.yahoo.com  | 1832      | 30            | 27/2/1      | 179
www.compusa.com     | 750       | 15            | 15/0/0      | 281
www.shopping.com    | 2177      | 30            | 29/0/1      | 97

6.2 Ranking Salient Phrases

The features for ranking salient phrases are the foundation of HR-Tree construction and hierarchical clustering. Here we test their performance and give a method for combining them. For the test images in Table 1, we manually selected the top 10 salient phrases. We assume that the more a feature's selections agree with the manual result, the better the feature is. We use each single feature to build the first level of the HR-Trees, with the initial collection C being the results from the queries of one type. When the DAP feature is used alone, we assign the query as the ancestor phrase. We follow the evaluation measure of [20]: P@N is the precision within the top N results. The first four columns of Figure 4 show the results.

Fig. 4. Performance of ranking salient phrases. The first four columns show the performance of the single features; LEN', DAP' and NR' are the results of the improved features; AVG is the combined (average) result.

Fig. 4 shows that LEN, DAP and NR alone do not perform well. We notice that some common words and phrases, such as "customer" and "review", still have a chance to obtain high scores under LEN and NR. We therefore improve these features so that phrases with a low TFIDF score have no chance of being chosen, i.e. their other feature values are simply set to 0. We re-evaluated the performance of these improved features; see LEN', DAP' and NR' in Fig. 4. We take the average of TFIDF, LEN', DAP' and NR' and obtain AVG as the combined result; the last column of Fig. 4 displays its effect. Since AVG already reaches an acceptable precision, we did not use a more complicated combination.
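A small sketch of the improvement described above, assuming the four raw feature values have already been computed and normalised to [0, 1]; the TFIDF floor used for the gating is an illustrative value, not one reported in the paper:

```python
def combined_score(tfidf, length, dap, nr, tfidf_floor=0.05):
    """AVG of the four features, with LEN/DAP/NR gated by TFIDF (the LEN', DAP',
    NR' improvement): common phrases with a low TFIDF score cannot be selected."""
    if tfidf < tfidf_floor:
        length = dap = nr = 0.0
    return (tfidf + length + dap + nr) / 4.0
```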


6.3 Illustrative Example of HR-Trees in Describing Images

We selected 3 images from the data sets and built HR-Trees with the initial collection defined as in Section 6.2. Due to limited space, we let each node have 3 branches and only display the phrases in the top 4 levels. When a collection becomes very small, we stop extending the HR-Tree (e.g. G4 → eMac has no branch), since no more phrases are necessary to further distinguish the image. The examples below list, for each image, its three first-level phrases together with the lower-level phrases of the corresponding branch.

Image 1 (Apple eMac desktop):
  Apple: 1.25GHz; 256M; 40GB; Panther; G4; eMac; CRT; CD-RW; Desktop; G4; CD-RW; 12X
  Desktop: Apple; G4; CD-RW; 12X; 17-inch; eMac; FLAT; CRT; CD-RW; G4; 32X10X32; Combo
  G4: eMac; PowerPC; 1.25GHz; eMac; Monitor; 256MB; 1.25GHz; Desktop; eMac

Image 2 (Sony television):
  Television: Sony; KDE37XS955; GRAND; Projection; KDE37XS955; 37-in; Sony; Plasma; TV
  Sony: Television; KDE37XS955; GRAND; Projection; Tuner; HDTV; TV; 112.43; Plasma; 37-in; TV; 112.43
  TV: Plasma; Sony; 37-in; 112.43; 112.43; Plasma; lbs; HDTV; 16:9; 37-in; Sony; 112.43

Image 3 (red roses bouquet):
  Flower: Red; dozen; roses; bouquet; Roses; red; South American; dozen; her; Favorite; roses; bouquet
  Red: flower; dozen; roses; bouquet; dozen; roses; South American; bouquet; Roses; bouquet; dozen; South America
  Roses: bouquet; South American; red; favorite; dozen; South American; bouquet; flower; baby

6.4 Clustering Results

Since the purpose of image clustering is the convenience of human browsing, in this section we provide some demonstrations of our hierarchical clustering results. The results show that, compared with traditional methods, our algorithm produces good clusters as well as convenience for browsers. The left part of Fig. 5 is the interface of our hierarchical clustering system. The user enters queries in the left corner. In order to speed up the system, we use only the block text to describe images. The left frame below the query shows the names and sizes of the clusters and their layered structure. When the user double-clicks a cluster name, the system hierarchically clusters the images and displays the names of the sub-clusters. In the interface, we enter the query "flower"; the left side shows the images in one cluster named "flower, roses, romance". Another demonstration, some first-level clusters for the query "Samsung", is provided in the right part of Fig. 5. The hierarchical clustering results and the layered structure of one cluster named "Television" are displayed in the left part of Fig. 6; "Plasma" and "CRT" are sub-clusters of "16:9". The right part is the HR-Tree of the image in the blue area; red arrows represent the route along which the HR-Tree is extended while executing the hierarchical clustering.


Fig. 5. Left part is the interface of our clustering system. Right part is some first level clusters of query “Samsung”.

Fig. 6. Left part: Hierarchical clustering result about Samsung Television based on HR-Tree. Each row is one cluster. Right part is the HR-Tree of the image in blue region. Red arrows indicate the extending route of the HR-Tree.

Meanwhile, we also conducted a clustering experiment based on traditional K-means. We found that it could cluster the images of "television" and "cell phone" into different groups, since the texts describing them differ greatly. However, the traditional clustering results are poor for the images of "Samsung Television", since the texts describing "Plasma" and "CRT" products are very similar except for a few phrases such as "Plasma" and "CRT". Our algorithm captures these salient phrases and forms more natural clusters for browsers.

7 Conclusion and Future Work

We proposed a new model, the HR-Tree, to describe the images of data records in web pages, and applied this model to hierarchical image clustering. We designed several features to select the salient phrases in data records and organize these phrases in an HR-Tree that distinguishes the image well. Experimental results demonstrate the descriptive ability of HR-Trees. Hierarchical clustering based on HR-Trees can generate reasonable clusters in a hierarchical structure with salient phrases as cluster names. In the future, we will apply the HR-Tree model to other types of images on the web and re-design the features for particular kinds of images. We also plan to design a quantitative evaluation for image clustering with HR-Trees and compare it with other


hierarchical image clustering approaches. We also expect the improved image description can benefit other applications, e.g. image retrieval and classification.

References 1. B. S. Manjunath, W. -Y. Ma, “Texture Features for Browsing and Retrieval of Image Data”, IEEE Trans on PAMI, Vol.18, No. 8, pp. 837-842, 1996. 2. D. Cai, X. He, Z. Li, W.-Y. Ma and J.-R. Wen, Hierarchical Clustering of WWW Image Search Results Using Visual, Textual and Link Analysis. ACM Multimedia2004. 3. Y. Chen, J. Z. Wang, and R. Krovetz. “Content-based image retrieval by clustering”. ACM SIGMM international workshop on Multimedia information retrieval, 2003. 4. C. Frankel, M. Swain, and V. Athitsos, “WebSeer: An image search engine for the world wide web”, TR-96-14, University of Chicago, 1996. 5. Dou Shen, Zheng Chen, Qiang Yang, Hua-Jun Zeng, Benyu Zhang, Yuchang Lu, WeiYing Ma: Web-page classification through summarization. SIGIR 2004: 242-249 6. Jau-Yuen Chen, Charles A. Bouman, Jan P. Allebach: Multiscale Branch-and-Bound Image Database Search. (SPIE) 1997: 133-144 7. Wataru Sunayama, Akiko Nagata, Masahiko Yachida: Image Clustering System on WWW using Web Texts. HIS 2004: 230-235 8. Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, Craig G. Nevill-Manning: KEA: Practical Automatic keyphrase Extraction. ACM DL 1999: 254-255 9. AltaVista image search, http://www.altavista.com/image/ 10. Google image search engine, http://images.google.com/ 11. Bing Liu , Robert Grossman , Yanhong Zhai, Mining data records in Web pages, ninth ACM SIGKDD, August 24-27, 2003, Washington, D.C. 12. Nicholas Kushmerick, Wrapper induction: efficiency and expressiveness, Artificial Intelligence, v.118 n.1-2, p.15-68, April 2000 13. Yalin Wang , Jianying Hu, A machine learning based approach for table detection on the web, WWW02 14. Liu B., Chin C. W., and Ng, H. T. Mining Topic-Specific Concepts and Definitions on the Web. WWW'03, Budapest, Hungary, 2003. 15. Ian H. Witten, Gordon W. Paynter, Eibe Frank, Carl Gutwin, Craig G. Nevill-Manning: KEA: Practical Automatic Keyphrase Extraction. ACM DL 1999: 254-255 16. Valter Crescenzi, Giansalvatore Mecca, Paolo Merialdo: RoadRunner: Towards Automatic Data Extraction from Large Web Sites. VLDB 2001: 109-118 17. Arvind Arasu, Hector Garcia-Molina: Extracting Structured Data from Web Pages. SIGMOD Conference 2003: 337-348 18. Yanhong Zhai, Bing Liu Web data extraction based on partial tree alignment. WWW’05, 2005. Pages: 76 – 85 19. Hua-Jun Zeng, Qi-Cai He, Zheng Chen, Wei-Ying Ma, Jinwen Ma: Learning to cluster web search results. SIGIR 2004: 210-217 20. Oren Zamir, Oren Etzioni: Web Document Clustering: A Feasibility Demonstration. SIGIR 1998: 46-54

Personalized News Categorization Through Scalable Text Classification Ioannis Antonellis, Christos Bouras, and Vassilis Poulopoulos Research Academic Computer Technology Institute N. Kazantzaki, University Campus, GR-26500 Patras, Greece Computer Engineering and Informatics Department, University of Patras, GR-26500 Patras, Greece {antonell, bouras, poulop}@ceid.upatras.gr http://ru6.cti.gr

Abstract. Existing news portals on the WWW aim to provide users with numerous articles that are categorized into specific topics. Such a categorization procedure improves the presentation of the information to the end-user. We further improve the usability of these systems by presenting the architecture of a personalized news classification system that exploits the user's awareness of a topic in order to classify the articles in a 'per-user' manner. The system's classification procedure is based on a new text analysis and classification technique that represents documents using the vector space representation of their sentences. The traditional 'term-to-documents' matrix is replaced by a 'term-to-sentences' matrix that permits capturing more of the topic concepts of every document.

1 Introduction

The information that exists on the World Wide Web, and the users who access or produce it, have reached enormous numbers. This state is not static but a dynamic, continually changing condition that turns the Internet into a chaotic system. It is estimated that more than two billion pages exist at present, while the number of Internet users is hard to count. The consequence of the popularity of the Web as a global information system is that it is flooded with a large amount of data and information; hence, finding useful information on the Web is often a tedious and frustrating experience. The usual solution to finding information is search engines, but their main problem is that they search every corner of the Web, and the results, even for well-defined queries, often run to hundreds of pages. We focus on the needs of Internet users who access news information from major or minor news portals. From a very brief search we found more than thirty such portals in the USA alone. This means that if one wants to find information on a specific topic, one has to search at least the major portals one by one and try to find the news of interest. A better solution is to access each site and search for the specific topic, if a search field exists in the portal. The problem becomes bigger for someone who would like to track a specific topic daily (or several times per day).
X. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 391 – 401, 2006. © Springer-Verlag Berlin Heidelberg 2006


Classification of information into specific categories can solve some of the aforementioned issues. However, it cannot by itself provide personalized results, as the standard classification procedure does not involve the users' interests. All classification algorithms that have been proposed in the past for qualitative and efficient categorization of text, such as the Naïve Bayesian method, support vector machines, decision trees and others, classify a document d_i into a category c_j regardless of the target group that will use the categorized results. Many well-known systems try to address this by creating RSS feeds or personalized micro-sites where a user can add his own interests and watch the most recent and popular items on them. RSS feeds have become very popular and most news portals use them, but the problem of filtering the information remains. Regarding personalization, the attempts made by the major search engines and portals only concern viewing already categorized content according to the user's interests; the user is not included in the classification procedure. MyYahoo is a very representative example [14]. After logging in, the user is given functionality that helps personalize the page. More specifically, the user can add his special interests on news issues by selecting general topics from a list, and every time the user accesses the web page, the most recent results on those topics are displayed. This procedure seems very helpful, but it does not include the user in the classification and rating procedure. Another representative example is the news service provided by Google [11]. The page that appears is fully customizable and the user can add his own query to the appearing results, but his choice is not included in the categorization mechanism, only in the rating mechanism of the entire web.
In this paper, the proposed news portal architecture is based on scalable text classification in order to include the user in the classification procedure. Without prior knowledge of the user's interests, the system is able to provide him with articles that match his profile. The user specifies the level of his expertise on different topics, and the system relies on a new text analysis technique in order to achieve scalable classification results. Articles are decomposed into the vector representation of their sentences, and classification is based on the similarity between the category vectors and the sentence vectors (instead of the document-article vectors). This enables the system to capture articles that refer to several topics even when their general meaning is different.
The rest of the paper is structured as follows. Section 2 presents the general architecture of the system, whose main features are distribution of the workload and modularity of the mechanism. Section 3 describes how personalization is implemented in our portal in order to exploit the user's awareness of a topic and further enhance the categorization procedure. Sections 4 and 5 present a new text analysis technique and introduce the new scalable classification algorithm that relies on it to provide personalized classification results. Section 6 describes the role of the user in the core categorization functionality.
Sections 7 and 8 present the experimental evaluation of the classification algorithm and of the overall system, and Section 9 gives some concluding remarks and issues for future work on the system.


2 General Architecture of the System

The system consists of distributed sub-systems that cooperate in order to provide the end-user with categorized news articles from the web that meet his personal needs. The main features of the architecture are the following.

2.1 Modularity: Creating Autonomous Subsystems

The core mechanism of the system can be described as a general manager and a main database; this is the module where everything starts and ends. The subsystems of the mechanism can work autonomously, but the general manager is responsible for their cooperation. As can be seen from Figure 1, the whole system consists of a manager, a database system and seven subsystems.

Fig. 1. General Architecture

The crawler sub-system is responsible for fetching web documents that contain useful news articles. Apart from a standard crawler mechanism, it also maintains a list of RSS URLs from many major portals. The content extraction manager uses the web components technique [5], [6] and some heuristics in order to extract the text from the fetched web documents. The preprocessing manager, keyword extraction manager, keyword–document matcher and dynamic profile manager implement the Scalable Classification Algorithm that we introduce in Section 5.

2.2 Distributing the Procedure

The procedure of retrieving, analyzing and categorizing content from the World Wide Web is sequential, because each step needs the previous one to be completed in order to


start. This does not prevent a distributed implementation of each step, but it introduces the limitation that step N+1 cannot be started before step N is completed. Step N of the process for text X can, however, be completed in parallel with step N of the process for text Y.

3 Personalizing the Portal

The presentation of the articles to the users must capture user-profile information in order to improve the end-user results. Instead of treating this as a standard text classification problem, we also consider the dynamic changes of Web users' behavior and the 'on-the-fly' definition of category topics. The main technique that our system exploits in order to provide personalized results is the use of scalable text classifiers instead of standard text classifiers. Scalable classifiers permit the classification of an article into many different categories (multi-label classification). In addition, using the article decomposition that we present below (Section 4), we can exploit a user's expertise in a category in order to relax or tighten a carefully selected similarity threshold and provide users with a wider or narrower set of answers. Consider, for example, the text article of Figure 2 and Web users A and B. A is a journalist who needs information about Linux in order to write an article about open source software in general, while B is an experienced system administrator looking for instructions on installing OpenBSD 3.6.

It's official: OpenBSD 3.7 has been released. There are oodles of new features, including tons of new and improved wireless drivers (covered here previously), new ports for the Sharp Zaurus and SGI, improvements to OpenSSH, OpenBGPD, OpenNTPD, CARP, PF, a new OSPF daemon, new functionality for the already-excellent ports & packages system, and lots more. As always, please support the project if you can by buying CDs and t-shirts, or grab the goodness from your local mirror. Source: Slashdot.org

Fig. 2. Example News Article

A well-trained standard classification system would provide the above document to both users, as it is clearly related to open source software and to the OpenBSD operating system. However, although user A would welcome such a decision, it is useless for user B to come across this article. Investigating the cause of user B's disappointment, we see that standard text classification systems lack the ability to provide 'per-user' results. A user's knowledge of a topic should be taken into account when providing him with results: it is more likely that a user who is aware of a category (e.g. user B, who knows a lot about Linux) needs fewer and more precise results, while non-expert users (such as the journalist) will be satisfied with a variety of results. The scalable text classification problem can be seen as a variant of the classical classification problem in which several similarity classes are introduced, permitting different, multi-label classification results depending on the similarity class.

Definition 1. (Scalable Text Classification) Let C = {c_1, …, c_|C|} be a growing set of categories and D = {d_1, …, d_|D|} a growing set of documents. A scalable text classifier that defines p similarity classes is a function Φ: D × C → ℜ^p.

It follows from Definition 1 that, given an initial test set of k training data (text documents) TrD = {trd_1, trd_2, …, trd_k} already classified into m training categories from a well-defined domain TrC = {trc_1, trc_2, …, trc_m}, the scalable text classifier is a function that not only maps new text documents to a member of the TrC set using the training data information, but also:
• Defines p similarity classes and p corresponding similarity functions that map a document into a specific category c. Similarity classes can be seen as different ways to interpret the general meaning (concept) of a text document.
• Permits the classification of each document into different categories depending on the similarity class that is used.
• Permits the definition of new members and the erasure of existing ones from the categories set. This means that the initial set TrC could be transformed into a newly defined set C with or without all the original members, as well as new ones.
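A minimal sketch of the interface implied by Definition 1, assuming documents and categories are passed as plain strings; the p similarity classes are simply the p entries of the returned score vector, and all names here are illustrative:

```python
from typing import Protocol, Sequence

class ScalableTextClassifier(Protocol):
    def __call__(self, document: str, category: str) -> Sequence[float]:
        """Return p similarity scores, one per similarity class,
        for the (document, category) pair: Phi : D x C -> R^p."""
        ...

def multilabel(classifier: ScalableTextClassifier, doc: str,
               categories: Sequence[str], sim_class: int, threshold: float):
    """A document may be assigned to different category sets depending on
    which similarity class is consulted."""
    return [c for c in categories if classifier(doc, c)[sim_class] >= threshold]
```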

4 Text Analysis Using Document Decomposition into Its Sentences Having the vector space representation of a document, it is clear that we have no information on how such a vector has been constructed, as it can be decomposed in infinite ways into a number of components.

Definition 2. (Document Decomposition into Sentences) Let d_i = [v_1, v_2, …, v_k] be the vector representation of a document d_i. A document decomposition into its sentences is a decomposition of the vector d_i of the form d_i = s_1 + s_2 + … + s_n, where the component s_k is a vector s_k = [v'_1, v'_2, …, v'_{|s_k|}] representing the k-th sentence of the

document. Using a decomposition as provided by Definition 2, we can compute the standard cosine similarity using Equation 1. A modified version of the 'term-to-document' matrix, which we call the 'term-to-sentences' matrix, can also be used to include information about the sentence decomposition. Figure 3 provides an example.

    cos(d_i, c_j) = (d_i · c_j) / ( ||d_i|| ||c_j|| ) = ( (Σ_{k=1}^{n} s_k) · c_j ) / ( ||Σ_{k=1}^{n} s_k|| ||c_j|| )        (1)


[Figure 3: a term-to-document matrix over terms t_1, …, t_m and documents D_1, D_2, …, D_n, in which the column of document D_1 (entries a_1, …, a_m) is expanded into the columns of its sentences s_1, …, s_k with entries a_ij.]

Fig. 3. Example 'term-to-sentences' matrix, showing the term-to-sentences analysis of a specific document. The values a_ij satisfy a_i = Σ_{j=1}^{k} a_ij for every term t_i.
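A minimal sketch of the sentence decomposition of Definition 2 and the cosine of Equation 1, under the assumption of a simple raw term-frequency weighting (the paper leaves the indexing method open); the vocabulary, sentences and category vector below are purely illustrative:

```python
import numpy as np

def term_to_sentences_matrix(sentences, vocab):
    """Columns are sentence vectors; their sum is the document vector d_i."""
    A = np.zeros((len(vocab), len(sentences)))
    index = {t: r for r, t in enumerate(vocab)}
    for j, sent in enumerate(sentences):
        for token in sent.lower().split():
            if token in index:
                A[index[token], j] += 1.0        # raw term frequency
    return A

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

vocab = ["openbsd", "release", "wireless", "drivers", "ports"]
sentences = ["OpenBSD release announced", "new wireless drivers and ports"]
A = term_to_sentences_matrix(sentences, vocab)
d_i = A.sum(axis=1)                              # Definition 2: sum of sentence vectors
c_j = np.array([1.0, 0.5, 0.2, 0.2, 0.4])        # illustrative category vector
doc_similarity = cosine(d_i, c_j)                # Equation 1
sentence_similarities = [cosine(A[:, k], c_j) for k in range(A.shape[1])]
```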

5 Scalable Classification Algorithm

The most useful characteristic of the proposed classification algorithm is its scalability. A text document can be classified into many different categories depending on the similarity between the semantic representation of its sentences and the categories. Exploiting the user's level of expertise in a specific area, we can relax or tighten a similarity threshold on the distance between a specific number of sentences of an article and some categories, in order to allow classification of the article into many categories. The formal definition of the training phase of the scalable classification algorithm is shown in Figure 4.

Training Phase
1) Decompose the labeled text documents into their sentences
2) Compute the term-to-sentences matrix of every category using some indexing method
3) Compute the category vectors by combining the columns of the corresponding term-to-sentences matrix
4) Estimate the categories similarity threshold by computing the cosines of the angles between the different category vectors of step 3
5) For each category, estimate the sentences similarity threshold by computing the cosines of the angles between all sentence vectors and the corresponding category vector

Fig. 4. Training Phase of the Scalable Classification Algorithm

The main characteristics of the classification phase (Figure 5) include (a) the ability to adjust the number of sentences k that must match a sentence similarity threshold in order to classify the corresponding document into a category, and (b) the feedback that


the algorithm implicitly takes in order to re-compute the category vectors and therefore capture semantic changes in the meaning of a topic as time passes (i.e., as new text documents arrive).

Classification Phase
1) Decompose the unlabeled text document into its sentences
2) Compute the term-to-sentences matrix of the document
3) Compute the document vector by combining the columns of the term-to-sentences matrix
4) Estimate the similarity (cosine) of the document vector with the category vectors computed at step 3 of the Training Phase. If the cosine matches a similarity threshold computed at step 4 of the Training Phase, classify the document into the corresponding category
5) Estimate the similarity (cosines) of each sentence with the category vectors computed at step 3 of the Training Phase
6) If a cosine matches a similarity threshold computed at step 5 of the Training Phase, classify the document into the corresponding category (allowing scalable multi-category document classification)
7) The category vectors computed during step 3 of the Training Phase are re-computed based on the newly acquired data after the classification of the unlabeled text document into the categories matching the threshold criterion

Fig. 5. Classification Phase of the Scalable Classification Algorithm

It is important to mention that the procedure of 'similarity estimation' involved in many steps of our algorithm can be implemented using a variety of techniques, such as (a) simple cosine computation, (b) latent semantic analysis of the matrix, so as to produce its low-rank approximation and then compute the similarity [3, 4, 8, 13, 15], or (c) other low-rank approximations of the matrix that either use randomized techniques to approximate its SVD [1, 2, 8, 9] or use partial SVD on cluster blocks of the matrix and then recombine them to achieve a fast and accurate matrix approximation.
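A condensed sketch of the two phases above. The rule 'classify when at least k sentences exceed the category's sentence-level threshold' follows Figures 4 and 5, but the 25th-percentile threshold and the convex-combination feedback update are illustrative assumptions, not values prescribed by the paper:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def train(categories):
    """`categories` maps a category name to a list of sentence vectors (the
    columns of its term-to-sentences matrix).  Returns, per category, the
    category vector (sum of columns) and a sentence-level similarity
    threshold, as in steps 3 and 5 of the Training Phase."""
    model = {}
    for name, sent_vectors in categories.items():
        S = np.column_stack(sent_vectors)
        c = S.sum(axis=1)
        sims = [cosine(S[:, j], c) for j in range(S.shape[1])]
        model[name] = {"vector": c,
                       "sent_threshold": float(np.percentile(sims, 25))}
    return model

def classify(sentence_vectors, model, k=2, feedback=0.1):
    """Classify a document (given as its sentence vectors) into every category
    for which at least k sentences exceed that category's sentence threshold,
    then nudge the category vector towards the document (step 7)."""
    d = np.sum(sentence_vectors, axis=0)
    labels = []
    for name, cat in model.items():
        hits = sum(cosine(s, cat["vector"]) >= cat["sent_threshold"]
                   for s in sentence_vectors)
        if hits >= k:
            labels.append(name)
            cat["vector"] = (1 - feedback) * cat["vector"] + feedback * d
    return labels
```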

6 Scalability as Personalization

Users of the system select the level of their expertise on different categories. Using this information, the core mechanism of the system that implements the Scalable Classification Algorithm changes the number k of sentences (according to Table 1) that must match the threshold criterion of a category for an article to be classified into it.

Table 1. Number of sentences that must match the threshold criterion vs. user expertise

k (number of sentences) | User expertise
1                       | low
2                       | medium
3                       | high


7 Experimental Evaluation

The experimental evaluation involves two main steps. First, we analyze the performance of the Scalable Classification Algorithm using a well-known dataset [7]. Using the data gathered during this procedure, we also specify different criterion thresholds and apply them to the core mechanism of the presented system. Finally, experimental results on the classification of real articles are presented. In order to evaluate our scalable classification technique we used the 20-newsgroup dataset [7], which is widely used in the evaluation of many classification algorithms (both supervised and unsupervised). It is a collection of articles from 20 newsgroups, with 1000 articles per category. We preprocessed the documents so as to use only the main text (as the Subject section may contain many keywords of the corresponding category). In order to evaluate the similarity values between different category vectors we used the standard metric [12] that computes the cosine of the corresponding vectors a_j and q using Formula 2.

    cos(a_j, q) = (a_j · q) / ( ||a_j|| ||q|| )        (2)

Below, we present the evaluation of the similarity thresholds obtained for the 'sentence vs. category' comparison using the 20-newsgroup dataset. All experiments were conducted using data collected with both the Rainbow tool [16], for the statistical analysis and separation of the datasets, and TMG [17], a recently developed MATLAB toolbox for the construction of term-document matrices from text collections. Comparing the twenty category vectors, it turns out that different category vectors form a minimum angle of 19.43 degrees and a maximum angle of 53.80 degrees. It is also easily seen that semantically different categories form large angles (e.g. alt.atheism and comp.os.ms-windows.misc form an angle of 42.71 degrees), while semantically close categories form smaller angles (e.g. talk.religion.misc and alt.atheism form an angle of 19.44 degrees). This means that a 'category vs. category' threshold can be estimated at an angle of 19.43 degrees, with a corresponding similarity value of 0.94.

Figure 6 presents the sentence vs. category vector similarities for different categories of the 20-newsgroup dataset. The basic results can be summarized as follows:
• General categories (like alt.atheism or soc.religion.christian) have a dense uniform allocation of similarities in the range [0–0.1] and a sparse uniform allocation in the range [0.1–0.5]
• Well-structured categories seem to be indicated by a uniform sentence vs. category similarity chart

Trying to find an easy way to identify general categories and proceed with further separation, we observe that categories that are not well structured tend to have 'term-to-sentences' matrices with a blocked structure. Figure 6 provides a visualization of the elements of the 'term-to-sentences' matrix, where large values are identified by intense color. The categories identified above as not well structured are shown to have a matrix with a blocked structure (e.g. the (c) and (d) matrices).

Fig. 6. Sentence vs. category vectors for different categories of the 20-newsgroup dataset (first row), with the corresponding 'term-to-sentences' matrix visualized using MATLAB's spy function (second row): (a) comp.os.ms-windows.misc, (b) comp.windows.x

8 System Evaluation

Using the similarity threshold of 19.43 degrees computed on the 20-newsgroup dataset, we tuned the core mechanism of the system, which uses the Scalable Classification Algorithm, so as to classify an article into a category if k sentences of the article match this criterion. Figure 7 shows how many business articles are also classified into other categories for three values of k. As the value of k increases, the number of multi-labeled articles decreases. We also tested the classification feedback that our Scalable Classification Algorithm provides. Figure 8 reports the maximum and minimum angles between the different category vectors as time passes and newly classified articles affect the category vectors. We ran the system for a period of 15 days and computed the angles between the re-computed category vectors at the end of every day. It is easily seen that the minimum angles stay close to 20 degrees, while the maximum angles stay close to 40 degrees.


[Figure 7 legend: category 1 = Business, 2 = Education, 3 = Entertainment, 4 = Health, 5 = Politics, 6 = Sports, 7 = Technology.]

Fig. 7. Multi-labeled business articles for different values of k (the number of sentences that must match the threshold criterion)

Fig. 8. Maximum and Minimum angles between category vectors, for a period of 15 days. Classification feedback of our algorithm results in small variances of the vectors that represent each category.

9 Conclusions and Future Work

In this paper, we propose a new technique for personalized article classification that exploits the user's awareness of a topic in order to classify articles in a 'per-user' manner. Furthermore, the architecture of the backend of a portal that uses this technique is presented and analyzed. Unlike standard personalization techniques, the user only specifies his level of expertise on different categories. The core of the system relies on a new text analysis and classification method that decomposes text documents into their sentences in order to capture more of the topic concepts of every document. For future work, we will further explore the classification of real articles using our system. It will be interesting to apply data mining techniques to the data deriving from the amount of multi-labeled documents and try to identify the behavior and impact of


major ‘alarm news’. The scalable classification algorithm is also of independent interest and we intend to study theoretically its performance.

Acknowledgements Ioannis Antonellis’s work was partially supported by a Zosima Foundation Scholarship under grant 1039944/ 891/ B0011 /21-04-2003 (joint decision of Ministry of Education and Ministry of Economics)

References 1. D. Achlioptas, F. McSherry, Fast Computation of Low Rank Matrix Approximations, STOC ’01 ACM. 2. Y. Azar, A. Fiat, A. Karlin, F. McSherry, J. Saia, Data mining through spectral analysis, STOC ’01 ACM. 3. M.W. Berry, S.T. Dumais & G.W. O’ Brien, Using Linear Algebra for Intelligent Information Retrieval, UT-CS-94-270, Technical Report. 4. M. W. Berry, Z. Drmac, E. R. Jessup, Matrices, Vector Spaces, and Information Retrieval, SIAM Review Vol. 41, No 2 pp 335-362. 5. C. Bouras, V. Kapoulas, I. Misedakis, A Web - page fragmentation technique for personalized browsing, 19th ACM Symposium on Applied Computing - Track on Internet Data Management, Nicosia, Cyprus, March 14 - 17 2004, pp. 1146 – 1147. 6. C. Bouras and A. Konidaris, Web Components: A Concept for Improving Personalization and Reducing User Perceived Latency on the World Wide Web, Proceedings of the 2nd International Conference on Internet Computing (IC2001), Las Vegas, Nevada, USA, June 2001, Vol. 2, pp.238-244. 7. CMU Text Learning Group Data Archives, 20 newsgroup dataset, http://www-2. cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.html. 8. P. Drineas, R. Kannan, A. Frieze, S. Vempala, V. Vinay, Clustering of large graphs via the singular value decomposition, Machine Learning 56 (2004), 9-33. 9. P. Drineas, R. Kannan, M. Mahoney, Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix, Tech.Report TR-1270, Yale University, Department of Computer Science, February 2004. 10. S. Dumais, G. Furnas, T. Landauer, Indexing by Semantic Analysis, SIAM. 11. Google News Service, http://news.google.com 12. W. Jones and G. Furnas, Pictuers of relevance: A geometric analysis of similarity measures, J. American Society for Information Science, 38 (1987), pp. 420-442. 13. T. K. Landauer, P. W. Foltz, D. Laham (1998). Introduction to Latent Semantic Analysis. Discourse Processes, 25, pp. 259-284. 14. My Yahoo!, http://my.yahoo.com 15. C. H. Papadimitriou, P. Raghavan, H. Tamaki, S. Vempala, Latent semantic indexing: A probabilistic analysis, 17th Annual Symposium on Principles of Database Systems (Seattle, WA, 1998), 1998, PP 159-168. 16. Rainbow, statistical text classifier, http://www-2.cs.cmu.edu/~mccallum/bow/rainbow/. 17. D. Zeimpekis, E. Gallopoulos, Design of a MATLAB toolbox for term-document matrix generation, Proceedings of the Workshop on Clustering High Dimensional Data, SIAM 2005 (to appear).

The Adaptability of English Based Web Search Algorithms to Chinese Search Engines Louis Yu, Kin Fun Li, and Eric G. Manning Department of Electrical and Computer Engineering, University of Victoria, Box 3055, Victoria, British Columbia, Canada V8W 3P6 {yul, kinli, emanning}@uvic.ca

Abstract. Much research in recent years has been devoted to meta-search and multilingual search to improve performance and increase the scope of the search. Since most existing web search algorithms are originally developed for English web documents, one would question the efficiency and performance of these techniques as they are applied to documents of other languages. In this work, we have chosen Chinese web search and documents for our study. Potential issues and problems in applying well-known English language based algorithms to Chinese web documents are identified and discussed. Through our qualitative and exploratory quantitative analysis, it can be concluded that these algorithms and techniques cannot be directly used to develop an efficient Chinese search engine.

1 Introduction

The Internet has become an indispensable tool for finding information, with instantaneous results from any of the existing search engines. However, today's search engines still suffer from recall and precision problems [12]. It is therefore desirable for searchers to use different search engines and also to explore collections of documents that are not written in their native languages. Much research in recent years has been devoted to meta-search and multilingual search in order to increase the scope of the search and to improve search performance. The main research issues are how to translate a monolingual query into various languages to retrieve documents from multilingual collections, and how to merge the results from multiple lists of returned items in different languages into a single ranked list. This naturally leads to wider research areas, venues, and issues. Since most existing web search algorithms were originally developed for English web documents, one would question the efficiency and performance of these techniques if they are applied directly to documents of other languages. For instance, should the same algorithm be used for relevance ranking in each of the selected collections of a meta-multilingual search? Perhaps it would be better to use different or modified algorithms developed for the specific language of each collection, select the most relevant items, and then merge these into a ranked list. In this work, we have chosen Chinese web search and document processing for our study. The number of Chinese web pages has increased dramatically in the past few years, and it is expected that the majority of web pages will be written in Chinese in the very near future.
X. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 402 – 413, 2006. © Springer-Verlag Berlin Heidelberg 2006


In 1997, there were only 300,000 computers connected to the Internet in China. Today, there are more than 87 million users browsing close to 600,000 web sites and 5.4 million pages, with about 1.2 million domain names [2]. Another survey has shown that China ranks second in the world in Internet user population. With only 8% of the Chinese population currently online, this projects a huge potential market for Chinese Internet services [4].

Table 1. Chinese Engines: Top Five Search Results for 'Tsunami'

Google:
  news.sina.com.cn/z/sumatraearthquake/index.shtml
  www.nju.edu.cn/njuc/dikexi/earthscience/chp7/hx.htm
  blog.roodo.com/tsunamihelp
  blog.yam.com/tsunamihelp
  news.xinhuanet.com/world/2004-12/26/content_2381825.htm
Yisou:
  news.sina.com.cn/z/sumatraearthquake/index....
  news.tom.com/hot/ynqz
  news.21cn.com/zhuanti/world/dzhx/index.shtm…
  news.xinhuanet.com/world/2004-12/26/content...
  www.nju.edu.cn/njuc/dikexi/earthscience/chp...
Zhongsou:
  news.sina.com.cn/z/sumatraeart...
  www.xinhuanet.com/world/ydyhx/...
  news.sohu.com/s2004/dizhenhaix...
  www.phoenixtv.com/phoenixtv/72...
  61.139.8.15/newstanfo/zhuanti/..
Baidu:
  news.sina.com.cn/z/sumatraearthquake
  post.baidu.com/f?kw=œ£–•
  www.nju.edu.cn/njuc/dikexi/earthscience/chp7/hx.htm
  www.xinhuanet.com/world/ydyhx
  news.sohu.com/s2004/dizhenhaixiao.shtml
Tianwang:
  news.sina.com.cn/z/run/rollnews/12/index.shtml
  dl.dadui.com/softdown/9768.htm
  www.discloser.net/html/175487,72838920.html
  www.51do.com/web/2362/crappdmspjlmallopl.html
  www1.netcull.com/topic/ydyhx.shtml

Of the three main areas of web information retrieval and analysis, content mining is the most difficult for the Chinese language as compared to structure and usage mining. This has a direct impact on the quality of the search results. There are several issues that make Chinese web search and document processing much more difficult than that of the English web. First, there are many different character sets and encoding schemes in use, depending on the geographic region of the web site and the political preference of the author. Big Five (BIG5) or Dawu, the traditional Chinese character set, is used in


Taiwan and Hong Kong. In China, GB or Guojia Biaozhun (National Standard) is used to represent simplified Chinese characters. Increasingly, new web sites use either GBK, Guojia Biaozhun Kuozhan (Extended National Standard), or the multilingual Unicode Standard, both of which contain a larger character set that includes GB and BIG5 [5]. The second problem associated with Chinese language processing is that there is no white space between words as there is in English. Depending on how one reads a sentence or combines the separate characters, it is possible to have multiple valid interpretations of a phrase or sentence. Therefore, Chinese word (or character, bigram, trigram, etc.) segmentation is very difficult and has remained an open research problem that affects the quality of the search, making the direct adaptation of English-language-based web search algorithms a challenge. The effectiveness of term extraction, the clustering of similar documents, and the categorization of documents affect the search engine provider's capability of indexing its document base and providing good summarization in an optimal fashion [16]. In addition, the white space problem at the query level dictates how accurate the matching, and the translation if multilingual databases are used, will be. From the authors' experience, the irrelevance (and hence improper ranking) of many of the search results is quite prominent. Low precision and low recall are particularly acute in Chinese web search. The low recall problem is due to the low coverage of the web by many Chinese search engines. In addition, the lists returned by various search engines have minimal overlap (see Table 1 for the search results of 'Tsunami' from five popular Chinese search engines). It is evident that these engines differ from one another in their methodology for indexing relevant information and in their approach to ranking the relevancy of the results to the query. We speculate that these problems are partly caused by applying existing English-based algorithms to Chinese documents. To explore the characteristics of the Chinese web, we have performed some exploratory experiments on Chinese web graphs and their link structures. Throughout this work, we use six web graphs chosen from our experiments (three randomly selected sites from China and three randomly selected sites from North America) as typical cases to illustrate the materials presented. Table 2 shows the notation and the type of the web sites used.

Table 2. Exploratory Web Sites

Legend | Type
C1     | a China .com site
C2     | a China .org.cn site
C3     | a China .com.cn site
E1     | a North America .com site
E2     | a North America .com site
E3     | a North America .org site

For illustration purposes, we only present the first three layers of each web graph. Thus, layer 1 is the root node, layer 2 contains the nodes referenced by the root node, and layer 3 contains the nodes linked from layer 2 nodes. The size of each web graph is shown in Table 3.


Table 3. Size of the Web Graphs

Root Node | # of Layer 2 Nodes | # of Layer 3 Nodes | Total # of Nodes
C1        | 3                  | 760                | 764
C2        | 8                  | 131                | 140
C3        | 7                  | 754                | 762
E1        | 9                  | 209                | 219
E2        | 15                 | 356                | 372
E3        | 9                  | 165                | 175

Section 2 of this work examines existing web mining techniques and their suitability to be used in Chinese web search. Potential issues and problems in applying well-known English language based algorithms will be identified. In order to build a better Chinese search engine, one must use a hybrid approach by combining various effective techniques, as we argue in Section 3. Finally, current status and work in progress are presented in Section 4.

2 Applying Web Mining Process and Techniques to Chinese Web

In the web mining process, many techniques originating from the data mining area are used to discover and extract information from web documents. It should be noted that most of the web mining techniques existing today were originally developed for English-language search engines. Most non-English search engines still employ, to a certain extent if not entirely, some aspects of the resource finding, information extraction, or ranking techniques used in English search engines. An example is the various Google engines used in different languages, which essentially utilize the same PageRank algorithm as the regular (English) Google search engine [1].

2.1 Resource Finding and Information Selection

Many well-developed approaches and techniques are used to retrieve as many relevant items as possible from the web or data collections. These include document classification and categorization, user feedback interfaces, and data visualization. These techniques are language independent and could therefore be applied directly to documents written in any language. From the retrieved documents, relevant information is extracted. This is important, as only appropriate information should be presented to the users. For unstructured data within a document, linguistic approaches are necessary to perform syntactic and semantic analysis of the textual information. However, these analyses necessitate algorithms that are specifically developed for the Chinese language; in this case, English-based algorithms must be modified, if they are to be used at all. Information can also be obtained from semi-structured data by analyzing meta-information embedded in the document, such as HTML tags, headings, and delimiters. Some research has focused on automated adaptive algorithms using machine learning techniques. For example, Kushmerick [13] and Muslea [17] focus on "wrapper induction", where the documents to be processed are highly regular, such as the HTML text emitted by CGI scripts. These approaches may be less dependent on the


language the document is written in; however, their performance is affected by how the document creator configures its meta-information, and therefore the information selected may not be reliable. Hybrid approaches have also been proposed to extract both unstructured natural text and semi-structured regular text [7][9][18]. This combination of examining meta-information and analyzing the content of the document would be the best approach for extracting information from Chinese documents, although one needs to develop algorithms based on the linguistic rules concerning the syntax and semantics of the Chinese language.

2.2 Information Analysis

The main objective of information analysis is to examine and rank the selected documents according to their relevance to the query. Many ranking algorithms exist. Most techniques are of a statistical nature and estimate a document's relevance ranking. The frequency of keyword appearance in the document is often used as one of the major factors in most methods for estimating a document's relevance. The technique of using a relevance factor such as query density (the ratio of the number of keywords to the total number of words in the document) should work well in ranking Chinese documents as long as the query is simple. Complex and long queries would aggravate the Chinese white space problem, and hence reduce the accuracy of the index terms used, as a result of poor query segmentation. Another popular relevance parameter is a weighted factor or priority on a specific set of documents. For example, academic documents such as published papers may be ranked before other items that contain the same search word. This method, though, may have limited applicability in the Chinese web environment. We have observed that most of the published papers indexed by Chinese search engines are articles found on the Chinese web, and may not be reviewed or as rigorously researched as those appearing in formal academic publications. Visiting frequency is also used as a relevance-determining factor; for example, the method proposed by Craswell et al. [7] requires that a database be maintained on the frequency with which users visit a specific website. The reasoning is that the web sites visited most frequently should be more useful and thus should be ranked higher than others. This ranking technique is language independent and, therefore, appears to work well for web searches in different languages. Upon deeper examination, however, this method may not be appropriate in the Chinese web setting. The majority of Chinese web sites are devoted to specific purposes, in particular entertainment (e.g., movie ratings, mp3 downloads, chats and blogs, etc.) and consumer information (e.g., electronic product ratings, pricing comparisons, etc.). These web sites tend to have a higher frequency of visits than official web sites run by professional organizations, reputable national corporations, and governments. To locate properly researched and formal information, referrals from an entertainment web site may not be the best resource. For instance, using a president's name as the query, the president's biography from an academic site would provide more accurate and valuable information than reviews found on an entertainment web



site for a movie about the president; however, the entertainment web site would certainly have a higher rank due to its higher frequency of visits. The same holds for news sites: a ranking algorithm with frequency of visits as a prime relevancy factor will give higher ranking priority to news sites than to academic or government sites, though for some information seekers, news web sites may provide little or no useful information relevant to what they are looking for. The popularity of news and entertainment sites is evident from the top ranked items shown in Table 1.

Table 4. External Link Classification

External Link    C1      C2      C3      E1      E2      E3
.com            100%     96%     91%     97%     74%     70%
.org              0%      3%      4%      3%     15%     14%
.gov              0%      1%      5%      0%      5%     15%
.edu              0%      0%      0%      0%      6%      1%

In our exploratory investigation, we have also observed that Chinese entertainment web sites tend to reference other entertainment sites; similarly, a news web site has a majority of its external links to sites of the same type. In contrast, North American web sites seem to be more diversified in their external links. As can be observed from Table 4, this is indeed the general case. Both E2 and E3 have a much higher percentage of external links to .org and .gov sites. C3 and E1 are presented as atypical cases.

2.3 The Google PageRank Algorithm

The Google PageRank algorithm by Page and Brin [1] is the best-known algorithm for ranking document relevance. Each web page's ranking is mainly determined by the characteristics of its inbound and outbound links. This approach makes sense, as link analysis provides a very reasonable measurement of how important a web page is as regarded by its peers. However, this may not be the case in the Chinese web setting, as only a small percentage of the Chinese population is actively involved in designing web pages. Moreover, we have observed that Chinese web pages reference each other in a more concentrated fashion, usually within a close-knit community, as supported by other researchers [15]. For example, a research group at a university would make links to other research groups within the same school. Fig. 1 shows the percentage of external links over the total number of links of each web graph. It is evident that the North American sites make a large number of external references compared to the Chinese sites. Here, we define internal links as references back to the same site or the same organization, i.e., sites that have the same top-level and second-level domains in their URLs. For instance, www.ece.uvic.ca and www.cs.uvic.ca belong to the same organization. Indeed, our exploratory investigation shows that most Chinese web graphs have the majority of their links clustered internally within the first three to four layers of the root node.
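As a rough illustration of the internal/external distinction used above, the following Python sketch groups URLs by their top- and second-level domains and computes the percentage of external links for a site. The sketch and its sample data are ours, not the authors'; a production classifier would also need a public-suffix list to handle domains such as .com.cn correctly.

```python
from urllib.parse import urlparse

def org_key(url):
    """Approximate an 'organization' by the last two labels of the host name,
    so www.ece.uvic.ca and www.cs.uvic.ca both map to 'uvic.ca'."""
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

def external_link_percentage(site_url, outgoing_links):
    """Percentage of outgoing links that point outside the site's organization."""
    if not outgoing_links:
        return 0.0
    site_org = org_key(site_url)
    external = sum(1 for link in outgoing_links if org_key(link) != site_org)
    return 100.0 * external / len(outgoing_links)

# Hypothetical link list in the spirit of Fig. 1.
links = ["http://www.cs.uvic.ca/research",
         "http://www.ece.uvic.ca/people",
         "http://en.wikipedia.org/wiki/PageRank"]
print(external_link_percentage("http://www.uvic.ca", links))  # ~33.3
```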



[Bar chart: percentage of external links (y-axis, 0-80%) for web sites C1-C3 and E1-E3 (x-axis); the individual bar values are not fully recoverable from the extracted text.]

Fig. 1. External Link Characteristic

Thus, using link structure analysis to estimate ranking would not be meaningful, as the inbound and outbound links do not provide significant weights as defined in the PageRank algorithm. When applying relevance ranking algorithms to Chinese documents, one needs to examine further the bilateral referencing relationship among the pages to eliminate any biasing effect. Furthermore, ranking parameters may also be affected by cultural discrepancies due to geographical differences, thus rendering the ranking algorithm less effective. China is a large country with many provinces, and each ethnic group has its own dialect, religion, and way of living. These differences, no matter how subtle, may affect how Chinese people think and design web pages. (In our exploratory investigation, we have not, so far, observed any inter-provincial links in Chinese web sites.) This is in contrast to other countries such as Canada and the United States, where the cultural environment does not differ greatly from one province or state to another due to the homogeneity of the population and the identifiable national culture. Therefore, it is important that Chinese web mining algorithms consider these differences in localized cultures.

2.4 Suitability of Existing English Based Algorithms for Chinese Web Mining

From our brief qualitative and exploratory quantitative analysis, it can be concluded that, due to the complexity of the Chinese language as well as the Chinese web culture, methods and techniques originally developed for English-based search engines cannot be used directly in the development of Chinese web search, for performance and efficiency reasons. However, they can be used as frameworks with additional processing, such as physically retrieving the documents, or at least their summaries, for further content analysis, to ensure the quality and relevance of the documents ranked. In the Chinese language, a word may have some very close synonyms. A highly relevant document could be reached using a synonym of the query. This necessitates a thesaurus approach, together with user feedback mechanisms. Also, one must pay attention to the different implications of the query's meaning. A Chinese noun



quite often can be used as a verb or an adjective. Simply counting the frequency of the query in a document may not give a good indication of relevance. This is especially important in information analysis, as there is arguably more complexity in the Chinese language and its representation. Some methods and techniques have been discussed in an attempt to achieve a more effective Chinese web mining process. Several useful concepts have been proposed for Chinese document indexing and retrieval models; interested readers should refer to the excellent survey by Luk and Kwok [16]. In order to develop an effective and efficient search engine to rank and process Chinese documents, existing techniques must be modified to incorporate linguistic rules pertinent to the syntax and semantics of the Chinese language.
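The query-density factor mentioned in Sect. 2.2, the ratio of keyword occurrences to the total number of words, can be sketched as below. The sketch assumes the document has already been segmented into terms, which is precisely the error-prone step for Chinese, and the second call shows how a mis-segmented query finds nothing; the sample terms are our own illustration.

```python
def query_density(query_terms, doc_terms):
    """Ratio of query-term occurrences to the total number of terms in the document."""
    if not doc_terms:
        return 0.0
    query_set = set(query_terms)
    hits = sum(1 for term in doc_terms if term in query_set)
    return hits / len(doc_terms)

# A correctly segmented document containing the word 'tsunami' (海啸) twice.
doc = ["海啸", "预警", "系统", "在", "太平洋", "海啸", "监测", "中"]
print(query_density(["海啸"], doc))      # 0.25
print(query_density(["海", "啸"], doc))  # 0.0 -- poor segmentation misses every hit
```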

3 Building Better Chinese Web Search Engines

There are currently more than three hundred active Chinese search engines [3]. Most of these engines' databases are relatively small, some are specialized engines searching for focused information such as consumer electronics or entertainment, and others are simply powered by the bigger search engines, including the Big Five: Google China (http://www.google.com/intl/zh-CN), Yahoo Yisou (http://www.yisou.com), Zhongsou (http://www.zhongsou.com), Baidu (http://www.baidu.com) and Tianwang (http://e.pku.edu.cn). These engines work reasonably well and are very popular among Chinese users [14]. However, until English-based mining algorithms are modified and better web mining techniques are developed specifically for Chinese web search, we have no choice but to utilize the current search capabilities to bring the users the best results obtainable. We have performed various experiments in Chinese web search. One particularly illustrative case is shown in Table 1. A query of 'tsunami' in Chinese was used for a simple search on the Big Five. We found minimal overlap in the returned results from each engine. For illustrative purposes, Table 1 only shows the top five items from each engine. Among the sixteen distinct documents, only one appears in all five lists. One item appears in three lists and three other items appear in two lists. This leads to the observation that these search engines may be using different relevance ranking algorithms, and to the unavoidable fact that they have only partially overlapping coverage in their indexed databases. In contrast, web sites in North America exhibit different characteristics. The top five items from four of the major search engines, Yahoo, Google, AltaVista, and MSN, show more overlap when the English word 'tsunami' was used as a query, as shown in Table 5. There are nine distinct documents as compared to the sixteen found in the Chinese search. One interesting artifact is that four distinct URLs, all with the same top-level and second-level domains (i.e., washington.edu), point to the exact same web page; hence, we consider these four URLs as one single document. This item appears in every search engine's top-five list. Two items appear in three of the four lists and two other items appear in two lists. Only two of the nine documents do not have a replicate in another engine's result list. Moreover, if the lists of the top ten items are considered, a much higher overlap is observed among the four North American search engines. The same, though, cannot be said of the Chinese search engines.



This difference in overlap characteristic between Chinese and North American search engines suggests that the English engines use very similar merging and ranking algorithms and/or that their crawlers cover a similar web space. This characteristic deserves further investigation.

Table 5. English Engines: Top Five Search of 'Tsunami'

Yahoo:
  www.geophys.washington.edu/tsunami
  en.wikipedia.org/wiki/2004_Indian_Ocean_earthquake
  en.wikipedia.org/wiki/Tsunami
  tsunamihelp.blogspot.com
  www.geophys.washington.edu/tsunami/intro.html
Google:
  www.tsunami.org/
  www.pmel.noaa.gov/tsunami/
  tsunamihelp.blogspot.com/
  www.ess.washington.edu/tsunami/
  en.wikipedia.org/wiki/Tsunami
AltaVista:
  www.flickr.com/photos/tags/tsunami
  www.geophys.washington.edu/tsunami
  en.wikipedia.org/wiki/2004_Indian_Ocean_earthquake
  en.wikipedia.org/wiki/Tsunami
  tsunamihelp.blogspot.com
MSN:
  www.tsunami.org
  www.tsunami.org/faq.htm
  wcatwc.arh.noaa.gov
  www.geophys.washington.edu/tsunami/intro.html
  www.geophys.washington.edu/tsunami/welcome.html
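The overlap comparison can be quantified with a small script such as the one below, shown here for two of the engines in Table 5. Normalization of equivalent URLs (for example, collapsing the washington.edu mirrors into one document) is deliberately omitted, so the counts are only indicative.

```python
from collections import Counter

top5 = {
    "Yahoo": ["www.geophys.washington.edu/tsunami",
              "en.wikipedia.org/wiki/2004_Indian_Ocean_earthquake",
              "en.wikipedia.org/wiki/Tsunami",
              "tsunamihelp.blogspot.com",
              "www.geophys.washington.edu/tsunami/intro.html"],
    "AltaVista": ["www.flickr.com/photos/tags/tsunami",
                  "www.geophys.washington.edu/tsunami",
                  "en.wikipedia.org/wiki/2004_Indian_Ocean_earthquake",
                  "en.wikipedia.org/wiki/Tsunami",
                  "tsunamihelp.blogspot.com"],
}

appearances = Counter(url for urls in top5.values() for url in urls)
print("distinct documents:", len(appearances))                 # 6
print("shared by both lists:",
      sum(1 for count in appearances.values() if count > 1))   # 4
```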

In the case of the Chinese search engines, none of the items from Tianwang appears in any other search engine's results. Most of the sixteen documents are news items. In addition, some results are blog or forum pages. (In contrast, only one item from the results of the North American search engines is of a blog nature.) It is, therefore, very difficult, if not impossible, to judge which Chinese engine provides highly relevant results. Moreover, relevance ranking seems to be a formidable task due to the subjectivity of individual users. Furthermore, commercial secrets and a lack of published information prevent us from analyzing the mining techniques used by these engines and evaluating them accordingly. However, two definitive conclusions can be drawn:
1. Meta-search is a necessity due to the low coverage of the Chinese engines and the complexity of the Chinese language, and
2. An accurate and reliable ranking system may be irrelevant or immaterial, as the onus of selecting the most useful items should be more appropriately placed on the users.



We propose taking advantage of many of the current best practices used in Chinese web search, using meta-search as a foundation, with ideas borrowed from multilingual information retrieval, user feedback, visualization, etc.

3.1 Adapting Meta-search Technology

In an environment where a large number of search servers are available, a meta-search increases the scope and the quality of the search. There are many English-language meta-search engines available, with varying degrees of success. Some of these engines incorporate a collection selection process to identify the most promising search engines for the current query, while others simply treat all available search engines equally. As for the documents retrieved from each individual search engine, either equal numbers of documents are selected from each returned collection, or the selection is proportional to the historical quality of the documents retrieved from each collection. The most challenging task in a meta-search is the merging of the selected documents into a single ranked list. Most merging algorithms are statistics based and incorporate the historical performance of the individual search engines. Various types of information are used to compute the final global rank of a document. A conclusion that can be drawn from the previous result overlap comparison is that fusion algorithms that work well for merging results from English search engines may have to be modified, to take the minimal overlap characteristic into account, when used to merge Chinese search engines' results. This is especially crucial when using the class of overlap-weighted merging algorithms. On the other hand, as we argued earlier, an effective global ranking algorithm may be impossible and, as a matter of fact, a ranked list may be irrelevant, as users are highly subjective in their opinion of how relevant an item is to the query. In China, there are many web search engines that claim to perform meta-search. We have examined over forty such engines, including the well-known ones such as metaFisher, 9om.com, eNo1.com, bestdh.com, etc. We found that many of these are unusable due to poor user interfaces and extremely slow access. Many of them are paid services, or simply serve as portals to other popular web search engines. The only exception that we noted is Wideway Search (www.widewaysearch.com). This meta-search engine performs reasonably well, but we could not find any information regarding how the search is done and how the merged ranking is determined. Moreover, this site specifies that it is a beta test version, which has not been updated since 2000. The current status of Chinese meta-search engines shows that there is room for improvement. There is also strong evidence to support our proposal of using meta-search as a basis for better Chinese web search.

3.2 Adapting Multilingual Information Retrieval Technology

In multilingual information retrieval, the most common approach to creating appropriate queries is the use of a multilingual dictionary. The translated query is then submitted to search engines of different languages. This dictionary idea can be used in processing ambiguous queries in the Chinese language. Jin and Wong have explored a possible Chinese dictionary approach for information retrieval [11]. The best place to deploy the dictionary approach is at the query entry level. Instead of mapping translated queries,



when a user starts inputting the first character, an intelligent interface can present common words or phrases associated with the entered character to facilitate the accurate entry of the desired query. Furthermore, synonym selections can be presented to users based on a thesaurus during query input. This is similar to the mechanism of the pinyin automatic Chinese keyword conversion system used in Google. Concepts and specifications from the World Wide Web Consortium Internationalization Activity's work on Ruby text [10] can also be utilized in this intelligent interface. Ruby is an annotation system for phonetic transcription, widely used for Asian languages, in which the annotation appears near the corresponding characters. Common phrases and synonyms can be displayed alongside the keyed-in query for user selection. This has the additional benefit of serving as an intelligent guide for users who may have problems conceptualizing and pinpointing the exact query to use.
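The input-assist idea could start from something as simple as a prefix lookup over a phrase list combined with a thesaurus, as in the toy sketch below. The phrase and synonym tables here are hypothetical placeholders; a real interface would derive them from query logs and a Chinese thesaurus.

```python
# Hypothetical data; a deployed system would build these from query logs
# and a Chinese thesaurus rather than hard-coded tables.
PHRASES = ["海啸", "海啸预警", "海南旅游", "总统选举"]
SYNONYMS = {"海啸": ["海震", "津波"]}

def suggest(prefix, limit=5):
    """List common phrases starting with the entered character(s),
    plus synonyms of an exact match, for the user to pick from."""
    candidates = [p for p in PHRASES if p.startswith(prefix)]
    candidates += SYNONYMS.get(prefix, [])
    return candidates[:limit]

print(suggest("海"))    # ['海啸', '海啸预警', '海南旅游']
print(suggest("海啸"))  # ['海啸', '海啸预警', '海震', '津波']
```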

4 Current and Future Work

We are currently conducting extensive experiments pertaining to the analysis of applying English based algorithms to Chinese documents. We expect that the results will support the conclusions from our qualitative and exploratory quantitative analysis reported here, that many English based algorithms would not work well in the Chinese web setting. The backbone of a meta-search engine utilizing the Big Five Chinese search engines has been designed and implemented. Advanced user interface and search features are currently being developed. We are also working on the specification for an intelligent agent to facilitate user query input.

References
1. Brin, S., Page, L., "The Anatomy of a Large-Scale Hypertextual Web Search Engine", http://www-db.stanford.edu/~backrub/google.html (Jul. 31, 2005).
2. China Internet Information Center, "A Survey and Report on the Status of Internet Development in China", http://www.cnnic.net.cn/download/2004/2004072002.pdf (Jul. 31, 2005).
3. Chinese-search-engine.com, "Chinese Search Engine Survey", http://chinese-search-engine.com/chinese-search-engine/survey.htm (Sep. 21, 2004).
4. Chinese-search-engine.com, "Marketing China: Simple Facts About China", http://chinese-search-engine.com/marketing-china/china-facts.htm (Sep. 21, 2004).
5. Chinese Mac FAQ, "Character Sets and Encodings", http://www.yale.edu/chinesemac/pages/charset_encoding.html (Jul. 31, 2005).
6. Ciravegna, F., "Challenges in Information Extraction from Text for Knowledge Management", IEEE Intelligent Systems and Their Applications, 2001.
7. Craswell, N., Hawking, D., Thistlewaite, P., "Merging Results from Isolated Search Engines", The Tenth Australasian Database Conference, 1999.
8. Foo, S., Li, H., "Chinese Word Segmentation and Its Effect on Information Retrieval", Information Processing and Management, vol. 40, issue 1, Jan. 2004, pp. 161-190.
9. Freitag, D., Kushmerick, N., "Boosted Wrapper Induction", The Seventeenth National Conference on Artificial Intelligence (AAAI-2000), 2000.



10. Ishida, R., "Ruby Markup and Styling", http://www.w3.org/International/tutorials/ruby (Jul. 31, 2005).
11. Jin, H., Wong, K.F., "A Chinese Dictionary Construction Algorithm for Information Retrieval", ACM Transactions on Asian Language Information Processing, vol. 1, no. 4, Dec. 2002, pp. 281-296.
12. Kosala, R., Blockeel, H., "Web Mining Research: A Survey", ACM SIGKDD Explorations, vol. 2, issue 1, Jul. 2000, pp. 1-15.
13. Kushmerick, N., Weld, D.S., Doorenbos, R., "Wrapper Induction for Information Extraction", International Joint Conference on Artificial Intelligence, 1997, pp. 729-737.
14. Li, Kin F., Wang, Y., Nishio, S., Yu, W., "A Formal Approach to Evaluate and Compare Internet Search Engines: A Case Study on Searching the Chinese Web", The Seventh Asia Pacific Web Conference, Mar. 29-Apr. 1, 2005, Shanghai, China, in Lecture Notes in Computer Science 3399, Springer-Verlag, pp. 195-206.
15. Liu, G., et al., "China Web Graph Measurements and Evolution", The Seventh Asia Pacific Web Conference, Mar. 29-Apr. 1, 2005, Shanghai, China, in Lecture Notes in Computer Science 3399, Springer-Verlag.
16. Luk, R.W.P., Kwok, K.L., "A Comparison of Chinese Document Indexing Strategies and Retrieval Models", ACM Transactions on Asian Language Information Processing, vol. 1, no. 3, Sep. 2002, pp. 225-268.
17. Muslea, I., Minton, S., Knoblock, C., "A Hierarchical Approach to Wrapper Induction", The Third International Conference on Autonomous Agents, 1999, pp. 190-197.
18. Soderland, S., "Learning Information Extraction Rules for Semistructured and Free Text", Machine Learning, 1999, pp. 1-44.

A Feedback Based Framework for Semi-automic Composition of Web Services

Dongsoo Han, Sungdoke Lee, and Inyoung Ko

School of Engineering, Information and Communication University, 119 Munjiro, Yuseong-Gu, Daejeon 305-732, Korea
{dshan, sdlee, iko}@icu.ac.kr

Abstract. In this paper, we propose a feedback based framework for semi-automatic composition of Web services. In the framework, whenever a service process is constructed by composing Web services, the connection patterns of the Web services in the process are analyzed and registered in terms of data and control flows. A method of measuring the degree of coupling between Web services is also devised within the framework. Semi-automatic composition of Web services is possible in the framework because, using this method, the framework can recommend which Web services are more tightly coupled with each other. When one highlights a Web service in a service domain, the next and the previous connectable Web services are listed according to their degree of coupling with the highlighted Web service.

1 Introduction

With the proliferation of Web services [1]-[3], constructing a service process by integrating available Web services on the Internet has become a common activity. Many tools to design and run such service processes have been developed and announced. However, there is still plenty of room to improve the functions of such tools. Among the possible improvements, facilities supporting efficient design of a service process are essential for the success of the tools. For instance, if a system or a tool can identify the Web services that can be connected to a given Web service, and the system is equipped with a mechanism for assigning priorities to the connectable Web services, semi-automatic composition of Web services becomes possible. The burden on service designers, who have to integrate or compose Web services, will be drastically alleviated if semi-automatic composition of Web services is properly applied and used. In semi-automatic composition of Web services, a part of the service process design is supported either by a system or by a tool. In the design of a service process, appropriate Web services should be located and invoked at the right place in the service process for the design to succeed. Semi-automatic composition of Web services supports the activities of searching for and listing the pre or post Web services that can be connected to a highlighted Web service in a service process. If a service process designer chooses a Web service, that Web service becomes the highlighted Web service, and the pre or post connectable



Web services to the chosen Web service are searched and listed. If a Web service is what the process designer is seeking, the Web service is selected and connected to the highlighted Web service. If there is no connectable Web service, a service should be either newly developed or searched for through UDDI [3]. This procedure is repeated until the whole service process is constructed. Several works have addressed the support of semi-automatic composition of Web services [5]-[7]. Attaching semantic descriptions to a Web service description is one approach, where the semantic description is used to identify the pre and post Web services of the Web service. In other research, service and data ontologies are predefined and the coupling of Web services is evaluated based on the distance between the services in the ontology. Theoretically, the two approaches are quite appealing. However, they have some drawbacks when applied in real settings. The semantic description based approach will be successful only when the semantics of most Web services are properly specified and understood correctly, and standards for Web service semantic descriptions must be agreed upon and accepted by the community. The ontology based approach assumes that the service and data ontology is properly defined beforehand. But this assumption does not always hold, and the usefulness of the method is not yet apparent. The two previous approaches share a common feature in that they have unidirectional information flows from the tools to the process designers. No information flows from the defined service processes back to the systems or tools that support semi-automatic composition of Web services. However, the service processes defined in the past or under construction can be an invaluable source of coupling information for Web services. In this paper, we propose a feedback based framework for semi-automatic composition of Web services. In the framework, whenever a service process is constructed by composing Web services, the connection patterns of the Web services in the process are analyzed and registered in terms of data and control flows. We also develop a method of measuring the degree of coupling between Web services within the framework. Semi-automatic composition of Web services is possible in the framework because, using this method, the framework can recommend which Web services are more tightly coupled with each other. When a service process designer chooses a Web service in a service domain, the previous or next connectable Web services are listed according to their degree of coupling. Several benefits can be expected from the feedback based framework for semi-automatic composition of Web services. First of all, by additionally using feedback information, we can support semi-automatic composition of Web services more completely. Secondly, the semantic description based approach or the ontology based approach can easily be combined with the method developed in this paper; that is, the advantages of using these approaches remain valid in our framework. Lastly, we found our method fairly easy to implement, because the feedback mechanism is simple to understand and our coupling-degree evaluation method for Web services is clear.



The next section introduces related work on semi-automatic composition of Web services. Section 3 describes our feedback based framework for the composition of Web services; it contains the architecture of the framework and the coupling-degree evaluation method for Web services. Section 4 illustrates the incorporation of ontology in the framework. Section 5 illustrates the user interfaces and briefly explains their usage. Conclusions and future work are given in Section 6.

2 Related Work

Semi-automatic composition of services or applications is not a new idea. Ko et al. [5] tried to integrate publicly available services and applications on the Internet in the GeoWorld project. They used a service ontology and a data ontology to support the integration of services and applications. In the GeoWorld project, they developed a system that recommends services to users with priorities when users select a service template defined in the service ontology. For the connection of services, they viewed data as the essential element connecting services; that is, services are indirectly connected via data. Sirin et al. [6] proposed a method for semi-automatic composition of Web services. They tried to add semantic descriptions to WSDL [2] to help the semi-automatic composition of Web services. Their proposal is certainly one of the approaches to take, but some standardization effort is required for its adoption. Liang et al. [7] also proposed a semi-automatic method for composing Web services by supporting methods for the discovery, description and invocation of Web services. For the discovery of a composite service template, they constructed a service dependence graph and developed an algorithm that finds a service for the given input and output parameters. The service dependence graph contains operation nodes and data entity nodes. In some sense, they share the same idea as Ko et al., in which services are connected with each other via data. The framework proposed in this paper shares many ideas with the previous approaches. For instance, the ontology technology developed by Ko et al. is also used in our framework with slight modifications. But none of the previous approaches takes into account the composition patterns buried in previously defined Web service processes. Our work is distinguished from these approaches because we actively utilize the connection information in the compositions or integrations of Web services built in the past. That is, the connection patterns of Web services used in the past are the major source for the support of semi-automatic composition of Web services in our framework.

3 Feedback Based Framework

3.1 Architecture of Framework

Fig. 1 shows the architecture and the components of our feedback based framework for semi-automatic composition of Web services. We assume service and



[Architecture diagram showing the components Process Modeling Tool, New Service Creation, Service Registration, Service Recommendation, Service Selection, UDDI, Service/Data Ontology, and Process Repository, connected by interactions such as "create services", "notify absence of services", "ask recommendation of service", "register services", "register processes", "search services", "recommend services", "provide information", and "find dependencies"; the diagram layout is not recoverable from the extracted text.]

Fig. 1. The architecture of feedback based framework for semi-automatic composition of Web services

data ontology is defined and stored already so that it can be searched by the service recommendation module. When the service recommendation module receives a highlighted Web service from a process modeling tool, it searches the service and data ontology and returns a list of Web services connectable to the highlighted Web service. Process designers browse the returned list of Web services and select a Web service. The selected Web service is then connected to the highlighted Web service in the process modeling tool. In selecting a Web service, process designers may not be able to find an appropriate connectable Web service in the list. In that case, the services should be newly created or located from a public UDDI [3], and the newly created or located Web services are registered in the service/data ontology for later reference. Up to now, there is no big difference between our method and conventional frameworks. The uniqueness of our framework originates from its use of the Web service connectivity information in service processes defined in the past. When a process design is finished, the connection patterns of the Web services in the defined process are analyzed and stored in the process repository. Once enough Web service connectivity information from the defined service processes is accumulated, the service recommendation module refers to the process repository as well as the service/data ontology when recommending connectable Web services. Since the process repository contains more practical information on the connectivity of Web services, the capability of the recommendation module is improved further by using the information in the process repository. Meanwhile, when the recommendation module shows a list of Web services, it has to decide the order of the Web services in the list. As the order of a Web service in the list constitutes a rank for the recommendation, some reasonable


[Diagram of the four data access patterns for two directly connected Web services Si and Sj sharing data dk: SiW{dk} - SjR{dk}, SiR{dk} - SjR{dk}, SiR{dk} - SjW{dk}, and SiW{dk} - SjW{dk}; the drawing itself is not reproduced.]

Fig. 2. Four data access patterns of Web services Si and Sj

mechanism to decide the order is necessary. In this paper, the coupling of Web services is used as a means for deciding such orders; that is, a more tightly coupled Web service is recommended with higher priority. In the next subsection, we explain the details of the coupling of Web services.

3.2 Coupling of Web Services

The coupling of Web services is one means of computing the connection possibility among Web services. More tightly coupled Web services in a service process can be considered to have a higher possibility of being connected to each other in other processes. This assumption may not hold in every situation. Nevertheless, when we take into account the fact that some parts of a service are repeatedly used in other services, the assumption is acceptable to some extent. In order to apply the coupling of Web services to the computation of the connection possibility among Web services, we devise equations to measure the degree of coupling of Web services. For the explanation of the equations, we introduce some notations and examples. Fig. 2 shows four different data access patterns when two Web services Si and Sj are directly connected and refer to a common data item dk. For example, when Web service Si writes data dk and Web service Sj reads it, we use the notation SiW{dk} - SjR{dk}. We also devise the following notations to represent the service connection patterns of arbitrary Web services Si and Sj in processes:
• Si ≺i Sj : Web service Si is an immediate predecessor of Web service Sj in some service processes.
• Si ≺ Sj : Web service Si is a predecessor of Web service Sj in some service processes.
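As a minimal illustration of the feedback idea (the paper's actual coupling equations are not reproduced in this excerpt), the sketch below counts how often one service immediately precedes another, Si ≺i Sj, in previously registered processes, and uses that count as a crude stand-in for the coupling degree when ranking connectable successors. The service names and processes are hypothetical.

```python
from collections import defaultdict

def immediate_successor_counts(processes):
    """Count occurrences of Si immediately preceding Sj over past service processes."""
    counts = defaultdict(lambda: defaultdict(int))
    for process in processes:
        for si, sj in zip(process, process[1:]):
            counts[si][sj] += 1
    return counts

def recommend_next(highlighted, counts, limit=3):
    """Rank the services that most often followed the highlighted service."""
    successors = counts.get(highlighted, {})
    return sorted(successors, key=successors.get, reverse=True)[:limit]

# Hypothetical service processes stored in the process repository.
past_processes = [
    ["CheckStock", "ReserveItem", "ChargeCard", "ShipOrder"],
    ["CheckStock", "ReserveItem", "NotifyUser"],
    ["CheckStock", "ChargeCard", "ShipOrder"],
]
counts = immediate_successor_counts(past_processes)
print(recommend_next("CheckStock", counts))  # ['ReserveItem', 'ChargeCard']
```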


[Chart plotting the coupling of services, from tight to loose, against the four data access patterns, under the assumption that service Si is a predecessor of service Sj; the chart values and its caption are not recoverable from the extracted text.]


enumeration > string > datatime > number > other datatype. Notice that the probability depicts how likely a candidate on relation Ri should be extracted as an sub concept of the concept the Ri corresponds. There are alternatives to select the candidates that are to be conceptualized. We may either set a threshold or refer to user to choose top N based on user’s preference. For our example introduced in section 4.1, Table.4 shows a part of the resulting selected candidates together with their probabilities on relation ’STUDENT’. Lattice building The preceding calculation gives us a list of candidates (now we name them concepts) which should be merged into the ontology. However, we still do not know the hierarchy relationship among them, i.e. the sub-super concept relationship.

Mining Query Log to Assist Ontology Learning from Relational Database

445

Table 4. Sample selected candidates 1 2 3 4 5 6 7 8 9

(gender = ’MALE’) : (age > 20) : (gender = ’FEMALE’) : (age < 18) : (gender = ’FEMALE’) AND (age > 20) : (gender = ’MALE’) AND (age > 20) : (gender = ’FEMALE’) AND (age < 18) AND (dept = ’CC12’): (gender = ’FEMALE’) AND (age > 20) AND (dept = ’CC12’): (dept = ’CC12’) :

20.171944 19.904554 19.290842 17.6745 16.93405 16.331322 12.718531 12.515452 11.71472

For example, it is obviously in Table.4 concept 5 should be sub concept of 2 and 3. But only such intuitive judgement does not suffice to construct a sound hierarchy since there is implicit concept that should be modeled. For example, also in Table.4, intuitively both 7 and 8 should be sub concept of 3 and 9. However, it is more appropriate to generate concept (gender = ’FEMALE’) AND (dept = ’CC12’) as sub concept of 3,9 and super concept of 7,8. For this purpose, we introduce FCA into our context. Formal Concept Analysis (FCA) is introduced by WILLE [15], and [16] offers more extensive information. FCA starts with a formal context K which is defined as a triple K := (G, M, I) where G is a set of objects, M is a set of attributes, and I is a binary relation between G and M (namely, I⊆G×M ). For a g ∈ G and m ∈ M, (g, m) ∈ I means that g has an attribute m. The derivation operators are defined as below: F or X ⊆ G, X  := {m ∈ M |∀g ∈ X, (g, m) ∈ I} and dually,

For Y ⊆ M, Y′ := {g ∈ G | ∀m ∈ Y, (g, m) ∈ I}.

A formal concept (or briefly, concept) of a formal context (G, M, I) is a pair (X, Y), where X is a subset of G, Y is a subset of M, and they satisfy the conditions X′ = Y and Y′ = X. X is called the extent of the concept and Y is called the intent of the concept. The partial order (i.e., the subconcept-superconcept relationship) between concepts is defined as:

(A1, B1) ≤ (A2, B2) ⇔ A1 ⊇ A2 (⇔ B2 ⊇ B1)

The concept lattice of K, denoted by B(G, M, I), is the set of concepts of K together with the partial order ≤ between concepts. With a formal context K, we can build the corresponding concept lattice according to a certain algorithm. We borrow the idea of FCA to help us build the concept hierarchy among the selected candidates; we name this set of candidates S. We set up the formal context K with G = {CA | CA ∈ S}, M = {CO | CO ∈ CA, CA ∈ S}, and I = {(CA, CO) | CO ∈ CA, CA ∈ S}. In our experiment, we employed the classical Ganter's algorithm [17], which is the core algorithm in the FCA project TOSCANA [18], to build the



Fig. 4. Partial lattice result

concept lattice. Fig. 4 shows the hierarchy result for the concepts in Table 4 on the relation 'STUDENT' after applying the algorithm. Note that this is a sub-hierarchy for 'STUDENT' and is merged into the upper ontology generated by schema extraction. Every concept in Fig. 3 has a sub-hierarchy like this. Therefore, we are able to obtain a detailed, lower-level ontology.
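To make the construction concrete, here is a naive Python sketch (not Ganter's algorithm) that builds the formal context K = (G, M, I) from a few of the Table 4 candidates and enumerates the concept intents by closing the object intents under intersection; the subconcept-superconcept order then follows from intent inclusion. It favours readability over efficiency and uses only an excerpt of the candidates.

```python
from itertools import combinations

# G: selected candidates; each is described by its set of atomic conditions (subset of M).
candidates = {
    2: {"age>20"},
    3: {"gender=FEMALE"},
    7: {"gender=FEMALE", "age<18", "dept=CC12"},
    8: {"gender=FEMALE", "age>20", "dept=CC12"},
    9: {"dept=CC12"},
}

def concept_intents(objects):
    """Object intents closed under pairwise intersection, plus the full attribute set M."""
    m = frozenset().union(*objects.values())
    closed = {frozenset(attrs) for attrs in objects.values()} | {m}
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(closed), 2):
            inter = a & b
            if inter not in closed:
                closed.add(inter)
                changed = True
    return closed

def extent(objects, intent):
    return sorted(g for g, attrs in objects.items() if intent <= attrs)

for intent in sorted(concept_intents(candidates), key=len):
    print(sorted(intent), "-> extent", extent(candidates, intent))
# Among others, the intent {'dept=CC12', 'gender=FEMALE'} appears with extent [7, 8],
# i.e. the implicit concept discussed above between 3, 9 and 7, 8.
```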

5 Usability

Firstly, one may ask how far our approach would interfere with the daily operation of legacy database applications, especially in enterprise scenarios. Some traditional works migrate data records to class instances while creating the ontology. This requires the program to maintain consistency between ontology instances and data records at all times, which obviously degrades the efficiency of the database server. Our approach does not migrate data and thus does not interfere with the daily operation of legacy database applications. Instead, we enable users to access ontology instances by building a mapping layer between the ontology classes/properties and database terms. Fig. 5 shows an example of how the class BoyStudent is mapped to database terms. In this way, ontology users can access instances transparently, and data consistency is maintained by its very nature. Secondly, one may ask how our approach can be applied when the query log is unavailable, as it may be in many application scenarios. This is indeed a crucial question we are facing. Our answer is to log queries for a certain period of time. For our demonstration example on a real university application database, we collected queries for two months for the experiment. Finally, note that our approach is meant to assist ontology learning from databases. The query log mining method can be integrated into any global process of this



Fig. 5. Example mapping between ontology class and database terms

context. By applying these methods, we can build a bridge between the ontology and the legacy database and thus bring legacy systems into the semantic space. Many ontological techniques currently under research can be applied, and many potential semantic applications can be derived to facilitate the semantic vision.
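A very small illustration of such a mapping layer is given below: each learned class records the relation and condition it was extracted from, and instance access is rewritten into SQL on demand rather than migrating records. The BoyStudent/STUDENT/gender values follow the paper's example; the mapping format and code are our own sketch, not the paper's implementation.

```python
# Ontology class -> (relation, SQL condition); an illustrative representation only.
CLASS_MAP = {
    "Student":    ("STUDENT", None),
    "BoyStudent": ("STUDENT", "gender = 'MALE'"),
}

def instances_query(class_name):
    """Rewrite a request for the instances of an ontology class into SQL
    against the legacy database, so no records need to be migrated."""
    relation, condition = CLASS_MAP[class_name]
    sql = f"SELECT * FROM {relation}"
    if condition:
        sql += f" WHERE {condition}"
    return sql

print(instances_query("BoyStudent"))  # SELECT * FROM STUDENT WHERE gender = 'MALE'
```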

6 Conclusion and Future Work

In this paper we present a novel approach that aims to improve the ontology learning process by mining the user query log. The general idea is based on the fact that user queries largely reflect the conceptual recognition of the database in the human mind. In addition, we introduced a schema extraction sub-approach which serves as the input of the mining phase. Our approach can be integrated into any global process of ontology learning from a database. We examined our approach on a real application database and the results are promising. To the best of our knowledge, there is currently a lack of formal measurements for checking a learned ontology for validity and reliability in this context. In the future, we will pay attention to how such a measurement could be established. Besides, we will further strengthen the mapping architecture between the ontology and the database to guarantee smooth and efficient access for potential semantic applications on the database, so as to make our work more effective and powerful.

References
1. O'Leary, D.E.: Using AI in knowledge management: Knowledge bases and ontologies. IEEE Intelligent Systems (1998) 34-39
2. Chandrasekaran, B., Josephson, J.R., Benjamins, V.R.: The ontology of tasks and methods. In: 11th Workshop on Knowledge Acquisition, Modeling and Management. (1998)
3. Dogan, G., Islamaj, R.: Importing relational databases into the semantic web. (2002)



4. Beckett, D., Grant, J.: (Swad-europe deliverable 10.2: Mapping semantic web data with rdbmses)
5. Stojanovic, L., Stojanovic, N., Volz, R.: Migrating data-intensive web sites into the semantic web. In: Proceedings of the 17th ACM Symposium on Applied Computing (SAC). (2002)
6. Kashyap, V.: Design and creation of ontologies for environmental information retrieval. In: Proceedings of the 12th Workshop on Knowledge Acquisition, Modeling and Management (KAW). (1999)
7. Astrova, I.: Reverse engineering of relational databases to ontologies. In: The Semantic Web: Research and Applications. First European Semantic Web Symposium. Proceedings (Lecture Notes in Computer Science, Vol. 3053), pp. 327-341 (2004)
8. Phillips, J., Buchanan, B.: Ontology-guided knowledge discovery in databases. In: Proceedings of the First International Conference on Knowledge Capture. (2001)
9. J., P., et al.: Data reverse engineering of legacy databases to object oriented conceptual schemas. Electronic Notes in Theoretical Computer Science 72 (2003) 11-23
10. P., J.: A method for transforming relational schemas into conceptual schemas. In: 10th International Conference on Data Engineering (ICDE). (1994)
11. Petit, J.M., Toumani, F., Boulicaut, J.F., Kouloumdjian, J.: Towards the reverse engineering of denormalized relational databases. In: Proceedings of the Twelfth International Conference on Data Engineering (ICDE96). (1996) 218-227
12. Shen, W., Zhang, W., Wang, X., Arens, Y.: Discovering conceptual object models from instances of large relational databases. International Journal on Data Mining and Knowledge Discovery (1999)
13. Shekar, R., Julia, H.: Extraction of object-oriented structures from existing relational databases. SIGMOD Record (ACM Special Interest Group on Management of Data) 26 (1997) 59-64
14. Lammari, N.: An algorithm to extract is-a inheritance hierarchies from a relational database. In: Proceedings of the International Conference on Conceptual Modeling, Paris, France. (1999) 218-232
15. Wille, R.: Restructuring lattice theory: An approach based on hierarchies of concepts. Reidel, Dordrecht, Boston (1982) 445-470
16. Ganter, B., Wille, R.: Formal Concept Analysis: Mathematical Foundations. Springer (1999)
17. Ganter, B.: Two basic algorithms in concept analysis. Technical report, Darmstadt University (1984)
18. TOSCANA - A graphical tool for analyzing and exploring data. LNCS 894, Heidelberg, Springer (1995)

An Area-Based Collaborative Sleeping Protocol for Wireless Sensor Networks

Yanli Cai1, Minglu Li1, and Min-You Wu1,2

1 Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200030, China
2 Department of Electrical and Computer Engineering, The University of New Mexico, Albuquerque, New Mexico, USA
{cai-yanli, li-ml, wu-my}@cs.sjtu.edu.cn

Abstract. An effective approach to energy conservation in wireless sensor networks is putting redundant sensor nodes to sleep while the active nodes provide a certain degree of coverage. This paper presents an Area-based Collaborative Sleeping protocol (ACOS) that can control the degree of coverage for diverse applications. The ACOS protocol, based on the net sensing area of a sensor, controls the mode of sensors to maximize the coverage, minimize the energy consumption, and extend the lifetime of the sensor network. The net sensing area of a sensor is the area of the region exclusively covered by the sensor itself. We study the parameter tuning in ACOS to guide the configuration of the network for diverse applications. The simulation shows that our protocol has better coverage of the surveillance area while waking fewer sensors than other sleeping protocols. It extends the lifetime of sensor networks significantly.

1 Introduction

Energy conservation has been a substantial research topic in wireless sensor networks. A typical sensor node, such as an individual mote, can last only 100-120 hours on a pair of AA batteries in the active mode [1]. Power sources of the sensor nodes are non-rechargeable in most cases. However, a sensor network is usually desired to last for months or years for applications such as habitat monitoring [2]. Significant energy savings can be achieved by reducing the duty cycle of a sensor node. In this approach, some nodes are scheduled to sleep (or enter a power saving mode) while the active nodes provide a certain degree of coverage. The coverage of a sensor network, measured by the fraction of the region covered, represents how well a region of interest is monitored. The degree of coverage needed depends on the specific application. In applications such as military surveillance, it is necessary to provide as much coverage of a security-sensitive region as possible. The coverage should be maintained even after some nodes have failed. Sleeping protocols have been intensively studied, such as RIS [3, 4], PEAS [5] and PECAS [3]. These sleeping protocols attempt to wake fewer sensors for a certain degree of coverage, and thus to extend the lifetime of sensor networks. The main contributions of this work are as follows. We propose a sleeping protocol, named the Area-based Collaborative Sleeping protocol, or ACOS. This protocol is



fully distributed, depending only on local information. The protocol precisely controls the mode of sensors, based on the net sensing area of a sensor, to maximize the coverage and minimize the energy consumption. The net sensing area of a sensor is the area of the region exclusively covered by the sensor itself. If the net sensing area of a sensor is less than a given value, the net area threshold ϕ, the sensor will go to sleep. Collaboration is introduced into the protocol to balance the energy consumption among sensors. We study the parameter tuning in ACOS to guide the choice of the net area threshold for diverse applications and to reduce overhead. Performance evaluation shows that ACOS has better coverage of the surveillance area while waking fewer nodes than other state-of-the-art sleeping protocols, and extends the lifetime of sensor networks. The rest of the paper is organized as follows. Section 2 discusses previous research related to the coverage problem. Section 3 presents the network model and power consumption model. Section 4 describes the design of ACOS. Section 5 provides a detailed performance evaluation and comparison with other state-of-the-art protocols. We conclude the paper in Section 6.

2 Related Work

Different coverage methods and models have been surveyed in [6, 7, 8, 9]. Three coverage measures are defined in [6]: area coverage, node coverage, and detectability. Area coverage represents the fraction of the region covered by sensors and node coverage represents the number of sensors that can be removed without reducing the covered area, while detectability shows the capability of the sensor network to detect objects moving in the network. Centralized algorithms to find exposure paths within the covered field are presented in [7]. In [8], the authors investigate the problem of how well a target can be monitored over a time period while it moves along an arbitrary path with an arbitrary velocity in a sensor network. A given belt region is said to be k-barrier covered by a sensor network if all crossing paths through the region are k-covered, where a crossing path is any path that crosses the width of the region completely [9]. Power conservation protocols such as GAF [10], SPAN [11] and ASCENT [12] have been proposed for ad hoc multi-hop wireless networks. They aim at reducing the unnecessary energy consumption during the packet delivery process. In [13], a heuristic is proposed to select mutually exclusive sets of sensors such that each set of sensors can provide complete coverage. In [14], redundant sensors that are fully covered by other sensors are turned off to reduce power consumption, while the fraction of the area covered by sensors is preserved. The problem of providing periodic energy-efficient radio sleep cycles while minimizing the end-to-end communication delays is addressed in [15]. The target coverage problem has been studied in sensor surveillance networks where a set of sensors and targets are deployed [16, 17]. In [16], heuristics are proposed to divide the sensor nodes into a number of sets, which are activated successively, and at any time instant only one set is active. In [17], a target watching timetable for each sensor is built to achieve maximal lifetime. Sleeping protocols such as RIS [3, 4], PEAS [5] and PECAS [3] have been proposed to extend the lifetime of sensor networks. In RIS, each sensor independently



follows its own sleep schedule, which is set up during network initialization. In PEAS, a sensor sends a probe message within a certain probing range when it wakes up. Any active sensor that receives the probe message replies to it. The sensor goes back to sleep if it receives replies to its probes. In PEAS, an active node remains awake continuously until it dies. PECAS extends PEAS: every sensor remains in the active mode only for a certain duration and then goes to sleep.

3 Network and Energy Consumption Model

3.1 Network Model

We adopt the following assumptions and notations for the network model throughout the paper.
• Consider a set of sensors S = {s1, s2, …, sn}, distributed in a two-dimensional Euclidean plane.
• Sensor sj is referred to as a neighbor of another sensor si, and vice versa, if the Euclidean distance between si and sj is less than 2r.
• Relative locations [18] are used for our protocol.
• A sensor has two power-consuming modes: the low-power (sleep) mode and the active mode.
• For simplicity, the sensing region of each sensor is a disk centered at the sensor with radius r, called the sensing range. As shown later, the protocol easily extends to an irregular sensing region.
• The net sensing region of sensor si is the region within the sensing range of si but not within the sensing range of any other active sensor. The net sensing area, or net area, of si is the area of the net sensing region. The net area ratio, denoted βi, is the ratio of si's net sensing area to si's maximal sensing area, πr².
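Since the net area ratio βi drives every decision in ACOS, it may help to see how a node could approximate it from the positions of its active neighbors (all of which lie within distance 2r). The Monte Carlo estimate below is purely our illustration; the paper does not prescribe how βi is computed.

```python
import math
import random

def net_area_ratio(node, active_neighbors, r, samples=20000):
    """Estimate beta_i: the fraction of the node's sensing disk that no
    active neighbor covers."""
    x0, y0 = node
    uncovered = 0
    for _ in range(samples):
        # Draw a point uniformly from the node's sensing disk.
        angle = random.uniform(0.0, 2.0 * math.pi)
        dist = r * math.sqrt(random.random())
        px, py = x0 + dist * math.cos(angle), y0 + dist * math.sin(angle)
        if all(math.hypot(px - nx, py - ny) > r for nx, ny in active_neighbors):
            uncovered += 1
    return uncovered / samples

# A lone node has beta ~ 1; a co-located active neighbor drives beta to 0.
print(round(net_area_ratio((0.0, 0.0), [], r=10.0), 2))            # 1.0
print(round(net_area_ratio((0.0, 0.0), [(0.0, 0.0)], r=10.0), 2))  # 0.0
```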

3.2 Energy Consumption Model

For a sensor node, the energy consumption mainly consists of three parts: the processor, the radio, and the sensors such as the sounder and microphone. The processor typically has two power levels, for the active and sleep modes. The radio has at least four power levels corresponding to the following states: transmitting, receiving, idle listening and sleeping. Typically, the power required for idle listening is about the same as for receiving. Each sensor on a node usually consumes quite little energy compared to the processor or radio. The sleeping power of a sensor node component is usually two to four orders of magnitude less than the active power. Based on the Mica2 Mote sensor node [1], we set up the energy consumption levels of the different components, as shown in Table 1. Most components work between


Table 1. Energy Consumption Levels

Component               Current    Power
Processor active        8 mA       20 mW
Processor sleep         15 uA      37.5 uW
Radio transmit          27 mA      67.5 mW
Radio receive           10 mA      25 mW
Radio sleep             1 uA       2.5 uW
Radio idle listening    10 mA      25 mW
Sensors active          5 mA       12.5 mW
Sensors sleep           5 uA       12.5 uW

2.2V to 3.3V. For simplicity, we assume each component works at 2.5V. Since several kinds of sensors may work together to finish a task, such as sensor positioning using the sounder and microphone, we assume that the current of all active sensors on one node is 5 mA. We also assume that the battery of a sensor node can last for 100 hours when the node keeps the processor, radio and sensors active all the time.
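As a back-of-the-envelope check of these figures: lasting 100 hours with everything active implies a budget of roughly (8 + 10 + 5) mA x 100 h = 2300 mAh, and the sketch below reuses that budget to estimate lifetime under a duty cycle in which the node sleeps most of the time. The 10% duty cycle is an arbitrary example, not a figure from the paper.

```python
ACTIVE_MA = 8 + 10 + 5             # processor active + radio idle listening + sensors (Table 1)
SLEEP_MA = 0.015 + 0.001 + 0.005   # the same components in sleep mode, in mA
BUDGET_MAH = ACTIVE_MA * 100       # implied by "100 hours with everything active"

def lifetime_hours(active_fraction):
    """Estimated lifetime when the node is active only a fraction of the time."""
    avg_ma = active_fraction * ACTIVE_MA + (1.0 - active_fraction) * SLEEP_MA
    return BUDGET_MAH / avg_ma

print(round(lifetime_hours(1.0)))   # 100
print(round(lifetime_hours(0.1)))   # ~992 hours at a 10% duty cycle
```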

4 The ACOS Protocol

Each sensor node has five states: the Sleep state, PreWakeUp state, Awake state, Overdue state, and PreSleep state. The Sleep state corresponds to the low-power mode. The PreWakeUp, Awake, Overdue and PreSleep states belong to the active mode. The PreWakeUp and PreSleep states are transient states and last for a short period of time, while the Awake and Overdue states may last for several minutes or hours. Every sensor remains in the Awake state for no more than Twake_Duration. The state transition diagram of ACOS is shown in Fig. 1.

[State transition diagram over the five states Sleep, PreWakeUp, Awake, Overdue, and PreSleep; the transition arrows are not recoverable from the extracted text.]

Fig. 1. State transition diagram for optimized ACOS

Sensor si has a decreasing sleep timer Ti_sleep_left, which represents the time left before si wakes up again, and a decreasing wake timer Ti_wake_left, which is initialized to Twake_Duration when the sensor turns from the low-power mode to the active mode. The value of Twake_Duration - Ti_wake_left indicates how long the sensor has been in the active



mode since it turned from the low-power mode to the active mode. Sensor si maintains an active-neighbor list nListi by collecting information from every message received. For any neighbor sk in nListi, sk's location and Tk_wake_left are also stored in nListi. All the above timers decrease as time progresses.

4.1 Waking Up

When sensor si wakes up, its state changes from Sleep to PreWakeUp. It broadcasts a PreWakeUp_Msg message to its neighbors within radius 2r and waits for Tw seconds. When any neighboring sensor sj is in the Awake state and receives this message, sj sends back a Reply_PreWakeUp_Msg that includes its location and Tj_wake_left. Upon receipt of a Reply_PreWakeUp_Msg from any neighbor sj, si extracts the location of sj and Tj_wake_left and stores them in its nListi. At the end of Tw, si computes the net area ratio βi. If βi is less than ϕ, si is not contributing enough coverage and it is unnecessary for si to work at this moment. It therefore returns to the Sleep state and sleeps for a period equal to the minimum value of all Tj_wake_left in nListi. It is possible that several neighbors around an active sensor sj obtain its Tj_wake_left and all wake up at the same time. The consequence is that not only do they contend for communication, but most of them may also decide to start working because they are unaware of each other. To avoid this situation, a random offset ε can be added to the sleep time. If βi is equal to or greater than ϕ, si changes to the Awake state, initializes its wake timer Ti_wake_left and broadcasts a Wake_Notification_Msg that includes its location and Ti_wake_left to its neighbors. When si is still in the Awake state and hears a PreWakeUp_Msg from sj, it replies to sj with a Reply_PreWakeUp_Msg that includes its Ti_wake_left. Although sensor si in its Overdue state is also in the active mode, it does not reply to PreWakeUp_Msg, so that si is not counted by newly woken sensors and is more likely to be able to go to sleep in a short time. This is how the energy consumption balance among sensors is achieved. When si is in the Awake state and its wake timer Ti_wake_left has decreased to zero, it changes from the Awake to the Overdue state.

4.2 Going to Sleep

A simple way to enable a node to go to sleep is described as follows. When si is in the Awake or Overdue state and hears a Wake_Notification_Msg, it first updates its list nListi and recalculates the net area ratio βi. If βi is less than ϕ, si broadcasts a Sleep_Notification_Msg to its neighbors. It then changes to the Sleep state and sleeps for the minimum value of all Tk_wake_left in nListi. Again, a random offset is added to the sleep time. However, this may cause two problems. The first problem is the unawareness of dead neighbors. When a sensor si receives a Wake_Notification_Msg, it computes its net area ratio βi. The calculation of βi depends on the information stored in the local neighbor list nListi. This information may be outdated, because some of the neighbors may have died from physical failure or energy depletion without notification. The second problem is sleep competition caused by a waking-up sensor. Consider sensor si, which decides to wake up after computing the net sensing area



ratio βi. It then broadcasts a Wake_Notification_Msg to its neighbors, and several neighbors may receive this message. Each of them computes its net area ratio without collaboration, and many of them may go to sleep. We call this situation multiple sleeps. In some cases, multiple sleeps are needed to reduce overlap, but in other cases, multiple sleeps should be avoided.

4.2.1 Dealing with Dead Neighbors

Updating local information can solve the dead-neighbor problem. When si receives a Wake_Notification_Msg and its net area ratio βi is less than ϕ, it changes to the PreSleep state and clears the current information in its nListi. It then broadcasts a PreSleep_Msg to its neighbors and waits for Tw seconds. When a neighbor sj, in its Awake or Overdue state, hears this message, sj sends back a Reply_PreSleep_Msg including its location and Tj_wake_left. At the end of Tw, si recomputes the net area ratio βi'. If βi' is equal to or greater than ϕ, this indicates that some neighbors died after the last time si woke up and that si should not go to sleep at the moment.

4.2.2 Dealing with Multiple Sleeps Caused by a Waking Up Sensor

The multiple-sleeps problem is harder than the dead-neighbor problem. In some cases, multiple sleeps are necessary to reduce overlap, while in other cases multiple sleeps will cause sensing holes and should be avoided. Fig. 2 shows examples of these two cases.
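The wake-up and sleep rules of Sects. 4.1 and 4.2 ultimately reduce to comparing the (re)computed net area ratio against the threshold ϕ and choosing a sleep duration from the neighbors' remaining wake times. The condensed sketch below captures only that core comparison, leaving out the messaging, timers, dead-neighbor check and the collaboration step described next; it is a simplification for illustration, not the full protocol.

```python
import random

PHI = 0.5  # example net area threshold, as in the phi = 0.5 example of Fig. 2

def decide_after_probe(beta_i, neighbor_wake_left, epsilon=1.0):
    """Decision a newly woken node (or a node re-evaluating after a
    Wake_Notification_Msg) makes from its net area ratio."""
    if beta_i < PHI:
        # Not contributing enough coverage: sleep until the earliest neighbor's
        # wake timer expires, plus a random offset against synchronized wake-ups.
        sleep_for = min(neighbor_wake_left) + random.uniform(0.0, epsilon)
        return ("sleep", sleep_for)
    return ("stay_active", None)

print(decide_after_probe(0.3, [120.0, 300.0]))  # ('sleep', ~120-121)
print(decide_after_probe(0.8, [120.0, 300.0]))  # ('stay_active', None)
```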


Fig. 2. Example of two cases in multiple sleeps problem

In Fig. 2, s0 is a sensor that has just woken up and broadcast a Wake_Notification_Msg. The shadowed regions are the net sensing regions of sensors s1 and s2 in their Overdue state. We set ϕ = 0.5 in this example. Because s1 and s2 are not counted when their neighbor s0 woke up, s0 decides to start working. According to the baseline ACOS, both s1 and s2 in Fig. 2(a) and Fig. 2(b) should go to sleep, as the net area ratios of s1 and s2 are each less than ϕ. From Fig. 2(a), we can see that both s1 and s2 could go to sleep, because the sleep of s1 increases s2's net area only slightly or not at all. However, in Fig. 2(b), s2 should go to sleep as it has less net area than s1. After the sleep of s2 in Fig. 2(b), s1's net area ratio β1 increases and becomes greater than ϕ. Thus, s1 should not go to sleep. The protocol is enhanced by making the neighbors that are ready to sleep collaborate with each other. When sensor si receives a Wake_Notification_Msg from sj, it updates its net area ratio βi' and broadcasts a SleepIntent_Msg, including βi', to its neighbors.


Within Tw seconds, it receives SleepIntent_Msgs from those neighbors that also intend to sleep. At the end of the Tw seconds, it chooses the sensor sk that has the minimum value of β among the neighbors from which a SleepIntent_Msg has been received. If βi' is greater than sk's net area ratio βk', then si does not hold the minimum net area ratio and its neighbor sk may go to sleep; si then recomputes the net area ratio βi'' by regarding sk as a sleeping node. If βi' is less than βk', it indicates that si does have the minimum net area ratio. If βi'' is less than ϕ, it indicates that the sleep of sk does not greatly increase si's net area. So si can go to sleep relatively safely in the case of βi' < βk' or βi'' < ϕ.

i, p∈ Pe, tp ∈T, rp ∈R, gp ∈G


We now model the four awareness schools using TL. Due to space limits, we cannot describe all properties of the awareness schools; instead we focus on some important aspects of each awareness school.

Conversational Awareness
Conversational awareness corresponds to the communication process, which involves four components: senders, receivers, messages and languages [17]. In this research, we assume that senders and receivers use the same language. Hence, only three components—senders, receivers, messages—are considered. Each utterance of a conversation can be modelled as: M (s, ℑ) |= inform(Sender, Receiver, Message). When a sender sends a message to a receiver, the following awareness conditions need to hold:
aware(Sender, inform(Sender, Receiver, Message)): a sender needs to be aware that the sender informs a receiver of a message.
aware(Sender, receive(Receiver, Message)): a sender needs to be aware that the receiver actually receives the message.
aware(Receiver, inform(Sender, Receiver, Message)): a receiver needs to be aware that a sender informs the receiver of a message. That means that the receiver knows who the sender is and what the message is.

Workspace Awareness
Workspace awareness corresponds to the process of maintaining users' perception of other people and objects in a shared workspace. To show how TL is useful in modelling workspace awareness, we focus on three important aspects of workspace awareness: presence awareness of people, awareness of objects, and awareness of people's activities.

a) Presence Awareness of People
Presence awareness of other people consists of three cases: awareness of past members (i.e., who were in a group), current members (i.e., who are currently in a group), and future members (i.e., who are going to join a group). A user is said to know another person p if the user knows some relevant attributes xi of p. As mentioned in Definition 2, it is impossible and even unnecessary to know all attributes of one person. Hence, it is reasonable to say that the user only needs to know what is relevant to their collaborative context: know(user, p) ⇐ (∃xi∈p) know(user, xi), where i∈[1, n].

• Case 1: Awareness of past members
Past members include those who were once in a group. The operator once is denoted as '♦'. Person p was once in group Group iff at some state sj in the past, p joined the group, and at another past state sk (after sj), p left the group:
M (si, ℑ) |= ♦ part-of(p, Group) ⇔ (∃j: j < i) M (sj, ℑ) |= join(p, Group) ∧ (∃k: j < k < i) M (sk, ℑ) |= leave(p, Group)
aware(user, past-members) ⇐ (∀p: p≠user) ♦ part-of(p, Group) ∧ know(user, p)

• Case 2: Awareness of current members
Current members are those who joined a group and have not yet left the group.


aware(user, current-members) ⇐ (∀p: p≠user) part-of(p, Group) ∧ know(user, p)

• Case 3: Awareness of coming members
Coming members are those who eventually become part of a group. A person is eventually in a group iff the person is not in the group now and will join the group:
M (si, ℑ) |= ◊ part-of(p, Group) ⇔ M (si, ℑ) |= ¬ part-of(p, Group) ∧ (∃j: j > i) M (sj, ℑ) |= join(p, Group)
aware(user, coming-members) ⇐ ((∀p: p≠user) ◊ part-of(p, Group)) ∧ know(user, p)

b) Awareness of Objects
Here, we show how TL can be used to model information about the locations of objects (e.g., shared artefacts) in a workspace. We denote f(x): x → d(x), which means that a function takes variable x and returns the value of predicate d(x), either true or false. For example, f(x): x → position(obj, x) means that there is a value x that is a position of an object in the workspace (i.e., position(obj, x) holds). To show that an object is located at position x at state si, we use: M (si, ℑ) |= position(obj, x). A user is aware of the position of an object in the workspace at state si if there is a location x of the object that makes position(obj, x) true at state si, and the user knows x:
aware(user, position-of-object) ⇐ position(obj, x) ∧ know(user, x), where obj ∈ O
It is important to model the previous and next positions of an object in the workspace. The operators previous and next are denoted as '•' and '○', respectively [20]. The position of an object before l time units (•=l) or after m time units (○=m) can be obtained recursively from the previous and next operators. For example, proposition ϕ is true at 2 time units before the current state: •=2 ϕ ≡ •(• ϕ).

c) Awareness of People's Activities
People's activities (e.g., viewing, working, gazing and reaching), A, are relations between People and O in a shared workspace: A ⊆ People × O. Each activity act ∈ A performed by person p ∈ People upon object obj ∈ O gives a result Ω. We have: Aact(p, obj) → Ω, where act denotes an activity: act ∈ ⟨viewing, working, gazing, reaching⟩. Each person's activity Aact is a function that takes values of person p and object obj, and returns value Ω. A user is said to be aware of other people's activity if, for all people of the group, the user knows Ω:
aware(user, activity-location) ⇐ ((∀p∈People) Aact(p, obj) = Ω) ∧ know(user, Ω)

Contextual Awareness
Contextual awareness involves information about the goals, tasks and results of people in a group. A goal refers to a state that users aim to achieve; it can be a common goal or an individual goal. The literature shows that the goals held by a group can be considered as a goal hierarchy [18]. At the top of the hierarchy are common goals that are shared by all members of a group.


Common goals are decomposed into many sub-goals. The bottom of the hierarchy consists of operational goals that can be achieved by individual actions. As shown at the abstract level, Purpose is a set of goals G, tasks T and results R: Purpose = {G, T, R}, and T is a mapping between goals and results, i.e., T ⊆ G × R:
t: g → r, where t ∈ T, g ∈ G, and r ∈ R
In our research, we assume that when users want to achieve a goal, they are aware that they have the goal: has(user, g) ⇒ aware(user, has(user, g)). A user is contextually aware if the user is aware of other people's tasks and the corresponding results:
aware(user, contextual-awareness) ⇐ (∀p∈People) know(user, tp: gp → rp)

Self-awareness
In general, awareness includes two aspects: knowledge and consciousness [13]. So far, we have discussed three schools of GA: conversational awareness, workspace awareness, and contextual awareness. They all deal with the knowledge aspect of awareness, i.e., those awareness schools examine what information needs to be input into members' knowledge so that people can establish a common ground and maintain awareness of other people in a group. Here, we look at the consciousness aspect of awareness, which we refer to as self-awareness. We define self-awareness as an awareness school that is derived from the other three awareness schools. Based on the information provided by conversational awareness, workspace awareness, and contextual awareness, users know the relationship between their own tasks, goals, activities, results, etc. and those of other people in a group. Earlier, we used the statement aware(user, ϕ) to represent the fact that a user is aware of fact ϕ at state s: M (s, ℑ) |= aware(user, ϕ). In our research, we assume that if a user is aware of fact ϕ, then the user is also aware that he/she is aware of ϕ. Thus, we have:
aware(user, ϕ) ⇒ aware(user, aware(user, ϕ))
Let us take an example of how self-awareness can be derived from contextual awareness. We assume that a user is aware of the user's own goal and also aware of another user's goal. When the two goals are the same, the user knows that he/she has a common goal with the other user:
aware(user, common-goal(user, p, goal)) ⇐ (∃p: p≠user) has(user, goal) ∧ has(p, goal) ∧ aware(user, has(p, goal))
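To make the temporal operators concrete, the following Python sketch evaluates the presence-awareness conditions (Cases 1-3 above) over a finite trace of join/leave events. The trace representation and the know() predicate are simplifying assumptions for illustration, not part of the framework itself; the conditions are read as "the user knows every other person who is, was once, or will eventually be in the group".

```python
def members_at(trace, i):
    """trace[i] is the list of (action, person) events at state s_i,
    e.g. [("join", "Pete")]. Returns the set of group members at s_i."""
    current = set()
    for events in trace[: i + 1]:
        for action, person in events:
            if action == "join":
                current.add(person)
            else:
                current.discard(person)
    return current

def once_part_of(trace, i, p):
    """♦ part-of(p, Group) at s_i: p joined at some past state and later left, both before s_i."""
    joined = False
    for j in range(i):
        for action, person in trace[j]:
            if person == p and action == "join":
                joined = True
            elif person == p and action == "leave" and joined:
                return True
    return False

def eventually_part_of(trace, i, p):
    """◊ part-of(p, Group) at s_i: p is not a member now but joins at some later state."""
    if p in members_at(trace, i):
        return False
    return any(("join", p) in trace[j] for j in range(i + 1, len(trace)))

def aware_of_presence(user, people, trace, i, know):
    """Evaluate the three presence-awareness conditions for `user` at state s_i;
    know(user, p) is an application-supplied predicate."""
    others = [p for p in people if p != user]
    current = members_at(trace, i)
    return {
        "past-members": all(know(user, p) for p in others if once_part_of(trace, i, p)),
        "current-members": all(know(user, p) for p in others if p in current),
        "coming-members": all(know(user, p) for p in others if eventually_part_of(trace, i, p)),
    }
```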

4 Applying F@ to Awareness Mechanism Design

4.1 Applying F@ in Instant Messaging

Here, we illustrate how F@ was used in designing mechanisms to support conversational awareness, workspace awareness, and contextual awareness in Instant Messaging (IM).

Conversational Awareness Mechanisms
In face-to-face interaction, the following three conversational awareness conditions often hold:


aware(Sender, inform(Sender, Receiver, Message))   (ca.1)
aware(Sender, receive(Receiver, Message))   (ca.2)
aware(Receiver, inform(Sender, Receiver, Message))   (ca.3)

Through verbal and non-verbal cues, people are aware of whom they are talking to, and whether the listeners can hear them. However, these conditions often do not hold in computer-mediated communication tools such as IM systems. In general, IM meets (ca.1) and (ca.3) by providing visual cues such as "Who is typing" in the case of text-based conversation, and a coloured bar that rises in the case of audio conversation, to inform a sender and a receiver that the sender is sending a message. But that is often only supported in the case of one-to-one conversations. When a conversation involves a group of more than two people, these two conditions often do not hold. For example, when two people are typing at the same time, IM does not indicate who they are; or when two people are talking at the same time, there is no visual indicator showing who is talking. Based on the formulas addressed in F@, we have developed an IM client that features visual cues indicating that many people are concurrently typing in a conversation. Fig. 2 shows the scenario of three users, i.e. Pete, Jane and Kate, composing messages simultaneously.

Fig. 2. Buddy View

Current IM systems fail to meet (ca.2). That is, a sender does not know whether a receiver actually receives the message. Hence, in many cases, a sender needs to ask the receiver for confirmation (i.e., whether the receiver actually received the message). We suggest that IM systems can be improved by distinguishing the case when a message is delivered to a receiver successfully from the case when a message does not reach an intended receiver. IM systems can provide awareness mechanisms such as a message pool that keeps all failed-to-deliver messages and allows a sender to choose whether to re-send or simply ignore those messages. The issue of providing support for (ca.2) is even more problematic in the case of audio and video communication. IM users are provided with no awareness cues informing them whether receivers attend to their broadcast audio and video content. We suggest that IM systems can include awareness mechanisms such as Track View [26], which informs a local user of who else is currently listening to the user's auditory track and who else is currently viewing the user's webcam (Fig. 3).

Fig. 3. Track View

We are not suggesting that Track View is necessarily the best way to resolve the problems addressed in (ca.2), but rather show an example of how F@ can be used in the design of awareness mechanisms.

Workspace Awareness Mechanisms
Supporting presence awareness of other people is one important aspect of workspace awareness [14]. F@ addresses three presence awareness conditions using TL formulas, which reflect requirements of showing people who were once in an IM group conversation, who are currently in the conversation, and who are going to join the conversation:


aware(user, past-member) ⇐ ♦ part-of(p, Group) ∧ know(user, p)   (pa.1)
aware(user, current-member) ⇐ part-of(p, Group) ∧ know(user, p)   (pa.2)
aware(user, coming-member) ⇐ ◊ part-of(p, Group) ∧ know(user, p)   (pa.3)

To date, IM systems show only a list of active users who are currently in a group (i.e., pa.2) and do not show information about users who either were in a group (i.e., pa.1) or who are going to join the group (i.e., pa.3). Drawing on these conditions, Buddy View is an awareness mechanism in our IM application that has been designed in such a way that both current and past conversants of an IM conversation are shown. Fig. 2 illustrates an example of three current users, Pete, Jane and Kate, who are currently participating in a discussion, and one past user, Karl, who has left the discussion. Buddy View also indicates how long the current users have been in the discussion and how long ago past users left it. Supporting presence awareness of coming users who are going to join a discussion has not yet been implemented in Buddy View, but that is straightforward and can easily be implemented in a similar way. Again, when showing Buddy View, we are not advising that this is necessarily the best way to support the presence awareness addressed in (pa.1), (pa.2), and (pa.3), but rather illustrating an interpretation of F@ in designing awareness mechanisms.

Contextual Awareness Mechanisms
Conventional IM systems display all messages in a linear order, which has been found highly limiting in building the context of a group conversation [22]. Contextual awareness is weakly supported in current IM, as there is no structural coherence between messages (i.e., messages are sequentially displayed one after another). As a result, it is difficult to coherently connect questions with answers, or two consecutive messages of the same person. Often, users copy-and-paste the contents of a previous message into their currently composed message in order to link the two messages together.

Fig. 4. A tree layout of IM

This issue becomes more problematic when the number of users participating in a discussion increases, and/or a discussion involves multiple topics. F@ addresses contextual awareness by showing the dependence between actions and a directed goal:
aware(user, contextual-awareness) ⇐ (∀p∈People) know(user, tp: gp → rp), where tp is a task, gp is a goal, and rp is a result.


Interpreting this formula, we have adopted a tree layout in our IM system to enrich contextual awareness of a conversation. As shown in Fig. 4, by using the tree layout, users can post a reply directly to a particular question.

4.2 Applying F@ in Collaborative Writing

Now, we show how F@ was used in the design of awareness mechanisms to support workspace awareness and self-awareness for a collaborative writing system, CoWord [23].

Workspace Awareness Mechanisms
In collaborative writing, miniature views are used to show a global view of the current positions of all artefacts in the workspace [15]. But F@ addresses the issue of providing both current and previous positions of artefacts:
aware(user, position-of-object) ⇐ (∃x) position(obj, x) ∧ know(user, x)
To facilitate users' awareness of artefacts, after showing the artefacts' current locations in a shared workspace, it is also important for groupware to visually display the previous positions of artefacts. An example of such a visualisation technique is the local replay technique proposed in MLB [28]. Popular awareness mechanisms like radar views and multi-user scrollbars often show where users are currently viewing, and fail to distinguish between users' viewing and working areas. F@ indicates that groupware should separate different activities and provide values of the corresponding locations at which activities occur:
aware(user, activity-location) ⇐ (∀p∈People) Aact(p, obj) = Ω ∧ know(user, Ω)
To address the issue of showing the locations of different activities, we have implemented Extended Radar View [27], which supports awareness in collaborative writing by providing authors with simultaneous views of other people's working and viewing areas in a shared document (Fig. 5).

Fig. 5. Split Window View

Self-awareness Mechanisms
F@ describes how TL can be used to reason about self-awareness based on the other awareness schools. For example, if a user knows that his/her goal is the same as another person's goal, then the user knows that the two people share a common goal and need to work together closely:
aware(user, common-goal(user, p, goal)) ⇐ ((∃p: p≠user) has(user, goal)) ∧ aware(user, has(p, goal))


This TL formula is applicable in designing awareness mechanisms for collaborative writing systems. For example, collaborative editors can include a list of authors' tasks such as the Dynamic Task List (DTL) [25]. DTL shows the tasks for which each author is responsible, and is dynamically updated whenever new tasks are assigned to authors. In the scenario illustrated in Fig. 6, Peter, Tom and Jim share a common goal of writing "Chapter 3".

Fig. 6. Dynamic Task List

5 Conclusions and Future Work

This paper has reported F@—our novel framework of group awareness (GA) for synchronous distributed groupware—which aims to provide a better understanding of GA, and to assist groupware designers in developing mechanisms for GA. The framework consists of abstract and concrete levels. The abstract level identifies four awareness schools, including conversational awareness, workspace awareness, contextual awareness and self-awareness. The concrete level adopts temporal logic to describe formally some time-related aspects of the four awareness schools. The paper has also presented examples of how F@ can be used to design awareness mechanisms. As future work, we will continue to develop awareness mechanisms based on the principles of F@, conduct usability experiments to evaluate the effectiveness and efficiency of our mechanisms, and enhance F@ based on the results of the evaluation.

References 1. Baecker, R. M., Nastos, D., Posner, I. R., Mawby, K. L.: The User-Centred Iterative Design of Collaborative Writing Software. Proc. of InterCHI’93. (1993) 399-405. 2. Benford, S., Fahlen, L.: A Spatial Model of Interaction in Large Virtual Environments. Proc. of ECSCW'93. (1993) 109-124. 3. Carroll, J. M., Neale, D. C., Isenhour, P. L., Rosson, M. B., McCrickard, S. D.: Notification and Awareness: Synchronizing Task-Oriented Collaborative Activity. Inter. Journal of Human-Computer Studies, Vol. 58. (2003) 605-632. 4. Clarke, E. M., Emerson, E. A., Sistla, A. P.: Automatic Verification of Finite-State Concurrent Systems Using Temporal Logic Specifications. ACM Trans. on Prog. Lang. & Systems, Vol. 8. (1986) 244-263. 5. Davis, E.: Representations of Commonsense Knowledge. San Mateo, CA: Morgan Kaufmann (1990). 6. Dix, A. J.: Formal Methods for Interactive Systems. San Diego, California, US: Academic Press (1991). 7. Dix, A. J.: LADA a Logic for the Analysis of Distributed Action. Proc. of Eurographics Workshop on the Specification and Design of Interactive Systems. (1994) 317-332.


8. Dourish, P., Bellotti, V.: Awareness and Coordination in Shared Workspaces. Proc. of CSCW’92. (1992) 107-114. 9. Galton, A. P.: Temporal Logics and Their Applications. London: Academic Press (1987). 10. Godefroid, P., Herbsleb, J, Lalita, J., Li, D.: Ensuring Privacy in Presence Awareness Systems: An Automated Verification Approach. Proc. of CSCW’00. (2000) 59-68. 11. Greenberg, S., Hayne, S., and Rada, R.: Groupware for Real-Time Drawing: A Designer's Guide. Great Britain: Mc Graw-Hill (1995). 12. Grudin, J.: Groupware and Social Dynamics: Eight Challenges for Developers. Communications of the ACM, Vol. 37. (1994 ) 92-105. 13. Gutwin, C., Workspace Awareness in Real-Time Distributed Groupware. Ph.D. Dissertation, Department of Computer Science, University of Calgary, (1997). 14. Gutwin, C., Greenberg, S.: A Descriptive Framework of Workspace Awareness for RealTime Groupware. Computer Supported Cooperative Work, Vol. 11. (2002) 411-446. 15. Gutwin, C., Roseman, M., Greenberg, S.: A Usability Study of Awareness Widgets in a Shared Workspace Groupware System. Proc. of CSCW'96. (1996) 258-267. 16. Johnson, C. Expanding the Role of Formal Methods in CSCW. M. Beaudouin-Lafon (ed): Computer Supported Cooperative Work, John Wiley & Sons, (1999). 17. Lipnack, J. and Stamps, J.: Virtual Teams: People Working Across Boundaries With Technology. New York, US: John Wiley & Sons (2000). 18. Malone, T. W., Crowston, K.: The Interdisciplinary Study of Coordination. ACM Computing Surveys, Vol. 26. (1994) 87-119. 19. Muller, M. J., Raven, M. E., Kogan, S., Millen, D. R., Carey, K.: Introducing Chat into Business Organizations: Toward an Instant Messaging Maturity Model. Proc. of GROUP'03. (2003) 50-57. 20. Papadopoulos, C.: An Extended Temporal Logic for CSCW. The Computer Journal, Vol. 45. (2002) 453-472. 21. Rodden, T.: Population the Application: A Model of Awareness for Cooperative Applications. Proc. of CSCW’96. (1996) 87-96 22. Smith, M., Cadiz, J. J., Burkhalter, B.: Conversation Trees and Threaded Chats. Proc. of CSCW'00. (2000) 97-105. 23. Sun, D., Xia, S., Sun, C., Chen, D.: Operational Transformation for Collaborative Word Processing. Proc. of CSCW'04. (2004) 437-446. 24. Ter Hofte, H.: Working Apart Together - Foundations for Component Groupware. Enschede, Netherlands: Telematica Instituut (1998). 25. Tran, M. H., Raikundalia, G. K., Yang, Y.: Methodologies and Mechanism Design in Group Awareness Support for Internet-Based Real-Time Distributed Collaboration. Proc. of APWeb'03. (2003) 357-369. 26. Tran, M. H., Yang, Y., Raikundalia, G. K.: Supporting Awareness in Instant Messaging: An Empirical Study and Mechanism Design. Proc. of OzCHI'05. (2005). 27. Tran, M. H., Yang, Y., Raikundalia, G. K.: Extended Radar View and Modification Director: Awareness Mechanisms for Synchronous Collaborative Authoring, Proc. of AUIC'06. (2006). 28. Vogel, J., Effelsberg, W.: Visual Conflict Handling for Collaborative Applications. Proc. of CSCW'04 (Demonstration). (2004). 29. Watabe, K., Sakata, S., Maeno, K., Fukuoka, H., Ohmori, T.: Distributed Multiparty Desktop Conferencing System: MERMAID. Proc. of CSCW'90. (1990) 27-38. 30. Weir, P., Cockburn, A.: Distortion-Oriented Workspace Awareness in DOME. British Conf. on Human-Computer Interaction. (1998) 239-252. 31. Yang, Y., Sun, C., Zhang, Y., Jia, X.: Real-Time Cooperative Editing on the Internet. IEEE Internet Computing, Vol. 4. (2000) 18-25.

Adaptive User Profile Model and Collaborative Filtering for Personalized News*

Jue Wang1, Zhiwei Li2, Jinyi Yao2, Zengqi Sun1, Mingjing Li2, and Wei-ying Ma2

1 State Key Laboratory of Intelligent Technology and System, Tsinghua University, 100080 Beijing, P.R. China
2 Microsoft Research Asia, 100080 Beijing, P.R. China
{zli, jinyi.yao, mjli, wyma}@microsoft.com

Abstract. In recent years, personalized news recommendation has received increasing attention in IR community. The core problem of personalized recommendation is to model and track users’ interests and their changes. To address this problem, both content-based filtering (CBF) and collaborative filtering (CF) have been explored. User interests involve interests on fixed categories and dynamic events, yet in current CBF approaches, there is a lack of ability to model user’s interests at the event level. In this paper, we propose a novel approach to user profile modeling. In this model, user's interests are modeled by a multi-layer tree with a dynamically changeable structure, the top layers of which are used to model user interests on fixed categories, and the bottom layers are for dynamic events. Thus, this model can track the user's reading behaviors on both fixed categories and dynamic events, and consequently capture the interest changes. A modified CF algorithm based on the hierarchically structured profile model is also proposed. Experimental results indicate the advantages of our approach.

1 Introduction

With the explosive growth of the World Wide Web, reading news from the Internet becomes time-consuming. Users have to spend much time retrieving just a small portion of what they are interested in from massive amounts of information. This creates a need for automatic news recommendation that caters to an individual user's taste. According to previous studies [1][2], a user's interests consist of preferences in multiple fixed concept categories, like World, Sports, etc. Within each category, a user's interests can be classified into long-term interests and short-term interests. Both of these kinds of interests actually consist of interests at different levels. The key problem of personalized news recommendation is to find a proper way to model users' interests and track their changes. In the CBF field, a prevailing way is to model a user's interests in a hierarchically structured profile [1][2][3][4]. However, all these approaches neglect modeling the user's interests at the event level.

* This work was performed at Microsoft Research Asia, sponsored by CNSF (No. 60334020).


In the CF field, neither the traditional memory-based algorithms nor the prevalent model-based methods have taken user similarity at different interest levels into account. In this paper, we explore the personalization issue in the following two aspects. First, we propose a hierarchical user profile model with fixed concept categories at the high levels and dynamic event nodes at the low levels. The profile's content and structure are dynamically modified to adapt to changes in user interests. Second, following [12], we utilize information from different levels of the profile's hierarchy to calculate user similarity, so that a user's multi-level interests can be taken into account in both CBF and CF methods. The rest of this paper is organized as follows. Related work and its limitations are discussed in Section 2. Our approaches for adaptive profile modeling and modified collaborative filtering are presented in Section 3 and Section 4, respectively. A unified news recommendation framework is proposed in Section 5. The experimental methodology and experiment results are described in Section 6. Conclusions and future work are presented in Section 7.

2 Related Work

Previous work in this field tries to model users' interests in a two-layer framework [1][2] or in hierarchical representations [3][4][5]. The approaches proposed in [1][2] can handle a user's different interests at two levels; however, their simple bipartition lacks the capability to model and track users' interests more precisely. The hierarchical structure in [3][5] formalizes a general-to-specific relationship on the fixed categories. In [5], Pretschner et al. propose a profile model with hierarchical concept categories. Only the content of this model can be changed to adapt to changes in user interests, while little information can be derived from the fixed structure of the hierarchy. In PVA [3], both the content and the structure are changeable. However, the construction of this profile and the way it changes must follow a predefined global structure. The common limitation of these two systems is that the most detailed interests they can model are fixed concept categories, whereas to news readers, events are a more precise description of their interests. A different approach to hierarchical user profile modeling is proposed by Nanas et al. [4]. That approach can highlight the impact of relevant documents and can model the semantic relationship of keywords. However, it is unable to describe user interests according to explicit concept categories and events, which are more intuitive to human knowledge [12]. Many works within the CF field have been proposed to utilize information from multiple users to improve recommendation accuracy [6][7][8][9][11]. However, none of them has considered computing user similarity from a hierarchical profile structure.

3 Adaptive User Profile Model

In this section, we will introduce an adaptive user profile model which can model user interests at both category and event levels. After that, we will describe how it can be learned and utilized for prediction.


3.1 Model Description

As illustrated in Figure 1, this model consists of two layers, one layer with fixed concept categories, and the other with dynamic event nodes.

Fig. 1. Adaptive User Profile Model

The category hierarchy is fixed and common for different users, while the dynamically structured event layer is generated according to event threads within the user's reading history. A leaf node within the category layer is defined as a Lowest-Category-Node (LCN).

3.2 Profile Representation

Our method considers two important aspects of a user's interests: what he/she is interested in (content) and to what extent he/she is interested in it (vitality).

3.2.1 Content Representation
We apply the traditional vector space model (VSM) to describe both profile nodes and news documents, with different weighting schemes. For news documents, the term weight is defined by the TF-IDF method for its appropriateness in an online system. The term weight of a node is obtained from the documents assigned to it. Due to the temporal nature of news documents, a node's weights must be updated periodically to reflect their decaying impact on the user's interests. The classic Rocchio method is utilized to update the term weights incrementally.

V_i^k = (1 − α) · ( Σ_{d∈D_k} V_d ) · |D_k|^{−1} + α · V_i^{k−1}    (1)

where D_k is the set of newly inserted documents in node i at period k, |D_k| is the number of documents in D_k, V_d is the keyword vector of document d, and V_i^{k−1} and V_i^k are the keyword vectors of node i at periods k−1 and k, respectively. α (α ∈ [0,1]) is the decaying factor for node content. The noniterative version of (1) is

V_i^k = (1 − α) · Σ_{j=0}^{k} α^{k−j} · ( Σ_{d∈D_j} V_d · |D_j|^{−1} )    (2)

Simply put, the term weight of a node is a weighted combination of its original term weights and the term weights of the newly inserted news documents. The decaying factor α indicates how quickly old documents lose their impact on the current interest content.

3.2.2 Vitality Representation
Besides the term vector, we also assign each node a real number, called its energy value, to indicate the vitality of the user's interests in it.

E_i^k = β · E_i^{k−1} + Σ_{d∈D_k} cos(V_i^k, V_d)    (3)

where E_i^{k−1} and E_i^k are the energy values of node i at periods k−1 and k, respectively, cos(V_i^k, V_d) is the cosine similarity between vectors V_i^k and V_d, and β (β ∈ (0,1)) is the decaying factor for node energy. The noniterative version of (3) is

E_i^k = Σ_{j=0}^{k} β^{k−j} Σ_{d∈D_j} cos(V_i^j, V_d)    (4)

The energy works like the endogenous fitness of an artificial life, controlling the life span of the user's interest in the node. A node's energy value increases when fresh documents are assigned to it, which indicates that the user's interest in this node keeps increasing. Due to the decaying factor, nodes that receive few new documents will gradually fade out, indicating that the user is losing interest in them.
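A minimal Python sketch of the per-period node update implied by Eqs. (1) and (3): the term vector is blended Rocchio-style with the newly assigned documents, and the energy is decayed and then reinforced by the cosine similarity of the fresh documents. Representing vectors as plain term-weight dicts and the parameter names alpha/beta are illustrative assumptions.

```python
import math

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def update_node(node, new_docs, alpha=0.5, beta=0.9):
    """node = {'vector': {term: weight}, 'energy': float};
    new_docs = list of TF-IDF term-weight dicts assigned to the node this period."""
    if new_docs:
        # Eq. (1): V_i^k = (1 - alpha) * mean(V_d for d in D_k) + alpha * V_i^{k-1}
        mean = {}
        for d in new_docs:
            for t, w in d.items():
                mean[t] = mean.get(t, 0.0) + w / len(new_docs)
        terms = set(node['vector']) | set(mean)
        node['vector'] = {t: (1 - alpha) * mean.get(t, 0.0) + alpha * node['vector'].get(t, 0.0)
                          for t in terms}
    else:
        # with no new documents the first term of Eq. (1) vanishes
        node['vector'] = {t: alpha * w for t, w in node['vector'].items()}
    # Eq. (3): E_i^k = beta * E_i^{k-1} + sum_d cos(V_i^k, V_d)
    node['energy'] = beta * node['energy'] + sum(cosine(node['vector'], d) for d in new_docs)
    return node
```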

3.3 Profile Learning

Within each period, after receiving user feedback, our profile model learns from the feedback to keep up with the user's interests. News documents browsed by the user are first assigned to the corresponding interest nodes; then the profile is updated in both content and structure. During the assign process, we first check whether the incoming document d could describe the user's most detailed interests. If not, the user's more general interests are checked, and this process is repeated until d is assigned to the corresponding LCN node, LCN(d). The assign algorithm is illustrated in Figure 2. After assigning all the user feedback documents, both the content and the structure of the user profile are updated. The split and merge operations are performed when needed.


Assign process: for an incoming document d
1. In the dynamic layer, start from the leaf node layer L_f; set threshold value Th.
2. N_r = arg max_{N_i ∈ L_f} cos(V_d, V_i)
3. If cos(V_d, V_r) > Th, then assign d to N_r and return.
   Else set Th = Th * 0.8, set the nearest layer above as L_f, and go to 2.

Fig. 2. Assign Process
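The assign process of Fig. 2 can be written directly in Python as below. Here `layers` is assumed to be the node hierarchy ordered from the leaf (event) layer upward, each layer being a non-empty list of nodes with term vectors; `cosine` is the similarity helper from the earlier sketch, and the starting threshold 0.6 is an arbitrary placeholder.

```python
def assign(doc_vec, layers, cosine, th=0.6, decay=0.8):
    """Walk from the most specific layer upward, relaxing the threshold by
    `decay` at each step, until the document fits some node (Fig. 2)."""
    for layer in layers:                       # leaf layer first, then parents
        best = max(layer, key=lambda n: cosine(doc_vec, n['vector']))
        if cosine(doc_vec, best['vector']) > th:
            best.setdefault('docs', []).append(doc_vec)
            return best                        # d assigned to N_r
        th *= decay                            # Th = Th * 0.8, move one layer up
    return None                                # no suitable node found
```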

When both the vitality E_i and the diversity Ē_i = E_i / D_{N_i} of the user's interests in node N_i have grown to a certain extent, a split is performed on this node in order to depict the user's interests in a more detailed way.

Split process: for a given node N_i
1. If E_i > Th_high and Ē_i < Th_average, then go to 2; otherwise return.
2. Perform clustering on node N_i by k-means; the sub-clusters are {N_sub}.
3. N*_sub = arg max_{N_sub ∈ {N_sub}} E_sub; create N*_sub as N_i's child.
4. Update N*_sub and N_i according to equations (2) and (4).

Fig. 3. Split Process

In our current study, we employ k-means clustering and split off the sub-node with the largest energy as the event to be described in a more detailed manner. Further improvement can be made by applying advanced event detection approaches such as [10]. When it receives few or even no fresh documents, a node's energy value declines gradually, indicating that the user's interest vitality is declining. When a node's energy value has been kept below some threshold for some duration, it should be removed from the user profile. Before a low-energy node is deleted, its potential contribution to the user's longer-term interests should be preserved in its direct parent:

Merge process: for a given node N_i whose life span is T_{N_i}
If E_i has stayed below the energy threshold for longer than T_th, then assign all documents in N_i to N_i's parent N_parent and update N_parent.

Fig. 4. Merge Process
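A sketch of the periodic maintenance pass suggested by the split and merge rules above. The thresholds (th_high, th_avg, e_min, t_th), the bookkeeping fields ('parent', 'children', 'low_time'), and the clustering routine are assumptions for illustration only.

```python
def maintain(nodes, th_high, th_avg, e_min, t_th, cluster):
    """nodes: list of dicts {'energy', 'docs', 'children', 'parent', 'low_time'}.
    Apply the split / merge rules of Figs. 3 and 4 once per period."""
    for node in list(nodes):
        docs = node['docs']
        avg = node['energy'] / len(docs) if docs else 0.0
        if node['energy'] > th_high and avg < th_avg and docs:
            # split: create a child from the most energetic sub-cluster
            best = max(cluster(docs), key=lambda s: s['energy'])
            child = {'energy': best['energy'], 'docs': best['docs'],
                     'children': [], 'parent': node, 'low_time': 0}
            node['children'].append(child)
            nodes.append(child)
        elif node['energy'] < e_min and node['low_time'] > t_th and node['parent']:
            # merge: fold the fading node's documents back into its parent, then drop it
            node['parent']['docs'].extend(docs)
            node['parent']['children'].remove(node)
            nodes.remove(node)
```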


3.4 Prediction Methodology

Based on the user profile, a numeric score is assigned to each new document to assess its value to the user's interests. The incoming documents are then ranked by this score, and the top N documents are delivered to the user. The assessment is determined by both the document's similarity to a node and the user's interest vitality in that node. For a new document d, we first find the most related node n by a method similar to the assign process. Then the prediction score is calculated by the following equation:

P_p(u, d) = E_n · cos(V_d, V_n)

where P_p(u, d) denotes the score for document d predicted by user u's profile model.

Given the incoming document set D_k in period k, the final prediction score for each document within D_k is:

P'_p(u, d) = P_p(u, d) / max_{d∈D_k} P_p(u, d)
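A short Python sketch of this prediction step: each incoming document is matched to its most related profile node, scored by node energy times cosine similarity, and the scores are normalized by the per-period maximum. Treating the profile as a flat list of nodes is a simplification.

```python
def predict_scores(user_nodes, incoming_docs, cosine):
    """user_nodes: list of {'vector', 'energy'}; incoming_docs: list of term-weight dicts.
    Returns the normalized scores P'_p(u, d) for this period's documents."""
    raw = []
    for doc in incoming_docs:
        node = max(user_nodes, key=lambda n: cosine(doc, n['vector']))   # most related node n
        raw.append(node['energy'] * cosine(doc, node['vector']))         # P_p(u,d) = E_n * cos(V_d, V_n)
    top = max(raw) if raw and max(raw) > 0 else 1.0
    return [s / top for s in raw]                                        # normalize by the period maximum
```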

4 Profile Based Collaborative Filtering

After each user's profile has been built, we can perform collaborative filtering on information from these profiles. In the rest of this paper, we will use MBCF to denote the memory-based collaborative filtering algorithm, CBCF to denote the content-boosted collaborative filtering algorithm [11], and PBCF to denote our profile-based collaborative filtering algorithm. To solve the sparsity problem, we create a pseudo user-ratings vector v_u for every user u:

v_{u,d} = 1 if user u browsed document d; P_p(u, d) otherwise

Here P_p(u, d) denotes the rating predicted by the user profile. The pseudo ratings vectors of all users put together give a dense pseudo ratings matrix V, on which we then perform collaborative filtering. We apply OGM [12] to calculate user similarity by leveraging the hierarchical profile structure. Due to space limitations, we refer readers to [12] for a detailed description of OGM. The top M users with the highest similarity to the active user a are selected as a neighborhood. To make the final prediction, we have to combine the pseudo predictions of the active user a and the users in the neighborhood. Because we want to consolidate the confidence in the pure-profile prediction for the active user, like [11], we incorporate a Self Weighting factor sw_a.


The final prediction for the active user a is calculated by:

P(a, d) = v̄_a + [ sw_a (v_{a,d} − v̄_a) + Σ_{u=1, u≠a}^{M} sim(a, u) P_{a,u} (v_{u,d} − v̄_u) ] / [ sw_a + Σ_{u=1, u≠a}^{M} sim(a, u) P_{a,u} ]    (5)

In (5), v̄_a is the average prediction value, v_{a,d} is the pseudo user-rating for user a, sw_a is the Self Weighting factor, sim(a, u) is calculated by OGM, and P_{a,u} is the Pearson correlation [9] between users a and u.
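The following Python sketch combines the pieces of Eq. (5): it blends the active user's own deviation with the similarity- and correlation-weighted deviations of the neighborhood. The pseudo-ratings matrix, the externally supplied sim(a, u) values (from OGM), the Pearson helper, and the default self-weighting constant are assumed inputs.

```python
def pbcf_predict(a, d, pseudo, means, sim, pearson, neighbors, sw_a=2.0):
    """Eq. (5): pseudo[u][d] are pseudo ratings (1 if browsed, else profile score),
    means[u] is user u's average pseudo rating, sim(a, u) comes from OGM,
    pearson(a, u) is the Pearson correlation between users a and u."""
    num = sw_a * (pseudo[a][d] - means[a])
    den = sw_a
    for u in neighbors:                      # top-M most similar users, u != a
        w = sim(a, u) * pearson(a, u)
        num += w * (pseudo[u][d] - means[u])
        den += w
    return means[a] + num / den if den else means[a]
```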

5 Unified Recommendation Framework

Figure 5 illustrates an overview of our unified recommendation framework. It provides users with ranked news stories according to their potential interests, considering both each user's own interests and information from peer users. This framework can be categorized as a Find Good Items task [13], which is the core recommendation task and recurs in a wide variety of academic and commercial systems.

Fig. 5. MyNews System Architecture

When a new document comes in, it is first assigned to each user profile (P1~Pn); then the user rating of this document is predicted by each user's profile model. After that, our PBCF algorithm is utilized to compute the final predictions for the document. All the documents are re-ranked according to their final ratings. Finally, the top N (in our system, N is 20) news stories are delivered to each user, and the user's feedback is utilized to update the profile model. In this way, a unified framework combining CBF and CF for news recommendation is established. Without the CF step, however, our system degenerates to a single-user recommendation system like PVA [3].
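A compact sketch of this per-period pipeline, reusing the hypothetical helpers from the earlier sketches (cosine, assign, predict_scores, pbcf_predict). The user objects and their attributes (nodes, layers, sim, pearson, neighbors, browse) are illustrative stand-ins, not the system's actual interfaces.

```python
def run_period(users, incoming_docs, top_n=20):
    """One recommendation cycle: profile prediction -> PBCF -> top-N delivery -> feedback update."""
    # 1. per-user profile predictions form the dense pseudo-ratings matrix
    pseudo = {u.name: dict(enumerate(predict_scores(u.nodes, incoming_docs, cosine)))
              for u in users}
    means = {name: sum(r.values()) / len(r) for name, r in pseudo.items()}
    feedback = {}
    for u in users:
        # 2. collaborative re-scoring of every document for this user
        scored = [(pbcf_predict(u.name, d, pseudo, means, u.sim, u.pearson, u.neighbors), d)
                  for d in range(len(incoming_docs))]
        # 3. deliver the top-N documents and collect what the user actually reads
        delivered = [d for _, d in sorted(scored, reverse=True)[:top_n]]
        feedback[u.name] = u.browse(delivered)          # indices of documents actually read
        # 4. feed the browsed documents back into the profile for the next period
        for d in feedback[u.name]:
            assign(incoming_docs[d], u.layers, cosine)
    return feedback
```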


From Figure 5, we can see that the PBCF algorithm utilizes the user profile model not only to make predictions for unrated items but also to calculate users' similarity when performing CF. The traditional MBCF approach relies only on pure user ratings, and the CBCF algorithm depends on both user ratings and news content.

6 Experimental Evaluation

In this section, we describe the dataset, metrics and methodology for the comparison between different algorithms, and present the results of our experiments.

6.1 Data Set

We use proxy log data from ten proxy servers at a university in the USA. For our specific purpose, we extract news browsing logs from these data according to their URLs. Statistics of the extracted logs are shown in Table 1.

Table 1. Proxy log data analysis

Time Span: 2002.3.20 ~ 2002.3.31
Total user (IP) number: 10765
News browsing log number (All users): 112060
Unique news page number (All users): 38922
Users* number (who browsed for more than 8 days): 342
News browsing log number (Users*): 80615
Unique news page number (Users*): 21788
Access number per day per person (Users*): 19.64

We can see that each news page was visited 2.9 times on average, indicating that users within this community must share some common reading preferences. To alleviate sparsity, we only select users with more than 8 days of news browsing history (Users* in Table 1).

6.2 Evaluation Metrics and Methodologies

In this study the computational cost is far from critical compared to the updating period, so we focus on prediction accuracy for the performance evaluation. We use the traditional IR measurement precision to evaluate our algorithm's performance. The precision function is defined below:

Precision = |D_visited| / N_s

where N_s is the number of news documents recommended by our system and D_visited is the set of documents that the user actually visited among the N_s recommended ones.


To demonstrate the improvement of our algorithms, we design the following experiments. First, we run online experiments with our dynamic profile (MyNews without PBCF) and compare it with the baseline static PVA profile to see the effect of the dynamic structure. Second, we add PBCF into this framework and compare the system's performance with and without PBCF to illustrate how our profile-based collaborative filtering improves the overall performance. Third, we compare PBCF with the baseline MBCF and CBCF algorithms so that we can see the appropriateness of the proposed CF algorithm for hierarchically structured profiles.

6.3 Data Preprocessing

Before starting the full experimental evaluation of the different algorithms, we determined the sensitivity of each algorithm to its parameters, and from the sensitivity plots we fixed the optimum values of these parameters and used them for the rest of the experiments. To determine the parameter sensitivity, we selected 10 days of data and subdivided it into a training set and a testing set, on which the experiments were carried out. We conducted 10-fold cross-validation by randomly choosing different training and testing sets each time and taking the average precision values.

6.4 Experiment Results

In this section, we show the performance improvement of our algorithms over the baseline algorithms and discuss the reasons for the observed behaviour.

6.4.1 Dynamic Structure vs. Static Structure
We implement the PVA algorithm and run it in the same manner as MyNews. First, we randomly select 50 users for the test. For each test user, we select 50 news documents per day as testing data, including the pages the user actually accessed, and the user's feedback is generated according to the actual proxy logs. We run this experiment on 10 successive days and calculate the average precision as the system performance. In this experiment, we set N_s to 5, 10 and 20 to test our system's performance at different scopes. The experiment result is illustrated in Fig. 6. Obviously, MyNews outperforms PVA at all 3 scopes. This is due to its ability to model user interests at the event level, so it matches user interests in a more detailed manner than PVA. We also find that on day 3, PVA's precision at all scopes drops dramatically, while MyNews maintains good performance. The reason is that a hot sports event ended and most users turned to another hot event within the same category on that day. The category vector of PVA fails to capture this alternation and the category's energy drops, whereas MyNews models these events with different nodes, so the vanishing of one does not impair the user's interest in the others.


(Three panels plot precision per day for MyNews@N vs. PVA@N at the scopes N = 5, 10, 20; x-axis: Day.)

Fig. 6. Comparison between MyNews and PVA

6.4.2 Profile Model Only vs. Profile Model + PBCF
To show the efficacy of our PBCF algorithm, we run MyNews with and without the PBCF algorithm under the same conditions as in 6.4.1, except that N_s is set to 20, which is also the default setting for the remaining experiments.

Fig. 7. Comparison between with and without PBCF

From Fig. 7, we can see that MyNews with PBCF outperforms MyNews without PBCF. The average precision increase after adding PBCF is 8.2%. The proposed collaborative filtering algorithm works well because the browsing overlap between users is high. The precision gap between the two scenarios decreases as the training data increases. Two reasons are responsible for this phenomenon. On the one hand, the profile model is based purely on a single user's browsing history, so as the training data increases, its accuracy increases correspondingly. On the other hand, PBCF considers profile predictions from multiple users, so the relative benefit it brings to the final prediction decreases as the precision of the pure profile model increases.


6.4.3 Comparison Between PBCF, MBCF and CBCF
From Fig. 8, we can see that the performance variance of MBCF is much bigger than those of CBCF and PBCF. This is because its randomness is greater due to the sparse environment of pure user ratings. We can also see that PBCF outperforms MBCF and CBCF, although the average precisions of the three CF algorithms are close. When users' interests change on day 3, our PBCF smoothly tracks the change in user interests, while the CBCF approach needs a longer time to follow this change.

Fig. 8. Comparison between PBCF, MBCF and CBCF

7 Conclusion and Future Work

In this paper, we propose a novel profile model for personalized news recommendation. This profile model involves a fixed layer of concept categories and a dynamic layer of event nodes. The two-layer structure can adaptively model users' interests and interest changes at both the category level and the event level. Based on the fixed category hierarchy, we also propose a modified collaborative filtering method. By utilizing the user profile hierarchy to perform CF, our approach shows its strength over traditional CF algorithms, and our experiments indicate the improvement in recommendation accuracy. Future work mainly involves three directions. First, we can expect further improvement of the node splitting method in our profile model by adopting state-of-the-art news event detection algorithms [10]; the accuracy of profile-based prediction will be enhanced correspondingly. Second, we can explore approaches to build a global hierarchy at both the category and event levels so that hierarchy information at both levels can be utilized for better collaborative filtering performance. Third, we can explore appropriate ways to apply hierarchical Bayesian models to establish a unified framework for profile-based and collaborative-filtering-based recommendations [14].


References 1. D. Billsus, and M. J. Pazzani. A Personal News Agent that Talks, Learns and Explains. In Proceedings of the Third International Conference on Autonomous Agents, 1999. 2. D. H. Widyantoro, T. R. Ioerger, and J.Yen. An Adaptive Algorithm for Learning Changes in User Interests. In Proceedings of the Eighth International Conference on Information and Knowledge Management (CIKM'99), 1999. 3. C. C. Chen, M. Chen, and Y. Sun. PVA: A Self-Adaptive Personal View Agent. In Proceedings of the Seventh ACM SIGKDD international conference on Knowledge discovery and data mining, 2001 4. N. Nanas, V. Uren, and A. D. Roeck. Building and applying a concept hierarchy representation of a user profile. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, 2003 5. A. Pretschner, and S. Gauch. Ontology Based Personalized Search. In 11th IEEE Intl. Conf. On Tools with Artificial Intelligence, 1999 6. P. Resnick, N. Iacovou, and M. Suchak. Grouplens: An Open Architecture for Collaborative Filtering of Netnews. In Proceeding of the ACM 1994 Conference on Computer Supported Cooperative Work, 1994. 7. T. Hofmann. Collaborative Filtering via Gaussian Probabilistic Latent Semantic Analysis. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in information retrieval, 2003 8. L. Si, and R. Jin. Flexible mixture model for collaborative filtering. In Proceedings of the Twentieth International Conference on Machine Learning, 2003 9. J. Herlocker, J. Konstan, A. Borchers, and J. Riedl. An algorithmic framework for performing collaborative filtering. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, 1999 10. Z. Li, B. Wang, and M. Li. Probabilistic Model of Retrospective News Event Detection. In Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, 2005 11. P. Melville, R. J. Mooney, and R. Nagarajan. Content-Boosted Collaborative Filtering for Improved Recommendations. In Proceedings of the the Eighteenth National Conference on Artificial Intelligence (AAAI), 2002. 12. P. Ganesan. H. Garcia-Molina, and J. Widom. Exploiting hierarchical domain structure to compute similarity. ACM Transactions on Information Systems (TOIS) archive Volume 21, Issue 1, January 2003 13. J. Herlocker, J. Konstan, and L. G. Terveen. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS), Volume 22 Issue 1, 2004 14. P. N. Klein. Computing the Edit-Distance between Unrooted Ordered Trees. Lecture Notes in Computer Science, Volume 1461, Chapter p. 91, 2003 15. K. Yu, V. Tresp, and S.Yu. A Nonparametric Hierarchical Bayesian Framework for Information Filtering. In Proceedings of the 27th annual international conference on Research and development in information retrieval, 2004

Context Matcher: Improved Web Search Using Query Term Context in Source Document and in Search Results

Takahiro Kawashige, Satoshi Oyama, Hiroaki Ohshima, and Katsumi Tanaka

Department of Social Informatics, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
{takahiro, oyama, ohshima, tanaka}@dl.kuis.kyoto-u.ac.jp

Abstract. When reading a Web page or editing a word processing document, we often search the Web by using a term on the page or in the document as part of a query. There is thus a correlation between the purpose for the search and the document being read or edited. Modifying the query to reflect this purpose can thus improve the relevance of the search results. There have been several attempts to extract keywords from the text surrounding the search term and add them to the initial query. However, identifying appropriate additional keywords is difficult; moreover, existing methods rely on precomputed domain knowledge. We have developed Context Matcher: a query modification method that uses the text surrounding the search term in the initial search results as well as the text surrounding the term in the document being read or edited, the “source document”. It uses the text surrounding the search term in the initial results to weight candidate keywords in the source document for use in query modification. Experiments showed that our method often found documents more related to the source document than baseline methods that use context either in only the source document or search results.

1 Introduction

A person reading a Web page or editing a word processing document often searches the Web by using a term on the page or in the document as part of a query. The first results listed following a search using only this term will naturally reflect the most common meaning of the word. If this meaning differs from that in the document being read or edited, the "source document", the person can add more keywords to the query to narrow down the results. For inexperienced people, identifying appropriate keywords can be difficult. For experienced people, it can be tedious. To make this process easier, we have developed a method that automatically adds keywords to a user's query to improve the search results. It is based on the assumption that there is a correlation between the source document and the user's query. It uses both the text surrounding the search term in the source document and the text surrounding the term in the initial search results.


A person using our system can more easily find the desired information than by using existing Web search engines alone. Section 2 reviews related work. Section 3 describes our method, and Section 4 describes its implementation. Section 5 presents some of the experimental results, and Section 6 discusses future work. Finally, Section 7 concludes the paper with a brief summary.

2 Related Work

2.1 Using Context in Source Document

Several approaches that use the user's context for information retrieval have been proposed [8]. That by Finkelstein et al. [7] is the most relevant here. As with our method, a user first selects a suitable term from the document being read to use as a search term. The method then extracts additional keywords from the text surrounding the selected term and adds them to the query. The modified query is then forwarded to a Web search engine. The selection of the additional keywords is done using a semantic network constructed beforehand. This network is made by collecting documents in 27 domains (computers, business, entertainment, and so on). Each candidate keyword is then represented by a 27-dimension vector, where each dimension corresponds to the frequency of a domain in the collected documents. The distance between the candidate keywords and the original search term is measured using the network, and the candidate keyword closest to the selected text is added. While this method requires preselected domains and a semantic network, ours does not. Instead, it uses the text surrounding the search term in the initial search results to select additional keywords. The Watson Project [4][5][6] automatically modifies the user's query by using text in the source document. It also searches the Web for opinions at odds with that in the source document and presents information, such as company information and maps, that matches the user's needs. To modify a query, it weights the terms in the source document based on their frequency and position in the document. The additional keywords are selected using only the information in the source document. In contrast, our method also uses the information in the initial search results.

2.2 Using Context in Search Results

Xu and Croft [9] and Yu et al. [10] modify the query by using text surrounding the search term in the initial search results. They extract candidate keywords from the text and use them to modify the query. We do this as well, but in addition we extract keywords from the source document.

2.3 Using and Matching Contexts in Both Source Document and Search Results (Our Approach)

The most difficult step in using the context in the source document is the selection of appropriate additional terms from the surrounding text.


If a selected additional term is not relevant or is too specific, the search results become too biased. To solve this problem, previous methods need precomputed domain knowledge that measures the semantic similarity of terms. On the other hand, using the context in the search results becomes problematic when the search results contain contexts different from that in the source document. In such a case, so-called topic drift occurs and the precision of the search results deteriorates significantly. To resolve these problems simultaneously, we propose using the contexts in both the source document and the search results. Our method matches the two kinds of contexts and uses terms that frequently appear in both of them. Even if the context in the source document or in the search results alone is ambiguous, comparing them can reduce the ambiguity of both contexts. Over methods that use only the context in the source document, our method has the following advantage: by using the context in the search results, it can select appropriate additional terms from the text surrounding the query term without prior domain knowledge. From the opposite viewpoint, the advantage of our method over methods that use only the context in the search results is as follows: by using the context in the source document, our method can robustly select relevant context terms.

3 Query Modification Using Text in Both Source Document and Search Results

3.1 Overview

The flow of the proposed method is as follows; a minimal sketch of this loop is given below.

1. The user selects a term in the source document for use in the initial query.
2. The system extracts and analyzes the nouns surrounding the search term in the source document.
3. The system searches the Web with the query and retrieves the results.
4. The system extracts the text surrounding the query term in the search results.
5. The system weights the nouns extracted in step 2 based on the results retrieved in step 3.
6. The system identifies the noun with the highest weight as the next keyword to add.
7. The system adds the keyword to the query.
8. The system searches the Web with the modified query and retrieves the results.
9. The system shows the search results to the user.
10. The user indicates whether the results are satisfactory.
11. If the results are satisfactory, processing ends. Otherwise, processing returns to step 3.
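As a rough illustration only, the following Python sketch mirrors the steps above; the helpers extract_nouns, web_search, weight_candidates, and results_are_satisfactory are hypothetical names, not part of the paper's implementation.

def query_modification_loop(source_text, selected_term, max_rounds=3):
    # Step 2: candidate nouns from the text surrounding the selected term
    candidates = extract_nouns(source_text, selected_term)
    query_terms = [selected_term]
    results = []
    for _ in range(max_rounds):
        # Steps 3-4: search and collect the text surrounding the query term
        results = web_search(" AND ".join(query_terms))
        snippets = [r.snippet for r in results[:20]]
        # Steps 5-7: weight the candidates and add the best one to the query
        weights = weight_candidates(candidates, snippets)
        best = max(weights, key=weights.get)
        if best not in query_terms:
            query_terms.append(best)
        # Steps 8-11: search with the modified query and ask the user
        results = web_search(" AND ".join(query_terms))
        if results_are_satisfactory(results):
            break
    return results, query_terms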

3.2 Query Modification

We now explain the query modification that adds a keyword to the query when the user selects text in the document being read as the query.

Identifying Candidate Keywords. First, the sentence in the source document containing the search term is extracted, as well as the preceding and following ones. These sentences are morphologically analyzed, and the nouns are extracted. These nouns are the candidate keywords to be added to the query. This is shown as step 1 in Fig. 1.

Fig. 1. Extraction of the nouns and scoring the weight. (The user selects a term in the source document, the system extracts nouns such as "Game", "League", and "All-Star" as candidate keywords, searches the Web with a query such as "Yankees AND All-Star", extracts the text surrounding the query term from the search results, weights the keyword candidates, and adds the keyword with the highest weight to the query.)

Weighting Candidate Keywords. Next, the first query is used to search the Web and obtain the initial search results. Usually the first 20 search results are used. The text surrounding the search term in each result is extracted (step 2 in Fig. 1). The candidate keywords are weighted using this extracted text (step 3 in Fig. 1), and the one with the highest weight is added to the query (step 4 in Fig. 1).

Counting Occurrences of Candidate Keywords in Search Results. We define ki as a noun extracted from the source document and Tj as the text surrounding the search term in the search results. The occurrence of a candidate keyword in a search result is given by

$$ f_j(k_i) = \begin{cases} 1 & \text{if } k_i \text{ appears in } T_j \\ 0 & \text{otherwise} \end{cases} \qquad (1) $$

The weight of ki is given by

$$ w_i = \sum_j f_j(k_i) \qquad (2) $$

After the nouns have been extracted from the text surrounding the search term in the source document, the number of search results containing each noun is counted.

Weighting. If the extracted nouns were weighted based simply on the frequency of their occurrence in the text surrounding the search term in the results, the more commonly used nouns, such as “information”, would usually be more heavily weighted. This is because they appear in many documents in a wide variety of domains. Adding such a noun to the query would thus tend to produce search results similar to those obtained by the previous query. To better reflect the user’s intention, we modified Eq. (2) to lower the weight of the more commonly used nouns:

$$ w_i = \frac{\sum_j f_j(k_i)}{D(k_i)} \qquad (3) $$

where D(ki) is the number of search results when searching the Web using ki as the query, and wi is the weight of ki. The higher the number of search results, the lower the weight. This is similar to the TF-IDF weighting scheme, in which the term frequency (tf) of a term is its frequency in the document and idf is the inverse of the document frequency (df), the number of documents containing the term. The result of multiplying tf by idf is the degree to which the term characterizes the document. In Eq. (3), D(ki) corresponds to df. Using this weight, we can better select nouns related to the first query.
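A minimal Python sketch of Eqs. (1)-(3) follows; it assumes the snippets have already been extracted and that hit_count(k) returns the number of Web search results for k alone (our stand-in for D(k_i)). None of these names come from the paper's implementation.

def weight_candidates(candidates, snippets, hit_count):
    # candidates: nouns k_i extracted from the source document
    # snippets: texts T_j surrounding the query term in the search results
    weights = {}
    for k in candidates:
        appearances = sum(1 for t in snippets if k in t)   # Eqs. (1)-(2): sum_j f_j(k_i)
        d = hit_count(k)                                   # D(k_i)
        weights[k] = appearances / d if d > 0 else 0.0     # Eq. (3)
    return weights

With this normalization, a noun that appears in many of the 20 snippets but is rare on the Web as a whole receives the highest weight.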

3.3 Adding More Keywords

A simple way to add keywords is to add, in turn, the one with the highest weight. However, we can better narrow down the search results by adding a keyword related to the search term. As illustrated in Fig. 2, we weight the candidates again using the text in the latest results. First, we search the Web using the query modified as described in Section 3.2 and get the results (step 1 in Fig. 2). We then weight the candidate keywords again using the text in the latest results (step 5 in Fig. 2). The keyword with the highest new weight is then added to the query. The search results first used to weight the keywords include not only documents related to the source document but usually also many unrelated documents. Candidate keywords that do not appear in any of the documents receive a weight of zero. There can be a large number of such keywords. Weighting them again using the results obtained with the modified query reduces the number of zero-weight candidates because there are more documents related to the source document in the second set of results. Weighting using these results increases the number of candidates, which increases the likelihood of adding an appropriate keyword to the query.

Fig. 2. Subsequent query modification

4 Implementation

4.1 Environment

We implemented our method using Microsoft Visual Studio C# .NET, Microsoft Word, and the Google API [1].

Noun Extraction. We used the ChaSen [2] system to extract the nouns from the text surrounding the search term in the source document. ChaSen is a Japanese morphological analysis system developed in the Computational Linguistics Laboratory at the Nara Institute of Science and Technology. Users can easily change the system.

Search Engine. We used Google [3] as the search engine. The document extracts shown on the Google results page are used as the text surrounding the search term.

5 Experimental Results

We first evaluated the effects of changing the parameter values. Next, we compared the results of query modification using our method with those of other methods.

5.1 Precision

We defined precision as the ratio of relevant documents in the search results:

$$ P = \frac{N}{M} \qquad (4) $$

where M is the number of documents and N is the number of relevant documents.

5.2 Effect of Changing Parameter Values

Number of Sentences Defined as Surrounding Text. Increasing the number of sentences defined as surrounding text obviously increases the number of candidate keywords, which would increase the likelihood of adding a more appropriate keyword to the query. However, the farther a sentence is from the search term, the less likely it is to be closely related. We thus adjusted the number of sentences used as surrounding text and counted the number of nouns extracted.


We compared the average number of nouns extracted when the number of sentences was zero (only the sentence containing the search term) and when 1, 2, or 3 sentences before and after that sentence were included. The average number of nouns extracted was less than 20 for 0 and 1 sentences (6.6 at 0 sentences and 18.0 at 1 sentence). In this case, it is thought that there are too few candidate keywords because not enough nouns related to the search term are included. For 2 or more sentences, the number of nouns extracted was higher (33.4 at 2 sentences and 44.2 at 3 sentences), but the execution time was longer. We thus decided that a total of five sentences was best.

Number of Search Results Used. Changing the number of search results used to weight the candidate keywords changes the weights, which could change the noun added to the query. The fewer the results used, the greater the number of documents related to the source document that are not included in the search results. We estimated the appropriate number of results to use based on the number of nouns with a weight greater than zero. We compared the average number of nouns having a weight greater than zero when we used 5, 10, 20, and 30 search results. When we used 5 or 10 results, the number of nouns with a weight greater than zero was about two (1.9 at 5 pages and 2.4 at 10 pages). This is probably not enough to modify the query appropriately because there are too few candidates. When we used 20 or 30 results (5.0 at 20 pages and 6.2 at 30 pages), more time was spent. We thus decided to use 20 pages.

5.3 Comparison with Other Methods

We compared our method with a method that uses only the context in the source document and with a method that uses only the context in the search results. In the first method, the most frequent nouns in the text surrounding the search term in the source document are added one by one to the query. In the second method, the most frequent nouns in the text surrounding the search term in the first results listed following the search are added one by one to the query. We used the following terms, which have multiple meanings, as the initial search terms.

– Fuchu City (a city in Hiroshima Prefecture; a city in Tokyo Prefecture)
– Pitcher (a person who pitches a baseball; a container for holding and pouring liquids)
– Mahura (clothing worn around one’s neck; a device to dampen exhaust noise)
– Keyboard (a musical instrument; an input device)
– Sanjo (a street in Kyoto; a city in Niigata Prefecture)
– Jaguar (a car; an animal)

We used five text documents for each meaning and selected a term for the first query. We first evaluated the methods based on the number of results related to the source document among the first 20 results listed for the modified query with one added keyword. We then added another keyword and evaluated them again. We judged a retrieved page as relevant if the initial keyword is used in it with the same meaning as in the source document.

Table 1. Average precision with one added keyword and with two added keywords (original keywords in Japanese)

                                      One keyword added                   Two keywords added
Term        Meaning        Initial    Search    Source     Our            Source     Our
                           query      results   document   method         document   method
                                      method    method                    method
Fuchu City  Hiroshima      25.0       5.0       51.4       85.7           62.6       85.0
Fuchu City  Tokyo          70.0       100.0     70.0       69.4           89.4       95.6
pitcher     baseball       30.0       100.0     90.0       94.0           87.0       96.0
pitcher     container      50.0       0         51.3       100.0          64.0       100.0
mahura      device         65.0       100.0     65.0       90.0           76.5       96.2
mahura      clothing       20.0       0         89.0       74.0           96.0       82.0
keyboard    instrument     5.0        0         41.4       10.6           75.6       25.6
keyboard    input device   95.0       100.0     89.2       95.4           88.5       90.4
Sanjo       Kyoto          30.0       5.0       58.8       100.0          100.0      100.0
Sanjo       Niigata        50.0       70.0      82.5       96.3           88.0       99.2
Jaguar      car            40.0       70.0      84.0       65.0           91.0       66.0
Jaguar      animal         5.0        0         40.0       7.0            35.0       31.3

One Keyword Added. Table 1 shows the average precision of the three methods with one added keyword. With the “search results” method, the precision was high for one meaning and low for the other for all search terms. This is because the query is modified to reflect the contents of the first results listed. For example, the precision of the search results was 100% when the search term was “Fuchu City” and the source document was about Fuchu City in Tokyo. It was close to 0% when the source document was about Fuchu City in Hiroshima. This is because pages about Fuchu City in Tokyo are more frequently linked to by other pages (which is how Google orders its search results) and thus comprised most of the first 20 results listed. Because this method does not consider the context of the source document, it modifies the query based solely on the popularity of the search term, not on how it is used in the source document. Our method does not suffer from this problem because it considers the context of the source document. As shown in Table 1, the precision of our method was about equal to or higher than that of the “source document” method for “Fuchu City”, “Sanjo”, and “pitcher”. It was particularly higher for “Sanjo in Kyoto” and “pitcher as a container”. Table 2 shows the keywords added to the query by our method and the source document method and the resulting precision for “Sanjo in Kyoto” and “pitcher as a container”. With the source document method, the precision was high when “Karasuma” was added for “Sanjo in Kyoto” and “handle” was added for “pitcher as a container”, while in the other cases it was low. For these cases, our method added keywords related to the source document, such as “Kyoto” and “mizusashi”, in spite of their low frequency in the document, because it also considered the text in the search results. The precision was thus very high. In contrast, the precision with our method was very low for “keyboard as an instrument” and “Jaguar as an animal”. Table 2 also shows the keywords added by our method and the source document method and the resulting precision for “keyboard as an instrument”.


Table 2. Added keywords and precision for “Sanjo in Kyoto”, “pitcher as a container”, and “keyboard as an instrument” for our method and the source document method (original keywords in Japanese)

                        Our method                    Source document method
                        keyword       precision(%)    keyword          precision(%)
Sanjo (Kyoto)           cafe          100             Karasuma         100
                        Nakakyo       100             10               30
                        Kyoto         100             shopping street  35
                        Kyoto         100             plan             70
pitcher (container)     mizusashi     100             Nepenthes        10
                        mizusashi     100             handle           90
                        mizusashi     100             things           40
                        mizusashi     100             works            65
keyboard (instrument)   control       20              keyboardist      75
                        reality       5               band             60
                        compact       0               keyboard         70
                        multi         0               live             60

When we examine the precision of the first query, the cases in which the precision of the first query is low coincide with the cases in which the precision of our method is low. This is because, if there are few documents related to the source document in the results of the first query, the weights of the keywords related to the source document are reduced. As a result, unrelated keywords, such as “control” and “reality”, are added to the query, and the query is not modified appropriately. For “Fuchu City in Hiroshima”, the average precision with our method was 86%, which is higher than with the source document method. Our method often modified this query appropriately. For example, the modified query “Fuchu City AND Hiroshima” returned results about Hiroshima Prefecture. However, when we used a document mentioning the Fuchu antenna shop of Hiroshima located in Tokyo as the source document, our method modified the query to “Fuchu City AND Tokyo”, resulting in a precision of zero. If nouns reflecting both meanings of the search term appear in the text surrounding the search term in the source document, a keyword with a meaning unrelated to the source document may be added, so the query is not modified appropriately.

Two Keywords Added. We next compared our method with the source document method after two keywords had been added to the query. We did not compare our method with the search results method because the latter modifies the query based on the text in the first results returned by the search without considering the text in the source document. Our method added two keywords as described in Section 3.3. The source document method added the keywords with the highest and second highest weights. The average precisions are shown in Table 1. Our method again had lower precision for “keyboard as an instrument” and “Jaguar as an animal” and higher precision for the other cases. Table 3 shows the keywords added to the query by our method and the source document method and the precision for “Fuchu City in Tokyo”.


Table 3. Added keywords and change in precision for “Fuchu in Tokyo” for our method and the source document method (original keywords in Japanese)

        Our method                         Source document method
        keywords             precision(%)  keywords                  precision(%)
doc1    Tokyo, case          95            branch, bonus             85
doc2    December, case       85            inspection, headquarters  80
doc3    local, museum        100           00, local                 95
doc4    planetarium, local   95            museum, site              85

In this case, the difference in precision between adding one keyword and adding two keywords is large. Documents 1 and 2 are about a robbery case that occurred in Fuchu City in Tokyo, and documents 3 and 4 are about the Kyodo-no-mori museum in Tokyo. Our method had higher precision for every document. There appears to be a small but consistent advantage for two keywords.

Combined with Reranking. Another way to use the query term context in the source document and in the search results is to rerank the results according to the similarity between the text surrounding the search term in the source document and the text surrounding the search term in the search results. Reranking can also be combined with query modification. Table 4 shows preliminary experimental results with reranking. We reranked the top 100 results of each query and measured the precision of the top 20 results after reranking. We used the cosine similarity between feature vectors of the surrounding texts, where each feature represents a word frequency. In many cases, reranking the results of the initial query yields improved precision scores, but they are not as good as the precision scores obtained by query modification. For some queries, reranking the results of the modified query can achieve a further improvement in precision.

Table 4. Average precision with reranking

Query (meaning)   Initial query   Reranking the     One keyword added   Reranking the results
                                  initial results   by our method       of the modified query
Sanjo (Kyoto)     35              73.0              91.0                96.0
Sanjo (Niigata)   50              61.7              76.7                75.0
Jaguar (car)      50              53.8              76.3                67.5
Jaguar (animal)   5               5.8               18.3                20.8
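The reranking step can be sketched as follows; the snippet field, the whitespace tokenization, and the raw word-frequency features are assumptions on our part, not the authors' exact implementation.

import math
from collections import Counter

def cosine(a, b):
    # a, b: Counter objects mapping word -> frequency
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rerank(source_context, results, top_k=20):
    # source_context: text surrounding the search term in the source document
    # results: list of (snippet, url) pairs for the top 100 results of a query
    src = Counter(source_context.lower().split())
    scored = [(cosine(src, Counter(snippet.lower().split())), url)
              for snippet, url in results]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:top_k]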

6 Future Work

There are some limitations of our method that need to be addressed. As discussed in Section 5.3, if the search results used for weighting the candidate keywords include few documents related to the source document, the keyword added to the query is likely to be unsuitable. We could alleviate this problem by increasing the number of search results used for weighting. This would lengthen the execution time, however. Moreover, the number of documents related to the source document may not increase even if we increase the number of search results used for weighting. We need to investigate this problem. Also, in Section 5.3 we mentioned the problem of “Fuchu City” being incorrectly changed to “Fuchu City AND Tokyo” when the source document about “Fuchu City” in Hiroshima also mentioned “Tokyo”. This also needs to be investigated. Users expect search results to be presented quickly. The execution time of our method is still too long for it to be practical. One way to reduce the execution time is to store the information used for query modification in a cache. Although in this paper we have focused on query modification for Web search, the idea of matching contexts can be exported to other problems such as image or video retrieval.

7 Conclusion

We have developed a Web query modification method that uses the context of the search term both in the source document and in the search results to better reflect the user’s intention. It first extracts candidate keywords from the text surrounding the search term in the source document. These keywords are weighted based on the search results of the first query, and the one with the highest weight is added to the query. Experiments showed that our method often found more documents related to the source document than a method using only the source document and a method using only the search results. However, our method took longer to execute. Since we plan to increase the number of keywords added, we need to speed up execution. Our goal is to relate the Web to the source document. We will thus enhance our method so that it finds not only related documents but also documents supplemental to the source document and documents opposed to the source document. We will also address the situation in which the user composes or edits the document.

Acknowledgement

This work was supported in part by Grants-in-Aid for Scientific Research (Nos. 16700097 and 16016247) from MEXT of Japan, by a MEXT project titled “Software Technologies for Search and Integration across Heterogeneous-Media Archives,” and by a 21st Century COE Program at Kyoto University titled “Informatics Research Center for Development of Knowledge Society Infrastructure.”

References

1. Google API. http://www.google.com/apis/index.html
2. Morphological Analyzer ChaSen. http://chasen.naist.jp/hiki/ChaSen/
3. Google. http://www.google.com/
4. J. Budzik and K. Hammond. Watson: Anticipating and contextualizing information needs. In Proceedings of the 62nd Annual Meeting of the American Society for Information Science, 1999.
5. J. Budzik and K. Hammond. User interactions with everyday applications as context for just-in-time information access. In Proceedings of the 2000 International Conference on Intelligent User Interfaces, 2000.
6. J. Budzik, K. J. Hammond, L. Birnbaum, and M. Krema. Beyond similarity. In Proceedings of the 2000 Workshop on Artificial Intelligence and Web Search, 2000.
7. L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. Placing search in context: The concept revisited. In Proceedings of the Tenth International World Wide Web Conference (WWW10), 2001.
8. S. Lawrence. Context in web search. IEEE Data Engineering Bulletin, 23(3):25-32, 2000.
9. J. Xu and W. B. Croft. Query expansion using local and global document analysis. In Proceedings of the Nineteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 4-11, 1996.
10. S. Yu, D. Cai, J.-R. Wen, and W.-Y. Ma. Improving pseudo-relevance feedback in web information retrieval using web page segmentation. In Proceedings of the International WWW Conference, 2003.

Weighted Ontology-Based Search Exploiting Semantic Similarity

Kuo Zhang1, Jie Tang1, MingCai Hong1, JuanZi Li1, and Wei Wei2

1 Knowledge Engineering Lab, Department of Computer Science, Tsinghua University, Beijing 100084, P.R. China
{zkuo99, j-tang02}@mails.tsinghua.edu.cn, {hmc, ljz}@keg.cs.tsinghua.edu.cn
2 Software Engineering School, Xi’an Jiaotong University, Xi’an 710049, P.R. China
[email protected]

Abstract. This paper is concerned with the problem of semantic search. By semantic search, we mean searching for instances in a knowledge base. Given a query, we retrieve ‘relevant’ instances, including those that contain the query keywords and those that do not. This is in contrast to the traditional approach of generating a ranked list of documents that contain the keywords. Specifically, we first employ a keyword based search method to retrieve instances for a query; then a proposed method of semantic feedback is performed to refine the search results; and then we conduct re-retrieval by making use of relations and instance similarities. To make the search more effective, we use a weighted ontology as the underlying data model, in which importance values are assigned to different concepts and relations. As far as we know, exploiting instance similarities in search over a weighted ontology has not been investigated previously. For the task of instance similarity calculation, we exploit both the concept hierarchy and properties. We applied our methods to a software domain. Empirical evaluation indicates that the proposed methods can improve the search performance significantly.

1 Introduction

The Semantic Web is an extension of the current web in which information is given well-defined meaning, better enabling computers and people to work in cooperation [1]. In recent years, the semantic web has made significant progress, and several systems have been developed for semantic search [2, 3, 4 and 5]. However, semantic search does not yet seem to be very successful. One of the biggest problems is that most of these systems view semantic search as a problem of conventional keyword based search. In keyword based search, when a user types a query, the system returns a list of ranked documents that contain the query keywords. On the semantic web, there are usually two kinds of data: ontology and instances. By semantic search in this paper, we mean searching for instances on the semantic web. Keyword based search can serve the current web well. However, the method does not seem sufficient for the task of semantic search. For example, the instance that users really want may not contain the query keywords.


In this paper, we address semantic search with a novel approach. Our proposal is to make use of relations and instance similarities to improve semantic search. In an ontology, concepts and relations between concepts are defined to describe the data on the semantic web (see [6] for details of ontology). Such relations can be useful for search. We also assume that similarities between instances can be leveraged in the search process. Experimental results in Section 6 verify the correctness of this assumption. Specifically, we first employ a keyword based search method to retrieve instances for a query. Before the semantic search process, we provide a friendly method for user interaction and refine the retrieved instances using the user’s feedback. Then, by using relations, we also retrieve the instances that do not contain the query keywords but ‘connect’ to instances containing the keywords by some kind of relation; and by using the instance similarities, we retrieve the instances that are similar to the top ranked instances. We make use of the observation that, given an instance that is what the user wants, instances similar to it can also be what the user wants. Furthermore, we use a weighted ontology as the underlying data model to make the search processing more effective. We have implemented the proposed methods as a new feature in the Semantic Web Aiding Rich Mining Service (SWARMS) project [7] (http://keg.cs.tsinghua.edu.cn/project/pswmp.htm). In SWARMS, we use OWL [8], one of the most popular ontology languages, as the knowledge representation language. We applied the system to the software project domain. Our task is then to search for software projects for a given query. Our empirical experiments indicate that the proposed methods significantly outperform keyword based methods for semantic search. The rest of the paper is organized as follows. In Section 2, we introduce the related work. In Section 3, we describe the weighted ontology. In Section 4, we present an instance similarity measure, and in Section 5 we describe our search approach. Section 6 gives the experimental results. We make concluding remarks in Section 7.

2 Related Work

In [5], the authors describe an interesting approach for ranking query results using semantic information. The approach is oriented towards determining link relevance and reflects the semantic link-based nature of the Semantic Web. The approach combines the characteristics of the inferencing process and the content of the information repository used in searching. The query results might be extended by inference, which uses rules from the domain ontology, i.e. the query process includes ontology-based inferencing. In their work, though, queries are expressed in terms of concept instances, not keywords.

A very interesting approach for querying Semantic Associations on the Semantic Web is presented in [3]. Semantic Associations capture complex relationships between entities involving sequences of predicates, and sets of predicate sequences that interact in complex ways. A Semantic Association Query (SAQ) is a pair of entity keys or ids, and the result of a SAQ is a set of Semantic Associations that exist between the entities expressed in the query. They calculate the weight of a relation instance using a measure that is similar to the notion of relation importance proposed in this paper. However, we also take concept importance into account to determine the ranking of the complex path connecting entities.

Significant efforts have been devoted to hybrid search engines which integrate semantic search and traditional search techniques [4, 9, 10 and 11]. The work presented in [4] describes a framework that combines traditional search engine techniques with ontology based information retrieval. A “document” that represents an instance simply consists of the concatenation of the values of all its properties, whereas we give different weights to the text content in the values of different properties according to the weighted ontology. In its semantic search phase, a spread activation algorithm is used to explore the concept graph. Given an initial set of concepts, the algorithm obtains a set of closely related concepts by navigating through the linked concepts in the graph. Since the model does not encompass the notion of instance similarity, all captured associations are derived from the explicit information existing in the knowledge base. In contrast, the strength of our approach lies in exploiting implicit similarity knowledge.

Another semantic searcher [2], which is built on Semantic Web infrastructure, is designed to improve traditional web searching. The authors implemented two semantic search systems which, based on the denotation of the search query, augment traditional search results with relevant data aggregated from distributed sources. Their idea of navigating through the instance graph is also used in our work.

3 Weighted Ontology

The underlying data model in our process is an ontology. To facilitate further description, we summarize its major primitives and introduce some shorthand notation. The main components of an ontology are concepts, relations, and axioms [6]. A concept represents a set or class of entities or ‘things’ within a domain. The concepts can be organized into a hierarchy. Relations describe the interactions between concepts or the properties of concepts. Axioms are used to constrain values for concepts or instances. Instances are the ‘things’ that belong to a concept. The combination of an ontology with associated instances is what is known as a knowledge base. In semantic search, we have found that different concepts and different relations have different importances. We also call this importance the weight of the concept or relation. In the rest of this section, we first present the ontology definition formally. We then extend the ontology definition so as to support the definition of importance. A core ontology can be defined as:

Definition 1. A core ontology is a structure O := (C, ≤C, R, σ, ≤R) consisting of (i) two disjoint sets C and R whose elements are called concept identifiers and relation identifiers, respectively, (ii) a partial order ≤C on C, called concept hierarchy or taxonomy, (iii) a function σ : R → C ∪ (C × C), called signature, and (iv) a partial order ≤R on R, called relation hierarchy.


Relations can be characterized as follows:

Definition 2. For a relation r ∈ R with |σ(r)| = 2, its domain and range are defined by domain(r) := the first element of σ(r) and range(r) := the second element of σ(r).

So far, the ontology definition cannot express the importance of a concept. We extend the definition and define relation and concept importance as follows:

Definition 3. For a relation r1 ∈ R with |σ(r1)| = 1, we define its importance by imp(r1, domain(r1)) ∈ [0, 1]. For a relation r2 ∈ R with |σ(r2)| = 2, we define its importance to its domain and its range by imp(r2, domain(r2)) ∈ [0, 1] and imp(r2, range(r2)) ∈ [0, 1]. And for a concept c ∈ C and a relation r ∈ R with c ≤C domain(r), imp(r, c) = imp(r, domain(r)).

Definition 4. For a concept c ∈ C, we define its importance by imp(c) ∈ [0, 1], and for ci, cj ∈ C with ci ≤C cj, imp(ci) ≥ imp(cj).

The value of imp(c) captures how important c is to the domain. Concepts or relations with higher importance contribute more to the search results, and users may care more about the instances of concepts with higher importance. The ontology designers assigned the importance values to the relations and concepts according to their domain background knowledge. When conflicts occur, we use the average value as the importance. The importance can be assigned manually or learned automatically. The process of manual assignment is complex, time-consuming, and error-prone; automatic assignment seems more feasible and more accurate. However, in this paper, we confine ourselves to the search processing and leave this to future work.
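One possible way to encode such a weighted ontology is sketched below in Python; the class and field names are illustrative assumptions and are not taken from SWARMS.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Concept:
    name: str
    parent: Optional["Concept"] = None   # single-parent view of the hierarchy <=C
    importance: float = 0.0              # imp(c) in [0, 1]

@dataclass
class Relation:
    name: str
    domain: Concept
    range: Optional[Concept] = None      # None when |sigma(r)| = 1 (an attribute)
    imp_domain: float = 0.0              # imp(r, domain(r))
    imp_range: float = 0.0               # imp(r, range(r))

def imp(r: Relation, c: Concept) -> float:
    # Definition 3: imp(r, c) = imp(r, domain(r)) for any c <=C domain(r)
    node = c
    while node is not None:
        if node is r.domain:
            return r.imp_domain
        node = node.parent
    return 0.0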

4 Instance Similarity

The notion of similarity is used in many contexts to identify objects having common ‘characteristics’ [12]. This section describes a method for measuring the similarity between instances. In our method, the ontology structures of both hierarchical concepts and non-hierarchical relations are exploited to estimate the instance similarity.

4.1 Measures Exploiting Concept Hierarchy

Let us start the illustration with an example (shown in Figure 1). The example shows a hierarchical taxonomy. Each node contains the projects belonging to that concept; the number of projects is written in round brackets. It shows an unbalanced hierarchy. By unbalanced hierarchy, we mean sibling nodes with different numbers of projects. For each node in the tree, we use depth to denote the number of edges on the path from it to the root node, and we use LC(i1, i2) to denote the lowest common concept node of both i1 and i2. For example, depth(pdf1) = 3 and depth(LC(f1, pdf1)) = 1. The depth based measure is defined as [12]:

$$ sim(i_1, i_2) = \frac{2 \cdot depth(LC(i_1, i_2))}{depth(i_1) + depth(i_2)} $$

For example, sim(f1, pdf1) = 2*1/(4+3) = 2/7.

Fig. 1. The unbalanced software concept hierarchy. (Node labels in the figure include Project (80000), Multimedia (20000), Print (1000), Video (2000), Display (500), Graphics (9200), Special Effects (10), Audio (2500), Viewers (5000), Editors (300), Editors (3000), Font (10), PDF Related (600), and Image (40). se1 and se2 belong to Special Effects; v1 and v2 belong to Viewers; f1 belongs to Font; pdf1 belongs to PDF Related; a1 belongs to Audio; m1 belongs to Multimedia.)

Both sim(se1, se2) and sim(v1, v2) are 0.75 when computed using the formula above. However, looking at Figure 1, we see that se1 and se2 are rather similar, since only ten projects are Multimedia: Video: Special Effects related like them, whereas v1 and v2 are somewhat less similar, since there are five thousand projects about Multimedia: Graphics: Viewers. Allowing for the numbers of concept instances, we define another similarity measure. For two instances i1 and i2, we define

$$ sim_c(i_1, i_2) = \max\left(0,\; 1 - \frac{1}{\log count(HC(i_1, i_2)) - \log count(LC(i_1, i_2))}\right) \qquad (1) $$

where count(c) means the number of instances that belong to concept c, and HC(i1, i2) means the highest concept in the hierarchy that both i1 and i2 belong to. If no such concept exists, then count(LC(i1, i2)) = count(HC(i1, i2)) and simc(i1, i2) = 0. If i1 and i2 are the same instance, count(LC(i1, i2)) = 1 and simc(i1, i2) ≈ 1. The similarity score is symmetric, that is, simc(i1, i2) = simc(i2, i1), and it lies in the range [0, 1). Table 1 shows the similarity values calculated by the depth based measure and the new measure.

Table 1. Comparison of the two measures

            depth based   number based (simc)
se1, se2    0.75          0.88
v1, v2      0.75          0.64
f1, pdf1    0.29          0.77
a1, m1      0.4           0.28

The new measure makes use of the instance population in addition to the schema hierarchy. This similarity measure seems more accurate than the previous one in an unbalanced hierarchy.
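As a small sanity check of Eq. (1), the sketch below takes the instance counts of the lowest and highest common concepts directly; using the natural logarithm gives roughly 0.89 for counts 10 and 80000, close to the 0.88 reported for se1, se2 in Table 1. The function name is ours.

import math

def sim_c(count_lc, count_hc):
    # count_lc = count(LC(i1, i2)), count_hc = count(HC(i1, i2))
    # If no common concept exists, the counts coincide and the similarity is 0.
    if count_lc <= 0 or count_hc <= 0 or count_lc >= count_hc:
        return 0.0
    return max(0.0, 1.0 - 1.0 / (math.log(count_hc) - math.log(count_lc)))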


4.2 Measures Exploiting Property Similarity

Usually, instances of the same concept have common properties. We define the property similarity between instances based on the similarity scores of their property value pairs:

$$ sim_p(i_1, i_2) = \frac{\sum_{imp(r,c) \neq 0} imp(r, c) \cdot sim_r(i_1, i_2)}{\sum_{imp(r,c) \neq 0} imp(r, c)} \qquad (2) $$

where c = LC(i1, i2), and simr(i1, i2) is the similarity score of i1 and i2 on property r. Properties can have different types. We calculate the similarity according to the following rules:

(i) If the range of property r is numeric, the similarity is computed as

$$ sim_r(i_1, i_2) = 1 - \frac{|v_r(i_1) - v_r(i_2)|}{\max(v_r(i_1), v_r(i_2))} \qquad (3) $$

where vr(i) represents the value of instance i on property r. This measure is only applicable to positive numbers.

(ii) If the range of property r is textual, we employ the VSM (Vector Space Model) [13] to compute the cosine similarity between the two strings. We extract a bag of words from each string, construct a vector accordingly, and finally compute the similarity by the cosine similarity method.

(iii) If the range of property r is a semantic concept c, the problem can be transformed into bag similarity computation. (See [12, 14 and 15] for details of bag similarity.) We use the Jaccard coefficient here:

$$ sim_r(i_1, i_2) = \frac{|bag_r(i_1) \cap bag_r(i_2)|}{|bag_r(i_1) \cup bag_r(i_2)|} \qquad (4) $$

where bagr(i1) is the set of instances i that belong to c and for which the relation (i1, r, i) holds.

4.3 Combining Concept and Property Similarity

We then combine simc and simp by linear interpolation:

$$ sim(i_1, i_2) = (1 - \alpha) \cdot sim_c(i_1, i_2) + \alpha \cdot sim_p(i_1, i_2) \qquad (5) $$

where α is an empirically determined parameter; at present we set it to 0.4. The instance similarities are computed allowing for the weighted ontology, and they are computed in advance. It would be interesting to explore other, more effective ways to combine these two kinds of similarity.
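A compact sketch of Eqs. (2)-(5) follows; the property typing, the imp lookup, and the way per-property similarities are dispatched are our assumptions rather than the paper's code.

def sim_r_numeric(v1, v2):
    # Eq. (3): only meaningful for positive numbers
    return 1.0 - abs(v1 - v2) / max(v1, v2)

def sim_r_set(bag1, bag2):
    # Eq. (4): Jaccard coefficient over the sets of related instances
    union = bag1 | bag2
    return len(bag1 & bag2) / len(union) if union else 0.0

def sim_p(i1, i2, properties, imp_rc, sim_r):
    # Eq. (2): importance-weighted average of per-property similarities,
    # restricted to the properties r with imp(r, c) != 0, where c = LC(i1, i2)
    numerator = denominator = 0.0
    for r in properties:
        w = imp_rc(r)
        if w != 0.0:
            numerator += w * sim_r(r, i1, i2)
            denominator += w
    return numerator / denominator if denominator else 0.0

def combined_sim(sim_c_value, sim_p_value, alpha=0.4):
    # Eq. (5): linear interpolation of concept and property similarity
    return (1.0 - alpha) * sim_c_value + alpha * sim_p_value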


5 Our Approach

5.1 The General Architecture

We perform semantic search in three stages of processing: keyword based search, semantic feedback, and semantic data search. Figure 2 shows the flow.

Fig. 2. General Architecture of the Proposed Search. (The user issues a keyword query to (1) the traditional search engine, picks relevant results in (2) semantic feedback, and the initial results with new relevance scores are passed to (3) semantic data search over the weighted ontology, which produces the final results.)

The input is a query containing one or several keywords (here the query is the same as in keyword based search). Using the keyword based search, we retrieve the instances that contain the query keywords. The retrieved instances are ranked in terms of relevance scores; in this stage, we take the weighted ontology into consideration. In semantic feedback, we present the top ranked instances (top 10) to the user in natural language. The user can specify which instances he or she prefers and which not. In semantic data search, we exploit the instance relations and similarities to re-retrieve instances that do not contain the query keywords.

5.2 Keyword Search and Semantic Feedback

In keyword search, we developed our keyword based search engine by making use of the VSM (Vector Space Model). We extract a bag of words for each instance and construct a vector accordingly. We next compute the value of each element in the vector by the tf/idf measure [16 and 17]. Allowing for the weighted ontology, we make some changes to the measure:

$$ tf(i, t) = \sum_{r \in R} imp(r, LC(i)) \cdot tf_r(i, t) \qquad (6) $$

where t is a word occurring in instance i, LC(i) is the most specific concept that i belongs to, and tfr(i, t) is the frequency of word t in instance i with respect to property r. In semantic feedback, we hope to capture the user’s information needs more accurately, so we adopt a feedback strategy. The essential task is then to provide friendly interaction between users and semantic data. We return the initial result instances as human readable text. Two approaches can be used here. The first is to use the web document from which the instance was extracted to represent the instance. Sometimes this method is inapplicable, since some instances are not annotated from a single page.
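A short sketch of Eq. (6), under the assumption that each instance stores one text value per property and that simple whitespace tokenization is acceptable; the helper names are ours.

def weighted_tf(property_texts, term, imp_r):
    # property_texts: {relation r: text value of r for this instance}
    # imp_r(r): imp(r, LC(i)) for the instance's most specific concept
    term = term.lower()
    total = 0.0
    for r, text in property_texts.items():
        tf_r = text.lower().split().count(term)   # tf_r(i, t)
        total += imp_r(r) * tf_r                  # Eq. (6)
    return total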


Therefore, we propose a new technique to present instances in the form of natural language. For example, in Figure 3, the instance &d3 can be described by the following piece of text: “Jack is 25 years old. Jack uses the Java programming language. Jack is male. Jack developed a project, and the project is named database2.”

Currently, a template based method is applied to this work. A template is designed for each concept. For an instance, the property and value pairs are filled into the template to generate the natural language expression.

Fig. 3. The Instance Graph. (The schema part shows the concepts Project, Database, Network, Developer, and Manager linked by develop, manage, in_charge_of, and subClassOf relations; the instance part shows projects &s1, &s2, &s3 with names such as database1, network1, and database2 and properties such as OS and language, and persons &d1, &d2, &d3, &m1; for example, &d3 is Jack, aged 25, male, uses Java, and develops the project named database2.)

In this way, users can easily understand what the instance is about and can then give their feedback to the system. We give three options for each top ranked result: relevant, barely relevant, and irrelevant. The relevance scores are adjusted according to the user’s selection: the scores of instances marked relevant are adjusted to 1, barely relevant to 0.4, and irrelevant to 0.

5.3 Semantic Data Search

In this stage, we try to retrieve instances that do not contain the query keywords but may be ‘relevant’ to the query through relations and instance similarities. Spread Activation is a processing framework designed for semantic networks and ontologies in the AI area and has been successfully used in semantic data processing [4, 18, 19 and 20]. We employ Spread Activation in our semantic search stage.


We first explore the knowledge base by using the relations between instances. To reduce the computational complexity, we require that each node can be spread at most once. Instance ni can spread to nj in this way:

$$ w(n_j) = w(n_j) + w(n_i) \cdot imp(r, LC(n_i)) \cdot imp(LC(n_j)) \cdot s(n_i, r, n_j) \qquad (7) $$

where w(n) is the relevance score of instance n. The functions LC() and imp() are the same as described above, and s(ni, r, nj) is used to measure the specificity of the relation instance; if ni is the object of r and nj is the subject of r, it is defined as:

$$ s(n_i, r, n_j) = \frac{1}{\log(1 + num(r, n_j) \cdot num(n_i, r))} \qquad (8) $$

where num(r, nj) is the number of instances that take nj as the object of relation r, and num(ni, r) is the number of instances that take ni as the subject of relation r. Similar notions of specificity have been used in [4] and [5]. The higher the specificity, the more valuable the relation instance. In this way, spreading may occur between &s1 and &s2, since there is a relation path connecting them:

&s1 <--develop-- &d1 --develop--> &s2

When making use of instance similarities for spreading, we choose the top ranked instances to spread their weight to other instances. Instance ni spreads to nj in this way:

$$ w(n_j) = w(n_j) + w(n_i) \cdot imp(LC(n_j)) \cdot sim(n_i, n_j) \qquad (9) $$

where the functions w(), LC(), imp() and sim() are the same as described above. In our experiments, the relevance scores of the ten most similar instances to ni are updated. As in the example in Figure 3, &s1 and &s3 are not related by a relation, but spreading may occur due to their high similarity.
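The two spreading steps of Eqs. (7)-(9) can be sketched as below; the knowledge-base accessors (triples, num_obj, num_subj, most_similar) are hypothetical, and, as in the paper, each node is allowed to spread only once per round.

import math

def specificity(num_r_nj, num_ni_r):
    # Eq. (8): s(ni, r, nj); the counts are at least 1 when the triple exists
    return 1.0 / math.log(1 + num_r_nj * num_ni_r)

def spread_by_relations(weights, triples, imp_rel, imp_concept, num_obj, num_subj):
    # Eq. (7): every scored node spreads its current weight along its relation instances
    new_weights = dict(weights)
    for ni, r, nj in triples:
        w_i = weights.get(ni, 0.0)
        if w_i == 0.0:
            continue
        s = specificity(num_obj(r, nj), num_subj(ni, r))
        new_weights[nj] = new_weights.get(nj, 0.0) + w_i * imp_rel(r, ni) * imp_concept(nj) * s
    return new_weights

def spread_by_similarity(weights, top_instances, most_similar, imp_concept, k=10):
    # Eq. (9): top ranked instances spread weight to their k most similar instances
    new_weights = dict(weights)
    for ni in top_instances:
        for nj, s in most_similar(ni, k):        # [(instance, sim(ni, nj)), ...]
            new_weights[nj] = new_weights.get(nj, 0.0) + weights[ni] * imp_concept(nj) * s
    return new_weights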

6 Experiments

6.1 Software Domain

We implemented the weighted ontology-based semantic search engine as a component of SWARMS and tested it in a software domain application. We defined the software ontology corresponding to the knowledge schema on SourceForge (http://sourceforge.net). Figure 4 shows part of the ontology. We downloaded 9796 projects from the web site and organized them into the instance base. There are now around 112,098 node instances together with 166,320 relation instances. On the SourceForge website there are 19 main software topics (e.g. Internet, Multimedia, etc.). Most of them have sub-topics; for example, the Multimedia topic has various sub-topics such as Graphics, Audio and Video. Each project has one or more forums used to discuss problems related to the project. Each project is developed by several developers, and one developer may also be involved in several projects.

Fig. 4. Part of the ontology for software. (Concepts in the figure include Project, Category, Person, Member, Developer, Project_admin, Foundry_member, Usage_statistic, Version, Released_package, Public_forum, Message, Help, and Discussion, connected by relations such as has_project, belong_to, sub_category, has_usage_statistic, has_version, has_released_package, member_of, developed_by, manage, has_public_forum, has_message, post, and reply.)

6.2 Experiment Setup and Measure

Starting from the annotated data from the SourceForge website, we developed a search test involving ten people in the evaluation. Each evaluator was given five queries (as shown in Table 2).

Table 2. The Test Queries

Query 1  Math Graphic Editor
Query 2  Semantic Web Java
Query 3  Medical Image Process
Query 4  Game Online Web Linux
Query 5  Machine learning algorithm

For each query, we performed a comparison of four types of searches: a-rw, b-rsw, c-rsf and d-rswf. Table 3 shows the experimental search types. The first type of search only tested relation spreading. For the other three types, we tested the three techniques proposed in this paper.

Table 3. Test Types

Search Type ID   Use relation spreading   Use similarity search   Use weight in semantic search   Use feedback in semantic search
a-rw             Yes                      No                      Yes                             No
b-rsw            Yes                      Yes                     Yes                             No
c-rsf            Yes                      Yes                     No                              Yes
d-rswf           Yes                      Yes                     Yes                             Yes

In the evaluation, we propose a measure that takes into account both relevance and rank position. We define the measure as:

$$ R = \sum_{pos(i) \le N} \frac{r_i}{\log(1 + pos(i))} \qquad (10) $$

where i is an instance in the result list, ri is the relevance score given by the evaluator, and pos(i) is the rank of i in the result list. Only the top N results are considered to compute the R measure. Because users tend to only care about the few top ranked results, we set N to 30 here.
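Eq. (10) translates directly into code; the sketch below assumes the evaluator's relevance scores are given in rank order and that ranks start at 1.

import math

def r_score(relevance_in_rank_order, n=30):
    # relevance_in_rank_order[pos-1] is r_i for the result at rank pos
    return sum(r / math.log(1 + pos)
               for pos, r in enumerate(relevance_in_rank_order[:n], start=1))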


6.3 Results and Discussion

The R scores of the four types of search on all five queries are shown in Figure 5. Each score is the average of the scores from the ten evaluators. The four bars from left to right respectively represent the four types of searches: a-rw, b-rsw, c-rsf and d-rswf. From Figure 5, we see that the semantic search engines (types b, c, d) exploiting instance similarity are, on average, able to provide more relevant results and give a better ranking of the retrieved results than the search engine (type a) that only uses relations.

Fig. 5. Experiment Results for the Five Queries

These empirical results deserve several comments. First, we can see that type b search is 18% better than type a search, which means that the instance similarity method used in the spreading process boosts the performance. Naturally, we were interested in why type b search works so well, so we interviewed three evaluators to try to answer that question. Their explanation is that the improvement mainly comes from the augmentation of relevant retrieved instances. Some relevant instances retrieved in type b search are not included in the type a search results. They usually contain no or few query keywords and have no relation with the initial results, and they are retrieved due to their high similarity with the top ranked instances. Nevertheless, type c searches do not improve much in comparison to type a searches (only 9% on average). The reason is that the similarity computed based on the unweighted ontology is inaccurate, which hurts the final performance. We see that type d searches have an 18% higher score on average than type c searches. When instance similarities are computed using the weighted ontology, the relevance scores are much better. The intuition behind this is that the weighted ontology makes the instance similarity more accurate, and accurate similarity reduces the irrelevant entities introduced into the spreading process. Note that, in general, the performance of feedback is correlated with the initial performance of the traditional search. When the initial results are very good or very bad, feedback does not affect the final performance very much, because users cannot provide much more information by annotating the top ranked instances in the initial results.


We see that, in the first query, type d search is 22% higher than type b search. However, in the second query, with bad initial results, the improvement is only 5.56%. And in the third query, with very good initial results, feedback does not provide any further improvement.

7 Conclusion

In this paper, we have investigated the problem of semantic search. We have extended the ontology definition so as to support the importance of concepts and relations. We have proposed an approach to the task of semantic search that uses instance similarities and a semantic feedback technique. Experimental results show that the proposed methods are effective and can improve retrieval performance significantly. As future work, we plan to further improve the accuracy of semantic search. We also want to apply the proposed method to more applications, such as book, news or paper searching, to test its effectiveness.

References

1. T. Berners-Lee, J. Hendler, and O. Lassila: The Semantic Web. Scientific American, 284(5) (2001) 34-43
2. R. Guha, R. McCool, and E. Miller: Semantic Search. In Proceedings of the Twelfth International World Wide Web Conference (WWW 2003), Budapest, Hungary (2003) 700-709
3. K. Anyanwu and A. Sheth: The ρ Operator: Discovering and Ranking Associations on the Semantic Web. SIGMOD Record, Vol. 31 (2002) 42-47
4. C. Rocha, D. Schwabe, and M. Poggi: A Hybrid Approach for Searching in the Semantic Web. In Proceedings of the Thirteenth International World Wide Web Conference (2004)
5. N. Stojanovic, R. Studer, and L. Stojanovic: An Approach for the Ranking of Query Results in the Semantic Web. In Proceedings of the 2nd International Semantic Web Conference (ISWC2003), Springer-Verlag (2003) 500-516
6. T.R. Gruber: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, Vol. 5 (1993) 199-220
7. BY. Liang, J. Tang, G. Wu, P. Zhang, K. Zhang, et al.: SWARMS: A Tool for Exploring Domain Knowledge on the Semantic Web. In AAAI'05 Workshop: Contexts and Ontologies: Theory, Practice and Applications (2005)
8. M. Dean and G. Schreiber (editors): OWL Web Ontology Language Reference. W3C Recommendation, http://www.w3.org/TR/owl-ref/ (2004)
9. J. Davies, R. Weeks, and U. Krohn: QuizRDF: Search Technology for the Semantic Web. In WWW 2002 Workshop on RDF & Semantic Web Applications, Hawaii, USA (2002)
10. Froogle. http://froogle.google.com
11. A. Sheth, C. Bertram, D. Avant, B. Hammond, K. Kochut, and Y. Warke: Managing Semantic Content for the Web. IEEE Internet Computing 6(4) (2002) 80-87
12. P. Ganesan, H. Garcia-Molina, and J. Widom: Exploiting Hierarchical Domain Structure to Compute Similarity. ACM Trans. Inf. Syst. 21(1) (2003) 64-93
13. B.Y. Ricardo and R.N. Berthier: Modern Information Retrieval. Pearson Education Limited (1999)
14. C.J. van Rijsbergen: Information Retrieval. 2nd ed., Butterworths, London (1979)
15. A. Strehl, J. Ghosh, and R. Mooney: Impact of Similarity Measures on Web-page Clustering. In Proceedings of the AAAI Workshop on AI for Web Search (2000)
16. G. Salton and C. Buckley: Term-weighting Approaches in Automatic Text Retrieval. Inf. Process. Manage. 24(5) (1988) 513-523
17. A. Singhal: Modern Information Retrieval: A Brief Overview. Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, 24(4) (2001) 35-43
18. H. Chen and T. Ng: An Algorithmic Approach to Concept Exploration in a Large Knowledge Network (Automatic Thesaurus Consultation): Symbolic Branch-and-Bound vs. Connectionist Hopfield Net Activation. Journal of the American Society for Information Science 46(5) (1995) 348-369
19. P. Cohen and R. Kjeldsen: Information Retrieval by Constrained Spreading Activation on Semantic Networks. Information Processing and Management, 23(4) (1987) 255-268
20. K. O'Hara, H. Alani, and N. Shadbolt: Identifying Communities of Practice: Analyzing Ontologies as Networks to Support Community Recognition. In IFIP-WCC, Montreal (2002)

Determinants of Groupware Usability for Community Care Collaboration

Lu Liang, Yong Tang, and Na Tang

Dept. of Computer Science, Sun Yat-sen University, Guangzhou 510275, P.R. China
[email protected], [email protected]

Abstract. Due to the special requirements of community care, software systems in this industry usually have to be developed as shared-workspace groupware systems, which are claimed to help workers achieve job performance more quickly and easily. But the adoption of this kind of system in Australia has so far been limited and unsuccessful. It is observed that their deployment always encounters resistance from the community care workers who must use them. One possible cause of this resistance is that the groupware usability is quite unsatisfying, which in turn means the system does not meet user expectations. This research aims to gain a deeper understanding of users' perspectives on and expectations of the interface usability of a Community Care Groupware System (CCGS), based on the specific context of use. A task modeling method is introduced to improve the quality of groupware usability evaluation. A triangulated research methodology comprising unstructured observation, structured and semi-structured questionnaires, and unstructured interviews was conducted with 23 community care workers. Five determinants that most influence the interface usability of a CCGS from the user's perspective, and several relevant suggestions for making a CCGS really usable in everyday use, are elicited from the data collection and analysis.

1 Introduction

It is widely recognized by both Federal and State governments that the Community Care sector in Australia is in crisis. The Australian population is ageing rapidly, forcing a greater expansion in community care facilities to take the pressure off expensive hospital beds [1]. These community care facilities are homes to many elderly people across Australia and should provide the residents with the best care possible to make their final years peaceful and enjoyable. However, both Federal and State governments are finding that recruitment and retention of nurses in community care is a major problem [1]. One of the important causes of disaffection among community care workers is the “documentation burden” – the onerous task of documenting all aspects of patient care [2,3,4].


Although the advantages of Information Technology itself and the benefits from its widespread adoption in many other industries have given practitioners great expectations that it can offer solutions to some of these problems in community care, the uptake of information systems in the health care industry is lower than in other industries, such as tourism and finance, and it is even lower in the community care sector than in the general health care industry. Not surprisingly, our literature review shows that there has been little research on the use of IT and on the success of usable information systems in community care in Australia. Because the obvious characteristic of community care work is its spatial and temporal distribution, which calls for shared workspaces, such systems need to be designed and implemented as a kind of groupware system (called a Community Care Groupware System, CCGS). But our investigation shows that the deployment of CCGSs often encounters resistance from the professionals who must use them. A major possible barrier to the effective uptake of CCGSs could be the lack of understanding of the occupational, social and cultural characteristics and the general computer skills of community care workers. Such a barrier directly causes CCGSs to fail to meet user expectations. Developers often rely on inappropriate interface models and task analysis models from general information systems, resulting in a serious mismatch between the work practices, social characteristics and computer skills of community care workers and the actual performance of a CCGS. Therefore, a deeper understanding of the context of use and of users' perspectives on and expectations of a CCGS could, potentially, improve the acceptance and usability of CCGSs, ensuring that success can be more easily achieved and more potential benefits are realized.

2 Context of Use of CCGS in Australia

2.1 The Importance of Context of Use to Usability

Usability is a broad concept. A general and formal definition of usability in ISO 9241 [5] is "the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use". The context of use consists of the users, tasks, equipment (hardware, software and materials) and the physical and organizational environments that influence the interaction [6]. Donyaee and Seffah [7] designed QUIM (Quality in Use Integrated Map) to analyze the structure of usability. We simplify and refine that model as shown in Figure 1: usability is decomposed into five levels, all of which rest on the context of use, and from the lowest level up to the top usability is evaluated and predicted in the reverse direction. In contrast to functional requirements, usability requirements answer questions such as: How do users approach this work? How do they think about the tasks? How do they judge a successful experience? [8] To obtain these answers, worthwhile experience from past projects in the same domain, together with a picture of the end users and their usability requirements, should be collected before the development team starts to design. If a better understanding of user needs and the context of use yields a new picture, it may change the design assumptions and even the development approach of the system [8].


Fig. 1. The hierarchy of usability (usability decomposes into dimensions, then criteria, then measure scales and evaluation methods, all resting on the context of use; verification proceeds in the reverse direction)

Furthermore, Bevan and Macleod [6] have claimed that the characteristics of the context may be as important in determining usability as the characteristics of the product itself, so that changing any relevant aspect of the context of use may change the usability of the product. More concretely, the working characteristics of an industry or business are an important component of the specified context of use; they play an important role in understanding and achieving the usability of a system and, in turn, strongly influence the acceptance and adoption of a CCGS.

2.2 Working Characteristics of Community Care in Australia

Community care is a community-based health service that provides high-quality services for maintaining health and quality of life in Australia. Services go to all sections of the community: individuals, families and groups with special needs. Shared home care is cooperative work that involves different professionals, consists of mobile workers and requires immediate and ubiquitous access to patient-oriented data documented in different information systems. Sue Richardson and colleagues conducted a survey of 2881 residential community care facilities across Australia regarding the characteristics of their current staff and workforce. From the usable data supplied by 1746 respondents, the research group produced a comprehensive and persuasive report [9]. Several working characteristics that directly shape the context of use of CCGS can be elicited from this report:

a) The majority of community care professionals are nurses rather than other healthcare professionals [10]. The typical nurse is female, Australian born, mostly aged about 40–50, has a modest school education, and lacks ordinary computer skills [9].

b) There is a strong demand for long-term staff in community care. About one third of the staff are part-time workers [9], so their jobs tend to be temporary and facilities frequently have to make an extra effort to recruit and train new employees.

c) Healthcare services for the dependent elderly are usually provided in their homes by mobile workers, and a patient can receive services from several different nurses. Workers spend much of their time out of the facilities, so they seldom see each other face to face. However, since multiple workers can care for the same patient, their actions are interdependent and require tight collaboration.

d) Because much time has to be devoted to indirect work, many workers feel they do not have the right balance of time and duties to make the most of their skills in providing direct care to each resident [9]. This makes most of them feel under pressure to work harder.

e) Not many community care nurses feel very satisfied with the job itself: 35 percent report being dissatisfied or neutral [9]. A possible reason is that they can devote only an unsatisfying amount of time to the direct care of residents and cannot use their specific care skills effectively and efficiently, since most of them strongly agree that they have the necessary skills for the job itself.

f) As the number of residents in facilities keeps increasing year by year, the labour intensity of community care grows. Documentation in particular has been found to take a great deal of time. Some workers cite paperwork as one of the worst aspects of their job; others complain about the pattern of hours they work.

3 A Task Modeling Method for Groupware Usability

Poor usability in distributed shared-workspace systems has prompted CSCW (Computer Supported Cooperative Work) researchers to develop discount usability methods for designing and evaluating groupware [11]. Discount methods work well with low-fidelity prototypes, which allow evaluations to take place during early development when there is no operational prototype for users to test in a real work setting. Since the evaluation does not happen in real work, information about the task context must be articulated and used synthetically. As there were no modeling or analysis schemes appropriate for groupware usability evaluation, Pinelle and Gutwin [12] developed a task modeling technique that can represent the flexibility and variability inherent in real-world group task behaviour. The technique uses a level of analysis that is well suited to usability evaluation of a group interface. As shown in Figure 2, the task model comprises four component levels.


Fig. 2. The structure of the task model for groupware usability evaluation (a scenario comprises tasks; each task has individual and collaborative instantiations, the latter described by collaborative mechanics; instantiations consist of actions)

At the highest level is the scenario: a description of activities related to achieving a specific outcome, together with user specifications, a group goal and a set of circumstances. The second level is tasks, the basic components of scenarios. Tasks describe only what occurs in a scenario, not how it occurs. How a task occurs is represented by task instantiations, which are explicitly divided into individual task instantiations (the taskwork component of a task) and collaborative task instantiations (the teamwork component of a task). In addition, a list of mechanics of collaboration is defined to identify how collaborative activities take place and to capture important but easily overlooked aspects of a task. The lowest level is actions, which describe common ways to carry out the mechanic specified in a collaborative task instantiation. Using this task model in the usability evaluation of CCGS makes the real-world context explicit. It improves the efficiency of the evaluation by allowing both developers and users to consider how well a prototype design supports workers in carrying out the work processes commonly used to attain the intended outcomes of a scenario, and it helps to uncover interface usability problems more easily and comprehensively.
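A minimal sketch of this four-level hierarchy, written as plain Python data structures, is given below. It is our illustration rather than an implementation from the paper; all class, field and example names (Scenario, Task, the care-documentation items, and so on) are hypothetical.

# Illustrative encoding of the scenario/task/instantiation/action hierarchy;
# names and the sample data are assumptions, not taken from [12] or this paper.
class Action:
    def __init__(self, description):
        self.description = description           # a common way to carry out a mechanic

class Instantiation:
    def __init__(self, kind, mechanic=None, actions=None):
        assert kind in ("individual", "collaborative")
        self.kind = kind                          # taskwork vs. teamwork component
        self.mechanic = mechanic                  # mechanic of collaboration, if any
        self.actions = actions or []

class Task:
    def __init__(self, name, instantiations=None):
        self.name = name                          # what occurs, not how
        self.instantiations = instantiations or []

class Scenario:
    def __init__(self, outcome, users, group_goal, circumstances, tasks=None):
        self.outcome = outcome
        self.users = users
        self.group_goal = group_goal
        self.circumstances = circumstances
        self.tasks = tasks or []

# A toy home-care scenario for illustration only.
update_notes = Task("update care notes", [
    Instantiation("individual", actions=[Action("enter observations")]),
    Instantiation("collaborative", mechanic="explicit communication",
                  actions=[Action("flag the change for the next nurse")]),
])
handover = Scenario("resident handover", ["nurse A", "nurse B"],
                    "shared, up-to-date care record", "mobile, asynchronous work",
                    [update_notes])

An evaluator can then walk such a structure and ask, for each collaborative instantiation, how well the prototype interface supports the listed mechanics and actions.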

4 Methodology

4.1 Sampling

To capture the user perspective shaped by the particular social, cultural and working characteristics of the community care sector, and to find out which factors most influence community care professionals' attitudes towards CCGS, we concentrate on the third level of the usability hierarchy, "usability criteria for CCGS" (see Figure 1), and conduct a survey of community care nurses in the Illawarra region of Australia. The survey targets practising community care nurses who preferably have some familiarity and experience with a CCGS. This requirement ensures that the participants already have an idea of what a CCGS is, what complaints their past experience might raise, and what their needs and expectations of a CCGS are. Owing to the stressful workload and limited time of community care professionals, as well as the low adoption of CCGS, the final number of participants is 23.

4.2 Methods

The core of this research is the identification of the discrepancy between the current and the desired characteristics of CCGS as perceived by community care professionals, as a basis for CCGS development decisions. Aware of the complexity of this issue, we used a triangulated methodology in the survey to elicit users' real perspectives and the determinants of usability more accurately and clearly. The procedure is shown in Figure 3.

Fig. 3. The procedure of investigation (demonstrate an example CCGS; observe participants while they use it, recording complaints and questions; have them complete the questionnaire; finish with an unstructured interview to collect suggestions)

Before the survey, we gave each participant a clear introduction to and explanation of the research purpose, to make sure they focused on what we were actually investigating. Because some participants had not used a CCGS at work for some time and some had not previously appreciated its usefulness, we prepared an existing CCGS as an example and showed all participants what its functions are and how it works. Each of them was then asked to use the system for a while to strengthen their impressions and perceptions. During this stage we used unstructured observation: records of the questions they asked while using the system, the complaints they voiced and the suggestions they mentioned spontaneously can reflect their true perceptions. After this exposure to the CCGS, each participant completed a structured questionnaire comprising three parts focusing on different issues. Semi-structured questions were then asked during a 30- to 45-minute interview. We recorded field notes during the interviews using a strategy described by Schatzman and Strauss [13] and organized them into observational notes. The combined notes were analyzed inductively with the constant comparative method [14].


5 Results

5.1 Description of Participants

Table 1. Description of participants

Gender: 19 of the 23 participants are female, the other 4 are male.
Age: 7 participants are aged 50–60, 10 are 40–50, 4 are 30–40, and the remaining 2 are 20–30; the average age is 45.
Job position: 15 of the 23 participants are full-time nurses in community care facilities; the others are part-time nurses.
Working years: half of the participants have worked in community care for over 10 years; of the others, 3 have worked for 5–10 years, 3 for 3–5 years, and 5 for only one or two years.
Work units: all participants currently work in community care units. Some have previously worked in a nursing home (high care facility) or a hostel (low care facility); the others have worked in a hospital or in retirement villages.
Experience with CCGS: 20 participants have used a CCGS at work, the other 3 have not. Of those 20, 7 used it for more than 3 years, 5 for 1–3 years, and 8 for less than one year. The average frequency of use, however, is only about half an hour to one hour per day, which seems rather low.

5.2 Five Factors That Most Influence the Usability of CCGS

Since the purpose of this research is to identify the most important criteria for measuring the groupware usability of CCGS, criteria that can guide development more effectively and efficiently, we prepared a list of existing criteria from the literature [5, 15–19] for our questionnaire and wove them into the interview topics, in order to extract from this general list the items that matter most for CCGS. From the valid questionnaire responses we carefully analyzed both the quantitative and the qualitative data and arrived at five items that are most important from the users' perspective:

- High accuracy. High correctness of the information the user obtains from the system, and a high degree to which the task goals represented in the output are achieved.

- Time efficiency. The availability of the output information at a time suitable for the user; the user should be able to spend the least possible time to accomplish a specific task or complete a specific workflow.

- High learnability. The speed and ease with which users feel they can master the system, or learn new features when necessary. It measures how rapidly a user can become productive and how rapidly an infrequent user can re-learn the product after periods of not using it.

- Simplicity. A low level of interface complexity in terms of layout, the number of items and the functional blocks, and a low level of workflow complexity in terms of the number of operation steps and of the collaborative components and people involved in an action.

- Useful help. The system is self-explanatory and, more specifically, provides adequate help facilities and documentation. It always communicates in a helpful way and assists in resolving operational problems. It refers to the user's perception of the understandability of the information provided on the interface and of the means for obtaining that information easily.

5.3 User Expectations of CCGS

Besides the five factors above, regarded as the most important requirements for achieving high groupware usability, the qualitative results on user expectations and recommendations for CCGS can be summarized as follows:

- Design CCGS for extremes. Information systems in healthcare, and especially in community care, are still at an early stage, and community care staff currently work under extreme conditions (a stressful working environment, overworked and insufficient staff, and little computer education). On top of meeting the system requirements, it is therefore better to design the interface of a CCGS to be very simple, clear and easy to learn.

- Use additional computing equipment. Community care staff prefer additional technology, such as PDAs, that allows convenient input whenever necessary and saves time.

- There are few requirements on the attractiveness of the interface. Community care staff do not care much about subjective qualities such as colour, the shape of items or aesthetics, but they do want consistency across interfaces, which helps them remember how to use the system.

- Community care staff prefer graphical functions, which are easy to read and interpret, for calculating and analyzing large amounts of data.

- Shared workspaces, files, items and chat areas that allow multiple meetings of groups or subgroups within the home care team should be well designed in a CCGS.

6 Conclusion

In this article we have addressed the poor usability of Community Care Groupware Systems (CCGS) in Australia, a problem that makes distributed shared-workspace groupware systems hard for researchers in user-centred software engineering to handle well. We aimed to find out why the adoption of CCGS in Australia is not as successful as in other industries, what community care workers' perspectives on and expectations of a usable CCGS in their daily work are, and what the most important usability factors of CCGS are. We first explained the importance of the context of use to usability and described the specific working characteristics of the community care industry in Australia. A task modeling method proposed by Pinelle and Gutwin was then introduced to support discount groupware usability evaluation of CCGS. Next, we conducted a survey supported by a triangulated research methodology comprising unstructured observation, a structured and semi-structured questionnaire (focused on the most important usability criteria), and unstructured interviews. Five determinants that most influence the groupware usability of CCGS from the users' perspective, together with several suggestions for making a CCGS genuinely usable in everyday work, were elicited from the data collection and analysis. We believe these groupware usability guidelines will be significant and valuable for the future development and evaluation of CCGS.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant No. 60373081) and the Guangdong Provincial Science and Technology Foundation (Grant No. 045503). The idea of this paper and the real-world data supporting the results come from collaborative research with the University of Wollongong in Australia. We thank Dr. Ping Yu for her contribution to the research methodology, and the research students who participated in conducting the scenarios and the data collection at the University of Wollongong.

References
1. Steketee M. Hospital Case. Weekend Australian, September 23 (2003)
2. Metherell M. Hospital backstop plan for local GPs. Sydney Morning Herald, August 18 (2003)
3. Fitzroy, N. and Moseby, S. The stress of community care documentation. Australian Nursing Journal 9(10) (2002)
4. Pelletier D., Duffield C. The complexities of documenting clinical information in long-term care settings in Australia. Journal of Gerontological Nursing 28(5) (2002)
5. ISO 9241-11, Ergonomic requirements for office work with visual display terminals – Part 11: Guidance on usability (1998)
6. Bevan, N. and Macleod, M. Usability measurement in context. Behaviour and Information Technology 13 (1994) 132–145
7. Donyaee, M., Seffah, A. QUIM: An Integrated Model for Specifying and Measuring Quality in Use. Eighth IFIP Conference on Human Computer Interaction, Tokyo, Japan (2001)
8. Whitney Q. Balancing the 5Es: Usability. Cutter IT Journal 17(2) (2004)
9. Richardson, S. and Martin, B. The care of older Australians: a picture of the residential community care workforce. A report of the Australia National Institute of Labour Studies (2004)
10. AIHW (Australian Institute of Health and Welfare). Australia's Health 2004. Canberra, Australian Institute of Health and Welfare (2004)
11. Steves, M., Morse, E., Gutwin, C., and Greenberg, S. A comparison of usage evaluation and inspection methods for assessing groupware usability. In Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work, Boulder, Colorado, Sept., ACM Press (2001) 125–134
12. Pinelle, D. and Gutwin, C. Task Analysis for Groupware Usability Evaluation: Modeling Shared-Workspace Tasks with the Mechanics of Collaboration. ACM Transactions on Computer-Human Interaction 10(4) (2003) 281–311
13. Schatzman L, Strauss A. Field research: strategies for a natural sociology. Englewood Cliffs, NJ: Prentice-Hall (1973)


14. Strauss A, Corbin J. Basics of qualitative research: grounded theory procedures and techniques. 1st Ed. Newbury Park, CA: Sage (1990)
15. Wiethoff, M., Arnold, A. G. and Houwing, E. M. The value of psychophysiological measures in human-computer interaction. In H.-J. Bullinger (ed.), Proceedings of the 4th International Conference on Human Computer Interaction (1991) 661–665
16. Nielsen, J. Usability Engineering. Academic Press, Boston (1993)
17. Bevan, N. Measuring usability as quality of use. Software Quality Journal 4 (1995) 115–150
18. James E., Sammy W. Development of A Tool for Measuring and Analyzing Computer User Satisfaction. Management Science 29(5) (1983) 530–545
19. Dix, A., Finlay, J., Abowd, G., Beale, R. Human-Computer Interaction, 2nd Edition. Prentice Hall, Pearson Education, UK, p. 162 (1997)

Automated Discovering of What is Hindering the Learning Performance of a Student

Sylvia Encheva (1) and Sharil Tumin (2)

(1) Stord/Haugesund University College, Bjørnsonsg. 45, 5528 Haugesund, Norway, [email protected]
(2) University of Bergen, IT-Dept., P.O. Box 7800, 5020 Bergen, Norway, [email protected]

Abstract. This paper focuses on a system framework supporting personalized learning. While learning styles describe the influence of cognitive factors, learning orientations describe the influence of emotions and intentions. The system responds to students’ needs according to their learning orientations. Such a system requires cooperation among several educational organizations, since it is quite difficult for a single organization to develop an item pool of questions tailored for individuals with different learning orientations. The cooperation needs serious consideration of security issues. We propose a model for sharing protected Web resources that secures privacy. Keywords: Automated tutoring system, prior knowledge.

1 Introduction

In traditional academic education students are viewed as passive recipients of information. The new information age demands an educational approach that develops students' abilities to deal with problems that are not obvious today. University graduates are expected to possess solid knowledge of methods and competency in analysis and synthesis, as well as the professional competencies needed to operate and manage a modern enterprise. All these requirements point to the need to develop new ways of content delivery and assessment [5]. Methods of group instruction that are as effective as one-to-one tutoring are presented in [6]. Some applications of artificial intelligence in education are discussed in [4], [7], and [14]. In this paper we propose a system framework that assists each student at several levels of the knowledge acquisition process and efficiently transmits knowledge and skills. Our idea is to develop a system that focuses on the student's learning process: an automated system that prevents students from becoming overwhelmed with information, prevents them from losing track of where they are going, and lets them make the most of the facilities the system offers. The instruction is directed at what still needs to be mastered.


A student is encouraged to work on a concept until it is well understood. A student experiencing a misconception is advised accordingly, while others who have mastered the concept proceed to the next topic. The system also contains an interactive theoretical part (lecture notes); problems to be solved at home; tests assessing recall of facts, the ability to map the relationship between two items into a different context, and conceptual thinking, asking students to evaluate consequences and draw conclusions; and various quizzes. The paper is organized as follows. Related work is discussed in Section 2. Learning orientations are considered in Section 3. Student navigation while solving problems is discussed in Section 4, and the system architecture is described in Section 5. The paper ends with a conclusion in Section 6.

2 Related Work

A Web-based tutoring tool with mining facilities to improve learning and teaching is described in [21]. An intelligent system for assisting a user in solving a problem was developed in [18]. A system using Bayesian nets for assessing students' knowledge is proposed in [20]; in addition, the system in [9] helps the student navigate unknown concepts. An ITS that teaches students how to solve problems is presented in [11]. A personalized intelligent computer-assisted training system is presented in [22]. A model for detecting student misuse of help in intelligent tutoring systems is presented in [3]. An investigation of whether a cognitive tutor can be made more effective by extending it to help students acquire help-seeking skills can be found in [16], where the authors also found that 72% of all student actions represented unproductive help-seeking behaviour. A proliferation of hint abuse (e.g., using hints to find answers rather than trying to understand) was found in [1] and [16]. However, evidence that on-demand help, when used appropriately, can have a positive impact on learning was found in [24], [25], and [27]. All these topics are both important and interesting. We focus on a different aspect of the use of intelligent tutoring systems, namely an attempt first to detect each student's lack of prior knowledge and then to fill in the gaps. A tutoring system called Smithtown [26] was designed to enhance inductive inquiry skills in economics. CaseBook [10] is a Web environment enabling problem-based learning to be used by different educators in different classrooms. While problem-based learning improves long-term retention, it tends to reduce initial levels of learning [13].

3 Learning Orientations

While learning styles describe the influence of cognitive factors, learning orientations describe the influence of emotions and intentions. According to their learning orientations [19], students are divided into four groups: transforming learners, performing learners, conforming learners and resistant learners.

4 What the System Provides for Students

Contents (lecture notes, examples, assessments, etc.) are saved in a predetermined directory structure on the server and are presented to students as Web documents and Web forms. A subject name is followed by two hypermedia containers: information and topics. Information contains
– a description of the subject's content, literature, evaluation methods, etc.,
– messages about all current changes and links to compulsory assignments, and
– a short summary of theory and examples from previous subjects that helps students in their work with the current subject.
Topics contain
– theory and examples presented in traditional face-to-face teaching. For students with special interest, additional material is placed under a hyperlink 'for those who have special interest'; the contents of such materials go beyond the compulsory curriculum and are not included in exam questions,
– problems to be solved in tutorial hours and as homework, and
– illustrations: Java applets, interactive examples, and animations.
In the current version, problems in the theory part and homework problems are followed by four buttons (Fig. 1).

Fig. 1. Help is provided in four steps: Answer, Hint, Solution, Detailed solution

Intelligent agents in the theoretical part present different students with different pages according to their abilities. Intelligent agents in the problem-solving part give additional explanations and examples that help to clear up current difficulties and misconceptions.

4.1 Guiding a Student While Solving Problems

Student navigation in an automated tutoring system should prevent students from becoming overwhelmed with information and losing track of where they are going, while permitting them to make the most of the facilities the system offers.


Fig. 2. Recommendation of a new topic


In this system a new topic is recommended to a student (Fig. 2) after assessing his/her knowledge:
– The system first suggests a test to the student. Based on his/her answers, a tutor agent recommends either proceeding to the next topic or reading hints and solving the suggested examples (a sketch of this rule is given after this list). Students nevertheless have the opportunity to jump to the next topic without following the system's recommendations.
– The system gives information about the amount of theory and problems that make up the content of a 2 × 45 min lecture and recommends related problems to be solved as further work. Based on the student's responses while solving the recommended problems, the system either suggests problems from the database that will clear up a misconception (if necessary) or advises the student to proceed to another topic.
– The system also gives information about
• the level of knowledge of each student,
• how the results of every student compare with the results of the class,
• how the results of different classes compare, and
• the number of students going back to a particular problem and/or statement.
This helps content developers to improve certain parts by including further explanations and better examples.
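A rough sketch of such a recommendation rule is given below; the pass threshold, function name and topic labels are hypothetical choices of ours, not taken from the system described here.

# Hypothetical tutor-agent recommendation rule; threshold and names are assumptions.
PASS_THRESHOLD = 0.7

def recommend(test_score, remedial_examples, next_topic):
    if test_score >= PASS_THRESHOLD:
        return {"action": "proceed", "topic": next_topic}
    # Below the threshold: point to hints and examples, but the student may
    # still choose to jump to the next topic.
    return {"action": "review", "hints": True,
            "examples": remedial_examples, "may_skip_to": next_topic}

advice = recommend(0.55, ["example 3.2", "example 3.4"], "Topic 4")
print(advice["action"])    # -> "review"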

Subject’s Navigation

A student navigates from a root node (start page) to a leaf node (a specific topic). Only one entry point is provided, which reduces navigational confusion and increases student satisfaction. The navigational structure of the system allows the course builder to insert and/or remove elements at any time.

4.3 Online Tests

HTML Web forms are compiled on the fly by a server-side script that randomly chooses ten questions from a set of twelve files with ten questions each (in the current version). Questions with the same sequence number in different files are similar in content and level of difficulty. Rasch model [2] requirements were applied when establishing the similarity among questions. In preliminary work, the Rasch model and a judgemental approach were compared as two different item selection methods. No significant difference was observed between the Rasch model and the judgemental approach in terms of their reliability, but a significant difference in favour of the Rasch model was observed in terms of test content validity.
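One plausible reading of this assembly scheme is that, for each of the ten sequence positions, the script draws the question from a randomly chosen file, so that every generated form covers the same positions with parallel items. The sketch below follows that reading; the file layout and the loading helper are hypothetical.

import random

NUM_FILES, NUM_POSITIONS = 12, 10

def load_question(file_index, position):
    # Placeholder: the real system would read the question text from the
    # corresponding question file or database record.
    return "question %d from file %d" % (position, file_index)

def assemble_form():
    form = []
    for position in range(1, NUM_POSITIONS + 1):
        file_index = random.randint(1, NUM_FILES)   # parallel item for this position
        form.append(load_question(file_index, position))
    return form

print(assemble_form())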

5 The System

The framework is composed of three main components: base system, runtime support, and agents.


Base system: Apache is used as the Web server with a Python interpreter extension provided by the mod_python module. PostgreSQL is used as a relational database supporting the Structured Query Language (SQL). Python is an object-oriented scripting language, and having a Python interpreter module inside the Web server increases performance and reduces response time significantly. A standard Apache installation with Common Gateway Interface (CGI) scripts could also be used, but this incurs a performance penalty because the Web server has to start a new external Python interpreter each time a CGI script is called. A relational database management system (DBMS) is used to store tests and student data. PostgreSQL gives us flexible database support; all data is stored in the relational database and can be queried programmatically from Python scripts using SQL. Other relational databases such as MySQL and Oracle could be used instead. The learning-material content was developed on a Linux workstation.

Runtime support: Dynamic HTML pages are created by server-side scripts written in Python. Python programs are also used for database integration, diagnostics, and the communication modules.
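To make the base system concrete, a minimal mod_python handler might look roughly like the sketch below. The table name, connection string and query are invented for the example; the paper does not show the actual scripts, and any Python DB-API driver for PostgreSQL could stand in for the one used here.

from mod_python import apache
import psycopg2                      # assumed PostgreSQL driver for the example

def handler(req):
    conn = psycopg2.connect("dbname=tutor user=tutor")   # hypothetical database
    cur = conn.cursor()
    # Fetch the questions of one (hypothetical) test from the database.
    cur.execute("SELECT position, text FROM questions WHERE test_id = %s", (1,))
    rows = cur.fetchall()
    conn.close()

    req.content_type = "text/html"
    req.write("<form method='post'>")
    for position, text in rows:
        req.write("<p>%d. %s <input name='q%d'/></p>" % (position, text, position))
    req.write("</form>")
    return apache.OK

Because the interpreter lives inside Apache, such a handler avoids the per-request start-up cost that an external CGI interpreter would incur.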

5.1 Agents

The system is supported by student profile, test, and diagnostic agents, which we discuss further below.

Fig. 3. System architecture (students and teachers reach the Web server over the Internet; server-side scripts mediate between the diagnostic, test and student profile agents, the database and the learning materials)

Diagnostic Agent. The system structure is defined by pedagogical requirements. This structure defines the dependencies among learning materials, the levels of and relationships between test options, and the inference rules used by the diagnostic agent. It is crucial for providing each student with a personalized learning workflow for efficient learning. The system presents different students with different pages according to their needs. Each student's responses to the system's suggestions provide the diagnostic agent with the necessary data; the agent analyzes these data using the programmed inference rules and gives the student an immediate recommendation on how to proceed. The student's status is saved in the database.

Student Profile Agent. In the recommendation on how to proceed, a student can choose to subscribe to one or more suggested learning materials. The student's learning material subscriptions are placed in a stack-like structure in the student profile data, and the student profile agent presents the student with the topmost learning material in the profile stack at each new learning session. Initially, the profile stack contains a sequential ordering of the learning materials in a given subject. A student can choose to skip any presented learning material and go to the next one at any time. A student is considered to have completed a course when his/her profile stack is empty and he/she has passed all compulsory tests assigned to the course. The names of all learning materials taken by the student during the course and the scores of the tests are saved in the student's audit trail. The audit-trail data is used for billing purposes, while a global analysis of the course and the feedback data is used to improve the learning materials for each subject.

The server-side script (SSS) contains student and teacher modules. The student module covers, among other things, student registration, student administration, student authentication, and authorization. The teacher module provides an interface for a teacher to define topics and subjects, to obtain students' status reports and diagnostics, and for messaging. SSS modules and agents communicate with each other through a request-response mechanism in which remote procedure calls are made among the different modules/agents, providing students with a dynamic and personalized learning environment.

Test Agent. A pedagogically crafted scheme with a set of questions and answers gives the test agent its programmed intelligence, in which wrong answers to a drill lead learners to appropriate hints and examples. The students can then subscribe to the learning materials suggested by the hints, or try the drill again and continue with the current course. Students can jump back to the current course trail at any time while following trails suggested by the test agent. The agent calculates scores, shows the result status, and keeps track of the assessments taken by each student. After each assessment the test agent sends summarized information to the diagnostic agent.
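The behaviour of the profile stack described above can be pictured with a small sketch; the class and method names are ours and only mirror the text (top-of-stack presentation, skipping, subscription to remedial material, and completion once the stack is empty and all compulsory tests are passed).

# Illustrative sketch of the student profile stack; names are assumptions.
class StudentProfile:
    def __init__(self, subject_materials):
        # Initially a sequential ordering of the subject's learning materials;
        # the end of the list is treated as the top of the stack.
        self.stack = list(reversed(subject_materials))
        self.passed_compulsory = set()

    def next_material(self):
        return self.stack[-1] if self.stack else None

    def skip_or_finish_current(self):
        if self.stack:
            self.stack.pop()

    def subscribe(self, material):
        self.stack.append(material)      # recommended material goes on top

    def completed(self, compulsory_tests):
        return not self.stack and compulsory_tests <= self.passed_compulsory

profile = StudentProfile(["topic 1", "topic 2", "topic 3"])
profile.subscribe("remedial examples for topic 1")
print(profile.next_material())           # -> "remedial examples for topic 1"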

5.2 Sharing Protected Web Resources

Most information and communication technology (ICT) based learning systems require user authentication and authorization data to reside locally in their user database. Therefore, organizations using such a system have to export their users’ data to the system. This will involve a complicated data synchronization mechanism. Our model does not involve the export of user authentication and authorization data across cooperating organizations, which simplifies user management as there is no duplication of data and synchronization problems associated with it. Another important advantage of this model is security enhancement since user authentication and authorization data never leaves its home organization. Let us denote a domain server, which provides Web resources, by RPDS, and another domain server, which provides user authentication/authorization, by UDAS (Fig. 4). In a collaborative user/resource framework, a RPDS provides a Web portal entry point for all resources and applications offered to Web clients. An UDAS maintains a domain users and groups database and provides a Web-based logon application for domain users and web services for users authentication and authorization, based on Simple Object Access Protocol (SOAP).

Fig. 4. Peer-to-peer communication framework


An RPDS will assign a Web browser a unique portal session identifier (PSID) the first time an unauthenticated user tries to access a protected Web resource controlled by the RPDS. This generated PSID is saved in the Web browser's cookie by the RPDS's authentication application. The user is then asked to provide her home domain name for authentication. The PSID, the Web browser's IP address (BIP), and the home domain name are saved in a session database in the RPDS. The PSID is associated with the BIP for the session by the RPDS and is not yet associated with a valid user. The RPDS then redirects the user's browser to her home domain's UDAS; the PSID and the RPDS's URI are sent as "get"-parameters in the redirection. A logon Web form is then displayed on the user's Web browser by the UDAS. The user is authenticated if she provides a valid user identifier (ID) and a valid password. The UDAS then creates a unique authentication session identifier (ASID) associated with the {PSID, BIP, ID}-triplet. The {PSID, BIP, ID}-triplet and the {ASID, timestamp}-pair are saved in the session database at the UDAS. The PSID is thereby associated with an authenticated ID for the session by the UDAS. The UDAS redirects the user's browser back to the RPDS, with the ASID sent as a "get"-parameter in the redirection. Using the given ASID as a parameter, the RPDS makes a call to an authentication service at the user's home domain SOAP server (UDWS). If the ASID is still valid, the ID associated with the ASID is returned, and the PSID is then associated with an authenticated user ID at the RPDS.

An ASID is a short-term, one-time session identifier (i.e. it has a limited period of validity and can be used only once) and is used between an RPDS and a UDAS for session control. At any time, the RPDS can obtain a new ASID by calling the UDWS with a valid PSID and BIP as parameters. The newly obtained ASID can be used, for example, to check the user's authorization against the UDAS group data or to transfer the session to another RPDS. Using a short-term ASID and a long-term PSID, a simple security mechanism for a user's session can be maintained between RPDSs and UDASs. The PSID is bound to the BIP on the RPDS domain side and to the ID after a valid user logon on the UDAS domain side. The two initial URI redirections (logon redirect and response redirect) are done over HTTPS. By using a peer-to-peer remote procedure call over SOAP, an ASID is used to establish the validity of an ID for user authentication and authorization. A user never discloses her user identifier and password in a foreign domain. Both HTTPS and SOAP would have to be compromised for a rogue user or application to gain illegal access to resources. The risk of illegal access is minimized by using two calls: the first to obtain the short-term one-time ASID associated with {PSID, BIP}, and the second to obtain the user authentication and authorization associated with that ASID.

Remark 1. A user can have several user accounts from different domains. This can introduce a collusion of rights in the framework. Suppose a person possesses two user accounts from two different domains: she is a student in one domain and a teaching staff member in another domain. A simple role policy was set to permit access to a resource such that a user in the student role can only submit and read her own documents, while a user in the teacher role can read all submitted documents. One way to solve such a problem is to assign a globally unique key (personal number) to each person; the key must be unique across all collaborating domains, but the central key management this requires may not be practical. Another way is to define a more detailed role policy and an access audit trail to expose collusion-of-rights problems. A domain maintains its own users. The framework supports only one level of 'chain of trust': an RPDS refers only to the user's home domain, and a UDAS does not refer to other domains for user authentication and authorization. The framework's security mechanism could go into a loop if a multi-level chain of trust were allowed.
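The RPDS side of the sign-on flow described above can be summarized in a short sketch. The storage, function names and the stand-in for the SOAP call are hypothetical; only the PSID/BIP/ASID bookkeeping follows the text.

import uuid

portal_sessions = {}    # PSID -> {"bip": ..., "domain": ..., "user": ...}

def start_session(browser_ip, home_domain):
    # First access to a protected resource: issue a PSID (stored in a cookie)
    # and remember the browser IP and home domain before redirecting to the UDAS.
    psid = uuid.uuid4().hex
    portal_sessions[psid] = {"bip": browser_ip, "domain": home_domain, "user": None}
    return psid

def complete_logon(psid, browser_ip, asid, call_udws_authenticate):
    # The UDAS has redirected back with a one-time ASID; resolve it to a user ID
    # via the home domain's SOAP service (passed in here as a plain callable).
    session = portal_sessions.get(psid)
    if session is None or session["bip"] != browser_ip:
        return None
    user_id = call_udws_authenticate(asid)
    if user_id:
        session["user"] = user_id        # the PSID is now bound to the user
    return user_id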

6 Conclusion

The presented system framework responds to the increased demand for effective learning tools that can be smoothly integrated into the educational process. It explores the potential of Web-enhanced learning to improve undergraduate engineering students' achievements. The system uses each student's diagnostic reports on miscalculations, misconceptions, and lack of knowledge, offers advice in the form of additional reading, hints and tests, and recommends interaction with a human tutor when needed. We are currently working on optimizing the trade-off between the amount of time a student spends working with the system and the amount of knowledge obtained. An interesting question for future work is how effective hints are in preventing students from misusing the help functions of the system.

References
1. Aleven, V. and Koedinger, K. R.: Limitations of Student Control: Do Students Know when they need help? In G. Gauthier, C. Frasson, and K. VanLehn (Eds.), Proceedings of the 5th International Conference on Intelligent Tutoring Systems, ITS 2000. Berlin: Springer Verlag (2000) 292–303
2. Baker, F.: The Basics of Item Response Theory. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park (2001)
3. Baker, R.S., Corbett, A.T., and Koedinger, K.R.: Detecting student misuse of intelligent tutoring systems. Lecture Notes in Computer Science, Vol. 3220. Springer-Verlag, Berlin Heidelberg New York (2004) 531–540
4. Beck J., Stern M., and Haugsjaa E.: Applications of artificial intelligence in education. ACM Crossroads (1996) 11–15
5. Birenbaum, M.: Assessment 2000: towards a pluralistic approach to assessment. In M. Birenbaum and F. J. R. C. Dochy (Eds.), Alternatives in assessment of achievements, learning processes and prior knowledge. Evaluation in education and human services. Boston, MA: Kluwer Academic Publishers (1996) 3–29
6. Bloom B: The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher 13(6) (1984) 4–16


7. Brusilovsky P.: Adaptive and intelligent technologies for Web-based education. Special Issue on Intelligent Systems and Teleteaching 4 (1999) 19–25
8. Bush, M.: A multiple choice test that rewards partial knowledge. Journal of Further and Higher Education 25(2) (2001) 157–163
9. Butz C.J., Hua S., and Maguire R.B.: A Web-Based Intelligent Tutoring System for Computer Programming. IEEE/WIC/ACM Web Intelligence Conference (2004) 159–165
10. http://www.cc.gatech.edu/faculty/ashwin/papers/er-05-03.pdf
11. Conati C., Gertner A., and VanLehn K.: Using Bayesian networks to manage uncertainty in student modeling. User Modeling and User-Adapted Interaction 12(4) (2002) 371–417
12. Dunkin, M. J., Barnes, J.: Handbook of Research on Teaching, NY (1986)
13. Farnsworth C.: Using computer simulations in problem-based learning. In M. Orey, Proceedings of the Thirty-Fifth ADCIS Conference (1994) 137–140
14. Johnson W.L.: Pedagogical agents for Web-based learning. First Asia-Pacific Conference on Web Intelligence (2001) 43
15. Hron, A. and Friedrich, H.F.: A review of web-based collaborative learning: factors beyond technology. Journal of Computer Assisted Learning 19 (2003) 70–79
16. Koedinger K.R., McLaren B.M. and Roll I.: A help-seeking tutor agent. Proceedings of the Seventh International Conference on Intelligent Tutoring Systems, ITS 2004, Berlin, Springer-Verlag (2004) 227–239
17. Lepper M.R., Woolverton M., Mumme D., and Gurther G.: Motivational techniques of expert human tutors: Lessons for the design of computer-based tutors. In S.P. Lajoie, S.J. Derry (Eds.): Computers as cognitive tools. LEA, Hillsdale, NJ (1993) 75–105
18. Liu C., Zheng L., Ji J., Yang C., Li J., and Yang W.: Electronic homework on the WWW. First Asia-Pacific Conference on Web Intelligence (2001) 540–547
19. Martinez, M.: Adaptive Learning: Research Foundations and Practical Applications. In Stein, S. and Farmer, S. (eds.), Connotative Learning. Washington D.C.: IACET (2004)
20. Martin J. and VanLehn K.: Student assessment using Bayesian nets. International Journal of Human-Computer Studies 42 (1995) 575–591
21. Merceron, A. and Yacef, K.: A Web-based Tutoring Tool with Mining Facilities to Improve Learning and Teaching. Proceedings of the 11th International Conference on Artificial Intelligence in Education. F. Verdejo and U. Hoppe (eds), Sydney, IOS Press (2003)
22. Pecheanu E., Segal C. and Stefanescu D.: Content modeling in Intelligent Instructional Environment. Lecture Notes in Artificial Intelligence, Vol. 3190. Springer-Verlag, Berlin Heidelberg New York (2003) 1229–1234
23. www.cc.gatech.edu/faculty/ashwin/papers/er-05-03.pdf
24. Renkl, A.: Learning from worked-out examples: Instructional explanations supplement self-explanations. Learning and Instruction 12 (2002) 529–556
25. Schworm, S. and Renkl, A.: Learning by solved example problems: Instructional explanations reduce self-explanation activity. In W. D. Gray and C. D. Schunn (Eds.), Proceedings of the 24th Annual Conference of the Cognitive Science Society, Mahwah, NJ: Erlbaum (2002) 816–821
26. Shute, V.J. and Glaser, R.: Large-scale evaluation of an intelligent tutoring system: Smithtown. Interactive Learning Environments 1 (1990) 51–76
27. Wood, D.: Scaffolding, contingent tutoring, and computer-supported learning. International Journal of Artificial Intelligence in Education 12 (2001) 280–292

Sharing Protected Web Resources Using Distributed Role-Based Modeling

Sylvia Encheva (1) and Sharil Tumin (2)

(1) Stord/Haugesund University College, Bjørnsonsg. 45, 5528 Haugesund, Norway, [email protected]
(2) University of Bergen, IT-Dept., P.O. Box 7800, 5020 Bergen, Norway, [email protected]

Abstract. In this paper we propose a model that simplifies distributed role management in cooperating educational organizations by creating group/role relationships to protect Web resources. Organizations share their user and group data with each other through a common communication protocol using XML-RPC. Arranging users into groups and roles makes it easier to grant or deny permissions to many users at once. We argue that our model can be used across organizations because it builds on the group structure, supports independent collaborative administration, and provides a high level of flexibility and usability. Keywords: Collaboration, user authentication and authorization.

1 Introduction

Information security is one of the most difficult problems in managing large networked systems. This is particularly true for educational organizations that are attempting to reduce the complexity and cost of security administration in distributed multimedia environments such as those using World Wide Web services. Computer-based access control can prescribe not only who or what process may have access to a specific system resource, but also the type of access that is permitted [10]. In Role-Based Access Control (RBAC), access decisions are based on an individual's roles and responsibilities within the organization or user base [2], [4], [11]. Most information and communication technology (ICT) based learning systems require user authentication and authorization data to reside locally in their user database. Therefore, educational organizations using such a system have to export their users' data to that system, which involves a complicated data synchronization mechanism. We propose a model that simplifies user management in a large networked system by creating a group for each role. One can add or remove users from roles by managing their membership in the corresponding groups. Users are associated with groups, roles are associated with permissions, and group-role relations provide users with access control and permissions for a resource.


This framework provides a distribution of user-group management and role-resource management. All organizations that are members of such a system share their user and group data with each other through a common communication framework. Separation of duties proves to be of great value in collaborations involving various job-related capabilities where, for example, two roles have been specified as mutually exclusive and cannot both be included in a user's set of authorized roles. Separation of duty requires that, for any particular set of transactions, no single individual is allowed to execute all transactions within that set. Since user management is done on independent sites, it is difficult to guarantee the uniqueness of users across inter-organizational boundaries: a person can be affiliated with many organizations at the same time. This problem is difficult to solve, but it need not be a major issue if conflicts of interest can be resolved in a role-group relationship. The rest of the paper is organized as follows. Related work is listed in Section 2. Basic terms and concepts are presented in Section 3. Collaboration among independent organizations and conflicts of roles are discussed in Section 4. The framework is presented in Section 5 and the system architecture is discussed in Section 6. The paper ends with a description of the system implementation in Section 7 and a conclusion in Section 8.

2 Related Work

A formal model of RBAC is presented in [9]. Permissions in RBAC are associated with roles, and users are made members of appropriate roles, thereby acquiring the roles' permissions. The RBAC model defines three kinds of separation of duties – static, dynamic, and operational. Separation of duties is discussed in [10], [15], and [5]. A framework for modeling the delegation of roles from one user to another is proposed in [3]. A multiple-leveled RBAC model is presented in [7]. The design and implementation of an integrated approach to engineering and enforcing context constraints in RBAC environments is described in [17]. While RBAC provides a formal implementation model, Shibboleth [14] defines standards for implementation, based on the OASIS Security Assertion Markup Language (SAML). Shibboleth defines a standard set of instructions between an identity provider (Origin site) and a service provider (Target site) to facilitate browser single sign-on and attribute exchange. Our work differs from Shibboleth in the implementation model and in user/group/role management. Shibboleth relies heavily on Java and the SAML standards, whereas our model is more open-ended, based on XML-RPC and written in Python. The Origin site manages users and the group memberships of users, while the Target site manages permissions and the role memberships of groups. The Origin site provides procedures, callable from Target sites using XML-RPC, to facilitate authorization on a protected resource. Additional procedures that are needed come into being by mutual agreement between sites.
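As an illustration of such a callable procedure, the sketch below shows an Origin-site membership check exposed over XML-RPC with the Python standard library, and how a Target site might call it. The procedure name, port and membership data are hypothetical, and a real deployment would run the call over HTTPS.

from SimpleXMLRPCServer import SimpleXMLRPCServer   # Python 2 standard library

GROUPS = {"gr@dom1": set(["per@dom1"])}             # toy membership data

def is_member(user, group):
    return user in GROUPS.get(group, set())

def serve():
    server = SimpleXMLRPCServer(("0.0.0.0", 8000))
    server.register_function(is_member, "is_member")
    server.serve_forever()

# On the Target site (service provider), roughly:
#   import xmlrpclib
#   origin = xmlrpclib.ServerProxy("http://dom1.example.org:8000/")
#   if origin.is_member("per@dom1", "gr@dom1"):
#       ...grant the role associated with gr@dom1 on the requested resource...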

3 Basic Terms and Concepts

In this paper a user ϕ is defined as a valid domain identity at a particular organization Γ. A valid domain identity can be a human being, a machine or an intelligent autonomous agent. A group Ω is a set of users {ϕ1, ..., ϕs}, i.e. Ω = {ϕj | ϕj ∈ Γ}. A group is used to help the administration of users; the security settings defined for a group are applied to all members of that group. A role Φ contains a set of groups {Ω1, ..., Ωl} associated with similar duty and authority. User administration is simplified by creating a group for each role: one can add or remove users from roles by managing their membership in the corresponding groups. A resource Υ defines a set of protected Web objects υj, j = 1, ..., m. An action Ψ is a matrix of operations ςi, i = 1, ..., n, on objects υj ∈ Υ, j = 1, ..., m. A permission Λ defines the right of a role Φ to perform an action ΨΦ on a resource Υ. A user ϕ has a role ΦΩ when ϕ ∈ Ω and Ω has the role Φ. An authorization gives a user a set of permissions to execute a set of operations (e.g. read, write, update, copy) on a specific set of resources (e.g. files, directories, programs). An authorization also controls which actions an authenticated user can perform within a Web-based system. All non-zero elements of the matrix ΨΦ define the permissions of a role within a system. An authenticated user who belongs to a group Ω in an organization Γi will have permission to perform actions at another organization Γj if the group Ω defined at Γi is a member of a role at Γj.
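These definitions can be encoded directly. The toy data below (group, role and object names) is purely illustrative and only shows how the group, role and action-matrix chain yields a permission check.

groups = {"gr@dom1": set(["per@dom1"])}               # group -> users
roles = {"examiner": set(["gr@dom1"])}                # role  -> member groups
# Action matrix: permitted operations per (role, object) pair.
actions = {("examiner", "exam.pdf"): set(["read", "grade"])}

def user_roles(user):
    result = set()
    for role, member_groups in roles.items():
        for group in member_groups:
            if user in groups.get(group, set()):
                result.add(role)
    return result

def permitted(user, operation, obj):
    return any(operation in actions.get((role, obj), set())
               for role in user_roles(user))

print(permitted("per@dom1", "read", "exam.pdf"))      # True
print(permitted("per@dom1", "write", "exam.pdf"))     # False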

4 Collaboration Among Independent Organizations

The user-group management in an organization provides a centralized account identity database against which users who belong to that organization authenticate themselves. The management system must also provide a centralized database for the group membership of users, which service providers can query at any time. The role-resource management in another organization (i.e. a service provider) provides a centralized permissions database in which actions are defined. That management system must also provide a database for the role membership of groups. Collaboration among organizations requires that all of them agree on the names of the groups used in user-group and group-role relations: a group name acts as a bridge in the inter-organization authorization mechanism. All users and groups are identified using the domain name of their organization.

4.1 Ranking of Roles

This collaborative management model can be used by a security administrator to enforce a policy of separation of duties. Separation of duties appears to be of great value in collaborations involving various job-related capabilities where, for example, two roles have been specified as mutually exclusive and cannot both be included in a user's set of authorized roles. Separation of duty also requires that, for any particular set of transactions, no single user is allowed to execute all the transactions within the set. A system administrator can control access at a level of abstraction that is natural to the way such enterprises typically conduct business. This is achieved by statically and dynamically regulating users' actions through the establishment and definition of roles, relationships, and constraints. A static separation of duty enforces all mutually exclusive roles at the time an administrator sets up role authorizations, while a dynamic separation of duty enforces the rules at the time a user selects roles for a session. Dynamic separation of duty places constraints on the simultaneous activation of roles: a user can become active in a new role only if the proposed role is not mutually exclusive with any of the roles in which the user is currently active. We propose an XML-RPC communication mechanism for determining a domain user's authentication and authorization (Fig. 1), in which, under collaborative independent management among organizations, the role data in a service provider domain contains references to external group data from client domains.

Fig. 1. Collaborative independent management (users at Org 1, 2 and 3 are assigned to Groups 1–3 under group management; under role management the groups are made members of a role, and the role is given permissions on a resource)

Defining disjoint groups’ permissions is a duty of role managers at service provider domains and assigning users to proper groups is a duty of group managers at service client domains. These managers need to cooperate very closely. Policies and rules governing a resource’s usage must be documented and understood by all parties. The managers at service provider domains have the right and the means to block any user in an event of a conflict. A dynamic separation of duty requires that a user can not hold two conflicting roles in the same session, e.g., an examinee and an examiner of the same subject. A role with less permissions has lower rank than a role with more permissions. A conflict of interests constrain must be checked by the application. An audit track of user assigned roles can be used to expose conflicts. Roles can be ranked in such a way that a higher ranked role also contains all the rights of all lower ranked roles, i.e. (Φ1 < Φ2 < Φ3 ) ⇒ (Λ3 ⊃ Λ2 ⊃ Λ1 )


Fig. 2. User 2 has both a permission on resource 1 and a permission on resource 2

The ranking order of roles on a resource depends on operations (Fig. 2). Thus, e.g., a role with 'read-only' permission is ranked lower than a role with 'read-write' permission. Role managers define the ranking order of roles on a resource and need to work very closely with the service implementers. Furthermore, a role
– is given to a user after authentication,
– defines authorization on a resource,
– defines operational rights and responsibilities of a user on a resource, and
– is a dynamic attribute of a user operating on a resource.

Role conflicts appear when a user simultaneously holds both a higher-ranked role and lower-ranked roles on a resource. In such a case, the user gets the role with the least rank and therefore receives the minimum permission on that resource, as illustrated below:
ϕ ∈ (Ω1 ∩ Ω2 ∩ Ω3) ⇒ Φϕ = Φ1, where (Φ1 < Φ2 < Φ3) ⇒ (Λ3 ⊃ Λ2 ⊃ Λ1).
The service provider administrators have the right to place a user and/or an external group in a 'quarantine group' in relation to a protected resource. Users and groups placed in a quarantine group lose all their rights on the corresponding resource as long as they remain in that quarantine group. When an authenticated user ϕi at an organization Γk wants to access an application at Γl, protected by a role Φ at Γl, the application
– checks that the user ϕi at the organization Γk does not belong to the corresponding quarantine group for the role Φ at Γl,
– consults the corresponding authorization server as to whether the user ϕi at the organization Γk belongs to the group Ωi at Γk,
– checks for a role conflict for that session, and
– checks session constraints.
Three management areas are described below:
User-group management enforces a static separation of duty by defining a set of disjoint groups for conflicting roles. Thus a user cannot be a member of several disjoint groups for conflicting roles.


Group-role management enforces a dynamic separation of duty by defining a set of disjoint roles for a particular resource. A user cannot be granted permission to have more than one role for the same resource.
Role-resource management assigns permissions to roles on resources and adds groups to appropriate roles.
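As an illustration of the rules above, the following Python sketch resolves a user's effective role on a resource; the role names, group names, rank values and quarantine data are hypothetical examples, not part of the paper's system.

# Illustrative sketch of the access decision described above.
# Ranks: a higher number means a higher rank (Phi1 < Phi2 < Phi3).
ROLE_RANK = {"reader": 1, "editor": 2, "examiner": 3}      # hypothetical roles
GROUP_ROLE = {                                              # group -> role at the provider
    "students@dom1": "reader",
    "staff@dom1": "editor",
    "board@dom1": "examiner",
}
MUTUALLY_EXCLUSIVE = {frozenset({"examiner", "examinee"})}  # dynamic separation of duty
QUARANTINE = {"exam-app": {"mallory@dom1"}}                 # per-resource quarantine group

def effective_role(user, user_groups, resource, active_roles):
    """Return the role the user gets on `resource`, or None if access is blocked."""
    if user in QUARANTINE.get(resource, set()):
        return None                                          # quarantined users lose all rights
    candidates = {GROUP_ROLE[g] for g in user_groups if g in GROUP_ROLE}
    # dynamic separation of duty: reject roles conflicting with currently active ones
    candidates = {r for r in candidates
                  if not any(frozenset({r, a}) in MUTUALLY_EXCLUSIVE for a in active_roles)}
    if not candidates:
        return None
    # on a role conflict the user receives the least-ranked role (minimum permission)
    return min(candidates, key=lambda r: ROLE_RANK[r])

print(effective_role("alice@dom1", {"students@dom1", "staff@dom1"}, "exam-app", set()))
# -> 'reader' (the lower-ranked of the two candidate roles)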

5

Framework

Suppose users at domain dom1 are clients of services provided by domain dom2. User per@dom1, who belongs to group gr@dom1 in dom1, will have permission to use a resource at dom2 if there is a role rol@dom2 in which gr@dom1 is a member. Managers at dom1 manage a central users and groups database. A person should have one and only one domain identity. User and group management should be done centrally at an ICT center, while group membership management may be done by local department managers. To participate in this framework, dom1 must have a domain authentication server on which the domain users can authenticate, and it must provide a single-sign-on mechanism for a user to access published services. We propose a simple Web-form authentication mechanism using a user identifier, password, session number, and cookies. A system at dom1 must also provide an authorization server from which other domains can access the authenticated user's valid session number and group memberships on demand. Any authenticated user can access any of the shared services, defined by group-role relations at other domains, provided that he/she belongs to the proper groups at his/her home domain. Since user management is done on independent sites, it is difficult to guarantee the uniqueness of users across inter-organizational boundaries. A person can be affiliated with many organizations at the same time. This problem is difficult to solve and may not be a major issue if conflicts of interest can be resolved in a group-role relation.
Managers at dom2 manage a central roles and resources database. The relationships among operations and resources are also managed centrally and are defined in the central database. Resource names and data are provided by a service provider at the local departmental level. The central ICT managers manage role membership of domain groups and role permissions on resources. To participate in this framework, dom2 must provide a portal service through which all services published by this domain can be reached. This portal accepts authenticated domain users applying the agreed single-sign-on mechanism from its own domain and other participating domains. Once the portal at dom2 accepts a user, the user can access any service accessible by the user's domain group, as defined by the corresponding role membership of the group. Each time an authenticated domain user wants to access a protected resource, the dom2 portal needs to query the domain user's membership


of a particular domain group at dom1, depending on the domain group's role membership for that particular resource. We propose an XML-RPC communication mechanism for determining domain user authentication and authorization where, under collaborative independent management among organizations, group management assigns users ϕij membership of the group Ωi at the organization Γj, and role management assigns a group membership to a role Φ. Role data in a service provider domain contains references to external group data from client domains. A permission Λ(m)dom2 defines a right of the role Φ(m)dom2 on a resource Υ(m)dom2. An unauthenticated user who belongs to Ω(m)dom1 (for example) must first be authenticated (login) at his/her home domain (dom1), where his/her identity is defined. After a successful login the user will have a unique session identification (PSID). The service provider (dom2) will have the user's PSID and home domain name as the current active session identifier for that particular user. When the user tries to access the resource Υ(m)dom2 with permission Λ(m)dom2, the service provider (dom2) connects to the client's authorization service port at the user's home domain (dom1) to check the user's membership of the group Ω(m)dom1, using the PSID as a parameter.

6

System Architecture

The framework is composed of four main components: Web server, communication module, database, and Web resources.
Web server. Web servers provide the user interface to the system. A user is authenticated when a correct credential (user identifier, password) is given in a sign-on Web form. A Web server sets and reads the client's (Web browser) cookies. Authorized users are presented with the protected Web resources.

(Figure: the user's browser holds a PSID cookie and talks over HTTPS to the RPDS in the resource domain, which stores resources and sessions; logon and authentication redirects go to the UDAS in the users domain, which stores users, groups, and sessions; the RPDS obtains ASIDs and queries the UDWS over XML-RPC for the user ID and group membership.)
Fig. 3. System architecture


Communication module. The module provides peer-to-peer, request/response communication between servers. A resource domain server can send a request to a user domain server for a user's authorization data (user, group membership). A resource domain acts as a client to a user domain for users' data. Users' credentials never leave the user domain server.
Database. Both domain servers are supported by a relational database management system (DBMS). Users' credentials, group memberships, and sign-on session keys are stored in the user domain's database. Group and role relationships are stored in the resource domain's database. The resource domain also keeps track of users' sessions, profiles, and status in its database.
Web resources. The resource domain server provides authorized users with access to protected Web resources. Server-side scripts are responsible for enforcing resource protection policies and providing users with dynamic HTML pages.
Let us denote a domain server that provides Web resources by RPDS, and a domain server that provides user authentication/authorization by UDAS (Fig. 3). In a collaborative user/resource framework, an RPDS provides a Web portal entry point for all resources and applications offered to Web clients. A UDAS maintains a domain users and groups database and provides a Web-based single sign-on application for domain users, as well as Web services for user authentication and authorization based on XML-RPC.
An RPDS assigns a Web browser a unique portal session identifier (PSID) the first time an unauthenticated user tries to access a protected Web resource controlled by the RPDS. This generated PSID is used as a cookie name with an empty value and is saved in the Web browser's cookies by the RPDS's session controller. The user is then asked to provide his/her home domain name for authentication. The PSID, the Web browser's IP address (BIP), and the home domain name are saved in a session database in the RPDS. The PSID is associated with the BIP for the session by the RPDS and is not yet associated with a valid user.
RPDS - session:(PSID, BIP) → (none); cookie(PSID) = None

The RPDS then redirects the user's browser to his/her home domain's UDAS. The PSID, the RPDS's IP address (RIP), and the RPDS's URI are sent as "GET"-request parameters in the redirection. A logon Web form is then displayed in the user's Web browser by the UDAS. The user is authenticated if he/she provides a valid credential (user identifier (ID), password). The UDAS then creates a unique authentication session identifier (ASID) associated with the session {PSID, RIP, BIP, ID}-quadruplet. The {PSID, RIP, BIP, ID}-quadruplet and the {ASID, timestamp}-pair are saved in the session database in the UDAS. The PSID is then


associated with an authenticated ID for the session by the UDAS. The session is valid as long as the session's quadruplet exists in the UDAS's session database.
UDAS - (ASID, timestamp) → session:(PSID, RIP, BIP, ID)

The UDAS redirects the user's browser back to the RPDS. The ASID is sent as a "GET"-request parameter in the redirection. Using the given ASID as a parameter, the RPDS makes an XML-RPC call to an authentication service at the user's home domain (UDWS). If the ASID is still valid, the ID associated with the ASID is returned. The PSID is then associated with an authenticated user ID at the RPDS. A user is authenticated if both a session cookie PSID and a session entry exist for a particular RPDS.
RPDS - session:(PSID, BIP) → (ID); cookie(PSID) = ID
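A minimal Python sketch of the RPDS side of this exchange is given below, using the standard xmlrpc.client module; the UDWS endpoint URL and the remote method name resolve_asid are assumptions made for illustration rather than an interface defined by the paper.

import xmlrpc.client

# Hypothetical UDWS endpoint at the user's home domain (dom1).
UDWS_URL = "http://udws.dom1.example/xmlrpc"

def validate_returning_user(asid, psid, browser_ip, session_db):
    """Resolve a one-time ASID into a user ID and bind it to the local session.

    `session_db` maps (psid, browser_ip) -> user id (None while unauthenticated).
    """
    udws = xmlrpc.client.ServerProxy(UDWS_URL)
    user_id = udws.resolve_asid(asid)          # assumed remote method; returns ID or ""
    if not user_id:
        return None                            # expired or already-used ASID
    if (psid, browser_ip) not in session_db:
        return None                            # no pending session for this browser
    session_db[(psid, browser_ip)] = user_id   # session:(PSID, BIP) -> (ID)
    return user_id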

An ASID is a short-term, one-time session identifier (i.e. it has a limited period of validity and can be used only once) and is used between an RPDS and a UDAS for the initial session control. At any time, the RPDS can obtain a new ASID by calling the UDWS with a valid PSID, RIP, and BIP as parameters. The newly obtained ASID can be used, for example, to check a user's authorization from UDAS group data. One RPDS (RPDSa) can transfer its session to another RPDS (RPDSb) by obtaining a new ASID from the UDAS. The RPDSa first requests an ASID for RPDSb by making an XML-RPC call to the UDWS with PSID, RIPa, BIP, and RIPb as call parameters, where RIPa is the IP address of RPDSa and RIPb is the IP address of RPDSb. The UDAS checks for a valid {PSID, RIPa, BIP, *}-quadruplet. If such a session exists, a new session quadruplet for RIPb and an associated ASID are created. The UDAS sends the ASID as a response back to the RPDSa. The RPDSa then redirects the user's browser to RPDSb with the ASID as a "GET"-request parameter. The RPDSb makes an XML-RPC call to the UDWS with the ASID as a parameter. If the ASID is still valid, the {PSID, BIP, ID}-triplet is sent back as a response. The RPDSb checks the browser's IP address against the BIP and, if they match, sets the PSID as the browser's session cookie and saves the triplet in its session database.
UDAS - (ASID, timestamp) → session:(PSID, RIPb, BIP, ID)
RPDSb - session:(PSID, BIP) → (ID)

The architecture allows two types of sign-off, local and global. At the local level, a user is logged off from a specific RPDS when the RPDS disables the user's session cookie PSID and deletes its local session (PSID, BIP) from its database. The RPDS sends an XML-RPC call to the UDWS with the PSID, RIP, and BIP as parameters, and the UDAS deletes the session quadruplet from its database. The user is then invalid for that RPDS. Global sign-off signifies the inability of the UDAS to provide new ASIDs for a particular {PSID, RIP, BIP}-triplet. When an RPDS requests a global user sign-off from a UDAS, the UDAS performs the following sequence of actions:


1. Select all session quadruplets containing the PSID, BIP, and ID. Make an XML-RPC call to the RPDS for each RIP in the selection; each RPDS deletes its local session (PSID, BIP) from its database.
2. Delete each session listed in the selection from the database.
Using a short-term ASID and a long-term PSID, a simple security mechanism for a user's session can be maintained between RPDSs and UDASs. The PSID is bound to a BIP on the RPDS domain side and to the ID, after a valid user logon, on the UDAS domain side. The two initial URI redirections (logon redirect and response redirect) are done over HTTPS. Through a peer-to-peer XML-RPC call over HTTP, an ASID is used to map the validity of an ID for user authentication and authorization. All UDASs maintain a list of valid RPDSs that are allowed to make XML-RPC calls to them. A user never discloses his/her user identifier and password in a foreign domain. Both HTTPS and XML-RPC would have to be compromised for a rogue user or application to gain illegal access to resources. The risk of illegal access is minimized by using two calls: the first to obtain the short-term one-time ASID associated with {PSID, BIP}, and the second to obtain the user authentication and authorization associated with the ASID.
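The UDAS side of the global sign-off can be sketched as follows; the in-memory session list and the RPDS method drop_session are illustrative assumptions standing in for the session database and the RPDS interface.

import xmlrpc.client

def global_signoff(psid, bip, user_id, sessions):
    """Invalidate every RPDS session bound to (PSID, BIP, ID).

    `sessions` is a list of dicts with keys psid, rip, bip, id, rpds_url,
    a simplified stand-in for the UDAS session database.
    """
    matching = [s for s in sessions
                if s["psid"] == psid and s["bip"] == bip and s["id"] == user_id]
    for s in matching:
        rpds = xmlrpc.client.ServerProxy(s["rpds_url"])
        rpds.drop_session(psid, bip)        # assumed method: RPDS deletes its local session
    # finally remove the matching quadruplets from the UDAS database
    sessions[:] = [s for s in sessions if s not in matching]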

7

System Implementation

We propose the use of open source software components on a Linux server for the implementation of the framework. The required supporting systems are:
– Apache Web server with modules for:
  • Secure Socket Layer (SSL), optional,
  • basic authentication,
  • mod_python, the Python module.
– A database: the PostgreSQL RDBMS as the back-end database system.
– An XML-RPC server: Python and the xmlrpclib module for Python.
– Server-side scripts: Python programs for producing dynamic HTML pages, the XML-RPC client, database integration, etc.
To support SSL, the openssl component must be installed on the server. The HTTPS protocol using SSL is expensive, but its deployment is needed whenever secure transmission is required, e.g. for on-line exams. All static Web resources are protected by the HTTP basic authentication protocol, defined by a .htaccess file in each directory. Normally users need not bother with the basic authentication logon, since requests to such resources are made automatically by server scripts for all authenticated users. Users with valid sessions are logged on to a protected resource by server scripts according to their roles. All authenticated users have a 'user' role initially and are logged on to a protected resource as 'user'. We define a minimum set of .htaccess users by using htpasswd; this minimum set contains the (user, student, instructor, admin) users. Implementers can create additional .htaccess users if needed. The .htaccess files use the 'Require user' directive to control access to resources.


All dynamic Web pages are created by server scripts that have access to clients' session cookies and the session control program. Server-side scripts functioning as Web applications are URI-addressable from clients, just like static Web pages. All protected Web applications must check for a valid session by comparing the client's session cookie with the session data stored in the database before the clients are presented with the applications. Additional protection can be enforced by using the HTTP basic authentication protocol on critical applications. Basic authentication should not be considered secure, since the username and password are passed in the request header from the client to the server as a base64-encoded string; a simple base64 decoding program can extract the username and password that have been used. Basic authentication over HTTPS, however, is secure, since the whole transmission is encrypted. The PostgreSQL RDBMS is used to store all operational data. An SQL-based database is chosen instead of a hash-based database because SQL makes data management easy. The Python database integration module must be installed so that the server scripts can make SQL queries programmatically. Users' credentials, i.e. the (username, password) pair, are taken from an external data source (the preferred way) if possible. The authentication module uses the method specified in the Users:PWD and Users:KEY data. LDAP authentication is used (the preferred way) if Users:PWD contains 'LDAP', in which case Users:KEY contains the LDAP server name or IP address. It is also possible to define a user locally: local users have Users:PWD = 'local', and in these cases Users:KEY contains the encrypted user password. All usernames in Users:ID must be unique.
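The weakness of Basic authentication over plain HTTP can be demonstrated directly: the credential in the Authorization header is merely base64-encoded, as the short Python example below shows (the username and password are of course made up).

import base64

# An Authorization header as sent by a browser for user "student", password "secret".
header_value = "Basic " + base64.b64encode(b"student:secret").decode("ascii")

# Anyone who can read the request can recover the credential:
encoded = header_value.split(" ", 1)[1]
username, password = base64.b64decode(encoded).decode("ascii").split(":", 1)
print(username, password)   # -> student secret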

8

Conclusion and Future Work

This collaborative management model can be used by security administrators to enforce a policy of separation of duties. Since user management is done on independent sites, it is difficult to guarantee the uniqueness of users across inter-organizational boundaries. A person may be affiliated with many organizations at the same time. This problem is difficult to solve and may not be a major issue if conflicts of interest can be resolved in a role-group relationship. A split management of users, groups, roles, and rights is proposed as a possible solution. Furthermore, the risk of illegal access is minimized by using two calls, the first to obtain the short-term one-time authentication session identifier, associated with the {unique session identification, Web browser's IP address} pair, and the second to obtain the user authentication and authorization associated with the authentication session identifier. An important part of this model is the special groups called 'quarantine groups'. Each quarantine group is related to a particular protected resource. Users and groups placed in a quarantine group lose all their rights on the corresponding resource as long as they remain in that quarantine group. The work introduced here represents early stages towards the safe use of protected Web resources among independent educational organizations working


in collaboration. A future goal is to develop a complete model, based on the proposed framework.


Concept Map Model for Web Ontology Exploration
Yuxin Mao, Zhaohui Wu, Huajun Chen, and Xiaoqing Zheng
Research Center for Grid Computing, College of Computer Science, Zhejiang University, Hangzhou 310027, China
{maoyx, wzh, huajunsir, zxqingcn}@zju.edu.cn

Abstract. The emerging Semantic Web has facilitated the incorporation of various large-scale on-line ontologies and semantic-based applications; however, due to their complex nature, Web ontologies are far from being a commodity, and user exploration runs into a number of difficulties. Concept maps, which provide visual languages for organizing and representing knowledge, may complement Web ontologies in several aspects to support efficient knowledge reuse. In this paper we propose, on top of Semantic Web technology, an interactive visual model that extends Web ontologies with concept maps for exploring Web ontologies. We also propose several algorithms, from the point of view of graph theory, to analyze and uncover some underlying characteristics and to enhance Web ontology exploration in some novel aspects.

1 Introduction
As the foundation of the Semantic Web [1], ontologies were initially defined as an explicit specification of a conceptualization [2]. The use of ontologies for the explication of implicit and hidden knowledge is a possible approach to describing the semantics of information sources with explicit content and overcoming the problem of semantic heterogeneity of information sources on the Web. The recent advent of the Semantic Web has facilitated the incorporation of various large-scale on-line ontologies in different disciplines and application domains; e.g. UMLS1 includes 975,354 concepts and 2.6 million concept names in its source vocabularies, and the Gene Ontology2 includes about 17,632 terms, 93.4% with definitions. No doubt such large-scale ontologies will play a critical role in a large variety of semantic-based applications; however, many new challenges also arise for Web ontologies. Web ontologies are, due to their complex nature, far from being a commodity and require more support to assist users in utilizing domain knowledge efficiently. a) For a large domain ontology like the Gene Ontology, problem-solving may only require particular aspects of the whole ontology. It will not be easy for users to locate the required portions of knowledge when exploring large-scale domain ontologies. This calls for the ability to extract meaningful portions of a large ontology during exploration. b) According to the Web ontology model, each concept and each relation in a Web ontology can be a first-class object by having a URI, or everything is a thing [3]. When

1 http://umlsks.nlm.nih.gov/
2 http://www.geneontology.org/


we visualize Web ontologies as graphs, everything, including relations, is represented as a node. If a Web ontology is large-scale, the corresponding graphs become very complex and it is not intuitive to grasp the relations between concepts during exploration. c) The hierarchical structure in Web ontologies is organized by the inheritance relation between super-concepts and sub-concepts. For large-scale ontologies with multidimensional relations, more types of information views than the inheritance tree are required to organize concepts and relations for more efficient knowledge exploration. Alternatively, the concept map [4], which has been used as a powerful assistant tool for human beings to learn and create knowledge meaningfully, provides a visual language for organizing and representing knowledge of different disciplines and application domains. A remarkable characteristic of concept maps is that the concepts are represented in a hierarchical fashion. We can therefore take advantage of concept maps to complement the existing model of Web ontologies and propose an approach for analyzing, reorganizing and extending large-scale Web ontologies. In this paper, we analyze the structure of ontologies from the point of view of graph theory and propose a visual interactive model on top of concept map theory. Our work is complementary to the Web ontology model in that we analyze the semantic structure of Web ontologies from a new perspective to uncover some underlying characteristics, which may be helpful to Web ontology exploration. As OWL3 has become the de facto standard for Web ontologies, we mainly use the terms ontology and Web ontology to refer to OWL ontologies in this paper. The remainder of the paper is organized as follows. We first give an overview of concept maps and their applications in Section 2. We then explain some preliminary terms and definitions and describe a concept map model for Web ontologies, together with the main algorithms of the model, in Section 3. Section 4 describes a use case of the model on a real-life domain ontology. Some related work is presented in Section 5. Finally, a brief conclusion and future research directions are presented in Section 6.

2 Concept Map
Concept maps, which provide a visual representation of knowledge structures and argument forms, have been widely used in many different disciplines and application domains. They provide a complementary alternative to natural language as a means of communicating knowledge in different domains. Concept mapping is the strategy employed to develop a concept map. The concept mapping technique was developed by Novak and Gowin [5] based on the theories of Ausubel [6]. Concept maps have been defined in a variety of ways using a wide range of names. Generally speaking, a concept map is a special form of network diagram for exploring knowledge and for gathering and sharing information, and it consists of nodes or cells that contain a concept, item or question, and links. The links are labeled and denote direction with an arrow symbol; the labeled links explain the relations between the nodes and the arrow describes the direction of the relation, which reads like a sentence (see figure 2). Concept maps are used most frequently in education as tools facilitating meaningful learning; however, the characteristics of concept maps also make them suitable for Web

3 http://www.w3.org/2001/sw/WebOnt/


applications. Web Brain4 offers a graphical search engine and converts the users' searching behavior into exploring a concept map. The Plumb Design Visual Thesaurus5 is concept map software with an intuitive interface that encourages word exploration and English learning. KMap [7] is an open-architecture concept-mapping tool allowing different forms of concept map to be defined and supporting integration with other applications. Hall and Stocks [8] proposed a multi-level hypermap interface for the display of World Wide Web pages relevant to an undergraduate class. Gram and Muldoon [9] compared a concept map-based interface to a Web page-based interface in terms of accurately finding relevant information, and the results indicated that the concept map-based interface resulted in better search performance.

3 Concept Map Modeling for Web Ontologies
Concept mapping takes advantage of the remarkable capabilities of diagram-based information representation and embodies rich semantics of knowledge, so we can adopt the concept map as a modeling method for analyzing, reorganizing and extending Web ontologies to support efficient ontology exploration. In addition, we expect to analyze large-scale Web ontologies from the graph-theory perspective to uncover some of their underlying characteristics. We will use the BioPAX ontology (see figure 1) for illustration.

Fig. 1. The OWL ontology of BioPAX

3.1 Preliminaries
If we want to construct concept maps based on Web ontologies, the first thing to do is to establish a bridge from ontologies to maps. There is a direct mapping from OWL ontologies to concept maps:

Table 1. Mapping between OWL ontology elements and concept map units

OWL ontology elements   Concept map units    Graphical types
Class                   Concept              Node
Property                Relation             Arc
Domain class            Concept              Source node
Range class             Concept              Target node
Instance                Specific examples    Node
Statement               Proposition          Node

4 http://www.webbrain.com/
5 http://www.visualthesaurus.com/


Definition 1 (Semantic Map). A semantic map M is the concept map representation of a Web ontology and consists of a triple (O_M, N_M, A_M), where O_M is a link to the Web ontology for M, N_M is a set of nodes for concepts, and A_M is a set of directed arcs for relations. Each node v is associated with a concept c, and in semantic maps a concept is always linked to several other concepts by arcs specifying the relations between two concepts. Each arc e is associated with one or more relations; the source of e denotes the domain of a relation and the target denotes the range. If there is more than one relation between two concepts, we associate the arc with a relation weight rw to express how many relations it represents. All weights in semantic maps are nonnegative. Similarly to graph theory, a semantic map, as a semantic-based diagram, also has some invariants, based on which we can analyze it.

Definition 2 (Semantic Degree). The semantic degree of a node v in a semantic map is the cardinality of the set of relations related to the concept c associated with v. The semantic out-degree of v is the number of relations that take c as one of their domain concepts, denoted by d+(v). Similarly, the semantic in-degree of v is the number of relations that take c as one of their range concepts, denoted by d−(v). As we can see, the semantic out-degree (in-degree) is not always equal to the number of out-arcs (in-arcs) and may be larger if there is more than one relation between two concepts. The semantic out-degree (in-degree) of v at a specific relation p related to c is denoted d+(v/p) (d−(v/p)). For example, let ps be the ontology relation "rdfs:subClassOf"; then the number of sub-concepts of c is equal to d−(v/ps).

An important step in constructing a good concept map is to identify the key concepts that apply to the application domain and list them in a rank order, established from the most general, most inclusive concept for the particular problem or situation down to the most specific, least general concept. Although this rank order may always be approximate, it helps to identify the relative importance of concepts when exploring a large-scale Web ontology.

Definition 3 (Semantic Weight). The semantic weight (sw) is used to indicate the rank order of concepts in a semantic map. The set of neighbors of a node v is denoted by N_M(v), or briefly by N(v). The sw value of a node v is always affected by its adjacent nodes and can be computed by the following equation:

sw′(v) = sw(v) + sw(vi) + β,   vi ∈ N(v).   (1)

Above is a dynamic equation for computing sw: the sw of v is computed from its original sw and its neighbor's sw. If v has more than one neighbor, sw′(v) is a temporary value and the sw of v will be computed several times. The affecting parameter


β indicates the relation between v and vi. We assign rw to β if {v, vi} ∈ A_M, in which case vi is considered to have a positive effect on v, and we assign −rw to β if {vi, v} ∈ A_M, in which case vi is considered to have a negative effect on v. We give the algorithm for computing the overall and permanent sws of a semantic map later in the paper.

3.2 Semantic-Based Graph Construction

In this section we propose a simple algorithm to construct a semantic-based graph for a collection of concepts. The underlying structure of a Web ontology is a set of triples, the so-called graph data model, which can be illustrated by a node and directed-arc graph. Each triple is represented as a node-arc-node link in the graph. For a selected collection of concepts from a Web ontology, we can dynamically construct a graph based on semantic links.

Algorithm 1. Constructing a semantic-based graph for a collection of Web ontology concepts

SET Counter to 0
FOR each Concept in Collection
    CREATE Node for Concept in Graph
END LOOP
FOR each Node in Graph
    GET Concept from Node
    GET all relations ranging over Concept into Iterator
    FOR each Relation in Iterator
        GET all domain concepts into DomainIterator
        FOR each Item in DomainIterator
            IF Item is-in Collection
                GET ItemNode for Item from Graph
                FOR each Arc in OutArcList of Node
                    IF ItemNode is the source of Arc
                        INCREMENT Weight of Arc
                    END IF
                    INCREMENT Counter
                END LOOP
                IF Counter equals OutArcList length
                    CREATE Arc for Relation in Graph
                    ADD Arc to OutArcList of Node
                END IF
                CLEAR Counter
                FOR each Arc in InArcList of ItemNode
                    IF Node is the source of Arc
                        INCREMENT Weight of Arc
                    END IF
                    INCREMENT Counter
                END LOOP
                IF Counter equals InArcList length
                    CREATE Arc for Relation in Graph
                    ADD Node to InArcList of ItemNode
                END IF
            END IF
        END LOOP
    END LOOP
END LOOP

In this algorithm, each node maintains two lists: an out-arc list and an in-arc list. The former preserves arcs of relations ranging over the node concept and the latter


preserves arcs of relations taking the node concept as domain. The algorithm distinguishes relations according to whether they relate individuals to individuals (object relations) or individuals to datatypes (datatype relations). We treat datatype relations as intrinsic attributes of nodes rather than as relating arcs. Thus, a directed graph is constructed based on Web ontology semantics.

Definition 4 (Semantic Graph). A semantic graph Δ is a directed graph based on a selected collection of concepts from a Web ontology. A semantic graph holds the basic attributes of a semantic map, and every semantic map is a semantic graph.

For a collection of concepts (control, entity, interaction, physicalEntityParticipant, xref, dataSource, openControlledVocabulary, catalysis, conversion, physicalEntity and complex) from the BioPAX ontology, we can dynamically construct a semantic graph (see figure 2). Relations are denoted by arcs in the graph, and the number of labels on an arc indicates its weight. Arcs with dotted lines denote external relations that link concepts within the collection to concepts outside it (e.g. the concept bioSource is not in the collection).

For a collection of concepts (control, entity, interaction, physicalEntityParticipant, xref, dataSource, openControlledVocabulary, catalysis, conversion, physicalEntity and complex) from the BioPAX ontology, we can dynamically construct a semantic graph (see figure 2). Relations are denoted by arcs in the graph and the number of labels on the arc indicates arc weights. Arcs with dotted line represent denote external relations that link concepts within the collection with concepts outside (as the concept bioSource is not in the collection).

(Figure: a node-and-arc graph over the selected BioPAX concepts, with arcs labeled by relations such as PARTICIPANTS, CONTROLLER, CONTROLLED, COMPONENTS, LEFT, RIGHT, COFACTOR, CELLULAR-LOCATION, ORGANISM, XREF and DATA-SOURCE.)
Fig. 2. A semantic graph for the BioPAX ontology

During graph construction, concepts linked to only one other concept, forming a linked list, are dead ends to be avoided. The relations among concepts should be clear, descriptive, and valid. According to Algorithm 1, each arc in a semantic graph is stored twice: once in the in-arc list of the target node and once in the out-arc list of the source node. However, this redundancy is useful for computing semantic weights, as we will see later.
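A compact Python sketch in the spirit of Algorithm 1 is shown below. It builds the weighted arc lists from (domain, relation, range) triples restricted to a selected concept collection; the triple format and the relation names in the usage example are illustrative assumptions, not the paper's implementation.

from collections import defaultdict

def build_semantic_graph(concepts, object_triples):
    """Build arc weights rw between the selected concepts.

    `object_triples` is an iterable of (domain_concept, relation, range_concept)
    for object relations only; datatype relations are treated as node attributes
    and therefore skipped here.
    """
    concepts = set(concepts)
    out_arcs = defaultdict(lambda: defaultdict(int))   # source node -> target node -> rw
    in_arcs = defaultdict(lambda: defaultdict(int))    # target node -> source node -> rw
    for domain, _relation, range_ in object_triples:
        if domain in concepts and range_ in concepts:
            out_arcs[domain][range_] += 1              # more than one relation raises rw
            in_arcs[range_][domain] += 1
    return out_arcs, in_arcs

# Tiny usage example with two BioPAX-like concepts (relation names are illustrative):
out_arcs, in_arcs = build_semantic_graph(
    {"control", "physicalEntityParticipant"},
    [("control", "PARTICIPANTS", "physicalEntityParticipant"),
     ("control", "CONTROLLER", "physicalEntityParticipant")])
print(out_arcs["control"]["physicalEntityParticipant"])   # -> 2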

3.3 Semantic Weight Propagation
An important step in constructing a good concept map is to identify the key concepts that apply to the application domain and list them in a rank order, which is established


from the most general, most inclusive concept, for the particular problem or situation, to the most specific, least general concept. This rank order can be determined by domain experts, but that could imply subjective selection and filtration, so we propose an automatic approach to determining the rank order based on semantic weights. As we have mentioned before, the sw computed by equation (1) is just a temporary value: the sw of a node v is affected by its neighbors, and as long as there are still nodes that have not affected their adjacent nodes, or been affected by them, the overall sws of a semantic graph are not stable. The following algorithm can be used to compute the overall semantic weights for a semantic graph.

Algorithm 2. Computing overall semantic weights for a semantic graph based on BFS

FOR each Node in SemanticGraph
    SET State to Active
    SET SW to 0
END LOOP
CLEAR Queue
PUT StartingNode into Queue
WHILE Queue is-not empty
    GET Node out of Queue
    FOR each Arc in OutArcList of Node
        GET TargetNode of Arc
        IF TargetNode is-in PreNodeList of Node
            Do nothing
        ELSE
            COMPUTE SW of TargetNode by equation (1)
            ADD Node to PreNodeList of TargetNode
        END IF
        IF TargetNode is-not-in Queue and is Active
            PUT TargetNode into Queue
        END IF
    END LOOP
    FOR each Arc in InArcList of Node
        GET SourceNode of Arc
        IF SourceNode is-in PreNodeList of Node
            Do nothing
        ELSE
            COMPUTE SW of SourceNode by equation (1)
            ADD Node to PreNodeList of SourceNode
        END IF
        IF SourceNode is-not-in Queue and is Active
            PUT SourceNode into Queue
        END IF
    END LOOP
    SET State of Node to Inactive
END LOOP

Definition 5 (Prenode). If u is an adjacent node of v and the semantic weight of v has been affected by u during semantic weight propagation, then u is a prenode of v. If u affects v, then u is added to v's prenode list, and if u is affected by v, then v is added to u's prenode list. In equation (1), vi is one of v's prenodes.

As each node maintains an out-arc list and an in-arc list, we can compute the semantic weight of any node adjacent to the focused node, regardless of the direction of the arc, so computing semantic weights is efficient.
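A simplified Python reading of Algorithm 2 and equation (1) is given below, reusing the out_arcs/in_arcs maps from the construction sketch above; the Active-flag bookkeeping is condensed into a visited set, so this should be read as an illustration of the propagation rule rather than a faithful transcription.

from collections import deque

def propagate_weights(nodes, out_arcs, in_arcs, start):
    """BFS propagation of semantic weights following equation (1).

    A neighbour reached through an out-arc of the focused node (focused node is
    its prenode pointing at it) receives -rw; a neighbour reached through an
    in-arc (it points at the focused node) receives +rw. Each pair interacts once.
    """
    sw = {v: 0 for v in nodes}
    prenodes = {v: set() for v in nodes}     # neighbours that already affected v
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for target, rw in out_arcs.get(node, {}).items():
            if node not in prenodes[target]:
                sw[target] += sw[node] - rw  # arc node -> target: negative effect on target
                prenodes[target].add(node)
            if target not in visited:
                visited.add(target)
                queue.append(target)
        for source, rw in in_arcs.get(node, {}).items():
            if node not in prenodes[source]:
                sw[source] += sw[node] + rw  # arc source -> node: positive effect on source
                prenodes[source].add(node)
            if source not in visited:
                visited.add(source)
                queue.append(source)
    return sw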


Table 2. The semantic weights for the semantic graph in figure 2 by BFS

Concept                                        SW
physicalEntityParticipant (starting concept)    0
entity                                          0
interaction                                     1
control                                         2
xref                                           -2
dataSource                                     -1
openControlledVocabulary                       -1
catalysis                                       1
conversion                                      2
physicalEntity                                 -1
complex                                         1

We can choose physicalEntityParticipant as the starting concept; the final semantic weights for the semantic graph in figure 2, computed by Algorithm 2, are shown in Table 2. As we can see, control and conversion get the highest semantic weight (2) because both of them have two relations ranging over physicalEntityParticipant, and xref gets the lowest semantic weight (-2) because there is a distance of two out-arcs between physicalEntityParticipant and xref. Thus a rank order is established over the portions of the ontology, with the general concepts on top and the specific concepts at the bottom. The process of computing semantic weights for a semantic graph is a dynamic propagation of semantic relations between concepts. The propagation of semantic weights is similar to the problem of graph traversal: visit each node following the semantic structure of the semantic graph. The propagation can therefore be performed based on various strategies: Algorithm 2 is based on breadth-first search, and we can rewrite the algorithm based on depth-first search (Algorithm 3).

Algorithm 3. Computing overall semantic weights for a semantic graph based on DFS

FOR each Node in SemanticGraph
    SET State to Active
    SET SW to 0
END LOOP
(Beginning of ComputeDFS)
SET Node to currentNode
FOR each Arc in OutArcList of Node
    GET TargetNode of Arc
    IF TargetNode is-in PreNodeList of Node
        Do nothing
    ELSE
        COMPUTE SW of TargetNode by equation (1)
        ADD Node to PreNodeList of TargetNode
    END IF
END LOOP
FOR each Arc in InArcList of Node
    GET SourceNode of Arc
    IF SourceNode is-in PreNodeList of Node
        Do nothing
    ELSE
        COMPUTE SW of SourceNode by equation (1)
        ADD Node to PreNodeList of SourceNode
    END IF
END LOOP
FOR each Arc in OutArcList of Node
    GET TargetNode of Arc
    IF TargetNode is Active
        CALL ComputeDFS with TargetNode and SemanticGraph (recursive call)
    END IF
END LOOP
FOR each Arc in InArcList of Node
    GET SourceNode of Arc
    IF SourceNode is Active
        CALL ComputeDFS with SourceNode and SemanticGraph (recursive call)
    END IF
END LOOP
SET State of Node to Inactive

If each node of a semantic graph Δ is labeled with a fixed semantic weight, then Δ is a stable semantic graph. A stable semantic graph is a semantic map, or a semantic graph evolves to a semantic map through the propagation of semantic weight computation. Although this rank order may be always approximate, it helps to identify the relative importance of concepts in a large-scale Web ontology. We can reorganize the structure of the portion of ontology knowledge based on the relative order of the semantic weights for users’ exploration, with the most relevant portions of the ontology with higher semantic weights on top. Users may locate most important and relevant knowledge quickly and efficiently in exploration. Besides, we can construct semantic graphs for different portions of the ontology to support specific explorations. 3.4 Concept Hierarchy Trees

The concept hierarchy in Web ontologies is usually organized by the sub-concept/super-concept inheritance relation. However, for large-scale ontologies with multidimensional relations it is insufficient to organize concepts based on inheritance alone, and information views other than the inheritance hierarchy tree are required for efficient knowledge exploration: in a hierarchy tree based on inheritance, a concept being near the top does not mean it is important for exploration, and for some portions of an ontology there is no inheritance relation between any two concepts or individuals. For example, there are many kinds of therapies in the Traditional Chinese Medicine (TCM) ontology, such as Massage, Qigong and Physical Breathing Exercises, but there is no inheritance relation between any two therapies. We may then want to construct a specific hierarchy tree about the various therapies to view their intrinsic relations (e.g. with the therapies most effective at curing a kind of disease on top). The relations between ontology concepts fall into direct concept relations, which directly relate two concepts in Web ontologies (inheritance, disjointness, equivalence, etc.), and indirect concept relations (some user-defined relations). Unlike inheritance (rdfs:subClassOf), indirect concept relations traditionally cannot be viewed as concept hierarchy trees in most ontological tools. As we have mentioned before, a remarkable characteristic of concept maps is that the concepts are represented in a


Fig. 3. The screen shot of the Semantic Browser, a graphical Web ontology explorer

hierarchical fashion, with the most inclusive, most general concepts at the top of the map and the more specific, less general concepts arranged hierarchically below; hence different kinds of relationships can be represented in a hierarchical fashion to suit particular exploration preferences. We can construct a multi-relational concept hierarchy tree for portions of an ontology based on semantic weights, rather than on a single relation (inheritance). Since a tree is a specific form of graph, we can construct such a multi-relational concept hierarchy tree using the algorithms in Sections 3.2 and 3.3.
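Assuming the semantic weights computed above, one simple way to lay out such a multi-relational hierarchy view is to group concepts by weight and emit the levels from highest to lowest, as in the small sketch below.

from collections import defaultdict

def hierarchy_levels(sw):
    """Group concepts by semantic weight, highest (most general/relevant) level first."""
    levels = defaultdict(list)
    for concept, weight in sw.items():
        levels[weight].append(concept)
    return [sorted(levels[w]) for w in sorted(levels, reverse=True)]

# e.g. hierarchy_levels({"control": 2, "conversion": 2, "xref": -2})
# -> [['control', 'conversion'], ['xref']]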

4 Use Case: TCM Ontology Exploration
TCM is a medical science that embodies traditional Chinese culture and philosophical principles, which constitute the basis and essence of the simple and naive materialism of ancient China. As a complex medical science, the TCM knowledge system embodies many concepts and relations, and each concept may be related to innumerable individuals. In collaboration with the China Academy of Traditional Chinese Medicine (CATCM), we have been developing a domain ontology for TCM. Hundreds of TCM experts and software engineers have taken more than 5 years to build the world's largest TCM ontology [10], now called the Traditional Chinese Medical Language System (TCMLS). More than 10,000 concepts and 100,000 individuals are defined in the current knowledge base (KB), and the ontology under development is still a small part of the complete ontology. Based on our model, we have partially implemented a graphical Web ontology explorer [11], the so-called Semantic Browser (see figure 3), to assist users in exploring the TCMLS visually and interactively. In the Semantic Browser, portions of ontologies are visualized as semantic maps. Users can navigate from concept to concept through semantic links to browse different aspects of TCM domain knowledge. The Semantic Browser can reorganize the structure of a portion of


TCM knowledge based on the relative order of the overall semantic weights computed during propagation, recommending to users the relevant and important portions of the ontology with higher semantic weights so that they can browse them before other parts.

5 Related Work
Although there are a number of ontological tools (e.g. Protégé6 and OntoRama7) that can assist users in exploring ontologies visually, they mainly focus on graphical control of ontologies (e.g. depth control and layout control) rather than on the semantics itself during exploration. There is also some related work on graph models for ontologies in the literature. Gerbe and Mineau [12] proposed a conceptual-graph-based Ontolingua, and Corbett [13] described semantic interoperability using conceptual graph theory. Several approaches have been proposed to extract particular aspects of an ontology. Noy and Musen [14] described an approach to specifying self-contained portions of a domain ontology through traversal of concepts. Bhatt and Taniar [15] proposed a distributed approach to sub-ontology extraction that extracts particular aspects of the whole ontology.

6 Conclusion
In this paper, we propose a visual interactive concept map model to support efficient Web ontology exploration. We demonstrate that concept maps can capture users' interactive requirements and complement Web ontologies in several aspects. We have proposed several algorithms for analyzing, reorganizing and extending Web ontologies from the graph-theory perspective. Portions of Web ontologies can be represented as semantic maps, and the semantic weight propagation approach can be used to determine the overall semantic weights for semantic maps. We are going to conduct experiments on the main algorithms of the model with several large local ontologies of the TCMLS to test performance. Currently, our model is based on a single domain ontology, but we plan to take the model into a distributed environment with multiple ontologies and to integrate several Web ontologies for efficient exploration.

Acknowledgment
This work was partially supported by a grant from the Major State Basic Research Development Program of China (973 Program) under grant number 2003CB316906, a grant from the National High Technology Research and Development Program of China (863 Program): TCM Virtual Research Institute (sub-program of Scientific

6 http://protege.stanford.edu
7 http://www.ontorama.com/


Data Grid), and also a grant from the Science and Technique Foundation Programs Foundation of Ministry of Education of China (No. NCET-04-0545).

References
1. T. Berners-Lee, J. Hendler, O. Lassila: The Semantic Web. Scientific American (2001)
2. T. Gruber: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition 5(2) (1993) 199-220
3. T. Berners-Lee: Conceptual Graphs and the Semantic Web. W3C Design Issues (2001)
4. B.R. Gaines, M.L.G. Shaw: WebMap: Concept Mapping on the Web. Proceedings of the 2nd International WWW Conference (1995)
5. J.D. Novak, D.B. Gowin: Learning How to Learn. Cambridge University Press, Cambridge (1984)
6. D.P. Ausubel: The Psychology of Meaningful Verbal Learning. Grune & Stratton, New York (1968)
7. B.R. Gaines, M.L.G. Shaw: Concept Maps Indexing Multimedia Knowledge Bases. AAAI-94 Workshop: Indexing and Reuse in Multimedia Systems, Menlo Park, California (1994)
8. R. Hall, E.L. Stocks: Guided Surfing: Development and Assessment of a World Wide Web Interface for an Undergraduate Psychology Class. Proceedings of the North American Web Developers Conference (1997)
9. M. Carnot, B. Dunn et al.: Concept Maps vs. Web Pages for Information Searching and Browsing. Proceedings of the 2003 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on Enablement through Technology (2003)
10. X. Zhou, Z. Wu, A. Yin et al.: Ontology Development for Unified Traditional Chinese Medical Language System. Journal of Artificial Intelligence in Medicine 32(1) (2004) 15-27
11. Y. Mao et al.: Interactive Semantic-based Visualization Environment for Traditional Chinese Medicine Information. Proceedings of the 7th Asia-Pacific Web Conference (2005)
12. O. Gerbe, G.W. Mineau: The CG Formalism as an Ontolingua for Web-Oriented Representation Languages. Proceedings of the 10th International Conference on Conceptual Structures (2002)
13. D. Corbett: Interoperability of Ontologies Using Conceptual Graph Theory. Proceedings of the 12th International Conference on Conceptual Structures (2004)
14. N.F. Noy, M.A. Musen: Specifying Ontology Views by Traversal. Proceedings of the 3rd International Semantic Web Conference (2004)
15. M. Bhatt, D. Taniar: A Distributed Approach to Sub-Ontology Extraction. Proceedings of the 18th International Conference on Advanced Information Networking and Applications (2004)

A Resource-Adaptive Transcoding Proxy Caching Strategy*
Chunhong Li, Guofu Feng, Wenzhong Li, Tiecheng Gu, Sanglu Lu, and Daoxu Chen
State Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University, Nanjing, Jiangsu, 210093, P.R. China
{Lch, Fgfmail, Lwz, Gutc}@dislab.nju.edu.cn
{Sanglu, Cdx}@nju.edu.cn

Abstract. With the emergence of pervasive computing, Internet client devices have become highly heterogeneous. Transcoding proxies are used to adapt media streams to suit diverse client devices. In a transcoding-proxy-based streaming system, the CPU and the network are both potential bottleneck resources. In this paper, a resource-adaptive transcoding proxy caching mechanism is proposed, which deals with network and CPU demand in an integrated fashion and aims to improve the system's potential service capability. First, we explore the network gain and CPU gain of caching multiple versions at the same time. By introducing a time-varying influence factor α(t), the aggregated resource gain of the caching system is derived. Then, we derive the merit function of caching a single object under a given caching status and design a cache replacement algorithm called RAC. Simulation shows that, on the primary metric of request blocking ratio, RAC outperforms LRU and LFU markedly.

1 Introduction
With the emergence of pervasive computing, many of today's client devices (for example, laptops, PDAs and mobile phones) have incorporated Internet access features [1]. These devices' capabilities vary in many aspects, such as connection bandwidth, processing power, storage capability and screen display size. Because of the multi-modality characteristic of multimedia information, a media object may be presented in several versions to different client devices. These versions differ in modality, fidelity or resource requirements for storage, transfer and presentation. The process of transforming a media object from an existing version to another is called transcoding. In general, media transcoding is a computation-intensive task. To reduce the service overhead on the original server, a proxy is commonly used to perform transcoding operations on the fly [2,3,4]. However, providing a transcoding service at the proxies has introduced new challenges for the

This work is partially supported by the National Natural Science Foundation of China under Grant No.60402027; the National High-Tech Research and Development Program of China (863) under Grant No.2004AA112090; the National Basic Research Program of China (973) under Grant No.2002CB312002.


scalability of media systems. With transcoding, a considerable amount of CPU resource at the proxy is required to complete a session. Thus, in addition to the communication links between the original server and the proxy, the CPU capacity of the transcoding proxy is another potential bottleneck resource. In traditional media systems, caching media objects in the proxy is the widely used approach to reducing network traffic effectively. In this paper, we explore a transcoding proxy caching scheme to improve streaming media delivery in a pervasive environment. In particular, we focus on its potential for reducing the demands on bottleneck resources. We propose a resource-adaptive caching (RAC) algorithm, which adjusts the caching merit function dynamically based on the current workload of the CPU and network. Experimental results show that, under various resource constraints, RAC shows desirable performance, resulting in a lower request blocking ratio. The rest of the paper is organized as follows. Section 2 gives an overview of related work. Section 3 describes the system environment and service model we consider, and gives a formal description of the problem. The detailed description of the RAC algorithm is presented in Section 4. The simulation model and experimental results are discussed in Section 5. Finally, Section 6 concludes the paper.

2 Related Works
Proxy caching of streaming objects has been well investigated [8,9]. Which object should be replicated in the cache, and which one should be evicted from the cache to make room for a new object when the cache space is fully occupied, are the two key design issues [8-11]. While the transcoding proxy is attracting more and more attention, its role in the functionality of caching is becoming a hot research topic [7,12,13,14]. To determine which object versions to cache, there are two distinct caching strategies, i.e. single-version caching and multiple-version caching. With the single-version caching strategy, at most one version of an object is allowed to be cached in the proxy at any time; algorithms that adopt this strategy include TeC-11 [12], TeC-12 [12] and FVO [7]. In caching algorithms that use the multiple-version caching strategy, multiple versions of the same object are cached at the same time; algorithms that adopt this strategy include TeC-2 [12], PTC [13], TVO [7], AE [14], and so on. On the issue of cache replacement, transcoding also introduces new challenges. When the multiple-version caching strategy is taken, the aggregated profit of caching multiple versions of the same media object is not simply the sum of the individual profits of caching each version of the object [14]. Thus, when computing the merit of caching an object version, the existence of other versions of the same object must be carefully considered. In [14], a weighted transcoding graph (WTG) is used to describe the transcoding relationships among the versions and the corresponding transcoding delays, with which the minimal aggregated transcoding delay of caching multiple versions in the transcoding proxy is devised. This work takes transcoding delays into account in cache replacement and does not apply in the case of "streamlined" transcoding in pervasive media streaming delivery. In the TeC [12] system, the impact of transcoding relationships on the caching design is neglected, and existing popular algorithms (e.g., LRU, LFU, LRU-k, or GD*) are used to make the cache replacement decision. Bo Shen et al. [7]



simplify the transcoding relationships by assuming that lower versions can only be transcoded from the original objects. This restriction makes the cache replacement decision easier, but the universality of the proposed merit function and results is lost. Among the existing transcoding caching schemes, the work in [7] has an objective similar to ours. In [7], three caching algorithms are proposed, i.e., FVO (full version only), TVO (transcoded version only), and an adaptive caching algorithm. FVO tries to reduce the network demand by caching full object versions, while TVO reduces the computation demand by caching the transcoded versions. To minimize the blocking probability, an adaptive caching algorithm is proposed that considers the dynamic resource demands; in particular, it uses a threshold-based policy to switch its cache replacement algorithm between FVO and TVO. However, the threshold α is a manually set parameter, and how to select the right threshold value in a dynamic environment has not been well explored. Our approach differs from that of [7] in two aspects: (1) With the help of the weighted transcoding graph, the network gain and CPU gain of caching multiple versions at the same time are explored. By introducing a time-varying influence factor α(t), the aggregate resource gain is derived. We then derive a merit function for caching a single object under a given caching status and design the cache replacement algorithm. Thus, our approach is more flexible and universal than that presented in [7]. (2) By using an observation-based adaptive controller, α(t) is dynamically computed based on the current CPU and network demands. The object versions that contribute more to saving the dominant bottleneck resource have a better chance of being cached. Therefore, the caching system performs with good resource awareness.

3 Models and Problem Formulation
3.1 System Model
We consider a media streaming system consisting of a content server, a transcoding proxy, and various client devices, as shown in Figure 1. At the proxy, the media streams are transcoded to meet the capabilities of the client devices, and both the original versions and the transcoded ones can be cached. Upon receiving a service request, the proxy searches its cache space for an appropriate object version. We define the following events at the proxy:
(a) Version Hit: If the requested object version exists in the cache, it is sent to the client directly.
(b) Useful Hit: If the proxy does not have the requested object version but has useful versions, transcoding is necessary. The proxy selects the useful version with the least transcoding cost and reserves the required CPU resource (in terms of a portion of CPU power [7]). If successful, the request is accepted and the proxy starts a transcoding thread accordingly; otherwise, the request is blocked (called a CPU block). If the request is accepted, the CPU resource is reserved until the end of the streaming session.



(c) Miss: If neither the requested version nor any useful version is in the cache, the full version must be retrieved from the content server. The proxy determines the network bandwidth required to fetch the full object version and the CPU required to transcode it to the requested one, and performs the procedure. If successful, the request is accepted; otherwise, the request is blocked (if the request is blocked by the network, it is called a network block; if it is blocked by the proxy's CPU, it is called a CPU block).


Fig. 1. A transcoding proxy based media streaming system
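To make the three events concrete, the following is a minimal Python sketch of the admission logic described above. The data layout (the cache as a set of (object, version) pairs, the WTG as a nested dict of transcoding costs, and version 1 as the original) is a hypothetical assumption for illustration; the paper does not prescribe an implementation.

import math

def admit(obj, version, cache, wtg, cpu_free, bw_free, bw_need):
    """Decide how a request for (obj, version) is served, per events (a)-(c)."""
    if (obj, version) in cache:                                   # (a) version hit
        return "serve directly from cache"
    # (b) useful hit: a cached version with an edge to the requested one in the WTG
    useful = [u for (o, u) in cache
              if o == obj and wtg[obj].get(u, {}).get(version, math.inf) < math.inf]
    if useful:
        src = min(useful, key=lambda u: wtg[obj][u][version])     # least transcoding cost
        cost = wtg[obj][src][version]
        return "transcode cached version" if cost <= cpu_free else "CPU block"
    # (c) miss: fetch the full version 1 from the content server, then transcode
    if bw_need[obj] > bw_free:
        return "network block"
    if wtg[obj][1][version] > cpu_free:
        return "CPU block"
    return "fetch from server and transcode"

# Hypothetical example: version 2 of "movie" is cached and version 3 is requested.
wtg = {"movie": {1: {1: 0.0, 2: 0.01, 3: 0.01}, 2: {2: 0.0, 3: 0.008}, 3: {3: 0.0}}}
print(admit("movie", 3, {("movie", 2)}, wtg,
            cpu_free=0.1, bw_free=0.0, bw_need={"movie": 0.35}))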

3.2 Transcoding Model
The various versions of a media object and the transcoding relationships among them can be represented by a weighted transcoding graph (WTG) [13, 14]. In the WTG G_i of object i, each vertex v ∈ V[G_i] represents a version of object i, and a directed edge (u, v) ∈ E(G_i) indicates that version u of object i can be transcoded to version v; the transcoding cost is given by w_i(u, v), the weight of edge (u, v). An example of a WTG is illustrated in Figure 2.


Fig. 2. An example of a weighted transcoding graph [14]

Definition 1. In the WTG G_i of object i, version u is a useful version of version v iff there exists a directed edge from u to v. Let Φ_i(v) be the useful version set of v:

$$\Phi_i(v) = \{\, u \mid u \in V[G_i] \ \text{and}\ (u, v) \in E(G_i) \,\} \qquad (1)$$

As an example, for the G_i illustrated in Figure 2, Φ_i(2) = {1, 2} and Φ_i(4) = {1, 2, 4}.
3.3 Problem Formulation
To facilitate our discussion, the following notations are defined:
B : the bandwidth of the bottleneck link between the server and the proxy
C : the CPU capacity of the transcoding proxy



O = {o_1, o_2, …, o_N} : the set of all media objects maintained by the content server
o_{i,x} : version x of object i
b_i : the bandwidth required to fetch object i (i.e., o_{i,1}) from the content server

λ_{i,x} : the mean reference rate of o_{i,x}
s_{i,x} : the size of o_{i,x}
G_i : the WTG of object i
w_i : the weight matrix of G_i; w_i(u, v) is the CPU cost of transcoding from u to v
S_cache : the caching capacity of the proxy
H(O) : the set of object versions in the cache at a given time
The analysis in Section 3.1 suggests that, if a request is a version hit or a useful hit in the cache, the CPU and network resources required to serve it are reduced. Under a given caching status, the bandwidth saving is called the network gain of the cache, while the saved CPU resource is called the CPU gain. We begin by exploring how to compute the network gain. Assume that, under the cache status H(O), the object o_{i,x} is requested. If o_{i,x} or a useful version of o_{i,x} exists in the cache, there is no need to fetch the original version of object i from the content server, so the network bandwidth saving is b_i. On the other hand, if a cache miss occurs, the bandwidth saving is zero. Hence, under H(O), for a request of o_{i,x}, the bandwidth saving can be computed as follows:

$$g_{net}(o_{i,x} \mid H(O)) = \begin{cases} 0, & \text{if } H(O) \cap \Phi_i(x) = \emptyset \\ b_i, & \text{otherwise} \end{cases} \qquad (2)$$

In a realistic Internet environment, different objects may have very different popularities. Therefore, the network gain of the cache can be computed as follows:

$$G_{net}(H(O)) = \sum_{i=1}^{N} \sum_{x \in V[G_i]} \lambda_{i,x} \cdot g_{net}(o_{i,x} \mid H(O)) \qquad (3)$$

Similarly, if the proxy has cached o_{i,x}, there is no need for transcoding to serve the request. Compared with fetching the original version from the content server and transcoding it to the requested one, the saved CPU resource is w_i(1, x). If o_{i,x} is not cached but some useful versions exist in the cache, the proxy selects the useful version with the lowest transcoding cost and transforms it into the requested one, so the CPU saving is (w_i(1, x) − w_i(y, x)). Therefore, for a request of o_{i,x}, the CPU saving can be computed as follows:

$$g_{cpu}(o_{i,x} \mid H(O)) = \begin{cases} 0, & \text{if } H(O) \cap \Phi_i(x) = \emptyset \\ w_i(1, x) - \min\limits_{y \in H(O) \cap \Phi_i(x)} w_i(y, x), & \text{otherwise} \end{cases} \qquad (4)$$



Thus, under the caching status H(O), the CPU gain of the cache is:

$$G_{cpu}(H(O)) = \sum_{i=1}^{N} \sum_{x \in V[G_i]} \lambda_{i,x} \cdot g_{cpu}(o_{i,x} \mid H(O)) \qquad (5)$$

However, CPU and network are two different types of resources. To make them comparable, we normalize the network gain and the CPU gain by B and C, respectively:

$$G_{net}(H(O)) = \frac{1}{B} \sum_{i=1}^{N} \sum_{x \in V[G_i]} \lambda_{i,x} \cdot g_{net}(o_{i,x} \mid H(O)) \qquad (6)$$

$$G_{cpu}(H(O)) = \frac{1}{C} \sum_{i=1}^{N} \sum_{x \in V[G_i]} \lambda_{i,x} \cdot g_{cpu}(o_{i,x} \mid H(O)) \qquad (7)$$

Under H(O), the aggregate resource gain of the cache is defined as:

$$G(H(O), \alpha(t)) = \alpha(t) \cdot G_{net}(H(O)) + (1 - \alpha(t)) \cdot G_{cpu}(H(O)) \qquad (8)$$

where α(t) (0 ≤ α(t) ≤ 1) is a time-varying influence factor that reflects the relative importance of the network gain in the aggregate resource gain. Since the cache space in the proxy is limited, the proxy must determine which object versions to cache so as to maximize the aggregate gain of the cache. The transcoding caching problem can therefore be formulated as an optimization problem, i.e., find a cache status H(O) that maximizes (8) subject to $\sum_{o_{m,n} \in H(O)} s_{m,n} \le S_{cache}$. By carefully assigning a value to α(t), under the optimized cache status H(O), the service capacity of the media streaming system is also maximized.
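As a concrete illustration of Equations (2)-(8), the sketch below computes the normalized network gain, CPU gain, and aggregate gain for a given cache status. The data layout (cache as a set of (object, version) pairs, WTG as nested dicts of costs with version 1 as the original, and lam mapping (object, version) to the reference rate) is an assumption made for illustration, not the authors' code.

def useful_cached_versions(wtg, obj, x, cache):
    # Cached versions of obj that belong to Phi(x), i.e., can be transcoded to x.
    return [u for (o, u) in cache if o == obj and x in wtg[obj].get(u, {})]

def g_net(wtg, obj, x, cache, b):
    return b[obj] if useful_cached_versions(wtg, obj, x, cache) else 0.0        # Eq. (2)

def g_cpu(wtg, obj, x, cache):
    useful = useful_cached_versions(wtg, obj, x, cache)
    if not useful:
        return 0.0                                                              # miss case of Eq. (4)
    return wtg[obj][1][x] - min(wtg[obj][u][x] for u in useful)                 # Eq. (4)

def aggregate_gain(wtg, cache, lam, b, B, C, alpha):
    net = sum(r * g_net(wtg, o, x, cache, b) for (o, x), r in lam.items()) / B  # Eq. (6)
    cpu = sum(r * g_cpu(wtg, o, x, cache) for (o, x), r in lam.items()) / C     # Eq. (7)
    return alpha * net + (1 - alpha) * cpu                                      # Eq. (8)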

4 Cache Replacement Algorithm
In this section, a resource-adaptive caching (RAC) algorithm is proposed. With RAC, the object versions that contribute more to G(H(O), α(t)) are held in the cache as much as possible, and H(O) gradually converges towards the global optimum. The value of α(t) is dynamically computed based on the current workload of the CPU and the network, and the optimization objective is tuned accordingly.
4.1 Merit Function

Under a given H(O), if the candidate object version o_{i,x} is added to the cache, the increment in the aggregate resource gain is G(H(O) + {o_{i,x}}, α(t)) − G(H(O), α(t)); this increment is taken as the caching merit of o_{i,x}. Taking into account the impact of object size on caching efficiency, the caching merit function is defined as follows:

$$U(o_{i,x} \mid (H(O), \alpha(t))) = \frac{G(H(O) + \{o_{i,x}\}, \alpha(t)) - G(H(O), \alpha(t))}{s_{i,x}} \qquad (9)$$



4.2 Computing the Value of α(t)

In a dynamic environment such as the Internet, α(t) should be tuned dynamically to reflect the current demands on, and availability of, the CPU and network resources. Hence, we design an observation-based adaptive control mechanism to compute α(t) dynamically. As shown in Figure 3, the output sensor measures the current CPU blocking ratio p_cpu(t) and network blocking ratio p_net(t) and feeds them back to the controller, where the new value of α(t) is computed as follows:

$$\alpha(t) = \frac{p_{net}(t)}{p_{cpu}(t) + p_{net}(t)} \qquad (10)$$

p_cpu(t) and p_net(t) are estimated from the fractions of CPU-blocked and network-blocked requests observed in (t − Δt, t), denoted $p^m_{cpu}(t)$ and $p^m_{net}(t)$, respectively. In practice, $p^m_{cpu}(t)$ and $p^m_{net}(t)$ are smoothed using a low-pass filter. For example, let p_cpu(t − Δt) be the smoothed value of the last sampling period; the current smoothed CPU blocking ratio p_cpu(t) is computed as a moving average as follows:

$$p_{cpu}(t) = \beta \cdot p_{cpu}(t - \Delta t) + (1 - \beta) \cdot p^m_{cpu}(t) \qquad (11)$$

In this computation, older values of the CPU blocking ratio are exponentially attenuated by a factor β (0 < β < 1).


Fig. 3. The controlling loop of α
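A small sketch of this observation-based controller is given below. The class structure, the default β, and the fallback value of α when no blocking has yet been observed are assumptions; only the two update rules follow Equations (10) and (11).

class AlphaController:
    def __init__(self, beta=0.8):
        self.beta = beta      # exponential attenuation factor, 0 < beta < 1
        self.p_cpu = 0.0      # smoothed CPU blocking ratio
        self.p_net = 0.0      # smoothed network blocking ratio

    def update(self, cpu_blocked, net_blocked, total_requests):
        # Measured blocking ratios over the last sampling period (the p^m values).
        m_cpu = cpu_blocked / total_requests if total_requests else 0.0
        m_net = net_blocked / total_requests if total_requests else 0.0
        # Low-pass smoothing of both ratios, Eq. (11).
        self.p_cpu = self.beta * self.p_cpu + (1 - self.beta) * m_cpu
        self.p_net = self.beta * self.p_net + (1 - self.beta) * m_net
        # Eq. (10): alpha grows when network blocking dominates CPU blocking.
        denom = self.p_cpu + self.p_net
        return self.p_net / denom if denom > 0 else 0.5   # assumed neutral default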

4.3 Description of RAC Algorithm

When the cache space is fully occupied and a new object o_{i,x} is evaluated as a candidate for caching, the RAC procedure works as follows. First, o_{i,x} is inserted into the caching queue. Then the caching merits of all objects in the queue are computed, and the object with the lowest caching merit is selected and removed from the queue. If the total storage requirement of the objects in the queue still exceeds the cache size, a new cycle of merit computation and evaluation is needed; otherwise, the iterative procedure stops. Figure 4 shows the pseudo code of RAC.



Algorithm RAC(o_{i,x}, H(O), α(t))
1  H(O) ← H(O) + {o_{i,x}}
2  while S(H(O)) > S_cache
3      do for each object o_{m,n} in H(O)
4             do calculate its caching value U(o_{m,n} | (H(O) − {o_{m,n}}, α(t)))
5         find the object o_{m,n} with the lowest caching value
6         H(O) ← H(O) − {o_{m,n}}

Fig. 4. The pseudo code of algorithm RAC
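The following is a runnable sketch of the replacement procedure in Fig. 4, reusing the aggregate_gain() sketch from Section 3.3 above; the merit of a cached version is evaluated against the cache with that version removed, as in line 4 of the pseudo code. Object sizes (a dict keyed by (object, version) pairs) and the other inputs are illustrative assumptions.

def merit(wtg, cache, lam, b, B, C, alpha, version, size):
    without = cache - {version}
    gain = aggregate_gain(wtg, cache, lam, b, B, C, alpha) - \
           aggregate_gain(wtg, without, lam, b, B, C, alpha)
    return gain / size[version]                                    # Eq. (9)

def rac(candidate, cache, size, capacity, wtg, lam, b, B, C, alpha):
    cache = set(cache) | {candidate}                               # line 1
    while sum(size[v] for v in cache) > capacity:                  # line 2
        # lines 3-5: evaluate every cached version and pick the least valuable one
        victim = min(cache, key=lambda v: merit(wtg, cache, lam, b, B, C, alpha, v, size))
        cache.remove(victim)                                       # line 6
    return cache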

5 Performance Analysis
In this section, we evaluate the performance of RAC. In particular, we construct an event-driven simulator to simulate the media streaming system illustrated in Figure 1.
5.1 Simulation Model
Client Model: Suppose that the client devices can be classified into five classes. Without loss of generality, the distribution of these five classes of clients is modeled as a device vector of .
Transcoding Model: The original media objects in the content server should be transcoded to satisfy the users' needs. Thus, there may be five different versions of the same object, and the sizes of the five versions are assumed to be 100%, 80%, 60%, 40%, and 25% of the original object size. We assume that all objects in the system have the same WTG. In particular, the transcoding relationships among the five versions and the corresponding weight matrix w are defined in (12):

$$w = \begin{bmatrix} 0.0 & 0.01 & 0.01 & 0.01 & 0.01 \\ \infty & 0.0 & 0.008 & 0.008 & 0.008 \\ \infty & \infty & 0.0 & 0.006 & 0.006 \\ \infty & \infty & \infty & 0.0 & \infty \\ \infty & \infty & \infty & \infty & 0.0 \end{bmatrix} \qquad (12)$$

where the elements of w denote the CPU fraction, relative to the CPU power of a well-known machine, required to transcode one version to another.
Workload Model: Suppose that there are 1000 media objects maintained in the content server. We assume that these media objects are encoded at a constant bit rate of 350 kbps. The reference frequencies of these objects follow a Zipf-like distribution with θ set to 0.75. The service duration of each object follows a normal distribution with μ set to 1800 s and σ set to 180 s. The interarrival time of requests obeys a Poisson distribution with the parameter λ set to 0.2.
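A brief sketch of this workload generator is shown below, interpreting λ = 0.2 as the Poisson arrival rate (exponential interarrival times with mean 5 s); the function names and the use of Python's random module are assumptions made for illustration.

import random

def zipf_popularity(n_objects=1000, theta=0.75):
    weights = [1.0 / (rank ** theta) for rank in range(1, n_objects + 1)]
    total = sum(weights)
    return [w / total for w in weights]          # Zipf-like reference frequencies

def next_request(pop, rate=0.2, mu=1800.0, sigma=180.0, rng=random):
    gap = rng.expovariate(rate)                  # Poisson arrivals
    obj = rng.choices(range(len(pop)), weights=pop, k=1)[0]
    duration = max(0.0, rng.gauss(mu, sigma))    # normally distributed session length
    return gap, obj, duration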



5.2 Experimental Results

The performance of RAC is compared with that of LRU and LFU. The overall blocking ratio is used as the main metric in the experiments. For a closer look, other metrics are also examined, including the CPU blocking ratio, network blocking ratio, version hit ratio, useful hit ratio, and total hit ratio. Let S be the total size of the original object versions maintained in the content server; the cache size at the proxy is described relative to S. Let C_Max and B_Max be the minimal amounts of CPU and network resources required to accommodate all user requests if there were no cache at the proxy. The CPU coefficient describes the CPU power of the transcoding proxy relative to C_Max, while the network coefficient describes the bottleneck bandwidth between the server and the proxy relative to B_Max.
5.2.1 Impact of Cache Capacity
First, we investigate the performance of the caching algorithms by varying the cache capacity. The CPU coefficient and network coefficient are both set to 0.3. As can be seen from Figure 5(a), the overall blocking ratio decreases with increasing cache size, and RAC consistently outperforms LRU and LFU. The advantage of RAC can be explained with the help of Figure 6. From Figure 6(a), it can be seen that RAC has the highest version hit ratio. This implies that under RAC more requests are serviced directly by the proxy cache, so much more CPU and network resource is saved and the potential service capacity of the system is improved. This is the main reason why RAC has a lower overall blocking ratio.

Fig. 5. Impact of Cache Size on Blocking Ratio: (a) overall blocking ratio, (b) network blocking ratio, (c) CPU blocking ratio, each as a function of the relative cache size for RAC, LRU, and LFU

Fig. 6. Impact of Cache Size on Hit Ratio: (a) version hit ratio, (b) content hit ratio, (c) overall hit ratio, each as a function of the relative cache size for RAC, LRU, and LFU



Moreover, as shown in Figures 6(b) and 6(c), LRU and LFU have lower overall hit ratios than RAC, and most of their hits are useful hits. As a result, more transcoding activities take place at the proxy under LRU and LFU, leading to a heavier CPU load. The proxy thus becomes the dominant bottleneck, and most requests are blocked due to insufficient CPU resource, while the network load is reduced.
5.2.2 Resource Coordinating Capability
To gain insight into RAC's capability to coordinate resource usage, we examine the change of the blocking ratios at runtime. In the experiments, the blocking ratios are estimated every 200 requests. From Figure 7(a), we find that during the starting phase (0-t1), the CPU blocking ratio increases sharply while the network blocking ratio decreases. This is because at the beginning the cache is cold and the version hit ratio is very low, which leads to a heavy CPU load at the proxy. During the period (t1, t2), the version hit ratio increases, the CPU load is gradually reduced, and the network load increases accordingly. At t2, the system reaches the steady state. In contrast, under LRU and LFU (see Figures 7(b) and 7(c)), the system reaches the steady state at t1, and the CPU blocking ratio stays at a high level while the network blocking ratio stays at a very low level (close to 0). Obviously, the CPU and the network are used in a much more balanced fashion under RAC. In RAC, this coordinating capability is realized by tuning the value of α(t) dynamically (see Figure 7(d)).

Fig. 7. Resource coordinating capability of the algorithms: CPU, network, and overall blocking ratios over time for (a) RAC, (b) LRU, (c) LFU, and (d) the changing of α

5.2.3 Performance Under Different Resource Conditions
In this section, we examine the performance of RAC under different resource conditions. Figure 8(a) shows the overall blocking ratio as a function of the CPU coefficient with the network coefficient fixed at 0.3 and the relative cache size set to 20%. As can be seen, RAC produces the lowest overall blocking ratio over a wide range of CPU coefficients. Figure 8(b) shows the overall blocking ratio as a function of the network coefficient with the CPU coefficient fixed at 0.3 and the relative cache size set to 20%. RAC also produces the lowest overall blocking ratio over a wide range of network coefficients.

Fig. 8. The Blocking Ratio under Different Resource Conditions: (a) overall blocking ratio vs. CPU coefficient (network coefficient = 0.3, relative cache size = 20%); (b) overall blocking ratio vs. network coefficient (CPU coefficient = 0.3, relative cache size = 20%)

6 Conclusion
In pervasive environments, media objects must be transcoded to make them accessible to various client devices. In a transcoding-proxy-based media streaming system, the CPU and the network are two potential bottleneck resources. In this paper, a resource-adaptive transcoding proxy caching mechanism is explored to improve streaming media delivery in a pervasive environment. In particular, a resource-adaptive caching (RAC) algorithm is proposed, in which the caching merit function is adjusted dynamically based on the current resource conditions. With RAC, the system deals with the CPU and network demands in a balanced fashion and performs with good resource awareness.

References
[1] M. Satyanarayanan. Pervasive Computing: Vision and Challenges. IEEE Personal Communications, August 2001
[2] Keqiu Li, H. Shen, F. Chin, and S. Zheng. Optimal Methods for Coordinated En-Route Web Caching for Tree Networks. ACM Trans. on Internet Technology (TOIT), Vol. 5, No. 2, May 2005
[3] Takayuki Warabino. Video Transcoding Proxy for 3G Wireless Mobile Internet Access. IEEE Communications Magazine, Vol. 38, No. 10, Oct. 2000, pp. 66-71
[4] R. Han, P. Bhagwat, R. LaMaire, T. Mummert, V. Perret, and J. Rubas. Dynamic Adaptation in an Image Transcoding Proxy for Mobile WWW Browsing. IEEE Personal Communications, Vol. 5, No. 6, Dec. 1998
[5] Anthony Vetro, Charilaos Christopoulos, and Huifang Sun. Video Transcoding Architectures and Techniques: An Overview. IEEE Signal Processing Magazine, 2003



[6] J. Liu and J. Xu. Proxy Caching for Media Streaming over the Internet. IEEE Communications, Feature Topic on Proxy Support for Streaming on the Internet, Vol. 42, No. 8, pp. 88-94, August 2004
[7] X. Tang, F. Zhang, and S.T. Chanson. Streaming Media Caching Algorithms for Transcoding Proxies. Proc. ICPP '02, Aug. 2002
[8] R. Ayani, Y.M. Teo, and P. Chen. Cost-based Proxy Caching. Proceedings of the International Symposium on Distributed Computing and Applications to Business, Engineering and Science, pp. 218-222, Wuxi, China, December 2002
[9] K. Wu, P.S. Yu, and J. Wolf. Segment-Based Proxy Caching of Multimedia Streams. Proc. World Wide Web Conference, ACM Press, 2001, pp. 36-44
[10] J. Wang. A Survey of Web Caching Schemes for the Internet. ACM SIGCOMM Computer Communication Review, Vol. 29, pp. 36-46
[11] Junho Shim, Peter Scheuermann, and Radek Vingralek. Proxy Cache Algorithms: Design, Implementation, and Performance. IEEE Transactions on Knowledge and Data Engineering, Vol. 11, No. 4, July/August 1999
[12] B. Shen, S.-J. Lee, and S. Basu. Caching Strategies in Transcoding-Enabled Proxy Systems for Streaming Media Distribution Networks. IEEE Transactions on Multimedia, Vol. 6, No. 2, pp. 375-386, April 2004
[13] Abhishek Trivedi, Krithi Ramamritham, and Prashant J. Shenoy. PTC: Proxies that Transcode and Cache in Heterogeneous Web Client Environments. World Wide Web 7(1): 7-28 (2004)
[14] C. Chang and M. Chen. On Exploring Aggregate Effect for Efficient Cache Replacement in Transcoding Proxies. IEEE Transactions on Parallel and Distributed Systems, Vol. 14, No. 6, pp. 611-624, June 2003

Optimizing Collaborative Filtering by Interpolating the Individual and Group Behaviors

Xue-Mei Jiang, Wen-Guan Song, and Wei-Guo Feng

Management College, Shanghai Business School, 2271 Zhongshan West Ave., 200235 Shanghai, P.R. China
[email protected], [email protected]

Abstract. Collaborative filtering has been very successful in both research and E-commerce applications. One of the most popular collaborative filtering algorithms is the k-Nearest Neighbor (KNN) method, which finds the k nearest neighbors of a given user to predict his interests. Previous research on the KNN algorithm usually suffers from the data sparseness problem, because the number of items users have rated is very small; the problem is even more severe in Web-based applications. Cluster-based collaborative filtering has been proposed to solve the sparseness problem by averaging the opinions of similar users. However, it does not bring consistent improvement to the performance of collaborative filtering, since it produces less personal predictions. In this paper, we propose a cluster-based KNN method, which combines an iterative clustering algorithm with KNN to improve the performance of collaborative filtering. Using the iterative clustering approach, the sparseness problem is first alleviated by fully exploiting the rating information. Then, cluster-based prediction is used as a smoothing method for KNN to optimize the performance of collaborative filtering. The experimental results show that our proposed cluster-based KNN method consistently performs better than the traditional KNN method and the clustering-based method on large-scale data sets.

1 Introduction
Collaborative filtering (CF) has been very successful in both research and application domains such as information filtering and E-commerce. The k-Nearest Neighbor (KNN) method is a popular memory-based CF algorithm. Its key technique is to find the k nearest neighbors of a given user and predict his interest using the information of these neighbors. However, this method suffers from a fundamental problem: sparsity. The sparsity problem occurs because most users visit only a few of all Web pages, and hence the user-page rating matrix generated from such user navigation actions is very sparse, which in turn affects the CF accuracy of the KNN method. To address the sparsity problem, clustering methods have been proposed, which work by identifying groups of users who appear to have similar preferences [2][9][11][18]. Once the clusters/user groups are created, predictions for an individual can be made by averaging the opinions of the other users in the same cluster. Some clustering techniques represent an individual user with partial participation in several clusters.



A potential weakness of clustering is that an individual user with a strong personality loses his personal properties, since the prediction is an average across the clusters, weighted by the degree of participation. In some cases, the clustering-based method even performed worse than the basic KNN algorithm on a sparse data set [2]. Despite the popularity of clustering methods in collaborative filtering, there have been no conclusive findings on whether clustering methods can be used to improve CF performance consistently. In this paper we propose a novel method for cluster-based collaborative filtering, cluster-based KNN, which performs consistently on the EachMovie dataset. It makes significant improvements over the KNN method, while clusters are used automatically without user-provided relevance information. We conjecture that there are two main reasons for the success of our method. First, our proposed clustering method can fully exploit the information between users and items: through iterative clustering, the sparseness problem can be alleviated and the quality of the clustering can be improved. Second, the smoothing technique we use to interpolate the KNN method and the clustering method may better capture the characteristics of groups and individuals than previously used methods. The rest of the paper is organized as follows. We discuss related work on collaborative filtering in Section 2. We present the iterative clustering method for collaborative filtering and the novel cluster-based KNN algorithm in Section 3. In Section 4, we describe the data sets, experimental setup, and results. Section 5 concludes the paper and points out possible directions for future work.

2 Related Work
In this section we briefly review research related to collaborative filtering methods and recommender systems. Tapestry [6] is one of the earliest implementations of collaborative filtering-based recommender systems. This system relied on the explicit opinions of people from a close-knit community, such as an office workgroup. However, recommender systems for large communities cannot depend on the assumption that people know each other. Later, several ratings-based recommender systems were developed. The GroupLens system [13] provides a pseudonymous collaborative filtering solution for Usenet news and movies. Ringo [24] and Video Recommender [8] are email- and web-based systems that generate recommendations on music and movies, respectively. Other technologies have also been applied to recommender systems, including Bayesian networks and clustering [2][18][24]. Bayesian networks create a model based on a training set, with a decision tree and edges representing user information. The model can be built off-line. The resulting model is very small, very fast, and essentially as accurate as KNN methods [2]. Bayesian networks are practical for environments in which knowledge of user preferences changes slowly with respect to the time needed to build the model, but they are not suitable for environments in which user preference models need to be updated rapidly or frequently. Clustering techniques work by identifying groups of users who appear to have similar preferences [18][24]. Once the clusters are created, predictions for an individual can be made by averaging the opinions of the other users in that cluster. Some



clustering techniques represent each user with partial participation in several clusters; the prediction is then an average across the clusters, weighted by the degree of participation. Clustering techniques usually produce less personal recommendations than other methods, and in some cases clustering methods have worse accuracy than nearest-neighbor algorithms [6]. Once the clustering is complete, however, performance can be very good, since the size of the group analyzed is much smaller. Clustering techniques can also be applied as a "first step" to reduce the candidate set for a KNN algorithm or to distribute nearest-neighbor computation across several recommender engines. Note that dividing the population into clusters may hurt the accuracy of recommendations to users. Most clustering methods group users based on static features such as user ratings. [18] proposed a hierarchical clustering method to partition the user set, and an EM algorithm was proposed in [2] for classifying the rating data. Probabilistic Latent Semantic Analysis (PLSA) [9][11] was proposed to learn the latent information. This method models individual preferences as a convex combination of preference factors. A latent class variable z ∈ Z = {z1, z2, …, zk} is associated with each observation pair of a user and an item. PLSA assumes that users and items are independent of each other given the latent class variable. Thus, the probability of each observed pair (x, y, r) is calculated as follows:

$$P(r \mid x, y) = \sum_{z \in Z} P(r \mid z, x)\, P(z \mid y) \qquad (1)$$

where P(z | y) stands for the likelihood of user y belonging to class z, and P(r | z, x) stands for the likelihood that class z of users assigns item x the rating r. In [10], the ratings of each user are normalized to a normal distribution with zero mean and unit variance; a Gaussian distribution is used for P(r | z, x), and a multinomial distribution for P(z | y).

3 Cluster-Based KNN Collaborative Filtering
In this section, we first describe the KNN algorithm for collaborative filtering. Then, by analyzing the clustering method, we propose the cluster-based KNN collaborative filtering algorithm to optimize the quality of recommendation. First, we briefly describe the notation used throughout this paper. Let I = {i1, i2, …, iN} be a set of items, U = {u1, u2, …, uM} be a set of training users, and ut be the test user. Let {(u(1), i(1), r(1)), …, (u(L), i(L), r(L))} be all the rating information in the training database. Each triple (u(k), i(k), r(k)) indicates that item i(k) is rated as r(k) by user u(k). For each user u, i(u) denotes the set of items rated by user u, Ru(i) denotes the rating of item i by user u, and R̄u denotes his average rating.

3.1 KNN Method
The memory-based CF algorithm calculates the prediction for an item as a weighted average of other users' votes on that item. Most memory-based algorithms differ in the way they



calculate the weights. A popular weight calculation method is the Pearson correlation coefficient [13]. In the neighborhood-based algorithm [13], a subset of users is first chosen based on their similarity to the active user, and a weighted combination of their ratings is then used to produce predictions for the active user. The algorithm we use can be summarized in the following steps:
Step 1. Weight all users with respect to their similarity to the active user. The similarity between users is measured as the Pearson correlation coefficient between their rating vectors.
Step 2. Select the n users that have the highest similarity with the active user. These users form the neighborhood.
Step 3. Compute a prediction for the active user from a weighted combination of the neighbors' ratings.
In Step 1, the similarity between users a and b is computed using the Pearson correlation coefficient defined below:



$$w_{a,b} = \frac{\sum_{j=1}^{M} (v_{a,j} - \bar{v}_a)(v_{b,j} - \bar{v}_b)}{\sqrt{\sum_{j=1}^{M} (v_{a,j} - \bar{v}_a)^2 \sum_{j=1}^{M} (v_{b,j} - \bar{v}_b)^2}} \qquad (2)$$

where M is the number of items, v_{a,j} is the rating given to item j by user a, v̄_a is the mean rating given by user a, and w_{a,b} is the similarity between user a and user b. In Step 2, the neighborhood-based step, a subset of appropriate users is chosen based on their similarity to the active user, and a weighted aggregate of their ratings is used to generate predictions for the active user in Step 3. In Step 3, predictions are computed as the weighted average of deviations from the neighbors' means:

$$R^N_{a,j} = \bar{v}_a + \frac{\sum_{b=1}^{N} w_{a,b} (v_{b,j} - \bar{v}_b)}{\sum_{b=1}^{N} w_{a,b}} \qquad (3)$$

where R^N_{a,j} is the prediction for active user a on item j, w_{a,b} is the similarity between users a and b as defined in Equation (2), and N is the number of users in the neighborhood. The KNN algorithm has the advantage of being able to rapidly incorporate the most up-to-date information and make relatively accurate predictions. Furthermore, the algorithm finds the most similar individual users to help the prediction, and these similar users' behaviors resemble the behavior of the active user; to some extent, this kind of prediction is more personalized. However, for a given active user, the ratings of his neighbors are very sparse. According to statistics over the whole EachMovie data set, the average number of movies rated per user is about 38, while there are 1,628 movies in total. A large part of the movies are not



rated by the users. Thus, the item to be predicted may not be rated by the neighbors. On such sparse data, the performance of the KNN algorithm is limited.
3.2 Cluster-Based Collaborative Filtering
The main idea of clustering techniques is to identify groups of users who appear to have similar preferences. Once the clusters are created, predictions for an individual can be made by averaging the opinions of the other users in that cluster. In such an environment, the sparsity of the rating data can be alleviated by using the rating information of all users in the group. In this section, we first introduce our iterative clustering method, which performs two-layer clustering on users and items so that the clustering results reinforce each other. Then, a prediction method based on the clustering results is given to address the sparsity problem.
3.2.1 Iterative Clustering
We model the rating data as a directed graph G = (V, E), where nodes in V represent users and items, while an edge in E represents the relationship that item i is rated by user u. As shown in Fig. 1, such a graph is a bipartite graph with users and items on either side. Furthermore, the graph is a weighted graph, and the edge weights represent the rating scores.

Fig. 1. A directed graph between users and items
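The weighted bipartite graph can be stored as two adjacency dictionaries so that both directions (user to items and item to users) are available during the iterative clustering; the sketch below and its toy ratings are illustrative rather than part of the paper.

def build_bipartite_graph(ratings):
    """ratings: iterable of (user, item, score) triples."""
    user_edges, item_edges = {}, {}
    for user, item, score in ratings:
        user_edges.setdefault(user, {})[item] = score   # edge weight = rating score
        item_edges.setdefault(item, {})[user] = score
    return user_edges, item_edges

user_edges, item_edges = build_bipartite_graph(
    [("u1", "i1", 4), ("u1", "i2", 2), ("u2", "i1", 5)])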

As we discussed in Section 1, traditional clustering algorithms cannot exploit the inter-relationship between users and items. For example, when clustering users, traditional algorithms just take the items a user has rated as the features of that user; when clustering items, they just take the users who rated an item as the features of that item. Two challenges exist in such methods. The first is the sparsity problem in the large feature space, which affects the performance of clustering. The second is that these methods cannot exploit the inter-relationship between items and users.



Table 1. Iterative clustering algorithm
1. Select an arbitrary type and cluster the objects of the selected type according to the similarity defined by Equation (2).
2. Merge the objects of each cluster into one object, with the corresponding edges updated.
3. Switch to the other type for clustering.
4. Cluster the objects of the current type according to the updated edge features.
5. Restore the original edge structure and the original objects of the other type.
6. Merge the objects of each cluster of the current type, with the corresponding edges updated.
7. Go to Step 4 if the changes in the clustering results are above a threshold.
8. Merge all the objects of each cluster for all object types and then stop.
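A condensed sketch of the procedure in Table 1 is given below. It follows the same alternation between the two object types but, for brevity, uses scikit-learn's Euclidean K-means on a dense rating matrix instead of the Pearson-based similarity the paper specifies; the cluster counts, iteration limit, and matrix layout are assumptions.

import numpy as np
from sklearn.cluster import KMeans

def iterative_clustering(R, k_user=20, k_item=20, iters=5):
    """R: (num_users x num_items) rating matrix with 0 for missing ratings."""
    num_users, num_items = R.shape
    # Step 1: cluster one type (items) on its raw rating columns.
    item_labels = KMeans(n_clusters=k_item, n_init=10).fit_predict(R.T)
    for _ in range(iters):
        # Steps 2-4: merge each item cluster, so users are described by their
        # average rating toward each item cluster, then cluster the users.
        user_feat = np.zeros((num_users, k_item))
        for c in range(k_item):
            cols = item_labels == c
            if cols.any():
                user_feat[:, c] = R[:, cols].mean(axis=1)
        user_labels = KMeans(n_clusters=k_user, n_init=10).fit_predict(user_feat)
        # Steps 5-7: switch types; items are described by the average rating
        # they receive from each user cluster, then re-cluster the items.
        item_feat = np.zeros((num_items, k_user))
        for c in range(k_user):
            rows = user_labels == c
            if rows.any():
                item_feat[:, c] = R[rows, :].mean(axis=0)
        item_labels = KMeans(n_clusters=k_item, n_init=10).fit_predict(item_feat)
    return user_labels, item_labels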

Based on this analysis, we propose an iterative reinforcement clustering method to cluster the users and the items in the bipartite graph. Under this graph-based framework, the clustering results of users and items can reinforce each other, and both the user clusters and the item clusters can leverage their correlation. The process continues by iteratively projecting the clustering results of one type onto the other type through their relationships, until convergence. The sparsity problem can be alleviated through this iterative process. The details of the iterative clustering algorithm are described in Table 1. In this paper, the K-means algorithm is taken as the basic clustering algorithm. The number K is an input to the algorithm that specifies the desired number of clusters. The running time of this algorithm for each pass is linear in the total number N of nodes to be clustered, i.e., O(K*N); if the algorithm iterates Z times, the total time is O(Z*(mM+nN)). To account for the fact that users with similar preferences may still rate items differently, we take the Pearson correlation coefficient as the similarity measure. The similarity between users a and b is calculated using Equation (2), and the similarity between items i and j can be defined similarly. Since the edges are updated during the iterative process, the similarity function is modified according to the updated edges. After clustering the users and items, we obtain two sets representing the groups of users and items. Formally, the clustering results of the users U = {u1, u2, …, uM} and items I = {i1, i2, …, iN} are represented as {C_{u_1}, C_{u_2}, …, C_{u_m}} and {C_{i_1}, C_{i_2}, …, C_{i_n}}, respectively. Then, the rating over a user class and an item class can be written as:

$$R_{C_u, C_i} = \overline{R_{u,i} - \bar{R}_u}\,, \quad u \in C_u \ \text{and}\ i \in C_i \qquad (4)$$

3.2.2 Select Similar Cluster
Before predicting the active user's rating on an item, we first identify the user's group and the item's group. Since the items in the testing data set are the same as those in the training data set, the group an item belongs to is the group it was assigned to in the training set. However, the active user is not in the training set, so we must determine which



cluster the user belongs to. In this paper, we take the centroid of a cluster to represent the cluster's vector and use the Pearson correlation coefficient as the similarity measure between the cluster and the active user.
3.2.3 Predict Function
Once we have identified the cluster of the user and of the item, the next step is to look into the target users' ratings and use a technique to obtain predictions. The method for applying the clustering results is to use the average rating that the cluster C_a of the active user a gives to the cluster C_i of the rated item i. That is:

$$R^C_{a,i} = \bar{v}_a + R_{C_a, C_i} \qquad (5)$$

where R^C_{a,i} is the rating of user a on item i according to the clustering results.
3.3 Cluster-Based KNN for Collaborative Filtering
As noted above, the KNN algorithm faces the sparse-data problem. Since no explicit global model is constructed, nothing is really learned from the available user profiles; thus, the method generally cannot provide explanations of predictions or further insights into the data. However, the KNN algorithm provides more personal predictions, which are based on the most similar individual users. On the other hand, as mentioned in Section 1, the cluster-based method has shown inconsistent performance in CF prediction because of its less personal recommendations: the prediction for an individual is made by averaging the opinions of the other users in that cluster. Still, for cluster-based algorithms, the clusters may offer value beyond their predictive capabilities by highlighting certain correlations in the data. They offer an intuitive rationale for recommendations, or simply make assumptions more explicit, and they can address the sparsity problem by grouping similar users together. Based on the advantages and disadvantages of the two algorithms, we propose the cluster-based KNN method to alleviate the sparseness and produce a more personalized prediction, in which the clustering technique is used as a smoothing method for KNN collaborative filtering. Cluster-based KNN smoothes the representation of individual users using the models of the clusters to which they are most similar. We formulate our model as:

$$\tilde{R}_{a,j} = (1 - \lambda)\, R_{a,j} + \lambda\, R^C_{a,j} \qquad (6)$$

where λ is the smoothing parameter, which takes different values in different smoothing methods.
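Below is a minimal sketch of the whole prediction pipeline of Equations (2), (3), (5), and (6). The ratings are held in a dict of dicts, the cluster assignments and the per-(user cluster, item cluster) average deviations of Equation (4) are assumed to be precomputed (for example, by the iterative clustering sketch above), and the default k and λ are illustrative.

import math

def pearson(ratings, a, b):
    common = set(ratings[a]) & set(ratings[b])          # co-rated items
    if not common:
        return 0.0
    mean_a = sum(ratings[a].values()) / len(ratings[a])
    mean_b = sum(ratings[b].values()) / len(ratings[b])
    num = sum((ratings[a][j] - mean_a) * (ratings[b][j] - mean_b) for j in common)
    den_a = sum((ratings[a][j] - mean_a) ** 2 for j in common)
    den_b = sum((ratings[b][j] - mean_b) ** 2 for j in common)
    return num / math.sqrt(den_a * den_b) if den_a and den_b else 0.0   # Eq. (2)

def knn_predict(ratings, a, item, k=20):
    mean_a = sum(ratings[a].values()) / len(ratings[a])
    neighbours = sorted(((pearson(ratings, a, b), b) for b in ratings
                         if b != a and item in ratings[b]), reverse=True)[:k]
    num = sum(w * (ratings[b][item] - sum(ratings[b].values()) / len(ratings[b]))
              for w, b in neighbours)
    den = sum(w for w, _ in neighbours)
    return mean_a + num / den if den else mean_a                        # Eq. (3)

def cluster_predict(ratings, a, item, user_cluster, item_cluster, cluster_dev):
    mean_a = sum(ratings[a].values()) / len(ratings[a])
    return mean_a + cluster_dev.get((user_cluster[a], item_cluster[item]), 0.0)  # Eq. (5)

def cluster_based_knn(ratings, a, item, user_cluster, item_cluster, cluster_dev, lam=0.5):
    return ((1 - lam) * knn_predict(ratings, a, item) +
            lam * cluster_predict(ratings, a, item, user_cluster, item_cluster, cluster_dev))  # Eq. (6)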

4 Experiments
In this section, we describe the dataset, metrics, and methodology for the comparison between different prediction algorithms, and present the results of our experiments.



4.1 Dataset
We use the EachMovie data set (http://research.compaq.com/SRC/eachmovie) to evaluate the performance of our proposed algorithm. The EachMovie data set is provided by the Compaq Systems Research Center, which ran the EachMovie recommendation service for 18 months to experiment with a collaborative filtering algorithm. The information gathered during that period consists of 72,916 users, 1,628 movies, and 2,811,983 numeric ratings ranging from 0 to 5. To speed up our experiments, we use a subset of the EachMovie data set. We randomly select 20,000 users who rated 30 or more items and then divide them into a training set (75% of the users) and a test set (25% of the users). This gives about 1,507,902 ratings for all movies, and the data density is about 6%. We refer to this data set as the 20000-users dataset. Moreover, in order to tune the parameters of the algorithm, we generate a small training and testing data set with about 400 users in total, divided using the same policy; it is called the parameter-tuning dataset in the following experiments.
4.2 Metrics and Methodology
We use the Mean Absolute Error (MAE), a statistical accuracy metric, to measure prediction quality:

$$MAE = \frac{\sum_{u \in T} \left| R_u(t_j) - \tilde{R}_u(t_j) \right|}{|T|} \qquad (7)$$

where R_u(t_j) is the rating given to item t_j by user u, R̃_u(t_j) is the predicted rating of user u on item t_j, T is the test set, and |T| is the size of the test set. For each active user, we vary the number of rated items provided by the active user, using the protocols Given5, Given10, Given20, and AllBut1 [2].
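The metric itself is straightforward to compute; the sketch below assumes test_ratings maps (user, item) pairs to the held-out true ratings and predict is any of the prediction functions above (both names are illustrative).

def mean_absolute_error(test_ratings, predict):
    errors = [abs(true - predict(user, item))
              for (user, item), true in test_ratings.items()]
    return sum(errors) / len(errors)     # Eq. (7)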



Table 2. MAE results for four baseline methods. A smaller number indicates a better performance. 5 Items 1.162

10Items 1.119

Clustering

1.180

PLSA Clustering-KNN

0$(

KNN

20Items 1.091

AllBut1 1.085

1.130

1.102

1.092

1.171

1.129

1.110

1.103

1.128

1.076

1.05

1.042

       

  .11

  &OXVWHU

 

   'DWD6L]H &.11

MAE

Fig. 2. The effect of the density on three algorithms. A smaller number indicates a better performance.

1.06 1.058 1.056 1.054 1.052 1.05 0.31

0.35

0.39 0.43

0.47 0.51

0.55 0.59 Params

Fig. 3. The effect of the interpolating parameter λ. A smaller number indicates a better performance.

Optimizing CF by Interpolating the Individual and Group Behaviors

577

The results show that the degree of the density of the rating data has significant impact on the performance of prediction. As we discussed above, the KNN algorithm has very low performance when the rating data is too sparse. Clustering based method has the ability to achieve higher performance than the KNN algorithm. Our proposed algorithm has been adversely affected than two other algorithms. When the rating data become dense, the clustering method and our proposed method may improve in MAE. But the improvement will decrease, as we can see that when the relative density of rating data is larger than 70%, the performance is very close to that of 100%. That is, the clustering technique could solve the sparsity problem to some extent. In order to measure how the interpolating parameter is to affect the performance of the cluster-based KNN CF, we do the experiment that the parameter λ is ranged from 0.31 to 0.6 and the data set we used is the parameters-tuning dataset. The results are shown in Fig. 3. As shown in the Fig 3, when the interpolating parameter is equal to 0.5, our algorithm could achieve the highest performance.

5 Conclusions In this paper, considering the fact that KNN based collaborative filtering could not achieve good performance on sparse data sets and the clustering based collaborative filtering could not provide the personal recommendation, we propose a unified algorithm that integrates the advantages of both the clustering methods and KNN algorithm. Our proposed algorithm could leverage the information from both the individual users and the clusters/groups they belong to. Furthermore, we propose an iterative clustering algorithm to fully exploit the relationship between the users and the items. Experimental results show that our proposed collaborative filtering algorithm can significantly outperform the traditional collaborative filtering algorithms.

References [1] Arnd Kohrs and Bernard Merialdo. Clustering for collaborative filtering applications. In Proceedings of CIMCA’99. IOS Press, 1999. [2] Breese, J. S., Heckerman, D., and Kadie, C. Empirical Analysis of Predictive Algorithms for Collaborative Filtering. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, 1998. 43-52. [3] Balabanovic, M., Shoham, Y. Fab: Content-based, Collaborative Recommendation. Communication of the ACM, Mar. 1997, 40(3): 66-72. [4] Claypool, M., Gokhale, A., Miranda, T., etc. Combining Content-Based and Collaborative Filters in an Online Newspaper. In ACM SIGIR Workshop on Recommender Systems, Berkeley, CA, Aug. 1999. [5] Ester, M., Kriegel, H.P., and Xu, X. Knowledge Discovery in Large Spatial Databases: Focusing Techniques for Efficient Class Identification, In Proceedings of the 4th International Symposium On Large Spatial Databases, Portland, ME, 1995, 67-82. [6] Goldberg, D., Nichols, D., Oki, B. M., and Terry, D. (1992). Using Collaborative Filtering to Weave an Information Tapestry. Communications of the ACM. December.

578

X.-M. Jiang, W.-G. Song, and W.-G. Feng

[7] Herlocker, J., Konstan, J., Borchers, A., and Riedl, J. An Algorithmic Framework for Performing Collaborative Filtering. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Aug. 1999. 230-237. [8] Hill, W., Stead, L., Rosenstein, M., and Furnas, G. (1995). Recommending and Evaluating Choices in a Virtual Community of Use. In Proceedings of CHI ’95. [9] Hofmann, T. Probabilistic latent semantic analysis. In Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence, 1999, 289-296. [10] Thomas Hofmann: Collaborative filtering via gaussian probabilistic latent semantic analysis. SIGIR 2003: 259-266. [11] Hofmann, T., and Puzicha, J. Latent class models for collaborative filtering. In Proceedings of the 16th International Joint Conference on Artificial Intelligence, 1999, 688-693. [12] Joachims, T., Freitag, D., and Mitchell, T. WebWatcher: A Tour Guide for the World Wide Web. In Proceedings of the International Joint Conference on Artificial Intelligence, Aug. 1997. 770-777. [13] Konstan, J., Miller, B., Maltz, D., et al. GroupLens: Applying collaborative filtering to Usenet news. Communications of the ACM, Mar. 1997, 40(3): 77-87. [14] Lieberman, H., Dyke, N. van, and Vivacqua, A. Let’s browse: a collaborative Web browsing agent, In Proceedings of the International Conference on Intelligent User Interfaces, Jan. 1999. 65-68. [15] Lieberman, H. Letizia: an agent that assists web browsing. In Proceedings International Conference on AI, Montreal, Canada, Aug. 1995. 924-929. [16] Mobasher, B., Colley, R., and Srivastava, J. Automatic personalization based on Web usage mining. Communications of the ACM, Aug. 2000, 43(8): 142-152. [17] Mladenic, D. Personal WebWatcher: design and implementation. Technical Report IJSDP-7472, J. Stefan Institute, Department for Intelligent Systems, Ljubljana, 1998. [18] M. O’Conner and J. Herlocker. Clustering items for collaborative filtering. In Proceedings of the ACM SIGIR Workshop on Recommender Systems, Berkeley, CA, August 1999. [19] Pazzani, M., Muramatsu, J., and Billsus, D. Syskill & Webert: identifying interesting web sites. In Proceedings of the 13th National Conference on Artificial Intelligence, 1996. 54-61. [20] Resnick, P., Iacovou, N., Sushak, M., Bergstrom, P., and Riedl, J. GroupLens: An Open Architecture for Collaborative Filtering of Netnews, In Proceedings of the Conference on Computer Supported Collaborative Work, 1994. 175-186. [21] Sarwar, B. M., Karypis, G., Konstan, J. A., and Riedl, J. Application of Dimensionality Reduction in Recommender System - A Case Study. In ACM WebKDD Workshop, 2000. [22] Sarwar, B., Karypis, G., Konstan, J., et al. Item based collaborative filtering recommendation algorithms. In Proceedings of the 10th International World Wide Web Conference, 2001. 285-295. [23] Sarwar, B. M., Karypis, G., Konstan, J. A., and Riedl, J. Analysis of Recommendation Algorithms for E-Commerce. In Proceedings of the ACM EC'00 Conference, Minneapolis, MN. 2000. 158-167. [24] Shardanand, U., and Maes, P. (1995). Social Information Filtering: Algorithms for Automating 'Word of Mouth'. In Proceedings of CHI ’95. Denver, CO. [25] Ungar, L. H. and Foster, D. P. 1998 Clustering methods for collaborative filtering. In Proc. Workshop on Recommendation Systems at the 15th National Conf. on Artificial Intelligence. Menlo Park, CA: AAAI Press.

Extracting Semantic Relationships Between Terms from PC Documents and Its Applications to Web Search Personalization Hiroaki Ohshima, Satoshi Oyama, and Katsumi Tanaka Department of Social Informatics, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan TEL: +81-75-753-5385, FAX: +81-75-753-4957 {ohshima, oyama, tanaka}@dl.kuis.kyoto-u.ac.jp

Abstract. A method is described for extracting semantic relationships between terms appearing in documents stored on a personal computer; these relationships can be used to personalize Web search. It is based on the assumption that the information a person stores on a personal computer and the directory structure in the PC reflect, to some extent, the person’s knowledge, ideology, and concept classification. It works by identifying semantic relationships between the terms in documents on the PC; these relationships reflect the person’s relative valuation of each term in a pair. The directory structure is examined to identify the deviations in the appearance of the terms within each directory. These deviations are then used to identify the relationships between the terms. Four relationships are defined: broad, narrow, co-occurrent, and exclusive. They can be used to personalize Web search through, for example, expansion of queries and re-ranking of search results.

1

Introduction

As Internet access has expanded globally, the Internet has become the primary means of communication for many people. As a result, people exchange vast amounts of personal information through their computers. A “personal” computer consequently holds numerous types of personal content such as e-mail messages, MS-Word documents, PDF documents, and locally saved Web files. Such content is typically categorized and stored in a user-defined directory structure. Although this structure might not be well defined, it and the stored documents should reflect, to some extent, the user’s interests and concept classification. That is, the stored content and directory structure will probably reflect the kind of things in which the user is interested, what they already know, and how they relate concepts. However, such information is simply stored in the user’s PC; it is not used to enhance the user’s activities on the PC. For example, even though a computer contains a great deal of information about the user, conventional Web search engines (e.g., Google) simply accept query words without having any way of X. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 579–590, 2006. c Springer-Verlag Berlin Heidelberg 2006 

580

H. Ohshima, S. Oyama, and K. Tanaka

considering what the user already knows or what the query words mean to the user. This is because user-profile information reflecting the user’s views is not generated. We have developed a method for developing a user profile by extracting semantic relationships between terms in the documents stored in the user’s PC. The deviations in the appearance of terms in each directory are used to estimate the relationships between any pari of terms. Tow axes are used: – broad / Narrow and – co-occurrent / Exclusive. These relationships can be treated by the computer as user-profile information, enabling it to take into account for many applications the user’s interests, intentions, and knowledge. For example, Web search can be personalized by expanding the query and re-ranking the search results.

2

Semantic Relationships Between Terms

2.1

Deviations in the Appearance of Terms

Related terms can have various relationships. For example, one might be used in more documents than the other. Consider the relationship between “baseball” and “MLB” (“major league baseball”). The former has broader usage, and documents containing “MLB” will likely also contain “baseball”, while documents containing “baseball” will less likely contain “MLB”. By focusing on the deviations in the appearance of terms in each directory, we can estimate the person’s relative valuation of each term in a pair, i.e., the semantic relationships between terms. An example distribution of terms in a directory is shown in Figure 1. Directory A has two sub-directories containing documents. Key terms from some of the documents are shown in the figure. Both “classic” and “Bach” are evenly distributed between the two sub-directories and in the main directory, while “piano” and “violin” are unevenly distributed. We can thus say that “classic” is used more Directory A

Directory B Bach Violin Violin Classic

Y is Broader than X

Bach

Classic

Classic

Bach

Directory C Classic

Violin Bach

X and Y are Exclusive

X and Y are CoCo-occurrent

Piano Bach Piano Piano Classic

X= Y=

Piano

Y is Narrower than X

Fig. 1. Distribution of terms in directories

Fig. 2. Relationship between term Y and term X

Extracting Semantic Relationships Between Terms from PC Documents

581

broadly than “piano” and “violin” and that “classic” is co-occurrent with “Bach”. This means that we should be able to identify useful relationships between two terms based on the deviations in their distribution in the directory structures. We can plot the two relationships described above as shown in Figure 2. The vertical axis represents the broad/narrow relationship, and the horizontal one represents the co-occurrent/exclusive relationship. Each of the four ovals indicates a certain directory, and the triangles and circles shown how the terms X and Y are distributed. The oval at the top shows the case where term Y has a “broad” relationship to term X. In this directory, term X appears unevenly and term Y appears evenly. The one at the bottom shows the case where term Y has a “narrow” relationship to term X. In this directory, term X appears evenly and term Y appears unevenly. When the distributions of two terms differ in this manner, we can categorize their relationship using the concepts of “broad” and “narrow”. The oval on the right shows the case where term X and term Y have a “cooccurrent” relationship. In this directory, both terms appear evenly. The oval on the left shows the case where term X and term Y have an “exclusive” relationship. In this directory, both terms appear unevenly and are not co-occurrent. When the distributions of two terms are similar in this manner, the relationship is “co-occurrent” or “exclusive”. 2.2

2.2

Directories as Current Context

A PC user usually stores documents related to numerous fields, and the documents likely contain many multisense words. This means that a word used with the same meaning in two fields can be co-occurrent with quite different words, depending on the field. For example, for the “dance” and “music” fields, “classic” co-occurs with quite different words. This means that the relationship between “classic” and another term depends on the context, so it is important to extract the semantic relationships between terms based on the user’s current context. By “user’s current context” we mean the field he or she currently has in mind. We treat a directory as the user’s current context. A directory typically contains documents related to a field with meaning to the user, so if he or she somehow indicates a certain directory as the current context, the semantic relationships between terms can be extracted. Extracting semantic relationships between terms, as described above, can be done in any directory with sub-directories. For a certain directory D, term X may be regarded as a broader term than term Y, while in a different directory D′, the same term X may be regarded as a narrower term than term Y. In short, the semantic relationships between terms can be extracted based on three parameters: two terms and a directory indicated by the user.

2.3

Term Frequency

In addition to considering the differences in the deviations of terms and the user’s current context, we also need to consider term frequency.


The deviations in the appearance of terms and term frequency can be calculated independently. As described in section 2.1, we identify the relationship most appropriate for two terms, and, at the same time, the degree of the relationship is adjusted upwards for the more frequent term. Since very general terms appear quite frequently in almost any type of document, the degree for these terms in any relationship must be adjusted downwards based on a concept such as inverse document frequency (IDF).

3

Algorithm and Implementation

3.1

Basic Algorithm to Extract Semantic Relationships

There are various ways to calculate the semantic relationships between terms. Before explaining our approach, we describe the basic algorithm we use to extract semantic relationships between terms. First, we define some symbols.

– D is a directory; X and Y are terms.
– Sub_i(D) denotes D or a sub-directory under D.
– V_RelationName(X, Y, D) denotes the degree of the relationship RelationName (broad, narrow, co-occurrent, or exclusive) between X and Y in D.
– Num(D) is the number of documents directly stored in D.
– Num(X, D) is the number of documents containing X that are directly stored in D.

Next, we give the basic algorithm.

1. Calculate the deviations in the appearance of X and Y under a user-specified D, which indicates the current context.
2. Calculate the relationships between X and Y based on the deviations.
3. Calculate the term frequencies for X and Y.
4. Calculate the degree for each relationship between terms (V_RelationName).

Various implementations are possible for each step.

3.2

Deviations in the Appearance of Terms

We calculate the deviations using Gini coefficients, which are typically used to measure income inequality in economics. They range from 0 to 1, where “0” means equitable and “1” means inequitable. We use them because they are a confirmed statistical method and their values are distributed from 0 to 1. A Gini coefficient (GC) is given by

GC = \frac{\sum_i \sum_j |q_i - q_j|}{2(N-1)\sum_i q_i},

where the q_i are numeric data points and N is the total number of data points. A Gini coefficient ranges from 0 to 1 regardless of N.
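As a concrete illustration, the following Python function is a minimal sketch of this formula; the function name and data layout are our own assumptions, not code from the paper.

```python
def gini(q):
    """Gini coefficient of a list of non-negative data points q_1..q_N."""
    n, total = len(q), sum(q)
    if n <= 1 or total == 0:
        return 0.0
    pairwise = sum(abs(a - b) for a in q for b in q)  # sum_i sum_j |q_i - q_j|
    return pairwise / (2 * (n - 1) * total)

# Evenly distributed data gives 0; a single non-zero point gives 1,
# regardless of the number of data points N.
assert gini([1, 1, 1, 1]) == 0.0
assert gini([1, 0, 0, 0]) == 1.0
```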


Fig. 3. Directory structure

Fig. 4. Sample bar chart of appearance ratio

To use Gini coefficients to calculate deviations in the appearance of terms, the data need to be prepared. We use the ratio of the number of times each term appears in each directory as the data. Let’s say that the current target directory is indicated by “A” in Figure 3. There are documents in “A” and in each sub-directory. For example, if there are ten documents in “A” and two of them include the target term, the appearance ratio for “A” is 1/5. The ratio for each sub-directory is computed in the same way. Figure 4 is an example bar chart of the ratio for four directories. In this example, “E” has the highest ratio and “A” has the lowest one. There is no bar for “C”, which means that no documents are directly stored in “C”. The large disparity in the bar heights means that the target term is unevenly distributed. If the target term were evenly distributed, the bars would have similar heights. The appearance ratio of target term X in Sub_i(D) is computed using

P(X, Sub_i(D)) = \frac{Num(X, Sub_i(D))}{Num(Sub_i(D))}.

This ratio can be used as the data to calculate the Gini coefficients, and the number of data points with the value P(X, Sub_i(D)) is Num(Sub_i(D)). Therefore, the formula for the Gini coefficient of X in D is

GC(X, D) = \frac{\sum_i \sum_j |P(X, Sub_i(D)) - P(X, Sub_j(D))| \cdot N_i N_j}{2 \left( \sum_k Num(Sub_k(D)) - 1 \right) \cdot \sum_k Num(X, Sub_k(D))},

where N_i = Num(Sub_i(D)) and N_j = Num(Sub_j(D)). This is the method used to calculate deviations in the appearance of terms. The deviation for X in D is represented by GC(X, D); GC(X, D) approaches 1 as the deviation becomes larger and approaches 0 as it becomes smaller.

3.3

Relationships Based on Deviations

The degree for each relationship between terms can be calculated based only on the deviations in the appearance of terms. Let’s focus on Y’s relationships to X and represent them as R_RelationName.


For the “broad” relationship, when the deviation for X is larger and that for Y is smaller, R_broad is larger. We need such a function using a Gini coefficient for each relationship. Our implementation is as follows.

R_broad(X, Y, D) = GC(X, D) · (1 − GC(Y, D))
R_narrow(X, Y, D) = (1 − GC(X, D)) · GC(Y, D)
R_co-occurrent(X, Y, D) = (1 − GC(X, D)) · (1 − GC(Y, D))
R_exclusive(X, Y, D) = GC(X, D) · GC(Y, D)
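The four degrees can be written as a small helper; this is a sketch with assumed names, where gc_x and gc_y stand for GC(X, D) and GC(Y, D).

```python
def relationship_degrees(gc_x, gc_y):
    """Degrees of Y's relationships to X based only on the two deviations."""
    return {
        "broad":        gc_x * (1 - gc_y),
        "narrow":       (1 - gc_x) * gc_y,
        "co-occurrent": (1 - gc_x) * (1 - gc_y),
        "exclusive":    gc_x * gc_y,
    }
```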

3.4

Term Frequency

As described above, the term frequency also affects the semantic relationships. There are certainly many methods for measuring term frequency. We count the number of documents that contain the target term and normalize it by the number of documents, i.e.,

DF(X, D) = \frac{\sum_k Num(X, Sub_k(D))}{\sum_k Num(Sub_k(D))}.

Furthermore, we need to reduce the effects of terms that are too general. Inverse document frequency (IDF) is one of the most common methods for doing this. IDF is defined as follows:

IDF(X) = \log \left( \frac{1}{DF(X, RootDirectory)} \right),

where RootDirectory is the top directory. The IDF is computed using all the documents in a PC. The coefficient of term frequencies, T(X, Y, D), is computed by multiplying all of them:

T(X, Y, D) = DF(X, D) · IDF(X) · DF(Y, D) · IDF(Y)
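A minimal sketch of these weights in Python; the function names and the counts layout are assumptions carried over from the earlier sketches.

```python
import math

def df(counts):
    """DF(X, D): fraction of documents under D (including D) that contain X."""
    total = sum(n for n, _ in counts)
    with_x = sum(w for _, w in counts)
    return with_x / total if total else 0.0

def idf(df_root):
    """IDF(X) = log(1 / DF(X, RootDirectory)); df_root must be positive."""
    return math.log(1.0 / df_root)

def term_frequency_coefficient(df_x, idf_x, df_y, idf_y):
    """T(X, Y, D) = DF(X, D) * IDF(X) * DF(Y, D) * IDF(Y)."""
    return df_x * idf_x * df_y * idf_y
```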

3.5

Degrees of Relationships

The degrees of Y’s relationships to X are calculated using

V_RelationName(X, Y, D) = R_RelationName(X, Y, D) · T(X, Y, D).

Let’s compute the semantic relationships between the terms in the example directory shown in Figure 5. As term frequency is calculated for the whole directory, we are concerned here only with the relationships computed based on the differences in the distributions. Directory “A” is a top-level directory and contains two sub-directories. Sub-directory “B” contains four documents, and “Classic”, for example, appears twice in “B”. “A” is the user-specified directory and thus indicates the current context.


Fig. 5. Example directory used to calculate relationships (Directory A: Total = 8, Classic = 4, Bach = 4; Directory B: Total = 4, Classic = 2, Bach = 2, Violin = 4; Directory C: Total = 4, Classic = 2, Bach = 2, Piano = 4)

The deviations in the appearance of the terms are computed as follows.

GC(Classic, A) = GC(Bach, A) = 0
GC(Violin, A) = GC(Piano, A) = 0.8

The four relationships computed between “Classic” and “Violin” are

R_broad(Classic, Violin) = 0 · 0.2 = 0
R_narrow(Classic, Violin) = 1 · 0.8 = 0.8
R_co-occurrent(Classic, Violin) = 1 · 0.2 = 0.2
R_exclusive(Classic, Violin) = 0 · 0.8 = 0.

The results indicate that “Violin” is narrow in relationship to “Classic”. Some example meaningful relationships are as follows.

R_narrow(Classic, Piano) = 0.80
R_co-occurrent(Classic, Bach) = 1.00
R_exclusive(Violin, Piano) = 0.64
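Plugging the document counts from Fig. 5 into the sketches above (our hypothetical gc and relationship_degrees helpers, not code from the paper) reproduces these values:

```python
# (num_docs, num_docs_with_term) for directories A, B, C of Fig. 5
classic = [(8, 4), (4, 2), (4, 2)]
violin  = [(8, 0), (4, 4), (4, 0)]

print(gc(classic))   # 0.0 -> "Classic" is evenly distributed
print(gc(violin))    # 0.8 -> "Violin" is unevenly distributed
print(relationship_degrees(gc(classic), gc(violin)))
# narrow = 0.8, co-occurrent ~ 0.2 (up to floating-point rounding),
# broad = exclusive = 0, matching the numbers above
```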

4

Web Search Personalization

We can use the semantic relationships between terms to personalize Web searches. In our prototype system, we used them to expand queries and to re-rank search results. The system uses a conventional Web search engine such as Google. A user can enter query terms and expand queries as necessary. The system sends the current query to Google and obtains the results. The user can re-rank the search results as necessary.

4.1

User’s Current Context

As explained in 2.2, the user somehow needs to specify a directory as the current context. This could be done in various ways. For example,


1. The user could explicitly specify a directory.
2. The system could take the directory containing the document the user is reading or editing as the specified one. This would only work if the user is actually reading or editing a document.

4.2

Query Expansion

Queries are expanded as follows.

1. First, the user specifies query term X. The user’s current context, directory D, is determined in some way.
2. The system evaluates the semantic relationships between X and every term in D and identifies the terms that have a large value for a certain relationship with X (see the sketch below).
3. The system adds terms to the current query and sends the expanded query to a conventional Web search engine.
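A minimal sketch of the selection step, assuming a helper degrees(x, y) that returns the relationship degrees of sections 3.3/3.5 for the current context directory; all names here are hypothetical, not part of the paper.

```python
def expand_query(query_term, candidate_terms, degrees, relation="co-occurrent"):
    """Append the candidate term with the largest degree of the chosen
    relationship to the query term."""
    best = max(candidate_terms, key=lambda t: degrees(query_term, t)[relation])
    return f"{query_term} {best}"
```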

4.3

Re-ranking

Search results are re-ranked as follows (sketched below).

1. The system obtains all the content from the result items.
2. It generates a feature vector for each item based on TF-IDF.
3. It generates an evaluation feature vector as follows. Each term X has a value for each semantic relationship to the query word. X’s value in the evaluation vector is defined as the value for a certain semantic relationship between X and the query word.
4. The system compares the evaluation vector, V1, with each item’s feature vector, V2. The cosine similarity between the vectors is defined by Similarity(V1, V2) = (V1 · V2)/(|V1||V2|). The result items are then sorted in order of similarity and displayed to the user.
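A sketch of steps 3 and 4 in Python, with sparse vectors represented as dicts; the helper names and data layout are our assumptions.

```python
import math

def cosine(v1, v2):
    """Cosine similarity between two sparse vectors given as term->weight dicts."""
    dot = sum(w * v2.get(t, 0.0) for t, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def rerank(items, evaluation_vector, feature_vector):
    """Order result items by similarity to the evaluation vector.

    evaluation_vector: term -> degree of the chosen semantic relationship
    with the query word; feature_vector(item): the item's TF-IDF dict.
    """
    return sorted(items,
                  key=lambda it: cosine(evaluation_vector, feature_vector(it)),
                  reverse=True)
```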

4.4

Possible Uses

These four semantic relationships can be used to personalize Web searches in various ways.

Query Expansion

1. Suppose a user enters query word X and gets unacceptable search results. If there is a word Y that often co-occurs with X in the user’s documents, results with both X and Y should be more useful to the user than ones with X and without Y. The system could calculate the values for the “co-occurrent” relationships between X and other terms in the user’s directories and then add the Y with the largest value to the query so that it becomes “X AND Y”.


2. Again suppose a user enters query word X and gets unacceptable search results. If there is a word Y that rarely co-occurs with X in the user’s documents, results with X and not Y should be more useful to the user than ones with both X and Y. The system could calculate the values for the “exclusive” relationships between X and other terms in the user’s directories and then add the one with the largest value to the query so that the query becomes “X AND NOT Y”.
3. Suppose a user wants more detail about a certain term X. Adding a “narrow” term Y to the query should produce more detailed results. Likewise, to obtain more general information about X, a “broad” term Z could be added to produce more general results.

Re-ranking

1. Suppose the search results for an expanded query are still unacceptable for a user because of the order of the result items. The system could generate an evaluation feature vector based on the “co-occurrent” relationship between X and another term, as described in 4.3. Since the “co-occurrent” relationship was used to produce it and the result items are ordered by their similarity with it, result items similar to the documents indicating the user’s current context should move up to higher ranks.
2. Suppose a user wants results that are NOT similar to the documents indicating the user’s current context. In this case, the “exclusive” relationship should be used to generate the evaluation vector.
3. If a user wants results containing detailed information about the query words, the “narrow” relationship should be used for the re-ranking. If more general information is desired, the “broad” relationship should be used.

4.5

Experiments and Discussion

We conducted two experiments. First, we distributed 148 documents among 38 directories to test extracting semantic relationships from documents stored in a PC. The topics of the documents were hobbies and business. There were two sub-directories in the root directory: “business” and “hobby”. The “business” directory contained 88 documents and 21 sub-directories. The “hobby” directory contained 60 documents and 14 sub-directories. Several of these documents were about “French wine”, and few were about “Japanese wine”.

Table 1. Number (ratio) of suitable results for Experiment 1

Results   Original search   After re-ranking
Top 20    10 (0.5)          7 move up
Top 10    4 (0.4)           7 (0.7)
Top 5     2 (0.4)           5 (1.0)


Experiment 1

In the first experiment, we used “Tanaka” as the query term and the “business” directory as the user’s current context. We used the query expansion and re-ranking functions with the “co-occurrent” relationship. First, query expansion was done for “Tanaka”. “Tanaka” is a last name, and there were many documents containing “Katsumi Tanaka” in the user’s directories. Therefore, the “Tanaka” in the query most likely meant “Katsumi Tanaka”. In this case, we wanted terms co-occurring with “Tanaka”, so we used the “co-occurrent” relationship. The additional term identified by the system was “Katsumi”, the best additional term. However, even with “Katsumi Tanaka” as the query, we still had a problem. There are several famous people named “Katsumi Tanaka”, so the Google search results contained many unsuitable items. Table 1 shows the number of results related to the target “Katsumi Tanaka” for the original Google search and the one after re-ranking. The top 20 results for the original ranking contained 10 suitable items. After re-ranking, 7 out of these 10 suitable items moved up to higher ranks than their original ones. While the original top 10 results contained 4 suitable items, the re-ranked top 10 results contained 7 suitable items. While the original top 5 results contained 2 suitable items, all of the items in the re-ranked top 5 were suitable. The results were significantly better.

Experiment 2

In the second experiment, we used “wine & -shopping” as the query and the “hobby” directory as the user’s current context. We used the re-ranking function with the “narrow” relationship. Using the “narrow” relationship means the user wants to get detailed information – in this case, “wine” without “shopping”. The user’s documents contained many references to “French wine” and only a few to “Japanese wine”. This indicates that more suitable information would be about “French wine” than about “Japanese wine”. Table 2 shows the number of results related to “French wine” and the number related to “Japanese wine” for the original Google search and the one after re-ranking. The top 20 results for the original query contained 4 related to “French wine” and 11 related to “Japanese wine”. After re-ranking, 3 out of the 4 “French wine” items moved up to higher ranks than their original ones, and 8 out of the 11 “Japanese wine” items moved down to lower ranks. The relevance ratios for the top 10 and top 5 were improved by re-ranking. Once again, the results were significantly better.

Table 2. Number (ratio) of suitable results for Experiment 2

                  French wine                          Japanese wine
Results    Original search   After re-ranking    Original search   After re-ranking
Top 20     4 (0.2)           3 move up           11 (0.55)         8 move down
Top 10     1 (0.1)           4 (0.4)             5 (0.5)           4 (0.4)
Top 5      1 (0.2)           2 (0.4)             2 (0.4)           1 (0.2)

5

Related Work

Haystack [1] is a personal information management system developed by MIT. It manages e-mails, calendars, personal documents, and Web documents through the Resource Description Framework (RDF). WorkWare++ [2, 3] is groupware developed by Fujitsu Laboratories that enables groups to store and manage business documents. It can manage a great deal of information about documents, people, and events. When the information is stored, metadata such as time is automatically extracted and also stored. By scanning the contents, a user can become aware of what kind of knowledge has been accumulated in a certain field or who knows certain information. These and similar systems have been developed to promote the generation of knowledge in a certain domain or community on a limited scale. In contrast, our method is aimed at extracting the knowledge of any person and using it for many purposes. Sanderson [4] proposed a method for extracting a concept hierarchy based on subset relationships between sets of documents. They attempted to find particular expressions that indicate meaningful relationships in many documents. Glover [5] proposed a method for determining parent, self, and child keywords for a set of Web pages, where self words describe the cluster and parent and child words describe more general and more specific concepts, respectively. Their method is based on the relative frequency of keywords inside and outside the cluster. They also use the textual contexts of links to the pages, called extended anchor text, to discover term relationships. They do not distinguish keywords in different positions when counting occurrences. Oyama [6] proposed a method for identifying pairs of keywords in which one word describes the other. They focused on where words appear in a document, i.e., in the title or in the body. These efforts were aimed at extracting general knowledge from a large quantity of documents. In contrast, our method needs fewer documents and uses the directory structure.

6

Conclusions

We have described a method for extracting semantic relationships between terms appearing in documents stored in a personal computer and its applications to Web search personalization. The relationships are based on the deviations in the appearance of the terms. We identified four semantic relationships, broad, narrow, co-occurrent, and exclusive. They can be used to personalize Web search through, for example, expansion of queries and re-ranking of search results. We presented potential applications of this method and showed experimentally that it can usefully personalize Web search.

Acknowledgments This work was supported in part by the Japanese Ministry of Education, Culture, Sports, Science and Technology under a Grant-in-Aid for Software Technologies


for Search and Integration across Heterogeneous-Media Archives, a Special Research Area Grant-In-Aid For Scientific Research (2) for the year 2005 under a project titled Research for New Search Service Methods Based on the Web’s Semantic Structure (Project No. 16016247; Representative, Katsumi Tanaka), and the Informatics Research Center for Development of Knowledge Society Infrastructure (COE program of Japan’s Ministry of Education, Culture, Sports, Science and Technology).

References

1. E. Adar, D.K., Stein, L.: Haystack: Per-user information environment. Proc. 1999 Conference on Information and Knowledge Management (1999) 413–422
2. Yoshinori Katayama, Fumihiko Kozakura, N.I.I.W., Tsuda, H.: Semantic groupware workware++ and application to knowwho retrieval. IPSJ SIGNotes Contents Fundamental Infology (No.071-002) (2003)
3. Hiroshi Tsuda, K.U., Matsui, K.: Workware: Www-based chronological document organizer. Proc. of APCHI’98 (1998) 380–385
4. Sanderson, M., Croft, B.: Deriving concept hierarchies from text. Proc. of SIGIR’99 (1999) 206–213
5. Eric Glover, David M. Pennock, S.L., Krovetz, R.: Inferring hierarchical descriptions. Proc. of CIKM’02 (2002) 507–514
6. Oyama, S., Tanaka, K.: Query modification by discovering topic from web page structures. Proc. of APWeb 2004 (2004) 553–564

Detecting Implicit Dependencies Between Tasks from Event Logs

Lijie Wen, Jianmin Wang, and Jiaguang Sun

School of Software, Tsinghua University, 100084, Beijing, China
[email protected], {jimwang, sunjg}@tsinghua.edu.cn

Abstract. Process mining aims at extracting information from event logs to capture the business process as it is being executed. In spite of many researchers’ persistent efforts, there are still some challenging problems to be solved. In this paper, we focus on mining non-free-choice constructs, where the process models are represented in Petri nets. There are two kinds of causal dependencies between tasks, i.e., explicit and implicit ones. Implicit dependencies are very hard to mine with current mining approaches. We therefore propose three theorems for detecting implicit dependencies between tasks and give their proofs. The experimental results show that our approach is powerful enough to mine process models with non-free-choice constructs.

1

Introduction

Nowadays, more and more organizations are introducing workflow technology to manage their business processes. Setting up and maintaining a workflow management system requires process models which prescribe how business processes should be managed. Typically, users are required to provide these models. However, constructing process models from scratch is a difficult, time-consuming and error-prone task that often requires the involvement of experts. Furthermore, there are often discrepancies between the predefined models and those really being executed. Process mining is an alternative way to construct process models. It distills process models from event logs that are recorded by most transactional information systems, such as ERP, CRM, SCM and B2B. Process mining can also be used to analyze and optimize predefined process models. In many cases, the benefit of process mining depends on the exactness of the mined models [1]. The mined models should preserve all the tasks and the dependencies between them that are present in the logs. Although much research is done in this area, there are still some challenging problems to be solved [2, 3, 4]. This paper focuses on mining non-free-choice constructs. For process models with this kind of construct, the key factor affecting the exactness of the mined models is whether dependencies between tasks can be mined correctly and completely. There are two kinds of dependencies between tasks, i.e., explicit and implicit ones, which will be introduced in detail later. The rest of this paper is organized as follows. Section 2 gives some preliminaries about process mining. Section 3 defines explicit and implicit dependencies


between tasks and gives all cases in which implicit dependencies must be detected correctly. Section 4 gives three theorems with proofs for detecting implicit dependencies. Experimental results are given in Section 5. Section 6 introduces related work and Section 7 concludes this paper and gives our future work.

2

Preliminaries

In this section, we give some definitions used throughout this paper. Firstly, we introduce a workflow process modelling language and its relevant concepts. Then we discuss the workflow log in detail and give an example.

2.1

WF-Net

In this paper, WF-net is used as the workflow process modelling language, which was proposed in [5]. WF-nets are a subset of Petri nets, which are well understood. Note that Petri nets provide a graphical but formal language designed for modelling concurrency. Moreover, Petri nets provide all kinds of routings supported by workflow management systems in a natural way. Fig. 1 gives an example of a workflow process modelled in WF-net. This model has a non-free-choice construct. The transitions (drawn as rectangles) T1 , T2 , · · ·, T5 represent tasks and the places (drawn as circles) P1 , P2 , · · ·, P6 represent causal dependencies. A place can be used as pre-condition and/or postcondition for tasks. The arcs (drawn as directed edges) between transitions and places represent flow relations. In this process model, there is a non-free-choice construct, i.e., the sub-construct composed of P3 , P4 , P5 , T4 and T5 . For T4 , T5 and the connected arcs, their common input set is not empty but their input sets are not the same.

Fig. 1. An example of a workflow process in WF-net

We adopt the formal definitions and properties (such as soundness and safeness) of WF-nets and SWF-nets from [5, 6]. Some related definitions (such as implicit place), properties and firing rules of Petri nets are also described there. For mining purposes, we demand that each task (i.e., transition) has a unique name in one process model. However, each task can appear multiple times in one process instance because of iterative routings.

2.2

Workflow Log

The goal of process mining is to extract information about processes from transaction logs. We assume that it is possible to record events such that (i) each event refers to a task (i.e., a well-defined step in the process), (ii) each event refers to a case (i.e., a process instance), and (iii) events are totally ordered. Any information system using transactional systems such as ERP, CRM, or workflow management systems will offer this information in some form [6]. By sorting all the events in a workflow log by their process identifier and completion time, we need only consider that an event has just two attributes, i.e., task name and case identifier. Table 1 gives an example of a workflow log.

Table 1. A workflow log for the process shown in Fig. 1

Case id   Task name      Case id   Task name
1         T1             2         T2
1         T3             2         T3
1         T4             2         T5

This log contains information about two cases. The log shows that for case 1, T1 , T3 and T4 are executed. For case 2, T2 , T3 and T5 are executed. In fact, no matter how many cases there are in the workflow log, there are always only two distinct workflow traces, i.e., T1 T3 T4 and T2 T3 T5 . Thus for the process model shown in Fig.1, this workflow log is a minimal and complete one. Here we adopt the definitions of workflow trace and workflow log from [6].
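As an illustration of the log representation just described, the following Python fragment groups events into traces and derives the direct-succession pairs that mining algorithms such as α start from; it is a sketch under our own naming assumptions, not the paper’s implementation.

```python
from collections import defaultdict

def traces(event_log):
    """Group (case_id, task_name) events, already ordered by completion
    time, into one workflow trace per case."""
    per_case = defaultdict(list)
    for case_id, task in event_log:
        per_case[case_id].append(task)
    return list(per_case.values())

def direct_succession(event_log):
    """All pairs (a, b) where b directly follows a in some trace."""
    pairs = set()
    for trace in traces(event_log):
        pairs.update(zip(trace, trace[1:]))
    return pairs

# The log of Table 1 yields the traces T1 T3 T4 and T2 T3 T5 and the
# pairs {(T1, T3), (T3, T4), (T2, T3), (T3, T5)}.
log = [(1, "T1"), (1, "T3"), (1, "T4"), (2, "T2"), (2, "T3"), (2, "T5")]
print(traces(log), direct_succession(log))
```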

3

Dependency Classification

To distill a process model with non-free-choice constructs from event logs correctly, we must find a way to mine all the dependencies between tasks without mistakes. As research results show, not all dependencies can be mined directly at present. Look back at the WF-net shown in Fig. 1. It is a typical WF-net with a non-free-choice construct. Firstly, there is a free choice between T1 and T2 . After one of them is chosen to execute, T3 is executed. Finally, whether T4 or T5 is chosen to execute depends on which one of T1 and T2 has been executed. There is a non-free choice between T4 and T5 . If we apply the α algorithm [6] to the log shown in Table 1, P3 and P4 as well as their input and output arcs cannot be mined. Thus the mined model is not behaviorally equivalent to the original model. It can generate two additional kinds of workflow traces, i.e., T1 T3 T5 and T2 T3 T4 . Obviously, this result is not what we want. In fact, there are two kinds of dependencies between tasks which must be mined from event logs, i.e., explicit and implicit ones. Explicit dependency, which is also called direct dependency, reflects direct causal relationships between tasks. Implicit dependency, which is also called indirect dependency, reflects indirect causal relationships between tasks. As Fig. 1 shows, P2 together with its


surrounding arcs reflects explicit dependencies between T1 and T3 as well as T2 and T3 , while P3 together with its surrounding arcs reflects the implicit dependency between T1 and T4 . If there are only explicit dependencies between tasks in a process model with non-free-choice constructs, most process mining algorithms, such as the α algorithm, can mine it correctly. Otherwise, no algorithm can mine it successfully. Now we investigate what characteristics a process model with implicit dependencies may have. Assume that there is an implicit dependency between A and B. Once A is executed, some other tasks must be executed before B; only after that is B executed. There is no chance that B can directly follow A in some workflow trace. So the implicit dependency between A and B has

Fig. 2. Characteristics of a process model with implicit dependency

Fig. 3. Sound sub-WF-nets with implicit dependency


no chance to be detected. This typical characteristic of a process model with implicit dependency is shown in Fig. 2. The subnet Nc contains at least one task. It takes tokens from P2 and puts tokens into P3 . In the general case, there may be more complicated relationships between Nc and the rest of the process model. However, we only consider the simplest case; other cases can be converted to this case easily. Therefore we need not consider the cases where some tasks outside of Nc take P2 as their input place or P3 as their output place. If there are no other tasks connected to P1 , P2 and P3 , P1 becomes an implicit place. An implicit place does not contribute anything to the behavior of a process model. Any sound mining algorithm should avoid constructing implicit places. By enumerating and analyzing all the cases inherited from the simplest case, all of which refer to the input and output tasks of P1 , the input tasks of P2 and the output tasks of P3 , seven kinds of sub-WF-nets are proven to be possibly sound. All of these seven sub-WF-nets are shown in Fig. 3. Lack of space forbids detailed proofs of the above conclusions. For sub-WF-net (a) shown in Fig. 3, place P1 and its surrounding arcs will not be mined. For (b) and (g), place P1 may be replaced by two or more places. For (c) and (e), the arc (P1 ,B) will be omitted. For (d) and (f), the arc (A,P1 ) will be omitted. Altogether, there are three distinct cases, i.e., (a), (b) and (g), and (c) to (f). All the related theorems and their proofs will be given in the next section.

4

Detecting Implicit Dependencies

From the above sections, it is obvious that the detection of implicit dependencies is the most important factor for mining process models with non-free-choice constructs correctly. In this section, we will introduce all the three theorems and their corresponding proofs in detail. There exists a one-to-one relationship between the three theorems and the above three cases of implicit dependencies. To detect explicit dependencies between tasks, we adopt the α+ algorithm [7]. Some definitions, such as >w , →w , #w , w , etc., are also borrowed from [7] with some modification. Based on these basic ordering relations, we provide some additional new definitions for advanced ordering relations.

Definition 1 (Ordering relations). Let W be a one-length-loop-free workflow log over T. Let a, b ∈ T:

– a w b if and only if there is a trace σ = t1 t2 t3 . . . tn and i ∈ {1, . . . , n − 2} such that σ ∈ W and ti = ti+2 = a and ti+1 = b,
– a >w b if and only if there is a trace σ = t1 t2 t3 . . . tn and i ∈ {1, . . . , n − 1} such that σ ∈ W and ti = a and ti+1 = b,
– a →w b if and only if a >w b and b ≯w a or a w b or b w a,
– a #w b if and only if a ≯w b and b ≯w a,
– a w b if and only if a >w b and b >w a and ¬(a w b or b w a),
– a w b if and only if a #w b and there is a task c such that c ∈ T and c →w a and c →w b,


– a w b if and only if a #w b and there is a task c such that c ∈ T and a →w c and b →w c,
– a !w b if and only if a ≯w b and there is a trace σ = t1 t2 t3 . . . tn and i, j ∈ {1, . . . , n} such that σ ∈ W and i < j and ti = a and tj = b and for any k ∈ {i + 1, . . . , j − 1} satisfying tk = a and tk = b and ¬(tk w a or tk w a),
– a "w b if and only if a →w b or a !w b, and
– a #→w b if and only if b is implicitly dependent on a, or there is an implicit dependency from b to a.

Definitions of w , >w and #w are the same as those defined in [7]. Definitions of →w and w are a little different. Given a complete workflow log of a sound SWF-net [6, 7] and two tasks a and b, a w b and b w a must both come into existence. But for a one-length-loop-free workflow log of a sound WF-net, it is not always true. Now we will turn to the last five new definitions. Relation w corresponds to OR-Split while relation w corresponds to OR-Join. Relation !w represents that one task can only be indirectly followed by another task. Relation "w represents that one task can be followed by another task directly or indirectly. Relation #→w represents implicit dependency between tasks. Consider the workflow log shown in Table 1; it can be represented as string sets, i.e., {T1 T3 T4 , T2 T3 T5 }. From this log, the following advanced ordering relations between tasks can be detected: T1 w T2 , T4 w T5 , T1 !w T4 , T2 !w T5 , T1 #→w T4 and T2 #→w T5 . For any two tasks a and b, a →w b and a #→w b cannot be true at the same time. Here we see that there are implicit dependencies between T1 and T4 as well as T2 and T5 . But in fact, the detection of implicit dependencies is not as simple as this example shows. Before starting to detect implicit dependencies, two auxiliary definitions should be given. They are used to represent the input and output task sets of one task set.

Definition 2 (Input and output task sets). Let W be a one-length-loop-free workflow log over T. Let Ts ⊂ T:

– •Ts = {t|∃t ∈Ts t →w t }, and
– Ts • = {t|∃t ∈Ts t →w t}.

Considering the workflow log shown in Table 1, the following conclusions can be drawn: •{T4 } = {T3 }, •{T5 } = {T3 }, and {T1 , T2 }• = {T3 }. Firstly, we try to detect implicit dependencies from a workflow log of a process model with a sub-WF-net similar to Fig. 3 (b) and (g).

Theorem 1. Let W be a one-length-loop-free workflow log over T. Tw = {t ∈ T |∃σ∈W t ∈ σ}. For any task t such that t ∈ Tw , if there are two tasks t1 and t2 such that t1 ∈ Tw and t2 ∈ Tw and t1 →w t and t2 →w t and t1 #w t2 and {t1 }• = {t2 }•, compute as follows:

– Xw = {(A, B)|A ⊆ Tw ∧B ⊆ Tw ∧t ∈ B∧∀a∈A ∀b∈B a →w b∧∀a1 ,a2 ∈A a1 #w a2 ∧ ∀b1 ,b2 ∈B b1 #w b2 }, and
– Yw = {(A, B) ∈ Xw |∀(A ,B  )∈Xw A ⊆ A ∧ B ⊆ B  ⇒ (A, B) = (A , B  )}.


Then for any (A1 , B1 ), (A2 , B2 ) such that (A1 , B1 ) ∈ Yw and (A2 , B2 ) ∈ Yw and for any task a such that a ∈ A1 and a ∈ A2 , the following formula holds: a ∈A2 a w a ∨ a "w a ⇒ ∀b ∈B2 −{a}• a #→w b . Proof. See the sub-WF-net shown in Fig.4. Task t has two input places, i.e., p1 and p2 . Because a →w t, there must be a workflow trace σ = t1 t2 . . . tn and i ∈ {1, . . . , n − 1} such that ti = a and ti+1 = t. After a is executed, there will be a token in p1 . In order to enable t, there must be a token in p2 too. If a " a, a may have been executed before a is executed. There is a case that after a is executed, there is already a token in p2 . Hence after a is executed, t is enabled and can be executed. If a w a, the similar thing happens. Otherwise, t cannot be executed directly following a, thus here is a contradiction. So if a "w a and a ∦w a, the relation a →w t cannot be mined. In this case, there must be an arc connecting a and p2 directly as the dotted arc shows. Thus, there must be an implicit dependency between a and b , i.e., a #→w b . 

Fig. 4. Sub-WF-net for the proof of Theorem 1

Theorem 1 insures that once there is a place connecting two successive tasks in the mined model and the latter task has more than one input place, the latter task can always have chance to be executed directly following the former task. Secondly, we try to detect implicit dependencies from a workflow log of a process model with a sub-WF-net similar to Fig.3 (c) to (f). Theorem 2. Let W be a one-length-loop-free workflow log over T . Tw = {t ∈ T |∃σ∈W t ∈ σ}. (i) For any task t such that t ∈ Tw , if there are two tasks t1 and t2 such that t1 ∈ Tw and t2 ∈ Tw and t →w t1 and t →w t2 and t1 w t2 , compute as follows: – Xw = {X ⊆ Tw |∀x∈X t →w x ∧ ∀x1 ,x2 ∈X x1 #w x2 }, and – Yw = {X ∈ Xw |∀X  ∈Xw X ⊆ X  ⇒ X = X  }. Then for any task set Y such that Y ∈ Yw , the following formula holds: ∀a,b∈Tw aw b∧∃y∈Y (y w b∨y "w b)∧y∈Y (y w a∨y "w a)∧t !w a ⇒ t #→w a. (ii) For any task t such that t ∈ Tw , if there are two tasks t1 and t2 such that t1 ∈ Tw and t2 ∈ Tw and t1 →w t and t2 →w t and t1 w t2 , compute as follows: – Xw = {X ⊆ Tw |∀x∈X x →w t ∧ ∀x1 ,x2 ∈X x1 #w x2 }, and – Yw = {X ∈ Xw |∀X  ∈Xw X ⊆ X  ⇒ X = X  }.


Then for any task set Y such that Y ∈ Yw , the following formula holds: ∀a,b∈Tw aw b∧∃y∈Y (b w y∨b "w y)∧y∈Y (a w y∨a "w y)∧a !w t ⇒ a #→w t. Proof. (i) See the sub-WF-net shown in Fig.5. After t is executed, there is one token in both p1 and p2 . Because t !w a, there must be a chance that one token appears in p3 and a is enabled. At this time, if y2 or y2 is executed, a is disabled. In this case, b will be executed finally (see sub-condition 2 in the formula). Otherwise, if a is executed, y2 and y2 are both disabled (see sub-condition 3). The token in p2 is left in the model, thus here is a contradiction. There must be an arc connecting p2 and a directly as the dotted arc shows. Thus, there must be an implicit dependency between t and a, i.e., t #→w a. (ii) The proof is similar to (i). 

Fig. 5. Sub-WF-net for the proof of Theorem 2

Theorem 2 insures that once a task takes tokens from one of multiple parallel branches, it together with its parallel tasks must consume tokens from other branches too. Finally, we try to detect implicit dependencies from a workflow log of a process model with a sub-WF-net similar to Fig.3 (a). Theorem 3. Let W be a one-length-loop-free workflow log over T . Tw = {t ∈ T |∃σ∈W t ∈ σ}. For any tasks a and b such that a ∈ Tw and b ∈ Tw and a w b, compute as follows: – Xw = {(A, B)|A ⊆ Tw ∧ B ⊆ Tw ∧ (∀ai ∈A a !w ai ∧ b !w ai ) ∧ (∀bj ∈B a !w bj ∧ b !w bj ) ∧ ∀ai ∈A ∃bj ∈B bj w ai ∧ ∀bj ∈B ∃ai ∈A ai w bj ∧ ∀ai ,aj ∈A ai w aj ∧ ∀bi ,bj ∈B bi w bj }, and – Yw = {(A, B) ∈ Xw |∀(A ,B  )∈Xw A ⊆ A ∧ B ⊆ B  ⇒ (A, B) = (A , B  )}. For any (A, B) such that (A, B) ∈ Yw , compute as follows: – A = {t ∈ Tw |t ∈ A ∧ ∃bj ∈B bj w t ∧ ∃ai ∈A ai "w t}, and – B  = {t ∈ Tw |t ∈ B ∧ ∃ai ∈A ai w t ∧ ∃bj ∈B bj "w t}. The following formulas holds: – ∀ai ∈A • {ai } ⊆ •B ∪ •B  ⇒ a #→w ai , and – ∀bj ∈B • {bj } ⊆ •A ∪ •A ⇒ b #→w bj .


Proof. Here we only consider the simplest case. More complicated cases can be translated into this case by some way. See the sub-WF-net shown in Fig.6. If a is executed, there is a token in p1 . After a period of time, there is a token in p3 . Thus ai is enabled (a !w ai ) and bj is still disabled. If b is executed, there will be one token in both p1 and p2 . After a while, there will be one token in both p3 and p4 . Thus ai and bj are both enabled. Because b !w ai , here is a contradiction. Task ai can only be enabled after a is executed. Because •{ai } ⊆ •{bj }, there must be a place with its surrounding arcs directly connecting a and ai as the dotted part of the figure shows. Thus, there must be an implicit dependency  between a and ai , i.e., a #→w ai .

Fig. 6. Sub-WF-net for the proof of Theorem 3

Theorem 3 ensures that if two exclusive tasks (i.e., involved in an OR-Join) lead to different sets of parallel branches and these two sets together with their tasks satisfy certain conditions (listed in Theorem 3), the mined WF-net is still sound.

5

Experimental Evaluation

Many experiments have been done to evaluate the theorems proposed in the previous section. Some of the experiments are listed below. Fig. 7 (a) shows an original WF-net. After applying the α+ algorithm to its log, the mined model is similar to Fig. 7 (b) except for the two dotted arcs. By further applying Theorem 1, A #→w C is detected. Thus p1 and p2 should be merged together. Finally, the corrected mined model is the same as the original one. Fig. 8 shows the effect of applying Theorem 2. The WF-nets excluding the dotted arcs are mined by the α+ algorithm. The dotted arcs correspond to the

Fig. 7. Detect implicit dependencies by applying Theorem 1


Fig. 8. Detect implicit dependencies by applying Theorem 2

Fig. 9. Detect implicit dependencies by applying Theorem 3

Fig. 10. Detect implicit dependencies by applying Theorem 2 and 3 as well as Theorem 1 and 3

detected implicit dependencies. Thus the WF-nets including the dotted arcs are the same as the original ones. Fig.9 shows the effect of applying Theorem 3. All the implicit dependencies in the WF-nets are detected successfully from the logs.


Fig. 10 (a) shows the effect of applying Theorems 2 and 3 successively. Fig. 10 (b) shows the effect of applying Theorems 1 and 3 successively. The mined WF-nets with implicit dependencies are the same as the original ones. These experimental results, together with the formal proofs, show that our theorems are powerful enough to detect implicit dependencies between tasks.

6

Related Work

Process mining is not a new topic; recently, many researchers have done a lot of work on it [1, 2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]. For earlier related work, please refer to [13]. Schimm presents an approach to mining complete, most specific and minimal block-oriented workflow models from workflow logs [1], which is a special kind of data mining. van der Aalst et al. present a common format for workflow logs, discuss the most challenging problems and review some of the workflow mining approaches available today [2]. In [3], the authors discuss the main issues around process mining, which include mining non-free-choice constructs. de Medeiros et al. point out that all the existing heuristic-based mining algorithms have their limitations [4]. After the presentation of the α algorithm first proposed in [6], they classify the process constructs that are difficult for it to handle and discuss how to extend it. After that, they propose a new algorithm named α+ to deal with so-called short loops [7] and implement it in a visual process mining tool named EMiT [8]. In [9], Herbst et al. describe the main requirements for an interactive workflow mining system and how they derived them. Also, they implement the first prototype of an interactive workflow mining system called ProTo. In [10], they give an overview of the algorithms that were implemented within the InWoLvE workflow mining system. These algorithms can mine process models with multiple tasks having the same name. In [11, 12], Pinter et al. view the execution of an activity as a time interval, and present two new algorithms for synthesizing process models. Their approach can detect parallelism explicitly, which is similar to the approach proposed in [13]. Greco et al. investigate data mining techniques for process mining and provide an effective technique based on the rather unexplored concept of clustering workflow executions [14]. In [15], the authors propose an algorithm to discover workflow patterns and workflow termination states and then use a set of rules to mine the workflow transactional behavior. However, none of those works discusses how to mine process models with non-free-choice constructs or how to detect implicit dependencies between tasks.

7

Conclusion and Future Work

In this paper, we try to detect implicit dependencies between tasks from event logs. Dependencies between tasks are classified into two classes, i.e., explicit and implicit ones. For mining process models with non-free-choice constructs, the detection of implicit dependencies is an important success factor. There are totally


seven kinds of sound sub-WF-nets and they are grouped into three cases. Thus we propose three theorems for handling these cases and give their proofs. Experimental evaluations show that our theorems are suitable for detecting implicit dependencies between tasks. Our future work will mainly focus on the following two aspects. Firstly, we will extend the α+ algorithm to support our theorems, so that we will be able to mine WF-nets involving implicit dependencies directly by the extended algorithm. Secondly, we will further investigate other factors affecting the mining of process models with non-free-choice constructs.

Acknowledgement This work is supported by the 973 Project of China (No. 2002CB312006) and the Project of National Natural Science Foundation of China (No. 60373011).

References

1. Schimm, G.: Mining exact models of concurrent workflows. Computers in Industry 53 (2004) 265–281
2. van der Aalst, W.M.P., van Dongen, B.F., Herbst, J., Maruster, L., Schimm, G., Weijters, A.J.M.M.: Workflow mining: A survey of issues and approaches. Data & Knowledge Engineering 47 (2003) 237–267
3. van der Aalst, W.M.P., Weijters, A.J.M.M.: Process mining: a research agenda. Computers in Industry 53 (2004) 231–244
4. de Medeiros, A.K.A., van der Aalst, W.M.P., Weijters, A.J.M.M.: Workflow Mining: Current Status and Future Directions. In: Meersman, R., et al. (Eds.): CoopIS/DOA/ODBASE 2003. LNCS, Vol. 2888. Springer-Verlag, Berlin Heidelberg (2003) 389–406
5. van der Aalst, W.M.P.: The Application of Petri Nets to Workflow Management. The Journal of Circuits, Systems, and Computers 8(1) (1998) 21–66
6. van der Aalst, W.M.P., Weijters, A.J.M.M., Maruster, L.: Workflow Mining: Discovering Process Models from Event Logs. IEEE Transactions on Knowledge and Data Engineering 16(9) (2004) 1128–1142
7. de Medeiros, A.K.A., van Dongen, B.F., van der Aalst, W.M.P., Weijters, A.J.M.M.: Process Mining for Ubiquitous Mobile Systems: An Overview and a Concrete Algorithm. UMICS (2004) 151–165
8. van Dongen, B.F., van der Aalst, W.M.P.: EMiT: A Process Mining Tool. In: Cortadella, J. and Reisig, W. (Eds.): ICATPN 2004. LNCS, Vol. 3099. Springer-Verlag, Berlin Heidelberg (2004) 454–463
9. Hammori, M., Herbst, J., Kleiner, N.: Interactive Workflow Mining. In: Desel, J., Pernici, B., and Weske, M. (Eds.): BPM 2004. LNCS, Vol. 3080. Springer-Verlag, Berlin Heidelberg (2004) 211–226
10. Herbst, J., Karagiannis, D.: Workflow mining with InWoLvE. Computers in Industry 53 (2004) 245–264
11. Golani, M., Pinter, S.S.: Generating a Process Model from a Process Audit Log. In: van der Aalst, W.M.P., et al. (Eds.): BPM 2003. LNCS, Vol. 2678. Springer-Verlag, Berlin Heidelberg (2003) 136–151


12. Pinter, S.S., Golani, M.: Discovering workflow models from activities’ lifespans. Computers in Industry 53 (2004) 283–296
13. Lijie, W., Jianmin, W., van der Aalst, W.M.P., Zhe, W., and Jiaguang, S.: A Novel Approach for Process Mining Based on Event Types. BETA Working Paper Series, WP 118, Eindhoven University of Technology, Eindhoven, 2004.
14. Greco, G., Guzzo, A., et al.: Mining Expressive Process Models by Clustering Workflow Traces. In: Dai, H., Srikant, R., and Zhang, C. (Eds.): PAKDD 2004. LNAI, Vol. 3056. Springer-Verlag, Berlin Heidelberg (2004) 52–62
15. Gaaloul, W., Bhiri, S., Godart, C.: Discovering Workflow Transactional Behavior from Event-Based Log. In: Meersman, R., Tari, Z. (Eds.): CoopIS/DOA/ODBASE 2004. LNCS, Vol. 3290. Springer-Verlag, Berlin Heidelberg (2004) 3–18

Implementing Privacy Negotiations in E-Commerce

Sören Preibusch

German Institute for Economic Research, Königin-Luise-Str. 5, 14191 Berlin, Germany
[email protected]

Abstract. This paper examines how service providers may resolve the trade-off between their personalization efforts and users’ individual privacy concerns. We analyze how negotiation techniques can lead to efficient contracts and how they can be integrated into existing technologies to overcome the shortcomings of static privacy policies. The analysis includes the identification of relevant and negotiable privacy dimensions for different usage domains. Negotiations in multi-channel retailing are examined as a detailed example. Based on a formalization of the user’s privacy revelation problem, we model the negotiation process as a Bayesian game where the service provider faces different types of users. Finally an extension to P3P is proposed that allows a simple expression and implementation of negotiation processes. Support for this extension has been integrated in the Mozilla browser.

1 Introduction

Online users are facing a large and increasing complexity of the web, due to its size and its diversity. In online retailing, stores are constantly expanding their assortments in width, depth and quality levels, making it impossible for users to examine all possible alternatives. The user may be offered effective guidance through automated recommender systems; her appreciation of personalized websites [11] and their economic benefits for service providers have been verified empirically [2, 10, 13]. Several recommendation strategies have been developed in the past; an overview can be found in [14]. All these techniques have in common a need for personal data, which is either collected explicitly or inferred. Thus a personalized (or user-adaptive) system intrinsically has to deal with privacy issues, especially if personal data is stored and not volatile or only saved for the current session. A common way for websites to communicate their privacy principles is to post “privacy policies”. Our contribution is to depict how negotiation techniques can overcome current drawbacks of static privacy policies, and how these negotiations can be implemented using existing technologies (reviewed in section 2). In sections 3 and 4, we examine negotiable privacy dimensions and present the optimization calculi of the user and the service provider respectively, based on a formalization of privacy negotiations. Section 5 explains how privacy negotiations can be implemented using P3P and illustrates a negotiation scenario in multi-channel retailing. The paper concludes with a summary and outlook in section 6.


2 Related Work

The privacy-personalization trade-off as presented above has led to several approaches both in research and in practice, among them the Platform for Privacy Preferences developed by the World Wide Web Consortium (W3C) [19], and the Enterprise Privacy Authorization Language (EPAL) developed by IBM [7]. P3P has been a W3C Recommendation since 2002 and aims “to inform Web users about the data-collection practices of Web sites” [20]. P3P has become widely adopted by service providers but it remains restricted to the “take-it-or-leave-it” principle: the service provider offers a privacy policy; the potential user can either accept or reject it as a whole. A negotiation process between the involved parties is not intended. Although the first drafts of the P3P specification included negotiation mechanisms, these parts had been removed in favour of easy implementation and early and wide adoption of the protocol. The latest P3P 1.1 specification [20] does not mention negotiations either. In addition to the P3P specification, the W3C conceived APPEL 1.0, A P3P Preference Exchange Language 1.0 [18]. APPEL is a language “for describing collections of preferences regarding P3P policies between P3P agents”. APPEL is primarily intended as a transmission format and a machine-readable expression of a user’s preferences. Given a P3P privacy policy, it may be evaluated against a user-defined ruleset to determine whether her preferences are compatible with the service provider’s intentions for data. Though standard behaviours and basic matching operations are supported by APPEL, its applications are still limited and the capability of expressing negotiation strategies is explicitly excluded from the language’s scope. Using APPEL as a negotiation protocol is neither supported by its semantics nor is the language designed for this purpose. EPAL allows enterprises to express data handling practices in IT systems. The developed policies are intended “for exchanging privacy policy in a structured format between applications or enterprises” [7]. The language focuses on the internal business perspective and is not intended for customers to express their own privacy preferences. Although EPAL is not suited for the direct dialogue with the end-user – which is needed for negotiation – privacy guarantees towards customers can sometimes be deduced from the stated internal procedures and then be expressed in P3P policies. In parallel to the development of privacy-related technologies and research both in online and offline IT-based transactions, negotiation has been studied in various disciplines. The foundations were laid in game theory, where negotiation is modelled as a bargaining game [8, 16]. Recent influences have arisen with the increasing importance of autonomous agents and collaborative computing [4]. Frameworks for carrying out negotiations have been developed [12]. The rapid development of the Grid and service-based IT architectures on the technical side, and the continuing outsourcing of processes to third parties on the economic side, combined with mobile and ubiquitous computing, will make Privacy Negotiation Technologies gain in importance in the near future [9, 21].


3 Privacy Negotiations

Thompson states that negotiations are an “interpersonal decision-making process necessary whenever we cannot achieve our objectives single-handedly” [17]. Especially in the case of integrative negotiations, negotiations can unleash the integrative potential that lies in conflicting interests and preferences and turn it into efficient contracts. Two major shortcomings of current online privacy handling mechanisms can be overcome if privacy negotiation processes are implemented during the transaction between the service provider (seller) and the user (buyer). The first shortcoming is the “take-it-or-leave-it” principle, i.e. the user can only accept or refuse the provider’s proposal as a whole. The provider is always the one who moves first, as he makes the initial offer; the user cannot take the initiative. The second shortcoming is the “one-size-fits-all” principle: once the service provider has designed its privacy policy, it will be proposed to all interested users – no matter what their individual preferences are. There may be users who would have accepted offers with less privacy protection and would have agreed to the provider’s proposal even if more personal data had been asked for. Thus, the provider fails to tap the users’ full potential.

3.1 Individualized Privacy Policies

Adopting a broader view and extending the analysis from a single service provider to the whole market, providers specializing in different privacy levels might be an option. Since the number of service providers (as discrete units) is much smaller than the number of potential privacy preferences, which can be seen as quasi-continuous due to the large number of gradations for all considerable privacy dimensions, such a specialization is not trivial. Consider n service providers and m ≫ n users having different privacy levels with a known distribution. Hence, a given service provider will target more than one privacy level. This may be implemented by giving the users the choice between a set of usage scenarios corresponding to different amounts of personal data to be collected. As the differences between these usage scenarios have to be clearly communicated and the maintenance of one scenario induces costs for the service provider, the set of scenarios will be limited in size to a few possibilities. A current example of this strategy is the search site A9.com, a wholly owned subsidiary of Amazon.com, Inc. A9.com offers a highly personalized version of its services, which is the standard setting. Users more concerned about their privacy can switch to an alternative service where the data collection and use is limited to a minimum and no personalization is implemented. The notable difference between the offered privacy levels is part of the service provider’s user discrimination strategy and aims at a successful self-selection of the potential users. Hence, even under market-driven specialization and alternative usage scenarios, the user still faces fixed policies and the main problem persists.


3.2 Negotiable Privacy Dimensions

As it is not feasible to negotiate the entire privacy policy, one important step is to identify relevant and negotiable privacy dimensions. We define a privacy dimension as one facet of the multi-dimensional concept 'user privacy'. For each dimension, different discrete revelation levels exist, monotonically associated with the user's willingness to reveal the data. Privacy dimensions can be identified at different degrees of granularity. Based on the semantics of P3P, a priori all non-optional parts of a P3P privacy STATEMENT are possible negotiable privacy dimensions: the RECIPIENT of the data, the PURPOSE for which the data will be used, the RETENTION time, and the kind of DATA that will be collected. As shown by empirical studies, these four generic dimensions (recipient, purpose, retention, and data) reflect privacy aspects users are concerned about. Moreover, they are in accordance with European privacy legislation [5, 6]. Obviously, the importance of each of the four dimensions as perceived by the users, as well as their respective willingness to provide information, depends on the thematic domain of the service. Some recent work proposed to negotiate the recipient of the data in different application scenarios, among them medical help [21], distance education [22], and online retailing [4]. We will focus on negotiating the amount of data to be revealed (see Section 3.4).

3.3 Privacy vs. Personalization – User's Individual Utility Calculus

In order to model the user's individual trade-off between personalization and privacy, we present it as a utility maximization problem, taking into account different overall sensitivity levels towards privacy and the different importance one may assign to a specific privacy dimension. The formalization allows solving the negotiation game presented in Section 4, giving the service provider the opportunity to choose its optimal strategy. We denote the user's utility by U, using the following notation:

- D^n is an n-dimensional privacy space and d_i ∈ D are its privacy dimensions
- a_i is the user's data revelation level on dimension d_i
- a_i^T is a threshold indicating the minimum required data the user must reveal
- α_i is a weighting of dimension d_i
- γ indicates the user's global privacy sensitivity
- R is the discount provided by the service provider
- P are other, non-monetary personalization benefits
- B is the base utility from the execution of the contract

Using this notation, the user's utility can be expressed by:

\[ U(\cdot) \;=\; -\gamma \cdot \prod_{i=1}^{n} a_i^{\alpha_i} \;+\; P(a_1, \ldots, a_n) \;+\; R(a_1, \ldots, a_n) \;+\; B . \tag{1} \]

If the user is not willing to provide sufficient data for the contract to be executed, the base utility B and the discount R will be zero (2). The user receives the personalization benefits P even if the involved parties do not conclude a contract.


If P is smaller than the disutility the user incurs from providing the necessary data, the user will prefer unpersonalized usage of the services (3).

\[ R(a_1, \ldots, a_n) = 0 \;\wedge\; B = 0 \;\Longleftarrow\; \exists\, i : a_i < a_i^{T} . \tag{2} \]

\[ P(a_1, \ldots, a_n) \;<\; -\gamma \cdot \prod_{i=1}^{n} a_i^{\alpha_i} \;\Longrightarrow\; \text{unpersonalized usage preferred.} \tag{3} \]
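The following Python sketch is not from the paper; all numeric values are illustrative assumptions. It only shows how the utility calculus of equations (1)–(3) can be evaluated for one user: the Cobb-Douglas disutility term, the threshold condition (2), and a participation test in the spirit of (3), read as "the personalization benefit fails to offset the disutility".

```python
from math import prod

def utility(a, alpha, gamma, P, R, B, a_threshold):
    """Evaluate the user's utility U(.) of equation (1).

    a           : revelation levels a_i per privacy dimension
    alpha       : weightings alpha_i (part of the user's type)
    gamma       : global privacy sensitivity (part of the user's type)
    P, R        : personalization benefit / discount, as functions of a
    B           : base utility of executing the contract
    a_threshold : minimum levels a_i^T required by the provider
    """
    disutility = gamma * prod(ai ** wi for ai, wi in zip(a, alpha))
    # Condition (2): below any threshold there is no contract, so R = 0 and B = 0.
    contract_feasible = all(ai >= t for ai, t in zip(a, a_threshold))
    discount = R(a) if contract_feasible else 0.0
    base = B if contract_feasible else 0.0
    return -disutility + P(a) + discount + base

# Illustrative (assumed) numbers: two dimensions, e.g. 'age' and 'address'.
alpha, gamma = [0.6, 0.4], 0.5
P = lambda a: 0.2 * sum(a)      # assumed personalization benefit
R = lambda a: 1.0 * sum(a)      # assumed discount schedule
B, a_T = 5.0, [1, 2]

u = utility([1, 2], alpha, gamma, P, R, B, a_T)
# Participation test in the spirit of (3): if P alone cannot offset the
# Cobb-Douglas disutility, unpersonalized usage is preferred.
prefers_unpersonalized = P([1, 2]) < gamma * prod(
    ai ** wi for ai, wi in zip([1, 2], alpha))
print(u, prefers_unpersonalized)
```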

As the ability to identify a user individually (identity inference, also known as triangulation) does not increase linearly when more data is provided, we use a Cobb-Douglas utility function instead of an additive composition for the user's disutility of data revelation. Two other important characteristics of this utility expression in the context of privacy awareness are discussed at the end of this section.

The thresholds a_i^T are set by the service provider and are usually openly communicated. In implementations, hints like 'required field', 'required information' or form fields marked by an asterisk are common practice. The necessity can also be deduced from the nature of the transaction: it is obvious that an online bookstore cannot achieve postal delivery if the user refuses to provide her shipping address. Note that in this model the set of privacy dimensions is not fixed: the purpose as well as the recipient can be privacy dimensions. In the case of shipping, the threshold for the recipient dimension may be the company itself (no third-party logistics company used), and the minimum purpose the user has to agree upon may be postal delivery.

The weightings α_i for each of the privacy dimensions as well as the global privacy sensitivity γ are private information of the user and constitute her type. The same holds for the valuation of the non-monetary personalization benefits P and the base utility B, but these two components can be neglected in the further analysis. First, users tend to valuate only additional personalization benefits; known solutions will soon be regarded as a standard service and thus receive no special appreciation. Nevertheless, some personalization benefits may remain: in classical implementations such as active guidance, purchase suggestions based on purchase or service-usage history, product highlighting, or implicit search criteria, the personalization improves the perceived service quality. Through the active support, the user saves search time while the matching quality between her preferences and the store's offers increases. These savings can be seen as monetary benefits and thus subsumed under the variable R. This is especially appropriate as the increased matching quality only becomes effective in case the product is purchased (and R is zero in case of no contract). The base utility can be neglected as it does not depend on the data revelation levels. Hence, the user's type is determined by the α_i and γ.

As mentioned above, the multiplicative structure of the Cobb-Douglas utility function captures inference threats well. In addition, there are two other interesting, related characteristics in the context of profile data. First, the different privacy dimensions are not perfectly substitutable (e.g. the user's telephone number and her e-mail address constitute two possible ways to contact the user, but they are not completely interchangeable). Second, in contrast to an additive composition, the substitution rate between two privacy dimensions (which here equals −α_i a_i / α_j a_j) is not constant or independent of the current level of revealed data: it decreases with the amount of data already provided. The influences of the different components on the user's utility function are described by the partial derivatives and their interpretations shown below:

∂U/∂a_i ≤ 0: any privacy infringement reduces the user's utility, except in the case where she does not care.
∂U/∂R > 0: the user appreciates discounts.
∂R/∂a_i ≥ 0: but the service provider is only willing to grant discounts if he gets some personal information in return. The case ∂R/∂a_i = 0 applies to a privacy dimension that is irrelevant in the current transaction scenario or (more restrictively) for which the service provider does not honour revelation.
∂P/∂a_i ≥ 0: the more data the service provider can access, the better the personalization will be.

3.4 Negotiating the 'data' Dimension

While the recipient may be the relevant negotiation dimension for distance education or health services, we propose the extent and amount of shared data as the negotiation dimension for online retailing. First, the willingness of customers to provide personal information is mainly determined by the reputation of the service provider, who is the (non-negotiable) initial recipient of the data. Second, disclosure practices are often determined by business processes (e.g. outsourced billing services or delivery by third-party companies). Third, the relevance of the retention time is rated considerably less important [1]. Finally, all data carries a more or less pronounced intrinsic purpose that cannot be subject to negotiation (e.g. phone numbers are used for personal contact and telemarketing). Hence, negotiating the kind of data seems appropriate in the case of online retailing.

Generally speaking, for a type of data to become part of the negotiation process, it must at least meet the following criteria (a brief illustrative sketch follows below):
− the user must be able to provide the data;
− the data must not be off-topic; the user should see at least a slight reason for the necessity of providing it;
− it must not be indispensable for the execution of the contract, either by its nature or by the required level of detail (i.e. no negotiations for a_i < a_i^T);
− the service provider must gain the user's consent for collecting the data; if the data can be collected smoothly without the user's consent, there is no need for negotiating (for example, the request time can be collected automatically).

The empirical findings of [1] allow establishing a cardinal ordering of types of data according to the willingness of users to provide the information. Ackerman et al. found significant differences in comfort level across the various types of information, implying weighting factors α_i in the user's utility function, which constitute one aspect of the user's type. The other aspect, the global privacy sensitivity expressed by γ, will be examined in the following section.
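As a small illustration (my own sketch; the data types and their flags are invented example values, not taken from the paper), the four criteria above can be applied as a simple filter that decides which data types enter the negotiation space:

```python
# Hypothetical catalogue of data types for an online retailer; the flags are
# assumptions used only to illustrate the four criteria listed above.
CANDIDATES = {
    "email":            dict(user_can_provide=True,  on_topic=True,
                             indispensable=False, collectable_silently=False),
    "request_time":     dict(user_can_provide=True,  on_topic=True,
                             indispensable=False, collectable_silently=True),
    "shipping_address": dict(user_can_provide=True,  on_topic=True,
                             indispensable=True,  collectable_silently=False),
    "political_views":  dict(user_can_provide=True,  on_topic=False,
                             indispensable=False, collectable_silently=False),
}

def negotiable(props):
    return (props["user_can_provide"]
            and props["on_topic"]
            and not props["indispensable"]          # no negotiation below a_i^T
            and not props["collectable_silently"])  # consent not needed anyway

print([name for name, p in CANDIDATES.items() if negotiable(p)])
# -> only 'email' survives all four criteria in this invented example
```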


4 The Service Provider’s Perspective 4.1 Facing Different Types of Users

The service provider is confronted with different types of customers who have various global privacy sensitivity levels and may rate the importance of particular kinds of data differently. Efficient customer value extraction is based on a combination of discrimination and negotiation techniques. Discrimination relies on the identification of different groups of customers having the same (or a comparable) type. [1] identified three types: the 'privacy fundamentalists', the 'pragmatics', and the 'marginally concerned' users. [15] splits the pragmatic majority into 'profiling averse' and 'identity concerned' users, hence establishing four user clusters. Table 1 summarizes the four types, whose distribution is assumed to be common knowledge; the characteristics are deduced from [1] and [15].

Table 1. User typology

User type                      Characteristics, important factors
θ_PF (fundamentalists)         γ near 1; extremely concerned about any use of their data
θ_PA (profile averse)          γ around 0.5; sensitive about equipment, salary, hobbies, health or age
θ_IC (identity concerned)      γ around 0.5; sensitive about addresses, phone or credit card numbers
θ_MC (marginally concerned)    γ near 0; generally willing to provide data

4.2 Modelling the Negotiation Process

Various methods for modelling negotiation processes exist, some more influenced by computer science (e.g. using finite state machines), others more influenced by microeconomics. We adopt a game-theoretic approach and examine two possible negotiation scenarios: a sequential game as the overall framework, and a simultaneous game that may be played at every step. [3] has examined negotiation protocols in different contexts: customer anonymity (or not), complete knowledge of the service provider's strategy (or not), and no transaction costs for both parties (or not).

Assuming that the service provider can reliably identify privacy fundamentalists, for example by means of web usage mining technologies, he will propose a take-it-or-leave-it offer to fundamentalists: in most cases, their valuation of hiding personal data will be higher than the discounts the service provider can offer, so inequality (3) becomes binding. This results in a subgame that can be solved by standard procedures. In contrast, the three other types are indistinguishable for the service provider; cf. Figure 1, where the nodes for the types θ_PA, θ_IC and θ_MC are in the same information set whereas the node for type θ_PF stands apart.


Fig. 1. The service provider negotiates with three types he cannot distinguish

The service provider's strategy is a function that associates discounts with data revelation level vectors (D^n → ran(R)). Determining the service provider's best strategy amounts to solving the following optimization problem: for users drawn from a known distribution (with the probabilities depicted in Figure 1), maximize the total profit. The total profit is the revenue generated by the whole population minus the granted discounts, minus the costs for implementing the personalization, and minus other costs. The latter encompass in particular customers who are lost during the negotiation process by cancelling (e.g. for psychological reasons or simply because they feel overstrained). The maximization is subject to the users' participation constraint (U(.) − B > 0) and to the constraints (2) and (3). We deliberately refrain from a detailed solution, as rigorously integrating the service provider's cost structure would go beyond the scope of this paper.

The framework for the negotiation process is a dynamic game in which the service provider has high bargaining power: he opens the negotiation with a basic offer, consisting of a small discount and a few personal data items (the threshold) to be asked for. This constitutes the fallback offer in case the user does not want to enter the negotiation. If the user accepts, she is presented another offer with a higher discount and more data to be asked for. At every step, the user may cancel (i.e. no contract or the fallback solution is implemented), continue to the next step (i.e. reveal more data or switch to another privacy dimension), or confirm (i.e. the reached agreement is implemented). This wizard-like structure is strategically equivalent to a set of offers given as (data, discount) tuples from which the user can choose one. However, a sequential implementation allows better guidance, better communication of the benefits of providing the data, and instantaneous adaptation of the strategy. Note that the requested data of the previous offer are always included in the requested data of a given offer, even if the customer only enters the additional information (monotonically increasing revelation level for a given dimension). The service provider can also implement several alternatives for one step, so that the user can choose which data she will provide (for example, the service provider can ask either for the home address or for the office address). This is particularly useful for addressing different weightings of privacy dimensions that are equivalent for the service provider. Implementations may offer the multiple privacy dimensions sequentially; a switch to another dimension is performed when the user refuses to provide further data or when the service provider is not interested in a higher level of detail for the current dimension. A current implementation is described in Section 5.2.


In this basic case, the service provider grants a fixed discount at every single step, and the discounts are cumulated along the process. A more sophisticated procedure could also include the service provider's concessions in the negotiation process, e.g. by a simultaneous game at every stage: the user indicates the minimum discount she wants to receive for revealing the data, and the service provider indicates the maximum discount he is willing to grant. Problems arise because the service provider's maximal willingness can be revealed, owing to the unlimited number of times one or several anonymous users can play this simultaneous game.
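The sketch below is not the paper's implementation; the offers, discount amounts and the user's decision rule are invented. It only illustrates the basic sequential case just described: the provider opens with the fallback offer and, at every step, the user may cancel, continue, or confirm, while the fixed per-step discounts accumulate.

```python
def negotiate(steps, user_decides):
    """Sketch of the wizard-like process of Sect. 4.2 (invented example data).

    steps: list of (additional_data, extra_discount) tuples; the first entry is
    the provider's basic (fallback) offer.
    user_decides(data, discount) returns 'confirm', 'continue' or 'cancel'.
    """
    data, discount = set(steps[0][0]), steps[0][1]
    fallback = (frozenset(data), discount)
    for extra_data, extra_discount in steps[1:]:
        decision = user_decides(frozenset(data), discount)
        if decision == "confirm":
            return frozenset(data), discount   # implement reached agreement
        if decision == "cancel":
            return fallback                    # fall back to the basic offer
        data |= set(extra_data)                # reveal more (monotone growth)
        discount += extra_discount             # cumulate the fixed discounts
    final = user_decides(frozenset(data), discount)
    return (frozenset(data), discount) if final == "confirm" else fallback

# Invented example: three steps along the 'data' dimension.
steps = [({"email"}, 0.5), ({"postal address"}, 2.0), ({"date of birth"}, 3.0)]
decide = lambda data, discount: "confirm" if discount >= 2.5 else "continue"
print(negotiate(steps, decide))
```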

5 Implementation 5.1 Integrating Privacy Negotiations into P3P

The negotiation process described in the previous section can be implemented using the extension mechanism of P3P, which can be used both in a policy reference file and in a single privacy policy. The extensions in the privacy policies are not optional; but in order to ensure backward compatibility, these extended policies are only referenced in an extension of the policy reference file. Hence, only user agents capable of interpreting the negotiation extension will fetch extended policies.

In a P3P policy, two extensions can be added: a NEGOTIATION-GROUP-DEF in the POLICY element, and a NEGOTIATION-GROUP in the STATEMENT element. The mechanism is comparable to the tandem of STATEMENT-GROUP-DEF and STATEMENT-GROUP in P3P 1.1 [20]. A NEGOTIATION-GROUP-DEF element defines an abstract pool of alternative usage scenarios. One or several statements (identified by the attribute id) encode a possible usage scenario; pool membership is expressed by the NEGOTIATION-GROUP extension in the statement (attribute groupid), which describes relevant parameters of the given scenario, such as the benefits for the user. This mechanism allows establishing an n:m relation between statements and negotiation groups. The fallback contract can be indicated via the standard attribute of the NEGOTIATION-GROUP-DEF element. The following example illustrates the usage: users of a bookstore can choose between e-books sent by email and hard-copy books shipped by postal delivery.

Example of an extended P3P policy, including the proposed elements NEGOTIATION-GROUP-DEF and NEGOTIATION-GROUP (XML namespaces omitted):

...

...

...

...


...

...
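The original policy listing is lost in this extraction (only the ellipses above remain). The fragment below is therefore a hypothetical reconstruction, not the authors' example: the element names NEGOTIATION-GROUP-DEF, NEGOTIATION-GROUP and the attributes id, groupid and standard come from the text above, whereas the EXTENSION wrapping, the identifiers, the benefit texts and the data references are my own assumptions. It is embedded in a short Python snippet so the grouping can be inspected programmatically.

```python
import xml.etree.ElementTree as ET

# Hypothetical extended P3P policy fragment (namespaces omitted, as in the paper).
POLICY = """
<POLICY name="bookstore">
  <EXTENSION optional="no">
    <NEGOTIATION-GROUP-DEF id="delivery" standard="email-delivery"
                           short-description="choose how your book is delivered"/>
  </EXTENSION>
  <STATEMENT id="email-delivery">
    <EXTENSION optional="no">
      <NEGOTIATION-GROUP groupid="delivery"
                         benefit="e-book sent by email, no address needed"/>
    </EXTENSION>
    <DATA-GROUP><DATA ref="#user.email"/></DATA-GROUP>
  </STATEMENT>
  <STATEMENT id="postal-delivery">
    <EXTENSION optional="no">
      <NEGOTIATION-GROUP groupid="delivery"
                         benefit="hard copy shipped to your home"/>
    </EXTENSION>
    <DATA-GROUP><DATA ref="#user.home-info.postal"/></DATA-GROUP>
  </STATEMENT>
</POLICY>
"""

root = ET.fromstring(POLICY)
# Collect the alternative usage scenarios per negotiation group.
alternatives = {}
for stmt in root.iter("STATEMENT"):
    for grp in stmt.iter("NEGOTIATION-GROUP"):
        alternatives.setdefault(grp.get("groupid"), []).append(stmt.get("id"))
print(alternatives)   # {'delivery': ['email-delivery', 'postal-delivery']}
```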

Note that the benefits need to be displayed concisely by the user agent. The human-readable privacy policy and other information resources on the site must work hand in hand with the P3P policy. The exhaustive machine-readable coding of the benefits is a remaining challenge, especially for multi-dimensional phenomena other than just a reduced purchase price.

5.2 Example: Negotiations in Multi-channel Retailing

In addition to the introductory example of the previous section, we want to outline a possible privacy negotiation for a multi-channel retailer. The scenario is as follows: besides using the service provider's e-shop, customers can find the nearest store by entering their ZIP code into the store locator. A check of majority is performed before these website services can be used. Two privacy dimensions can be identified: the user's age (d_1), with the revelation levels {year (Y), year and month (YM), year and month and day (YMD)}, and the user's address (d_2), with the revelation levels {city (C), city and ZIP code (CZ), city and ZIP code and street (CZS)}. The revelation thresholds are a_1^T = year (for the majority check) and a_2^T = city and ZIP code (for the store locator). Possible negotiation outcomes are depicted in the left part of Figure 2. Using the user's utility function as defined in equation (1), we can draw iso-utility curves: the user's disutility increases when moving towards the upper right corner, as the revelation levels increase.

Fig. 2. Possible negotiation outcomes. Contracts whose data revelation levels are below the thresholds cannot be reached (marked by unfilled dots) (left). User’s iso-utility curves corresponding to different revelation levels (right).

Based on these two figures, the service provider develops its strategy: he chooses the discounts he will grant to the customer for each of the six possible contracts, "labelling" them with the R(.) function (which maps from D^n to discounts). Hence, he can encode the negotiation space by six statements in an extended P3P policy.
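A small sketch (my own; the discount schedule is invented, while the dimensions, levels and thresholds are those of the example) that enumerates the reachable contracts of Figure 2 and attaches a discount to each, i.e. a toy version of the R(.) labelling:

```python
from itertools import product

AGE  = ["Y", "YM", "YMD"]           # d1: year / year+month / full birth date
ADDR = ["C", "CZ", "CZS"]           # d2: city / +ZIP code / +street
THRESHOLD = {"d1": "Y", "d2": "CZ"} # a1^T and a2^T of the example

def reachable(a1, a2):
    return (AGE.index(a1) >= AGE.index(THRESHOLD["d1"])
            and ADDR.index(a2) >= ADDR.index(THRESHOLD["d2"]))

# Invented discount schedule: a fixed amount per extra revelation level.
def discount(a1, a2):
    return (1.0 * (AGE.index(a1) - AGE.index(THRESHOLD["d1"]))
            + 1.5 * (ADDR.index(a2) - ADDR.index(THRESHOLD["d2"])))

contracts = {(a1, a2): discount(a1, a2)
             for a1, a2 in product(AGE, ADDR) if reachable(a1, a2)}
print(len(contracts), contracts)    # six reachable contracts, as in Figure 2
```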


The customer's user agent fetches this policy and serves as a negotiation support system, displaying the possible alternatives (a human-readable communication of the data handling practices as coded in the statements, along with the negotiation benefits) from which the user can choose one. We have integrated this negotiation support into the Mozilla browser, thereby extending its P3P support: a site's privacy policy can be accessed via the "Policy", "Summary" and "Options" buttons in the "Page Info" dialog, directly available from the status bar. Extending the chrome components, we have added a "Negotiate" button: a modal dialog is opened, summarizing the negotiable privacy dimensions (d_i) and their possible realizations (a_i) with drop-down menus. The implementation relies on XUL and JavaScript, uses the Mozilla APIs, and integrates seamlessly into the user agent. As the proposed extension to P3P is not restricted to specific privacy dimensions, neither is the implementation: any privacy dimension can be negotiated as long as it can be expressed using the P3P data scheme.

6 Conclusion and Further Work

This paper has presented the case for negotiating privacy principles in the relationship between service provider and customer. Negotiating allows a better matching between the seller's needs and the buyer's disclosure restraint and helps to reduce the trade-off between personalization and privacy. Modelling the user's individual utility maximization can take into account the multi-dimensionality of privacy; the service provider may wish to reduce the negotiation space in a way that suits the given business scenario. The incremental revelation of data by the user can be strategically reduced to a choice from a set of alternatives. Using the extension mechanism of P3P, there is no limitation in coding these alternatives, even for complex cases involving diverse privacy dimensions: we proposed two new elements that follow the structure of the current P3P 1.1 grouping mechanisms and allow software-supported negotiations in E-Commerce. Software support for the extension was added to the Mozilla browser, integrating privacy negotiations seamlessly into the user agent.

Future work will focus on the practical implementation of privacy negotiation techniques on large-scale public websites. We are currently investigating which user interface design best fulfils the usability requirements and how negotiable privacy dimensions are best visualized. Moreover, a taxonomy should be developed to allow a machine-readable coding of the user's benefits for a negotiation alternative. A remaining question is whether users feel more concerned about their privacy when an explicit negotiation process is started; this increased sensitivity could make take-it-or-leave-it offers more favourable for the service provider.

References

1. Ackerman, M.S., Cranor, L.F., Reagle, J.: Privacy in E-commerce: Examining User Scenarios and Privacy Preferences. First ACM Conference on Electronic Commerce, Denver, CO (1999) 1-8
2. Cooperstein, D., Delhagen, K., Aber, A., Levin, K.: Making Net Shoppers Loyal. Forrester Research, Cambridge (1999)


3. Cranor, L.F., Resnick, P.: Protocols for Automated Negotiations with Buyer Anonymity and Seller Reputation. Netnomics 2(1), 1-23 (2000)
4. El-Khatib, K.: A Privacy Negotiation Protocol for Web Services. Proceedings of the International Workshop on Collaboration Agents: Autonomous Agents for Collaborative Environments (COLA) (2003)
5. European Parliament, Council of the European Union: Directive 2002/58/EC on privacy and electronic communications. Official Journal of the European Communities, 31.7.2002, L 201, 37-47 (2002)
6. European Parliament, Council of the European Union: Regulation (EC) No 45/2001 of the European Parliament and of the Council of 18 December 2000. Official Journal of the European Communities, 12.1.2002, L 8, 1-22 (2002)
7. International Business Machines Corporation: Enterprise Privacy Authorization Language (EPAL 1.2). W3C Member Submission 10 November 2003 (2003)
8. Karrass, C.L.: Give and Take: The Complete Guide to Negotiating Strategies and Tactics. HarperCollins Publishers, New York, NY (1993)
9. Kurashima, A., Uematsu, A., Ishii, K., Yoshikawa, M., Matsuda, J.: Mobile Location Services Platform with Policy-Based Privacy Control (2003)
10. Peppers, D., Rogers, M., Dorf, B.: The One to One Fieldbook. Currency Doubleday, New York (1999)
11. Personalization Consortium: Personalization & Privacy Survey (2000)
12. Rebstock, M., Thun, P., Tafreschi, O.A.: Supporting Interactive Multi-Attribute Electronic Negotiations with ebXML. Group Decision and Negotiation 12 (2003) 269-286
13. Schafer, J.B., Konstan, J., Riedl, J.: Recommender Systems in E-Commerce (1999)
14. Schafer, J.B., Konstan, J., Riedl, J.: Electronic Commerce Recommender Applications. Journal of Data Mining and Knowledge Discovery 5, 115-152 (2000)
15. Spiekermann, S.: Online Information Search with Electronic Agents: Drivers, Impediments, and Privacy Issues (2001)
16. Ståhl, I.: Bargaining Theory. The Economics Research Institute, Stockholm (1972)
17. Thompson, L.L.: The Mind and Heart of the Negotiator. 3rd edn. Pearson Prentice Hall, Upper Saddle River, New Jersey (2005)
18. W3C: A P3P Preference Exchange Language 1.0 (APPEL1.0). W3C Working Draft 15 April 2002, http://www.w3.org/TR/P3P-preferences (2002)
19. W3C: The Platform for Privacy Preferences 1.0 (P3P1.0) Specification. W3C Recommendation 16 April 2002, http://www.w3.org/TR/P3P/ (2002)
20. W3C: The Platform for Privacy Preferences 1.1 (P3P1.1) Specification. W3C Working Draft 4 January 2005, http://www.w3.org/TR/2005/WD-P3P11-20050104/ (2005)
21. Yee, G., Korba, L.: Feature Interactions in Policy-Driven Privacy Management. Proceedings of the Seventh International Workshop on Feature Interactions in Telecommunications and Software Systems (FIW'03) (2003)
22. Yee, G., Korba, L.: The Negotiation of Privacy Policies in Distance Education. Proceedings of the 4th International IRMA Conference (2003)

A Community-Based, Agent-Driven, P2P Overlay Architecture for Personalized Web

Chatree Sangpachatanaruk (1) and Taieb Znati (1, 2)

(1) Department of Telecommunications, [email protected]
(2) Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, USA, [email protected]

Abstract. We present a user-profile-driven framework that allows individual users to organize themselves into communities of interest (CoI) based on ontologies agreed upon by all community members. In this paper, we describe the overlay network architecture to support the basic functionalities of a CoI. The basic tenet of this architecture is the use of ontologies to represent objects in order to enable semantic resource discovery and retrieval reflecting the interest of the user within a specific community. Three advertising and retrieval schemes, namely aggressive, crawler-based and minimum-cover-rule, are discussed and investigated using emulation-based and simulation-based experimental frameworks. The results show that the minimum-cover-rule scheme exhibits higher performance than the other two schemes in a stable environment. In a high-churn environment, however, the aggressive scheme is more effective than the other two schemes.

1 Introduction

Recently, the advent of the Semantic Web, with its promise to make the web accessible and understandable not only by humans but, more importantly, by machines [1], is creating opportunities for new approaches and tools for resource discovery on the Internet. The Semantic Web makes it possible to associate with each resource semantics that can be understood and acted upon by software agents. In particular, web search agents, which can usually be programmed to crawl web pages in search of keywords, can now be augmented to take advantage of semantic data to carry out sophisticated tasks, such as evaluating similarity among web pages and processing data of interest for users.

In support of the Semantic Web vision, a great deal of research work has focused on using ontologies to annotate data and make it machine-understandable, thereby reducing human involvement in the process of data integration and data understanding. The ability to incorporate detailed semantics of data will facilitate greater consistency in its use, understanding and application. It is widely believed, however, that the next-generation web will not primarily be comprised of a few large, consistent ontologies, recognized and accepted by the Internet communities at large [2]. Rather, it is envisioned that it


will be a complex web of small ontologies, largely created by groups of users. In such an environment, companies, universities, or ad hoc interest groups will be able to link their webs to ontological content of their interest, thereby allowing computer programs to collect and process web content and to exchange information relevant to their needs freely and efficiently.

In accordance with this vision, we propose a user-profile-driven framework that allows individual users to organize themselves into communities of interest based on ontologies agreed upon by all community members. In this framework, users are able to define and augment their personal views of the Internet by associating specific properties and attributes with objects and by defining constraint functions and rules that govern their interpretation of the semantics associated with these objects. These views can be used to capture the user's interests and can be integrated into a user-defined Personalized Web (PW) [3]. Furthermore, through active advertising and discovery of objects, users automatically expand their "personal web of interest" to include other users' personalized webs, thereby enabling the "natural" formation of communities of similar interests.

Enabling communities of interest (CoI) requires the development of efficient data structures, mechanisms and protocols to support information dissemination, object discovery and access in a user-transparent manner. Furthermore, in order to support potentially large and dynamic user communities, the protocols and mechanisms must be scalable and robust. To address these requirements, we propose a peer-to-peer (P2P) architecture which leverages the benefits of agent technology and DHT-based overlay networking into an integrated, seamless infrastructure to support CoIs. In this paper, we show how DHT-based P2P overlay networks can be efficiently integrated with an agent-driven framework to provide the foundation for a large-scale, self-evolving, ad hoc infrastructure for resource access and knowledge sharing within a community of similar interests.

The rest of the paper is organized as follows. First, we discuss related work. We then describe the overall architecture to enable CoIs, introduce the concept of a semlet, an agent for resource advertising and discovery, and discuss the basic functionalities of the Ontology Overlay Network (OON) that support resource advertising and discovery. Next, we describe three advertising and retrieval schemes. We then present a set of emulation-based and simulation-based experiments to evaluate the performance of these schemes and discuss their results. Lastly, we conclude by describing our future work.

2 Related Works

The concept of PW was originally introduced in [3]. In this paper, the framework is generalized to include the concept of ontology-based “community of interests”. Furthermore, we introduce the concept of semantic-aware agents for resource advertising and access, and propose novel schemes to advertise and retrieve objects of interest to users within a CoI. The work proposed in this paper is related to several other research efforts in different fields, including ontology, semantic web, agent technology, and resource


discovery. Due to space limitation, we limit our discussion to works which support object advertising and retrieval on the Internet. The most similar work to the proposed architecture is the architecture referred to as a publish and subscribe, or event notification service. The architecture allows users to specify what to receive and advertise based on some predicates. Early works of publish and subscribe systems provide simple channel and topicbased service, where subscribers and publishers communicate through channels or topics of interest. Recent works propose content-based publish and subscribe services, where subscribers and publishers can specify, at the content level, their interests [4, 5]. These works support similar functionalities as those provided by the proposed architecture. However, the proposed architecture, to enable CoIs, enhances the basic “publish” and “subscribe” functionalities by supporting filtering capabilities which use ontology-based attributes, agreed upon by all community members. It enables object filtering based on user defined semantics. These capabilities allow a PW to evolve into a fine-tuned advertising and retrieval infrastructure which reflects closely the interests of a user within a community of similar interest. In the field of P2P overlay networking, several publish and subscribe services using DHT-based P2P overlay networks have been recently proposed. Early works, including scribe and pFilter, target to support topic-based publish and subscribe service through DHT-based overlay network of rendezvous nodes [6, 7]. Recent developments in this field attempt to support content-based publish and subscribe services. Most of these works propose a scheme to map subscriptions and notifications to a set of rendezvous nodes using attributes [8]. These proposed frameworks work well for “closed” environments, where a globally defined ontology or a set of well defined semantics and encoding schemes are used to support publish and subscribe services. The use of a single ontology, however, may not be realistic in an open environment, where users freely define their interests. Consequently, these frameworks fall short in supporting the basic functionalities of a PW.

3 Framework and Architecture Overview

In this section, the framework used to support the functionalities of the CoI architecture is described. First, the main components of the proposed architecture, their associated data structures and basic functionalities are presented. The mechanisms and schemes used by the overlay architecture for object indexing, advertising and retrieval are then discussed.

3.1 Object Metadata, Semlets, and User Interest Profiles

In the CoI architecture, an object is associated with metadata composed of two main attributes: "location" and "semantic". Formally, the metadata of an object o within a CoI c can be represented by m_o = (L_o, S_o^c), where L_o is the location of object o and S_o^c defines the semantics of object o, as agreed upon by the members of c.


The location attribute is used to locate an object and is typically represented by the object URL. The semantic attribute is concerned with the meaning of the associated element, as agreed upon by the members of a CoI. This attribute is represented by a set of concepts defined by the community ontology, referred to as Cmm-Ont. A Cmm-Ont can be viewed as a dictionary of words and their relations used within a community. It is used to ensure consistency in semantic classification within the community. In practice, a semantic representation of an object can be inferred from the keywords that appear in the object attributes. Typically, an extraction function, hereafter referred to as X(), scans the object semantics to extract a set of keywords. The extracted keywords are mapped into the concepts defined by the Cmm-Ont. These concepts are then used to index and advertise the associated objects within the CoI. Note that the mapping of keywords into Cmm-Ont concepts is "well-defined" within a CoI and, as such, is accessible to all members of the community.

In addition to its attributes, an object is also associated with a semantic-aware agent, referred to as a semlet. A semlet acts as the "surrogate" agent that advertises its associated object and retrieves similar objects of interest, as specified in the user's interest profile. Formally, a semlet associated with an object o is characterized by a profile P_Slo defined as P_Slo = (L_Slo, R_Slo, σ_Slo), where L_Slo is the semlet locator, R_Slo is the set of semantic rules, and σ_Slo is a user-defined similarity algorithm with respect to object o. The semlet locator is typically the URL of its associated object. Notice that a semlet need not reside at the same location as its object, as long as it maintains a pointer to the object location.

The R_Slo component of the semlet profile is derived from the user's personal interests with respect to an object o. These interests can be either explicitly expressed by the user in an interest profile or implicitly derived from objects created by the user (the method used to express a user's personalized interests, or to derive them from manipulated objects, depends on several factors including the type of object and the ontology used by the CoI; its specification is outside the scope of this paper). These interests are mapped into a set of Personalized Interest Rules (PIR), using a user-defined mapping function denoted U(). For a given object o and a semlet Slo, each rule included in R_Slo is a logical expression involving a set of concepts and a set of logical operations. These operations typically include AND (∧), OR (∨), or NEGATE (¬). Formally, R_Slo is represented by a tuple (S_o^u, ⊗), where S_o^u is the concept set associated with user u of community c with respect to object o, and ⊗ is the set of operations defined on the concepts in S_o^u. Notice that S_o^u ∩ S_o^c ≠ ∅, thereby ensuring that the rules used by a user to acquire an object of interest within c are at least partly expressed based on the CoI-defined semantics. This constraint is necessary to "integrate" a user's PW within a specific CoI, without limiting its integration into other CoIs.

The semlet uses its associated object metadata to advertise the object to members of a CoI. During its itinerary, the semlet registers its interests with respect to object o, based on its associated R_Slo. Consequently, when a new object is


advertised, the semlet is notified if at least one of the rules in its R_Slo is "satisfied" by the semantics of the new object's metadata. To illustrate this process, assume that the set R_Slo of semlet Slo contains the rule s_i ∧ (s_j ∨ s_k). A new object advertising metadata containing the concepts {s_i, s_k} and {s_j, s_k, s_l} causes Slo to be notified. Upon notification, Slo uses its associated user-defined similarity algorithm, σ_Slo, to assess the similarity of the new object to the user's interest and, in the case of a match, includes the object in the user's PW.

To support semlet advertising and retrieval in a dynamic, heterogeneous network environment, object metadata and semlet profiles are distributed and indexed among the overlay nodes using a P2P overlay network, referred to as the Ontology Overlay Network (OON). The OON uses the community ontology coupled with a DHT to infer a semantic structure that regulates location of and access to a distributed set of objects and semlets. Consequently, the OON provides adaptive and scalable search and discovery to support the functionality of the proposed architecture. In the following section, the OON setup and organization are described.

3.2 PW Overlay Setup and Organization

To support semantic-based indexing, discovery and advertising, the OON structure assumes that the Internet is organized into a set D of autonomous domains. Each domain d ∈ D elects a node n_d to act as an OON node in the semantic overlay network. The set of OON nodes, referred to as N, forms the basis of the OON; hence N = {n_d | d ∈ D and n_d is an OON node}. Each node n_d ∈ N has a set of neighbors N_d, which it uses to route data using a DHT-based strategy. The nodes in a given domain d address their semlet search and advertising to the OON node n_d ∈ N. Each OON node n_d ∈ N is assigned a set of semantics S^n and is responsible for advertising objects and retrieving objects of interest for the semlets associated with any s_i ∈ S^n: ∀ s_i ∈ S^n, hash(s_i) = h_i, where h_i is a key that n_d is responsible for. The node that is responsible for s_i is denoted n_si.

A metadata record is advertised and indexed in the OON based on its associated community concepts. These concepts are hashed using a system-wide hash function to obtain a set of keys. These keys are then mapped to the overlay nodes using the DHT-based mapping scheme; these nodes are responsible for the object metadata. A semlet profile is stored and indexed in the OON using the same mapping scheme; note, however, that the mapped concepts are those extracted from the personalized interest rules. The indexing model describing how object metadata and personalized interest rules are indexed through the OON is depicted in Figure 1.

Fig. 1. PW Indexing Model
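A compact sketch of this indexing model follows (my own code, not the authors'; the hash, the node set and the data layout are simplified stand-ins for the DHT): each concept is hashed to a key, each key is owned by one OON node, and object metadata and semlet profiles are stored at the nodes owning their concepts.

```python
import hashlib

NODES = ["n0", "n1", "n2", "n3"]            # assumed OON nodes (one per domain)

def key(concept: str) -> int:
    return int(hashlib.sha1(concept.encode()).hexdigest(), 16)

def responsible_node(concept: str) -> str:
    # Stand-in for the DHT-based mapping of keys to overlay nodes.
    return NODES[key(concept) % len(NODES)]

class OONNode:
    def __init__(self, name):
        self.name, self.metadata, self.profiles = name, [], []

index = {name: OONNode(name) for name in NODES}

def advertise(metadata):
    """metadata: (location, set_of_concepts); store at every responsible node."""
    for concept in metadata[1]:
        index[responsible_node(concept)].metadata.append(metadata)

def register(profile):
    """profile: (semlet_locator, set_of_rule_concepts); stored analogously."""
    for concept in profile[1]:
        index[responsible_node(concept)].profiles.append(profile)

advertise(("http://example.org/obj1", {"travel", "hotel"}))
register(("http://example.org/semlet7", {"hotel", "flight"}))
print({n.name: (len(n.metadata), len(n.profiles)) for n in index.values()})
```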

4 Semlet Advertising and Retrieval

The underlying strategy of semlet advertising and retrieval is to advertise “aggressively” to and retrieve “selectively” from the community members of similar interest. A semlet advertises and retrieves an object through the OON. First the semlet locates an OON node within its own domain. It then sends a query which contains the object metadata and its profile to this OON node. This OON node then routes the query to the OON node responsible for one of the hash keys of its associated concepts. When the query arrives at an OON node, it triggers the OON node to execute a procedure to discover semlet profiles whose rules are satisfied by its carried metadata. It also triggers the OON node to find objects whose metadata satisfy its carried rules. The object metadata and semlet profile contained in the query may be stored or indexed in this node for later discovery and notification. The matched profiles and object metadata are then forwarded to the next OON node responsible for the next concept in the associated concept set. The forwarding continues until all OON nodes responsible for the concepts in the concept set are visited. At the last OON node, the query triggers the OON node to notify all semlets associated with collected profiles about the object, and send back the collected object metadata to the advertising semlet. To support large number of semlets and objects in a heterogeneous and dynamic environment, OON must efficiently manage storage and communication used in semlet advertising and retrieval. To this end, three semlet advertising and retrieval schemes, namely aggressive, crawler-based, and minimum-coverrule schemes will be investigated in the following Sections. 4.1

Aggressive Scheme

The "aggressive" scheme uses storage and communication aggressively to support semlet advertising and retrieval. The basic principle is to perform data indexing and storing on all the OON nodes responsible for the associated concepts. As a semlet visits these nodes, its associated object metadata and semlet profile are stored and indexed for later discovery and advertising. The advantage of the aggressive scheme is its ability to deal with node failures, which is achieved through object metadata replication. Its drawback is the need to use multiple overlay nodes to store semlet profiles and object metadata, which may lead to excessive storage overhead. To address this shortcoming, a crawler-based scheme, which saves storage at the cost of increased communication overhead, is described next.
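The sketch below is my own illustration of the matching step performed at one rendezvous node under the aggressive scheme; the data model is assumed (a rule is a list of conjunctive clauses, each clause a set of concepts, OR-ed together, e.g. s_i∧s_j ∨ s_i∧s_k).

```python
def rule_satisfied(rule, concepts):
    # A rule fires if any of its conjunctive clauses is a subset of the
    # concepts carried by the advertised object metadata.
    return any(clause <= concepts for clause in rule)

class RendezvousNode:
    """One OON node responsible for some concepts. Under the aggressive
    scheme, every responsible node keeps a full copy of profiles and metadata."""
    def __init__(self):
        self.stored_profiles = []     # (semlet_id, rule)
        self.stored_metadata = []     # (object_url, concepts)

    def advertise(self, object_url, concepts):
        notified = [sid for sid, rule in self.stored_profiles
                    if rule_satisfied(rule, concepts)]
        self.stored_metadata.append((object_url, concepts))
        return notified               # semlets to notify about the new object

    def register(self, semlet_id, rule):
        self.stored_profiles.append((semlet_id, rule))
        # Retrieval: previously advertised objects matching the new rule.
        return [url for url, c in self.stored_metadata if rule_satisfied(rule, c)]

node = RendezvousNode()
node.register("semlet-A", [{"si", "sj"}, {"si", "sk"}])
print(node.advertise("http://example.org/objX", {"si", "sk", "sl"}))  # ['semlet-A']
```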

4.2 Crawler-Based Scheme

Similar to the aggressive scheme, the crawler-based scheme advertises and retrieves objects of interest by sending a query to all the nodes that are responsible for its associated concepts. However, the object metadata and semlet profile will be stored only in a subset of these nodes. Using this strategy, a semlet may miss an object notification. Consider the case when a metadata of an object v with concepts {si , sj , sl } is stored at nsl . Later, a semlet Slo arrives with interest rules si ∧ sj ∨ si ∧ sk and decides to store its profile at node nsi . It then visits nsi , nsj , and nsk . Consequently, the semlet misses the object v. To resolve this problem, semlets no longer wait passively for notifications. Instead, a semlet periodically sends refresh messages to update previously advertised objects and potentially harvest new objects of interest at the remote site. In the case above, when the semlet of object v does refreshing, it discovers the newly added semlet profiles, and advertises to those semlets that missed advertising previously. The advertising and retrieval mechanism of this scheme is similar to the aggressive scheme, except that the subset of keys representing metadata and rules are stored for later notification and advertising. In addition, each semlet schedules an alarm to refresh a query periodically. Overall, the crawler-based scheme saves storage at the cost of additional refreshing. Consequently, the frequency at which refreshing is performed plays a crucial role in the overall performance of the scheme. In an environment where objects are created frequently, the rate at which a semlet performs refreshing must be high. This is necessary to ensure that a semlet discovers the newly advertised objects of interest. In static environments, however, excessive refreshing may unnecessarily result in potentially prohibitive overhead. Therefore, the rate of refreshing must reflect the trade off between the awareness of newly advertised object and communication overhead. 4.3

Minimum-Cover-Rule Scheme

The minimum-cover-rule (MCR) scheme leverages the benefits of the aggressive and crawler-based schemes while reducing their functional cost in terms of communication and storage. To achieve this goal, the MCR scheme computes a minimum set of nodes that ensures that all interested semlets are immediately notified when new objects are advertised, while minimizing the storage requirement and communication overhead.

The minimum set of nodes is computed from a data structure representing a minimum set of key IDs associated with rules, referred to as the Minimum Rule-Cover-Node Set (MRS). Each member of an MRS is represented by a tuple (h, R_h), where h is a key ID and R_h is the set of rules stored at the node responsible for h. For example, the MRS of a rule set R = {r1: (s_i ∧ s_j), r2: (s_i ∧ s_k), r3: (s_j ∧ s_l ∨ s_j ∧ s_k)} is represented by {(h_si, {r1, r2}), (h_sj, {r3})}. This implies that rules r1 and r2 are assigned to be stored at the node responsible for h_si and rule r3 is assigned to be stored at the node responsible for h_sj. Therefore, the storage cost used to store these rules and the bandwidth used to carry these profiles are minimized.

Finding an MRS for a given rule set is a two-step process: extracting "independent" rules from the rule set, and assigning rules to the most frequently appearing concepts. An independent rule is a rule which can be evaluated independently from other rules and which cannot be broken down into smaller independent rules. Consider a rule set R = {r1: (s_i ∧ s_k), r2: (s_i ∧ s_m ∨ s_m ∧ s_l)}; only r1 is an independent rule. r2 is not, because it contains two conjunctive clauses; it can be broken down into two independent rules (s_i ∧ s_m) and (s_m ∧ s_l). An independent rule is used to identify a concept dependency, which allows us to select a minimal set of nodes responsible for storing a given rule. For example, r1 in the example above can be stored at either n_si or n_sk.

Using this scheme, the storage for personalized interest rules and the matching computation are minimized. Consider the case when a semlet Slo has a rule set R = {r1: (s_i ∧ s_j), r2: (s_i ∧ s_k), r3: (s_j ∧ s_l ∨ s_j ∧ s_k)} and MRS = {(h_si, {r1, r2}), (h_sj, {r3})}. Compared to the aggressive scheme, the semlet sends rules r1 and r2 to be stored at n_si and r3 to be stored at n_sj, instead of sending all rules to be stored at n_si, n_sj, n_sk, n_sl. In addition, when another semlet advertises an object with semantics {s_i, s_k} to n_si and n_sk, the rule checking against the metadata occurs only once, at n_si, instead of multiple times on both n_si and n_sj. Furthermore, the complexity of finding an MRS for a given rule set does not affect the effectiveness of semlet advertising and retrieval: the MRS can be computed off-line, prior to semlet advertising or retrieval.
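The paper does not spell out the exact MRS algorithm; the sketch below is a greedy reading of the two-step process just described (break rules with no common concept into independent clauses, then prefer the most frequently shared concepts). It reproduces the paper's example but is not guaranteed to be minimal in general.

```python
from collections import Counter

def minimum_rule_cover(rules):
    """Sketch of computing a Minimum Rule-Cover-Node Set (Sect. 4.3).

    rules: dict rule_id -> list of conjunctive clauses (sets of concepts);
    returns dict concept -> list of rule_ids stored at that concept's node."""
    # Step 1: a rule whose clauses share no common concept cannot be covered
    # by a single node; break it into independent (single-clause) rules.
    units = {}
    for rid, clauses in rules.items():
        common = set.intersection(*clauses)
        if common:
            units[rid] = common
        else:
            for k, clause in enumerate(clauses):
                units[f"{rid}.{k}"] = set(clause)
    # Step 2: greedily prefer the concepts that cover the most rules.
    freq = Counter(c for candidates in units.values() for c in candidates)
    mrs = {}
    for rid, candidates in units.items():
        chosen = max(sorted(candidates), key=lambda c: freq[c])
        mrs.setdefault(chosen, []).append(rid)
    return mrs

R = {"r1": [{"si", "sj"}],
     "r2": [{"si", "sk"}],
     "r3": [{"sj", "sl"}, {"sj", "sk"}]}
print(minimum_rule_cover(R))   # {'si': ['r1', 'r2'], 'sj': ['r3']}
```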

5 Performance Analysis

The objective of this study is to assess and compare the performance of the advertising and retrieval schemes described above. To this end, a set of experiments, using different scenarios, is conducted to measure two key performance metrics: the effectiveness and the efficiency of retrieval and advertising. Effectiveness is measured in terms of the precision and recall of advertising and retrieval: precision is defined as the percentage of relevant object metadata among all object metadata that are retrieved, and recall as the percentage of the relevant object metadata in the system that are retrieved. Efficiency is measured in terms of the network and peer resources used for advertising and retrieval, namely bandwidth consumption and the storage used to store rules and object metadata.

5.1 Experimental Design and Procedure

Two sets of experiments are set up, in a stable and in a churn environment. In the stable environment, an OON structure is formed prior to object creation and remains stable throughout the experiment. In this experiment, Modelnet is used


to simulate delay and bandwidth consumption for a given network topology [9]. The simulated network topology was generated by the Inet-topology software. For this study, four 5,000-node wide-area AS-level networks were generated, with 50, 100, 150, and 200 overlay nodes respectively. The overlay nodes and the core network were connected using 10 Mbps links with 100 ms latency. All stub-stub links are assigned 1 Gbps with 20 ms latency. The OON service is developed using a software package of Bamboo-dht, a recently developed DHT substrate targeted to provide a public DHT service [10]. Each OON node runs the OON service to support semlet advertising and retrieval. A cluster of 16 computers connected with 100 Mbps links is used in this experiment. One machine with a 2.0 GHz cpu is setup as an emulator, and the other 15 each with 1.4 GHz cpu are used to emulate multiple OON nodes. The emulator emulates the network delay by acting as a hub responsible for holding and releasing the packets destined to different overlay nodes. In a churn environment, an OON structure is formed and dynamically changes as nodes randomly join and leave the network. Nodes are classified into two classes: long-lived and short-lived. A long-lived node is characterized by the node that once joins the network remains in the network through out the simulation. While a short-lived is defined as the node that joins and leaves the network with the average live-time of two minutes. Three scenarios of churn are simulated over a 500-node network, with 10%, 35% and 50% node churn respectively. In this experiment, the simulation software provided by the Bamboo-dht package is setup on a machine with 3.2 GHz cpu, and 1 GB of ram. The main objective of this experiment is to measure the effectiveness of the advertising and retrieval of the proposed schemes in a churn environment. For each simulation scenario, three experiments were conducted. All experiments are tested using five thousands objects. Each object is randomly created by one of the OON nodes using Poisson process with inter-arrival time of 20 seconds per OON node. Each object is associated with 2 to 4 concepts drawn from an ontology described using 100 concepts. This relatively small size ontology was used to ensure some level of similarities between different objects, which in turn allows the formation of different PWs. The selection of a concept associated with an object and a rule follows the Zipf distribution, where the frequency of selecting a concept that is ith -most-frequently-used is approximately inversely proportional to i. This distribution is well known in representing semantic selection in natural languages. Consequently, few PWs are expected to have large collections of objects. Once an object is created by an OON node, the OON service generates a single rule of a conjunctive clause involving all concepts generated for the object. For example, a rule si ∧ sj ∧ sk is generated for an object metadata with semantic {si , sj , sk }. This rule is then associated with a semlet profile, which is sent to an OON node, based on the OON index model defined in Figure 1, for advertising and retrieval. The average precision and recall is obtained by the post-process of the offline computation. Upon the completion of the simulation, the number of objects


retrieved for each semlet is computed. The information of all semlets, their object metadata, profiles and objects of interest from all the OON nodes is collected and used to compute the number of relevant objects, and those of which are retrieved. These numbers are then used to compute the precision and recall as described above. For the crawler-based scheme, four refreshing intervals, of size 200, 400, 600, and 800 seconds respectively, were considered. The goal was to study the impact of the interval size on the performance of the crawler-based scheme in comparison to the other schemes. 5.2

Precision and Recall



Stable Environment: The results show that the aggressive and the minimum-rule-cover scheme achieve 100% precision and recall, while the crawler-based scheme only achieves 100% precision. This can be explained by the fact that, using the crawler-based scheme, a semlet may miss objects that were advertised earlier and stored on nodes that the semlet does not visit. The percentage of missed objects is directly impacted by the size of the OON and by the size of the refreshing interval, as shown in Figure 2.

Fig. 2. Recall of Crawler-based Scheme (average recall per semlet vs. number of OON nodes, left, and vs. refreshing interval in seconds, right)

Churn Environment: The results in Figure 3 show that the aggressive scheme and the crawler-based scheme with small refreshing periods (200 and 400 seconds) outperform the minimum-rule-cover scheme as the percentage of churn nodes in the network increases. The results also confirm that, as the refresh interval increases, the effectiveness of the crawler-based scheme drops considerably. This can be explained by the fact that churn in the OON causes the loss of object metadata and semlet profiles advertised previously. The aggressive scheme is the most effective since it provides some level of replication. The crawler-based scheme with small refresh intervals, though without replication, outperforms the minimum-rule-cover scheme, since its refreshing strategy provides some level of data recovery. As the refresh interval increases, the effectiveness of the crawler-based scheme decreases, as data recovery is not able to catch up with the data loss.

Fig. 3. Recall Comparison in a Churn Environment (average recall per semlet vs. percentage of churn nodes in the OON and vs. refresh interval in seconds)

140 aggressive min-cover-rule crawler rf:200s crawler rf:800s

Avg. Storage Per Node (KB)

Avg. Utilized BW Per Node (KBps)

Fig. 3. Recall Comparison in a Churn Environment

aggressive min-cover-rule crawler rf:200s crawler rf:800s

120 100 80 60 40 20 0

40

60

80 100 120 140 160 180 200 Number of OON Nodes

40

60

80 100 120 140 160 180 200 Number of OON Nodes

Fig. 4. Bandwidth and Storage Efficiency

5.3 Bandwidth and Storage Efficiency

The results in Figure 4 confirm that the crawler-based scheme consumes more bandwidth than the other two schemes. The results also show that the OON size does not impact the bandwidth consumption of the aggressive and minimum-rule-cover schemes. In the crawler-based scheme, however, the bandwidth consumption per node decreases as the OON size increases. This can be explained by the fact that when the OON is small, most of the OON nodes handle the refreshing traffic; when the OON grows, this traffic is spread more widely among the OON nodes, so the average bandwidth consumption decreases. Regarding storage efficiency, the results show that the crawler-based scheme uses storage most efficiently among the three schemes, whereas the aggressive scheme exhibits the worst performance with respect to storage. They also show that, as the OON size increases, the average storage used per OON node decreases, because the metadata and profiles are spread among more OON nodes.

6 Conclusion and Future Work

An architecture to enable the dynamic formation of communities of interest is proposed. The data structures, mechanisms, and protocols necessary to support resource access and knowledge sharing among members of a community of interest are described. A study to assess the performance of different strategies for resource advertising and retrieval is presented. The experimental results show that the minimum-rule-cover scheme is more effective and efficient than the other two schemes in the stable environment: the scheme exhibits 100% precision and recall, while consuming an acceptable amount of bandwidth and storage. In the churn environment, however, the aggressive and crawler-based schemes outperform the minimum-rule-cover scheme. The tradeoff between the recall and the storage and communication costs in different churn environments will be the focus of future work. Furthermore, strategies to balance the load among OON nodes, including caching and data chaining, will be explored. Lastly, to enhance robustness and increase data availability, mechanisms such as data replication and erasure coding will be considered.


Providing an Uncertainty Reasoning Service for Semantic Web Application

Lei Li¹, Qiaoling Liu¹, Yunfeng Tao², Lei Zhang¹, Jian Zhou¹, and Yong Yu¹

¹ APEX Data and Knowledge Management Lab
² Basics Lab, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
{lilei, lql, zhanglei, priest, yyu}@apex.sjtu.edu.cn, [email protected]

Abstract. In the semantic web context, the formal representation of knowledge is scarce, while informal representations with uncertainty prevail. In order to provide an uncertainty reasoning service for semantic web applications, we propose a probabilistic extension of Description Logic, namely Probabilistic Description Logic Programs (PDLP). In this paper, we introduce the syntax and intensional semantics of PDLP, and present a fast reasoning algorithm that makes use of Logic Programming techniques. This extension is expressive, lightweight, and intuitive. Based on this extension, we implement a PDLP reasoner and apply it to a practical application: the Tourism Ontology Uncertainty Reasoning system (TOUR). The TOUR system uses the PDLP reasoner to make favorite travel plans on top of an integrated tourism ontology, which describes travel sites and services together with their evaluation.

1 Introduction

The Semantic Web (SW) [1] aims at transforming the traditional text-based web and providing machine-readable information for practical applications. Such information is based on three types of semantics: the implicit, the formal, and the powerful semantics[2]. The implicit semantics describes informal and uncertain information of the web, such as the semantics in unstructured texts and document links, while the formal semantics refers to well-defined and structured representations with definite meaning, such as ontologies based upon Description Logic (DL). The last, but most powerful, semantics owns the abilities of both of the previous ones, i.e. it describes both the informal (imprecise and probabilistic) and the formal aspects of the web. However, it is not easy to obtain a powerful semantics due to the incompatibility between logic and probability. To resolve this incompatibility, we propose PDLP to tightly combine description logic with probability and provide a powerful semantics in the semantic web context. As a formal representation, Description Logic[3] is a decidable fragment of First Order Logic (FO). With DL, Knowledge Bases (KB) describe concepts, roles (the relationships between concepts), axioms and assertions under a Tarski-like semantics. Efficient algorithms have been devised to solve reasoning tasks in


DL, such as highly optimized tableau algorithms[4]. However, these algorithms do not deal with uncertainty. In recent decades, uncertainty has started to play an important role in supplementing the expressivity of formal approaches. The changing web context and the intrinsic properties of information require this type of uncertainty, as illustrated by our tourism ontology. However, pure DL does not lend a hand to uncertainty reasoning. Several extensions have been made to combine uncertainty and DL[5,6,7]. In our attempt, we present Probabilistic Description Logic Programs (PDLP), a lightweight probabilistic extension to assertional knowledge in DL, and interpret probabilities under an intensional semantics. Queries in PDLP are answered using a translational approach: reducing DL to logic programs in the same spirit as in [8]. The major differences between our PDLP and other related formalisms lie in the following aspects:
– PDLP only attaches probability to world assertions rather than to terminological axioms;
– the syntax and semantics are carefully devised to meet both the DL and the LP restrictions;
– PDLP adopts a translational approach rather than a hybrid approach such as [5,7].
As a result of these differences, PDLP achieves several highlights:
– Expressive: PDLP has the ability to deal with uncertainty; moreover, queries in PDLP can be both DL-like and LP-like, so PDLP provides an expressivity extension for queries;
– Lightweight: probability is only attached to world assertions rather than to the whole knowledge base;
– Intuitive: the semantics of probability in PDLP is clear and natural;
– Speedy: its reasoning algorithm is very fast thanks to efficient inferencing with logic programming techniques;
– Practical: we apply our implemented PDLP reasoner in the Tourism Ontology Uncertainty Reasoning system to evaluate and rank travel plans for users, according to the quality of service of travel sites.
The remaining part of this paper is organized as follows: in the next section, we formalize the syntax and semantics of PDLP. In Section 3, we describe the reasoning tasks and the corresponding algorithm. In Section 4, we present our PDLP implementation and its application to tourism planning. Related work is discussed in Section 5. Finally, Section 6 concludes the paper.

2 Syntax and Semantics

2.1 Syntax of PDLP

The language of PDLP is obtained by tailoring DL in the same spirit as DHL[8]. The tailoring is justified by the different expressivity and complexity of DL and LP, so it is necessary to build a model that captures the common abilities of the two. Primarily, our extension enhances the expressivity of uncertain knowledge. A probabilistic knowledge base of PDLP consists of two components, pKB := ⟨T, pA⟩, where the TBox T contains axioms about concepts (in other words, the relationships between concepts):

  Ch1 ≡ Ch2   (concept equivalence)
  Cb ⊑ Ch     (concept inclusion)
  R ⊑ S       (role hierarchy)
  R ∈ R+      (transitive role)
  R ≡ S−      (inverse role)

where Ch, Cb are concepts and R, S are roles, defined as follows:

  Ch → A | Ch ⊓ Ch | ∀R.Ch
  Cb → A | ¬A | Cb ⊔ Cb | Cb ⊓ Cb | ∃R.Cb

where A is an atomic concept and R is a role. Here, negation is allowed only on primitive concepts rather than on arbitrary ones, to meet the restriction of the translation from DL to LP, because LP cannot represent arbitrary negation. The ABox of PDLP, pA, differs from that of normal DLs in the uncertainty it asserts. A probabilistic assertion p ∼ ϕ can assert uncertainty beyond the ability of an ordinary assertion ϕ, where ϕ is a:C or ⟨a, b⟩:R. A probabilistic ABox contains the following assertions:

  a:Ch,   ⟨a, b⟩:S,   p ∼ a:A,   p ∼ ⟨a, b⟩:R,

where a, b are individuals, A is a primitive concept, S is a role and R is a primitive role¹, and p, called the asserted probability (AP), is a real number in the range [0, 1]. The first two assertions are deterministic, while the latter two attach uncertainty to primitive concepts or roles (thus the extension is lightweight).
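To make the syntax concrete, the following minimal Python sketch shows one possible in-memory representation of a probabilistic knowledge base pKB = ⟨T, pA⟩. The class and field names are illustrative assumptions made here, not part of the paper or its implementation.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Axiom:
    kind: str                      # e.g. "inclusion", "equivalence", "transitive", "inverse"
    left: str
    right: str

@dataclass
class Assertion:
    predicate: str                 # concept or role name
    args: Tuple[str, ...]          # ("a",) for a:C, ("a", "b") for <a,b>:R
    p: Optional[float] = None      # None = deterministic, otherwise the asserted probability (AP)

@dataclass
class PKB:
    tbox: List[Axiom] = field(default_factory=list)
    abox: List[Assertion] = field(default_factory=list)

kb = PKB()
kb.tbox.append(Axiom("inclusion", "LuxuryHotel", "Hotel"))          # LuxuryHotel is included in Hotel
kb.abox.append(Assertion("PreferredCity", ("Beijing",), p=0.95))    # 0.95 ~ Beijing:PreferredCity
kb.abox.append(Assertion("hasPart", ("Beijing", "Summer Palace")))  # deterministic role assertion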

2.2 Semantics

In PDLP, an assertion is attached with a probability to indicate how likely it is to be consistent with respect to the probabilistic knowledge base. In order to assign a probability to an assertion, we first recall some notions from probability theory. An interpretation I of the TBox is called a model of T, written I |= T, if I assigns to every concept C a set C^I ⊆ Δ^I and to every role R a binary relation R^I ⊆ Δ^I × Δ^I, where Δ^I is the domain of the interpretation (Table 1). The partial ordering ⪯ on interpretations is defined as I1 ⪯ I2 if C^{I1} ⊆ C^{I2} for every concept C and R^{I1} ⊆ R^{I2} for every role R; I1 ≺ I2 if I1 ⪯ I2 and I1 ≠ I2.

¹ Primitive concepts are atomic concepts that do not occur in the heads Ch of TBox axioms; primitive roles are defined analogously.


Table 1. Semantics of concept constructors

  Constructor name     Syntax   Semantics
  Atomic concept       A        A^I ⊆ Δ^I
  Atomic negation      ¬A       Δ^I \ A^I
  Role                 R        R^I ⊆ Δ^I × Δ^I
  Conjunction          C ⊓ D    C^I ∩ D^I
  Disjunction          C ⊔ D    C^I ∪ D^I
  Exists restriction   ∃R.C     {x | ∃y. ⟨x, y⟩ ∈ R^I ∧ y ∈ C^I}
  Value restriction    ∀R.C     {x | ∀y. ⟨x, y⟩ ∈ R^I → y ∈ C^I}

Definition 2.1 (Least Fixed Point Semantics). Let I0 be a base interpretation defined only on primitive concepts and primitive roles. An extension I of a base interpretation I0 is called a least fixed point model with respect to a TBox T if: (1) I0 ⪯ I and I |= T; (2) ∀I′. I0 ⪯ I′ ∧ I′ |= T ⟹ I ⪯ I′. Hence, we define the sample space in PDLP as the model class C: the collection of least fixed point models with respect to the TBox. We assume that μ is a probability distribution on C with the restriction μ(C) = 1.

Definition 2.2. A pair ⟨C, μ⟩ is called a probabilistic world of the knowledge base pKB, where C and μ follow the conditions mentioned above.

Intensional Semantics. An ABox assertion is in fact a logical formula. A formula has its satisfied models: Mod(ϕ) := {I : I ∈ C ∧ I |= ϕ}.

Definition 2.3. The calculated probability (CP) of a classical deterministic assertion ϕ is a function defined by

  ν(ϕ) := μ(Mod(ϕ)) = Σ_{I ∈ C, I |= ϕ} μ(I)

where ϕ is a deterministic assertion (a:C or ⟨a, b⟩:R) and ⟨C, μ⟩ is a probabilistic world. ν is well defined because C is enumerable.

Lemma 2.1. The CP of an assertion has the following properties:
  – ν(a:¬C) = 1 − ν(a:C);
  – inclusion-exclusion principle: ν(a:C ⊔ D) = ν(a:C) + ν(a:D) − ν(a:C ⊓ D),
where C, D are concepts.

Definition 2.4. A probabilistic world ⟨C, μ⟩ satisfies a probabilistic assertion p ∼ ϕ, written ⟨C, μ⟩ |≈ p ∼ ϕ, if ν(ϕ) = p (CP equals AP). A probabilistic world satisfies an ABox pA with respect to a TBox T, ⟨C, μ⟩ |≈ pA, if it satisfies all assertions in pA. In this sense, it is also written ⟨C, μ⟩ |≈ pKB, for a probabilistic knowledge base pKB constituted by T and pA.


Definition 2.5. A pKB entails an assertion p ∼ ϕ, written pKB |≈ p ∼ ϕ, if all probabilistic worlds of pKB satisfy the assertion:

  pKB |≈ p ∼ ϕ   iff   ∀⟨C, μ⟩ |≈ pKB ⟹ ⟨C, μ⟩ |≈ p ∼ ϕ.

Lemma 2.2 (Probability Indicator).
  – If pKB |≈ p ∼ a:C, then pKB |≈ (1 − p) ∼ a:¬C;
  – inclusion-exclusion principle: if pKB |≈ p ∼ a:C, pKB |≈ q ∼ a:D and pKB |≈ r ∼ a:C ⊓ D, then pKB |≈ (p + q − r) ∼ a:C ⊔ D;
  – a:∃R.C ≡ ⋁_{b ∈ HU} (⟨a, b⟩:R ∧ b:C), where HU denotes the Herbrand Universe.

This provides an approach to calculating existential assertions. The lemma above is vitally important because it is the basis of our calculation of the probability of a given assertion. It also explains why we restrict the syntax of probabilistic assertions in the ABox: we would like to compute the probability of complex assertions from basic and simple facts. This is rather useful in practical applications, as shown in our example: the ontology knows basic facts while the reasoner can figure out complex events. Our extension enhances the expressivity of uncertain knowledge. Meanwhile, it is its lightweight nature that enables an easy semantic model of the probabilistic knowledge base. The overall semantics is rather intuitive once we set up probabilities for models of the knowledge base, which inspires our fast reasoning scheme in the following section.
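As a small numerical illustration of the probability indicators above (the figures 0.7, 0.6 and 0.5 are assumed for the example and do not come from the paper): if pKB |≈ 0.7 ∼ a:C, pKB |≈ 0.6 ∼ a:D and pKB |≈ 0.5 ∼ a:C ⊓ D, then Lemma 2.2 gives 1 − 0.7 = 0.3 for a:¬C and 0.7 + 0.6 − 0.5 = 0.8 for a:C ⊔ D. A minimal Python check:

def complement(p_c: float) -> float:
    # Lemma 2.2, first item: probability of a:not-C from a:C
    return 1.0 - p_c

def union(p_c: float, p_d: float, p_c_and_d: float) -> float:
    # Lemma 2.2, inclusion-exclusion: probability of a:C-or-D
    return p_c + p_d - p_c_and_d

assert abs(complement(0.7) - 0.3) < 1e-9
assert abs(union(0.7, 0.6, 0.5) - 0.8) < 1e-9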

3 Reasoning

3.1 Reasoning Tasks

A DL reasoning system typically supports several kinds of reasoning tasks: membership, subsumption, satisfiability and hierarchy, all of which can be reduced to retrieval problems [3], while LP engines typically answer two kinds of queries: instance retrieval and membership check[8]. Since PDLP adopts the translational approach, it supports similar queries to DHL[8], which enables us to express an information need either DL-like, with concept constructors, or LP-like, with variables. For example, we can represent the query "retrieve any instance of ∃R.C" as:
  – DL-like: ∃R.C
  – LP-like: Query(x) ← R(x, y), C(y)
DL-like queries can easily be translated into LP-like queries, while LP queries can express more than DL ones. Therefore PDLP is expressive in its query ability. A query is answered in the following scheme:
  1. fast retrieval of all possible instances by making use of the translation below;
  2. calculation of the probability corresponding to each instance (pair).

3.2 Translation

In order to retrieve all possible results, a probabilistic knowledge base pKB can be partially translated into a logic program[9] while preserving the semantics. We follow the approach of [8] and define a mapping Γ from PDLP to LP in the same way as DHL, except for atomic negations and probability assertions, as follows:

  Γ(A, x) −→ A(x)
  Γ(¬A, x) −→ ∼A(x)
  Γ(C1 ⊓ C2, x) −→ Γ(C1, x) ∧ Γ(C2, x)
  Γ(Cb1 ⊔ Cb2, x) −→ Γ(Cb1, x) ∨ Γ(Cb2, x)
  Γ(∀R.Ch, x) −→ Γ(Ch, y) ← R(x, y)
  Γ(∃R.Cb, x) −→ R(x, y) ∧ Γ(Cb, y)
  Γ(Cb ⊑ Ch) −→ Γ(Ch, x) ← Γ(Cb, x)
  Γ(Ch1 ≡ Ch2) −→ { Γ(Ch1 ⊑ Ch2), Γ(Ch2 ⊑ Ch1) }
  Γ(R ⊑ S) −→ S(x, y) ← R(x, y)
  Γ(R ∈ R+) −→ R(x, y) ← R(x, z) ∧ R(z, y)
  Γ(R ≡ S−) −→ { R(x, y) ← S(y, x), S(x, y) ← R(y, x) }
  Γ(a : Ch) −→ Γ(Ch, a)
  Γ(⟨a, b⟩ : R) −→ R(a, b)
  Γ(p ∼ a : A) −→ A(a)
  Γ(p ∼ ⟨a, b⟩ : R) −→ R(a, b)

A (retrieval) query can also be translated into LP conventions as the body part of a rule without a head. This translation phase does not concern probability; its primary target is all possible results. The preservation of semantics relies primarily on the translation of the deterministic part of pKB, which is ensured by the common least fixed point semantics the two formal frameworks share[10]. The semantics is therefore preserved, for the probability distribution is defined on the models of the TBox, which involves no uncertainty.
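The following minimal Python sketch illustrates how a few of the Γ mappings above could be realised; the tuple-based encoding of concepts and the string form of the produced rules are assumptions made for illustration and do not reflect the authors' implementation.

# Concepts are nested tuples, e.g. ("and", C1, C2), ("exists", R, C), ("not", A),
# or a plain string for an atomic concept.
def gamma(concept, var):
    """Translate a PDLP concept occurring in a rule body into an LP expression."""
    if isinstance(concept, str):                        # atomic concept A
        return f"{concept}({var})"
    op = concept[0]
    if op == "not":                                     # atomic negation
        return f"not {concept[1]}({var})"
    if op == "and":
        return f"{gamma(concept[1], var)}, {gamma(concept[2], var)}"
    if op == "or":
        return f"({gamma(concept[1], var)}; {gamma(concept[2], var)})"
    if op == "exists":                                  # exists R.C in a body
        role, c = concept[1], concept[2]
        return f"{role}({var},Y), {gamma(c, 'Y')}"
    raise ValueError(f"unsupported constructor {op}")

def translate_inclusion(body, head, var="X"):
    """Gamma(Cb included-in Ch) for an atomic head: head(X) <- body(X)."""
    return f"{gamma(head, var)} :- {gamma(body, var)}."

# Example with names from the tourism ontology (the axiom itself is invented for illustration):
print(translate_inclusion(("exists", "offerActivity", "Sightseeing"), "PreferredDest"))
# PreferredDest(X) :- offerActivity(X,Y), Sightseeing(Y).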

3.3 Probabilistic Inferencing

Definition 3.1. A set of primitive assertions E is called a (basic) evidence of an assertion p ∼ ϕ if
  – ⟨T, D(E)⟩ |= D(p ∼ ϕ), where ⟨T, D(E)⟩ is a temporal knowledge base;
  – ∀E′ ⊊ E, ⟨T, D(E′)⟩ ⊭ D(p ∼ ϕ),
where D(E) = {ϕ : p ∼ ϕ ∈ E}. E is a minimum set of primitive facts that supports ϕ. Assuming the consistency of pKB, an answer to a query under the least fixed point semantics is the result of a bottom-up calculation, while an evidence is in essence the result of a top-down search for supporting facts. Hence, the procedure to calculate the probability of an assertion can be summarized as follows:


1. Infer all basic evidences E1, ..., Ek with the LP engine.
2. Calculate the probability CP = CP(⋁_{i=1}^{k} (⋀ Ei)) by the inclusion-exclusion principle introduced in Lemma 2.2.
In order to calculate the probability, one vital property we assume here is the independence of the assertions in a basic evidence, that is, CP(p ∼ ϕ ∧ q ∼ ψ) = AP(p ∼ ϕ) · AP(q ∼ ψ) = p · q, where ϕ, ψ are of the form a:A or ⟨a, b⟩:R, and A, R are primitive.
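A minimal Python sketch of this two-step procedure, assuming the evidences have already been obtained from the LP engine and that the assertions within an evidence are independent, as stated above; the data layout is an illustrative assumption.

from itertools import combinations

def evidence_prob(evidence):
    """CP of the conjunction of one evidence's assertions (independence assumption)."""
    p = 1.0
    for _, ap in evidence:          # evidence: list of (assertion, asserted probability) pairs
        p *= ap
    return p

def cp_of_answer(evidences):
    """Inclusion-exclusion over the disjunction of the evidences E1 ... Ek."""
    total = 0.0
    for r in range(1, len(evidences) + 1):
        for combo in combinations(evidences, r):
            merged = {a: ap for ev in combo for a, ap in ev}   # union of the assertions
            sign = (-1) ** (r + 1)
            total += sign * evidence_prob(list(merged.items()))
    return total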

4 Implementation and Application

In this section, we present an example application of our implemented PDLP reasoner, the Tourism Ontology Uncertainty Reasoning system (TOUR), which provides a travel planning service for clients.

4.1 Implementation

We implement a PDLP reasoner based on the intensional semantics. In order to speed up its reasoning, the PDLP reasoner adopts the following optimization techniques: sideways information passing, magic sets and semi-naive evaluation strategies[11]. The reasoning proves fast in the practical tourism application described below.

4.2 Scenario and Architecture

The TOUR system is built upon the tourism ontology (adapted from the Protégé² ontology library). The system aims at making the tourism plan that best conforms to the customer's expectations. The system contains three components (Figure 1):

Fig. 1. Architecture of TOUR

– Ontology layer: the tourism ontology formally describes a set of available destinations, accommodation, and activities. The tourism axioms (see Table 2) of this ontology are devised according to the WTO thesaurus³. The ontology grades each instance with a score. These grades of travel sites and services come from two sources: trusted tourism agencies such as the National Tourism Administration (CNTA)⁴, and the customer's personalized favorite setup;
– Reasoning layer: makes use of our implemented PDLP reasoner;
– Querying layer: transforms user queries into DL-style or LP-style forms, and puts the customer's personalized assertions about travel sites and services into the ontology.

² http://protege.stanford.edu/

Table 2. A fragment of the translated LP of axioms from the tourism ontology

  Axioms                        Translated rules
  RuralArea ⊑ PreferredDest     PreferredDest(X) ← RuralArea(X).
  UrbanArea ⊑ PreferredDest     PreferredDest(X) ← UrbanArea(X).
  PreferredPark ⊑ RuralArea     RuralArea(X) ← PreferredPark(X).
  PreferredFarm ⊑ RuralArea     RuralArea(X) ← PreferredFarm(X).
  PreferredTown ⊑ UrbanArea     UrbanArea(X) ← PreferredTown(X).
  PreferredCity ⊑ UrbanArea     UrbanArea(X) ← PreferredCity(X).
  hasPart ∈ R+                  hasPart(X,Z) ← hasPart(X,Y), hasPart(Y,Z).
  Sports ⊑ Activity             Activity(X) ← Sports(X).
  Adventure ⊑ Activity          Activity(X) ← Adventure(X).
  Sightseeing ⊑ Activity        Activity(X) ← Sightseeing(X).
  Hotel ⊑ Accommodation         Accommodation(X) ← Hotel(X).
  LuxuryHotel ⊑ Hotel           Hotel(X) ← LuxuryHotel(X).
  offerActivity ≡ isOfferedAt−  offerActivity(X,Y) ← isOfferedAt(Y,X).
                                isOfferedAt(X,Y) ← offerActivity(Y,X).

Probabilities in the tourism ontology have practical meanings concerning the quality of tourism sites and services:
  – p ∼ a:Accommodation denotes the accommodation rating given by the CNTA;
  – p ∼ ⟨d, a⟩:hasAccommodation indicates the convenience of accommodation a in destination d, e.g. the environment and traffic conditions in the neighborhood;
  – p ∼ ⟨d, c⟩:hasActivity describes the service probability of activity c in destination d, e.g. activities such as watching the sunrise require clear days.

4.3 Querying on the Ontology

The TOUR system takes the following steps to evaluate a travel plan for a customer:
1. Set up probabilities for basic facts (assertions) in the ontology (Probability Personalizer);
2. Specify the factors and rules used to retrieve travel plans (Query Generator);

³ The World Tourism Organization, http://www.world-tourism.org
⁴ http://www.cnta.gov.cn/

3. Make use of the PDLP reasoner to infer possible travel plans with their probabilities (PDLP Reasoner).
The following example concerning travelling in Beijing illustrates the whole procedure. In the first phase, suppose a customer grades travel sites and services in Beijing (Figure 2) as follows (partially):

Fig. 2. Assertions about Beijing (a graph depicting the assertions (1)-(6) listed below)

  (1) 0.95 ∼ Beijing:PreferredCity
  (2) 0.9 ∼ ⟨Beijing, Wangfujing Grand Hotel⟩:hasAccommodation
  (3) 0.9 ∼ ⟨Summer Palace, Visiting⟩:offerActivity
  (4) 0.85 ∼ ⟨Tiananmen Square, Visiting⟩:offerActivity
  (5) ⟨Beijing, Summer Palace⟩:hasPart
  (6) ⟨Beijing, Tiananmen Square⟩:hasPart

Second, the optimal travel plan is specified as a triple of destination, accommodation and activity, judged by a combination of their service quality and convenience. Thus the query rule is:
  Q1: Query(X,Y,Z) ← PreferredDest(X), hasAccommodation(X,Y), hasPart(X,X1), offerActivity(X1,Z).
In the third phase, the PDLP engine infers a result RES1 = ⟨Beijing, Wangfujing Grand Hotel, Visiting⟩ with two evidences, E1 = {(1),(2),(3),(5)} and E2 = {(1),(2),(4),(6)}; thus the convenience (probability) of this plan is CP(RES1) = CP(E1) + CP(E2) − CP(E1 ∧ E2) = 0.842175. Figure 3 shows the results of Q1.
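The arithmetic behind CP(RES1) can be checked directly from the asserted probabilities (1)-(6) and the independence assumption: CP(E1) = 0.95 × 0.9 × 0.9 = 0.7695, CP(E2) = 0.95 × 0.9 × 0.85 = 0.72675, CP(E1 ∧ E2) = 0.95 × 0.9 × 0.9 × 0.85 = 0.654075, and therefore CP(RES1) = 0.7695 + 0.72675 − 0.654075 = 0.842175. A short Python check (the deterministic assertions (5) and (6) are treated as probability 1):

ap = {1: 0.95, 2: 0.9, 3: 0.9, 4: 0.85, 5: 1.0, 6: 1.0}   # asserted probabilities of (1)-(6)

def cp(assertions):
    p = 1.0
    for a in assertions:
        p *= ap[a]
    return p

e1, e2 = {1, 2, 3, 5}, {1, 2, 4, 6}
print(round(cp(e1) + cp(e2) - cp(e1 | e2), 6))   # 0.842175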

4.4 Performance

We have tested the performance of the TOUR system using two sorts of queries. One is the mixed query above, combining both probabilistic and formal inquiry. The other is intended to test the capacity of the system on simple queries:
  Q2: Query(X,Y) ← offerActivity(X,Y).

Fig. 3. Partial result of Q1

  Ontology          Tour1  Tour2  Tour3  Tour4  Tour5
  Ontology size      489   1002   1524   1998   3099
  Q1 result size     296    638    986   1302   2036
  Q1 time cost (s)  0.94   2.92   8.30  18.64  54.66
  Q2 result size      15     15     15     15     15
  Q2 time cost (s)  0.19   0.14   0.16   0.17   0.19

Fig. 4. Performance of TOUR

Figure 4 shows the performance of the TOUR system for the two queries on the test ontologies. The size of an ontology is measured by its number of instances (both concept instances and role instances). In the logarithmic graph, the reasoning time scales with the ontology size, illustrating the attractive computability of PDLP. Theoretically, PDLP computes query answers quickly thanks to the tractable complexity of LP[9]. Because the PDLP reasoner is not dedicated to the TOUR system, we can also expect high performance of PDLP's reasoning in general applications.

5 Related Work

There are several related approaches to probabilistic description logics, which can be classified according to: (1) which component the uncertainty is attached to: (1a) the TBox[12,13,14], (1b) the ABox[7,15], (1c) both[16,6,5,17,18,19,20]; and (2) which approach is applied to reasoning. For the latter aspect: (2a) fuzzy logic inferencing[16,17], (2b) Bayesian networks[12,18], (2c) a lattice-based approach[20], (2d) combination of probabilistic DL with LP[6,7,5]. [16,17] extend DL (both TBox and ABox) with uncertainty intervals using fuzzy set theory, and devise a set of reasoning rules to infer uncertainty. [12,18] translate probabilistic extensions of DL into Bayesian networks with different expressivities: the former [12] works on an extension of the ALC TBox, while the latter [18] makes a probabilistic extension to OWL ontologies. [20] manages uncertainty in DL with a lattice-based approach, mapping an assertion to an uncertainty value in a lattice and reasoning in a tableaux-like calculus. Other related work [13] concentrates on probabilities on terminological axioms, [15] on world assertions, and [14] on concept subsumption and role quantification. [19] extends SHOQ(D) using probabilistic lexicographic entailment and supports assertional knowledge. Concerning the extension of uncertainty to combinations of DL and LP, the works most related to ours can be divided into: (i) hybrid approaches tightly combining DL with LP and adding uncertainty in order to extend expressivity [5][7]; (ii) translational approaches reducing DL with uncertainty to probabilistic inferencing in LP in order to take advantage of powerful logic programming technology for inference [6]. [5] adds an uncertainty interval to an assertion in a combination of DL and LP under the answer set semantics. [7] presents a combination of description logic programs (or dl-programs), adds probability to assertions under the answer set semantics and the well-founded semantics, and reduces the computation of probability to solving linear optimization systems. [6] generalizes DAML+OIL with probability (in essence on both TBox and ABox) and maps it to four-valued probabilistic Datalog. Besides the difference in the four-valued extension, another difference between [6] and our PDLP lies in the fact that [6] attaches probability to both axioms and assertions in DAML+OIL, while we restrict probability to ABox assertions only, in order to achieve our three highlights, especially the intuitive semantics.

6 Conclusion and Future Work

In this paper, we have extended DL with probability on assertional knowledge, namely PDLP, and interpreted probabilistic ABox assertions under an intensional semantics. The syntax and semantics of PDLP are lightweight, intuitive and expressive enough to deal with uncertainty in practical Semantic Web applications, and its reasoning is very fast through LP techniques. We have implemented a PDLP reasoner and applied it in a practical application, the TOUR system, to make optimal travel plans for users. The performance of the TOUR system is encouraging for future use of PDLP in other applications.

References
1. Berners-Lee, T., Hendler, J., Lassila, O.: The semantic Web. Scientific American 284 (2001) 34-43
2. Sheth, A.P., Ramakrishnan, C., Thomas, C.: Semantics for the semantic web: The implicit, the formal and the powerful. Int. J. Semantic Web Inf. Syst. 1 (2005) 1-18
3. Baader, F., Calvanese, D., McGuinness, D.L., Nardi, D., Patel-Schneider, P.F., eds.: The Description Logic Handbook: Theory, Implementation, and Applications. Cambridge University Press (2003)
4. Horrocks, I., Sattler, U., Tobies, S.: Practical reasoning for very expressive description logics. Logic J. of the IGPL 8 (2000)
5. Straccia, U.: Uncertainty and description logic programs: A proposal for expressing rules and uncertainty on top of ontologies. (2004)
6. Nottelmann, H., Fuhr, N.: pDAML+OIL: A probabilistic extension to DAML+OIL based on probabilistic Datalog. In: IPMU-04. (2004)
7. Lukasiewicz, T.: Probabilistic description logic programs. In: Proc. of the 8th Euro. Conf. on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, Springer (2005) 737-749
8. Grosof, B.N., Horrocks, I., Volz, R., Decker, S.: Description logic programs: combining logic programs with description logic. In: WWW '03, ACM Press (2003) 48-57
9. Dantsin, E., Eiter, T., Gottlob, G., Voronkov, A.: Complexity and expressive power of logic programming. ACM Comput. Surv. 33 (2001) 374-425
10. Grädel, E.: Finite model theory and descriptive complexity. In: Finite Model Theory and Its Applications. Springer-Verlag (2003)
11. Ramakrishnan, R., Srivastava, D., Sudarshan, S.: Efficient bottom-up evaluation of logic programs. In Vandewalle, J., ed.: The State of the Art in Computer Systems and Software Engineering. Kluwer Academic Publishers (1992)
12. Koller, D., Levy, A.Y., Pfeffer, A.: P-CLASSIC: A tractable probabilistic description logic. In: AAAI/IAAI. (1997) 390-397
13. Heinsohn, J.: Probabilistic description logics. In: Proc. of UAI-94. (1994) 311-318
14. Jaeger, M.: Probabilistic role models and the guarded fragment. In: IPMU-04. (2004)
15. Duerig, M., Studer, T.: Probabilistic ABox reasoning: Preliminary results. In: DL-05, Edinburgh, Scotland (2005)
16. Straccia, U.: Towards a fuzzy description logic for the semantic web (preliminary report). In: ESWC-05, Springer Verlag (2005) 167-181
17. Straccia, U.: Reasoning within fuzzy description logics. JAIR 14 (2001) 137-166
18. Ding, Z., Peng, Y., Pan, R.: A Bayesian approach to uncertainty modeling in OWL ontology. In: Proc. of Int. Conf. on Advances in Intelligent Systems - Theory and Applications. (2004)
19. Giugno, R., Lukasiewicz, T.: P-SHOQ(D): A probabilistic extension of SHOQ(D) for probabilistic ontologies in the semantic web. In: Logics in Artificial Intelligence, European Conference, Springer (2002) 86-97
20. Straccia, U.: Uncertainty in description logics: a lattice-based approach. In: IPMU-04. (2004) 251-258

Indexing XML Documents Using Self Adaptive Genetic Algorithms for Better Retrieval

K.G. Srinivasa¹, S. Sharath², K.R. Venugopal¹, and Lalit M. Patnaik³

¹ Department of Computer Science and Engineering, University Visvesvaraya College of Engineering, Bangalore - 560001, India
  [email protected], [email protected]
² Infosys Technologies, Bangalore, India
  [email protected]
³ Microprocessor Applications Laboratory, Indian Institute of Science, India
  [email protected]

Abstract. The next generation of the web is often characterized as the Semantic Web. Machines, which are adept at processing data, will also perceive the semantics of the data. The XML technology, with its self-describing and extensible tags, is contributing significantly to the semantic web. In this paper, a framework for information retrieval from XML documents using Self Adaptive Migration Model Genetic Algorithms (SAGAXsearch) is proposed. Experiments on real data, performed to evaluate the precision and the query execution time, indicate that the framework is accurate and efficient compared to existing techniques.

1 Introduction

Extensible markup languages are widely used for publishing data on the web. The number of XML documents on the web is growing enormously, and hence there is a need for keyword search over XML documents. Keyword search over large document collections has been used extensively for text and HTML documents [1], and it has two main drawbacks. First, search engines are not as intelligent as their users. For example, a keyword search Kevin Database Technology will retrieve documents in which Kevin is the author and also documents in which Kevin is mentioned in the references with equal priority, though the former is more semantically relevant to the user. The second drawback is that keyword queries are inherently flexible in nature and can produce a large number of results. The results are of varying relevance to the user and they need to be ranked, and the time taken to rank the results should be a small portion of the total query execution time. In contrast, a structured query language will retrieve only the most relevant results, but the complex query syntax makes it unsuitable for naive users. Thus, an approach which has the flexibility of keyword queries and still retains the accuracy of a query language would be most suitable. A GA is an evolutionary process where, at each generation, individuals are selected from a set of feasible solutions such that those with higher fitness values have a greater probability of reproduction. At each generation, the chosen individuals


undergo crossover and mutation to produce the populations of successive generations. The selection chooses the best individuals for crossover. With crossover, the characteristics of the parents are inherited by the individuals in the next generation. Mutation helps in restoring lost or unexplored regions of the search space. These three operators are inspired by the biological process of evolution and can find possible solutions even in a large problem space. Contributions: We make use of Self Adaptive Migration Model Genetic Algorithms to learn the tag information. This information is used to distinguish between the frequently used and the less frequently used tags. We also propose an index structure that stores the frequently used and the less frequently used tag information separately.

2 Motivation

Consider the XML document fragments in Table 1, an excerpt from a health-care record, and a keyword search Vinu salbutamol over the XML document in Table 1. A standard HTML search engine would consider the whole document in Table 1 a suitable response, due to the presence of both terms in the search query. However, in the XML environment the two search terms occur as totally unrelated elements in the document, as they belong to the medical records of different patients. In the XML document of Table 1, the keyword penicillin appears in two different contexts, and the tag name precisely categorizes the two occurrences. Additional information like names and record identifiers is also explicitly captured using application-specific, self-explanatory tags.

Table 1. Example Health Care Record

  Record 1: Vinu Krishnan, 4312, Penicillin, None
  Record 2: Victor James, 4313, Salbutamol, Penicillin




This is useful in keyword search over XML documents. Thus, exploiting the tagged and nested structure of XML can help in effective knowledge discovery. In this paper we describe the architecture, implementation and evaluation of a search engine for retrieving relevant XML document fragments in real time.

3 Related Work

Extensive research has been done on structured declarative queries over XML documents. A structured declarative query is supported by XQuery [2], which is analogous to SQL queries over relational databases. Though XQuery can achieve perfect precision and recall, they require user to learn query semantics and in cases where the user is unaware of the document structure, a search cannot be performed. An improvement over XQuery that has elegant syntax and semantics is developed in [3]. Information retrieval techniques can consider XML documents as normal text documents, with additional markup overhead. There are several ways of handling the tags. For simplicity the tags can simply be ignored but the document loses its semantics, leading to lower retrieval performance. When tags are taken into consideration, search can retrieve documents containing certain tags, or certain words. Keyword search over XML documents falls under this category. Keyword search over XML documents is supported by XKeyword [4], XRANK [5] and XSEarch [6]. All these keyword search techniques have elaborate ranking schemes. The simplicity of the search queries i.e., keywords make these techniques suitable for nave users. But, precision and recall values tend to suffer and the extensive ranking function employed acts as an overhead during query execution. In XRANK [5], the hierarchical and hyperlinked structure of XML documents are taken into account while computing the ranks for the search results. A ranking technique at the granularity of XML elements is considered here. XRANK can query over a mix of XML and HTML documents. XSEarch [6] introduces a concept known as interconnected relationship. However, checking for the interconnected relationship is a huge overhead during runtime. Moreover, XSEarch suffers from drawbacks similar to other keyword search engines: unimpressive precision and recall values. In our proposed SAGAXsearch algorithm, the association of Self Adaptive Genetic Algorithms with keyword queries ensures high accuracy i.e., very few non-relevant fragments (high precision) and most of the relevant fragments (high recall) will be selected as results.

4 XML Data Model and Query Semantics

In this section, we briefly describe the XML data model and the keyword query semantics for search over XML documents. Data Model: The Extensible Markup Language (XML) is a human-readable, machine-understandable, general syntax for describing hierarchical data, applicable to a wide range of applications. XML allows users to bring multiple files together to form a compound document. An XML document consists of nested elements starting from the root, with corresponding associated values. The XML document can be considered as a directed, node-labeled data graph G = (X, E). Each node in X corresponds to an XML element in the document and is characterized by a unique object identifier and a label that captures the semantics of the element. Leaf nodes are also associated with a sequence of keywords. E is the set of edges which define the relationships between nodes in X. The edge (l, k) ∈ E if there exists a directed edge from node l to node k in G; the edge (l, k) ∈ E also denotes that node l is the parent of node k in G. Node l is an ancestor of node k if a sequence of directed edges from node l leads to node k. An example XML document tree is shown in Figure 1. Query Semantics and Results: Let the XML document tree be called τ and let x be an interior node in this tree. We say that x directly satisfies a search term k if x has a leaf child that contains the keyword k, and x indirectly satisfies a keyword k if some descendant of x directly satisfies the search term k. A search query q = {k1, k2, ..., km} is satisfied by a node x iff x satisfies each of k1, k2, ..., km either directly or indirectly. For example, in the XML tree shown in Figure 1, inproceedings(1) satisfies the search term Vipin and the search term Vipin 1979, but not the term Vipin 1980.

Fig. 1. Example XML Document Tree (rooted at dblp(0); inproceedings(1) contains author(3) Vipin, title(4) Land Use..., year(5) 1979; inproceedings(2) contains author(6) Ravi, title(7) Synchronization ..., year(8) 1980)
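A minimal Python sketch of the "directly/indirectly satisfies" test on a simplified encoding of the tree of Figure 1; the flat keyword lists per node are an assumption made for illustration (in the real tree the keywords sit in leaf children such as the author and year elements).

# Each node: (label, keywords found in its leaf children, list of child nodes)
tree = ("dblp", [], [
    ("inproceedings1", ["Vipin", "Land Use...", "1979"], []),
    ("inproceedings2", ["Ravi", "Synchronization ...", "1980"], []),
])

def satisfies(node, term):
    label, keywords, children = node
    if any(term == kw or term in kw.split() for kw in keywords):    # directly satisfies
        return True
    return any(satisfies(c, term) for c in children)                # indirectly satisfies

def satisfies_query(node, query):
    return all(satisfies(node, t) for t in query)

inproc1 = tree[2][0]
print(satisfies_query(inproc1, ["Vipin", "1979"]))   # True
print(satisfies_query(inproc1, ["Vipin", "1980"]))   # False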

The nodes obtained as results should also be semantically related. Semantically related nodes are nodes that appear in the same context; for example, an author and the title of his book having the same inproceedings ancestor node. A mathematical measure of semantic relationship is given in Section 6. The various steps in the working of SAGAXsearch are listed below.
1. A representative training set is chosen to assist the genetic learning of tags.
2. The keyword queries and the relevant search results are collected from the user.
3. The genetic algorithm retrieves the tag combination which can answer a maximum number of training queries.
4. Separate indices are built for the frequently used and the occasionally used tag combinations.
5. A search over the XML documents in decreasing order of importance of tags is performed.
6. The search produces only semantically related results.

5 Genetic Learning of Tags

A characteristic of XML documents is that they include extensible tags for formatting the data. The Self Adaptive Migration Model Genetic Algorithm [7] is used to identify the tags which are frequently used and to distinguish them from those which are occasionally used. The architecture of the genetic learning system is illustrated in Figure 2.

Fig. 2. Genetic Learning of Tags (the XML document set and the training queries feed the genetic learning of tags, whose tag selection yields the frequently used and the occasionally used tag combinations)

In simple GA, the three basic operators of GA namely, selection, crossover and mutation are fixed apriori. As the individuals evolve through generations, these operators remain constant. A new breed of GA called adaptive GA [8] adjusts the values of the operators based on the fitness of the individual in the population. Such an adaptive GA can exploit previously discovered knowledge for a focused search on parts of the search space which is more likely to yield better results and at the same time can search over the unexplored regions. In migration model GA, instead of a single population, a set of populations is evolved. The basic operators of GA are applied independently for each population and at some regular intervals; individuals are exchanged between populations for a more diversified search. The Self Adaptive Real coded GA to learn the tag information is adaptive in three aspects. The first parameter that is adaptively changed is the size of each population. The population size is determined by the fitness of the best individual in the population compared to the mean fitness of the population. The number of individuals in the population Pi is updated as,

  n_{i,t+1} = n_{i,t} + f(P_i) / f̄ − 1        (1)

where t is used to represent the time in generations. With this update, the size of the population grows when the fitness is greater than the value of the mean fitness and vice versa. Thus, the algorithm is more explorative in the problem space where there is more likelihood of finding the solution. Though the number of individuals in each population varies, the total number of individuals in the ecosystem remains the same. The second parameter that is dynamically updated is the mutation rate and is given by, pmi,t+1 = pmi,t + (

n ¯ − 1) ∗ 0.0001 ni

(2)

Using this update we see that, if the number of individuals in a population is less than the size of the mean population then the mutation rate is increased in order to make the search more explorative. In contrast, if the size of the mean population is smaller, then the mutation rate is decreased. The final parameter that is adaptive in the algorithm is the rate of migration. Migration refers to copying individuals from one population to another. Migration helps in discovering new schemas generated by the crossover of two populations. In the algorithm, migration occurs only when the average fitness of the populations remains unchanged between two generations. Thus, when populations have attained a steady state, migration occurs to try and discover a new schema. The selection operator tries to improve the quality of the future generations by giving individuals with higher fitness, a greater probability of getting copied into the next generation. Here the assumption is that parents with higher fitness values generate better Offspring. The purpose of SAMGA is to select from the tag pool, the tag combinations which are interesting to a user. The user has to first issue a set of search queries q = {k1 , k2 , ...km }. The documents satisfying the search terms are retrieved as results. The user has to classify the results relevant to him. This is the feedback given to the system in order to learn the user interest. The fitness function used in the GA is given by, N f req(i, Stag ) ) + (1 − α)N f itness = α ∗ ( rank(i) i=1

(3)

where N is the number of documents retrieved with a specific tag configuration, Stag is the set of top k tags with highest tag weights. f req(i, Stag ) is the frequency of occurrence of the terms of the query q = {q1 , q2 , ...qm } within the tags in Stag in the ith retrieved document . The retrieved documents are ranked according to the frequency of occurrence of the terms. The rank(i) denotes the rank of the ith retrieved document provided the document is also classified as relevant by the user. α is a parameter that is used to the express the degree of user preference for accuracy of the search results or the total number of documents that are retrieved. The selection operator used in the algorithm is stochastic universal sampling. Here individuals of the population are assigned contiguous segments

646

K.G. Srinivasa et al.

on a straight line based on their fitness values. Let b be the total number of individuals selected, which are placed on equidistant points over a line. The distance between the points is given by 1b . Such a selection scheme has a zero bias and minimum spread, and is found suitable for our algorithm. The recombination operator used is intermediate recombination, where the variable values of the offspring are around and between the variable values of the parents. Geometrically intermediate recombination produces variables with a slightly larger hypercube than that defined by the parents but constrained by the values of ρ. A real valued mutation operation is also applied in the algorithm to explore new regions and make sure that good genetic material is never lost. Consider a representative training set with n documents on which keyword search is to be performed. Let q = {q1 , q2 , ...qm } be a collection of typical user queries where qi represents the ith query and m is the total number of queries. A brief overview of SAMGA [7] is illustrated in Table 2. After the algorithm determines the frequently and the less frequently used tags, the information within the frequently used tags is stored in an index called Most frequently used Index (MFI) and the information within the occasionally used tags is stored in an index called Less frequently used Index (LFI). Table 2. Self Adaptive Migration Model Genetic Algorithms for Learning Tab Information

1. Initialize the population size and the mutation rate for each population. 2. Associate random tag weights with tags in the tag pool τ . This represents individuals of the initial population. 3. for generation = 1: maximum generation limit 4. for each population (a) for each individual select top k tags with highest tag weights. Let Stag = {t1 , t2 , ...tk } represent the selected tags. Evaluate the fitness function using Equation 3. (b) Modify mutation rate using Equation 2. (c) Modify population size according to Equation 1. (d) Select individuals and perform the recombination operation. 5. If the average fitness of the ecosystem fails to change over two successive generations, migrate best individuals between populations.

6

Identification Scheme for Search over XML Documents

The granularity of search over XML documents is not at the document level, but at the node level in the XML document tree. Hence, an identification scheme for the nodes in the document tree is required. This is accomplished by encoding the position of each node in the tree as a data value before storing it in an index.

Indexing XML Documents Using Self Adaptive Genetic Algorithms

647

Given the identification values of the nodes, the scheme must also be able to reconstruct the original XML tree. An identification scheme called Hierarchical Vector for Identification (hvi) is derived. Let x be a node in the XML document tree τ . Then the Hierarchal Vector for Identification of x is given by, hvi(x) = [τ id(x)p(x)sj(p)], Here, τid is the unique identification number assigned to the XML document tree τ , and p(x) is a vector which is recursively defined as, p(x) = [p(parent(x))sj (parent(p(x)))] and sj (p) denotes the j th sibling of the parent p. With this identification scheme, each node captures its absolute position within the whole document. The hvi of a node identifies itself and all its ancestors. The hvi of various nodes in two XML documents are shown in Figure 3(a) and 3(b). It can be observed that if τ1 , τ2 , ...τn represent the XML document trees of the documents with identification numbers (1, 2, 3,...n), where n is the number of documents. Then, {∃xi ∈ {τ1 , τ2 , ...τn } ∧ ∃xj ∈ {τ1 , τ2 , ...τn }, such that xi = xj , hvi(xi ) = hvi(xj )}, that is, there exist no two distinct nodes among all the XML documents in the collection, such that they have the same hvi. The same can be observed from Figure 3(a) and 3(b). Relationship Strength: Let hvi(xi ) and hvi(xj ) represent the hvi of two distinct nodes xi and xj , existing in the XML document tree τ . The length of the longest common prefix(lcp) for both the hvi is denoted as lcp(xi , xj ). Consider two keywords k1 , k2 . The relationship strength between these two keywords, denoted as RS(k1 , k2 ) is defined as, RS(k1 , k2 ) = lcp(xi , xj ), such that xi directly satisfies k1 and xj directly satisfies k2 . The condition that the node should directly satisfy the keyword ensures that only those nodes satisfying the keyword and also having the longest length of their identification vectors (hvi), are selected while evaluating the Relationship Strength(RS). For example, in the document trees in Figure 3(a) and 3(b), the nodes A and B have a common prefix of length two. Thus, they have a RS value of two; similarly nodes A and C have an RS value of one. Whereas, nodes A and D have an RS value zero since they belong to different document trees. Semantic Interconnection: In terms of the XML document tree, two nodes are semantically interconnected if they share a common ancestor and this ancestor is not the root of the document tree. As an illustration, consider the XML document tree in Figure 1. The keywords Vipin and 1979 have a common ancestor, inproceedings(1). Thus, they are semantically interconnected. Whereas the keywords Vipin and 1980 have a common ancestor, dblp(0), which is the root of the document tree. Hence, the two keywords are not semantically connected. Thus, two keywords k1 and k2 are semantically interconnected if and only if, RS(k1 , k2 ) > leveli + 1, where leveli is the first such level in the document tree where the degree of the node is greater than one. For example, in the XML document tree in Figure 3(a), since leveli = 0, RS must be greater than one for the nodes to be semantically relevant. The nodes A and B have an RS value of two and are semantically relevant. Whereas, nodes A and C have an RS value of one, and hence are not semantically relevant.

648

K.G. Srinivasa et al.

0 1 00 A

10

01 B

C

100

000 001

002 010

011

101 D

012 1000

1010 1001 1002 1011

(a)

1012

(b) Fig. 3. Semantic Interconnection

Let q = {k1 , k2 , ...km } be the search query where ki represents the ith term in the query q and m represents the total number of terms in the query. The algorithm to find the semantically interconnected elements is given in Table 3. Table 3. Partioned Index Search

1. if (m = 1) { s = search the MFI with query q }. (a) if (s = NULL) { s = search LFI with query q; search result = s;}. 2. elseif (m > 1) (a) if search with q in MFI is successful i. s = semantically interconnected nodes in the search results. ii. if (s = NULL) No semantically related nodes. iii. else search result = s. (b) else i. continue search with q in LFI. ii. s = semantically related nodes in the search results. iii. if (s = NULL) No semantically related nodes. iv. else search result = s.

The search algorithm first checks the length of the keyword query. If the query consists of a single term, a search over the MFI is performed. If the search is not successful, the algorithm continues search over the LFI. A failure to retrieve results from both MFI and LFI implies that the term is not found. The same technique is extended when searching with queries having more than one term. The only change is that, at each stage the semantic interconnection of the results is checked. Only semantically interconnected nodes are considered as the search results.

Indexing XML Documents Using Self Adaptive Genetic Algorithms

7

649

Performance Studies

The Self Adaptive GA used in SAGAXsearch takes a small number of user queries (10-20 queries) and the documents adjudged as relevant by the user as inputs. The input documents to the GA are XML fragments from the DBLP XML database [9]. The GA tries to explore all possible tag combinations from the DBLP database and tries to find the best tag combination which satisfies the maximum number of queries. The experimental result in Figure 4(a) shows the average fitness for the generations of population. Note that the fluctuations in the curve representing Self Adaptive Migration model GA (SAMGA) is because of the adaptiveness introduced in the migration rate and population size. For SAMGA the average fitness steadily raises until about the fifteenth generation and then the fitness increases slowly. As the generation progresses further, the increment of fitness falls, as most of the individuals have already converged to their best fitness values. In contrast, a Simple GA (SGA) fails to converge even after 20 generations. Thus, the application of SAMGA helps in faster convergence when compared to SGA.

160

Execution Time (In milliseconds)

140

120

100

80

60

40

20

0 1

Partitioned Index Normal Index 2

3

4

Number of Keywords

Fig. 4. (a): Average Fitness of the Populations; (b): Query Execution Times

Table 4, shows the tag weights of the top four tags with the largest tag weights at the end of every five generations. It can be observed that the tags like < author >, < title >, < year >, < booktitle > are given higher tag weights when compared to the other tags. Query execution time using the MFI and the LFI (partitioned index) and the normal index is shown in Figure 4(b). It can be observed that the partitioned index has lesser query execution time and hence is more efficient. Precision and Recall: Precision of the search results is the proportion of the retrieved document fragments that are relevant. Relevance is the proportion of relevant document fragments that are retrieved. For precision and recall, we compare SAGaXsearch with XSEarch [6] and the naive results. Naive results are

650

K.G. Srinivasa et al. Table 4. Top Four Tags and their corresponding Weights Generation No. Tag1 : Weight Tag2 : Weight 1 Month : 7.71 Author : 6.67 5 Author : 7.63 Pages : 7.15 10 Title : 7.72 Year : 6.35 15 Author : 8.1 Year : 7.63 20 Author : 8.03 Year : 7.51

Tag3 : Weight Tag4 : Weight ee : 6.16 url : 4.26 School : 6.74 Cite : 5.23 Cite : 5.92 Book Title : 5.87 Title : 7.19 Pages : 6.53 Title : 6.83 Pages : 6.60

those which satisfy the search query, but are not semantically interconnected. All these techniques yield perfect recall i.e., all relevant documents are retrieved and the precision values vary. This is because of the factor that apart from the relevant results, some irrelevant results are also retrieved. The precision values of SAGaXsearch are found to be higher than those of XSEarch and naive approaches, when compared with the DBLP XML dataset. The small loss in precision occurs when the same keywords are present in both the MFI and LFI and the intention of the user is to retrieve information from the LFI. In such cases, the algorithm has already retrieved the results from the MFI, it will not continue search over the LFI. The possibility of such an event is quite rare, and hence SAGaXsearch manages to exhibit high precision values. The comparision of precision values of these techniques is shown in Figure 5.

Fig. 5. Comparision of Precision Values

8

Conclusions

We have proposed a framework for information retrieval from XML documents that uses tag information to improve the retrieval performance. Self Adaptive Migration Model Genetic Algorithms, which are efficient for search in large problem spaces, are used to learn the significance of the tags. The notations for relationship strength and semantic relationship help in efficient retrieval of semantically interconnected results as well as ranking the search results based on the proximity of the keywords. Experiment on real data show that the SAGaXsearch is accurate and efficient.

Indexing XML Documents Using Self Adaptive Genetic Algorithms

651

References 1. S Brin and L Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, Proc. of Seventh World Wide Web Conference(WWW7), 1998. 2. World Wide Web Consortium XQUERY: A Query Language for XML W3c Working Draft, http://www.w3.org/XML/Query. 3. D Florescu, D Kossmann and I Manolescu, Integrating Keyword Search into XML Query Processing. Intl. Journal of Computer and Telecommunications Networking, Vol. 33, No. 1, June 2000, pp. 119-135. 4. V Hristidis, Y Papakonstantinou and A Balmin, Key-word Proximity Search on XML Graphs, IEEE Conf. on Data Engineering, 2003. 5. L Guo, et.al., XRANK: Ranked Keyword Search over XML Documents, ACM SIGMOD 2003. 6. Cohen S, Mamou J, Kanza Y, Sagiv Y, XSEARCH: A Semantic Search Engine for XML, VLDB 2003, pp. 45-56. 7. Srinivasa K G, Karthik S, P Deepa Shenoy, Venugopal K R and L M Patnaik, A Dynamic Migration Model for Self Adaptive Genetic Algorithms, Proc. of Intl. Conf. on Intelligent Data Analysis and Automated Learning(IDEAL), July 2005, Brisbane, Australia. 8. M Srinivas and L M Patnaik, Genetic Algorithms, A Survey, IEEE Computer, Vol. 27, No. 6, 1994, pp. 17-24. 9. DBLP XML Records, http://acm.org/sigmoid/dblp/dp/index.html, Feb. 2001.

GCC: A Knowledge Management Environment for Research Centers and Universities

Jonice Oliveira1, Jano Moreira de Souza1,2, Rodrigo Miranda1, Sérgio Rodrigues1, Viviane Kawamura1, Rafael Martino2, Carlos Mello2, Diogo Krejci2, Carlos Eduardo Barbosa2, and Luciano Maia2

1 COPPE/UFRJ - Computer Science Department - Graduate School and Research in Engineering – Federal University of Rio de Janeiro
{jonice, jano, mirand, searo, vka}@cos.ufrj.br
2 IM/UFRJ - Computer Science Department - Institute of Mathematics – Federal University of Rio de Janeiro (UFRJ)
PO Box: 68.513, ZIP Code: 21.945-970, Rio de Janeiro, Brazil

Abstract. Research centers and universities are knowledge-intensive institutions, where knowledge creation and distribution are constant – and this knowledge should be managed. In spite of this, scientific work has long been known as solitary work, in which human interaction happened only in small groups within a research domain. Nowadays, due to technological improvements, scientific data from different sources is available, communication between researchers is facilitated, and scientific information is created and exchanged faster than in the past. However, a focus on information exchange alone is too limited to create systems that enable true cooperation and knowledge management in scientific environments. To facilitate a richer exchange, sharing and dissemination of knowledge and its management, we have created a scientific knowledge management environment in which researchers may share their data, experiences, ideas, and process definitions and executions, and obtain all the information necessary to execute their tasks, make decisions, learn and disseminate knowledge.

1 Introduction

People need to interact with other people and to access information in order to create new knowledge, and this interaction and active seeking, managing, assimilating and exchanging of information has pushed information technology in new directions. In Science, the need for information grows exponentially, yet knowledge sharing is still limited. Although we live in the information age, most scientific collaboration relies on face-to-face interaction, paper-centered flows and asynchronous communication, with richer interaction taking place only in small groups. In business, knowledge has long been the force behind millions of strategic and operational decisions; however, the recognition that knowledge is a resource which needs to be managed is relatively recent. Even though universities and research centers are very knowledge-intensive, their decentralized organization, the high complexity of scientific data and information, and peculiar



processes have been obstacles in the move towards more efficient management of scientific knowledge. Other problems occur as a consequence of the low degree of collaboration in the scientific environment, such as ignorance of researchers' competences and the waste of resources through repeated mistakes and the reinvention of already-known, consolidated solutions. Some attempts to improve knowledge dissemination have been made in the scientific scenario, among them online scientific journals, the growing use of the Internet in universities, e-mail and the adoption of collaboration tools. Nevertheless, information technology must lead to a more fundamental change than merely automating and accelerating traditional processes [11]. The capabilities of information technology may thus fundamentally change the way in which scientists work, collaborate and, consequently, create, organize and disseminate their knowledge. Based on these issues, we have developed a web environment whose purpose is to provide the resources needed to enable knowledge management in research organizations. Our approach encompasses personal knowledge management, process management – allowing the reuse of models and the capture of rationale – collaboration tools, and knowledge visualization and navigation. Special attention was given to identifying what knowledge may be present, as well as when, how, and to whom it could be delivered. Competence management, user profiling and knowledge matching techniques are used intensively to filter the amount of information to be provided, and selective dissemination of information was created to automatically deliver important information to communities. Our approach is based on a national ontology of Science and provides mechanisms to enrich it. The remainder of this paper is organized as follows: Section 2 discusses a number of theoretical aspects of scientific knowledge, and Section 3 explains our scenario, which shares common issues with the international scenario. As the objective of this research is to propose an environment that facilitates collaboration in scientific organizations, as well as knowledge sharing, dissemination and creation, our approach is explained in Section 4. Section 5 describes related work and the differences from our approach. Future work and the conclusion are presented in Section 6.

2 Scientific Knowledge The nature of knowledge has been a matter of intense discussion since the beginning of philosophy [12]. Much research has sought to refine the concept of knowledge and to answer questions about its core characteristics. One of the first to define scientific knowledge was Socrates [6], and, for him, knowing a subject or concept consisted of "gathering the components of a singular thing, or of a real substance, and joining the similar ones, and separating the unsimilar ones, to form the concept or the definition of the singular thing". In this way, in order to "join the similar ones" it is necessary for one to have principles, axioms, definitions and demonstrations, for a concept to be defined as true. In other words, scientific knowledge is the knowledge resulting from scientific activities, and its objective is to demonstrate, by argumentation, a proposed solution to a problem, relative to a certain issue [19]. The most common approach to study knowledge definition, therefore, is to treat the concept as undefined and to approximate its meaning by examining the context of its use [12]. As per [12], an



analysis of the use of the term in everyday and scientific language leads to three main interpretations: knowledge-that (objective knowledge), knowledge-how (know-how) and knowledge by acquaintance. But what is the difference between business knowledge and scientific knowledge? The first difference is related to the analysis of the data used and of the knowledge construction process. Frequently, the activities executed in a business domain are well defined and the knowledge needed to execute each of them is well known, whereas scientific activities comprise sequences of attempts, because the domain is not completely known. In other words, scientific knowledge is built gradually according to the results of a number of activities, and it can be subject to constant alteration. Independent of the complexity of the data manipulated, of the information analyzed and of the way in which it is structured, there is another factor in scientific knowledge construction: collaboration. As per [17], collaboration is the essence of science, because when people come together there is the possibility of exchanging knowledge to execute common activities (peer-to-peer collaboration), of disseminating acquired knowledge (Mentor-Student), and researchers from different domains of knowledge who do not share a common background can interact by exchanging results (Interdisciplinary) or simply by publishing the research results achieved (Producer-Consumer). With respect to the level of inter-personal collaboration, collaboration and knowledge flow in scientific environments are usually more restricted and occur among a small number of people working in the same group, dealing with or researching more specific items of their domain. Many researchers do not know about other researchers who are working on topics related to theirs, because they are based in different research centers and the distance hinders regular contact. The use of web-based knowledge management tools is a way to provide better communication and interaction among researchers belonging to the same domain – independent of synchronous communication and physical presence – so that the four types of scientific collaboration proposed by [17] can be applied easily.

3 Our Scenario: Classification, Resource Sharing and Knowledge Loss in Brazilian Scientific Research

Knowledge organization has always been recognized as an area of research and study by professionals of different domains, and Computing Science professionals are currently interested in knowledge organization [8]. LANGRIDGE, in [14], emphasizes the fundamental study of classification in relation to the study of meaning and definition, that is, of semantics. In knowledge organization, especially in the Science and Technology scenario, there have been important contributions such as classification and indexing in Science [20], in the Social Sciences [10] and in the Humanities [13]. There has also been important progress in classification theory, such as Ranganathan's facets [15, 16], concept theory [9, 8] and terminology theory [22, 4], and, more recently, principles of ontology construction have been improving the knowledge organization area in the information technology context [5, 3].



Knowledge representation, in classification structures which permit systematically organizing data from published scientific production and other Science activities, is very important for learning, knowledge dissemination, production management and evaluation. In this context, the classification table of "Knowledge Areas", created by CNPq1, is used by all research centers in Brazil and serves as a support tool. It is a Brazilian attempt at establishing a single classification, an ontology at the national level, on which academic systems and digital libraries can rely to classify and organize their information. This classification has some principal areas, such as Exact and Earth Sciences, Biological Sciences, Engineering, Health Sciences, Agrarian Sciences, Applied Social Sciences, Humanities, and Linguistics, Languages and Arts. Each area has a sub-tree with its concept categorization. The main issue is whether a single classification can cope with the complexity of the universe of Science and represent the diversity of activities involved in correlated areas; satisfying the different interests of institutions in aggregating data, information and knowledge becomes impossible. LANGRIDGE [14] emphasizes that the unit of knowledge is a controversial topic, mainly with respect to the division of knowledge into disciplines. The construction of a table of knowledge areas involves basic aspects of organization and classification. The first classification issue pointed out by LANGRIDGE [14] is that the same objects can be classified in different ways, depending on their purpose; a single representation can therefore describe complex areas of Science in a wrong, incomplete or inappropriate way. Other critical points of the CNPq classification are: i) it is deficient in representing the natural evolution of some areas and how research has grown in the research centers; ii) it does not capture the temporal evolution of knowledge areas, given that a knowledge area can be represented in different places in the classification at the same time; and iii) some areas can appear under different names. Identifying new areas of Science, capturing temporal changes in knowledge areas and reflecting the production of the research centers is therefore a requisite for any effort concerning Brazilian scientific production. Some enterprises and business companies, in partnership with universities and research centers, help with research development, but the most significant and expressive contributions to national scientific research are made by the Government, through national and state agencies. The size and number of universities and research centers in Brazil, and the absence of an efficient approach to identify similar interests and projects among these institutions, lead to several problems: a low degree of collaboration among scientific organizations, and resource waste through the duplication of efforts and the reinvention of already-known, consolidated solutions. Inside a scientific organization there is also the problem of knowledge loss: institutions lose experts and specialized professionals, and no attempt at knowledge transmission is made, because universities and research centers do not know what they know – their strengths (scientific areas with good production) and their weaknesses (scientific areas with insufficient production and few researchers).

1 The National Council for Scientific and Technological Development (CNPq) is a foundation linked to the Ministry of Science and Technology (MCT), to support Brazilian research.



These problems are faced in Brazil, but they are common to several international institutions. Based on this scenario, we developed the GCC. Our approach allows new knowledge areas that are not represented in the CNPq Knowledge Area classification to be identified from e-meeting logs, project definitions, mental maps, publications and other forms of user interaction; as a consequence, the single ontology is improved, incorporating new scientific knowledge areas and monitoring the evolution of organizational knowledge. This work supports knowledge management at both the personal and the organizational level, facilitates knowledge capture in scientific projects, and helps intra- and inter-institutional collaboration, as will be described in the next section.

4 The GCC Architecture

We have envisioned a web-based architecture to enable knowledge management in scientific environments and to increase collaboration between researchers. This environment is called GCC, the acronym of "Gestão de Conhecimento na COPPE2" (Knowledge Management in COPPE). The GCC architecture, shown in Figure 1, is detailed in the following sections. The GCC services and main functionalities are:
− Personal KM Services – manage users' personal knowledge and data, based on the researchers' curricula vitae, weblogs and mental maps.
− Project Management Services – manage scientific project execution, enabling the definition of a process, the reuse of past processes, and the capture of the knowledge acquired in the activities of a process.
− Community Services – allow easy and quick communication, providing tools for synchronous and asynchronous collaboration and for the dissemination of information and knowledge to communities.
− Knowledge Visualization and Navigation Services – display knowledge and its relationships in a more intuitive and visual way than common reports, allowing the user to interact with the information, navigate through it and access it.
− User Profiling and Knowledge Matching Services – identify researchers' interests, profiles and competences. This service provides information to other modules, such as searching for users with similar profiles with whom it might be interesting to establish contact, discovering researchers' competences, suggesting experts to execute a specific activity in a given context, and representing personal interests for more precise selective dissemination of information.
− Knowledge Base – where all kinds of knowledge, such as processes, past experiences, practices, e-meeting logs, exchanged messages, concept definitions, and group and personal characteristics, are stored.
− Collaborative Filtering Service – can streamline research, improve retrieval precision, reduce the amount of time spent looking for significant changes in resources, and even aid in the selection of data, information, people and process definitions.

2 COPPE - Graduate School and Research in Engineering from the Federal University of Rio de Janeiro (UFRJ), Brazil.



− Inference Engine – inference mechanisms that search through the knowledge base and deduce results in an organized manner.
− Analysis Services – our proposal uses three kinds of analysis services: i) reports on researchers' personal information and on process and community status; ii) an OLAP (Online Analytical Processing) structure to provide more consolidated and evolutionary views of the researchers' knowledge acquisition process, the knowledge flow in a community, and the evolution of concepts (new knowledge created, merged concepts, and knowledge that is no longer used); and iii) mechanisms based on Business Intelligence to compare researchers, departments and research centers, revealing opportunities for collaboration.

Fig. 1. GCC Architecture

The GCC is a proactive environment, that is, it is capable of taking the initiative according to the researcher's profile and domain, as well as reacting in response to requests and changes in the environment. In this way, it supplies, at the right time, new and relevant knowledge to help researchers in their tasks. The architecture serves to create an effective collaborative and learning environment for all those involved, providing distributed scientific knowledge in a single, accessible system. In our environment, we use the CNPq ontology and enrich this classification with the knowledge that flows through the GCC, such as personal knowledge, competences, and new knowledge concepts and definitions. The services are discussed in more detail below.

4.1 Personal KM Services

This module is responsible for providing functions for researchers to manage their personal knowledge, as well as information about themselves. It provides services such as:



4.1.1 Curriculum Vitae

The curriculum vitae is one way of keeping information about a person. The GCC enables the importation of curricula from the CV-Lattes system, a CNPq system. When users do not have, or do not want to use, a curriculum in the CV-Lattes system, they can fill in their information in the GCC. This information comprises the name, personal information such as address, e-mail, home page and phone, academic background, professional activities, language skills, scientific production, advising activities and prizes. In addition, users should state in which Knowledge Areas they act and what their competences and degrees of expertise are. Competences are abilities and knowledge areas not represented in the CNPq classification in which the user works and has some fluency. Users can also indicate areas in which they have some interest but are not yet experts.

4.1.2 Personal Blog

Weblogs may be viewed as personal Web pages or "home pages". The term refers to a web site that is a "log of the Web", that is, a record that points to material available on the World Wide Web. Many regard weblogs as electronic diaries, but in the GCC the weblog acts as a tool for personal knowledge management. Weblogs, in general, have the following features:
− Personal editorship – the content of the site is under the responsibility of a single person. In our case, the user is a researcher, a GCC user, and the weblog reflects topics of that individual's profile.
− Hyperlinked post structure – the weblog's contents consist of typically short posts that feature hypertext links referencing material outside the site. The selection of links is entirely up to the editor, who may link anywhere on the web. There is also no prescribed length for a post – some posts simply consist of a single link to content elsewhere, but most often they also include additional information and/or personal commentary on the issue under discussion. In the GCC this may be information about successful or unsuccessful experiments, lesson notes, and other kinds of scientific information.
− A first pass before community creation – an enormous amount of content is published daily on the Web. As it is impossible to read it all, people need means of filtering this output to find the material most relevant to them. A weblog operates in much the same manner: by reading a weblog edited by someone with interests similar to yours, you obtain a view of potentially relevant material. In the GCC, by combining the output of several chosen weblogs, you obtain a tailor-made publication and contact with researchers who share your interests.
− Frequent updates – displayed in reverse chronological order, these can show in the GCC the evolution of a researcher's interest and involvement in a topic, as a chronological record of thoughts, knowledge, references and other notes that could otherwise be lost or disorganized.



Beyond the issues cited above, a weblog is an important tool in personal knowledge management because it embodies several important functions for the researcher who uses it. In the GCC, each researcher can have a weblog, and pieces of text can be made private (only the author sees the description). By means of the "User Profiling and Knowledge Matching Services" tools, we can use the weblogs to automatically identify a researcher's personal interests, competences and cited knowledge areas, and to infer which areas are connected and how they are connected.

4.1.3 Mental Maps

Knowledge representation as mental maps, in which concepts are organized into classes and sub-classes, is a way to structure information. Since every one of us creatively constructs our own maps, they will differ from everyone else's maps: each of us has different perceptions of our needs, different learning styles, and even perceives shared experiences differently. In the GCC, a researcher can construct mental maps to define concepts, help in brainstorming sessions and simplify discussion in a graphical way (Figure 2). Users can mark some concepts as public, so that anyone can see the concept, its relationship with other public concepts, and its definition. In addition to organizing information, mental maps have an important function in learning in the GCC. Our own maps of reality (and not reality itself) determine how we interpret and react to the world around us and give meaning to our experiences and behaviors. No individual map is any more 'true' or 'real' than any other; the wisest and most useful maps are those which make available the richest set of choices, as opposed to being the most 'real' or 'accurate'. This has implications when we are identifying outcomes, planning learning activities and assessing learning: obtaining the most effective result is a consequence of viewing these issues from multiple perspectives. Exchanging maps, analyzing different representations of reality, collaborating, discussing and trying to reach agreement on a definition allow people to discover new definitions, learn and enrich the domain.
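To illustrate the kind of structure such a mental-map service could maintain, the following is a minimal sketch in Python; it is not the GCC implementation, and the class and field names are assumptions made for the example.

# A concept node: organized into classes/sub-classes, with a definition,
# named relationships to other concepts, and a public/private flag.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Concept:
    name: str
    definition: str = ""
    public: bool = False                                      # only public concepts are shown to others
    children: List["Concept"] = field(default_factory=list)   # sub-classes of this concept
    relations: Dict[str, List[str]] = field(default_factory=dict)  # e.g. {"related-to": ["Ontology"]}

def visible_concepts(root: Concept) -> List[str]:
    """Collect the names of all public concepts reachable from the root."""
    found = [root.name] if root.public else []
    for child in root.children:
        found.extend(visible_concepts(child))
    return found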

Fig. 2. Mental Map Editor

Fig. 3. Workflow Editor



4.2 Project Management Services

This module is responsible for providing services to: i) define and execute a project with a workflow tool, ii) store the knowledge created during project execution, and iii) permit knowledge reuse.

4.2.1 Define and Execute a Project

The project manager or responsible researcher creates the project model, with the activity sequence and the requirements for the execution of each activity: the competences necessary for execution, inputs, outputs and tools. The project model is defined in a graphical workflow tool (Figure 3), and process execution is controlled by a workflow engine. Finding a person with a specific competence to execute an activity requires more semantic attention because, if nobody with the desired competence is found, we can recommend people with competences similar to the one needed. This search is a functionality provided by the "User Profiling and Knowledge Matching Services". The service recommends the people who best match the pre-requisite competence of an activity and who can, consequently, perform this activity better.

4.2.2 Knowledge Storage During the Project

During activity execution, the parties involved can attach all kinds of explicit knowledge to the activity, such as reports, documents, comments, suggestions, best practices, mistakes and ideas. All steps in the activity execution can thus be documented, and in the future we can track all kinds of created knowledge and the context in which it appeared, which is important for an inexperienced researcher learning about the process in a domain. The process model itself is a kind of knowledge which should be stored for further access.

4.2.3 Knowledge Reuse

Previous process models and the knowledge described in their activities can be reused, totally or partially, by other users.

4.3 Community Services

One of the main focuses of the GCC is the creation of virtual communities by groups of researchers with a common interest, who can exchange information and work together. This module provides tools to improve the interaction between people in a community, such as:
− Survey – a question is created by the community supervisor and answered by all members. The presentation of the survey is random and the questioner can change the priority of the queries. The application also provides a result with a graph of the inquiries and their response percentages.
− Forum – to support posted messages, a forum service for communities was built for asynchronous communication.
− News – any member of a community can post news about related topics, such as links and external materials that support it.
− Scheduled E-meetings – a member can ask for a private and synchronous e-meeting with another member, as in the case of an inexperienced researcher asking an expert researcher to clarify a doubt. Then, the member invites





another member for an e-meeting. If the invited member agrees, the meeting is confirmed and added to the community schedule.
− Public library – a space to display electronic material and links.
− Log Categorization and Storage – every conversation of an e-meeting is stored and categorized into one or more Knowledge Areas. This is important because a conversation or interview is a kind of explicit knowledge which can be consulted in the future, avoiding further interviews and accelerating knowledge capture.
− Community Evaluation – several metrics are used to monitor the community evaluation and the member interaction. All evaluations are public.
− Events – events such as conferences, workshops, lectures, etc. can be posted here.

4.4 Knowledge Visualization and Navigation Services

The typical visualization in most systems gives rise to a litany of complaints, including unfriendly interfaces, the absence of intuitive search structures, and the requirement that users learn special languages or conventions in order to interact effectively with the online system. Based on these complaints, in addition to the reports, lists and graphical visualizations provided by the GCC, the environment offers two further kinds of knowledge visualization and navigation services: the Hyperbolic Tree and the Conceptual Project Map.

4.4.1 Hyperbolic Tree

This is a hierarchical tree structure mapped onto a hyperbolic display, with two significant qualities: i) the nodes or components of the tree diminish in size the farther away they are from the center of the display, and ii) the number of nodes or components grows exponentially from parent to child. In the GCC, as shown in Figure 4, the hyperbolic tree is used to visualize the CNPq Knowledge Area classification. The user can navigate through the CNPq classification, as shown in Figure 5, and through all information related to a knowledge area, such as projects (in green), competences, people (in blue), communities, etc.

Fig. 4. Hyperbolic Tree of CNPq’s Knowledge Areas



Fig. 5. Navigation in Hyperbolic Tree

4.4.2 Conceptual Project Map

This permits a tree-like visualization of a specific project, as in Figure 6. The user can then see all information related to a project, such as materials, contributions, participants, collaborators, managers, the knowledge areas and competences that are pre-requisites, and the workflow model. The map can be searched, opening the nodes that match.

4.5 User Profiling and Knowledge Matching Services

Fig. 6. Project Conceptual Map

Nowadays, the main problem in a scientific organization is the inability to discover what it knows, what its abilities are, what its researchers and experts know, and in which knowledge areas they work. Identifying researchers' profiles, interests and expertise enhances the chances of collaboration. The GCC provides three kinds of services: i) the S-Miner, which automatically identifies the competences of a researcher from his/her publications; ii) the Competence Search, a semantic search for competences; and iii) the Web Miner, a way of tracking the pages a user accesses when navigating the Internet and identifying topics of his/her interest.

4.5.1 S-Miner

Fundamentally, the S-Miner's function is to mine competences based on researchers' publications. It is used to extract key words derived from these publications, contrast



with the publication histories of others, and suggest, to the text owner, feasible communities rooted in the mapped competences. Initially, the text is submitted to a tokenization algorithm; tokenization consists of identifying the words (tokens). After breaking the text into tokens, the process continues with the elimination of insignificant words – named Stop Words. The collection of Stop Words to be removed from the text is called the Stop List. This catalog of irrelevant words is strongly dependent on the language and the application context – the S-Miner can handle English and Brazilian Portuguese. Once Stop Words are removed, the remaining words are considered filtered and then enter a new selection process. In this phase, weights are created for each word type. We use the Stemming technique, which removes suffixes automatically, to measure the relevance of a term. Ignoring the question of where exactly the words originate, we can say that a document is represented by a vector of words, or terms. Terms with a common stem usually have similar meanings, for example: CONNECT, CONNECTED, CONNECTING, CONNECTION, CONNECTIONS. Then, once the words are available, stemming is applied to count the relevance of each term. Terms are related to a person's competences, and these competences and the degree of expertise (using the relevance measure of a term) are stored in the database. Furthermore, these filtered words, named Relevant Words, are submitted to an association between competences and words. This association relates the competences established in the CNPq Knowledge Tree to the Relevant Words, and suggests that each competence of the CNPq Knowledge Tree can be derived from a set of key words. The relationship is retrieved from a dictionary stored in the GCC. Finally, after the mining, it is possible to inspect the mapped abilities in a report provided by the application.

4.5.2 Competence Search

This is a service which provides a list of people with a specific competence to perform an activity of a process or to lead a community. The searcher architecture is based on the GCC database and on the mining provided by the S-Miner. The competences are searched in the following order of priority:
− Declared competences – the competences which the researcher declares to have in the GCC or in the Curriculum Lattes.
− Project competences – the competences which were pre-requisites of a project; we assume that researchers who worked on a project know about the competences it required.
− Extracted competences – recovered by the S-Miner from the mining of the researcher's published texts.
− Community competences – collected from the communities in which the researcher participates or contributes.
In addition, the Competence Searcher assigns distinct weights to each type of competence found: declared competences – weight 3, project competences – weight 2, community competences – weight 1. The minimal weight was chosen so as not to misrepresent the analysis.
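To make the extraction pipeline of Section 4.5.1 concrete, the following is a minimal sketch of tokenization, stop-word removal, naive suffix stripping and term weighting. It is an illustration only, not the S-Miner implementation; the stop list, the suffix rules and the competence dictionary are assumptions made for the example.

import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "of", "and", "in", "to", "is"}  # tiny illustrative stop list
SUFFIXES = ("ions", "ion", "ing", "ed", "s")                    # naive stemming rules

def stem(word):
    # Remove the first matching suffix, so that CONNECTED, CONNECTING,
    # CONNECTION and CONNECTIONS all reduce to CONNECT.
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def relevant_terms(text):
    """Tokenize, drop stop words, stem, and weight terms by frequency."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter(stem(t) for t in tokens if t not in STOP_WORDS)

# Hypothetical dictionary mapping knowledge-tree competences to key-word stems.
COMPETENCE_KEYWORDS = {
    "Databases": {"database", "index", "xml"},
    "Networks": {"connect", "protocol", "rout"},
}

def map_competences(text):
    """Sum the relevance weights of the key words associated with each competence."""
    weights = relevant_terms(text)
    return {competence: sum(weights[k] for k in keywords)
            for competence, keywords in COMPETENCE_KEYWORDS.items()}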



While the competences above are gathered, the Competence Searcher identifies the level of each kind of competence in the CNPq Knowledge Tree. Having discovered it, it is possible to compute the researcher's relevance with respect to the searched competences and, after this process, to produce a list of researchers who possess the searched abilities. On the other hand, if the Searcher does not find the exact level, that is, if some competence is not directly correlated in the CNPq Knowledge Tree, the Competence Searcher will go up the tree, searching the levels up to the root. In this case there is no accumulation of weights, since this step is only used in the absence of a directly linked competence.

4.5.3 Web Miner

The Web Miner is a way of tracking the pages a user accesses when navigating the Internet and identifying topics of his/her interest by analyzing the web log. The user can rate sites according to their relevance to his/her interests. This information is also inferred from the time spent on sites, the number of accesses, and re-visits. Sites can be recommended to other users with similar interests.

4.6 Collaborative Filter

In this service, documents, people and process models are recommended to a new user based on the stated preferences of other, similar users and communities. We use a Collaborative Filtering Spider which collects, from the Web, lists of semantically related documents and makes them available to communities.

4.7 Inference Engine

Reasoning about profiles is done in the "User Profiling and Knowledge Matching Services". This module is responsible for extending these functionalities to infer processes, cases or solutions, and reusable content.

4.8 Analysis Services

We propose tools for observing knowledge evolution, the evolution of communities, and comparative analysis of researchers, departments and organizations. These kinds of analysis are especially useful to:
− ensure that the intellectual capital of the institution is not associated exclusively with the people who own critical knowledge, but can be distributed among the members of a research team;
− make it possible to identify knowledge areas with a shortage of professionals and then plan a way to acquire this knowledge, through training or the recruitment of external researchers;
− make the regular appraisal of each researcher's knowledge level possible;
− analyze chances of external collaboration;
− analyze the quality of an institution; and
− identify the appearance, death and merging of knowledge areas.
Our proposal uses an OLAP (Online Analytical Processing) structure to provide more consolidated and evolutionary views of the researchers' knowledge acquisition



process and the knowledge flow in a community. Moreover, the GCC uses Business Intelligence techniques for the other issues.
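Returning to the Competence Search service of Section 4.5.2, the following minimal sketch shows one way the priority weights and the climb towards the root of the knowledge tree could be combined. It is an assumed illustration, not the GCC code; the data structures are simplified, and the handling of extracted competences, whose weight is not stated above, is omitted.

# Weights from Section 4.5.2: declared = 3, project = 2, community = 1.
WEIGHTS = {"declared": 3, "project": 2, "community": 1}

def score_researcher(target, competences, parent_of):
    """competences: {competence: [source types]}; parent_of: child -> parent in the knowledge tree."""
    if target in competences:                      # exact level found: weights accumulate across sources
        return sum(WEIGHTS.get(src, 0) for src in competences[target])
    node = parent_of.get(target)                   # otherwise climb towards the root
    while node:
        if node in competences:                    # matched at a higher level: no accumulation of weights
            return max(WEIGHTS.get(src, 0) for src in competences[node])
        node = parent_of.get(node)
    return 0

def rank_researchers(target, researchers, parent_of):
    """Return researchers ordered by relevance to the searched competence."""
    scores = {name: score_researcher(target, comps, parent_of)
              for name, comps in researchers.items()}
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)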

5 Related Work

Some work whose purpose is to improve knowledge sharing and collaboration among researchers has been produced, although with a different focus, as in:
− The Pacific Northwest National Laboratory (PNNL) [7] is developing and integrating an environment of collaborative tools, the CORE (Collaborative Research Environment) [17].
− CLARE [21] is a CSCL (Computer-Supported Cooperative Learning) environment which aims at scientific knowledge creation through cooperative learning. For this, CLARE uses a semiformal representation language called RESRA and a process model called SECAI.
− The ScienceDesk Project [18] is developed by NASA and its purpose is the development of a computational infrastructure and a suite of collaborative tools to support the day-to-day activities of science teams.
Moreover, projects and methodologies have been created to support the construction of Knowledge Management Systems, such as ISKM ("Integrated Systems for Knowledge Management") [1, 2], an approach aimed at community creation, managing the necessary knowledge in a learning environment, and decision-making applied to the natural resource management domain. Our approach differs significantly from those mentioned above because major attention is given to competence management, collaborative filtering, knowledge navigation, user profiling and knowledge-matching strategies, and personal knowledge management as well. Special attention is given to competence management because experts and their knowledge are the most important assets in a scientific institution. Moreover, the GCC makes it possible for researchers to collaborate and interact with one another, facilitating communication between people within the same research area, gathering in one environment the different perspectives and expertise present in the organization, enabling the formation and recognition of groups with common interests, diminishing the amount of time spent coordinating team work, and expediting project problem-solving processes. As explained before, the GCC also proposes an analysis structure providing evolutionary visualization of the knowledge build-up and the matching of common interests among departments and organizations to allow for possible collaboration. None of these related works attempts to automatically identify new knowledge areas and correlate them.

6 Conclusion and Future Work

Our work is a scientific knowledge management environment which aims to encourage knowledge dissemination, expert location and collaboration. We also envision knowledge reuse and the detection of new areas of knowledge in Science, thereby enriching the Science ontology used in Brazil.



One way of identifying new knowledge areas in Science is to monitor current scientific activities, such as publications, projects, groups and individual researchers. Based on this, in addition to functionalities aimed at scientific work, we automatically identify knowledge areas in order to: i) improve communication among people, ii) identify people to execute scientific activities, iii) know what the institution knows, iv) reuse knowledge, and v) update the national Science ontology and detect newly emerging knowledge areas. Currently, the GCC is a centralized environment; as future work, we envision extending it to a distributed scenario. Moreover, we will measure how much the GCC helps researchers to manage their personal knowledge, to disseminate it, and to learn and collaborate. The approach is currently under evaluation and is being used by the Database Group of COPPE. Our future work is to extend it and make it available to all public universities and research centers in Brazil.

References

1. ALLEN, W., 2003, "ISKM (Integrated Systems for Knowledge Management)". In: http://nrm.massey.ac.nz/changelinks/iskm.html, Accessed in 21/05/2002.
2. ALLEN, W., KILVINGTON, M., 2003, "ISKM (Integrated Systems for Knowledge Management): An outline of a participatory approach to environmental research and developement initiatives". In: http://www.landcareresearch.co.nz/sal/iskm.html, Accessed in 21/05/2002.
3. BIOLCHINI, J. C. de A. Da Organização do Conhecimento à Inteligência: o desenvolvimento de ontologias como suporte à decisão médica. Ph.D. Thesis. Rio de Janeiro, 2003.
4. BRANSFORD, J. D., BROWN, A. L. and COCKING, R. How people learn: Brain, mind, experience and school. Washington, D.C.: National Academy Press, p. 10, 1999.
5. CAMPOS, M. L. de A. A organização de unidades de conhecimento em hiperdocumentos: o modelo conceitual como um espaço comunicacional para a realização da autoria. Ph.D. Thesis. Rio de Janeiro, 2001. 187p.
6. CHAUI, M. Introdução à história da filosofia: dos pré-socráticos a Aristóteles. 2a. edition, São Paulo, Brazil, 2002 – in Portuguese.
7. CHIN, G., LEUNG, L. R., SCHUCHARDT, K., et al., 2002, "New Paradigms in Problem Solving Environments for Scientific Computing", Proceedings of IUI'02 – Association for Computing Machinery (ACM), San Francisco, California, USA.
8. DAHLBERG, I. Faceted classification and terminology. In: TKE'93, TERMINOLOGY AND KNOWLEDGE ENGENEERING, Cologne, Aug. 25-27, 1993. Proceedings. Frankfurt, Indeks Verlag, 1993. pp. 225-234.
9. DAHLBERG, I. Ontical structures and universal classification. Bangalore, Sarada Ranganathan Endowment, 1978. 64 p.
10. FOSKETT, D. J. Classification and indexing in the social sciences. London, Butterworths, 1963. 190p.
11. HAMMER, M. and CHAMPY, J. Reengineering the Corporation. Harper Business, 1994.
12. HARS, A. From publishing to knowledge networks: reinventing online knowledge infrastructures. Berlin; New York: Springer, 2003.
13. LANGRIDGE, D. Classification and indexing in the humanities. London, Butterworths, 1976. 143p.



14. LANGRIDGE, D. W. Classificação: abordagem para estudantes de biblioteconomia. Tradução de Rosali P. Fernandez. Rio de Janeiro, Interciência, 1977.
15. RANGANATHAN, S. R. Prolegomena to library classification. Bombay, Asia Publishing House, 1967. 640 p.
16. RANGANATHAN, S. R. Philosophy of library classification. New Delhi, Ejnar Munksgaard, 1951.
17. SCHUR, A., KEATING, K. A., et al. Experiment-Oriented Scientific Collaborative Suites for Experiment-Oriented Scientific Research, ACM Interactions, Vol. 5, Issue 3, pp. 40-47, May/June 1998.
18. SCIENCEDESK, 2002, "ScienceDesk Project Overview". In: http://ic.arc.nasa.gov/publications/pdf/2000-0199.pdf, Accessed in 05/2002.
19. SEVERINO, A. J. Metodologia do Trabalho Científico. 22a. edition. Rio de Janeiro, Brazil, 2002 – in Portuguese.
20. VICKERY, B. C. Classification and indexing in science. London, Butterworths, 1975.
21. WAN, Dadong and JOHNSON, Philip M., 1994, "Computer Supported Collaborative Learning Using CLARE: The Approach and Experimental Findings", ACM Computing Surveys, pp. 187-198.
22. WÜESTER, E. L'étude scientifique générale de la terminologie, zone frontalière entre la linguistique, la logique, l'ontologie, l'informatique et les sciences des choses. In: RONDEAU, G., FELDER, F. (org.). Textes choisis de terminologie I. Fondéments théoriques de la terminologie. Quebéc, GIRSTERM, 1981. pp. 57-114.

Towards More Personalized Web: Extraction and Integration of Dynamic Content from the Web

Marek Kowalkiewicz1, Maria E. Orlowska2, Tomasz Kaczmarek1, and Witold Abramowicz1

1 Department of Management Information Systems, Poznan University of Economics, Poznan, Poland
{w.abramowicz, t.kaczmarek, m.kowalkiewicz}@kie.ae.poznan.pl
2 School of Information Technology and Electrical Engineering, The University of Queensland, QLD 4072, Australia
[email protected]

Abstract. Information and content integration are believed to be a possible solution to the problem of information overload on the Internet. This article is an overview of a simple solution for the integration of information and content on the Web. Previous approaches to content extraction and integration are discussed, followed by the introduction of a novel technique, based on XML processing, to deal with these problems. The article includes lessons learned from solving the issues of changing webpage layouts, incompatibility with HTML standards, and multiplicity of the results returned. The method, which adopts relative XPath queries over the DOM tree, proves to be more robust than previous approaches to Web information integration. Furthermore, the prototype implementation demonstrates a simplicity that enables non-professional users to easily adopt this approach in their day-to-day information management routines.

1 Introduction

The Internet has become one of the biggest stores of information in the world. More than 92% of all newly created information is stored and accessible via electronic media, the Internet being the most prominent one [1]. At the same time we come across a great paradox of the Internet – increased Internet usage is leading to decreased psychological well-being and social involvement [2]. One of the causes indicated by researchers is information overload [3].

1.1 Vision

Many attempts have been made to enhance the user experience of the Internet, especially of the World Wide Web. These include information retrieval and filtering methods, which provide users with sophisticated yet easy-to-use search engines and with less popular personal information filtering systems, and information extraction systems, which are robust but focus on atomic information and are hardly understandable by users without a degree in computer science. Other technologies, such as RSS (RDF Site Summary), which try to utilize the content of the World Wide Web, have also been proposed.



The goal of the mentioned technologies is to provide Internet users with information of interest to them in a very compact form, without cluttering the view. The problem with technologies like RSS, however, is the requirement to invest additional effort in preparing Web content. Additionally, RSS feeds are slowly evolving from streams of compact information to streams of URLs pointing to traditional webpages, without any additional information (such as abstracts of the pages). The challenge for the current World Wide Web is to provide users with a system that limits the information presented to only the relevant items, with ease of use similar to that of search engines and the robustness of information extraction systems, but delivering more complex information (content), similar to that originally provided by RSS feeds – a system which does not require any additional work in terms of annotating Web content or using new protocols, but simply uses the existing Web.

1.2 Research Challenges

The main research challenge described in this paper is to answer the question of whether it is possible to build a content extraction and integration system that is at least as robust as similar information extraction methods and at the same time easy to use. The latter challenge – ease of use – is met when the system does not require users to be familiar with any programming languages or to be able to create and modify extraction rules, as is expected in other systems. The former challenge – robustness – can be measured by the success ratio of the extraction procedure and compared to other existing extraction methods, which is discussed in the paper.

1.3 Contribution

The first contribution of this work is in demonstrating the feasibility of creating an easy-to-use system for building user content extraction expressions and integrating the extracted content. At first sight, building such a system should be straightforward; however, there are a number of difficulties, which are discussed in this paper. A prototypical system addressing these difficulties has been implemented. It is based on technologies used in information retrieval and information extraction, drawing on technologies for XML and HTML document processing. It is also inspired by work on webpage clipping and positioning. The system fulfills the following tasks: it allows users to point to interesting content on webpages, which should afterwards be presented to them, ignoring all other content of the pages; it also allows users to construct a personal portal (hence the system name – myPortal) out of selected content from different sources. All of this is executed with high robustness in mind, so that whenever the content on selected pages changes, the user still gets the interesting content, and whenever other radical changes (for instance to the structure of the document) are introduced, the system maintains a high effectiveness of content extraction. The second contribution is a preliminary evaluation of the robustness of the proposed extraction and integration method in comparison with another state-of-the-art and widely adopted extraction technique. We are currently continuing the evaluation, the complete results of which will be published in later stages of the research.



1.4 Structure

The structure of the paper is as follows. The second section discusses the state of the art in the concepts addressed in this work, namely content extraction and integration. Section three discusses the concept and describes the implementation work, including requirements, an outline of the process, and problems encountered. Section four presents some examples of using the new method. In section five we discuss potential future directions of the work. The paper concludes with sections six and seven, where the contribution, possible usage scenarios of the method, and information about the supporter of the research are briefly presented.

2 Related Work

Several attempts have been made to improve the user experience of browsing the Web by limiting the displayed content to only the most relevant parts. Information retrieval systems are helpful in finding sources of relevant information. After accessing the source document, it may be processed using information extraction methods to obtain very specific information, or using Web page clipping tools to limit the displayed content. There is very little research work on integrating (or aggregating) extracted content, perhaps because it is perceived as a simple task. The most challenging problem nowadays is to provide users with robust tools for content extraction which prove usable in everyday Internet usage.

2.1 Content Extraction

Content extraction is understood as extracting complex, semantically and visually distinguishable information, such as paragraphs or whole articles, from the Web. It borrows from information extraction methods used in the World Wide Web environment, and especially from Web data extraction methods. The most comprehensive survey of Web data extraction tools has been provided by Laender et al. [4]; there are, however, other works also relevant to our study. The WebViews system [5] is a GUI system that allows users to record a sequence of navigation steps and point to interesting data in order to build a wrapper. The user is able to point to interesting data; however, it is not clear how the query over the document's data is generated, and the system is limited to extracting data from tables. IEPAD [6] is a system used to automatically extract repetitive subsequences of pages (such as search results). It is interesting in the context of wrapper generation and content extraction. IEPAD uses PAT trees to identify repetitive substructures and is sensitive to specific types of changes in subsequent substructures (for instance changed attributes of HTML tags, or additional symbols between tags). Annotea [7], on the other hand, is a system designed not for content extraction but for content annotation. The work provides a description of an approach to addressing specific parts of HTML documents. The authors present the method on XML documents, implicitly assuming that the conversion from HTML to an XML representation has been done. As the authors themselves point out, the method is very sensitive to changes in the document, which makes it



usable only for addressing content of static documents. eShopMonitor [8] is a complex system providing tools for monitoring the content of Web sites. It consists of three components: a crawling system, which retrieves interesting webpages; a miner, allowing users to point to interesting data and then extracting the data; and a reporting system, which executes queries on the extracted data and then provides the user with consolidated results. The miner uses XPath expressions to represent interesting data. ANDES (A Nifty Data Extraction System) [9] extracts structured data using XHTML and XSLT technologies. The author of this system decomposes the extraction problem into the following sub-problems: website navigation, data extraction, hyperlink synthesis, structure synthesis, data mapping, and data integration. WysiWyg Web Wrapper Factory (W4F) [10] is a set of tools for automatic wrapper generation. It provides tools for generating retrieval rules and a declarative language for building extraction rules. W4F uses a proprietary language, making it hard to integrate with other extraction systems. WebL [11] is a data extraction language. It is possible to represent complex queries (such as recursive paths and regular expressions) with it; however, the language provides very limited means to address XML documents, in particular it supports neither XSLT templates nor XPath expressions. Chen, Ma, and Zhang [12] propose a system that clips and positions webpages in order to display them properly on small form factor devices. They use heuristics to identify potentially interesting content. Their clipping method, according to a set of 10,000 analyzed HTML pages, behaves perfectly (no problems in page analysis and splitting) in around 55% of documents. Of the remaining 45%, some 35% of documents cause problems in page splitting, and the final 10% generate errors in both page analysis and splitting. Other possibly interesting systems include WIDL [13], Ariadne [14], Garlic [15], TSIMMIS [16], XWRAP [17], and Informia [18]. It is important to note that none of the mentioned systems was designed explicitly to extract previously defined content from dynamically changing webpages.

2.2 Content Integration

Content integration is a concept that has not been very well researched so far; there are very few scientific publications regarding it. One of the most significant of the few is the research work of Stonebraker and Hellerstein [19]. They define the term content integration to refer to the integration of operational information across enterprises [19]. The authors understand content as semi-structured and unstructured information, and content integration deals with sharing such information. Stonebraker and Hellerstein describe the Cohera Content Integration System, which supports content integration and consists of the following components: Cohera Connect, providing browsing and DOM-based wrapper generation functionality; Cohera Workbench, for content syndication; and Cohera Integrate, for distributed query processing. There are currently no publicly known approaches that allow building so-called "intelligent portals". Simon and Shaffer [20] enumerate nine providers of integration services; however, only two of them are actually providing services, the others are not operating. One provider gives access to information integrated from a limited number of sources, the other provides integration of personal financial data



(such as bank accounts). A careful examination reveals that both of them use extraction rules tailored to specific content providers. This approach limits the applicability of the proposed tools: the users cannot actually build personal portals, they can only choose from information blocks prepared in advance.

3 Concept and Implementation

In order to reach the goal described in the introductory section (high relevance of information, ease of use, robustness, provision of content (as opposed to information) similar to RSS feeds, and no additional work required on the server side), we have divided the goal into three problems: navigation, content extraction, and content integration. Although the three problems were addressed independently, the implemented solutions to them constitute one consistent system, named myPortal.

Fig. 1. Navigation interface in myPortal with highlighted and selected “Home Search Results” area

3.1 Navigation

The system should allow users to navigate to the content of their interest in the simplest possible manner. Therefore, it has been implemented as a standard Web

Towards More Personalized Web: Extraction and Integration of Dynamic Content

673

browser. The first difficulty is maintaining the context of user navigation. During navigation some information about its context is stored on the client machine, allowing for example for basket filling in e-shops, browsing search query result, or other tasks that require some data persistence between browser calls to the remote website. As opposed to WebViews [5], myPortal doesn’t record complete sequence of navigation to a desired webpage. However, it stores all relevant information in the context of navigation (client side session or other cookies, HTTP request headers information, and HTTP variables with corresponding values – POST and GET data). Therefore when navigating for instance to an e-store, all personalization information is used in myPortal as well (no need to log in to the store etc.). While navigating, the users have an option to formulate their information needs (in a form of queries). Some of the systems described in Section 2 require usage of complex languages, providing only limited support for the user. In our system this has been solved by a simple point-and-click interface. Move of the mouse pointer over a document can be reflected on the respective nodes in the document DOM tree. This information can be used further to highlight content selected by the user by drawing a border over it (or highlighting selection in any other way). During myPortal configuration phase, a user is required to navigate to relevant webpages and point and click at the interesting content. This results in automatic generation of a query to the selected element. 3.2 Content Extraction Content extraction process uses queries generated during the navigation phase. The information about context is used to accurately access the webpage contents. In the prototypical implementation we used existing XML technologies to extract relevant content. User queries are translated into XPath expressions over the document tree. The source HTML document is converted to the XML form. Evaluating XPath expression on such prepared document returns a part of the document tree (or a set of elements), the desired content blocks. This approach poses some difficulties: 1. Numerous HTML documents are inconsistent with HTML specification so that the conversion to XML is not straightforward and it is hard to construct a proper DOM tree from them. Luckily there are tools for cleaning not well-formed documents (we adopted HTML Tidy for this task). 2. The dynamic structure of webpages, that are generated with scripts, allows assuming that the document will maintain its conformance with HTML standards over time and invariable structure. This is however not always the case. During our research we encountered examples of websites that did not hold this condition. This may be a result of either configuration errors in the cluster of Web servers serving the website, or a countermeasure against content theft. In the former case HTML cleaning usually solves the problem. The latter is dealt with relative XPath queries that we use for content extraction. 3. The webpage structure may change over time as a result of new layout or redesign. In case of dramatic change and renaming of content sections, all pattern based content methods fail (which is also the case in our system). Only methods based on
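To make the extraction step above concrete, the following sketch shows how a stored query could be evaluated against a live page. It is not the myPortal implementation: lxml's lenient HTML parser is used here in place of the HTML Tidy cleaning step mentioned above, and the URL and XPath expression are purely illustrative.

```python
# Illustrative sketch only (not the myPortal implementation): it shows the
# extraction step described in Section 3.2 under the assumption that a user
# query has already been turned into an XPath expression. lxml's lenient HTML
# parser stands in for the HTML Tidy cleaning step used by the authors.
from lxml import html
import urllib.request

def extract_blocks(url, xpath_query):
    """Fetch a page, build a DOM tree from possibly malformed HTML,
    and return the content blocks matched by the stored XPath query."""
    raw = urllib.request.urlopen(url).read()
    tree = html.fromstring(raw)          # tolerant of non-well-formed HTML
    return tree.xpath(xpath_query)       # list of matched elements (may be empty)

# Example: a relative query anchored at a reference element rather than the root.
# The URL and the table id used here are hypothetical.
blocks = extract_blocks(
    "http://www.example.com/results",
    "//table[@id='results']//tr",
)
for block in blocks:
    print(html.tostring(block, pretty_print=True).decode()[:200])
```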


3.3 Content Integration

One of the requirements of the system is to enable users to construct a portal matching their needs out of the extracted content. The overall process is depicted in Figure 2. At first, users navigate to the pages of their interest (1). Then, interesting content is selected, which leads to query formulation (2). After that, the users configure their portal, specifying its layout (3), and later they assign previously prepared queries to specific areas in the portal (4). After that, the content extraction and integration may be executed, which results in filling the empty portal with content. Currently we encounter two shortcomings of the method; they should, however, not be treated as research challenges. First, myPortal doesn't yet support cascading style sheets (CSS). Due to their distribution (in different files, in HTML document headers, in tag attributes) and style inheritance, it is not straightforward to collect all formatting information. However, this should be regarded as a programming task, not requiring sophisticated research. The second problem is related to the portal layout, which currently uses HTML tables. It may happen that one of the extracted content blocks includes some large elements (for instance very wide horizontal banners), which will distort the layout (cells with large elements will dominate the portal). One solution to that is using IFRAMES with content inside cells. However, this may in turn require users to scroll content inside cells, and may as a result be considered a shortcoming as well. Figure 3 shows a sample extraction and integration result in myPortal. Two different real estate websites were queried, and the website query results have been used to formulate myPortal queries. All content around the results (most of the content on both pages) has been discarded when defining the queries. The figure shows a portal integrating results from two websites.



Fig. 2. Phases of interaction with myPortal

Fig. 3. Content extracted from two real estate websites and integrated in myPortal. Notice that only relevant information is presented. The views are always up-to-date.

4 myPortal Experience

The application areas of content extraction and integration technologies are numerous. They include news integration from multiple sources, in order to be able to compare different sources. Accessibility applications (for instance, speech-enabled browsing) may benefit from getting rid of the clutter surrounding relevant content. Integrating search results, weather, financial data, and other information is enabled and very easy to accomplish using the myPortal application. Content extraction


technologies could ease the maintenance of corporate Web portals that currently require tedious and costly work to maintain numerous views for separate user groups, or demand heavy personalization software. Another possible application is content clipping for viewing on small form factor devices. The sheer number of solutions that aim at information integration, such as RSS feeds, the adoption of ontologies and content annotation, and even Web services (which were initially applied for this purpose), convinces us to continue research on this topic. What is really necessary is a simple solution, since there is already a wealth of approaches that did not receive the attention they deserved, probably due to their complexity.

5 Robustness Evaluation – Preliminary Results

We define extraction robustness as immunity to document structure changes and the ability to extract the desired content with the same query evaluated against subsequent versions of the same dynamic webpage. Robustness can be measured as the average ratio of successful content extractions to the number of page versions.


Fig. 4. Preliminary comparison of the robustness of the two methods. The analysis included main portal pages from 2004, with query definitions constructed at the beginning of the year.

We conducted preliminary research on three selected main Web portal pages. The queries were constructed for the first occurrence of each of these pages in the year 2004. They were subsequently evaluated against all the remaining occurrences of each page for the whole of 2004. We used two methods of extraction. The first one, traditional and widely adopted, defines a query as an XPath expression over the DOM tree, spanning from the tree root element to the element to be extracted.


The second one, the proposed relative addressing method, also uses an XPath expression, but one which is anchored at a user-selected reference element rather than at the DOM root node. These preliminary examinations, made on several hundred historical webpages, show that the myPortal method is robust for 99.68% of the analyzed webpages, while the most commonly used XPath with absolute addressing shows a robustness of 73.16%. The average values were weighted according to the number of tested pages for each website. Due to the very small sample presented here, we expect that the results may change (worsen). However, the current state of the analysis (after analyzing several thousand pages) confirms the superiority of the relative XPath method in Web content extraction. The increased robustness of the relative XPath method is achieved through its immunity to 'small' document structure changes (like advertisement introduction or content repositioning) that are likely to occur on Web portals.
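The robustness measure described above can be expressed in a few lines of code. The sketch below is an illustration of the evaluation procedure, not the authors' actual test harness: it simply counts, over a list of archived versions of a page, how often an absolute query and a relative query each still return a result. The file names and both queries are hypothetical.

```python
# Minimal sketch of the robustness measure: the fraction of page versions for
# which a query still extracts something. Not the authors' evaluation code.
from lxml import html

def robustness(page_versions, xpath_query):
    """page_versions: list of HTML strings (snapshots of the same page over time)."""
    if not page_versions:
        return 0.0
    hits = sum(1 for src in page_versions
               if html.fromstring(src).xpath(xpath_query))
    return hits / len(page_versions)

# Hypothetical queries for the same content block:
absolute_q = "/html/body/table[2]/tr[3]/td[1]/div"           # breaks on small layout changes
relative_q = "//div[@id='news-box']//div[@class='headline']"  # anchored at a local reference

snapshots = [open(f"snapshot_{i}.html", encoding="utf-8").read() for i in range(100)]
print("absolute:", robustness(snapshots, absolute_q))
print("relative:", robustness(snapshots, relative_q))
```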

6 Future Work

Currently we are working on comparing the technologies used in myPortal to other popular ones (absolute XPath expressions, wrappers based on rules or rigid HTML expressions). myPortal uses a modified method of extracting data, and preliminary research results show that the method is more robust than the other, currently used ones. Since there are no standard ways of assessing the efficiency of content extraction (as there are, for example, in the Message Understanding Conferences for information extraction or the Text REtrieval Conference (TREC) for information retrieval), the preparation phase for empirical evaluation is very demanding. We are currently collecting Web documents to perform more extensive robustness tests, which hopefully will confirm the preliminary results. Other directions for future work include time-enabled content extraction and integration, providing different portal layouts and content at different moments (for example, news in the morning, financial data in the afternoon), and suggesting potential areas for extraction on portals by analyzing the structure and content of past documents.

7 Conclusions

We have presented an approach to building an easy-to-use system for robust extraction and integration of Web content. The myPortal application fulfills the requirements stated in the first section of the paper, showing that such an application may be provided to users as a way of enabling current Web content to be used in a manner similar to that envisioned in the Semantic Web concept. The contribution of the work lies in presenting a system that enables new methods of browsing the Internet, clipping webpages to improve the relevance of browsed content and to improve the user experience on mobile, small-screen devices. We have also shown ways of integrating content that may facilitate business use of the Internet.


Preliminary research results indicate that the system is more robust than current approaches in Web data extraction.

Acknowledgements

This research project has been supported by a Marie Curie Transfer of Knowledge Fellowship of the European Community's Sixth Framework Programme under contract number MTKD-CT-2004-509766 (enIRaF).

References 1. Lyman, P., et al., How Much Information 2003? 2003, School of Information Management and Systems, the University of California at Berkeley. 2. Diener, E., et al., Subjective well-being: Three decades of progress. Psychological Bulletin, 1999. 125: p. 276-302. 3. Waddington, P., Dying for Information? A report on the effects of information overload in the UK and worldwide. 1997, Reuters. 4. Laender, A.H.F., et al., A brief survey of web data extraction tools. ACM SIGMOD Record, 2002. 31(2): p. 84-93. 5. Freire, J., B. Kumar, D. Lieuwen, WebViews: Accessing Personalized Web Content and Services, in Proceedings of the 10th international conference on World Wide Web, V.Y. Shen, et al., Editors. 2001, ACM Press New York: Hong Kong. p. 576-586. 6. Chang, C.-H., S.-C. Lui, IEPAD: Information Extraction based on Pattern Discovery, in Proceedings of the 10th international conference on World Wide Web, V.Y. Shen, et al., Editors. 2001, ACM Press New York: Hong-Kong. p. 681-688. 7. Kahan, J., et al., Annotea: An Open RDF Infrastructure for Shared Web Annotations, in Proceedings of the 10th international conference on World Wide Web, V.Y. Shen, et al., Editors. 2001, ACM Press New York: Hong-Kong. p. 623-632. 8. Agrawal, N., et al., The eShopmonitor: A comprehensive data extraction tool for monitoring Web sites. IBM Journal of Research and Development, 2004. 48(5/6): p. 679692. 9. Myllymaki, J., Effective Web Data Extraction with Standard XML Technologies, in Proceedings of the 10th international conference on World Wide Web, V.Y. Shen, et al., Editors. 2001, ACM Press: New York, NY, USA. p. 689-696. 10. Sahuguet, A., F. Azavant, WysiWyg Web Wrapper Factory (W4F), in Proceedings of the 8th International World Wide Web Conference, A. Mendelzon, Editor. 2000, Elsevier Science: Toronto. 11. Kistler, T., H. Marais, WebL - A Programming Language for the Web, in Proceedings of the 7th International World Wide Web Conference. 1998: Brisbane, Australia. 12. Chen, Y., W.-Y. Ma, H.-J. Zhang, Detecting Web Page Structure for Adaptive Viewing on Small Form Factor Devices, in Proceedings of the 12th International World Wide Web Conference. 2003, ACM Press New York: Budapest, Hungary. p. 225-233. 13. Allen, C., WIDL: Application Integration with XML. World Wide Web Journal, 1997. 2(4). 14. Knoblock, C.A., et al., Modeling Web Sources for Information Integration, in Proc. Fifteenth National Conference on Artificial Intelligence. 1998.

Towards More Personalized Web: Extraction and Integration of Dynamic Content

679

15. Roth, M.T., P. Schwarz, Don’t Scrap It, Wrap It! A Wrapper Architecture for Legacy Data Sources, in Proceedings of the 23rd VLDB Conference. 1997: Athens, Greece. p. 266-275. 16. Ullman, J., et al., The TSIMMIS Project: Integration of Heterogeneous Information Sources, in 16th Meeting of the Information Processing Society of Japan. 1994. 17. Liu, L., C. Pu, W. Han, XWRAP: An XML-enabled Wrapper Construction System for Web Information Sources, in Proc. International Conference on Data Engineering (ICDE). 2000: San Diego, California. 18. Barja, M.L., et al., Informia: A Mediator for Integrated Access to Heterogeneous Information Sources, in Proceedings of the 1998 ACM CIKM International Conference on Information and Knowledge Management, G. Gardarin, et al., Editors. 1998, ACM Press: Bethesda, Maryland, USA. p. 234-241. 19. Stonebraker, M., J.M. Hellerstein, Content Integration for EBusiness. ACM SIGMOD Record, 2001. 30(2): p. 552-560. 20. Simon, A.R., S.L. Shaffer, Data Warehousing and Business Intelligence for E-Commerce. 2001: Morgan Kaufmann. 312.

Supporting Relative Workflows with Web Services Xiaohui Zhao and Chengfei Liu Faculty of Information & Communication Technologies, Swinburne University of Technology, PO Box 218, Hawthorn, Melbourne, VIC 3122, Australia {xzhao, cliu}@it.swin.edu.au

Abstract. Web service technology has found its application in workflow management for supporting loosely coupled inter-organisational business processes. To effectively enable inter-organisational business collaboration, it is necessary to treat participating organisations as autonomous entities. For this purpose, we proposed an organisation centred framework called relative workflows. In this paper, we study how relative workflow processes can be realised in the Web service environment. WS-BPEL is extended for relative workflow modelling. In facilitating relative workflows, Web service based WfMS architecture and its main components are proposed and discussed in this paper.

1 Introduction

With the trend towards global business collaboration, there is an urgent requirement to integrate and automate business processes across organisations [1]. To support such inter-organisational business collaboration, research efforts have been put into developing new methods to improve current workflow management and infrastructure technologies [2-5]. Unlike traditional workflow management, an inter-organisational business process involves more than one participating organisation, and these organisations are loosely coupled with each other. As an autonomous business entity, a participating organisation shall be able to keep control of its own business as well as control of its participation in inter-organisational collaboration. In addition, it is also required to prevent the private information of participating organisations from being disclosed during the interaction. These requirements can hardly be satisfied by those inter-organisational workflow management approaches in which certain levels of detail of local workflow processes, and of how these local workflow processes participate in inter-organisational collaboration, are made public to all other organisations [2, 3]. Some previous research, such as the workflow view model [6] and the process view model [7], provided support for local autonomy and business privacy at a certain level. To fully support autonomous organisations and business privacy, we proposed an organisation centred approach called relative workflow [5]. Recently, we saw that Web services provide a promising solution to inter-organisational workflow applications. Compared with the traditional Client/Server architecture, Web services behave as independent entities, each having a distinct responsibility within the system and each specifying its own internal behaviour.


Since Web services have no dependency on each other, we can use them to connect technically diverse systems for the purpose of utilising functionalities that may already exist either within or outside an organisation. The current basic layer of Web services mainly concentrates on the behaviours between single Web services. More methods are required to coordinate multiple Web services into a reliable and dependable business solution supporting an appropriate level of complexity. In this paper, we apply Web service technologies to relative workflows for modelling and implementing inter-organisational business processes. With regard to the support of business collaboration, Web service technology and the relative workflow approach complement each other. On one side, Web service technology provides relative workflows with the required support for loose coupling at the infrastructural level. On the other side, the process management mechanism of relative workflows effectively coordinates the functioning of Web services to achieve the required business objectives. The use of Web service technology in our relative workflow approach demonstrates a feasible way for organisations to adopt Web service technology in the field of inter-organisational business process management. The specific contributions of this paper are listed below:
• A mapping mechanism from relative workflow processes to executable business processes written in the Business Process Execution Language for Web Services (WS-BPEL, formerly BPEL4WS), with an extension to support relative workflows.
• An architecture enabling the management of relative workflows on the basis of Web service technologies.
The rest of this paper is organised as follows: Section 2 gives a brief introduction to our relative workflows framework. Section 3 presents the mapping from the components of relative workflows to the elements defined in the Web Service Description Language (WSDL) and the extended WS-BPEL. Section 4 discusses the architecture of a workflow management system to facilitate relative workflows on the basis of Web services. Section 5 introduces some related work. Concluding remarks are given in Section 6, together with an indication of future work.

2 Preliminary Relative Workflow Model

In this section, we briefly review the relative workflow model [5], which is based on a visibility control mechanism. In our context, a collaborative workflow process consists of several intra-organisational workflow processes of participating organisations and their interaction. We call these intra-organisational workflow processes local workflow processes.

Definition 1 (Local Workflow Process (LWfP)). A local workflow process lp is defined as a directed acyclic graph ( T, R ), where T is the set of nodes representing the set of tasks, and R ⊆ T × T is the set of arcs representing the execution sequence.

Definition 2 (Organisation). An organisation g is defined as a set of LWfPs {lp^1, lp^2, … , lp^n}. An individual LWfP lp^i of g is denoted as g.lp^i.


During the collaboration, the organisation applies visibility control to protect the critical or private business information of some workflow tasks from being disclosed to other organisations. Table 1 lists the three basic visibility values defined for business interaction and workflow tracking. Due to the high diversity of business collaborations, these three values can hardly cover all visibility scenarios. In this paper, we use these three values to provide a fundamental visibility control mechanism, and this visibility value table is open for future extension.

Table 1. Visibility values

Invisible: A task is said to be invisible to an external organisation if it is hidden from that organisation.
Trackable: A task is said to be trackable to an external organisation if that organisation is allowed to trace the execution status of the task.
Contactable: A task is said to be contactable to an external organisation if the task is trackable to that organisation and the task is also allowed to send/receive messages to/from that organisation for the purpose of business interaction.

Definition 3 (Visibility Constraint). A visibility constraint vc is defined as a tuple (t, v), where t denotes a workflow task and v ∈ { Invisible, Trackable, Contactable }. A set of visibility constraints VC defined on an LWfP lp is represented as a set {vc:(t, v) | ∀t (t ∈ lp.T)}.

Definition 4 (Perception). A perception p_g1^(g0.lp) of an organisation g0's LWfP lp from another organisation g1 is defined as ( VC, MD, f ), where
− VC is a set of visibility constraints defined on g0.lp;
− MD ⊆ M × { in, out } is a set of message descriptions that contains the messages and the passing directions. M is the set of messages used to represent inter-organisational business activities;
− f: MD → g0.lp_g1.T is the mapping from MD to g0.lp_g1.T, and g0.lp_g1 is the perceivable workflow process (PWfP) of g0.lp from g1. The generation of g0.lp_g1 from g0.lp will be discussed later.

Definition 5 (Relative Workflow Process (RWfP)). A relative workflow process g1.rp perceivable from an organisation g1 is defined as a directed acyclic graph ( T, R ), where T is the set of the tasks perceivable from g1, which is the union of the following two parts:
− ∪_k g1.lp^k.T, the union of the task sets of all g1.lp^k;
− ∪_i ∪_j gi.lp^j_g1.T, the union of the task sets of all PWfPs of gi.lp^j from g1;
and R is the set of arcs perceivable from g1, which is the union of the following three parts:
− ∪_k g1.lp^k.R, the union of the arc sets of all g1.lp^k;
− ∪_i ∪_j gi.lp^j_g1.R, the union of the arc sets of all PWfPs of gi.lp^j from g1;
− L, the set of messaging links between LWfPs and PWfPs, consisting of two parts:
  − Lintra, the set of intra-organisational messaging links that connect tasks belonging to different LWfPs, defined on ∪_i ∪_j g1.lp^i.T × g1.lp^j.T where i ≠ j;
  − Linter, the set of inter-organisational messaging links that connect tasks between an LWfP and a PWfP, defined on ∪_i ∪_j ∪_k ( g1.lp^k.T × gi.lp^j_g1.T ) ∪ ( gi.lp^j_g1.T × g1.lp^k.T ).
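Purely as an illustration of Definitions 1-5 (and not part of the original model description), the relative workflow components can be pictured as a handful of simple data structures. All names and fields below are assumptions of this sketch.

```python
# Illustrative data structures for Definitions 1-5; names and fields are
# assumptions of this sketch, not the authors' implementation.
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

Task = str                      # a task identifier
Arc = Tuple[Task, Task]         # an execution-sequence arc (Definition 1)

@dataclass
class LocalWorkflowProcess:     # Definition 1: directed acyclic graph (T, R)
    tasks: Set[Task]
    arcs: Set[Arc]

@dataclass
class Organisation:             # Definition 2: a set of LWfPs
    name: str
    lwfps: Dict[str, LocalWorkflowProcess] = field(default_factory=dict)

# Definition 3: a visibility constraint is a (task, visibility value) pair
VISIBILITY = ("Invisible", "Trackable", "Contactable")
VisibilityConstraint = Tuple[Task, str]

@dataclass
class Perception:               # Definition 4: (VC, MD, f) defined for a partner
    constraints: List[VisibilityConstraint]
    message_descriptions: List[Tuple[str, str]]   # (message, "in" | "out")
    mapping: Dict[Tuple[str, str], Task]          # f: MD -> tasks of the PWfP

@dataclass
class RelativeWorkflowProcess:  # Definition 5: LWfP/PWfP tasks and arcs plus links
    tasks: Set[Task]
    arcs: Set[Arc]
    intra_links: Set[Arc]       # Lintra
    inter_links: Set[Arc]       # Linter
```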


Fig. 1. Relative workflow model

Figure 1 illustrates how the components of the relative workflow model are related across organisations. Given the discussion and definition of the relative workflow process above, a necessary procedure for an organisation, g0 or g1, to generate RWfPs is to define the perceptions on LWfPs. This step includes defining visibility constraints, message links and matching functions. Once the perceptions on LWfPs of its partner organisations have been defined, an RWfP can be generated by two more steps: composing tasks and assembling RWfPs. The purpose of composing tasks is to hide some private tasks of LWfPs. We choose to merge invisible tasks with the contactable or trackable tasks into composed tasks, if not violating the structural validity; otherwise, those invisible tasks are combined into a blind task. According to the perception defined from g1, an LWfP of g0 after this step becomes a PWfP for g1.


In the step of assembling RWfPs, an organisation may assemble its RWfPs from LWfPs, the PWfPs from partner organisations, together with the messaging links. As shown in Figure 1, an RWfP of g1 consists of the LWfPs, the PWfPs from g0, and the messaging links obtained by matching the message descriptions defined in the perceptions of g0 and g1. The related algorithms for composing tasks and assembling RWfPs are discussed in [5].
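The composing-tasks step can be pictured with a deliberately simplified filter; the real algorithms, including the structural-validity checks and the merging of invisible tasks into composed tasks, are given in [5]. Everything below, including the task names, is an assumption made for this sketch.

```python
# Oversimplified illustration of the composing-tasks step: invisible tasks are
# hidden, and any remaining run of invisible tasks is collapsed into one blind
# task. The structural-validity checks and merging rules of [5] are omitted.
def compose_perceivable_process(tasks_in_order, visibility):
    """tasks_in_order: task ids in execution order (a single path, for simplicity).
    visibility: dict task -> 'Invisible' | 'Trackable' | 'Contactable'."""
    perceivable, blind_pending = [], False
    for t in tasks_in_order:
        if visibility.get(t, "Invisible") == "Invisible":
            blind_pending = True                 # remember hidden work exists here
        else:
            if blind_pending:
                perceivable.append("blind_task") # placeholder for the hidden run
                blind_pending = False
            perceivable.append(t)
    if blind_pending:
        perceivable.append("blind_task")
    return perceivable

# Example: only contactable/trackable tasks survive, hidden ones collapse.
print(compose_perceivable_process(
    ["receive_order", "check_credit", "schedule_production", "confirm_delivery"],
    {"receive_order": "Contactable", "check_credit": "Invisible",
     "schedule_production": "Invisible", "confirm_delivery": "Trackable"}))
# -> ['receive_order', 'blind_task', 'confirm_delivery']
```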

3 Specifying Relative Workflows in Web Services

3.1 Mapping Rules

In the current standard stack of Web services, WSDL [8] is used to specify the static interfaces of a Web service. This includes message types, port types, partner link types, etc. WS-BPEL [9] is used to define the orchestration and choreography of Web services towards specific business objectives. To implement our relative workflow approach on the Web service platform, we designed a mapping from our relative workflow model to WS-BPEL. Table 2 gives the list of mapping rules for representing relative workflow components in WS-BPEL and WSDL. The LWfPs defined in an RWfP can be considered as a network of collaborative intra-organisational workflow processes which interact with each other. Since these LWfPs belong to the same organisation, say g1, there is no restriction on visibility. In other words, all details about these LWfPs and the interactions between them are available to g1. Therefore, we can treat these collaborative LWfPs as a composite workflow process, which represents the interaction behaviours between the involved local Web services. To structurally unite these collaborative LWfPs, the messaging links between these processes can be represented by WS-BPEL elements for structured activities. Yet the underlying partner links have to be kept to indicate the functional invocations. Some elements may be added to represent the response to the event of partner link invocations. Since each component LWfP can start and be executed independently, the composite workflow process may have multiple starting activities. To deal with this, we can use the standard WS-BPEL correlation elements to coordinate the concurrent starting activities. Different from LWfPs, the PWfPs defined in an RWfP of organisation g1 are subject to visibility constraints. Actually, the execution of these PWfPs is in the hands of g1's corresponding partner organisations, rather than of g1. The inclusion of these PWfPs is mainly used to provide instructions on the necessary interaction interfaces and handling flows from/to g1, and thereby assists g1's participation in the corresponding business collaboration. Therefore, the PWfPs should belong to a non-executable part of a WS-BPEL document. In this paper, we extend BPEL4WS with a pair of new elements: a root element that represents the composite process constructed from the LWfPs, and a nested element, one per PWfP, that describes an individual perceivable process.


In the context of BPEL4WS, the services with which a workflow process interacts are modelled as partner links. Each partner link is characterised by a partnerLinkType. The message links defined in the relative workflow model will be mapped to partner links connecting the Web services that provide the corresponding business functions for the tasks connected by the message links.

Table 2. Mapping rules

Relative Workflow Components | Web Service / BPEL Process Elements
∪i lp^i, all local workflow processes defined in a relative workflow process | a business process, in WS-BPEL
a perceivable workflow process | a perceivable process, in the extended BPEL4WS
t ∈ T, a task | a Web service operation defined in a WSDL portType; the operation will be invoked through a partnerLink defined in WS-BPEL
r ∈ R−L, a connection between tasks | structured-activity elements, in WS-BPEL (such as sequence, flow, switch, while and pick)
l ∈ Linter, an inter-organisational messaging link | a partner link, in WS-BPEL, with myRole = ..., partnerRole = ..., invokeType = "foreign"
l ∈ Lintra, an intra-organisational messaging link | partner link and event response elements, in WS-BPEL, with invokeType = "local"
m ∈ M, a message | a message type, in WSDL
md = (m, in/out) ∈ MD, a message description | primitive messaging activities, in WS-BPEL, in asynchronous or synchronous mode
a blind task | an empty activity, in WS-BPEL


We use an extended attribute invokeType, with value local or foreign, to represent the messaging links between two LWfPs or between an LWfP and a PWfP, respectively. With the given invokeTypes and messaging directions, each participating organisation can recognise its tasks in an RWfP. Consequently, the participating organisations can collectively accomplish the business collaboration represented by this RWfP. Finally, Table 2 lists all mapping rules, from the major relative workflow components to the elements defined in WSDL and WS-BPEL.

3.2 Process Mapping Example

Figure 2 (a) describes an RWfP of a retailer. It has two LWfPs, for product ordering and customer relation management (CRM), and one perceivable process of a manufacturer, for production. The retailer's product ordering process collects orders from customers and then orders products from the manufacturer. At the same time, the CRM process is responsible for recording all customer related behaviours. Based on the mapping rules listed in Table 2, the RWfP in Figure 2 (a) can be mapped to an extended WS-BPEL document as shown in Figure 2 (b).


Fig. 2. Process mapping example

As shown in Figure 2 (b), the underlying Web service interaction behaviours of the two collaborative local workflow processes, i.e. the product ordering process and the CRM process, are merged as the main content of the root composite-process element, while the Web service behaviours of the perceivable production process are described in the nested perceivable-process element.

4 Facilitating Relative Workflows Based on Web Services

4.1 Workflow Management System Architecture

To facilitate relative workflows, we also deploy Web service technologies to implement a workflow management system for each participating organisation, comprising the Agreement Management Service, the Workflow Modelling Service, the Workflow Engine Service and the Workflow Monitor Service. As shown in Figure 3, the local ports of the four administrative services are only accessible to intra-organisational components and databases, while the external ports are accessible to partner organisations. The whole lifecycle of an RWfP through these four administrative services goes as follows. First, the Agreement Management Service wraps an LWfP into PWfPs for partner organisations. Thereafter, the Workflow Modelling Service generates RWfPs with the PWfPs from partner organisations. Finally, RWfPs are executed by the Workflow Engine Services of the host organisation and the partner organisations, and the monitoring of the execution is handled by the Workflow Monitor Services of the involved organisations.


Fig. 3. Four administrative services

4.2 Administrative Services

4.2.1 Agreement Management Service
The function of the Agreement Management Service, shown in Figure 3 (a), is to define perceptions and generate PWfPs for partner organisations. The perception generator of the Agreement Management Service extracts the visibility constraints from the manually signed commercial contracts and then encodes them into perceptions. The perception locator and the perceivable process generator are jointly responsible for the procedure of composing tasks. The generated PWfPs are stored in the Perceivable Workflow Process Database.

4.2.2 Workflow Modelling Service
The Workflow Modelling Service, shown in Figure 3 (b), is responsible for local workflow modelling and relative workflow assembling. To generate an RWfP such as the one shown in Figure 2 (a), the perceivable process locator of the retailer first enquires of the perceivable process checker of each partner organisation's Agreement Management Service, such as the manufacturer's, to check the PWfPs defined for the retailer. Then the relative workflow process assembler can assemble an RWfP from LWfPs and the retrieved PWfPs. The generated RWfPs are stored in the Relative Workflow Process Database.


4.2.3 Workflow Engine Service
The function of the Workflow Engine Service, shown in Figure 3 (c), is to coordinate and enact the execution of workflow instances. When the inter-organisational collaboration process flows from one organisation to another, the context organisation changes accordingly. For example, when the retailer places an order with the manufacturer, the manufacturer handles its collaboration following the RWfP defined from its own perspective, while the retailer continues its collaboration following the RWfP defined in Figure 2 (a). As we can see, in this organisation centric approach, the RWfP may change with the host organisation. Therefore, the workflow coordinator is assigned to keep the correct correspondence between participating workflow instances. Technically, the workflow coordinator can recognise whether the process is flowing from an external organisation or from the host organisation by referring to the "invokeType" attribute of the invoking partner links. If the invoking partner link is from an external organisation, the workflow coordinator will first record the instance ID of the requesting workflow instance and identify which relative workflow instance in the Relative Workflow Instance Database corresponds to this requesting instance, with the help of the Agreement Management Service. Afterwards, the workflow coordinator needs to find the local workflow instances that participate in the recognised relative workflow instance. Such association information will assist the workflow manager in performing the requested operations by redirecting the invocations to the corresponding Web services. Workflow instantiation and work list handling are done by the process instantiation starter and the worklist generator, respectively.

4.2.4 Workflow Monitor Service
The function of the Workflow Monitor Service, shown in Figure 3 (d), is to handle the tracking of local workflows and relative workflows. The local monitor is responsible for tracking the local part of a relative workflow instance, i.e. the status of the tasks that belong to LWfPs. As for the perceivable workflows, the monitoring manager will propagate status requests to its counterparts at partner organisations. At the site of a requested organisation, the local workflow process locator will first locate the LWfPs of the requested instances with the help of the Agreement Management Service, then find the execution status via the local monitor, and finally return it to the requesting organisation. With the retrieved information, the requesting organisation's status assembler can update the execution status of the relative workflow instance in the Relative Workflow Instance Database. Due to space limits, we cannot extensively discuss the handling procedure of this architecture in this paper.
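As an illustration of the coordination logic just described (not the authors' implementation), the routing decision of the workflow coordinator can be sketched as follows; all identifiers and data stores are assumptions of this sketch.

```python
# Sketch of the workflow coordinator's routing decision described in Section
# 4.2.3. All data structures are stand-ins: a real system would query the
# Relative and Local Workflow Instance Databases instead of these dicts.
def route_invocation(partner_link, request, external_to_rwf, rwf_to_local, services):
    """partner_link: dict carrying the extended 'invokeType' attribute ('foreign'/'local').
    external_to_rwf: external instance id -> relative workflow instance id.
    rwf_to_local:    relative instance id -> list of local workflow instance ids.
    services:        local instance id -> callable handling the operation."""
    if partner_link.get("invokeType") == "foreign":
        rwf_id = external_to_rwf[request["instance_id"]]   # steps 1-2: correlate instances
        local_id = rwf_to_local[rwf_id][0]                 # step 3: participating local instance
        return services[local_id](request["operation"], request["payload"])  # step 4: redirect
    # invokeType == "local": an intra-organisational call, dispatch directly
    return services[request["target"]](request["operation"], request["payload"])

# Toy usage with hypothetical identifiers:
services = {"ordering-7": lambda op, data: f"{op} handled locally with {data}"}
print(route_invocation({"invokeType": "foreign"},
                       {"instance_id": "mfr-42", "operation": "placeOrder", "payload": "PO-1"},
                       {"mfr-42": "rwf-3"}, {"rwf-3": ["ordering-7"]}, services))
```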

5 Related Work and Discussion

The Electronic Business XML (ebXML [10]) consortium defined a comprehensive set of specifications for an industrial-strength usage pattern for XML document exchange among trading partners. The ebXML architecture allows businesses to find one another by using a registry, define trading-partner agreements, and exchange XML messages in support of business operations [11]. This proactive collaboration scheme is also supported by our relative workflow approach. We pay special attention to the


relativity of participating organisations, which lays the foundation of local autonomy and privacy protection. A flexible visibility control mechanism is proposed to support the relativity. Chiu, Li, Liu et al [6, 7] borrowed the notion of ‘view’ from federated database systems, and employed a virtual workflow view (or virtual process view) for the interorganisational collaboration instead of the real instance, to hide internal information. Schulz and Orlowska [4] developed a cross-organisational workflow architecture, on the basis of communication between the entities of a view-based workflow model. Very recently, Issam, Dustdar et al [12] proposed another view-based approach to support inter-organisational workflow cooperation from the motivation of considering an inter-organisational workflow as a cooperation of several pre-established workflows of different organisations. In comparison, our relative workflows approach extracts explicit visibility constraints from commercial contracts to restrict the information disclosure. Different from the workflow view model, the relative workflow approach distributes the macro business collaboration into interactions between neighbouring organisations, and these interactions are performed by the relative workflows designed from the perspective of individual organisations.

6 Conclusion and Future Work

This paper described our work on facilitating relative workflows with Web service technologies to provide a solution for supporting local autonomy and loose coupling in inter-organisational workflow applications. This work is performed at two levels: one is to extend the present WS-BPEL to fully characterise the basic components, processes and operations in the context of relative workflows; the other is to design a new architecture for workflow management systems. The proposed solution presents a comprehensive picture for organisations to adopt Web services for supporting complex business collaboration and interaction. We are refining and extending the current research to further consider some important issues, such as the coordination mechanism between participating organisations and the derivability of visibility constraints. Currently, we are developing a prototype for incorporating relative workflows in the Web service environment.

Acknowledgements

The work reported in this paper is partly supported by the Australian Research Council discovery project DP0557572.

References

1. van der Aalst, W.M.P., ter Hofstede, A.H.M., and Weske, M.: Business Process Management: A Survey. In Proceedings of Business Process Management (2003) 1-12.
2. Grefen, P.W.P.J., Aberer, K., Ludwig, H., and Hoffner, Y.: CrossFlow: Cross-Organizational Workflow Management for Service Outsourcing in Dynamic Virtual Enterprises. IEEE Data Engineering Bulletin, 24(1) (2001) 52-57.



3. Sayal, M., Casati, F., Dayal, U., and Shan, M.C.: Integrating Workflow Management Systems with Business-to-Business Interaction Standards. In Proceedings of the International Conference on Data Engineering. IEEE Computer Society (2002) 287-296.
4. Schulz, K. and Orlowska, M.: Facilitating Cross-organisational Workflows with a Workflow View Approach. Data & Knowledge Engineering, 51(1) (2004) 109-147.
5. Zhao, X., Liu, C., and Yang, Y.: An Organisational Perspective of Inter-Organisational Workflows. In Proceedings of the International Conference on Business Process Management. Nancy, France (2005) 17-31.
6. Chiu, D.K.W., Karlapalem, K., Li, Q., and Kafeza, E.: Workflow View Based E-Contracts in a Cross-Organizational E-Services Environment. Distributed and Parallel Databases, 12(2-3) (2002) 193-216.
7. Liu, D. and Shen, M.: Workflow Modeling for Virtual Processes: an Order-preserving Process-View Approach. Information Systems, 28(6) (2003) 505-532.
8. W3C. Web Services Description Language (WSDL) 1.1. http://www.w3c.org/TR/wsdl (2001)
9. Andrews, T.: Business Process Execution Language for Web Services (BPEL4WS) 1.1. http://www.ibm.com/developerworks/library/ws-bpel (2003)
10. ebXML. http://ebxml.org (2002)
11. Dogac, A., Tambag, Y., Pembecioglu, P., Pektas, S., Laleci, G., Kurt, G., Toprak, S., and Kabak, Y.: An ebXML Infrastructure Implementation through UDDI Registries and RosettaNet PIPs. In Proceedings of the ACM SIGMOD International Conference on Management of Data. Madison, Wisconsin (2002) 512-523.
12. Issam, C., Schahram, D., and Samir, T.: The View-Based Approach to Dynamic Inter-Organizational Workflow Cooperation. Data & Knowledge Engineering (In Press)

Text Based Knowledge Discovery with Information Flow Analysis Dawei Song1 and Peter Bruza2 1

Knowledge Media Institute, The Open University, Walton Hall, Milton Keynes, MK7 6AA, United Kingdom [email protected] 2 Queensland University of Technology [email protected]

Abstract. Information explosion has led to diminishing awareness: disciplines are becoming increasingly specialized; individuals and groups are becoming ever more insular. This paper considers how awareness can be enhanced via text-based knowledge discovery. Knowledge representation is motivated from a socio-cognitive perspective. Concepts are represented as vectors in a high dimensional semantic space automatically derived from a text corpus. Information flow computation between vectors is proposed as a means of discovering implicit associations between concepts. The potential of information flow analysis in text based knowledge discovery has been demonstrated by two case studies: literature-based scientific discovery by attempting to simulate Swanson’s Raynaud-fish oil discovery in medical texts; and automatic category derivation from document titles. There is some justification to believe that the techniques create awareness of new knowledge.

1 Introduction

There is an ever-expanding amount of information published and accessible on the WWW, which provides new opportunities for mining the Web to discover useful information. For example, a researcher searches bibliographic databases to obtain possible solutions or useful hints from other people's published work in order to solve his or her own problems. The term "hints" here refers to the intermediate knowledge connecting the problem with some yet unknown possible solution. This kind of hidden connection is referred to as "undiscovered public knowledge" by Swanson [14]. Swanson's serendipitous literature-based discovery of a cure for Raynaud's disease by dietary fish oils illustrates this phenomenon. The literature documenting Raynaud's disease and the literature surrounding fish oil were disjoint. Had these communities been aware of each other, a cure would probably have been found much earlier than Swanson's chance discovery. Similarly, Barwise and Seligman [1] use the term "information flow" for the relation of one information carrier X carrying or conveying another Y. We have referred to the process of uncovering implicit information as informational inference. Humans


have the ability to swiftly make reliable judgments about the content of brief text fragments, e.g., document titles. For example, with web page titles “welcome to Penguin Books” and “Antarctic Penguin”, the term “penguin” within these two phrases is about two rather different concepts: the publisher Penguin Books, and the short, black animal living in the ice-cold Antarctic. The process of making such “aboutness” judgments has been referred to as informational inference [12]. An information inference mechanism has been proposed which automatically computes information flow through a high dimensional conceptual space [11] [12]. Each concept is represented as a vector of other concepts in the conceptual space. The information flow is a reflection of how strongly Y is informationally contained within X, which discovers the implicit associations between concepts. This is distinct from the explicit term associations via co-occurrence relationships used in traditional information processing systems. For example, “bird” is an information flow derived from “Antarctic Penguin”, even if it may never co-occur with “penguin” in a document collection. Information flow can be considered as a special form of informational inference [12]. In this way, the user can classify the incoming documents into different categories, e.g., “publisher”, “birds”, etc., by scanning their titles, instead of spending much time browsing and reading the whole articles. Suppose a user is only interested in the bird penguin. He can then simply filter out the documents about penguin publisher. The information flow analysis is based on Hyperspace Analogue to Language (HAL) vector representations, which reflects the lexical co-occurrence information of terms [4] [9]. Words are represented as vectors in a high dimensional semantic space automatically derived from a text corpus. Research from cognitive science has demonstrated the cognitive compatibility of HAL representations with human processing. Information flow computation between vectors is proposed as a means of suggesting potentially interesting implicit associations between concepts. In this paper we offer the fundamentals of a knowledge discovery mechanism based on information flow analysis. Two case studies are presented, using the mechanism, in the areas of serendipitous discovery, such as a replication of Swanson’s Raynaud-Fish Oil discovery (Section 3), and unsupervised category derivation from document titles (Section 4). The ultimate goal of this research is to develop automated information discovery devices for enabling people to have their awareness enhanced.

2 Fundamentals of Information Flow Analysis

2.1 Representing Information in High Dimensional Conceptual Space

How can information, or concepts, be represented and processed in a full, rich-in-meaning context? From a cognitive science perspective, Gärdenfors proposed a cognitive model of 'conceptual space', which is built upon geometric structures representing concepts and properties [6]. At the conceptual level, information is represented geometrically in terms of a dimensional space.


Recent investigations on lexical semantic space models open a door to realising Gärdenfors' conceptual spaces theory. Humans encountering a new concept derive its meaning via an accumulation of experience of the contexts in which the concept appears. The meaning of a word is captured by examining its co-occurrence patterns with other words in language use (e.g., a corpus of texts). There have been two major classes of semantic space models: document spaces and word spaces. The former represents words as vector spaces of the text fragments (e.g., documents, paragraphs) in which they appear. A notable example is Latent Semantic Analysis (LSA) [8]. The latter represents words as vector spaces of other words, which occur with the target words within a certain distance (e.g., a window size). The weighting scheme can be inversely proportional to the distance between the context and target words. The Hyperspace Analogue to Language (HAL) model employs this scheme [4]. Concepts which occur in similar contexts tend to be similar to each other in meaning. For example, nurse and doctor are similar in semantics to each other, as they always experience the same contexts, i.e., hospital, patients, etc. The semantic space models have demonstrated cognitive compatibility with human processing. For example, Burgess and Lund showed via cognitive experiments that "human participants were able to use the context neighbourhoods that HAL generates to match words with similar items and to derive the word (or a similar word) from the neighbourhood, thus demonstrating the cognitive compatibility of the representations with human processing" [4].

2.2 Hyperspace Analogue to Language (HAL)

Song and Bruza [11] [12] proposed to use HAL vectors to prime the geometric representations inherent to Gärdenfors' conceptual spaces. Given an n-word vocabulary, the HAL space is an n × n matrix constructed by moving a window of length L over the corpus by one-word increments, ignoring punctuation, sentence and paragraph boundaries. All words within the window are considered as co-occurring with each other with strengths inversely proportional to the distance between them. Given two words whose distance within the window is d, the weight of association between them is computed as (L – d + 1). After traversing the corpus, an accumulated co-occurrence matrix for all the words in a target vocabulary is produced. It is sometimes useful to identify the so-called quality properties of a HAL vector. Quality properties are identified as those dimensions in the HAL vector for c which are above a certain threshold (e.g., above the average weight within that vector). HAL vectors are normalized to unit length before information flow computation. For example, Table 1 shows part of the normalized HAL vector for the word "Raynaud", computed by applying the HAL method to a collection of medical texts originating from the MEDLINE collection.

Table 1. HAL vector

Dimension: Value
El: 0.17
blood: 0.09
calcium: 0.14
dazoxiben: 0.18
flap: 0.09
forestry: 0.09
ketanserin: 0.26
necrosis: 0.09
nifedipine: 0.41
orthostatic: 0.09
platelet: 0.14
prostaglandin: 0.22
renal: 0.09
...: ...
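Before turning to the formal definition, the sliding-window accumulation just described can be sketched as follows. This is an illustration only, not the authors' code: direction handling and the row/column treatment of the matrix are simplified to match the description in the text, and the corpus and window size are toy values.

```python
# Minimal sketch of HAL-space construction as described in Section 2.2:
# a window of length L slides over the token stream, and each pair of words in
# the window co-occurs with strength (L - d + 1), where d is their distance.
# Vectors are then normalized to unit length.
from collections import defaultdict
import math

def build_hal(tokens, L=10):
    hal = defaultdict(lambda: defaultdict(float))   # hal[word][context] = weight
    for i, target in enumerate(tokens):
        for d in range(1, L + 1):                   # look back up to L positions
            if i - d < 0:
                break
            context = tokens[i - d]
            w = L - d + 1                           # closer words get higher weights
            hal[target][context] += w               # co-occurrence is recorded
            hal[context][target] += w               # for both words in the pair
    # normalize each vector to unit length before information flow computation
    for vec in hal.values():
        norm = math.sqrt(sum(w * w for w in vec.values()))
        for context in vec:
            vec[context] /= norm
    return hal

tokens = "raynaud disease treated with nifedipine improves blood flow".split()
hal = build_hal(tokens, L=5)
print(dict(hal["nifedipine"]))   # weights for 'raynaud', 'disease', 'blood', ...
```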


This example demonstrates how a word is represented as a weighted vector whose dimensions comprise other words. The weights represent the strengths of association between "Raynaud" and other words seen in the context of the sliding window: the higher the weight of a word, the more it has lexically co-occurred with "Raynaud" in the same context(s). The quality of HAL vectors is influenced by the window size: the longer the window, the higher the chance of representing spurious associations between terms. A window size of eight or ten has been used in various studies [2][11][12]. Accordingly, a window size of 10 will also be used in the experiments reported in this paper. More formally, a concept c is a vector representation:

c = ⟨ w_{c,p1}, w_{c,p2}, ..., w_{c,pn} ⟩

where p1, p2, ..., pn are called dimensions of c, n is the dimensionality of the HAL space, and w_{c,pi} denotes the weight of pi in the vector representation of c. A dimension is termed a property if its weight is greater than zero. A property pi of a concept c is termed a quality property iff w_{c,pi} > ∂, where ∂ is a non-zero threshold value. Let QP∂(c) denote the set of quality properties of concept c. QPμ(c) will be used to denote the set of quality properties above the mean value, and QP(c) is short for QP0(c).

Computing Information Flow in HAL Spaces

Information flow between two concepts is computed as a degree of inclusion (⊆) of the source vector in the target vector. As HAL vectors are not perfect representations of the associated concepts, the computation is based on how many dimensions of one concept are present in another concept. Total inclusion leads to maximum information flow.

Definition (HAL-based information flow). i1, ..., ik ⊢ j iff degree(⊕ci ⊆ cj) > λ

where ci denotes the conceptual representation of token i, and λ is a threshold value. ⊕ci refers to the combination of the HAL vectors c1, ..., ck into a single vector representation representing the combined concept. Details of a concept combination heuristic can be found in [2]. The degree of inclusion is computed in terms of the ratio of intersecting quality properties of ci and cj to the number of quality properties in the source ci:

degree(ci ⊆ cj) = ( Σ_{pl ∈ QPμ(ci) ∩ QP(cj)} w_{ci,pl} ) / ( Σ_{pk ∈ QPμ(ci)} w_{ci,pk} )

The underlying idea of this definition is to make sure that a majority of the most important quality properties of ci also appear in cj. Note that information flow has a truly inferential character, i.e., concept j need not be a property dimension of ci.
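A small sketch of the inclusion computation is given below. It is illustrative only: HAL vectors are assumed to be stored as Python dicts (as in the construction sketch above), the combination heuristic ⊕ is omitted, and the toy weights are not taken from the paper's data.

```python
# Sketch of the HAL-based information flow computation defined above.
# A HAL vector is a dict {dimension: weight}; quality properties QP_mu(c) are
# the dimensions whose weight exceeds the vector's mean weight.
def quality_properties(vec):
    mu = sum(vec.values()) / len(vec)
    return {p for p, w in vec.items() if w > mu}

def degree_of_inclusion(ci, cj):
    """degree(ci ⊆ cj): weighted share of ci's quality properties present in cj."""
    qp_i = quality_properties(ci)
    shared = {p for p in qp_i if cj.get(p, 0.0) > 0.0}   # QP_mu(ci) ∩ QP(cj)
    total = sum(ci[p] for p in qp_i)
    return sum(ci[p] for p in shared) / total if total else 0.0

def information_flows(ci, candidates, lam=0.75):
    """Return the concepts j with degree(ci ⊆ cj) above the threshold lambda."""
    return {j: d for j, cj in candidates.items()
            if (d := degree_of_inclusion(ci, cj)) > lam}

# Toy vectors with illustrative weights:
raynaud = {"nifedipine": 0.41, "ketanserin": 0.26, "prostaglandin": 0.22, "renal": 0.09}
blood   = {"nifedipine": 0.30, "ketanserin": 0.20, "prostaglandin": 0.25, "platelet": 0.35}
print(information_flows(raynaud, {"blood": blood}, lam=0.5))
```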


Table 2. Information flows from "Gatt talks"

Information Flows: Degree
gatt: 1.00
trade: 0.96
agreement: 0.86
world: 0.85
negotiations: 0.84
talks: 0.82
set: 0.82
states: 0.81
EC: ...
...

Table 2 shows an example of information flow computation, where the weights represent the degrees of the information flows derived from the combination of "GATT" (General Agreement on Tariffs & Trade) and "talks". The HAL-based information flow model has been successfully applied to automatic query expansion for document retrieval with encouraging results [2]. The next two sections present two case studies of applying information flow analysis to literature-based scientific discovery and to category derivation from document titles.

3 Case Study 1: Scientific Discovery Via Information Flow Analysis

An important type of information discovery is scientific discovery, which is also an inference process. Swanson (1986) proposed the concept of "undiscovered public knowledge" [14]. Two originally unconnected phenomena A and C could be linked via some intermediate B. When some relationship between A and B is found in a number of research papers, while some kind of relationship between C and B can also be found, the "undiscovered public knowledge" that there is some kind of relationship between A and C can then be established. Swanson successfully discovered the hidden connection between "Raynaud's disease" and "fish oil" via intermediate concepts such as "blood viscosity" and "platelet aggregation" from MEDLINE. Gordon and Dumais [7] used Latent Semantic Indexing to simulate Swanson's discovery, but the results were disappointing: no fish oil was explicitly discovered. Weeber et al. [16] developed a semi-automatic system with a set of NLP and semantic processing tools to simulate Swanson's discovery. However, a lot of human interaction is involved. For example, both approaches need to download the articles containing "Raynaud disease" first. Terms co-occurring frequently with "Raynaud disease" are then filtered and ranked. A number of them are manually selected to serve as intermediate B-terms. For each B-term, a set of articles containing this term is then retrieved and analysed in order to find the corresponding C-terms. Note that it is not necessary that "fish oil" is ranked among the top C-terms; the goal is to help experts obtain important hints to find it. We feel the intuitions behind Swanson's discovery and our information flow model are very close, which provides the possibility of replicating the Raynaud-fish oil discovery automatically or with minimal human involvement.

3.1 Experimental Setup

Our experiment is run on 111,603 MEDLINE core clinical journal articles from 1980-1985. There are no articles during that period relating "Raynaud disease" and "fish oil". Therefore, it would be a true discovery to find information flow between them, or a large number of hints of it.


Only document titles are used. After dropping a list of stop words used in Swanson's ARROWSMITH system, the resulting vocabulary size is 28,834. We make two runs using our information flow model to derive information flows from the term "Raynaud". The first run is fully automatic. In the second run, we manually adjust the weights of a number of B-terms identified in [13] and [16] by assigning the dimensions "blood" and "platelet" a value of 1.0, and the other dimensions ("aggregation", "fibrinolysis", "thrombosis", "adhesiveness", "coagulation", "viscosity", "erythrocyte", "plasma", "hemorheology", "flow", "hyperviscosity", "vascular", "vasodilatation", "vasodilation", "vasospasm", "vasomotion") the maximal dimensional weight of the "Raynaud" vector. These dimensions originate from the B-terms documented in [13]. The resultant vector (denoted "Raynaud+") was then normalized. The second run emulates the case where a researcher manually enhances the automatically derived HAL vector for "Raynaud", motivated by their specific expertise and interest. Some highly weighted dimensions (above average) of the original "Raynaud" vector and of "Raynaud+" are listed below for illustration:

Table 3. HAL vectors of "Raynaud" and "Raynaud+"

raynaud                          raynaud+
Dimension       Weight           Dimension        Weight
El              0.17             adhesiveness     0.17
blood           0.09             aggregation      0.17
calcium         0.14             blood            0.42
dazoxiben       0.18             coagulation      0.17
flap            0.09             erythrocyte      0.17
forestry        0.09             fibrinolysis     0.17
ketanserin      0.26             flow             0.17
necrosis        0.09             hemorheology     0.17
nifedipine      0.41             hyperviscosity   0.17
orthostatic     0.09             nifedipine       0.17
platelet        0.14             plasma           0.17
prostaglandin   0.22             platelet         0.42
renal           0.09             thrombosis       0.17
...             ...              vascular         0.17
                                 vasodilatation   0.17
                                 vasomotion       0.17
                                 vasospasm        0.17
                                 viscosity        0.17
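The construction of the "Raynaud+" vector can be expressed compactly. The sketch below is our reading of the procedure (boost "blood" and "platelet" to 1.0, set the remaining B-term dimensions to the vector's maximal weight, then re-normalize), not the authors' code; the toy weights are illustrative only.

```python
import math

def enhance(vector, boosted, other_b_terms):
    """Manually enhance an automatically derived HAL vector with B-term knowledge."""
    enhanced = dict(vector)
    max_weight = max(vector.values())
    for dim in boosted:                 # e.g. "blood", "platelet"
        enhanced[dim] = 1.0
    for dim in other_b_terms:           # remaining B-terms documented in [13]
        enhanced[dim] = max_weight
    norm = math.sqrt(sum(w * w for w in enhanced.values()))
    return {dim: w / norm for dim, w in enhanced.items()}

raynaud = {"nifedipine": 0.41, "ketanserin": 0.26, "blood": 0.09, "platelet": 0.14}
raynaud_plus = enhance(raynaud, {"blood", "platelet"}, {"viscosity", "vasospasm"})
print(sorted(raynaud_plus.items(), key=lambda item: -item[1]))
```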

3.2 Experimental Results

Some illustrative information flows and their associated degrees derived from Raynaud and Raynaud+ are shown in Table 4. The number in brackets following each derived term is its ranking.


Table 4. Ranked information flows from the Raynaud and Raynaud+ HAL vectors

raynaud                           raynaud+
Information Flow     Degree       Information Flow     Degree
raynaud (1)          1.000        raynaud (1)          1.000
coronary (2)         0.678        blood (2)            0.630
pulmonary (3)        0.650        myocardial (3)       0.558
myocardial (4)       0.648        platelet (4)         0.548
blood (5)            0.618        coronary (5)         0.543
renal (6)            0.617        renal (6)            0.522
platelet (7)         0.600        ventricular (7)      0.519
hypertension (8)     0.593        cells (8)            0.518
vascular (9)         0.588        cell (9)             0.516
...                               ...
liver (56)           0.326        oil (162)            0.229
...                               ...
oil (325)            0.140        fish (256)           0.196
...                               ...
                                  cod (435)            0.155

Discussion

It is promising that HAL-based information flow (Raynaud+) has established a connection between the term "Raynaud" and terms suggesting "fish oil" such as "oil", "liver", "cod", and "fish". These terms are genuine information inferences, as these dimensions have a value of zero in the vector representation of "Raynaud". In other words, information flow has detected an implicit connection between "Raynaud" and these terms. For reasons of computational efficiency, information flows are computed to individual terms. As a consequence, interpreting the ranked output is made difficult. For example, the distance between "fish" (rank 256) and "oil" (rank 162) is considerable. In order to counter this problem, we have made some initial attempts to utilise shallow natural language processing. The titles are parsed into index expressions [3], which are tree structures in which the nodes of the trees correspond to terms and the edges are relationships between terms. Table 5 depicts a fragment of the output after the information flows of the Raynaud+ vector have been projected. The score on the left reflects frequencies. For example, the phrase "fish oil" was inferred from four separate power index expressions. The scores next to individual terms reflect the degree of information flow.

Table 5. Post-processing of information flow ranking

27  ventricular (0.52) infarction (0.46)
27  thromboplastin (0.17)
27  monoamine (0.17) oxidase (0.18)
27  blood (0.63) coagulation (0.29)
26  umbilical (0.24) vein (0.32)
25  fish (0.20)
23  viscosity (0.21)
23  cigarette (0.26) smokers (0.22)
...
6   liver (0.43) cell (0.52) carcinoma (0.32)
...
4   fish (0.20) oil (0.23)
...
1   cod (0.15) liver (0.43) oil (0.23)
...
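The post-processing idea can be sketched as follows: phrases parsed from titles are retained when all of their terms receive a sufficiently high information-flow degree, and are then ranked by how many index expressions they occur in. This is only a rough illustration under our own assumptions; simple term tuples stand in for the parsed index expressions of [3].

```python
from collections import Counter

def rank_phrases(title_phrases, flow_degree, min_degree=0.1):
    """Rank phrases by frequency, keeping only those whose terms all carry
    an information-flow degree of at least `min_degree`."""
    counts = Counter()
    for phrase in title_phrases:
        if all(flow_degree.get(term, 0.0) >= min_degree for term in phrase):
            counts[phrase] += 1
    return counts.most_common()

phrases = [("fish", "oil"), ("cod", "liver", "oil"),
           ("fish", "oil"), ("blood", "coagulation")]
degrees = {"fish": 0.20, "oil": 0.23, "cod": 0.15,
           "liver": 0.43, "blood": 0.63, "coagulation": 0.29}
for phrase, freq in rank_phrases(phrases, degrees):
    print(freq, " ".join(f"{t} ({degrees[t]:.2f})" for t in phrase))
```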


The next case study shows how information flow analysis can be used to derive categories from short textual document titles.

4 Case Study 2: Deriving Text Categories with Information Flow

In situations involving large amounts of incoming electronic information (e.g., defense intelligence), judgments about content (whether by automatic or manual means) are sometimes performed based simply on a title description or brief caption, because it is too time consuming or too computationally expensive to peruse whole documents. The task of automatic Text Categorization (TC) is to assign a number of predefined category labels to a document by learning a classification scheme from a set of labelled documents. A number of statistical learning algorithms have been investigated, such as K-nearest neighbor (KNN), Naïve Bayes (NB), Support Vector Machines (SVM), and Neural Networks (NN). They make use of the manually pre-assigned categories associated with the training documents. However, many practical situations do not conform to the above format of text categorization. For example, pre-defined categories and labeled training data may not be available. As a consequence, unsupervised techniques are required. In data intensive domains incoming documents may need to be classified on the basis of titles alone. In this section, we apply information flow analysis to unsupervised category learning and document title classification, where the information flow theory introduced in Section 2 allows classification to be considered in terms of information inference.

Consider the terms $i_1, \ldots, i_n$ being drawn from the title of an incoming document D to be classified. The concept combination heuristic can be used to obtain a combination vector $i_1 \oplus i_2 \oplus \ldots \oplus i_n$. The information flows from $i_1 \oplus i_2 \oplus \ldots \oplus i_n$ can then be calculated. If the degree of information flow to a derived term j is sufficiently high, j can be assigned to D as a category. For example, "trade" is inferred from "GATT TALKS" with a high degree (0.96). The document titled "GATT TALKS" can then be classified with the category "trade".

4.1 Experimental Setup

We use the Reuters-21578 collection for our experiments. 17 categories (topics) are selected from the total of 120 Reuters topics. The 14,688 training documents are extracted, with all pre-assigned topics removed. This training set is then used to generate the HAL space, from which the topic information flows can be derived. As the document-topic labels have been removed, such a learning process can be considered as true discovery. On the other hand, 1,394 test document titles, each of which belongs to at least one of the 17 selected topics, are extracted. With respect to the test set, only 14 topics have relevance information (i.e., are assigned to at least one test document). Among the 14 topics, five have more than 100 relevant titles and four have fewer than 10 relevant titles. The average number of relevant document titles over the 14 topics is 107. We choose this set of topics because they vary from the most frequently used topics in the Reuters collection, like "acquisition" (note that we use the real English word "acquisition" instead of the original topic label "acq" for information flow analysis), to some rarely used ones such as "rye".


The test titles are indexed using the document term frequency and inverse collection frequency components of the Okapi BM-25 formula.

4.2 Experimental Results

Since this research is still at an early stage, we do not compare the performance with other text categorization models using the micro- and macro-averaged precision and the recall-precision break-even point measures. Our purpose at the moment is to test the potential of the information flow model in unsupervised learning of categories. The experimental results are quite promising (average precision: 0.46, initial precision: 0.79, R-precision: 0.45, average recall: 1418/1498). Further refinement and comparison with other models are left as future work.
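The categorization procedure of Section 4 amounts to combining the title's HAL vectors and thresholding the flow degree to each candidate topic term. The sketch below is a self-contained toy under our own assumptions: a normalized dimension-wise sum stands in for the concept combination heuristic of [2], and the inclusion degree is restated inline.

```python
def combine(vectors):
    """Naive stand-in for the concept combination heuristic of [2]:
    sum the HAL vectors dimension-wise and renormalize to unit length."""
    combined = {}
    for vec in vectors:
        for dim, w in vec.items():
            combined[dim] = combined.get(dim, 0.0) + w
    norm = sum(w * w for w in combined.values()) ** 0.5
    return {dim: w / norm for dim, w in combined.items()} if norm else combined

def degree(ci, cj):
    """Degree of inclusion of ci in cj over ci's above-mean quality properties."""
    mean = sum(ci.values()) / len(ci)
    qp_i = [d for d, w in ci.items() if w > mean]
    total = sum(ci[d] for d in qp_i)
    shared = sum(ci[d] for d in qp_i if cj.get(d, 0.0) > 0.0)
    return shared / total if total else 0.0

def classify_title(title_terms, hal, topic_vectors, threshold=0.8):
    """Assign every topic whose information-flow degree from the combined
    title concept exceeds the threshold."""
    title_concept = combine([hal[t] for t in title_terms if t in hal])
    return [topic for topic, vec in topic_vectors.items()
            if title_concept and degree(title_concept, vec) > threshold]
```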

5 Conclusions and Future Work

A consequence of the explosion of information is diminishing awareness: disciplines are becoming increasingly specialized; individuals and groups are becoming ever more insular. This paper considered how awareness can be enhanced via text-based knowledge discovery. Information flow computation through a high-dimensional semantic space is able to simulate Swanson's Raynaud-fish oil discovery, if the dimensional representation of Raynaud is manually enhanced by increasing weights corresponding to salient aspects of Raynaud's disease. The amount of enhancement is small, and the character of the enhancements is within the sphere of knowledge appropriate to a researcher in Raynaud's disease. On the other hand, the information flow analysis also demonstrates promising performance in category learning from document titles. It should be noted that the connections the information flow model establishes can be indirect. In short, we feel there is some justification to be encouraged that HAL-based information flow can function as a logic of discovery [5]. The goal of such a logic is to produce hypotheses. Future work will be directed towards a logic of justification, the goal of which is to filter the hypotheses produced by the logic of discovery into a small set of candidates. In the light of our discussion of awareness, HAL-based information flow has the potential to enhance awareness by producing hypotheses (e.g., Raynaud disease - fish oil; GATT talks - trade) that the human researcher would never consider or might easily ignore.

References

1. Barwise, J., & Seligman, J. (1997) Information Flow: The Logic of Distributed Systems. Cambridge Tracts in Theoretical Computer Science 44.
2. Bruza, P.D. and Song, D. (2002) Inferring Query Models by Information Flow Analysis. In: Proceedings of the 11th International ACM Conference on Information and Knowledge Management (CIKM 2002), pp. 260-269.


3. Bruza, P.D. (1993) Stratified Information Disclosure. Ph.D. Thesis. University of Nijmegen, The Netherlands.
4. Burgess, C., Livesay, L., & Lund, K. (1998) Explorations in Context Space: Words, Sentences, Discourse. In Foltz, P.W. (Ed) Quantitative Approaches to Semantic Knowledge Representation. Discourse Processes, 25(2&3), pp. 179-210.
5. Gabbay, D. and Woods, J. (2000) Abduction. Lecture notes from ESSLLI 2000 (European Summer School on Logic, Language and Information). Online: http://www.cs.bham.ac.uk/~esslli/notes/gabbay.html
6. Gärdenfors, P. (2000) Conceptual Spaces: The Geometry of Thought. MIT Press.
7. Gordon, M.D. and Dumais, S. (1998) Using Latent Semantic Indexing for literature-based discovery. Journal of the American Society for Information Science, 49(8), pp. 674-685.
8. Landauer, T., and Dumais, S. (1997) A Solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), pp. 211-240.
9. Lund, K., & Burgess, C. (1996) Producing High-dimensional Semantic Spaces from Lexical Co-occurrence. Behavior Research Methods, Instruments, & Computers, 28(2), pp. 203-208.
10. Robertson, S.E., Walker, S., Sparck Jones, K., Hancock-Beaulieu, M.M., & Gatford, M. (1994) OKAPI at TREC-3. In Proceedings of the 3rd Text Retrieval Conference (TREC-3), pp. 109-126.
11. Song, D. and Bruza, P.D. (2001) Discovering Information Flow Using a High Dimensional Conceptual Space. In Proceedings of the 24th Annual International Conference on Research and Development in Information Retrieval (SIGIR'01), pp. 327-333.
12. Song, D. and Bruza, P.D. (2003) Towards context-sensitive information inference. Journal of the American Society for Information Science and Technology, 54(4), pp. 321-334.
13. Swanson, D.R. and Smalheiser, N.R. (1997) An Interactive system for finding complementary literatures: a stimulus to scientific discovery. Artificial Intelligence, 91, pp. 183-203.
14. Swanson, D.R. (1986) Fish Oil, Raynaud's Syndrome, and Undiscovered Public Knowledge. Perspectives in Biology and Medicine, 30(1), pp. 7-18.
15. Text Retrieval Conference (TREC), National Institute of Standards and Technology (NIST). http://trec.nist.gov/data/
16. Weeber, M., Klein, H., Jong van den Berg, L. and Vos, R. (2001) Using concepts in literature-based discovery: Simulating Swanson's Raynaud-Fish Oil and migraine-magnesium discoveries. Journal of the American Society for Information Science and Technology, 52(7), pp. 548-557.

Study on QoS Driven Web Services Composition

Yan-ping Chen, Zeng-zhi Li, Qin-xue Jin, and Chuang Wang

Department of Computer Science and Technology, Xi'an Jiaotong University, 710049, Xi'an, China
[email protected], [email protected], [email protected]

Abstract. Providing composed Web Services based on the QoS requirements of clients is still an urgent problem to be solved. In this paper, we try to solve this problem. Firstly, we enhance the current WSDL to describe the QoS of services, and then give a way to choose the proper pre-existing services based on their QoS.

1 Introduction

Web Services are a hotspot in SOA research, as the best realization of SOA [1]. There are several problems in the Web Service framework that need to be studied, such as service composition, data integration of services, security of services, etc. However, all these existing problems arise from composition [2]. The task of composition is to combine and link existing Web services to create new Web services. Many researchers have paid attention to service composition. SELF-SERV [3] is a platform that can provide composed services, but SELF-SERV emphasizes only functional composition and ignores the QoS requirements of clients. A way to compose components based on QoS is proposed in [4], but it gives no details about how to describe the QoS of a service. In a distributed environment, different service components may possess the same functions, and references [5-7] provide WSOL (Web Service Offering Language), which can change services at run time by dynamically switching between different service constraints. DAML-S [8] aims to define ontologies for service description that allow software agents to reason about the properties of services. The above approaches solve some issues in composed Web Services from different views, but none of them gives a whole and realizable way to provide composed Web Services based on clients' QoS. To describe the QoS requirements of clients, we propose a new Web Service description language, EWSDL (Enhanced Web Service Description Language), and optimize the current Web Service framework with an evolved provider role to meet the non-functional requirements of clients and realize compositions. The remainder of this paper is organized as follows. In Section 2 we give the enhanced Web Service description language EWSDL and the composition selection algorithm. The paper is concluded in Section 3.


2 Web Service Description Model

2.1 QoS Property of Web Services

Many projects have failed because of ignoring the non-functional properties of software [9]. Definitions of the non-functional properties of software are various, and there is no unified definition [10]. However, there are still some broadly accepted views about the non-functional properties. Generally speaking, the non-functional properties of software should include performance, reusability, maintainability, security, reliability, and availability. On the Internet, we refer to the non-functional properties as QoS properties [13]. If a service provider only takes the functional requirements of clients into account, without considering non-functional requirements when providing the service, the provided service will not be accepted by clients at runtime. How to assure the QoS of services is a pressing problem for service providers. There are currently two different ways to solve this problem. One is a syntax-based approach extending the current WSDL with more elements; the other is to develop a new language based on semantics, such as XL or OWL-S. Both ways intend to add more information about the service when describing it, but a language based on semantics is more complex. So, in this paper we extend WSDL to support more QoS descriptions.

WSDL 2.0 describes three functional properties of a Web Service: what to do, where to do it, and how to do it. However, WSDL cannot describe the non-functional properties of services [11]. So, WSDL cannot be used for automatic service lookup or large-scale reuse and composition. Because of these deficiencies, WSDL should be extended to include more information; a service description is complete only when it includes the non-functional properties. Essentially, how to describe and quantitatively analyze the non-functional properties of software is a complex problem that still needs to be solved, so there is no established way to describe the non-functional properties of a Web Service. Many researchers have paid attention to this problem. V. Tosic proposed the Web Service Offering Language (WSOL) [5-7] to extend WSDL by adding new markup, such as price, time, etc. In essence, WSOL only provides some discrete, predefined, and limited property templates, so it lacks flexibility. Some non-functional properties are given in [12], such as availability, channels, charging style, settlement, payment obligations, etc. The author also indicates that the non-functional properties of a Web Service are actually QoS properties. A model for Web Service discovery with QoS is given in [13]; it gives some definitions of QoS properties, but these properties are not designed from the view of managing composed Web Services. In this paper we explore an enhanced Web Service description language according to the needs of managing the QoS properties of composed Web Services.

2.2 Description Model of Web Service

Definition 1. Description model of Web Service. Let S be the description model of a Web Service, which can be expressed as S = {Func, QoS}, where Func denotes the functional properties of S and QoS denotes the non-functional properties of S. The functions of a Web Service are described by the portTypes of WSDL; in order to describe the QoS properties, a tOperationInst element is added to the tPort element of WSDL.


Considering that the QoS properties of a Web Service should be independent of the service domain and should also be quantifiable, we use a vector of seven dimensions to describe the QoS of a Web Service (both element services and composed services; Web Services that participate in the composition are all called element services, and in this paper there are no essential differences between an element service and a composed service except for the granularity). responseTime represents the response time of an element service; availability represents the probability that the service can be used correctly; concurrency represents the maximum ability to support concurrent transactions; expireTime represents the expiry time of a service, before which the reliability of the service can be ensured; price represents the money the client should pay for this service; fine represents what the service provider (client) should compensate the client (provider) for breaking the contract between them (commonly, the fine has a linear relation with the price); securityLevel represents the security level of a service. A composed service CS also possesses the same parameters, but in a CS these QoS parameters cannot be calculated by a simple mathematical function such as a sum. For example, the responseTime of CS is not the sum of the responseTime of every element service when concurrent processes exist in CS. The following gives a way to calculate the QoS parameters of CS. For ResponseTime, considering that concurrent services exist in a composed Web Service, the response time of the composed service is not the sum of the responseTime of all the element services; it should be the sum along the critical route in the execution process. CPA [15] is an algorithm to find the critical route.

$\mathrm{ResponseTime} = \mathrm{CPA}(\mathrm{Service}_1, \mathrm{Service}_2, \ldots, \mathrm{Service}_m)$

$\mathrm{Availability} = \prod_{i=1}^{m} \mathrm{availability}_i$

$\mathrm{Concurrency} = \min(\mathrm{concurrency}_1, \mathrm{concurrency}_2, \ldots, \mathrm{concurrency}_m)$

$\mathrm{ExpireTime} = \min(\mathrm{expireTime}_1, \mathrm{expireTime}_2, \ldots, \mathrm{expireTime}_m)$

$\mathrm{Price} = \bigcup_{i=1}^{m} \mathrm{price}_i$

$\mathrm{Fine} = A \cdot \mathrm{Price}$, where the client and the provider negotiate to determine A

$\mathrm{SecurityLevel} = \min(\mathrm{securityLevel}_1, \mathrm{securityLevel}_2, \ldots, \mathrm{securityLevel}_m)$
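These aggregation rules translate directly into code. The sketch below is our own rendering under two stated assumptions: the union over element prices is read as a simple sum, and a longest-path computation over the composition's precedence DAG stands in for the CPA algorithm of [15].

```python
from dataclasses import dataclass
import math

@dataclass
class QoS:
    response_time: float   # seconds
    availability: float    # probability in [0, 1]
    concurrency: int       # maximum concurrent transactions
    expire_time: float     # validity horizon
    price: float
    fine: float
    security_level: int

def critical_route_time(services, edges):
    """Longest-path response time over the composition DAG (stand-in for CPA [15]).
    services: name -> QoS; edges: (before, after) precedence pairs."""
    preds = {name: [] for name in services}
    for before, after in edges:
        preds[after].append(before)
    memo = {}
    def finish(name):
        if name not in memo:
            memo[name] = services[name].response_time + max(
                (finish(p) for p in preds[name]), default=0.0)
        return memo[name]
    return max(finish(name) for name in services)

def compose(services, edges, fine_factor_a):
    """Aggregate element-service QoS into the QoS vector of the composed service."""
    elems = list(services.values())
    price = sum(s.price for s in elems)          # assumption: union of prices read as a sum
    return QoS(
        response_time=critical_route_time(services, edges),
        availability=math.prod(s.availability for s in elems),
        concurrency=min(s.concurrency for s in elems),
        expire_time=min(s.expire_time for s in elems),
        price=price,
        fine=fine_factor_a * price,              # A is negotiated between client and provider
        security_level=min(s.security_level for s in elems),
    )
```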

These QoS properties are not independent; for example, there is a correlation between price and fine. This is why composition according to clients' QoS requirements is a difficult problem.

2.3 Composition Selection Algorithm

Definition 2. Service Class. Let A be a set of Web Services which have been registered with the Composer. These services can be divided into several subsets according to

their different functions; each Web Service of A belongs to at least one subset, and the only differences among services belonging to the same subset are their QoS properties. Note that all the Service Classes form an overlay of A, not a partition.

It is hard work for the service composer to provide a composed Web Service to clients according to their QoS requirements, for the following two reasons: first, there is no standard measurement of all these QoS properties; second, the QoS properties are not independent, and one QoS property may favor or feed back on others. Since the multiple dimensions of the QoS properties cannot be merged into one dimension, we propose a Composition Selection Algorithm, which relies on the following assumptions:

1) The clients' SLA is the QoS of the composed Web Services;
2) Composed services and element services use the same glossary;
3) The required QoS of clients can be expressed as a vector $(\mathrm{ResponseTime}^*_c, \mathrm{Availability}^*_c, \mathrm{Concurrency}^*_c, \mathrm{ExpireTime}^*_c, \mathrm{Price}^*_c, \mathrm{Fine}^*_c, \mathrm{SecurityLevel}^*_c)$. The requirements of clients usually fall into a range rather than a single value, so we must define the relation between the composed service and the required QoS (requirement) of clients;
4) The composition logic is predefined; the aim is to simplify the composition and pay more attention to QoS.

There are many ways to solve such multi-objective programming problems [16,17,18]; in our prototype, we used the following method.

Step 1. Construct the programming matrix $A = (a_{ij})_{n \times m}$ and normalize A to $R = (r_{ij})_{n \times m}$ using proper ways;

Step 2. Calculate $R = (r_{ij})_{n \times m}$ to get $R^{\bullet} = (r^{\bullet}_{ij})_{n \times m}$, where

$r^{\bullet}_{ij} = \dfrac{r_{ij}}{\sum_{i=1}^{n} r_{ij}}, \quad i \in N, j \in M;$

Step 3. Get the information entropy $E_j$ of every QoS property,

$E_j = -\dfrac{1}{\ln n} \sum_{i=1}^{n} r^{\bullet}_{ij} \ln r^{\bullet}_{ij}, \quad j \in M;$

Step 4. Get the weight vector $\omega = (\omega_1, \omega_2, \ldots, \omega_m)$, where

$\omega_j = \dfrac{1 - E_j}{\sum_{k=1}^{m} (1 - E_k)};$

Step 5. The synthesized QoS of every scheme is defined as

$z_i(\omega) = \sum_{j=1}^{m} r_{ij} \omega_j, \quad i \in N;$

Step 6. Sort and select a scheme according to $z_i(\omega)$ ($i \in N$).

Acknowledgements

This research was sponsored by the National Science Foundation of China (NSFC) under grant No. 90304006 and by the Research Fund for the Doctoral Program of Higher Education of China (No. 2002698018).

References

[1] IBM dW 2004 special. http://www-128.ibm.com/developerworks/cn/.
[2] YUE Kun, WANG Xiao-ling, ZHOU Ao-ying. Underlying Techniques for Web Services: A Survey. Journal of Software, 2004, 15(3):428-442.
[3] Boualem Benatallah, Marlon Dumas, Quan Z. Sheng, Anne H.H. Ngu. Declarative Composition and Peer-to-Peer Provisioning of Dynamic Web Services. Proceedings of the 18th International Conference on Data Engineering (ICDE'02).
[4] LIAO Yuan, TANG Lei, LI Ming-Shu. A Method of QoS-Aware Service Component Composition. Chinese Journal of Computers, 2005, 28(4):627-634.
[5] Esfandiari, B., Tosic, V. Requirements for Web Service Composition Management. In Proc. of the 11th HP-OVUA Workshop, Paris, France, June 21-23, 2004.
[6] Tosic, V., Patel, K., Pagurek, B. WSOL - Web Service Offerings Language. In Proceedings of the Workshop on Web Services, e-Business, and the Semantic Web (WES), Bussler, C. et al. (eds.), Toronto, Canada, May 2002.
[7] Tosic, V., Ma, W., Pagurek, B., Esfandiari, B. Web Services Offerings Infrastructure (WSOI) - A Management Infrastructure for XML Web Services. In Proc. of NOMS (IEEE/IFIP Network Operations and Management Symposium) 2004, Seoul, South Korea, April 19-23, 2004, IEEE, 2004, pp. 817-830.


[8] Chakraborty D, Joshi A, Yesha Y, Finin T. GSD: A novel group-based services discovery protocol for MANETS. In: Proc. of the 4th IEEE Conf. on Mobile and Wireless Communications Networks, 2002.
[9] Finkelstein A, Dowell J. A Comedy of Errors: the London Ambulance Service Case Study. Proceedings of the 8th International Workshop on Software Specification and Design, 1996, pp. 2-4.
[10] YANG Fang-chun, LONG Xiang-ming. An Overview on Software Non-Functional Properties Research. Journal of Beijing University of Posts and Telecommunications, 2004, 27(3):1-11.
[11] WSDL 2.0. http://www.w3.org/TR/2005/WD-wsdl20-primer-20050510/.
[12] Justin O'Sullivan. What's in a Service? Towards Accurate Description of Non-Functional Service Properties. Distributed and Parallel Databases, 2002, 12:117-133.
[13] Shuping Ran. A Model for Web Services Discovery with QoS. ACM SIGecom Exchanges, 2003, 4(1):1-10.
[14] McCall J.A., Richards P.K., Walters G.F. Factors in Software Quality. RADC: Technical report RADC-TR-77-363, 1977.
[15] M. Pinedo. Scheduling: Theory, Algorithms, and Systems. Prentice Hall, 2001.
[16] CHEN Ting. Programming Analysis. Science Press, Peking, 1987.
[17] WANG Xiao-ping, CAO Li-ming. Genetic Algorithm: Theory, Application and Realization. Xi'an Jiaotong University Press, Xi'an, 2002.
[18] Hwang C.L., Yoon K. Multiple Attribute Decision Making and Applications. New York: Springer-Verlag, 1981.

Optimizing the Data Intensive Mediator-Based Web Services Composition

Yu Zhang1, Xiangmin Zhou1, and Yiyue Gao2

1 University of Queensland, Australia
2 Beijing University of Chemical Technology, China

Abstract. Effectively using heterogeneous, distributed information has attracted much research in recent years. Current web services technologies have been used successfully in some non data intensive distributed prototype systems. However, most of them do not work well in data intensive environments. This paper provides an infrastructure layer for effectively providing spatial information services over the Internet using web services in a data intensive environment. We extensively investigate and analyze the overhead of web services in data intensive environments, and propose some new optimization techniques which can greatly increase the system's efficiency. Our experiments show that these techniques are suitable for data intensive environments. Finally, we present the requirements these techniques place on web services over the Internet.

1 Introduction

Currently, effective communication within and among organizations, departments, and agents in different locations is critical to their success. A government organization requires an integrated enterprise system which can provide the latest, unified spatial information to its own employees and customers as well. Various mediator systems have been used to provide a unified query interface to various data sources; however, they only accept a specific user query and answer such a query by reformulating it into a combination of source queries. Web services technology [1] has been used extensively in distributed environments. It provides seamless integration of information from various data sources. Mediator-based approaches support the automatic composition of web services. According to these methods, the existing web services can be viewed as data sources, from which new web services can be composed automatically. The mediator can optimize a plan by adding sensing operations for further optimization. However, in a data intensive environment, most of these optimization techniques are not effective. In this paper, we address the overhead that web services bring, and the necessity of reducing the cost of the spatial join in mediator-based web service composition in a data intensive environment, which is a very expensive operation in communication processing. We propose a histogram-based cost model for plan optimization.

2 Related Work

Some commercial web services involve complex transactions, and fully automatic web service composition may not be possible. However, it is possible to fully automate the composition of the large class of information-producing web services. In this paper, we build on existing mediator-based approaches to support the automatic composition of web services. The traditional mediator approach is extended with the Inverse Rules query reformulation algorithm, which produces a generalized service composition in response to a user request. Instead of generating a plan limited to the specific user request, such a system produces an integrated web service that can answer a range of requests; in a sense, it produces a universal integration plan. However, in a data intensive environment, this system might not perform as well as shown in previous work, because more communication cost and web services overhead are involved. Therefore, we propose new optimization techniques later in the paper. These techniques are developed from a cost model that assumes the data consists of uniformly distributed rectangles and estimates the I/O and CPU costs of both the filtering and the refinement steps of a window query. In the next section, we will first go through some key components of a mediator-based system.

3 Mediator Service

The mediator service is a key component of the mediator-based web services composition system. It consists of three main parts: (1) Query Reformulation (QR); (2) Query Optimization (QO); (3) Composite Web Services (CWS). Query Reformulation deals with the mapping between the global schema and the local schemas; it accepts the user's selected service in datalog notation. Here, we assume the services system supports all conjunctive queries. A system administrator can set up several global schemas facing users. Along with the global schema, descriptions of the available web services are also supplied, specifying the relationship between the relations in the mediated schema and those in the local schemas. The Local-As-View model is used to describe the relationship. To answer queries using views [2], the Mediator utilizes the Inverse Rules Algorithm to generate a datalog program for the new web services based on the descriptions of the relationships between the global schema and the local schemas. CWS calls the web services according to the plan produced by the QO module. After all the called web services return their results, CWS combines them and passes them to the presentation services.

4 Cost Model Based Query Optimization

We propose a new cost model based query optimization strategy which is suitable for the data intensive environment and can reduce the communication cost and web services overhead involved. This new optimization technique is described as follows.


For a certain query, after it is processed, there are possibly four plans for it. We establish a cost model to measure the spatial join cost of each and compare them in order to choose the best. The cost model assumes that the query window and the data objects are general polygons. There are two situations we need to cope with: Direct Join and MBR Join. The former sends the objects directly, while the latter first sends MBRs and only then the qualifying objects. We estimate the total I/O and CPU costs of these two approaches in the following. The parameters related to side B carry primes to distinguish them from A's. We describe the two models, the Direct Join Cost Model and the MBR Join Cost Model, in detail in Subsections 4.1 and 4.2.

4.1 Direct Join Cost Model

To estimate the cost of retrieving objects indexed by an R-tree within a query window on side A, we need to know the number of tuples located in the query window, F(window). Assuming a good R-tree implies the tuples retrieved will be in the minimum number of R-tree leaf nodes. The total I/O cost of this step is given by:

$\left[\dfrac{F(window)}{m} + \dfrac{F(window)}{m^2} + \cdots + 1\right] \cdot C_{randio} = \left[\dfrac{1}{m-1}\left(1 - \dfrac{1}{m^{h-1}}\right) F(window) + 1\right] \cdot C_{randio}$

where h is the height of the R-tree and $C_{randio}$ is the cost per page of a random read. The CPU cost can be estimated by

$\left[\dfrac{1}{m-1}\left(1 - \dfrac{1}{m^{h-1}}\right) F(window) + 1\right] \cdot m \cdot C_{MBRtest}$

where $C_{MBRtest}$ is the CPU cost per test between two MBRs; it tests all the entries in each R-tree node read from disk for overlap with the query window. After being identified as within the query window, the objects need to be transferred to the other side B. The communication cost that occurs in this step can be estimated, since we know the number of candidates to be transferred. Thus, the communication cost of sending polygons is given as:

$F(window) \cdot C(window) \cdot C_{trans}$

where $C_{trans}$ is the cost of sending one spatial point and C(window) is the average number of vertices per polygon in the query window. After the candidates arrive at side B, we first retrieve the data in B, and then do the join operation. The I/O cost is given, for the same reason, as:

$\left[\dfrac{F'(window)}{m'} + \dfrac{F'(window)}{m'^2} + \cdots + 1\right] \cdot C'_{randio} = \left[\dfrac{1}{m'-1}\left(1 - \dfrac{1}{m'^{h'-1}}\right) F'(window) + 1\right] \cdot C'_{randio}$


The CPU cost depends on the algorithm used for testing overlap. Detecting whether two polygons overlap can be done in O(n log n) using a plane sweep algorithm, where n is the total number of vertices in both polygons. We estimate the cost assuming the plane sweep algorithm is used. We assume that the average number of vertices within a query window is C'(window), and $C_{polytest}$ is a proportionality constant. The spatial join CPU cost is estimated as follows:

$F(window) \cdot (C(window) + C'(window)) \log(C(window) + C'(window)) \cdot C_{polytest}$
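Put together, the direct-join estimate is plain arithmetic over the parameters above. The sketch below is our own summary, with purely illustrative parameter values, mirroring the I/O, communication and CPU terms of this subsection.

```python
import math

def rtree_io(f, m, h, c_randio):
    """I/O cost of retrieving f qualifying tuples from an R-tree with fanout m, height h."""
    return ((1.0 / (m - 1)) * (1 - 1.0 / m ** (h - 1)) * f + 1) * c_randio

def direct_join_cost(fA, fB, cA, cB, m, h, m2, h2,
                     c_randio, c_mbrtest, c_trans, c_polytest):
    """Total estimated cost of shipping side-A polygons to side B and joining there.
    fA, fB: candidate counts; cA, cB: average vertices per polygon on each side."""
    io_a = rtree_io(fA, m, h, c_randio)
    cpu_a = ((1.0 / (m - 1)) * (1 - 1.0 / m ** (h - 1)) * fA + 1) * m * c_mbrtest
    comm = fA * cA * c_trans                             # ship every candidate polygon
    io_b = rtree_io(fB, m2, h2, c_randio)
    cpu_join = fA * (cA + cB) * math.log(cA + cB) * c_polytest
    return io_a + cpu_a + comm + io_b + cpu_join

print(direct_join_cost(fA=500, fB=400, cA=60, cB=80, m=50, h=3, m2=50, h2=3,
                       c_randio=1.0, c_mbrtest=0.001, c_trans=0.05, c_polytest=0.0005))
```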

4.2 MBR Join Cost Model

In the MBR join cost model, the total I/O cost and CPU cost on side A can be estimated according to the same equations as in the direct join. Different from the direct join cost model, in this process we need to estimate the communication cost of sending MBRs, not that of sending polygons. The communication cost for sending the MBRs is given as follows:

$F(window) \cdot 4 \cdot C_{trans}$

where $C_{trans}$ is the cost of sending one spatial point and C(window) is the average number of vertices in the query window. Different from the direct join, for the MBR join the candidates that arrive at side B are MBRs rather than the full objects. The data in B are retrieved and used for the join operation. The I/O cost is estimated by an equation similar to that for the direct join:

$\left[\dfrac{F'(window)}{m'} + \dfrac{F'(window)}{m'^2} + \cdots + 1\right] \cdot C'_{randio}$

The CPU cost can be estimated in the same way as for the direct join:

$F'(window) \cdot 8 \log 8 \cdot C_{MBRtest}$

After the spatial join of the objects' MBRs, the number of candidates produced by this step is needed; it can be obtained by using the histogram and an overlapping probability model. The first cost arising from this is the I/O cost of retrieving the candidates once A knows which candidates need to be transferred to B. Here, the cost of sending the object ids can be neglected, since it is too small compared to the spatial operations. The I/O cost for this step can be estimated as:

$\left[\dfrac{F_p}{m} + \dfrac{F_p}{m^2} + \cdots + 1\right] \cdot C_{randio} + F_p \cdot C(window) \cdot C_{vertio}$

where $F_p$ is the number of candidates we estimated before, and $C_{vertio}$ is the per-vertex I/O cost of reading a polygon. The communication cost of sending those candidates back to side B is given below:

$F_p \cdot C(window) \cdot C_{trans}$


The CPU cost of the polygon spatial join involving dataset A is given as:

$F_p \cdot (C'(window) + C(window)) \log(C'(window) + C(window)) \cdot C_{polytest}$

So far, we have presented all the components of the cost model used to estimate the spatial join. Several parameters are required by the cost model: $C_{randio}$, $C_{trans}$, $C_{polytest}$, $C_{MBRtest}$, $C_{vertio}$, m, h, C(window), F(window), $O_{1x}$, $O_{1y}$, $O_{2x}$ and $O_{2y}$. Some of these parameters are provided by the system implementer at system development or installation time; the others can be estimated by the histogram [3].
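For comparison, the MBR-join variant and the resulting plan choice can be sketched in the same way; this is again our own reading, and F_p (the candidate count surviving the MBR join) would in practice come from the histogram-based estimate of [3].

```python
import math

def mbr_join_cost(fA, fp, fB, cA, cB, m, h, m2, h2,
                  c_randio, c_mbrtest, c_trans, c_polytest, c_vertio):
    """Estimated cost of shipping MBRs first and only then the surviving candidates."""
    filter_a = (1.0 / (m - 1)) * (1 - 1.0 / m ** (h - 1)) * fA + 1
    io_a = filter_a * c_randio
    cpu_a = filter_a * m * c_mbrtest
    comm_mbr = fA * 4 * c_trans                          # four points per MBR
    io_b = ((1.0 / (m2 - 1)) * (1 - 1.0 / m2 ** (h2 - 1)) * fB + 1) * c_randio
    cpu_mbr_join = fB * 8 * math.log(8) * c_mbrtest
    io_fetch = (fp / m + fp / m ** 2 + 1) * c_randio + fp * cA * c_vertio
    comm_poly = fp * cA * c_trans                        # only surviving candidates travel
    cpu_poly_join = fp * (cA + cB) * math.log(cA + cB) * c_polytest
    return (io_a + cpu_a + comm_mbr + io_b + cpu_mbr_join
            + io_fetch + comm_poly + cpu_poly_join)

def choose_plan(direct_cost, mbr_cost):
    """Pick the cheaper of the two estimated join strategies."""
    return "direct join" if direct_cost <= mbr_cost else "MBR join"
```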

5 Experimental Evaluation

In this part, we study the performance of our proposed web service optimization strategy through extensive experiments. In our experiments, two datasets are used for the performance evaluation. One is the Queensland Regional Ecosystem layer from the Environmental Protection Agency (EPA) in Queensland, Australia. Experiments were conducted on two Pentium 2.2 GHz machines with 256 MB of RAM running Microsoft Windows XP. Please refer to the full technical report for details.

5.1 Web Services Overhead Effects

In this part, we experimentally evaluate the performance of using web services based on three different scenarios: local spatial join, traditional distributed spatial join, and using web services in a distributed join. For the local spatial join, the data do not need to be moved around, so this process is costless. For the other two scenarios, the candidates are transferred from one PC to the other, which involves a large communication cost. The difference between these two is the web services technology overhead incurred in the web services based distributed join. We input four different groups of query windows, with areas 0.001, 0.05, 0.01 and 0.1, to our system. This process was conducted 100 times, and the means over the 100 queries' results were produced. Figure 1 shows the comparison of the query results from one group of windows. On average, using web services increases the processing time by 13.099%.

5.2 Efficiency of Query Optimization

In this part, we demonstrate the efficiency of the query optimization strategy using the cost model and the self-tuned histogram. In the experiment, we input a number of queries, recording which plan the system chooses and the execution time. For each query, the four plans are run separately and the execution time of each is recorded. The information recorded is shown in Figure 2: the Execution Plan column is the plan chosen by the system, and the plan columns give the exact execution times (in seconds). Figure 3 shows the accuracy of query optimization over 500 queries. The error rate is calculated as the number of wrong decisions divided by the number of queries. As expected, the system becomes more and more accurate, as the error rate keeps going down. Thus, we can conclude that the query optimization is able to increase the system's performance.

Figure 1. Sample of Query Execution Time (query results comparison: respond time (s) vs. query window area)

Query Window Area    Local      Traditional Distributed    Web Services
0.000001             2.1123     3.4512                     3.9101
0.0001               13.5445    19.042                     21.4824
0.01                 4.4389     5.2501                     5.9732
0.1                  17.4502    22.2211                    25.2085

Fig. 1. Execution Time of Plans

Query    Execution Plan    Plan 1     Plan 2
1        3                 5.3436     6.3436
2        3                 19.5467    20.0435
3        4                 8.4546     8.0640
4        3                 10.6489    11.3345
5        3                 22.7897    23.0957
...      ...               ...        ...

Figure 3. Error Rates (error rate vs. number of queries)

6 Conclusion and Future Work

In this paper, we evaluated the current web services technology used to support dynamically composed services over the Internet, showing the requirements and performance of web services in data intensive situations. For the purpose of reducing the cost of the spatial join, we proposed a new cost model based query optimization strategy. The proposed strategy can effectively reduce the communication cost and web services overhead involved in the data intensive environment. We proposed two cost models, the Direct Join cost model and the MBR Join cost model, for estimating the cost of the direct join and the MBR join respectively. We performed an experimental study over two types of datasets. The experimental results show that the proposed optimization strategy can greatly improve the query efficiency of the web service system.

References

1. A. Y. Levy, A. Rajaraman, and J. J. Ordille, "Query-answering algorithms for information agents," in Proceedings of AAAI-96, 1996.
2. S. Thakkar, J. L. Ambite, and C. A. Knoblock, "A data integration approach to automatically composing and optimizing web services," University of Southern California / Information Sciences Institute, technical report, 2004.
3. N. Bruno, S. Chaudhuri, and L. Gravano, "STHoles: A multidimensional workload-aware histogram," in Proceedings of the 2001 ACM International Conference on Management of Data (SIGMOD'01), 2001.

Role of Triple Space Computing in Semantic Web Services

Brahmananda Sapkota, Edward Kilgarriff, and Christoph Bussler

DERI, National University of Ireland, Galway, Ireland
{Brahmananda.Sapkota, Edward.Kilgarriff, Chris.Bussler}@deri.org

Abstract. This paper presents Triple Space Computing and describes the role it can play in bringing Semantic Web Services to the next level of maturity. In particular, the shortcomings of current Semantic Web services are identified, and the role Triple Space Computing can play in resolving these shortcomings is described.

1 Introduction

In recent years the Internet has become a popular business hub. Lately, efforts have been made to enable machine-level communication through the use of Semantic Web technology [1]. Initiatives such as [2], [3], [4] aim at enabling the seamless integration of business processes through the combination of the Semantic Web and Web services, thus supporting the automation of service discovery, mediation and invocation [2]. The synchronous communication technology used in Semantic Web services (SWS) requires both parties to be 'live' at the time of their interaction, reducing scalability [5]. In a distributed scenario, such communication can be costly because of network congestion, low processing power, third party invocation, etc. Timing in communication is also important, as most business transactions are time bounded in the business world. Time difference problems can be resolved through a priori agreement, but in automated business processes reaching such an agreement beforehand is impractical. In the past, tuple-based computing was largely investigated for introducing communication asynchrony between processes in parallel computing. Approaches such as Linda [6] and TSpace [7] were intended to facilitate communication between applications that require distribution of data. Messaging techniques like publish-subscribe, email, etc., were introduced to support communication asynchrony [8]. These techniques, however, do not provide semantic support for communication. In this paper, we identify some of the roles of a new communication approach called Triple Space Computing [9] in the context of Semantic Web Services, focusing on the communication aspects. Other aspects of Semantic Web Services, such as mediation and discovery, are out of the scope of this paper.

This material is based upon work supported by EU funding under the project DIP (FP6 - 507483) and by the Science Foundation Ireland under Grant No. SFI/02/CE1/I131. This paper reflects the authors' views and the Community is not liable for any use that may be made of the information contained therein.



This paper is structured as follows. In Section 2, similar approaches are discussed. Section 3 describes SWS and discusses its missing features and other requirements. The Triple Space Computing (TSC) architecture is presented in Section 4. Section 5 discusses the role of TSC in the SWS paradigm. The paper is concluded in Section 6, presenting intended future work.

2 Similar Works

The shared space communication paradigm has already attracted the attention of both industry and academia; some relevant approaches are described below. Message Queuing technology is designed to support communication asynchrony in distributed environments [10]. In this scheme, the sending application puts (enqueues) messages onto a queue, whereas the receiving application extracts (dequeues) them from that queue [11]. The flexibility and scalability of this technique is limited by queue size and queue location [12]. The Publish-Subscribe interaction mechanism aims at supporting communication asynchrony through the publication of an event and a subscription to that event [8]. The events registered by the publishers are propagated asynchronously to the subscribers of those events. Though it has been largely recognized as a promising communication infrastructure in distributed environments [13], the importance of semantics in distributed communication is largely ignored. The Tuple-Based Communication concept, introduced for Linda by Gelernter [6], allows inter-process communication by writing and reading information (i.e. tuples) to and from a shared space called a Tuple Space. The tuple space is persistent, and the communication is blocking and transient; thus the reading process waits until matching tuples are available [14]. The tuple matching is content based, nesting is not supported by the data model used, and the physical representation of a tuple space is difficult. Tuples having the same number and order of fields but with different semantics cannot be matched appropriately [15]; thus they do not scale [16]. Semantic Tuple Spaces are introduced in sTuple [16], which envisions cross-fertilizing the Semantic Web and Tuple Space technologies. The tuples in sTuple are similar to those of JavaSpace [17], where one of the fields must contain a data item semantically defined using DAML+OIL [18]. Semantic Tuple Spaces enable semantic matching on top of object based polymorphic matching. The underlying data model of sTuple consists of both Semantic and non-Semantic technologies and thus inherits all the problems of tuple-based communication.

3 Semantic Web Services

Semantic Web services aim to realize seamless integration of applications, semantically enhanced information processing and distributed computing on the Web through the combination of the Semantic Web and Web services. Ontologies are vital for semantic interoperability and advanced information processing, while Web services enable computation over the Web.


Web service technology is built around the Simple Object Access Protocol (SOAP), the Web Service Description Language (WSDL) and Universal Description, Discovery and Integration (UDDI). SOAP facilitates message exchange between Web services [19]. WSDL [20] is used for describing the access interfaces of a Web service. UDDI registries [21] are used to advertise service descriptions. These technologies work at a syntactic level and thus do not support dynamic inspection and integration of Web services. In this context, Semantic Web services (SWS) exploit Ontologies to automate Web service discovery, composition and invocation, thus enabling seamless interoperation between them with minimum human intervention. The use of SOAP as the communication protocol is one of the problems of (Semantic) Web services: supporting only synchronous communication, it requires both the service owner and the invoker to be available at the same time. The current (Semantic) Web services are moving away from the Web paradigm, which allows independent information publishing and retrieval [9]. Moreover, information cannot be reused and read multiple times. In addition to the existing ones, the basic requirements of (Semantic) Web service technologies are:

Persistent Publication - the reuse of information is possible only if the information is published persistently.
Communication Asynchrony - the information should be made accessible asynchronously in order to allow SWS to work at Web scale.
Reuse and Provenance - in order to avoid communication conflicts, logging is needed. For example, in the case of Web service discovery and composition, if a similar request was processed before, the previous result can be reused.

4 Triple Space Communication

The fundamental concept of TSC (Triple Space Computing) is to provide a semantic communication infrastructure such that machine-to-machine interaction can be achieved at Web scale. It was first envisioned in [9] and, ideally, it extends traditional tuple-based computing in the direction of RDF. It aims at providing a persistent shared semantic space, called the triple space, where applications write and read information to communicate with each other asynchronously. By providing communication asynchrony, TSC gives applications autonomy in terms of time, reference and location [22], [9]. The visible advantages of using TSC in the Web service paradigm are threefold. Firstly, the Web service paradigm is brought onto the Web paradigm, i.e., the persistent publication of information. Secondly, the communication is asynchronous [9]. Thirdly, TSC provides middleware support, hiding internal application complexities.

4.1 TSC Architecture and Interaction

Building on proven technologies and their combination, TSC has a simple but powerful architecture (Figure 1).


Fig. 1. Triple Space Computing Architecture

It consists of loosely coupled components and is based on SOA design principles. It uses RDF as the underlying data model. The architecture presented here is an extension of the minimal triple space architecture defined in [22]. Triple Space Interfaces are defined to enable applications to interact with TSC. They allow applications to create and destroy triple spaces, to write to and read from a triple space, and to retrieve the history of communication. The read, write, and history operations are relative to a triple space, i.e., these operations can be performed on a triple space that already exists. Applications can invoke these operations through HTTP. The semantics of these operations are defined as follows:

create (TSID) returns SUCCESS | TSID ALREADY EXISTS
destroy (TSID) returns SUCCESS | TSID DOES NOT EXIST
TSID.write (Triple*) returns SUCCESS | TSID DOES NOT EXIST
TSID.read (Triples) returns Triple* | TSID DOES NOT EXIST
TSID.history (Triples) returns Triple* | TSID DOES NOT EXIST

where TSID and Triple* represent the triple space ID and zero or more RDF Triples, respectively. The triple space ID is unique with respect to the URL [23] where the TSC infrastructure is hosted. The parameters to the read and history operations are queries expressed as RDF triples. The parameter to the write operation is the Triple(s) to be stored in the triple space identified by TSID. The Triple Space Manager receives a request from an application and informs one of the other components to carry out the operation. If the operation invoked is create, then the Triple Space Factory is informed to create a new triple space. The Triple Space Factory creates a new triple space with id TSID, if it has not already been created. The Triple Processor component implements the functionality of the write, read and history operations. The storage space can be anything from a simple file system to an advanced RDF store, but YARS (Yet Another RDF Store [24]) is used in the current implementation. The History Archiver component records every flow of triples to and from the triple space. The information in a Web-like open environment should be kept securely. At present only primitive security functionalities, such as access policies and information provenance, have been implemented. We acknowledge the need for a more secure strategy; this is left for future work.
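The interface semantics above map naturally onto a thin client wrapper. The sketch below is purely illustrative: an in-memory stand-in rather than the HTTP/YARS-backed implementation, keeping the create/destroy/write/read/history signatures.

```python
class TripleSpace:
    """In-memory stand-in for a triple space that would be reachable over HTTP."""

    def __init__(self):
        self.spaces = {}      # TSID -> list of (subject, predicate, object) triples
        self.log = {}         # TSID -> history of all reads and writes

    def create(self, tsid):
        if tsid in self.spaces:
            return "TSID ALREADY EXISTS"
        self.spaces[tsid], self.log[tsid] = [], []
        return "SUCCESS"

    def destroy(self, tsid):
        if tsid not in self.spaces:
            return "TSID DOES NOT EXIST"
        del self.spaces[tsid]
        del self.log[tsid]
        return "SUCCESS"

    def write(self, tsid, *triples):
        if tsid not in self.spaces:
            return "TSID DOES NOT EXIST"
        self.spaces[tsid].extend(triples)        # publication is persistent, non-destructive
        self.log[tsid].append(("write", triples))
        return "SUCCESS"

    def read(self, tsid, pattern):
        """Return triples matching an (s, p, o) pattern; None acts as a wildcard."""
        if tsid not in self.spaces:
            return "TSID DOES NOT EXIST"
        hits = [t for t in self.spaces[tsid]
                if all(q is None or q == v for q, v in zip(pattern, t))]
        self.log[tsid].append(("read", pattern))
        return hits

    def history(self, tsid):
        if tsid not in self.log:
            return "TSID DOES NOT EXIST"
        return list(self.log[tsid])

ts = TripleSpace()
ts.create("space1")
ts.write("space1", ("svc:Hotel", "rdf:type", "wsmo:WebService"))
print(ts.read("space1", ("svc:Hotel", None, None)))
```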

5 Role of TSC in SWS

TSC brings machine-to-machine (Semantic) Web service communication to Web scale. TSC has many roles to play in (Semantic) Web service technologies, the most notable being:

a. Communication Asynchrony - Current (Semantic) Web service technologies lack scalability because the communication is synchronous. TSC provides communication asynchrony, thus allowing SWS applications to communicate without knowing each other explicitly.
b. Persistent Publication - Current SWS technologies are difficult to manage and do not scale, as they lack persistent publication, making service discovery a tedious, tiresome and time consuming task. The triple space provides persistent storage where Web service descriptions, Ontologies and access interfaces can be published persistently. The published information can be uniquely identified by its URIs. In addition, the published information can be read in a similar way to information on the Web, significantly simplifying communication set up between applications and supporting seamless integration and computation at Web scale.
c. Scalability - By enabling asynchronous communication and using semantically enriched resources, much of the process can be automated, significantly improving scalability.
d. Complexity Management - The communication complexity is managed efficiently in TSC through persistent publication and asynchronous communication. If the success message is not returned, the writing or reading application will know that the request needs to be executed again. Thus, information will never be lost in case of system failures.
e. Communication Archiving - TSC allows the storing of a history of all flows of messages and requests to and from the triple space. This enables monitoring of communicating applications, helping in the reuse of already available Web services.
f. Provenance - TSC provides a mechanism to ensure that the information arrived from the stated source at the stated time, helping in reducing denial of service and repudiation.
g. Middleware Support - Due to the above mentioned support, TSC also plays the role of middleware in SWS.

6

Conclusions and Future Work

Traditional Web service communication is synchronous and destructive: the information exchanged between applications cannot be reused or read multiple times. TSC provides a semantic communication infrastructure where information can be published persistently. All communication is asynchronous, the information exchanged is described using RDF triples, and each triple is identified through a URI. In this paper, we presented the roles TSC can play in Semantic Web services. By providing a persistent shared space, TSC brings Semantic Web services


to Web scale. It plays the role of middleware that hides internal complexity from applications, allowing them to communicate without changing their internal storage structures. Though it provides support for resolving communication disputes, much more security support is needed; advanced support for security will be incorporated in a future version of the architecture.

References 1. Berners-Lee, T., et. al: The Semantic Web. Scientific American (2001) 2. Roman, D., Lausen, H., U.Keller, eds.: Web Service Modelling Ontoloty (WSMO). WSMO Deliverable, version 1.2 (2005) 3. Martin, D., et. al: Bringing Semantics to Web Services: The OWL-S Approach. In Proceedings of the First International Workshop on Semantic Web Services and Web Process Composition (2004) 4. Patil, A., Oundhakar, S., Verma, K.: METEOR-S: Web Service Annotation Framework. In Proceedings of World Wide Web Conference (2004) 5. Engleberg, I., Wynn, D.: Working in groups: Communication Principles and Strategies. Houghton Mifflin (2003) 6. Gelernter, D.: Mirror Worlds. Oxford University Press (1991) 7. TSpace, http://www.icc3.com/ec/tspace/. 8. Eugster, P., et. al: The Many Faces of Publish/Subscribe. ACM Computing Surveys (2003) 9. Fensel, D.: Triple Space Computing. Technical Report, Digital Enterprise Research Institute (DERI) (2004) 10. Blakeley, B., et. al: Messaging and Queuing using MQI. McGraw-Hill (1995) 11. Gawlik, D.: Message Queuing for Business Integration. eAI Journal (2002) 12. Leymann, F.: A Practioners Approach to Database Federation. In Proceedings of 4th Workshop on Federated Databases (1999) 13. Huang, Y., Garcia-Molina, H.: Publish/Subscribe in a Mobile Environment. In Proceedings of MobiDE (2001) 14. Matsouka, S.: Using Tuple Space Communication in Distributed Object-Oriented Languages. In Proceedings of OOPSLA ’88 (1988) 15. Johanson, B., Fox, A.: Extending Tuplespaces for Coordination in Interactive Workspaces. Journal of Systems and Software 69 (2004) 243–266 16. Khushraj, D., et. al: sTuple: Semantic Tuple Spaces. In Proceeding of MobiQuitous ’04 (2004) 17. JavaSpace, http://java.sun.com/products/jini/specs. 18. DAML+OIL, http://www.daml.org/2001/03/daml+oil-index.html. 19. Gudgin, M., et. al, eds.: SOAP Version 1.2. W3C Recommendation (2003) 20. Chinnici, R., et. al, eds.: WSDL Working Draft. W3C (2004) 21. Clement, L., et. al, eds.: UDDI Spec. Technical Committee Draft. OASIS (2004) 22. Bussler, C.: A Minimal Triple Space Computing Architecture. In Proceedings of WIW 2005 (2005) 23. URL, http://www.cse.ohio-state.edu/cs/Services/rfc/rfc-text/rfc1738.txt. 24. Yet Aother RDF Store, http://sw.deri.org/2004/06/yars/yars.html.

Modified ID-Based Threshold Decryption and Its Application to Mediated ID-Based Encryption Hak Soo Ju1 , Dae Youb Kim2 , Dong Hoon Lee2 , Haeryong Park1, and Kilsoo Chun1 1

Korea Information Security Agency(KISA), Korea {hsju, hrpark, kschun}@kisa.or.kr 2 Center for Information Security Technologies {david kdy, donghlee}@korea.ac.kr

Abstract. Chai, Cao and Lu first proposed an ID-based threshold decryption scheme without random oracles. Their approach is based on the Bilinear Diffie-Hellman Inversion assumption, and they prove that it is selective-ID chosen-plaintext secure without random oracles. However, to ensure correctness of their ID-based threshold decryption scheme, it is necessary to guarantee that the shared decryption is performed correctly through some public verification function. We modify Chai et al.'s scheme to ensure that all decryption shares are consistent. We also present the first mediated ID-based encryption scheme based on the Bilinear Diffie-Hellman Inversion assumption without random oracles. In addition, we extend it into a mediated hierarchical ID-based encryption scheme.

1

Introduction

Chai, Cao and Lu [7] recently defined a security model for ID-based threshold decryption without random oracles and gave a construction. Their approach is based on an efficient selective ID-based encryption scheme (sIBE) [2] that was proved to be selective-ID secure against chosen plaintext attack without random oracles. However, to ensure correctness of their ID-based threshold decryption scheme, it is necessary to guarantee that the shared decryption is performed correctly through some public verification functions, without revealing the encrypted message, the private key, or its shares. For example, consider a secure e-voting scheme [1], where the ballots are required to be anonymous. After the encrypted ballots are made anonymous through the use of an anonymous channel, they are decrypted by the decryption authorities, and each decryption share requires a proof of correct decryption. We modify the ID-based threshold decryption scheme "IdThD" of Chai et al. [7] to ensure that all decryption shares are correct. We also present the first mediated ID-based encryption scheme without random oracles as an application of IdThD. Most ID-based mediated schemes [6,9] are based on the IBE [5], and their security is proved under the random oracle model. Motivated by sIBE [2],


we construct the first mediated ID-based encryption scheme without random oracles. Finally we extend our scheme into a hierarchical scheme without random oracles.

2

Security Improvements on Chai et al.’s IdThD Scheme

In the ID-based threshold decryption scheme of Chai et al., there exists a trusted authority PKG (Private Key Generator), which is in charge of issuing the private key or secret key shares of a requested identity. There also exists a group of decryption servers Γ_i (i = 1, ..., n) under a single public identity ID. The IdThD scheme consists of the algorithms Setup, KeyGen, Encrypt, Decrypt and Combine. To ensure correctness, it is necessary to guarantee that the shared decryption is performed correctly through some public verification functions, without revealing the encrypted message, the private key, or its shares. Because each decryption share requires a proof of correct decryption, we use the proof system for the equality of two discrete logarithms used in [8] for a signature scheme. We now modify the Decrypt and Combine algorithms of Chai et al.'s scheme into three algorithms, DShareGen, DShareVerify and DShareComb, as follows.
• Setup. Given a security parameter k, the PKG chooses groups G and G_1 of prime order q, a generator g ∈ G, and a bilinear map ê: G × G → G_1. It picks x, y, z uniformly at random from Z_q^* and computes X = g^x, Y = g^y, and Z = g^z. The system's public parameters are params = <G, G_1, ê, q, g, X, Y, Z>, while the master key (x, y, z) is kept secret by the PKG.
• KeyGen. Given an identity ID, the number of decryption servers n and a threshold parameter t, the PKG picks a random polynomial over Z_q^*: F(u) = z + Σ_{i=1}^{t-1} a_i u^i, where a_i ∈ Z_q^*. It picks r_i ∈ Z_q^*, computes K_i = g^{F(i)/(ID+x+r_i y)} and vk_i = ê(g, g)^{F(i)} for i = 1, ..., n, and outputs the shared private keys sk_i = (r_i, K_i) and verification keys vk_i.
• Encrypt. To encrypt a message M ∈ G_1 under an identity ID ∈ Z_q^*, pick a random s ∈ Z_q^* and output the ciphertext C = (A, B, C, D) = (g^{s·ID}X^s, Y^s, ê(g, Z)^s · M, H(A, B, C)^s), where H: G × G × G → G.
• DShareGen. To compute a decryption share δ_{i,C} of the ciphertext C = (A, B, C, D) using its private key sk_i = (r_i, K_i), each decryption server Γ_i does the following:
1. Computes H = H(A, B, C) and checks whether ê(g^{ID}X, D) = ê(A, H) and ê(Y, D) = ê(B, H).
2. If C has passed the above test, computes k_i = ê(AB^{r_i}, K_i), t_{i1} = h^{λ_i}, t_{i2} = h̃^{λ_i}, c_i = H(h||h̃||k_i||vk_i||t_{i1}||t_{i2}), and L_i = λ_i − c_i f_i for a random λ_i ∈ Z_q, and outputs δ_{i,C} = (i, k_i, t_{i1}, t_{i2}, vk_i, c_i, L_i), where h = ê(g, g) and h̃ = ê(g, g^s). Otherwise, it returns δ_{i,C} = (i, "Invalid Ciphertext").
• DShareVerify. Given a ciphertext C = (A, B, C, D), a set of verification keys {vk_1, ..., vk_n}, and a decryption share δ_{i,C}, the dealer computes H = H(A, B, C) and checks whether ê(g^{ID}X, D) = ê(A, H) and ê(Y, D) = ê(B, H). If C has passed the above test, then the dealer does the following:


1. If δ_{i,C} is of the form (i, "Invalid Ciphertext"), then return "Invalid Share".
2. Else parse δ_{i,C} as (i, k_i, t_{i1}, t_{i2}, vk_i, c_i, L_i) and compute t'_{i1} = h^{L_i}(vk_i)^{c_i} and t'_{i2} = h̃^{L_i}k_i^{c_i}.
(a) Check whether c_i = H(h||h̃||k_i||vk_i||t'_{i1}||t'_{i2}).
(b) If the test above holds, return "Valid Share"; else output "Invalid Share".
Otherwise (if C did not pass the ciphertext test), the dealer does the following:
1. If δ_{i,C} is of the form (i, "Invalid Ciphertext"), return "Valid Share"; else output "Invalid Share".
• DShareComb. Given a ciphertext C and a set of valid decryption shares {δ_{j,C}}_{j∈Φ} with |Φ| ≥ t, the dealer computes H = H(A, B, C) and checks whether ê(g^{ID}X, D) = ê(A, H) and ê(Y, D) = ê(B, H). If C has not passed the above test, the dealer returns "Invalid Ciphertext". Otherwise, he computes C / ∏_{j∈Φ} k_j^{c^Φ_{0j}} = M.
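As a sketch of why share combination recovers M — assuming, as in standard threshold constructions built on Shamir sharing, that c^Φ_{0j} denotes the Lagrange coefficient of server j for the set Φ evaluated at 0 — note that A·B^{r_j} = g^{s(ID+x+r_j y)}, so each valid share satisfies

\[
k_j = \hat{e}(AB^{r_j}, K_j) = \hat{e}\big(g^{s(ID+x+r_j y)},\ g^{F(j)/(ID+x+r_j y)}\big) = \hat{e}(g,g)^{sF(j)},
\]
\[
\prod_{j\in\Phi} k_j^{\,c^{\Phi}_{0j}} = \hat{e}(g,g)^{\,s\sum_{j\in\Phi} c^{\Phi}_{0j}F(j)} = \hat{e}(g,g)^{sF(0)} = \hat{e}(g,Z)^{s},
\]

which is exactly the factor blinding M in the ciphertext component C = ê(g, Z)^s · M, so the division above yields M.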

3

Our Mediated ID-Based Encryption Scheme

An application of the ID-based threshold decryption scheme is as a building block to construct a mediated ID-based encryption scheme. Libert and Quisquater showed that the SEM architecture of mRSA [4] can be applied to the Boneh-Franklin identity-based encryption and GDH signature schemes [9]. However, they only provided a proof that their scheme is secure against chosen ciphertext attack in a weaker sense; here "weak" means that attackers are not allowed to obtain the user part of the private key. Baek and Zheng proposed a mediated ID-based encryption scheme which is secure against chosen ciphertext attack in a strong sense, that is, secure against chosen ciphertext attacks conducted by a stronger attacker who obtains the user part of the private key. We describe our mediated ID-based encryption scheme without random oracles, which is based on Chai et al.'s IdThD scheme. Our scheme consists of the following algorithms.
1. Setup. Given a security parameter k, the PKG chooses groups G and G_1 of prime order q, a generator g ∈ G, and a bilinear map ê: G × G → G_1. Then it picks x, y, z uniformly at random from Z_q^* and computes X = g^x, Y = g^y, and Z = g^z. The system's public parameters are params = <G, G_1, ê, q, g, X, Y, Z>, while the master key (x, y, z) is kept secret by the PKG.
2. KeyGen. Given a user with identity ID, the PKG randomly picks z_user ∈ Z_q^* and computes z_sem = z − z_user. It computes d_{ID,sem} = g^{z_sem/(ID+x+r_1 y)} and d_{ID,user} = g^{z_user/(ID+x+r_2 y)}, where r_1, r_2 ∈_R Z_q^*. Then it gives d_{ID,sem} to the SEM and d_{ID,user} to the user.
3. Encrypt. Given a plaintext M ∈ G_1 and a user's identity ID, a user creates a ciphertext C = (A, B, C) such that A = g^{s·ID}X^s, B = Y^s, C = ê(g, Z)^s · M, where s ∈ Z_q^* is random.


4. Decrypt. When receiving C = <A, B, C>, a user forwards it to the SEM. They perform the following steps in parallel:
(a) SEM: checks whether the user's identity ID has been revoked. If it has, return "ID Revoked". Otherwise, compute k_sem = ê(AB^{r_1}, d_{ID,sem}) and send k_sem to the user.
(b) USER: computes k_user = ê(AB^{r_2}, d_{ID,user}). On receiving k_sem from the SEM, he computes k = k_sem · k_user and returns M = C/k.
Our scheme is secure against chosen plaintext attack but not against chosen ciphertext attack, because it does not use random oracles. As Baek and Zheng point out in [6], our scheme can also be modified into a scheme secure against chosen ciphertext attack by using a mechanism for checking the validity of a ciphertext.
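For completeness, a short sketch of why the SEM's token and the user's token jointly remove the blinding factor (this follows directly from the key and ciphertext shapes above, since AB^{r_i} = g^{s(ID+x+r_i y)}):

\[
k_{sem}\cdot k_{user} = \hat{e}(g,g)^{s\,z_{sem}} \cdot \hat{e}(g,g)^{s\,z_{user}} = \hat{e}(g,g)^{sz} = \hat{e}(g,Z)^{s},
\]

which is exactly the factor blinding M in C, so C/k = M. The user alone holds only the z_user contribution, which is why a revoked user can no longer decrypt once the SEM stops supplying k_sem.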

4

Our Mediated Hierarchical ID-Based Encryption Scheme Without Random Oracles

This section describes a novel mediated hierarchical ID-based encryption scheme without random oracles, denoted mHIBE. This mediated HIBE is obtained from the HIBE with constant-size ciphertexts of Boneh et al. [3]. We assume that there exist two disjoint tree-shaped hierarchies of users and SEMs, respectively. Moreover, the root node of the two hierarchies is a root PKG, and a set of users is associated with each SEM. We assume that identities at depth l are vectors of elements in Z_q^l and write ID_l = (I_1, ..., I_l) ∈ Z_q^l; the k-th component corresponds to the identity at level k. We also assume that the messages to be encrypted are elements of G_1. The major steps of our scheme work as follows.
1. Root Setup. Given a security parameter k, the PKG chooses groups G, G_1 of prime order q and a bilinear map ê: G × G → G_1. It selects a random generator g from G^* and α_user, α_sem from Z_q^*. Next, it picks random elements g_2, g_3, h_1, h_2, ..., h_l from G. Then it computes α = α_user + α_sem and sets g_1 = g^α and g_4 = g_2^α. The public parameters and the master key are params = (g, g_1, g_2, g_3, h_1, ..., h_l), mkey = g_4 = g_2^α.
2. KeyGen. To generate the private key d_{ID_k,user} for an identity ID_k = (I_1, ..., I_k) ∈ Z_q^k of depth k ≤ l, pick a random r_u ∈ Z_q and output
d_{ID_k,user} = (g_2^{α_user} · (h_1^{I_1} ··· h_k^{I_k} · g_3)^{r_u}, g^{r_u}, h_{k+1}^{r_u}, ..., h_l^{r_u}) ∈ G^{2+l−k}.
To generate the private key d_{ID_k,sem} for an identity ID_k = (I_1, ..., I_k) ∈ Z_q^k of depth k ≤ l, pick a random r_sem ∈ Z_q and output
d_{ID_k,sem} = (g_2^{α_sem} · (h_1^{I_1} ··· h_k^{I_k} · g_3)^{r_sem}, g^{r_sem}, h_{k+1}^{r_sem}, ..., h_l^{r_sem}) ∈ G^{2+l−k}.
3. Encrypt. To encrypt a message M ∈ G_1 under the public key ID_k = (I_1, I_2, ..., I_k) ∈ Z_q^k, pick a random s ∈ Z_q and output CT = (ê(g_1, g_2)^s · M, g^s, (h_1^{I_1} ··· h_k^{I_k} · g_3)^s) ∈ G_1 × G^2.


4. Decrypt. When receiving CT = <A, B, C>, a user forwards it to the SEM. The SEM checks whether the user's identity ID_k has been revoked. If it has, it returns δ_{ID_k,sem,C} = (sem_k, "ID Revoked"). Otherwise, using the private key d_{ID_k,sem} = (a_0, a_1, b_{k+1}, ..., b_l), the SEM computes
k_{sem,k} = ê(a_1, C) / ê(B, a_0) = ê(g^{r_sem}, (h_1^{I_1} ··· h_k^{I_k} · g_3)^s) / ê(g^s, g_2^{α_sem}(h_1^{I_1} ··· h_k^{I_k} · g_3)^{r_sem}) = 1 / ê(g, g_2)^{s·α_sem},
and sends δ_{ID_k,sem,C} = (sem_k, k_{sem,k}) to the user. The user computes k_{user,k} using the private key d_{ID_k,user} = (a_0, a_1, b_{k+1}, ..., b_l) as follows:
k_{user,k} = ê(a_1, C) / ê(B, a_0) = ê(g^{r_u}, (h_1^{I_1} ··· h_k^{I_k} · g_3)^s) / ê(g^s, g_2^{α_user}(h_1^{I_1} ··· h_k^{I_k} · g_3)^{r_u}) = 1 / ê(g, g_2)^{s·α_user}.
When receiving δ_{ID_k,sem,C} from the SEM_k, he does the following: if δ_{ID_k,sem,C} is of the form (sem_k, "ID Revoked"), he returns "Error" and terminates. Otherwise, he computes k = k_{sem,k} · k_{user,k} and returns M = A · k.
Our scheme supports information access control in hierarchically structured communities of users whose privileges change very dynamically, as Nali et al.'s mHIBE scheme [10] does.

5

Conclusion

In this paper, we analyzed a weakness of Chai et al.'s ID-based threshold decryption scheme and modified it to improve its security. We also showed how an ID-based threshold decryption scheme without random oracles can yield a mediated ID-based encryption scheme without random oracles. As one possible extension of our scheme, we proposed a mediated hierarchical ID-based encryption scheme. More work is needed to find further security applications where mediated ID-based encryption is particularly useful.

References 1. Olivier Baudron, Pierre-Alain Fouque, David Pointcheval, Jacques Stern, and Guillaume Poupard, Practical multi-candidate election system, In Twentieth Annual ACM Symposium on Principles of Distributed Computing, pages 274-283, 2001. 2. D.Boneh and X.Boyen, Efficient selective id secure identity based encryption without random oracles, In Advances in Cryptology-Proceedings of EUROCRYPT’04, volume 3027 of Lecture Notes in Computer Science, pages 223-238, Springer, 2004. 3. D.Boneh and X.Boyen, Hierarchical Identity Based Encryption with Constant Size Ciphertext, In Advances in Cryptology-Proceedings of EUROCRYPT’05, volume 3494 of Lecture Notes in Computer Science, pages 440-456. Springer, 2005. 4. D.Boneh, X.Ding, and G.Tsudik, Fine-grained control of security capabilities,ACM Transactions on Internet Technology (TOIT) Volume 4, Issue 1, February 2004.


5. D.Boneh and M.Franklin, Identity-based encryption from the Weil pairing, In Advances in Cryptology-CRYPTO 2001, volume 2139 of LNCS, pages 213-229. Springer-Verlag, 2001. 6. Joonsang Baek and Yuliang Zheng, Identity-Based Threshold Decryption, Proceedings of the 7th International Workshop on Theory and Practice in Public Key Cryptogrpahy (PKC’04), LNCS, vol. 2947, Springer-Verlag, 2004, pp. 262-276. 7. Zhenchuan Chai, Zhenfu Cao, and Rongxing Lu, ID-based Threshold Decryption without Random Oracles and its Application in Key Escrow, In Proceedings of the 3rd international conference on Information security, ACM International Conference Proceeding Series, 2004. 8. D.Chaum and T.P. Pedersen, Wallet databases with observers, In Advances in Cryptology-Proceedings of CRYPTO’92, volume 740 of Lecture Notes in Computer Science, pages 89-105, Springer-Verlag, 1993. 9. B. Libert, J.-J. Quisquater, Efficient revocation and threshold pairing based cryptosystems, Symposium on Principles of Distributed Computing-PODC’2003, 2003. 10. D. Nali, A. Miri, and C. Adams, Efficient Revocation of Dynamic Security Privileges in Hierarchically Structured Communities, Proceedings of the 2nd Annual Conference on Privacy, Security and Trust (PST 2004), Fredericton, New Brunswick, Canada, October 13-15, 2004, pp. 219-223.

Materialized View Maintenance in Peer Data Management Systems Biao Qin, Shan Wang, and Xiaoyong Du School of Information, Remin University of China, Beijing 100872, P.R. China {qinbiao, swang, duyong}@ruc.edu.cn

Abstract. The problem of sharing data in peer data management systems (PDMSs) has received considerable attention in recent years. However, update management in PDMSs has received very little attention. This paper proposes a strategy to maintain views in our SPDMS. Based on applications, this paper extends the definition of view and proposes the peer view, local view and global view, so that the maintenance of a global view becomes the maintenance of all related local views if join operations are confined to each local PDMS. Furthermore, this paper proposes an ECA rule for definition consistency maintenance and a push-based strategy for data consistency maintenance. Finally, we run extensive simulation experiments in our SPDMS. The simulation results show that the proposed strategy has better performance than Mork's.

1

Introduction

The problem of sharing data in peer data management systems (PDMSs) has received considerable attention in recent years. For example, the Piazza PDMS [1] proposes a solution for facilitating ad hoc, decentralized sharing and administration of data, and for defining semantic relationships between peers. In [2], the authors present a vision for query and distributed active rule mechanisms which use mapping tables and mapping rules to coordinate data sharing. PeerDB [3] employs an information retrieval-based approach to query reformulation. The Chatty Web [4] focuses on gossip protocols for exchanging semantic mapping information. However, the management of updates in those systems has received very little attention. In [5], Gao et al. propose a framework called CC-Buddy for maintaining dynamic data coherency in peer-to-peer environments. In [6], Mork et al. present a framework for managing updates in large-scale data sharing systems. Based on the above work in PDMSs, this paper proposes a strategy to maintain views in our schema-mapping based PDMS (SPDMS). The main contributions of this paper are as follows.
– Based on applications, this paper extends the definition of view and proposes the global view, local view and peer view. So the maintenance of a global view is turned into the maintenance of all related local views if join operations are confined to each local PDMS.


– If the schema mappings between peers are changed (insert, delete, update), this paper proposes an ECA rule to actively adjust views with the changes of the schema mappings between peers. The paper is organized as follows. Section 2 presents the logical model of our SPDMS. Section 3 discusses view maintenance in our SPDMS. We do extensive simulation and present the representative experimental results in Section 4. Section 5 concludes.

2

The Logical Model of SPDMS

Because they rely on a global mediated schema, data integration systems can only provide limited support for large-scale data sharing [1]. In a PDMS, each peer is able to create its own schema based on its query needs. These schemata are interrelated using a network of mappings. Each peer only needs to maintain a small number of mappings to closely related schemata so that, in the event of a change, only a minimal number of mappings need to be updated. So a PDMS can be viewed as a strict generalization of data integration systems. Definition 2.1. If a peer is a virtual peer that holds the schema mappings of peers in a group or provides a uniform view over a set of peers, the peer is called a mapping peer (MP). Definition 2.2. We call a mapping peer P and all peers in the same group a local PDMS. The local peer set Lo_PS(P) is made up of the peers that lie in the same local PDMS as peer P, that is, Lo_PS(P) = {Pi | Pi and P are in the same group}. Definition 2.3. Let peer P be an MP. The association peer set APS(P) is made up of peers, each of which does not belong to Lo_PS(P) but has schema mappings with at least one peer in Lo_PS(P). So APS(P) = {Pi | Pi ∉ Lo_PS(P) ∧ ∃Pj ∈ Lo_PS(P), Pi ↔ Pj}, where Pi ↔ Pj means there are schema mappings between them. In this paper, the schema mappings between peers are symmetric. Definition 2.4. The peer association peer set PAPS(P) is made up of the peers which have schema mappings with peer P. So PAPS(P) = {Pi | Pi ↔ P}. We illustrate how to construct our SPDMS as shown in Fig. 1. Arrows indicate that there are mappings between the relations of the peer schemas. If Stanford and Berkeley, as neighboring universities, come to an agreement to map their schemas, there are three things to do. First, Stanford registers the schema mappings from it to Berkeley at the Mapping Peer (P2) and Berkeley registers the schema mappings from it to Stanford at P2. Second, P2 puts Stanford and Berkeley into Lo_PS(P2). Finally, Stanford puts Berkeley into PAPS(Stanford) and Berkeley puts Stanford into PAPS(Berkeley) respectively. Then the Stanford-Berkeley PDMS is established. DB-Projects (P1) is also a mapping peer, which provides a uniform view over UPenn and UW. If a mapping between UW and Stanford is established, there are three things to do. First, UW registers the schema mappings from it to

(Fig. 1. The architecture of SPDMS: peer schemas of DB-Projects (a mapping peer over UPenn and UW), a second mapping peer over Stanford and Berkeley, and their data sources, with arrows indicating mappings between the relations of the peer schemas.)

Stanford at P1 and Stanford registers the schema mappings from it to UW at P2. Second, P2 puts UW into APS(P2) and P1 puts Stanford into APS(P1) respectively. Finally, UW puts Stanford into PAPS(UW) and Stanford puts UW into PAPS(Stanford) respectively.
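As an illustration of the bookkeeping a mapping peer performs when mappings are registered, the following is a simplified sketch; the class and method names are hypothetical and not part of SPDMS.

class MappingPeer:
    def __init__(self, name):
        self.name = name
        self.lo_ps = set()      # Lo_PS(P): peers in the same local PDMS
        self.aps = set()        # APS(P): outside peers mapped to some local peer
        self.mappings = {}      # (src, dst) -> schema mapping description

class Peer:
    def __init__(self, name):
        self.name = name
        self.paps = set()       # PAPS(P): peers this peer has mappings with

def register_local_mapping(mp, a, b, m_ab, m_ba):
    # e.g. Stanford <-> Berkeley registered at their mapping peer P2
    mp.mappings[(a.name, b.name)] = m_ab
    mp.mappings[(b.name, a.name)] = m_ba
    mp.lo_ps.update({a.name, b.name})          # both peers join the local PDMS
    a.paps.add(b.name); b.paps.add(a.name)     # mappings are symmetric

def register_cross_mapping(mp_a, mp_b, a, b, m_ab, m_ba):
    # e.g. UW <-> Stanford: each side registers at its own mapping peer,
    # and each mapping peer records the other side's peer in its APS
    mp_a.mappings[(a.name, b.name)] = m_ab
    mp_b.mappings[(b.name, a.name)] = m_ba
    mp_a.aps.add(b.name)
    mp_b.aps.add(a.name)
    a.paps.add(b.name); b.paps.add(a.name)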

3

Materialized View Maintenance

3.1

The Definitions of Materialized Views
Let P1 →_{Vm} P2 denote that a materialized view (Vm) can be transitively reformulated from P1 to P2. We have the following transitivity rule for Vm's.
Transitivity rule. If P1 →_{Vm} P2 holds and P2 →_{Vm} P3 holds, then P1 →_{Vm} P3 holds.
Definition 3.1. We call the set of peers over which a Vm posed over peer P can be reformulated under a set of schema mappings the transitive closure of the Vm; we denote it by P^+_{Vm}. The peers in P^+_{Vm} form the semantic path of the Vm, and P is the initiator of the semantic path. The union of a Vm and its reformulations over all other peers in P^+_{Vm} is called the semantics of the Vm.
Theorem 1. Let peer A and peer B (A ≠ B) be two peers in a PDMS. If A →_{Vm} B holds, then A^+_{Vm} = B^+_{Vm} holds.
Proof. Because A →_{Vm} B holds and schema mappings between peers are symmetric, we have B →_{Vm} A. For any peer C ∈ A^+_{Vm}, we get A →_{Vm} C from Definition 3.1. From the transitivity rule, we obtain B →_{Vm} C, so C ∈ B^+_{Vm}. Thus A^+_{Vm} ⊆ B^+_{Vm}. By the same argument, B^+_{Vm} ⊆ A^+_{Vm}. So A^+_{Vm} = B^+_{Vm}, and the theorem follows.
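Because the schema mappings are symmetric, the transitive closure of a Vm is simply the connected component of the initiator in the graph of Vm-translatable mappings. A minimal sketch (the graph representation is assumed for illustration):

from collections import deque

def transitive_closure(initiator, edges):
    # edges: dict mapping a peer to the peers it shares Vm-translatable
    # schema mappings with (symmetric, per the definitions above)
    closure, frontier = {initiator}, deque([initiator])
    while frontier:
        p = frontier.popleft()
        for q in edges.get(p, ()):      # transitivity rule: follow every hop
            if q not in closure:
                closure.add(q)
                frontier.append(q)
    return closure

# Theorem 1 in this model: any two connected peers have the same closure.
edges = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
assert transitive_closure("A", edges) == transitive_closure("B", edges)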

Definition 3.2. If a view can only retrieve data from one data source, we call it peer view. If a view can retrieve data from more than one data source in its


local PDMS, we call it a local view. If a view can retrieve data from more than one local PDMS, or retrieve data from more than one data source in another local PDMS, we call it a global view. If those views are materialized, they are called peer materialized view (PMV), local materialized view (LMV), and global materialized view (GMV) respectively. We call the part of a GMV in a local PDMS its local instance (G_LMV). 3.2

The Maintenance Strategy of Definition Consistency

A PDMS can be made up of many local PDMSs. For example, the PDMS in Fig. 1 is made up of the Stanford-Berkeley PDMS and the UPenn-UW-DBProjects PDMS. Because join operations are confined to each local PDMS in our SPDMS, a GMV is the union of all related G_LMVs, that is, GMV = ∪_{i=1}^{n} G_LMV_i. So the maintenance of a GMV becomes the maintenance of all related G_LMVs. In our SPDMS, PMVs can be accommodated in any peer. However, the definitions of G_LMVs and LMVs are accommodated in their mapping peer, and their data are accommodated in capable peers of the same local PDMS, which are called propagation peers. The mapping peer maintains the following relationships:
– The relationship between the definition of every G_LMV and its related data.
– The relationship between the definition of every G_LMV and its related data sources.
In each MP, there is an ECA rule to actively adjust Vm's with the changes of the schema mappings between peers (a sketch of this logic is given after the list). The ECA rule is triggered as follows.
(1) If the schema mappings between peer A and peer B are registered at the MP, the ECA rule is evaluated. For any Vm for which A →_{Vm} B becomes true after the schema mappings are registered, the condition C holds. So for any peer P ∈ newA^+_{Vm} − oldA^+_{Vm} (where newA^+_{Vm} and oldA^+_{Vm} denote the Vm's transitive closure after and before the schema mapping change, respectively), the reformulation of the Vm over it is added into its MP and the corresponding data are sent to its propagation peer.
(2) If the schema mappings between peer A and peer B are unregistered from the MP, the ECA rule is evaluated. For any Vm for which A →_{Vm} B becomes false after the schema mappings are unregistered, the condition C becomes false. So for any peer P ∈ oldA^+_{Vm} − newA^+_{Vm}, the reformulation of the Vm over it is deleted from its MP and the corresponding data are deleted from its propagation peer.
(3) If the schema mappings between peer A and peer B are updated, the ECA rule is evaluated. On one hand, for any Vm for which A →_{Vm} B becomes true after the schema mapping update, the condition C holds and the ECA rule is triggered as in (1). On the other hand, for any Vm for which A →_{Vm} B becomes false after the schema mapping update, the condition C becomes false and the ECA rule is triggered as in (2).
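A minimal sketch of the ECA handling described above, reusing the transitive_closure helper sketched in Section 3.1; the MP operations invoked here are illustrative names, not SPDMS APIs.

def on_mapping_change(mp, vm, initiator, old_edges, new_edges):
    # Event: schema mappings registered, unregistered, or updated at this MP.
    old_closure = transitive_closure(initiator, old_edges)
    new_closure = transitive_closure(initiator, new_edges)
    # Condition/Action for newly reachable peers: add reformulations, push data.
    for peer in new_closure - old_closure:
        mp.add_reformulation(vm, peer)                 # hypothetical MP operation
        mp.send_data_to_propagation_peer(vm, peer)
    # Condition/Action for peers that became unreachable: drop reformulations.
    for peer in old_closure - new_closure:
        mp.delete_reformulation(vm, peer)
        mp.delete_data_from_propagation_peer(vm, peer)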


3.3


The Maintenance Strategy of Data Consistency

In our SPDMS, different versions of relations are denoted by version numbers; we use R^t to denote the t-th version of the relation R. We use version vectors to specify the versions of a view: the version vector of a view V contains a version number for each base relation on which V depends. An updategram [6] contains the list of changes (inserts, deletions, and updates) necessary to advance a relation from one version number to a later version number: μ_R^{i,j} contains the changes that must be applied to advance R^i to R^j. Similarly, μ_V^{i,j}, where i and j are version vectors, advances V^{i} to V^{j}. In [6], Mork et al. give the definition of a booster as follows.
Definition 3.3. Let V be an SPJ view definition whose FROM clause is R_1, ..., R_n, let D be a database, and let μ_{R_1} be an updategram for R_1. The booster of R_2 w.r.t. μ_{R_1} and V is the subset of tuples of R_2 in D that are relevant to some tuple mentioned in μ_{R_1}. We denote the booster by β_V(μ_{R_1}, R_2); when V and μ_{R_1} are obvious, we abbreviate it to β(R_2).
In this paper, we focus on the maintenance of GMVs and LMVs. We call an updategram and its corresponding boosters together a Δ-relation. If a Vm relates to any relation in a peer, we call that peer a viewing peer. If a peer accommodates update data temporarily, we call it a temp peer. Our maintenance strategy for a GMV is as follows. If any relation is modified and there is a Vm related to it in a local PDMS, the viewing peer sends updates to the propagation peer. If the Vm is self-maintainable [7], the viewing peer only sends the updategram to the propagation peer; otherwise, the viewing peer sends the Δ-relation to the propagation peer [6]. If the propagation peer is offline, the viewing peer sends the updates to the temp peer; once the propagation peer comes back online, it accepts the data from the temp peer.
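A compact sketch of the push-based data-consistency strategy just described; the peer objects and their methods are illustrative assumptions.

def push_update(vm, updategram, boosters, propagation_peer, temp_peer):
    # Decide what to ship: a self-maintainable view needs only the updategram,
    # otherwise the full Delta-relation (updategram + boosters) is required.
    if vm.is_self_maintainable():
        payload = {"updategram": updategram}
    else:
        payload = {"updategram": updategram, "boosters": boosters}
    # Push to the propagation peer; fall back to a temp peer while it is offline.
    target = propagation_peer if propagation_peer.is_online() else temp_peer
    target.receive(vm.view_id, payload)

def on_reconnect(propagation_peer, temp_peer):
    # When the propagation peer comes back online, it drains the buffered updates.
    for view_id, payload in temp_peer.drain():
        propagation_peer.receive(view_id, payload)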

4

Simulation

According to the logical model of our SPDMS, we developed a simulation system implemented in Java. Based on it, we ran extensive simulation experiments comparing the maintenance efficiency of a centralized GMV (Mork's strategy) with that of its corresponding decentralized GMV (our method). In our experiments, there are 60 simulation nodes, which are classified into 5 local PDMSs, so GMV = ∪_{i=1}^{5} G_LMV_i. There are no join operations between peers in our experiments. The view has 12 attributes and 2400K tuples. The simulation system was tested on a Windows Server 2000 Pentium 4 PC running at 1.7 GHz with 512M of memory. In the experiments, our strategy adopts an even distribution, that is, the data of a GMV are evenly distributed among all related G_LMVs. The simulation results are shown in Fig. 2. The legend Gmv denotes the maintenance time of the GMV, and the legend GLmv denotes the maintenance time of each G_LMV. From the figure, we see that Gmv is much higher than GLmv. This is because each G_LMV holds only one fifth of the population of the GMV.

(Fig. 2. Maintenance time of GMV vs. G_LMV under even distribution; x-axis: number of update operations (500–3000), y-axis: maintenance time in seconds.)

Each G_LMV also handles only one fifth of the update operations of the GMV. When the system maintains each G_LMV, it needs fewer I/O operations because the data are almost all in memory, so the maintenance time is much shorter.

5

Conclusions and Future Work

This paper studies view maintenance in PDMSs. Based on applications, it extends the definition of view and proposes the peer, local and global views. If a peer updates its data, our SPDMS adopts a push-based algorithm to maintain views. If the schema mappings between peers are changed, our SPDMS adopts an ECA rule for active definition consistency maintenance. There are two directions we plan to pursue next. First, we will study how to store an LMV or G_LMV in a different local PDMS from that of its related data sources. Second, we will study how to adopt ontologies in our SPDMS. Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grants No. 60503038, 60473069 and 60496325. The authors wish to thank the anonymous reviewers for their useful comments.

References 1. A. Y. Halevy, Z. G. Ives, J. Madhavan, P. Mork and et al. The Piazza Peer Data Management System. IEEE Transactions on Knowledge and Data Engineering. Vol. 16 (7), 2004: 787-798. 2. M. Arenas, V. Kantere, A. Kementsietsidis, I. Kiringa, R. J. Miller, and J. Mylopoulos, The Hyperion Project: From Data Integration to Data Coordination. SIGMOD Record, 2003.


3. W. S. Ng, B. C. Ooi, K. -L. Tan, and A. Zhou, PeerDB: A P2P-Based System for Distributed Data Sharing, ICDE 2003. 4. K. Aberer, P. Cudre-Mauroux, and M. Hauswirth, The Chatty Web: Emergent Semantics through Gossiping, WWW 2003. 5. S. Gao, W. S. Ng, W. Qian, and A. Zhou, CC-Buddy: An Adaptive Framework for Maintaining Cache Coherency Using Peers. WWW (Poster) 2004. 6. P. Mork, S. D. Gribble, and A. Y. Halevy, Managing Change in Large-Scale Data Sharing Systems, UW CS&E Technical Reports UW-CSE-04-04-01.pdf. 7. A. Gupta, H. V. Jagadish, I. S. Mumick, Data Integration using Self-Maintainable Views, Technical Memorandum 113880-941101-32, AT&T Bell Laboratories, November 1994.

Cubic Analysis of Social Bookmarking for Personalized Recommendation Yanfei Xu1, Liang Zhang1,*, and Wei Liu2 1

Department of Computing and Information Technology, Fudan University 2 Shanghai Library {032021154, zhangl}@fudan.edu.cn, [email protected]

Abstract. Personalized recommendation is used to conquer the information overload problem, and collaborative filtering recommendation (CF) is one of the most successful recommendation techniques to date. However, CF becomes less effective when users have multiple interests, because users who have similar taste in one aspect may behave quite differently in other aspects. Information obtained from social bookmarking websites not only tells what a user likes, but also why he or she likes it. This paper proposes a division algorithm and a CubeSVD algorithm to analyze this information, distill the interrelations between different users' various interests, and make better personalized recommendations based on them. Experiments reveal the superiority of our method over traditional CF methods.

1 Introduction The information in the WWW is increasing far more quickly than people can cope with. Personalized recommendation [1] can help people conquer the information overload problem by recommending items according to users' interests. One popular technique is collaborative filtering (CF) [2]. It is built on the assumption that people who liked the same items in the past are likely to agree again on new items. Although the assumption that CF relies on works well in narrow domains, it is likely to fail in more diverse or mixed settings. The reason is obvious: people who have similar taste in one domain may behave quite differently in others. There are several approaches for handling this problem, either by using a pre-defined ontology [3], or by applying a clustering algorithm to group items or users beforehand [4]. In this paper we propose a more general and straightforward way of handling this problem. It is based on the burgeoning web application named social bookmarking, from which we can gain information not only about what a user likes, but also about why he or she likes it. On social bookmarking websites such as del.icio.us (http://del.icio.us), users attach tags to the items they are interested in, just like using bookmarks. Because of the low barrier to adding tags and the usefulness of social bookmarking, these websites are blossoming and provide a great mass of tags for each item. It is certain that a user may have diverse interests, a tag may have ambiguous meanings,



and an item may relate to multiple topics. Considering the three things together, it is possible to eliminate the ambiguity and distill the inherent correlations among them. As to the realization of this goal, there are two prominent challenges: (1) Personalized recommendation on three-order data requires finding the complicated correlations among users, tags, and items. Most research targets two-order data analysis and has limited ability when coping with three-order data; therefore, a higher-order analysis approach is required to extract the multi-type relations. (2) The three-order tensor is huge and highly sparse. Most CF algorithms are susceptible to data sparseness, and the problem is more severe here; we should address it to decrease the computing cost and improve analysis accuracy. In this paper, we propose a three-order-data approach for personalized recommendation. The main contributions include: (1) addressing the second challenge by designing a division algorithm according to the distribution characteristics of tags; (2) applying the higher-order singular value decomposition (HOSVD) technique [5] to social bookmarking, in order to take full advantage of the three-order data and achieve better data-analysis performance; (3) based on the three-order data analysis, proposing an approach for personalized recommendation with high quality and good usability. Experiments against data collected from the most famous social bookmarking website, del.icio.us, reveal the superiority of our approach over traditional CF recommendation methods. The rest of this paper is organized as follows: Section 2 reviews related work on CF recommendation and higher-order analysis. Section 3 proposes the division algorithm. Section 4 describes the analysis and recommendation algorithm. Section 5 reports the experimental results. Section 6 draws the conclusion and outlines future work.

2 Related Work Collaborative filtering (CF) is one of the most successful techniques for personalized recommendation. Early CF systems such as GroupLens [2] are based on users' explicit ratings, while PHOAKS [6] monitors users' behavior to obtain implicit ratings. Different from these rating-based approaches, our work is based on tags, which include richer information and are more reliable than ratings. Hybrid approaches emerged mainly for solving the data sparseness and cold-start problems [7] by combining the advantages of content-based and CF recommendation. Our approach is also a hybrid one, since it recommends items according to both users' tastes and related tags. It is applicable to non-textual recommendation, where traditional content-based or hybrid approaches fail. CF has the problem of being less effective when users have multiple interests, because users who have similar taste in one aspect may behave quite differently in other aspects. The research in [3] classifies papers using ontological classes and calculates collaborative recommendations on each class separately. Such methods require an ontology or classification created by dedicated professionals beforehand, and the items for recommendation should also be annotated with proper metadata. This limitation makes such approaches not scalable to the Internet, because they depend on professional efforts. The approaches in [4] try to solve this problem by applying clustering algorithms to items


or users. Although the data become denser after clustering, the problem caused by multiple interests still remains. The main reason is that the user-item rating matrix does not contain enough information for distinguishing the non-similar domains. The tags of social bookmarking offer additional information for addressing this problem. Despite the large number of typos and deceptions, tags still have the potential of constructing a reliable taxonomy. This is named folksonomy, whose essence is cooperative classification through tags, the user-created metadata [8]. By adding tags to the original <user, item> duple, it is possible to generate more sophisticated recommendations for a certain user's specified interest, instead of simply for a user. To achieve this goal, we need cubic data analysis of the user-tag-item tensor. Only a few studies carry out such analysis on web information services. The recommendation research in [9] extends Hofmann's two-way aspect model to deal with co-occurrence data among users, items, and item content. Another approach quite related to our work uses CubeSVD on clickthrough data for personalized Web search [10]; it inspired us to analyze the three-order tensor from social bookmarking.

3 Tensor Division When a user marks an item with several tags, we get triples of <user, tag, item> accordingly. These triples then form a 3-order tensor, by setting all the cells corresponding to existing triples to 1 and leaving the others 0. This tensor is very sparse, and the tags and items in different domains seem to have few intersections. Therefore, it is necessary and possible to divide the original tensor into several sub-tensors. The division algorithm is based on tags and items. It allows tags and items to appear in multiple subsets, because some tags have diverse meanings in different domains and some items are related to several topics. We use E to denote the set of all items, and T the set of all tags. There are three phases, as described below.

1. Initialize. Collect the hundreds of most frequently used tags and items, and thus get a tag-item matrix A = (a_ij)_{M×N}, where a_ij = C(t_i, e_j) is the number of users that marked tag t_i on item e_j. Use the k-means clustering algorithm to get k clusters of tags. Each cluster is the initial core of a subset, and each tag is initialized with the weight ws_il = 1/n if there are n distinct tags in the set s_l. Each tag t_i also has a global weight
wg_i = Σ_{e∈E} C(t_i, e) / Σ_{t'∈T} Σ_{e∈E} C(t', e).
2. Expand. For each subset s_l, repeat the following steps until it stops expanding.
a) For each item e_j, calculate its belonging value B_jl to the subset s_l:
B_jl = Σ_{t_i∈s_l} C(t_i, e_j)·ws_il / Σ_{t_i∈T} C(t_i, e_j)·wg_i.
Add all items that have B_jl > α to s_l. Here α is a pre-determined threshold that controls the size of the subsets.
b) For each item e_j in s_l, add every tag t_i that has C(t_i, e_j) > 0 to s_l. Recalculate the tags' local weights by
ws_il = Σ_{e∈s_l} C(t_i, e) / Σ_{t'∈s_l} Σ_{e∈s_l} C(t', e).
c) Repeat steps a and b until no more tags and items are added to s_l.
3. Finalize. For every item that has not been assigned to any subset, add it to the most related subset, and add all the related tags as well. Add a user to a subset if there is at least one triple having both its tag and its item in this subset. Then the users, tags and items in each subset form a sub-tensor that contains part of the data of the original tensor and is denser than the original one.
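A minimal sketch of the expansion step in Python; the tag-item count matrix C and the weights follow the formulas above, and scikit-learn's k-means is only one possible way to build the initial cores (all names here are illustrative).

import numpy as np
from sklearn.cluster import KMeans

def expand_subset(C, core_tags, wg, alpha):
    # Expand one subset s_l from its k-means core, following steps (a)-(c).
    T, E = C.shape
    tags, items = set(int(i) for i in core_tags), set()
    ws = {i: 1.0 / len(tags) for i in tags}          # initial local weights
    while True:
        # (a) items whose belonging value B_jl exceeds alpha join the subset
        new_items = set()
        for j in range(E):
            denom = sum(C[i, j] * wg[i] for i in range(T))
            num = sum(C[i, j] * ws.get(i, 0.0) for i in tags)
            if denom > 0 and num / denom > alpha and j not in items:
                new_items.add(j)
        items |= new_items
        # (b) every tag used on an item of the subset joins it as well
        new_tags = {i for j in items for i in range(T) if C[i, j] > 0} - tags
        tags |= new_tags
        total = sum(C[i, j] for i in tags for j in items) or 1.0
        ws = {i: sum(C[i, j] for j in items) / total for i in tags}
        # (c) stop when a round adds nothing
        if not new_items and not new_tags:
            return tags, items

# toy usage: 6 tags x 8 items, 3 cores found by k-means over the tag rows
C = np.random.poisson(1.0, size=(6, 8)) + 1
wg = C.sum(axis=1) / C.sum()
labels = KMeans(n_clusters=3, n_init=10).fit_predict(C)
subsets = [expand_subset(C, np.where(labels == l)[0], wg, alpha=0.5) for l in range(3)]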

By analyzing the distribution of tags on social bookmarking sites, we conclude that the expanding phase described above tends to converge quickly. Firstly, only a small portion of tags is used by the majority of users, and the set of frequently used tags is quite stable. Secondly, although there are always hundreds of tags marked on a single item, only a few of them are frequently used. The frequently used tags' weights will dominate the belonging values of items, and new unpopular tags will change the values little. Thus, each subset will converge after a few iterations and will contain the tags and items closely related to the core tags of the initial phase.

4 Cubic Analysis for Recommendation SVD is the basis of the Latent Semantic Indexing (LSI) [11] technique. An M×N matrix A (usually a term-document matrix) can be written as the product A = UΣV^T, where U and V are orthogonal and Σ = diag(σ_1, σ_2, ..., σ_r) is the diagonal matrix of singular values in decreasing order. The approximation A_k = U_k Σ_k V_k^T is obtained by keeping the k largest singular values and setting the others to 0; A_k contains the most significant information of the original matrix while the noise is reduced. HOSVD is a higher-order generalization of matrix SVD to tensors: every I_1×I_2×...×I_N tensor A can be written as the n-mode product A = S ×_1 U_1 ×_2 U_2 ··· ×_N U_N, where the U_n contain orthonormal vectors and S is called the core tensor. We apply HOSVD to the 3-order tensor obtained from the division algorithm, and we use the term CubeSVD for this 3-order analysis. The algorithm is described below:
1. Calculate the matrix unfoldings A_u, A_t, A_e for user, tag and item from the m×n×l tensor A, by varying one index and keeping the other two fixed. Thus A_u, A_t, A_e are matrices of size m×nl, n×ml and l×mn respectively.
2. Compute the SVD of A_u, A_t and A_e, and obtain the left singular matrices U_u, U_t, U_e respectively.
3. Select m_0 ∈ [1, m], n_0 ∈ [1, n] and l_0 ∈ [1, l]. Truncate U_u, U_t, U_e by keeping only the leftmost m_0, n_0 and l_0 columns and removing the others, and denote the truncated matrices by W_u, W_t, W_e. The core tensor is S = A ×_1 W_u^T ×_2 W_t^T ×_3 W_e^T.
4. Reconstruct the original tensor by Â = S ×_1 W_u ×_2 W_t ×_3 W_e.
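A compact numpy sketch of the four steps above; the toy tensor and the truncation ranks are made up for illustration.

import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: rows are the mode-n fibers of T.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # n-mode product T x_n M, where M has shape (new_dim, T.shape[mode]).
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def cube_svd(A, m0, n0, l0):
    # A: (users x tags x items) 0/1 tensor of one sub-tensor.
    Ws = []
    for mode, r in zip(range(3), (m0, n0, l0)):
        U, _, _ = np.linalg.svd(unfold(A, mode), full_matrices=False)
        Ws.append(U[:, :r])                 # truncated left singular vectors W
    S = A
    for mode, W in enumerate(Ws):
        S = mode_dot(S, W.T, mode)          # core tensor S = A x1 Wu^T x2 Wt^T x3 We^T
    A_hat = S
    for mode, W in enumerate(Ws):
        A_hat = mode_dot(A_hat, W, mode)    # reconstruction A_hat
    return A_hat, Ws, S

# toy usage: 4 users, 3 tags, 5 items
A = (np.random.rand(4, 3, 5) < 0.3).astype(float)
A_hat, Ws, S = cube_svd(A, 2, 2, 3)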

The CubeSVD-reconstructed tensor Â measures the associations among users, tags and items more precisely than the original tensor does, because the noise is reduced by eliminating the small singular values. Elements of Â can be represented by quadruplets <u, t, e, w>, where w indicates how much user u likes item e when he is seeking information related to tag t. So the recommendations for user u_i's tag t_j are the items e_{k1}, e_{k2}, ..., e_{kN} such that Â(i, j, k_n) is among the N biggest values of the vector Â(i, j, :).


The recommendation is a CF approach since it is based on users' opinions. It is also a content-based approach because it uses tag information to recommend items as well.
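The Top-N selection and the hit-ratio metric used in the next section reduce to a few lines (a sketch; Â is the reconstructed tensor from the previous code):

import numpy as np

def top_n(A_hat, i, j, N):
    # Items whose reconstructed weight A_hat(i, j, :) is largest for user i, tag j.
    return list(np.argsort(A_hat[i, j, :])[::-1][:N])

def hit_ratio(recommended, collected, N):
    # HitRatio = |R ∩ C| / min(N, |C|), as defined in Section 5.
    if not collected:
        return 0.0
    return len(set(recommended) & set(collected)) / min(N, len(collected))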

5 Experiments and Remarks We collected records from del.icio.us. The dataset contains about 249,463 records from one week, involving 21,455 users, 17,920 tags and 8,124 items. After removing rarely appearing data, 224,938 records are left, involving 5,428 users, 7,847 tags and 6,010 items. We apply the division algorithm with α = 2.1 and k = 15, and randomly select a sub-tensor as an example; it has 5,381 records involving 182 users, 113 tags and 481 items. We use 2,691 records as training data and the remaining 2,690 as testing data. We compare our cubic analysis recommendation approach (denoted CubeRec) with the traditional CF approach. The CubeRec approach recommends Top-N items for each user's each tag according to the value w in the quadruplet <u, t, e, w>. The CF approach applies LSI to the user-item matrix first, finds the M most similar neighbors for each user, and then recommends Top-N items according to the neighbors' collections. We use the hit ratio to evaluate recommendation quality: HitRatio = |R ∩ C| / min(N, |C|), where R is the set of items recommended to a user and C is the set of items that the user actually collected. From Table 1, we find that the results of CubeRec are very promising: the HitRatios are much higher than those of the CF-based approach. The main reason for the worse results of the CF approach is that although users collect many identical items, their interests may still be quite different. Here is an example found in the experiment. There is a user who may be a web designer, because he collected several web pages and marked them with tags such as webdesign, cool, elegant etc. While he is interested in the design of the web pages, his neighbors, who collect many common pages, seem to be interested in the content, because they mark them with very different tags such as XML, food, cuisine etc. Therefore, the recommendations to this user will be definitely wrong; the more the system recommends, the more errors it makes. This also explains why the HitRatios generally decrease as the number of recommended items increases. Table 1. HitRatio for our CubeRec approach and the traditional CF approach. For CF, k=100, M=5; for CubeRec, = .

          Top1     Top3     Top5     Top10    Top15
CF        0.3911   0.3035   0.2491   0.2115   0.2111
CubeRec   0.7097   0.6957   0.7437   0.8117   0.8247

6 Conclusion and Future Work Personalized recommendation has become popular in recent years as a way to help people conquer the information overload problem. While the most successful such technique, CF, is


frustrated when applied to multiple-domain settings, our approach can still work well. Experiments against data from del.icio.us reveal the superiority of our approach. Our future work on personalized recommendation for social bookmarking includes: (1) We chose the α and k for the division algorithm and the truncation values for CubeSVD manually and empirically, and only obtained a relatively good result; we will study how to choose them automatically to get the best result. (2) Many other means of recommendation should be studied, such as recommending the most related tags or items for a user's given tag or item, as well as recommending related users. These means will help users when they are browsing the web and give them the serendipity of finding novel but related items.

Acknowledgement This work is partially supported by National Basic Research Program (973) under grant No. 2005CB321905, NSFC Key Program under grant No. 69933010, and Chinese Hi-tech (863) Projects under grant No. 2002AA4Z3430, No. 2002AA231041.

References [1] Resnick, P., Varian, H.: Recommender systems. Communications of the ACM 40, 3 (1997) [2] Resnick, P., Iacovou, N., Suchak, M., Bergstrom, Riedl, J.: GroupLens: an open architecture for collaborative filtering of netnews. Proceedings of CSCW (1994), ACM Press, 175-186. [3] Middleton, S.E., Shadbolt, N.R., De Roure, D.C.: Ontological user profiling in recommender systems. ACM Transactions on Information Systems, 22,1 (2004), 54-88 [4] Kelleher, J., Bridge, D.: An accurate and scalable collaborative recommender. Artificial Intelligence Review, 21,3-4 (2004) 193-213 [5] De Lathauwer, L., De Moor, B., Vandewalle, J.: A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21,4 (2000) 1253-1278 [6] Terveen, L., Hill, W., Amento, B., McDonald, D., Creter, J.: PHOAKS: a system for sharing recommendations. Communications of the ACM, 40,3 (1997) 59-62 [7] Schein, A.I., Popescul, A., Ungar, L.H., Pennock, D.M.: Methods and metrics for coldstart recommendations. In Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (2002) 253 – 260 [8] Adam M.: Folksonomies Cooperative Classification and Communication Through Shared Metadata. (2004) http://www.adammathes.com/academic/computer-mediatedcommunication/ folksonomies.html [9] Popescul, A., Ungar, L.H., Pennock, D.M., Lawrence, S.: Probabilistic models for unified collaborative and content-based recommendation in sparse-data environments. In Proc. 17th Conf. Uncertainty in Artificial Intelligence (2001) 437-444 [10] Sun, J.T., Zeng, H.J., Liu, H., Lu, Y.C., Chen, Z.: CubeSVD: a novel approach to personalized Web search, Proceedings of the 14th international conference on World Wide Web (2005) [11] Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R.: Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6), (1990)

MAGMS: Mobile Agent-Based Grid Monitoring System Anan Chen, Yazhe Tang, Yuan Liu, and Ya Li Xi’an Jiaotong University, Xi’an Shaanxi 710049, China [email protected]

Abstract. With the emergence of service-oriented Grid systems, Quality of Service (QoS) guarantees have become the most significant goal of Grid monitoring. High flexibility and scalability are crucial for a Grid monitoring system. This paper introduces the architecture and some implementation details of a Mobile Agent-based Grid Monitoring System (MAGMS). In this system, a mobile agent (MA) with its monitoring tasks is encapsulated in SOAP messages conforming to the OGSA standards, transported to wherever services are consumed, and executed there. MAGMS is a novel infrastructure notable for its flexibility, scalability and sound QoS guarantees.

1

Introduction

Grid is the extension of traditional distributed computing technology with the goal of sharing resources (CPU, memory, etc.) among virtual organizations. For the large-scale, geographically dispersed, and dynamically changing resources in a Grid system, it is crucial to construct a monitoring and management system with high scalability and flexibility. Most Grid monitoring systems are implemented based on the Grid Monitoring Architecture (GMA [1]) recommended by the Global Grid Forum (GGF); however, there has been no standard monitoring method. The GMA is based on a simple consumer/producer architecture with an integrated system registry. To date, there have been several implementations of Grid monitoring systems according to GMA, such as the Monitoring and Discovery Service (MDS4 [2][3]), the Relational Grid Monitoring Architecture (R-GMA [4]), Hawkeye [5] and GridRM [6]. But these systems do not meet the scalability and flexibility requirements of Grid monitoring, owing to the tight coupling model that GMA imposes on the parties exchanging information. Mobile Agent, with its mobility, autonomy and reactivity, meets these needs well. Bandwidth saving and network traffic reduction can also be achieved by Mobile Agents by moving the "method" to the data. Although Orazio Tomarchio and Lorenzo Vita [7] recognized this convergence of interests, their architecture focused mainly on the earlier resource-based Grid and did not take full advantage of the mobility of agents. The Grid Mobile Service [8][9] concept has not yet been realized as a specific service, such as a monitoring service. In contrast, we propose a flexible, dynamic and extensible Grid monitoring system named MAGMS, which makes good use


of the mobile agent characteristics to give QoS guarantees to Grid Services. In this paper we present the design of MAGMS, as well as some key implementation details. Since OGSA models all the resources in a Grid as Grid Services, the monitored object in our design is the Grid Service, and the main monitoring goal is to ensure the Quality of Grid Service. The rest of the paper is organized as follows. In Section 2, the design of MAGMS is described, while the critical factors in the system design are given in Section 3. Section 4 introduces the implementation of the prototype system. Finally, conclusions are presented together with directions for future work in Section 5.

2

System Architecture

Our system merges the intelligence and mobility characteristics of mobile agents into the current OGSA, constructing a more dynamic and extensible monitoring system to ensure the Quality of Grid Service. Fig. 1 shows the framework of MAGMS. The components of the framework can be divided into three groups: service providers, storages and SOAP-oriented agents.

Fig. 1. Framework of MAGMS


2.1


Service Providers

In OGSA, any kind of host with resources and abilities is called a service provider. It models all of its resources and abilities as Grid Services, which provide a set of well-defined interfaces and conform to specific conventions. Every Grid Service in the Grid environment should register itself in the Grid Service Register Centre (mentioned in Section 2.2), and any change to the Grid Service should be updated in the Centre. In addition, a Grid Service in MAGMS should be bound to the Classes of Service quality it can support, and should specify the details of the different classes. The Management Service is a special Grid Service, which manipulates the real-time data gathered by the sensors all over the resource to derive useful information and offers it to MAs through the Management Interface. 2.2

Storages

There are three basic storages in MAGMS: Grid Service Register Centre manages all the Grid Services, Monitoring Task Patterns Centre provides a series of basic monitoring behaviors and Mobile Agent Management Centre keeps a record of all the Mobile Agents. Grid Service Register Centre stores information about all the Grid Services that are being provided in Grid. When a new Grid Service is going to be supplied, it should be registered in this Centre firstly, and update the register information whenever changes happen. Besides, MAGMS requires the special attribute, namely Classes of Services to be registered too. The service name and its class compose the key of a record, and each entry indexes to a specific file depicting this class of the service, especially about the conditions for Quality of Service guarantee. A simplified structure of these files is shown in Fig. 1 Monitoring Task Patterns Centre has two libraries: System Monitoring Library and Custom Monitoring Library. Monitoring tasks modules from these two libraries can be integrated into a more complex and concrete task carried by MAs. Since users can customize some monitoring modules through this Custom Monitoring Library dynamically, it increases the flexibility and extensibility of MAGMS. Mobile Agent Register Centre keeps the records of all the agents in MAGMS. When a MA is created out from the Agent Factory, it is required to be registered in this centre with the information of what Grid Service and which class of the Grid Service the MA will be monitoring. When a MA gets instantiated in an Agent Executing Environment, the location of this MA instance will also be recorded and updated in this Centre. 2.3

Soap-Oriented Agents

Distinguished from most of the other Mobile Agent-based platforms, in MAGMS the transport and communication mechanism of Mobile Agent are both based on the SOAP (Simple Object Access Protocol) message, neither Agent Transfer

742

A. Chen et al.

Protocol (ATP) nor Agent Communication Language (ACL). This design well merges the MA into the OGSA. According to the specific files about the Grid Service with a certain class, Agent Factory extracts the objects and goals that the monitoring for Grid Service guarantee requires, finds the corresponding modules for these monitoring requirements in Monitoring Task Patterns Centre, and integrates them into the process logic of a MA. Finally, attached with some routines and the destination where it heads for, a MA is made out, whose transmission, migration, and interaction will all be realized by SOAP. Another characteristic of MAGMS — agent executing environment, which is a basic element of Mobile Agent Platform (MAP [10]) now is embedded in our Grid monitoring system. Mobile Agent executing environment provides MA a place to run itself, namely execute the monitoring tasks and realize its life circle.

3 3.1

Critical Factors on System Design Agent Transmission

In this system, the agent code is encapsulated in the SOAP message and transferred from one host to another. The SOAP is realized through the XML technologies by which an extensible message framework is defined. However, some characters in the agent code cannot be supported by the XML. In Base64 coding, a 65-character subset of US-ASCII supported by the XML is used, enabling 6 bits to be represented per printable character. Therefore, we use Base64 coding to encode agent code, which might increase the length of the original agent code string. Luckily, the size of the agent code usually is not too big in MAGMS, so the length of the encoded agent is acceptable. 3.2

SLA-Based Grid Service Management

Service level agreement (SLA) has been widely used in web service literature for QoS management. WSLA [11] and WSOL [12] have made considerable achievements in this field. Grid Service, which is based on web service and Grid technique, will by its very nature use SLA for QoS specification and management. In our architecture, we use Class of Service for QoS specification and management. Here Class of Service refers to a variety of services that one same service provider offers but with different QoS support, like service offering in WSOL. In this case, consumers cannot negotiate with service providers to make a SLA, instead, consumers will choose among Classes of Services. In fact, Class of Service is a special kind of SLA. As Fig. 1 and section 2.2 depict, in our architecture, every Grid Service will register itself as well as the Classes of Service quality it can support in Grid Register Centre. The user will inquire Grid Register Centre to get the appropriate Grid Service and service class with required function and QoS support. Prior to this, some important management information will be retrieved from the specification of the class of service.

MAGMS: Mobile Agent-Based Grid Monitoring System

4 4.1

743

Implementation of the Prototype System Introduction of the Developing Tools

In the system, we use JDK1.4.2, Eclipse3.0, Apache SOAP2.3.1, Tomcat5.0.28 and some support packages including JavaMail1.3.2, JAF1.0.2, and Xerces 2.5.0. Eclipse is used to develop Mobile Agents, Management Service and Grid Services. JDK is used to support the Eclipse. Tomcat is used as web server. Apache SOAP and support packages are used to realize the SOAP message encapsulation. 4.2

Running of the Prototype System

Firstly, the agent code is sent with the user’s requirement. The agent is made out from the Agent Factory according to the monitoring tasks and sent to the provider host of the Grid Service that the user is consuming. All the messages between the Agent Factory and hosts are encapsulated in the SOAP messages binding with HTTP. Secondly, the server receives and saves the agent code to the local storage. Then the agent execution environment instantiates this agent (suppose the classes agent needs are all local ones) and runs it. Lastly, while the agent is executing the logic operation, it interacts with the Management Service to get the information that is needed for its computing and gets the result whether the current condition of this Grid Service meets the user’s requirements, namely appointed Class of Service.

5

Conclusion

In this paper, we propose a Mobile Agent-based Grid Monitoring System (MAGMS). The main purpose of this system is to offer the Quality of Service monitoring and guarantee of Grid Services. MAGMS well meets the distribution and flexibility requirements of the giant Grid environment because the monitoring modules can be customized and reused in many monitoring tasks that are carried out by MAs. MAGMS also takes good advantage of the mobility and intelligence of MAs, which are encapsulated in a SOAP message along with its monitoring task, migrating among the hosts that need them. With the MAGMS, each Service under consuming will be monitored by a corresponding MA to guarantee the quality of the service, namely, the class of the service the user has chosen. Now MAGMS is in its primary phase. There are many jobs left to promote efficiency and scalability of the system, such as all kinds of task modules that are suited for different conditions, the dispersal repository for the Mobile Agent and Grid Service registrations. Appropriate authentication, authorization and delegation in MAGMS are also important parts of our future work.

Acknowledgments This work is supported by National Natural Science Foundation of China (NSFC) under Grant NO. 60403029.

744

A. Chen et al.

References 1. B. Tierney, R. Aydt, D. Gunter, W.Smith, M. Swany, V. Taylor, and R. Wolski: Grid monitoring architecture. Technical report, Global Grid Forum(2002) 2. Czajkowski, K., S. Fitzgerald, I. Foster, and C.Kesselman: Grid Information Services for Distributed Resource Sharing. In Proc.10th IEEE International Symposium on High Performance Distributed Computing (HPDC-10), IEEE Press(2001) 3. MDS4: http://www.globus.org/toolkit/mds/ 4. Byrom, R., Coghlan, B.,Cooke, A., Cordenonsi, R., etc. The CanonicalProducer: an instrument monitoring component of the Relational Grid Monitoring Architecture (R-GMA). Parallel and Distributed Computing (2004). Third International Symposium on/Algorithms, Models and Tools for Parallel Computing on Heterogeneous Networks (2004). Third International Workshop on 5-7 (July 2004) Page(s): 232 237. 5. Hawkeye: http://www.cs.wisc.edu/condor/hawkeye/ 6. Mark Baker, Garry Smith, GridRM: An Extensible Resource Monitoring System. In Proceedings of the International Conference on Cluster Computing (2003) 7. Orazio Tomarchio, Lorenzo Vita: On the use of mobile code technology for monitoring Gird system. IEEE (2001) 8. Shang-Fen Guo, Wei Zhang, Dan Ma, Wen-Li Zhang: Grid mobile service: using mobile software agents in grid mobile service. Proceedings of 2004 International Conference on Volume 1, 26-29 Aug. 2004 Page(s): 178 - 182 vol.1 9. Wei Zhang, Jun Zhang, Dan Ma, Benli Wang, Yuntao Chen: Key Technigque Research on Grid Mobile Service. Proceeding of the 2nd International Conference on Information Technology for Application (ICITA 2004) 10. Antonio Puliafito, Orazio Tomarchio, Lorenzo Vita: MAP: Design and implementation of a mobile agents’ platform. Journal of Systems Architecture 46(2000) 145-162 11. Keller, A., Ludwig, H.: The WSLA Framework: Specifying and Monitoring Service Level Agreements for Web Services, IBM Research Report (May 2002) 12. Tosic, V., Pagurek, B., Patel, K.: WSOL - A Language for the Formal Specification of Classes of Service for Web Services. In Proc. of ICWS’03 (The 2003 International Conference on Web Services), Las Vegas, USA, June 23-26, 2003, CSREA Press, pp. 375-381, 2003

A Computational Trust Model for Semantic Web Based on Bayesian Decision Theory Xiaoqing Zheng, Huajun Chen, Zhaohui Wu, and Yu Zhang College of Computer Science, Zhejiang University, Hangzhou 310027, China {zxqingcn, wzh, huajunsir, zhangyu1982}@zju.edu.cn

Abstract. Enabling trust to ensure more effective and efficient agent interaction is at the heart of the Semantic Web vision. We propose a computational trust model based on Bayesian decision theory in this paper. Our trust model combines a variety of sources of information to assist users with making correct decision in choosing the appropriate providers according to their preferences that expressed by prior information and utility function, and takes three types of costs (operational, opportunity and service charges) into account during trust evaluating. Our approach gives trust a strict probabilistic interpretation and lays solid foundation for trust evaluating on the Semantic Web.

1 Introduction In this paper, we present a Bayesian decision theory-based trust model that combines prior and reputation information to produce a composite assessment of an agent's likely quality and quantify three types of cost: operational, opportunity and service charges incurred during trust evaluating. Consider a scenario in which a user (initiator agent) wants to find an appropriate service provider (provider agent) on the Semantic Web, and his problem is which provider may be the most suitable for him. Assuming that he maintains a list of acquaintances (consultant agents) and each acquaintance has a reliability factor that denotes what degree this acquaintance's statements can be believed. During the process of his estimating the quality of different providers and selecting the best one among them, he can 'gossip' with his acquaintances by exchanging information about their opinions. We call it reputation of a given provider agent that integrates a number of opinions from acquaintances and acquaintances of acquaintances. We also consider initiator agent's prior information that is direct experience from history interactions with the provider agents. And then, the trust can be generated by incorporating prior and reputation information.

2 Trust Model for Semantic Web There are many different kinds of costs and benefits an agent might incur when communicating or dealing with other agents and the trust model should balance these costs and benefits. Following [K. O'Hara et at., 2004], The costs and utility associated with trust evaluating include the following. X. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 745 – 750, 2006. © Springer-Verlag Berlin Heidelberg 2006

746

X. Zheng et al.

Operational Cost. Operational costs are the expenses of computing trust value. In other words, this is the cost of setting up and operating the whole trust plan. Therefore, the more complex the algorithm is the higher the cost is expected to be. Opportunity Cost. Opportunity cost is the lost of missing some possibility of making better decision via further investigation. Service Charge. Service charge can be divided into two types. One is Consultant fee that incurred when an agent asks for the opinions from the consultant agents, the other is Final service charge that will be paid to the selected provider agent. Utility Function. To work mathematically with ideas of "preferences", it will be necessary to assign numbers indicating how much something is valued. Such numbers are called utilities. Utility function can be constructed to state preferences and will be used to estimate possible consequences of the decisions. 2.1 Closed Trust Model and Bayesian Decision Theory We assume that the quality of a provider agent can be considered to be unknown numerical quantity, ș, and it is possible to treat ș as a random quantity. Consider the situation of an agent A tries to make an estimate of trust value for provider agent B. A holds a subjective prior information of B, denoted by distribution ʌ(ș), and requests A's acquaintance to give opinions on B's quality. After A receives the assessments of B's quality from its acquaintances, A takes these opinions as sample about ș. Outcome of these sample is a random variable, X. A particular realization of X will be denoted x and X has a density of f(x|ș). Then, we can compute "posterior distribution" of ș given x, denoted ʌ(ș|x). Just as the prior distribution reflects beliefs about ș prior to investigation in B's reputation, so ʌ(ș|x) reflects the update beliefs about ș after observing the sample x. If we need take another investigation on B's quality for more accuracy, ʌ(ș|x) will be used as prior for the next stage instead of original ʌ(ș). We also should construct utility function for agent A's owner, represented by UA(r), to express his preferences, where r represents rewards of the consequences of a decision. Supposing that ʌ(ș|x) is the posterior distribution of provider agent B, the expected utility of function UA(r) over ʌ(ș|x), denoted Eʌ(ș|x)[UA(r)], is possible gain of consequence of selecting B. If there are several provider agents can be considered, we simply select one that will result in the most Fig. 1. A "closed society of agents" for agent A expected utility. By treating an agent as a node, the "knows" relationship as an edge, a directed graph emerges. To facilitate the model description, agents and their environment are to be defined. Consider the scenario that agent A is evaluating trust of B and C for being business. The set of all consultant agents that A asks for this evaluation as well as A, B, C can be considered to be a unique society of agents N. In our example (see Figure 1), N is {A, B, C, D, E, F, G, H, I, J, K} and is called a "closed society of agents" with respect to A.

A Computational Trust Model for Semantic Web Based on Bayesian Decision Theory

747

Good Average Bad

Decisions are more commonly called actions Table 1. Contitional density of X, and the set of all possible actions under given ș consideration will be denoted A . In our example, ș Good Average Bad agent A is trying to decide whether to select agent B (action b) or C (action c) as business 0.60 0.10 0.05 partner (A = {b, c}). Service charges of B and C are 400 and 450 units respectively (SCB = 400, 0.30 0.70 0.15 SCC = 450). We will treat quality of service, ș, as x a discrete variable, and ș affects the decision process is commonly called the state of nature. 0.10 0.20 0.80 In making decision it is clearly important to consider what the possible states of nature are. The symbol Ĭ will be used to denote the set of all possible states of nature and ș Ĭ = {good, average, bad} represents three levels of service quality. See Figure 1, a notion on an edge between initiator and consultant agents or between the two consultant agents represents reliability factor, that between consultant and provider agents is the assessment of service quality and that between initiator and provider agents is prior information. In our example, the prior of A to B is {ʌB(șg)=0.3, ʌB(șa)=0.5, ʌB(șb)=0.2)}, where PB(good) = 0.3, PB(average) = 0.5, and PB(bad) = 0.2. We also suppose that the prior of A to C is {ʌA(șg)=0.33, ʌA(șa)=0.34, ʌA(șb)=0.33)}. The probability distribution of X that represents the assessments of service quality from consultant agents is another discrete random variable, with density f(x|ș), shown in Table 1. We also construct a utility function of {UA(r = good) = 900, UA(r = average) = 600, UA(r = bad) = 100} for agent A's owner which means how much money he is willing to pay. We take the assessments of B's quality from consultant agents as sample about șB, and in order to answer the question of which sample information should be used with higher priority we propose following order rules. (1) The sample from the agent with shorter referral distance should be used first; (2) If the samples come from the agents that have the same referral distance, which with larger reliability factor is prior to that with smaller one. Note that ș and X have joint (subjective) density h ( x , θ ) = π (θ ) f ( x | θ )

(1)

and that X has marginal (unconditional) density m (x) =

³

Θ

f ( x | θ )π (θ ) d θ

(2)

h ( x ,θ ) m (x)

(3)

it is clear that (providing m(x)0) π (θ | x ) =

Now, we begin to evaluate trust value of B for A by using above data and information. Firstly, we use the sample information from D through the path AĺDĺB, and the posterior distribution ʌB(ș|x) of șB, given xa, is {ʌB(șg|xa)=0.1915, ʌB(șa|xa)=0.7447, ʌB(șb|xa)=0.0638}. However, the reliability factor has not been considered when calculating this posterior distribution. We used following formula to

748

X. Zheng et al.

rectify above ʌB(ș|xa), where ʌ(ș), ʌold(ș|x) and ʌnew(ș|x) represent the prior distribution, the posterior distribution before rectification and the posterior distribution after rectification respectively, and R is reliability factor. π

new

(θ i | x ) = π ( θ i ) + [ π

old

(θ i | x ) − π (θ i )] × R

(4)

i ∈ { g ( good ), a ( average ), b ( bad )}

So, after rectification, ʌB(șg|xa) = 0.3+(0.1915−0.3)×0.85 = 0.2078, ʌB(șa|xa) = 0.5+(0.7447−0.5)×0.85 = 0.7080, and ʌB(șb|xa) = 0.2+(0.0638−0.2)×0.85 = 0.0842. The residual calculating process of B and the whole process of C are shown in Table 2 and 3. We employ multiplying to merge two or more than two reliability factors. For example, the reliability factor of the edge AĺE is 0.9 and that of EĺG is 0.8, then the value of reliability factor on the path AĺG is 0.72 (0.9×0.8). The reason behind using multiplying is that if the statement is true only if the agents that propagate this statement all tell the truth and it is considered to be independent for any two agents to lie or not to lie. After obtaining the posterior distribution of B and C, we can compare the results of the expectation of UA(r) over ʌB(ș|x) minus SCB with that of the expectation of UA(r) over ʌC(ș|x) minus SCC, and simply select the agent that possibly will produce more utility. Expected utility of B and C are Utility of B = E π ( θ | x ) [U A ( r )] − SC B = 0 . 2837 × 900 + 0 . 6308 × 600 + 0 . 0855 × 100 − 400 = 242 . 36 Utility of C = E π ( θ | x ) [U A ( r )] − SC C B

C

= 0 . 5049 × 900 + 0 . 4519 × 600 + 0 . 0432 × 100 − 450 = 279 . 87 hence (279.87 > 242.36), action c should be performed which means that C is more appropriate than B in the eyes of A . Table 2. The process of evaluating B's trust Step

Path

Statement

Reliability Factor

Prior Distribution

Posterior Distribution

ʌB(șg)

ʌB(șa)

ʌB(șb)

ʌB(șg|x)

ʌB(șa|x)

ʌB(șb|x)

1

AĺDĺB

Average

0.8500

0.3000

0.5000

0.2000

0.2078

0.7080

0.0842

2

AĺEĺGĺB

Bad

0.7200

0.2078

0.7080

0.0842

0.1233

0.6420

0.2347

3

AĺEĺFĺB

Good

0.6300

0.1233

0.6420

0.2347

0.3565

0.5073

0.1362

4

AĺEĺGĺHĺB

Average

0.5400

0.3565

0.5073

0.1362

0.2837

0.6308

0.0855

Table 3. The process of evaluating C's trust Step

Path

Statement

Reliability Factor

Prior Distribution

Posterior Distribution

ʌC(șg)

ʌC(șa)

ʌC(șb)

ʌC(șg|x)

ʌC(șa|x)

ʌC(șb|x)

1

AĺEĺC

Average

0.9000

0.3300

0.3400

0.3300

0.2635

0.5882

0.1483

2

AĺIĺJĺC

Good

0.7410

0.2635

0.5882

0.1483

0.5905

0.3466

0.0629

3

AĺIĺJĺKĺC

Average

0.4817

0.5905

0.3466

0.0629

0.5049

0.4519

0.0432

2.2 Open Trust Model and Bayesian Sequential Analysis Above discussion is under the condition that the "closed society of agents" must be defined at first, but it is nearly impossible for inherent open and dynamic Web. Our

A Computational Trust Model for Semantic Web Based on Bayesian Decision Theory

749

idea is that at every stage of the procedure (after every given observation) one should compare the (posterior) utility of making an immediate decision with the "expected" (preprosterior) utility that will be obtained if more observations are taken. If it is cheaper to stop and make a decision, that is what should be done. To clarify this idea, we transform Figure 1 to the structure of tree, shown in Figure 2. The goal of preposterior analysis is to choose the way of investigation which minimizes overall cost. This overall cost consists of Fig. 2. The process of trust evaluating the decision loss (including opportunity cost) and the cost of conducting observation (consultant fee). Note that these two quantities are in opposition to each other and we will be concerned with the balancing of these two costs in this section. We continue above example used in the illustration of the closed trust model. As shown in Figure 2, we begin at the stage 1 when A only holds the prior information of B and has no any information about C (even the existence of C, but it is more possible that an agent like C is near in the network). Agent A either can make an immediate decision (to select B) or can send request to its acquaintances for their opinions by extending the tree of Figure 2 down to the next layer. Suppose that the cost of consultant services is determined by the function of udn, where n represents the number of stage, d the mean degree of outgoing in given network, and u the mean consultant fee for each time (here, u = 1 and d = 4). Note that initiator agent can estimate the cost of consultant services at the next stage more exactly in real situation. The utility of an immediate decision is the larger of E π B ( θ ) [U A ( r )] − SC

B

= 0 . 30 × 900 + 0 . 50 × 600 + 0 . 20 × 100 − 400 = 190

and E π ( θ ) [U A ( r )] − SC C = 0 . 33 × 900 + 0 . 34 × 600 + 0 . 33 × 100 − 450 = 84 Hence the utility of an immediate decision is 190. If the request messages is sent and x observed, X has a marginal (unconditional) density of {m(xg) = 0.24, m(xa) = 0.47, m(xb) = 0.29,} and E π (θ |x ) [U A (r )] − SC B = 404.15 , E π (θ |x ) [U A (r )] − SCB = 225.55 , E π (θ |x ) [U A (r )] − SCB = −44.88 . Note that if xb is observed, we prefer to select C instead of B (because E π (θ ) [U A (r )] − SCC = 84 and 84 > −44.88). The expected cost of C

B

B

g

B

a

b

C

consultant services for further investigation at this stage is 4 (1×41). So the utility of not making immediate decision (opportunity cost) is: max{404.15,84}×0.24+max{225.55,84}×0.47+max{−44.88,84}×0.29 − 4 = 223.36 Because 223.36 > 190, further investigation would be well worth the money. The residual process of Bayesian sequential analysis is shown in Table 4. As shown in Table 4, at the stage 3, the expected utility of C begins to larger than that of B, and because 295.70 > 255.02, making an immediate decision is more profitable. So A should stop investigating and select C as a decision.

750

X. Zheng et al. Table 4. The process of Bayesian sequentail analysis Agent B Prior Distribution

Stage șg

șa

Agent C Marginal Distribution

șb

xg

xa

Prior Distribution xb

șg

șa

Utility Marginal Distribution

șb

xg

xa

xb

Consultant Further Immediate InvestigaFee Decision tion

Decision

1

0.3000 0.5000 0.2000 0.2400 0.4700 0.2900 0.3300 0.3400 0.3300

í

í

í

4

190.00

223.36

Continue

2

0.2078 0.7080 0.0842 0.1997 0.5706 0.2297 0.2635 0.5882 0.1483

í

í

í

16

220.25

221.33

Continue

3

0.3565 0.5073 0.1362

64

295.70

255.02

Stop

í

í

í

0.5905 0.3466 0.0629 0.3921 0.4292 0.1787

3 Conclusions The semantic web is full of heterogeneous, dynamic and uncertain. If it is to succeed, trust will inevitably be an issue. Our work's contributions are: The closed and open trust models have been proposed based on Bayesian decision theory and give trust a strict probabilistic interpretation; The utility and three types of costs, operational, opportunity and service charges incurred during trust evaluating have been considered sufficiently and an approach is proposed to balances these cost and utility; Our approach enables an user to combine a variety of sources of information to cope with the inherent uncertainties on Semantic Web and each user receives a personalized set of trusts, which may vary widely from person to person.

Acknowledgment This work was partially supported by a grant from the Major State Basic Research Development Program of China (973 Program) under grant number 2003CB316906, a grant from the National High Technology Research and Development Program of China (863 Program).

References 1. B. Yu and M. P. Singh. Trust and reputation management in a small-world network. 4th International Conference on MultiAgent Systems, 2000, 449-450. 2. K. O'Hara, Harith Alani, Yannis Kalfoglou, Nigel Shadbolt. Trust Strategies for the Semantic Web. Proceedings of Workshop on Trust, Security, and Reputation on the Semantic Web, 3rd International (ISWC'04), Hiroshima, Japan, 2004. 3. M. Richardson, Rakesh Agrawal, Pedro Domingos. Trust Management for the Semantic Web. Proceedings of the Second International Semantic Web Conference, Sanibel Island, Florida, 2003. 4. S. Milgram. The small world problem. Psychology Today, 61, 1967. 5. S. P. Marsh. Formalising Trust as a Computational Concept. Ph.D. dissertation, University of Stirling, 1994. 6. Y. Gil, V. Ratnakar. Trusting Information Sources One Citizen at a Time. Proceedings of the First International Semantic Web Conference, Sardinia, Italy, June 9-12, 2002, 162-176.

Efficient Dynamic Traffic Navigation with Hierarchical Aggregation Tree Yun Bai1, Yanyan Guo1, Xiaofeng Meng1, Tao Wan2, and Karine Zeitouni2 1

Information School, Renmin University of China, 100872 Beijing, China {guoyy2, xfmeng}@ruc.edu.cn 2 PRISM Laboratory, Versailles University, 78000 Versailles, France {Tao.Wan, Karine.Zeitouni}@prism.uvsq.fr

Abstract. Nowadays, the rapid advances in wireless communications, positioning techniques and mobile devices enable location based service such as dynamic traffic navigation. Yet, it is a challenge due to the highly variable traffic state and the requirement of fast, on-line computations. This motivates us to design an intelligent city traffic control system, which is based on an extension of the Quad-tree access method that adapts better to the road networks while it maintains aggregated information on traffic density according to hierarchy levels. Based on this index, a view-based hierarchy search method is proposed to accelerate the optimal path computation. At the end, the experiment will show this index structure to be effective and efficient.

1 Introduction The advanced technologies, such as Global Positioning System (GPS), and the progress of wireless communication techniques, mobile computing, make it possible for a vehicle (here referred to as a moving object) to have sophisticated onboard wireless equipment installed at a reasonable price [1]. On this condition, location-based services are becoming more and more popular. Traffic navigation service, qua one of this kind of services, receives the special attention because of the closed relation with modern life. Besides, traffic surveillance technologies (either GPS or cell phones etc) allow monitoring and broadcasting of traffic conditions in real time. This encourages some promising applications such as dynamic traffic navigation service. It is believed that the dynamic traffic navigation will become widely spread because it can provide exact useful information on driver’s current position, optimal path to destination, traffic congestion and so on and so forth. Yet, it is a challenge due to the highly variable traffic state and the requirement of fast, on-line computations. In research side, certain work, such as [2], uses the technique of prediction, which forecasts potential congestions and thus calculates the optimal path for each moving object. It needs each moving object to provide its start time, start location and its destination. However, obtaining all the information of each object is unrealistic, and traffic conditions are difficult to forecast (e.g. traffic jams produced by any unpredictable bursting event such as an accident). Therefore, dynamic navigation techniques can not be totally based upon prediction X. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 751 – 758, 2006. © Springer-Verlag Berlin Heidelberg 2006

752

Y. Bai et al.

models. In other words, efficient dynamic navigation is a challenge due to the highly variable traffic conditions and the requirement of fast, on-line computations. This motivates us to design an intelligent city traffic control system, providing the user, always in a continuous fashion, the optimal path to destination considering traffic conditions. Although the optimal path finding problems is one of the most fundamental problems in transportation networks analysis [3, 4], here we will not attempt to propose an algorithm to solve another variation of path finding problems. Instead, we have adapted the most efficient Dijkstra algorithm [5] for shortest path calculating in order to answer path finding request. In this paper, we propose a novel indexing method, namely HAT, combining spatial indexing technology and pre-aggregation for traffic measures. On one hand, this spatial index is more suitable to road networks, and could support other kinds of location-based services especially on road network; on the other hand, the aggregated information stored in each hierarchy level may filter congested zones in advance. Then, based on this index structure, a notion of personal “view” is defined. First, it improves the performances of optimal path finding by limiting search to regions that are likely to be crossed after the current user’s location. Second, it optimizes the local memory occupancy. Finally, we derive a navigation algorithm and a Dynamic Navigation System, named DyNSA, and prove, through extensive experiments, its efficiency and effectiveness. The paper is organized as follow: Section 2 describes, in detail, a novel road index HAT; Based on this index structure, section 3 depicts an efficient method for searching optimal path within the person “view”; Section 4 details the architecture and experimentation of the system. Finally, summary is presented in Section 5.

2 Hierarchy Aggregation Tree (HAT) Navigation intensively uses the retrieval of network sections in which objects move. So, an efficient spatial index is then a key issue. A road is usually represented as a line string in a 2-dimensional space. Spatial access methods, such as R-tree, PMR Quad-tree [6] and Grid file [7], can be consequently adopted for networks’ indexing. However, R-tree indexing structure produces large overlapping MBRs (Minimum Bounding Rectangle), which makes the search inefficient Furthermore, originally tailored for indexing rectangles, applying R-tree to a network will result in large amount of dead space indexing. Although PMR Quadtree and Grid File have not the overlapping problems, the uniform geometric partition do not adapt to non-uniform road distribution in space. Since these indices are not suitable for optimal path searching in traffic environments, to improve the efficiency of query processing, we put forward a new indexing method named Hierarchy Aggregation Tree (HAT). It is based on two structures: road and region. The former is a segment of road, which has not intersection with other road except two extremities. The latter is similar to a MBR, in addition, it contains a supplement information which stores an aggregated value over it. The principal functionality of this aggregated information is to filter the regions having a high traffic density on the same hierarchy level.

Efficient Dynamic Traffic Navigation with Hierarchical Aggregation Tree

753

2.1 Basic Idea HAT is set up based on the spatial information of roads. Unlike R-tree, HAT references, edges and nodes so that it avoids dead space. Moreover, because node MBRs do not overlap, the search is more efficient. Basically, this method is inspired from PMR Quadtree’s non-overlapping index, except that the space partition in HAT may be skewed and the resulting tree will be balanced. Furthermore, for each region, HAT stores additional information on traffic density at different granularity levels. As shown later, this brings a filter capability for path finding process. 2.2 Index Building HAT is constructed by partitioning the index space recursively. The space is divided according to the distribution of network segments, by an adaptive and recursive split of space in four sub-regions. When the amount of roads, namely capacity in a leaf node N exceeds B (B is the predefined threshold for split), N is to be split and the corresponding region is to be partitioned into four sub-regions. The split method of HAT satisfies the following two rules: Rule 1: Capacities in four sub-regions should almost be the same. Suppose Max is the maximum capacity and Min is the minimal capacity of four sub-regions. Then the difference between Max and Min should not exceed predefined proportion P1, which is expressed by the following inequality: (Max-Min)≤ªMax×P1º. Symbol ª º denotes the minimal integer more than the value in it. Rule 2: The sub-regions crossed by a road should be as few as possible, namely the copies of entries for the road should be as few as possible. Suppose S is the sum of four capacities, C is the capacity of original region. S should be as close to C as possible, i.e. the difference between S and C should not exceed the predefined proportion P2, which is expressed by the following inequality: (S-C)≤ªS×P2º. The main idea of Split algorithm is to firstly partition the region into four ones of equal size; then no more modification is needed if the result satisfies Rule1 and Rule2. Otherwise the partition point is adjusted. Let N be an overflowing node, and let R be its corresponding region of capacity C(C>B). To process split operation, R is partitioned at a partition point (xs, ys) into four equal sub-regions R1, R2, R3, R4. (xs, ys) is initially set to (median, ymedian), i.e. the median point among road coordinates in R, then, it should be adjusted to fulfil the above rules. To do so, the split axis is pushed so that it minimizes segment split, as sketched in Fig.1 and Fig.2. Y

Y

Y

Equal partition

X

Partition after adjustment

X

Fig. 1. Adjusting partition for a region without crossings

Y

Equal partition

X

Partition after adjustment

X

Fig. 2. Adjusting partition for a region with crossings

754

Y. Bai et al.

By the above method, although the partition of index space is skewed, the resulting tree is balanced and the copies of roads are reduced which save the occupancy of index and the searching efficiency.

3 Dynamic Navigation Query Processing Dynamic navigation query refers to finding a path through which a user will take least time, distance or cost to reach the destination. In this section, we introduce a new method for path searching, using the available moving objects stream aggregation and HAT index. 3.1 Optimal Path Search Based on View We notice that in real world, users do not prefer detour. Thus their travel only involves parts of the whole map. Considering this, referring to the concept of “view” in relational databases, we carry out the view-based searching method. When finding the optimal path, only consider the area from the user’s current position to his/her destination, which is referred to as “reference area” shown in Fig.3. The resulting path is then found from the set of those roads crossing the reference area, which are referred to as “candidate roads”. Reference area is a logic notion. To retrieve all the candidate roads in HAT, firstly the algorithm retrieves those underlying regions contained in or intersects with the reference area. This spatial union of underlying regions is then referred to as “searching area” which is a partial view of the whole map shown in the right figure. Y

Y

D

D S

S X

X

Fig. 3. Reference area and searching Area

Since HAT corresponds to the whole map, the “view” corresponds to some nodes of HAT. These nodes remain the tree-like structure referred to as “view tree”. Consequently, not accessing the whole HAT but only parts of it will take less time implementing the dynamic navigation task. In addition, with the user becoming closer and closer to the destination, the size of the view tree will surely reduce, and release memory resources. Processing on such a smaller and smaller view tree will surely enhance the efficiency. 3.2 Hierarchy Search Based on the structure of view tree, we adopt a method similar to “drill-down” process in OLAP, named “hierarchy search”. The hierarchy search is a top down process that

Efficient Dynamic Traffic Navigation with Hierarchical Aggregation Tree

755

always chooses the region of better traffic conditions as the mid region to go through until it finds the resulting path. Considering a movement from S to D, the user may go through different regions along different paths. Actually, several cases could be distinguished. The first is when S and D belong to the same finest region, which means they are in the same leaf-node. In this case, a direct call to Dijkstra search suffices. The second case is when S and D belong to the same region, but at a higher level of the tree. We then continue to descendant nodes till S and D will appear in different sub-regions or in a leaf-node. In the other cases, when S and D belong to two different regions (denoted as Rs and Rd), we should find mid regions from S to D by selecting a sub-region (denoted as Rm) providing the rough direction of Rs→Rm→Rd. Since each region is partitioned into four ones, Rs and Rd are either adjacent to each other, or separated by one region of the same level as shown in Fig.4. If Rs and Rd are adjacent, the algorithm makes recursive calls to HierarchySearch, starting from their corresponding nodes. Whereas, if they are not adjacent, recursive calls will include one of the two sub-regions according to its aggregate value. For instance, in the right of Fig.4, suppose the density in R3 is smaller, then the determined direction will be Rs→R3→Rd . Y

Y

Rd Rs

D

Rs

S

X

D Rd

R3 S

X

Fig. 4. Different distribution of S and D

Notice that the path from one region Rs to an adjacent region Rd will necessarily pass through a road that crosses their frontier. Suppose r1, …, rm are m such roads spanning Rs and Rd. Therefore, Rs→Rd can be concretely transformed to m selections: S→(ri.x1, ri.y1)→(ri.x2, ri.y2)→D(1≤i≤m). The endpoint (ri.x1, ri.y1) is in Rs and the other endpoint (ri.x2, ri.y2) is in Rd. Each selection results in different paths (denoted as pathi). For each selection, we issue the same hierarchy search under node N to find the optimal path (denoted as pathi.sub-path1) from S to (ri.x1, ri.y1) and the optimal path (denoted as pathi.sub-path2) from (ri.x2, ri.y2) to D. Then, we join pathi.sub-path1 and pathi.sub-path2 by ri and get the entire pathi. Finally, we select the one of minimal cost among every pathi as the final optimal path from S to D.

4 Performance Evaluation 4.1 The Architecture of DyNSA System Adopting the aforementioned index structure and navigation method, we have designed and implemented an intelligent city traffic control system, named DyNSA (Dynamic Navigation System based on moving objects stream Aggregation), which aims at providing high quality of dynamic navigation services for Beijing Olympics in 2008. An overview of this system architecture is shown in Fig.5.

756

Y. Bai et al.

This system consists of multiple managers: TIR (Taffic Information Reciever), TIM (Traffic Information Manager) and Query Processor. TIR is an information receiver, which continuously sends traffic information to TIM, it can be considered like current TMC, which captures real traffic information. In TIM, aggregated information of each road segment is timely refreshed according to current traffic information and the region aggregation on each HAT’s hierarchy level is thus recalculated. Query Process is in charge of users’ navigation requests. When a navigation request is coming, it will send it to a View Manager, and then a corresponding view tree on HAT will be created. The Service Agent will perform the view based hierarchy searching on it, and finally, the optimal path will return to the user. Since the View Manager keeps a consistency between the view tree and the HAT, a recalculation will happen if necessary and will be sent to the user until she/he arrives to her/his destination. The underlying index structure of RIM and Query Processor are both based on HAT. View Tree

HAT View Manager

Service Agent



.

RIM (Road Information Manager)

TIC (Traffic Information Receiver)

Service Agent Map

Query Processor

Fig. 5. System Architecture of DyNSA

4.2 Performance Studies DyNSA was implemented in Java, so that both PMR Quadtree and Grid index were also implemented respectively in the same platform. For the experimental data, we have adopted several real road network datasets on which some moving object datasets are generated by Brinkhoff’s generator [8]. The parameters used are summarized in Table 1, where values in bold denote default used values. The performance studies are concentrated in 4 points: (1) Moving Object’s Localization Efficiency; (2) Aggregation computing efficiency; (3) View trees’ creating performance; (4) Searching fastest path performance. Table 1. Parameters of the experiments Parameter

Setting

Meaning

Page size

4K

Leaf capacity

90

X

[3420584, 4725764]

The size of disk page and leaf node Maximum number of road segments one leaf node (HAT, PMR Quardtree) or one bock (Grid) could contains The range of coordinate x in indexed space domain

Y

[4077155, 5442669]

The range of coordinate x in indexed space domain

N

3000,…12000,…, 21000, 24123

The number of road segments in the map

M

50k,…250k,…500k

The amount of generated moving objects

Referenced area size

1%,…9%,…81%

The proportion of referenced area related to total space

Efficient Dynamic Traffic Navigation with Hierarchical Aggregation Tree

757

Due to the limited paper space, we give only the comparison results respectively in figure6-9. These figures show evidently that HAT performs better that the other two indexing methods not only in the aspect of structure size, but also in querying processing.

4XHU\&38WLPH V

4XHU\,2V



+$7 3054XDGWUHH *ULG)LOH

   

+$7 3054XDGWUHH *ULG)LOH





 

N

N

N

N

  

N

N

N

N

N

1XPEHURIPRYLQJREMHFWV

1XPEHURIPRYLQJREMHFWV

N



  







Fig. 7. Aggregation Computing



3URSRUWLRQRIUHIHUHQFHDUHD



+$7 3054XDGWUHH *ULG)LOH

 

+LHUDUFK\ YLHZ +LHUDUFK\ 'LMNVWUD YLHZ 'LMNVWUD



&383URFHVVLQJWLPH

1XPEHURIOHDIQRGHV

1XPEHURIWRWDOQRGHV

+$7 3054XDGWUHH *ULG)LOH





1XPEHURIURDGVHJPHQWV

Fig. 6. Efficiency of Query Performance





(b) CPU time

(a) I/O



3054XDGWUHH







+$7



&38WLPH V

















 











3URSRUWLRQRIUHIHUHQFHDUH

Fig. 8. Creating the view tree Performance











3URSRUWLRQRIUHIHUHQFHDUHD



Fig. 9. Optimal Path Finding

5 Conclusion This paper has proposed a novel indexing technique HAT, in order to improve the efficiency of dynamic navigation techniques. In essence, it is a balanced Quadtree that adapts to the road network distribution, attached with pre-aggregated traffic measurement by its spatial hierarchy. This feature allows the navigation process to filter the areas, which have a high density traffic, without drilling down to check detail traffic information of each road segment. Thus, it can efficiently perform path searching process from macro to micro. Another contribution of this paper is the concept of “view tree” that restricts the optimal finding computation only within a part of the data structure that interests the user. Based upon the above techniques, we have proposed a system architecture, DyNSA, allowing dynamic navigation service in the prospect of Olympic Games in Beijing. The implementation and the experimentation results have validated our approach and demonstrated its efficiency and effectiveness. In perspective, we will test the HAT index in the context of data warehouses, and particularly to optimize OLAP. It would be also interesting to explore the possibility to combine real-time data on the traffic state and historical data to allow prediction of traffic state and adapt the navigation accordingly.

758

Y. Bai et al.

Acknowledgements This research was partially supported by the grants from the Natural Science Foundation of China under grant number 60573091, 60273018; China National Basic Research and Development Program's Semantic Grid Project (No. 2003CB317000); the Key Project of Ministry of Education of China under Grant No.03044; Program for New Century Excellent Talents in University(NCET).

References 1. Barbara D.: Mobile Computing and Databases - A Survey. IEEE Transactions on Knowledge and Data Engineering, (1999), Jan/Feb, Vol. 11(1) 108-117. 2. Chon, H., Agrawal, D., and Abbadi, A. E.: FATES: Finding A Time dEpendent Shortest path. Proc. of Int. Conf. on Mobile Data Management, (2003), 165-180. 3. Deo, N. and Pang, C.Y.: Shortest path algorithms: taxonomy and annotation, Networks, (1984) vol. 14, 275-323. 4. Fu, L. and Rilett, L.R.: Expected shortest paths in dynamic and stochastic traffic networks. Transportation Research, Part B: Methodological, (1996), vol. 32(7), 499-516. 5. Dijkstra, E. W.: A note on two problems in Connection with graphs, Numerische Mathematik, (1959), vol. (1), 269-271. 6. Tayeb, J., Ulusoy, O., Wolfson, O.: A Quadtree-based Dynamic Attribute Indexing Method. The Computer Journal, (1998), 185-200. 7. Nievergelt, J., Hinterberger, H.. The Grid File: An Adaptable, Symmetric Multikey File Structure. Proc. of ACM Trans. On Database Systems, Vol9(1), (984), 38-71. 8. Brinkhoff, T.: Network-based Generator of Moving Objects. http://www.fh-oow.de/institute/ iapg/personen/brinkhoff/generastor

A Color Bar Based Affective Annotation Method for Media Player Chengzhe Xu, Ling Chen, and Gencai Chen College of Computer Science, Zhejiang University, Hangzhou 310027, P.R. China [email protected]

Abstract. Multimedia technology has been developing quickly since the computer was invented. Nowadays, more and more people use media players to watch movies and TV shows on their computer. But it seems that we are too busy sometimes to spare nearly two hours watching a movie from beginning to end. We’d like to find and watch the most exciting episodes that we are interested in as soon as possible. The media players we use often only provide a slider bar for locating, which is rather low in efficiency. Respecting this, an affective annotation method for video clips is proposed in this paper and the method is implemented in an affectively annotated media player. In addition, two experiments were set up to evaluate the proposed method. Experiment results indicate that affective annotation facilitates the subjects in quick locating target events and helps them understand the scenarios better.

1 Introduction Almost all popular media players, like Windows Media Player and RealPlayer, have beautiful convenient user interfaces and they provide slider bars for users to locate to any time point of the video clip that is playing. However, users couldn’t manage to find the episodes they are interested in quickly, for the conventional slider bar does not provide any information about the episodes or events except the time. There are all kinds of annotations applied to various researching areas: Qian, etc [1] integrated photo annotating tasks into instant messaging applications; Abowd, etc [2] provided a means for archiving and annotating large collections of informal family movies; Ramos, etc [3] explore a variety of interaction and visualization techniques for fluid navigation, segmentation, linking, and annotation of digital videos; Costa, etc [4] implemented a VAnnotator which allowed user to annotate audiovisual content using a timeline model. People use annotations. Emotion is usually associated with the behaviors of human being. People convey and change it along with spoken languages, manners or facial expressions. The content of various kinds of video materials has tight association with the emotion of roles. Hugo Liu, etc [5] established a textual affect sensing engine and annotated the sentences in a text document with emotions. They used colors to represent emotions and sequenced them into a color bar, which represented the progression of affect through a text document. Experiments were set up to evaluate their method. The results demonstrated that it facilitated the within-document information foraging activity. X. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 759 – 764, 2006. © Springer-Verlag Berlin Heidelberg 2006

760

C. Xu, L. Chen, and G. Chen

In this paper, we propose an affective annotation method and implemented it in an affectively annotated media player. In addition, two experiments were set up to evaluate the proposed method. The first experiment tries to figure out whether there is any improvement in the speed while locating to an important scene with the help of the affective annotation method. The other one tries to examine how well the user would interpret the scenario of the movie clip after watching it in restricted time.

2 Affective Annotation Method for Video Clips There is such a large vocabulary we use to describe emotion but no one can tell that how many categories all emotions can be divided into. The metrics about emotion can be instrumentally measured from speeches, face expressions or any other way people convey emotion, but we still describe emotion and interpret it in impressionistic terms, which may be highly subjective [7]. Different methodologies yield different classifications of emotional states, researchers and experts keep exploring and have proposed many different schemes [8, 9]. As no clear standard has been set, in this paper, we group human affect into five categories: happiness, sadness, fear, anger and neutral affect.

(a) With affective annotation

(b) Without affective annotation Fig. 1. Color bar

Just like affective classification, there is a large body of literature on the psychology of color [6], and results of researches vary largely from different situations. Though no color encoding can claim to be completely intuitive [5], color has its predominance for presentation on user interface: simple and clear. We use a color bar along with a slider bar to annotate roles’ affect at different time points in a movie (or other media). Fig. 1a shows an example of the color bar with affective annotation. Color blocks on it represent various emotions. And they span the time the particular emotions occurs. Yellow blocks on it represents happiness, while red, blue, green and brown stand for anger, sadness, fear and neutral affect, referencing [10], in the implementation of color bar. In addition, gaps between those color blocks represent absence of corresponding role. On the other hand, Fig.1b shows another color bar which only identifies the role’s presence (represented with brown). These two conditions are to be compared in the experiments to evaluate the effect of affective annotation. The thumb of the slider bar slides beneath the color bar. Like the conventional slider bar, it selects and tells time position of the movie (or other media). What’s more, along with the color bar, it also tells the emotional state of the role at current time and any time!

A Color Bar Based Affective Annotation Method for Media Player

761

EmoPlayer is an experimental platform which implements the color bar based affective annotation method. As shown in Fig. 3, it has the combo box select color bars between different roles. The affective information is saved in an XML file and it’s organized in the format as Fig. 2.







......



......



......

Fig. 2. XML file format

Fig. 3. Snapshot of EmoPlayer

3 Experiment In order to evaluate the color bar based affective annotation method for video clips, two experiments were set up. The first experiment tries to figure out whether there was any improvement in the speed while locating to an important scene with the help of the affective annotation. The other one tries to examine how well the user would interpret the story of the movie clip after watching it in restricted time. Three movie clips were hunted for the experiments. The general principle for choosing movie clips was that they should be various, changeful and expressed straightforwardly in emotion. One of the three clips was used for practicing before the experiments while the other two were used for experiment 1 and 2, respectively. The latter two for formal experiments were dubbed in Chinese. In order to annotate the video clips selected for the experiment, a three-person group was set up. The members are sophisticated with emotion expression. They did the annotating job and according to the subjective evaluation rated by the subjects after experiments, it was considered accurate. Before the beginning of the experiments, the interface was introduced and every operation was explained explicitly to each subject. After that, they were given sufficient time to get familiar with EmoPlayer, including how the colors represented the emotions, which color corresponded to which emotion, how to use the slider bar and get the current emotional information and so on. The subjects were told to work as quickly as possible.

762

C. Xu, L. Chen, and G. Chen

3.1 Experiment 1 In this experiment, a 2 (using affective annotation or not) × 2 (familiar or not) between group design has been employed. Five target events were selected from the movie clip before the experiment. A region lasting about 7 seconds around an event was marked as acceptable region. When the thumb of the slider bar got into this region, the subject would be notified that he/she had reached the target event. The mean TCTs (Task Completion Time) were studied in this experiment. 24 subjects were recruited and each of them was presented a movie clip and asked to locate to five target events, one after another. Before each event, he/she was given a locating quest, a brief description of the target event such like ‘the point when the lovers start to dance’. When he/she located to a time point within the acceptable region of the event, he/she would be notified and asked to prepare for the next event at the same time, until all of the five events were found out. The subjects were asked to operate as quickly as possible at every event. Fig. 4 shows the mean TCT over various conditions. The results of ANOVA show significant effect of affective annotation, F(1, 20) = 17.313, pposition (fj), it means ci displaying the former position than cj. Definition 9 (Length function). If parameter is content c, length(c) return the number n

of words in c, and if parameter is content set C, length(C ) = ¦ length (ci ) . i =1

3.3.1 Finding Topic
In most Web pages, the whole page or a local area already has a topic, which is displayed with a bigger font size or a different color. In this case, the emphasis of the E operation is how to find an existing topic. According to Definition 4, b = {(fi, ci) | 1 ≤ i ≤ n, fi ∈ F, ci ∈ C}, and the algorithm for finding the topic of b is as follows.
Step 1. Traverse the set F and find the element fi satisfying font(fi) > font(fj) for all fj ∈ F (i ≠ j).
Step 2. Let θ be the maximum number of words in a topic; then

topic_b = ci        if length(ci) ≤ θ
topic_b = trim(ci)  if length(ci) > θ

Here trim(ci) is a function that trims ci to a phrase of at most θ words; before trim(ci) is invoked, stopwords are deleted from ci according to the stoplist [10].
3.3.2 Extracting Topic
In some Web pages or blocks of a Web page, a topic exists but is not emphasized with a bigger font, or there is no topic at all. In these cases, the emphasis of the E operation is how to extract a topic from the existing words. Many Web mining algorithms are useful for extracting a block's topic, but they are not suitable for blocks that contain little literal information [14]. The algorithm for extracting the topic of b is as follows. Let δ be a threshold denoting the minimal number of words suitable for Web mining algorithms.

topic_b = trim(ci)     if length(ci) < δ and position(fi) > position(fj) for all fj ∈ F (i ≠ j)
topic_b = extract(ci)  if length(ci) ≥ δ

In Algorithm 1, lines 1 and 2 perform preprocessing using the stoplist [8] and stemming [7]. tf(d, t) at line 6 is the absolute frequency of term t ∈ T in document d ∈ D, where D is the set of documents and T = {t1, t2, ..., tm} is the set of all different terms occurring in D [13].

Algorithm 1: Content extract
Input: C, all content of block b; a stoplist file S; a frequency threshold ε.
Output: a set T consisting of all words of the topic phrase.
1) Delete stopwords from C according to the stoplist;
2) Stem C using a stemming algorithm;
3) T = {};
4) For i = 1 to θ step 1
5)   wordi = the word of C whose index is i;
6)   If wordi ∉ T and tf(C, wordi) > ε
7)     Put wordi into T;
8)   End;
9) End;
10) Return T;
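To make the procedure concrete, here is a minimal Python sketch of Algorithm 1. The toy stoplist, the crude suffix-stripping stemmer (standing in for the Porter stemmer of [7]), and the parameter values θ and ε are illustrative assumptions, not the authors' implementation.

from collections import Counter

def simple_stem(word):
    # Placeholder for a real stemmer (e.g., Porter); strips a few common suffixes.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def extract_topic_words(content, stoplist, theta=6, epsilon=1):
    # Lines 1-2: drop stopwords, then stem the remaining words.
    words = [simple_stem(w.lower()) for w in content.split()
             if w.lower() not in stoplist]
    tf = Counter(words)            # absolute term frequencies tf(C, t)
    topic = []                     # the set T, kept in document order
    for word in words[:theta]:     # lines 4-9: scan at most theta candidate words
        if word not in topic and tf[word] > epsilon:
            topic.append(word)
    return topic

stoplist = {"the", "a", "of", "and", "in"}
print(extract_topic_words("pricing of laptops and laptops reviews", stoplist))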


3.4 Resolution Adaptation
The hierarchy-tree of the page built in Sections 3.1 and 3.2 is used to generate layout views. The lower the resolution is, the deeper the hierarchy-tree is used. Each sub-tree starting from the root node of the hierarchy-tree maps to a layout view that provides navigation for a range of resolutions. Starting from the top of the hierarchy-tree, when the depth is 1 only the root node is used to generate the layout view, which means no navigation is required at this resolution. When the depth is 2, a layout view is generated from the child nodes of the root node. Each time the depth increases, a more detailed layout view is generated to suit navigation at a lower resolution. If a resolution lies between two resolution thresholds, the layout view follows the higher threshold. An example of resolution adaptation is shown in Fig. 3.

Fig. 3. Hierarchy-tree created through parsing the example page

3.5 Generating Layout View
When the hierarchy-tree is ready, the topic of every block has been extracted and the layout view's level is defined. All of these are still objects in memory rather than real pages. The last step is to generate a new document with the necessary HTML elements and the corresponding topics, which expresses the layout view of the original page.
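A minimal sketch of the depth-selection rule, assuming a hypothetical threshold table and a hierarchy-tree represented as nested dictionaries; the thresholds actually used by the proxy are not specified here.

RESOLUTION_THRESHOLDS = [   # (threshold width in px, tree depth to use) - made-up values
    (320, 4),
    (400, 3),
    (600, 2),
    (1024, 1),
]

def depth_for_resolution(width):
    # A resolution between two thresholds uses the view of the higher threshold
    # (the shallower, less detailed layout view).
    for threshold, depth in RESOLUTION_THRESHOLDS:
        if width <= threshold:
            return depth
    return 1   # wider than every threshold: root only, no navigation view needed

def layout_view(node, depth):
    # Collect block topics down to the chosen depth of the hierarchy-tree.
    # 'node' is assumed to be {"topic": str, "children": [...]}.
    if depth <= 1 or not node["children"]:
        return [node["topic"]]
    view = []
    for child in node["children"]:
        view.extend(layout_view(child, depth - 1))
    return view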

4 Implementation and Case Study
In this research, an Automatic Layout Adaptation Proxy (ALAP) is designed and implemented. As a Web proxy, ALAP automatically identifies the client terminal's resolution (simulated here by the browser size), compares it with the resolution of the page the client requests, and then generates a layout view automatically at the server side

Fig. 4. 600*480 px

Fig. 5. 400*300 px

Fig. 6. 320*240 px


and sends it to the client together with the original page. The smaller the resolution of the visitor's terminal is, the more detail the layout view has. Three examples are given in Fig. 4 to Fig. 6, which are layout views of the motivating example under resolutions of 600*480 px (TV), 400*300 px (Windows Mobile), and 320*240 px (Palm), respectively.

5 Conclusions and Future Work
With the growing variety of user terminals, there is a strong demand for Web pages that adapt to terminals of various resolutions with good readability; this paper proposed an effective layout adaptation method for this problem. Because the adaptation does not require any change to the original page itself, the method is usable for any well-formed HTML page. The presented implementation showed that this approach solves the resolution problem well. Future work includes improving the proxy's performance: for example, building a more efficient parser to improve parsing speed, and re-architecting the proxy's concurrency model so that threads share memory, instead of the current one-thread-one-memory model, in order to save memory.

Acknowledgements This work is supported by National Natural Science Foundation of China (No. 60573090, 60503036, 60473073).

References
1. I. Beszteri and P. Vuorimaa: Automatic Layout Generation with XML Wrapping. APWeb 2003: 101-107
2. Extensible Markup Language (XML) 1.0 (Second Edition), W3C Recommendation, 6 October 2000, http://www.w3c.org/TR/2000/REC-xml-20001006, 2000
3. N. Frayling, R. Sommerer, K. Rodden, and A. Blackwell: SmartView and SearchMobil: Providing Overview and Detail in Handheld Browsing. Mobile HCI Workshop on Mobile and Ubiquitous Information Access 2003: 158-171
4. S. Gupta, G. Kaiser, and S. Stolfo: Extracting Context to Improve Accuracy for HTML Content Extraction. In: WWW (Special Interest Tracks and Posters) 2005: 1114-1115
5. S. Gupta, G. Kaiser, D. Neistadt, and P. Grimm: DOM-based Content Extraction of HTML Documents. In: WWW 2003: 207-214
6. http://www.apache.org/~andyc/neko/doc/html/, 2004-06-10
7. http://www.dcs.gla.ac.uk/idom/ir_resources/linguistic_utils/porter.c
8. http://www.dcs.gla.ac.uk/idom/ir_resources/linguistic_utils/stop_words
9. http://www.java.net, 2005-6-30
10. http://www.openxml.org
11. http://www.w3.org/DOM/
12. E. Kaasinen, M. Aaltonen, J. Kolari, et al.: Two Approaches to Bring Internet Service to WAP Devices. In: Proc. of the 9th Int'l World Wide Web Conf. on Computer Networks. Amsterdam: North-Holland Publishing Co., 2000, 231-246


13. M. Kantrowitz, B. Mohit, and W. Mittal: Stemming and its Effects on TFIDF Ranking. In: Proc. of the 23rd ACM SIGIR Conf., 2000, 357-359
14. A. Rahman, H. Alam, and R. Hartono: Content Extraction from HTML Documents. In: 1st Int. Workshop on Web Document Analysis (WDA 2001), 2001
15. A. Tombros, Z. Ali: Factors Affecting Web Page Similarity. In: ECIR 2005: 487-501

XMine: A Methodology for Mining XML Structure
Richi Nayak and Wina Iryadi
School of Information Systems, Queensland University of Technology, Brisbane, Australia
[email protected]
Abstract. XML has become a standard for information exchange and retrieval on the Web. This paper presents the XMine methodology, which groups heterogeneous XML documents into separate, meaningful classes by considering both linguistic similarity and hierarchical structure similarity. The empirical results demonstrate that the semantic and syntactic relationships and the path-name context of elements play an important role in producing good-quality clusters.

1 Introduction
The potential benefits of the rich semantics of XML have been recognized extensively for enhancing document handling over the vast number of documents on the Web. XML documents are usually associated with a schema definition that describes the structure of the document. A schema clustering process improves document handling by organising heterogeneous XML documents into groups based on structural similarity, and the similarity of corresponding elements between XML documents can be computed efficiently using the relevant XML schemas. We present a methodology, XMine, that quantitatively determines the similarity between heterogeneous XML schemas by considering the linguistic similarity and the context of the elements as well as the hierarchical structure similarity, and groups the schemas into separate classes. Research on measuring the structural similarity and clustering of XML documents is gaining momentum [2,3,4]. XMine is closest to a number of schema matching approaches based on schema-only information, such as XClust, Deep, Cupid, COMA, and SF. However, XMine derives the structure similarity from the maximal similar paths found by an adapted sequential pattern mining algorithm [1]; this eliminates the element-to-element matching process, making XMine an efficient and accurate method. Lee et al. [3] also use a sequential mining approach to quantify the structural similarity between XML documents, but they define the structural similarity only as the ratio between the maximal similar paths and the extracted paths of the base document. They do not include the hierarchy position of an element, leading to erroneous matches between two names occurring at different positions or in different contexts. XMine overcomes this by including the path name coefficient (PNC).

2 The XMine Methodology
The XMine methodology (Figure 1) starts with the Structure Analyser, which transforms the structure of a schema into a suitable tree model representation. This module


performs simplification analysis of the schema trees in order to deal with nesting and repetition problems. The Element Analyser calculates the linguistic similarity of each pair of element names based on their semantic and syntactic relationships. The semantic relationship (e.g., movie and film) is measured first, exploiting the degree of similarity between the two token sets of the names by looking up the WordNet thesaurus and a user-defined library, as in [4]. If no common elements are identified, the syntactic relationship (e.g., ctitle and title) is then measured using a string edit distance function [5]. The lsim of two sets of name tokens is the average of the best similarity of each token with a token in the other set.
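A small Python sketch of how lsim can be computed as the average best token match, under the assumption that the semantic lookup is a tiny synonym dictionary and the syntactic score is a normalized edit-similarity from difflib; both are stand-ins for the WordNet/user-defined library and the edit distance function of [5].

from difflib import SequenceMatcher

SYNONYMS = {frozenset({"movie", "film"}): 1.0}   # hypothetical user dictionary

def token_sim(a, b):
    if a == b:
        return 1.0
    if frozenset({a, b}) in SYNONYMS:            # semantic relationship
        return SYNONYMS[frozenset({a, b})]
    return SequenceMatcher(None, a, b).ratio()   # syntactic relationship

def lsim(tokens1, tokens2):
    # Average of each token's best match with a token from the other set,
    # taken in both directions (one possible reading of the definition above).
    if not tokens1 or not tokens2:
        return 0.0
    best1 = [max(token_sim(t, u) for u in tokens2) for t in tokens1]
    best2 = [max(token_sim(t, u) for u in tokens1) for t in tokens2]
    return (sum(best1) + sum(best2)) / (len(best1) + len(best2))

print(lsim(["movie", "title"], ["film", "ctitle"]))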

Fig. 1. The architecture of the XMine methodology (XML schemas pass through the Structure Analyser, Element Analyser, Maximally Similar Paths Finder, and the Schema Similarity (Semantic & Structure) Matrix Processor, supported by a WordNet/user-defined dictionary, and are finally grouped into schema clusters by Constrained Agglomerative Clustering)

The Maximally Similar Paths Finder identifies the paths and elements that are common between the hierarchical structures of pairs of schemas. We adapt the sequential pattern mining algorithm of [1] to infer similarity between element structures. The structure of a schema tree is represented by a set of path expressions. A path is a unique sequence of element nodes following the containment links from the root to the corresponding node, obtained by traversing the schema tree from the root to a leaf node. A path expression is denoted as <x1, x2, ..., xn>, where x1 is the root node and xn is a leaf node. Let the set of path expressions, PE, in a schema tree be {p1, p2, ..., pn}. In a set of paths, a path pj is maximal if it is not contained by any other path expression, or if no super-path of pj is frequent. The overall degree of similarity based on the element and structure similarity is then computed in the Schema Similarity Matrix Processor. The maximal similar paths serve as the basis for the element structural similarity, which emphasizes the hierarchical information of an element: the context of an element, defined by its ancestor (if it is not the root) and descendant elements positioned in the path expressions, is measured. The element semantic similarity, which involves the linguistic and constraint similarity between elements contained in the maximal similar paths, is also computed. Let us assume two schemas, a base schema (schemab) and a query schema (schemaq), with corresponding base tree TB and query tree TQ. A unique set of path expressions is obtained by traversing both the base and query trees, denoted PEB and PEQ respectively. A set of maximal similar paths (MPE) represents the common paths that exist in both the base and query trees. The corresponding full path expressions that contain an MPE are identified from both the PEB and PEQ sets. The


similarity coefficient of a particular maximal similar path (MPEk), maxpathSim, uses the similarity coefficient of its corresponding base and query path expressions, which refers to the path similarity coefficient, pathsim.

Similarity between two path expressions (pathSim) is computed by taking into account the similarity coefficients between the linguistic names, constraints, and path names of every element in both PEBi and PEQj. This enforces a one-to-one mapping of elements in the path expressions, that is, an element in PEBi matches at most one element in PEQj.

The linguistic and constraint similarity of the elements is derived from the base element similarity coefficient, baseSim, which is obtained as a weighted sum of the linguistic similarity coefficient, lSim, and the constraint similarity coefficient, constraintSim, of the elements:

baseSim(e1, e2) = w1 * lSim(e1, e2) + w2 * constraintSim(e1, e2), where the weights satisfy w1 + w2 = 1. The cardinality constraint coefficient, constraintSim, is determined from the cardinality-constraint compatibility table used in [4]. The path name coefficient, PNC, measures the degree of similarity of two element names in two given paths. PNC differentiates two elements with the same name but at different level positions in the common paths (e.g., book.name and book.author.name) or with different contexts (e.g., a patient's name and a physician's name). The context of an element e is given by the path from the root element to e, denoted e.path(root); thus the path from the root element to element e is an element list e.path(root) = {root, epi, ..., epj, e}.
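The following sketch illustrates how baseSim and a one-to-one pathSim could be combined. The weights, the constraint-compatibility stub, the token-overlap stand-in for lSim, the greedy matching strategy, and the normalization by the longer path are all assumptions made for illustration; the PNC factor is omitted.

def lsim_tokens(t1, t2):
    # Crude token-overlap stand-in for the linguistic similarity lSim.
    s1, s2 = set(t1), set(t2)
    return len(s1 & s2) / max(len(s1 | s2), 1)

def constraint_sim(e1, e2):
    # Stand-in for the cardinality-constraint compatibility table of [4].
    return 1.0 if e1.get("card") == e2.get("card") else 0.5

def base_sim(e1, e2, w1=0.7, w2=0.3):
    return w1 * lsim_tokens(e1["tokens"], e2["tokens"]) + w2 * constraint_sim(e1, e2)

def path_sim(path_b, path_q):
    # One-to-one mapping: each element of path_b matches at most one unmatched
    # element of path_q, greedily taking the best remaining candidate.
    unmatched = list(range(len(path_q)))
    total = 0.0
    for eb in path_b:
        if not unmatched:
            break
        j = max(unmatched, key=lambda k: base_sim(eb, path_q[k]))
        total += base_sim(eb, path_q[j])
        unmatched.remove(j)
    return total / max(len(path_b), len(path_q))   # normalisation is an assumption

def elem(name, card=None):
    return {"tokens": name.split("."), "card": card}

print(path_sim([elem("book"), elem("author"), elem("name")],
               [elem("book"), elem("name")]))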

Every schema similarity value between each pair of schemas is mapped into the schema similarity matrix. This matrix becomes the input to the clustering process. XMine uses the constrained agglomerative clustering technique [6] to group schemas similar in structure and semantics to form a hierarchy of schema classes. The similarity between two schemas is computed by:


schemaSim(schemab, schemaq) = ( Σ_{k=1}^{|MPE|} maxpathSim(MPEk) ) / max(|PEB|, |PEQ|)

In the final phase, the discovered schema patterns are visualized as a tree of clusters called a dendrogram. This visualization facilitates the generalization and specialization of the clusters in order to develop an appropriate schema class hierarchy. Each cluster, consisting of a set of similar schemas, forms a node in the hierarchy, where all nodes are at the same conceptual level. Each cluster may be further decomposed into several schema sub-clusters, forming a lower level of the hierarchy, and clusters may also be grouped together to form a higher level of the hierarchy. A new schema can then be generalized: first, the schema is generalized to the identifier of the lowest subclass to which it belongs; the identifier of this subclass can then, in turn, be generalized to a higher-level class identifier by climbing up the class hierarchy. Similarly, a class or a subclass can be generalized to its corresponding superclasses by climbing up its associated schema class hierarchy.
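For illustration, a plain (unconstrained) agglomerative clustering of the schema similarity matrix with SciPy is sketched below to show how a dendrogram and a flat k-cluster cut are obtained; the constraint handling of [6] is not reproduced, and the sample matrix is made up.

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_schemas(schema_sim, k):
    # schema_sim: symmetric schema similarity matrix with 1.0 on the diagonal.
    distance = 1.0 - np.asarray(schema_sim, dtype=float)
    np.fill_diagonal(distance, 0.0)
    condensed = squareform(distance, checks=False)
    dendrogram = linkage(condensed, method="average")   # hierarchy of clusters
    return fcluster(dendrogram, t=k, criterion="maxclust")

sim = np.array([[1.0, 0.9, 0.2],
                [0.9, 1.0, 0.1],
                [0.2, 0.1, 1.0]])
print(cluster_schemas(sim, k=2))   # e.g. [1 1 2]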

3 Empirical Evaluation
180 schemas from various domains and sources, with nesting levels of 2-20 and node counts varying from 10 to 1000, are used in the experiments. The validity and quality of the XMine clustering solutions are verified using two common evaluation methods: (1) the FScore measure and (2) the intra-cluster and inter-cluster quality. The 9-cluster solution shows the best FScore (figure 2), and when the process reaches the 13-cluster solution the clustering quality stabilizes. XMine maximizes the intra-class similarity by decreasing the average scattering compactness of clusters as the number of clusters increases (figure 3): the greater the number of clusters specified in the solution, the further clusters are decomposed into smaller subclusters containing more highly similar schemas. Figure 4 also shows that the average external similarity between clusters decreases as the number of clusters increases. As the number of clusters increases, smaller

Fig. 2 & 3. FScore and Intra-class similarity Performance


clusters are produced, consisting of highly similar schemas that are therefore highly dissimilar from the schemas in other clusters. Based on these observations, the 13-cluster solution provides the optimal clustering model for the input data set. XMine is also examined for its sensitivity in computing the schema similarity coefficient (schemaSim). Figure 5 shows that the effect of the PNC on clustering is very significant: without the PNC, element names with the same semantics that occur at different positions in the hierarchy path (e.g., book.title and book.author.title) cannot be identified and discriminated. Without the semantic relationship, XMine is still able to handle the linguistic similarity between element names relatively more effectively (figure 7) than without the syntactic relationship (figure 6). Therefore, as far as element names are concerned, the syntactic similarity measure can in many cases be more reliable than the semantic similarity measure.

Fig. 4 & 5. Inter-class similarity & Influence of Path Name coefficient

Fig. 6 & 7. Effect of Syntactic and semantic relationships on clustering

Fig. 8 & 9. The cluster decomposition for the 9-cluster and 13-cluster solutions (hierarchies rooted at All (182), with classes such as Travel, Flight, Property, Publication (Article, Book, Proceedings, Journal), Health, Hotel, Booking, Info, Fare, Automobile, General, and Unclassified; each node is labelled with its class name and size)


Figures 8 & 9 display the cluster decomposition for 9 and 13 clusters. The shaded nodes in the hierarchy represent the actual clusters of schemas, while the unshaded nodes represent generalization classes of the lower-level schema classes. Each node is labelled with the class name and the size of the class. As the clustering process progresses, it achieves disjoint and very specific classes of documents (i.e., fewer unclassified documents). The classes eventually become very small and may no longer be sufficient to be considered independent classes, as they may hold only very specific schemas (as happens in the case with 18 clusters); such clusters generally consist of a very small number of members, i.e., noise and outliers.

4 Conclusions
This paper presented the XMine methodology, which clusters schemas by considering both the linguistic and the structural information of elements in the maximal similar paths, as well as the context of an element, defined by its level position among other elements in the path expressions. The context of elements takes into account elements that do not lie at the same level of the hierarchy trees but are nevertheless similar. The experimental evaluation shows the effectiveness of XMine in categorizing heterogeneous schemas into relevant classes that facilitate the generalization of an appropriate schema class hierarchy. The sensitivity evaluation shows that the XMine pre-processing components highly influence the quality of the clusters. The current implementation and experiments of XMine use XML DTDs as the schema definition language. However, XML Schema is likely to replace DTDs in the future; the shift from DTDs to XML Schema is relatively straightforward, with additional constraint procedures to be developed in the XMine pre-processing phase to deal with the semantic extensions provided by XML Schema. XMine's element analyser can also be extended by categorizing elements into similar semantic and syntactic concepts. The purpose of element categorization is to reduce the number of element-to-element comparisons: categorizing elements based on their data types and linguistic content will accelerate the element comparison process by matching only elements that belong to the same categories.

References 1. Agrawal, R., & Srikant, R. (1996). Mining Sequential Patterns: Generalizations and Performance Improvements. Proceeding of the 5th International Conference on Extending Database Technology (EDBT'96), France. 2. Bertino, E., Guerrini, G. & Mesiti, M. (2004). A Matching Algorithm for Measuring the Structural Similarity between an XML Document and a DTD and its applications. Information Systems, 29(1): 23-46, 2004. 3. Lee, J. W., Lee, K., & Kim, W. (2001). Preparations for Semantics-Based XML Mining. The 2001 IEEE International Conference on Data Mining (ICDM'01), Silicon Valley, CA. 4. Lee, L. M., Yang, L.H., Hsu, W., & Yang, X. (2002). XClust: Clustering XML Schemas for Effective Integration. The 11th ACM International Conference on Information and Knowledge Management (CIKM'02), Virginia.


5. Rice, S. V., Bunke, H., & Nartker, T.A. (1997). Classes of Cost Functions for String Edit Distance. Algorithmica, 18(2), 271-280. 6. Zhao, Y., & Karypis, G. (2002, November 4-9, 2002). Evaluation of Hierarchical Clustering Algorithms for Document Datasets. The 2002 ACM CIKM International Conference on Information and Knowledge Management, USA.

Multiple Join Processing in Data Grid*
Donghua Yang 1, Qaisar Rasool 1, and Zhenhuan Zhang 2
1 School of Computer Science and Technology, Harbin Institute of Technology, China
2 Production Research & Engineering Institute of Daqing Oilfield Company, China
{yang.dh, qrasool}@hit.edu.cn, [email protected]

Abstract. In this paper, we address the problem of multiple join operations in a data grid environment. To ease data transfer among grid nodes, we propose an n-way relation-reduction algorithm that reduces the relation tuples before the join operations are executed. A new method is developed that can accurately estimate the cardinalities of the join results at each step. The experimental and analytical results show the effectiveness of the proposed multiple join algorithm in minimizing the total amount of data transmission and in joining the data in parallel and efficiently, thereby improving query response in the data grid.

1 Introduction
A data grid [1,2] is a distributed architecture for data management that provides coordinated and collaborative means for integrating data across the network, thus forming a single, virtual environment for data access. We consider the grid environment and focus our attention on the join operator, which is a common as well as an important operation in database queries [3]. While a large body of research on multiple join algorithms has proven effective in traditional environments, the situation becomes more complex in a data grid, where queries are executed over remote sources in a dynamic environment [4]: the existence of autonomous nodes, heterogeneous datasets, and differing bandwidths offers new challenges. We assume a user query involves multiple join operations. Firstly, we reduce the relations. Secondly, grid execution nodes are selected for performing the join operations. The reduced data is then transferred to the execution nodes for subsequent processing, and ultimately the result is propagated back to the user node. In this paper, we propose a new n-way relation-reduction algorithm for minimizing the size of the data transferred, an innovative mechanism for estimating the join result cardinality, and a new algorithm that keeps track of the subsequent join operation by partitioning previous join results and shipping them to the next execution node. The performance of our proposed algorithm and its comparison with other methods are studied by experiments, showing the efficiency and usefulness of our work.

This work is supported by the National Natural Science Foundation of China, Grant Nos. 60273082 and 60473075.



The rest of the paper is organized as follows. Section 2 introduces the n-way relation-reduction algorithm. In Section 3 we discuss the method that can accurately estimate the cardinality of the join result. Section 4 explores the sort-merge join algorithm at the execution nodes. The performance analysis and experimental results are given in Section 5. Finally, we conclude the paper in Section 6.

2 n-Way Relation-Reduction Algorithm
We describe the problem of multiple join processing as follows: assume there exist n relations R1, R2, ..., Rn at n different nodes A1, A2, ..., An in the data grid. A user at an arbitrary node C issues a query that requires the join of these n relations on n-1 join attributes T1, T2, ..., Tn-1, where Ti (1≤i≤n-1) is the join attribute of Ri and Ri+1. The user gets the join result R1 ><T1 R2 ><T2 R3 ><T3 ... ><Tn-1 Rn at node C. We propose an n-way relation-reduction algorithm to obtain the efficient tuple sets R1′, R2′, ..., Rn′. Firstly, we sort the relations R1, R2, ..., Rn on their join attributes. If Ri has only one join attribute, we sort its tuples on this attribute; otherwise, if Ri has two join attributes, we sort its tuples on both using a multi-key sort, where, according to the query plan, one join attribute is treated as the primary key and the other as the secondary key. We then create temporary relations called reducers: for each relation, its reducer is the projection onto its join attributes. Fig. 1(a) shows the relations Ri (1≤i≤5) and Fig. 1(b) shows their corresponding reducers. The process of obtaining the efficient tuples of each relation by means of the reducers consists of three phases. (1) Ship C1 from node A1 to A2 and compute the join C1 >< C2, named F-C2, at A2. Take the projection of F-C2 on T2, called F-C2.P[T2], and ship it from node A2 to A3, and so on, until F-Ci-1.P[Ti-1] arrives at node Ai. At the same time, in parallel, ship Cn from An to An-1 and compute Cn >< Cn-1 at An-1, named F-Cn-1; take its projection on Tn-2, called F-Cn-1.P[Tn-2], and ship it from node An-1 to An-2, and so on, until F-Ci+1.P[Ti] arrives at node Ai. Once F-Ci-1.P[Ti-1] and F-Ci+1.P[Ti] have gathered at node Ai, we compute the join F-Ci-1.P[Ti-1] >< Ci >< F-Ci+1.P[Ti], named Ci′, together with its projections on Ti-1 and Ti, called Ci′.P[Ti-1] and Ci′.P[Ti] respectively. (2) Ship Ci′.P[Ti-1] back from Ai to Ai-1 and compute the join Ci′.P[Ti-1] >< F-Ci-1, named Ci-1′, at Ai-1. Take the projection Ci-1′.P[Ti-2] and ship it from node Ai-1 to Ai-2, and so on, until C2′.P[T1] arrives at A1; C1′ equals the join C2′.P[T1] >< C1. Similarly, ship Ci′.P[Ti] back from node Ai to Ai+1 and compute Ci′.P[Ti] >< F-Ci+1, named Ci+1′; take the projection Ci+1′.P[Ti+1] and ship it from node Ai+1 to Ai+2, and so on, until Cn-1′.P[Tn-1] arrives at node An; Cn′ equals the join Cn-1′.P[Tn-1] >< Cn. (3) We obtain the efficient tuples of each relation Ri, i.e., those satisfying the multiple join conditions, according to its reduced reducer Ci′, since there is a correspondence between them. In Fig. 1(a), the tuples marked with a dark background are the efficient tuples. Lemma 2. The join result of the n relations R1, R2, ..., Rn equals the join result of R1′, R2′, ..., Rn′, which are the subsets of R1, R2, ..., Rn satisfying the join condition; that is, R1 >< R2 >< ... >< Rn = R1′ >< R2′ >< ... >< Rn′.
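A compact Python sketch of the reduction idea for a three-relation linear chain, using projection and semijoin as the primitives; the tiny relations are made up, and the two-ended parallel shipping of the full algorithm is collapsed into a simple forward and backward pass.

def project(relation, attr):
    return {t[attr] for t in relation}

def semijoin(relation, attr, values):
    return [t for t in relation if t[attr] in values]

# R1(T0, T1) -- R2(T1, T2) -- R3(T2, T3), joined on T1 and then T2.
R1 = [{"T0": 1, "T1": 1}, {"T0": 2, "T1": 2}, {"T0": 4, "T1": 3}]
R2 = [{"T1": 1, "T2": 2}, {"T1": 3, "T2": 5}, {"T1": 6, "T2": 7}]
R3 = [{"T2": 2, "T3": 1}, {"T2": 9, "T3": 4}]

# Forward pass: ship C1 = R1.P[T1] towards R2, then R2's surviving T2 values on.
c1 = project(R1, "T1")
r2_fwd = semijoin(R2, "T1", c1)
R3_reduced = semijoin(R3, "T2", project(r2_fwd, "T2"))

# Backward pass: surviving T2 values of R3 shrink R2, whose T1 values shrink R1.
R2_reduced = semijoin(r2_fwd, "T2", project(R3_reduced, "T2"))
R1_reduced = semijoin(R1, "T1", project(R2_reduced, "T1"))

print(R1_reduced, R2_reduced, R3_reduced)   # only tuples that join survive (Lemma 2)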


3 Estimation for the Join Results
First we set up a scenario. Consider three relations with attributes Ri(Ti-1, Ti), Ri+1(Ti, Ti+1) and Ri+2(Ti+1, Ti+2), joined in the order (Ri ><Ti Ri+1) ><Ti+1 Ri+2. For the relation Ri+1, the join attribute Ti is called the first join attribute and Ti+1 the second join attribute. Let the set of distinct values of the join attribute Ti in Ri′ be denoted Ri′.P[Ti] = {val1, val2, ..., valn}; in our example, R2′.P[T2] = {1,2,3}. For each value valj in Ri′.P[Ti], Ri′.P[Ti].Num(valj) is the number of tuples in Ri′ whose value on Ti equals valj; in our example, R2′.P[T2].Num(2) = 2. For each relation Ri′, we construct a table composed of three columns: (1) Ri′.P[Ti], where Ti is the first join attribute, listing the distinct values of Ti; (2) Num, the number of tuples whose value on Ti equals valj; and (3) S(Tk), where Tk is the second join attribute of Ri, listing the set of values of Tk that each value in Ri′.P[Ti] corresponds to.
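A small sketch of how such a table can be built from a reduced relation; tuples are plain dictionaries, and the sample tuples are chosen so that the output reproduces the R2′.table of the running example.

from collections import defaultdict

def build_table(relation, first_attr, second_attr):
    # table[value of first_attr] = {"Num": count, "S": set of second_attr values}
    table = defaultdict(lambda: {"Num": 0, "S": set()})
    for t in relation:
        entry = table[t[first_attr]]
        entry["Num"] += 1
        entry["S"].add(t[second_attr])
    return dict(table)

R2_reduced = [{"T1": 1, "T2": 1}, {"T1": 2, "T2": 1},
              {"T1": 1, "T2": 2}, {"T1": 2, "T2": 2}, {"T1": 4, "T2": 3}]
print(build_table(R2_reduced, "T2", "T1"))
# {1: {'Num': 2, 'S': {1, 2}}, 2: {'Num': 2, 'S': {1, 2}}, 3: {'Num': 1, 'S': {4}}}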

Fig. 1. n-way relation-reduction: (a) the relations R1-R5; (b) their reducers C1-C5

Fig. 2. The process of estimating the join result: R2′ and R3′ with their tables R2′.table and R3′.table, and the derived (R2′ >< R3′).table

Because Ri′ and Ri+1′ are the reduced results of the relation-reduction algorithm, the values in Ri′.P[Ti] are the same as those in Ri+1′.P[Ti]. Assume Ri′.P[Ti] = Ri+1′.P[Ti] = {val1, val2, ..., valn}. For each value valj in Ri′.P[Ti] and Ri+1′.P[Ti], we look up the corresponding values in their Num columns. If Ri′.P[Ti].Num(valj) = num1 and Ri+1′.P[Ti].Num(valj) = num2, then the number of tuples in Ri′ >< Ri+1′ whose value on Ti equals valj is num1 × num2. So the cardinality of the join result can be computed by the formula Σ_{j=1}^{n} ( Ri′.P[Ti].Num(valj) × Ri+1′.P[Ti].Num(valj) ).

According to Ri′.table and Ri+1′.table, we can construct the table of the join result, which will be used to control the join process at the execution node as described in Section 4.2. Assume Ti+1 is the first join attribute of Ri′ >< Ri+1′. Firstly, we get every value of (Ri′ >< Ri+1′).P[Ti+1] from the column S(Ti+1) of Ri+1′.table; assume these values are


{val1, val2, ..., valn}. For each value valj, find its corresponding value(s) in Ri+1′.P[Ti] in Ri+1′.table, say Ri+1′.P[Ti].val, and then look up the corresponding count Ri′.P[Ti].Num(Ri+1′.P[Ti].val) in Ri′.table. If only one value of Ri+1′.P[Ti] is found for valj, then (Ri′ >< Ri+1′).P[Ti+1].Num(valj) is Ri′.P[Ti].Num(Ri+1′.P[Ti].val); if m values are found in Ri+1′.P[Ti], then (Ri′ >< Ri+1′).P[Ti+1].Num(valj) is the sum of these m Ri′.P[Ti].Num values. Take 3 in (R2′ >< R3′).P[T3] as an example: there are two corresponding values, 2 and 3, in R3′.P[T2] in R3′.table; 2 corresponds to R2′.P[T2].Num(2) = 2 and 3 corresponds to R2′.P[T2].Num(3) = 1 in R2′.table, so (R2′ >< R3′).P[T3].Num(3) equals 3 (2 plus 1). We now describe how to get (Ri′ >< Ri+1′).S(Ti-1) from Ri′.table and Ri+1′.table. Firstly, for each value valj in (Ri′ >< Ri+1′).P[Ti+1], find its corresponding sets in Ri+1′.S(Ti+1) in Ri+1′.table and the values val in Ri+1′.P[Ti] to which these sets correspond. Next, for each such value val, look up its corresponding set in Ri′.S(Ti-1) and put those values into (Ri′ >< Ri+1′).S(Ti-1); if there are several corresponding sets in Ri′.S(Ti-1), then (Ri′ >< Ri+1′).S(Ti-1) is the union of these sets. Again taking the value 3 in (R2′ >< R3′).P[T3] as an example, its corresponding sets in R3′.S(T3) in R3′.table are {3} and {3,4}, and the values in R3′.P[T2] to which these sets correspond are 2 and 3 respectively. Their corresponding sets in R2′.S(T1) in R2′.table are {1,2} and {4}, so the set that the value 3 corresponds to in (R2′ >< R3′).S(T1) is the union of {1,2} and {4}, that is, {1,2,4}.
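The following Python sketch turns the two rules above into code: the cardinality estimate sums the Num products over the shared join values, and the combined table is derived exactly as in the walk-through. The example tables mirror R2′.table and R3′.table.

def estimate_join_cardinality(table_i, table_i1):
    # Sum of Num_i(v) * Num_{i+1}(v) over the common join values v.
    return sum(table_i[v]["Num"] * table_i1[v]["Num"]
               for v in table_i if v in table_i1)

def combine_tables(table_i, table_i1):
    # Table of (Ri' join Ri+1') keyed on the next join attribute: Num sums the
    # contributing counts and S unions the surviving back-end value sets.
    combined = {}
    for v, entry in table_i1.items():
        if v not in table_i:
            continue
        for nxt in entry["S"]:
            slot = combined.setdefault(nxt, {"Num": 0, "S": set()})
            slot["Num"] += table_i[v]["Num"]
            slot["S"] |= table_i[v]["S"]
    return combined

R2_table = {1: {"Num": 2, "S": {1, 2}}, 2: {"Num": 2, "S": {1, 2}}, 3: {"Num": 1, "S": {4}}}
R3_table = {1: {"Num": 2, "S": {1, 2}}, 2: {"Num": 1, "S": {3}}, 3: {"Num": 2, "S": {3, 4}}}
print(estimate_join_cardinality(R2_table, R3_table))   # 2*2 + 2*1 + 1*2 = 8
print(combine_tables(R2_table, R3_table))              # value 3 gets Num 3 and S(T1) = {1, 2, 4}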

4 Multiple Join Processing
The main idea for multiple join processing is as follows: before joining the n relations R1, R2, ..., Rn, we first apply the n-way relation-reduction algorithm to obtain the efficient tuple sets R1′, R2′, ..., Rn′. Then we select some nodes as execution nodes; the reduced relations are shipped to these nodes, which complete the join operations in parallel. Finally, the join results are transferred to node C in a pipelined manner. We describe the algorithm in detail as follows.
4.1 Selection of Execution Nodes
Keeping in view the coordinated resource sharing in a data grid environment and the high accessibility of these resources to grid users, we assume that nodes with strong processing capability, ample memory space, and low network transfer cost act as execution nodes for performing join operations. After m nodes are selected as execution nodes, to which the m pairs of reduced results R1′ and R2′, R3′ and R4′, ..., Rn-1′ and Rn′ are shipped (where m = n/2), the question is which node each pair of reduced relations should be shipped to. Assume the time consumed for shipping Ri′ from node Ai (1≤i≤n) to an execution node Ej (1≤j≤m) is Ci,j, where Ci,j = |Ri′| × TRi,j and TRi,j is the network transfer rate between nodes Ai and Ej. The time consumed for shipping Ri′ and Ri+1′ from nodes Ai and Ai+1 to Ej is then Max{Ci,j, Ci+1,j}. The optimization target is that the total time consumed for shipping all pairs of reduced relations should be minimal, that is, minimizing the shipping time; a sketch of this assignment is given after this paragraph.
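A sketch of the pair-to-node assignment using SciPy's assignment solver on the max-of-pair shipping costs; the relation sizes and transfer rates are made up, and the solver minimizes the total cost, matching the minimum-weight matching formulation cited below.

import numpy as np
from scipy.optimize import linear_sum_assignment

sizes = [40, 25, 60, 10]                      # |R1'| .. |R4'| (made-up, in MB)
transfer = np.array([[0.2, 0.5],              # TR[i][j]: seconds per MB from node A_{i+1}
                     [0.4, 0.3],              # to execution node E_{j+1} (made-up)
                     [0.6, 0.2],
                     [0.3, 0.4]])
ship = transfer * np.array(sizes)[:, None]    # C[i][j] = |Ri'| * TR[i][j]

pairs = [(0, 1), (2, 3)]                      # (R1',R2') and (R3',R4')
cost = np.array([[max(ship[a, j], ship[b, j]) for j in range(ship.shape[1])]
                 for a, b in pairs])
rows, cols = linear_sum_assignment(cost)      # minimise total shipping time
for p, j in zip(rows, cols):
    print(f"pair {pairs[p]} -> execution node E{j + 1}")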


We model this problem as finding a minimum-weight matching in a weighted complete bipartite graph [5]. For convenience of explanation, we assume that the pair of relations R1′ and R2′ is shipped to node E1, R3′ and R4′ to node E2, ..., and similarly Rn-1′ and Rn′ to node Em.
4.2 Join Operation at Execution Node
For convenience, we consider the join of two relations Ri′ and Ri+1′ at execution node Ej. The memory of Ej is divided into three parts, named IRi, IRi+1, and OP; these memory parts can be resized dynamically according to their usage. The OP area is divided into k partitions according to the first join attribute Ti+1 of Ri′ >< Ri+1′, where k equals the number of distinct values of Ti+1. Before shipping Ri′ and Ri+1′ from nodes Ai and Ai+1 to node Ej, we first ship Ri′.table and Ri+1′.table to Ej and construct (Ri′ >< Ri+1′).table. Using the values in (Ri′ >< Ri+1′).P[Ti+1] and their corresponding values in (Ri′ >< Ri+1′).Num, we can control the transfer of the tuples in each partition of OP and allocate space accordingly. For each tuple arriving from Ri′ or Ri+1′, join processing at node Ej is carried out in two phases. (1) Insertion phase. When a tuple from Ri′ whose value on attribute Ti equals V arrives at node Ej, the tuple is first inserted into IRi, provided there is room in IRi, and the count of tuples in IRi whose value on Ti equals V is incremented by one. A tuple from Ri+1′ is processed symmetrically. (2) Probing phase. A tuple in IRi whose value on the join attribute Ti equals V is probed against the tuples in IRi+1. For each tuple of Ri+1′ that satisfies the join condition, the join result is output into the corresponding partition OP(Val), where Val is the value of the first join attribute Ti+1 of Ri′ >< Ri+1′, and the count of tuples in Ri′ >< Ri+1′ whose value on Ti+1 equals Val is incremented by one. A tuple in IRi+1 is processed symmetrically. When a join result whose value on Ti+1 equals Val is inserted into its partition OP(Val) and the number of tuples in this partition reaches the corresponding value (Ri′ >< Ri+1′).Num(Val), we ship all the tuples of this partition to the next execution node for the next join and free the memory space OP(Val) at Ej. When an arriving tuple is ready to be inserted into IRi or IRi+1 and there is no room left in memory at Ej, some tuples must be flushed to disk to free memory; the flushing policy may select one partition of OP at random, and if join results belonging to the selected partition are generated later, they are written directly to disk and shipped after the join results in OP have been moved to the next execution node for the subsequent join operation. A simplified sketch of this insertion/probing scheme follows.
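A simplified in-memory Python sketch of the insertion/probing scheme with partitioned output; arrival order is simulated by interleaving the two inputs, the expected partition sizes come from the combined table, and disk flushing under memory pressure is omitted.

from collections import defaultdict

def _interleave(a, b):
    # Naive arrival order: alternate tuples from the two incoming streams.
    a, b = list(a), list(b)
    for k in range(max(len(a), len(b))):
        if k < len(a):
            yield 0, a[k]
        if k < len(b):
            yield 1, b[k]

def execution_node_join(stream_i, stream_i1, join_attr, next_attr, expected):
    ir_i, ir_i1 = defaultdict(list), defaultdict(list)   # IRi, IRi+1 keyed by join value
    partitions = defaultdict(list)                        # OP partitions keyed by next_attr
    shipped = []

    def emit(left, right):
        out = {**left, **right}
        val = out[next_attr]
        partitions[val].append(out)
        if len(partitions[val]) == expected.get(val, -1):
            shipped.append((val, partitions.pop(val)))    # ship the full partition onward

    for side, tup in _interleave(stream_i, stream_i1):
        if side == 0:                                     # insertion + probing for Ri'
            ir_i[tup[join_attr]].append(tup)
            for other in ir_i1[tup[join_attr]]:
                emit(tup, other)
        else:                                             # symmetric for Ri+1'
            ir_i1[tup[join_attr]].append(tup)
            for other in ir_i[tup[join_attr]]:
                emit(other, tup)
    return shipped, partitions                            # shipped early + leftover partitions

The sketch behaves like a symmetric, pipelined hash join: either side can arrive first, and a partition leaves memory as soon as its predicted size is reached, which is what lets results flow to the next execution node before the inputs are fully consumed.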

5 Performance Analysis and Experiment Results
5.1 Analysis of the Times for Various Numbers of Tuples Produced
In this experiment, we use a 4-way join to study the effect of the relation-reduction algorithm. Fig. 3 shows that the time to produce the first 1% of the result tuples with the hash-based join algorithm is much smaller than with our proposed algorithm, because our algorithm spends time up front obtaining the efficient tuples. Once join results start to be produced, however, many more results can be generated per unit time by our algorithm than by the hash-based algorithm.


As Fig. 3 depicts, when the number of result tuples becomes larger, the elapsed time of the hash-based algorithm is much greater than that of our proposed algorithm, and the total response time of the hash-based algorithm is also much greater.
5.2 The Effect of Left-Deep Tree and Bushy Tree


In this experiment, we study the effect of left-deep and bushy query trees. As Fig. 4 shows, the performance using a bushy tree is better than using a left-deep tree, because a bushy tree offers more opportunity for parallelism: while R1 >< R2 is being joined, the operation R3 >< R4 can be executed in parallel, as sketched below. In a left-deep query tree, much more intermediate join result has to be shipped and sorted, and R3 and R4 must wait for shipping or processing until the join to their left has finished, so the response time is higher than with a bushy tree.
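A toy sketch of the two plan shapes; the thread pool only illustrates that the two lower joins of the bushy plan are independent — in the real system the parallelism comes from running them on different grid nodes, and the join and relations here are stand-ins.

from concurrent.futures import ThreadPoolExecutor

def join(left, right, attr):
    # Simple in-memory hash join on a shared attribute.
    index = {}
    for t in right:
        index.setdefault(t[attr], []).append(t)
    return [{**l, **r} for l in left for r in index.get(l[attr], [])]

def bushy_plan(r1, r2, r3, r4):
    with ThreadPoolExecutor(max_workers=2) as pool:
        left = pool.submit(join, r1, r2, "T1")    # R1 >< R2
        right = pool.submit(join, r3, r4, "T3")   # R3 >< R4, independent of the left subtree
        return join(left.result(), right.result(), "T2")

def left_deep_plan(r1, r2, r3, r4):
    # Each join must wait for the previous intermediate result.
    return join(join(join(r1, r2, "T1"), r3, "T2"), r4, "T3")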

Fig. 3. The times for various numbers of tuples produced (x-axis: % of answer tuples produced; y-axis: time in ms; series: ours vs. hash based)

Fig. 4. The comparison of response time for the bushy tree and the left-deep tree (x-axis: % of answer tuples produced; y-axis: time in ms; series: bushy tree vs. left-deep tree)

6 Conclusion and Future Work
This paper has proposed a multiple join algorithm for the data grid. To ease data transfer among the grid nodes, the n-way relation-reduction algorithm is used to obtain the efficient tuples of each relation before the join operations execute. Execution nodes are selected by means of matching theory. A new method is developed that can accurately estimate the cardinality of the join result. The analytical and experimental results demonstrate the effectiveness of the proposed multiple join algorithm for the management of data in the data grid.

References 1. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, 1998 2. Chervenak, A., Foster, I., Kesselman, C., Salisbury, C., Tuecke, S.: The Data Grid: towards an architecture for the distributed management and analysis of large scientific datasets. Journal of Network and Computer Applications, (23), 187-200, 2001 3. Roussopoulos, N., Kang, H.: A pipeline n-way join algorithm based on the 2-way semijoin program. IEEE Transactions on Knowledge And Data Engineering, 3(4): 486-495, 1991.


4. Smith, J., Gounaris, A., Watson, P., Paton, NW., Fernandes, AAA., Sakellariou, R.: Distributed Query Processing on the Grid, 3rd Int. Workshop on Grid Computing, J.Sterbenz, O.Takada, C.Tschudin, B.Plattner (eds.), Springer-Verlag, 279-290, 2002 5. Donghua Yang, Jianzhong Li, Qaisar Rasool. Join Algorithm Using Multiple Replicas in Data Grid. Proceedings of the International Conference on Advances in Web-Age Information Management (WAIM2005), 416-427.

A Novel Architecture for Realizing Grid Workflow Using Pi-Calculus Technology
Zhilin Feng 1,2, Jianwei Yin 2, Zhaoyang He 1,2, Xiaoming Liu 2, and Jinxiang Dong 2
1 College of Zhijiang, Zhejiang University of Technology, Hangzhou 310024, P.R. China
2 State Key Laboratory of CAD & CG, Zhejiang University, Hangzhou 310027, P.R. China
[email protected], [email protected]

Abstract. Grid workflow applications are emerging as an important new alternative to develop truly distributed applications for the grid platform. The modeling of grid workflow system is a challenging task, due to the high degree of autonomy and heterogeneity of the cooperative organizations under distributed environments. This paper presents a formal specification methodology for grid workflow modeling which is founded upon the Pi-Calculus process algebra. This method works well for characterizing the behaviors and interactions of the grid workflow processes that belong to different organizations in terms of the semantics of Pi-Calculus.

1 Introduction
In recent years, scientists and engineers have been building more and more complex applications to manage and process large data sets and to execute scientific experiments on distributed resources [1, 2]. Such application scenarios require means for composing and executing complex grid computations. Grid computing has emerged as a global platform to support organizations in the coordinated sharing of distributed data, applications, and processes [3]. To satisfy these demands, many efforts have been made towards the development of workflow systems for grid computing. Grid workflow systems, which are evoking a high degree of interest, aim to support the modeling, redesign, and execution of large-scale, sophisticated e-science and e-business processes [4, 5]. Grid workflows, in contrast to production and administrative business workflows, are normally more flexible and completely automatic; they typically rely on distributed and autonomous processes for information interaction [5]. Because the data and computation may be dispersed in a physically distributed environment (especially an inter-organizational one), one of the key challenges for grid computing is to define a common mechanism by which the grid workflow system can handle data transfer and invoke computational tools over a distributed and heterogeneous platform. This is exactly the goal that grid computing technology works to achieve in scientific environments: it satisfies the requirement by providing a new computing infrastructure for large-scale resource sharing and distributed system integration. This paper aims at designing and building a basic infrastructure for grid computing in the form of a workflow system capable of defining and enacting service processes,


as well as supporting related interactive behaviors among these service processes. The rest of the paper is organized as follows. Section 2 gives the syntax and operational semantics of Pi-Calculus for Grid workflow modeling. Section 3 applies this technology to a special grid workflow setting. Finally, section 4 gives a short conclusion.

2 Pi-Calculus for Grid Workflow Grid workflow can be seen as a collection of tasks that are processed on distributed resources in a well-defined order to accomplish a specific goal. Such workflow system is essentially a set of loosely coupled service agents [5]. Typically, there are many service agents which are involved in one global workflow process under interorganizational environments. Each of the agents has its own local workflow process. Figure 1 shows a typical example of grid workflow system which consists of five local workflow processes.

Fig. 1. Interactive behaviors of a grid workflow system

Recently there has been an increased interest in formal modeling of grid workflow systems. Pi-calculus developed by Robin Milner in the late 1980s is about modeling concurrent communicating systems [6]. Pi-Calculus is a process calculus that is able to describe dynamically changing networks of concurrent processes. Pi-calculus provides a framework in which to develop the appropriate formalism. There are several reasons for using Pi-Calculus for grid workflow modeling: • Since the architecture of a grid workflow system is highly dynamic, it is necessary to develop an efficient technology that is capable of handling dynamic behaviors across enterprise boundaries. It requires that the formal method should be capable of coping with those dynamic aspects, and the Pi-Calculus can precisely satisfy this requirement.


• Processes in a grid workflow system are inherently concurrent. It is difficult for classical or non-classical logics to describe the concurrency among processes, but it is easy for the Pi-Calculus to do so. • A good formal method for a grid workflow system should be able to express interactions among processes; Pi-Calculus naturally has the ability to describe such interactions. Pi-Calculus contains just two kinds of entities: processes and channels. Processes, sometimes called agents, are the active components of a system. Definition 1. The syntax of the polyadic Pi-Calculus is as follows: P ::= x(z̃).P | x̄[ỹ].P | P | Q | (ν x)P | P + Q | φ | !P | [x = y]P. Definition 2. A grid workflow model is a tuple W = (LP1, LP2, ..., LPn, VW, tW, fW, RW), where n ∈ ℕ is the number of local workflow processes; for each k ∈ {1,..., n}, LPk is a local workflow process; VW = {v1,..., vn} is a set of ports in a local process; tW(vi) ∈ {START, ACTIVITY, DECISION, SPLIT, JOIN, END} indicates the type of a channel; VX, for X ∈ {START, ACTIVITY, DECISION, SPLIT, JOIN, END}, denotes the subset VX ⊆ VW of all channels of type X; fW: VACTIVITY → A is the activity assignment function; and RW ⊆ (VW × VW) is a set of channels among local workflow processes. In Definition 2, the overall process is decomposed into many kinds of activities that are ordered according to the dependencies among them. Definition 3. If activity A is an automatic activity, then its corresponding representation

of process is: A = request _ resource[resource _ id ].START .assigned _ resource(resource _ id ). ACTION .release _ resource[ resource _ id ].FINISH where uppercase names of the definition ( START , ACTION , and FINISH ) denote processes and lowercase names denote channels ( request _ resource , assigned _ resource , and release _ resource ) or ports ( resource _ id ) to communicate neighbor activities.

Definition 4. If activity A is manual activity, then its corresponding representation of def

process is: A = request _ resource[resource _ id ].START .assigned _ resource(resource _ id ). wait _ user[role _ id ]. ACTION .release _ resource[ resource _ id ].FINISH . Definition 5. If activity A is time-triggering activity, then its corresponding represendef

tation of process is: A = request _ resource[ resource _ id ].START .timer < begin _ time, end _ time > .Counter .assigned _ resource(resource _ id ). ACTION .release _ resource[resource _ id ]. FINISH .

Definition 6. If activity A and activity B have sequence dependency and their corresponding processes expression is A , B respectively, and then the sequence dependdef

ency can be defined as: A , B = (v a )([a / done] A | [a / start ]B) .


Definition 7. If activity A and activity B have XOR-split dependency and their corresponding processes expression is A , B respectively, then the XOR-split dependency def

can be defined as: A XOR − split B = (v a)( start.[a ] | ([ a / start ] A + [a / start ]B)) . Definition 8. If activity A and activity B have XOR-join dependency and their corresponding processes expression is A , B respectively, then the XOR-join dependdef

ency can be defined as: A XOR − join B = (v a )(([a / finish] A + [a / finish]B ) | finish.[a]) . Definition 9. If activity A and activity B have AND-split dependency and their corresponding processes expression is A , B respectively, then the AND-split dedef

pendency can be defined as: A AND − split B = (v a )( start.[a ] | ([a / start ] A | [a / start ]B )) . Definition 10. If activity A and activity B have AND-join dependency and their corresponding processes expression is A , B respectively, then the AND-join dependency def

can be defined as: A AND − join B = (v a )(([a / finish] A + [a / finish]B) | finish.[a ]) . Definition 11. If the decision expression ψ is true, then activity A will be executed; otherwise activity B will be started. The expression can be defined as: If ψ then A def

= (v a, b)( decison _ exp[ψ ].START .([ψ = TRUE ]a + [ψ = FALSE ]b)) | ([a / start ] A | [ b / start ]B ) .

else B

3 Experimental Results In this section, we introduce a grid workflow system implemented by Pi-Calculus mechanism. It is a loosely coupled inter-organizational workflow system which consists of three local workflows: Customer, Producer and Supplier. Figure 2 shows how communication procedures among local workflow processes are handled under interorganizational environments. In Figure 2, it can be easily seen that the process P(Customer ) shares four channels ( order , Notification , Delivery and Payment ) with the process P( Producer ) , and the process P( Producer ) shares two channels ( Order _ Supplier and Delivery _ Supplier ) with the process P( Supplier ) . Formalization descriptions of the three processes are given as follows: P(Customer ) = order (a).Notification(b).[ a = b].Delivery (c).[c = a].Payment (e) , P( Producer ) = order (a ).(Order _ Supplier (b)).[a = b].Notification(c). ( Delivery (d )).[d = c ].Delivery (e).[d = f ].Payment ( g ) , P( Supplier ) = Order _ Supplier (a).Delivery _ Supplier (b).[a = b] .

We use Pi-Calculus to describe the behavior of activities in Figure 2. For example, we give the following descriptions of two activities in P(Customer ) according to definition 3 to 5.


Fig. 2. Communication procedures under distributed environments def

Activity ( Send _ Order _ Producer ) = request _ resource[resource _ id ].START .

assigned _ resource(resource _ id ).Send _ Order _ Producer.release _ resource [resource _ id ].FINISH , def

Activity ( Receive _ Notification ) = request _ resource[resource _ id ].START . assigned _ resource( resource _ id ).wait _ user[ role _ id ].Receive _ Notification. release _ resource[resource _ id ].FINISH , We also use Pi-Calculus to describe the communication of any two processes of the inter-organizational workflow system. For example, we give the following descriptions of communications between “Customer” process and “Producer” process according to definition 6 to 11.

• If Activity ( Notify ) sends a notification to Activity ( Receive _ Notification) via the channel Notification , the communication process can be written as: (v Notification)(([ Notification / finish] Activity ( Notify ) + [ Notification

/ finish] Activity ( Receive _ Notification)) | finish.[ Notification]) .

• If Activity ( Pay ) sends a payment to Activity ( Receive _ Payment ) via the channel Payment , the communication process can be written as: (v Payment )(([ Payment / finish] Activity ( Pay ) + [ Payment / finish] Activity ( Receive _ Payment )) | finish.[ Payment ]) .

4 Conclusions In this paper, a novel formalizing approach for modeling and controlling the execution of grid workflow system is proposed by Pi-Calculus technique. This approach


provides interactive graphic user interface to support dynamic control and easy communication of an executed grid workflow, and provides a powerful tool for production and execution of workflow-based Grid applications. We are currently in the process to develop a Grid workflow enactment engine to execute Grid workflow applications based on the proposed approach.

Acknowledgement The work has been supported by the Zhejiang Province Science and Technology Plan (jTang Middleware) and the National Natural Science Foundation, China (No. 60273056). The name of contact author is Jianwei Yin.

References 1. Kesselman, F. C, Tuecke, S.: The anatomy of the grid:Enabling scalable virtual organizations. International Journal of Supercomputer Application. 15 (2001) 200-222 2. de Roure, D., Jennings, N. R., Shadbolt, N.: The semantic grid: A future e-science infrastructure. International Journal of Concurrency and Computation: Practice and Experience. 15 (2003) 437-470 3. Deelman, E., Blythe, J., Gil, Y., Kesselman, C., Mehta, G., Vahi, K.: Mapping abstract complex workflows onto grid environments. Journal of Grid Computing. 4 (2003) 25-29 4. Blythe, J., Deelman, E., Gil, Y.: Automatically Composed Workflows for Grid Environments. IEEE Intelligent Systems. 19 (2004) 16-23 5. Edmond, D., Hofstede, A.: A reflective infrastructure for workflow adaptability. Data & Knowledge Engineering. 34 (2000) 271-304 6. Milner, R., Parrow, J., Walker, D.: A calculus for mobile processes, parts I and II. Journal of Information and Computation. 100 (1992) 1-77

A Chord-Based Novel Mobile Peer-to-Peer File Sharing Protocol
Min Li 1, Enhong Chen 1, and Phillip C-y Sheu 2
1 Department of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, 230027, P.R. China
[email protected], [email protected]
2 Department of EECS, University of California, Irvine, CA 92697
[email protected]

Abstract. With the increasingly developed technology of mobile devices and wireless networks, more and more users share resources by mobile devices via wireless networks. Compared to traditional C/S architecture, P2P network is more appropriate for mobile computing environment. However, all existing P2P protocols have not well considered the characteristics and constraints of mobile devices and wireless networks. In this paper, we will present a novel mobile P2P protocol, M-Chord, by adopting hierarchical structure and registering mechanism on the basis of Chord. The experimental results show that M-Chord system has high-efficiency and good robustness in mobile P2P network.

1 Introduction
With the advancing technology of mobile devices and wireless networks, it is becoming more and more common to share various resources from mobile devices such as PDAs or mobile phones via different types of wireless access networks such as GPRS and IEEE 802.11 wireless LAN. Compared to the traditional Client/Server (C/S) architecture, with its drawbacks such as systemic fragility, the performance bottleneck caused by the heavy load on the central server, and the excessive bandwidth consumed by broadcasting messages, a Peer-to-Peer (P2P) network is more appropriate for the mobile computing environment, because it distributes services among equal nodes and improves the scalability and reliability of the whole system. Unfortunately, existing P2P protocols have not specifically considered the problems of wireless participation by mobile devices, for instance the limited CPU and memory of mobile devices, intermittent disconnection, limited bandwidth, and high transmission delay. How to obtain more efficient and effective mobile P2P techniques has therefore recently become a prominent discussion topic. For the new generation of scalable P2P systems that support distributed hash table (DHT) functionality, such as Tapestry [1], Pastry [2], CAN [3] and Chord [4], files are associated with a key by hashing their title or content, and each node is responsible for storing a certain range of keys. Each DHT system employs a different routing


algorithm and has its own attractive respects and disadvantages at the same time. But we appreciate the simplicity, scalability and high-efficiency of Chord much more. Although some researchers have presented a mobile P2P protocol M-CAN [5] based on CAN, the theory of hierarchical structure and registering mechanism can not be applied to Chord directly. In this paper, we will propose a novel mobile P2P file sharing protocol, M-Chord, by ameliorating Chord protocol and adopting hierarchical structure and registering mechanism aim to accord with the characteristics of MP2P.

2 Design and Implementation of M-Chord

As an extensible P2P routing algorithm, Chord has a simple logical structure and system interface and uses a one-dimensional circular key space. Considering the constraints of the mobile environment, we have modified and improved the basic Chord protocol to obtain an efficient lookup protocol for MP2P, M-Chord.

2.1 Hierarchical Structure of M-Chord

M-Chord adopts a hierarchical structure to organize mobile peers and introduces the register mechanism used in M-CAN to manage resources. There are two kinds of nodes in M-Chord: super nodes and ordinary nodes. Super nodes are nodes with larger memory, better computing capability and more reliable connections; ordinary nodes are associated with super nodes through the register mechanism. When a shared file is published in the M-Chord system, it is assigned a file ID by hashing its content and title. Every super node manages a separate range of file IDs. A node that has shared files is registered on super nodes according to the IDs of its shared files; a node that has no shared files is assigned to the super node with the minimum load. A super node is registered on itself, but it must also report the information about all of its shared files to the corresponding super nodes. Every ordinary node records the IDs and addresses of its super node(s). Every super node maintains two tables: the routing finger table and the sharing-file directory, which records information about its registered files, such as the file IDs and the registered nodes' addresses. The super nodes themselves are managed with Chord.

2.2 Routing in M-Chord

Before a source node sends out a file access request, it first calculates the ID of the wanted file. The source node then submits a request containing this ID to the source super node on which it is registered [5]. After receiving the request, the source super node looks up its finger table and continues the routing process until the request reaches the destination super node whose ID space covers the ID of the wanted file; the routing process on the ring is the same as in Chord. The destination super node then launches a local lookup. If it finds the file ID in its directory, meaning that the destination node owning the wanted file is registered locally, it returns the destination node's address to the source node. After the source node obtains the destination node's address, it tries to communicate with the destination node directly. A lookup is considered to fail if it returns the wrong node among the current set of participating nodes at the time the sender receives the lookup reply (i.e., the destination node has already left or failed), or if the sender receives no reply within some timeout window.
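To make the publish-and-lookup flow above concrete, the sketch below models super nodes with sharing-file directories and routes a request by its file ID. It is only an illustrative outline under simplifying assumptions, not the authors' implementation: the SuperNode class, the register and lookup helpers, and the reduction of Chord's finger-table routing to a direct successor computation are all introduced here.

```python
import hashlib
from bisect import bisect_left

def file_id(name: str, m: int = 16) -> int:
    """Assign a file ID by hashing the file's title/content into an m-bit key space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** m)

class SuperNode:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.directory = {}            # file ID -> addresses of registered ordinary nodes

def successor(super_nodes, key):
    """Destination super node: the first super-node ID covering the key (wrapping around).
    Chord's finger-table routing is collapsed into this direct computation for brevity."""
    ids = sorted(sn.node_id for sn in super_nodes)
    by_id = {sn.node_id: sn for sn in super_nodes}
    return by_id[ids[bisect_left(ids, key) % len(ids)]]

def register(super_nodes, address, shared_files):
    """A joining node registers on super nodes according to the IDs of its shared files."""
    for f in shared_files:
        key = file_id(f)
        successor(super_nodes, key).directory.setdefault(key, []).append(address)

def lookup(super_nodes, wanted_file):
    """Route the request to the destination super node and read its sharing-file directory."""
    key = file_id(wanted_file)
    return successor(super_nodes, key).directory.get(key, [])

# Example: two super nodes, three ordinary nodes publishing files
sns = [SuperNode(10_000), SuperNode(50_000)]
register(sns, "node-a", ["song.mp3"])
register(sns, "node-b", ["paper.pdf"])
register(sns, "node-c", ["song.mp3"])
print(lookup(sns, "song.mp3"))         # addresses of the nodes sharing "song.mp3"
```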


2.3 Node Join

When a new node joins M-Chord, it must register itself on one or more appropriate super nodes according to the IDs of its shared files. If the registration succeeds, the corresponding super nodes update their sharing-file directories in time. In order to keep super nodes from becoming the bottleneck of the whole system, we set a rule that any super node can manage at most n nodes. If a super node comes to manage more than n nodes, a split process is triggered. In this paper we present two kinds of split strategies: real split and virtual split.

Real split strategy. In the real split process, the original super node (denoted OS) first chooses a new super node (denoted NS) from its registered nodes; NS is the registered node with the best connection. We assign an ID to the new super node by the following formula:

NS.id = ⌊(OS.predecessor.id + OS.id) / 2⌋    (1)

Unlike the Chord system, which generates node identifiers by hashing a node's IP address into an m-bit space, the M-Chord system identifies an ordinary node uniquely by its address; a new super node is assigned the median of the IDs of two existing super nodes, which likewise guarantees that super-node IDs are unique.

Fig. 1. Before real splitting of super node 3, n=4

Fig. 2. After real splitting of super node 3, n=4

The registered nodes are divided into two groups: one group continues to be managed by the original super node, and the other is managed by the new super node. Four steps then follow: (1) generate the finger table of the new super node NS; (2) update the neighbor information of OS and NS; (3) update the sharing-file directories of OS and NS; (4) update the finger tables of all super nodes except NS.
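The following sketch outlines the real split under these rules. It is only an illustration of the description above: the attribute names (registered, connection_quality, predecessor_id, registered_key) and the super_ring container are hypothetical, wrap-around of the circular ID space is ignored, and the finger-table and directory updates of steps (1)-(4) are left as comments.

```python
def real_split(os_node, super_ring):
    """Sketch of the real split of an overloaded super node OS (illustrative only)."""
    # Choose the registered node with the best connection as the new super node NS
    ns = max(os_node.registered, key=lambda node: node.connection_quality)

    # Formula (1): NS takes the median ID between OS and its predecessor on the ring
    ns.node_id = (os_node.predecessor_id + os_node.node_id) // 2

    # Divide the remaining registered nodes into two groups by the file IDs they registered
    stay, move = [], []
    for node in os_node.registered:
        if node is ns:
            continue
        (move if node.registered_key <= ns.node_id else stay).append(node)
    ns.registered, os_node.registered = move, stay

    # Steps (1)-(4): build NS's finger table, update neighbor information,
    # split the sharing-file directories, and refresh the other super nodes' finger tables.
    super_ring[ns.node_id] = ns
    return ns
```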


Fig. 1 and Fig. 2 show an instance of super-node splitting. Suppose one super node can manage at most four ordinary nodes. Fig. 1 shows the state when peer E has joined and registered on super node 3. Super node 3 has exceeded its management maximum, so it is split. Fig. 2 shows the state after the split: a new super node, super node 1, is generated, and the remaining registered nodes are divided into two groups.

Virtual split strategy. The rationale of the virtual split strategy is very simple. As in a real split, the original super node OS first chooses a new super node NS from its registered nodes and then divides the remaining registered nodes into two groups; each group has the same number of nodes, and each super node manages one group. Unlike the real split strategy, we assign NS the same ID as OS. NS and OS then update their sharing-file directories to keep them consistent. Because no new node ID is generated on the M-Chord ring, there is no need to update the finger tables of the other super nodes; we only copy the finger table and neighbor information from OS to NS. Note, however, that during a lookup one position on the routing path may now correspond to more than one super node. If we need to check the finger table of this position, we simply check one of these nodes at random. Moreover, the destination super node ID may also correspond to more than one super node; in this case we must check the sharing-file directories of all these virtual super nodes to locate the destination node.

In fact, the processes of node joining and super-node splitting are also the system construction process. At first there is only one super node covering the whole ID space, and every joining node is registered on this original super node until the number of registered nodes exceeds the limit n, which triggers the split process. If shared files are distributed evenly over the whole ID space, real splits balance the load of the M-Chord ring well. But if too many shared files fall within a certain continuous range of file IDs, real splitting may cause a "local saturation" situation: several consecutive positions on the ring are already occupied by super nodes, and the super nodes in this stretch cannot split any further even though many positions elsewhere on the ring are vacant. To solve this problem, we set the rule that once the same file has been shared and registered by more nodes than a certain limit, later-joining nodes are forbidden from publishing it again. Compared with a virtual split, a real split incurs extra expenses, such as updating the finger tables of all super nodes. A virtual split, although simple and highly efficient, can unbalance the load of the whole system because it allows too many nodes to congregate at one position of the M-Chord ring. In practice we therefore combine the two strategies to achieve the best performance. In our simulation experiments we stipulate that each node ID on the ring may host at most two virtual super nodes; if this limit is exceeded, a real split is performed.

2.4 Node Departure

When an ordinary node leaves the system, we only need to modify the directory of its super node(s). If the missing node is a super node, we extend the file ID space of its successor to cover the missing space; if the successor super node then manages more than n nodes, a split process is triggered.
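A compact sketch of the combined policy just described, deciding between a virtual and a real split when a super node becomes overloaded. The threshold name MAX_VIRTUAL and the helpers virtual_split and real_split are placeholders for the procedures described above.

```python
MAX_VIRTUAL = 2     # at most two virtual super nodes per ring position, as in the simulation

def handle_overload(os_node, n: int):
    """Combined split policy: prefer the cheap virtual split, fall back to a real split."""
    if len(os_node.registered) <= n:
        return                       # still within the management maximum, nothing to do
    if os_node.virtual_count < MAX_VIRTUAL:
        virtual_split(os_node)       # same ring ID: copy finger table, share the position
    else:
        real_split(os_node)          # new ring ID: finger tables of all super nodes updated
```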


3 Theoretical Analysis and Simulation Results

To evaluate the performance of the M-Chord system, we built our simulation on the platform P2PSIM [6], a P2P simulation tool developed at MIT. P2PSIM provides an implementation of Chord, which we modified and extended to implement M-Chord. We designed two data sets with different values of n, the maximum number of ordinary nodes one super node can manage, set to 5 and 10 respectively. For each data set, we compare the average data flow of a Chord node, an M-Chord super node and an M-Chord ordinary node, measured in the unit bw (bytes/node/s), for different total numbers of nodes (128, 256, 512, 1024, 2048). Our test data follows the King data set [7]. Figures 3 and 4 show the results for the different parameter values. Clearly, the average data flow of a super node in M-Chord is the largest, that of a Chord node is second, and that of an ordinary node in M-Chord is the smallest. This demonstrates that super nodes absorb most of the overhead and that the total network bandwidth occupied by the whole M-Chord system is reduced remarkably.

Fig. 3. Node average bandwidth consumption comparison with n=5

Fig. 4. Node average bandwidth consumption comparison with n = 10


4 Conclusions and Future Work

In this paper we have presented a mobile P2P protocol, M-Chord. The particularities and constraints of MP2P make traditional P2P protocols inefficient and unreliable; to match the characteristics of MP2P, we introduce a hierarchical structure and a registering mechanism into Chord. Our experimental results show that, compared with Chord, M-Chord greatly reduces the occupied network bandwidth, and that the M-Chord system achieves high efficiency and good robustness in the mobile computing environment. In the future we intend to improve the performance of M-Chord further, for example by decreasing the load on super nodes and the additional expense of super-node splitting.

Acknowledgements

This work is supported by the Natural Science Foundation of China (No. 60573077) and the Natural Science Foundation of Anhui Province (No. 050420305).

References
1. Ben Y. Zhao, John Kubiatowicz, and Anthony D. Joseph. Tapestry: An Infrastructure for Fault-tolerant Wide-area Location and Routing. Tech. Rep. UCB/CSD-01-1141, University of California at Berkeley, Computer Science Division, 2001.
2. A. Rowstron and P. Druschel. Pastry: Scalable, distributed object location and routing for large-scale peer-to-peer systems. In Proceedings of the IFIP/ACM International Conference on Distributed Systems Platforms (Middleware), Heidelberg, Germany, pages 329-350, November 2001.
3. Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, and Scott Shenker. A Scalable Content-Addressable Network. In Proceedings of ACM SIGCOMM (San Diego, CA, August 2001), pages 161-172.
4. Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Balakrishnan. Chord: A scalable peer-to-peer lookup service for Internet applications. Tech. Rep. TR-819, MIT LCS, March 2001.
5. Gang Peng, Shanping Li, Hairong Jin, and Tianchi Ma. M-CAN: a Lookup Protocol for Mobile Peer-to-Peer Environment. In Proceedings of the 7th International Symposium on Parallel Architectures, Algorithms and Networks (ISPAN'04).
6. http://pdos.csail.mit.edu/p2psim/
7. http://www.cs.washington.edu/homes/gummadi/king/

Web-Based Genomic Information Integration with Gene Ontology Kai Xu IMAGEN group, National ICT Australia, Sydney, Australia [email protected]

Abstract. Despite the dramatic growth of online genomic data, our understanding of these data is still at an early stage. Meaningful interpretation requires the integration of the various types of relevant data, such as gene sequences, protein interactions and metabolic pathways. Gene Ontology, an ontology developed by the molecular biology community, is a possible tool to help address some of the difficulties in such integration. In this paper we examine the formality of Gene Ontology and study the possibilities and potential problems in applying it to the integration of both structured (e.g., database) and semi-structured (e.g., literature) online genomic information.

1 Introduction

The amount of genomic information available online has increased dramatically over the last few years. For instance, the number of DNA base pairs in GenBank (www.ncbi.nlm.nih.gov/genbank/) grew from 680,338 in 1982 to 44,575,745,176 in 2004. Despite this abundance of data, understanding lags far behind collection. A key question that molecular biologists try to answer is the gene regulation mechanism, i.e., why variation in a gene sequence can lead to diseases such as cancer. The underlying principle is the "central dogma of molecular biology": the genetic information stored in the DNA sequence is passed through RNA to protein, which eventually performs the encoded regulation function through its interaction with its environment. The central dogma implies that no single type of genomic data, such as DNA sequence data alone, is sufficient for a meaningful interpretation of genetic function; this can only be achieved by integrating relevant genomic data such as DNA sequences, protein interactions and metabolic pathways, all of which are publicly available online. Data integration has been a challenging problem studied by computer scientists for years because of the prevalence of heterogeneity. The integration of genomic data has additional difficulties, such as the required biological expertise, and merging various types of genomic data poses an even greater challenge because of the complexity introduced by the variety of data and their contexts. Some attempts have been made recently, but few of them have been successful [1]. The Gene Ontology [2], a recent collaboration within the molecular biology community, tries to alleviate the semantic heterogeneity in genomic data representation by providing a


shared vocabulary. Though it is still at an early stage, we believe the Gene Ontology can play an important role in genomic information integration besides facilitating communication between molecular biologists. A compelling property of the Gene Ontology is that, by building an ontology, genomic expertise is captured in the concept definitions and the relationships among them in a more machine-friendly form. In this paper we discuss the possibilities and challenges of employing the Gene Ontology to address some issues in integrating online genomic information, covering both structured data (such as databases) and semi-structured data (such as publications in the literature). This study is not meant to provide a complete solution; rather, it is a pilot study of the feasibility of applying the Gene Ontology to data integration.

2 Background

2.1 Gene Ontology

The goal of the Gene Ontology is to produce a dynamic, controlled vocabulary that can be applied to all eukaryotes even as knowledge of gene and protein roles in cells is accumulating and changing [2]. It is maintained by the Gene Ontology Consortium (www.geneontology.org). The Gene Ontology includes not only terms and relations among them, but also the associations between the terms and gene products in online databases.

2.2 Web-Based Biological Information Integration

There are two types of approaches commonly used for web-based biological information integration: centralized (or warehouse) integration and distributed (or mediator-based) integration. Centralized integration duplicates data from multiple sources and stores them in a data warehouse; all queries are then executed locally rather than at the actual sources. An example of such a system is GUS (Genomics Unified Schema) [3]. The centralized approach relies less on the network to access the data, and using materialized warehouses also allows improved query optimization. It has an important and costly drawback, however: it must regularly check the underlying sources for new or updated data and then reflect those modifications in the local data copy. Unlike the centralized/warehouse approach, a distributed/mediator-based integration caches none of the data locally; instead, a query is reformulated by the mediator at runtime into queries on the local schemas of the underlying data sources. An example of such a system is DiscoveryLink [4]. The two main approaches for establishing the mapping between each source schema and the global schema are global-as-view (GAV) and local-as-view (LAV) [5]. LAV is considered much more appropriate for large-scale ad-hoc integration because changes to the information sources have little impact on system maintenance, while GAV is preferred when the set of sources being integrated is known and stable.

3 Genomic Information Integration with Gene Ontology

3.1 The Formality of Gene Ontology

A commonly quoted definition of an ontology is "a formal, explicit specification of a shared conceptualization" [6]. An ontology should therefore have: 1. a vocabulary of terms that refer to the things of interest in a given domain; 2. some specification of meaning for the terms, (ideally) grounded in some form of logic. However, as stated by the creators of the Gene Ontology [7], they have consciously chosen to begin at the most basic level, by creating and agreeing on shared semantic concepts, that is, by defining the words required to describe particular domains of biology. They are aware that this is an incomplete solution, but believe it is a necessary first step; the argument is that these common concepts are immediately useful and can ultimately serve as a foundation for describing the domain of biology more fully. In this sense, the Gene Ontology is still an incomplete ontology: it has a defined vocabulary (requirement 1 above) but lacks a formal specification (requirement 2). This affects the application of the Gene Ontology to data integration and is discussed in detail in Sections 3.2 and 3.3.

3.2 Structured-Data Integration

For structured data, we focus on online databases. Generally, there are two types of heterogeneity in database integration: structural and semantic heterogeneity. Three frameworks are available when using an ontology to address semantic heterogeneity: the single ontology model, the multiple ontology model and the hybrid ontology model [8]. The hybrid ontology model has a high-level vocabulary that is shared by the ontologies of the participating databases. The semantic mapping between databases is done by the following transformation: local ontology 1 → shared vocabulary → local ontology 2. This eliminates the mapping between every pair of local ontologies; instead, only the mappings between the shared vocabulary and the local ontologies are required. The Gene Ontology fits well into the hybrid ontology model because it is a well-defined high-level vocabulary and it is general enough for extension and thus easy for databases to adopt. In fact, semantic heterogeneity is much less severe for databases that choose to annotate their data with the Gene Ontology, because by doing so they essentially follow the same ontology (the single ontology model). For other genomic databases that have their own ontologies, the Gene Ontology can serve as the global shared vocabulary to help build ontology mappings between these databases and with databases already integrated. In this sense, the Gene Ontology provides a useful tool for semantic integration among genomic databases.
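A toy illustration of the hybrid model's two-step translation (local ontology 1 → shared vocabulary → local ontology 2). The term names follow the example discussed in this section, but the mapping tables and the GO identifier are placeholders invented for illustration; real mappings would come from curated annotations.

```python
# Each local ontology maintains a mapping from its own terms to the shared vocabulary
# (GO term IDs); the ID below is used purely as an illustrative placeholder.
LOCAL1_TO_GO = {"nuclear chromosome": "GO:0000228"}
LOCAL2_TO_GO = {"genome": "GO:0000228"}

GO_TO_LOCAL2 = {go_id: term for term, go_id in LOCAL2_TO_GO.items()}

def translate(term_in_local1):
    """local ontology 1 -> shared vocabulary -> local ontology 2"""
    go_id = LOCAL1_TO_GO.get(term_in_local1)
    return GO_TO_LOCAL2.get(go_id) if go_id else None

print(translate("nuclear chromosome"))   # -> 'genome'
```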

Fig. 1. Global schema derivation based on ontology mapping: (a) the ontology of DA, (b) the ontology of DB, (c) the integrated database.

Though devised mainly for semantic heterogeneity, ontologies can also help resolve structural heterogeneity, because ontologies and database schemas are closely related [9]. One of the similarities is the considerable overlap in expressivity, which includes objects, properties, aggregation, generalization, set-valued properties, and constraints. For example, entities in an ER model correspond to concepts or classes in ontologies, and attributes and relations in an ER model correspond to relations or properties in most ontology languages. For both, there is a vocabulary of terms with natural-language definitions; such definitions reside in separate data dictionaries for database schemas and in inline comments for ontologies. Arguably, there is little or no essential difference between a language used for building database schemas and one for building ontologies. This similarity between ontologies and schemas can be exploited to resolve structural heterogeneity in database integration. Here we use an example to illustrate deriving the global schema of the integrated system from the mapping between the database ontologies. The two databases in this example are referred to as DA and DB; their ontologies are shown in Figures 1(a) and 1(b), respectively. DA follows the Gene Ontology, whereas DB does not. For simplicity we assume that their ER models have the same structure as their ontologies, so the global schema can be represented as the integration of the two database ontologies. With term definitions, it is possible to build a mapping between the ontologies of DA and DB. It is easy to see that "nuclear chromosome" and "genome" both refer to the entire DNA sequence and are therefore semantically equivalent. When two terms refer to the same concept, one global term can represent both in the integrated ontology, i.e., the global term definition subsumes the two local term definitions. In this example, "nuclear chromosome" and "genome" are represented by one term (for instance "nuclear chromosome / genome") in the integrated ontology. If two terms refer to two concepts in a specialization relation, then this relation should be kept in the integrated ontology. In this example, "condensed nuclear chromosome" (a highly compacted molecule of DNA and associated proteins resulting in a cytologically distinct


structure that remains in the nucleus) and "nuclear chromatin" (the ordered and organized complex of DNA and protein that forms the chromosome in the nucleus) are two specializations of "nuclear chromosome". This relation should be kept between these two terms and the new term replacing "nuclear chromosome" in the integrated ontology. The same general rule applies to the other relations in the ontology, such as the "part-of" relation between "genome" and "gene" in the ontology of database DB. The integrated ontology is therefore the one shown in Figure 1(c). Based on our assumption, the global schema can be derived from the integrated ontology in a straightforward manner.
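The merge rule just described (rename equivalent terms to a single global term, keep all specialization and part-of relations) can be sketched as follows. The triples are transcribed from the example in this section; their representation as (child, relation, parent) tuples is an assumption made for illustration.

```python
# Tiny ontologies as (child, relation, parent) triples, following the example above.
ONTOLOGY_A = [
    ("condensed nuclear chromosome", "is-a", "nuclear chromosome"),
    ("nuclear chromatin", "is-a", "nuclear chromosome"),
]
ONTOLOGY_B = [
    ("gene", "part-of", "genome"),
]
# Equivalence discovered from the term definitions: both terms denote the entire DNA sequence.
EQUIVALENT = {
    "nuclear chromosome": "nuclear chromosome / genome",
    "genome": "nuclear chromosome / genome",
}

def merge(*ontologies):
    """Rename equivalent terms to the global term and keep every relation."""
    merged = set()
    for onto in ontologies:
        for child, rel, parent in onto:
            merged.add((EQUIVALENT.get(child, child), rel, EQUIVALENT.get(parent, parent)))
    return sorted(merged)

for triple in merge(ONTOLOGY_A, ONTOLOGY_B):
    print(triple)
```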

3.3 Semi-structured Data Integration

Besides the structured genomic data stored in various online databases, there is also a large amount of semi-structured data, mainly publications in the literature. The knowledge in such publications can be made structured (more understandable by machines) by extracting and annotating it, but this task takes significant effort and usually requires biological expertise. The major hurdle for any algorithm performing this task is its inability to understand the semantics of previous research and, based on it, derive new knowledge from publications. Theoretically it is possible to use an ontology to capture the semantics of publications, but this is almost impossible in practice. For structured data, the semantics of a database can be captured by mapping its ontology to a known one; building such a mapping is much more difficult for semi-structured data. First, semi-structured data rarely has an ontology available at all. Second, the number of such ontologies could be prohibitive, because every paper could have its own ontology. It is therefore too early to discuss using the Gene Ontology to map publication semantics. However, we think it is feasible to use the Gene Ontology as prior knowledge so that an algorithm can understand the literature at a deeper semantic level. For example, given the knowledge that gene MCM2 is associated with the molecular function "chromatin binding", an algorithm can "guess" that a paper is relevant to "chromatin binding" when it finds gene MCM2 in it. Such semantic reasoning may not yet extract knowledge that is meaningful to humans from publications, but it can improve the effectiveness of existing machine learning and data mining algorithms for genomic knowledge discovery. For instance, an algorithm may be able to distinguish the type of data used in a paper, whether alphanumeric, DNA sequence, or interaction network, without understanding every specific data type. Such a capability is also important when integrating online services: a better understanding of the service functionalities, which are usually described as semi-structured data, can lead to better query execution planning for integrated systems that adopt the distributed/mediator-based approach. It also offers possible solutions to data redundancy and inconsistency. By understanding the data semantics, an algorithm can identify duplications in databases; with additional knowledge about data quality, it may even recognize the correct data among contradictory copies. All these forms of semantic reasoning rely heavily on a formal


representation of the ontology, which the current Gene Ontology still lacks. Without such a formal specification, it is difficult for an algorithm to follow the knowledge in the Gene Ontology and use it to derive new knowledge from online resources.
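The MCM2 example above can be illustrated with a minimal tagging sketch. The annotation table and the function name guess_topics are hypothetical; in practice the gene-to-term associations would be read from the Gene Ontology Consortium's annotation files.

```python
# Hypothetical gene -> GO term associations used as prior knowledge.
GENE_ANNOTATIONS = {
    "MCM2": ["chromatin binding"],
}

def guess_topics(text):
    """If a known gene name appears in the text, 'guess' the GO terms it is annotated with."""
    topics = set()
    for gene, terms in GENE_ANNOTATIONS.items():
        if gene in text:
            topics.update(terms)
    return topics

print(guess_topics("We study the role of MCM2 during DNA replication licensing."))
# -> {'chromatin binding'}
```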

4 Conclusions

In this paper we have studied the feasibility of using the Gene Ontology to facilitate online genomic information integration. Our findings confirm that the Gene Ontology is a valuable tool for resolving semantic, and sometimes structural, heterogeneity in database integration. Its potential for semi-structured data integration tasks, such as automatic knowledge discovery from the literature and the resolution of data inconsistencies, is limited by its lack of a formal specification. As the Gene Ontology develops, we believe it will play a vital role in genomic information integration and understanding.

References 1. Hernandez, T., Kambhampati, S.: Integration of biological sources: current systems and challenges ahead. ACM SIGMOD Record 33 (2004) 51–60 2. The Gene Ontology Consortium: Gene ontology: tool for the unification of biology. Nature Genetics 25 (2000) 25–29 3. Davidson, S.B., Crabtree, J., Brunk, B.P., Schug, J., Tannen, V., Overton, G.C., C. J. Stoeckert, J.: K2/Kleisli and GUS: Experiments in integrated access to genomic data sources. IBM Systems Journal 40 (2001) 512–531 4. Haas, L.M., Kodali, P., Rice, J.E., Schwarz, P.M., Swope, W.C.: Integrating life sciences data-with a little garlic. In: Proceedings of the 1st IEEE International Symposium on Bioinformatics and Biomedical Engineering. (2000) 5–12 5. Lenzerini, M.: Data integration: a theoretical perspective. In: Symposium on Principles of Database Systems. (2002) 233 – 246 6. Gruber, T.: A translation approach to portable ontology specifications. Knowledge Acquisition 5 (1993) 199–220 7. The Gene Ontology Consortium: Creating the gene ontology resource: Design and implementation. Genome Research 11 (2001) 1425–1433 8. Wache, H., Vogele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H., Hubner, S.: Ontology-based integration of information - a survey of existing approaches. In: Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI-01) Workshop: Ontologies and Information Sharing. (2001) 108– 117 9. Uschold, M., Gruninger, M.: Ontologies and semantics for seamless connectivity. SIGMOD Record 33(4) (2004) 58–64

Table Detection from Plain Text Using Machine Learning and Document Structure Juanzi Li, Jie Tang, Qiang Song, and Peng Xu Department of Computer Science and Technology, Tsinghua University, P.R. China {ljz, xp}@keg.cs.tsinghua.edu.cn, {j-tang02, sq02}@mails.tsinghua.edu.cn

Abstract. Addressed in this paper is the issue of table extraction from plain text. The table is one of the most common modes of presenting information, and table extraction has applications in information retrieval, knowledge acquisition, and text mining. Automatic information extraction from tables is a challenge. Existing methods focus mainly on table extraction from web pages (formatted table extraction); to the best of our knowledge, the problem of table extraction from plain text has not yet received sufficient attention. In this paper, unformatted table extraction is formalized as unformatted table block detection and unformatted table row identification. We concentrate particularly on table extraction from Chinese documents and propose to conduct the task by combining machine learning methods with document structure. We first view the task as classification and propose a statistical approach based on Naïve Bayes, defining the features used in the classification model. We then use document structure to improve detection performance. Experimental results indicate that the proposed methods significantly outperform the baseline methods for unformatted table extraction.

1 Introduction

The table is one of the most common modes of presenting information. It is widely used in many kinds of documents, such as government reports, academic publications, and magazine articles. Thus, automatically identifying tables in documents is important for many applications, for example information retrieval, knowledge acquisition, information summarization, and data mining. Unfortunately, tables can be expressed in different kinds of formats. A table may carry format information, for example the "<table>", "<tr>" and "<td>" tags in HTML; we call this kind of table a formatted table. On the other hand, a table may have no such format information, for example in plain text; we call this kind of table an unformatted table. A table may have a table header and table data. The table header usually lies in the topmost row, the leftmost column of the table, or both; correspondingly, table data can be expressed in row, column, or mixed mode. In this paper, we address the issue of unformatted table extraction from Chinese plain text. We propose to conduct table extraction by combining machine learning with the extracted document structure.


The rest of this paper is organized as follows. In Section 2 we introduce the related work. In Section 3 we formalize the problem of unformatted table detection. In Section 4 we describe our approach. Section 5 gives an implementation of our approach, and Section 6 gives our experimental results. We draw conclusions in Section 7.

2 Related Work

Table detection is an important area in data mining, and many research efforts have been made so far. However, most of the existing work focuses on detecting tables in web pages, and only a little work has addressed table detection from plain text.

1. Table Detection from Plain Text. Several research efforts have addressed table detection from plain text. For example, Pinto et al. propose an approach using Conditional Random Fields (CRFs) for table extraction; they view table detection as a sequence learning problem, and the CRF-based method outperforms the Hidden Markov Model (HMM) based method, a classical method for sequence learning [5]. Klein has constructed commercial document analysis systems for detecting table titles, recognizing table structure, and grouping similar table lines [3]. See also [4][6]. Compared with the above methods, two features make our method different: (1) we combine machine learning and document structure to detect unformatted tables in plain text; (2) we focus on table detection from Chinese documents.

2. Table Detection from Web Pages. Table extraction from web pages is an important research topic in web data mining, and considerable research work has been devoted to it. On a web page, a table is usually enclosed by special tags such as "<table>" and "<tr>", so table detection on web pages differs significantly from that on plain text. For example, Wang et al. propose a table extraction model: they define many statistical features of tables and, based on these features, propose an optimal algorithm to detect tables [9]. See also [1][2][8].

3 Unformatted Table Detection

According to their format, we divide tables into two categories: formatted tables and unformatted tables. By formatted table, we mean a table that has some kind of format information; for example, in HTML a table is usually enclosed by the two tags "<table>" and "</table>". By unformatted table, we mean a table that does not have such format information. The problem of unformatted table detection is therefore more complex and more difficult to deal with. Figure 1 shows an example of an unformatted table. Line 1 is the table title, and lines 2 to 7 form a table. Within the table, each line is a table row; line 2 is the table header row, and the table can be grouped into four columns. In this paper, we formalize unformatted table detection as unformatted table row recognition and unformatted table identification. In unformatted table row


recognition, we mean the process of recognizing unformatted table rows in the plain text. In unformatted table identification, we identify an unformatted table by making use of the document structure and the results of table row recognition.

Fig. 1. An example of unformatted table

For convenience of presentation, in the rest of this paper we do not differentiate between unformatted tables and tables.

4 Our Approach

We perform unformatted table detection in three passes: document structure extraction, unformatted table row recognition, and unformatted table identification. The input is plain text. In document structure extraction, we aim to reconstruct the document structure that is lost in plain text. An example of document structure is the document map tree in Microsoft Office Word; in most cases, however, documents do not contain such a clear structure, for various reasons. We reorganize the document structure with a hierarchical clustering approach (see [10] for details). In unformatted table row recognition, we use a classification-based method to find unformatted table rows. Specifically, we define features for Chinese plain text in the classification model and employ Bayesian classification to detect the unformatted table rows. In unformatted table identification, we make use of the results of document structure extraction and unformatted table row recognition, and utilize heuristic rules to identify the unformatted tables. Finally, the output consists of the detected unformatted tables and the unformatted table rows within them.

5 Implementation

We consider one implementation of the proposed approach. We employ a unified machine learning approach, Naïve Bayesian classification, for unformatted table row recognition, and we perform unformatted table identification by combining document structure with unformatted table row recognition. These issues have not been investigated previously on plain text and are thus the main focus of our work.


5.1 Document Structure Extraction

We use the methods in [10] for document structure extraction. Here, we give the schema of the document structure.





The definition is recursive. Each Level-Block has a Block-Title and a Content, and it can have sub Level-Blocks. Examples of a Block-Title are a section title or a table title. The document structure is organized as a tree.

5.2 Unformatted Table Row Recognition

In this paper, the problem of determining whether or not a text line is an unformatted table row is viewed as one of classification. We manually annotate unformatted table rows in our dataset and learn a classification model in advance; for recognition, we use the learned model to predict whether a line is a table row or not. The key point is to define features that allow the table row recognition task to be performed efficiently. The features fall into three categories. Space Symbol Features: the percentage of space or tab symbols in the current line, with values ranging from 0 to 1. Text Length Feature: the length of the current line. Segment Number Features: the number of segments in a line, an important indicator of a table in Chinese documents. The features above are also defined for the previous line and the next line.

5.3 Document Structure Based Unformatted Table Identification

We use document structure to enhance the detection of unformatted table rows and unformatted tables. We propose an algorithm with three processing steps: annotation, modification and refinement, and induction.

Input: results of unformatted table row detection using the classification model
Output: improved detection results using document structure
Algorithm:
Step 1: Annotation. (1) If a line is a title in the document structure, then annotate the line as a title. (2) Else if a line is classified as true according to the classification model, annotate the line as a table row. (3) Else annotate it as content.
Step 2: Modification and refinement. (1) Search the document titles to see whether there exist table rows within the titles; if so, go to (2), else go to Step 1 and annotate the next line.


(2) If all lines in the same level meet the following condition: most of them are recognized as table rows and few lines have only one segment, then mark all these lines as table rows.
Step 3: Induction. Annotate all continuous table rows as one corresponding table.
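The sketch below illustrates the classification step of Section 5.2: the three feature groups for the current, previous and next lines, fed to a Naïve Bayes classifier. It is a simplified reconstruction, not the authors' code; the tiny training set is invented for illustration, and scikit-learn's GaussianNB is used as a stand-in for the Bayesian model described in the paper.

```python
import re
from sklearn.naive_bayes import GaussianNB   # stand-in for the paper's Naive Bayes model

def line_features(line):
    """Space/tab ratio, text length and segment count for one line (Section 5.2)."""
    n = max(len(line), 1)
    space_ratio = sum(c in " \t" for c in line) / n
    segments = len([s for s in re.split(r"[ \t]+", line.strip()) if s])
    return [space_ratio, float(len(line)), float(segments)]

def features(lines, i):
    """Features of the current line together with its previous and next lines."""
    prev_line = lines[i - 1] if i > 0 else ""
    next_line = lines[i + 1] if i + 1 < len(lines) else ""
    return line_features(prev_line) + line_features(lines[i]) + line_features(next_line)

# Invented training data: label 1 = table row, 0 = ordinary text line.
train_lines = ["Quarterly report", "Item\tQ1\tQ2\tQ3", "Sales\t10\t12\t15", "The year was good."]
labels      = [0, 1, 1, 0]
clf = GaussianNB().fit([features(train_lines, i) for i in range(len(train_lines))], labels)

test_lines = ["Region\tNorth\tSouth", "We discuss the results below."]
print(clf.predict([features(test_lines, i) for i in range(len(test_lines))]))
```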

6 Experimental Results

6.1 Experimental Setup

1. Data set and evaluation measures. We collected data from a stock exchange: in total, 108 annual enterprise reports were gathered. These reports were converted into an XML-based format to facilitate later processing (see [10] for details of the conversion). In summary, 1706 unformatted tables and 13162 unformatted table rows were annotated. In all experiments, we evaluate in terms of precision, recall and F1-measure.
2. Baseline methods. For table row recognition, we defined the following baseline: if a line contains more than one separator such as a space or tab, identify the line as a table row. For table identification, we defined the following baseline: if successive lines are recognized as table rows, identify these lines as a table.

6.2 Table Detection from Plain Text

Table 1 shows the five-fold cross-validation results of unformatted table row recognition on the data set, and Table 2 shows the five-fold cross-validation results of unformatted table identification. In each table, the first column lists the four methods evaluated in our experiments.

Table 1. Performance of unformatted table row detection

Methods                   Recall          Precision       F1-Measure
Baseline                  87.8%           98.3%           92.8%
Bayesian Classification   93.7% (+5.9%)   97.4% (-0.9%)   95.5% (+2.7%)
Document Structure        95.6% (+7.8%)   99.0% (+0.7%)   97.3% (+4.5%)
Combination               97.1% (+9.3%)   98.7% (+0.4%)   97.9% (+5.1%)

Table 2. Performance of unformatted table detection

Methods                   Recall           Precision        F1-Measure
Baseline                  78.3%            63.8%            70.3%
Bayesian Classification   85.5% (+7.2%)    75.0% (+11.2%)   79.9% (+9.6%)
Document Structure        85.1% (+6.8%)    85.1% (+21.3%)   85.1% (+14.8%)
Combination               89.9% (+11.6%)   87.6% (+23.8%)   88.7% (+18.4%)


In the Bayesian classification method, we used the classification model to decide whether or not a line is an unformatted table row, and then treated continuous unformatted table rows as an unformatted table. In the document structure based method, we first identified unformatted table rows using the baseline method and then used the algorithm described in Section 5.3 to identify tables. In the combination method, we used the method of Section 5 to detect both table rows and tables. Our methods achieve high performance in both tasks: for both unformatted table row recognition and unformatted table identification, they significantly outperform the baselines.

7 Conclusion

In this paper, we have investigated the problem of unformatted table detection in Chinese plain text. We have formalized the problem as unformatted table row recognition and unformatted table identification, and proposed a combination approach to the task, defining the features used in the classification model. Using the classification method together with document structure, we implemented the approach. Experimental results show that it significantly outperforms the baseline methods.

References [1] H.H. Chen, S.C. Tsai, and J.H. Tsai. Mining tables from large scale HTML Text, In the Proc. of 18th international conference on Computational Linguistics, 2002, Saarbruecken, Germany. [2] W. Cohen, M. Hurst, and L. Jensen. A flexible learning system for wrapping tables and lists in HTML documents. In the Proc. Of WWW2002, Honolulu, Hawaii, 2002. [3] B. Klein, S. Gokkus, T. Kieninger T. Three approaches to “industrial” table spotting. In Proc. 6th Int’l Conf. Document Analysis and Recognition. 2001: 513-517 [4] H. T. Ng, C. Y. Lim, J. L. T. Koo. Learning to Recognize Tables in Free Text. In Proc. of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics (ACL'99). 1999, pages 443-450. [5] D. Pinto, A. McCallum, X. Wei, and W. B. Croft. Table Extraction Using Conditional Random Fields. In Proc. of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 2003. [6] P. Pyreddy and W. Croft. TintinL: A system for retrieval in text tables. In Proc. the second international conference on digital libraries. 1997 [7] J. Tang, J. Li, H. Lu, B. Liang, X. Huang, K, Wang. iASA: Learning to Annotate the Semantic Web. Journal on Data Semantics. 2005. [8] A. Tengli, Y. Yang, and N. Ma, Learning Table Extraction from Examples. In Proc. Of 20th international conference on computational linguistics. [9] Y. Wang, T.P. Phillips, and R.M. Haralick. Table structure understanding and its performance evaluation. In Pattern Recognition. Vol.37(7), 2004: 1479-1497 [10] K. Zhang, P. Xu, J. Li, and K. Wang. Optimized hierarchy clustering based extraction for document logical structure. Journal of Tsinghua Science and Technology. 2005, 45(4).

Efficient Mining Strategy for Frequent Serial Episodes in Temporal Database Kuo-Yu Huang and Chia-Hui Chang Department of Computer Science and Information Engineering, National Central University, Chung-Li, Taiwan 320 [email protected], [email protected]

Abstract. Discovering significant patterns is an important problem in the data mining discipline. A serial episode is a set of events with a total order that occur close together (within a fixed-width time window) in a sequence. Previous studies on serial episodes consider only frequent serial episodes in a sequence of single events (called a simple sequence). In the real world, however, we may find a set of events at each time slot (called a complex sequence), and mining frequent serial episodes in complex sequences has more extensive applications than mining them in simple sequences. In this paper, we discuss the problem of mining frequent serial episodes in a complex sequence. We extend the previous algorithm MINEPI to MINEPI+ for serial episode mining from complex sequences, and we further introduce a memory-anchored algorithm called EMMA for the mining task.

1 Introduction

Mining significant patterns in sequences is an important and fundamental issue in knowledge discovery, covering sequential patterns, frequent episodes, frequent continuities and periodic patterns [1]. Among these studies, discovering frequent serial episodes is a basic problem in sequence analysis [3]. The goal of episode mining is to find relationships between events; such relationships can then be used in on-line analysis to better explain the problems that caused a particular event or to predict future results. Serial episode mining has been of great interest in many applications, including internet anomaly intrusion detection [2], biomedical data analysis and web log analysis. The task of mining frequent episodes was originally defined on "a sequence of events" in which the events are sampled regularly, as proposed by Mannila et al. [3]. Informally, an episode is a partially ordered collection of events occurring together; the user defines how close is close enough by giving the width of the time window win. Mannila et al. introduced three classes of episodes: serial episodes consider patterns with a total order in the sequence, parallel episodes place no constraints on the relative order of event sets, and composite episodes include, for example, serial combinations of parallel episodes. Mannila et al. presented a framework for discovering frequent episodes through a level-wise algorithm, WINEPI [3], for finding parallel and serial


episodes that are frequent enough. The algorithm is an Apriori-like algorithm exploiting the "anti-monotone" property of episode support. The support of an episode is defined as the number of sliding windows, i.e., blocks of win consecutive records, in the sequence that contain the episode. Take the sequence S = A3 A4 B5 B6 as an example: there are 6 - 3 + 3 = 6 sliding windows in S given win = 3, namely W1 = A3, W2 = A3 A4, W3 = A3 A4 B5, W4 = A4 B5 B6, W5 = B5 B6 and W6 = B6. Unfortunately, this support count has a defect when computing the confidence of an episode rule. For example, the serial episode rule "when event A occurs, event B occurs within 3 time units" should have probability (confidence) 2/2 in the sequence S, since every occurrence of A is followed by B within 3 time units. However, since episode < A > is supported by four sliding windows while serial episode < A, B > is matched by two sliding windows (W3 and W4), the above rule obtains confidence 2/4. Instead of counting the number of sliding windows that support an episode, Mannila et al. also considered, from another perspective, the number of minimal occurrences of an episode. They presented MINEPI [4], an alternative approach that discovers frequent episodes from the minimal occurrences (mo) of episodes; a minimal occurrence of an episode α is an interval such that no proper subwindow contains α. For example, episode < A > has mo support 2 (intervals [3,3] and [4,4]), while episode < A, B > has mo support 1, from interval [4,5], so the above rule would have confidence 1/2. Neither measure is therefore natural for calculating the confidence of an episode rule, and we need a measure that facilitates the calculation of such rules in place of the number of sliding windows or minimal occurrences. In addition, we sometimes find several events occurring at one time slot, giving what we call complex sequences; note that a temporal database is also a kind of complex sequence with temporal attributes. Mining frequent serial episodes in a complex sequence has more extensive applications than mining them in a simple sequence. In this paper we therefore discuss the problem of mining frequent serial episodes over a complex sequence, where the support of an episode is carefully modified to count the exact occurrences of episodes. We propose two algorithms for mining frequent episodes in complex sequences, MINEPI+ and EMMA. MINEPI+ is modified from the previous vertical-based MINEPI [4] for mining episodes in a complex sequence; it employs depth-first enumeration to generate the frequent episodes by equal join and temporal join. To further reduce the search space in pattern generation, we propose a brand-new algorithm, EMMA (Episode Mining using Memory Anchor), which utilizes memory anchors to accelerate the mining task. Experimental evaluation shows that EMMA is more efficient than MINEPI+.
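A small sketch of the window-based support count discussed above, reproducing the S = A3 A4 B5 B6 example. The encoding of the sequence as a time-to-event dictionary and the helper names are choices made here for illustration.

```python
# The example sequence: time -> event, i.e. A3 A4 B5 B6, with window width 3.
S = {3: "A", 4: "A", 5: "B", 6: "B"}
WIN = 3

def windows(seq, win):
    """All sliding windows W1..W6 of width `win` that overlap the sequence."""
    t_start, t_end = min(seq), max(seq)
    for s in range(t_start - win + 1, t_end + 1):
        yield [seq[t] for t in range(s, s + win) if t in seq]

def window_support(seq, win, episode):
    """Number of sliding windows containing the serial episode (events in order)."""
    def matches(window, ep):
        i = 0
        for e in window:
            if i < len(ep) and e == ep[i]:
                i += 1
        return i == len(ep)
    return sum(matches(w, episode) for w in windows(seq, win))

print(window_support(S, WIN, ["A"]))        # 4 windows support <A>
print(window_support(S, WIN, ["A", "B"]))   # 2 windows (W3, W4) support <A, B>
# The rule "A is followed by B within 3 time units" therefore gets confidence 2/4 under
# window counting, although every occurrence of A is in fact followed by B within 3 units.
```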

2 Mining Serial Episodes

2.1 MINEPI+

MINEPI is an iteration-based algorithm that enumerates longer serial episodes from shorter ones in a breadth-first manner. However, instead of scanning


the temporal database for support counting, MINEPI computes the minimal occurrences (mo) of each candidate episode from the mo of its subepisodes by temporal joins. For example, suppose we want to find all frequent serial episodes in the simple sequence S = A1 A2 B3 A4 B5 with maxwin = 4 and minsup = 2. MINEPI first finds the frequent 1-episodes and records their minimal occurrences, i.e. mo(A) = {[1,1], [2,2], [4,4]} and mo(B) = {[3,3], [5,5]}. Using the temporal join, which connects events at different time points (less than maxwin apart), we get the intervals [1,3], [2,3], [2,5] and [4,5] for the candidate 2-tuple episode <A, B>. Since [1,3] and [2,5] are not minimal, the minimal occurrences of <A, B> are {[2,3], [4,5]}. If instead we want to count the number of sliding windows that match the serial episode <A, B>, interval [1,3] should be retained, since the window starting at time 1 contains A followed by B. We therefore obtain support count 3 for serial episode <A, B>, because [2,3] and [2,5] denote the same sliding window. To extend MINEPI to our problem, we also need the equal join, which connects events at the same time point, for dealing with complex sequences. We use these intervals to compute the right support count for the problem. Given the maximum window bound maxwin, the bound list of a serial episode P = <p1, ..., pk> is the set of intervals [ts_i, te_i] (te_i - ts_i < maxwin) such that p1 ⊂ X_{ts_i}, pk ⊂ X_{te_i} and [X_{ts_i+1}, X_{ts_i+2}, ..., X_{te_i-1}] is a supersequence of <p2, ..., pk-1>. Each interval [ts_i, te_i] is called a matching bound of P. By definition, the bound list of an event Y is the set of intervals [t_i, t_i] such that X_{t_i} supports Y. Consider a serial episode P = <p1, ..., pk> and a frequent 1-pattern f with matching bound lists P.boundlist = {[ts_1, te_1], ..., [ts_n, te_n]} and f.boundlist = {[ts'_1, ts'_1], ..., [ts'_m, ts'_m]}. The equal join of P and f, which computes the bound list of the new serial episode P1 = <p1, ..., pk ∪ f>, is defined as the set of intervals [ts_i, te_i] such that te_i = ts'_j for some j (1 ≤ j ≤ m). Similarly, the temporal join (concatenation) of P and f (denoted P · f), which computes the bound list of the new serial episode P2 = <p1, ..., pk, f>, is defined as the set of intervals [ts_i, ts'_j] such that ts'_j - ts_i < maxwin and ts'_j > te_i for some j (1 ≤ j ≤ m). Unlike MINEPI, we apply depth-first enumeration in pattern generation to save memory: breadth-first enumeration must keep records for all episodes of two consecutive levels, while depth-first enumeration only needs to keep the intermediate records of the episodes generated along a single path. Although MINEPI+ does not search for minimal occurrences in the temporal database, we call our algorithm MINEPI+ because its vertical-based operations are similar to those of MINEPI. Though this extension of MINEPI discovers all frequent serial episodes, MINEPI+ has the following drawbacks. 1. A huge number of combinations: letting |I| be the number of frequent 1-episodes, MINEPI+ needs |I|^2 temporal-join checks and (|I|^2 - |I|)/2 equal-join checks. 2. Unnecessary joins: when the number of extendable matching bounds of a serial episode is less than minsup * |TDB|, all temporal joins for this prefix could be skipped. 3. Duplicate joins: when several serial episodes share a common prefix such as <ABC>, MINEPI+ repeats the equal joins that construct the prefix for each of them; if the bound list of <ABC> were maintained, only one additional temporal join would be needed per extension.
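A minimal sketch of the two join operations on bound lists, using the simple-sequence example above. Bound lists are represented here as plain lists of (start, end) tuples; this representation and the function names are illustrative choices, not the authors' implementation.

```python
def temporal_join(p_bounds, f_bounds, maxwin):
    """Bound list of P·f: f occurs strictly after P's last element, within maxwin."""
    return [(ts, te2)
            for ts, te in p_bounds
            for ts2, te2 in f_bounds
            if te2 > te and te2 - ts < maxwin]

def equal_join(p_bounds, f_bounds):
    """Bound list of P extended with f occurring at the same time as P's last element."""
    f_times = {ts2 for ts2, _ in f_bounds}
    return [(ts, te) for ts, te in p_bounds if te in f_times]

# Simple sequence S = A1 A2 B3 A4 B5: the bound lists of the 1-episodes A and B.
A = [(1, 1), (2, 2), (4, 4)]
B = [(3, 3), (5, 5)]
print(temporal_join(A, B, maxwin=4))   # [(1, 3), (2, 3), (2, 5), (4, 5)], the bounds of <A, B>
```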

Fig. 1. Phase I and II for EMMA: (a) a temporal database TDB over time slots 1-16, (b) the frequent itemsets for TDB with min_sup = 5, and (c) the encoded horizontal database for TDB. The frequent itemsets of panel (b) are:

ID   Item   Timelist
#1   A      1, 4, 8, 11, 14
#2   B      3, 6, 9, 12, 16
#3   C      1, 2, 4, 8, 11, 14, 15
#4   D      3, 5, 6, 9, 12, 13, 16
#5   A, C   1, 4, 8, 11, 14
#6   B, D   3, 6, 9, 12, 16

2.2 EMMA

In this section we propose an algorithm, EMMA (Episode Mining using Memory Anchor), that overcomes the drawbacks of the MINEPI+ algorithm. To reduce duplicate checking, EMMA is divided into three phases: (I) mining the frequent itemsets in the complex sequence; (II) encoding each frequent itemset with a unique ID and constructing an encoded horizontal database; (III) mining frequent serial episodes in the encoded database. EMMA adopts depth-first search to enumerate locally frequent patterns via memory anchors, which makes it more like a pattern-growth method, since it searches for locally frequent sub-patterns to form longer patterns. Thus, instead of frequent items, we take the larger set of all frequent itemsets as the frequent 1-tuple episodes. Again, we use the bound lists of the frequent 1-tuple episodes to enumerate longer frequent episodes; however, we only combine existing episodes with "locally" frequent 1-tuple episodes, which avoids the huge amount of candidate generation. To discover locally frequent 1-tuple episodes efficiently, we construct an encoded database EDB indexed by time (Phase II) and use the bound lists as memory anchors to access the horizontal information. Note that the timelists of the frequent itemsets are equivalent to the bound lists of the frequent 1-tuple episodes. As an example, Figure 1 shows an illustrative transaction database, the frequent itemsets with min_sup = 5, and the encoded database. Finally, in Phase III we use depth-first enumeration to enumerate the frequent serial episodes while carefully avoiding unnecessary joins. Like MINEPI+, EMMA adopts depth-first enumeration to generate longer serial episodes; however, EMMA generates only frequent serial episodes, by joining an existing serial episode with locally frequent IDs. This is accomplished by examining the transactions that follow the matching bounds of the current serial episode. For example, to extend the episode #3 = {C} with bound list {[1,1], [2,2], [4,4], [8,8], [11,11], [14,14], [15,15]}, we need to count the occurrences of IDs in the following intervals within the maxwin = 4 bound, i.e. [2,4], [3,5], [5,7], [9,11], [12,14], [15,16] and [16,16]. We call these intervals the projected bound list of the serial episode.


Formally, the projected bound list of an episode's bound list is defined as follows. Given the bound list of a serial episode P in the encoded database ED, P.boundlist = {[ts_1, te_1], ..., [ts_n, te_n]}, the projected bound list (PBL) of P is P.PBL = {[ts'_1, te'_1], ..., [ts'_n, te'_n]}, where ts'_i = min(ts_i + 1, |TDB|) and te'_i = min(ts_i + maxwin - 1, |TDB|). When examining the IDs in the projected bound list, we also record the bound lists of those IDs. For example, #4 is a locally frequent ID in #3.PBL and has bound list {[3,3], [5,5], [6,6], [9,9], [12,12], [13,13], [16,16]}. Thus, when a new serial episode is generated by the temporal join of #3 and #4, we know its bound list immediately, i.e. {[1,3], [2,3], [4,5], [8,9], [11,12], [14,16], [15,16]}. To extend this episode, the procedure emmajoin is called recursively until no more new serial episodes can be extended, i.e. when the number of extendable bounds of a serial episode is less than minsup * |TDB|. For example, suppose the bound list of some serial episode is {[1,3], [3,5], [8,11], [11,14], [14,15]}; with maxwin = 4 the extendable bounds are {[1,3], [3,5], [14,15]}, since [8,11] and [11,14] have already reached the maximum window bound. With minsup = 5, we do not need to extend this serial episode. This strategy avoids the unnecessary checking spent in MINEPI+.
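A sketch of the projected-bound-list computation and the counting of locally frequent IDs, reproducing the #3 = {C} example above. The encoded database is modeled here as a dictionary from time slots to the IDs present at each slot; this representation and the function names are assumptions made for illustration.

```python
def projected_bounds(boundlist, maxwin, tdb_len):
    """Projected bound list (PBL): the interval each occurrence can be extended into."""
    return [(min(ts + 1, tdb_len), min(ts + maxwin - 1, tdb_len)) for ts, _ in boundlist]

def local_frequent_ids(encoded_db, pbl, min_count):
    """Count, per projected interval, which IDs occur, and keep the frequent ones."""
    counts = {}
    for lo, hi in pbl:
        seen = set()
        for t in range(lo, hi + 1):
            seen.update(encoded_db.get(t, ()))
        for item_id in seen:
            counts[item_id] = counts.get(item_id, 0) + 1
    return {i: c for i, c in counts.items() if c >= min_count}

# Episode #3 = {C}: bound list from the example, maxwin = 4, |TDB| = 16.
bounds_c = [(1, 1), (2, 2), (4, 4), (8, 8), (11, 11), (14, 14), (15, 15)]
print(projected_bounds(bounds_c, maxwin=4, tdb_len=16))
# -> [(2, 4), (3, 5), (5, 7), (9, 11), (12, 14), (15, 16), (16, 16)]
```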

Fig. 2. Performance comparison in real data: (a) running time vs. min_sup; (b) memory usage vs. min_sup; (c) running time vs. maxwin; (d) memory usage vs. maxwin

3 Experiments

We apply MINEPI+ and EMMA to a data set composed of 10 stocks from the Taiwan Stock Exchange Daily Official List over 2618 trading days, from September 5, 1994 to June 21, 2004. We discretize the daily price movement (go-up/go-down) into five levels.


Figure 2(a) shows the running time with an increasing support threshold, min_sup, from 10% to 30%. Figure 2(c) shows the same measures with varying maxwin. As the maxwin threshold increases or the min_sup threshold decreases, the gap between MINEPI+ and EMMA in running time becomes more substantial. Figures 2(b) and (d) show the memory requirements and the number of frequent episodes with varying min_sup and maxwin. As the maxwin threshold increases or the min_sup threshold decreases, the number of frequent episodes also increases. The memory requirement of MINEPI+ is steady. However, EMMA needs to maintain more frequent itemsets as min_sup decreases, whereas its memory requirement changes only slightly with varying maxwin. Overall, MINEPI+ is better than EMMA in memory saving (by a factor of 4 for min_sup = 10%).

4 Conclusion and Future Work

In this paper, we discuss the problem of mining frequent serial episodes in a complex sequence and propose two algorithms to solve it. First, we modify the previous vertical-based MINEPI to MINEPI+ as the baseline for mining episodes in a complex sequence. To avoid the huge amount of combinations/computations and unnecessary/duplicate checking, we then propose a brand-new memory-anchored algorithm, EMMA. The experiments show that EMMA is more efficient than MINEPI+. So far we have only discussed serial episodes. Parallel episodes, which place no constraint on event order, and composite episodes, e.g. serial combinations of parallel episodes, remain to be solved. Thus, further research is required.

References 1. K. Y. Huang and C. H. Chang. Smca: A general model for mining synchronous periodic pattern in temporal database. IEEE Transaction on Knowledge and Data Engineering (TKDE), 17(6):776–785, 2005. 2. Jianxiong Luo and Susan M. Bridges. Mining fuzzy association rules and fuzzy frequency episodes for intrusion detection. International Journal of Intelligent Systems, 15(8), 2000. 3. H. Mannila, H. Toivonen, and A. I. Verkamo. Discovering frequent episodes in sequences. In Proceedings of the First International Conference on Knowledge Discovery and Data Mining (KDD’95), pages 210–215, 1995. 4. H. Mannila and H. Toivonen. Discovering generalized episodes using minimal occurrences. In Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD’96), pages 146–151, 1996.

Efficient and Provably Secure Client-to-Client Password-Based Key Exchange Protocol Jin Wook Byun, Dong Hoon Lee, and Jong-in Lim Center for Information Security Technologies (CIST), Korea University, Anam Dong, Sungbuk Gu, Seoul, Korea {byunstar, donghlee, jilim}@korea.ac.kr

Abstract. We study client-to-client password-authenticated key exchange (C2C-PAKE) enabling two clients in different realms to agree on a common session key using different passwords. Byun et al. first presented C2C-PAKE schemes under the cross-realm setting. However, the schemes were not formally treated, and were subsequently found to be flawed. In addition, the schemes leave room for improvement in both computational and communication aspects. In this paper we suggest an efficient C2C-PAKE (EC2C-PAKE) protocol, and prove that the EC2C-PAKE protocol is secure under the decisional Diffie-Hellman assumption in the ideal cipher and random oracle models. Keywords: Human memorable password, mobile computing, different password authentication, authenticated key exchange, dictionary attacks.

1 Introduction

To communicate securely over an insecure public network it is essential that secret keys are exchanged securely. A password-authenticated key exchange protocol allows two or more parties holding the same memorable password to agree on a common secret value (a session key) over an insecure open network. Most password-authenticated key exchange schemes in the literature have focused on a same-password authentication (SPWA) model, which provides password-authenticated key exchange using a common password shared between a client and a server [2, 3, 4, 5, 11, 19]. Several schemes have been presented to provide password-authenticated key exchange between two clients (or n clients) with different passwords in the single-server setting, where the two clients (or n clients) are in the same realm [14, 15, 17, 1, 7, 8, 13]. In this different password-authentication (DPWA) model two clients can generate a common session key with their distinct passwords with the help of a server. However, it is unrealistic to assume that two clients trying to communicate with each other are always registered with the same server.

This research was supported by the MIC(Ministry of Information and Communication), Korea, under the ITRC(Information Technology Research Center) support program supervised by the IITA(Institute of Information Technology Assessment).


In real distributed applications, an authentication setting in which two clients are registered with different servers arises more often. For example, from a user's point of view, in a mobile computing environment, a secure end-to-end channel between one mobile user in cell A and another user in cell A or cell B may be a primary concern. Additionally, the end-to-end security service minimizes interference from the operator-controlled network components.

1.1 Related Works and Our Contribution

Byun et al. first proposed C2C-PAKE, a secure client-to-client password-authenticated key agreement protocol in the cross-realm setting where the two clients are in two different realms and hence two servers are involved [6]. They heuristically argued that the schemes were secure against all attacks considered. Unfortunately, the scheme was found to be flawed. Chen first pointed out that in the cross-realm setting one malicious server can mount a dictionary attack to obtain the password of a client who belongs to the other realm [10]. In [18], Wang et al. showed three dictionary attacks on the same protocol, and Kim et al. pointed out that the protocol was susceptible to a Denning-Sacco attack in [12]. Kim et al. also proposed an improved C2C-PAKE protocol. However, very recently, Phan and Goi presented unknown key share attacks on the improved C2C-PAKE protocol in [16]. Up until now, several countermeasures against the attacks on the C2C-PAKE protocol have been presented in [10, 18, 12, 16], but without any formal treatment, which may lead to further attack-and-remedy cycles in the near future. In this paper we propose an efficient and provably secure C2C-PAKE (called EC2C-PAKE) scheme which improves efficiency in both communicational and computational aspects, and we formally prove its security. We prove that the EC2C-PAKE protocol is secure under the computational assumptions of the decisional Diffie-Hellman (DDH) and hash decisional Diffie-Hellman (HDDH) problems, and the secure cryptographic primitives of MAC (message authentication code) and symmetric encryption. Furthermore, EC2C-PAKE is more efficient than the C2C-PAKE protocol, while preserving session key security and forward secrecy. We summarize the efficiency of the two protocols below. Although the percentage of saving in terms of T-R is relatively low, even a decrease of one round is notable, since it is well recognized that communication cost is a major portion of the total cost. In addition, one of the important factors in evaluating the efficiency of a protocol is the number of modular exponentiations, since exponentiation is the most power-consuming operation. We reduce the number of exponentiations to almost half that of the C2C-PAKE scheme.

            T-R            Exp            Enc          FS   R-R
C2C-PAKE    7(8)           23(23)         10(12)       Y    3
EC2C-PAKE   6(7)           12(12)         8(8)         Y    2
PS          14.3%(12.5%)   47.8%(47.8%)   20%(33.3%)   -    33.3%


* T-R: the number of total rounds; Exp: the number of total exponentiations; Enc: the number of total encryptions; FS: forward secrecy (Y means that the scheme provides FS); R-R: the number of rounds between realms; PS: the percentage of saving, calculated as (C2C-PAKE − EC2C-PAKE) / C2C-PAKE × 100; (): the corresponding number when the protocol is augmented with mutual authentication.

2 Efficient C2C-PAKE Protocol

In this section we design an efficient C2C-PAKE protocol. In the protocol, we use two encryption functions: one is an ideal cipher E, a random one-to-one function E_K : M → C with |M| = |C|, and the other is a secure symmetric encryption scheme E. For simplicity, we assume that Clients = {Alice, Bob} and Servers = {KDC_A, KDC_B}, where KDC_A and Alice are in Realm A and KDC_B and Bob are in Realm B.

2.1 Design Principles

The security argument for the original C2C-PAKE protocol is heuristic. Byun et al. [6] defined several active attacks and security properties in DPWA, such as the Denning-Sacco attack, dictionary attacks and perfect forward secrecy, and showed that the C2C-PAKE protocol is secure against these active attacks. These attacks may be regarded as known-key attacks, because the adversary's goal is to break the security of the given protocol using known keys. Concretely, the authors assumed that an adversary can obtain long-term passwords, session keys and ephemeral Diffie-Hellman keys (R and R'). Passwords may be revealed inadvertently during a conversation or by malicious insider adversaries. Previously used session keys may also be lost for various reasons such as hacking or careless clients. From a realistic viewpoint, therefore, the assumption that the adversary may obtain passwords or session keys is reasonable. However, the assumption that the adversary may obtain ephemeral Diffie-Hellman keys is not realistic. If we assume that an adversary can obtain ephemeral keys, which are generated on the fly and not saved, then we must also assume that the adversary is able to capture every ephemeral state value. This results in a total breakdown of the security of C2C-PAKE. To date, there is no scheme secure against an adversary that steals ephemeral states. So we assume that realistic active adversaries cannot obtain ephemeral states but can obtain session keys and passwords. By placing this realistic restriction on the adversary's ability, we design the EC2C-PAKE protocol while preserving the security of the original one.

2.2 Protocol Description

Protocol Setup. Preliminaries for a protocol run are as follows. 1. g, p and q are global public parameters shared by all protocol participants. 2. Alice (Bob) shares her password pwa (pwb) with key distribution center KDCA (KDCB , respectively) by using algorithms Gpw and R.



Notation. R (= H1(g^xy)) and R' (= H2(g^(x'y'))) are ephemeral Diffie-Hellman keys agreed between Alice and KDC_A, and between Bob and KDC_B, respectively. sk (= H3(ID_A||ID_B||g^a||g^b||g^ab)) is a session key agreed between Alice and Bob, where ID_A and ID_B are the identifiers of Alice and Bob, respectively. Ticket_B is E_K(k, ID_A, ID_B, L), where k is a common key distributed to Alice and Bob, and L is the lifetime of Ticket_B. We assume that the key K is pre-distributed between KDC_A and KDC_B by using a two-party key exchange protocol. MAC_k(m) denotes the output of a MAC applied with key k to a message m. The notation || means that two adjacent messages are concatenated.

Protocol Description. The description is as follows.

1. Alice chooses a random value x from Z_p^*, computes E_x = E_pwa(g^x), and sends it to KDC_A along with ID_A and ID_B.
2. KDC_A obtains g^x by decrypting E_x, chooses y in Z_p^* randomly, and computes E_y = E_pwa(g^y) and R = H(g^xy). KDC_A also generates a random key k from Z_p^* for Alice and Bob, and computes E_R = E_R(k, ID_A, ID_B). KDC_A specifies L, the lifetime of Ticket_B. Then KDC_A makes Ticket_B and sends E_y, E_R, and Ticket_B to Alice. Upon receiving the message from KDC_A, Alice computes the ephemeral key R and decrypts E_R to get the distributed key k. Alice also checks whether ID_A and ID_B are correct.
3. Alice generates a random value a in Z_p^* and makes E_a (= g^a||MAC_k(g^a)). Then she forwards ID_A, E_a, and Ticket_B to Bob.
4. Bob chooses x' in Z_p^* randomly and computes E_x' (= E_pwb(g^(x'))). Then he sends E_x' and Ticket_B to KDC_B.
5. KDC_B obtains k, L, and ID_A by decrypting Ticket_B using its key K. KDC_B first examines the validity of Ticket_B by checking the lifetime L and ID_A. If the validation succeeds, KDC_B selects y' in Z_p^* randomly and computes E_y' (= E_pwb(g^(y'))) and E_R' (= E_R'(k, ID_A, ID_B)), where R' is H(g^(x'y')). KDC_B finally sends E_y' and E_R' to Bob.
6. Bob decrypts E_y' and computes R'. Then Bob decrypts E_R' to get the key k. Using the key k, Bob checks g^a by verifying the previously received E_a. Bob generates a random value b in Z_p^*, and makes sk (= H(ID_A||ID_B||g^a||g^b||g^ab)) and E_b (= g^b||MAC_k(g^b)). Finally, Bob sends E_b to Alice. Upon receiving the message E_b, Alice also generates the common session key sk.
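As a purely illustrative aid (our own sketch, not the authors' code and not a secure implementation), the Python fragment below mimics the key derivations in the run above. The password-keyed ideal cipher, the ticket handling and all message transport are abstracted away, and a toy prime, hypothetical identities "Alice"/"Bob", and SHA-256/HMAC stand in for the protocol's E, H and MAC primitives.

import hashlib, hmac, secrets

# Toy group parameters (NOT secure; a real deployment uses a standard DH group).
p = 2**127 - 1
g = 5

def H(*parts):
    return hashlib.sha256(b"|".join(str(x).encode() for x in parts)).hexdigest()

# Alice <-> KDC_A ephemeral DH; the ideal-cipher layer E_pwa is omitted here.
x, y = secrets.randbelow(p - 2) + 1, secrets.randbelow(p - 2) + 1
R = H(pow(g, x * y, p))                  # R = H(g^xy), shared by Alice and KDC_A

# KDC_A distributes a fresh key k to Alice (and, via Ticket_B, to Bob).
k = secrets.token_hex(16)

# Alice <-> Bob: MAC_k-authenticated Diffie-Hellman.
a, b = secrets.randbelow(p - 2) + 1, secrets.randbelow(p - 2) + 1
ga, gb = pow(g, a, p), pow(g, b, p)
tag_a = hmac.new(k.encode(), str(ga).encode(), hashlib.sha256).hexdigest()
tag_b = hmac.new(k.encode(), str(gb).encode(), hashlib.sha256).hexdigest()
# Alice verifies the tag on g^b (Bob verifies tag_a symmetrically).
assert hmac.compare_digest(
    tag_b, hmac.new(k.encode(), str(gb).encode(), hashlib.sha256).hexdigest())

# Both sides derive the same sk = H(ID_A || ID_B || g^a || g^b || g^ab).
sk_alice = H("Alice", "Bob", ga, gb, pow(gb, a, p))
sk_bob = H("Alice", "Bob", ga, gb, pow(ga, b, p))
assert sk_alice == sk_bob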

3 Security Proofs

In this section we prove that EC2C-PAKE protocol achieves the DPWA security conditions under the DDH and HDDH assumptions, and secure MAC and symmetric encryption primitives. For formal definitions of DPWA security and computational assumptions, refer to the full paper [9]. Theorem 2.1. Let A be a probabilistic polynomial time adversary against DPWA security of the proposed EC2C-PAKE protocol P within a time bound T and making qs send queries, qe execute queries, qE encryption queries for ideal cipher E,


q_e encryption queries for the symmetric encryption E, q_t tag queries, and q_v verification queries. Then

    Adv_P^sk(A) ≤ (q_E^2 + q_h^2)/(p − 1) + 6 · Adv_{A_se}^{cca}(T_se, q_e) + 4 · Adv_{A_hddh}^{hddh}(T_hddh) + 2 · Adv_{A_mac}^{cma}(T_mac, q_t, q_v) + 2 · Adv_{A_ddh}^{ddh}(T_ddh) + q_s/|D|,

where |D| is the size of the password space, p is the prime order of a cyclic group G, T_se = T_hddh = T_ddh ≤ T + q_s(τ_G + τ_E), T_mac ≤ T + τ_T(q_t + q_v), and τ_G, τ_E, τ_T are the computational times for an exponentiation, an ideal encryption E, and a message authentication code, respectively. In the proof above, we do not consider forward secrecy and the malicious server attack [10] for simplicity. Next we show that the EC2C-PAKE protocol satisfies these two security concerns.

Theorem 2.2. The proposed EC2C-PAKE protocol P provides forward secrecy under the computational Diffie-Hellman assumption.

Theorem 2.3. The proposed EC2C-PAKE protocol P is secure against the malicious server attack under the computational Diffie-Hellman assumption.

Due to the limited space, the full proofs of the above theorems will be presented in the full paper [9].

3.1 Adding Mutual Authentication

In this paper, we do not explicitly consider the security of mutual authentication. However, mutual authentication in our protocol can easily be achieved by using the additional authenticator structure described in [2]. The authenticator is computed as the hash of the keying materials and the client's ID. We describe only the mutual authentication part below. We let sk' and sk be H(ID_A||ID_B||g^a||g^b||g^ab) and H(sk'||0), respectively. Bob first sends H(sk'||1) as an authenticator in the last stage of EC2C-PAKE. Upon receiving the message, Alice confirms the authenticator by using sk' and makes H(sk'||2). Alice sends this back to Bob. If the confirmation processes are successful, Alice and Bob generate a common session key sk = H(sk'||0).
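A minimal sketch of this authenticator exchange (our illustration, with SHA-256 standing in for H and a placeholder string for the keying materials):

import hashlib

def H(msg):
    return hashlib.sha256(msg.encode()).hexdigest()

sk_prime = H("IDA||IDB||g^a||g^b||g^ab")   # keying material shared by Alice and Bob
auth_bob = H(sk_prime + "||1")             # Bob -> Alice
auth_alice = H(sk_prime + "||2")           # Alice -> Bob, after checking auth_bob
sk = H(sk_prime + "||0")                   # session key accepted by both sides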

4 Conclusion and Further Research

In this paper we proposed an efficient C2C-PAKE protocol. The proposed EC2C-PAKE is the first formally treated and provably secure scheme in the DPWA model. It may be worthwhile to design a generic construction of C2C-PAKE in the cross-realm setting. Such protocols could be constructed generically from two-party password-authenticated key exchange and key distribution protocols and proved secure generically as well. In such a general construction, we could avoid the random oracle and ideal cipher models by using existing efficient protocols based on standard assumptions.


References 1. M. Abdalla, P. Fouque, and D. Pointcheval, “Password-Based Authenticated Key Exchange in the Three-Party Setting”, In Proceedings of PKC 2005, LNCS Vol. 3386, pp. 65-84, Springer-Verlag, 2005. 2. M. Bellare, D. Pointcheval, and P. Rogaway, “Authenticated key exchange secure against dictionary attacks”, In Proceedings of Eurocrypt’00, LNCS Vol.1807, pp. 139-155, Springer-Verlag, 2000. 3. S. Bellovin and M. Merrit, “Encrypted key exchange: password based protocols secure against dictionary attacks”, In Proceedings of the Symposium on Security and Privacy, pp.72-84, IEEE, 1992. 4. E. Bresson, O. Chevassut, and D. Pointcheval, “Group diffie-hellman key exchange secure against dictionary attacks”, In Proceedings of Asiacrypt’02, LNCS Vol. 2501, pp. 497-514, Springer-Verlag, 2002. 5. V. Boyko, P. MacKenzie, and S. Patel, “Provably secure password-authenticated key exchange using diffie-hellman”, In Proceedings of Eurocrypt’00, LNCS Vol. 1807, pp. 156-171, Springer-Verlag, 2000 6. J. W. Byun, I. R. Jeong, D. H. Lee, and C. Park, “Password-authenticated key exchange between clients with different passwords”, In Proceedings of ICICS’02, LNCS Vol. 2513, pp. 134-146, Springer-Verlag, 2002. 7. J. W. Byun and D. H. Lee, “N-party Encrypted Diffie-Hellman Key Exchange Using Different Passwords”, In Proc. of ACNS05’, LNCS Vol. 3531, page 75-90, Springer-Verlag, 2005. 8. J. W. Byun, D. H. Lee, and J. Lim “Password-Based Group Key Exchange Secure Against Insider Guessing Attacks”, In Proc. of CIS 05’, LNAI Vol. 3802, page 143-148, Springer-Verlag, 2005. 9. J. W. Byun, D. H. Lee, and J. Lim, “Efficient and Provably Secure Clientto-Client Password-based Key Exchange Protocol”, A full paper is availabe at http://cist.korea.ac.kr or http://cist.korea.ac.kr/∼byunstar. 10. L. Chen, “A weakness of the password-authenticated key agreement between clients with different passwords scheme”, ISO/IEC JTC 1/SC27 N3716. 11. J. Katz, R. Ostrovsky, and M. Yung, “Efficient password-authenticated key exchange using uuman-memorable passwords”, In Proceedings of Eurocrypt’01, LNCS Vol. 2045, pp. 475-494, Springer-Verlag, 2001. 12. J. Kim, S. Kim, J. Kwak, and D. Won, “Cryptoanalysis and improvements of password authenticated key exchange scheme between clients with different passwords”, In Proceedings of ICCSA 2004, LNCS Vol. 3044, pp. 895-902, Springer-Verlag, 2004. 13. J. H. Koo and D. H. Lee , “Secure Password Pocket for Distributed Web Services”, In Proc. of NPC 2005, LNCS Vol. 3779, pp. 327-334, Springer-Verlag, 2005. 14. C. Lin, H. Sun, and T. Hwang, “Three-party encrypted key exchange: attacks and a solution”, ACM Operating Systems Review, Vol. 34, No. 4, pp. 12-20, 2000. 15. C. Lin, H. Sun, M. Steiner, and T. Hwang, “Three-party encrypted key exchange without server public-keys”, IEEE Communications Letters, Vol. 5, No. 12, pp. 497-499, IEEE Press, 2001. 16. R. C.-W. Phan and B. Goi, “Cryptanalysis of an Improved Client-to-Client Password-Authenticated Key Exchange (C2C-PAKE) Scheme ”, In Proceedings of ACNS 2005, LNCS Vol. 3531, pp. 33-39, Springer-Verlag, 2005. 17. M. Steiner, G. Tsudik, and M. Waider, “Refinement and extension of encrypted key exchange”, In ACM Operation Sys. Review, Vol. 29, No. 3, pp. 22-30, 1995.


18. S. Wang, J. Wang, and M. Xu, “Weakness of a password-authenticated key exchange protocol between clients with different passwords”, In Proceedings of ACNS 2004, LNCS Vol. 3089, pp. 414-425, Springer-Verlag, 2004. 19. T. Wu, “Secure remote password protocol”, In Proceedings of the Internet Society Network and Distributed System Security Symposium, pp. 97-111, 1998.

Effective Criteria for Web Page Changes* Shin Young Kwon1, Sang Ho Lee1, and Sung Jin Kim2 1 School of Computing, Soongsil University, 1-1 Sangdo-dong Dongjak-gu Seoul 156-743, Korea {sykwon, shlee}@comp.ssu.ac.kr 2 School of Computer Science and Engineering, Seoul National University, San 56-1 Shinlim-dong Kwanak-gu Seoul 151-744, Korea [email protected]

Abstract. A number of similarity metrics have been used to measure the degree of web page changes in the literature. In this paper, we define criteria for web page changes to evaluate the effectiveness of the metrics. Using real web pages and synthesized pages, we analyze the five existing metrics (i.e., the byte-wise comparison, the TF·IDF cosine distance, the word distance, the edit distance, and the shingling) under the proposed criteria. The analysis result can help users select an appropriate metric for particular web applications.

1 Introduction

In many web applications, administrators create and manage web databases (collections of web pages). For example, web search services such as Google or Yahoo create their web databases and allow users to search them. As web pages change dynamically, web databases become obsolete. Since the changed web pages in the databases need to be updated, administrators would like to know how significantly the contents of web pages change. A number of similarity metrics for textual data have been used to measure the degree of web page changes. The simplest way to see if a web page has changed is to compare web pages at the byte level, which is used in [1, 3, 5]. Ntoulas et al. [7] used the TF·IDF cosine distance and the word distance. Lim et al. [6] used a metric based on the edit distance. Broder et al. [2] and Fetterly et al. [4] used the shingling metric. These metrics often represent the same change of web pages differently, so users may have difficulty selecting an appropriate metric for their specific applications. To the best of our knowledge, there have been no research activities that intensively compare (or evaluate) the existing metrics in terms of web page changes. We propose criteria for web page changes in order to evaluate existing similarity metrics. In the criteria, the changes of web pages are classified into six types (namely, "add", "drop", "copy", "shrink", "replace", and "move"). We believe that the six types represent common changes on the web. We conducted two kinds of experiments. The goal of the first experiment is to show how differently the existing metrics represent the same change of web pages.

* This work was supported by Korea Research Foundation Grant (KRF-2004-005-D00172).


The goal of the second experiment is to evaluate the effectiveness of the existing metrics, with synthesized data, under the proposed criteria. Interesting experimental results are presented. This paper is organized as follows: in Section 2, the change types of web pages are defined and the criteria for web page changes are described; in Section 3, the experimental results are reported; Section 4 contains the closing remarks.

2 Criteria for Evaluating the Metrics

Prior to defining the criteria, we introduce the six change types of web pages. Suppose that a web page p changes to a new page p'. First, when new (i.e., not existing on p) words are inserted into p, we say that an "add" change takes place. Second, when old (i.e., already existing on p) words are inserted into p, we say that a "copy" change takes place. Third, when unique (i.e., occurring only once on p) words are deleted from p, we say that a "drop" change takes place. Fourth, when duplicate (i.e., occurring more than once on p) words are deleted from p, we say that a "shrink" change takes place. In this case, the deleted words still exist on p'. Fifth, when words on p are substituted by different words, we say that a "replace" change takes place. Finally, when the positions of words on p change, we say that a "move" change takes place. Examples of the six types are given in Fig. 1.

Fig. 1. Six types of web page changes, illustrated on the page "w1 w2 w2 w3 w4": (a) Add → w1 w2 w2 w5 w3 w4; (b) Copy → w1 w2 w2 w3 w4 w3; (c) Drop → w1 w2 w2 w4; (d) Shrink → w1 w2 w3 w4; (e) Replace → w1 w2 w2 w5 w4; (f) Move → w1 w2 w2 w4 w3

Now, let us define the criteria on the six change types. Let n and x denote the number of words on p and the number of changed words, respectively.

Fig. 2. Criteria on "Add" and "Copy"


The criterion on the "add" change is defined as (x / (n+x)), as illustrated in Fig. 2(a). For example, when n words are added to p with n words, the change degree is 0.5 (= n / (n+n)). The criterion on the "copy" change is defined as (αx / (n+x)), as illustrated in Fig. 2(b). The parameter α, which ranges from 0 to 1, denotes the user-defined weight of the "copy" change against the "add" change. As a user considers the "copy" change more significant (or more trivial), α becomes higher (or lower, respectively). For example, if a user considers the effect of adding one word to be equivalent to the effect of copying two words, α should be set to 1/2. If a user considers the effect of adding one word to be equivalent to the effect of copying three words, α should be set to 1/3.

Fig. 3. Criteria on "Drop" and "Shrink"

The criterion on the "drop" change is defined as (x / n), as in Fig. 3(a). For example, when n words are dropped from p with n words, the degree of change is one (= n / n). The criterion on the "shrink" change is defined as (αx / n), as in Fig. 3(b). The parameter α, which is defined before, is used to denote the user-defined weight of the "shrink" change against the "drop" change. m denotes the maximum number of duplicate words on p.

Fig. 4. Criteria on "Replace" and "Move"

The criterion on the "replace" change is defined as (x / n), as shown in Fig. 4(a). For example, when n words on p with n words are replaced by other words, the degree of change is one (= n / n). The criterion on the "move" change is defined as (βx / n),


as in Fig. 4(b). The parameter β, which ranges from 0 to 1, denotes the user-defined weight of the "move" change against the "replace" change. As a user considers the "move" change more significant (or more trivial), β becomes higher (or lower, respectively). (n−1) is the maximum number of movable words on a page.
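The six criteria can be summarized in a few lines of Python (our own restatement of the definitions above; alpha and beta are the user-defined weights, n the number of words on the page, and x the number of changed words):

def change_criterion(kind, n, x, alpha=0.75, beta=0.75):
    # Degree of change of a page with n words when x words are affected.
    if kind == "add":
        return x / (n + x)
    if kind == "copy":
        return alpha * x / (n + x)
    if kind in ("drop", "replace"):
        return x / n
    if kind == "shrink":
        return alpha * x / n
    if kind == "move":
        return beta * x / n
    raise ValueError("unknown change type: " + kind)

# Examples from the text: adding n words to an n-word page gives 0.5;
# dropping or replacing all n words gives 1.0.
assert change_criterion("add", 100, 100) == 0.5
assert change_criterion("replace", 100, 100) == 1.0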

3 Experiments

We have conducted two experiments. First, using real web pages, we show how differently the existing metrics measure the degree of web page changes. Next, we show the effectiveness of the metrics under the proposed criteria. We compare the following five metrics: the byte-wise comparison (in short, BW), the TF·IDF cosine distance (COS), the word distance (WD), the edit distance (ED), and the 10-shingling (10SH). Markups of web pages were excluded in the experiments, as done in the literature [2, 4, 6, 7]. In the first experiment, we crawled real Korean web pages in August 2005 and randomly selected 41,469 pages among them. The web pages were downloaded twice at a two-day interval. Fig. 5 shows how differently 10SH, COS, and ED (the other metrics are omitted due to space limits) respond to the same changes of web pages. The identifiers of web pages are sorted in ascending order of 10SH and COS in Fig. 5(a) and 5(b) respectively, in order to clearly visualize the difference between the metrics. As the figure shows, each metric returns different values on almost all the pages. Sometimes, the difference is as large as 0.92. This experiment implies that the measured degree of web page change is heavily dependent on which metric is used. Users need to select an appropriate metric when they precisely measure the degree of web page changes; otherwise they would misunderstand web page changes. These experimental results motivated our study.
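For reference, here is a compact Python sketch of two of the compared metrics, the w-shingling distance and a cosine distance over term frequencies (our own simplified versions; the paper's COS additionally weights terms by IDF, which would require corpus statistics, and the inputs words_a and words_b are assumed word lists):

from collections import Counter
import math

def shingle_distance(words_a, words_b, w=10):
    # 1 - resemblance of the sets of w-word shingles (10SH for w=10).
    sh_a = {tuple(words_a[i:i + w]) for i in range(max(1, len(words_a) - w + 1))}
    sh_b = {tuple(words_b[i:i + w]) for i in range(max(1, len(words_b) - w + 1))}
    return 1.0 - len(sh_a & sh_b) / len(sh_a | sh_b)

def cosine_distance(words_a, words_b):
    # 1 - cosine similarity of raw term-frequency vectors.
    tf_a, tf_b = Counter(words_a), Counter(words_b)
    dot = sum(tf_a[t] * tf_b[t] for t in tf_a)
    norm = (math.sqrt(sum(v * v for v in tf_a.values()))
            * math.sqrt(sum(v * v for v in tf_b.values())))
    return 1.0 - dot / norm if norm else 1.0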

Fig. 5. Various metrics: (a) 10SH vs. COS; (b) COS vs. ED

The second experiment was done with synthesized pages reflecting the characteristics of the web. We evaluated the metrics when various numbers of words on a page with 1,000 words were changed. Web pages with 1,000 words occupy about 25% of the web [4]. The changed words were clustered (i.e., not distributed) on the pages, because the changes of real web pages are generally clustered [8]. Both parameters α and β in the criteria were set to 0.75. A metric is defined to be effective if the results of the metric are close to those of the criteria. In addition, if the results of a metric are always higher (or lower) than those of the criteria, we say that the metric is oversensitive (or undersensitive, respectively).


In Fig. 6, 10SH is effective for the "add" and "drop" changes, but is oversensitive for the other changes. If α in the criteria were set to one, the metric would be effective for the "copy" and "shrink" changes. In our experiment, COS is always undersensitive. COS returns very low values for the "copy" and "shrink" changes, which implies that COS treats the "copy" and "shrink" changes as minor. COS and WD always return zero on the "move" change, because these metrics do not consider changes of word order at all. WD is effective for the "replace" change but is undersensitive for the other changes. If α in the criteria were set to 0.5, the metric would be effective for the "copy" change. In other words, WD would be the right choice for users who consider the effect of adding one word to be equivalent to the effect of copying two words. ED works similarly to WD, except for the "move" change. ED treats the "move" change and the "replace" change identically.

Fig. 6. Comparison of metrics: (a) Add; (b) Copy; (c) Drop; (d) Shrink; (e) Replace; (f) Move


We also evaluated the metrics when only one word changes on pages of various sizes. Each page consists of 2^2, 2^3, 2^4, ..., or 2^13 words. Web pages with 2^2 to 2^13 words occupy about 95% of the web [6]. The x-axis in Fig. 7 represents the number of words on a page before the change. We show only two cases due to the space limit. Note that 10SH becomes more oversensitive as web pages become smaller. The other metrics are not sensitive to the page size in most cases.

Fig. 7. Sensitivity versus page size: (a) Add; (b) Replace

4 Closing Remarks

In this paper, we classified the changes of web pages into six types and defined a criterion for each type. Under the criteria, we evaluated the effectiveness of five existing metrics. Our study presents how significantly the metrics consider each change type as well as which metric is effective on each change type. We believe that this study is the first attempt to evaluate the metrics and could be used as a guideline for selecting an appropriate metric for measuring the degree of web page changes. As future work, we plan to develop a new metric that models the criteria well.

References 1. Brewington, B. E., Cybenko, G.: How Dynamic is the Web? the 9th International World Wide Web Conference (2000) 257-276 2. Broder, A. Z., Glassman, S. C., Manasse, M. S., Zweig, G.: Syntactic Clustering of the Web. Computer Networks and ISDN Systems, Vol. 29, No. 8-13 (1997) 1157-1166 3. Cho, J., Garcia-Molina, H.: The Evolution of the Web and Implications for an Incremental Crawler. the 26th International Conference on Very Large Data Bases (2000) 200-209 4. Fetterly, D., Manasse, M., Najork, M., Wiener, J. L.: A Large-Scale Study of the Evolution of Web Pages. Software: Practice & Experience, Vol. 34, No. 2 (2003) 213-237 5. Kim, S. J., Lee, S. H.: An Empirical Study on the Change of Web Pages. the 7th Asia Pacific Web Conference (2005) 632-642 6. Lim, L., Wang, M., Padmanabhan, S., Vitter, J. S., Agarwal, R.: Characterizing Web Document Change. the 2nd International Conference on Advances in Web-Age Information Management (2001) 133-144 7. Ntoulas, A., Cho, J., Olston, C.: What’s New on the Web? The Evolution of the Web from a Search Engine Perspective. the 13th International World Wide Web Conference (2004) 1-12

WordRank-Based Lexical Signatures for Finding Lost or Related Web Pages Xiaojun Wan and Jianwu Yang Institute of Computer Science and Technology, Peking University, Beijing 100871, China {wanxiaojun, yangjianwu}@icst.pku.edu.cn

Abstract. A lexical signature of a web page consists of several key words carefully chosen from the web page and is used to generate a robust hyperlink that can find the web page when its URL fails. In this paper, we propose a novel method based on WordRank to compute lexical signatures, which takes into account the semantic relatedness between words and chooses the most representative and salient words as the lexical signature. Experiments show that DF-based lexical signatures are best at uniquely identifying web pages, hybrid lexical signatures are good candidates for retrieving the desired web pages, and WordRank-based lexical signatures are best for retrieving highly relevant web pages when the desired web page cannot be extracted.

1 Introduction

The World Wide Web evolves dynamically through constantly adding, deleting, moving, or otherwise changing web pages and hyperlinks, which leads to the problem of broken (unresolvable) hyperlinks. A typical example is that many URL citations in research papers become invalid as early as a year or two after publication [2]. How to find the desired web page when its URL is broken is a challenging problem. Among the approaches proposed to address this problem, robust hyperlinks [7] offer a high probability of being successfully resolved even after the target page has made an unannounced move and left no forwarding address, and even if the page has been edited as well. A robust hyperlink is a URL augmented with a small lexical signature consisting of carefully chosen words taken from the referenced web page. For example, the lexical signature for the web page referred to by "http://www.britishhorrorfilms.co.uk/rillington.shtml" may be "rillington+geeson+Christie+constable+hurt". If the address-based portion of the URL fails, this content-based signature can be submitted as a query to web search engines to locate the web page. Robust hyperlinks can be computed cheaply, exploited automatically and conveniently, and understood easily. The performance of robust hyperlinks relies heavily on the computation of lexical signatures. As stated in [4, 5], lexical signatures should easily extract the desired web page and be useful enough to find relevant information when the precise web pages being searched for are lost. Moreover, lexical signatures should have minimal search engine dependency, and new lexical signatures should have minimal overlap with existing lexical signatures.


A number of basic and hybrid methods are empirically examined in [4, 5], where basic lexical signatures are computed using a single metric and hybrid signatures combine words generated by two different basic methods. The four basic and four hybrid lexical signature methods of [4, 5] are as follows (a small code sketch of these selection rules appears after the lists).

Basic Lexical Signature Methods

1. TF: Select words in decreasing term frequency (TF) order. If there is a tie, then pick words based on increasing document frequency (DF). If tied again, randomly select the words.
2. DF: Select words in increasing DF order. If there is a tie, then pick words based on decreasing TF order. If tied again, randomly select the words.
3. TFIDF: Select words in decreasing term-frequency inverse-document-frequency (TFIDF) order. If there is a tie, then pick words based on increasing DF order. If tied again, randomly select the words.
4. PW: Select words based on Phelps and Wilensky's method [7], i.e. decreasing TFIDF order where the TF term is capped at five. If there is a tie, then pick words based on increasing DF order. If tied again, randomly select the words.

Hybrid Lexical Signature Methods

1. TF3DF2: Select two words in increasing DF order. Then filter out all words which have DF value one. Select three words maximizing TF.
2. TF4DF1: Select one word based on increasing DF order first. Then filter out all words which have DF value one. Select four words maximizing TF.
3. TFIDF3DF2: Select two words based on increasing DF order first. Then filter out all words which have DF value one. Select three words maximizing TFIDF.
4. TFIDF4DF1: Select one word based on increasing DF order first. Then filter out all words which have DF value one. Select four words maximizing TFIDF.

In hybrid lexical signatures, the first part is useful for finding relevant web pages and the second part for uniquely identifying the desired web page. Note that we only examine five-word lexical signatures in this study. In the above lexical signature computation methods, different words in a web page are usually assumed to be independent. In fact, different words can express the same or similar meanings due to synonymy; an example is the pair of words "cat" and "feline". In this study, we propose a WordRank-based method to compute lexical signatures, which takes into account the semantic relatedness between words and chooses the most representative and salient words as the lexical signature. Experiments on the web show that WordRank-based lexical signatures are best for retrieving highly relevant web pages when the desired web page cannot be extracted.
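The following Python sketch (ours, not from [4, 5]) shows how such signatures can be computed from per-word statistics; the dictionaries tf and df, the collection size n_docs, and the choice of second metric are assumed inputs, and random tie-breaking is left out for brevity.

import math

def basic_signature(tf, df, n_docs, method="TFIDF", k=5):
    # Rank words by the chosen basic metric; ties are broken as described above.
    if method == "TF":
        key = lambda w: (-tf[w], df[w])
    elif method == "DF":
        key = lambda w: (df[w], -tf[w])
    else:  # "TFIDF" (and "PW", if tf is first capped at five)
        key = lambda w: (-tf[w] * math.log(n_docs / df[w]), df[w])
    return sorted(tf, key=key)[:k]

def hybrid_signature(tf, df, n_docs, n_df=2, k=5, second="TF"):
    # E.g. TF3DF2 / TFIDF3DF2: pick n_df words by increasing DF, discard all
    # DF-1 words, then fill the rest of the signature with the second metric.
    rare = sorted(tf, key=lambda w: (df[w], -tf[w]))[:n_df]
    rest = {w: tf[w] for w in tf if w not in rare and df[w] > 1}
    return rare + basic_signature(rest, df, n_docs, method=second, k=k - n_df)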

2 The WordRank-Based Lexical Signatures

The WordRank method is inspired by the TextRank method [3]; it takes into account the semantic relatedness between words and chooses the most representative and salient words as the lexical signature.


The semantic relatedness between words is computed with the vector measure [6] based on the electronic lexical database WordNet [1]. The basic idea underlying the WordRank method is the mutual reinforcement principle widely employed in many graph-based algorithms, e.g. PageRank, HITS and Positional Function. In the graph, when one vertex links to another one, it is basically casting a weighted vote for that other vertex. The higher the total weight of votes that are cast for a vertex, the higher the importance of the vertex. Moreover, the importance of the vertex casting the vote determines how important the vote itself is, and this information is also taken into account by the ranking model. The WordRank method goes as follows. First we build a graph that represents the text and interconnects words with meaningful relations to enable the application of graph-based ranking algorithms. We denote the graph as G=(V,E) with the set of vertices V and set of edges E. V contains all words in the text except stop words. The edges in E are undirected and weighted: two vertices are connected if their corresponding words are semantically related, and the semantic similarity value is assigned as the weight of the corresponding edge. Each vertex in the constructed graph is assigned an initial score equal to the normalized tf*idf value of the corresponding word. The following ranking algorithm is then run on the graph for several iterations until it converges:

    WS^(k+1)(V_i) = WS^0(V_i) + d * Σ_{V_j ∈ neighbor(V_i)} [ w_ji / Σ_{V_k ∈ neighbor(V_j)} w_jk ] * WS^k(V_j),

where WS^k(V_i) represents the score of vertex V_i at iteration k, WS^0(V_i) represents the initial score of vertex V_i, and w_ji represents the weight of the edge between vertices V_j and V_i. neighbor(V_i) is the set of vertices that share an edge with V_i, and d is a damping factor, usually set to 0.85. Convergence is achieved when the error rate for every vertex in the graph falls below a given threshold. The error rate of a vertex V_i is defined as the difference between the "real" score of the vertex WS(V_i) and the score computed at iteration k, WS^k(V_i). Since the real score is not known a priori, this error rate is approximated by the difference between the scores computed at two successive iterations: WS^(k+1)(V_i) − WS^k(V_i). The convergence threshold is 0.0001 in our experiments. Once a final score is obtained for each vertex in the graph, vertices are sorted in decreasing order of their score. The WordRank-based method is then defined as follows:

WordRank: Select words in decreasing order of the WordRank score calculated above. If there is a tie, then pick words based on increasing document frequency (DF). If tied again, randomly select the words.

In order to better identify the desired web page uniquely, two new hybrid lexical signature methods combining DF are defined as follows:

WordRank3DF2: Select two words based on increasing DF order first. Then filter out all words which have DF value one. Select three words maximizing the WordRank score.


WordRank4DF1: Select one word based on increasing DF order first. Then filter out all words which have DF value one. Select four words maximizing WordRank score.
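A minimal sketch of the WordRank iteration (our own illustration; the WordNet-based edge weights and the normalized tf*idf initial scores are assumed to be given as inputs):

def wordrank(weights, init, d=0.85, eps=1e-4):
    # weights[(u, v)] = semantic similarity of words u and v (undirected edge);
    # init[v] = normalized tf*idf score of word v.
    nodes = list(init)
    nbrs = {v: {} for v in nodes}
    for (u, v), w in weights.items():
        nbrs[u][v] = w
        nbrs[v][u] = w
    ws = dict(init)
    while True:
        new = {}
        for v in nodes:
            vote = sum(w / sum(nbrs[u].values()) * ws[u]
                       for u, w in nbrs[v].items())
            new[v] = init[v] + d * vote
        if max(abs(new[v] - ws[v]) for v in nodes) < eps:
            return new
        ws = new

# Toy example: "cat" and "feline" are strongly related, so they reinforce
# each other's scores; words are then sorted by the returned scores.
scores = wordrank({("cat", "feline"): 0.9, ("cat", "page"): 0.1},
                  {"cat": 0.5, "feline": 0.3, "page": 0.2})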

3 Experiments and Results

The experiments are based on two popular search engines: Google and MSN Search. We randomly extracted 2000 URLs from the open directory of DMOZ.org and downloaded the corresponding web pages. URLs that could not be downloaded, and URLs whose file name extension indicated that the page was not HTML (e.g., pdf, ps, doc, etc.), were excluded. After removing HTML tags, some web pages contained no words or only a few words. After removing stop words from the word tokens, we excluded all documents that contained fewer than five unique words. In the end, 1337 documents were retained. For each web document, eleven lexical signatures consisting of five words were computed based on the different methods. Then we used the lexical signatures as queries for the two search engines, Google and MSN. If a search engine did not return any documents, we removed the word with the lowest DF from the given lexical signature and re-queried the search engine. This procedure continued until the search engine returned documents or all of the words in the given lexical signature had been removed. After the search engine returned the list of documents, only the top ten documents were downloaded, and the similarity between the returned documents and the target document was calculated using the cosine measure. Our first concern is whether or not the desired document is returned and its position in the list of returned documents: the higher the desired document is ranked in the returned list, the better the lexical signature is. As in [4, 5], four disjoint classes are defined as follows. Unique represents the percentage of lexical signatures that successfully extract and return the single desired document. Top represents the percentage of lexical signatures that extract a list of documents with the desired document ranked first. High represents the percentage of lexical signatures that successfully return a list containing the desired document, not ranked first but among the top ten. Other represents the percentage of lexical signatures that failed to extract the desired document. Note that these classes together cover 100% of all cases. Figures 1 and 2 show the desired-document retrieval performance, and we can see that DF-based lexical signatures are most efficient for uniquely extracting the desired documents. In practice, however, lexical signatures are also effective if the desired documents are highly ranked in the returned list. So we consider the case where Unique, Top and High are combined; then the hybrid methods with two words chosen by DF (i.e. TF3DF2, TFIDF3DF2 and WordRank3DF2) are most efficient, and WordRank3DF2 is a little more efficient than TF3DF2 and TFIDF3DF2. Our second concern is whether or not the lexical signature can find a related document if the desired document cannot be extracted.


By our analysis, 19% of all 1337 documents could not be retrieved from Google using any of the lexical signature methods, and 24% from MSN. The possible reasons are the following: the desired documents are not yet indexed by the search engines; they have been moved, deleted, modified or updated; they contain very few unique words; the words in the lexical signatures are not indexed; etc. In these situations, lexical signatures should be expected to extract highly relevant web documents. We analyze the average cosine similarity between the top ten retrieved documents and the desired document. Figure 3 shows the average cosine values of the top ten documents in the Not Retrieved Documents class, and Figure 4 shows the average cosine values of the top ten documents for all 1337 documents. As seen from Figures 3 and 4, DF and PW yield smaller average similarity values, while WordRank and its hybrid methods yield larger average similarity values, both for Not Retrieved Documents and for all 1337 documents. This demonstrates that the WordRank-based lexical signatures can retrieve documents that are more relevant to the desired document.

Fig. 1. Retrieval performance of lexical signatures for Google (Unique / Top / High / Other shares per method)

Fig. 2. Retrieval performance of lexical signatures for MSN (Unique / Top / High / Other shares per method)


Fig. 3. Average cosine value of top ten documents for Not Retrieved Documents (Google and MSN, per method)

Fig. 4. Average cosine value of top ten documents for all 1337 documents (Google and MSN, per method)

To summarize, DF is the best method for uniquely identifying the desired documents; TF is easy to compute and does not need to be updated unless the documents are modified; TFIDF and the hybrid methods combining TFIDF and DF are good candidates for extracting the desired documents; WordRank-based methods are best for retrieving highly relevant documents.

References 1. Fellbaum, C.: WordNet: An Electronic Lexical Database. The MIT Press (1998) 2. Lawrence, S., Pennock, D. M., Flake, G., Krovetz, R., Coetzee, F. M., Glover, E., Nielsen F. A., Kruger, A., and Giles, C. L.: Persistence of Web references in scientific research. IEEE Computer 34-2 (2001) 26-31 3. Mihalcea, R., Tarau, P.: TextRank: bringing order into texts. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (2004).


4. Park, S. T., Pennock, D. M., Giles, C. L., and Krovetz R.: Analysis of lexical signatures for finding lost or related documents. In Proceedings of SIGIR’02 (2002) 5. Park, S. T., Pennock, D. M., Giles, C. L., and Krovetz R.: Analysis of lexical signatures for improving information persistence on the World Wide Web. ACM Transactions on Information Systems 22-4 (2004) 540-572 6. Patwardhan, S.: Incorporating dictionary and corpus information into a context vector measure of semantic relatedness. Master’s thesis, Univ. of Minnesota, Duluth (2003) 7. Phelps, T. A., and Wilensky, R.: Robust hyperlinks: cheap, everywhere, now. In Proceedings of Digital Documents and Electronic Publishing 2000 (DDEP00) (2000)

A Scalable Update Management Mechanism for Query Result Caching Systems at Database-Driven Web Sites Seunglak Choi2 , Sekyung Huh1 , Su Myeon Kim1 , Junehwa Song1 , and Yoon-Joon Lee1 1

KAIST, 373-1 Kusong-dong Yusong-gu Daejeon 305-701, South Korea {skhuh, yjlee}@dbserver.kaist.ac.kr, {smkim, junesong}@nclab.kaist.ac.kr 2 ETRI, 161 Gajeong-dong Yusong-gu Daejeon 305-350, South Korea [email protected]

Abstract. A key problem in using caching technology for dynamic contents lies in update management. An update management scheme should be very efficient without imposing much extra burden on the system, especially on the original database server. We propose a scalable update management mechanism for query result caching in database-backed Web sites. Our mechanism employs a two-phase consistency checking method, which prunes out unaffected queries at the earliest possible moment. The method scales well with a high number of cached instances.

1 Introduction

Caching technology has been frequently used to improve the performance of serving dynamic contents at Web sites. The key problem in using caching technology for dynamic contents lies in update management; that is, cached contents must be kept consistent with the original data in the databases. Thus, an effective update management mechanism is of utmost importance for dynamic content caching. Moreover, an update management scheme should be very efficient without imposing much extra burden on the system, especially on the origin database server. Note that the database server can easily become a bottleneck for the overall Web site's performance. Thus, if the scheme is not efficient, the advantage of using the cache will be significantly impaired by the extra overhead needed to keep the cached data fresh. In this paper, we propose an efficient update management mechanism for dynamic content caching, more specifically, for query result caching [5,6,8,2] in database-driven Web sites. The idea of query result caching is to store the results of frequently issued queries and reuse them to obtain the results of subsequent queries, significantly saving the computational cost of processing queries and retrieving results. Our method, upon reception of an update request, instantly processes the update and invalidates affected query results in the cache. In doing so, the cache initiates and takes charge of the update management process, and minimizes


the involvement of the database server. In other reported update management schemes [4,3,1], the servers are heavily responsible for the overall update process. In addition, our mechanism employs a two-phase consistency checking method in which the expensive part, i.e., join checking, is performed only once for a group of queries. In this fashion, the method prunes out unaffected queries at the earliest possible moment. Thus, the method scales well with a high number of cached instances; the number of query instances can be very high, especially for range queries. This paper is organized as follows. In Section 2, we describe the cache consistency mechanism. In Section 3, we evaluate and analyze the performance of the mechanism. Finally, in Section 4, we present conclusions.

2 Cache Consistency Mechanism

2.1 Query Templates and Query Instances

Before describing our mechanism, we introduce the notions of query templates and query instances. In Web-based applications, a user usually submits a request by using an HTML form. Figure 1 shows a simple example. A user types a search keyword and clicks the submit button in the form. Then, a WAS generates a query from the form and sends it to a database server. The generation of the query follows the encoding in the application, so each HTML form can be translated into a template of queries through that encoding. We call such a template a query template. The queries generated from the same HTML form, i.e., from the same query template, are of the same form: they share the same projection attributes, base tables, and join conditions. The only difference among the queries lies in their selection regions, which are specified by users. We call the individual queries generated from users' requests query instances.

SELECT I_TITLE, I_ID, A_FNAME, A_LNAME FROM ITEM, AUTHOR WHERE I_A_ID = A_ID AND A_LNAME = '@keyword'

Fig. 1. HTML form and its corresponding query form. The HTML form is clipped from the Search Request Web page of the TPC-W benchmark [7].

We characterize a query template QT = (T, PA, JC) as follows. T is the set of tables specified in the FROM clause. PA is the set of projection attributes. JC is the set of join conditions. T(QT), PA(QT), and JC(QT) denote T, PA, and JC of a query template QT, respectively. We define the query group of a template QT as QG(QT) = {Q_i}, where each Q_i is generated from QT. A query instance Q_i is said to be affected by an update if Q_i is modified by the update.
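A possible in-memory representation of query templates and their instance groups (a sketch under our own naming, not WDBAccel's actual data structures; the keyword value 'Fox' is hypothetical):

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class QueryTemplate:
    tables: frozenset                              # T(QT): tables in the FROM clause
    proj_attrs: frozenset                          # PA(QT): projection attributes
    join_conds: Tuple[Tuple[str, str, str], ...]   # JC(QT): (own attr, other table, other attr)

@dataclass
class QueryInstance:
    template: QueryTemplate
    selection: Dict[str, str]                      # user-filled selection region of the WHERE clause

# QG(QT): all cached instances generated from the same HTML form / template.
query_groups: Dict[QueryTemplate, List[QueryInstance]] = {}

# Hypothetical instance of the Fig. 1 template for the keyword 'Fox'.
qt = QueryTemplate(frozenset({"ITEM", "AUTHOR"}),
                   frozenset({"I_TITLE", "I_ID", "A_FNAME", "A_LNAME"}),
                   (("I_A_ID", "AUTHOR", "A_ID"),))
query_groups.setdefault(qt, []).append(QueryInstance(qt, {"A_LNAME": "Fox"}))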


Fig. 2. Processing flow of the consistency check among the Web server & WAS, the Consistency Maintainer (CM), the Read-Query Processor (RQP) of the query result caching system, and the DBMS; steps (1)-(6) are described in Section 2.2

2.2 Architectural Overview

Under the proposed mechanism, a caching system conceptually consists of the Consistency Maintainer (CM) and the Read-Query Processor (RQP) (see Figure 2). CM performs the consistency check to identify query results affected by a given update and invalidates the affected results. RQP is the main component of the caching system, which stores query results and serves read queries. Figure 2 depicts the processing flow of the consistency check. A WAS sends an update query to CM (1). CM forwards the update to the origin database server (2). In order to find the templates which can include query instances affected by the update, CM investigates the query template information kept in RQP (3) and sends the database server a join check query (4), which will be discussed in detail in Section 2.3. Once the templates are determined, CM finds the affected query instances in those templates (5) and removes them from the cache (6).

2.3 Two-Phase Consistency Check

The consistency check tests whether there exist any query instances that are affected by an update. This involves repeated matching of each query instance against a given update, and thus incurs serious computation overhead. We observe that many computation steps are repeated in testing different instances. To avoid such repetition, we propose a two-phase consistency checking mechanism. We note that different query instances generated from the same query template differ only in their selection regions. Thus, during the first step, called the template check, we match the query template against the update and identify whether it is possible that any of the query instances from the template are affected by the update. Then, during the second step, called the instance check, individual instances are matched against the update.

Template Check. The following three conditions are satisfied if U affects any query instances in QG(QT).

1. If the set of attributes modified by U intersects PA(QT). Note that, for INSERT and DELETE, this condition is always true: these queries insert or delete an entire tuple.


2. If the table on which U is performed is included in T(QT).
3. If one or more tuples newly inserted by U satisfy JC(QT). If QT has join conditions, the query instances generated from QT include only joined tuples. Thus, only when the inserted tuples can be joined can U affect the query instances.

Example 1. Let us consider a query template QT, a query instance Q, and an update U as follows, where Q is generated from QT.

QT: T = {ITEM, AUTHOR}, PA = {I_TITLE, I_COST, A_FNAME}, JC = {I_A_ID = A_ID}
Q: SELECT I_TITLE, I_COST, A_FNAME FROM ITEM, AUTHOR WHERE I_A_ID = A_ID AND I_PUBLISHER = 'McGrowHill'
U: INSERT INTO ITEM (I_ID, I_A_ID, I_TITLE, I_PUBLISHER) VALUES (30, 100, 'XML', 'McGrowHill')

Conditions (1) and (2) are easily evaluated as true: U modifies the projection attribute I_TITLE and the table ITEM. For checking condition (3), the cache sends the database server a join check query as shown below. This query examines whether the table AUTHOR has tuples that can be joined with the tuple inserted by U. Because the join attribute value of the inserted tuple is 100, the join check query looks for tuples with A_ID = 100. If the result of the query is not null, we know that the inserted tuple can be joined.

SELECT A_ID FROM AUTHOR WHERE AUTHOR.A_ID = 100

As described in Example 1, the join check requires query processing by the database server. The two-phase consistency check performs the join check for each query template, not for each query instance. Thus, it dramatically decreases the overhead on the database server.

Instance Check. Once a template has passed the template check, the instance check is applied to it. It finds the affected query instances by comparing the selection region of the update query to those of the query instances. If the selection region of a query instance overlaps that of the update query, the query instance is affected. In Example 1, the query instance Q is affected by the update U because the selection region of Q, I_PUBLISHER = 'McGrowHill', is equal to that of U.
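The template check and instance check might be sketched as follows, reusing the QueryTemplate representation shown earlier (our illustration, not the system's code). The update object is assumed to carry its kind, modified attributes, target table, inserted tuple, and selection region, and run_sql is an assumed helper that queries the origin database and returns the matching rows.

def template_check(qt, update, run_sql):
    # (1) The update touches a projection attribute (always true for INSERT/DELETE).
    if update.kind == "UPDATE" and not (update.modified_attrs & qt.proj_attrs):
        return False
    # (2) The updated table appears in the template's FROM list.
    if update.table not in qt.tables:
        return False
    # (3) For INSERTs into a template with joins: does the new tuple join?
    if update.kind == "INSERT" and qt.join_conds:
        for own_attr, other_table, other_attr in qt.join_conds:
            join_value = update.new_tuple[own_attr]        # e.g. I_A_ID = 100
            check = (f"SELECT {other_attr} FROM {other_table} "
                     f"WHERE {other_attr} = {join_value}")
            if not run_sql(check):                         # join check query returned nothing
                return False
    return True

def instance_check(instance, update):
    # A cached instance is affected if its selection region overlaps the
    # update's (point selections compared by equality in this sketch).
    return all(update.selection.get(attr) == value
               for attr, value in instance.selection.items())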

3 Performance Evaluation

Experimental Setup. We measured the update throughput of a query result caching system that adopts the proposed mechanism. We sent update queries to the caching system. For each update, the caching system forwards the update query and sends join check queries to a database server. Under this situation, the throughput is limited by the database server's capacity to process these queries.


Fig. 3. Experimental setup (query generator, cache server, and database server hosting the TPC-W database)

Figure 3 shows the setup for evaluating the proposed mechanism. The Query Generator emulates a group of Web application servers generating queries. It runs on a machine with a Pentium III 800MHz and 256MB RAM. For the database server, we used Oracle 8i with the default buffer pool size of 16MB. The database server runs on a machine with a Pentium IV 1.5GHz and 512MB RAM. The caching system runs on a machine with a Pentium III 1GHz and 512MB RAM. We implemented the proposed mechanism as a part of the WDBAccel query result caching system [5]. All machines run Linux and are connected through a 100Mbps Ethernet. We populated the database of the TPC-W benchmark in the database server at the 100K scale (in TPC-W, the scale of the database is determined by the cardinality of the ITEM table). The TPC-W benchmark [7] is an industry-standard benchmark for evaluating the performance of database-driven Web sites. We used an update query modified from the one used in the Admin Confirm Web page of TPC-W: INSERT INTO ITEM (I_ID, I_A_ID, I_COST, I_PUBLISHER) VALUES (@I_ID, @I_A_ID, @I_COST, '@I_PUBLISHER'). This query inserts information on a book into ITEM. The values of the attribute I_ID follow a random distribution.

Experimental Results. In order to determine the performance improvement from the two-phase consistency check, we measured the update throughputs of the two-phase check and a brute-force approach. In the brute-force approach, the join check is performed against individual query instances. (Note that in the two-phase check, the join check is performed against a query template.)

Fig. 4. Update throughputs of a single cache node (update throughput in queries/sec versus the number of query instances, 50 to 200, for the two-phase and brute-force approaches)

Figure 4 shows the update throughputs as the number of query instances ranges from 50 to 200. The figure shows that the throughput of the two-phase check remains constant, which means that the two-phase check imposes the same overhead on the database server regardless of how many query instances there are. The overhead of the two-phase check depends only on the number of query templates, since it generates one join check query per query template.

4 Conclusions

In this paper, we proposed a scalable update management mechanism for query result caching systems. We divided the consistency check into two phases, the template check and the instance check. The template check is performed over a query template, not over individual instances. We presented experimental results that verify the high scalability of the mechanism.

References

1. Khalil Amiri, Sara Sprenkle, Renu Tewari, and Sriram Padmanabhan. Exploiting templates to scale consistency maintenance in edge database caches. In Proceedings of the International Workshop on Web Caching and Content Distribution, 2003.
2. Khalil S. Amiri, Sanghyun Park, Renu Tewari, and Sriram Padmanabhan. DBProxy: A self-managing edge-of-network data cache. In 19th IEEE International Conference on Data Engineering, 2003.
3. K. Selcuk Candan, Divyakant Agrawal, Wen-Syan Li, Oliver Po, and Wang-Pin Hsiung. View invalidation for dynamic content caching in multitiered architectures. In Proceedings of the 28th VLDB Conference, 2002.
4. K. Selcuk Candan, Wen-Syan Li, Qiong Luo, Wang-Pin Hsiung, and Divyakant Agrawal. Enabling dynamic content caching for database-driven web sites. In Proceedings of the ACM SIGMOD Conference, Santa Barbara, USA, 2001.
5. Seunglak Choi, Jinwon Lee, Su Myeon Kim, Junehwa Song, and Yoon-Joon Lee. Accelerating database processing at e-commerce sites. In Proceedings of the 5th International Conference on Electronic Commerce and Web Technologies (EC-Web 2004), 2004.
6. Qiong Luo and Jeffrey F. Naughton. Form-based proxy caching for database-backed web sites. In Proceedings of the 27th VLDB Conference, Roma, Italy, 2001.
7. Transaction Processing Performance Council (TPC). TPC Benchmark W (Web Commerce) Specification, Version 1.4. February 7, 2001.
8. Khaled Yagoub, Daniela Florescu, Valerie Issarny, and Patrick Valduriez. Caching strategies for data-intensive web sites. In Proceedings of the 26th VLDB Conference, 2000.

Building Content Clusters Based on Modelling Page Pairs

Christoph Meinel and Long Wang

Hasso Plattner Institut, Potsdam University, 14482 Potsdam, Germany
[email protected]

Abstract. We give a new view on building content clusters from page pair models. We measure the heuristic importance within every two pages by computing the distance of their accessed positions in usage sessions. We also compare our page pair models with the classical pair models used in information theory and natural language processing, and give different evaluation methods for building reasonable content communities. Finally, we interpret the advantages and disadvantages of our models from detailed experiment results.

1 Introduction

In [20], Niesler used the distance between two words acting as a trigger-target pair to model occurrence correlations within a word-category based language model. In this paper, we use "heuristic importance" to depict the importance of one page in attracting visitors to access another page. The heuristic importance is measured by computing the distance of the two pages' access positions in usage sessions. The methods to reconstruct sessions are classified into five different standards [15, 6, 21, 10, 11]. These five standards show the different views on the binary relations of two accessed pages when reconstructing usage sessions. Web usage patterns are mostly defined on association rules, sequential patterns and tree structures [2, 5, 6, 12]. Detailed definitions of different actions performed by visitors are given in [8, 9]. The binary relationship between every two pages in usage patterns is modelled on co-occurrences [2], temporal sequence [2, 5, 6] and structural characteristics [9, 12]. Clustering and classification of users are investigated in [1, 3, 21, 13, 18]. The binary relations between every two pages are computed from conditional probabilities or Markov chains [17], or from content attributes [7]. We define the required terms in Section 2 and give the general clustering method and web site model in Section 3. We explain the evaluation measurements in Section 4 and discuss our experiments in Section 5. Section 6 is a short summary.

2 Problem Statements

A page pair is denoted $Pair(p_{trigger}, p_{target})$. The heuristic importance is denoted $Hr(p_{trigger}, p_{target})$. For a page $p$ in a given session $s$, we use $Pos_p$ to denote the position of page $p$ in this session.


We improve the distance used in [20] to model the heuristic importance from a trigger page to a target page. Within a session $s$, the mutual relation from page $p_{trigger}$ to page $p_{target}$ is defined as:

$$M_s(p_{trigger}, p_{target}) = \frac{\sqrt{Pos_{p_{trigger}} \cdot Pos_{p_{target}}}}{|Pos_{p_{target}} - Pos_{p_{trigger}}| + 1},$$

where $Pos_{p_{trigger}}$ is the position of page $p_{trigger}$ in this session and $Pos_{p_{target}}$ that of page $p_{target}$. Over the session set $S$, the heuristic importance from $p_{trigger}$ to $p_{target}$ in $Pair(p_{trigger}, p_{target})$ is defined as:

$$Hr(p_{trigger}, p_{target}) = \frac{\sum_{i=1}^{n} M_{s_i}(p_{trigger}, p_{target})}{\sum_{i=1}^{n} |s_i|/2}, \quad s_i \in S',$$

where $S'$ is the subset of $S$ that includes all the sessions in which $p_{trigger}$ was accessed before $p_{target}$.

Here we also give the definitions of other methods to model page pairs.

Method 1 (SUP): A page pair is symbolized as two adjacently accessed pages in sessions. Given a session set $S$, $L(S)$ represents the corresponding binary page relation set, and $I(S)$ the set obtained by removing all repeated occurrences of the same binary relations in $L(S)$. The support of the page pair $PA = Pair(p_{trigger}, p_{target})$ in session set $S$ is defined as:

$$sup_{PA} = \frac{|\{PA_i \mid PA_i = PA,\ PA_i \in I(S)\}|}{|L(S)|}.$$

We use this support as the heuristic importance of $p_{trigger}$ to $p_{target}$. This measurement is widely used in web usage mining [1, 2, 5, 6, 8, 12].

Method 2 (IS): A page pair is symbolized as two adjacently accessed pages in sessions. [18] used

$$Hr(p_{trigger}, p_{target}) = \frac{Pr(p_{trigger}, p_{target})}{\sqrt{Pr(p_{trigger}) \cdot Pr(p_{target})}}$$

to compute the heuristic importance of page $p_{trigger}$ to page $p_{target}$. This model is also used for computing mutual information in natural language modelling [14].

Method 3 (CS): The heuristic importance is characterized by the conditional probability $Hr(p_{trigger}, p_{target}) = Pr(p_{target} \mid p_{trigger})$. This measurement is also known as confidence in data mining; together with n-order Markov chains it is widely used in personalized recommendation and adaptive web sites [17].
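The heuristic importance $Hr$ can be computed directly from recovered sessions. The following Python sketch implements $M_s$ and $Hr$ as defined above; it assumes each page occurs at most once per session, which is a simplification.

```python
from collections import defaultdict
from math import sqrt

def heuristic_importance(sessions):
    """Hr(p_trigger, p_target) over a session set S, following Section 2.
    Positions are 1-based; each page is assumed to occur once per session."""
    mutual_sum = defaultdict(float)   # sum of M_s over sessions where the trigger precedes the target
    length_sum = defaultdict(float)   # sum of |s|/2 over the same sessions (the set S')
    for session in sessions:          # a session is an ordered list of page URLs
        pos = {page: idx + 1 for idx, page in enumerate(session)}
        for trigger in pos:
            for target in pos:
                if trigger != target and pos[trigger] < pos[target]:
                    m = sqrt(pos[trigger] * pos[target]) / (abs(pos[target] - pos[trigger]) + 1)
                    mutual_sum[(trigger, target)] += m
                    length_sum[(trigger, target)] += len(session) / 2.0
    return {pair: mutual_sum[pair] / length_sum[pair] for pair in mutual_sum}

# Hr is asymmetric, as required for trigger/target pairs.
hr = heuristic_importance([["/", "/lehre.html", "/lehre/master.html"],
                           ["/lehre.html", "/", "/index.html"]])
print(hr.get(("/", "/lehre.html")))
```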

3 Clustering Method and Site Modeling

This section introduces the clustering method that finds related page communities from page pairs. An overview of the algorithm is as follows:

Input: web server usage logs
Output: page clusters

1. Recover sessions from the web usage logs.
2. Scan the recovered sessions and build page pairs by computing heuristic importance.
3. Create a graph from the page pairs and find its cliques (sketched in the code below).

The methods to recover sessions for different users have been discussed in detail in [11, 21]; individual access behaviours are also recovered in this step for further interesting usage pattern mining [9]. In [13], a clustering mining method was introduced in the PageGather system.
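A possible realization of step 3 is sketched below. The use of networkx for clique finding and the edge-weight threshold are implementation choices assumed here, not taken from the paper.

```python
import networkx as nx

def build_content_clusters(pair_scores, threshold=0.1):
    """Build an undirected graph from page pairs whose heuristic importance
    exceeds a threshold and return its maximal cliques as candidate clusters.
    The threshold value is an assumed tuning parameter."""
    graph = nx.Graph()
    for (trigger, target), score in pair_scores.items():
        if score >= threshold:
            # keep the stronger direction when both (a, b) and (b, a) are present
            previous = graph.get_edge_data(trigger, target, default={"weight": 0.0})
            graph.add_edge(trigger, target, weight=max(score, previous["weight"]))
    return [set(clique) for clique in nx.find_cliques(graph) if len(clique) > 1]

# pair_scores could be the Hr values above, or SUP/IS/CS scores.
print(build_content_clusters({("/lehre.html", "/lehre/master.html"): 0.33,
                              ("/lehre/master.html", "/lehre/bachlor.html"): 0.28,
                              ("/lehre.html", "/lehre/bachlor.html"): 0.21}))
```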


Table 1. Piece of Weights of Pages on HPI Site

URL                              Weight
/                                100
/lehre.html                      16.7
/index.html                      12.6
/support/sitetmap.html           3.33
/lehre/studienprojekte.html      2.08

The Web has been modelled in many ways, most of which are based on graph theory. PageRank [7] and HITS [4] are the two most famous methods. Besides graph models, a role-based model was used in [6] and n-order Markov models are used in many personalized recommendations [17, 21]. We adapt the method from HITS [4] to model the page relations within a web site. In this model, a web site is dedicated to several particular topics, its semantic space can be formed from all the concepts related to these topics, and all the concepts are organized as a concept hierarchy. Each page within the web site is given a concrete numerical weight representing its corresponding subset of concepts, and this weight is computed by weight propagation, step by step, from the home page, which represents the whole concept set. We call the page that disperses a concept the "host", and a page that inherits the concept from the host a "receiver". The weight of a concept held by a host is equally divided among all the receivers that inherit this concept; on the other hand, the different concepts of a receiver may be inherited from different hosts. Given a page $p$, its weight $w_p$ is computed as:

$$w_p = \sum_{i=1}^{k} p.w_{c_i}, \qquad p.w_{c_i} = \frac{q.w_{c_i}}{n},$$

where $k$ is the number of different concepts of $p$, $q$ is the host of $p$ for concept $c_i$, and $n$ is the number of receivers that inherit concept $c_i$ from $q$. While computing $w_p$ for every page, the weight of a concept for a page is reduced as the weight propagates from the home page, so $w_p$ represents the importance of the page's semantic value from the web designer's point of view. The distribution and propagation of concept weights defined in this way are universally observed in the framework design of many web sites. We illustrate this model on the content main frame of www.hpi.uni-potsdam.de. There are 67 different pages in the main frame, and they are organized as a tree structure. With the help of automated interface design, every two of these 67 pages are directly connected. This greatly helps to reduce the effect of navigation hyperlinks on pair analysis. Table 1 shows the weights for some pages.
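The weight propagation can be sketched as follows for the simplified case of a single concept per host and a tree-structured site; the function and argument names are illustrative, and the multi-concept case would track per-concept shares separately.

```python
def propagate_weights(children, root="/", root_weight=100.0):
    """Sketch of the host/receiver weight propagation described above, assuming a
    single concept per host and a tree-structured site (children maps a host page
    to the list of its receivers). Each receiver inherits an equal share of the
    host's weight; a page's weight is the sum of its inherited shares."""
    weights = {root: root_weight}
    stack = [root]
    while stack:
        host = stack.pop()
        receivers = children.get(host, [])
        if not receivers:
            continue
        share = weights[host] / len(receivers)
        for page in receivers:
            weights[page] = weights.get(page, 0.0) + share
            stack.append(page)
    return weights

# Toy site: the home page disperses its weight to two sections.
print(propagate_weights({"/": ["/lehre.html", "/index.html"],
                         "/lehre.html": ["/lehre/master.html", "/lehre/bachlor.html"]}))
```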

4 Evaluation Measurements

The number of clusters and their average size are two important measurable criteria for the success of a clustering method, and distinctiveness and coverage are two further criteria for the quality of the clusters. Given a clustering result M built by one clustering method, distinctiveness is given by:

$$Distinctiveness(M) = \frac{|P'|}{\sum_{i=1}^{k} |P_i|},$$

where $P'$ is the set of pages appearing in at least one cluster and $P_i$ is the set of pages used in the $i$-th cluster, if there are $k$ clusters in $M$. Coverage is given as:

$$Coverage(M) = \frac{|P'|}{|P|},$$

where $P'$ is the set of pages appearing in at least one cluster and $P$ is the set of pages that need to be clustered. In our scenario, we add another two criteria: semantic dependence and popularity. Semantic dependence is defined as:

$$Semantic\text{-}dependence(M) = \frac{|C|}{\sum_{i=1}^{k} |C'_i|},$$

where $C$ is the set of content categories that the web site belongs to and $C'_i$ is the set of content categories that the $i$-th cluster has, if there are $k$ clusters in $M$. We use the average support of the page pairs appearing in at least one cluster as the popularity of the clustering model; the popularity of a model $M$ is:

$$Popularity(M) = \frac{\sum_{i=1}^{n} Pr(PA_i)}{|PA|},$$

where $PA$ is the set of page pairs that appear in $M$ and $Pr(PA_i)$ is the probability of page pair $PA_i$ over the usage record set.

In [18], the gold standard is named as the expert criterion in general evaluation methods and is used to find the "ideal solution" to a problem. The way to reduce the subjective bias of experts is to involve as many suitable experts as possible. In web usage mining, the ideal evaluation for a content improving schema is direct feedback from the client side, but in web applications such direct feedback is uncontrollable. We therefore propose three measurements to evaluate usage models:

1. If similar patterns appear in different models, then the pattern is useful.
2. If similar patterns appear in different periods of time, then the pattern is valuable.
3. If a model reflects the changes of the content reorganization, then the model is reasonable.
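A minimal sketch of the four cluster-quality measures defined in this section is given below. Since the formulas are reconstructed from the text, the exact normalizations should be treated as approximate.

```python
def cluster_metrics(clusters, pages_to_cluster, cluster_categories, site_categories,
                    pair_support):
    """Distinctiveness, coverage, semantic dependence and popularity as defined in
    Section 4. `clusters` is a non-empty list of page sets, `cluster_categories` a
    parallel list of category sets, and `pair_support` maps each page pair in the
    model to its support Pr(PA_i)."""
    clustered_pages = set().union(*clusters)
    return {
        "distinctiveness": len(clustered_pages) / sum(len(c) for c in clusters),
        "coverage": len(clustered_pages) / len(pages_to_cluster),
        "semantic_dependence": len(site_categories) / sum(len(c) for c in cluster_categories),
        "popularity": sum(pair_support.values()) / len(pair_support),
    }
```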

5 One Case Study

We take the content frame of www.hpi.uni-potsdam.de as the improvement target, which includes 67 different pages with different URLs. We take two pieces of web logs for clustering page pairs: one from 01.03.2005 to 31.03.2005 and the other from 01.04.2005 to 30.04.2005. By fetching the related usage information of the 67 target frame pages, we get 16314 sessions from March and 18546 from April. Here we show the validity of our method by one case study: P3 = /lehre/vorlesungen.html, P4 = /lehre/bachlor.html, and P5 = /lehre/master.html. These three pages have the same semantic importance, computed as in Section 3, because they are linked from the same source page, /lehre.html. This means that these three pages carry no bias on the web designer's side. But based on the page pairs modelled from usage data, these three pages show a clear bias in heuristic importance.


Fig. 1. Page clusters based on CS and DS in the March logs (edge labels are the pairwise heuristic importance values among P3, P4 and P5)

Fig. 2. Page clusters based on CS and DS in the April logs (edge labels are the pairwise heuristic importance values among P3, P4 and P5)

In the above two figures, we discriminate the different directions of heuristic importance within a page pair by using different lines: a bold line means a higher heuristic importance and a dashed line a lower heuristic importance within the same page pair. From the four page clusters in these two figures, we find that P5 has a higher heuristic importance to P3 and P4 than those from P3 and P4 to P5, which holds in two different periods of logs and under two different models. Based on the task-oriented evaluation measurements in Section 4, we can naturally conclude that P3 ← P5 → P4 is a very useful page cluster and helps to improve the content organization.

6 Conclusion

In this paper, we investigate the problem of building content clusters based on modeling page pairs by computing the position distance between source page and target page. Some questions are still open for further investigation, for example, measuring the difference between usage patterns and original web organization.

References

1. A. Banerjee, J. Ghosh: Clickstream Clustering using Weighted Longest Common Subsequences. In Workshop on Web Mining, SIAM, (2001).
2. B. Berendt and M. Spiliopoulou: Analysis of navigation behaviour in web sites integrating multiple information systems. The VLDB Journal, (2000).


3. J. Heer, E. Chi: Mining the Structure of User Activity using Cluster Stability. In Workshop on Web Analytics, SIAM, (2002).
4. J. M. Kleinberg: The Web as a Graph: Measurements, Models, and Methods. In 5th International Conference on Computing and Combinatorics, (1999).
5. Jian Pei, Jiawei Han, et al.: Mining Access Patterns Efficiently from Web Logs. PAKDD, (2000).
6. J. Srivastava, R. Cooley, M. Deshpande and P. Tan: Web Usage Mining: Discovery and Application of Usage Patterns from Web Data. ACM SIGKDD, (2000).
7. L. Page, S. Brin, R. Motwani and T. Winograd: The PageRank Citation Ranking: Bringing Order to the Web. Technical report, Stanford Digital Library Technologies Project, (1998).
8. L. Wang and C. Meinel: Behaviour Recovery and Complicated Pattern Definition in Web Usage Mining. In Proc. of ICWE, (2004).
9. L. Wang, C. Meinel and C. Liu: Discovering Individual Characteristic Access Patterns in Web Environment. In Proc. of RSFDGrC, (2005).
10. M. Chen, J. Park: Data Mining for Path Traversal Patterns in a Web Environment. In ICDCS, (1996).
11. M. Spiliopoulou, B. Mobasher, B. Berendt and M. Nakagawa: A Framework for the Evaluation of Session Reconstruction Heuristics in Web Usage Analysis. In Journal of INFORMS on Computing, (2003).
12. Mohammed J. Zaki: Efficiently Mining Frequent Trees in a Forest. In SIGKDD, (2002).
13. M. Perkowitz and O. Etzioni: Adaptive Web Sites: Automatically Synthesizing Web Pages. In AAAI, (1998).
14. Peter F. Brown: Class-Based n-gram Models of Natural Language. In Association for Computational Linguistics, (1992).
15. Peter Pirolli: Distributions of Surfers' Paths through the World Wide Web. World Wide Web 2, (1999).
16. P. Tonella: Evaluation Methods for Web Application Clustering. In Proc. 9th WWW Conference, (2000).
17. R. Sarukkai: Link Prediction and Path Analysis Using Markov Chains. In Proc. 9th WWW Conference, (2000).
18. T. Pang and V. Kumar: Interestingness Measures for Association Patterns: A Perspective. Tech. Rep., University of Minnesota, (2000).
19. T. Joachims, D. Freitag, et al.: WebWatcher: A Tour Guide for the World Wide Web. In IJCAI, (1997).
20. T.R. Niesler and P.C. Woodland: Modelling Word-Pair Relations in a Category-Based Language Model. In Proc. ICASSP, (1997).
21. X. Huang, F. Peng, A. An and D. Schuurmans: Dynamic Web Log Session Identification with Statistical Language Models. In Journal of the American Society for Information Science and Technology, (2004).
22. Y. Fu, K. Sandhu, and M. Shih: A Generalization-based Approach to Clustering of Web Usage Sessions. In Web Usage Analysis and User Profiling, (2002).

IRFCF: Iterative Rating Filling Collaborative Filtering Algorithm

Jie Shen (1), Ying Lin (1), Gui-Rong Xue (1,2), Fan-De Zhu (1), and Ai-Guo Yao (3)

(1) Department of Computer Science, Yangzhou University, Yangzhou, Jiangsu 225009, P.R. China
(2) Department of Computer Science & Engineering, Shanghai Jiao-tong University, Shanghai 200030, P.R. China
(3) College of Computer, Wuhan University, Wuhan 430072, P.R. China
[email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. The aim of collaborative filtering is to make predictions for an active user by utilizing the rating information of like-minded users in a historical database. Previous methods suffered from several problems: sparsity, scalability, rating bias, etc. To alleviate those problems, this paper presents a novel approach, the Iterative Rating Filling Collaborative Filtering algorithm (IRFCF). Firstly, based on the idea of an iterative reinforcement process, object-pair similarity is computed iteratively, and the average rating and rating range are introduced to normalize ratings in order to alleviate the rating bias problem. Then missing ratings are filled from user and item clusters through an iterative clustering process to solve the sparsity and scalability problems. Finally, the nearest neighbors in the set of top clusters are selected to generate predictions for the active user. Experimental results have shown that our proposed collaborative filtering approach provides better performance than other collaborative filtering algorithms.

1 Introduction

With the rapid development of the World Wide Web and e-commerce, recommendation systems have become a hot issue in the research areas of e-commerce and information retrieval. Collaborative filtering (CF) is one of the most promising recommendation technologies [1]; it predicts the utility of items for an active user based on the historical rating information of other users whose interests are similar to the active user's. In general, collaborative filtering methods can be divided into two categories: memory-based algorithms and model-based algorithms. Because of the tremendous growth of customers and commodities in e-commerce, the problems of sparsity and scalability have become key challenges for collaborative filtering, which seriously impact the quality of present CF methods. To deal with the sparsity problem, several techniques have been proposed, such as Singular Value Decomposition (SVD), Principal Component Analysis (PCA), Latent Semantic Indexing (LSI), content-boosted CF approaches and item-based approaches [2]. Furthermore, some researchers have tried to solve the problem from other perspectives: J. Wang proposed the ReCoM [3] algorithm, which uses multi-type relationships to improve the cluster quality of interrelated data objects through an iterative reinforcement clustering process; G.R. Xue [4] used clusters to provide a smoothing operation in order to solve the sparsity and scalability problems from a new perspective.

To address the above problems, this paper presents a novel approach, the Iterative Rating Filling Collaborative Filtering algorithm (IRFCF). Firstly, through an iterative reinforcement process, object-pair similarity is computed, and the average rating and rating range are introduced to normalize ratings in order to alleviate the rating bias problem. Then we use the K-means algorithm to cluster objects and iteratively fill unrated data from user and item clusters. Finally, the nearest neighbors in the set of top clusters are selected to generate predictions for the active user.

The rest of this paper is organized as follows. Section 2 presents a brief description of related work. In Section 3, the Iterative Rating Filling Collaborative Filtering algorithm is formally illustrated. The results of the experiments are presented in Section 4, followed by the conclusion in Section 5.

2 Related Work

ReCoM [3], proposed by J. Wang, employed a method of analyzing inter-relationships to improve the cluster quality, which can effectively alleviate the sparsity problem. The main similarity equation was defined as follows:

$$S = \alpha \cdot S_f + \beta \cdot S_{intra} + \gamma \cdot S_{inter} \qquad (1)$$

where $S_f$ stands for the content similarity, and $S_{intra}$ and $S_{inter}$ are the intra-type and inter-type similarities. $\alpha$, $\beta$ and $\gamma$ are weights for the different similarities with $\alpha + \beta + \gamma = 1$.

G.R. Xue [4] combines the advantages of the memory-based and model-based approaches by introducing a smoothing-based method, in which clusters generated from the training data provide the basis for data smoothing and neighborhood selection. The missing values are smoothed as:

$$\tilde{R}_u(t) = \bar{R}_u + (\bar{R}_{C_u}(t) - \bar{R}_{C_u}) \qquad (2)$$

where $\bar{R}_{C_u}(t)$ is the average rating of all users in cluster $C_u$ for the item $t$, and $\bar{R}_{C_u}$ is the average rating of all users in cluster $C_u$.

3 Iterative Item Rating Filling Collaborative Filtering

3.1 Iterative Similarity Calculation and Rating Normalization

G. Jeh proposed the SimRank algorithm [5] to measure similarity by iteratively analyzing the structural-context information of objects. We find similar features in CF research. In SimRank, all relationships between heterogeneous objects are regarded as 0/1 binary, but in CF they take varied values. Furthermore, this paper introduces the average rating and rating range to normalize ratings in order to alleviate rating bias [6]. Taking these into account, this paper first extends the SimRank similarity equation as follows:

$$S^n(O_x, O_y) = \frac{C \cdot \sum_{i=1}^{|R(O_x,O_a)|} \sum_{j=1}^{|R(O_y,O_b)|} (R_i(O_x,O_a) - \bar{R}_x)(R_j(O_y,O_b) - \bar{R}_y)\, S^{n-1}(O_a,O_b)}{\tilde{R}_x \cdot \tilde{R}_y \cdot |R(O_x,O_a)| \cdot |R(O_y,O_b)|} \qquad (3)$$

where $O$ denotes a user or item object, and $O_a$, $O_b$ are the objects heterogeneous to $O_x$, $O_y$. $S^n(O_x,O_y)$ denotes the similarity of the object pair computed at the $n$-th iteration. $R(O_x,O_a)$ is the rating value between $O_x$ and $O_a$. $C$ is a decay factor of similarity transfer. $|R(O_x,O_a)|$ is the number of ratings between $O_x$ and $O_a$. $\bar{R}_x$ denotes the average rating value of object $O_x$, and $\tilde{R}_x$ stands for the rating range of object $O_x$. At the beginning of the calculation, the similarities of all object pairs are initialized to 0 or 1: if $O_x = O_y$ then $S^0(O_x,O_y) = 1$; otherwise ($O_x \neq O_y$), $S^0(O_x,O_y) = 0$.

3.2 Iterative Filling of Missing Ratings

In general, each user always has a certain interest in any item. Unrated items do not mean that users dislike these items, nor do they indicate that their interest is zero. However, unrated items widely exist in recommendation systems; this has become a main factor that causes sparsity and seriously interferes with the quality of collaborative filtering. This paper uses the item ratings of similar users to fill the missing ratings. If user $u_x$ did not rate item $i_a$ but user $u_y$, whose rating behavior is similar to that of $u_x$, rated item $i_a$, we can simulate $u_x$'s rating of item $i_a$ based on the rating value of $u_y$. Meanwhile, considering that our approach analyzes the interrelationships between heterogeneous objects, it is also feasible to fill the missing ratings from the item side: if item $i_a$ is similar to item $i_b$, we can also use the rating $R(u_x,i_b)$ to fill the missing rating $R_m(u_x,i_a)$. So we can define a new rating score computing equation through iterative reinforcement processing:

$$R^n(u_x, i_a) = \begin{cases} R(u_x, i_a) & \text{if } u_x \text{ rated } i_a \\[2mm] \bar{R}_{u_x} + \alpha_2 \cdot \dfrac{\sum_{\exists R(u_y,i_a)} (R(u_y,i_a) - \bar{R}_{u_y}) \cdot S^n(u_x,u_y)}{\sum_{\exists R(u_y,i_a)} S^n(u_x,u_y)} + \beta_2 \cdot \dfrac{\sum_{\exists R(u_x,i_b)} (R(u_x,i_b) - \bar{R}_{u_x}) \cdot S^n(i_a,i_b)}{\sum_{\exists R(u_x,i_b)} S^n(i_a,i_b)} & \text{otherwise} \end{cases} \qquad (4)$$

where $\bar{R}_{u_x}$ stands for the average rating of user $u_x$, $\bar{R}_{u_y}$ is the average rating of user $u_y$, and $R^n_m(u_x,i_a)$ denotes the value filled in for the missing rating $R(u_x,i_a)$ in the $n$-th iteration. $\alpha_2$ and $\beta_2$ denote the filling ratios of the missing rating from the user and item aspects, respectively.

3.3 Cluster-Based Iterative Missing Rating Filling

The above method fills missing ratings iteratively based on the improved iterative similarity calculation. In each iteration step, the information of all users and items is used to fill missing ratings. In fact, this is not necessary and does not reach the optimum. So we cluster objects in each iteration step in order to obtain better performance and speed up the operation. The K-means algorithm is selected as the basic clustering algorithm in this paper. After clustering the user and item data, a missing rating $R_m(u_x,i_a)$ can be filled locally within the user cluster of user $u_x$ and the item cluster of item $i_a$. The cluster-based missing rating filling equation is expressed as:

$$R^n_m(u_x, i_a) = \bar{R}_{u_x} + \alpha_2 \cdot (\bar{R}_{C^n_x}(i_a) - \bar{R}_{C^n_x}) + \beta_2 \cdot (\bar{R}_{C^n_a}(u_x) - \bar{R}_{C^n_a}) \qquad (5)$$

where $C^n_x$ stands for the user cluster that $u_x$ belongs to in the $n$-th iteration and $C^n_a$ for the item cluster that $i_a$ belongs to. $\bar{R}_{C^n_x}(i_a)$ is the average rating for the item $i_a$ given by all users in cluster $C_x$ (which $u_x$ belongs to) after $n$ iterations, and $\bar{R}_{C^n_x}$ is the average rating of all the users in cluster $C_x$.
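The cluster-based filling step of equation (5) can be sketched as follows; the default filling ratios, the interpretation of the item-side deviations and the treatment of empty clusters are assumptions rather than details from the paper.

```python
import numpy as np

def fill_missing_ratings(R, user_cluster, item_cluster, alpha2=0.5, beta2=0.5):
    """One cluster-based filling pass in the spirit of equation (5). R is a 2-D
    array with np.nan marking missing ratings; user_cluster / item_cluster give a
    cluster id per user / item for the current K-means iteration."""
    filled = R.copy()
    n_users, n_items = R.shape
    user_mean = np.nanmean(R, axis=1)                                  # average rating of each user
    for u in range(n_users):
        peers = [v for v in range(n_users) if user_cluster[v] == user_cluster[u]]
        for i in range(n_items):
            if not np.isnan(R[u, i]):
                continue                                               # keep observed ratings
            siblings = [j for j in range(n_items) if item_cluster[j] == item_cluster[i]]
            # deviation of item i within u's user cluster
            user_side = np.nanmean(R[peers, i]) - np.nanmean(R[peers, :])
            # deviation of user u within i's item cluster
            item_side = np.nanmean(R[u, siblings]) - np.nanmean(R[:, siblings])
            filled[u, i] = (user_mean[u]
                            + alpha2 * np.nan_to_num(user_side)
                            + beta2 * np.nan_to_num(item_side))
    return filled
```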

3.4 Nearest Neighbor Selection and Prediction

The final aim of collaborative filtering is to select some nearest neighbors of the test user for prediction. Based on the above cluster-based iterative filling process, this paper selects the nearest neighbors from the several clusters most similar to the test user. Firstly, after the iterative clustering has converged, we pre-select those clusters similar to the test user. The similarity between a cluster and the test user is calculated as:

$$S(u, c_j) = \frac{\sum_{i \in I_{u,c_j}} (R(u,i) - \bar{R}_u)(R(c_j,i) - \bar{R}_{c_j})}{\sqrt{\sum_{i \in I_{u,c_j}} (R(u,i) - \bar{R}_u)^2}\;\sqrt{\sum_{i \in I_{u,c_j}} (R(c_j,i) - \bar{R}_{c_j})^2}} \qquad (6)$$

Then, we re-calculate the similarity between each user in the candidate set and the test user on the filled ratings, and select the Top-N most similar users based on the following similarity function:

$$S(u_t, u) = \frac{\sum_{i \in I_{u_t}} (R(u_t,i) - \bar{R}_{u_t})(R(u,i) - \bar{R}_u)}{\sqrt{\sum_{i \in I_{u_t}} (R(u_t,i) - \bar{R}_{u_t})^2}\;\sqrt{\sum_{i \in I_{u_t}} (R(u,i) - \bar{R}_u)^2}} \qquad (7)$$

Finally, after the Top-N most similar users are selected, we can make predictions for the test user by aggregating their ratings. The prediction is calculated as the average deviation from the neighbors' means:

$$R(u_t, i) = \bar{R}_{u_t} + \frac{\sum_{j=1}^{N} S(u_t,u_j) \cdot (R(u_j,i) - \bar{R}_{u_j})}{\sum_{j=1}^{N} S(u_t,u_j)} \qquad (8)$$

where $S(u_t,u_j)$ is the similarity between the test user $u_t$ and the training user $u_j$, and $N$ is the number of users in the neighborhood.
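A compact sketch of the neighbor selection and prediction steps (equations (7) and (8)) follows. Restricting the similarity to co-rated items and omitting the cluster pre-selection of equation (6) are simplifications made here for brevity.

```python
import numpy as np

def pearson(a, b, mask):
    """Pearson correlation restricted to the positions selected by a boolean mask."""
    if mask.sum() < 2:
        return 0.0
    da, db = a[mask] - a[mask].mean(), b[mask] - b[mask].mean()
    denom = np.sqrt((da ** 2).sum() * (db ** 2).sum())
    return float((da * db).sum() / denom) if denom else 0.0

def predict_rating(filled_R, rated_mask, test_user, item, top_n=20):
    """Select the Top-N most similar users on the filled matrix (equation (7),
    restricted to co-rated items) and predict with the weighted deviation from
    the neighbours' means, as in equation (8)."""
    n_users = filled_R.shape[0]
    sims = np.full(n_users, -np.inf)
    for v in range(n_users):
        if v != test_user:
            sims[v] = pearson(filled_R[test_user], filled_R[v],
                              rated_mask[test_user] & rated_mask[v])
    neighbours = [v for v in np.argsort(sims)[::-1][:top_n] if np.isfinite(sims[v])]
    means = filled_R.mean(axis=1)
    num = sum(sims[v] * (filled_R[v, item] - means[v]) for v in neighbours)
    den = sum(sims[v] for v in neighbours)
    return means[test_user] + (num / den if den else 0.0)
```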

4 Experiments

4.1 Dataset and Evaluation Metric

A real dataset about movie ratings is used in our experiments: MovieRating [7]. We set the training set to be the first 100, 200, or 300 users, denoted MR_100, MR_200 and MR_300, respectively, and use the last 200 users for testing. The evaluation metric used in our experiments is the commonly used mean absolute error (MAE) [2], which is the average absolute deviation of the predicted ratings from the actual ratings on the items the test users have voted on.

4.2 Performance

In order to evaluate the performance of our cluster-based iterative filling approach, we compare our algorithm, iterative rating filling collaborative filtering (IRFCF), with state-of-the-art algorithms for collaborative filtering: the Pearson Correlation Coefficient (PCC) method, Personality Diagnosis (PD), the Aspect Model (AM), the Flexible Mixture Model (FMM) and iterative similarity computing collaborative filtering (ICF).

Table 1. MAE on MovieRating and MovieLens for different algorithms. A smaller value means a better performance.

Training Set   Methods   Given 5   Given 10   Given 20
MR_100         PCC       0.883     0.838      0.815
               PD        0.835     0.824      0.813
               AM        0.962     0.926      0.885
               FMM       0.828     0.814      0.809
               ICF       0.836     0.828      0.816
               IRFCF     0.827     0.806      0.794
MR_200         PCC       0.865     0.829      0.812
               PD        0.832     0.818      0.809
               AM        0.852     0.833      0.822
               FMM       0.816     0.807      0.801
               ICF       0.828     0.816      0.810
               IRFCF     0.812     0.800      0.786
MR_300         PCC       0.852     0.840      0.821
               PD        0.821     0.814      0.810
               AM        0.839     0.832      0.828
               FMM       0.812     0.805      0.798
               ICF       0.824     0.813      0.809
               IRFCF     0.810     0.794      0.773

Table 1 summarizes the results for these six methods. Clearly, the IRFCF algorithm outperforms the other five methods in all configurations. By iteratively filling the missing data based on the mutual relationship reinforcement between users and items, our filling approach is shown to be effective in improving the prediction accuracy of collaborative filtering. The cluster number is K = 20 for the MovieRating dataset, and the number of nearest neighbors is set to 20.

5 Conclusion

To alleviate the problems of sparsity, scalability and rating bias, this paper presents a novel collaborative filtering algorithm. Unlike previous CF algorithms, our approach fills the unrated data using the similarity of users and items based on cluster-based iterative similarity computation. Experimental results show that our approach can significantly improve the accuracy of prediction.

Acknowledgements. This work was funded by the Natural Science Fund of Jiangsu Province (No. BK2005046).

References

1. J. S. Breese, D. Heckerman, and C. Kadie: Empirical Analysis of Predictive Algorithms for Collaborative Filtering. In Proceedings of the 14th Conference on Uncertainty in Artificial Intelligence, 1998, 43-52.
2. B. M. Sarwar, G. Karypis, J. A. Konstan, and J. Riedl: Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th International World Wide Web Conference, Hong Kong, 2001.
3. J. D. Wang, H. J. Zeng, Z. Chen, H. J. Lu, L. Tao, and W.-Y. Ma: ReCoM: reinforcement clustering of multi-type interrelated data objects. In Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 274-281, July 2003.
4. G. R. Xue, C. X. Lin, Q. Yang, W. Xi: Scalable Collaborative Filtering Using Cluster-based Smoothing. In Proceedings of the 2005 ACM SIGIR Conference, Brazil, August 2005.
5. G. Jeh and J. Widom: SimRank: a measure of structural-context similarity. In Proceedings of the eighth ACM SIGKDD international conference on knowledge discovery and data mining, pp. 538-543, Edmonton, Alberta, Canada, July 23-26, 2002.
6. R. Jin, L. Si: A Study of Methods for Normalizing User Ratings in Collaborative Filtering. The 27th Annual International ACM SIGIR Conference, Sheffield, UK, July 25-29, 2004.
7. MovieRating. http://www.cs.usyd.edu.au/~irena/movie_data.zip

A Method to Select the Optimum Web Services*

Yuliang Shi, Guang'an Huang, Liang Zhang**, and Baile Shi

Department of Computing and Information Technology, Fudan University, China
{031021056, huangguan, zhangl, bshi}@fudan.edu.cn

Abstract. Emerging web services standards enable the development of large-scale applications in open environments. With the increasing number of emerging web services, one of the main problems is no longer to find the required services, but to select the optimum one from a set of requirements-satisfying services. In this paper, we propose a workflow-organization-model-based method to compute web service reputation values, and then combine the reputation values with QoS-based objective quality values as the judgment standard provided to users. Based on these values, users can select the optimum web service to execute. Keywords: Web services, Service Selection, Workflow, QoS.

1 Introduction

Recently, web services have emerged as a new paradigm that supports loosely coupled distributed software systems. The provider of a web service publishes the service description (WSDL) [2] in a service registry (UDDI) [1]; the user can then search for the required services in UDDI and invoke them via SOAP [3]. With services increasingly emerging, finding a required web service has become easier, and more attention has been paid to selecting the optimum one from a set of requirements-satisfying services. For web service selection, most current methods are based on QoS (the service quality model), which can be established through the semantic information of web services and the evaluation information of users. However, they only emphasize certain quality values; different user requirements are not taken into account. In practical applications, different users may evaluate the same web service differently. In this paper, we propose a workflow-organization-model-based method to compute web service reputation values, which takes the relationships of different users into account. The reputation value is a subjective, experience-based value; we then also take the objective quality value in the web services QoS model into consideration. The reputation value and the objective quality value are given different weights to produce a total sum. According to the resulting sums, users can select the optimum web service.

The structure of this paper is organized as follows. After this introduction, Section 2 provides some related work. In Section 3, we introduce the whole process of optimum web service selection.

* This work is partially supported by the National Basic Research Program (973) under grant No. 2005CB321905, and the Chinese Hi-tech (863) Projects under grants No. 2002AA4Z3430 and No. 2002AA231041.
** To whom correspondence should be addressed.


Then we propose a workflow-model-based method to compute web service reputation values in Section 4. Section 5 discusses the evaluation of web service quality values and the service selection. Finally, we give a summary and future work.

2 Related Work

With the development of web technology, web services providing the same functions have become more and more numerous. Thus, how to select the optimum web service has been attracting much attention. In [4], the authors propose a quality model for web service composition. It manages to globally optimize the composition expense in a linear way, but the selection of a single web service is not taken into consideration. [5] gives a workflow-based quality model of web services, which includes the flow execution time and expense; however, it also does not consider single web service selection. A dynamic QoS model is put forward in [6], through which the values of services are calculated and provided to users to select the optimum one. Although this model introduces service reputation, it fails to consider user relationships and feedback information, which are very important for service selection. The authors of [7] present a log-based QoS model. This model computes the service quality according to log information, and it also does not take user relationships and feedback information into account. In [8], the authors discuss an agent-based selection method, which involves the evaluation and feedback of users. But the user relationships are not included in this evaluation, and the objective quality values of web services are also not considered in the selection process. In this paper, our selection method takes into account not only the reputation of a web service, but also the confidence relationships among users. Next we will illustrate the details.

3 Selection Process of the Optimum Web Services

The whole selection process is executed in 5 steps, as shown in Fig. 1.

1. For a task to be executed, list all the web services involved in the task and all the former users who fulfilled the task.
2. According to the workflow organization model, evaluate the confidence values between the current user and the former users who fulfilled the task. Then, based on the web service reputation values the former users defined, calculate the reputation values of all web services for the current user.
3. Evaluate the objective quality values of all web services according to the web services QoS model.
4. Multiply the reputation value and the quality value of every web service by their related weights and add up the products to generate the result sum. Then the user can decide on the optimum web service to execute.


5. After executing the web service, the user feeds back his evaluation of this service. Based on this feedback information, the reputation of the current user and the other former users is recomputed.

Fig. 1. The whole process of selection (list the web services related to the activity; search the former users who executed the activity; compute the current user's confidence in the former users; list the former users' reputation values; compute the reputation values of the web services; obtain the QoS-based results; compute the final results; the current user selects one service and gives it a reputation value)

4 Workflow-Organization-Model-Based Calculation of Web Service Reputation Values

The concept of workflow originates from the office automation field. It aims at those tasks having a fixed procedure in daily life. The organization model of a workflow describes the levels of the people interacting with the workflow, who are placed in different groups according to their roles.

Fig. 2. The organization model of workflow

Here we use a hierarchical structure to describe this model. As shown in Fig. 2, leaf nodes represent the end users, while internal nodes denote the different groups. Among these groups, some may be contained by others; for example, groups B and C belong to group A. Note that each user may have multiple roles, so that he may exist in multiple groups. In Fig. 2, the user s1 belongs to both group D and group E.


Based on the organization model of the workflow, the calculation process has 3 steps: (1) computing the user confidence, (2) evaluating the reputation values of all web services for the current user, and (3) recalculating the user confidence according to the feedback of the current user.

Confidence denotes the degree of interaction between the users involved in the workflow. Assume two users s1 and s2; then the confidence of s1 in s2 is represented by $C_{s1s2}$ and the confidence of s2 in s1 by $C_{s2s1}$. Note that $C_{s1s2}$ may not be equal to $C_{s2s1}$. For any two users s1 and s2,

$$C_{s1s2} = 1 - (D_{s1s2} - 1)/D_{s1\,max} \qquad (1)$$

where $D_{s1s2}$ is the distance between user nodes s1 and s2. If s1 and s2 belong to the same group, then $D_{s1s2} = 1$; otherwise $D_{s1s2}$ is the distance between the two groups to which s1 and s2 belong respectively, that is, the length of the shortest path between the groups. $D_{s1\,max}$ denotes the distance from s1 to the user who is farthest from s1. As shown in Fig. 2, $D_{s1s2} = 1$ and $D_{s1s5} = 3$. If $D_{s1\,max} = 8$, then $C_{s1s2} = 1$ and $C_{s1s5} = 0.75$.

After obtaining the initial confidences according to formula (1), we can calculate the reputation value of a web service related to some task as follows:

$$R_{w_i} = \sum_{j=1}^{n} C_{as_j} R_{s_j}^{w_i} \Big/ \sum_{j=1}^{n} C_{as_j} \qquad (2)$$

where $R_{w_i}$ is the reputation value of web service $w_i$ for user $a$; $C_{as_j}$ is the confidence of user $a$ in $s_j$; and $R_{s_j}^{w_i}$ denotes the evaluation of service $w_i$ by user $s_j$, whose value lies in the range 0 to 1.

The confidence obtained by formula (1) is just an initial value. When a user has executed a web service, he needs to evaluate this service, and this evaluation is fed back to the initial confidences to calculate new confidence values. This feedback process does not target all the users who have fulfilled the task, but only those who have evaluated the web service selected for execution by the current user. The feedback formula is:

$$C'_{as_j} = C_{as_j} + p \cdot |\Delta R - \overline{\Delta R}| \Big/ \sum_{j=1}^{n} C_{as_j} \qquad (3)$$

where $C_{as_j}$ is the confidence of user $a$ in user $s_j$ before the feedback and $C'_{as_j}$ the confidence after the feedback; $R_{s_j}^{w_i}$ and $R_a^{w_i}$ are the evaluations of web service $w_i$ by users $s_j$ and $a$ respectively; $\Delta R = |R_{s_j}^{w_i} - R_a^{w_i}|$ is the difference between the evaluations of users $a$ and $s_j$ for web service $w_i$; $\overline{\Delta R} = \sum_{j=1}^{n} \Delta R_j / n$ is the average difference between the evaluation of user $a$ and those of all users who have evaluated $w_i$; and $p \in \{1,-1\}$, with $p = -1$ if $\Delta R \geq \overline{\Delta R}$ and $p = 1$ otherwise.

Y. Shi et al.

In addition, the result of feedback confidence based on formula 3 satisfies the condition: n

¦

C

' asj

=

j =1

n

¦

C

asj

j =1

5 Selection of Web Services The resulted reputation value is just a subjective experience one. In practical application, we should also take the subjective quality value into consideration that is evaluated based on the QoS model of web services. 5.1 Calculation of Web Services Quality Values Here we only consider two factors related to the web service quality. 1. Execution expense, which is the necessary expense provided to the service provider from the user who calls the service. It will be denoted with qc. 2. Execution time, which means the total time from the user’s calling the web service to finishing execution. It will be denoted with qt. For all the web services corresponding to a task, we can establish a quality matrix to record the quality information, in which every row represents a web service of wi and the two columns correspond the execution expense and execution time of wi respectively. ª q 1c «q « 2c Q = « ... « « ... «¬ q ic

q 1t º q 2 t »» ... » » ... » q it »¼

Based on the above quality matrix, we can calculate the quality values of web services in the following two steps: 1. Transform the initial matrix Q. For qic and qit in each row of the matrix, turn them into vic and vit, where vit = min(q it ) / q it , vic = min(qic ) / q ic . 2. Calculate the quality value of web service according to the following formula:

Vi = wc • vic + wt • vit

(4)

Where wc and wt denote the weights of execution expense and execution time respectively and their sum is a constant 1. 5.2 Selection of Web Services The selection method is to first combine the reputation value with the quality value of each web service like formula 5, then select the maximum one from the value sums.

A Method to Select the Optimum Web Services

Wi = α • R wi + β • Vi

873

(5)

Here Į and ȕ means the weights of reputation value and quality value respectively and they satisfy the equation: Į +ȕ=1. If Į=1, it indicates that we only consider the user evaluation during the selection without web service subjective quality value. Otherwise if ȕ=1, it means we only take the latter into account.

6 Conclusion and Future Work In this paper, we propose the method which enables users select the optimum web service to execute. Future work will be focused on the improvement of objective quality value computation. We’ll make use of the log information of web services to compute the values, so that the selection of web services will be more appropriate for practical applications.

References [1] Http://uddi.org/pubs/uddi-v3.00-published-20020719.html [2] W3C, “Web Service Description Language (WSDL) 1.1,” World Wide Web Consortium (2001), available at http://www.w3.org/TR/wsdl. [3] W3C “Simple Object Access Protocol(SOAP) 1.2” World Wide Web Consortium (2002), available at http://www.w3.org/TR/2003/REC-soap12-part0-20030624/ [4] Liangzhao Zeng, Boualem Benatallah, Marlon Dumas. Quality Driven Web Services Composition. In Proc. Int. World Wide Web Conf. (WWW), May 2003. [5] Jorge Cardoso, Amit Sheth, John Miller, Jonathan Arnold, Krys Kochut. Quality of Service for Workflows and Web Service Processes. Web Semantics: Science, Services and Agents on the World Wide Web 1 (2004) 281–308 [6] Yutu Liu, Anne H.H. Ngu, Liangzhao Zeng. QoS Computation and Policing in Dynamic Web Service Selection. In Proceedings of 13th International Conference on World Wide Web (WWW),2004. [7] Derong Shen, Ge Yu, Tiezheng Nie, Rui Li, and Xiaochun Yang. Modeling QoS for Semantic Equivalent Web Services. WAIM 2004, LNCS 3129, pp. 478–488, 2004. [8] Raghuram M.Sreenath, Munindar P. Singh , Agent-based Service Selection. Web Semantics: Science, Services and Agents on the World Wide Web 1 (2004) 261–279.

A New Methodology for Information Presentations on the Web Hyun Woong Shin1, Dennis McLeod1, and Larry Pryor2 1

Computer Science Department, Integrated Media Systems Center, University of Southern California, 941 W. 37th Place, Los Angeles, CA, USA {hyunshin, mcleod}@cs.usc.edu 2 Annenberg School for Communication, University of Southern California, Los Angeles, CA, USA [email protected]

Abstract. The rapid growth of on-line information including multimedia contents during the last decade caused a major problem for Web users - there is too much information available, most of it poorly organized and hard to find. To help a user to find proper information, web news search functions are devised and developed. Although those search engines provide some solutions, users still suffer from reading huge amounts of hyperlinks. Also, users of new media now have great expectations of what they can see on the Web. To provide better user satisfaction, we proposed a story model (story structures) that can be dynamically instantiated for different user requests from various multi-modal elements. The proposed story model defines four domain-independent story types. We compared traditional web news search functions and our story model by using usability test. The result shows that our multimedia presentation methodology is significantly better than the current search functions.

1 Introduction The rapid growth of online digital information over the last decade has made it difficult for a typical user to find and read information [10]. One way to address the problem of information overload is to tailor that information to specific user interests, needs and knowledge base. If there is an approach that responds to individual information requests with an original, dynamically built story, several problems are solved. First, in today’s Web service industry, information presentations and collections of data are static and having limited multi-modal presentations. Critically, there is no capability to dynamically adapt an integrated presentation of information to a user. We believe that a user will engage deeply into a story when a user not only reads text articles but also watches videos and/or listens to audio clips in a coordinated manner. Second, most of current web search engines deliver a huge amount of hyperlinks. Although this helps improve accuracy (recall), an end user has a trouble deciding which results are what he/she wants. The most similar study related to our work is called “Cuypers” [11]. The system generates Web-based presentations as an interface to a semi-structured multimedia database. The goal of the system is to generate the final presentation by using conX. Zhou et al. (Eds.): APWeb 2006, LNCS 3841, pp. 874 – 879, 2006. © Springer-Verlag Berlin Heidelberg 2006

A New Methodology for Information Presentations on the Web

875

straints (quantitative and qualitative) and logic programming to process automatic presentation generation. In our approach, each story type was created by using constraints as same as Cuypers, but the story structures are used to deliver high level of abstraction. For example, a summary story type can be chosen by an end user who wants to read brief information. In order to present parallel information (e.g., stories about players of LA Dodgers), a structured collection story type can be chosen by our system. The core of this paper is to provide a dynamic multi-modal presentation with retrieved results of news search engine. To achieve the goal, the proposed system created story structures that can be dynamically instantiated for different user requests from various multi-modal elements. Also, the proposed system focuses on quality of the results not quantity of results. In order to convey the nature of the information presentation, we devised and developed the precise nature of the dynamic integration of multimedia presentations that will draw upon visual techniques [12, 14], presentation constrains [3, 16], a content query formulation, a story assembly and a structured rule-based decision process. Within this philosophy, we propose a story model that defines four story types that lay out the appropriate presentation style depending on the user’s intention and goal – a summary story type, a text-based story type, a non-text based story type and a structured collection story type. We design the story model to delineate highlevel abstractions of general story templates so that the proposed story types can cope with any kind of existing stories. We conducted an experiment to examine user satisfaction of our system comparing with that of traditional web news search functions. The experimental result shows that our system is statistically significantly better in user satisfaction. To determine a user’s intention and goal, a general knowledge-based process will be used. A key to the successful use of story types is the ability to relate and connect the user requests to the Content Database. A domain dependent ontology [2, 7] is essential for capturing the key concepts and relationships in an application domain. For our purposes, we are interested in sports news domain dependent ontologies [8]. Metadata descriptions will connect a modified user request (by using domain ontology) to the Content Database for retrieving proper content elements.

2 Overall Architecture The overall functional architecture of the proposed system is illustrated in figure 1. The model has two key phases: story assembly and content query formulation. In the story assembly phase, a novel structured rule-based decision process is introduced to determine a proper story type and to invoke a primary search and a secondary search in the content query formulation phase. At the beginning, the story assembly module receives a modified user’s request from a query processing procedure. These inputs then invoke a primary search to retrieve multi-modal content objects, along with a constraint-based k-nearest neighbor search. These results are sent to the story type decision module to determine a proper story type and then fill in the chosen story type with multi-modal elements (content objects). If it is necessary, this decision module also invokes a secondary search to get extra elements.

876

H.W. Shin, D. McLeod, and L. Pryor

Fig. 1. Overall functional Architecture

3 Story Model The proposed story model defines four story types that lay out an appropriate presentation style depending on a user’s intention and goal. In order to provide an efficient presentation, the story model needs quantification for the size of each icon based on ranking in the retrieved content objects. Furthermore, the story model employs visual techniques that solve layout problems such as combining and presenting different types of information and adopts presentation constraint specifications to abstract higher levels of presentation so that lower levels of presentation can automatically generate a story that meets those specifications. A visual technique depends on traditionally accepted visual principles to provide an arrangement of layout components [1]. This conventional arrangement, called a layout grid, consists of a set of parallel horizontal and vertical lines that divide the layout into units that have visual and conceptual integrity [12]. According to Vanderdonckt et al. [14], there are five sets of visual techniques: physical techniques (e.g. balanced vs. unbalanced layout), composition techniques (e.g. simple vs. complex layout), association and dissociation techniques (e.g. grouped vs. splitted layout), ordering techniques (e.g. sequential vs. random layout), and photographic techniques (e.g. round vs. angular layout). We focus only on balance and symmetry of physical techniques because balance is a highly recommended technique evoked by many authors [4, 6, 9]. Balance is a search for equilibrium along a vertical or horizontal axis in layouts. Symmetry consists of duplicating visual images along with a horizontal and/or vertical axis [5, 9]. Thus, achieving symmetry automatically preserves balance. Presentation constraints are typically expressed in terms of a timeline, screen layout, or navigation structure. In most constraint systems, only certain aspects of the presentation are adapted to satisfy each constraint. Multimedia presentation structures consist of multiple dimensions, primarily including space, time and navigation [13, 15, 16]. Our approach is only concerned with a spatial constraint because time and navigational constraints are not relevant to our presentation goal. Our presentation

A New Methodology for Information Presentations on the Web

877

goal is to deliver an integrated multi-modal presentation with a balanced layout in response to a user’s objective. Thus, timeline and navigation constrains are not considerable constraint specifications of our approach. Graphical icons, including a scrollable box for a text, a fixed size window for images, and control boxes for audio and video clips are containers of elements in story types. In spatial constraint specifications, each container has a fixed size to be filled in by an element. This higher level of abstraction allows a consistent final presentation for the user.

4 Experiments 4.1 Evaluation Plan To examine the usability of our system, we designed a controlled experiment. In our experiments, a total of 25 students from the engineering and journalism schools (17 and 8, respectively) were selected and asked to fill out a questionnaire after experiencing the system. Appendix A shows the sample questionnaire and the figure of our system used in the experiment. The subjects were given a brief instruction about the experiment by an experimenter and asked to use two sites: one with traditional news search functions (such as CNN, LA Times, and Washington Post) and one with our system. 13 subjects began with the traditional news search function site and then used our system site, while the other 12 subjects started with our system site and then used the traditional web news search function site, in order to avoid any possible order effect. At the end of each site, the subjects were asked to fill out an online questionnaire, which was hyperlinked from the last page of each site, with radio-button scaled responses and some open-ended questions asking them to evaluate four categories - Overall satisfaction; Functionality and capability; Learnability; and Interface design. Finally, all subjects were debriefed and thanked. 4.2 Statistical Analysis Cronbach’s alpha was used to estimate whether the items in the same category measure the same underlying construct. We assume there is high reliability if the alpha value is over 85%. Cronbach's alpha measures how well a set of items (or variables) measures a single unidimensional latent construct - it is a coefficient of reliability (or consistency). Cronbach’s alpha can be written as a function of the number of test items and the average inter-correlation among the items. 4.3 Results and Discussion The online questionnaire was composed based on Cronbach’s alpha, which shows that the questions in each category are highly reliable (Table 1). All questions are included in the results. Paired t-tests were performed for each category, and the results are shown in Table 2. In all categories, our system is statistically significantly better than the traditional web search engines (all p-values are less than 0.05). Our system’s overall satisfaction, functionality and capability, and interface design were more than one level higher than the traditional ones’.
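As a concrete illustration of the reliability statistic described above, here is a minimal Python sketch that computes Cronbach's alpha from a respondents-by-items matrix, both in its variance form and in the form based on the number of items and their average inter-correlation mentioned in the text. The response values are invented placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix of scaled responses."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standardized_alpha(scores):
    """Equivalent form based on the number of items k and their average inter-correlation."""
    k = scores.shape[1]
    corr = np.corrcoef(scores, rowvar=False)
    r_bar = (corr.sum() - k) / (k * (k - 1))   # mean of the off-diagonal correlations
    return k * r_bar / (1 + (k - 1) * r_bar)

# Invented 5-point radio-button responses of several subjects to one category's questions.
responses = np.array([[4, 5, 4, 4],
                      [3, 4, 4, 3],
                      [5, 5, 4, 5],
                      [4, 4, 3, 4],
                      [2, 3, 3, 2]])
print(round(cronbach_alpha(responses), 4), round(standardized_alpha(responses), 4))
# A category is treated as highly reliable when alpha exceeds 0.85, as assumed in the text.
```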

Table 1. Cronbach’s alpha values of the two sessions (Session 1: traditional Web search functions; Session 2: our system)

                               Session 1   Session 2
Overall                        .8690       .9150
Functionality and Capability   .7747       .6941
Learnability                   .8027       .7829
Interface Design               .8392       .8056

Table 2. Paired t-test values

         Pair          Mean of Paired Differences   t (df = 16)   Sig. (2-tailed)
Pair 1   Q101 - Q111    2.35                         6.305        .000
Pair 2   Q111 - Q131   -2.29                        -5.376        .000
Pair 3   Q102 - Q112    1.71                         3.237        .005
Pair 4   Q112 - Q132   -1.88                        -4.157        .001
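A brief sketch, using SciPy, of how the paired comparisons reported in Table 2 would be computed; the two response vectors below are invented placeholders for one question pair, not the study's data.

```python
from scipy import stats

# Hypothetical 1-5 ratings from the same 17 subjects for one question,
# answered once for the traditional search site and once for our system.
traditional = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2, 2, 3, 2, 1, 2, 3, 2]
our_system  = [4, 5, 4, 4, 5, 4, 3, 5, 4, 4, 5, 4, 4, 3, 4, 5, 4]

t_stat, p_value = stats.ttest_rel(our_system, traditional)   # paired t-test, df = n - 1 = 16
mean_diff = sum(a - b for a, b in zip(our_system, traditional)) / len(traditional)
print(f"mean of paired differences = {mean_diff:.2f}, t = {t_stat:.3f}, p = {p_value:.3f}")
```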

References [1] Bodart F and Vanderdonckt J. Visual Layout Techniques in Multimedia Applications. In CHI Companion’94, 1994; 121-122. [2] Bunge M. Treatise on Basic Philosophy, Ontology I: The Furniture of the World. Reidel Publishing Co., vol. 3, 1977. [3] Cruz IF and Lucas WT. A Visual Approach to Multimedia Querying and Presentation. ACM Multimedia, 1997. [4] Davenport G and Murtaugh M. ConText: Towards the Evolving Documentary. In Proceedings ACM Multimedia, 1995; 381-389. [5] Dondis DA. A Primer of Visual Literacy. Cambridge: The MIT Press, 1973. [6] Dumas JS. Designing User Interfaces for Software. Prentice Hall, 1988. [7] Gruber TR. Toward Principles for the Design of Ontologies Used for Knowledge Sharing. In Proceedings of the International Workshop on Formal Ontology, 1993. [8] Khan L and McLeod D. Effective Retrieval of Audio Information from Annotated Text Using Ontologies. In Proceedings of ACM SIGKDD Workshop on Multimedia Data Mining, 2000; 37-45. [9] Kim WC and Foley JD. Providing High-Level Control and Expert Assistance in the User Interface Presentation Design. In Proceedings of InterCHI’93, ACM Press, 1993; 430-437. [10] NetCraft. Web Server Survey. http://news.netcraft.com/, 2005. [11] Ossenbruggen J, Geurts J, Cornelissen F, Rutledge L, and Hardman L. Towards Second and Third Generation Web-based Multimedia. In the Tenth International World Wide Web Conference, 2001; 479-488. [12] Shneiderman B and Kang H. Direct Annotation: A Drag-and-Drop Strategy for Labeling Photos. Proceedings of the International Conference on Information Visualization (IV2000), 2000; 88-95.


[13] Smolensky P, Bell B, King R, and Lewis C. Constraint-based Hypertext for Argumentation. In Proceedings of Hypertext, 1987; 215-245. [14] Vanderdonckt J and Gillo X. Visual Techniques for Traditional and Multimedia Layouts. Proceedings of the Workshop on Advanced Visual Interfaces, 1994. [15] Weitzman L and Wittenberg K. Automatic Presentation of Multimedia Documents Using Relational Grammars. In Proceedings of ACM Multimedia, 1994; 443-451. [16] Zhou MX. Visual Planning: A Practical Approach to Automated Presentation Design. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 1999; 634-641.

Integration of Single Sign-On and Role-Based Access Control Profiles for Grid Computing Jongil Jeong1, Weehyuk Yu1, Dongkyoo Shin1,*, Dongil Shin1, Kiyoung Moon2, and Jaeseung Lee2 1

Department of Computer Science and Engineering, Sejong University, 98 Kunja-Dong, Kwangjin-Ku, Seoul 143-747, Korea {jijeong, solui}@gce.sejong.ac.kr, {shindk, dshin}@sejong.ac.kr 2 Electronics and Telecommunications Research Institute, 161 Kajong-Dong, Yusong-Gu, Taejon 305-350, Korea {kymoon, jasonlee}@etri.re.kr

Abstract. In this paper, we propose an architecture to integrate authentication and authorization schemes for constructing a secure Grid system. In our proposed method, SAML (Security Assertion Markup Language) and XACML (eXtensible Access Control Markup Language) play key solution roles in integrating single sign-on and authorization. IBM and Microsoft are already leading in the standardization of security for Grid computing. Nevertheless, we recommended SAML as an alternative to the existing standard that they recommend. Therefore, our proposed architecture opens up the possibility of adopting a variety of single sign-on technologies in constructing secure Grid computing. Additionally, in order to implement access control, we recommended XACML, which gives Grid computing an efficient way to implement role-based access control. Keywords: single sign-on, SAML, role-based access control, XACML, Grid.

1 Introduction Grid computing is a very complex environment in which all the systems connected to the Grid network have authentication and authorization schemes to protect their resources. To construct a reliable Grid system, a user’s access to certain resources needs to be restricted according to an authorization scheme even after the user is authenticated. With regard to security vulnerability in relation to access control, it is said that most security problems are caused by malicious internal users rather than intruders. Recently, the Open Web Application Security Project (OWASP) stressed the importance of access control for authenticated users by declaring “Broken Access Control” to be one of the top ten most critical Web application security vulnerabilities [1]. Therefore, authentication and authorization for a user need to be integrated to construct a secure Grid system. *

Corresponding author.



Globus Toolkit [2] provides a Grid Security Infrastructure (GSI) that provides fundamental security services such as single sign-on and access control. Globus Alliance suggests the use of delegation for single sign-on and SAML AuthorizationDecision for authorization. However, there is no guidance on how each technology can be integrated. Thus, we propose another way of implementing single sign-on and integrating it with authorization. In our proposed method, SAML (Security Assertion Markup Language) and XACML (eXtensible Access Control Markup Language), which are recommended by OASIS (Organization for the Advancement of Structured Information Standards) [3], play key solution roles in integrating single sign-on and authorization. In this paper, we propose an efficient architecture that integrates single sign-on and access control for Grid security based on SAML and XACML.

2 Background 2.1 Single Sign-On and Access Control The SSO service acts as a wrapper around the existing security infrastructure that exports various security features like authentication and authorization [4]. To support single sign-on, the system collects all the identification and authentication credentials at the time of primary sign-on. This information is then used by SSO Services within the primary domain to support the authentication of the user to each of the secondary domains with which the user may interact. Access control either permits or denies user access requests by checking whether the user has permission to access target resources. Recently, role-based access control has been emerging for use in Grid computing. In contrast with identity-based methods, a role-based access control policy (RBAC) takes user context into account, that is, roles and resources become the main factor determining the size of the rolebased rule base because RBAC puts users into groups and assigned roles. For this reason, roles can reflect organizational structure. Access control management of a role-based policy can be simpler and have a better scale than an identity-based system [5, 6]. 2.2 SAML (Security Assertion Markup Language) and XACML (eXtensible Access Control Markup Language) SAML is designed to offer SSO for both automatic and manual interactions between systems. It will let a user log into another domain and define all of their permissions, or it will manage automated message exchanges between just two parties. SAML enables the exchange of authentication and authorization information about users, devices, or any identifiable entities, calling these subjects. Using a subset of XML, SAML defines the request-response protocol by which systems accept or reject subjects based on assertions, which are declaration of certain facts about a subject [7].


XACML is used in conjunction with SAML and supplements lacking access control policy in SAML. XACML can specify various targets, such as a resource, an entire document, a partial document, or multiple documents. It can even specify an XML element as the target to be protected. This aspect makes it possible to implement fine-grained access control. SAML can be bound to multiple communication and transport protocols. It can be linked with Simple Object Access Protocol (SOAP) over HTTP [7]. SAML operates without cookies in a browser/artifact profile. Using a browser/artifact, a SAML artifact is carried as part of a URL query string and is a pointer to an assertion.

3 The Expression of Role-Based Access Control Policy Using XACML As we mentioned earlier, XACML provides an XML-based RBAC profile for the expression of authorization policies to build them into Web applications. We propose a practical model for transcribing the RBAC profile into XACML using some examples. Figure 1 represents the schema of the Policy element. The PolicyDefaults element specifies default values that apply to the Policy element. The Target element identifies the set of decision requests that the parent Policy element intends to evaluate. The Rule elements define the individual rules in the policy. The Obligations element specifies obligations that must be fulfilled by the PEP (Policy Enforcement Point) in conjunction with the authorization decision. PolicyId is the policy identifier. RuleCombiningAlgId is the identifier for the rule-combining algorithm by which the components must be combined [5]. Figure 2 represents an instance of a user’s information, which is stored in a repository or database. Figure 3 depicts the rules applicable to the distributed resources based on the Policy schema shown in Figure 1. Box #1 specifies the user’s group. Subjects who want to execute a certain resource belong to this group. Box #2 specifies the condition that an ID offered by a requester must correspond with the ID in the instance of the user’s

Fig. 1. Schema for Policy element

Fig. 2. XML document including user’s information


[Rule listing: the Target contains a Subjects element specifying the user’s group (#1) together with AnyResource and AnyAction; the Condition (#2) applies urn:oasis:names:tc:xacml:1.0:function:string-one-and-only to a SubjectAttributeDesignator with AttributeId urn:oasis:names:tc:xacml:1.0:example:attribute:user-id and to the value selected by RequestContextPath "//tst:users/tst:user/tst:user-id/text()", both of DataType http://www.w3.org/2001/XMLSchema#string.]

Fig. 3. Description of Rule using XACML

information (refer to Figure 2). In this policy, whoever is authenticated by his ID and belongs to the Resident group can access any resource and execute any action on it.
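The decision logic that this policy expresses can be sketched in plain Python as follows; this illustrates the effect of the rule, not XACML processing itself, and the stored user records and request fields are hypothetical stand-ins for the repository of Figure 2 and the request context.

```python
# Hypothetical stand-in for the user information stored in the repository (cf. Fig. 2).
STORED_USERS = {"alice": {"group": "Resident"}, "bob": {"group": "Visitor"}}

def evaluate_rule(request: dict) -> str:
    """Mimics the rule of Fig. 3: any resource, any action, but the subject must be an
    authenticated member of the Resident group whose offered ID matches a stored user-id."""
    user = STORED_USERS.get(request.get("user_id"))
    if user is None:                    # condition #2: offered ID must match a stored user-id
        return "Deny"
    if user["group"] != "Resident":     # target #1: subject must belong to the Resident group
        return "Deny"
    return "Permit"                     # AnyResource/AnyAction impose no further restriction

print(evaluate_rule({"user_id": "alice", "resource": "printer", "action": "execute"}))  # Permit
print(evaluate_rule({"user_id": "bob",   "resource": "printer", "action": "execute"}))  # Deny
```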

4 Architecture for Integrating User Authentication and Access Control for Grid Computing Environments We propose an integrated architecture in which a user offers authentication credential information to the Grid system network to obtain user authentication and authorization and then access to another trusted system using this authentication and

Fig. 4. Architecture for integrating user authentication and role-based access control


authorization, based on the SAML and XACML standard. Figure 4 explains the concept of this architecture. The user authentication and authorization procedure for this architecture is presented as a sequence diagram in Figure 5. Each box in the diagram denotes an entity involved in the process. Figure 5 explains the messages between entities, applying a user’s single sign-on and access control in three domains in which there is a mutual trust relationship.

Fig. 5. Sequence diagram of the proposed Architecture

Fig. 6. An Assertion with Authentication and Attribute Statement

Figure 6 is an assertion statement issued by the SAML authority (refers to Step (4) of Figure 5). Figure 7 is an assertion statement issued by the XACML Component (refers to Step (19) of Figure 5). These messages were verified by a simulation where two domains were constructed with a mutual trust relationship and the SAML and XACML libraries, which were built from previous work [8].


Fig. 7. An Assertion with Authentication, Attribute, and Authorization Decision Statement

5 Conclusion With regard to security for Grid computing, IBM and Microsoft are already leading in its standardization. Nevertheless, we recommended SAML as an alternative to the existing standards that they are promoting. SAML is also a standard technology for implementing single sign-on. Thus, our proposed architecture opens up the possibility that various single sign-on technologies can be adopted in constructing secure Grid computing. Additionally, we recommended XACML to implement role-based access control. Since XACML gives Grid computing an efficient way to implement rolebased access control, the difficulty of using the Globus toolkit for the convergence of standard technologies to implement role-based access control can be solved by XACML.

References 1. OWASP (Open Web Application Security Project): http://www.owasp.org/document/topten.html 2. Globus Toolkit 4.0 Release Manuals, http://www.globus.org/toolkit/docs/4.0/security/prewsaa/Pre_WS_AA_Release_Notes.html 3. OASIS (Organization for the Advancement of Structured Information Standards): http://www.open-oasis.org 4. B. Pfitzmann, B. Waidner: Token-based Web Single Sign-on with Enabled Clients. IBM Research Report RZ 3458 (#93844), November (2002) 5. eXtensible Access Control Markup Language (XACML) Version 1.0: http://www.oasis-open.org/committees/xacml/repository/ 6. M. O'Neill, et al.: Web Services Security, McGraw-Hill/Osborne, (2003) 7. Bindings and Profiles for the OASIS Security Assertion Markup Language (SAML) V1.1: http://www.oasis-open.org/committees/security/ 8. J. Jeong, D. Shin, D. Shin, H. Oh: A Study on XML-based Single Sign-On System Supporting Mobile and Ubiquitous Service Environments, Lecture Notes in Computer Science 3207, (2004).

An Effective Service Discovery Model for Highly Reliable Web Services Composition in a Specific Domain* Derong Shen, Ge Yu, Tiezheng Nie, Yue Kou, Yu Cao, and Meifang Li Dept. of Computer, Northeastern University, Shenyang 110004, China {Shendr, yuge}@mail.neu.edu.cn

Abstract. How to discover suitable Web services becomes more and more important in supporting highly reliable Web services composition. Nowadays, related work mainly focuses on syntax agreement and compatible matching to obtain Web services, in which long responding time and low recall for discovering Web services as well as low reliability of Web services composition are manifested. In this paper, an effective Service Discovery Model is proposed, in which profile matching, functional matching, nonfunctional matching and QoS matching are included and realized at three stages, namely publish matching stage, discovery matching stage and QoS matching stage. The availability of the Service Discovery Model and its advantages are testified by experiments.

1 Introduction To support highly reliable Web services composition, it is urgent to discover suitable Web services with better performance from numerous Web services. Nowadays, existing researches on Web services discovery have advanced from keywords-based and frame-based searching to semantic information based one, while the latter matching has been recognized as an effective service searching strategy. Nowadays, a variety of techniques have been proposed, among which, popular ones include the following five methods: (1) DAML-S ontology matching method [1,2]. (2) WSDL based matching method [3]. (3) Ontology based matching method [4,5]. (4) Description Logics (DLs) reasoning method [6]. (5) OWL-S/DAML-S embedded to UDDI method [7]. To enhance the performance of Web services discovered, typical researches [8,9] proposed a theory on evaluating the QoS for Web services, but no standard QoS model has been proposed so far. Moreover, these existing efforts have the following disadvantages: (1) Only part of matching is insufficient for discovering suitable Web services. (2) Multi-step matching makes the responding time of both discovering Web services and executing composite services much longer. (3) Pair–wise matching based on text for profile information is not flexible and highly efficient. (4) Only few attributes such as QoS information and constraint information have been addressed so far. The paper focuses on service discovery strategy for supporting highly reliable Web services composition in a specific domain, and proposes a Service Discovery Model, *

This research is supported by the National High-Tech Development Program (2003AA414210), the National Science Foundation (60173051).



with the background of the Enterprise Flexible Composition System (EFCS) [10], and experiments are conducted to verify the efficiency and availability of the Service Discovery Model.

2 Service Discovery Model To ensure reliable and effective executing of composite services, the following four strategies are adopted. First, Web services composition is supported in a specified domain with OWSDL and ODI [11] being defined. Secondly, the composite services are defined based on OWSDL(j) by domain experts according to their experience and the actual demands. Thirdly, input and output mapping information (Map(Ip, Io) and Map(Op, Oo)) from WSi published to OWSDL(j) is obtained at publishing time. Fourthly, the actual execution of WSi is realized through reverse mapping function (InMap(Ip, Io)) which realizes mapping from standard input parameters of OWSDL(j) to the input parameters of WSi, and the result is reflected through InMap(Op, Oo) that realizes the conversion from result parameters to standard output parameters. Based on above ideas, we divide the process of service discovering into three stages, respectively, publish matching stage, discovery matching stage and QoS matching stage. The Service Discovery Model is defined as: ServiceDiscoveryModel=(PublishM, DiscoverM,QoSM) in which (1) Context-based Profile matching and WSDL-based functional matching are realized by Algorithm PublishM. (2) In DiscoverM, the consistency of data flows in pre-operation and post-operation is realized, and the nonfunctional matching based on vector distance is used to select suitable Web services on demands. (3) In QoSM, QoS matching based on QoS information is adopted to select the optimal Web services from numerous semantic equivalent Web services. In the following, the matching algorithms of the three stages are further expounded on respectively. (1) Publish Matching algorithm (PublishM Algorithm). At publishing time, the Web service published will be correctly registered according to its domain concept[11] and corresponding Domain Web service[11], which is the guarantee to discover Web services with syntax agreement and to enhance the precision and recall of services discovered. The PublishM is represented as follows: PublishM Algorithm Input: {P.WSi, OP.WSi}// WSi is a Web service published, and P.WSi and OP.WSi are the profile information and operation information of WSi. Output: {Di, OWSDL(j), Map(Ip, Io), Map(Op, Oo) } // Di is the domain concept WSi belongs to, OWSDL(j) is the Domain Web service followed by WSi. Main Steps: Step1: P.WSi is parsed into keywords by eliminating empty words which contain less functional information and kept the substantives in W1= {w1, w2…}, and by the same means, service name, operation name, input and output name as well as their corresponding functional description information are analyzed to obtain the set of keywords W2. Let W= W1+ W2.


Step2: Set_D1, Set_D2, …, Set_Dm are determined, where Set_Dk = {wi | wi ∈ W ∧ wi ∈ Dk} is the set of the terms of W that also occur in Dk, kept in order; D1, D2, …, Dm represent the domain concepts defined in the ontology.

Step3: ||W, Dk||CSD = s + Σ_{i=1}^{s−1} neighborhood(xi, xi+1) represents the context-based similar

distance between W and Dk., where the number of elements in Set-Dk is s, let SetDk={x1,x2,…,xs}, xi = wj, xi+1 = wk, then neighborhood(xi,xi+1)=1/(k-j). Step4: Remove the keywords excluded in any Dk from W and another effective keyword set W are obtained. Step5: || W, D1||CSD, || W, D2||CSD, …, || W, Dm||CSD are calculated to determine the Dk with the maximum similar distance. Step6: In Dk, OWSDL(j) is pre-determined if ||W, DkWSDL(j)||CS D has bigger similar distance. Step7: Resorting to ODI, if all the parameters of input set of OWSDL(j) subsume those in input set of WSk and the parameters in output set of WSk subsume ones of output set of OWSDL(j), then OWSDL(j) is determined and the input/output mapping information (Map(Ip, Io), Map(Op, Oo)) from WSi to OWSDL(j) is obtained. (2) Discovery Matching Algorithm (DiscoverM Algorithm). P_Set and WS_Set can simply be obtained by keywords (Domain Web services) matching first, and then C_Set is acquired based on constraint information by means of vector distance. To further expatiate at the DiscoverM Algorithms efficiently, first of all, some related definitions are given: Definition 1 Service Profile Ontology (O{Go,Io}). O{Go,Io} is a standardized service profile, which denotes constraint information and can be defined as a SDTemplate, where Go is a set of general attributes of profile ontology; Io is a set of the instance attributes of profile ontology. Corresponding Service Publishing Profile (P{Gp,Ip}) and Service Requesting Profile (R{Gr,Ir}) are defined the same as O{Go,Io}, where P{Gp,Ip} is the profile information published by providers, and R{Gr,Ir} is the profile information requested by customers. Definition 2 Service Discovery Matching Degree. [11]. Let O P and R be one dimension vector matrix corresponding to O{Go,Io }, P{Gp,Ip} and R{Gr,Ir} respectively. Then Service Discovery Matching Degree of WSi and WSr is denoted as: ||WSi, WSr||SDMD=|| O , P ||VD*|| P , R || VD , where || O , P ||VD is the vector distance between and P , || P , R || VD is one between P and R . || O , P || VD represents completeness about the description information of Web services published, while || P , R || VD represents the similarity degree between the Web services published and that requested. || pmin , R ||MMD is the Minimum Matching Degree [12] corresponding to R . DisO

coverM Algorithm is represented as follows: DiscoverM Algorithm Input: { Dk, OWSDL(j), CR} // CR is the constraint information requested. Output: {C_Set}


Main Steps: Step1: both the keywords of Dk, and OWSDL(j) are matched to obtain P_Set and WS_Set. Step2: C’_Set={WSi| || pi , R ||VD>|| pmin , R ||MMD∧WSi∈WS_Set} is acquired, where

pi is the vector of WSi. Step3: the Web services in C’_Set is sorted according to the ||WSi, WSr||SDMD and named as C_Set. (3) QoS Based Matching Algorithm (QoSM Algorithm). Though the evaluation factors of QoS model in different domains may be different, and no matter which domain it belongs to, the following factors are generally considered in QoS model [13,14]: different weights designated on users’ demands, historical statistic information or more recent and important principle, and reasonable evaluation rules for all the factors being defined etc. So the QoS evaluation model can be defined as follows: Definition 3 QoS Evaluation Model (QoS(WS)). Let QoS(WS)={FN,OP,CS},where FN represents the set of evaluation factors and their formulas, OP ={op1,op2,…,opm} represents the set of evaluation modes provided for customers in an application, CS={cs1,cs2,…,csm} represents the constraints on evaluation factors. Based on the QoS evaluation model, QoSM Algorithm is denoted as follows: QoSM Algorithm Input: {C_Set, QR}// QR is the quality information requested. Output: {S_Set} Main Steps: Step1: According to QoS model, the performance values of every factor in QoS model about all the Web services in C_Set are calculated. Step2: Their total performance values based on the evaluation mode selected are calculated. Step3: S_Set is obtained, in which Web services are sorted by their QoS performance.
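A small Python sketch of two of the quantities used above: the context-based similar distance of Step 3 of the PublishM algorithm, and the product form of the Service Discovery Matching Degree, with cosine similarity standing in for the unspecified vector-distance measure. The keyword sets, domain concepts, and profile vectors are invented for illustration only.

```python
import math

def context_similar_distance(W, Dk):
    """||W, Dk||_CSD = s + sum_{i=1}^{s-1} neighborhood(x_i, x_{i+1}), where Set_Dk keeps,
    in W's order, the s keywords of W that also occur in domain concept Dk, and
    neighborhood(x_i, x_{i+1}) = 1/(k - j) for x_i = w_j and x_{i+1} = w_k."""
    set_dk = [w for w in W if w in Dk]
    pos = {w: j for j, w in enumerate(W)}
    score = float(len(set_dk))
    for a, b in zip(set_dk, set_dk[1:]):
        score += 1.0 / (pos[b] - pos[a])
    return score

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sdmd(O, P, R):
    """Service Discovery Matching Degree ||WSi, WSr||_SDMD = ||O,P||_VD * ||P,R||_VD,
    with cosine similarity used here as the vector-distance measure (an assumption)."""
    return cosine(O, P) * cosine(P, R)

# Invented keyword set and domain concepts.
W = ["book", "flight", "hotel", "price", "ticket"]
D_travel = {"flight", "hotel", "ticket", "trip"}
D_finance = {"price", "loan", "account"}
print(context_similar_distance(W, D_travel))    # larger: W is closer to the travel concept
print(context_similar_distance(W, D_finance))

# Invented profile vectors: ontology template O, a published profile P, a request R.
print(sdmd(O=[1, 1, 1, 1], P=[1, 1, 1, 0], R=[1, 1, 0, 1]))
```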

3 Experiments The service discovery strategy discussed earlier has been realized in EFCS, which is supported by an advanced software environment (Eclipse 3.0.1, Websphere 5.1, GT3, Tomcat 5.0, Jetspeed 1.5, IE 6.0) and is implemented with Java JDK 1.4, JavaScript (JScript 6.0), XML 1.0, HTML 4.0, Velocity and JSP 1.2. Three aspects of the experiments are focused on to present the performance of the discovery model, namely, the responding time of discovering services as affected by profile matching, the “recall” and precision of the DiscoverM algorithm as determined by nonfunctional matching, and the responding time and failure rate of composite services as affected by QoS matching; the test results are shown in Fig. 1, Fig. 2 and Fig. 3 respectively. In Fig. 1, Tt represents the total responding time of discovering services excluding the cost of QoS matching, Tp represents the time that profile matching costs, and Rp = Tp/Tt represents the percentage of Tp in Tt; the time cost of profile matching is tested for various numbers of keywords in a domain concept. Fig. 2 shows the recall of keywords matching with semantic extension and that of the nonfunctional matching used in the DiscoverM algorithm. In Fig. 3, taking a Travel Service [12] as an example, QoS_relation_sort [14] is selected and the Web services in C_Set are sorted by their QoS performance [13], calculated by Σ_{i=1}^{m}(wi * Mi); the responding time and the failure rate in executing the composite service “TravelService” with QoS and without QoS are tested respectively.

Fig. 1. (a) Responding Time of Profile Matching (responding time in ms vs. number of keywords); (b) Rate of Time of Profile Matching in Total Discovering Time (Tp/Tt vs. number of keywords)

Fig. 2. (a) Recall of Discovery Matching (recall vs. number of Web services, for keywords matching and similar matching); (b) Average Recall of Three Types of Matching

Fig. 3. (a) Average Responding Time of TravelService with QoS and with non-QoS, by group; (b) Failure Rate of TravelService with QoS and with non-QoS, by group

According to the testing results, the discovering time under the Service Discovery Model is substantially decreased because profile matching is performed at publishing time (Fig. 1), the number of Web services discovered is enlarged as the constraints are loosened (Fig. 2), and the average responding time and the failure rate in executing TravelService are both reduced to a greater extent than without the QoS model (Fig. 3).

4 Conclusions To ameliorate limitations existing in service matching strategies, a Service Discovery Model is proposed. At publishing time, the domain concept and the Domain Web service the published service belongs to are determined based on profile matching and functional matching to guarantee the consistency of data flows in a composite service. At discovering time, nonfunctional matching is used to discover suitable Web services on demands, and then the best Web service is selected from the semantic equivalent Web services by employing QoS matching. This Service Discovery Model has been realized in EFCS, and in view of the experiments tested, this service matching strategy is highly effective and easily available to support highly reliable Web services composition. Next, the services matching algorithms with intelligence by means of ontology knowledge and the QoS model for evaluating composite services with transaction property as well as the completeness of ontology knowledge will be studied further.

References 1. Terry R., Massimo P. Advertising and Matching DAML-S Service Descriptions. In Proc. Of Semantic Web Working Symposium Program and Proceedings, 2001. 2. Paolucci M, Kawamura T. Semantic Matching of Web Services Capabilities. In Proc. of the International Semantic Web Conference (ISWC2002), Sardinia, Italy, 2002-06-12. 3. S. Yu, R. Guo. Flexible Web Service Composition Based on Interface Matching. In Proc. Of CIS, 2004. 4. Yiqiao Wang and Eleni Stroulia. Semantic Structure Matching for Assessing Web-Service Similarity. In Proc. Of ICSOC 2003. 5. Brahim Medjahed, et al, Composing Web Services on the Semantic Web, VLDB Journal, 2003, 12: 333-351. 6. Javier G. Description Logics for Matchmaking of Services. http://www.hpl.hp.com/, 2001. 7. N. Srinivasan, M.Paolucci, K. Sycara. An Efficient Algorithm for OWL-S Based Semantic Search in UDDI. In Proc of SWSWPC, 2004. 8. Cardoso J, Bussler C. Semantic Web Services and Processes: Semantic Composition and Quality of Service. On the Move to Meaningful Internet Computing and Ubiquitous Computer 2002, Irvine CA, October 2002.


9. Cardoso1 J, Miller1 J. Modeling Quality of Service for Workflows and Web Service Processes. VLDBJ, 2003. Springer Verlag, Berlin-Heidelberg. 10. Shen D.R., Yu G. Study on Service Grid Technologies Supporting Enterprises Business Agile Integration. Computer Science, 2004,Vol.31(6):82-85. 11. Shen D., Yu G. Heterogeneity Resolution based on Ontology in Web Services Composition. In Proc. Of CEC-East, 2004-09. 12. Shen D., Yu G., Cao Y., Kou Y., and Nie T.. An Effective Web Services Discovery Strategy for Web Services Composition. In Proc. Of CIT2005,pp257-263. 13. Shen D., Yu G. Modeling QoS for Semantic equivalent Web Services. In Proc. Of WAIM04, 2004. 14. Shen D., Yu G. A Common Application-centric QoS Model for Selecting Optimal Grid Services. In Proc. Of Apweb05, pp.478-488, 2005.

Using Web Archive for Improving Search Engine Results Adam Jatowt1, Yukiko Kawai1, and Katsumi Tanaka1,2 1

National Institute of Information and Communications Technology, 3-5 Hikaridai, Seika-cho, Soraku-gun, 619-0289, Kyoto, Japan {adam, yukiko}@nict.go.jp 2 Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, 606-8501, Kyoto, Japan [email protected]

Abstract. Search engines affect page popularity by making it difficult for currently unpopular pages to reach the top ranks in the search results. This is because people tend to visit and create links to the top-ranked pages. We have addressed this problem by analyzing the previous content of web pages. Our approach is based on the observation that the quality of this content greatly affects link accumulation and hence the final rank of the page. We propose detecting the content that has the greatest impact on the link accumulation process of top-ranked pages and using it for detecting high quality but unpopular web pages. Such pages would have higher ranks assigned.

1 Introduction Search engines are the main gateway to the Web for many users seeking information. Most search engines use link structure analysis to evaluate the quality and ranks of web pages. The most popular ranking approach is derived from the computation of the PageRank metric [5], which, in an iterative process, defines the quality of pages based on the macro-scale link structure of the Web. The basic PageRank formula is shown in Equation 1; PR[pj] is the PageRank of page pj, d is a damping factor, ci is the number of links from some page pi to the target page pj, and m is the number of all pages containing links to page pj. Basically, the larger the number of inbound links to a page and the higher their PageRank values, the higher is the assigned PageRank value of the page.

PR[pj] = d + (1 − d) * Σ_{i=1}^{m} PR[pi] / ci   (1)
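A small Python sketch of the iterative computation behind Equation 1, using the formula exactly as written above (without normalization by the total number of pages); the toy link graph, the damping value, and the iteration count are assumptions for illustration only.

```python
def pagerank(links, d=0.15, iters=50):
    """Iterate PR[p_j] = d + (1 - d) * sum_i PR[p_i] / c_i over the pages linking to p_j."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}
    out_degree = {p: len(t) or 1 for p, t in links.items()}   # avoid division by zero for dangling pages
    for _ in range(iters):
        new_pr = {}
        for p in pages:
            incoming = sum(pr[q] / out_degree[q] for q in pages if p in links[q])
            new_pr[p] = d + (1 - d) * incoming
        pr = new_pr
    return pr

# Hypothetical four-page link graph.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(links))
```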

However, the search engines can affect user behavior and page popularity [3,4]. Since there are huge amounts of resources on the Web, relatively few of them are likely to be found and visited by users seeking information. Pages must therefore compete for user attention. Successful pages are the ones that are frequently visited by many users. Unfortunately, the competition between web pages for user attention is not completely fair. Since users tend to click only on the top-ranked pages, lower


ranked pages have difficulty gaining popularity even if they have high-quality content. Conversely, the top-ranked pages maintain their high ranks since they are frequently accessed by many users and consequently receive more new inbound links than pages with lower rankings. The accelerating growth of the Web is resulting in even more pages competing for user attention, making the negative-bias problem more serious. Consequently, we are witnessing “rich-get-richer phenomenon” [3,4]. The huge popularity of search engines among Web users and the impact of these engines on page popularity demands countermeasures that will enable pages to compete more fairly for user attention. One proposed solution is temporal link analysis. Temporal link analysis [1,2,4] focuses on analyzing link evolution and on using timerelated information regarding links for estimating page quality or improving page rankings. Links are associated with time, and this factor is considered when analyzing page quality. We also use the notion of previous page quality, however unlike in previous approaches, we analyze past page content for use in evaluating the usefulness of those pages. Our assumption is that, for a certain query, there is “topic representation knowledge” that describes the topic of the query in the best way or is the most related to this query. If we can identify this knowledge, we can rank pages based on how well they cover it. However, this knowledge is distributed not only spatially among many existing web pages but also temporally among previous versions of pages. We use historical data for top-ranked pages to identify this “topic representation knowledge” by detecting the content that contributed most heavily to link accumulation of web pages. The previous content of a page and the quality of that content have a major effect on the current number of inbound links to that page. We demonstrate it by content-based link accumulation model of pages. In the next section we introduce content-based link accumulation model of a web page. Section 3 discusses our approach to improving the search results by using past versions of pages extracted from web archives. It also shows some experimental results. Finally, we conclude with a brief summary.

2 Content-Based Link Accumulation Model Consider a web page that is ranked highly by a popular search engine for a certain query. Since it has a high ranking, it should have relatively many in-bound links, probably coming from high quality pages. Let t0 denote the time at which the page was created, meaning that its age is tn − t0, where tn is now. Let us imagine that at t0, the page had few inbound links, denoted as L(t0), and a small stream of visitors, U(t0). Since its inception, the page has been accumulating links and increasing in popularity, reflected by a growing number of visitors. It has now reached the top rank, gained many links, L(tn), and enlarged its user stream U(tn). We assume that the query popularity, hence the number of web pages found in response to the query, stays the same. The number of visitors at some point of time ti, that is, the size of the user stream, is the number of visitors due to browsing plus the number due to finding the page by using a search engine:

U(ti) = η * UB(ti) + μ * US(ti)   (2)


Thus, it is the number of users coming from other pages by following links, UB(ti), plus the number coming from search engine results, US(ti). We can represent the first number as the sum of the PageRank values of pages linking to the target page, hence in approximation, the actual PageRank value of the page. Similarly, we can approximate the number of users visiting the page from search engines as its PageRank value. Hence, the user stream at a particular time depends on the PageRank value of the page at that time, PR(ti):

U(ti) ≈ λ * PR(ti)   (3)

Some fraction of the visitors will place links pointing to the page on their pages or on those that they administer. This fraction greatly depends on the quality of the target page. Thus, similar to the approach of Cho et al. [4], we consider page quality as the conditional probability that a visitor will subsequently link to the page. Consequently, the increase in the number of inbound links can be represented as the product of the quality of the page and the size of the user stream at a certain point in time. However, the quality of a page at time ti, Q(ti), is affected by the page content at that time. The quality can be in approximation represented as the sum of the qualities of particular elements on the page, such as for example the sentences and paragraphs. We should also evaluate the qualities of the inter-element relationships; however, since evaluating relationships between elements is a difficult task, we consider now only the independent qualities of elements. In this approach, each element k of the total K elements that ever stayed on the page during its lifetime has quality qk(ti) at ti contributing to the overall quality of the page Q(ti) at this time. These qualities are assumed to have zero values or fixed, positive values depending on whether they did not or did stay on the page at ti. Thus, the overall quality of the page changes only when elements are added or deleted:

Q(ti) = Σ_{k=1}^{K} qk(ti),  where qk(ti) > 0 if element k exists on the page at ti, and qk(ti) = 0 otherwise.   (4)

Consequently, each element on the page has a direct impact on page quality and an indirect impact on the page’s popularity and rank. If we assume that a page’s inbound links are not deleted and omit λ in Equation 3, the number of inbound links at ti is

L(ti) = L(t0) + ∫_{t0}^{ti} Q(t) * PR(t) dt   (5)

where Q(t) is a function describing the change in page quality and PR(t) is a function describing the change in the PageRank value or in the user stream over time. If the page had content changes at time points belonging to the sequence T = t1, t2, …, tn, the total number of inbound links is given by

L(tn) = L(t0) + ∫_{t0}^{t1} Q(t1) * PR(t) dt + ∫_{t1}^{t2} Q(t2) * PR(t) dt + … + ∫_{t_{n−1}}^{tn} Q(tn) * PR(t) dt   (6)

This equation shows that the current number of inbound links and hence indirectly the current PageRank depend on the page’s history, i.e., its previous qualities and PageRanks. However, a page’s PageRank depends to a large extent on the number of


inbound links, hence also on the previous page qualities. Since the previous content of a page strongly affects the number of inbound links it now has and indirectly its rank, it can be considered the major factor in page popularity and can be used for detecting high-quality but low-ranked pages. If we represent the quality of a page at a certain point in time as the sum of the qualities of its parts at that time, the total number of inbound links at that time is

L(tn) = L(t0) + ∫_{t0}^{t1} Σ_{k=1}^{K} qk(ti) * PR(t) dt + ∫_{t1}^{t2} Σ_{k=1}^{K} qk(ti) * PR(t) dt + … + ∫_{t_{n−1}}^{tn} Σ_{k=1}^{K} qk(ti) * PR(t) dt   (7)

The degree of impact of an element on the current link number of a page depends on its quality, PageRank value and on how long it is on the page. The longer the period, the stronger is the impact. Thus, the content that is relatively static has a higher impact on the page’s rank than the content that is on the page for only a short while. This seems reasonable as the content that is on the page longer will be seen by more visitors. In this model we do not consider the qualities of the pages from where the inbound links originate. The focus is thus only on the changes in the number of inbound links.
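To make Equation 7 concrete, the following sketch evaluates the model on a discretized timeline; the element qualities, the PageRank trajectory, and the change points are invented for illustration, and the integrals are approximated by sums over unit time steps.

```python
# Hypothetical element qualities q_k(t): each element contributes a fixed quality
# while it is on the page and 0 otherwise (cf. Equation 4).
elements = {
    "intro paragraph": {"quality": 0.02, "on_page": range(0, 100)},   # long-lived, static content
    "news snippet":    {"quality": 0.05, "on_page": range(10, 20)},   # short-lived content
}

def Q(t):
    return sum(e["quality"] for e in elements.values() if t in e["on_page"])

def PR(t):
    return 0.1 + 0.001 * t          # assumed, slowly growing PageRank / user stream

L0 = 3                              # initial number of inbound links L(t0)
L = L0 + sum(Q(t) * PR(t) for t in range(0, 100))   # unit-step approximation of Eq. (7)
print(f"links after 100 steps: {L:.2f}")
```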

3 System Implementation and Experiments Since we do not know which pages could be used as topic representation knowledge for any arbitrary topic, we use some number of top-ranked pages from search engine results and retrieve their past versions from web archive. If lower ranked pages have historical content similar to that of higher ranked pages, their qualities should also be similar. In this case, one can ask why their ranks are still low. If the previous content of low-ranked pages demonstrated, on average, equally high quality, these pages should receive the same or a similar level of attention and trust from users as the top ranked pages. However this is not the case. For these pages to be treated fairly, their ranks should be increased. However, at the same time, we should check to see if their content is as relevant to the query as that of the top-ranked pages at the present time since their topics might have changed. Thus, additionally, in our approach, the present content of pages is compared with the present content of top-ranked pages. To facilitate comparison of historical data, we combine the contents of previous versions of a page during certain time period to create a “super page” that presents the long-term history of the page. Each super page has a vector representation calculated using normalized TF*IDF weighting of the collection of super-pages of some number of top-ranked pages. Terms in the vector are stemmed and filtered using a stop list. Because the more static content is contained in many consecutive page versions, it is repeated more often on the super page. Its importance is thus increased, and it has more impact on the super-page vectors. In this preliminary implementation we do not consider the PR(t) factor that is present in the link accumulation model (Equation 7). To achieve more accuracy we should combine historical content and link data. Thus the importance of the element of the page would also be dependent on the changes in page’s PageRank values during the time periods when the element stayed on the page.
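A minimal sketch of the super-page comparison just described, using scikit-learn's TfidfVectorizer for the normalized TF*IDF weighting and cosine similarity for the comparison; the page snapshots and the simple averaging over the reference super pages are assumptions for illustration, not the authors' exact settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def super_page(versions):
    """Concatenate a page's archived versions, so that long-lived (static) content
    is repeated and therefore carries more weight in the TF*IDF vector."""
    return " ".join(versions)

# Hypothetical archived snapshots of top-ranked (reference) pages and lower-ranked candidates.
top_pages = {"top1": ["solar power basics", "solar power basics panels"],
             "top2": ["photovoltaic cells efficiency", "photovoltaic cells efficiency cost"]}
candidates = {"low1": ["solar panels efficiency guide", "solar panels efficiency guide cost"],
              "low2": ["celebrity gossip", "celebrity gossip today"]}

docs = [super_page(v) for v in top_pages.values()] + [super_page(v) for v in candidates.values()]
tfidf = TfidfVectorizer().fit_transform(docs)

n_top = len(top_pages)
sims = cosine_similarity(tfidf[n_top:], tfidf[:n_top]).mean(axis=1)  # average similarity to the reference super pages
for name, score in zip(candidates, sims):
    print(name, round(float(score), 3))
```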


The new rank of a page is the result of re-ranking the search engine results based on the similarities between super pages and on the similarities between current page contents. The user issues a query to the search engine and specifies the number of top pages, N, to be re-ranked. The system then compares their historical and current contents with those of W user-defined top-ranked pages, where W

Fk(j,S) = Min_{q∈S, q≠j} [ Fk-1(q, S−{q}) + c(task(j), p) ]   if Σ_{q∈S} wq + wj ≤ Tp
        = ∞                                                   otherwise
(k = 1, 2, …; S ⊆ Nj)   (2)

In Eq. (2), q and Fk(j,S) represent the processor number included in the previous allocation and the minimum allocation cost which covers S composed of k processors, respectively. Since the boundary condition represents the cost at which each processor handles task p without co-operation with other processors, it is defined by Eq. (3).

F1(j,-) = c(task(j), p)   (3)

However, we cannot obtain the optimal solution using Eqs. (2) and (3) only, because the final stage has not been decided yet. We therefore compute F2(j,S) for all (j,S) satisfying Σ_{q∈S} wq + wj ≤ Tp using F1(j,-). Then, F3(j,S) is computed using F2(j,S). By repeating


this procedure, we reach the case where Σ_{q∈S} wq + wj > Tp for all (j,S). Since Fk(j,S) is infinity for all (j,S) in such a case, we regard that case as the final stage and set L = k. This means that further allocation is not possible. Accordingly, the allocations obtained at the previous stages k = 1,2,..,L−1 are feasible solutions. 2.2.2 Finding Optimal Allocation At each stage k of the feasible allocations generation phase, Fk(j,S) represents the cost of the allocation composed of the same elements as {j} ∪ S. Thus, the sub-optimal solution at each stage k, g(k), is given by

g(k) = Min_{(j,S)} [ Fk(j,S) ],   k = 1,2,…,L−1   (4)

Since the global optimal solution G must be the minimum value among the g(k), it is given by

G = Min [ g(k) ],   k = 1,2,…,L−1   (5)

Finally, we find the optimal processor allocation corresponding to the global optimal solution. 2.2.3 Optimal Non-preemptive Task Scheduling Algorithm From the above problem formulation, the optimal task scheduling algorithm for a parallel and distributed system composed of tasks with non-preemptive processing times and rigid deadlines is described as follows.

ALGORITHM: Optimal non-preemptive task scheduling
1.  for k = 1 and for all j do
2.      F1(j,-) ← c(task(j), p)
3.  end for
4.  while Fk(j,S) ≠ ∞ for some (j,S) do
5.      if Σ_{q∈S} wq + wj ≤ Tp then
6.          Fk(j,S) ← Min_{q∈S, q≠j} [ Fk-1(q,S-{q}) + c(task(j), p) ]
7.      else Fk(j,S) ← ∞
8.      end if
9.  end while
10. L ← k
11. for all k such that k = 1,2,..,L-1 do
12.     g(k) ← Min [ Fk(j,S) ]
13. end for
14. for all k such that k = 1,2,..,L-1 do
15.     G ← Min [ g(k) ]
16. end for
17. find the optimal allocation of processors corresponding to G
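The dynamic program above can be sketched in Python as follows. Since the paper does not fix the cost function, c(task(j), p) is taken here to be the processor's available time w_j, and the instance data are invented, so this is only an illustration of the recursion, not the authors' implementation.

```python
def schedule(w, Tp):
    """Two-phase procedure of Sect. 2.2: phase 1 generates feasible allocations
    F_k(j,S) stage by stage; phase 2 returns the cheapest allocation found.
    The allocation cost c(task(j), p) is assumed here to be w[j]."""
    procs = range(len(w))
    cost = lambda j: w[j]
    best_cost, best_alloc = float("inf"), None
    stage = {(j, frozenset()): cost(j) for j in procs}     # stage k = 1, boundary condition (Eq. 3)
    while stage:                                           # stages k = 1, 2, ... until nothing remains feasible
        for (j, S), c in stage.items():                    # phase 2 bookkeeping: track the best allocation so far
            if c < best_cost:
                best_cost, best_alloc = c, S | {j}
        nxt = {}
        for (q, S), c_prev in stage.items():               # extend each allocation by one more processor (Eq. 2)
            used = S | {q}
            for j in procs:
                if j in used or sum(w[i] for i in used) + w[j] > Tp:
                    continue                               # deadline constraint violated: cost would be infinite
                key = (j, frozenset(used))
                nxt[key] = min(nxt.get(key, float("inf")), c_prev + cost(j))
        stage = nxt
    return best_cost, best_alloc

w = [3, 5, 2, 4]            # available times w_j on the processors (invented)
print(schedule(w, Tp=9))    # cheapest feasible allocation and its cost
```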


3 Performance Evaluation In order to evaluate the proposed algorithm, we carried out computational experiments on a workstation. We assumed that the available time on processor j (wj: j = 1,2,..,n) has an exponential distribution. Thus, n exponential random numbers are generated and used as the available times on the processors. As the number of processors (n), 30, 50, 70, 100 and 150 are used respectively. Three different required processing times for task p (Tp) are used according to the amount of load: heavy load (Σwj ≈ Tp), medium load (Σwj ≈ (1/2)Tp), and light load (Σwj >> Tp). Fig. 2 shows the mean execution time of our suggested algorithm for the three different cases (heavy load, medium load, and light load) to demonstrate the efficiency of the algorithm. The simulation code was written in C. For each case, ten problems were randomly generated and run on a SUN Ultra SPARC-II workstation. As shown in Fig. 2, the light load case has the least mean execution time for all values of n. Due to the rapid violation of constraints in the light load case, step 2 of the solution algorithm ends earlier for the light load case than for the other two cases. In addition, we can find that only two seconds are required to obtain the optimal processor allocation when the number of processors is as many as 150.

Fig. 2. Mean execution time (y-axis: mean execution time in seconds, 0.0–2.0; x-axis: the number of processors n = 30, 50, 70, 100, 150; series: light load, medium load, heavy load)

4 Conclusions In this paper, we present the task scheduling problem arising in parallel and distributed system. The considered system has the non-preemptive processing tasks and rigid deadlines for completion. In order to solve the problem, we have proposed the mathematical formulation and exact solution algorithm composed of two phases. In the first phase, the algorithm generates feasible allocations to satisfy the deadline by using dynamic programming, and then in the second phase it finds the optimal processor allocation. Simulation results show that the proposed algorithm has good efficiency in the execution time for the parallel and distributed system with a large number of processors. Future work includes task scheduling algorithm considering the preemptive processing time and priority.


References 1. Lee, H., Kim, J., and Lee, S.: Processor allocation and Task Scheduling of Matrix Chain Products on Parallel Systems. IEEE Transaction on Parallel and Distributed Systems. Vol. 14. No. 4. (2003) 394–337. 2. Yoo, B and Das., C: A Fast and Efficient Processor Allocation Scheme for Mesh-Connected Multicomputers. IEEE Transactions on Computers. (2002) 46-60. 3. Cambazard, H., Hladik, P., Deplanche, A., Jussien, N., and Trinquest, Y.: Decomposition and Learning for a Hard Real Time Task Allocation Problem. Lecture Notes in Computer Science. Vol. 3258. (2004) 153. 4. Franklin, A. and Joshi, V.: SimplePipe: A Simulation Tool for Task Allocation and Design of Processor Pipelines with Application to network Processors. 12th Annual International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems. (2004) 59-66. 5. Ruan, Y., Liu, G., Li, Q., and Jiang, T.: An Efficient Scheduling Algorithm for Dependent Tasks. The fourth International Conference on Computer and Information Technology. (2004) 456-461.

A Multi-agent Based Grid Service Discovery Framework Using Fuzzy Petri Net and Ontology* Zhengli Zhai1,2, Yang Yang1, and Zhimin Tian1 1 School

of Information Engineering, University of Science and Technology, Beijing, China [email protected] 2 College of Information, Linyi Normal University, China

Abstract. In the proposed Grid service discovery framework based on multiagent, agents are classified into three types: service-agent, request-agent and medium-agent. Two key issues in the framework are discussed: service description language and service matchmaking mechanism. A Fuzzy Petri nets-based service description language is proposed as a specification to publish or request for a service, Possibilistic transition is used to represent a service or request. Meanwhile, through ontology’s class hierarchy, we give a semantic-based service matchmaking that can find an appropriate service for a request. Possibility and necessity measures are used to quantify the confidence levels that the service can satisfy a request, that is, support partial matching.

1 Introduction With rapidly increasing demand for share and cooperation of distributed resource, Grid technology [1] has recently become an active research field. Resource discovery is to find resource satisfying user’s demand. OGSA [2] integrates Grid technology with Web service and considers all (including resource) are services. So service discovery is a sticking point in Grid research field. In this paper, we put forward a Grid service discovery framework based on multiagent. Using Fuzzy Petri Net and Ontology, we handle two key issues in the framework: service description language (SDL) and service matchmaking mechanism.

2 A Multi-agent Framework for Grid Service Discovery In our framework, agents are classified into three types: service-agent provides services such as resources, information, and particular domain-specific problem-solving capabilities; request-agent requests some specific services; medium-agents provide means to find relevant service-agents for request-agent. Request-agent communicates with corresponding service-agents and decides whether to subscribe to their services. The key of this framework is how to perform the service matchmaking embedded in the medium-agents. The process of service matchmaking mainly involves: *

This work is supported by National Natural Science Foundation of China (No.90412012).



(1) an SDL to provide a specification for publishing or requesting services, (2) a class hierarchy to represent the terms used in the descriptions of services, and (3) a matchmaking mechanism based on semantics to search for relevant services.

3 Fuzzy Petri Net-Based Service Description Language (FPN-SDL) 3.1 Possibilistic Reasoning We propose a representation for uncertain information and an inference mechanism based on the possibility and necessity measures [3] proposed by Dubois and Prade. To represent uncertain information, a possibilistic proposition is written as “r, (Nr, Πr)”, where r denotes a classical proposition, Nr denotes the lower bound of necessity measures, and Πr denotes the upper bound of possibility measures. “r, (Nr, Πr)” means that the degree of ease to say that r is false is at most equal to 1 − Nr and the degree of ease to say that r is true is at most equal to Πr. A general formulation of an inference rule including multiple antecedents is expressed as follows:

(r1 ∧ r2 ∧ … ∧ rn) → q,   ( N_{(r1∧r2∧…∧rn)→q}, Π_{(r1∧r2∧…∧rn)→q} )
r1,   ( N_{r1}, Π_{r1} )
⋮
rn,   ( N_{rn}, Π_{rn} )   (1)

We can infer Nq and Πq corresponding to q through an approach called probabilistic entailment [4] proposed by Nilsson. 3.2 Fuzzy Petri Nets (FPN)

A Petri net [5] is a directed, weighted, bipartite graph consisting of two kinds of nodes, called places and transitions. A typical interpretation of Petri nets is to view a place as a condition, a transition as the causal connectivity of conditions, and a token in a place as a fact to claim the truth of the condition associated with the place. However, the confidence about the connectivity of conditions and the facts could be uncertain. To take uncertain information into account, we define a Fuzzy Petri net as follows. Definition 1. A Fuzzy Petri net (FPN for short) is defined as a five-tuple:

FPN = (P, T, F, W, M0)   (2)

where P = {p1(r1), p2(r2), …, pm(rm)} is a finite set of places; place pi represents a classical proposition ri. T = {t1(N1, Π1), t2(N2, Π2), …, tn(Nn, Πn)} is a finite set of possibilistic transitions; tj represents the connectivity of places, Nj denotes the lower


bound of necessity measures, and Πj denotes the upper bound of possibility measures to represent the uncertainty about the connectivity. F ⊆ (P × T) ∪ (T × P) is a set of arcs. W: F → {1, 2, 3, …} is a weight function. M0 = {M(p1), M(p2), …, M(pm)} is the initial marking, where M(pi) is the number of possibilistic tokens (associated with necessity and possibility measures) in pi. Definition 2. Firing rule: Firing of an enabled possibilistic transition removes the possibilistic tokens from each of its input places, adds new possibilistic tokens to each of its output places, and the necessity and possibility measures attached to the new token will be computed according to the possibilistic reasoning. 3.3 Using FPN as a Basis for Service Description Language

We use Fuzzy Petri nets as a basis for a service description language that provides a specification to publish, request and match services in a Grid environment. A simple example demonstrates the use of FPN-SDL.

Example 1. Suppose that there is a mechanic working in Beijing who specializes in fixing Vans. Meanwhile, there is a man whose Truck is broken in Tianjin and who requests to have his Truck repaired. This is represented in Fig. 1: the service side (a) has input places IP1(r1), IP2(r2), transition t1(1,1) and output place OP1(q1); the request side (b) has input places IP3(r3), IP4(r4), transition t2(1,1) and output place OP2(q2), with r1: Van is broken, r2: the location is Beijing, q1: Van is functional, r3: Truck is broken, r4: the location is Tianjin, q2: Truck is functional.

Fig. 1. An example of FPN-SDL

3.4 Ontology for SDL

Ontology [6] defines a common vocabulary for users to share information in a domain. It includes definitions of basic concepts (classes) and relations among them. In practical terms, developing an ontology also includes arranging the concepts in a class hierarchy. In Example 1, two class hierarchies are needed to represent the relations of the classes related to Vehicle and World (see Fig. 2): Vehicle has the subclasses Bus (0.9), Car (0.8), Truck (0.8) and Van (0.7), and World has the subclasses American (0.9) and China (0.7), where China in turn has the subclasses Beijing (0.9), Tianjin (0.7) and Shanghai (0.6); the numbers are the edge similarity degrees introduced in Sect. 4.1.

Fig. 2. Class hierarchies: (a) A class hierarchy of vehicle (b) A class hierarchy of world



4 The Service Matchmaking Mechanism for FPN-SDL

4.1 Possibility and Necessity Measures Between Two Classes

To permit a partial match, we use degrees of possibility and necessity to represent the confidence level that a service-agent can provide the relevant service for a request-agent. The matchmaking mechanism quantifies the possibility and necessity of a match between two classes by computing a similarity between the two classes in a class hierarchy.

Definition 3. We assign each pair of adjacent superclass and subclass (c_1, c_2) a unique degree of similarity, denoted S(c_1, c_2), ranging from 0 to 1. For example, S(Vehicle, Van) is 0.7 in Fig. 2.

Definition 4. Let the most specific common superclass of two classes c_1 and c_2 be denoted by CS(c_1, c_2). We have:

  CS(c_1, c_2) = {g | su(g, c_1) ∧ su(g, c_2) ∧ (∀c ≠ g)(su(c, c_1) ∧ su(c, c_2) → su(c, g))}    (3)

where su(x, y) indicates that x is a superclass of y.

Definition 5. Let the similarity between two arbitrary classes c_1 and c_2 be denoted by S(c_1, c_2). We have:
1. S(c_1, c_2) = 1 if c_1 = c_2.
2. S(c_1, c_2) = 0 if CS(c_1, c_2) does not exist.
3. S(c_1, c_2) = S(c_1, CS(c_1, c_2)) × S(c_2, CS(c_1, c_2)).
For example, S(Van, Truck) = 0.7 × 0.8 = 0.56 in Fig. 2. Inspired by the concepts of degrees of consistence and implication proposed by Ruspini [7], we define degrees of consistence and implication between two classes to quantify the confidence level that a service can fulfill a request.

Definition 6. Let the degree of consistence of class c_1 implying class c_2 be denoted by C(c_1 → c_2). We have:
1. C(c_1 → c_2) = 1 if CS(c_1, c_2) exists.
2. C(c_1 → c_2) = 0 if CS(c_1, c_2) does not exist.
In fact, the degree of consistence between two classes can be considered as the possibility measure. Therefore, Π(c_1 → c_2) = C(c_1 → c_2).

Definition 7. Let the degree of implication of class c_1 implying class c_2 be denoted by I(c_1 → c_2). We have:
1. I(c_1 → c_2) = 0 if c_1 and c_2 are located in two different class hierarchies.
2. I(c_1 → c_2) = 1 if CS(c_1, c_2) = c_2.
3. I(c_1 → c_2) = S(c_1, c_2) if CS(c_1, c_2) ≠ c_2 and c_1, c_2 are in one class hierarchy.
In fact, the degree of implication between two classes can be viewed as the necessity measure. Therefore, N(c_1 → c_2) = I(c_1 → c_2).
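To make Definitions 3-7 concrete, the following is a minimal illustrative Python sketch (not part of the paper) that encodes the two class hierarchies of Fig. 2 with their edge similarities and computes S, the consistence degree C and the implication degree I. The assumption that S accumulates multiplicatively along a path is inferred from the worked examples (e.g. S(Van, Truck) = 0.7 × 0.8).

# Illustrative sketch of Definitions 3-7: each class maps to (parent, edge
# similarity); the values follow the Fig. 2 hierarchies.
PARENT = {
    "Bus": ("Vehicle", 0.9), "Car": ("Vehicle", 0.8),
    "Truck": ("Vehicle", 0.8), "Van": ("Vehicle", 0.7),
    "American": ("World", 0.9), "China": ("World", 0.7),
    "Beijing": ("China", 0.9), "Tianjin": ("China", 0.7), "Shanghai": ("China", 0.6),
}

def ancestors(c):
    """Map c and every superclass of c to the product of edge similarities
    along the path from c (assumed multiplicative accumulation)."""
    result, sim = {c: 1.0}, 1.0
    while c in PARENT:
        c, edge = PARENT[c]
        sim *= edge
        result[c] = sim
    return result

def cs(c1, c2):
    """CS(c1, c2): the most specific common superclass, or None (Definition 4)."""
    a2 = ancestors(c2)
    c = c1
    while True:
        if c in a2:
            return c
        if c not in PARENT:
            return None
        c = PARENT[c][0]

def similarity(c1, c2):
    """S(c1, c2) following Definition 5."""
    if c1 == c2:
        return 1.0
    g = cs(c1, c2)
    return 0.0 if g is None else ancestors(c1)[g] * ancestors(c2)[g]

def possibility(c1, c2):
    """C(c1 -> c2) of Definition 6, used as the possibility measure."""
    return 1.0 if cs(c1, c2) is not None else 0.0

def necessity(c1, c2):
    """I(c1 -> c2) of Definition 7, used as the necessity measure."""
    if cs(c1, c2) is None:
        return 0.0
    return 1.0 if cs(c1, c2) == c2 else similarity(c1, c2)

print(round(similarity("Van", "Truck"), 2), round(necessity("Tianjin", "Beijing"), 2))  # 0.56 0.63

These two printed values are exactly the degrees used for mechanic A in Example 2 below.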



4.2 Matching Algorithm

Consider a request having m input places. A service having n input places (n ≤ m) is chosen to match the request by the following algorithm (see Fig. 3(a)); Fig. 3(b) instantiates the matching process for Example 2, where transitions t4 and t5 carry the degrees (0.56, 1) and (0.63, 1).

Fig. 3. Matching process and an example

Algorithm 1:

1. For the classes c_q1 and c_q2 mentioned in the output places of the service and the request, find their locations in the class hierarchies. If c_q1 and c_q2 are located in two different class hierarchies, select another service to match the request and go back to Step 1. Otherwise, compute the degrees of possibility and necessity that the output place of the service can infer the output place of the request, that is, Π = C(c_q2 → c_q1) and N = I(c_q2 → c_q1); this is the confidence level that the service can fulfill the request when all preconditions of providing the service hold.
2. Find the locations in the class hierarchies of the classes c_rk and c_rl mentioned in the input places of the service IP_k(r_k) and the request IP_l(r_l). If c_rk and c_rl are in the same class hierarchy, compute Π(c_rl → c_rk) = C(c_rl → c_rk) and N(c_rl → c_rk) = I(c_rl → c_rk), where k = 1, 2, …, n and l = n + 1, …, n + m; this is the confidence level about satisfying the preconditions. If any pair (c_rk, c_rl) does not share a class hierarchy, select another service to match the request and go back to Step 1.
3. Insert certain tokens, i.e. (N, Π) = (1, 1), into all input places of the request to represent the current preconditions, and then perform possibilistic reasoning through transitions t_4 … t_{n+3}, to which the degrees of possibility and necessity derived in Step 2 are attached.
4. Perform possibilistic reasoning through transition t_1. The derived degrees of possibility and necessity represent our confidence level that the service can be performed under partially matched preconditions. Then carry out possibilistic reasoning through transition t_3, to which the degrees of possibility and necessity derived in Step 1 are attached. The finally derived degrees of possibility and necessity give our confidence level that the service can fulfill the request.
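To illustrate how the per-place degrees produced by Steps 1-2 are combined in Steps 3-4, here is a small sketch. The min-combination of necessity and possibility degrees is an assumption on our part (it happens to reproduce the numbers of Example 2), not a formula stated explicitly in the paper.

# Illustrative sketch of Algorithm 1, steps 3-4: combine the per-place degrees
# computed in steps 1-2 into one (N, Pi) confidence that a service fulfils a
# request.  The min-combination is an assumption consistent with Example 2.

def combine(pairs):
    """Possibilistic conjunction of (necessity, possibility) pairs via min."""
    return (min(p[0] for p in pairs), min(p[1] for p in pairs))

def match_service(output_degree, input_degrees):
    """output_degree: (N, Pi) for the service output implying the request output.
    input_degrees: list of (N, Pi), one per matched pair of input places."""
    preconditions = combine(input_degrees)           # step 3: transitions t4..t(n+3)
    return combine([preconditions, output_degree])   # step 4: transitions t1 and t3

# Mechanic A of Example 2: output (0.56, 1), inputs (0.56, 1) and (0.63, 1)
print(match_service((0.56, 1.0), [(0.56, 1.0), (0.63, 1.0)]))   # -> (0.56, 1.0)
# Mechanic B: output (1, 1), location precondition (0.42, 1)    -> (0.42, 1.0)
# Mechanic C: output (0.64, 1), location precondition (1, 1)    -> (0.64, 1.0)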



Example 2. Let us consider a more complex situation based on Example 1: suppose there are three possible services for the request. Besides the previously mentioned mechanic (say mechanic A), a mechanic B works in Shanghai and specializes in fixing Trucks, and a mechanic C works in Tianjin and can repair Cars. In the case of mechanic A, the matchmaking process is shown in Fig. 3(b). First, the degree of necessity and possibility for (OP1(q1) → OP2(q2)) is calculated as (0.56, 1) because S(Truck, Van) = 0.56. Then, the degrees of necessity and possibility for (IP3(r3) → IP1(r1)) and (IP4(r4) → IP2(r2)) are computed as (0.56, 1) and (0.63, 1), respectively, which denote the confidence level about satisfying the preconditions for providing the service. Finally, the possibilistic transitions t4, t5, t1 and t3 are triggered sequentially, and we thus obtain that the degree of necessity and possibility of serving the request by mechanic A is (0.56, 1). Similarly, in the case of mechanic B, the degree of necessity and possibility of repairing the broken Truck is (0.42, 1). In the case of mechanic C, the degree of necessity and possibility of fulfilling the request is (0.64, 1). Finally, mechanic C will be chosen because it has the highest degree of necessity to accomplish this request.

5 Conclusion

In this paper, we propose a multi-agent framework for Grid service discovery that integrates Fuzzy Petri nets and ontologies. A Fuzzy Petri net-based service description language (FPN-SDL) is proposed as a specification to publish or request a service. We also give a semantics-based service matchmaking mechanism integrated with class hierarchies; it permits relaxed matching through possibility and necessity measures. Our future research will consider service composition to collaboratively fulfill a request.

References
1. I. Foster and C. Kesselman, The Grid 2: Blueprint for a New Computing Infrastructure. Morgan Kaufmann Publishers, 2004.
2. I. Foster et al. Grid services for distributed system integration. IEEE Computer, 2002, 35(6): 37-46.
3. D. Dubois, H. Prade. The three semantics of fuzzy sets. Fuzzy Sets and Systems, 1997, 90(2): 141-150.
4. N.J. Nilsson, Probabilistic logic. Artificial Intelligence, 1986, 28(1): 71-87.
5. T. Murata, Petri nets: properties, analysis and applications. Proc. of the IEEE, 1989, 77(4): 541-580.
6. M.N. Huhns, M.P. Singh, Agents on the Web: Ontologies for Agents. IEEE Internet Computing, 1997, 1(6): 81-83.
7. E.H. Ruspini, On the semantics of fuzzy logic. Int. J. of Approximate Reasoning, 1991, 5: 45-88.

Modeling Identity Management Architecture Within a Social Setting*

Lin Liu 1,3 and Eric Yu 2

1 School of Software, Tsinghua University, Beijing, China 100084
2 Faculty of Information Studies, University of Toronto, Toronto, Canada, M5S 3G6
3 Shanghai Laboratory of Intelligent Information Processing, Fudan, Shanghai 200433
[email protected], [email protected]

Abstract. Managing digital identity is about intelligently using identity management solutions to achieve business goals. There is a pressing need for frameworks and models to support the analysis and design of complex social relationships with regard to identity, in order to ensure the effective use of existing protection technologies and control mechanisms. This paper explores how the GRL/i* approach can be used in the design of more integrated and comprehensive identity management solutions for organizations. Using this modeling approach, we may explicitly relate the different strategic rationales of stakeholders to the alternative design choices in the context of identity management. We argue that different actor configurations and vendor-based identity management components may be evaluated and selectively combined to achieve the best level of requirements satisfaction based on strategic analysis results. Keywords: Identity Management, Social Modeling, architecture and component analysis.

1 Introduction

Identity and access management will play an extremely critical role in the advancement of e-business as a primary mode of operation. As a result, this technology will transform the way organizations deploy security. This transformation could revolutionize business information systems, allowing organizations to use digital identities to contribute real value to their business. The objective of identity management is to protect valuable stakeholder information resources. The effectiveness of a solution is measured in terms of how well the assets of the stakeholders are protected, while their mission and business objectives are met. The decision-making on identity design involves hard choices concerning resource allocation, competing objectives and organization strategy. When selecting controls, many factors need to be considered, including the protection policy of the organization and the legislation and regulations that govern the enterprise, along with

* Financial support by the National Key Research and Development Plan (973, no. 2004CB719401, no. 2002CB312004) and the National High Technology Research and Development Plan (863, no. 2004AA413120) is gratefully acknowledged.




safety, reliability, and quality requirements. Yet, there is no systematic analysis technique through which one can go from answers to these questions to particular security and privacy solutions. This paper is a follow-up to [3] on intentional modeling to support identity management, where we proposed a methodological framework for identity management. In this paper, we further elaborate on how to use this modeling approach to analyze the strategic rationales of business players/stakeholders and their influence on the design decision making of system architecture in the setting of identity management. This modeling approach can help understand the major differences between existing vendor-based solutions, such as the .NET Passport [4] and the Liberty Alliance ID management specification [2]. With the support of this modeling approach, identity solution providers will be able to provide customizable solutions to different users. Organizations that want to deploy identity management in their business settings will be able to form an integrated solution that addresses competing high-level concerns with maximum optimization.

2 Strategic Actor Modeling on Identity Management Requirements

In this section, we use the modeling constructs of the strategic actor modeling framework i* to build models explaining the identity problem at an intentional level. The i* framework uses concepts such as actor, role, agent and dependency to represent the social context of a system explicitly, so that we may understand the fundamental needs and the alternative ways to serve them better. Managing digital identities is about intelligently using identities to achieve business goals, such as increasing customer satisfaction, higher security, and reducing costs. In conducting business online, an organization can only use identities that are trusted. In i*, an intentional entity is defined as an actor, who seeks opportunities to fulfill his strategic interests and to mitigate vulnerabilities and threats to such interests. For instance, we may represent a business organization as an actor whose high-level business requirements may be defined as softgoals, such as profitability, customer satisfaction, cost efficiency, security, etc. Such requirements are "soft" in nature as they rely on stakeholders' subjective judgments. Lower-level, concrete softgoals may have a positive or a negative contribution to a higher-level softgoal; this influence can be sufficient or partial. We may also use tasks to represent concrete courses of action, or to operationalize the softgoals. In the i* framework, actors are further distinguished into abstract roles and concrete agents. We usually use roles to represent conceptual functional units, while using agents to represent entities with physical existence, either human or machinery. Figure 1 illustrates the identity management requirements from another perspective, a strategic dependency perspective. Instead of studying the internal rationales of actors, a Strategic Dependency view focuses on their bilateral social dependency relationships. Reading the model in Figure 1 from the right-hand side, a general identity management scenario is initiated by a Principal agent's request to a Policy Enforcer to grant him access (to a service, resource, or application). In return, the Policy Enforcer will ask him to Provide Proof of Authorization. To satisfy this requirement, the



Principal has to seek the help of an Authorization Authority, which is qualified to issue Authorization Assertions. However, the Authorization Authority will in turn ask the Principal to provide a certain Proof of Authentication, which has to be obtained from an Authentication Authority. Further to the left, the Principal has to present a Valid Credential and Attribute to the authorities accordingly, in order to fulfill all required procedures. Although the authority roles are usually combined one way or another in reality, we separate them here for the purpose of clearly acknowledging each unique functional unit. In a Strategic Dependency model, actors form social dependency networks to further their strategic interests, but at the same time this also brings vulnerabilities and threats. By analyzing the strength and power of the depended-upon parties, we may reason qualitatively about their opportunities and vulnerabilities. For example, a vulnerable dependency may be propagated upstream along the dependency links to impact other, indirectly connected actors and their goals. Comparing this model with the diagram in Figure 1, we notice that although the two models have common components, the model below has advantages in many respects: it has richer ontological concepts that explicitly distinguish entities of different natures, and its strategic and intentional perspective makes it a natural choice for representing causal relationships. When combined with an SR model, it can give a complete picture of an intentional entity's opportunities and vulnerabilities.

Fig. 1. A Strategic Dependency Model of general identity management functions
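To make the idea of upstream propagation of a vulnerable dependency concrete, the following is a small illustrative sketch, not part of the i* toolset. The dependency triples are read off the description of Figure 1 above; the naive reachability walk is our own simplification of the qualitative reasoning the paper describes.

from collections import defaultdict, deque

# (depender, dependum, dependee) triples read off Fig. 1 -- illustrative only.
DEPENDENCIES = [
    ("Principal", "Access Granted", "Policy Enforcer"),
    ("Policy Enforcer", "Proof of Authorization", "Principal"),
    ("Principal", "Authorization Assertion", "Authorization Authority"),
    ("Authorization Authority", "Proof of Authentication", "Principal"),
    ("Principal", "Authentication Assertion", "Authentication Authority"),
    ("Authentication Authority", "Valid Credential", "Principal"),
]

def affected_actors(vulnerable_dependee):
    """Every actor that (transitively) depends on the vulnerable actor is
    potentially impacted when that dependency fails."""
    dependers_of = defaultdict(set)
    for depender, _, dependee in DEPENDENCIES:
        dependers_of[dependee].add(depender)
    seen, queue = set(), deque([vulnerable_dependee])
    while queue:
        for depender in dependers_of[queue.popleft()]:
            if depender not in seen:
                seen.add(depender)
                queue.append(depender)
    return seen - {vulnerable_dependee}

# If the Authentication Authority is compromised, the Principal and, through
# it, the Authorization Authority and Policy Enforcer are indirectly exposed.
print(affected_actors("Authentication Authority"))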

3 Macro Architectures Based on Different Actor Configurations

Having identified the general business requirements and the basic functional units needed for identity management, we may continue to explore the alternative architectural design options. There are currently many possible ways of macro-architecting the identity management service in an online business organization. The scenario given above shows that we may manage identity within the organization as-is. In such an organization, identity management treatments are usually duplicated, and each individual owns and manages multiple disjoint identities simultaneously. Cost



efficiencies are severely impacted by password resetting, multiple login/logout sessions, manual interventions to integrate individual services, etc. Due to this unsatisfying state of the practice, we are motivated to explore other possible options in the following subsections.

3.1 Outsourcing ID Service: The .NET Passport Paradigm

Outsourcing the ID service means that the organization delegates the task of managing identity to an outside actor, who has the specialty and would like to provide the required ID service for the organization. The Microsoft .NET Passport [4] is a typical example of such a model. It is composed of a suite of services for signing in users across a number of applications. It solves the authentication problem for users by allowing them to create a single set of credentials that will enable them to sign in to any site that supports the Passport service (referred to as a Passport member web site, or participating site). The objective of the Passport single sign-in service is to help increase customer satisfaction by allowing easy access without the frustration of repetitive registrations and forgotten passwords.

Fig. 2. Strategic Dependency Modeling of .Net Passport paradigm

Continuing the analysis of the earlier sections, the model in Figure 2 shows that Passport.com PLAYS the role of Identity Service Provider, who takes on the responsibility of Authentication, Identification and Attribute Authority. Meanwhile, a Passport Member Web Site PLAYS the role of Service Provider, who inherits the responsibility of Authorization Authority and Policy Enforcer. Agent Jim is a Service User, who has the objectives and obligations of a common service user. As we can see, Passport.com is the only Identity Service Provider within this context. This implies that it is the unique center of trust, that it is in the position of power in collecting the ID Service Fee from Passport Member Web Sites, and that it is the party on whom the user has to count for Privacy. While this architecture improves simplicity of usage, it brings vulnerability to the security of the user's personal information. It satisfies the business needs of Passport.com very well, but at the expense of risking user privacy. Although



Passport.com may move to new protection technologies such as Kerberos tickets, these can only mitigate certain kinds of attacks, such as interception, but not those brought by the business model itself.

3.2 Form Federations: The Liberty Identity Paradigm

The members of the Liberty Alliance envision a networked world across which individuals and businesses can engage in virtually any transaction without compromising the privacy and security of vital identity information. To accomplish this vision, the Liberty Alliance establishes open technical specifications [2] to support a broad range of network identity-based interactions. The basic functional requirements of the Liberty architecture and protocols include the following: identity federation, authentication, use of pseudonyms, support for anonymity, and global logout. The strategic dependency model shown in Figure 3 follows ideas similar to the one on .NET Passport in Figure 2. The major difference in the Liberty Alliance ID standard is that companies (e.g. Company1.com, Company2.com) implementing Liberty Alliance Identity Management play the Liberty Identity Provider (LIP) role and the Liberty Service Provider (LSP) role simultaneously. Usually, when one is playing LSP, the other acts as a LIP. The two agents need to have mutual Trust, and the Identity of a user Jim needs to Be Federated beforehand.

Fig. 3. Strategic Dependency Modeling of Liberty Alliance ID framework

The distributive nature of the model shows that there can be several circles of trust in the Liberty standard mode. It implies that power and duty regarding identity management are distributed evenly among web service providers. By asking for the consent of the user when doing identity federation, users get back partial control over their Privacy. While this architecture has great potential for improving the state of the practice, as an open standard it does not have strong enforcement of the matter.



A service provider who deploys Liberty-enabled products can choose among alternative ways of doing federation, redirection, and authentication. The relationship between providers is one of mutual trust and identity federation, rather than a constant identity-service user-to-provider relationship. Based on the analysis procedure in the previous subsections, stakeholders of a given organization will be able to identify the macro-architecture that best matches the organization's requirements and preferences. If an organization decides to play a more specialized role in an identity management business model, a natural but difficult next step is the evaluation, selection, and acquisition of the multiple, interdependent software components needed to meet the complex system requirements and implementation constraints. While picking instances for candidate architectures, we have to follow certain general heuristic rules, as well as some architecture-specific rules, such as those of Liberty. Example general rules are:
• Components from the same vendor are conflict free, but can be more expensive.
• Components with more functional units are more expensive.
• Combined use of components from different vendors has to be confirmed by a past case.
• All functional points required by the candidate architecture have to be covered.
Bearing these general rules and high-level business objectives such as low cost and best security in mind, we will be able to identify a group of COTS components that can be combined to form a balanced solution that trades off multiple concerns from different stakeholders. Some example input knowledge for this process is given in Appendix B. An elaborated COTS component selection approach based on i* can be found in [1]. The i* approach is complementary to existing frameworks and techniques for identity management. Our approach supports the exploration and management of structural alternatives, based on a balanced consideration of all competing requirements, thus complementing the various point solutions of recent identity management techniques. Our approach is distinctive in its treatment of agents/actors as being strategic, and is thus readily adaptable to the identity management analysis domain illustrated in this paper. While this paper has given a preliminary exploration of how the modeling process goes, much remains to be done. A fundamental ontology of the reusable design knowledge about identity management solutions needs to be further elicited and codified. Another direction worth pursuing is to provide design decision support to implementers of open standards, such as SAML and Liberty. These are topics of ongoing research.
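As one possible mechanical reading of the general rules listed above, the following sketch filters hypothetical COTS component combinations for coverage and vendor compatibility and picks the cheapest admissible one. All component names, prices, coverage data and the scoring itself are illustrative assumptions, not material from the paper or from any vendor catalogue.

from itertools import combinations

# Hypothetical catalogue: (name, vendor, covered functional points, cost).
CATALOGUE = [
    ("AuthServer",  "VendorA", {"authentication"},                 30),
    ("IDFederator", "VendorA", {"identity federation"},            45),
    ("AccessGate",  "VendorB", {"authorization", "enforcement"},   25),
    ("FullSuite",   "VendorC", {"authentication", "authorization",
                                "identity federation", "enforcement"}, 120),
]
REQUIRED = {"authentication", "authorization", "identity federation", "enforcement"}
CONFIRMED_CROSS_VENDOR = {frozenset({"VendorA", "VendorB"})}   # known past case

def admissible(combo):
    """General rules: all required functional points covered, and cross-vendor
    mixes allowed only when a past case confirms them."""
    covered = set().union(*(c[2] for c in combo))
    vendors = {c[1] for c in combo}
    if not REQUIRED <= covered:
        return False
    return len(vendors) == 1 or frozenset(vendors) in CONFIRMED_CROSS_VENDOR

def cheapest_solution():
    candidates = [combo for r in range(1, len(CATALOGUE) + 1)
                  for combo in combinations(CATALOGUE, r) if admissible(combo)]
    return min(candidates, key=lambda combo: sum(c[3] for c in combo), default=None)

print([c[0] for c in cheapest_solution()])   # e.g. the VendorA+VendorB bundle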

References
[1] Franch X. & Maiden N.A.M., Modelling Component Dependencies to Inform their Selection, in Proceedings 2nd International Conference on COTS-Based Software Systems (ICCBSS'03), Ottawa, Canada, February 10-13, 2003, LNCS 2580, Springer-Verlag: pp. 81-89.
[2] Identity Systems and Liberty Specification Version 1.1 Interoperability. Available at: http://www.projectliberty.org/resources/whitepapers.
[3] Liu, L., Yu, E. Intentional Modeling to Support Identity Management. In: P. Atzeni et al. (Eds.): Proceedings of the 23rd International Conference on Conceptual Modeling (ER 2004), LNCS 3288, pp. 555-566. Springer-Verlag Berlin Heidelberg 2004.
[4] Microsoft .Net Passport Review Guide. Available at: http://www.passport.com/.

Ontological Engineering in Data Warehousing

Longbing Cao, Jiarui Ni, and Dan Luo

Faculty of Information Technology, University of Technology Sydney, Australia
{lbcao, jiarui, dluo}@it.uts.edu.au

Abstract. In our previous work, we proposed the ontology-based integration of data warehousing to make existing data warehouse systems more user-friendly, adaptive and automatic. This paper further outlines a high-level picture of ontological engineering in data warehousing. Its basic theory includes building ontology profiles for warehousing in terms of domain ontology and problem-solving ontology, analyzing the ontological commitment and semantic relationships of ontological items, and aggregating, transforming, mapping, querying and discovering semantic relationships and/or ontological items across multiple relevant ontological domains. We introduce and illustrate these very briefly in terms of a web-based electronic institution, F-Trade.

1 Introduction

Existing data warehousing is based on a two-step process. First, data dispersed in the relevant enterprise information systems (EIS) is extracted, transformed and loaded into a data warehouse (DW) server according to predefined algorithms and data models. Then analytical engines for predefined, ad hoc, OLAP or data mining analyses are linked to the DW to present reporting results based on the existing models. This process makes the current DW inflexible, hard to use and heavily human-dependent. In our previous work [1-6], we have addressed a series of issues and proposed ontology-based approaches to more flexible integration of EIS, more adaptive data modeling and easier personalization in a business-oriented way. The basic theory of the ontological engineering of data warehousing is as follows. We first build the ontology profile for the DW in terms of a specific problem domain. Second, we analyze the ontological commitment and semantic relationships among ontological items. Third, certain specifications are developed to depict and represent the ontology profile. Fourth, a proper architecture must be built to integrate the multiple domains in warehousing. Finally, semantic rules are developed to aggregate, transform, map and query the multi-domain ontological items in the ontological DW.

2 Ontology Profile in Warehousing

A fundamental step in ontology-based warehousing is to build a comprehensive ontology profile which transforms a domain problem and its problem-solving (DW)



system to domain ontology (DO) and problem-solving ontology (PSO). Figure 1 shows the structure of such an ontological profile, where the PSO is further divided into the categories of task, method, business logic (BL) and resource ontology. As the first step, the domain ontology (DO) extracts the essence of the conceptual items in a problem domain. DO consists of reusable vocabularies of concepts and relationships within a domain, activities taking place in that domain, and theories and elementary principles governing that domain. Taking capital markets as an instance, Figure 2 illustrates an excerpt of the relevant concepts and their relations in stock market microstructure. In the stock market, a Financial Order is categorized as a Limit Order, Market Order or Stop Order, while every trade consists of attributes such as Price, Dealer, Date, Time, Volume and so forth. Due to the complexity of data warehousing and customer feedback in the real world, we emphasize the involvement of business requirements and domain knowledge. Therefore, we highlight the relevant concepts and their relationships associated with solving the domain problem in terms of business-oriented task, method, business logic and resource ontologies. Taking the financial trading and mining analysis system F-Trade [7] as an instance, Figure 3 illustrates the main high-level tasks the system must conduct to support its designed goals. The whole system is decomposed into five main tasks (corresponding to five Centers): system administration, algorithm plug-in and evaluation, service providing, system control and evolution, and user interaction and management. Every task may be divided into subtasks. Correspondingly, ontological items will be specified for each of them.

Fig. 1. Ontology profile in the warehousing

Fig. 2. Domain ontology in capital market

Fig. 3. Task ontology



A set of methods may be called to fulfill a specific task. Method ontology describes the ontological attributes related to a method. For instance, the task Algorithms Logon Configuration can be handled via man-machine interaction or via interface agents. Every method is relevant to one or many BLs. A BL is a functional and computational unit in a problem-solving system. It may be divided into smaller units on demand. It involves system architecture and design patterns, workflow and processes. BL ontology consists of the ontological items used for the management and execution of the above aspects in the problem-solving system. For instance, in F-Trade, each BL unit is decomposed into four classes, action, model, controller and view, in terms of the MVC design pattern, and these interact via business logic relationships. For example, the activity of executing an algorithm is decomposed into the four classes AlgoExeAction, AlgoExeModel, AlgoExeController and AlgoExeView (shown in Figure 4), linked via business logic interactions.

Fig. 4. Business logic ontology

Fig. 5. Resource ontology

Resources in a DW system include: (i) domain database storing business data, (ii) knowledge base storing all algorithms, data models and rules, and (iii) system base keeping all configuration and management information of the system, and so forth. Resource ontology captures all objects and relations in all these resources. Here, we illustrate some resource ontology related to the task of algorithm management in the knowledge base of the F-Trade. As shown in Figure 5, algorithms consist of trading strategies and data mining algorithms. With regard to data mining, algorithms can be categorized as classification & model construction, stream data mining, association & frequent pattern, cluster & outlier detection, multidimensional & OLAP analysis, and text & web mining according to some classification criteria. Following this framework, we can go deep into lower levels to define ontological items if applicable.

3 Ontological Commitment and Semantic Relationships

In building the mapping from a natural concept to its ontological item, or between two ontological domains, an essential task is to deal with the synonymous and multivocal phenomena widely seen in the business world. This is handled by the ontological commitment. An ontological commitment defines an agreement to use a shared ontology



library in a coherent and consistent way. It permits ontologists to share and instantiate ontological items with committed freedom.

DEFINITION 1. An ontological commitment is a five-element tuple OC = (C, O, R, P, S), where:
• C = {c_i | i ∈ I} is a set of domain-specific concepts in a given domain; these may be business key words/phrases specified by end users according to their preferences.
• O = {o_j | j ∈ J} is a set of candidate ontologies in the domain or problem-solving ontological base; o_j is relevant or mapped to a given c_i.
• R = {r_i | i ∈ I} is a set of semantic relationships between c_i and o_j, or between items across domains.
• P = {p_i | i ∈ I} is a set of cardinality properties specifying how c_i is associated with o_j.
• S = {s_i | i ∈ I} optionally measures the similarity between c_i and o_j.

In the above definition, I and J are two separate sets of positive integers. In addition, C and O can also refer to two different ontological domains.

DEFINITION 2. An atom item of an ontological commitment can be defined as follows.

<atom item> ::= (<relationship> (<concept>, <ontology item>) : <cardinality> [, similarity])*
<relationship> ::= (Instantiation | Aggregation | Generalization | Substitution | Disjoin | Overlap | Association)
<concept> ::= ({<name> | name ∈ C})
<ontology item> ::= ({<name> | name ∈ O})
<cardinality> ::= (SingleUnrestricted | SingleRestricted | MultipleUnrestricted | MultipleRestricted)
<similarity> ::= ({s | s ∈ [0, 1]})
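The five-element tuple and the atom-item grammar can be captured in a small data structure. The sketch below is illustrative only; the field names are ours, and the example entry uses the Closing_Price/Close_Price vocabulary from Section 4 with a hypothetical similarity value.

from dataclasses import dataclass
from typing import Optional

RELATIONSHIPS = {"Instantiation", "Aggregation", "Generalization",
                 "Substitution", "Disjoin", "Overlap", "Association"}
CARDINALITIES = {"SingleUnrestricted", "SingleRestricted",
                 "MultipleUnrestricted", "MultipleRestricted"}

@dataclass
class CommitmentAtom:
    """One atom item of an ontological commitment (Definitions 1 and 2):
    a user concept c in C mapped to an ontological item o in O through a
    semantic relationship r, with a cardinality property p and an optional
    similarity s in [0, 1]."""
    concept: str
    ontology_item: str
    relationship: str
    cardinality: str
    similarity: Optional[float] = None

    def __post_init__(self):
        assert self.relationship in RELATIONSHIPS
        assert self.cardinality in CARDINALITIES
        assert self.similarity is None or 0.0 <= self.similarity <= 1.0

# Hypothetical example: the business phrase "Close Price" committed to the
# domain-ontology item "Closing_Price" as a Substitution.
atom = CommitmentAtom("Close Price", "Closing_Price", "Substitution",
                      "SingleUnrestricted", similarity=0.9)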

To manage the above-mentioned mapping, we need to build corresponding mechanisms for the identification and aggregation of semantic relationships between ontological items across domains in the warehousing. Varying combinatory scenarios of natural terminologies and their notations furthermore result in different semantic relationships among ontological items. In [2,6], we identify seven types of semantic relationships: Aggregation, Association, Disjointness, Generalization, Instantiation, Overlap, and Substitution. Due to space limitations, please refer to [2,6] for details.

4 Ontological Representation

In Section 2, ontological items are visually presented as trees of terms and their relationships. On the other hand, formal or semi-formal specifications can present the ontological structure more precisely. In our work, we define an ontological grammar in description logics (DL), since DL supports intentional concepts and properties of concepts and allows the construction of composite descriptions.

/*DL-based grammar for presenting ontology*/
::= | ( +) | ( ) | ( ( )) | ( ( + | +) ( + | +))
::= (AND | OR)
::= (AT-MOST | AT-LEAST)
::= (SU | SR | MR | MU)
::= (* | + | ? | "|")

Furthermore, we develop specifications for representing domain ontologies and problem-solving ontologies. With respect to domain ontologies, for instance, the domain ontology Closing Price can be informally expressed as:

;; Definition of Closing Price in DO
(Domain Closing_Price LI)
(substitute_to Closing_Price (Close_Price Daily_Price SI))

It can be formally expressed in terms of the DL as:

Closing_Price ::= Closing_Price (MU Close_Price Daily_Price) (AT-MOST 1 Stock_Code) (OO Float ?)

This means that there is at most one value of the closing price for a stock in a day; if a value exists, it is a float. PSO consists of ontologies for tasks, methods, business logics and resources. The relationships among them are as follows. A task is fulfilled by some methods. A method is instantiated into some business logics and supported by some resources. In most conditions, a task is divided into multiple sub-tasks; these sub-tasks are satisfied by some alternative methods. Correspondingly, these alternative methods are implemented by alternative business logics and relevant resources. The PSO diagram shown in Figure 6 is developed to present these items and their relationships. This diagram should be instantiated to include the sets of PSO items from tasks, methods, BLs and resources in a real example. To present these types of PSO items, the following specification is built based on first-order logics:

/*CCD entry for presenting a part of PSO item*/
::= ( | )*
::= (, [, cardinality] : , )*
::= (task | method | business logic | resource)
::= (, [,cardinality] : , )*

Furthermore, a PSO item can be presented as:

/*CCD entry for a complete PSO item*/
::= ( [.method][.business logic][.resource] | )*

Fig. 6. PSO structure

For instance, the following illustrates that a task Register_Algorithm calls the BL Type_AlgoInputs to fill in Algo_Name as MovingAverage via the method RegisterAgent.

;; Definition of a PSO item
(Task Register_Algorithm SR)
(Method RegisterAgent SU)
(BL Type_AlgoInputs MU)
(BL Algo_Name (MovingAverage))
(Resource (AlgorithmBase SystemBase))



5 Ontological Aggregation, Transformation, Mapping and Query

After defining and building the semantic relationships and the representation of ontological items, relevant mechanisms must be developed for ontological aggregation, transformation, mapping and queries, either intra- or inter-domain [2,3,5,6]. A key step is to build a proper ontological architecture first. An ontological architecture indicates the relationship, organization and management of the mapping and integration of ontologies across domains. Please refer to [2,3,6] for more information about our proposed ontological architecture for the DW. Furthermore, a fundamental issue is how to aggregate, transform, map and query ontological items in the DW ontological domains. To this end, semantic rules [5] are an effective approach. Thus, we need to develop semantic rules for the aggregation of ontological items and semantic relationships, and rules for ontological transformation and mapping. These rules define what the resulting logical output should be for a given input logical combination with some semantic relationship inside. [5,6] present more information about these issues. For instance, the following rule aggregates the semantic relationship part_of.

RULE 3. ∀ (A AND B), ∃ B ::= part_of(A, B) ⇒ B, i.e. the resulting output is B

Again, the following exemplifies one of the rules for the transformation, where Ci is an input item and O, O1 and O2 are candidate items in the target domains.

RULE 6. ∀Ci, ∃O: (substitute_to(O, Ci) ∨ is_a(O, Ci) ∨ instance_of(O, Ci) ∨ part_of(O, Ci) ∨ associate_with(O, Ci)) ⇒ O, i.e. O is the output item
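Rules of this kind can be applied mechanically over a store of semantic relationships. The sketch below encodes Rule 3 and Rule 6 over a toy fact base; all facts and item names are hypothetical, and the lookup strategy is only one possible encoding of the rules.

# Illustrative rule application over a toy store of semantic relationships.
# Each fact reads relation(subject, object); all entries are hypothetical.
FACTS = {
    ("part_of", "Trade_Price", "Trade"),
    ("substitute_to", "Close_Price", "Closing_Price"),
    ("is_a", "Limit_Order", "Financial_Order"),
}

def rule3_aggregate(a, b):
    """RULE 3: for (A AND B) with part_of(A, B), the aggregated output is B."""
    return b if ("part_of", a, b) in FACTS else None

def rule6_transform(ci, target_items):
    """RULE 6: map input item Ci to a target-domain item O linked to Ci by
    substitute_to / is_a / instance_of / part_of / associate_with."""
    links = ("substitute_to", "is_a", "instance_of", "part_of", "associate_with")
    for o in target_items:
        if any((rel, o, ci) in FACTS for rel in links):
            return o
    return None

print(rule3_aggregate("Trade_Price", "Trade"))                      # -> Trade
print(rule6_transform("Closing_Price", ["Close_Price", "Trade"]))   # -> Close_Price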

6 Conclusions

In our previous work, we have proposed ontology-based business intelligence for improving the performance of existing commercial DW systems. This paper outlines the basic picture of ontological engineering in data warehousing. It has demonstrated mechanisms for: (i) building ontology profiles for a domain problem, (ii) defining ontological commitment and semantic relationships, (iii) presenting ontological items, and (iv) aggregating and transforming items within one domain or across domains. This forms the foundation for analyzing, designing and implementing ontology-based data warehouse systems in the business world.

References
[1] Cao, L.B. et al.: Systematic engineering in designing architecture of telecommunications business intelligence system. Proceedings of HIS'03, pp. 1084-1093, IOS Press, 2003.
[2] Cao, L.B. et al.: Ontology Services-Based Information Integration in Mining Telecom Business Intelligence. Proceedings of PRICAI'04, Springer Press, pp. 85-94, 2004.



[3] Cao, L.B. et al.: Integration of Business Intelligence Based on Three-Level Ontology Services. Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, IEEE Computer Society Press, pp. 17-23.
[4] Cao, L.B. et al.: Agent Services-Based Infrastructure for Online Assessment of Trading Strategies. Proceedings of the 2004 IEEE/WIC/ACM International Conference on Intelligent Agent Technology, IEEE Computer Society Press, pp. 345-349.
[5] Cao, L.B. et al.: Ontology Transformation in Multiple Domains. 17th Australian Joint Conference on AI, G.I. Webb and Xinghuo Yu (Eds.): LNAI 3339, pp. 985-990, 2004.
[6] Cao, L.B., Zhang, C.Z., Liu, J.M.: Ontology-Based Integration of Business Intelligence. Int. J. on Web Intelligence and Agent Systems, Vol. 4, No. 4, 2006.
[7] F-Trade: http://www.f-trade.info.

Mapping Ontology Relations: An Approach Based on Best Approximations*

Peng Wang 1, Baowen Xu 1,2, Jianjiang Lu 1,2,3, Dazhou Kang 1, and Jin Zhou 1

1 Department of Computer Science and Engineering, Southeast University, Nanjing 210096, China
2 Jiangsu Institute of Software Quality, Nanjing 210096, China
3 PLA University of Science and Technology, Nanjing 210007, China
[email protected]

Abstract. Relation mappings are important for interoperation between heterogeneous ontologies. Several current methods employ string similarity matching or heuristic rules to find them, but often produce low-quality results. We propose a novel approach based on best approximations. The core idea is to find the least upper bounds and greatest lower bounds of a relation, and then use them to get upper and lower approximations of the relation. These approximations are the relation mappings between the ontologies. To discover the best mappings, we extend the definition of least upper (greatest lower) bounds to multielement least upper (greatest lower) bounds, which contain not only separate relations but also disjunctions or conjunctions of relations and the related concepts. Simplified multielement bounds are also defined to avoid redundancy. An effective algorithm for finding the relation mappings is proposed.

1 Introduction

An ontology is a formal, explicit specification of a shared conceptualization [1]. It describes the common knowledge in a domain through formally defining concepts, relations, axioms and individuals in order to solve knowledge sharing problems [2]. Usually the ontologies are distributed and produced by different communities. This causes the heterogeneity problem, which is the major obstacle to sharing information between different systems. Ontology mapping is a main approach to solving such problems by capturing the communication rules between ontologies [3]. Concept mappings and relation mappings are the two most important kinds of mappings, and finding them is a key issue. Current methods usually employ technologies such as string or structure similarity matching [4], machine learning [5], or the integration of several technologies [6] to compute the similarity between the corresponding

* This work was supported in part by the NSFC (60373066, 60425206, 90412003, 60403016), National Grand Fundamental Research 973 Program of China (2002CB312000), National Research Foundation for the Doctoral Program of Higher Education of China (20020286004), Foundation for Excellent Doctoral Dissertation of Southeast University, and Advanced Armament Research Project (51406020105JB8103).




entities in different ontologies. However, most works focus only on the mappings between concepts, and few discuss relation mappings. In fact, relation mappings are important in applications. Several methods have been proposed for discovering relation mappings. Noy and Musen determine the relation mappings based on lexical similarity [4]. The QOM method compares some features of relations, such as label, URI, domain and range, and super-properties and sub-properties [6]. These methods are all based on basic heuristic rules and cannot achieve high-quality relation mappings. Doan and his colleagues state that their machine learning method for concept mappings may be suitable for discovering relation mappings [5], but this remains ongoing research. Vargas-Vera and Motta propose a similarity algorithm to assess relation similarity by comparing two graphs [7], one coming from the query and the other obtained from the ontology. This method uses ontological structures and instances and the WordNet thesaurus. However, it may omit some valid relation mappings. Several reasons make finding relation mappings more difficult than finding concept mappings. First, relations differ from concepts: a relation is a binary relationship on instances. Second, relations cannot be organized as effectively as concepts into trees or graphs, and some relations are unrelated to each other. Finally, the labels of relations are more arbitrary. So the existing methods for concept mappings are not suitable for dealing with relation mappings. This paper proposes a novel approach based on best approximations, inspired by our previous work on complex concept mappings based on shared instances [8]. The core idea is to find the least upper bounds and greatest lower bounds of a relation, then use them to get the upper and lower approximations of the relation. The approximations are exactly the relation mappings. To get the best mappings, we extend the least upper (greatest lower) bounds to multielement least upper (greatest lower) bounds, which contain not only separate relations but also disjunctions or conjunctions, inverse relations, and the related concepts. To avoid redundancy, simplified multielement bounds are also defined. We provide an effective algorithm for finding the relation mappings. This paper is organized as follows: Section 2 provides some basic definitions. Section 3 discusses the methods for finding the best upper approximations and lower approximations, respectively. Finally, conclusions are given in Section 4.

2 Best Approximations of Relation

Since some relations can also be organized into a hierarchy by the "subPropertyOf" property, the method for dealing with concept hierarchies can also be used here. Meanwhile, we will consider other features, including inverse relations and the concepts related to relations. Our aim is to find the relation mappings of each relation R in O1 with respect to O2. We first present the definition of the best approximations of R, and regard them as the relation mappings. Let TC and TR be the set of concepts and the set of relations in O2, respectively. Here, we focus on relations between instances of two concepts, namely the object properties in OWL. The other type of relations, between instances of concepts and RDF literals or XML Schema datatypes, namely datatype properties in OWL, is not considered in this paper.



To find the similarity of relations, we assume that the two ontologies considered share the same instances. There are some ways of providing shared instances, such as manual annotation. Therefore, equal concepts must have the same set of instances. We consider that a relation also has its instances, and call them relation instances.

Definition 1 (Relation instance). Let R be a relation, let concept Cd be the domain of R and concept Cr the range of R. A relation instance of R is a member of the set R^I = {⟨x, y⟩ | x ∈ Cd, y ∈ Cr, and R(x, y)}.

If two relations R and S are equal, then R^I = S^I. Based on Definition 1, some relations also have a subsumption relationship.

Definition 2 (Relation subsumption). Let R, S be relations. R subsumes S, i.e. S ⊑ R, iff S^I ⊆ R^I. R properly subsumes S (S ⊏ R) if S ⊑ R but not R ⊑ S. We also define a universal relation ⊤ that subsumes any other relation.

Obviously, relation subsumption is transitive and reflexive. Most current ontology languages support relation subsumption, such as rdfs:subPropertyOf in RDF(S) and OWL. Current ontology languages do not support disjunction/conjunction operators on relations, but we can express them in relation mappings, because the mappings are independent of the ontology. Compound is another important operator on relations, but compounding two relations would generate infinitely many new relations, so we do not consider the compound operator here.

Definition 3 (Relation conjunction). The conjunction of two relations R and S is denoted R ∧ S = {⟨x, y⟩ | ⟨x, y⟩ ∈ R^I and ⟨x, y⟩ ∈ S^I}.

Definition 4 (Relation disjunction). The disjunction of two relations R and S is denoted R ∨ S = {⟨x, y⟩ | ⟨x, y⟩ ∈ R^I or ⟨x, y⟩ ∈ S^I}.

To get the mappings of R with respect to O2, we compare the sets of relation instances. We use methods similar to those for concept mappings to deal with relation mappings. To assure the precision of the results, we deduce more instance pairs from the existing ontologies. We consider inverse relations first: if R is inverse to S and there is a relation instance R(a,b), we can get a corresponding relation instance S(b,a). Then we consider symmetric relations: if R is symmetric and R(a,b), we have a new relation instance R(b,a). Thirdly, we use transitive relations: given a transitive relation R and R(a,b) and R(b,c), we add R(a,c) as a new relation instance. Based on this preprocessing, we enrich the set of relation instances in the existing ontologies. Here, we define approximations of relations using least upper bounds and greatest lower bounds of relations. These approximations are the relation mappings.

Definition 5 (Least upper bounds of relations). The least upper bounds of R with respect to TR is a set of relations in TR, notated as lub(R,TR), if the following assertions hold:

(1) ∀S ∈ lub(R, TR) → R ⊑ S; (2) ∀M ∈ TR, R ⊑ M → ∃N ∈ lub(R, TR), N ⊑ M

Then the upper approximations of a relation can be defined using the conjunction of the relations in lub(R, TR):

  ua(R, TR) = ⊤ ∧ ⋀_{Si ∈ lub(R, TR)} Si                                      (1)



Since all relations in lub(R, TR) subsume R, their conjunction also subsumes R. Therefore, ua(R, TR) is an upper approximation of R with respect to TR.

Definition 6 (Greatest lower bounds of relations). The greatest lower bounds of R with respect to TR is a set of relations in TR, notated as glb(R, TR), if the following assertions hold:

(1) ∀S ∈ glb(R, TR) → S ⊑ R; (2) ∀M ∈ TR, M ⊑ R → ∃N ∈ glb(R, TR), M ⊑ N

Then the lower approximations of a relation can be defined:

  la(R, TR) = ⊥ ∨ ⋁_{Si ∈ glb(R, TR)} Si                                      (2)
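Because Definitions 5-6 and formulas (1)-(2) are stated purely in terms of relation-instance sets, they can be computed directly once the shared instances are known. The sketch below is illustrative only; the relation names and instance pairs are toy data, and the universal relation ⊤ is approximated by the union of all known pairs.

# Illustrative computation of lub/glb and of formulas (1)-(2) from
# relation-instance sets (toy data; each relation is a set of (x, y) pairs).
T_R = {
    "hasVehicle": {("jim", "van1"), ("ann", "car1")},
    "ownsVan":    {("jim", "van1")},
    "ownsCar":    {("ann", "car1")},
}
R = {("jim", "van1")}          # instance set of the relation to be mapped

def subsumes(big, small):      # small ⊑ big  iff  small^I ⊆ big^I
    return small <= big

def lub(r_inst, t_r):
    uppers = {n for n, inst in t_r.items() if subsumes(inst, r_inst)}
    return {n for n in uppers
            if not any(m != n and subsumes(t_r[n], t_r[m]) for m in uppers)}

def glb(r_inst, t_r):
    lowers = {n for n, inst in t_r.items() if subsumes(r_inst, inst)}
    return {n for n in lowers
            if not any(m != n and subsumes(t_r[m], t_r[n]) for m in lowers)}

def upper_approx(r_inst, t_r):
    """Formula (1): conjunction of the lub members (⊤ taken as all known pairs)."""
    out = set.union(*t_r.values())
    for n in lub(r_inst, t_r):
        out &= t_r[n]
    return out

def lower_approx(r_inst, t_r):
    """Formula (2): disjunction of the glb members (⊥ is the empty relation)."""
    out = set()
    for n in glb(r_inst, t_r):
        out |= t_r[n]
    return out

print(lub(R, T_R), glb(R, T_R))
print(upper_approx(R, T_R), lower_approx(R, T_R))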

Unfortunately, the quality of ua(R, TR) and la(R, TR) is not acceptable in some cases. In the relation set TR, the super-relations of R may be much larger and the sub-relations of R much smaller. In the worst case, if R has no super-relation or sub-relation in TR, then ua(R, TR) degenerates to the universal relation and la(R, TR) to the empty relation. For example, let M, N be relations in TR; if no single relation subsumes R, then ua(R, TR) = ⊤ and R ⊑ M ∧ N is impossible. However, it may still hold that R ⊑ M ∨ N ⊏ ⊤, so M ∨ N is closer to R than ua(R, TR). We therefore allow disjunctions (conjunctions) in the least upper (greatest lower) bounds, and call them multielement least upper (greatest lower) bounds. Using the multielement bounds, we can get the best approximations of the relations.

We first introduce several notations for readability. If E = {S1, S2, …, Si} is a set of relations in TR, then E is called an i-relation-set in TR, and |E| = i. We define E^∨ = S1 ∨ S2 ∨ ⋯ ∨ Si to be the disjunction of all relations in E, and call E^∨ an i-relation-disjunction in TR. In particular, if |E| = 1, let E^∨ = S1; if |E| = 0, let E^∨ = ⊤. Similarly, E^∧ = S1 ∧ S2 ∧ ⋯ ∧ Si is called an i-relation-conjunction in TR; if |E| = 1, let E^∧ = S1; if |E| = 0, let E^∧ = ⊥. Here we let E, F, G be relation-sets in TR, and E^∨, F^∨, G^∨ the corresponding relation-disjunctions. We first define the multielement least upper bounds.

Definition 7 (Multielement least upper bounds of relation). The multielement least upper bounds of R with respect to TR is a set of relation-disjunctions in TR, notated as mlub(R, TR), if the following assertions hold:

(1) ∀E^∨ ∈ mlub(R, TR) → R ⊑ E^∨; (2) ∀F ⊆ TR, R ⊑ F^∨ → ∃E^∨ ∈ mlub(R, TR), E^∨ ⊑ F^∨

The first assertion ensures that each member of mlub(R, TR) subsumes R. The second ensures that any relation-disjunction that subsumes R also subsumes at least one member of mlub(R, TR). Notice that mlub(R, TR) may not be unique for a given R and TR. We define the upper approximation based on the multielement bounds:

  mua(R, TR) = ⊤ ∧ ⋀_{Ei^∨ ∈ mlub(R, TR)} Ei^∨                                      (3)



Theorem 1. mua(R,TR) is the least upper approximation of R with respect to TR.

Proof: We only need to prove that for any upper approximation Q of R with respect to TR, i.e. R ⊑ Q, it holds that mua(R, TR) ⊑ Q.

Let Q′ be an equivalent CNF of Q, Q′ ≡ Q = F1^∨ ∧ F2^∨ ∧ ⋯ ∧ Fn^∨, where Fi ⊆ TR. For any relation instance ⟨x, y⟩, if ⟨x, y⟩ ∉ Q^I, then, since Q = F1^∨ ∧ F2^∨ ∧ ⋯ ∧ Fn^∨, there must be some Fi^∨ such that ⟨x, y⟩ ∉ (Fi^∨)^I. Since Q is an upper approximation of R, R ⊑ Q, and so R ⊑ Fi^∨. From Definition 7, there exists E^∨ ∈ mlub(R, TR) such that E^∨ ⊑ Fi^∨, so ⟨x, y⟩ ∉ (E^∨)^I too. Since mua(R, TR) is the conjunction of ⊤ and all members of mlub(R, TR), ⟨x, y⟩ ∉ mua(R, TR)^I. Hence mua(R, TR) ⊑ Q. □
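Definition 7 can also be realized by brute force over subsets of TR, which is feasible only for small relation sets; the sketch below is illustrative, uses toy data, and bounds the subset size for tractability (this bound is our simplification, not part of the paper's algorithm).

from itertools import combinations

# Illustrative brute-force construction of mlub(R, TR): enumerate relation
# disjunctions E^v (unions of instance sets) that subsume R and keep the
# minimal ones.  Toy data; only subsets up to MAX_SIZE relations are tried.
T_R = {
    "ownsVan":   {("jim", "van1")},
    "ownsTruck": {("tom", "truck1")},
    "hasPet":    {("ann", "cat1")},
}
R = {("jim", "van1"), ("tom", "truck1")}
MAX_SIZE = 2

def mlub(r_inst, t_r, max_size=MAX_SIZE):
    names = sorted(t_r)
    candidates = []
    for k in range(1, max_size + 1):
        for subset in combinations(names, k):
            disjunction = set().union(*(t_r[n] for n in subset))
            if r_inst <= disjunction:                     # R ⊑ E^v
                candidates.append((frozenset(subset), frozenset(disjunction)))
    # keep only minimal disjunctions: drop E^v if some candidate F^v ⊏ E^v
    return [s for s, d in candidates
            if not any(d2 < d for _, d2 in candidates)]

print(mlub(R, T_R))   # -> the single member {ownsVan, ownsTruck}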

Theorem 1 shows that the multielement least upper bounds yield the least upper approximation of a relation R. There may be many multielement least upper bounds of R with respect to TR, and most of them contain redundant members. We therefore define the simplified multielement least upper bounds of a relation.

Definition 8 (Simplified multielement least upper bounds of relation). slub(R, TR) is called the simplified multielement least upper bounds of R with respect to TR if it is a multielement least upper bounds of R and, for any E^∨ ∈ slub(R, TR), the following assertions hold:









(1) ¬∃F ⊆ TR, R ⊑ F^∨ ⊏ E^∨; (2) ¬∃F^∨ ∈ slub(R, TR), F^∨ ≡ E^∨;



(3) ¬∃F ⊆ TR, (F^∨ ≡ E^∨) ∧ (|F| > k.

Our goal is to design a sensor-based testing facility for the leak detection problem. The location of a leak is expected to be identified with a certain level of accuracy (locating any leak, e.g., to within 50 m) within a predefined time frame. In other words, we want to demonstrate that regular monitoring of the data collected by the deployed sensors will identify all water leaks on pipe segments, with the location accurate to the predefined pipe length.

4 Sensor Network Design

4.1 The Sensor Nodes

We propose that a sensor network be placed over the water pipe distribution system to measure the water usage at the end points of water distribution as well as at intermediate points, so that the presence of leaks can be located with the required accuracy. The sensors are positioned along the water pipe supply distribution system of the area under consideration. Our goal is to show that the required testing can be accomplished by a network with greatly simplified functional requirements for all deployed sensors. We assume that each sensor is capable of observing and temporarily storing only two values at any given time/period of measurement:
• the direction of the water flow, and
• the volume of the flow.
Further, we assume that each sensor has a small data storage space and is capable of communicating with other nodes as dictated by the communication protocol. A very important assumption to be made is the time synchronisation of the network nodes. Since data collected from distributed nodes are to be integrated to perform the calculations that check the balance of inflow, outflow and consumption along the water pipes, there has to be a strong correlation point, namely the timestamp at which the values to be fused are recorded. There are certain dedicated approaches [6] [7] [8] employed in order to



achieve time synchronisation. We further assume that acceptable time synchronisation is already established among the nodes of the network. There is a trade-off between the functional capabilities, mainly the communication capacity of the sensor nodes, and the number of sensor nodes used in the network. There could be a dense set of sensors with bare-minimum functionalities (small transmission ranges) or a less dense set of sensors with increased functionalities. In the proposed approach, the right balance is sought by deploying a hierarchy of sensor nodes depending on their need and purpose in the network. It is simple to observe that, to be able to reason about the in-flow water distribution to many consumption nodes along a main pipe (regardless of the number of supply sources and the water flow direction), one must obtain flow measurements at no fewer than two points, one at each end of the pipe segment investigated. This gives us a lower bound on the number of distribution nodes required: at least twice the number of main pipes in the network. However, since accuracy is a parameter of the testing facility, this number may be larger, reflecting the actual length of the pipe.

4.2 The Sensor Network Model

In this section we first present an abstraction of the real pipe network environment described above as the following mathematical model. We begin by introducing the essential notation. Let the water distribution network W = (S, C, P) consist of a set of supply sources S = {s_1, s_2, ..., s_s}, a set of consumption units C = {c_1, c_2, ..., c_q}, and a set of connected main pipes P = {p_1, p_2, ..., p_m}.

Fig. 2. A Water Distribution Network W; SN(W) for W

Given such a water distribution network, we now define the sensor network for W, SN(W), by adding a layer in which sensor nodes, measuring water in-flow and consumption, are positioned and connected to form a sensor network. As mentioned before, we distinguish two types of nodes in the network: those associated with the consumption nodes, and the distribution nodes. With each consumption point c_i we associate a sensor node placed at that network point, essentially a sensor located on the water meter, which measures the water consumption at that position.

Without any loss of specification precision, we use the same notation for these sensors as for the corresponding water network nodes; we call them c_i, where i = 1, 2, ..., q. Further, we extend the collection of sensors deployed by the SN by defining another set of sensor nodes called distribution nodes, D = {d_1^1, d_1^2, d_2^1, d_2^2, ..., d_m^1, d_m^2}, where d_j^1 and d_j^2 denote a pair of sensor nodes placed one at each end of the pipe segment p_j ∈ P in W.

The next important question to be addressed is the number of required sensors and their exact positioning along the pipes to achieve the desired leak testing quality. Before we address this challenging question, let us focus on the data to be collected by this sensor environment and on the physical infrastructure requirements for the sensing devices to accomplish the collection effectively. For correct correlation, all measurements must be time-stamped and cover the flow volume as well as the direction of the flow at the given time. Collecting this type of data requires a specialised physical device capable of functioning correctly when the direction of the water flow changes. In each segment of the network, the direction may change several times over a period of time, and each redirection must be reflected in the data collected and associated with the corresponding volumes. Each sensor c_i is capable of measuring the water flow volume v_i(t_s), where t_s is an SN-wide global timestamp generated at regular intervals of T time units. The consumption sensors can be designed so that they are activated and collect data only when there is water flow. To discuss the data collected by the distribution sensors, we must look once more at flow directions in a pipe of W. As mentioned previously, the direction of water flow can change over unpredictable periods of time. The flow direction can only be specified relative to some established reference against which it can be measured. To provide such a reference, we introduce the direction of water towards a network segment in later sections. For now, we focus on a single pipe segment and consider the water flowing into and out of the whole pipe from either or both directions. In accordance with our earlier specification of the location of the distribution nodes, if we consider a pipe segment p_j ∈ P, then the distribution sensor nodes at the ends of p_j will be able to compute w_j(t_s), the net amount of water that has flowed into p_j from either end at timestamp t_s. The computation of this value is shown in the following section. We compare w_j(t_s) with the total amount of water consumed by the consumption nodes placed along the pipe p_j. Where these computations are performed is outside the scope of this paper; we assume they take place as dictated by the communication methodology. The expression for detecting a leak in a single pipe is given in the next section.

4.3 Sensor Numbers and Positioning

When designing a sensor network, an important consideration is establishing the minimum number of sensors to be adopted and their positions, so as to offer the required data collection facilities and communication between collaborating nodes.

In this paper we concentrate only on the data collection aspect. Connectivity considerations in this application can be addressed by adopting the solutions presented in [9], [10]. Before we consider a data monitoring mechanism for an arbitrary pipe network W, let us consider a special case of our problem: W consisting of only a single pipe p_j.

Fig. 3. A pipe segment in isolation

Thus, if our problem is the identification of distribution node locations, then the solution in this special case is obvious: we need only two distribution sensors, one located at each end of the pipe p_j. A larger number can only be implied by the accuracy requirements, in case the pipe p_j is longer than the monitoring segment length set for accuracy. Leak identification can be achieved by considering the following simple expression (1), which checks the balance between the volume of water used and the volume of water supplied to pipe p_j. Let c_1, ..., c_θ be the consumption nodes present along the pipe p_j. IN(d_j^1, t_s) denotes the inflow of water into the pipe measured at d_j^1 at t_s; the other notations are self-explanatory.

\sum_{i=1}^{\theta} v_i(t_s) = \bigl( IN(d_j^1, t_s) + IN(d_j^2, t_s) \bigr) - \bigl( OUT(d_j^1, t_s) + OUT(d_j^2, t_s) \bigr). \qquad (1)
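As an illustration only, a minimal sketch of the balance check (1) is given below; the function names are hypothetical, and a tolerance parameter is added to absorb the measurement noise discussed later in the paper.

```python
def single_pipe_leak(consumption, in_1, in_2, out_1, out_2, tol=0.0):
    """Return True when expression (1) is violated at one timestamp, i.e. a leak
    (or other unaccounted discharge) is suspected on the pipe.

    consumption : list of v_i(t_s), one value per consumption node on the pipe
    in_1, in_2  : IN(d_j^1, t_s), IN(d_j^2, t_s), inflow measured at the two ends
    out_1, out_2: OUT(d_j^1, t_s), OUT(d_j^2, t_s), outflow measured at the two ends
    tol         : tolerance absorbing measurement noise and accepted losses
    """
    net_supplied = (in_1 + in_2) - (out_1 + out_2)
    return abs(net_supplied - sum(consumption)) > tol

# Example: 12 units enter, 1 unit leaves, but the consumers only meter 9 units.
print(single_pipe_leak([4.0, 5.0], in_1=10.0, in_2=2.0, out_1=1.0, out_2=0.0, tol=0.5))  # True
```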

If this equation does not hold, then not all of the water flowing into the pipe is being consumed by the consumption units along it; thus there are one or more discharges or leaks in that pipe which are not accounted for. This basic expression holds only for the single-pipe case, as the notions of IN and OUT lose their meaning when we extend our interest to finding leaks in a segment of a network rather than in a single pipe. Thus, to extend this consideration to a more general network structure, we now assume that W has a tree structure embedded within it, as depicted in Figure 4 below.

Fig. 4. Tree Structured Network segment

One can easily observe that each tree structure can be logically partitioned into a number of simple pipes; each link of the tree is a single pipe. Thus the locations of the distribution nodes can be derived from the simple case presented above. As discussed, each pipe has two distribution nodes, locating a leak down to the pipe level. However, such a method clearly calls for too many distribution sensors if we are interested in the region covered by the above network segment. In particular, nodes positioned at the end points of links sharing junction points can be regarded as redundant. In other words, nodes that are not leaf nodes have distribution nodes on the links stemming from them, and the readings from those distribution nodes are not actually needed to detect the presence of a leak in the whole network segment. In the figure above, the distribution nodes covered by shaded circles are not needed for evaluating the presence of a leak in the whole segment, because they do not contribute to the inflow and outflow of water with respect to the whole network segment. However, the nodes at all end points of the tree are indispensable for identifying water leakage. As before, we are able to establish the minimum number of distribution sensors: it is the number of end points of the tree. As before, the accuracy requirement may increase this number accordingly. Let d_1, ..., d_k be the set of distribution nodes placed at all the leaf nodes of this tree segment. Also assume that c_1, ..., c_θ are the consumption nodes located within this tree segment. We can still use the IN and OUT functions to determine the water leakage in the tree segment, as the distribution nodes are placed just next to the pipe junction points.

\sum_{i=1}^{\theta} v_i(t_s) = \sum_{j=1}^{k} \bigl( IN(d_j, t_s) - OUT(d_j, t_s) \bigr). \qquad (2)

It should be observed that (2) is a general version of (1) in which there are more than two distribution nodes at the ends of the segment under consideration: in (1) the segment is a lone pipe, whereas in (2) it is a tree-structured segment. The IN and OUT functions become inadequate when there is a need to segment the network at an arbitrary point along a pipe, which is detailed in the following section. Thus, by simple observations and generalisation of the special cases presented, we can see how to approach leak detection in an arbitrary network W.

4.4 Network Segmentation

Theoretically, to be able to detect pipe faults over the entire region A, we need to systematically and regularly compare all in-flows from water supply sources with all legitimate out-flows at the consumption points. In practice, however, such leak discovery without locating the affected pipe would not be useful. Additionally, the simplified approach presented may not be applicable to very large regions. In a real pipe system, changes in flow pressure, subtle delays in actual water measurements, some accepted and unavoidable losses, and the more complicated flow physics of a hydrological nature potentially add a great deal of noise to the data used.

On the other hand, measuring all the time at all distribution nodes (set up to reason about individual pipes) is not required. The challenge is to strike the right balance of measurements, guaranteeing the expected accuracy without measuring too often at too many points. The most natural approach seems to be a hierarchical one based on sub-regions of region A. The hierarchical method follows a simple principle: if a sub-region reports pipe failure(s), then in order to identify the leak location(s), sub-regions of the initial sub-region are examined in turn until the lowest level is reached, namely the level of individual pipe sections (according to the set accuracy parameters). Since we understand how to monitor larger segments of W (not only individual pipes), the remaining challenge is to design a segmentation of the pipe network into fragments such that the number of measurements required can be effectively controlled. By segmentation, we mean identifying cut points along the pipes at which, if a cut is made, the network is rendered into a certain number of segments. The distinction from the previous cases is that distribution nodes now have to be placed at the cut points along the pipes which mark the end points of the network segments. At those nodes, the notions of IN and OUT carry no absolute meaning, because they are relative to the segments the nodes bound. Each cut point participates in two network segments and is an end point in both of them. Each distribution node placed at a cut point measures the flow of water through it in either direction, but cannot by itself determine which is inflow and which is outflow: a particular direction of flow at a cut point is inflow for one segment and outflow for the other. This is formally explained in the following paragraphs. Let W be a pipe network supporting region A. The accuracy consideration calls for the introduction of a length function L defined for all pipes, L(p_j). Further, we use L to denote the total length of all pipes in W:

L(W) = \sum_{j=1}^{m} L(p_j) = L.

The problem of network segmentation can be formulated as follows. Let A_1, A_2, ..., A_p be a partition of A into sub-regions. Each A_i defines a corresponding W_i as the intersection of A_i with W in the natural way. For a given W with length L over A, find a cover of A by A_1, A_2, ..., A_p such that

\forall i, \quad L(W_i) \approx \frac{L}{p},

and the number of crossed pipes is minimal. Clearly, in general, the number of such covers can be large, which is characteristic of all constrained covering problems. Moreover, examining all possible options is computationally intensive, the problem being exponential. In this paper we will not search for an optimal solution; instead, we show a simple heuristic method of network segmentation.

In order to define the inflow and outflow with respect to the segments, let us consider the following. Let I be the set of all pipe segments and let I = I_1 ∪ I_2 ∪ ... ∪ I_p, where I_1, I_2, ..., I_p correspond to the regions A_1, A_2, ..., A_p; I_i is the set of all pipes contained in that particular segment. If I_i ∩ I_j ≠ ∅, where i, j = 1, 2, ..., p, then the two segments I_i and I_j share one or more pipes. Let us consider J_{ij} = {o_1, ..., o_{|J_{ij}|}}, where J_{ij} = I_i ∩ I_j, i.e., J_{ij} represents all the pipes shared between the network segments I_i and I_j. To define inflow and outflow, we focus on one element o_p of J_{ij}:

\forall o_p, \quad [\, w_{o_p}(t_s) \,]_{I_i} = - [\, w_{o_p}(t_s) \,]_{I_j},

for w defined earlier. Hence, the amount of water that has flowed through a cut point used to partition the network into segments, with respect to the segment for which the calculations are performed, can be computed using the previous expression as input. For a particular pipe o_p, the amount of water that flowed through the cut point at timestamp t_s is recorded and used accordingly for leak identification in the different network segments.

4.5 Heuristic Method for Network Segmentation

We begin by demonstrating that two different network fragmentations lead to different pipe crossing points. For the sake of simplicity we do not show the A_i regions in the following figure. Without any loss of generality, we do not consider consumption nodes, but concentrate on positioning the distribution sensor nodes, the only ones indispensable to the overall testing problem.

Fig. 5. Examples of different fragmentations: (a) needs 4 cut points; (b) needs 6 cut points

Our pipe network model can be interpreted as a planar, directed graph, with the intersection points as the nodes, the pipes as the links, and the direction of each link corresponding to the direction of the pipe.

The heuristic method developed is based on the Euler formula [11], which, for every planar graph, relates the number of nodes n, links l and closed components c of the graph as follows:

l = n + c - 1.

Our segmentation method can be presented by listing its two main steps (a sketch of both steps is given below):
Step 1 – Transform the network into a tree by splitting (cutting) all closed components.
Step 2 – Following the branches of the tree in a bottom-up fashion, cut further links to obtain the required size (length) of each network segment.
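The sketch below illustrates one possible, assumption-laden reading of these two steps in Python, with the network given as an undirected graph of junction points and pipe lengths; all names are hypothetical, and optimality and tie-breaking are deliberately ignored.

```python
from collections import defaultdict

def segment_network(edges, target_length):
    """Heuristic segmentation sketch (assumes a non-empty, connected network).

    edges         : list of (u, v, length) pipes between junction points
    target_length : desired total pipe length per segment (roughly L/p)
    Returns (cut_edges, segment_roots): pipes cut in Step 1 to break closed
    components, and junctions at which Step 2 closes off a segment.
    """
    # --- Step 1: build a spanning tree; every non-tree pipe is cut. -----------
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = defaultdict(list)          # adjacency of the resulting tree
    cut_edges = []
    for u, v, length in edges:
        ru, rv = find(u), find(v)
        if ru == rv:                  # this pipe would close a cycle -> cut it
            cut_edges.append((u, v))
        else:
            parent[ru] = rv
            tree[u].append((v, length))
            tree[v].append((u, length))

    # --- Step 2: bottom-up traversal, closing a segment once it is long enough.
    root = edges[0][0]
    segment_roots = []
    visited = set()

    def walk(node):
        """Return the pipe length still 'open' below node; cut when it reaches target."""
        visited.add(node)
        open_length = 0.0
        for nxt, length in tree[node]:
            if nxt in visited:
                continue
            open_length += walk(nxt) + length
        if open_length >= target_length:
            segment_roots.append(node)   # close a segment at this junction
            open_length = 0.0
        return open_length

    walk(root)
    return cut_edges, segment_roots

# Toy example: a square of four 100 m pipes plus a 100 m spur.
pipes = [("a", "b", 100), ("b", "c", 100), ("c", "d", 100), ("d", "a", 100), ("c", "e", 100)]
print(segment_network(pipes, target_length=200))
```

Which pipe of a closed component is cut, and where each segment is closed, are exactly the degrees of freedom discussed next.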

Step 1 delivers all the additional end points that must be equipped with the distribution measurement facility. Their number can be estimated from the Euler formula as c = l - n + 1. Note that there are many possibilities when choosing which link to cut for a closed component, meaning that many different trees can be constructed by Step 1. Step 2 increases the number of new end points by no more than 2m, where m is the number of segments. As in Step 1, there are many different ways to traverse the tree in order to construct the segments. The heuristic method presented is effective but may not deliver the optimal solution. It should be observed at this juncture that cut points on links within one segment carry no value, as only the end points of the segment matter. The approach presented offers a hierarchical way to monitor the water supply through the network segmentation mechanism. Tests are designed at the segment level, and only if a negative result is recorded is a further search for the leak location carried out. The segment under examination is then treated as the network, and the same method is applied at a finer granularity of segments. The total number of required distribution sensors can easily be estimated from the topology of the pipe network and the required accuracy, but at any given time only a small subset of them will be active: only those located at the end points of the current segments. Let us conclude this discussion of network segmentation by providing the expression that determines the presence of leaks in such segments. For any segment I_i, let C^i be the set of all consumption unit nodes and let D^i be the set of all new distribution nodes that need to be placed at the cut points defining the segment I_i. For instance, C^1 is the set containing all the consumption nodes present in the network segment I_1, and D^1 is the set of all distribution nodes located at the end points of that segment. Let C^1 = {c_1^1, c_2^1, ..., c_{|C^1|}^1} and D^1 = {d_1^1, d_2^1, ..., d_{|D^1|}^1}. It can be observed from the previous discussions that each element of the set D^1 is repeated in another set D^k, k ≠ 1, because those nodes cut a link forming two segments and thereby participate in both segments.

For the purpose of having simple expressions, the amount of water consumed by a consumption unit, say c_1^1, is denoted c_1^1(t_s). Let w_{d_1^1}(t_s) be the net amount of water consumed by (i.e., flowing into) the segment I_1 as measured at d_1^1 at time t_s. Thus, for any segment I_i, the expression needed to balance the water flow (or to detect a leak) is as follows:

\sum_{i=1}^{|C^i|} c_i^i(t_s) = \sum_{j=1}^{|D^i|} w_{d_j^i}(t_s). \qquad (3)

This is a general form of (2), as it uses the net inflow of water, and it is the same expression used to identify the presence of a leak in any arbitrary network segment.
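To illustrate how expression (3) supports the hierarchical monitoring of Section 4.4, a small sketch follows; the structures are hypothetical, the per-node measurements are assumed to be already collected and time-aligned, and a tolerance absorbs noise.

```python
def segment_has_leak(consumed, net_inflows, tol=0.0):
    """Expression (3): compare total consumption in a segment with the net
    inflow w_d(t_s) recorded at every distribution node on its boundary.

    consumed    : list of c(t_s) values, one per consumption node in the segment
    net_inflows : list of w_d(t_s) values, one per boundary distribution node;
                  a cut-point reading enters with opposite sign in the two
                  segments that share it
    """
    return abs(sum(net_inflows) - sum(consumed)) > tol

def locate_leaks(segment, tol=0.0):
    """Hierarchical drill-down: test a segment and, only if it fails, recurse
    into its sub-segments until the finest (pipe-level) segments are reached."""
    if not segment_has_leak(segment["consumed"], segment["net_inflows"], tol):
        return []
    children = segment.get("children", [])
    if not children:                       # finest granularity reached
        return [segment["name"]]
    leaks = []
    for child in children:
        leaks.extend(locate_leaks(child, tol))
    return leaks

# Toy region: the whole region fails the balance, and the drill-down blames I2.
region = {
    "name": "A", "consumed": [9.0, 8.0], "net_inflows": [10.0, 11.0],
    "children": [
        {"name": "I1", "consumed": [9.0], "net_inflows": [10.0, -1.0]},
        {"name": "I2", "consumed": [8.0], "net_inflows": [11.0, 1.0]},
    ],
}
print(locate_leaks(region, tol=0.5))   # ['I2']
```

Note how the shared cut point appears as -1.0 in I1 and +1.0 in I2, mirroring the sign relation given above.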

5 Advantages of the Approach

The idea of deploying sensor nodes along the water pipe lines in order to monitor and manage the water flow has more significant advantages than the obvious one. As mentioned earlier, the deployment of such sensors makes it possible to find leaks in underground water pipes with far more ease and efficiency than the existing conventional methods. Another benefit of the approach is the ability to visualise the water usage pattern of a particular region. Given additional storage capacity in the distribution nodes, complex patterns of water flow and diffusion could be recorded in considerable detail, without involving the base station, by distributing the processing. The consumption nodes steadily measure the water flowing through them to the consumption units; these nodes could be used to record the water usage pattern at regular intervals. The recorded general usage pattern could be derived from previously stored data, from a convolution performed on historical data, or in any other form optimised to fit within the storage limitations of the sensor node. These values could then be propagated to a higher level: for instance, the water consumption of the business units and of the residential units in a particular suburb could be stored in a hierarchically higher distribution node located on the water pipe that distributes water to that suburb. The water usage pattern of the different entities of a suburb, even on a daily basis, can be stored and an approximation of the average pattern deduced. If the observed water usage pattern does not lie within certain prediction intervals, an alarm could be raised. The advantage is that such anomalies can be identified at a convenient level of the hierarchy, and the focus can then be shifted to the required granular level. A further benefit of the overall approach is that the water flow in the pipeline distribution system can be observed, and knowledge of the measurements and flow directions can assist decisions regarding the switching of water pumps, the water pressure to be maintained, and so on.
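One hedged way to realise the alarm idea described above is a simple prediction interval built from historical readings; the rule and the thresholds below are purely illustrative.

```python
from statistics import mean, stdev

def usage_alarm(history, current, k=3.0):
    """Raise an alarm flag when the current consumption falls outside a
    mean +/- k*std prediction interval derived from historical readings
    for the same suburb, business unit, etc. (illustrative rule only)."""
    if len(history) < 2:
        return False                      # not enough data for an interval
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > k * sigma

# Daily consumption of a suburb over a week, followed by an unusually high day.
week = [310.0, 298.5, 305.2, 300.1, 312.7, 295.3, 308.8]
print(usage_alarm(week, current=410.0, k=3.0))   # True -> investigate at finer granularity
```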

6 Conclusion

In this paper we have proposed the deployment of sensor networks for identifying leaks in an urban water distribution system. We have discussed the issues involved in segmenting the sensor network for efficient, hierarchical monitoring of water distribution. Starting with very simple network scenarios, we showed the complexities involved in segmentation for more complex network sub-structures. Having provided a heuristic for network segmentation, we leave the examination of the nature of the optimal solution as an open question that we expect to investigate further. We would like to thank Mr. Andris Krumins and Mr. Shashi Mallappa of Brisbane Water for their enthusiastic assistance and input, which enabled us to proceed from a realistic assumption base.

References
1. Water Distribution Network of the city of Jodhpur, Rajasthan, India. http://www.gisdevelopment.net/magazine/years/2004/aug/jodhpur_project2.htm (accessed 15th August 2005)
2. Burn, S., DeSilva, D., Eiswirth, M., Speers, A., Thornton, J.: Pipe Leakage - A Problem for the Future. Pipes Wagga Wagga Conference, Australia, October 1999.
3. TÜV Austria. http://www.tuev.at/go/index.pl?l=uk&seite=wp5 (accessed 15th August 2005)
4. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: Wireless Sensor Networks: A Survey. Computer Networks 38(4):393-422, March 2002.
5. Estrin, D., Govindan, R., Heidemann, J., Kumar, S.: Next Century Challenges: Scalable Coordination in Sensor Networks. In: Proceedings of the Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking, pp. 263-270, 1999.
6. Elson, J., Estrin, D.: Time Synchronization for Wireless Sensor Networks. In: Proceedings of the 2001 International Parallel and Distributed Processing Symposium, San Francisco, CA, USA, April 2001.
7. Römer, K.: Time Synchronization in Ad Hoc Networks. In: Proceedings of MobiHoc 2001, Long Beach, CA, October 2001.
8. Su, W., Akyildiz, I.F.: Time-Diffusion Synchronization Protocol for Sensor Networks. Technical report, Georgia Institute of Technology, Broadband and Wireless Networking Laboratory, 2002.
9. Vairamuthu, M.K., Nesamony, S., Orlowska, M.E., Sadiq, S.W.: On the Design Issues of Wireless Sensor Networks for Telemetry. In: 8th International Workshop on Network-Based Information Systems, Copenhagen, Denmark, August 2005.
10. Vairamuthu, M.K., Nesamony, S., Orlowska, M.E., Sadiq, S.W.: Channel Allocation Strategy for Wireless Sensor Networks Deployed for Telemetry. In: Second International Workshop on Networked Sensing Systems, San Diego, California, USA, June 2005.
11. West, D.B.: Introduction to Graph Theory. First Edition, Prentice Hall, 1996.

Using the Shuffled Complex Evolution Global Optimization Method to Solve Groundwater Management Models

Jichun Wu and Xiaobin Zhu

Department of Earth Sciences, Nanjing University, Nanjing, Jiangsu 210093, China

Abstract. The shuffled complex evolution method developed at the University of Arizona (SCE-UA) is applied in this paper to optimize groundwater management models; it is the first time this method has been used in the field of hydrogeology. Unlike traditional gradient-based optimization methods, such as Difference Dynamic Programming (DDP) and Sequential Quadratic Programming (SQP), the SCE-UA algorithm is capable of finding the global optimum, and it does not rely on the availability of an explicit expression for the objective function or its derivatives. By combining the virtues of two types of non-numerical approaches, searching along a definite direction and searching randomly, and by introducing the new concept of complex shuffling, SCE-UA is a very effective and efficient global optimization method that can handle nonlinear problems of high parameter dimensionality. Two groundwater resources management models are built for an unconfined aquifer: a linear model maximizing the pumping rate and a nonlinear model minimizing the pumping cost. The SCE-UA method and several other optimization methods are used to solve these two models. Comparison of the results shows that the SCE-UA method can solve the groundwater models successfully and effectively. It is clear that this method can be used widely to optimize management models in the hydrogeology field, such as the configuration of groundwater resources and the prevention and management of groundwater contamination.

1 Introduction

Two types of methods are used to solve management models: numerical methods and non-numerical methods. The latter are used more widely because they do not rely on the continuity and differentiability of the objective functions. According to how the decision variables change, the search process of non-numerical methods can be divided into two types: searching in a definite direction and searching randomly. The first type includes the steepest descent method, the downhill simplex method [1], etc. The characteristic of this type of non-numerical method is that the direction of each search step is definite, so the search easily stops at a local optimum.

This paper is financially supported by the Doctor Foundation of China No. 20030284027.

The characteristic of the second type of non-numerical method, such as the genetic algorithm [2] and simulated annealing [3], is that the direction of each search step is stochastic; therefore the search can converge to the global optimum, or at least to a near-global optimum [4]. Due to the development of computer science, more and more attention is now paid to the quality of the optimization results rather than to the calculation speed. Therefore, non-numerical methods that are able to converge to the global optimum, such as the GA and SA, are used more and more often [5-15]. Combining two stochastic non-numerical optimization methods, the GA and SA, Wu Jianfeng et al. [14] developed a new optimization method, the genetic algorithm based on a simulated annealing penalty function (GASAPF). Built on the GA, this new method handles the constraints of management models through simulated annealing, and it avoids the complexity of selecting a penalty factor in the traditional GA. The GASAPF has been used to solve groundwater management problems successfully and has proved to be better than the traditional GA [14, 15]. Different from the GASAPF, which combines two types of stochastic non-numerical methods, the SCE-UA method combines the direction-searching of deterministic non-numerical methods with the robustness of stochastic non-numerical methods. A new concept, complex shuffling, is introduced in the method. The SCE-UA method is capable of handling nonlinear optimization problems of high parameter dimensionality, and derivatives of the objective function are not necessary. SCE-UA is a very effective and efficient global optimization method and has been used successfully to calibrate many types of watershed models [16-25]. In this paper, the SCE-UA method is applied for the first time to solve groundwater management models, and the results are very satisfactory.

2 A Brief Introduction to the SCE-UA Method

The SCE-UA method was developed by Duan et al. at the University of Arizona in 1992. It is a global optimization strategy designed to handle the various response-surface problems encountered in the calibration of nonlinear simulation models, particularly the multiple-optima problem encountered with Conceptual Rainfall-Runoff (CRR) models. The method combines deterministic and stochastic strategies and is based on a synthesis of the best features of several existing methods, including the genetic algorithm and the downhill simplex search scheme. On the basis of competitive evolution, the SCE-UA method introduces the new concept of complex shuffling. The method is capable of finding the global optimum and does not rely on the availability of an explicit expression for the objective function or its derivatives. The core ideas of the SCE-UA algorithm are competitive evolution and complex shuffling. A general description of the steps of the SCE-UA method is given below.
(1) To initialize the process, select p ≥ 1 and m ≥ n+1, where p is the number of complexes, m is the number of points in each complex, and n is the dimension of the problem. Compute the sample size s = pm.

(2) Generate a sample as follows. Sample s points x_1, ..., x_s in the feasible space and compute the function value f_i at each point x_i. In the absence of prior information, use a uniform sampling distribution.
(3) Rank the points as follows. Sort the s points in order of increasing function value and store them in an array D = {x_i, f_i, i = 1, ..., s}, so that i = 1 represents the point with the smallest function value.
(4) Partition D into p complexes A^1, ..., A^p, each containing m points, such that A^k = {x_j^k, f_j^k | x_j^k = x_{k+p(j-1)}, f_j^k = f_{k+p(j-1)}, j = 1, ..., m}.
(5) Evolve each complex A^k, k = 1, ..., p, according to the competitive complex evolution (CCE) algorithm.
(6) Shuffle the complexes as follows. Replace A^1, ..., A^p into D such that D = {A^k, k = 1, ..., p}, and sort D in order of increasing function value.
(7) Check the convergence. If the convergence criteria are satisfied, stop; otherwise, continue with step (8).
(8) Check the reduction in the number of complexes: if the minimum number of complexes required in the population, p_min, is less than p, remove the complex with the lowest-ranked points, set p = p - 1 and s = pm, and return to Step (3); if p_min = p, return to Step (3).
One key component of the SCE-UA method is the CCE algorithm mentioned in Step (5). This algorithm is based on the Nelder and Mead (1965) downhill simplex search scheme [1]. The flexible and robust features of the probabilistic approach are exploited in the SCE-UA method because the search begins with a randomly selected complex of points spanning the entire feasible space R^n. A sufficiently large number of points helps ensure that the complexes contain information regarding the number, location and size of the major regions of attraction, and the search can operate over the whole sample space. In the process of generating sub-complexes, competition makes points with better objective function values more likely to be selected as parents. In the process of evolving the sub-complexes, deterministic information about the response surface is used effectively by the SCE-UA method. All of this speeds the search in the direction of global improvement; in short, deterministic information leads the search towards global improvement. The strategies of competitive evolution and complex shuffling inherent in the method help to ensure that the information contained in the sample is efficiently and thoroughly exploited. These properties endow the SCE-UA method with good global convergence over a broad range of problems.
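For concreteness, a much-simplified sketch of the outer SCE-UA loop is given below. It is assumption-laden: the competitive CCE step of (5) is replaced by a single reflection step with a random fallback, the complex-reduction step (8) is omitted, and the stopping rule is just a fixed number of shuffling iterations.

```python
import random

def sce_sketch(f, bounds, p=2, m=None, iterations=20, seed=0):
    """Simplified SCE-style minimisation sketch (not the full Duan et al. CCE).

    f      : objective function taking a list of n floats
    bounds : list of (low, high) pairs, one per dimension
    p, m   : number of complexes and points per complex (here m defaults to 2n+1)
    """
    rng = random.Random(seed)
    n = len(bounds)
    m = m or (2 * n + 1)
    s = p * m

    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    # Step (2): sample s points in the feasible space and evaluate them.
    pop = [(x, f(x)) for x in (rand_point() for _ in range(s))]

    for _ in range(iterations):
        # Step (3): rank by increasing function value.
        pop.sort(key=lambda xf: xf[1])
        # Step (4): complex k receives the points ranked k, k+p, k+2p, ...
        complexes = [[pop[k + p * j] for j in range(m)] for k in range(p)]
        # Step (5), heavily simplified: reflect each complex's worst point
        # through the centroid of its remaining points.
        for comp in complexes:
            worst_x, worst_f = comp[-1]
            centroid = [sum(x[i] for x, _ in comp[:-1]) / (m - 1) for i in range(n)]
            candidate = clip([2 * centroid[i] - worst_x[i] for i in range(n)])
            cand_f = f(candidate)
            if cand_f >= worst_f:          # reflection failed: fall back to a random point
                candidate = rand_point()
                cand_f = f(candidate)
            comp[-1] = (candidate, cand_f)
        # Step (6): shuffle the evolved complexes back into one population.
        pop = [xf for comp in complexes for xf in comp]

    return min(pop, key=lambda xf: xf[1])

# Example: minimise a 2-D quadratic bowl.
best_x, best_f = sce_sketch(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                            bounds=[(-5, 5), (-5, 5)], iterations=50)
print(best_x, best_f)
```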

3 Application of the SCE-UA Method to Solve Groundwater Management Models

To investigate the performance of the SCE-UA method on groundwater management problems, a linear model that determines the maximum pumping rate and a nonlinear model that determines the minimum pumping cost are used as examples.

3.1 Linear Optimization Model: To Optimize the Maximum Pumping Rate

The first example problem is to determine the maximum pumping rate from a homogeneous, isotropic unconfined aquifer. The groundwater flow situation is adapted from McKinney and Lin [6]. The aquifer is composed of sand and gravel; the porous medium is homogeneous and isotropic, and the hydraulic conductivity is 50 m/d. The recharge per unit area is 0.001 m/d. There are 10 potential wells extracting water from the aquifer. Plan and elevation views of the phreatic aquifer are shown in Figure 1. Boundary conditions of the aquifer include Dirichlet boundaries on the north river and the south swamp, both at 20 m, and no-flux conditions on the east and west sides of the aquifer due to the mountains. Assuming that the Dupuit assumption is valid, the partial differential equation describing the groundwater flow can be written as

\frac{\partial}{\partial x}\left( K h \frac{\partial h}{\partial x} \right) + \frac{\partial}{\partial y}\left( K h \frac{\partial h}{\partial y} \right) + R - \sum_{i=1}^{10} Q_i \, \delta(x_i, y_i) = 0 \qquad (1)

where δ(x_i, y_i) is the Dirac delta function evaluated at (x_i, y_i) (L^{-2}), and R is the areal recharge rate (L/T).

Fig. 1. Aquifer for optimization examples (adapted from McKinney et al. [1994]): (1a) plan view and (1b) elevation view

Table 1. Maximum pumping results for different methods (units: m3/d)

Well    LP¹     GA      SCE-UA  GASAPF²
1       7000    7000    7000    7000
2       7000    7000    7000    7000
3       7000    7000    7000    7000
4       6000    7000    5987    6056
5       4500    2000    4477    4290
6       6000    6000    5986    6056
7       6800    7000    6814    6774
8       4100    4000    4094    4064
9       4100    4000    4094    4064
10      6800    7000    6814    6774
Total   59300   58000   59266   59058

The objective is to maximize the total pumping rate from the 10 wells in the aquifer, subject to the constraints that the hydraulic heads at the wells must be non-negative and that each pumping rate must be in the range of 0 to 7000 m3/d. For steady flow, the optimization model can be expressed as

\text{Maximize} \quad \sum_{i=1}^{10} Q_i \qquad (2)

\text{subject to} \quad h_i \ge 0, \quad i = 1, 2, \ldots, 10 \qquad (3)

\qquad\qquad\quad 0 \le Q_i \le 7000, \quad i = 1, 2, \ldots, 10 \qquad (4)
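As a sketch of how such a simulation-constrained model can be handed to a derivative-free global optimizer, one common option is a penalty formulation; the head computation below is only a stub standing in for the finite difference solution of model (1), and the penalty weight is illustrative.

```python
def simulate_heads(pumping):
    """Stand-in for the finite-difference solution of model (1): it should return
    the hydraulic head at each well for the given pumping rates (stub only)."""
    raise NotImplementedError("replace with the groundwater simulation model")

def penalised_objective(pumping, penalty=1.0e6):
    """Negative total pumping plus penalties, so that a minimiser solves (2)-(4)."""
    heads = simulate_heads(pumping)
    total = sum(pumping)
    violation = 0.0
    for q, h in zip(pumping, heads):
        violation += max(0.0, -h)                            # h_i >= 0          (3)
        violation += max(0.0, -q) + max(0.0, q - 7000.0)     # 0 <= Q_i <= 7000  (4)
    return -total + penalty * violation                      # maximising (2) == minimising its negative
```

With the sce_sketch outlined earlier (or any other derivative-free global optimizer), one would minimise penalised_objective over the box [0, 7000]^10 once simulate_heads is implemented; the bound constraints (4) could equally be enforced directly through the search bounds.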

The values of the hydraulic heads in the constraints are calculated by the groundwater simulation model (1) using the current values of the decision variables, the pumping rates. In order to compare the results of different optimization methods, as in McKinney et al. [6], a standard finite difference method is used to solve the linear equation set derived from the groundwater flow model (1) in terms of h^2, subject to the current boundary conditions. The parameter values are n = 10, m = 21, q = 11, α = 1, β = 21, p = 25. The results of solving the maximum pumping rate example using the SCE-UA method, the LP, the GA and the GASAPF are given in Table 1. The hydraulic heads at the wells and the hydraulic head contours corresponding to the total pumping rates obtained by the GASAPF and the SCE-UA method are shown in Table 2 and Figure 2, respectively. It is obvious from Table 1 that the total maximum pumping rate calculated by the SCE-UA method agrees more closely with the LP solution, which is almost equal to the global optimum, than those calculated by the GA and the GASAPF.

¹ Results by D. C. McKinney et al. [1994]. ² Results by Wu Jianfeng et al. [1999]. LP denotes linear programming; GA denotes the genetic algorithm; SCE-UA denotes the Shuffled Complex Evolution - University of Arizona method; GASAPF denotes the GA based on a simulated annealing penalty function.

Table 2. Hydraulic head at the wells corresponding to the optimized pumping of the GASAPF and the SCE-UA (m)

well     1      2      3      4     5     6     7     8     9     10
GASAPF   12.12  11.27  12.12  1.55  2.49  1.55  2.00  2.05  2.05  2.00
SCE-UA   12.07  11.23  12.07  0.13  0.08  0.20  0.10  0.10  0.16  0.11

Fig. 2. Maximum pumping example contours for different optimization methods

The hydraulic heads at the wells shown in Table 2 and the hydraulic head contours in Figure 2 indicate that the results from the SCE-UA method satisfy the constraints of the optimization model better than those from the GASAPF. All of this shows that the SCE-UA method can be used to solve the linear groundwater management model effectively and efficiently.

3.2 Nonlinear Optimization Model: To Optimize the Minimum Cost

The second example is to determine the minimum cost (capital and operating cost) of installing and operating wells to supply a given, exogenous groundwater demand of 30,000 m3/d from the unconfined aquifer shown in Figure 1. The basic conditions of the nonlinear optimization model, such as the properties of the aquifer, the boundary conditions and the distribution of wells, are the same as those of the linear optimization case above. The response of the aquifer to pumping at the 10 potential well locations is predicted by the finite difference model described above.

The objective of the nonlinear model is to minimize the total cost of the wells (the fixed cost of drilling the wells and the pumping equipment, plus the operating expense of the wells), subject to the constraints that the total pumping rate from the aquifer must satisfy the demand, the hydraulic head at the wells must be non-negative, and the individual pumping rates must be in the range 0-7000 m3/d to prevent aquifer dewatering. The model can be written as [6]

\text{Minimize} \quad \sum_{i=1}^{10} \bigl[ a_1 d_i + a_2 t (Q_i)^{b_1} (d_i - h_i)^{b_2} + a_3 t Q_i (d_i - h_i) \bigr] \qquad (5)

\text{subject to} \quad \sum_{i=1}^{10} Q_i \ge 30{,}000 \qquad (6)

\qquad\qquad\quad h_i \ge 0, \quad i = 1, 2, \ldots, 10 \qquad (7)

\qquad\qquad\quad 0 \le Q_i \le 7000, \quad i = 1, 2, \ldots, 10 \qquad (8)

where a_i (i = 1, 2, 3) and b_i (i = 1, 2) are constants given in Table 3, d_i [L] is the depth of the well at location i, and h_i is the hydraulic head at well i. The first term of the objective function is the fixed cost of drilling a well at location i, the second term represents the capital cost of the well and pumping equipment, and the third term is the operating expense of the well.

Table 3. Coefficients in the objective function

coefficient  a1 ($/m)  a2 ($/m4)  a3 ($/m4)  b1  b2  di (m)  t (d)
value        4000      0.005      0.002      1   1   20      365
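For illustration, objective (5) with the Table 3 coefficients can be written down directly; the hydraulic heads h_i would again come from the simulation model, and treating zero-rate wells as not drilled reflects the fixed-charge reading of the model and is an assumption of this sketch.

```python
A1, A2, A3 = 4000.0, 0.005, 0.002    # $/m, $/m^4, $/m^4 (Table 3)
B1, B2 = 1.0, 1.0
D_I, T = 20.0, 365.0                 # well depth d_i (m) and operating period t (d), Table 3

def well_cost(q, h):
    """Cost terms of one active well in objective (5): drilling, equipment, operation."""
    lift = D_I - h
    return A1 * D_I + A2 * T * (q ** B1) * (lift ** B2) + A3 * T * q * lift

def total_cost(pumping, heads):
    """Objective (5); wells with zero pumping are assumed not to be drilled (sketch assumption)."""
    return sum(well_cost(q, h) for q, h in zip(pumping, heads) if q > 0.0)
```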

Table 4. Optimization results of the SCE-UA, the NLP and the GA

well            NLP³ (m3/d)   GA (m3/d)    SCE-UA (m3/d)
1               5600          5664.27      5885.64
2               4550          5340.37      4833.40
3               5860          5770.24      5894.06
4               0             0            0
5               0             0            0
6               0             0            0
7               7000          6531.68      6694.23
8               0             0            0
9               0             0            0
10              7000          6694.91      6692.67
Total pumping   30010         30001.47     30000.00
cost ($)        129467        128558.05    128174.26

The minimum cost model is a nonlinear fixed-charge program. Using gradient-based nonlinear programming algorithms to solve problems with this type of objective function causes two serious computational difficulties [6] if no modification or approximation is made.

To avoid these computational difficulties, b_1 and b_2 are commonly set to 1 instead of their true values. This simplification has been adopted here in order to compare the nonlinear programming, GA and SCE-UA solutions; it is not, however, necessary for the GA or the SCE-UA method. The nonlinear programming, GA and SCE-UA solutions to the minimum cost pumping example are presented in Table 4. Comparatively, the pumping rates from the SCE-UA method agree best with the constraint equation, and the total cost from this method is the lowest. Furthermore, the SCE-UA pumping rates are symmetric, reflecting the symmetry of the aquifer system. The symmetric solution is not particularly exciting, but it provides a check on the search accuracy and the validity of the algorithm. All of this shows that, for the nonlinear groundwater management model, the SCE-UA solutions are better than those of the NLP and the GA.

4 Conclusions

For the linear optimization model, the solution of the SCE-UA method is very similar to that of the LP, which means the SCE-UA solution almost converges to the ideal global optimum, and it is better than the results of the GA and the GASAPF. For the nonlinear optimization model, the results from the SCE-UA method are closer to the global optimum than those from the NLP and the GA. All of this shows that applying the SCE-UA method to solve management models is effective and efficient. The SCE-UA method combines the direction-searching of deterministic non-numerical methods with the robustness of stochastic non-numerical methods, and in practice its results agree better with the global optimum than those of the GASAPF, a method which only combines two types of stochastic non-numerical methods. The parameters of the SCE-UA method can be selected easily and their physical meaning is definite, whereas the choice of the crossover and mutation probabilities of the GA is difficult and subjective. Thus, artificial effects on the optimization results due to parameter selection can to some degree be reduced or avoided by using the SCE-UA method. The SCE-UA method has the virtues of direction-searching non-numerical methods and the robustness of stochastic non-numerical methods. The global optimum can be reached, and continuity and differentiability of the objective function are not needed. One can foresee that the SCE-UA method will be applied widely in the hydrogeology field, for example to water resources optimization, pollution prevention, and optimization problems of high parameter dimensionality.

³ Results by D. C. McKinney et al. [1994]. NLP denotes nonlinear programming; GA denotes the genetic algorithm; SCE-UA denotes the Shuffled Complex Evolution - University of Arizona method.

References
1. Nelder, J.A., Mead, R.: A simplex method for function minimization. Computer Journal, 1965, 7(4):308-313.
2. Zhou Ming, Sun Shudong: Theory and Applications of Genetic Algorithm. Beijing: National Defence Industry Press, 2002.
3. Kang Lishan, Xie Yun, Luo Zuhua: Non-numerical parallel arithmetic - simulated annealing. Beijing: Science Press, 1998.
4. Zheng Hongxing, Li Lijuan: Stochastic optimization on parameters of water quality model. Geographical Research, 2001, 20(1):97-102.
5. Wang, Q.J.: The genetic algorithm and its application to calibrating conceptual rainfall-runoff models. Water Resources, 1991, 27(9):2467-2471.
6. McKinney, D.C., Lin, M.-D.: Genetic algorithm solution of groundwater management models. Water Resour. Res., 1994, 30(6):1897-1906.
7. Ritzel, B.J., Eheart, J.W., Ranjithan, S.: Using genetic algorithms to solve a multiple objective groundwater pollution containment problem. Water Resour. Res., 1994, 30(5):1589-1603.
8. McKinney, D.C., Lin, M.-D.: Genetic algorithm solution of groundwater management models. Water Resour. Res., 1994, 30(6):1897-1906.
9. Cieniawski, S.E., Eheart, J.W., Ranjithan, S.: Using genetic algorithms to solve a multiobjective groundwater monitoring problem. Water Resour. Res., 1995, 31(2):399-409.
10. Shao Jingli, Wei Jiahua, Cui Yali, et al.: Solution to groundwater management model using genetic algorithm. Earth Science - Journal of China University of Geosciences (in Chinese with English abstract), 1998, 23(5):532-536.
11. Lu Guihua, Li Jianqiang, Yang Xiaohua: Application of genetic algorithms to parameter optimization of hydrology model. Journal of Hydraulic Engineering, 2004, (2):50-56.
12. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science, 1983, 220(4598):671-680.
13. Dougherty, D.E., Marryott, R.A.: Optimal groundwater management. 1. Simulated annealing. Water Resour. Res., 1991, 27(10):2493-2508.
14. Wu Jianfeng, Zhu Xueyu, Liu Jianli: Using genetic algorithm based simulated annealing penalty function to solve groundwater management model. Science in China (Series E), 1999, 29(5):474-480.
15. Wu Jianfeng, Zhu Xueyu, Qian Jiazhong, et al.: Application of GASAPF to optimization model for fracture-karst water resources management. Journal of Hydraulic Engineering, 2000, (12):7-13.
16. Duan, Q., Sorooshian, S., Gupta, V.K.: Effective and efficient global optimization for conceptual rainfall-runoff models. Water Resour. Res., 1992, 28(4):1015-1031.
17. Duan, Q.Y., Gupta, V.K., Sorooshian, S.: Shuffled complex evolution approach for effective global minimization. Journal of Optimization Theory and Applications, 1993, 76(3):501-521.
18. Gan, Y.G., Biftu, G.F.: Automatic calibration of conceptual rainfall-runoff models: Optimization algorithms, catchment conditions and model structure. Water Resour. Res., 1996, 32(12):3513-3524.
19. Cooper, V.A., Nguyen, V.T.V., Nicell, J.A.: Evaluation of global optimization methods for conceptual rainfall-runoff model calibration. Water Sci. Technol., 1997, 36(5):53-60.

20. Sorooshian, S., Duan, Q., Gupta, V.K.: Calibration of rainfall-runoff models: Application of global optimization to the Sacramento soil moisture accounting model. Water Resour. Res., 1993, 29(4):1185-1194.
21. Kuczera, G.: Efficient subspace probabilistic parameter optimization for catchment models. Water Resour. Res., 1997, 33(1):177-185.
22. Hapuarachchi, H.A.P., Li, Z.J., Ranjit, M., et al.: Application of global optimization technique for calibrating the Xinanjiang watershed model. Lowland Technology International, 2001, 3:43-57.
23. Tanakamaru, H., Burges, S.J.: Application of global optimization to parameter estimation of the TANK model. Proc. Int. Conf. on Water Resour. and Environ. Res., Kyoto, Japan, 2:39-46.
24. Luce, C.H., Cundy, T.W.: Parameter identification for a runoff model for forest roads. Water Resour. Res., 1994, 30(4):1057-1069.
25. Eckhardt, K., Arnold, J.G.: Automatic calibration of a distributed catchment model. Journal of Hydrology, 2001, 251:103-109.

Integrating Hydrological Data of Yellow River for Efficient Information Services

Huaizhong Kou¹ and Weimin Zhao²

¹ Hydrology Bureau of Yellow River Conservancy Commission, No.12, ChengBei Road, Zhengzhou 450004, China
² Beijing Normal University, No.19, Xinjiekouwai Road, Beijing 100875, China
[email protected], [email protected]

Abstract. Isolated islands of hydrological information form a bottleneck for both the administrative authorities and the researchers of the Yellow River. It is urgent to integrate the hydrological data of the Yellow River and to ease public access to them, particularly because more and more hydrological data are collected by increasingly modernized measuring systems across the Yellow River basin. To this end, a Web-oriented application management system for hydrological information has been proposed. This paper describes the main techniques involved in the management system for building a common platform for hydrological data from different sources and for efficiently providing information services.
Keywords: hydrology, information services, data integration.

1 Introduction

Thanks to the great efforts dedicated to the Yellow River, its levees have not been breached nor its course changed over the last 60 years. The situation of the Yellow River has really changed very much: today its water volume has dropped remarkably, and catastrophic floods hardly ever take place. But its basic features never change: less water and more sediment, and an uneven time-space distribution of the river runoff. So the problems concerning the Yellow River have not been eradicated, although they are strikingly eased. On the contrary, some new issues have appeared due to the social and economic development in the basin: for example, a sharp contradiction between the demand and supply of water resources, water pollution that has not visibly slowed, the problem of zero-flow, etc. In order to cope with all such problems, many effective measures have been put into place, including water regulation over the whole basin and sediment-water discharge regulation operations. Obviously, all measures taken to harness the Yellow River and manage water resources in the basin are fundamentally based on hydrological information. A tremendous volume of hydrological information about the basin has been accumulated so far. But in contrast with the increasing demands posed by harnessing the Yellow River, this hydrological information is not well organized and managed. For example, not all hydrological information is stored on digital media, or more precisely in a relational database.

Some of it is still stored on paper media and is at risk of being damaged, for example the raw observation data stored at the LUHUN reservoir in the basin. In addition, all the original measurement information is kept separately by different administrative units. Although some management systems have already been implemented to store annually processed hydrological information and survey information on the river sections of the lower reach, respectively, they are isolated and cannot be used collaboratively. It is very hard for both researchers and the public to access hydrological information. Moreover, a lot of information is often needed at the same time to tackle a given problem; for example, information on evaporation, groundwater, precipitation, etc. is all necessary for water resources planning. But it is very inconvenient, and in some cases even impossible, to access it. Hence the way hydrological information is organized does not suit the current situation of harnessing the Yellow River. We have recently proposed a Web-based management system of hydrological information that aims at answering these information requirements. Its main ideas are to integrate heterogeneous hydrological data into a common platform and to provide comprehensive information services for various users. It will play an important role in keeping the Yellow River healthy and in supporting the sustainable social and economic development of the basin. It will also make hydrological information an open, sharable resource accessible to the public. This paper describes the main techniques used and introduces the key functionalities. The rest of this paper is organized as follows: Section 2 describes the related hydrological data; the framework of data integration and information services is discussed in Section 3; Section 4 presents the key functionalities, while implementation techniques are described in Section 5; and we conclude the paper in Section 6.

2 Related Hydrological Data Sources

Hydrological data recorded by scientific methods date back to 1900, when the first meteorological observation station in the Yellow River basin was built. Today, different kinds of relatively complete measurement systems for hydrological and meteorological data have been completed, or are under construction, in the basin and its tributaries. They are run by the Yellow River Conservancy Commission and the regional administrations, respectively. According to [1], there are 451 hydrological stations, 62 stage stations, 2,357 rainfall stations, 169 evaporation stations, 61 sediment survey sections in the Sanmenxia reservoir, 174 in the Xiaolangdi reservoir, 152 in the lower reach and 36 in the river mouth seashore region, etc. A satellite-based observing system is being built in the source region of the Yellow River to collect information on water resources there. In addition, 1,276 groundwater level observation wells have been built. These make up the principal data sources of hydrology and water resources information in the basin. The items measured and collected include precipitation, evaporation, water level, flow, sediment, water temperature, ice status, sediment sections of river courses, density flow, underground water, etc.

3 Framework of Data Integration and Information Services

As indicated in Section 1 and Section 2, there is a large volume of hydrology and water resources information, and on the other hand it is very hard, and sometimes even impossible, to obtain the needed information. Data integration provides a solution to these difficulties of information need and access. Figure 1 illustrates the framework of hydrological data integration and information services. It consists of several logical parts: data integration, the enterprise data server, the application server, and the Web server.

3.1 Data Integration

The availability of integrated hydrological data and water resources information from different sources is crucial for flood management, water resources planning and many other tasks involved in harnessing the Yellow River. Data integration is the first step toward making information open and sharable. The objective of data integration is to combine data from heterogeneous, distributed sources into a homogeneous, compact data platform while resolving a variety of data conflicts. In the case of the Yellow River, the data to be integrated include raw measurement data still on paper media, real-time collected data, legacy data systems, data from other measurement networks not under the control of the Hydrology Bureau of the YRCC (Yellow River Conservancy Commission), meteorological data from the regional and national meteorological systems, etc.

Fig. 1. Framework of Data Integration and Information Services

Among other things, one of the most challenging data integration tasks is to transfer a huge volume of raw measurement data from paper media into the computer system. Data from legacy systems can be extracted and processed by programming or with the help of a data integration tool. Meteorological information can be obtained from the related public meteorological systems over the Internet. Hydrological data kept at the six branch bureaus (located at Jinan, Zhengzhou, Sanmenxia, Yuci, Baotou and Lanzhou, respectively) can be processed locally and then sent to the headquarters, for example over the Internet. By setting up high-level agreements with the local administrations in the basin, data collected by measurement networks not in the charge of the YRCC may also be obtained. Once all the related data are ready, we can perform data cleaning and recoding and then integrate the data into a global data schema, which is maintained in an enterprise data server.

3.2 Enterprise Data Server

We organize the integrated data into different basic categories, such as historical flood data, groundwater data, geographic information on the various measurement stations, raw hydrological measurement data, historical data on the typhoons that have an impact on the Yellow River basin, hydrological analysis result data, etc. We take an advanced object-relational database management system as the data management platform of the enterprise data server; it must support data warehouse and OLAP techniques and must also manage both spatial and multimedia data. The data of the basic categories make up the professional bases and basic bases, represented by the professional DB and the basic DB in Fig. 1, respectively. The data contained in the professional DB and the basic DB within the enterprise data server are stored and managed in the way traditional transaction data are managed; they are aimed at efficiently supporting simple information queries and ordinary statistical calculations. In order to cope with topic-oriented complex information queries and analysis, however, an enterprise data warehouse (DW) must be built from the professional DB and the basic DB with the help of ETL (Extract, Transform and Load) techniques [2]. ETL is the set of processes by which data are extracted from numerous databases, applications and systems, transformed as appropriate, and loaded into target systems, including, but not limited to, data warehouses, data marts and analytical applications. The topics contained in the data warehouse are oriented to the many fields related to harnessing the Yellow River, such as flood management and water resources management. The enterprise DW supports advanced exploratory analysis of hydrological information, while the metadata store the descriptive information on how to locate and use the data contained in the DW.

3.3 Application Server

All the business logic components needed to realize the information services are located within the application server. Each of the components corresponds to a specific business functionality.

The components are independent of each other and can also call each other. The application server works as a container that provides the running environment for the business logic components. According to our requirement analysis of the information services, the main business components of the hydrological information services include data service components, geographical information service components, and computation and analysis service components. The data service components define all the services necessary to access hydrological data in the enterprise data server; access to the enterprise data server can only be performed by calling a data service. For example, a data service component can be created to support hydrological data exchange. The geographical information service components provide access to geographic data and the spatial relationships concerning all hydrological measurement stations, all measurement sections of the main river courses and reservoirs, etc. The computation and analysis service components focus on providing both ordinary hydrological analysis and advanced, specific exploratory data analysis. These components realize the following functionalities: statistical analysis of hydrological information, frequency analysis of hydrological extreme-value events, analysis of hydrological time series, analysis of river courses, analysis of water resources, etc. Many interdisciplinary methods are incorporated to implement the various computations and analyses of hydrological information at different levels; in fact, methods from hydrology, statistics and more advanced data mining are all exploited together in these components.

3.4 Web Server

The Web server provides an interactive interface to the different users. In order to meet the users' information requirements, various Web pages are dynamically generated and maintained by the Web server. It is in charge of receiving user requests, dispatching the received requests, getting the results and sending the responses back.

4 Functionalities of Information Services

The final objective of integrating hydrological information is to break the bottleneck in sharing hydrological information and to provide rich information services to the public. Hence, for public users the following information service functionalities are realized: information search, information publication, information exchange, application services and personalized service ordering. Information search: allows users to access over the Internet the basic hydrological information associated with any given measurement station, including water level, flow, sediment, water pollution index, flow velocity, evaporation, precipitation, etc. Information publication: regularly pushes the latest comprehensive hydrological information to users over the Internet.


The comprehensive hydrological information that users are concerned with may include analysis reports on floods, rainfall, river course changes, water resource surveys, riverbed silting, the volume status of reservoirs, etc. Application services: different from information search and information publication, these provide more specific services to support many important applications, such as flood prevention, soil erosion control, sediment-water discharge regulation, planning of the measurement station network, regulation and utilization of the main reservoirs along the Yellow River, etc. All these specific services are closely related to corresponding specialized calculation models, which are normally implemented at the application server as business logic components; examples include the flood evolution model, river channel evolution model, sediment prediction model and comprehensive evaluation model of water quality. Personalized service ordering: allows users to personalize their information service requirements. Researchers from universities, academic institutions or governmental administrations often need specific hydrological information services for their research projects. In these cases, they can formulate a specification of their information demands and submit it to the system; the system will then build the appropriate service components to meet the personalized demand. The personalized services can be obtained over the Internet. Data exchange is aimed at enabling information communication at the application level, so that other applications can directly access authorized hydrological information services.

5 Implementation

The main architecture of our implementation conforms to an N-tier structure with the J2EE [3] platform at the middle tier, as shown in Fig. 2.

Fig. 2. Implementation architecture (client tier: applications and browsers; middle tier: J2EE Web container, with information search, information publication and data exchange components, and EJB container, with geographic information service, data service, hydrological computation and hydrological analysis components exposed as JAX-RPC Web Services; EIS tier: database servers, applications and files)


The techniques exploited include a DBMS for data management, JavaBeans for realizing the business logic of hydrological information, JSP and Servlets for building Web pages that support interaction with clients, Web Services for supporting data exchange, etc. The implementation follows the development principles of information systems set up for the "Digital Yellow River" project [4]. The client tier consists of applications and browsers. It is responsible for presenting data to users, interacting with users and communicating with the middle tier. The techniques used at this tier include the HTTP protocol, the RMI protocol, the Simple Object Access Protocol (SOAP), XML messaging, etc. The client tier communicates with the middle tier through well-defined interfaces exposed at the middle tier; in some specifically authorized cases it can also set up direct communication with the EIS tier. Users can perform all operations they need at the client tier, for example downloading tables of water level and flow data related to one or more specific stations. The middle tier includes two parts, the EJB container and the Web container. Within the EJB container, individual components are developed using JavaBeans techniques and realize the hydrological business computations. The applications at the client tier and the components in the Web container can call them directly to complete high-level functionalities; for example, an application at the client tier can retrieve specific hydrological data by calling components in the EJB container through the SOAP protocol. All tasks related to Web processing are done within the Web container, such as serving HTML, instantiating Web page templates, and formatting JSP pages for display by browsers. In most cases, in order to complete these tasks, the corresponding components in the EJB container are called and combined; for example, data exchange in the Web container must call some data service components in the EJB container. For some tasks, however, the Web container can also directly access data at the EIS tier without calling components in the EJB container. The EIS (Enterprise Information System) tier contains existing information systems, files, database management systems, etc. For example, the real-time water-rain monitoring and forecasting system currently operated at the Hydrology Bureau of the Yellow River Conservancy Commission is one of the existing information systems, and the partially established database of Yellow River hydrology from the end of the 1980s is one of the legacy databases. Java Connector Architecture techniques [5] are used to integrate existing application systems and database systems with our Web-oriented hydrological information management system, which is built on a Web application server product such as IBM WebSphere or BEA WebLogic.
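To make the call path from the Web container to the business components concrete, here is a minimal, hedged sketch in plain Java (deliberately free of servlet or EJB API dependencies) of a web-tier page renderer delegating to a business component and formatting the result as HTML. Class names, method names and the sample data are illustrative, not taken from the system described above.

```java
import java.util.Arrays;
import java.util.List;

// Stand-in for a business component that would live in the EJB container.
interface FlowQueryComponent {
    List<Double> dailyFlow(String stationId);   // m^3/s, illustrative units
}

// Stand-in for a JSP/servlet in the Web container: it never touches the EIS tier
// directly, it only calls the business component and formats the response.
class FlowPageRenderer {
    private final FlowQueryComponent component;

    FlowPageRenderer(FlowQueryComponent component) {
        this.component = component;
    }

    String renderHtml(String stationId) {
        StringBuilder html = new StringBuilder("<table>");
        for (double flow : component.dailyFlow(stationId)) {
            html.append("<tr><td>").append(flow).append("</td></tr>");
        }
        return html.append("</table>").toString();
    }

    public static void main(String[] args) {
        // Fixed, invented sample data in place of a real component call.
        FlowQueryComponent stub = stationId -> Arrays.asList(1520.0, 1480.5, 1610.2);
        System.out.println(new FlowPageRenderer(stub).renderHtml("ST-001"));
    }
}
```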

6 Conclusion

Among many water problems, drought and water shortage particularly threaten the north and northwest of China. Furthermore, such problems become more and more serious as water demands grow rapidly due to population growth, urbanization and economic development. The situation of water resources in the Yellow River basin has a close impact on national water problems and water strategies. Both scientific management and planning of water resources in the Yellow River basin are vitally important, so they must be put on solid ground.


Hydrological information is crucial to building such solid ground for water management and planning. This paper presented, briefly rather than in great detail, a framework for integrating hydrological data and providing efficient information services. A uniform hydrological data platform, built by integrating various data from different sources, can offer strong support for information sharing and exchange. Hydrological information services built on the uniform data platform can greatly benefit much of the work related to basin water management, such as real-time water supply, control of the supply of water resources, assessment of water use, etc.

Acknowledgement. We express our sincere thanks to Professor Yuguo NIU, Director of the Bureau of Hydrology of YRCC, for his strong support of our work. Special thanks to Professor Wentao LIU, Director of the Bureau of Digitalization of YRCC, for his constructive suggestions.

References
1. Y.G. Niu, Construction and Development of Hydrology and Water Resources Monitoring System in the Yellow River, in Proceedings of the 1st IYRF, Volume V, pp. 39-52, Zhengzhou, China, ISBN 7-80621-799-1, Yellow River Conservancy Press, 2004.
2. R. Kimball et al., The Data Warehouse Lifecycle Toolkit, Wiley, ISBN 0-471-25547-5, 1998.
3. http://java.sun.com/j2ee/
4. Digital Yellow River Project, ISBN 7-80621-714-2, Yellow River Conservancy Press (in Chinese), 2003.
5. http://java.sun.com/j2ee/connector/

Application and Integration of Information Technology in Water Resources Informatization*

Xiaojun Wang and Xiaofeng Zhou
School of Computer & Information Engineering, Hohai University, Nanjing 210098
[email protected]

Abstract. Informatization is an inevitable path of historical development. As an important industry of the national economy, water resources has an important status and role in informatization, and Water Resources informatization is an important feature of Water Resources modernization. By studying the essence of Water Resources informatization, a model of the Water Resources informatization system can be obtained. Every layer of the Water Resources informatization system needs to use many information technologies. An information service platform constructed with Web Services can solve the integration of these information technologies in Water Resources informatization and provide sharing of both data and application systems.

1 The Definition and Meaning of Water Resources Informatization

The concept of informatization originated in Japan in the 1960s, and the concepts of 'information society' and 'informatization' came into wide use in Western societies in the 1970s. The understanding of 'informatization' differs considerably between countries, but the understanding of 'information society' is common: in an information society, information not only raises productivity but also helps to solve social problems and to extend the domain of human activity. In China, informatization is defined as the historical process of breeding and developing a new productivity, represented by intelligent tools, for the benefit of society. China is a developing country whose industrialization is not yet complete. Impelled by global informatization, the government has adopted the strategy of informatization driving industrialization, which reflects the important status and role of informatization in national development. Awareness of informatization keeps advancing at all levels of government, in all departments of the national economy and in all enterprises, and the country has also established corresponding informatization assessment guidelines to quantitatively evaluate the informatization level of every region. Water Resources informatization is the historical process of breeding and developing a new productivity, represented by intelligent tools, to advance the benefit of the Water Resources industry.

* This work was supported by the National Natural Science Foundation of China under grant No. 60573098, and by the high-tech project of Jiangsu province under grant No. BG2005036.



In other words, first, the key to Water Resources informatization is to breed and develop intelligent tools, raising the degree of intelligence of Water Resources technology and the degree of digitization of Water Resources administration by making full use of the most advanced technology (chiefly information technology); second, the purpose of Water Resources informatization is to raise the technological level and administrative efficiency, and thereby the overall benefit of the Water Resources industry; third, Water Resources informatization is part of the whole historical process of the informatization of society. The chief task of Water Resources informatization is to apply information technology widely in Water Resources operations, to construct the foundation infrastructure of Water Resources information systems, to adequately mine the underlying knowledge in information resources, to raise the overall level of flood prevention and control, water resources optimization and control, water project supervision and control and water administration, and thereby to accelerate the modernization of Water Resources operations.

2 Layer Model and Information Flow of Water Resources Informatization

According to the general pattern of informatization development, the realization of Water Resources informatization divides into three phases: the preliminary informatization phase, the basic informatization phase and the full informatization phase. The first phase primarily solves the problems of data gathering and sharing, with the target that the information systems can provide whatever data is needed for water resources work; the second phase primarily turns data into useful information and can answer specific questions posed by applications, namely providing subject-oriented information to applications; the third phase primarily abstracts knowledge from information and provides feasible solution schemes supported by decision-support technology. Corresponding to the three phases, the general system architecture of Water Resources informatization consists of three layers: the data underlay layer, the information service layer and the decision-supporting layer. Every layer performs its own task and provides services for the layer above. The object handled by the data underlay layer is data, the object handled by the information service layer is information, and the object handled by the decision-supporting layer is knowledge. The data underlay layer is the foundation of the system, the information service layer is the key of the system, and the decision-supporting layer is the core of the system. The general system architecture is shown in Figure 1. In the data underlay layer, the pivotal technologies are data gathering, data transfer, data storage and OLTP; the emphasis is on building the foundation data environment. The aim is to make data as abundant as possible and to realize data sharing. 'Digital' is the primary characteristic of this layer. In the information service layer, the pivotal technologies are data integration, data assimilation and OLAP (including, for example, data warehouses, spatial data and visualization); the emphasis is on building the information service platform. The aim is to integrate information sufficiently and to realize information services. 'Informatization' is the primary characteristic of this layer.


In the decision-supporting layer, the pivotal technologies are knowledge discovery, AI and expert systems; the emphasis is on building the DSS. The aim is to make knowledge highly usable and to realize decision-supporting services. 'Knowledgization' is the primary characteristic of this layer.

Fig. 1. System general architecture

3 The Pivotal Technologies Involved in Each Part of Water Resources Informatization

3.1 Technologies Involved in the Data Underlay Layer

The pivotal technologies involved in the data underlay layer include 3S technology, communications and computer network technology, database technology and online transaction processing technology. They accomplish the gathering, transfer and storage of data. 3S Technology: an organic whole consisting of GPS, RS and GIS. It is an important supporting technology for obtaining, storing, managing, updating, analyzing and applying spatial data, and it is one of the pivotal digitization technologies because most of the data involved in water resources information is spatial data related to geographic position. Communications and Computer Network Technology: an important foundation of Water Resources informatization, guaranteeing the timely and accurate transfer of water resources information. Database Technology: the primary data storage technology today, and an important one for as long as can be foreseen. Choosing an advanced database system is pivotal for Water Resources informatization, because the database system is one of the main guarantees of the capability and efficiency of the whole water resources informatization system. Online Transaction Processing Technology: user-driven transaction processing, primarily used to maintain large amounts of data and to perform simple data retrieval in the water resources informatization system; OLTP ensures the integrity and consistency of the data.
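The OLTP point about integrity and consistency can be illustrated with a small, hedged JDBC sketch: a measurement row and its audit row are written inside one explicit transaction, so either both are stored or neither is. The JDBC URL, table names and column names are placeholders, not those of any real water resources system.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Minimal OLTP-style insert wrapped in a transaction.
public class MeasurementWriter {

    public static void storeMeasurement(String jdbcUrl, String stationId,
                                        double waterLevel) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
            conn.setAutoCommit(false);            // start an explicit transaction
            try (PreparedStatement insert = conn.prepareStatement(
                     "INSERT INTO measurement (station_id, water_level) VALUES (?, ?)");
                 PreparedStatement audit = conn.prepareStatement(
                     "INSERT INTO measurement_audit (station_id) VALUES (?)")) {
                insert.setString(1, stationId);
                insert.setDouble(2, waterLevel);
                insert.executeUpdate();
                audit.setString(1, stationId);
                audit.executeUpdate();
                conn.commit();                    // both rows become visible together
            } catch (SQLException e) {
                conn.rollback();                  // keep the data consistent on failure
                throw e;
            }
        }
    }
}
```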


3.2 Technologies Involved in the Information Service Layer

The pivotal technologies involved in the information service layer include online analytical processing technology, data warehouse technology and distributed spatial database technology. They provide subject-oriented, refined information services. Online Analytical Processing Technology: the fast analysis of shared multi-dimensional information, based on actual applications and the usage demands placed on OLAP products. OLAP can perform all kinds of fast analysis on massive data from different viewpoints and at different levels, providing a decision basis for high-level water resources decision-makers. Data Warehouse Technology: a data collection that supports the decision-analysis processing of an enterprise or organization; it is subject-oriented, integrated, non-volatile and accumulates over time. The data warehouse provides a good data foundation for Water Resources informatization. Data warehouses designed by subject, according to the different data requirements of water resources applications, integrate data of different periods and kinds and make them available to the related application systems; this raises the efficiency with which application systems obtain data and makes complicated data analysis feasible. Spatial Database Technology: used to store spatial data, that is, data describing the position, shape, size and distribution characteristics of spatial entities. Spatial data can describe not only the spatial position and form of an entity itself but also its attributes and spatial relations. Most water resources data are correlated with spatial position, so administering and processing water resources data with distributed spatial database technology yields twice the result with half the effort.

3.3 Technologies Involved in the Decision-Supporting Layer

The pivotal technologies include data mining and knowledge discovery technology, artificial intelligence and expert system technology, and decision support system technology. They provide intelligent decision services. Data Mining and Knowledge Discovery Technology: data mining is the extraction of interesting knowledge from large amounts of data; the knowledge is implicit, previously unknown and potentially useful information, and it may be expressed in the form of concepts, rules, regularities, patterns, etc. Using data mining technology to discover useful knowledge from water resources data can help to advance the level of decision making. Artificial Intelligence and Expert System Technology: artificial intelligence simulates the intelligent behavior of the human brain using computers, and the expert system, as one branch of AI, simulates the thinking process of a human expert to solve all kinds of problems within a domain. There are many problems in the water resources domain that need to be solved by experts, and AI can provide help in solving them. Decision Support System Technology: a human-machine system with intelligent functions that supports decision activities, based on management science, operations research, cybernetics and behavioral science, using computer technology, simulation technology and information technology, and aimed at semi-structured decision problems.


A DSS can provide the necessary data, information and background information for the decision-maker, help to clarify the decision aim and recognize the problem, establish or modify decision models, provide alternative schemes, appraise and choose among them, carry out analysis, comparison and judgement, and provide the necessary support for a correct decision. The decision-supporting service system represents the highest level of development in Water Resources informatization.
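The kind of subject-oriented, multi-dimensional summary that the information service layer feeds into decision support can be illustrated with a small, hedged Java sketch that rolls daily rainfall records up to (basin, month) averages. In a real system this would be a query against the data warehouse rather than an in-memory computation; the class names and sample values are invented.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RainfallRollup {

    // Hypothetical fact record: one daily rainfall observation.
    record RainfallFact(String basin, String month, double rainfallMm) {}

    // Roll the facts up to average rainfall per (basin, month) cell,
    // i.e. a tiny two-dimensional OLAP-style summary.
    static Map<String, Double> averageByBasinAndMonth(List<RainfallFact> facts) {
        return facts.stream().collect(Collectors.groupingBy(
                f -> f.basin() + "/" + f.month(),
                Collectors.averagingDouble(RainfallFact::rainfallMm)));
    }

    public static void main(String[] args) {
        List<RainfallFact> facts = Arrays.asList(
                new RainfallFact("Upper basin", "2005-07", 12.4),
                new RainfallFact("Upper basin", "2005-07", 8.1),
                new RainfallFact("Lower basin", "2005-07", 25.0));
        System.out.println(averageByBasinAndMonth(facts));
    }
}
```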

4 Integration Model of Information Technology in Water Resources Informatization

Water Resources informatization involves a large number of information technologies, and each kind of technology has many different realization methods and products. How to integrate application systems developed with all these kinds of information technology into an organic whole is a main problem that must be solved in Water Resources informatization development. We therefore construct an information service platform for the Water Resources informatization operational application systems using Web Services technology, achieving the integration of the different information technologies through middleware technology. The information service platform is shown in Figure 2.

Fig. 2. The architecture of the information service platform

In the platform, the storage facility stores all kinds of Water Resources informatization information, mainly including various databases, knowledge bases, data warehouses, etc. The platform manager takes charge of the effective management and cooperation of every part of the platform, mainly providing services such as directory service, information publishing, load balancing, configuration, certificate authority, accounting and priority. The content of this part is independent of the specific services and applications of Water Resources informatization. The main body of the Water Resources informatization operational applications consists of objects, components, middleware, functions, subroutines, etc., which implement the functional logic; these can reside on different platforms and be implemented in any language. To make this possible, these modules must first be described with WSDL and then registered and published in the registration center using UDDI.


Web Services are the providers of the platform information services; if required, there can be multiple Web Services, which can provide different kinds of information services and can interoperate through SOAP. The information service platform is logically unified but physically distributed, and it is responsible for providing the related services and managing resources. The components of the platform may be constructed on different systems and implemented in different languages. The platform cannot be developed and exist in isolation; it will be upgraded along with the further development of the applications. In other words, the platform is built up by developing the applications in accordance with Web Service standards.
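A hedged illustration of the interoperability idea: a caller that only knows a service's WSDL-described operation and parameters can build a SOAP envelope and post it over HTTP, regardless of the language or platform the service is implemented in. The namespace, operation and element names below are invented for the example; only the SOAP 1.1 envelope namespace is real.

```java
// Builds a minimal SOAP 1.1 request for a hypothetical water-level operation.
public class SoapEnvelopeExample {

    static String buildEnvelope(String stationId) {
        return ""
            + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "  <soap:Body>"
            + "    <getWaterLevel xmlns=\"urn:example:water-resources\">"
            + "      <stationId>" + stationId + "</stationId>"
            + "    </getWaterLevel>"
            + "  </soap:Body>"
            + "</soap:Envelope>";
    }

    public static void main(String[] args) {
        // In a real deployment this envelope would be POSTed to the endpoint
        // listed in the service's WSDL; here we only print it.
        System.out.println(buildEnvelope("ST-001"));
    }
}
```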

5 Conclusion

Water Resources informatization is a lengthy and complicated process that must follow the general pattern of informatization development. It involves a great many information technologies, and the problem to be solved first is how to integrate these technologies and obtain the greatest benefit from them; otherwise, Water Resources informatization will find it difficult to obtain the expected results. Although Web Services technology still has unsolved problems, it runs without difficulty on an industry intranet. The integration model given in this paper can solve well the integration problem of the various kinds of information technology and can integrate applications into an organic whole at every stage and in every direction, fundamentally solving the problems of data sharing and application sharing.


An Empirical Study on Groupware Support for Water Resources Ontology Integration

Juliana Lucas de Rezende1, Jairo Francisco de Souza1, Elder Bomfim1, Jano Moreira de Souza1,2, and Otto Corrêa Rotunno Filho3

1 COPPE/UFRJ – Graduate School of Computer Science, 2 DCC/IM – Institute of Mathematics, 3 COPPE/UFRJ – LabHID – CT/Bloco I, Federal University of Rio de Janeiro, PO Box 68.513, ZIP Code 21.945-970, Cidade Universitária – Ilha do Fundão, Rio de Janeiro, RJ, Brazil
[email protected], {jairobd, elderlpb, jano}@cos.ufrj.br, [email protected]

Abstract. In this paper we discuss groupware support for ontology integration and present an experiment carried out with a group of specialists in the Water Resources domain and in Ontology Engineering. The main goals of this experiment are to create, in a collaborative way, a well-formed ontology for a Water Resources domain and evaluate the ontology integration process. The motivation of our work came from the development of knowledge management and information systems for the hydrological domain, where ontologies can be used as a broker between heterogeneous systems, to facilitate tasks inter-mediation and to help in knowledge storage, being used to publish semantic enriched documents. Since the development of a great ontology has a high cost, ontology re-use becomes an important activity; and consequently, so does the integration process. For this reason, tool support is essential.

1 Introduction

Ontology [1] is a formal specification of concepts and their relationships. By defining a common vocabulary, ontologies reduce concept definition mistakes, allowing shared understanding, improved communication, and a more detailed description of resources. Ontologies have crossed the borders of philosophy, achieving an important position in Computer Science. The creation of an ontology supplies a structure for the domain information that assists common agreement about the domain between people and between software agents, eliminating possible ambiguities and allowing interoperability between systems and the re-use of information. Besides building a hierarchy of the domain concepts, ontologies also make explicit the rules that govern such concepts, allowing the inference of new information; in this way an ontology becomes comparable to a knowledge base of the modeled domain. At the software level, the biggest advantage of ontology use is the separation between the domain knowledge and the system's operational knowledge. In the database area, ontologies are used for the integration of databases and the updating of legacy data.


In information systems, ontologies can be used as a broker between heterogeneous systems, to facilitate the inter-mediation of tasks and to help in knowledge storage, being used to publish semantically enriched documents [8].

1.1 Motivation

The need for this work arose from the development of knowledge management systems for the COPPE/UFRJ Hydrology Laboratory (LabHID). An important aspect that makes these systems efficient, in an environment where the activities are strongly conditioned by the domain, is that the limitations and rules imposed by this domain are relevant during information processing and analysis. Because of their characteristics, we can use ontologies to ensure this aspect is respected. In the LabHID, ontologies can be used to guarantee that artifacts will be classified and easily identified for re-use. To have a common vocabulary between the laboratory systems and all the members (who are from different areas or institutions), it is necessary to develop a single ontology. The knowledge management system addressed here is Thoth [8], a CBR (Case-Based Reasoning) tool for the re-use of scientific processes, which creates an archive of information on water resources for the LabHID. To create a new ontology, the ontology designer doesn't need to start from scratch; instead, he can re-use existing ontologies. To use several ontologies in the creation of a new one, it is necessary to join them. Ontology re-use is now one of the important research issues in the ontology field [3]. In [4], we investigate the problems that may arise and propose an ontology integration support module to be added to COE [7]. In this paper we describe an experiment on tool support for collaborative ontology manipulation, including ontology integration, and discuss how COE (Collaborative Ontology Editor) can support it. Our previous results in the experiment indicated that the ontology designer achieved a reduction in the time dedicated to creating a new ontology, and obtained better quality ontologies. This paper thus explores whether groupware tools can improve ontology integration so that their effort can be justified. The paper is organized as follows: Section 2 presents Thoth and shows why the ontology is necessary. Section 3 presents some tools for ontology integration, discusses ontology re-use methodologies and briefly introduces COE. Section 4 presents our experiment for evaluating tool support for collaborative ontology manipulation including ontology integration. Section 5 discusses the results of the experiment. Conclusions are given in Section 6.

2 Thoth – CBR Tool for Scientific Processes Re-use

The main objective of the studies carried out by the professionals of the LabHID is to identify the potential, limitations and problems of Brazilian hydrological basins, in order to help determine appropriate policies for using the resources found in each basin. These studies are accomplished through agreements with the Brazilian governmental institutes responsible for the definition of these policies [8].


In this environment, a workflow management system aids activity planning and allows the control and coordination of these activities. In addition, the information generated during the planning and execution of the work can be considered as the activities' documentation. However, the workflow system does not aim to support collaboration among the professionals, faster knowledge identification and creation, or easier re-use of best practices. That is the objective of Thoth. Thoth assumes that knowledge about previous analyses can increase work efficiency by allowing the re-use of successful practices and an understanding of how the data was acquired and previously handled. Thus it uses the information recorded in the workflow system with the main purpose of allowing process re-use, the visualization of information about past activities, and collaboration among researchers. The CBR approach is used to reach these aims. This approach takes advantage of the knowledge obtained from previous attempts to solve a similar problem (a case); here it works by drawing an analogy between the process instances stored in the workflow management system and the cases. The use of similarity measures allows the aspects that are most important from the researchers' point of view to be used during the localization and re-use of cases. However, these aspects are strongly dependent on the domain where the tool is used. In projects related to the hydrology area in particular, it is important to recognize the hydrological basins (or their respective sub-basins), the scientific phenomena that were studied and the models that were used during the analyses carried out. The recognition of these characteristics is performed through the ontology.
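A hedged sketch of the similarity idea just described: each stored process (case) is classified by a set of ontological concepts, and candidate cases can be ranked by how much their concept sets overlap with the concepts supplied in a query. The Jaccard coefficient used here is one simple choice, not necessarily the measure Thoth implements, and the concept names are invented.

```java
import java.util.HashSet;
import java.util.Set;

public class CaseSimilarity {

    // Overlap between the concepts classifying a stored process and the
    // concepts supplied in a query, as |intersection| / |union| (Jaccard).
    static double similarity(Set<String> caseConcepts, Set<String> queryConcepts) {
        if (caseConcepts.isEmpty() && queryConcepts.isEmpty()) {
            return 0.0;
        }
        Set<String> intersection = new HashSet<>(caseConcepts);
        intersection.retainAll(queryConcepts);
        Set<String> union = new HashSet<>(caseConcepts);
        union.addAll(queryConcepts);
        return (double) intersection.size() / union.size();
    }

    public static void main(String[] args) {
        Set<String> storedCase = Set.of("RiverBasin", "SedimentTransport", "RegressionModel");
        Set<String> query = Set.of("RiverBasin", "SedimentTransport");
        System.out.println(similarity(storedCase, query)); // prints 0.666...
    }
}
```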

Fig. 1. Thoth Architecture

The Thoth architecture is shown in Fig. 1. The ontology is used at two moments: first, in the classification of processes, carried out through the analysis of the data (and its respective metadata) used in the processes.


Examples of such data include visualized maps, read and generated documents and reports, spreadsheets and tables of data, among others. The second moment is during the localization of similar processes: the similarity between processes is defined by comparing the ontological concepts supplied in the search with those that classify the processes, or by comparing the concepts that classify two or more processes. The ontology used in this environment brings some challenges. First, the ontology construction process and the similarity definition processes are complex, and the ontology and its relations and concepts can be modified as the system evolves. Moreover, part of the maps, reports and data that are analyzed come from other institutions, such as the National Water Agency (ANA), which regulates the use of the resources of Brazil's rivers and federal lakes, and the Brazilian Institute of Geography and Statistics (IBGE), supplier for example of Brazilian tax data and the economic profile of Brazilian regions, amongst other Brazilian or international research agencies. These data bring new concepts, rules and knowledge that need to be represented in the ontology, for example a more detailed description of a basin or river not yet represented in the ontology. Thus, it is important to recognize that the ontology must be in constant evolution, with the possibility of adding new concepts, relationships and restrictions, or of modifying existing ones.

3 Tool Support for Ontology Integration

Ontolingua Server, OntoEdit and APECKS are client-server approaches to knowledge sharing through ontologies. The Ontolingua Server is a set of tools and services to help the development of shared ontologies. OntoEdit is a collaborative ontology editor for the Semantic Web. APECKS is an ontology server that supports collaboration by allowing individuals to keep private ontologies [4]. Studies carried out in the area of ontology integration offer a few methodologies, and there is no consensus regarding a single one [3]. However, more attention is being paid to the area, and a few academic systems have emerged, such as PROMPT and Chimaera. PROMPT is an interactive ontology-merging tool that guides the user through the merging process, making suggestions, detecting conflicts and proposing conflict-resolution strategies. Chimaera is an ontology merging and diagnosis tool developed by the Stanford University Knowledge Systems Laboratory [4].

3.1 Ontology Re-use

As mentioned earlier, ontology re-use is one of the important research issues in the ontology field. There are two different re-use processes: merge and integration. Merge is the process of building an ontology on one subject by re-using two or more different ontologies on that same subject [3]. Integration is the process of building an ontology on a subject by re-using one or more ontologies on different subjects [3]. In an integration process the source ontologies are aggregated and combined to form the resulting ontology, possibly after the re-used ontologies have undergone some changes, such as extension, specialization or adaptation. It should be noted that both re-use processes are included in the overall process of ontology building.


A lot of research has been conducted in the merge area: there is a clear definition of the process, operations and methodology, and several ontologies have been built [3]. In the integration area a similar effort is only beginning. The most representative ontology building methodologies [7] recognize integration as part of ontology development, but they do not even agree on what integration is. However, this process is far more complex than previously anticipated; it is a process in its own right.

3.2 Collaborative Ontology Editor (COE)

As ontology use becomes more common in a wide variety of applications, with many projects developing new ontologies, it is possible to find two or more different ontologies representing the same or similar knowledge [6]. Taking into account that ontology creation can be a complex and time-consuming process, a tool that enables the sharing and re-use of ontologies is of great utility. COE [7] is a peer-to-peer (P2P) application designed to allow ontology developers to share their knowledge. It supports many activities: ontology creation, editing, sharing, re-use, and other traditional P2P mechanisms. It is implemented over COPPEER1; therefore, its users can also take advantage of the non-specific collaboration tools it provides, such as an instant messaging (chat) tool and a file exchange tool. COE provides a visual interface [Fig 2] where users can manipulate the ontology in graphical or textual form. The user can navigate the ontology (moving nodes to the center) and insert, remove, and move nodes (and their corresponding sub-trees).

Fig. 2. COE Interface

1 COPPEER [7] is a framework for creating very flexible collaborative P2P applications that provides non-specific collaboration tools as plug-ins.


3.3 Ontology Integration in COE

We will now discuss and analyze the integration process in relation to the overall ontology building process. The building process framework shown in Fig 3 is well explained in [3]. There, as in the major works on ontology integration, this process, in this case the "Apply integration operation" activity, is only cited. In an attempt to solve this problem we proposed in [4] a strategy for carrying out the ontology integration, which was added to the ontology building process. As in any process, integration is made up of several activities, and we have identified the activities that should take place along the ontology building lifecycle to perform integration. Integration does not substitute for building; it is a part of it. The activities that precede integration help the ontology designer to analyze, compare, and choose the ontologies that are going to be re-used. The ontology integration process was divided into three phases: the first involves "Syntactic Reduction" (SR) and "Linguistic Comparison" (LC); in the second, "Ontology Reduction" (OR) is carried out; and in the third phase, "Semantic Comparison" (SC) [Fig 3].

Fig. 3. Ontology Building Process Extended from [3]

SR is applied to the concept's name, description and list of synonyms, using a stemming algorithm and stop-word elimination. In LC, concepts with a high rate of similar terms are classified as "very similar", while concepts with a low rate of similarity are passed on to phase 3. In OR, the ontological structure is simplified and properties, as well as axioms, are eliminated; only the 'is-a' relations are kept, because they are needed in the next phase. In SC, the objective is to exploit the hierarchical structure of the concepts: the low-similarity concepts from the first phase are compared considering only their positions in their trees, which allows the region where a concept could be placed to be found. Here too, concepts rated as "very similar" are suggested for integration. More details can be found in [4]. Design criteria guide the integration operations so that the resulting ontology has an adequate design and is of good quality; after integration, one should evaluate and analyze the resulting ontology. In summary, the integration process can be organized into three stages: find the places in the ontologies where they overlap; relate concepts that are semantically close via equivalence and subsumption relations (aligning); and check the consistency, coherency and non-redundancy of the result [1]. Due to the difficulty of performing automatic ontology integration, the research presented in [4] proposes an ontology integration


support module, through which the system suggests integrations to the user. The suggestions possess a degree of trustworthiness.
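To make the first phase more concrete, here is a hedged Java sketch of syntactic reduction (lower-casing, stop-word removal and a crude suffix-stripping rule standing in for a real stemmer) followed by a linguistic comparison that rates two concept names by the proportion of shared reduced terms. The stop-word list, stemming rule and the 0.8 threshold are illustrative choices, not those of [4].

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.stream.Collectors;

public class LinguisticComparison {

    private static final Set<String> STOP_WORDS =
            new HashSet<>(Arrays.asList("of", "the", "a", "an", "and"));

    // Syntactic reduction: tokenize, drop stop words, apply a crude suffix
    // stripping rule in place of a real stemming algorithm.
    static Set<String> reduce(String conceptName) {
        return Arrays.stream(conceptName.toLowerCase().split("[\\s_-]+"))
                .filter(t -> !t.isEmpty() && !STOP_WORDS.contains(t))
                .map(t -> t.endsWith("s") ? t.substring(0, t.length() - 1) : t)
                .collect(Collectors.toSet());
    }

    // Linguistic comparison: rate of shared reduced terms between two names.
    static double termOverlap(String nameA, String nameB) {
        Set<String> a = reduce(nameA);
        Set<String> b = reduce(nameB);
        Set<String> shared = new HashSet<>(a);
        shared.retainAll(b);
        Set<String> all = new HashSet<>(a);
        all.addAll(b);
        return all.isEmpty() ? 0.0 : (double) shared.size() / all.size();
    }

    public static void main(String[] args) {
        double rate = termOverlap("Hydrological Basins", "basin_hydrological");
        System.out.println(rate >= 0.8 ? "very similar" : "compare in phase 3");
    }
}
```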

4 The Empirical Study

In the following two sections we describe our study on tool support for collaborative ontology manipulation, including ontology integration.

4.1 Research Approach and Hypotheses

Collaborative Integration: the main goal of the collaborative work in our study is to identify how people collaborate and negotiate to create a single ontology. The hypothesis considered here is that the groupware used in the experiment to facilitate the ontology integration is a tool that improves group work, easing collaboration among the members. Ontology Integration: alongside the tool support, we need to identify how people integrate ontologies manually, in an attempt to evaluate the ontology integration process proposed in [4]. In this way, we believe we can validate the integration process, or at least partially validate it, and moreover try to find new strategies created by the experiment participants that can improve the proposed process. Water Resources Ontology: we also want to validate the ontology created by a non-specialist in [8]. That ontology was built as a hierarchical structure, using hyponymy (the is-a relation), to be used in the COPPE/UFRJ Hydrology Laboratory (LabHID), where subjects related to Brazilian hydrological basins are studied. Amongst the countless possibilities, this ontology will be used to identify and classify the metadata and scientific models used by the research group. Another purpose would be to use this ontology to facilitate communication with external institutions. To validate the existing ontology, we created a new one, in the same domain, with specialists.

4.2 Experiment Process

The experiment was carried out in four stages. In the first, each participant creates an individual ontology (covering distinct parts of the domain). The subjects were 10 students from the Civil Engineering undergraduate program, and the domain selected for the experiment was Water Resources; it should be pointed out that all of the students are specialists in the domain. To guarantee ontology quality, in the second stage a group of 15 students from the Computer Engineering undergraduate program was invited to transform the badly formed ontologies created in stage 1 into a well formed2 one. This was necessary because the specialists in Water Resources were not ontology designers. In stage 3, all participants collaborated to create a single ontology (covering the whole domain). In stage 4, the specialists in Water Resources validated the ontology created by a non-specialist in [8].

2 A well-formed ontology does not have modeling errors with respect to its purpose, for example concepts that are presented as classes but should be described as instances, errors in the hierarchical structure, and others.


As pointed out before, one of the objectives of this experiment is to validate an ontology created by a non-specialist in [8]. This ontology has around 120 concepts, so in the first stage of the experiment the concepts were divided into groups of 40 (with overlap) and each participant created their own ontology from the concepts supplied. This ontology had to be created using COE. It was defined that the participants had the autonomy to add new related concepts, but all the supplied concepts had to be used. Another goal of the experiment involved the ontology integration process. In this case, we observed each participant creating their own ontology, in an attempt to discover new strategies and to evaluate the ontology integration process proposed in [3]; the participant could consult a database where he or she could find other ontologies. Before the first stage was initiated, a brief training session was carried out, aimed at strengthening ontology concepts and training the participants in COE; the tutorial created was available throughout the experiment. In the third stage, a new goal was added: to evaluate the collaborative work in the collective construction of the single ontology. It is important to remember that this ontology is the one that will be used to validate the ontology created in [8]. The experiment consisted of isolating the students in such a way that they could communicate only through COE. In the first step, pairs were organized to create just one ontology each (each pair had two ontologies to integrate, namely the ontologies created in the first stage of the experiment). After this step, we had 12 ontologies. This process was repeated until only one ontology was left. To create a single ontology, group members need to reach an agreement about the concepts and their relations. As this is very difficult in a big group, we decided to start with small groups, so that the participants were gradually introduced to the negotiation problem.

4.3 Threats to Validity

As with any empirical study, this experiment exhibits a number of threats to internal and external validity. Internal Validity. The primary threat is the selection of subjects and their assignment to particular treatments; to ensure comparability of team performance we randomly selected the students that formed the teams. A second threat arises from the fact that we didn't control the inspection effort. A third threat is data consistency, which, like process conformance, was much easier to ensure during the experiment thanks to the tool support. External Validity. Regarding external validity, we took specifications from a real-world application context to develop an inspection object representing a realistic situation. The results are obtained through the "in loco" observation of the predefined activities, the questionnaire answers, video capture and the messages exchanged. The subjects were students participating in a university class. As pointed out in the literature, students may not be representative of real developers; however, Höst [11] observes no significant differences between them for small judgment tasks, and according to Tichy [12] using students as subjects is acceptable if they are appropriately trained and the data is used to establish a trend. Both conditions are met in our case.


5 Results

In this section we describe the key empirical results regarding tool support for collaborative ontology manipulation, including ontology integration. We analyze defect discrimination performance (a) including team discussions and (b) fully automated, and present information on meeting effort (see the hypotheses in Section 4.1). We also compare the results from the tool-based meeting with data from paper-based inspections, where possible.

5.1 Collaborative Integration

At this point, we analyze whether the participants' collaborative work to create a single ontology is improved by the tool support, using Heuristic Evaluation [13], a low-cost evaluation methodology, accepted by HCI researchers, for diagnosing usability problems in interfaces. The heuristics used are from Teamwork [14] and are specific to groupware evaluation; they focus on shared workspaces, based on the Collaborative Mechanisms framework [15]. The defined questions and variables, as well as the obtained results, are presented below. Evaluated dimension: level of collaborative work. Questions:

• Q1 - Communication: degree of interaction and participation in discussions and dialogues.
• Q2 - Collective design: degree of contribution.
• Q3 - Coordination: degree of concentration and organization.
• Q4 - Awareness: degree of process understanding.

Table 1 summarizes the variables observed during the experiment, through recorded and perceived evidence, for the questions presented for the evaluation.

Table 1. Variables observed during the experiment

Q1: V1.1 Amount of exchanged messages; V1.2 Quality of exchanged messages
Q2: V2.1 Amount of contributions in the collective construction of the product; V2.2 Quality of contributions in the collective construction of the product; V2.3 Building on / inference from the contributions of other members of the group
Q3: V3.1 Presence of leadership; V3.2 Fulfillment of the tasks
Q4: V4.1 Agreement on the tasks and their relations

The following data was collected through ‘in loco’ observation or extracted from the answers to the questionnaires, captured from videos and message logs.


As regards the messages, we verified that the number of exchanged messages decreased and their quality improved during the experiment. This was an expected result, because it is normal to start with many poorly elaborated messages and to finish with a smaller number of better elaborated ones. We also verified that the group had initial coordination problems: they took about one hour to organize themselves, where an acceptable delay would be around half an hour. Despite the coordination problems, 75% of the participants finished the tasks in adequate time. Negotiation problems were solved well in most cases, which can be explained by the fact that the participants are from the same research group, or share similar ideas.

5.2 Ontology Integration

In an attempt to analyze the behavior of the participants in the ontology integration process, we chose some variables based on our experience with that process. First of all, we decided to leave the integration process used by the participants free, that is, not to impose any rules. To facilitate the work of the participants, some devices were suggested, organized in two groups, syntactic and semantic. The class name and the property name are syntactic devices; the semantic devices are the "is-a" relation, other kinds of relation, restrictions, property values and instances. The participants could also make use of any external resource needed; however, during the experiment, only in 45% of the cases did the researchers make use of the internet and/or dictionaries to compare synonymous terms. Table 2 shows a summary of the use of the available devices.

Table 2. Available device usage

Syntactic devices: class name (100%); property name (40%)
Semantic devices: "is-a" relation (75%); other relations (5%); restriction (0%); property value (0%); instance (0%)

In 100% of the integrations carried out, the participants analyzed the class names, and only 40% verified the names of the class properties. For name comparison, in 55% of the cases the participants took the suffixes and prefixes of the names of classes and their properties into account. In 25% of the cases the "is-a" relation was not considered, so the integration was performed with only a syntactic analysis of the terms. Only one person considered types of relation beyond the "is-a" relation.

5.3 Water Resources Ontology

Finally we reach the final stage. At this point the specialists evaluated the ontology created by a non-specialist in [8] and found 15 new terms, which were added to the final ontology. Some relations were incorrect or non-existent (15%), and many properties (names and values) were added (40%). Restrictions and instances did not undergo great changes (10%). Fig 4 shows a sketch of the final Water Resources ontology.


Fig. 4. Water Resources Final Ontology

6 Conclusions and Future Work

With the growing availability of large online ontologies, questions about the integration of independently developed ontologies have become even more important. While in some situations the construction of a new ontology from other, already existing ontologies is merely a convenience for the designers, another equally common scenario is the one presented in this work, where organizations or people with different backgrounds and interests need to agree on the best way to unify their ontologies so that the final ontology satisfies the needs of each party. In the LabHID, where this study was carried out, many specialists from civil engineering, mechanical engineering, mathematics and statistics work collaboratively with biologists, geographers and other researchers of social phenomena, in projects where technical knowledge needs to be combined with decisions based on social and political aspects of the regions being studied. Many of these researchers also come from other institutions, so these professionals need to have a common vocabulary that is an acceptable representation of the domain. In this paper we have described an empirical evaluation of tool support for collaborative ontology manipulation, including the ontology integration process. The research integrates concepts from the areas of computer-supported cooperative work, ontology engineering and water resources, as well as verification and validation. We focused on the performance of the tool support, where the main purpose was to observe the collaborative characteristics. Our empirical data show that the tool support results in medium performance, and the requirements found will be used to improve the tool. Examples of requirements found are: create a chat where each participant has messages with a personalized format; signal arriving messages when the chat window is minimized; allow public and private chats; protect devices; and add coordination elements. In the ontology integration process, we need to improve the "Ontology Reduction" algorithm so that it also considers properties and axioms. Only part of the obtained results is presented here; complete results can be found in [16]. It must be pointed out that some operational problems, such as memory and internet access problems, limited the smooth progress of the task.


Acknowledgments This work was partially supported by CAPES and CNPq. We want to thank the participants in the experiment.

References
1. Gruber, T.R.: Toward Principles for the Design of Ontologies Used for Knowledge Sharing. International Workshop on Formal Ontology (1993).
2. Pinto, H. Sofia, et al.: A Methodology for Ontology Integration. ACM International Conference on Knowledge Capture, pp. 131-138, Canada, October (2001).
3. Rezende, J.L., Souza, J.F., Souza, J.M.: Peer-to-Peer Collaborative Integration of Dynamic Ontologies. 9th International CSCWD, Coventry, UK (2005).
4. Chawathe, Y., et al.: Making Gnutella-like P2P Systems Scalable. ACM SIGCOMM, University of Karlsruhe, Germany (2003).
5. Rezende, J.L., et al.: Building Personal Knowledge through Exchanging Knowledge Chains. IADIS IWBC, Algarve, Portugal, February (2005).
6. Xexeo, G., et al.: Peer-to-Peer Collaborative Editing of Ontologies. 8th International CSCWD, Xiamen, China (2004).
7. Gruninger, M.: Designing and Evaluating Generic Ontologies. ECAI96 Workshop on Ontological Engineering, pp. 53-64 (1996).
8. Bomfim, E., Souza, J.M.: Thoth: Improving Experiences Reuses in the Scientific Environment through Workflow Management System. 9th CSCWD, Coventry (2005).
9. Castro, M., et al.: Decisio: A Collaborative Decision Support System for Environmental Planning. International Conference on Enterprise Information Systems, France (2003).
10. Medeiros, G.S., Souza, J.M., Starch, J., et al.: Coordination Aspects in a Spatial Group Decision Support Collaborative System. ACM/SAC, Las Vegas (2001).
11. Höst, M., et al.: Using Students as Subjects - A Comparative Study of Students and Professionals in Lead-Time Impact Assessment. Empirical Software Engineering (2000).
12. Tichy, W.: Hints for Reviewing Empirical Work in Software Engineering. Empirical Software Engineering: An International Journal 5 (2001) 309-312.
13. Nielsen, J.: Heuristic Evaluation. In: Nielsen, J., Mack, R. (eds.): Usability Inspection Methods. John Wiley and Sons, New York (1994) 25-62.
14. Greenberg, S., et al.: Heuristic Evaluation of Groupware Based on the Mechanics of Collaboration. 8th IFIP Int. Conference EHCI, Canada (2001), LNCS, Springer-Verlag.
15. Gutwin, C., et al.: The Mechanics of Collaboration: Developing Low Cost Usability Evaluation Methods for Shared Workspaces. IEEE 9th Int. WET-ICE'00 (2000).
16. Rezende, J.L., Souza, J.S. de, Souza, J.M. de: COE Experiment in Water Resource. Technical Report ES-667/04 (2005).

Ontology Mapping Approach Based on OCL

Pengfei Qian and Shensheng Zhang
CIT Lab of Shanghai Jiaotong University, Shanghai 200030, China
[email protected]

Abstract. A water resource ontology mapping approach based on OCL is introduced. In this approach, UML together with OCL is chosen as the ontology modeling language, and set and relation theory is chosen as the theoretical foundation: an ontology model can be represented as a set, and an ontology mapping model can be viewed as a set of relations between associated sets. The core of the approach is an ontology mapping meta-model composed of ontology related elements (OntologyElement, OESet, OESetGroup, etc.) and mapping related elements (Mapping, MappingClassification, etc.). The Object Constraint Language, originally used to describe constraint relationships between objects, is extended to satisfy the requirements of these two kinds of elements: OCL for Ontology Related Elements describes the features of ontology elements and the constraints among them, while OCL for Mapping Related Elements describes the features of the mapping relation set between two ontology models. Finally, a case study on water resource ontology mapping is discussed.

1 Introduction

In the heterogeneous water resource information management domain, a large number of water resource ontology models are produced, such as meteorological ontologies and spatial data ontologies. Because more and more water resource information (data, knowledge) needs to be reused by different computation domains, interoperation among different water resource ontology models has become a necessary part of ontology research. Many ad-hoc mapping technologies [1],[2] which can only be used in some specific applications have been provided. However, a general technology which can be used in any scenario to solve all water resource ontology mapping problems does not yet exist. Therefore, the current situation is that most water resource ontology models are loosely coupled or completely separated, with the result that the related ontology models cannot be effectively managed. Object-oriented analysis, design and implementation is a maturing field with many industry standards, and the Unified Modeling Language (UML) has been accepted as a general modeling language together with its associated Object Constraint Language (OCL). At the same time, UML can also be used to represent ontologies [3],[4], and the related OCL can be extended to satisfy the requirements of the ontology field [5],[6], which provides the possibility of a


general ontology mapping technology. There are already several papers introducing UML profiles for ontologies [7]; the focus of this paper is how to use OCL to implement a unified water resource ontology mapping mechanism. Our solution is an ontology mapping meta-model based on OCL. OCL is a kind of model constraint language, flexible and expressive enough for defining all kinds of ontology model definition rules and mapping rules. The problem, however, is how to describe a large variety of ontology elements in OCL, since the kinds and granularity of ontology elements that people care about may be totally different in different scenarios. Set and relation theory solves this problem: it not only has strong expressive power, but also strong computation capability. The Object Constraint Language based on set and relation theory therefore forms the ground on which the ontology mapping meta-model is constructed. The remainder of this paper is structured as follows. Section 2 introduces the ontology mapping meta-model, Section 3 introduces the OCL extension for ontology mapping, a case study is presented in Section 4, and Section 5 concludes.

2 Ontology Mapping Meta-model

A lot of water resource ontology models are loosely coupled or completely separated, so that many problems such as synchronization cannot be solved. In this paper, a unified ontology mapping mechanism is proposed to realize unified mappings among different water resource ontology models. The key point of the unified mapping mechanism is the unified specification of the ontology mapping. The mapping specification among several ontology models is called a mapping model. Different mapping models have some common elements and associations. An ontology mapping meta-model is produced to introduce the common structure of mapping models.

Fig. 1. Meta-model for ontology mapping model (UML class diagram; the ontology related elements are OntologyElement, Property, Class, OESet, OESetGroup and GroupDefinitionRule, and the mapping related elements are MappingModel, MappingClassification, Mapping and MappingDefinitionRule)

Because OCL is chosen as the model constraint language and set and relation theory is chosen as the theoretical foundation of the ontology mapping meta-model, a mapping model can be represented as a relationship set with some


constraint rules. The binary mapping model between two ontology models is the most basic and common case in ontology mapping, and is also the focus of this paper. The preliminary UML class diagram [8] of the ontology mapping meta-model is given in Fig. 1. All the elements of the ontology mapping meta-model are classified into two kinds: ontology related elements and mapping related elements. These two kinds of elements are introduced separately in the following sections.

2.1 Ontology Related Elements

An ontology model composed of some ontology elements is represented as a tuple O(Cset, Pset, Tset), where 1) Cset is a finite set of class elements; 2) Pset is a finite set of property elements; 3) Tset is a set of constraints that the classes and properties satisfy in the related domain. While defining one mapping, it is possible that more than one element in an ontology model is mapped to the other ontology model. Therefore, all the elements related to an ontology mapping construct another basic unit that we are concerned with. This unit is called an ontology element set, abbreviated OESet, which is represented as a tuple OESet(C_OESet, P_OESet, T_OESet), where 1) C_OESet ⊆ O:Cset; 2) P_OESet ⊆ O:Pset; 3) T_OESet ⊆ O:Tset. All possible OESets of an ontology O(Cset, Pset, Tset) can be computed as the Cartesian product of the power sets of O:Cset, O:Pset and O:Tset, that is, OESet(C_OESet, P_OESet, T_OESet) ∈ 2^{O:Cset} × 2^{O:Pset} × 2^{O:Tset}. Generally speaking, an ontology model usually has a large number of OESets, and we classify these OESets into different categories to manage them efficiently according to their types or other criteria. OESets that share some common characteristics are called a group of OESets, shown as OESetGroup in the ontology mapping meta-model, which is represented as a tuple OESetGroup(Group, T_Group), where Group = ((C_1, P_1, T_1), ..., (C_i, P_i, T_i), ..., (C_m, P_m, T_m)), (C_i, P_i, T_i) = OESet_i, T_Group = T_1 ∩ ... ∩ T_m (i = 1, ..., m), and m is the number

of OESets in this OESetGroup. For example, we can get an OESetGroup {...{Class:A},{Class:B},{Class:C},...} according to these common characteristics: each OESet has only one element, and the type of that element is Class. The common characteristics of this OESetGroup are described in a GroupDefinitionRule written in OCL as below:
Context Class
inv: elements→forAll(e| e.size()=1 and (e.element.type='Class'))
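To make the set-theoretic definitions above concrete, the following Python sketch (our own illustration; the toy ontology and all names in it are assumptions, not the paper's implementation) enumerates the candidate OESets of a small ontology as elements of the Cartesian product of the power sets of Cset, Pset and Tset, and then filters those that satisfy the example GroupDefinitionRule (exactly one element, of type Class).

from itertools import chain, combinations

def powerset(s):
    # All subsets of s, returned as frozensets.
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

# A toy ontology O(Cset, Pset, Tset); the element names are illustrative.
Cset = {"Vehicle", "Car"}
Pset = {"HasMaker"}
Tset = {"Car subClassOf Vehicle"}

# Every OESet is a triple drawn from the power sets of Cset, Pset, Tset.
all_oesets = [(c, p, t)
              for c in powerset(Cset)
              for p in powerset(Pset)
              for t in powerset(Tset)]

# GroupDefinitionRule from the text: one element per OESet, of type Class.
group1 = [oe for oe in all_oesets
          if len(oe[0]) == 1 and len(oe[1]) == 0 and len(oe[2]) == 0]

print(len(all_oesets))   # 2^|Cset| * 2^|Pset| * 2^|Tset| = 16 candidate OESets
print(len(group1))       # 2 OESets satisfy the rule: {Vehicle} and {Car}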

2.2 Mapping Related Elements

A mapping model may include a large number of mappings, which can be classified into several different categories according to some common constraint rules. MappingClassification is produced for this purpose, and these common constraint rules are defined as MappingDefinitionRules, which are also described with OCL, like GroupDefinitionRules.


Two ontology models, Ontology M and Ontology N, are described by two tuples, OntologyM = (M:Cset, M:Pset, M:Tset) and OntologyN = (N:Cset, N:Pset, N:Tset). The mapping model between these two ontology models is a relationship between these two tuples, represented as:

MappingModel_{M,N} = (MC_1, ..., MC_i, ..., MC_k), (i = 1, ..., k);
MC_i = (MGroup_i, MT_i);
MGroup_i = (Mapping_{i,1}, ..., Mapping_{i,j}, ..., Mapping_{i,Li}), (j = 1, ..., Li);
Mapping_{i,j} = (M:OESet, N:OESet, MT_{i,j});
M:OESet = (M:C_OESet, M:P_OESet, M:T_OESet);
N:OESet = (N:C_OESet, N:P_OESet, N:T_OESet);
MT_i = MT_{i,1} ∩ ... ∩ MT_{i,Li};

where MC_i is the i-th mapping classification; MT_i is the common mapping definition rule set of the i-th mapping classification; MGroup_i is the mapping set in the i-th mapping classification; Li is the number of mappings in the i-th mapping classification; Mapping_{i,j} is the j-th mapping in the i-th mapping classification; and MT_{i,j} is the mapping definition rule set of the j-th mapping in the i-th mapping classification.
Every mapping is an ordered OESet pair, and each one is a transformation from some elements in Ontology M to some elements in Ontology N. For example, if the OESet {Class:m1, Class:m3, Property:m5} is mapped to another OESet {Class:n2, Class:n4, Property:n6}, then ({Class:m1, Class:m3, Property:m5}, {Class:n2, Class:n4, Property:n6}) is a mapping.
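As a rough illustration of how these tuples might be held in memory (a minimal sketch under our own naming; it is not the authors' implementation), the binary mapping model can be represented as classifications, each holding ordered OESet pairs together with their rule sets:

from dataclasses import dataclass, field
from typing import FrozenSet, List, Tuple

OESet = Tuple[FrozenSet[str], FrozenSet[str], FrozenSet[str]]   # (C, P, T)

@dataclass
class Mapping:                       # one ordered OESet pair: M:OESet -> N:OESet
    source: OESet
    target: OESet
    rules: List[str] = field(default_factory=list)    # the rule set MT_{i,j}

@dataclass
class MappingClassification:         # MC_i = (MGroup_i, MT_i)
    mappings: List[Mapping]
    common_rules: List[str]

@dataclass
class MappingModel:                  # MappingModel_{M,N} = (MC_1, ..., MC_k)
    classifications: List[MappingClassification]

# The example from the text, encoded as one mapping in one classification.
m = Mapping((frozenset({"m1", "m3"}), frozenset({"m5"}), frozenset()),
            (frozenset({"n2", "n4"}), frozenset({"n6"}), frozenset()))
model = MappingModel([MappingClassification([m], common_rules=["functional"])])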

3 OCL Extension for Ontology Mapping

In recent years, more and more people have adopted UML as an ontology modeling language, and OCL, defined as a standard "add-on" to UML, is becoming widely used in the ontology research field. Originally, OCL was used to describe complex constraint relations among software models in object-oriented software design and development; it can also be extended to describe the complex constraint relations among different ontology elements in the ontology modeling and mapping domain [9],[10]. According to the structure of the ontology mapping meta-model, OCL can be used to describe two kinds of meta-model elements: OCL for Ontology Related Elements and OCL for Mapping Related Elements.

3.1 OCL for Ontology Related Elements

The foundation of the ontology related elements is OntologyElement. There are two kinds of most basic ontology elements, property and class (see Fig. 1), and any other ontology element can be represented using these two. Therefore, the OCL constraints for ontology related elements mainly focus on OCL for the ontology property and OCL for the ontology class. Firstly, ontology properties can be classified into two kinds, object properties and datatype properties, distinguished according to whether they relate individuals to individuals (object properties) or individuals to datatypes (datatype properties). For brevity, the OCL for both kinds of property is represented in the unified format P(x,y) (that is, property P relates x to y). A number of property characteristics described in OCL are defined on the ontology property:
(1) An operation which gets the domain or range of the property.
context Property::domain(): Set(X)   result = elements.x→asSet()
context Property::range(): Set(Y)    result = elements.y→asSet()

(2) An operation which makes sure whether an ontology property is a transitive property.
context Property::TransitiveProperty(): Boolean
pre: domain() = range()
post: result = elements→forAll(p,q| p.y = q.x implies exist(f) and f.x = p.x and f.y = q.y)
(3) An operation which gets the inverse property individual of a certain property individual.
context Property::inverseOf( Property:p ): Property
pre: exist( AllPropety.includes(q) and q.x = p.y and q.y = p.x )
post: result = q
(4) An operation which makes sure whether an ontology property is a symmetric property.
context Property::SymmetricProperty(): Boolean
post: result = elements→forAll( p| exists( AllPropety.includes( inverseOf(p) ) ) )
/*AllPropety stands for the set of all ontology property individuals.*/
(5) An operation which makes sure whether an ontology property is a functional property.
context Property::FunctionalProperty(): Boolean
post: result = elements→forAll( p,q| p.x = q.x implies p.y = q.y )
(6) An operation which makes sure whether an ontology property is a symmetric functional property.
context Property::SymmetricFunctionalProperty(): Boolean
post: result = SymmetricProperty() and FunctionalProperty()
(7) Some invariant constraints which describe some features of the ontology property.
context Property
inv: range()→allValuesFrom( Set(Y) )
inv: range()→someValuesFrom( Set(Y) )
inv: cardinality = n
The new OCL keywords 'AllPropety', 'allValuesFrom', 'someValuesFrom' and 'cardinality' have been added to the OCL keyword list to satisfy the requirements of the ontology field.
Secondly, a number of OCL constraint operations about the ontology class are defined on the ontology class:
(1) Intersection: get the intersection of several ontology classes.
context OWLClass::intersectionOf(OWLClass:a, OWLClass:b): OWLClass
post: result = TSet.elements→Select( e| a.elements→include(e) and b.elements→include(e) )
/*TSet stands for the set of all ontology instances in the domain*/
(2) Union: get the union of several ontology classes.
context OWLClass::unionOf(OWLClass:a, OWLClass:b, TSet): OWLClass
post: result = TSet.elements→Select( e| a.elements→include(e) or b.elements→include(e) )
(3) Complement: get the complement of a certain ontology class.
context OWLClass::complementOf( OWLClass:a ): OWLClass
post: result = TSet.elements→Select( e| NOT( a.elements→include(e) ) )
(4) DisjointWith: ensure whether an ontology class is disjoint with another ontology class.
context OWLClass::disjointWith(OWLClass:a, OWLClass:b): Boolean
post: result = a.elements→forAll( e| NOT( b.elements→include(e) ) )
These OCL constraints for ontology property and class can be used in the definition of GroupDefinitionRules.
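The property operations above have direct set-theoretic readings. The sketch below is a Python analogue written only for illustration (it is not OCL and not part of the paper); it treats a property P as a set of (x, y) pairs and checks the transitive, symmetric and functional characteristics in the same way the OCL post-conditions do.

def domain(p):
    return {x for x, _ in p}

def range_(p):
    return {y for _, y in p}

def is_transitive(p):
    # for every p.y = q.x there must exist a pair (p.x, q.y)
    return all((a, d) in p for (a, b) in p for (c, d) in p if b == c)

def inverse_of(p):
    return {(y, x) for x, y in p}

def is_symmetric(p):
    return inverse_of(p) <= p      # every inverse pair is already present

def is_functional(p):
    return all(b == d for (a, b) in p for (c, d) in p if a == c)

connect_with = {(1, 2), (2, 3), (1, 3)}      # toy property individuals
print(domain(connect_with), range_(connect_with))
print(is_transitive(connect_with))           # True
print(is_functional({(1, 2), (1, 3)}))       # False: 1 maps to two values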

3.2 OCL for Mapping Related Elements

For a concrete binary mapping, generally speaking, the mapping starts from one OESet and ends in another OESet. These two OESets are given different names: the former is the 'domain', the latter is the 'range', and each mapping pair has two members x and y, ∀x, y, x ∈ domain, y ∈ range. For this kind of binary mapping there are several general basic OCL constraint rules; nine of them are listed below:
(1) An invariant constraint ensures that the elements of mapping pairs in the mapping are selected from the two OESets (domain and range).
Context MappingPair
inv: domain→includesAll( elements.x→asSet() ) and range→includesAll( elements.y→asSet() )
(2) An invariant constraint ensures that no two mapping pairs in the mapping refer to exactly the same domain and range elements.
Context MappingPair
inv: elements→forAll(e,f| (e.x = f.x and e.y = f.y) implies e = f)


(3) An image operation which gets the set of elements from the range actually mapped onto, under the mapping.
Context MappingPair::image(): Set(Y)
post: result = elements.y→asSet()
(4) An inverse image operation which gets the set of elements from the domain actually mapped from, under the mapping.
Context MappingPair::inverseimage(): Set(X)
post: result = elements.x→asSet()
(5) The mapping is a surjection if the image is the range and the inverse image is the domain.
Context MappingPair::surjection(): Boolean
post: result = ( image = range ) and ( inverseimage = domain )
(6) The mapping is functional if and only if an element of the domain maps to at most one element in the range.
Context MappingPair::isfunctional(): Boolean
post: result = elements→forAll( p,q| p.x = q.x implies p = q )
(7) The mapping is an injection if and only if an element of the domain maps to at most one element in the range and an element of the range is mapped to from at most one element in the domain.
Context MappingPair::isinjection(): Boolean
post: result = elements→forAll( p,q| (p.x = q.x or p.y = q.y) implies p = q )
(8) The mapping is a bijection if it is both an injection and a surjection.
Context MappingPair::isbijection(): Boolean
post: result = isinjection and surjection
(9) Finding all the mapping pairs in the mapping which mention an element in the range or domain.
Context MappingPair::domainFinding( x ): Set( MappingPair )
post: result = elements→select( p| p.x = x )
Context MappingPair::rangeFinding( y ): Set( MappingPair )
post: result = elements→select( p| p.y = y )
These OCL constraints for binary mapping can be used in the definition of MappingDefinitionRules.
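For the same reason, the nine mapping rules can be paraphrased over a set of (x, y) mapping pairs. The Python sketch below is illustrative only (the example domain and range values are assumptions); it mirrors the image, inverse image, surjection, functional, injection and bijection checks.

def image(pairs):
    return {y for _, y in pairs}

def inverse_image(pairs):
    return {x for x, _ in pairs}

def is_surjection(pairs, dom, rng):
    return image(pairs) == set(rng) and inverse_image(pairs) == set(dom)

def is_functional(pairs):
    return all(y1 == y2 for (x1, y1) in pairs for (x2, y2) in pairs if x1 == x2)

def is_injection(pairs):
    return is_functional(pairs) and all(
        x1 == x2 for (x1, y1) in pairs for (x2, y2) in pairs if y1 == y2)

def is_bijection(pairs, dom, rng):
    return is_injection(pairs) and is_surjection(pairs, dom, rng)

dom, rng = {"Truck", "Bus", "Salooncar"}, {"KaChe", "KeChe", "JiaoChe"}
pairs = {("Truck", "KaChe"), ("Bus", "KeChe"), ("Salooncar", "JiaoChe")}
print(is_bijection(pairs, dom, rng))   # True: every element matched exactly once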

4 A Case Study

Here we discuss a typical example to show the main principle of our approach. Let us assume that there are two UML-formatted ontologies, A (English Ontology about Vehicle in the Water Resource Equipment Domain) and B (Chinese Ontology about Che in the Water Resource Equipment Domain); see Fig. 2. In this paper an ontology mapping model is defined between two ontology models, and mapping models are often defined with some GroupDefinitionRules and MappingDefinitionRules. In the ontology mapping meta-model, a GroupDefinitionRule is used to define an OESetGroup, and a MappingDefinitionRule is used to define a MappingClassification.


Fig. 2. Two ontologies for vehicle. A: English Ontology about Vehicle in the Water Resource Equipment Domain, with concepts Vehicle, Car, Bicycle, Railcar, Salooncar, Bus, Motorcycle, Truck, Train and Railbus, the FunctionalProperty HasMaker and the TransitiveProperty ConnectWith; two OESets satisfying the Group1 constraint (concepts disjoint with each other) are marked. B: Chinese Ontology about Che in the Water Resource Equipment Domain, with concepts Che, ZiXingChe, QiChe, MotuoChe, HuoChe, JiaoChe, KeChe and KaChe and the FunctionalProperty ChangJia; an OESet with the Group2 constraint and an OESet with the Group3 constraint (disjoint with each other) are marked. Mapping1 belongs to MappingClassification1 and Mapping2 to MappingClassification2.


Group 1 (an OESetGroup) is defined in Ontology A; its OCL GroupDefinitionRule is as follows:
GroupDefinitionRule1:
Context OESet: AllDisjiontWithEachOther( OESet: oe ): Boolean
post: result = oe.elements→forAll( e,f| exists( disjointWith(e,f) ) )
Context Group1OESet
inv: elements→forAll(e| OESet::AllDisjiontWithEachOther(e))
inv: elements→forAll(e| e.elements→forAll(ec| exists(ec.HasMaker.FunctionalProperty()) and exists(ec.ConnectWith.TransitiveProperty())))
This GroupDefinitionRule indicates that each concept (class) of the OESets in Group1 has a functional property (HasMaker) and a transitive property (ConnectWith), and that all concepts of an OESet are disjoint with each other. This OESetGroup can be represented as the following set: {......{Bicycle, Car, Truck, Motorcycle, Train}, {Truck, Bus, Salooncar},......} (see Fig. 2).
There are two OESetGroups defined in Ontology B, Group2 and Group3; their OCL GroupDefinitionRules are as follows:
GroupDefinitionRule2:
Context Group2OESet
inv: elements→forAll( e| e.elements→forAll(ec| exists( ec.HasMaker.FunctionalProperty() )))
This GroupDefinitionRule indicates that each concept of the OESets in Group2 has a functional property (ChangJia). This OESetGroup can be represented as the following set: {......,{ZiXingChe, QiChe, MotuoChe, HuoChe},......} (see Fig. 2).
GroupDefinitionRule3:
Context Group3OESet
inv: elements→forAll(e| OESet::AllDisjiontWithEachOther(e))
inv: elements→forAll(e| e.elements→forAll(ec| exists( ec.HasMaker.FunctionalProperty() )))
This GroupDefinitionRule indicates that each concept of the OESets in Group3 has a functional property (ChangJia), and that all concepts of an OESet are disjoint with each other. This OESetGroup can be represented as the following set: {......,{JiaoChe, KeChe, KaChe},......} (see Fig. 2).
It is clear that each MappingDefinitionRule can generate a MappingClassification for two ontology models, and that a set of MappingDefinitionRules together with the associated GroupDefinitionRules can construct a mapping model for two ontology models. For the ontology mapping model between Ontology A (English Ontology about Vehicle) and Ontology B (Chinese Ontology about Che), the following two MappingDefinitionRules described in OCL can be given:


MappingDefinitionRule1 ('Group1-Group2'):
Context Group1toGroup2Mapping
inv: surjection()   /* image is range and inverse image is domain */
inv: functional()   /* an element of the domain maps to at most one element in the range */
inv: elements→forAll(( Setx, Sety )| Group1OESet.includes(Setx) and Group2OESet.includes(Sety) and Mappings.link(Setx, Sety) )
MappingDefinitionRule1 defines the characteristics of MappingClassification1: each mapping of MappingClassification1 is a surjection and functional, the former OESet (Setx) belongs to Group1OESet, the latter OESet (Sety) belongs to Group2OESet, and the returned result of Mappings.link(Setx, Sety) is true. MappingClassification1 can be represented as the following set: {......,({Bicycle, Car, Truck, Motorcycle, Train}, {ZiXingChe, QiChe, MotuoChe, HuoChe}),......}.
MappingDefinitionRule2 ('Group1-Group3'):
Context Group1toGroup3Mapping
inv: bijection()   /* both injection and surjection */
inv: elements→forAll(( Setx, Sety )| Group1OESet.includes(Setx) and Group3OESet.includes(Sety) and Mappings.link(Setx, Sety) )
MappingDefinitionRule2 defines the characteristics of MappingClassification2: each mapping of MappingClassification2 is a bijection, and the returned result of Mappings.link(Setx, Sety) is true. MappingClassification2 can be represented as the following set: {......,({Truck, Bus, Salooncar}, {JiaoChe, KeChe, KaChe}),......}.
Here link(Set, Set) is a predefined operation, given below:
Context Mappings::linkelement( x: Class, y: Class ): Boolean   /* return the relation between x and y */
post: if ( Similarity( x, y ) > Δ ) result = true else result = false
/* Δ is a predefined similarity factor less than 1, such as 0.9 */
Context Mappings::link( setx: Set, sety: Set ): Boolean
post: isfunctional = true, tempresult = true
setx.elements→forAll( x| {
  size = 0
  sety.elements→forAll( y| {
    if ( linkelement(x, y) = true ) {
      size = size + 1
      if ( Group1.disjointWith = true ) setx.remove(x)
      if ( Group2.disjointWith = true ) sety.remove(y)
    }
  } )
  if ( size > 1 ) isfunctional = false
} )
/* (size > 1) means an element of the domain maps to more than one element in the range */


if ( surjection() = true ) tempresult = tempresult and (setx = NULL) and (sety = NULL)   /* Add surjection condition */
if ( functional() = true ) tempresult = tempresult and isfunctional   /* Add functional condition */
......   /* Add other conditions */
result = tempresult
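The control flow of link can be paraphrased in Python. In the sketch below the similarity function and the lexicon it consults are placeholders of our own (the paper leaves Similarity(x, y) unspecified), so this only illustrates how the functional and surjection conditions interact with the disjointness of the groups.

DELTA = 0.9      # predefined similarity factor; the concrete value is an assumption

# Placeholder for Similarity(x, y): a tiny bilingual lookup table, for illustration only.
LEXICON = {("Truck", "KaChe"), ("Bus", "KeChe"), ("Salooncar", "JiaoChe")}
def similarity(x, y):
    return 1.0 if (x, y) in LEXICON else 0.0

def link(setx, sety, disjoint_x=True, disjoint_y=True,
         need_surjection=True, need_functional=True):
    remaining_x, remaining_y = set(setx), set(sety)
    functional = True
    for x in sorted(remaining_x):
        matches = [y for y in sorted(remaining_y) if similarity(x, y) > DELTA]
        if len(matches) > 1:
            functional = False            # x maps to more than one element
        if matches:
            if disjoint_x:
                remaining_x.discard(x)    # disjoint concepts are matched at most once
            if disjoint_y:
                remaining_y.discard(matches[0])
    result = True
    if need_surjection:                   # every element on both sides must be matched
        result = result and not remaining_x and not remaining_y
    if need_functional:
        result = result and functional
    return result

print(link({"Truck", "Bus", "Salooncar"}, {"KaChe", "KeChe", "JiaoChe"}))   # True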

5 Conclusion

Because UML has been used as an ontology modeling language, and OCL can be extended to satisfy the requirements of ontology engineering, we have presented a water resource ontology mapping approach based on OCL, which can be easily understood through an ontology mapping meta-model. This mapping meta-model contains two kinds of elements, ontology related elements and mapping related elements, and OCL has been extended to define the constraint rules of these two kinds of meta-model elements: seven OCL constraint rules about the ontology property and four rules about the ontology class are listed in Section 3.1, and nine rules for mapping are listed in Section 3.2. Finally, a case study about water resource ontology mapping between Ontology A, "English Ontology about Vehicle in Water Resource Equipment Domain", and Ontology B, "Chinese Ontology about Che in Water Resource Equipment Domain", is introduced, in which many of the new OCL constraints mentioned above are used and combined. We have done a lot of experiments on the approach. The approach has been applied in three projects. One is funded by the National High Technology Research and Development Program of China (No. 2002AA411420), another is supported by the Natural Science Funds of China (60374071), and the third one is supported by the Shanghai Commission of Science and Technology Key Project (03DZ19320). In the experiments we found that the Object Constraint Language, with a little extension, can be used to describe almost all MappingDefinitionRules and GroupDefinitionRules.

References
1. E. Rahm, P.A. Bernstein: A survey of approaches to automatic matching. The VLDB Journal, Oct. 2001, 334-350.
2. K. Czarnecki, S. Helsen: Classification of Model Transformation Approaches. In Online Proceedings of the 2nd OOPSLA'03 Workshop on Generative Techniques in the Context of MDA, Anaheim, Oct. 2003, v1, 1-17.
3. S. Cranefield, S. Haustein, M. Purvis: UML-Based Ontology Modeling for Software Agents. Proc. of the Ontologies in Agent Systems Workshop, Montreal: Agents 2001, 21-28.
4. D. Duric: MDA-based Ontology Infrastructure. ComSIS, February 2004, Vol. 1, No. 1, 91-116.


5. S. Cranefield, M. Purvis: UML as an Ontology Modeling Language. Proceedings of the IJCAI-99 Workshop on Intelligent Information Integration, Stockholm, Sweden, 1999, 234-241.
6. D. Akehurst, A. Kent: A Relational Approach to Defining Transformations in a Meta-model. UML 2002, LNCS 2460, 2002, 243-258.
7. S. Cranefield, M. Purvis: A UML profile and mapping for the generation of ontology-specific content languages. The Knowledge Engineering Review, Vol. 17:1, Cambridge University Press, 2002, 21-39.
8. The Object Management Group (OMG): UML 2.0 Superstructure Final Adopted Specification. OMG document pts/03-08-02, Aug. 2003.
9. D. Akehurst: Relations in OCL. In UML 2004 Workshop: OCL and Model Driven Engineering, 2004, 16-29.
10. E. Cariou, R. Marvie, L. Seinturier: OCL for the Specification of Model Transformation Contracts. OCL and Model-Driven Engineering, UML 2004 Workshop, Oct. 2004.

Object Storage System for Mass Geographic Information∗
Lingfang Zeng1, Dan Feng1,**, Fang Wang1, Degang Liu2, and Fayong Zhang2
1 Key Laboratory of Data Storage System, Ministry of Education, School of Computer, Huazhong University of Science and Technology, Wuhan, China
2 GIS Software Development & Application Research Centre, Ministry of Education, Info-engineering College of China University of Geoscience, Wuhan, China
[email protected], [email protected]

Abstract. Existing non-standardized multi-source and multi-scale data limit the sharing of spatial information within and across organizations and departments, especially in national or global applications. It is both important and difficult to store and access distributed mass spatial data for GIS applications. By integrating object storage technology, GIS gains high scalability and availability. At the same time, with a higher-level abstract object interface, both the GIS and the storage device gain higher intelligence, which facilitates the management of mass spatial data and the various applications of GIS.

1 Introduction
Most computer technology is designed to increase a decision-maker's access to relevant data. GIS (geographic information system) [1] [2] [9] is much more than mapping software. GIS goes beyond mining data to give us the tools to interpret that data, allowing us to see relationships, patterns, or trends intuitively that are not possible to see with traditional charts, graphs, and spreadsheets. GIS is, therefore, about modeling and mapping the world for better decision making. GIS tools range from simple contact mapping tools to consumer analysis to complex enterprise systems that are part of an organization's overall enterprise resource planning infrastructure. In the early decades of GIS, professionals concentrated primarily on data compilation and focused application projects, spending a majority of their time creating GIS databases and authoring geographic knowledge. Gradually, GIS professionals began to use and exploit these knowledge collections in numerous GIS applications and settings. Users applied comprehensive GIS workstations to compile

∗ This paper is supported at Huazhong University of Science and Technology by the National Basic Research Program of China (973 Program) under Grant No. 2004CB318201, National Science Foundation of China No. 60273074, No. 60303032, Huo Yingdong Education Foundation No. 91068 and at China University of Geoscience by the High Technology Research and Development Program of China No. 8001AA135170, plus industrial support from WUHAN ZONDY INFO-ENGINEERING CO., LTD.
** Corresponding author.


geographic data sets, build work flows for data compilation and quality control, author maps and analytical models, and document their work and methods. This reinforced the traditional view of a GIS user with a professional scientific workstation that connected to data sets and databases. The workstation had a comprehensive GIS application with advanced GIS logic and tools to accomplish almost any GIS task. Recent developments in computing—the growth of the Internet, advances in DBMS technology, object-based storage [4] [5] [7] [8], GIS Grid [3], mobile computing, and wider GIS adoption—have led to an evolving vision and role for GIS. In addition to GIS desktops, GIS software is increasingly centralized in application servers and Web servers to deliver GIS capabilities to any number of users over networks. Focused sets of GIS logic can be embedded and deployed in custom applications. And increasingly, GIS is deployed in mobile devices. Also, enterprise GIS users connect to central GIS servers using traditional GIS desktops as well as Web browsers, focused applications, mobile computing devices, and digital appliances. This vision of the GIS platform is expanding, and the storage of spatial objects requires high availability and high scalability. Spatial information resources are the composition of different application systems. Spatial data are multi-terabyte and require a massive storage system. Spatial data are architected as objects: fundamental containers that house both application data and an extensible set of object attributes. Nowadays, a new storage architecture is emerging. OSS (object storage system) [5] is the foundation for building massively parallel storage systems that leverage commodity processing, networking, and storage components to deliver unprecedented scalability and aggregate throughput in a cost-effective and manageable package. OSS can also be extended over a WAN (wide area network) thanks to its high scalability (Figure 1). With the help of object storage technology, GIS becomes more intelligent and enables the capture and sharing of geographic knowledge in many forms—advanced GIS data sets, maps, data models, the expertise of professionals who have developed standardized work flows, and advanced models of geographic processes. Intelligent GIS also enables the building and management of knowledge repositories that can be published for others to use.

2 Spatial Object-Based Storage
2.1 The Components of the Object Storage System
In the OSS, objects are primitive, logical units of storage that can be directly accessed on an object storage controller (OSC). The OSS built from OSCs is shown in Figure 1. A metadata server (MS) provides the information necessary to directly access objects, along with other information about the data including its attributes, security keys, and permissions (authentication). The OSCs export an object-based interface, and the access/storage unit is the object. An OSC operates in a mode in which data is organized and accessed as objects rather than as an ordered sequence of sectors. Clients contact the MS to get the information about objects, and the OSCs receive and process their requests according to certain policies. In our previous work [5], a smart object storage controller was introduced.
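A minimal sketch of the read path just described may help; the classes and calls below are our own illustration (not an actual OSD interface): the client first asks the metadata server where an object lives and whether access is permitted, then reads the object directly from the OSC by object ID and offset.

class MetadataServer:
    def __init__(self):
        self.catalog = {}                   # oid -> (osc, attributes, permitted users)
    def register(self, oid, osc, attrs, allowed_users):
        self.catalog[oid] = (osc, attrs, set(allowed_users))
    def locate(self, oid, user):
        osc, attrs, allowed = self.catalog[oid]
        if user not in allowed:
            raise PermissionError("user not authorized for object %s" % oid)
        return osc, attrs                   # the client then talks to the OSC directly

class ObjectStorageController:
    def __init__(self):
        self.objects = {}                   # oid -> bytearray
    def write(self, oid, offset, data):
        buf = self.objects.setdefault(oid, bytearray())
        buf[offset:offset + len(data)] = data
    def read(self, oid, offset, length):
        return bytes(self.objects[oid][offset:offset + length])

ms, osc = MetadataServer(), ObjectStorageController()
osc.write("tile-42", 0, b"...raster bytes...")
ms.register("tile-42", osc, {"layer": "satellite"}, ["gis_user"])
where, attrs = ms.locate("tile-42", "gis_user")
print(where.read("tile-42", 0, 8))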


Metadata is frequently described as "data about data" [6]. In the MS, metadata is additional information (besides the spatial and tabular data) that is required to make the data useful. It is information we need to know in order to use the data. Metadata represents a set of characteristics about the data that are normally not contained within the data itself. Metadata could include: (1) an inventory of existing data; (2) definitions of the names and data items; (3) a keyword list of names and definitions; (4) an index of the inventory and the keyword list for access; (5) a record of the steps performed on the data, including how it was collected; (6) documentation of the data structures and data models used; (7) a recording of the steps used on the data for analysis. Spatial metadata is important because it not only describes what the data is, but can also reduce the size of spatial data sets. By creating metadata, users create standards for naming, defining, cataloging, and operating on the data. This in turn is a vital foundation for understanding, collaborating, and sharing resources with others.

Fig. 1. GIS based on object storage controller

2.2 Spatial Object Data, Attributes and Methods

The object is the fundamental unit of data storage in the OSS. A storage object is a logical collection of bytes in the OSC. An object on the OSC consists of an ordered set of sectors associated with an object ID (OID). Data is referenced by the OID and an offset into the object. Conceptually similar to a file, an object is allocated and placed on the media by the OSC itself, while the operating system manages its files and metadata in these object constructs, instead of managing sectors of data.
1. Spatial object data
The backbone of GIS is good data. Inaccurate data can result in inaccurate models and maps, skewing the results of our analysis and ultimately resulting in poor decisions. "Garbage in, garbage out," as the adage says. The wide availability makes

Object Storage System for Mass Geographic Information

1037

it critical to understand what GIS data is, how it is used, and how to select the right data for our needs. Geography is information about the earth's surface and the objects found on it. This spatial object data comes in two basic forms: (1) Map data. Map data contains the location and shape of geographic features. Maps use three basic shapes to present real-world features: points, lines, and areas (called polygons). (2) Image data. Image data ranges from satellite images and aerial photographs to scanned maps (maps that have been converted from printed to digital format). A GIS stores information about the world as a collection of themed layers that can be used together. A layer can be anything that contains similar features such as customers, buildings, streets, lakes, or postal codes. This spatial object data contains either an explicit geographic reference, such as a latitude and longitude coordinate, or an implicit reference such as an address, postal code, census tract name, forest stand identifier, or road name.
2. Spatial object attributes
Spatial object attributes are the descriptive spatial object data that GIS links to map features. Spatial object attributes are collected and compiled for specific areas like states, census tracts, cities, and so on, and often come packaged with map data. Each spatial object has one or more attributes that identify what the object is, describe it, or represent some magnitude associated with the object. There are five types of spatial attribute: categories, ranks, counts, amounts, and ratios. Categories are groups of similar things. They help us organize and make sense of spatial objects. All spatial objects with the same value for a category are alike in some way and different from objects with other values for that category. Ranks put spatial objects in order from high to low. Ranks are used when direct measures are difficult or if the quantity represents a combination of factors. Counts and amounts show users total numbers. A count is the actual number of spatial objects on the map. An amount can be any measurable quantity associated with a spatial object, such as the number of employees at a business. Using a count or amount lets users see the actual value of each object as well as its magnitude compared to other spatial objects. Ratios show users the relationship between two quantities and are created by dividing one quantity by another for each spatial object.
3. Spatial object methods
Spatial object methods define the behavior of geographically integrated features and manage connectivity among features in a set of feature classes. A method may be a user-defined modular operation on stream data, and it is applied on a per-object basis.
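The three facets just listed (data, attributes, methods) can be pictured with a small data structure. The Python sketch below is only illustrative; the field names and the example attribute values are our own assumptions, not part of the paper.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class SpatialObject:
    oid: str
    geometry: List[Tuple[float, float]]            # points/lines/polygons as coordinates
    attributes: Dict[str, object] = field(default_factory=dict)
    methods: List[Callable] = field(default_factory=list)   # per-object method chain

    def apply_methods(self):
        data = self
        for m in self.methods:                     # each method transforms the object/stream
            data = m(data)
        return data

road = SpatialObject(
    oid="road-7",
    geometry=[(114.3, 30.5), (114.4, 30.6)],
    attributes={"category": "street",              # category
                "rank": 2,                         # rank
                "lane_count": 4,                   # count
                "length_km": 12.8,                 # amount
                "cars_per_km": 310.5},             # ratio
)
road.methods.append(lambda obj: {"oid": obj.oid, "vertices": len(obj.geometry)})
print(road.apply_methods())   # {'oid': 'road-7', 'vertices': 2}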

3 Intelligent GIS Based on OSC
For a traditional GIS, hints for the storage system come from three aspects: the first is in combination with an accurate file layout; the second is from the existing file system and depends on user input; the third is from file content analysis. But a spatial object, as a fundamental storage component in an OSC, is different from storage in a traditional


storage device, and spatial objects provide ample hints for the OSC, which help in designing an intelligent GIS. Intelligent GIS makes it possible to digitally encapsulate geographic knowledge and is engineered to support this knowledge-based approach. The OSC is the building block of the system and can be built from off-the-shelf components. Like other intelligent disks, it has a processor, memory, a network interface and a block-based disk interface, and thus has the ability to do intelligent processing on the spatial objects stored in it. Spatial object attributes make the GIS adaptive to more application fields. Users can register and upload methods (or rules) at the storage device and associate a spatial object with a chain of methods [5]. Any kind of operation on spatial objects can be performed by the method chains, which are executed by taking the object as an input stream in an OSC. The OSC explores two promising enhancements, an object-based interface and embedded computational capability, and it improves the scalability of the storage system. For instance, as in other database management systems, numerous data updates are constantly being posted to an OSC in GIS. Hence, a GIS database in an OSC, like other databases, must support update transactions. However, GIS users have some specialized transactional requirements. The main concept underlying this is often referred to as a long transaction. This is because, in GIS, a single editing operation can involve changes to multiple rows in multiple tables. Users need to be able to undo and redo their changes before they are committed. Editing sessions can span a few hours or even days. Often the edits must be performed in a system that is disconnected from the central, shared database. In many cases, database updates pass through a series of phases. GIS work flow processes may span days and months. Yet the GIS database still requires continuous availability for daily operations, where users might have their own views or states of the shared GIS database. By registering a version control method, GIS can process such long transactions according to the users' management policy. Geographic intelligence is inherently distributed and loosely integrated. Rarely is all the necessary information present in a single database instance with a single data schema. GIS users count on one another for portions of their GIS spatial objects. An important component in GIS is a GIS metadata portal with a registry of the numerous spatial object holdings and information sets. A number of GIS users act as special object stewards who compile and publish their spatial object sets for shared use by other organizations (by organization metadata server). They register their information sets at an organization metadata portal. By searching a GIS metadata portal, other GIS users can find and connect to desired information sets. The GIS metadata portal is a Web site where GIS users can search for and find GIS information relevant to their needs and, as such, depends on a network of published GIS data services, map services, and metadata services. Periodically, a GIS metadata portal site can harvest metadata from a collection of participating sites to publish one organization metadata. Thus, an organization metadata can reference object holdings contained at its site as well as at other sites. It is envisioned that a series of OSS metadata servers will be available to form a high-speed network.
GIS spatial objects and services are documented in OSS metadata records that can be searched to find candidates for use in various GIS applications.
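A toy version of this idea, written only as an illustration (the class and method names are ours, not an OSD standard), shows how user-registered methods could be chained per object inside the controller while earlier versions are kept for the undo/redo needs of a long transaction.

class SmartOSC:
    # Toy OSC that lets users register methods and chain them per object.
    def __init__(self):
        self.objects = {}      # oid -> list of versions (long-transaction states)
        self.methods = {}      # name -> callable registered by the user
        self.chains = {}       # oid -> [method names]

    def register_method(self, name, fn):
        self.methods[name] = fn

    def attach_chain(self, oid, names):
        self.chains[oid] = list(names)

    def put(self, oid, data):
        self.objects.setdefault(oid, []).append(data)   # keep versions; undo = drop last

    def get(self, oid):
        data = self.objects[oid][-1]
        for name in self.chains.get(oid, []):
            data = self.methods[name](data)             # executed inside the device
        return data

osc = SmartOSC()
osc.register_method("simplify", lambda coords: coords[::2])   # thin out vertices
osc.put("levee-A", [(0, 0), (1, 1), (2, 2), (3, 3)])
osc.attach_chain("levee-A", ["simplify"])
print(osc.get("levee-A"))    # [(0, 0), (2, 2)]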


4 Conclusion
In the digital computing age, people have begun to capture everything they know and share it across networks. These knowledge collections are rapidly becoming digitally enabled. Simultaneously, GIS is evolving to help us better understand, represent, manage, and communicate many aspects of our earth as a system. For GIS, there is an increasing demand for storage capacity and throughput. Thus, there is a need for storage architectures whose processing power scales with the growing size of geographic datasets. In this paper, the object storage system is introduced to GIS. It offers some significant advantages. First of all, since the OSC can be built from off-the-shelf components, the cost is much smaller. Second, the OSC can store integrated spatial objects and provides an object interface for GIS. Third, the OSS enhances the intelligence of GIS by introducing the concept of the method, which operates on spatial objects.

References
1. Website, 2005. http://www.gis.com, June 24, 2005.
2. Yongping Zhao, D.A. Clausi: Design and establishment of multi-scale spatial information system based on regional planning and decision making. Geoscience and Remote Sensing Symposium, pp. 1965-1967, 2001.
3. Dan Feng, Lingfang Zeng, Fang Wang et al.: Geographic Information Systems Grid. P.M.A. Sloot et al. (Eds.): EGC 2005, LNCS 3470, pp. 823-830, Springer-Verlag Berlin Heidelberg, 2005.
4. Intel Corporation: Object-Based Storage: The Next Wave of Storage Technology and Devices. January 2004, accessible from http://www.intel.com/labs/storage/osd/
5. Dan Feng, Ling-jun Qin, Ling-Fang Zeng, Qun Liu: A Scalable Object-based Intelligent Storage Device. Proceedings of the Third International Conference on Machine Learning and Cybernetics, Shanghai, pp. 387-391, 26-29 August 2004.
6. Website, July 2005, http://www.fgdc.gov/
7. M. Mesnier, G.R. Ganger, E. Riedel: Object-based storage. IEEE Communications Magazine, Vol. 41, Issue 8, pp. 84-90, Aug. 2003.
8. SNIA, Object-Based Storage Devices (OSD) workgroup, January 2004, accessible from http://www.snia.org/osd
9. GIS Standards and Standardization: A Handbook. United Nations Economic and Social Commission for Asia and the Pacific, New York, 1998.

The Service-Oriented Data Integration Platform for Water Resources Management*
Xiaofeng Zhou, Zhijian Wang, and Feng Xu
School of Computer & Information Engineering, Hohai University, Nanjing 210098
[email protected]

Abstract. The data resources in water resources management have the characteristics of distribution, autonomy, multiple sources, heterogeneity, real-time requirements and security requirements. Traditional approaches to data integration, such as federated databases, mediation and data warehouses, cannot satisfy all of these requirements at the same time. Along with the development of Service-Oriented Architecture (SOA) and related technologies, using SOA to integrate data resources has become an effective way. This paper designs and realizes a data integration platform for water resources management using SOA; it provides effective, safe and flexible data sharing services.

1 Introduction
It is important that data integration is applied in the domain of water resources management. It can optimize the allocation of water resources and improve the efficiency of water resource utilization. But data integration for water resources management is very complicated, so data sharing is very difficult, and users who need these data resources cannot get them at the proper time. Because water resources management departments are distributed by region and closely related to regional services, the data resources needed by water resources management have the following characteristics:
- Distributed: the data resources needed by water resources management are distributed across different departments, regions, etc.
- Autonomous: the different data resources belong to different departments, and their construction and maintenance are also assumed by different departments.
- Multi-source: the data resources needed by water resources management come from different sources; some are gathered and others are exchanged.
- Heterogeneous: the data resources needed by water resources management are heterogeneous in system, storage method, data type, naming, etc.
- Real-time: the timeliness requirements on the data resources called are high because water resources management strongly emphasizes time efficiency.

* This work was supported by National Natural Science Foundation of China under grant No. 60573098, and the high tech project of Jiangsu province under grant No. BG2005036.



- Security: the security of the data resources needed by water resources management is very important because these data resources relate to the safety of people's lives and property, and to national stability and development.

Data integration has become a research problem that urgently needs to be solved in the domain of water resources management. Traditional approaches to data resource integration cannot satisfy all of these requirements at the same time. This paper proposes an SOA-based approach to data resource integration in water resources management.

2 The Traditional Approach of Data Resources Integration
The method of multi-database integration was proposed as early as the mid-1970s. A multi-database integration system lets a user call a number of independent data sources at one time, using a single language for defining and operating on data. McLeod proposed the concept of the federated database system based on the multi-database integration system. A federated database system (FDBS) is a collection of cooperating database systems that are autonomous and possibly heterogeneous [1]. This method cannot integrate non-database data.
Mediation is another approach to data resource integration. It integrates data by providing a virtual view over all heterogeneous data sources. These data sources may be databases, legacy systems, web data, etc. A mediated system consists of mediators and wrappers [2][3]. The mediator is a software component that mediates between the user and the physical data sources and can process and execute queries over data stored locally and in several external data sources. A wrapper is a small application that translates a physical data source into the global model. Along with the development of middleware technology, mediation is realized on top of middleware; this is called a distributed integration system based on middleware [4]. Mediation effectively solves the problems of high development cost and difficult code reuse brought by federated database systems, and it can effectively realize complicated massive data integration in a network computing environment. But mediation only realizes read access to data because the physical data sources are autonomous.
Other methods of data resource integration include data warehouses and mobile agents. Data warehousing encompasses architectures, algorithms, and tools for bringing together selected data from multiple databases or other data sources into a single repository, suitable for direct querying or analysis [5]. Mobile agents can realize customization of data and satisfy a user's individual needs [6].

3 Service Oriented Architecture and Web Services
Service Oriented Architecture (SOA) is an evolution of the Component Based Architecture, Interface Based Design (Object Oriented) and Distributed Systems of the 1990s. Service Oriented Architecture is an architectural paradigm for components of a system and interactions or patterns between them. A component offers a service that waits in a state of readiness. Other components may invoke the service in compliance with a service contract [7].


Most architectures that are called SOA include a service provider, a service consumer, and some messaging infrastructure. The service consumer and the contract between the service provider and service consumer are implied. A software architect who designs a software application using all the minimal concepts of SOA has designed an application that is compliant with SOA. Web Services is a specific family of SOA implementations that embodies the core aspects of a service-oriented approach to architecture and extends the basic SOA Reference Model.
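The roles just named can be sketched in a few lines of Python. The registry, provider and consumer below are purely illustrative stand-ins (a real deployment would use UDDI, WSDL and SOAP/HTTP rather than in-process calls, and all names here are assumptions).

class Registry:                    # stands in for a UDDI registration center
    def __init__(self):
        self.services = {}
    def publish(self, name, endpoint, contract):
        self.services[name] = (endpoint, contract)
    def find(self, name):
        return self.services[name]

class RainfallService:             # an illustrative service provider
    contract = {"operation": "daily_rainfall", "returns": "mm"}
    def daily_rainfall(self, station):
        return {"station": station, "rainfall_mm": 12.4}

registry = Registry()
provider = RainfallService()
registry.publish("rainfall", provider, RainfallService.contract)

# The service consumer discovers the provider through the registry, then
# invokes it in compliance with the published contract.
endpoint, contract = registry.find("rainfall")
print(contract["operation"], endpoint.daily_rainfall("Nanjing-01"))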

4 Data Resources Integration Platform for Water Resources Management
4.1 The Architecture of the Platform
This section describes the architecture depicted in Figure 1. The platform is based on web services and can be divided logically into three parts: data resource owner, data resource user and platform manager, which correspond to the service provider, service consumer and registration center of SOA. It consists of metadata, the UDDI registration center, service components for data called, service components for user calling and the platform functions.

Fig. 1. The architecture of platform


4.2 Metadata
Metadata is a standard about the contents and structure of data resources. The content of the metadata describes how a data resource sees the world, which objects it knows and how the objects relate to each other. The metadata is a foundation of the data resources integration platform and is looked upon as the global view of the data resources called. Many water resources management departments have already completed their database designs, and the Ministry of Water Resources is instituting a standard for water resources management datasets, so the metadata for water resources management can be obtained easily.
4.3 UDDI Registration Center
The UDDI registration center is a facility that follows the UDDI specification and manages all kinds of service components. There are now many UDDI products and open-source implementations.
4.4 Service Component for Data Called
The service component for data called is created by the data resource owner using the service-oriented data integration framework. It is the service component that can be used by authorized users. This service component can be called by authorized users directly, can be composed into larger service components for data called, and can be called by operational application systems on the upper layer.
4.5 Service Component for User Calling
The service component for user calling is created by the data resource user for an actual need, based on service components for data called. It is used only by the user who created it; it must be published in the UDDI registration center if it needs to be used by others.
4.6 The Functions of the Platform
4.6.1 Metadata Manager
The metadata manager realizes the management of metadata in the platform, including adding, deleting and modifying. The services already created and the related definitions should be adjusted automatically when the metadata is modified. In principle, metadata cannot be deleted if it is already in use; if the metadata must be deleted anyway, a message must be sent to the related service providers.
4.6.2 Service Called Manager
The service called manager is an extension of the UDDI functions. It watches the platform running, intervenes in abnormal actions, manages service calls, and so on.
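As a rough illustration of the metadata manager behaviour described in 4.6.1 (our own sketch; none of the names come from the platform), the manager can track which service components depend on which metadata entries, adjust them when an entry is modified, and refuse to delete an entry that is still in use unless the providers are notified.

class MetadataManager:
    def __init__(self):
        self.entries = {}       # name -> definition (the global view)
        self.dependents = {}    # name -> set of service component names

    def add(self, name, definition):
        self.entries[name] = definition

    def bind_service(self, name, service):
        self.dependents.setdefault(name, set()).add(service)

    def modify(self, name, definition, notify):
        self.entries[name] = definition
        for service in self.dependents.get(name, set()):
            notify(service, name)            # dependent services adjust automatically

    def delete(self, name, notify_owner=None):
        if self.dependents.get(name):
            if notify_owner is None:
                raise RuntimeError("metadata '%s' is in use and cannot be deleted" % name)
            for service in self.dependents[name]:
                notify_owner(service, name)  # forced deletion: warn the providers
        self.entries.pop(name, None)

mm = MetadataManager()
mm.add("reservoir_level", {"unit": "m", "source": "hydrology DB"})
mm.bind_service("reservoir_level", "reservoir-levels-service")
mm.modify("reservoir_level", {"unit": "m", "source": "hydrology DB v2"},
          notify=lambda s, n: print("adjust", s, "for", n))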


4.6.3 Service Created
Service creation is the first step in realizing data resource integration and sharing. It allows data resource owners to create service components for data called, and it includes service creation and service maintenance. When a data resource owner uses the service creation function to create a new service component for data called, he first selects the data source containing the data resources to be shared, then selects the data, constructs the relation between the data and the metadata, and finally creates the service component for data called. A service component for data called can provide shared data from one data table or multiple data tables; if the data come from multiple tables, the primary keys and the relations between the tables must be defined. Service maintenance allows the data resource owner to delete and upgrade service components for data called.
4.6.4 Service Published
Service publishing publishes the created services using UDDI. The service component owner authorizes the users who may call the service component and publishes the service component to the UDDI business registration center. It includes authorization and publication of the service component. First is authorization: the service component owner selects the service components to be published, which have been created with the service creation function, and then selects, from the user database, the users who may call each service component. Second is publication: the service component owner publishes these service components to the UDDI business registration center using UDDI.
4.6.5 Service Customized
Service customization is the core of realizing active service. A data resource user can customize the needed data, an individual data view, the mode of data push, the frequency of data push, etc. First the data resource user selects service components from the service components for data called that he is allowed to call, and selects the needed data from these components; if the data come from different service components for data called, he must define the relations between them. Then he defines the format in which the data are shown, the mode of data push, the frequency of data push, etc. using the provided tools. The platform creates the service component for user calling automatically based on these definitions.
4.6.6 Service Called
All authorized users can call data through the platform. The service called function provides data access to authorized users in three ways. First, the user selects a service component for data called from the UDDI business registration center and calls data through it directly. Second, the user selects a service component for user calling defined through service customization and calls data through it. Third, the platform pushes the data to the user according to the user's definitions made during service customization. The data called by the user can be printed, stored, and so on.
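A compact sketch of the service customization step (illustrative only; the class, the data source and the column names are assumptions, not the platform's actual code) shows how a generated service component for user calling could combine bound data-called services, project the user's individual data view and record the push frequency.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CustomizedService:
    # Illustrative 'service component for user calling' generated from a customization.
    owner: str
    sources: List[Callable[[], List[Dict]]]   # bound service components for data called
    columns: List[str]                        # the user's individual data view
    push_every_hours: int                     # frequency of data pushed

    def call(self):
        rows = [row for src in self.sources for row in src()]
        return [{c: r.get(c) for c in self.columns} for r in rows]

# The platform would generate this object from the user's definitions;
# the data source below is a stand-in for a service component for data called.
levels = lambda: [{"reservoir": "Xiaolangdi", "level_m": 245.0, "updated": "08:00"}]
custom = CustomizedService(owner="flood_office", sources=[levels],
                           columns=["reservoir", "level_m"], push_every_hours=6)
print(custom.call())   # in the real platform these rows would be pushed every 6 hours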


5 The Conclusion
This platform can easily realize the integration of data for water resources management, even though these data are distributed, heterogeneous and autonomous. Problems of the traditional approaches to data resource integration, such as the complicated maintenance of relations between Global-as-View and Local-as-View, query processing and optimization, etc., can be avoided by using SOA. The platform can ensure the security of data calls; it can easily provide RBAC (Role Based Access Control) combined with PKI/PMI. Using the platform functions, the data resource owner can decide and adjust at any moment who can use these data resources and which authorities are granted. The service customization function provided by the platform can push data resources to users, providing active service.

References
1. A. P. Sheth, J. A. Larson: Federated Database Systems for Managing Distributed, Heterogeneous and Autonomous Databases. ACM Computing Surveys, Vol. 22(3), 183-236, 1990.
2. S. Busse, R. Kutsche, U. Leser, H. Weber: Federated Information Systems: Concepts, Terminology and Architectures. Technical Report 99-9, Berlin Technical University, 1999.
3. Tore Risch, Vanja Josifovski: Distributed Data Integration by Object-Oriented Mediator Servers. Concurrency and Computation: Practice and Experience, Vol. 13, Issue 11, 933-953, 2001.
4. M. Haas, R. J. Miller, B. Niswonger, M. Tork Roth, P. M. Schwarz, E. L. Wimmers: Transforming Heterogeneous Data with Database Middleware: Beyond Integration. IEEE Data Engineering Bulletin, 1999.
5. J. Widom: Research Problems in Data Warehousing. Proceedings of the Fourth International Conference on Information and Knowledge Management, 5-30, November 1995.
6. David Kotz, Robert S. Gray: Mobile Agents and the Future of the Internet. ACM Operating Systems Review 33(3), 7-13, August 1999.
7. http://www.adobe.com/enterprise/pdfs/Services_Oriented_Architecture_from_Adobe.pdf

Construction of Yellow River Digital Project Management System
Houyu Zhang and Dutian Lu
Department of Construction and Management, YRCC, Zhengzhou, Henan 450003
[email protected]

Abstract. Digital Project Management is one of the main application systems of the Digital Yellow River. It mainly consists of information collection and transmission, and application; the latter is divided into five systems covering project construction management, operation management, safety monitoring, safety assessment and maintenance management. After its construction, the operational state of the projects can be obtained in real time to assess project safety, and project maintenance schemes can be generated automatically. In addition, it supports decision making under visualized conditions and makes decision making more scientific and predictive.

1 Introduction
The Digital Project Management System (DPMS) is one of the main application systems of the "Digital Yellow River". DPMS conducts remote safety monitoring of Yellow River flood control projects, based on the computer network and service platform built for the "Digital Yellow River" project, relying on sensors built into the projects, exterior digital photogrammetric survey equipment, 3S technology and other modern and traditional methods to collect basic data [5]. It obtains the operational state of the projects, assesses project safety, allows quick queries of basic project information, generates project maintenance schemes, provides decision-making support to project management under visualized conditions, strengthens the scientific and predictive character of decision making, ensures project safety and realizes the normal benefits of flood control. DPMS is a must for realizing the modernization of project management.

2 Present Situation of Project Management
2.1 Present Situation of Projects
Since the Yellow River began to be trained by human beings, the embankment of the lower Yellow River has been elevated and thickened four times to prevent floods, and relatively systematic river training projects have been constructed.

Sanmenxia, Xiaolangdi, Luhun and Guxian reservoirs were constructed on the main stream and tributaries, and the Dongping Lake and Beijindi detention areas were set up. A flood control project system of "retarding floods in the upper reach, discharging in the lower reach, and detention with projects along the banks" was initially formed. Depending on these measures and on the guard kept by the people and army along the Yellow River, its safety has been ensured in past years. At present there are 36 channel projects and 920 dams, buttresses and embankment protection works in the Yumenkou-Tongguan reach of the middle Yellow River, with a total project length of 139 km. There are 40 protection projects and 262 dams and buttresses in the reach from Tongguan to the dam of the Sanmenxia reservoir. In the lower Weihe River there are 363 km of embankment, 59 vulnerable guiding projects and 1211 dams, buttresses and embankment protection works, with a project length of 122 km. The length of all kinds of embankments in the lower Yellow River is 2291 km; among this, the embankment of the main stream is 1371 km, the embankment in detention areas is 314 km, the embankment of tributaries is 196 km and others total 264 km. The embankment in the estuary is 146 km. There are 215 vulnerable spots of various types with 6317 dams, buttresses and embankment protection works, with a project length of 419 km, and 231 guiding and beach protection projects with 4459 buttresses, with a project length of 427 km. In addition, there are 79 embankment protection projects, 405 protection dams and a total of 107 sluices for flood conveyance or water abstraction. The construction of these flood control projects enhances the flood control capability of the middle and lower Yellow River. Because of the great length of the flood control projects, the variety of project types and the large number of items, project maintenance management is a heavy task.
2.2 Project Management Situation
As the subordinate agency of the Ministry of Water Resources in the Yellow River Basin, YRCC exercises the administrative authority of the Ministry of Water Resources over the Yellow River. The Department of Construction and Management is an administrative branch of YRCC, mainly responsible for project construction management and operation management in the Yellow River basin. Three levels of bureaus, provincial, municipal and county, are established under YRCC. In the lower Yellow River there are the Henan and Shandong bureaus, 14 bureaus at the municipal (administrative) level and 63 bureaus at the county (municipal, district) level, including 12 sluice management bureaus, with 18,407 staff engaged in project operation management. In the middle Yellow River there are the Shanxi, Shaanxi and Sanmenxia bureaus at the municipal level and 18 bureaus at the county level, with altogether 600 staff. For years, Yellow River project management has adopted a mode combining special management and multi-agency management. This mode played a role under the planned economy in keeping projects integrated and improving their flood control capability.

However, with the development of the market economy and the deepening of the reform that separates management from maintenance, the previous mode no longer meets the demands of modern project management. It is urgent to build a new management mode fit for modernized project management. Project management includes management during the construction period and management during the operation period.
Information Collection and Transmission
2.3 Information Classification
The information of DPMS can be classified into three classes according to its usage: information on project construction management, information on project operation management and information on project safety supervision. The forms of information include numbers, text, tables, figures, sound, static pictures and dynamic pictures. According to the business nature, the information on project construction management can be classified into comprehensive information on project construction and management information of each party of the practical project. Information on project operation management can be classified into assessment information of project management, management information of construction items within the channel, management information of subordinate instruments and maintenance staff, and management information of the project environment.
2.4 Information Transmission
DPMS is a distributed multi-level system that covers the whole river. According to the principle of "centralized monitoring and multi-level management" and the administrative institutions and operation mechanism of YRCC, it is composed of four levels of management: YRCC, provincial, municipal and county. The agency at YRCC level is the Yellow River Project Management Centre; the provincial agencies are the Yellow River Henan Project Management Centre and the Yellow River Shandong Project Management Centre; corresponding management centres are also established at the municipal level, and Yellow River project management stations are established at the county level. The Shanxi Project Management Bureau, Shaanxi Project Management Bureau, Sanmenxia Project Management Bureau, Sanmenxia Reservoir, and the Yellow River Project Management Centres of the Shanxi and Shaanxi bureaus are directly under the Project Management Centre of YRCC. Data transmission takes place mainly among the four levels or between the four levels and field spots.

3 Application System of DPMS
3.1 Project Construction Management System (DCMS)
DCMS is designed to meet the demands of the preparation, construction and approving periods and to realize information-based, modernized management of project construction. It consists of project bidding invitation management, project construction management and project approving management.

3.1.1 Project Bidding Invitation Management
(1) Bidding and bidding invitation management. It includes the publishing and enquiry of all kinds of bidding invitation information, synchronized management of the bidding and bidding invitation situation, and collection and inquiry of all kinds of bidding and bidding invitation project information [4].
(2) Enterprise qualification management. It includes the qualification management of enterprises under YRCC and of construction partners. Enterprises under YRCC are under dynamic management covering initial assessment, submission, annual assessment and qualification class; the qualifications of construction partners need to be checked.
3.1.2 Project Construction Management
(1) Project quality management. It builds a project construction quality control and management system that is supervised by the government, with the project legal person taking responsibility, the construction unit providing guarantees and the supervision unit managing the work. It consists of objective control, organization and management, process control, document management, management regulations and quality analysis tools.
(2) Project progress management. It mainly provides convenient project-progress information processing functions, such as data input, summary and statistics, report generation and database maintenance, for the project legal person, construction units at all levels, the supervision governing unit and the quality supervision governing unit. With these functions, the project legal person and construction units at all levels can inspect the progress of the project in time.
(3) Project budget management. It mainly realizes dynamic budget management, investment control by the supervision unit and project cost budget management. It should be able to manage and analyze documents, announcements, modifications and contracts concerning project budget and fund management, adjust the project cost in real time and provide project investment information for all construction partners [6].
(4) Project contract management. It mainly includes the inquiry and supervision of all kinds of project contracts and synchronized management of contract execution progress.
3.1.3 Project Approving Management
It includes project approving and project document management, provides relevant stipulations, regulations and technical data services for different periods of the project, and manages all kinds of documents, important files and approving information during project construction.
3.2 Project Operation Management System
It provides basic information on project operation and progress, serves management and decision making, and publishes relevant stipulations, standards, methods and project maintenance information on project operation [5]. The system consists of project management and operation, objective assessment, management of construction items within the channel, management of subordinate establishments, management of maintenance groups and management of the project environment.

3.3 Project Safety Monitoring System
The flood control project safety monitoring system monitors the operation of all flood control projects. It is a main part of the hydraulic project construction and management system, the basis of the project safety assessment system, the maintenance and management system and the operation management system, and an important parameter source and basis for water regulation decisions. It consists of two parts: online analysis and processing, and monitoring apparatus management.
3.3.1 Online Analysis and Processing
Online analysis and processing is a pre-assessment of the data collected by the project safety monitoring apparatus. It differs from the safety assessment performed by the safety assessment system and is usually not used as evidence for forecasting and decision making. However, it is significant for judging abnormal data, improving daily management, understanding project operation rules and improving the accuracy of safety assessment. Online analysis and processing is realized by building a project safety forecasting model and establishing corresponding thresholds. A threshold basically reflects the safety degree of a project; when the threshold is exceeded, a warning signal is reported. The safety forecasting model is built from project design and construction data and historical information, and is improved gradually as the system operates. Pre-warning information from online analysis and processing is handled by the corresponding unit according to its level.
3.3.2 Monitoring Apparatus Management
The location, operation state and parameters of the project safety monitoring apparatus greatly affect the error of the collected data. The situation of all kinds of instruments must be managed effectively and instrument errors must be corrected in time so that the project safety parameters can be truly reflected. The monitoring apparatus management system therefore provides dynamic management functions.
3.4 Project Safety Assessment System
The project safety assessment system synthesizes, analyzes and processes the effective monitoring data and observed information, appropriately assesses the interior and exterior quality of a project, and acknowledges and grasps the operation state of the project, so as to support flood control regulation, integrated water resources regulation and flood control project rescue, strengthening and maintenance with a safety and quality standard system and assessment models. The working process of the system includes two steps. First, an intelligent synthesis analysis is conducted on all kinds of basic data collected by the monitoring system using the safety assessment model. Second, a synthesis assessment is conducted by feeding the calculation results of the model into the project safety assessment system, which consists of a database and an expert experience base. Finally, a series of indexes reflecting the safe operation of the project is produced.

From the perspective of the content and responsibility of project management, flood control project safety assessment needs to be conducted in four aspects: normal assessment of flood control project safety, special-item assessment, flood-period assessment and real-time assessment [2].
3.5 Project Maintenance and Management System
It is the main task and an important responsibility of project management to conduct flood control project maintenance, risk rescue and strengthening, keep the projects integrated and improve their bearing capacity. Through maintenance, risk rescue and strengthening, the flood control capability of the projects is maintained and improved, the projects are kept integrated and the maximum management benefit of the flood control projects is pursued. The project maintenance and management system includes three parts: basic project information management, project maintenance decision support and project maintenance information management.
3.5.1 Basic Project Information Management
Basic project information is the basic information of the Yellow River flood control projects (embankments, controlling and guiding projects at vulnerable spots, sluices, hydraulic multi-purpose projects, etc.). It includes data on project geology, design and planning, construction, completion and operation, as well as management data. Basic project information is collected and processed according to standards and demands and stored in the database, meeting the demands of inquiry, summary and analysis for project construction and management, flood control regulation and water regulation. Visualized inquiry of the river regime, project locations and structure profiles is realized with GIS technology.
3.5.2 Project Management and Maintenance Decision Support
Based on the monitoring results and the project safety assessment results, a set of project maintenance schemes is generated automatically through the project maintenance standard model. Optimized project maintenance strategies are then established and sorted according to their priority, using a decision and conference system consisting of relevant stipulations, standard bases and expert experience bases. The goal is to improve project maintenance decision making and realize optimized resource allocation.
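To make the chain from online monitoring (Sect. 3.3) through safety assessment (Sect. 3.4) to maintenance decision support (Sect. 3.5) more concrete, the sketch below shows one possible shape of that pipeline in Python; the sensor names, thresholds, scoring rule and ranking are invented for illustration and are not taken from the DPMS models.

```python
# Illustrative pipeline: threshold pre-warning -> simple safety score -> ranked
# maintenance tasks. All names, thresholds and scoring rules are hypothetical.

THRESHOLDS = {"seepage_rate": 0.8, "settlement_mm": 25.0, "crack_width_mm": 2.0}

def pre_warnings(readings: dict) -> list:
    """Online analysis: report a warning for every reading that exceeds its threshold."""
    return [name for name, value in readings.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]]

def safety_score(readings: dict) -> float:
    """Crude stand-in for the safety assessment model: 1.0 is safe, 0.0 is critical."""
    ratios = [min(readings[n] / t, 2.0) for n, t in THRESHOLDS.items() if n in readings]
    worst = max(ratios, default=0.0)
    return max(0.0, 1.0 - worst / 2.0)

def maintenance_plan(projects: dict) -> list:
    """Rank projects for maintenance, most urgent (lowest score) first."""
    scored = [(safety_score(r), name, pre_warnings(r)) for name, r in projects.items()]
    return sorted(scored)

if __name__ == "__main__":
    projects = {
        "Dam section A": {"seepage_rate": 1.1, "settlement_mm": 12.0},
        "Dam section B": {"seepage_rate": 0.3, "crack_width_mm": 0.5},
    }
    for score, name, warnings in maintenance_plan(projects):
        print(f"{name}: score={score:.2f}, warnings={warnings}")
```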

4 Conclusions
"Digital Project Management" is an important part of the "Digital Yellow River" and a brand-new systematic project with obvious social and economic benefits. After the system is constructed and put into use, it will bring great changes to working practices and project management measures. The future is an information society; with the development of society, Yellow River project construction and management will turn to modern management marked by digital management.

References
[1] Plan of Digital Yellow River Project, YRCC, 2003.
[2] Lu Dutian et al., Annex 4 of Plan of Digital Yellow River Project, 2003.
[3] Management Method of Digital Yellow River Project Construction, YRCC, 2003.
[4] Standard Construction of Digital Yellow River Project, YRCC, 2003.
[5] Lu Dutian et al., Requirement Report of Digital Project Management, 2002.
[6] Implement Scheme of Digital Yellow River Project, YRCC, 2003.

Study on the Construction and Application of 3D Visualization Platform for the Yellow River Basin

Junliang Wang, Tong Wang, Jiyong Zhang, Hao Tan, Liupeng He, and Ji Cheng
Yellow River Engineering Consulting Co., Ltd., Zhengzhou 450003, China
http://www.yrdesign.cn/

Abstract. The 3D platform for the Yellow River Basin is an infrastructure of the Digital Yellow River Programme. The platform inputs the Yellow River Basin visually, vividly and systematically into the computer by adopting advanced computer techniques such as GIS (Geographical Information System), RS (Remote Sensing), 3D modeling and Virtual Reality (VR), and constructs a virtual counterpart of the actual Yellow River basin, the "3D Visualization Platform for the Yellow River Basin". On this platform, basin planning, management and multiple application systems can be developed and articulated, which is convenient for modeling, analyzing and studying the natural phenomena of the Yellow River, exploring their internal rules, and providing a powerful 3D visualized decision-making support environment for every plan in the regulation, development and management of the Yellow River.

1 Introduction
Constructing a unified, integrated digital platform and virtual environment and developing powerful system software and mathematical models are the key issues in the construction of the Digital Yellow River Programme. The analysis of the construction and application of the 3D visualization platform for the Yellow River Basin studies how to input the Yellow River Basin into the computer visually, vividly and systematically, construct a virtual counterpart of the actual Yellow River Basin, and provide a 3D visualized, integrated digital platform and virtual environment for the other application systems in the Digital Yellow River Programme.

2 Study of 3D Digital Visualization Platform Construction for the Yellow River Basin
The construction of the 3D visualization platform for the Yellow River Basin must not only study how to input the whole basin into the computer, but also consider how to make the platform convenient to use so as to satisfy the application demands of the other application systems in the Digital Yellow River Programme. This creates a higher requirement for the platform construction.

2.1 Practice of Platform Construction for the Basin
From 2003 to 2004, according to the overall arrangement of the Digital Yellow River Programme, Yellow River Engineering Consulting Co., Ltd (YREC) developed the Interactive 3D Visualization System for the Yellow River lower reach. The system inputs the Yellow River lower reach into the computer visually, vividly and systematically by adopting advanced computer techniques such as GIS, RS, 3D modeling and VR. The platform helps users swiftly overview, understand and study the whole Yellow River lower reach and provides a full, systematic and visualized study and analysis environment for the management and development of the lower reach. The user interface of the Interactive 3D Visualization System for the Yellow River Lower Reach is shown in Fig. 1.

Fig. 1. User Interface of Interactive 3D Visualization System for Yellow River Lower Reach

The design and development of the Interactive 3D Visualization System for the Yellow River Lower Reach is the first step of the research and practice on constructing a 3D visualization platform for the Yellow River Basin. The successful development and application of the system has accumulated substantial experience in 3D visualization platform construction for a basin and provides a sound technical base for that construction.
2.2 Basic Thoughts and Key Techniques for 3D Visualization Platform Construction for the Yellow River Basin

Since the whole Yellow River Basin is much larger than the lower reach, the 3D visualization platform for the Yellow River Basin must further take the river training requirements of all sides into consideration, so as to construct a flexible, multi-layered 3D visualization platform for the basin that can conduct integrated analysis. The following key technical issues, identified through the study and development of the interactive 3D visualization platform for the Yellow River lower reach, must be properly dealt with in constructing the 3D visualization platform for the Yellow River Basin.
(1) Effective management of mass data. The 3D visualization platform for the Yellow River Basin must construct and present a 3D virtual environment for the whole basin, and the construction of the 3D models for landform, texture imagery and major structures in this environment rests on mass data. How to store and manage the mass data is a key technical issue.
(2) Dynamic data update. As an application platform serving Yellow River training, the scene data in the 3D visualization platform must remain valid for a given period, which requires real-time updates of the DEM (digital elevation model) and DOM (digital orthophoto map) data and of the property data in the 3D landform scene of the basin.
(3) Quick positioning in the 3D mass scene. Because the basin covers a large area, users expect the system to reach any place in the scene quickly, which means quick scene positioning in a mass-data environment. How to position quickly in the 3D mass scene according to scene projects and cultural feature titles is a key technical issue.
(4) GIS analysis in the 3D mass environment. In the 3D virtual environment we often need information such as the landform cross-section at a location, the dimensions of an engineering cultural feature, or the elevation of any point, so the 3D visualization platform for the Yellow River Basin needs research work on GIS analysis in the 3D mass environment (a small illustrative sketch follows this list).
(5) Combination with the distributed mathematical model. For basin-scale applications, the combination between the 3D visualization platform and the distributed mathematical model for the basin must be handled properly, because the distributed mathematical model is the base for solving the problems about water in the basin.
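As a small illustration of the GIS analysis in item (4), point elevation and a landform cross-section, the following Python sketch samples a regular DEM grid; the grid values, cell size and sampling density are placeholders rather than the platform's real data model.

```python
# Illustrative GIS queries on a regular DEM grid: point elevation and a
# cross-section profile between two points. Grid values and cell size are
# made-up placeholders, not Yellow River data.

def elevation_at(dem, cell_size, x, y):
    """Nearest-cell elevation lookup; dem[row][col], origin at (0, 0)."""
    row = min(max(int(round(y / cell_size)), 0), len(dem) - 1)
    col = min(max(int(round(x / cell_size)), 0), len(dem[0]) - 1)
    return dem[row][col]

def cross_section(dem, cell_size, p0, p1, samples=20):
    """Sample elevations along the straight line from p0 to p1."""
    (x0, y0), (x1, y1) = p0, p1
    profile = []
    for i in range(samples + 1):
        t = i / samples
        x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        dist = t * ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        profile.append((dist, elevation_at(dem, cell_size, x, y)))
    return profile

if __name__ == "__main__":
    dem = [[95, 96, 98, 97],
           [94, 95, 97, 99],
           [93, 94, 96, 98]]          # elevations in metres, 4 x 3 cells
    for dist, z in cross_section(dem, cell_size=30.0, p0=(0, 0), p1=(90, 60), samples=6):
        print(f"{dist:6.1f} m -> {z} m")
```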

3 Application Study on the 3D Visualization Platform for the Yellow River Basin
The practice of constructing the Interactive 3D Visualization System for the Yellow River Lower Reach shows that a 3D platform can offer richer information and means of representation than a 2D platform and that it provides a background platform for visual picture systems.

We have carried out the following application research work based on the actual conditions of the Yellow River lower reach and achieved excellent results.
3.1 Study on Application Combined with the GIS-Based 2D Flow/Sediment Model for the Yellow River Lower Reach
The GIS-based 2D Flow-Sediment Mathematical Model for the Yellow River Lower Reach (referred to as the 2D Flow-Sediment Model) is another study item in the Digital Yellow River Programme. It establishes an advanced and practical mathematical model, mainly based on theoretical research and on the analysis of the annual sediment scouring and deposition changes of the lower reach channel, to simulate and study the flow and sediment evolution rules in the lower reach channel. Using the Interactive 3D Visualization System for the Yellow River Lower Reach together with the 2D Flow-Sediment Model mainly raises the presentation level of the model's calculation results: the combination can become a virtual experiment platform for simulating and studying the rules of the lower reach channel and can provide powerful technical support for flood control decision making. Fig. 2 shows an application interface of the Interactive 3D Visualization System for the Yellow River Lower Reach combined with the 2D Flow-Sediment Model.

Fig. 2. Application Interface for Interactive 3D Visualization System for Yellow River Lower Reach combined with 2D Flow-sediment Model
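One conceivable way to hand the model's calculation results to the 3D scene is to convert each timestep's water-depth grid into a coloured overlay draped on the terrain; the Python sketch below shows only that conversion step, with an invented colour ramp and depth range, since the actual data exchange between the 2D Flow-Sediment Model and the platform is not specified here.

```python
# Illustrative conversion of one timestep of modelled water depth into an
# RGBA overlay that a 3D scene could drape on the terrain. The colour ramp
# and depth range are invented; the real model/platform interface may differ.

def depth_to_rgba(depth_m, max_depth=5.0):
    """Map water depth (m) to a blue colour whose opacity grows with depth."""
    if depth_m <= 0.0:
        return (0, 0, 0, 0)                      # dry cell: fully transparent
    frac = min(depth_m / max_depth, 1.0)
    return (30, 90, int(150 + 105 * frac), int(80 + 175 * frac))

def overlay_for_timestep(depth_grid):
    """Turn a 2-D grid of depths into a grid of RGBA pixels."""
    return [[depth_to_rgba(d) for d in row] for row in depth_grid]

if __name__ == "__main__":
    timestep = [[0.0, 0.4, 1.2],
                [0.2, 2.5, 4.8]]                 # water depths in metres
    for row in overlay_for_timestep(timestep):
        print(row)
```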

3.2 Study on Application Combined with the Lab'ing Yellow River
In the construction of the Three Yellow Rivers, the Digital Yellow River and the Lab'ing Yellow River are both measures and tools for studying and training the natural Yellow River.

Plans for Yellow River training and development are first simulated by computer using the Digital Yellow River, and several possible plans are submitted afterwards. Those possible plans submitted by the Digital Yellow River are then examined with the Lab'ing Yellow River, and workable plans are put forward. The Interactive 3D Visualization System for the Yellow River Lower Reach, considered as the 3D digital platform of the Digital Yellow River, can also be regarded as the digital representation platform of the Lab'ing Yellow River, presenting the experimental results of the Lab'ing Yellow River in a 3D way. During the research on the Regulation Plan for the Braided Reach and Channel of the Yellow River Lower Reach, the project staff and the engineers working on the Interactive 3D Visualization System for the Yellow River Lower Reach studied ways of representing the Lab'ing Yellow River results on the 3D digital platform, and represented the experimental results of the Lab'ing Yellow River for the channel regulation plan in a 3D way by overlaying the vector Lab'ing Yellow River experiment data on the 3D digital platform. By this means, the regulation plan and its results can be represented more visually, which surely benefits scientific decision making.
3.3 Study on Application in Flood Control in the Yellow River Lower Reach
A consulting system for project and emergency situations can be developed and articulated on the 3D visualization platform, establishing a 3D visualized consulting system for project and emergency situations. Articulating a GIS-based 2D flood evolution model on the digital virtual platform set up by the system provides a visualized platform for decision-making support in lower reach flood control. The evolution of flood hydrographs of various grades, with various formation causes and under various engineering conditions, can be simulated on the platform, and the evolution process of floods of various grades in the channel can be represented visually. Based on the flood evolution results, evacuation plans for the people in the flooded plain of the lower reach under various flood grades, or other effective risk-avoiding measures, can be formulated, and the time needed for flood-fighting preparation work, such as the arrangement of materials, people and equipment, can be ascertained for projects with possible risk.
3.4 Study on Application in Water Dispatch in the Yellow River Basin
The information related to water dispatch can be integrated through the 3D digital platform, and a background platform for water dispatch of the whole basin can be formed by developing the corresponding dispatch models, supplying all-round technical support for water dispatch management and decision making.
3.5 Study on Application in Planning, Construction and Management of Flood Control Projects in the Yellow River Lower Reach
The construction and application of the 3D visualization platform for the Yellow River lower reach shows that such a platform is very important for Yellow River regulation and development and is an important measure for scientific research and decision making.

With the construction of the 3D visualization platform for the whole Yellow River Basin, the platform will be applied in a more extensive range and can provide all-round, systematic and visualized technical support for the regulation, development, research and management of the basin.

4 Conclusions
The analysis of the construction and application of the 3D visualization platform for the Yellow River Basin shows that the platform is of great importance to research on the regulation and development of the river basin and that it provides a background platform for systematic analysis and study of issues concerning the Yellow River. The visual representation, integration capability and reusability of the platform supply more economical measures for studying these issues. Much research work that in the past had to be carried out through physical or topographic model experiments can now be realized by mathematical models that reflect the underlying rules. This provides powerful technical measures for multi-plan research and is of great importance for research on basin rules aimed at maintaining the healthy life of the Yellow River.


A Light-Weighted Approach to Workflow View Implementation*

Zhe Shan¹, Yu Yang¹, Qing Li¹, Yi Luo², and Zhiyong Peng³

¹ Department of Computer Science, City University of Hong Kong, Kowloon, Hong Kong
[email protected], [email protected], [email protected]
² China Software Development Lab, IBM, Beijing, China
[email protected]
³ State Key Lab of Software Engineering, Wuhan University, Wuhan, China
[email protected]

Abstract. Interaction is a central concern of B2B e-commerce today. The concept of workflow views provides a convenient interface for advanced business transactions across enterprise boundaries. However, until now there has been little work discussing the implementation of workflow views. In this paper, we present a light-weighted implementation model for workflow views, which advocates an easy-to-deploy approach to implementing workflow view functions on top of existing WFMSs without breaking their integrity. Via this approach, enterprises can equip their workflow systems with versatile workflow views in an efficient and economical way.

1 Introduction
Workflow views are derived from workflows as the fundamental support for workflow inter-operability and visibility by external parties [1-3]. The components of a workflow view include the process flow graph, input/output parameters, documents and so on, which are also contained in a workflow; hence a workflow view encloses information about the business process structure and contents. For a specific business collaboration, a company may derive corresponding workflow views from its local workflow process. Such a workflow view includes all the information about the company necessary for this business and can be used as the interaction interface in a business transaction carried out with external parties. However, most existing work focuses on the conceptual model of workflow views; there is little work investigating the implementation details of workflow views. In [4], a workflow deputy model was proposed to support the realization of workflow views. However, that approach limits the implementation of workflows and workflow views to the Smalltalk [5] environment, which makes it difficult for enterprises to develop view functions for existing workflow systems. In this paper, we describe a light-weighted implementation model for workflow views by defining an easy-to-deploy approach to workflow view implementation based on existing WFMSs. The model keeps the integrity of the existing WFMS while setting up a workflow view layer based on the exposed workflow APIs and devising specific client and administration interfaces for workflow views. Our primary objective is to develop the view functions without modifying the workflow kernel engine. Via such an approach, enterprises can obtain the advantages of workflow views in an efficient and low-cost way.

The remainder of the paper is organized as follows. Section 2 introduces the background of workflow views and their meta-model. Section 3 discusses research related to this paper. Section 4 presents the light-weight implementation model for workflow views. Section 5 displays the functionality of the model via a sample prototype system. Section 6 concludes the paper with discussion and future research plans.

* This research was largely supported by a grant from the Research Grants Council of the Hong Kong SAR, China (Project No. CityU 117405), and sponsored by the 973 National Basic Research Program, Ministry of Science and Technology of China under Grant No. 2003CB317006, the State Key Lab of Software Engineering (Wuhan University, China) under grant SKLSE03-01, the National Natural Science Foundation of China (60573095) and the Program for New Century Excellent Talents in University of China (NCET-04-0675).

2 Background of Workflow Views
A business process in a B2B e-commerce environment usually involves many participating organizations, i.e., such a business process involves several inter-operating and interacting workflows from different organizations. This is often referred to as a cross-organizational workflow. To support workflow inter-operability, we need a mechanism that allows authorized external parties to access only the related and relevant parts of a workflow while maintaining the privacy of other unnecessary or proprietary information. Motivated by views in databases, Chiu et al. proposed the use of workflow views as a fundamental mechanism for cross-organizational workflow interaction [2].

Fig. 1. Workflow View Meta-model

A workflow view involves all the components included in a workflow, such as a flow graph of activities, a set of input/output messages, etc., and is a structurally correct subset of that workflow. The use of workflow views facilitates sophisticated interactions among workflow management systems (WFMSs) and allows them to inter-operate in a gray-box mode, i.e., they can partially access each other's internal information. The artifact of workflow views is therefore a handy mechanism to enact and enforce cross-organizational interoperability over the Internet. In addition, workflow views are useful in providing access to business processes for external customers or users, including in B2C e-commerce. Even within an organization, workflow views are useful for security applications, such as restricting access to a workflow system (resembling the use of views in databases). A meta-model of workflow views as a UML class diagram is shown in Fig. 1, illustrating the relation between workflow and workflow view.
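Since Fig. 1 is not reproduced here, the following Python sketch restates the central constraint in code form: a workflow view must be a structurally correct subset of its source workflow, i.e. every activity and transition kept in the view must exist in the source and every kept transition must connect activities that are also kept. The class and field names are illustrative only and simplify the paper's meta-model.

```python
# Illustrative check that a view is a structurally correct subset of a workflow.
# Class and field names are examples, not the paper's formal meta-model.

from dataclasses import dataclass, field

@dataclass
class Workflow:
    activities: set
    transitions: set = field(default_factory=set)   # set of (from_activity, to_activity)

@dataclass
class WorkflowView:
    activities: set
    transitions: set = field(default_factory=set)

def is_structurally_correct(view: WorkflowView, source: Workflow) -> bool:
    """Every view activity/transition must exist in the source, and every
    kept transition must connect activities that are kept in the view."""
    if not view.activities <= source.activities:
        return False
    if not view.transitions <= source.transitions:
        return False
    return all(a in view.activities and b in view.activities
               for a, b in view.transitions)

if __name__ == "__main__":
    wf = Workflow({"check_price", "prepare_quote", "confirm_order"},
                  {("check_price", "prepare_quote"), ("prepare_quote", "confirm_order")})
    view = WorkflowView({"check_price", "confirm_order"}, set())
    print(is_structurally_correct(view, wf))   # True
```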

3 Related Work
In this section, we discuss related work in the area of workflow views. There have been some earlier works in this area. Liu and Shen [6] presented an algorithm to construct a process view from a given workflow, but did not discuss its correctness with respect to inter-organizational workflows. Our initial approach to workflow views was presented in [2]; since then, workflow views have been utilized as a beneficial approach to supporting the interaction of business processes in e-service environments [1, 3]. However, most of these works focus on the conceptual level, and the realization issues, especially the underlying implementation approach, are largely neglected. To address the derivation of private workflows from inter-organizational workflows, van der Aalst and Weske [7] used the concept of workflow projection inheritance introduced in [8]. A couple of derivation rules are proposed so that a derived workflow is behaviorally bisimilar to the original workflow under branching semantics, in contrast to the trace semantics adopted in the workflow view model. Schulz and Orlowska [9] considered communication aspects of workflow views in terms of state dependencies and control flow dependencies. They proposed to tightly couple a private workflow and its workflow view with state dependencies, while loosely coupling workflow views with control flow dependencies. A Petri-net-based state transition method was proposed to bind the states of private workflow tasks to their adjacent workflow view task. This approach only considers the state aspect of workflow views, and it is difficult to accomplish the explicit modeling of state mapping. In our previous work [4], we extended the object deputy model [10] to a workflow deputy model supporting the realization of workflow views. The deputy class and the deputy algebra are formally defined for workflow classes, and specific deputy operations are designed for each kind of workflow component class. The disadvantage of this approach is that the whole workflow system must be developed within the deputy framework, i.e., a Smalltalk [5] environment, which makes it infeasible for enterprises that already have legacy workflow systems.

4 A Light-Weight Implementation Model for Workflow Views
As mentioned earlier, the motivation of our light-weight implementation model (LWIM) is to define a feasible approach to implementing workflow views on top of existing WFMSs. As shown in Fig. 2, its key idea is to keep the integrity of the existing WFMS, develop the workflow view layer based on the exposed workflow client and admin APIs, and design specific client and administration interfaces for workflow views. The information related to workflow views is loaded and stored in the same unified way as the original workflow information. The workflow view characteristics can be recorded in specific parameters, such as the extended attributes of XPDL [11], which can be recognized by the workflow view layer. With this information, the workflow view layer can track the original workflow information and perform corresponding actions. In this section, we introduce the three major components of this model: the view definition language, the view management module and the view function module.

Fig. 2. The Framework of LWIM

4.1 Workflow View Definition Language
In order to execute and manage workflow views, a workflow view definition language is devised. It presents the execution patterns of workflow views and stores the reference information to the original workflows, providing a basis for the workflow view layer to parse and execute. As noted, a workflow view is a structurally correct subset of a workflow; it involves all the components included in a workflow, such as a flow graph of activities, workflow relevant data, etc. Fig. 3 shows a meta-model of the workflow view definition language, presenting the primitives of workflow view definition. For each entity in the workflow view derived from the original workflow, there is an attribute that refers to the corresponding entity in the original workflow definition; if an activity view contains more than one original workflow activity, the reference attribute may refer to several entities in the original workflow definition. A workflow view definition language should be compatible with the workflow language recognized by the WFMS, so that the view definition can be treated as a common workflow definition. As shown in Fig. 3, a workflow process may have more than one workflow view, and each workflow view corresponds to exactly one original workflow process.

At the process level, a source workflow process attribute stores the ID of the corresponding workflow process in the workflow definition file. One major advantage of views is their convenience for customization, which centers on the congregation of activities and transitions in the definition schema. That is, one activity view may be the congregation of more than one activity element of the source workflow. A simple way to describe this congregation is a link set, which includes the link information to all of those activities. A transition view connects activity views; it contains the filtered condition information adopted from the source workflow. How the information is intercepted from transitions and activities is beyond the scope of this paper. The relevant data view embodies the concept of limited information disclosure: views are constructed only for the data that is to be released, and a relevant data view can also be a composite aggregated from several data sources. Workflow views and source workflows share the participant and application information. Workflow views can be assigned to end users for browsing and execution. An application declaration is bound to specific activities; when an activity is made available via an activity view, its linked applications can also be invoked through the view (an illustrative sketch of reading such link attributes is given after Fig. 3).

Fig. 3. Meta-model of View Language
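The following Python sketch illustrates how a view layer might read the view-to-source links stored in XPDL-style extended attributes; the attribute names (SourceProcessId, SourceActivitySet) are hypothetical, since the paper does not give the actual XPVDL vocabulary, and the XML namespace that real XPDL documents carry is omitted for brevity.

```python
# Illustrative parsing of view-to-source links stored in XPDL-style extended
# attributes. Attribute names are hypothetical examples, not the XPVDL spec.

import xml.etree.ElementTree as ET

SAMPLE_VIEW = """
<WorkflowProcess Id="IntegratorView">
  <ExtendedAttributes>
    <ExtendedAttribute Name="SourceProcessId" Value="IntegratorProcess"/>
  </ExtendedAttributes>
  <Activities>
    <Activity Id="OrderConfirmationView">
      <ExtendedAttributes>
        <ExtendedAttribute Name="SourceActivitySet"
                           Value="OrderConfirmation,OrderValidityCheck,RiskAnalysis"/>
      </ExtendedAttributes>
    </Activity>
  </Activities>
</WorkflowProcess>
"""

def extended_attributes(element):
    """Collect Name/Value pairs from an element's ExtendedAttributes section."""
    return {ea.get("Name"): ea.get("Value")
            for ea in element.findall("./ExtendedAttributes/ExtendedAttribute")}

def view_links(xml_text):
    """Return the source process id and, per view activity, its linked source activities."""
    root = ET.fromstring(xml_text)
    links = {"source_process": extended_attributes(root).get("SourceProcessId"),
             "activities": {}}
    for act in root.findall("./Activities/Activity"):
        value = extended_attributes(act).get("SourceActivitySet", "")
        links["activities"][act.get("Id")] = value.split(",") if value else []
    return links

if __name__ == "__main__":
    print(view_links(SAMPLE_VIEW))
```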

4.2 Workflow View Management Module
The workflow view management module is in charge of the loading, initialization and control of workflow view definitions and instances. In LWIM, the workflow view module takes the same loading and unloading approach as that used for workflow definitions.

During the initialization of workflow view instances, the management module searches for the source workflow instances via the link information stored in the view definition and establishes the linkage between workflow view instances and source workflow instances. The management module also provides a monitoring and control interface for workflow view instances: the execution of workflow view instances, their states and properties, and the relevant information defined in the view can all be monitored. The interaction between workflow view instances and source workflow instances is controlled by the workflow view function module.

4.3 Workflow View Function Module
The workflow view function module contains two components, a presentation engine and an update mechanism. The presentation engine interprets the link attributes, retrieves the data from the source workflows, and displays the information in the view admin and client. The update mechanism translates view operations into the underlying operations on source workflows via pre-defined rules. Together they provide the bi-directional communication between a view instance and its source instance. In this paper, we mainly concentrate on activity state and relevant data.

Table 1. State Mapping Table (state of the workflow view activity instance: required states of the source workflow activity instances)

open.not_running.not_started: All the activities have the state open.not_running.not_started.
open.running: At least one activity is open.running, and no activity is closed.terminated or closed.aborted.
open.not_running.suspended: At least one activity is open.not_running.suspended, and no activity is closed.terminated or closed.aborted; or at least one activity is closed.completed, at least one activity is open.not_running.not_started, and no activity is open.running, closed.terminated, or closed.aborted.
closed.completed: All the activities are closed.completed.
closed.terminated: At least one activity is closed.terminated.
closed.aborted: At least one activity is closed.aborted, and no activity is closed.terminated.

State Mapping: According to [12], the activity states can be classified into six categories:
• open.running: the activity is being executed.
• open.not_running.not_started: the activity has not been started.
• open.not_running.suspended: the activity is suspended by its user.
• closed.completed: the activity has been completed with success.

• closed.terminated: the activity is stopped without success for some reason and will not be executed again.
• closed.aborted: the activity is aborted by a user.

The activity views also have these six states. When an activity view has only one linked source activity, the state transfer is straightforward. However, when an activity view is associated with more than one source activity, the state dependence is unclear. Here, we construct a mapping table (Table 1) to do this job; this approach is also adopted in [9].

Update on Relevant Data Views: A relevant data view can be defined on one or more source relevant data items in read-only or writable mode. For the read-only mode, the presentation engine controls the interpretation from the sources to the view based on the defined one-to-one or one-to-many relation. The update mechanism controls the behavior from a view update to a source update. For the one-to-one case, an update on the relevant data view is directly mapped to an update on the source data. For the one-to-many case, the update mechanism blocks the update request because of the semantic ambiguity.
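The rules of Table 1 can be read directly as a function from the states of the aggregated source activities to the state of the view activity. The Python sketch below is a minimal transcription of the table, not the prototype's actual code.

```python
# Transcription of Table 1: derive the state of a view activity from the
# states of the source activities it aggregates. A minimal sketch only.

TERMINATED = "closed.terminated"
ABORTED = "closed.aborted"
COMPLETED = "closed.completed"
NOT_STARTED = "open.not_running.not_started"
RUNNING = "open.running"
SUSPENDED = "open.not_running.suspended"

def view_activity_state(source_states):
    states = list(source_states)
    if TERMINATED in states:
        return TERMINATED
    if ABORTED in states:
        return ABORTED                      # no terminated activity at this point
    if all(s == COMPLETED for s in states):
        return COMPLETED
    if all(s == NOT_STARTED for s in states):
        return NOT_STARTED
    if RUNNING in states:
        return RUNNING
    if SUSPENDED in states:
        return SUSPENDED
    if COMPLETED in states and NOT_STARTED in states:
        return SUSPENDED                    # partially done, nothing running
    raise ValueError(f"unmapped state combination: {states}")

if __name__ == "__main__":
    print(view_activity_state([COMPLETED, NOT_STARTED]))    # open.not_running.suspended
    print(view_activity_state([RUNNING, COMPLETED]))        # open.running
    print(view_activity_state([COMPLETED, COMPLETED]))      # closed.completed
```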

5 The Prototype System
In order to show the functionality of the light-weight model discussed in Section 4, we have built a prototype system based on an open source WFMS named Enhydra Shark [13]. Enhydra Shark is a workflow engine based completely on WfMC and OMG specifications; it uses WfMC's XML Process Definition Language (XPDL) [11] as its native definition format. XPDL is a standard process language proposed by the WfMC and has been widely accepted in academia and industry. Based on the meta-model defined in Fig. 3, we extended XPDL to XPVDL (XML Process View Definition Language) to support the view representation. XPVDL not only describes the view definition shown to end users, but also connects the view definition with the source process definition. In order to keep compatibility with XPDL, the linkage tags are added in the extended attributes section; hence XPVDL shares the same schema with XPDL and can be resolved by the workflow engine. The correctness of view definitions is guaranteed by the view management module.
5.1 Demonstration Example
In order to illustrate the functionality of workflow views, we adopt a real-life workflow case of an information system integrator in this section. Fig. 4 shows an example of the workflow of a system integrator that provides both hardware and software services. In the B2B application, another type of organization is involved that collaborates with the system integrator to carry out the procurement, namely the end user. In order to improve business efficiency and win the market competition, the system integrator goes through a business workflow, say, for an advanced server system.

Fig. 4. The Workflow of System Integrator

Fig. 5. Workflow Views of System Integrator towards End-user

1. Order Making Stage. First, when receiving the quotation enquiries, the system integrator checks the product prices, which is a block activity that executes an activity set: the product cost is checked first, then a market assessment is conducted, after that the product profit is calculated, and lastly the price of the required products is established. After the price check is completed, the system integrator prepares the quotation and replies to the end user. If the end user is satisfied with the quotation, it sends an order to the system integrator. The system integrator then confirms that the order has been successfully received and checks whether the order is valid. If it is not, the deal fails and the process ends; otherwise the deal risk is analyzed. If the risk is more than 20%, the deal fails and the process ends; otherwise the credit of the end user is checked. If the credit is invalid, the deal fails and the process ends; otherwise the order is approved and the process succeeds.
2. System Developing Stage. The system integrator now needs to order the hardware, which is a sub-process composed of the following activities: first it checks the hardware repository, then it orders the missing parts and assembles the system, and lastly it tests the system. Next, the hardware installation and the system design and development are executed in parallel. After the hardware is successfully installed, it is tested in the target environment; after the software development completes, the software system is tested. The software design and development is a block activity that executes the following activity set: first a requirement analysis is conducted, then the overall design and detailed design are performed, after that the coding work is done, and lastly the software development group carries out the unit and module tests. When both hardware and software are successfully presented, the system is deployed in the real environment. Finally, the end user is trained by the system integrator.
3. Payment Settling Stage. In the final stage, the payment is conducted. There are two types of payment, cash and check, and the end user can choose either of them. After the system integrator confirms the payment, the deal successfully ends.
Considering the business interaction with the end user, the system integrator may be unwilling to show some business details and may wish to present to the end user only the parts involved in the business interaction. So a workflow view for the end user is defined via the following steps:
1. For the private block activity Price Check, the detailed activities in its activity set are not displayed; the end user can only see the price check as an atomic activity.
2. The system integrator hides the activity Prepare Quotation.
3. The sub-flow composed of Order Confirmation, Order Validity Check, Risk Analysis and Credit Verification has its details hidden and is shown only as an atomic activity, Order Confirmation, which includes the information that needs to be shown to the end user.

4. The sub-process Order Hardware includes information about the system integrator's interactions with the parts supplier, which is hidden in the workflow view in consideration of information privacy.
5. For the block activity Software Design and Development, the activities Overall Design, Detailed Design, Coding and Unit & Module Test in the executed activity set are not shown to the end user in full; only an atomic activity Construction, which represents these activities, is visible.
6. Hardware Test and Software Test are merged into one activity in order to present the test process clearly.
7. The system integrator does not care about the payment type, so only an activity Payment Arrangement is shown.
After this series of clippings, the workflow view that the system integrator may wish to present to the end user is as depicted in Fig. 5.
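The clipping steps above can be viewed as a small set of hide/merge/rename rules applied to the source process. The Python sketch below writes them down as data over a flattened activity list; the list, the rule format and the derivation function are simplifications invented for illustration, and block activities such as Price Check and Order Hardware already appear as single entries here, so their collapsing needs no rule.

```python
# Illustrative encoding of the clipping steps as rules over a flattened
# activity list. The rule format and the list are invented simplifications.

SOURCE_ACTIVITIES = [
    "Price Check", "Prepare Quotation", "Order Confirmation", "Order Validity Check",
    "Risk Analysis", "Credit Verification", "Order Hardware", "Hardware Installation",
    "Software Design and Development", "Hardware Test", "Software Test",
    "System Deployment", "Training", "Cash Payment", "Check Payment",
]

RULES = [
    ("hide",   ["Prepare Quotation"], None),
    ("merge",  ["Order Confirmation", "Order Validity Check", "Risk Analysis",
                "Credit Verification"], "Order Confirmation"),
    ("rename", ["Software Design and Development"], "Construction"),
    ("merge",  ["Hardware Test", "Software Test"], "System Test"),
    ("merge",  ["Cash Payment", "Check Payment"], "Payment Arrangement"),
]

def derive_view(activities, rules):
    view = list(activities)
    for kind, targets, new_name in rules:
        if kind == "hide":
            view = [a for a in view if a not in targets]
        elif kind == "rename":
            view = [new_name if a in targets else a for a in view]
        elif kind == "merge":
            first = min(view.index(t) for t in targets if t in view)
            view = [a for a in view if a not in targets]
            view.insert(first, new_name)
    return view

if __name__ == "__main__":
    for activity in derive_view(SOURCE_ACTIVITIES, RULES):
        print(activity)
```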

Fig. 6. View Admin: Monitoring of View Instance

5.2 View Admin Interface
Fig. 6 shows the monitoring interface for view instances. Via this interface, users can supervise the execution of view instances and can also investigate the relevant data via the view mode. As shown in Fig. 6, none of the process variables is updatable, since all of them are defined in read-only mode.
5.3 View Client Interface
Fig. 7 shows the view client interface. It extends the worklist handler with a view monitoring component.

Via such an interface, users can not only accept and complete the activities distributed to their worklists, but also access the information released from the corresponding view. In this way, they can understand the importance of their work in the whole process and be aware of the progress status.

Fig. 7. View Client

6 Discussion and Conclusion
Workflow views are derived from workflows as a fundamental support for workflow inter-operability and for visibility by external parties in a web service environment. Most previous works concentrated on the conceptual model and application scenarios of workflow views [1-3, 14]. In this paper, we have presented a light-weight implementation model for workflow views. The primary goal is to develop the view functions for any existing workflow system without altering the workflow kernel engine. With this approach, enterprises can equip and upgrade their workflow systems with versatile workflow views in an efficient and lower-cost way. In our approach, view instances are mapped to source instances in a real-time style: any change in a source instance is reflected to its views. Hence, the transaction management and exception handling issues that pertain to conventional workflow systems are not a particular concern of our workflow view mechanism. Our future work will focus on the implementation of the B2B interface of our prototype system based on the LWIM model. Targeting public e-services environments including both BPEL4WS [15] and ebXML [16], we are especially interested in tackling the message management issues occurring in the B2B enactment interface.

References
1. Chiu, D.K.W., Cheung, S.C., Till, S., Karlapalem, K., Li, Q., Kafeza, E.: Workflow View Driven Cross-Organizational Interoperability in a Web Service Environment. Information Technology and Management 5 (2004) 221-250
2. Chiu, D.K.W., Karlapalem, K., Li, Q.: Views for Inter-organization Workflow in an E-commerce Environment. Semantic Issues in E-Commerce Systems, IFIP TC2/WG2.6 Ninth Working Conference on Database Semantics (2001) 137-151
3. Chiu, D.K.W., Karlapalem, K., Li, Q., Kafeza, E.: Workflow View Based E-Contracts in a Cross-Organizational E-Services Environment. Distributed and Parallel Databases 12 (2002) 193-216
4. Shan, Z., Li, Q., Luo, Y., Peng, Z.: Deputy Mechanism for Workflow Views. 10th International Conference on Database Systems for Advanced Applications. Lecture Notes in Computer Science, Vol. 3453 (2005) 816-827
5. Smalltalk: http://www.smalltalk.org
6. Liu, D.-R., Shen, M.: Modeling workflows with a process-view approach. Seventh International Conference on Database Systems for Advanced Applications (2001) 260-267
7. Aalst, W.M.P.v.d., Weske, M.: The P2P Approach to Interorganizational Workflows. 13th International Conference on Advanced Information Systems Engineering. Lecture Notes in Computer Science, Vol. 2068 (2001) 140-156
8. Basten, T., Aalst, W.M.P.v.d.: Inheritance of Behavior. Journal of Logic and Algebraic Programming 47 (2001) 47-145
9. Schulz, K.A., Orlowska, M.E.: Facilitating cross-organisational workflows with a workflow view approach. Data & Knowledge Engineering 51 (2004) 109-147
10. Kambayashi, Y., Peng, Z.: An Object Deputy Model for Realization of Flexible and Powerful Objectbases. Journal of Systems Integration 6 (1996) 329-362
11. XPDL: XML Process Definition Language, http://www.wfmc.org/standards/XPDL.htm
12. WfMC: Interface 1: Process Definition Interchange Process Model Specification, version 1.0. Workflow Management Coalition (1999)
13. Shark: http://shark.objectweb.org
14. Shan, Z., Chiu, D.K.W., Li, Q.: Systematic Interaction Management in a Workflow View Based Business-to-business Process Engine. 38th Annual Hawaii International Conference on System Sciences (2005)
15. BPEL4WS: http://www.ibm.com/developerworks/webservices/library/ws-bpel/
16. ebXML: http://ebxml.org/

RSS Feed Generation from Legacy HTML Pages

Jun Wang¹, Kanji Uchino², Tetsuro Takahashi², and Seishi Okamoto²

¹ Fujitsu R&D Center Co., Ltd., B306, Eagle Run Plaza, No.26 Xiaoyun Rd., 100016 Beijing, China
[email protected]
² Fujitsu Laboratories, Ltd., 4-1-1 Kami-kodanaka, Nakahara-Kawasaki, Kanagawa 211-8588, Japan
{kanji, takahashi.tet}@jp.fujitsu.com, [email protected]

Abstract. Although RSS offers a promising way to track and personalize the flow of new Web information, many current Web sites are not yet enabled with RSS feeds. Convenient approaches to "RSSify" existing suitable Web content have therefore become a stringent necessity. This paper presents a system that translates semi-structured HTML pages into structured RSS feeds and proposes different approaches based on various features of HTML pages. For information items with a release time, the system provides an automatic approach based on time pattern discovery. Another automatic approach based on repeated tag pattern mining is applied to convert regular pages without the time pattern, and a semi-automatic approach based on labelling is available to process irregular pages or specific sections of Web pages according to the user's requirements. Experimental results and practical applications show that our system is efficient and effective in facilitating RSS feed generation.

1 Introduction

Knowledge workers who strive to keep up with the latest news and trends in their field have to frequently revisit specific Web pages containing list-oriented information such as headlines, "what's new" announcements, job vacancies and event announcements. Such information can certainly help enterprises and individuals track competition and opportunities and understand markets and trends; however, it becomes difficult for workers to keep current once the number of information sources exceeds a handful. RSS (Rich Site Summary/RDF Site Summary/Really Simple Syndication), a machine-readable XML format for content syndication [1], allows users to subscribe to the desired information and receive notification when new information is available. RSS feeds send information only to the parties that are truly interested, thereby relieving the pressure on email systems suffering from spam [2]. Since virtually any list-oriented content can be presented in RSS format, RSS demonstrates a promising solution for tracking and personalizing the flow of new Web information. Furthermore, enterprises can take advantage of the simplicity of the RSS specification to feed information inside and outside a firewall.


Today RSS has become perhaps the most widely used XML format on the Web. However, much of the current Web content is not yet enabled with RSS feeds. In order to evangelize RSS applications and leverage the Web's valuable contents, convenient approaches to "RSSify" suitable Web content have become a stringent necessity. The point is to translate existing semi-structured HTML pages into structured RSS feeds. The simplest way is to observe HTML pages and code extraction rules manually [3, 4, 5]. However, writing rules is time-consuming, error-prone and not scalable. Therefore, we need more efficient approaches for RSS feed generation, automated to the largest extent possible, in order to allow for large-scale extraction tasks even in the presence of changes in the underlying sites.

2 Approaches for RSS Feed Generation

Since the core contents of different RSS versions are similar in general structure and consistent in concept [1], the related RSS tags are presented only in RSS 2.0 format in this paper. At the basic level, an RSS feed consists of a channel with its own metadata (e.g. title, link, description, pubDate, language) and a number of items, each of which has its own metadata (e.g. title, link, pubDate, category). The title of the RSS channel can easily be extracted from the content of the title in the HTML head, and the URL of the HTML page can be treated as the link of the RSS channel. When the metadata of the HTML head contain a description or keywords, we can convert them into the description of the RSS channel. If the HTML page is static, we can convert the last-modified time of the HTTP head into the pubDate of the RSS channel. The language of the RSS channel can be extracted from the content-language or charset metadata of the HTML head. The primary contents of the information items in list-oriented pages are the title, URL and release time, which are the counterparts of the title, link and pubDate of an item in the RSS specification. The URL of a news item in an HTML page is in the href attribute of an anchor tag, and the corresponding title usually resides in the text in or near that tag. Therefore, the primary task is to locate suitable tags and texts in HTML pages. However, Web pages often contain multiple topics and a number of irrelevant pieces such as navigation, decoration and interaction parts [6]. It is not easy for a machine to automatically locate and convert the target items, since HTML is designed for presentation rather than content description [7, 8]. This paper introduces approaches to solve this problem based on different features of list-oriented information in HTML pages.

2.1 Automatic Approach Based on Time Pattern Discovery

In news or "what's new" pages, a news item is often published with its corresponding release time. This feature is a prominent and useful clue for locating and extracting the target news items. Fig. 1 shows a Fujitsu press release page. Since the formats of date and time are simple and limited, the release time is easily identified and we can easily construct a database of time patterns represented as regular expressions. In our current experiments, only about 20 time patterns are required to cover almost all the time and date formats we have met on Japanese and Chinese sites. Fig. 2 shows some typical date and time formats.
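As a concrete illustration, a minimal time-pattern database can be kept as a list of regular expressions and matched against the text of each node; the patterns below are illustrative examples only, not the actual set of about 20 patterns used by the system.

import re

# Illustrative time/date patterns (the real database also covers Japanese and
# Chinese date formats with CJK year/month/day markers).
TIME_PATTERNS = [
    re.compile(r"\b\d{4}-\d{1,2}-\d{1,2}(?:\s+\d{1,2}:\d{2}(?:\s*[AP]M)?)?\b"),   # 2004-06-28 03:26 PM
    re.compile(r"\b\d{4}/\d{1,2}/\d{1,2}\b"),                                     # 2004/06/28
    re.compile(r"\b\d{8}\s+\d{1,2}:\d{2}\b"),                                     # 20040518 14:50
    re.compile(r"\b\d{1,2}\s+(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{4}\b"),   # 13 January 2005
    re.compile(r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*\.?\s+\d{1,2},\s*\d{4}\b"),  # Oct. 1, 2004
]

def match_time_pattern(text):
    """Return (pattern, matched substring) for the first matching pattern, or None."""
    for pattern in TIME_PATTERNS:
        m = pattern.search(text)
        if m:
            return pattern, m.group(0)
    return None

# Text nodes whose content matches any pattern become time nodes (TN).
print(match_time_pattern("Press release - 13 January 2005"))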


Fig. 2. Examples of time format: 2004-06-28 03:26 PM; 20040518 14:50; 2004/06/28; 13 January 2005; Oct. 1, 2004; 2003 5 1

Fig. 1. Press release page (with category titles, news items and release times marked)

Fig. 3. Example of address array M:
0.1.9.0.0.0.2.0.0.0.0.0.4.0.0.1.0.0.0.0.0.0  12 January 2005
0.1.9.0.0.0.2.0.0.0.0.0.4.0.0.1.0.0.0.2.0.0  11 January 2005
0.1.9.0.0.0.2.0.0.0.0.0.4.0.0.1.0.0.0.4.0.0  11 January 2005
0.1.9.0.0.0.2.0.0.0.0.0.4.0.0.1.0.0.0.6.0.0  10 January 2005
0.1.9.0.0.0.2.0.0.0.0.0.4.0.0.1.0.0.0.8.0.0  10 January 2005
-------------------------------------------------------------
0.1.9.0.0.0.2.0.0.0.0.0.6.0.0.1.0.0.0.0.0.0  13 January 2005
0.1.9.0.0.0.2.0.0.0.0.0.6.0.0.1.0.0.0.2.0.0   5 January 2005

Firstly, we create a DOM tree for the HTML page and use numbers to represent the addresses of nodes in the DOM tree. An address consists of numbers joined by '.', starting with '0' for the root and followed, from the top down, by the order (the index of the node among its parent's children) of each node on the path to the current node. As a slightly special case, the address of the root itself is simply '0'. Secondly, we need to extract all text nodes in the DOM tree that contain the release time of a news item. By pre-order traversal of the DOM tree, each text node matching a time pattern in the database is named a time node TN, and its address and corresponding time pattern are recorded in an address list AL and a time pattern list TPL, respectively. In some cases there are multiple time patterns in a Web page, and we can output the time nodes of all time patterns, only the time nodes belonging to specific patterns selected by a heuristic rule, or only the time nodes matching patterns designated by the user, depending on the concrete application requirement. Since the syntax structure of an HTML page is usually consistent with its semantic structure, AL can be segmented, based on the DOM tree structure, into sections in each of which the time nodes exhibit spatial locality. Each address in AL can be split into a one-dimensional array on the separator '.', so that AL is finally converted into a two-dimensional array M. Fig. 3 shows the M corresponding to the release times listed in Fig. 1. We can segment AL by partitioning M. A triple <r, c, n> defines a section of M: r and c are the row number and the column number, respectively, of the top-left element of the section, and n is the total number of rows contained in the section. R[i] denotes the ith row of M and corresponds to a TN in the DOM tree; C[j] denotes the jth column of M; and M[i, j] denotes the element in the ith row and jth column of M, which also corresponds to a node in the DOM tree. Let the total row number of M be TR, and denote the full section of M as <0, 0, TR>. Fig. 4 shows the recursive segmentation algorithm.
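A sketch of this address encoding and of the conversion of AL into the array M, written here against Python's xml.etree element trees purely for illustration (an HTML page would first have to be cleaned and parsed into such a tree):

def node_address(node, parent_map):
    """Dotted address of a DOM node: '0' for the root, then the child index
    at each level on the path to the node, as in Fig. 3."""
    parts = []
    while node in parent_map:
        parent = parent_map[node]
        parts.append(list(parent).index(node))
        node = parent
    parts.append(0)                       # the root itself is '0'
    return ".".join(str(p) for p in reversed(parts))

def addresses_to_matrix(address_list):
    """Split each dotted address on '.' and pad rows to equal length,
    yielding the two-dimensional array M used for segmentation."""
    rows = [addr.split(".") for addr in address_list]
    width = max(len(r) for r in rows)
    return [r + [""] * (width - len(r)) for r in rows]

import xml.etree.ElementTree as ET
root = ET.fromstring("<html><body><ul><li>a</li><li>b</li></ul></body></html>")
parent_map = {child: parent for parent in root.iter() for child in parent}
time_nodes = list(root.iter("li"))        # stand-ins for matched time nodes
AL = [node_address(n, parent_map) for n in time_nodes]
M = addresses_to_matrix(AL)
print(AL)    # ['0.0.0.0', '0.0.0.1']
print(M)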


Segment(M, <r, c, n>) {
  j ← c;
  do {
    isAllValuesSame ← checkValueInArray(C[j]);
    if (isAllValuesSame == TRUE) { j++; }
  } until (isAllValuesSame == FALSE);
  SectSet = {<r_p, c_p, n_p> | 0 ≤ p ≤ k-1} ← splitByValues();
  if (∀ <r_p, c_p, n_p> ∈ SectSet, n_p == 1) {
    InfoExtract(M, <r, c, n>, j, TPL);
  } else {
    for each <r_p, c_p, n_p> in SectSet {
      Segment(M, <r_p, c_p, n_p>);
    }
  }
}

Fig. 4. Segmentation algorithm
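For concreteness, the following is a runnable Python rendering of the algorithm in Fig. 4. The helper names follow the figure (snake-cased), InfoExtract is stubbed, and M is assumed to be the padded rectangular array produced above; the value check is restricted to the rows of the current section.

def check_value_in_array(column):
    """True if all values in the given column slice are identical."""
    return len(set(column)) <= 1

def split_by_values(M, r, c, n, j):
    """Split section <r, c, n> into sub-sections whose rows share the same
    value in column j; returns a list of <r_p, c_p, n_p> triples."""
    sections, start = [], r
    for i in range(r + 1, r + n + 1):
        if i == r + n or j >= len(M[start]) or M[i][j] != M[start][j]:
            sections.append((start, j, i - start))
            start = i
    return sections

def info_extract(M, section, j, TPL):
    r, c, n = section
    print("extract items from rows", list(range(r, r + n)), "at depth", j)

def segment(M, section, TPL=None):
    r, c, n = section
    j = c
    # advance j while every row of the section agrees on column j
    while j < len(M[r]) and check_value_in_array([M[i][j] for i in range(r, r + n)]):
        j += 1
    subsections = split_by_values(M, r, c, n, j)
    if all(n_p == 1 for (_, _, n_p) in subsections):
        info_extract(M, (r, c, n), j, TPL)
    else:
        for sub in subsections:
            segment(M, sub, TPL)

# segment(M, (0, 0, len(M)))  # start from the full section <0, 0, TR>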

checkValueInArray(C[j]) checks whether all the values in the jth column of M are the same. If they are not, splitByValues() segments the section into k sub-sections, in each of which the values in the jth column are the same. When each sub-section contains only one row, the segmentation process stops and we can extract the information items in the current section. Although HTML pages containing a time pattern have diverse contents and structures, they can be classified into two types in terms of layout. In the first type, each news item has an individual release time. In the second type, every release time is followed by multiple news items. For list-oriented information, each item is usually displayed on an individual line, and this is an important feature for layout analysis. The line presentation relies on the DOM tree structure and on the specific tags that start a new line in the rendered display, such as list, list-item, table-row, paragraph and line-break tags. Heuristic rules are applied to select the href attribute of a suitable anchor node and a proper title text in the current line as the title and link of the RSS feed [9]. After recognizing all the items in a section, we can determine the complete border of the section. In some pages, such as the page in Fig. 1, each section has a category title summarizing the content of the section, which corresponds to the category in the RSS item. The category data is usually presented in a line above and adjacent to the first item of the section, contained in continuous text nodes in the left part of that line. If the category is presented as an image, we can check the alt attribute of the appropriate node in a similar way. If necessary, this information can also be extracted automatically.

2.2 Automatic Approach Based on Repeated Tag Pattern Mining

Although the approach based on time pattern discovery can generate RSS feeds with high performance, there are still some pages containing no time pattern. In HTML pages containing list-oriented information, the information items are usually arranged


orderly and compactly in a coherent region, with the same style of presentation and a similar pattern of HTML syntax. We call this kind of coherent region an InfoBlock. Information items in an InfoBlock usually share a repetitive tag pattern and have the same parent node. Fig. 5 shows a repeated tag pattern and its corresponding instances (occurrences of the pattern) in a music news page. Therefore, mining the repeated tag patterns in HTML pages provides guidance for the effective extraction of information items and the generation of RSS feeds.

Fig. 5. Example of a repeated tag pattern and its item instances in a music news page

Fig. 6. GUI for labelling

Since it is more convenient to discover repetitive patterns on a token stream, we generate a tag token stream by pre-order traversal of the DOM tree. We also create a mapping table between each tag token in the stream and the corresponding node in the DOM tree, and use a dedicated text token to represent a text node. A suffix trie [10, 11] is constructed over the tag token stream and used to induce repetitive patterns. We apply a "non-overlap" rule (the occurrences of a repeated pattern cannot overlap each other) and a "left-diverse" rule (the tags at the left side of the occurrences of a repeated pattern belong to different tag types) to filter out improper patterns and generate suitable candidate patterns and their associated instance sets [10]. For RSS feed generation, the target items are located in anchor and text nodes, so patterns containing no anchor or text token are also removed. Finally, more than 90% of the repeated patterns are discarded. By a method similar to that used to segment AL in Section 2.1, we can partition the instance set of each repeated tag pattern into sub-sets based on the structure of the DOM tree. Here the basic unit is the series of nodes belonging to one instance of a repeated pattern rather than a single time node. After the partition, the instances in each sub-set exhibit spatial locality, and for the instances in a sub-set we can find the corresponding nodes in the DOM tree.
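A simplified sketch of this pattern-mining step is shown below. For brevity it collects repeated tag n-grams with a dictionary instead of a suffix trie, represents anchors and text nodes by the tokens 'a' and 'text', and applies only the anchor/text filter; the non-overlap and left-diverse filters are omitted.

from collections import defaultdict
import xml.etree.ElementTree as ET

def tag_token_stream(node, tokens=None):
    """Pre-order traversal of an element tree into a tag token stream;
    text content is represented by the synthetic token 'text'."""
    if tokens is None:
        tokens = []
    tokens.append(node.tag)
    if node.text and node.text.strip():
        tokens.append("text")
    for child in node:
        tag_token_stream(child, tokens)
        if child.tail and child.tail.strip():
            tokens.append("text")
    return tokens

def repeated_patterns(tokens, min_len=2, max_len=8, min_count=2):
    """Collect tag n-grams occurring at least min_count times that contain
    both an 'a' and a 'text' token (candidates for news-item patterns)."""
    positions = defaultdict(list)
    for length in range(min_len, max_len + 1):
        for i in range(len(tokens) - length + 1):
            positions[tuple(tokens[i:i + length])].append(i)
    return {p: pos for p, pos in positions.items()
            if len(pos) >= min_count and "a" in p and "text" in p}

page = ET.fromstring(
    "<div><ul>"
    "<li><a href='/n1'>title 1</a><span>x</span></li>"
    "<li><a href='/n2'>title 2</a><span>y</span></li>"
    "</ul></div>")
tokens = tag_token_stream(page)
for pattern, occ in repeated_patterns(tokens).items():
    print(pattern, occ)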


The root node of the smallest sub-tree containing all these nodes is called the RST node; it represents a page region, i.e., an InfoBlock. Since an RST node associated with a specific information item format may sometimes correspond to multiple instance sets belonging to different previously discovered patterns, each of which represents the information item format wholly or partly, we need to assess and select the best qualified set for identifying the correct border of the information items under the current RST node. We create a series of criteria for this assessment, such as the frequency of occurrence, length, regularity and coverage of the repeated pattern. The regularity of a repeated instance set is measured by the standard deviation of the intervals between adjacent occurrences divided by the mean of the intervals. Coverage is measured by the ratio of the volume of content contained by the repeated instance set to the volume of all content under the RST node. Each criterion has a threshold that can be adjusted by the user, and an assessment usually applies one or more of the above criteria, either separately or in combination [12]. Since each news item is usually displayed on an individual line, this feature can also help to identify the information items and their borders. The desired part, i.e., the list-oriented information for RSS feed generation, usually occupies notable regions of an HTML page, so we can select the pattern whose instance set contains the maximum content or occupies the maximum area in the page. We can also list the candidate patterns, show their corresponding regions in the page, and let the user select the pattern compatible with his requirements. After selecting the right pattern and identifying the border of each information item, it is easy to extract the title and link from the target items due to the simple structure of news items. If necessary, we can also employ the method of Section 2.1 to extract the category information based on the border of each InfoBlock.

2.3 Semi-automatic Approach Based on Visual Labelling

No automatic approach can process all list-oriented HTML pages well, and during automatic RSS feed generation there are always exceptions for a fraction of irregular or complicated pages. Sometimes an HTML page contains several suitable regions, but the user wants to select only one specific section to generate the RSS feed. To solve these problems, we designed a semi-automatic labelling GUI tool to process pages for which the automatic approaches give unsatisfactory results. As shown in Fig. 6, the GUI tool contains two labelling interfaces: a DOM tree on the left side and a browser on the right side. The user can label RSS metadata on the appropriate parts of the HTML page directly and intuitively in the browser interface. When the user clicks a hyperlink or selects text displayed in the browser interface, the tool automatically locates the corresponding nodes in the DOM tree and lets the user associate RSS metadata with the nodes conveniently. The user can also select and mark nodes in the DOM tree interface to define a region of the Web page or to associate the nodes with the corresponding RSS metadata. When a DOM tree node is selected, the corresponding region of the HTML page is located and displayed in the browser at the same time.
As we mentioned before, the information items in HTML pages, as discerned in their rendered views, often exhibit spatial


locality in the DOM tree, and we also exploit this feature to optimize the labelling operations. After we label one item in a list, the tool can automatically deduce the other items in the list based on the structure of the current item in the DOM tree. After we finish labelling the item list of the first category, the tool can similarly deduce the lists of the other categories. During this deduction process, the user can adjust the labelling process and its range according to the result displayed in the visual interface.

Fig. 7. Example of extraction rule (induced from asahi.com). The rule, in an XPath-like format, contains the source URL http://www.asahi.com/, the page encoding EUC-JP, node locators such as /HTML[0]/BODY[0]/TABLE[1]/TR[0]/TD[0]/DIV[1]/TABLE[0]/TR[0]/TD[2]/SPAN[0]/TEXT[0], /TABLE/TR[0]/TD[0]/H2[0]/A[0], /UL/LI/A[0] and /UL/LI/SPAN[0]/TEXT[0], and the date/time formats yyyy MM dd, HH mm and HH:mm.

After labelling the page and verifying the conversion result, we can induce an extraction rule automatically. The rule is represented in a simple format similar to XPath and can be reused to process new contents of the current page in the future. Fig. 7 shows a rule example generated from asahi.com. For some irregular pages whose semantic structure is not consistent with their syntax structure, the automatic deduction process fails and we have to mark the items or lists manually one by one; even in this poor situation, however, the tool is still useful, especially for non-technical users, because the user just needs to click the mouse instead of writing complicated extraction programs. For the two automatic feed generation approaches above, it is likewise possible to induce a reusable rule from the extraction result and thereby reduce the computing work of future RSS feed generation.
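A sketch of how such an induced rule might be applied to regenerate a feed is shown below; the rule fields and XPath expressions are hypothetical, and the lxml library is assumed to be available for HTML parsing.

from xml.sax.saxutils import escape
import lxml.html  # assumed available; any HTML-to-DOM parser would do

# A hypothetical induced rule: locators for the channel title and for the
# per-item link and date nodes, plus the date format used on the page.
RULE = {
    "encoding": "EUC-JP",
    "channel_title_xpath": "//title/text()",
    "item_link_xpath": "//ul[@class='news']/li/a",
    "item_date_xpath": "//ul[@class='news']/li/span/text()",
    "date_format": "%Y-%m-%d",
}

def apply_rule(html_text, base_url, rule):
    doc = lxml.html.fromstring(html_text)
    doc.make_links_absolute(base_url)
    title = (doc.xpath(rule["channel_title_xpath"]) or ["(untitled)"])[0]
    anchors = doc.xpath(rule["item_link_xpath"])
    dates = doc.xpath(rule["item_date_xpath"])
    items = []
    for i, a in enumerate(anchors):
        items.append({
            "title": a.text_content().strip(),
            "link": a.get("href"),
            "pubDate": dates[i].strip() if i < len(dates) else "",
        })
    return title, items

def to_rss(channel_title, channel_link, items):
    entries = "".join(
        "<item><title>%s</title><link>%s</link><pubDate>%s</pubDate></item>"
        % (escape(it["title"]), escape(it["link"]), escape(it["pubDate"]))
        for it in items)
    return ('<rss version="2.0"><channel><title>%s</title><link>%s</link>%s'
            "</channel></rss>" % (escape(channel_title), escape(channel_link), entries))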

3 Experiments

The system has been tested on a wide range of Japanese and Chinese Web sites containing news or other list-oriented information, including nationwide newspaper sites, local newspaper sites, TV sites, portal sites, enterprise sites and i-mode (the most popular mobile Internet service in Japan) sites for cellular phones. We measure two performance metrics, precision and recall, based on the number of extracted objects and the actual number of target objects checked by manual inspection.


Fig. 8. Experimental results for the approach based on time pattern discovery: precision and recall of (a) title, (b) link, (c) pubDate and (d) category extraction over the test sites (asahi, yomiuri, nikkei, sankei, peopledaily, yahoo, nifty, sina, fujitv, phoenixtv, local news, beijingnews, fujitsu, nec, hitachi, canon, haier, huawei, i-mode, intranet)

Firstly, we investigated about 200 Japanese and Chinese sites and found that about 70% of news sites, and almost all "what's new" or press release pages on enterprise sites, contain the release time of the news items. We also checked many intranet sites in our company and found that 90% of the news information lists are provided with a release time. We selected 217 typical pages with time patterns from various sites as representative examples. Fig. 8 presents the experimental results of the approach based on time pattern discovery. Since the time pattern has a distinct feature for recognition, the extraction of the pubDate of target items achieves very high performance. The time pattern is also a useful and accurate clue for locating the target items; therefore, as shown in Fig. 8, the extraction results for the other data are also very good. The errors in pubDate extraction


occur only under very few conditions, for example when there are multiple occurrences of the current time pattern in one target item; we will solve this problem by checking the global structure of the item list in the future. The category extraction depends on partitioning the item list into the appropriate sections; in some irregular cases, however, the syntax structure of the page is not consistent with its semantic structure and the partition is consequently misled. In some other cases the partition result is correct, but there are advertisements or recommendation information between the category title and the news items, and the extraction also fails. Therefore, the extraction result for the category is not as good as for the title, link and pubDate.

Fig. 9. Experimental results for the approach based on repeated tag pattern mining: precision and recall of title, link and category extraction on nifty, local news and i-mode pages

Furthermore, we tested the other automatic approach, based on repeated tag pattern mining. Since most of the news-like pages on the big sites we investigated contain time patterns, we selected test pages without a time pattern from some small local newspaper sites. We also found that some sites such as nifty.com (one of the top portal sites in Japan) have many pages containing list-oriented information without a time pattern, so test pages were also selected from them. Many i-mode pages have no time pattern associated with the target items, so they are also good test candidates. Fig. 9 shows the experimental results on 54 test pages. Compared with the time-pattern-based approach, the complexity of this approach is much higher and the performance is also lower, due to the complicated repeated pattern mining. In some cases, irrelevant InfoBlocks share the same repeated pattern with the target items, so the precision decreases. In the future we plan to analyze the display position of each section of the HTML page in the browser, which can help us locate the data-rich regions more correctly, since most data-rich sections are usually displayed in the centre part of the page while the top, bottom, left and right sides of the page hold noise information such as navigation, menus or advertisements [13]. We can then remove the redundant InfoBlocks containing the same pattern according to their display position. Because the i-mode page structure is very succinct and contains evident repeated patterns, the corresponding extraction results are very good. According to the above experiments, the automatic extraction of the category is not easy due to its irregularity, and if the target section is small or displayed in a special position, the automatic approaches do not work well either. Therefore, we complement our system with a semi-automatic interactive tool. Since the tool is based on manual labelling, the generation result is under control and always correct. The remaining issue is the complexity of the operation, which depends on the regularity


of the target page. Currently we need 4-10 clicks to label a common page, though the effort highly depends on the concrete requirements.

4 Related Work

Since RSS feeds have great potential to help knowledge workers gather information more efficiently and present a promising solution for information integration, more and more attention has recently been paid to approaches for translating legacy Web content authored in HTML into RSS feeds, and some services and systems to "RSSify" HTML pages already exist. FeedFire.com [14] provides an automatic "Site-To-RSS" feed creation that allows the user to generate an RSS feed for a Web site. However, FeedFire only extracts all hyperlinks and the corresponding anchor texts in the page and does not identify the data-rich regions and desired information items for RSS generation; the resulting RSS feeds are therefore often full of noise, such as links from the navigation, menu and advertisement regions. MyRSS.JP [15] provides an automatic RSS feed generation service similar to FeedFire.com, based on monitoring the difference between the current and previous contents of a Web page: new hyperlinks that emerge in the current contents are extracted together with their anchor texts. This approach can reduce part of the noise, but the results are still not good enough due to the complexity of Web pages. Neither of these two services can extract the release time of the information items. xpath2rss [16] is a scraper that converts HTML pages to RSS feeds and uses XPath instead of regular expressions; however, its conversion rules in XPath have to be coded manually. Blogwatcher [17, 18] contains an automatic system for generating RSS feeds from HTML pages based on date expressions and page structure, which is similar to our work on time pattern discovery, and provides title selection by simple NLP technology. However, its structure analysis is not flexible enough to apply to HTML pages with complicated layouts, such as pages in which every release time is followed by multiple news items. Compared with existing work, our work focuses on efficient information extraction for RSS feed generation and provides adaptive approaches based on the distinct features of list-oriented information in HTML pages, consequently reaching better results.

5 Practical Applications and Future Work

Based on the above "RSSifying" approaches, we have implemented a feed generation server offering a series of practical services; the following are some typical cases. The Fujitsu EIP maintains a daily updated list of Japanese IT news, collected from 253 Japanese news sites and enterprise sites. The list used to be gathered and updated manually, so keeping it current took much time. Recently the feed generation server has been applied to these news sources to produce RSS feeds, and the news list is now updated automatically by aggregating the feeds. When the service was launched, 204 sites could be processed by the fully automatic approaches, where the user only needs to register the URL and assign its update frequency. The semi-automatic labelling approach induced extraction rules for 38 further sites, which were also


successfully loaded into the server together with their rules. In total, more than 95% of the sites can be processed by the feed generation server; only 11 sites, whose content is dynamically generated by JavaScript, cannot be converted successfully. In the future we will integrate a JavaScript interpreter such as Rhino into our system to solve this problem. Inside Fujitsu, hundreds of Web sites belonging to different departments are equipped with various old systems that are hard to update to support RSS feeds, and it would be cumbersome and cost-prohibitive to replace or reconstruct all these legacy service systems. Even for the sites that do provide RSS feeds, only a small fraction of the suitable content is RSS-enabled. The feed generation server makes suitable content on these sites RSS-enabled without modifying the legacy systems, and the content can be easily integrated into the EKP. Currently more than 2,600 pages in our intranet are translated into RSS feeds successfully, and that number keeps increasing quickly and constantly. We also provide a feed generation ASP service to a financial and securities consulting company, which subscribes to the RSS feeds generated from dozens of financial sites. Since the stock market changes very quickly, they need to revisit the related sites frequently; with our help, the manual intervention has been greatly reduced and fresh content can be aggregated into their information system conveniently. In the next step we will continue to improve the automatic approaches and optimize the interactive labelling operation, and main-content extraction from HTML pages [6] will be integrated into our system to provide the description or content for each item in the RSS feed. We will also try to apply our system to more practical applications for further aggregation and analysis.

References
1. Hammersley, B.: Content Syndication with RSS. O'Reilly & Associates, Inc., 1st edition (2003)
2. Miller, R.: Can RSS Relieve Information Overload? EContent Magazine, March 2004 Issue (2004)
3. Garcia-Molina, H., Cho, J., Aranha, R., Crespo, A.: Extracting Semistructured Information from the Web. In: Workshop on the Management of Semistructured Data (1997)
4. Huck, G., Fankhauser, P., Aberer, K., Neuhold, E.J.: Jedi: Extracting and Synthesizing Information from the Web. In: CoopIS 1998, 3rd International Conference on Cooperative Information Systems, New York (1998)
5. Sahuguet, A., Azavant, F.: Web Ecology: Recycling HTML Pages as XML Documents Using W4F. In: WebDB (1999)
6. Gupta, S., Kaiser, G., Neistadt, D., Grimm, P.: DOM-based Content Extraction of HTML Documents. In: 12th International Conference on World Wide Web, Budapest, Hungary (2003)
7. Berners-Lee, T.: The Semantic Web: A New Form of Web Content That Is Meaningful to Computers Will Unleash a Revolution of New Possibilities. Scientific American (2001)
8. Yang, Y., Zhang, H.: HTML Page Analysis Based on Visual Cues. In: Sixth International Conference on Document Analysis and Recognition, Washington, DC, USA (2001)
9. Wang, J., Uchino, K.: Efficient RSS Feed Generation from HTML Pages. In: First International Conference on Web Information Systems and Technologies, Miami, USA (2005)
10. Gusfield, D.: Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press, 1st edition (1997)
11. Ukkonen, E.: On-line Construction of Suffix Trees. Algorithmica 14(3), 249-260 (1995)
12. Chang, C., Lui, S.: IEPAD: Information Extraction Based on Pattern Discovery. In: 10th International Conference on World Wide Web, Hong Kong (2001)
13. Chen, Y., Ma, W., Zhang, H.: Detecting Web Page Structure for Adaptive Viewing on Small Form Factor Devices. In: 12th International Conference on World Wide Web, Budapest, Hungary (2003)
14. http://www.feedfire.com/site/
15. http://myrss.jp/
16. Nottingham, M.: XPath2rss, http://www.mnot.net/
17. Nanno, T., Suzuki, Y., Fujiuki, T., Okumura, M.: Automatic Collection and Monitoring of Japanese Weblogs. In: WWW2004 Workshop on the Weblogging Ecosystem: Aggregation, Analysis and Dynamics (2004)
18. Nanno, T., Okumura, M.: Automatic Generation of RSS Feed Based on the HTML Document Structure Analysis. In: Proceedings of the 19th Annual Conference of JSAI (2005)

Ontology Driven Securities Data Management and Analysis

Xueqiao Hou1, Gang Hu1, Li Ma1, Tao Liu1, Yue Pan1, and Qian Qian2

1 IBM China Research Laboratory, Building 19, Zhongguancun Software Park, ShangDi, Beijing 100094, P.R. China, {houxueqiao, hugang, malli, liutao, panyue}@cn.ibm.com; 2 Department of Computer Science, Tsinghua University, Beijing 100084, P.R. China, [email protected]

Abstract. With the increase of fraudulent transactions in the worldwide securities market, it is critical for regulators, investors and the public to accurately detect such business practices in order to avoid serious losses. This paper makes a novel attempt to efficiently manage securities data and effectively analyze suspicious illegal transactions using an ontology driven approach. An ontology is a shared, formal, explicit and common understanding of a domain that can be unambiguously communicated between humans and applications. Here, we propose an ontology model that characterizes entities and their relationships in the securities domain, based on a large number of case studies and industry standards. Securities data (namely financial disclosure data, such as the annual reports of listed companies) is currently represented in XBRL format and distributed over physically different systems. The data from different sources is first collected, populated as instances of the constructed ontology and stored in an ontology repository. Then, inference is performed to make the relationships between entities explicit for further analysis. Finally, users can pose semantic SPARQL queries on the data to find suspicious business transactions following formal analysis steps. Experiments and analysis on real cases show that the proposed method is highly effective for securities data management and analysis.

1 Introduction

Today, worldwide securities market regulators and investors all face critical challenges in collecting, managing and analyzing the massive volume of securities data, which mainly concerns the financial disclosure of listed companies, such as annual reports. Take the emerging China securities market as an example: by 2004 there were 1,473 listed companies in the China securities market, and in 2004 alone the disclosure reports amounted to 46,003 pieces, covering all the public information about these listed companies. The receivers of this information range from securities regulators, exchange markets, financial institutions, brokers and agencies to countless individual investors. For this reason, the published securities disclosure data should not only be easy to share and exchange, but also well organized for management and analysis purposes. On the other hand, effectively and efficiently collecting, managing and analyzing the data demands new technologies and solutions to structure, integrate, store and


query the data. Industry has attempted to apply XML-based technologies to structure, share and exchange the data, as can be seen from the wide deployment of the eXtensible Business Reporting Language (XBRL) [1] and related products. However, beyond these initial efforts of structuring and sharing the data, many technical challenges remain to be solved, the most important being how to manage and analyze the data to provide decision support to securities regulators and investors. In this paper, we propose an ontology driven securities data management and analysis system, which focuses on entity-relationship analysis of XBRL-based securities data. (1) By building a Web Ontology Language (OWL) [3] ontology rather than using the popular XBRL-based taxonomy, the entities and their relationships in the securities domain can be better characterized. (2) By transforming the XBRL data into RDF/OWL data [2, 3], more formal semantics is imposed on the data. (3) By storing and inferencing the RDF/OWL data collection in an ontology repository, a more flexible securities data management solution is provided. (4) By adopting W3C's SPARQL [4] as the query language, business users can analyze the entity relationships in the data using business rules and the domain model, rather than simply retrieving the XBRL content of the data. The system adopts a series of industry and W3C standards, including XBRL, OWL, RDF and SPARQL [1-4]. These standards improve the openness, extensibility and flexibility of our system as an industry solution for securities data management. We believe our practice, together with existing XBRL-related efforts, will finally equip the securities industry with a total solution for the efficient sharing, exchange, management and analysis of disclosure data for both publishers and receivers.

1.1 Related Work

XBRL (eXtensible Business Reporting Language) [1] is an emerging technology based on W3C XML-related standards such as XML and XLink. It allows software vendors, programmers and end users to enhance the creation, exchange and comparison of business reporting information. In recent years, XBRL has been adopted in many countries to publish business information, especially for securities, for example by the US SEC (US Securities and Exchange Commission), the KSE (Korean Securities Exchange) and Japan's Financial Services Agency. In China, although the securities market is not yet mature, the demand for standard securities information disclosure is increasing significantly as the scale of the market extends rapidly. In 2003, the CSRC (China Securities Regulatory Commission) delivered the regulation on listed company information disclosure together with the corresponding XBRL taxonomy, and initiated several XBRL-related pilot projects. There are already some XBRL-based products on the market, such as UBmatrix [5], EDGAR [6] and Semansys [7], but most of these applications focus on XBRL taxonomy and instance authoring and on business reporting process management, and lack deep data analysis. In this paper, we present a new application based on XBRL which enables entity-relationship analysis of securities data to detect illegal business transactions.


The remainder of this paper is organized as follows. Section 2 describes ontology driven securities data management, including the ontology model and ontology based securities data storage, query and inference. Section 3 presents securities data analysis with a real case. Section 4 reports the performance of our securities data management system. Section 5 details discussions and future work. Section 6 concludes our work.

2 Ontology Driven Securities Data Management

Securities data is currently distributed over physically different systems and lacks effective management for intelligent analysis. Here, we propose an ontology driven approach to securities data management. Firstly, an OWL ontology is built to represent entities and their relationships in the securities domain and to organize the securities data. Then, the data is collected from different sources and populated as instances of the built ontology; this processing imposes formal semantics on data that is originally in XBRL format. Next, inference is conducted on the data to discover rich relationships between entities (such as listed companies). Finally, business users can perform analysis by posing SPARQL queries to the ontology store. An overview of the method is shown in Figure 1 and detailed in the subsequent subsections.

Fig. 1. Overview of Ontology Driven Securities Data Management: XBRL data sources (stock market, listed companies, regulators) are collected and populated as instances of the securities market domain ontology, stored in an RDF/OWL ontology and instance store, enriched by inference for entity-relationship analysis, and then analyzed and visualized through SPARQL queries and user interactions

2.1 XBRL Data

Securities information publishers deliver both financial and non-financial reports periodically. In the non-financial reports, they disclose a lot of relationship information, including management relations, e.g. shareholding information, and


transactions between companies, e.g. guarantees and share transfers. For business analysts, this relationship information can play a more important role than the pure financial data. In our approach, we build an ontology to model the relationship information between companies and import the securities data in XBRL format into the ontology store for further analysis. The ontology is described in the next subsection. Below is a sample XBRL instance fragment containing the information of a guarantee transaction; the corresponding XBRL taxonomy is defined by the CSRC.
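The fragment can be pictured along the following lines; apart from the DanBaoFang and BeiDanBaoFang tags, the namespace, the wrapper element, the duration and amount element names, and all values are illustrative assumptions rather than the actual CSRC taxonomy.

import xml.etree.ElementTree as ET

# Illustrative guarantee-disclosure fragment (element names other than
# DanBaoFang/BeiDanBaoFang and the namespace are assumptions).
SAMPLE_FRAGMENT = """
<csrc:DanBao xmlns:csrc="http://example.org/csrc-taxonomy">
  <csrc:DanBaoFang>Company A</csrc:DanBaoFang>
  <csrc:BeiDanBaoFang>Company B</csrc:BeiDanBaoFang>
  <csrc:DanBaoQiXian>2004-01-01/2005-12-31</csrc:DanBaoQiXian>
  <csrc:DanBaoJinE>50000000</csrc:DanBaoJinE>
</csrc:DanBao>
"""

NS = {"csrc": "http://example.org/csrc-taxonomy"}
frag = ET.fromstring(SAMPLE_FRAGMENT)
guarantor = frag.findtext("csrc:DanBaoFang", namespaces=NS)
warrantee = frag.findtext("csrc:BeiDanBaoFang", namespaces=NS)
print(guarantor, "guarantees", warrantee)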

Such a fragment discloses a guarantee transaction between Company A, annotated with the XBRL tag DanBaoFang (i.e., the guarantor), and Company B, annotated with the XBRL tag BeiDanBaoFang (i.e., the warrantee); the duration and the amount of the guarantee are also published. This structured relationship information is imported into the ontology store. Transforming the XBRL data collection into an RDF data collection provides the data with more formal semantics, such as the concept hierarchies of the counterparties and the relations, than the XML syntax of XBRL, and enables more advanced analysis of the relationship information, as described in the next section.

2.2 Ontology for Securities Domain

Here, we define an ontology for entity-relationship analysis of the securities data imported from the XBRL instances. We refer to existing ontology resources such as IBM's IFW (Information Framework) [12] and FOAF (Friend of a Friend) [13] for multi-dimensional and social relationship modeling. In order to model the entity-relationship information, we define two core concept hierarchies, named the Entity Hierarchy and the Relation Hierarchy, to capture semantics that is not formally represented in XBRL. A Relation represents a business transaction or relation of interest, such as a share transfer or a guarantee. An Entity represents a counterparty of a relation, such as a company or a person. The properties belonging to each concept, e.g. the duration of a guarantee transaction or the name of a company, are also defined in the ontology, which is represented in OWL. Part of the core concepts are shown in the following figure. Building an OWL based domain ontology explicitly captures the reusable knowledge of the securities domain, including the concept hierarchies for both Entity and Relation and the important properties of the concepts. In addition, OWL, a W3C recommendation for ontology representation, makes the conceptual modeling more formal and explicit than an XBRL taxonomy.


    Fig. 2. Ontology for Securities Domain
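A sketch of the two hierarchies expressed with rdflib is given below; the namespace IRI is an assumption, and the class list follows the concept names of Fig. 2 and Table 1.

from rdflib import Graph, Namespace, RDF, RDFS, OWL

SEC = Namespace("http://example.org/securities-ontology#")  # assumed IRI
g = Graph()
g.bind("sec", SEC)

# Entity hierarchy (counterparties) and Relation hierarchy (transactions/relations).
entity_classes = ["Company", "ListedCompany", "Person", "Occupation"]
relation_classes = ["ShareHolding", "Transaction", "Guarantee",
                    "ConnectedTransaction", "ShareTransfer"]

for name in ["Entity", "Relation"] + entity_classes + relation_classes:
    g.add((SEC[name], RDF.type, OWL.Class))
for name in entity_classes:
    g.add((SEC[name], RDFS.subClassOf, SEC.Entity))
for name in relation_classes:
    g.add((SEC[name], RDFS.subClassOf, SEC.Relation))
g.add((SEC.ListedCompany, RDFS.subClassOf, SEC.Company))  # plausible refinement

# A datatype property attached to a relation, e.g. the guarantee duration.
g.add((SEC.duration, RDF.type, OWL.DatatypeProperty))
g.add((SEC.duration, RDFS.domain, SEC.Guarantee))

print(g.serialize(format="turtle"))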

2.3 Securities Data Storage, Inference and Query

Data from the different data sources is populated as instances of the defined ontology model for the securities domain and represented in OWL format. Therefore, an efficient knowledge base is needed to store, inference over and query this kind of securities data. Through inference, hidden relationships between companies can be discovered and suspicious illegal transactions can thus be disclosed, and storing and inferencing the RDF/OWL data collection in an ontology repository provides a more flexible securities data management solution. This section describes ontology storage, inference and query with large-scale instances. The logical foundation of OWL is Description Logic (DL), a decidable fragment of First Order Logic (FOL). A DL knowledge base comprises two components, the TBox and the ABox. The TBox describes the terminology, i.e., the vocabulary of an application domain, while the ABox contains assertions about individuals in terms of this vocabulary. Correspondingly, DL inference includes TBox inference (reasoning about concepts) and ABox inference (reasoning about individuals). It is demonstrated in [8] that DL reasoners are able to cope with TBox reasoning over real-world ontologies, but the extremely large amount of securities data makes it difficult for DL reasoners to deal with ABox reasoning. Here, an efficient ontology store on relational databases is designed and developed to meet this critical need. It has the following advantages. 1. The inference method combines TBox inference by the DL reasoner with logic rules translated from Description Logic Programs (DLP) for ABox inference; this guarantees that inference over OWL ontologies restricted to DLP is sound and complete. 2. The schema of the back-end database is designed based on both the translated logic rules and the OWL constructs to support efficient ontology inference. Figure 3 shows the overview of the ontology store, comprising an Import Module, an Inference Module, a Storage Module and a Query Module.


    Fig. 3. Overview of the Ontology Store

The Import Module consists of an OWL parser and a translator. The parser parses an OWL ontology into an object-oriented model in memory, and the translator then populates all assertions into the relational database. The Storage Module stores both the original and the inferred assertions. Since inference and storage are considered an inseparable part of a complete storage and query system for ontologies, we design the schema of the back-end database to optimally support ontology inference. A DL reasoner and a rule engine compose the Inference Module: the rule inference covers the semantics of DLP [9, 10], while the DL reasoner derives the subsumption relationships between classes and properties, which cannot be completely captured by the rules. The query language supported by the developed ontology store is W3C's SPARQL. No inference is conducted during the query answering stage because the inference has already been done at data loading time, which further improves the query response time. Through inference, the relations between entities (such as companies) are made explicit, so users can issue SPARQL queries on the inferred data to find suspicious business transactions. An example is given in the next section to show how SPARQL queries are used for analysis.

3 Securities Data Analysis

The use of SPARQL in our system offers powerful reasoning ability for analyzing complicated entity relationships in fraud investigation, which goes far beyond simple data retrieval. In the securities data analysis process, rules are first-class citizens for expressing complicated business logic, because of their strength in representing multiple relationships. SPARQL, in turn, is a standardized query language for RDF that provides facilities to extract information in the form of URIs, bNodes, and plain and typed literals, to infer over existing data facts and to construct new RDF graphs after inference [4]. Thus, SPARQL is the best choice for rule representation in our system. Generally, the analysis process for fraud investigation takes five steps:
1. Build rules for fraud investigation based on supervisory regulations.
2. Build rules for suspicious fraud based on experience from real case studies.
3. Query for fraud facts by applying the rules defined in (1).
4. Query for fraud clues by applying the rules defined in (2).
5. Evaluate the query results by applying evaluation rules and, if possible, generate new fraud rules or suspicious-fraud rules.


To elaborate the fraud investigation process, such as illegal transaction detection, we present a real case to demonstrate how analysis is done in our system. The case is "Relationship Analysis for Frauds of Company A".

Step 1: We build two sample rules for fraud detection based on supervisory regulations, each expressed as a logic program and as a SPARQL query.

(1) Fraud(x,y) <- Company(x), Company(y), ShareHolding(x,y,sharePercentage), sharePercentage < 0.5, Guarantee(x,y).

Prefix SecOnto:
Select * Where
  (?x SecOnto:ClassConstraint SecOnto:Company)
  (?y SecOnto:ClassConstraint SecOnto:Company)
  (SecOnto:ShareHolding ?x ?y)
  (SecOnto:Guarantee ?x ?y)
  (SecOnto:ShareHolding SecOnto:sharePercentage ?z)
  AND ?z < 0.5

(2) Fraud(x,y) <- Company(x), Company(y), Person(z), ShareHolding(x,y,sharePercentage), sharePercentage < 0.5, Guarantee(x,z), debtRate > 0.7.

Prefix SecOnto:
Select * Where
  (?x SecOnto:ClassConstraint SecOnto:Company)
  (?y SecOnto:ClassConstraint SecOnto:Company)
  (?z SecOnto:ClassConstraint SecOnto:Person)
  (SecOnto:ShareHolding ?x ?y)
  (SecOnto:Guarantee ?x ?z)
  (SecOnto:ShareHolding SecOnto:sharePercentage ?z)
  AND ?z < 0.5 AND ?debtRate > 0.7

Step 2: We build one sample rule for suspicious fraud based on experience from real case studies.

SuspiciousFraud(x,y) <- Person(z), Company(x), Company(y), ShareHolding(z,x), ShareHolding(z,y).

Prefix SecOnto:
Select * Where
  (?x SecOnto:ClassConstraint SecOnto:Company)
  (?y SecOnto:ClassConstraint SecOnto:Company)
  (?z SecOnto:ClassConstraint SecOnto:Person)
  (SecOnto:ShareHolding ?z ?x)
  (SecOnto:ShareHolding ?z ?y)

Step 3: In this step, we check whether company "A" has performed a fraud based on the rules of Step 1. We append a company constraint to the two rules by adding (?x SecOnto:Name SecOnto:A). After query evaluation, we found no such fraud facts for company "A".

    Fig. 4. Results of Fraud Investigation on Company A


Step 4: Similarly to Step 3, we add the company constraint (?x SecOnto:Name SecOnto:A) to the rule of Step 2. After query evaluation, we obtain some suspicious frauds, shown in Figure 4, which are visualized by our business visualization engine. Step 5: We evaluate the query results by applying evaluation rules and, if possible, generate new fraud rules or suspicious-fraud rules. With these five steps, we perform a typical fraud investigation process that enables business users to analyze complicated relationships among a large number of entities much more efficiently and effectively.
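The following sketch shows how the suspicious-fraud rule of Step 2 can be run as a SPARQL query with rdflib over a toy data set; the ontology IRI and property names are assumptions, and an rdf:type test stands in for the SecOnto:ClassConstraint convention used above.

from rdflib import Graph, Namespace, RDF

SEC = Namespace("http://example.org/securities-ontology#")  # assumed IRI
g = Graph()

# Toy ABox: one person holding shares in two companies.
g.add((SEC.PersonZ, RDF.type, SEC.Person))
g.add((SEC.CompanyA, RDF.type, SEC.Company))
g.add((SEC.CompanyB, RDF.type, SEC.Company))
g.add((SEC.PersonZ, SEC.shareHolding, SEC.CompanyA))
g.add((SEC.PersonZ, SEC.shareHolding, SEC.CompanyB))

SUSPICIOUS_FRAUD = """
PREFIX sec: <http://example.org/securities-ontology#>
SELECT ?x ?y ?z WHERE {
  ?z a sec:Person .
  ?x a sec:Company .
  ?y a sec:Company .
  ?z sec:shareHolding ?x .
  ?z sec:shareHolding ?y .
  FILTER (?x != ?y)
}
"""

for row in g.query(SUSPICIOUS_FRAUD):
    print("suspicious pair:", row.x, row.y, "via", row.z)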

4 Performance Evaluation

In this section, we evaluate the performance of securities data management and demonstrate its efficiency by experiments on the import and query modules. The typical query jobs in our system mainly focus on discovering multiple relationships between various entities and relations, which require multi-join operations on the database. All experiments are implemented in Java with JDK 1.4 on a PC with a 2.0 GHz Pentium and 1 GB RAM. The securities data is represented in OWL format and contains securities information about Chinese companies. The characteristics of the securities data used in our experiments are shown in Table 1; it mainly covers nine concepts under the two base classes "Entity" and "Relation".

Table 1. Characteristics of securities data

Concept Type   Concept in Ontology     # of tuples   Size in XBRL (MB)   Size in OWL files (MB)
Entity         Company                 86,646        19.6                22.3
Entity         Person                  70,884        6.1                 6.5
Entity         Occupation              20,916        2.5                 3.2
Entity         ListedCompany           1,384         0.5                 0.6
Relation       Shareholding            269,297       54.2                61.1
Relation       Transaction             12,560        4.3                 5.6
Relation       Guarantee               7,897         1.8                 2.4
Relation       ConnectedTransaction    7,299         1.4                 2.2
Relation       Sharetransfer           4,663         1.1                 1.8

The performance of the import module. We evaluate the performance of the import module by varying the data size for different kinds of concepts. For "Entity" we choose individuals of the concept "Company", and for "Relation" individuals of the concept "Shareholding". In our experiments, each such individual is an OWL individual carrying literal property values (names, amounts, dates and flags) populated from the disclosure data.
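The following rdflib sketch shows the general shape of such individuals; the property names and the association of values with properties are illustrative assumptions.

from rdflib import Graph, Namespace, RDF, Literal, XSD

SEC = Namespace("http://example.org/securities-ontology#")  # assumed IRI
g = Graph()

# One "Company" individual and one "Shareholding" individual with literal
# property values; the property names are illustrative assumptions.
company = SEC.CompanyA
g.add((company, RDF.type, SEC.Company))
g.add((company, SEC.name, Literal("CompanyA")))
g.add((company, SEC.registeredCapital, Literal(10000000, datatype=XSD.integer)))
g.add((company, SEC.foundingDate, Literal("1990-07-01", datatype=XSD.date)))

holding = SEC.Shareholding_0001
g.add((holding, RDF.type, SEC.ShareHolding))
g.add((holding, SEC.holder, SEC.PersonZ))
g.add((holding, SEC.heldCompany, company))
g.add((holding, SEC.sharePercentage, Literal(0.35, datatype=XSD.decimal)))

print(len(g), "triples loaded")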


Fig. 5. The performance of the import module: import time versus data size for (a) Company individuals and (b) Shareholding individuals

Figure 5 shows that the import module of our system has linear scalability in terms of the data size. Because ABox and TBox inference is performed at this stage, our query module can achieve high efficiency, as demonstrated in Figure 6.

The performance of the query module. We illustrate the efficiency of SPARQL query processing from two aspects. First, we test the response time for different query complexities. Second, we investigate the impact of the data volume on the response time. (1) In order to test the performance under different query complexities, we generate SPARQL query patterns based on two transitive "Relation" concepts, "Shareholding" and "Guarantee". For each concept, five query patterns are constructed with relationship chain lengths from one to five. As an example, for the concept "Shareholding", the five queries are distinguished by their Length of Relationship Chain (LRC):
(a) LRC = 1: select * where (?a SecOnto:Shareholding ?b)
(b) LRC = 2: select * where (?a SecOnto:Shareholding ?b) (?b SecOnto:Shareholding ?c)
(c) LRC = 3: select * where (?a SecOnto:Shareholding ?b) (?b SecOnto:Shareholding ?c) (?c SecOnto:Shareholding ?d)


(d) LRC = 4: select * where (?a SecOnto:Shareholding ?b) (?b SecOnto:Shareholding ?c) (?c SecOnto:Shareholding ?d) (?d SecOnto:Shareholding ?e)
(e) LRC = 5: select * where (?a SecOnto:Shareholding ?b) (?b SecOnto:Shareholding ?c) (?c SecOnto:Shareholding ?d) (?d SecOnto:Shareholding ?e) (?e SecOnto:Shareholding ?f)

Thus we obtain ten general query patterns. Our database contains 1,384 listed companies in total, so each query pattern can be extended to 1,384 queries by appending start-point company constraints. Figure 6 gives the query performance results for these two concepts.
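Such chain queries can be generated programmatically, as in the following helper; the prefix IRI is an assumption and the query shape follows the patterns above.

def chain_query(predicate, lrc, start_company=None):
    """Build an LRC-n SPARQL query over a transitive relation such as
    SecOnto:Shareholding, optionally fixing the start-point company."""
    variables = ["?v%d" % i for i in range(lrc + 1)]
    triples = ["  %s SecOnto:%s %s ." % (variables[i], predicate, variables[i + 1])
               for i in range(lrc)]
    if start_company is not None:
        triples.insert(0, "  ?v0 SecOnto:Name SecOnto:%s ." % start_company)
    return ("PREFIX SecOnto: <http://example.org/securities-ontology#>\n"  # assumed IRI
            "SELECT * WHERE {\n" + "\n".join(triples) + "\n}")

# e.g. the 1,384 LRC = 3 queries are obtained by varying start_company:
print(chain_query("Shareholding", 3, start_company="A"))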

Fig. 6. The performance of the query module: average response time versus LRC for (a) Shareholding and (b) Guarantee

We use the average response time over the different queries as our evaluation criterion. The curves in Figure 6 represent the average response time of the 1,384 queries with different start-point company constraints. Overall, we observe that the query

    Fig. 7. The scaleup comparison of query module


performance remains efficient as the length of the relationship chain increases; the most complicated queries cost only around 3,300 ms. More specifically, in Figure 6(a) the response time changes smoothly from LRC = 4 to LRC = 5, as it does from LRC = 3 to LRC = 5 in Figure 6(b). This is caused by the different characteristics of the "Shareholding" and "Guarantee" data, where most transitive "Shareholding" and "Guarantee" relationships are no more than 4 and 3 layers deep, respectively. (2) For the scale-up comparison, we choose the query pattern with LRC = 3, which again gives 1,384 queries for each of the concepts "Shareholding" and "Guarantee". Figure 7 shows how our query processing scales with an increasing number of tuples: for the given query pattern, the running time increases linearly with the data size for both "Shareholding" and "Guarantee".

5 Discussions and Future Work

Aiming to use ontology technologies for securities data management and analysis, we have discussed ontology design for the securities domain, ontology data storage, inference and query, and the logic-rule-based analysis method. Our experience and experiments show that it is feasible and attractive to adopt an ontology driven approach to securities data management. To apply such a system widely and successfully, however, some problems need to be explored further. OWL is a Semantic Web standard that provides a framework for asset management, enterprise integration and the sharing and reuse of data and metadata on the Web; as a W3C standard, it is commonly accepted and used. Rules are usually used to represent deep knowledge and complex business processes, and may be divided into information model rules (knowledge) and business behavior/process rules. The approach we use takes a DL reasoner for TBox inference and logic rules translated from DLP for ABox inference; its inference power is limited, although it meets the common requirements for securities data management and analysis. Earlier this year, W3C held a workshop on Rule Languages for Interoperability [11]. Several rule languages have been proposed as candidates for a W3C standard rule language, such as WSML (Web Service Modeling Language), RuleML (Rule Markup Language), SWSL Rules (Rules in the Semantic Web Services Language), N3 (Notation3) and SWRL (Semantic Web Rule Language). Besides the rules used in ABox inference, our system further supports rules through user-defined SPARQL queries. We are considering a tight integration of the OWL ontology and rules, by adopting W3C's SWRL standard, to model securities data information and business behavior/processes more accurately. As the core of the system, no component is more important than the ontology for securities data management, but it is hard to define a widely accepted ontology due to the complexity of securities data and the subjectivity of ontology definition. We defined our ontology based on two criteria: meeting customers' business requirements, and guaranteeing the completeness and soundness of the ontology based on laws and codes as well as the experience and expertise of domain experts in the securities data management area. We also refer to existing ontology resources such as IBM's IFW (Information Framework), FOAF (Friend of a Friend), and UMass's work [14]. Moreover, some useful securities data taxonomies and vocabularies


are already being used in XBRL files. We will work together with the related parties involved in securities data management to design a standardized ontology for the securities domain. For the next version of our system, we will adopt the OWL+rules approach to model the securities data information and the business behavior/process rules, in order to make our ontology more standards-oriented and to obtain more powerful inference capability. We will also build a well-accepted ontology for securities data management and analysis by collecting more of the currently available resources and cooperating with the related parties in this area; by doing so, we will be able to better meet the business requirements for securities data management and analysis. Considering the massive amount of data and the complexity of the data and relationships to be analyzed, we will improve the system performance to meet critical runtime response requirements while retaining sufficiently powerful inference capability. Furthermore, we hope to apply this approach and methodology to other areas, in order to verify the power of our system and to find more value in this kind of technology.

6 Conclusions

This paper presented a novel ontology-driven method to efficiently manage securities data and effectively analyze entities and their relationships. Compared with the popular XBRL-based taxonomies, we built an ontology model, derived from a large number of case studies and industry standards, that better characterizes entities and their relationships in the securities domain. Data from different sources were first collected, populated as instances of the constructed ontology, and stored into an ontology repository; this imposes more formal semantics on the original XBRL data. Then, inference was performed to make the relations between entities explicit for further analysis. Storing and inferencing over the RDF/OWL data collection in an ontology repository provides a more flexible data management solution. Finally, users can pose semantic SPARQL queries on the data to find suspicious business transactions following a formal analysis process. This enables business users to analyze the entity relationships in the data using business rules and the domain model, rather than simply retrieving the XBRL content of the data.
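As an illustration of such a semantic query, the following is a minimal SPARQL sketch that looks for transactions whose beneficiary also controls the initiating company. The sec: namespace and the property names (initiatedBy, benefits, controls) are hypothetical placeholders, not the actual vocabulary of the ontology described above.

    PREFIX sec: <http://example.org/securities#>
    SELECT ?transaction ?company ?party
    WHERE {
      ?transaction sec:initiatedBy ?company .
      ?transaction sec:benefits    ?party .
      ?party       sec:controls    ?company .
    }

An analyst would run such a query against the inferred RDF/OWL repository and inspect the returned bindings as candidate related-party transactions.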

Acknowledgements

The authors would like to thank LiQin Shen, Cheng Wang, and Bo Li for their constructive comments, and Jian Zhou for his help with the implementation of the ontology store.

References

1. XBRL, eXtensible Business Reporting Language, http://www.xbrl.org/, 2005.
2. G. Klyne and J. Carroll, Resource Description Framework (RDF): Concepts and Abstract Syntax, W3C Recommendation, http://www.w3.org/TR/rdf-concepts/, 2004.
3. Michael K. Smith, Chris Welty, Deborah L. McGuinness, OWL Web Ontology Language Guide, http://www.w3.org/TR/owl-guide/, 2004.
4. SPARQL Query Language for RDF, http://www.w3.org/TR/2004/WD-rdf-sparql-query-20041012/, 2005.
5. UBMatrix, http://www.ubmatrix.com/, 2005.
6. EDGAR, http://www.edgar-online.com/, 2005.
7. Semansys, http://www.semansys.com/, 2005.
8. Volker Haarslev and Ralf Moller, "High Performance Reasoning with Very Large Knowledge Bases: A Practical Case Study", in Proc. IJCAR, pp. 161-168, 2001.
9. Benjamin Grosof, Ian Horrocks, Raphael Volz and Stefan Decker, "Description Logic Programs: Combining Logic Programs with Description Logic", in Proc. of WWW2003.
10. Volz, Raphael: Web Ontology Reasoning with Logic Databases, Univ. of Karlsruhe, 2004.
11. Rule Language Standardization, http://www.w3.org/2004/12/rules-ws/report/, 2005.
12. IBM IFW (IBM Information Framework), http://www-03.ibm.com/industries/financialservices/doc/content/solution/391981103.html.
13. FOAF (Friend of a Friend), http://www.foaf-project.org/.
14. Using Relational Knowledge Discovery to Prevent Securities Fraud. University of Massachusetts, Technical Report 05-23.

Context Gallery: A Service-Oriented Framework to Facilitate Context Information Sharing

Soichiro Iga, Makoto Shinnishi, Masashi Nakatomi, Tetsuro Nagatsuka, and Atsuo Shimada

Ricoh Co., Ltd., 16-1 Shinei-cho, Tsuzuki-ku, Yokohama, Japan
[email protected]
http://www.ricoh.co.jp/

Abstract. Context-aware computing enables users to seamlessly manipulate information through the relevant contexts. However, it is problematic for application developers to search for the particular contexts they could use, and to determine how reliable those contexts are and how they depend on each other. We propose an XML-based context information syndication format called “context summary”, and a web-based service for browsing it called “context gallery”. We demonstrate several novel interactive systems leveraging various kinds of context information, and then discuss how these applications are loosely interconnected with each other through contexts on our framework.

1 Introduction

As Weiser noted, “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.” [18]. To realize this invisible and transparent character of technology [12], the technologies have to be aware of the users’ situation and context [1]. Context-aware computing aims to make information appliances and environments more user-friendly, flexible, and adaptable by capturing, representing, and processing context information (e.g. location, time, and other persons nearby) [3]. Our company, an office equipment producer, offers information appliances such as copiers, printers, and digital cameras. The information technology capabilities of these appliances are gradually extending. Taking a copier as an example, it is no longer just a copier (i.e. a machine that makes copies of printed or graphic matter): it can connect to the network, and it involves much more sophisticated services such as a document management system, a user logging system, and so on. Integrating and combining the various functions of information appliances would develop information services both qualitatively and quantitatively. These multi-functional information appliances will be located everywhere in the upcoming ubiquitous computing environment. The increased mobility of users, devices, and applications suggests that information services should adapt themselves to the activities of the users, based on knowledge of both the location and the task contexts of the users. Context-aware computing enables users to seamlessly manipulate information through the relevant contexts. However, it is problematic for application developers to search for the particular contexts they could use, and to determine how reliable the contexts are and how they depend on each other. Also, while the capabilities of information appliances need to be invisible and transparent for the users, the appliances still need physical and tangible user interfaces through which they can interact with their users [7]. Beyond making technologies invisible and transparent, interaction techniques that physically, effortlessly, and uniformly elicit mix-and-match functional capabilities to manipulate information are highly desirable. This paper describes a service-oriented framework for context-aware systems. We propose an XML-based context information syndication format called “context summary”, and a web-based service for browsing it called “context gallery”. We also propose several novel interactive systems that facilitate information sharing among multiple devices by providing physical and location-aware operation. We then discuss how the application systems are loosely coupled with each other through context information on our framework.

2 Context Gallery

2.1 Service-Oriented Framework for Context-Aware Services

No application should stand alone. To develop the capabilities of information technologies and to leverage a variety of context data by combining multiple applications, an information architecture needs to be net-centric rather than platform-centric. We think that the most remarkable feature differentiating context-aware computing from other computing styles is the merit of scale in services: a considerable range of services has to be provided, both quantitatively and qualitatively, for a particular context-aware environment to bring real advantages to end-users. We have been practicing a service-oriented architecture to flexibly and effectively integrate a variety of context-aware services. A service-oriented architecture is essentially a collection of services that communicate with each other; the communication can involve simple data passing, or two or more services coordinating some activity. Fig. 1 shows the design of our service-oriented framework model for building context-aware systems. Every service is basically composed of three functional layers: (a) capturing some specific contexts, (b) processing the captured contexts, and (c) representing them in certain applications. Services can flexibly communicate with each other in each layer; for example, (d) the context processing layer of service 2 can utilize the captured context data of service 1, or (e) a particular application in service 2 can utilize context information processed in service 1. A single service can have multiple applications (f); for example, a particular location-based service can have a museum navigation guide and a reminder system at the same time.


    Fig. 1. Service-oriented context-aware computing framework

One particular kind of service is a database repository for managing context data and processed information; multiple services can asynchronously share context data and information via the context repository service (g). We designed minimal interface specifications to support application-to-application communication. We let application developers provide external interfaces for accessing the raw captured-context layer and/or the context-processing layer using standard protocols (XML/SOAP). We also accept legacy protocols (CGI, FTP, SMTP, and simple sockets) for devices that are not ready for advanced web-based services.

2.2 Context Summary and Context Gallery

It is problematic for application developers to look for the particular contexts that other applications offer, and to determine the reliability and dependencies of those contexts. We therefore provide a notion called “context summary” that facilitates an application sharing context information with other applications. A context summary is an XML-based format, extending the RSS (RDF Site Summary) [13] format, that allows syndication of the list of contexts each application manages and of how those contexts were derived. RSS is a family of XML file formats for web syndication, mostly used by news websites and weblogs. We let each application developer manually describe a context summary for his or her application; from this format, all other developers can estimate the reliability and accuracy of each piece of context information that the application offers. Fig. 2 shows an excerpt of a context summary reporting that the application named “In/Out Board” generated a context named “presence of the user” from the contexts “userid” and “avg. rfid signal level” of the external application “Location Capture”. In this way, application developers can judge the reliability and accuracy of the context information they are going to leverage. The Context Gallery is web-based aggregator software that collects context summary information from various applications and presents it in a simple web form.




[The XML markup of the excerpt in Fig. 2 did not survive extraction. The visible element values are: application “In/Out Board” (http://.../index.xml); context “presence of the user” (http://.../tabacomio.cgi?key=presence, cgi, 80, processed); source contexts “userid” and “avg. rfid signal level”, both provided by “Location Capture”.]

Fig. 2. An excerpt result of a context summary request

Application developers can browse the list of context information that each application provides, together with the dependencies among contexts, which can further be used to judge the reliability and accuracy of those contexts. The gallery also notifies developers of updates to contexts. In this way, the efficiency and reusability of contexts for developers can be enhanced.
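To make the format concrete, here is a minimal sketch of what a context summary entry such as the one in Fig. 2 might look like. The element values come from the figure, but the tag names (channel, context, dependsOn, provider, and so on) are hypothetical placeholders rather than the actual schema of our format.

    <channel>
      <title>In/Out Board</title>
      <link>http://.../index.xml</link>
      <context>
        <name>presence of the user</name>
        <access>http://.../tabacomio.cgi?key=presence</access>
        <protocol>cgi</protocol>
        <port>80</port>
        <state>processed</state>
        <dependsOn>
          <name>userid</name>
          <provider>Location Capture</provider>
        </dependsOn>
        <dependsOn>
          <name>avg. rfid signal level</name>
          <provider>Location Capture</provider>
        </dependsOn>
      </context>
    </channel>

A developer reading such an entry can see at a glance that “presence of the user” is a processed context whose quality ultimately depends on the RFID data managed by “Location Capture”.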

3 Related Research

A variety of research efforts have explored physical interaction to seamlessly incorporate both digital and physical information [6,7,10,11,14,17]. These approaches show the promise of physical and tangible UIs in particular application areas; however, building physical and tangible UI systems is problematic because designers and developers must spend a large amount of effort developing physical and tangible input techniques. Toolkit and platform approaches can reduce the cost of developing UI prototype systems for designers and developers [8]. The Context Toolkit focuses on separating context sensing from context use and on making context-aware applications easier to develop [15]; it consists of three different types of components: widgets, servers, and interpreters. Papier-Mache aims to provide an interactive toolkit for building tangible user interfaces (TUIs) [9]. It abstracts away the technologies, for example computer vision, electronic tags, and bar codes, that TUI application designers have to work with.


It also facilitates the portability of application systems. In addition, much research on multi-device user interface platforms aims to allow a user to interact through various kinds of computers, including traditional office desktops, laptops, palmtops, PDAs with or without keyboards, and mobile telephones [16]. These approaches provide different views of the same information and coordinate the services available to users on different computing platforms. Toolkit approaches enhance the efficiency, reusability, and portability of UI systems; however, considerable research and development effort may be needed to turn these techniques into practical services that synergize a variety of applications. Platform approaches interconnect various applications to organize them into a cohesive service; however, it is problematic to cover a broad range of application domains from the initial design stage. Providing a variety of interactive applications is not enough to realize a truly ubiquitous computing environment. To realize such an environment, applications should be interconnected with each other, yet toolkit and platform approaches have difficulty covering a broad range of application domains. We think that a service-oriented architecture fits well with our approach, which is to flexibly design and interconnect application systems. Service-oriented middleware for building context-aware services has been proposed [4]; it provides a formal context model based on an ontology using the Web Ontology Language. Our approach does not focus on describing a general hierarchy of semantic concepts of contexts in a particular application domain. It focuses on providing essential key information with which application developers can estimate the reliability, accuracy, and dependencies of context information, and on helping developers introduce context information into their applications. We also aim to provide a practical way for developers to describe context information rather than merely enhancing the descriptive power of a context model.

4 Application Systems

We have been developing a variety of application systems that leverage context information. Some of these applications are loosely coupled with each other through context information. In this section, we introduce some of our application systems.

4.1 Sneaker

Ubiquitous computing promises constant access to information and computational capabilities; however, the information related to a specific piece of work is generally scattered across application sub-systems (e.g. e-mail, word processor, web browser, groupware, and so on). The goal of the “Sneaker” system is to provide interactive software for navigating hypertext relationships between the user’s documents and context information (Fig. 3). The system monitors and collects the information manipulated by the user, and automatically categorizes it by the correlation of the context information extracted from it. A family of strongly associated pieces of information is represented as a visual object (a “task”). The user can effortlessly access pieces of information that are relevant to each other.


Fig. 3. Sneaker: the system enables the user to navigate hypertext relationships between the user’s “tasks”, which contain documents and context information (e.g. location, user icons, keywords, and so on)

4.2 In/Out Board

One common and promising location-aware application for an office work environment is an in/out board. We developed a simple web-based in/out board that shares location, text messages, and digital documents among multiple users (Fig. 4). In our lab, each member carries an active RFID tag for user identification. More than twenty RFID reader devices are mounted in the rooms we frequently use. Whenever we come close to a particular place, the reader device recognizes the ID of the tag and sends it to the server. The board also handles information such as short text messages, digitally captured documents, schedules, and the list of recently shared documents relevant to each user.

4.3 Coin

The “Coin” system facilitates information sharing among multiple devices by providing physically-aware operation (Fig. 5). Using coin-shaped small RFID tags, the user can transfer information from one device to another by a physical and tangible operation (“from this to that”). A software agent running on an information appliance recognizes the user’s operation and determines what the user is doing and focusing on with that device. When the user puts a tag close to the device, the system associates the tag ID with the task information the user is working on and sends that information to the server through the network. When the user puts the same tag close to another device, the stored information is downloaded from the server and activates that device according to the type of the information.


    Fig. 4. A digital in/out board

Fig. 5. Coin: the user can transfer information from one device to another by a physical and tangible operation, just by holding an RFID tag up to one device and then to another

For example, the user can transfer a scanned image from the scanner appliance to a PC just by putting the tag close to the scanner and then to his or her PC, or binarize and resize digitally captured color image data just by hopping the tag to the “binarization” device and then to the “resize” device.

4.4 Position Messenger

We developed a location-aware mobile system that presents information relevant to the user’s contexts (e.g. location, time, other persons nearby, and scheduled events). While the user carries a PDA with an active RFID tag, the system reminds the user of, and recommends, information when the user satisfies the context conditions associated with a particular locale. The basic user interface is composed of layout maps of building floors, and the user can easily drag and drop documents onto the point on the map with which he or she wants to associate information. The user can also define conditions, such as when or who is nearby, for eliciting and presenting the information associated with a particular locale.

4.5 Bookmark Everything

The “Bookmark Everything” system lets the user attach marks, scores, and comments to whatever information he or she is browsing on the computer screen. Bookmarked information can be used as a personal reminder or shared as a recommendation for other users. A software agent running on the user’s PC monitors and captures what the user is doing and browsing on the screen, for example the directory paths of documents, URLs, titles of application windows, mouse events, keyboard events, and so on. When the user holds down a special keyboard command, the software agent pops up a dialog window in which the user can give scores or comments on the information currently being browsed, and the captured data are sent to the server system.

4.6 Snichin

“Snichin” is a software agent (chat-bot) through which the user can interactively browse information from the other application systems. For example, the user can retrieve the location of a particular person by chatting with the agent character using text commands. The system is connected to the Internet, so the user can also look up an on-line dictionary, breaking news, and so on. It can furthermore read RSS feeds from various websites, so that it can incorporate updates of relevant sites into topics for impromptu chatting.

4.7 Location Capture

“Location Capture” is simple networked server software for managing the information of RFID tags. A person wears an active RFID tag, and the system collects and manages the tag’s information when he or she enters certain areas. The information includes, for example, user ID, user name, location name, RFID reader name, and signal level.

5 Discussion

In this section, we discuss how the applications introduced here are loosely coupled with each other on our framework. Ours was a two-year project carried out by six researchers who took a very agile project management approach [5] in order to develop a number of application systems in a short span of time. Instead of making a strict development plan before starting the project, we encouraged individual ideas and development styles in order to gain the advantage of scale in the number of context-aware application systems. We instead set approximate research goals for each project member, and we made it a development guideline that each application system has to syndicate its own context summary information. We also made it a policy that each project member releases his or her application systems to the other members for trial operation once the systems are nearly complete and ready to use.


As development proceeded, several application systems were released for trial use. Some application systems were judged to be practical and continued to be used; others were left unused. Through iterative improvement and trial use of the application systems, each project member came to experience the advantages and robustness of the systems, which reflects the reliability and accuracy of the context information each system contains. When a researcher decided to develop a certain application system, he or she could browse the context gallery system to forage through context information created by previously released systems for the new development. We analyzed how the application systems introduced here are loosely coupled with each other on our framework. We collected the syndicated context summary information from each application system and analyzed the correlations between contexts on the context gallery system. Table 1 shows a cross-tabulation of recipients of context information by providers of context information. This table is similar to a design structure matrix [2], and it shows the relationships and interdependencies among the context information each application system contains. Note that if a particular application provides or utilizes particular context information by itself, it is mapped onto the corresponding diagonal element (the gray areas in the table). From this table we can infer the character of each application: whether it tends to provide context information for the others, or tends to leverage context information provided by the others. For example, application (g), a service for managing user RFID tags, provides user ID and location information for other applications, while application (d) mainly leverages context information provided by other applications such as (a), (b), (c), and (g). Each application leverages particular context information from other applications according to various factors, including credibility, accuracy, accessibility, and relevancy. For example, application (f) uses the location information of (c), while application (c) uses the location information of (g). Application (g) manages raw context information, for example user ID, location (RFID reader device name), and RFID signal level, while application (c) processes that raw context information from (g) to compute more abstract context information, for example the legal names of users, specific location names, and so on. In our framework, application (f) can flexibly use the already processed, abstract context information of application (c), so that every application can flexibly and efficiently reuse context information resources and computational power. We do see large potential in automatically and indirectly inferring contexts from other contexts by reasoning or natural language processing techniques. However, from our two years of experience in developing and sustainably using these applications, we think that contexts whose dependencies are hidden from developers and end users have difficulty gaining the reliability needed for practical use. We believe that our framework facilitates the development of context-aware applications by providing information from which developers and end users can estimate the reliability and accuracy of contexts by browsing the dependencies among them.


    Table 1. Cross-tabulation of recipient of context information by provider of context information

6 Conclusion

In this paper, we proposed a service-oriented framework for context-aware systems that facilitates flexible interconnection among applications through context information. We also proposed an XML-based context information syndication format called “context summary”, and its web-based browsing service called “context gallery”. We demonstrated some of our prototype systems leveraging various kinds of context information, and discussed how these applications are loosely interconnected with each other through contexts and how our framework facilitated the development of context-aware applications. We are currently refining the requirements of the context summary format through practical operation of the system. Our future work is to develop a support tool for describing context summaries and to evaluate how our framework facilitates the use and reuse of context information by developers.

References

1. Abowd, G.D., Mynatt, E.D., Rodden, T.: The Human Experience. Pervasive Computing, January-March (2002) 48-57
2. Baldwin, C.Y., Clark, K.B.: Design Rules, Vol. 1 - The Power of Modularity. The MIT Press (2000)
3. Chen, G., Kotz, D.: A Survey of Context-Aware Mobile Computing Research. Technical Report TR2000-381, Dartmouth College (2000)
4. Gu, T., Pung, H.K., Zhang, D.Q.: A Middleware for Building Context-Aware Mobile Services. Proc. of IEEE Vehicular Technology Conference (2004)
5. Highsmith, J.: Agile Project Management - Creating Innovative Products. Addison-Wesley Professional (2004)
6. Holmquist, L.E., Mattern, F., Schiele, B., Alahuhta, P., Beigl, M., Gellersen, H.W.: Smart-Its Friends: A Technique for Users to Easily Establish Connections between Smart Artefacts. Proc. of UbiComp 2001 (2001) 116-122
7. Ishii, H., Ullmer, B.: Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. Proc. of CHI'97 (1997) 234-241
8. Kindberg, T., Barton, J.: A Web-based Nomadic Computing System. Computer Networks, 35(4) (2001) 443-456
9. Klemmer, S.R., Li, J., Lin, J., Landay, J.A.: Papier-Mache: Toolkit Support for Tangible Input. Proc. of CHI2004 (2004) 399-406
10. Kohtake, N., Rekimoto, J., Anzai, Y.: InfoStick: An Interaction Device for Inter-Appliance Computing. Proc. of HUC'99 (1999) 246-258
11. Konomi, S., Muller-Tomfelde, C., Streitz, N.A.: Passage: Physical Transportation of Digital Information in Cooperative Buildings. Proc. of CoBuild'99 (1999) 45-54
12. Norman, D.A.: The Invisible Computer. The MIT Press (1999)
13. RDF Site Summary (RSS) 1.0: http://web.resource.org/rss/1.0/ (2000)
14. Rekimoto, J.: Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments. Proc. of UIST'97 (1997) 31-39
15. Salber, D., Dey, A.K., Abowd, G.D.: The Context Toolkit: Aiding Development of Context-Enabled Applications. Proc. of CHI'99 (1999) 434-441
16. Seffah, A., Javahery, H.: Multiple User Interfaces. Wiley (2004)
17. Ullmer, B., Ishii, H., Glas, D.: mediaBlocks: Physical Containers, Transports, and Controls for Online Media. Proc. of SIGGRAPH'98 (1998) 379-386
18. Weiser, M.: The Computer for the Twenty-First Century. Scientific American, September (1991) 94-104

A Service-Oriented Architecture Based Macroeconomic Analysis & Forecasting System

Dongmei Han1, Hailiang Huang1, Haidong Cao1, Chang Cui1, and Chunqu Jia2

1 School of Information Management & Engineering, Shanghai University of Finance & Economics, Shanghai 200433, China
(dongmeihan, hlhuang, caohaidong1, cuichang)@mail.shufe.edu.cn
2 State-owned Assets Supervision and Administration Commission of the State Council, Beijing 100080, China

Abstract. A macroeconomic analysis & forecasting system (MAFS) simulates and forecasts macroeconomic cycle trends by analyzing macroeconomic data with various models. Currently, as a system that must integrate multiple models and multiple data sources, MAFS faces two bottlenecks in its development and application: model update, reuse, and system integration on the one hand, and data update and integration on the other. This paper proposes a Service-Oriented Architecture (SOA) based macroeconomic analysis & forecasting system, named SMAFS, to solve these problems. In SMAFS, econometric models and data are encapsulated as Web Services, which are distributed across the network and can be reorganized into a seamlessly integrated system through standard programming and data interfaces. With this architecture, the system's software reusability and cross-platform capability are greatly enhanced. The architecture, functionality, and implementation methods of the system are presented and discussed, the workflow of SMAFS is presented, and the design of the system's Web Services is described. A case example demonstrates that SMAFS can greatly enhance the effectiveness of data collection and processing.

1 Introduction

A macroeconomic analysis & forecasting system (MAFS) simulates and forecasts macroeconomic cycle trends by analyzing macroeconomic data [1, 2]. It provides technical tools to governments and research institutes for effective economic management and policy making. As MAFS is very important for describing regional and nation-wide economies, many administrations and governments have developed, or are developing, their own MAFS with large investments of manpower and money. During these system developments, a number of valuable models and software modules embodying advanced techniques have been built. But since all the research and development of MAFS has been carried out independently, the system architectures are conventional, which becomes an obstruction to model reuse, data integration, and system porting. Currently, as a system that must integrate multiple models and multiple data sources, MAFS has two bottlenecks in its development and application:


Model update, reuse, and system integration. The accuracy and effectiveness of the outcomes of MAFS are guaranteed by models and methods based on economic fluctuation theory. As the economy grows quickly, new economic phenomena appear from time to time, and new explanations and rules for economic problems are discovered continually. Therefore, new or revised models should be included in the MAFS in time. But since these systems are mostly based on traditional architectures, they can only be accessed within local systems or a LAN and do not provide interfaces for external applications, so it is hard to achieve model reuse and system integration. It is therefore a key issue to develop a sharing platform that can integrate distributed models rapidly.

Data update and integration. The key factor in ensuring MAFS's efficiency is the rapid collection of all necessary data from various industries. MAFS needs an extensive and large amount of data: hundreds or even thousands of economic entities' or industries' data series need to be processed. These data usually reside with various regional bodies or government departments that may employ different kinds of databases and operating systems, so it is really difficult to collect and integrate the data effectively.

In view of the bottlenecks mentioned above, this paper proposes a MAFS based on a service-oriented architecture (SOA). In this architecture, all econometric models [3] can be organized and realized as Web Services. Through the Web Services Description Language (WSDL) [4], users can use the remote services as conveniently as running a local application. Under this architecture, heterogeneous data sources can be accessed in the form of Web Services, which is expected to solve the data collection and integration problems effectively. In general, the SOA-based MAFS has the following advantages:

− Reusable modules. Based on the modularization of low-level services, complex services can be assembled from those low-level modules in order to realize the reuse of external models (including econometric models and data processing modules). In addition, since service users do not visit service suppliers directly, this kind of service can be used more flexibly.
− Easier maintenance. The loose coupling between service suppliers and users, as well as the open standards, assures easier maintenance of the system.
− Cross-platform operation. After the logical functions are defined as services, there is no difference between platforms and languages: any module or service programmed in one language can reuse other services programmed in another language.
− Integration of distributed resources.

This paper presents an SOA-based macroeconomic analysis & forecasting system, SMAFS. The content of this paper is arranged as follows: Section 2 introduces the overall structure and functions of the SOA-based MAFS; Section 3 describes the realization of SMAFS; Section 4 evaluates the system performance in comparison with the traditional one; and Section 5 gives the conclusion.


2 The Architecture and Functions of the SOA-Based Macroeconomic Analysis & Forecasting System (SMAFS)

The main tasks of MAFS include: 1) analyzing and simulating the trend of real economic activity (performed by the economic cycle index sub-system in this paper); 2) preventing departures from the normal path during economic development and operation, or warning when a crisis occurs, for which an early-warning sub-system is established; and 3) forecasting the values of the main macroeconomic indexes in the future (performed by the macroeconomic forecasting sub-system).

2.1 The Goals and Functions of SMAFS

Based on business cycle theory, this paper develops a MAFS and uses multiple mathematical methods [5] to establish the MCIS, EWS [6], and EFS.

    Fig. 1. The functionalities of the SMAFS

The main workflow of this system includes: 1) obtaining monthly, quarterly, and annual macroeconomic data and collecting economic data; 2) using seasonal adjustment methods such as X11 [7] to process the data; 3) establishing the economic cycle monitoring and forecasting indicator groups;


4) using econometric models to calculate the economic cycle indexes; 5) using econometric models to judge and forecast the turning points [8] of business cycle fluctuations; 6) using models to obtain the economic warning curve; and 7) judging whether the economic cycle index, the warning signals, and the forecast results are consistent with the real economic situation. If they are consistent, policy proposals are given through policy simulation; otherwise, the indicators are selected again and the models are corrected.

SMAFS comprises six sub-systems: the macroeconomic cycle index sub-system (MCIS), the early-warning sub-system (EWS), the economy forecast sub-system (EFS), the data and file management sub-system (DFMS), the Heterogeneous Data Integration Platform, and the user interface sub-system (UIS). The functional structure is shown in Fig. 1. The Heterogeneous Data Integration Platform serves as the data support platform of SMAFS: it draws data from various heterogeneous databases, transforms them into the format required by the models' calculations, and integrates them as needed. The web-based user interface calls the appropriate models and data at the user's request and feeds the results back to the user in the form of tables and/or charts.

2.2 The Sub-systems of SMAFS

MCIS. The MCIS consists of two functions:
− Selecting economic cycle indicators.
− Compounding the cycle indexes.

EWS. The EWS employs a few sensitive indicators that reflect the state of the economic situation. It then uses related data processing methods to combine some indicators into one composite indicator [9]. These indicators, just like traffic lights in red, yellow, and green, give different signals in different economic situations, and we can forecast the economic development trend by observing and analyzing the changes of the signals.

    Fig. 2. The data file management sub-system

EFS. The EFS consists of two functions:
− The simultaneous equations model forecasting sub-system [10].
− The single indicator forecasting sub-system.


Data file management sub-system. Since the forecasting methods require a series of data processing steps on the economic indicators, it is necessary to save these indicators as data files, so the system provides a complete data file management function, as shown in Fig. 2.

User interface sub-system. Figure 3 depicts the SMAFS main user interface and a couple of output screenshots. Fig. 3(a) shows the gateway user interface, Fig. 3(b) shows an output chart of the MCIS, and Fig. 3(c) illustrates an output chart of the data file management sub-system.

    Fig. 3. SMAFS User Interfaces (a, b, c)

SMAFS operates under a browser/server (B/S) structure: users access each sub-system by visiting the system's site through a browser. The results of the models are output as figures, charts, etc.

3 Implementation of SMAFS

The core idea of SMAFS is to provide clients with various modules and methods of macroeconomic analysis & forecasting in the form of Web Services. The interfaces of these services are based on common standards, and their parameters are defined with XML Schema data types. In this architecture, all econometric models are provided in the form of Web Services and registered on UDDI. Once a client (normally a browser, but possibly another kind of client application) sends a SOAP or HTTP request to the server, the server calls the appropriate Web Services (either local or remote ones) to deal with the request and give responses. Remote Web Services are located through the UDDI server. As mentioned before, the data used in the econometric models (Web Services) typically come from the heterogeneous data integration platform. The network architecture of SMAFS is depicted in Fig. 4.

3.1 The Design and Realization of Web Services Based Econometric Models

Web Services based econometric models are the core of the system. The main characteristics of Web Services are loose coupling and being file-driven, which makes them better than connection-based interfaces.


    Fig. 4. The network architecture of SMAFS

When a client calls a web service, it usually sends a whole file rather than a set of discrete parameters; the Web Service receives the whole file, processes it, and returns the result. In SMAFS, the most widely used econometric models are implemented in the form of Web Services, whose architecture is depicted in Fig. 5. In this architecture, the Web Services do not implement the model logic directly. Instead, their main tasks are to authenticate and authorize incoming service requests, and then relay the request details to back-end model components and workflows for processing. In SMAFS, the Web Services carry out the following functions:
− Providing publicly accessible endpoints for service requests.
− Authenticating and authorizing incoming service requests.

The model components realize the econometric logic but are not exposed at publicly accessible endpoints. They do not process SOAP requests directly, nor do they have the ability to filter out incoming service requests based on security tokens; they can communicate with external modules only through the Web Services interfaces.

Fig. 5. Web Services based econometric model

In general, the architecture has the following advantages:
− Flexible function extension. Due to the separation between the interface and the model components, functions can be flexibly extended at the interface level.

    A SOA Based Macroeconomic Analysis & Forecasting System

    1113

    − Invoking other models. Each model can be invoked or accessed by other ones through standard protocols in Internet environment. − Across the heterogonous platform. 3.2 Web Services Based Heterogonous Data Integration Platform Running of MAFS needs huge numbers of economics data series. Currently, the economic data series mainly come from “Statistic Database” and “China Economic Information Database”, which are distributed in different network spot and are heterogonous ones and are hard to integrate them into a whole dataset by traditional approaches. In order to conveniently and effectively collect and make use of these data, we build a uniform data platform to integrate data from heterogeneous databases via a uniform interface offered. Web Services technology provides a distributed computing technology via more open Internet standards and eliminates the problem of interaction of existent solution.

Fig. 6. The heterogeneous data integration platform

The heterogeneous data integration framework is shown in Fig. 6. The application interface is the first layer of the system; it provides an accessible interface for external data consumers. The data integration layer is the second layer: through the data service list, it records the data services offered by the different databases and updates them dynamically, and through this list users can easily access the distributed data sources needed by SMAFS. The third is the data service layer, which offers the resource framework of the data services and the data tables or views that can be shared. By using XML files, it shields the divergences among the various databases so as to provide the system with data responses in a uniform data format.


The fourth and bottom layer is the data source layer. It consists of various databases in their original structures. Each database can act either as an independent data service node in the network or as a heterogeneous database offering its data service independently.
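As an illustration of the uniform data format mentioned above, the following is a minimal sketch of what a data-service response might look like. The element and attribute names, and the sample values, are hypothetical placeholders rather than the actual schema used by SMAFS; only the indicator and database names are taken from the tables in Section 4.

    <dataResponse>
      <source database="Industry data database" dbms="Oracle"/>
      <indicator name="Value Added of Industry" frequency="monthly">
        <observation period="2004-01" value="..."/>
        <observation period="2004-02" value="..."/>
      </indicator>
    </dataResponse>

Whatever DBMS or schema the underlying source uses, the data service layer returns observations in one such shape, so the econometric model services can consume any source without source-specific parsing code.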

4 The Evaluation of SMAFS

The system provides nearly 100 econometric and mathematical models, and the integrated database contains tens of thousands of series, including monthly, quarterly, and annual data for all industries. All functions of the system, such as compounding the coincident index and drawing early-warning signal charts, can be invoked through its interfaces. In order to prove the system's validity, we tested the reasonableness of the system's operational efficiency and results through three tests.

[Test 1] Comparison of coincident index results. The aim of Test 1 is to compare the composition of the coincident index based on different economic data: one index is produced by the system before data integration, and the other by the system after data integration. We employed a manual method to collect data from the China Statistical Yearbook, China Economic Prosperity Monthly, etc. into the system database, filtered 5 indicators, and generated coincident index 1. In contrast, 9 economic cycle indicators were filtered from hundreds of data sources, mainly the China Macroeconomic Database, China Economic Statistics Database, etc., to generate coincident index 2. These databases are managed by 7 different departments located at different network locations. They are all encapsulated as Web Services, which can be accessed by other appropriate applications. Their details are shown in Table 1.

Table 1. Data sources for economic cycle indicators

No. | Database Name | Database Content | DBMS | OS | IP address
1 | China macroeconomic statistic database | China production amount | Sybase | Windows XP | 202.121.142.177
2 | China macroeconomic statistic database | Total society fixed assets investment | Oracle | Windows 2000 | 202.121.142.173
… | …… | …… | …… | …… | ……
7 | Industry data database | Value Added of Industry | Oracle | Windows XP | 202.101.140.39

The coincident indexes (CI) produced from the coincident indicator groups are depicted in Fig. 7. The green curve is the reference standard for the coincident index. The pink curve is coincident index 1, compounded from indicator group 1, while the blue curve is coincident index 2, compounded from indicator group 2.


From Fig. 7, we find that coincident index 2 agrees more closely with the reference coincident index (the green curve). The main reason lies in the indicator collection that constitutes the economic cycle index: the indicators for coincident index 2 are filtered, from more databases, out of indicators from different fields that are sensitive to and representative of economic cycle fluctuations, so the composite index is more accurate. The test result shows that index composition using the system after data integration is more accurate, and that the system is effective.

    Fig. 7. Coincident CI comparison

    Fig. 8. Early-warning indicator signals

The pink curve, by contrast, is far from the green curve, because its data sources are limited and were collected in the traditional fashion; this limits the choice of economic cycle indicators, so the dispersion appears. The results of Test 1 illustrate that composite indexes based on the integrated database reflect the real economic situation more faithfully.

[Test 2] Early-warning indicator signal chart creation. We filtered 8 early-warning indicators from more than 100 economic series and created the early-warning indicator signals. Fig. 8 shows that the growth rates of industrial production and fixed assets investment went down from 'overheat' to 'trend to heat' or 'normal'; that resident consumption, after stable increases in previous years, started rising rapidly in the most recent year; that total foreign trade grew slowly, with import and export growth trending differently; and that money supply and various loans went down rapidly from 'trend to heat' to 'trend to cold', and so on. This indicates that the early-warning indicators and models we selected are reasonable.

[Test 3] Forecast of principal macroeconomic indicators. Using the economy forecast sub-system, we forecast China's principal macroeconomic indexes. First, we picked about 70 economic series from the integrated database, as shown in Table 2. Second, by solving the simultaneous equations and integrating the forecasts of a series of single indicators, we obtained the forecast results for the main economic indicators, as shown in Table 3.

Table 2. Economy cycle forecasting indicator group

Endogenous variable | Exogenous variable
gross domestic product amount | exchange rate
Value Added of Industry | financial subsidies
Total Value of Imports and Exports | IMF
Gross Capital Formation | Ex-Factory Price Indices of Industrial Products
…… | ……

    Table 3. Macroeconomic principle index forecast results

Name of index | 2004 (real value) | 2004 forecast value | Deviation (%) | SMAFS efficiency (days) | Traditional efficiency (days)
GDP Growth Rate | 9.5 | 8.6~9.5 | -0.9~0 | 3~5 | 20~30
IndexA* | 17.7 | 13.1~15.4 | -4.6~2.3 | 3~5 | 20~30
IndexB* | 35.83 | 23.1~34 | -12.73~-0.07 | 3~5 | 20~30
M1 Growth Rate | 14.12 | 11~13 | -3.12~1.12 | 3~5 | 20~30
M2 Growth Rate | 14.36 | 12.6~16 | -1.76~1.64 | 3~5 | 20~30
IndexC* | 2.7 | 2~4 | -0.7~1.3 | 3~5 | 20~30

* IndexA: Total Retail Sales of Consumption Growth Rate; IndexB: Total Imports Growth Rate; IndexC: Consumption Price Indices Growth Rate.

The forecast results indicate that 30% of the forecast errors are within 1% and 90% are within 5%, which satisfies the forecast accuracy requirements. In addition, the efficiency columns for this system and the traditional system show that this system needs 3 to 5 days, in contrast with 20 to 30 days for the traditional system.

5 Conclusions

This paper proposes SMAFS, a Service-Oriented Architecture (SOA) based macroeconomic analysis & forecasting system. In SMAFS, all econometric models are provided in the form of Web Services. The Heterogeneous Data Integration Platform serves as the data support of SMAFS: it draws data from various heterogeneous databases, transforms them into the format required by the models' calculations, and integrates them as needed. The architecture, functionality, and implementation methods of SMAFS have been designed and implemented. The major advantages brought by SMAFS are as follows:

− Achieving model reuse. The SOA gives the system high extensibility through the Web Services interface, which makes it convenient for developers and end-users to access the required logic and data resources. Simply by encapsulating data resources into Web Services, we saved about 40% of the cost and time.
− Enhancing system performance. Through three representative case examples, we compared the performance of the traditional system and SMAFS, and found that the output of the latter was much more accurate and effective than that of the former. The reason is that SMAFS incorporates more data sources from distributed network locations, which makes the analysis align better with the real situation of the macro-economy.

References

1. The Report of the Business Cycle Dating Committee. National Bureau of Economic Research, http://www.nber.org.
2. Tiemei Gao, Xianli Kong, Jinming Wang: International Economy Economic Cycle Research Development Summary. Quantitative & Technical Economics Research 11 (2003) 158-162.
3. Torben G. Andersen: Simulation-Based Econometric Methods. Cambridge University Press (2000).
4. Sanjiva Weerawarana: Web Services Platform Architecture: SOAP, WSDL, WS-Policy, WS-Addressing, WS-BPEL, WS-Reliable Messaging, and More. Prentice Hall (2005).
5. OECD Composite Leading Indicators (CLI) - Updated. http://www.oecd.org (2003).
6. Haibing Gu: Macroscopic Economy Warning Research: Theory, Method, History. Economy Theory and Management 4 (1997) 1-7.
7. Shiskin, J.: The Variant of the Census Method II Seasonal Adjustment Program. U.S. Department of Commerce, Bureau of the Census (1965).
8. Lei Chen: China Shunt Period Economy Economic Cycle Measurement and Analysis. International Economy 12 (2001) 63-68.
9. Wenquan Dong, Tiemei Gao, Shizhang Jiang, Lei Chen: Economy Periodical Fluctuation Analysis and Warning. Jilin University Press (1998).
10. Tiemei Gao, Xindong Zhao, Dongmei Han: China Macroeconomic Measurement Model and Policy Simulation Analysis. China Soft Science 8 (2000) 114-120.

A Web-Based Method for Building Company Name Knowledge Base

Zou Gang1, Meng Yao1, Yu Hao1, and Nishino Fumihito2

1 Fujitsu Research and Development Center Co. Ltd.
2 Fujitsu Laboratories Ltd., Kawasaki, Japan

Abstract. The fact that a company always has various names, such as Chinese full names, Chinese abbreviated names, and English abbreviated names, makes it very difficult to collect and extract information about the company, because: (1) it is hard to identify a company's Chinese abbreviated names, and (2) it is hard to discover the relationships between the names. This paper presents a solution: automatically building a large-scale company name knowledge base from web pages. First, name candidates are picked out from the company's homepage. Then the relationships between them are discovered, and the candidates are ranked accordingly. Third, the name knowledge base is built from these results. The knowledge base can be applied to identifying abbreviated company names and to collecting information related to a company. Experimental results indicate that this method is effective, that it can be applied to company name normalization and keyword expansion, and that it works in a practical company information extraction system.

1 Introduction

A company always has various names. For example, a Chinese telecom company, with “ ” as its full name, also has “ ” and “ ” as its Chinese abbreviated names and “China Unicom” as its English abbreviated name, all of which appear on different occasions. As shown in Chart 1, in Sample A only full names appear in the whole text; in Sample B only abbreviated names appear; while in Sample C the English and Chinese names of the same company appear at the same time. This variety of names brings many difficulties in collecting and extracting company information in our company IE system:

(1) It is hard to identify abbreviated Chinese company names, because there is no distinct internal feature. (2) It is hard to discover the relationships between the names. For example, as shown in Sample C, “ ” and “Intel” are different abbreviated names of the same company, but without any hints from the context, even a human being without the relevant background knowledge cannot tell that these two names belong to one company.

To solve these problems, a feasible solution is to build a large-scale company name knowledge base in which the various names are stored together with their correlations.


With such a name knowledge base, identification of company names and their correlations becomes much easier, whenever and wherever they appear in text. In this solution, however, creation and maintenance of the knowledge base is a critical problem, because the associated human cost is high. Hence, we try to build it from the Internet automatically. With the fast development of the Internet, a great number of companies have set up their own websites to introduce themselves or supply services. As mentioned in an Internet investigation report [1], there were 158,293 domains ending with ".com.cn" as of 2004, about 41.4% of all domains ending with ".cn". In addition, many companies have domains ending with ".com", and many have no separate domain at all. From these facts, we conclude that it is feasible to acquire information about company names from the Internet. On the Internet, we find that the homepage is a special kind of page around which company names gather. Therefore, we concentrate on the homepage to collect name candidates, try to find their relationships, and then rank them accordingly. Finally, we pick the best names and store them in the database together with their relations. In this way, we have built the knowledge base and used it to identify abbreviated names and their relationships. Experimental results show that this method is effective.

Chart 1. Three examples of company names in news web pages. Company names are marked in bold and italic.

Sample A: http://www.cnhubei.com/200405/ca453799.htm (2004-05-02 07:04:59)
Sample B: http://www.nugoo.com/news/detail.php?code=200504010586972 (2005-04-01, source: www.nugoo.com)
Sample C: http://www.pconline.com.cn/news/gjyj/0504/587912.html (mentions "Intel" and "Turion 64")
[The Chinese text of the three news excerpts is not reproduced here.]

This method is related to Link Analysis. As we know, the web can be modeled as a directed graph with webpages as nodes and hyperlinks as directed edges [2].


One basic assumption of Link Analysis is that if there is a hyperlink from webpage A to webpage B, the author of webpage A considered webpage B valuable. Based on this assumption, we believe that the anchor name of such a hyperlink is also valuable, so in this method most name candidates are collected from anchor names. Besides its use in our system, the knowledge base can also be put to use in the following areas:

(1) Identification of Chinese organization names. Imported into named entity identification during segmentation, the knowledge base helps improve the recall of abbreviated company names;
(2) Company name normalization. With the knowledge base, relationships between names can be judged, so we can determine which company names belong to the same company. This is helpful for information extraction tasks such as event merging;
(3) Key word expansion. A company name keyword can be expanded into the company's other names, much like looking up synonyms, which helps a search engine retrieve more information.

In this paper, Section 2 presents the basic concept and methodology; Section 3 introduces several applications and experimental results; Section 4 concludes.

2 Automatically Acquiring Company Name Knowledge on the Web

2.1 Basic Concept

A large number of companies have set up websites on the Web, which supply rich information about company names. However, most web information is semi-structured and contains many errors, so it cannot be used directly. We find that the homepage is a special kind of node in the web graph model: it usually has both a large outdegree and a large indegree. In other words, the homepage is not only a navigational page but is also referenced by many other webpages. Since the homepage is the start point of a website, when an author creates a hyperlink to another website, he will usually point the hyperlink at its homepage and use the company name as the anchor text. According to the assumption of Link Analysis, this anchor text is reliable, so anchor text is one important place where company names can be found. Apart from that, company names appear in various positions of the homepage itself, e.g. the title field, the copyright declaration at the bottom, and the meta fields in the source. A typical example is shown in Figure 1. Based on these facts, we regard the homepage as an information source around which company names gather.

Another problem is identifying abbreviated company names. Although there is no distinct internal feature, an abbreviated company name almost always derives from the full name, whether in English or Chinese. In other words, once the full company name is found, it is easy to check whether other strings are its abbreviations in the context of the homepage. Since rich name information gathers around the homepage, the various company names can be collected and processed there as well.
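To make this abbreviation test concrete, here is a minimal sketch in Python (our illustration, not the authors' exact rule): a candidate is accepted as a possible abbreviation of a homepage's full company name if its characters occur in the full name in the same order. A real system would also need suffix handling and language-specific rules.

def is_abbreviation(candidate: str, full_name: str) -> bool:
    """Simple subsequence heuristic: every character of `candidate`
    must appear in `full_name` in the same order."""
    it = iter(full_name)
    return all(ch in it for ch in candidate)

# Example: "Oracle" is accepted against "Oracle Corporation"; "Intel" is not.
assert is_abbreviation("Oracle", "Oracle Corporation")
assert not is_abbreviation("Intel", "Oracle Corporation")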


From the above discussion, there are five steps for building the company name knowledge base:

(1) Pre-process the crawled webpages and store their link information into the database;
(2) Locate the company information source, i.e. find the company homepage;
(3) Collect name candidates around the information source;
(4) Analyze the candidates to obtain the various company names and their correlations;
(5) Build the company name knowledge base for further applications.

A high-level skeleton of this pipeline is sketched below. In the following, we focus on the 2nd and the 4th steps.
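The skeleton below shows how the five steps could be wired together. It is an illustrative Python sketch only: the objects passed in (crawler, link_db, analyzer, kb) and all of their method names are placeholders, not the authors' actual code.

def build_company_name_kb(crawler, link_db, analyzer, kb):
    # (1) Pre-process crawled webpages; store (source URL, target URL, anchor text).
    for page in crawler.pages():
        link_db.store_links(page)
    # (2) Locate company homepages through anchors ending with full-name suffixes
    #     (see Section 2.2).
    for homepage in link_db.find_homepage_candidates():
        # (3) Collect name candidates around the information source.
        candidates = link_db.collect_candidates(homepage)
        # (4) Trim the candidates, group them into name clusters and rank the
        #     clusters (see Section 2.3).
        best_clusters = analyzer.rank(analyzer.cluster(analyzer.trim(candidates)))
        # (5) Store the winning names and their correlations.
        kb.store(homepage, best_clusters)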

Anchor texts of hyperlinks pointing to the homepage: Oracle, Oracle Corporation, Oracle home page, Oracle website, Site Web Oracle, Oracle[ ], Oracle 10g, Oracle Corporation Homepage, ……

    Fig. 1. Company names appear in various positions of the homepage

2.2 Locating the Company Homepage

Restricted to the company domain, homepage finding is much easier and faster than the TREC Homepage Finding task. One obvious feature of a full company name is that it almost always ends with a specific suffix such as " ", " " or "inc", so we can locate the company homepage simply through hyperlinks whose anchor texts carry those suffixes. This method is easier and faster than iterating through every webpage, but it has some problems:

(1) The same anchor text may belong to several hyperlinks that point to different webpages;
(2) If the full company name was not collected in the pre-processing stage, the homepage will be missed.

To solve problem (1), we use a simple algorithm that ranks a small group of webpages based on two kinds of evidence, URL type and hyperlink recommendation [3], and then picks the webpage with the highest score as the homepage. Problem (2) is a data sparseness problem and hard to solve, but compared with processing every webpage, the occasional loss of a homepage is an acceptable price.

2.3 Identification of the Company Names

After locating the company homepage, name candidates are collected from four positions: (1) anchor texts of hyperlinks pointing to the homepage; (2) the title of the homepage; (3) the copyright declaration at the bottom of the homepage; (4) meta elements in the HTML source. For example, Chart 2 shows the candidates collected around the homepage of China Unicom. The candidates contain ambiguities and errors: websites are frequently set up, updated and removed, and hyperlinks are created by people, so anchor texts become out of date or contain human mistakes; e.g. the formal Chinese translation of "IBM" is " ", while " " also appears in a few texts if you search for it in Google. Therefore, the candidates need to be trimmed and analyzed.

Another problem is how to pick company names out of candidates that contain errors and ambiguities. Depending only on frequency or on a specific suffix is not enough; e.g. three candidates in Chart 2 end with the specific suffix " ". Hence the analysis stage must also consider correlations between the candidates; e.g. in Chart 2, " " is an abbreviation of " ", and both belong to one company name cluster.

In summary, identifying company names takes two steps: a trimming process and a name analysis process. The trimming process consists of procedures such as removing URLs from the candidates, recovering HTML character entities (e.g. " ") and converting traditional Chinese to simplified Chinese (a small sketch of this trimming step follows Chart 2). The analysis process works on the trimmed results and mainly ranks the candidates, incorporating their frequencies and correlations. It is made up of two steps.

(1) Discover correlations among the name candidates and build a name cluster dominated by each full-company-name candidate. A name cluster is a set of names dominated by a full-name candidate; that is, every name candidate in the cluster is an abbreviation of the dominating full-name candidate. Hence the name cluster contains not only the names but also their correlations. After ranking, the best name cluster will be chosen as the company's names. The ranking algorithm depends on three attributes of the name cluster: the first is the positions where the candidates appear in the webpage; the second is the number of hosts that have hyperlinks pointing to the homepage; the third is the language of the name cluster. The 1st attribute is an auxiliary feature: observing webpages, we found that a candidate at certain positions is more likely to be the company name, e.g. authors usually place the company name in the title field of the homepage. The 2nd attribute is the most important feature and denotes how popular the name cluster is; the cluster with the maximum popularity is most likely to contain the company's names. The 3rd attribute distinguishes the language of the name cluster so that English and Chinese name clusters can be chosen separately.

Chart 2. The name candidates collected around the homepage of China Unicom: "China united telecommunications corporation", "China Unicom", "www.chinaunicom.com.cn", "http://www.chinaunicom.com.cn", together with several Chinese name candidates.
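As an illustration of the trimming process described above, the sketch below recovers HTML character entities and strips URLs with the Python standard library. The URL pattern is our assumption about what "removing URLs" involves, and the traditional-to-simplified Chinese conversion is omitted because it needs an external conversion table or library.

import html
import re

URL_PATTERN = re.compile(r"(https?://\S+|www\.\S+)", re.IGNORECASE)

def trim_candidates(raw_candidates):
    trimmed = []
    for text in raw_candidates:
        text = html.unescape(text)        # e.g. "&amp;" -> "&"
        text = URL_PATTERN.sub("", text)  # drop embedded URLs
        text = text.strip(" \t\r\n-|")    # strip separators left by page templates
        if text:
            trimmed.append(text)
    return trimmed

print(trim_candidates(["China Unicom", "www.chinaunicom.com.cn", "Oracle &amp; partners"]))
# -> ['China Unicom', 'Oracle & partners']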

Name Cluster 1. Full Company Name: " "; Abbreviated Names: " ". Attributes: Position: title, anchor text; host frequency: 10; language: Chinese.

Fig. 2. One example of the name cluster built from the name candidates in Chart 2

(2) Rank every name cluster. The score function is shown in Equation (1):

    Score(x) = α · x.host_freq / T + (1 − α) · P(x.position)    (1)

In Equation (1), x stands for a name cluster, T is the total host frequency over all name candidates, and α is a weight. Different positions have different priority scores, and the function P produces the overall priority score of a set of positions, e.g. P(anchor) = 0.3 and P(anchor & title) = 0.8. After this step, we pick the name cluster with the maximum score as the company's names. For example, in Chart 2, although " " ends with " ", it is only an abbreviation of " ", so it cannot be a full-company-name candidate; it belongs to the name cluster dominated by " ". From Chart 2, three name clusters can be built, one of which is shown in Figure 2. After scoring every name cluster, we pick the Chinese and English name clusters with the highest scores respectively, obtain all the company names from them, and finally store the names and their correlations into the database. This is the whole process of building the company name knowledge base.
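The scoring step can be transcribed almost directly from Equation (1), as in the sketch below. The weight ALPHA and the full position-priority table are assumptions; the paper only gives the two example priorities P(anchor) = 0.3 and P(anchor & title) = 0.8.

ALPHA = 0.7  # assumed weight; the paper does not specify its value

POSITION_PRIORITY = {
    frozenset({"anchor"}): 0.3,           # example value from the paper
    frozenset({"anchor", "title"}): 0.8,  # example value from the paper
    # other position combinations would need priorities of their own
}

def score(cluster, total_host_freq):
    """Score(x) = alpha * x.host_freq / T + (1 - alpha) * P(x.position)."""
    p = POSITION_PRIORITY.get(frozenset(cluster["positions"]), 0.0)
    return ALPHA * cluster["host_freq"] / total_host_freq + (1 - ALPHA) * p

cluster = {"positions": {"anchor", "title"}, "host_freq": 10, "language": "Chinese"}
print(score(cluster, total_host_freq=25))  # 0.7 * 10/25 + 0.3 * 0.8 = 0.52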


3 Experiments

After processing over 3,500,000 webpages, we have built a knowledge base containing approximately 25,000 companies and about 40,000 company names. Based on this knowledge base, we carried out three experiments: the first tests the precision of the knowledge base; the second evaluates company name normalization; the third is key word expansion for company names.

In the first experiment, we randomly sample part of the data from the database and calculate precision manually. The results are shown in Chart 3. Most errors come from ambiguities in identifying full company names, such as " ". The recall of company names has not been calculated because the human cost is too high; however, it is reflected to some extent in the following experiments.

The second experiment is company name normalization. We have a corpus tagged with company names. First, 734 company names, including abbreviations and full names and belonging to 428 companies, are extracted from this corpus and regarded as keys. We then normalize those 734 company names using the knowledge base. The result is that they are grouped into 537 companies, of which 397 exactly match the keys, while the remaining 31 companies have not been merged correctly, so the precision of normalization is 92.6%. Data sparseness is a main reason for these errors: if a company name has not been collected into the knowledge base, it cannot be normalized correctly. Another reason is the flexible expression of company names, which also leads to errors, e.g. " (Panasonic) ", "netac( ) " and " Sanyo". A sketch of the lookup behind this normalization step is given after Chart 3.

The third experiment is company key word expansion. Given some company names, we expand them by querying the company name knowledge base. Part of the results retrieved from the knowledge base is shown in Chart 4.

Chart 3. Precision on a sample of the database

Total company names    1000     2000
Error company names      73      167
Precision              92.7%   91.65%
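The normalization step of the second experiment can be pictured as a lookup from surface names to company identifiers, as sketched below. The name_to_company mapping stands in for a query against the knowledge base; it is not the authors' actual schema, and unknown names simply fall back to themselves, which is exactly where the data-sparseness errors discussed above arise.

from collections import defaultdict

def normalize(names, name_to_company):
    companies = defaultdict(set)
    for name in names:
        company_id = name_to_company.get(name, name)  # unknown names stay unmerged
        companies[company_id].add(name)
    return companies

kb_lookup = {
    "China Unicom": "unicom",
    "China United Telecommunications corporation": "unicom",
}
print(dict(normalize(["China Unicom", "China United Telecommunications corporation"], kb_lookup)))
# -> {'unicom': {'China Unicom', 'China United Telecommunications corporation'}} (set order may vary)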

Chart 4. The expansion of given company key words

Keyword      Synonyms
Microsoft    " ", Microsoft inc., " "
" "          " ", China Unicom, " ", China United Telecommunications corporation
" "          Siemens China, " ", " ", Siemens China co ltd
Fujitsu      " ", " ", Fujitsu inc
Dell         Dell Computer, " ", " ", " ", Dell Computer inc., Dell inc.


This knowledge base is also helpful for identifying named entities. Youzheng Wu [4] reported two kinds of errors on organization names in his named entity identification system: first, with his pool strategy the abbreviated names appearing before the full name could not be recognized; second, if the full name never appeared, the abbreviated names could not be recognized at all. The essence of both errors is data sparseness, which the knowledge base can help to alleviate. Due to limited resources, we have not yet run an experiment to evaluate this idea.

4 Conclusion

With the explosive growth of web information, extracting useful information has become a challenge for information processing. Our company IE system aims to extract web company intelligence and ease the human burden, and the construction and maintenance of the company name knowledge base is an important part of it. Compared with manual approaches, the automatic method is faster and more effective. The knowledge base not only solves the problems in our own system but can also be applied to Chinese named entity identification, Chinese information extraction and key word expansion. The experimental results show that the method is effective and that the knowledge base can be used to solve realistic problems. In future work, we will first pay more attention to full company name identification, which will make the identification of company names more accurate; second, as more webpages are processed, we believe the data sparseness problem can be alleviated.

References

1. CNNIC. . July 2004.
2. Chris H.Q. Ding et al. Link Analysis: Hubs and Authorities on the World Wide Web. Technical Report 47847, Lawrence Berkeley National Laboratory, 2001.
3. Trystan Upstill. Query-Independent Evidence in Home Page Finding. ACM Transactions on Information Systems, July 2003.
4. Youzheng Wu et al. Chinese Named Entity Recognition Combining a Statistical Model with Human Knowledge. In: Proceedings of the ACL 2003 Workshop on Multilingual and Mixed-language Named Entity Recognition, 2003.

Healthy Waterways: Healthy Catchments – An Integrated Research/Management Program to Understand and Reduce Impacts of Sediments and Nutrients on Waterways in Queensland, Australia

Eva G. Abal1, Paul F. Greenfield2, Stuart E. Bunn3, and Diane M. Tarte1

1 Healthy Waterways Partnership, Level 4, Hitachi Building, Corner George and Adelaide Streets, Brisbane, Qld, Australia 4072
  [email protected], [email protected]
2 Office of the Senior Deputy Vice Chancellor, The University of Queensland, Brisbane, Qld, Australia 4072
  [email protected]
3 Centre for Riverine Landscapes, Griffith University, Nathan, Qld, Australia 4111
  [email protected]

Abstract. The Moreton Bay Waterways and Catchments Partnership, now branded the Healthy Waterways Partnership, has built on the experience of the past 15 years in South East Queensland (SEQ). It focuses on water quality and the ecosystem health of our freshwater, estuarine and marine systems through the implementation of actions by individual partners and the collective oversight of a regional work program that assists partners to prioritise their investments and address emerging issues. This regional program includes monitoring, reporting, marketing and communication, development of decision support tools, research that is directed to problem solving, and the maintenance of extensive consultative and engagement arrangements. The Partnership has produced information-based outcomes which have led to significant cost savings in the protection of water quality and ecosystem resources by its stakeholders. This has been achieved by:
• providing a clear focus for management actions that has the ownership of governments, industry and community;
• targeted scientific research to address issues requiring appropriate management actions;
• management actions based on a sound understanding of the waterways and rigorous public consultation; and,
• development and implementation of a strategy that incorporates commitments from all levels of stakeholders.
While focusing on our waterways, the Partnership's approach includes addressing catchment management issues, particularly those relating to the management of diffuse


    pollution sources in both urban and rural landscapes as well as point source loads. We are now working with other stakeholders to develop a framework for integrated water management that will link water quality and water quantity goals and priorities.

1 What Is the Healthy Waterways Partnership?

The Healthy Waterways Partnership (the Partnership) framework illustrates a unique integrated approach to water quality management whereby scientific research, community participation, and policy/strategy development proceed in parallel. This collaborative effort has resulted in a water quality management strategy which integrates the socio-economic and ecological values of the waterways, and has led to significant cost savings by providing a clear focus on initiatives towards achieving the Healthy Waterways: Healthy Catchments vision: "By 2020, our waterways and catchments will be healthy ecosystems supporting the livelihoods and lifestyles of people in South East Queensland, and will be managed through collaboration between community, government and industry."

    Fig. 1. Applicability of the adaptive management framework in the Healthy Waterways Partnership

    The Partnership represents a whole-of-government, whole-of-community approach to understanding, planning for, and managing the use of our waterways. The key elements of the Partnership include: the implementation by a range of partners of


management actions ranging from upgrades of sewage treatment plants to improved planning regimes and rehabilitation of riparian vegetation; a multi-disciplinary science and research program that underpins the management action program and monitors its effectiveness; and the Healthy Waterways promotional and educational program that seeks to build on similar activities of partners and ensure community awareness and support for action.

The Adaptive Management Framework (Figure 1), one of the operating philosophies of the Partnership, can be described as ongoing knowledge acquisition, monitoring and evaluation leading to continuous improvement in the identification and implementation of management actions. The approach recognises that action can seldom be postponed until we have "enough" information to fully understand the situation. It leads to improved understanding of the means for dealing with resource management issues, while providing the flexibility necessary for dealing with changing socio-economic or socio-ecological relationships.

1.1 Balanced Approach

The natural checks and balances provided by the tripartite model of management, research and monitoring (Figure 2) provide the foundation for the Partnership. The interactions between management, research and monitoring involve a two-way flow of information. Management, with input from the community, provides environmental values and resource management objectives and identifies key environmental issues and knowledge gaps. Researchers address the key issues, gather information to narrow the knowledge gaps, and provide the scientific linkages that support and create the various indicators used by the resource managers involved in monitoring. Monitoring provides feedback to researchers in the form of prioritised research based on patterns observed during assessment of the ecosystem. The interactions between monitoring and management are somewhat similar to the management-research interactions: in both cases, management, with input from the community, provides the environmental values and resource management objectives. The major difference is that monitoring provides management with feedback on the various management actions invoked. The achievement of the Partnership's outcomes relies on people from various sectors of society working together. It is proposed here that the central driving force for developing an effective program or study is not the absolute amount of scientific or management activity or expertise; rather, it is the balance between management, research and monitoring.

Fig. 2. The Partnership's philosophy of the tripartite model of management, research and monitoring, with feedback interactions

2 What Are the Issues?

South East Queensland (Figure 3) has one of the fastest growing populations in Australia, with just over 2 million people, increasing by 2.9% per annum. This population growth is expected to result in 75 km² of bushland, agricultural land and other rural land being converted annually to housing and other urban purposes. Initial scenario runs using the Partnership's decision support tools have enabled us to understand the potential impacts, in 20 years' time, of the predicted population growth. Thus, the human footprint on the catchment is already extensive and promises to be more extensive in the future.

The riverine and estuarine environments of the South East Queensland catchment have been significantly altered. Land use changes and vegetation clearing have resulted in increased flows, erosion and delivery of both nutrients and sediments from the catchments to the waterways. Only 26% of the catchment's original vegetation remains. Channel (gully and streambank) erosion is the dominant form of erosion in the SEQ catchment, and most of the sediment is generated from quite specific locations, with more than 60% of the sediment coming from less than 30% of the area. Given the episodic nature of rainfall in the catchment, protection of riparian areas, especially along the headwater (first and second-order) streams, needs to be in place to prepare the catchment and waterways for extreme flow events. During smaller events or dry conditions, urban areas may contribute a significant share of the sediment loads to our waterways.

As a case study in South East Queensland, Australia, an interdisciplinary study of waterways was initiated by the Partnership to address water quality issues which link sewage and diffuse loading with environmental degradation of Moreton Bay, an Australian estuary, and its waterways. Moreton Bay (153°E, 27°S) is a shallow, subtropical embayment with abundant seagrasses, mangroves, sea turtles and dugong, adjacent to a rapidly expanding population. Like many Australian estuaries, Moreton Bay is characterised by strong lateral gradients in water quality, with hyperautotrophic and oligotrophic waters within tens of kilometres. High sediment loads, especially during high flow events, together with resuspension of fine-grained sediments in the river estuaries and western Moreton Bay, lead to high turbidity, reduced light penetration and subsequent seagrass loss. Sewage-derived nutrient enrichment, particularly nitrogen (N), has been linked to algal blooms (Dennison and Abal, 1999).

Fig. 3. The South East Queensland region, showing the different catchments and Moreton Bay

3 Sustainable Loads Concept – Ensuring That Aquatic Ecosystems Are Protected

The challenge of protecting, maintaining and improving the ecosystem health of South East Queensland's waterways, in the face of increasing population growth in the region, can be met by a quantitative and defensible approach referred to as the "sustainable loads" concept. The ecosystem health condition of our waterways is an aggregate of the impacts of point-source emissions (industrial emissions and wastewater treatment plants) and diffuse-source emissions (urban stormwater, agricultural run-off and natural systems run-off) from the catchments, and of the assimilative capacity (freshwater and tidal flushing and internal processing) of our waterways. The approach of setting targets (or quantifiable objectives) as a 'goal post' for management is very explicit in the development of water quality improvement plans and regional natural resource management plans.

The concept of "sustainable loads" is defined as the amounts of pollutants (e.g. nutrients and sediments) that a waterway can assimilate without becoming degraded. For operational purposes, sustainable loads are the loads that a waterway can take while still achieving the water quality objectives that relate to the environmental values set for the waterway (taking into consideration flow into, and the assimilative capacity of, the waterway). Determination of relevant and achievable water quality objectives, reflecting the environmental values and uses which the community attributes to our waterways, is important in the "sustainable loads" concept. This involves an iterative process in which suites of environmental values, management goals and water quality objectives are evaluated in terms of their acceptability and feasibility (using decision support tools) and, if required, compared with alternative suites of feasible values, goals and objectives. The strength of the "sustainable loads" concept is the determination of explicit links between pollutant loads from the catchment (which reflect the activities and pressures in the catchment) and the ecosystem health of our waterways.
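As a purely illustrative toy example (not one of the Partnership's models), the operational definition above can be read as a simple check: the fraction of a load that is not assimilated, diluted by the flow, must not push the ambient concentration past the water quality objective. The mixing rule and all numbers below are invented for illustration only.

def meets_objective(load_kg_per_day, flow_ML_per_day, assimilation_fraction,
                    objective_mg_per_L):
    """Crude steady-state check (illustrative): 1 kg = 1e6 mg, 1 ML = 1e6 L."""
    remaining_mg = load_kg_per_day * 1e6 * (1.0 - assimilation_fraction)
    concentration_mg_per_L = remaining_mg / (flow_ML_per_day * 1e6)
    return concentration_mg_per_L <= objective_mg_per_L

# 400 kg/day of nitrogen, 1000 ML/day of flow, 30% assimilated, objective 0.35 mg/L:
# the resulting 0.28 mg/L is within the objective, so the load would count as sustainable.
print(meets_objective(400, 1000, 0.3, 0.35))  # True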

4 Developing Decision Support Tools

With today's increasing emphasis on integrated water resources management, there is a strong need to give resource managers an enhanced capacity to make decisions on appropriate management actions. The adaptive management approach is based on the recognition that we often need to act on the basis of an imperfect understanding of the


systems within which management action occurs. However, unless there is active research to expand the knowledge base for management, as well as appropriate decision support tools for stakeholders, the outcomes of the adaptive management process will improve only slowly, if at all. Thus, the Partnership is firmly committed to continually improving the knowledge base and developing decision support tools that assist stakeholders in the achievement of natural resource management outcomes. Consequently, there has been a realisation that there is a definitive need for modelling tools (Figure 4), supported by locally-specific scientific understanding, to enable the 'conversion' of catchment-derived (point and diffuse) loads into resultant (ambient) levels of water quality, which can then be compared with water quality objectives. This provides an important direct or causative link between catchment loads and associated waterway ecosystem health, underpinning the sustainable loads concept.

    Fig. 4. The 3-linked submodels comprising the South East Queensland EMSS and their links to the Receiving Water Quality Models (RWQM) for tidal waterways and Moreton Bay (Vertessy & McAlister in Abal, et al., 2005)

This causative link has been pioneered in the region by the Partnership and its stakeholders and has led to the development of a range of stand-alone catchment and receiving water quality models, which have then been linked to determine sustainable loads for the different waterways. The catchment models are largely based on the Environmental Management Support System (EMSS), a software tool specifically developed to meet the requirements of the Partnership. For receiving water quality modelling, a range of models has been developed and used, depending on the needs and the specific nature of the waterway being studied. The main computational 'tool' used in this regard is the Receiving Water Quality Model (RWQM), a 2-D (plan view) model which covers Moreton Bay and all major estuaries in the region. Other tools, ranging from 0-D through to full 3-D parameterisations, have also been applied (McAlister, 2005).


Decision support tools such as the Environmental Management Support System (EMSS) and the Receiving Water Quality Models are useful not only in evaluating the relative efficacy of various management actions aimed at improving water quality, but can also assist stakeholders in determining sustainable loads and setting environmental targets for waterways. Development of a user-friendly interface to decision support tools also ensures that research outcomes are extended to end-users and stakeholders in the most effective and relevant form. The Partnership framework illustrates a unique integrated approach to water quality management whereby improved understanding and the availability of appropriate decision support tools for management of land and water resources result in effective prioritisation of initiatives targeted towards achieving the Healthy Catchments: Waterways vision.

5 Ecosystem Health Monitoring Program and Annual Regional 'Report Card' – Tracking Ecosystem Responses to Management Actions

One of the hallmarks of the Partnership has been the development of a comprehensive and defensible Ecosystem Health Monitoring Program (EHMP) to provide an objective assessment of the health of waterways throughout South East Queensland. The information collected in the EHMP is used to advise councils and land managers on areas of declining health, to report on the effects of different land uses, and to evaluate the effectiveness of management actions aimed at improving and protecting aquatic ecosystems. The estuarine and marine EHMP began in 2000 and includes monthly monitoring at 260 sites in coastal waterways from Noosa to the New South Wales border. The freshwater EHMP commenced in 2002 and now involves twice-yearly sampling at over 120 sites on all of the major streams in the region. Both programs use a broad range of biological, chemical and physical indicators, chosen because they provide essential information about the status of valuable waterway assets. Monitoring alone, of course, is useful only for documenting declines.

A key component of the EHMP is the effective communication of monitoring activities and scientific results. One of the major outputs from the Program is the Annual Report Card, which provides a timely reminder to local and State Governments and the broader community as to how well we are tracking in terms of protecting the health of our waterways. This Report Card is underpinned by a comprehensive Annual Technical Report, and up-to-date, interpreted water quality and biological information is readily available for use by council and industry partners, managers, scientists and the broader community (see www.ehmp.org). The Partnership's Healthy Waterways Campaign provides an essential portal for communicating the understanding of environmental issues to stakeholders; Healthy Waterways currently enjoys around 50% "brand" recognition in the South East Queensland regional community.


6 Challenges and Future Directions for Integrated Water Resource Management in South East Queensland

South East Queensland (SEQ) is the most sought-after place to live in Australia. According to the South East Queensland Regional Plan (Mackenroth, 2005), in the next twenty years this region will grow by another one million people. While this massive population growth will drive significant changes in SEQ, for water the issues will focus on the adequacy of supply and the significant demands urban growth will place on water allocation, water quality and waterways quality. Water supply, water quality, wastewater treatment, urban use and reuse, and healthy waterways are inextricably linked, and the strategic planning for these needs to be tightly linked as well. The sustainability of our rural sector and industrial growth, as well as good ecosystem health in our waterways, are all essential for urban growth. Hence planning for water needs to be done in a whole-of-systems context, embracing the SEQ region overall and embodying natural resource management in the broadest sense, not just water resources in isolation. In addition, experience shows that such planning and management must be adaptive as our understanding of the region's natural systems, the impacts of global changes and our own interactions through planning and management interventions grows.

Inherent in the Healthy Waterways Partnership model is demonstrable proof that we can take a regional approach and have singular success. Guided by sound scientific and planning advice, we have been able to improve the standard of wastewater discharged to our waterways. About 80% of the nitrogen and 60% of the phosphorus are now removed from the wastewater, with consequent improvements in water quality for Moreton Bay and other waterways. From a healthy waterways perspective, the next major challenge is to effectively address the pollutant loads coming from non-point (diffuse) sources if environmental targets are to be met. The Partnership Secretariat estimates that within three years, diffuse pollution sources will contribute three quarters of the total nitrogen load, one third of the phosphorus load and up to 90% of the sediments coming from our catchments into our waterways. Clearly this remains a priority, but it is equally clear that this issue now needs to be addressed in concert with a wider set of issues relating to sustainable water futures for urban and rural growth.

There is already a strong focus within SEQ on delivery through a "whole of water cycle" philosophy that gives a strong weighting to water quality planning and management. Key regional planning initiatives have been established in anticipation of the rapid population growth expected within SEQ over the next 20-odd years. What is not clear is how each of these initiatives will knit together to ensure a strong framework for the management of water as a potentially limiting resource, ensuring security of supply for all sectors and the environmental outcomes we are also seeking for our catchments and our waterways. The opportunity that faces us is the application of the Partnership model to a wider set of regional needs in terms of water resources, not just for healthy waterways but for water quality generally, and possibly for greater yields as well. In the past we have used the Partnership to deliver on key aspects of our goal of sustainable, healthy waterways. The opportunity and the


    challenge that faces us now is how to best evolve the Partnership model, with its attributes of strong regional integration, critical technical skills, track record and a strong “brand” presence to help deliver on our future needs for water and waterways health.

    Acknowledgements The authors would like to acknowledge the stakeholders of the Moreton Bay Waterways and Catchments Partnership, including the 19 local governments, 6 state agencies and 30 major industry and environmental groups in the SEQ region.

References

Abal, E.G., Bunn, S.E. and Dennison, W.C. 2005. Healthy Waterways-Healthy Catchments. Moreton Bay Waterways and Catchments Partnership. 222 pp.
Dennison, W.C. and Abal, E.G. 1999. Moreton Bay Study: A Scientific Basis for the Healthy Waterways Campaign. South East Queensland Regional Water Quality Management Strategy. 245 pp.
Mackenroth, T. 2005. South East Queensland Regional Plan 2005-2026. Qld Office of Urban Management. 137 pp.
McAlister, T. 2005. Theory and Practice - Decision Support Systems and Case Studies Showcasing the 'Sustainable Loads' Concept. Special session at the 8th International Riversymposium, Brisbane, Australia, September 2005.

Groundwater Monitoring in China

Qingcheng He and Cai Li

China Institute for Geo-Environment Monitoring, No. 20, Dahuisi Road, Beijing 100081, P.R. China
[email protected], [email protected]

Abstract. Groundwater accounts for one third of the water resources in China and is indispensable for water supply and ecological support in many areas, especially in North China. However, unreasonable groundwater development has caused serious geo-environment problems such as land subsidence, surface collapse and seawater intrusion. Moreover, groundwater has been polluted by industrial, domestic and agricultural activities. Groundwater monitoring in China started in the early 1950s, and a fundamental network of 23,800 monitoring wells at the national, provincial and local levels, combined with the corresponding groundwater monitoring and research institutes, has been established. It is distributed over 31 provinces or regions and controls nearly 1 million km². The national monitoring institute and its provincial counterparts have successively set up groundwater databases, and a framework for data collection, transmission, analysis and information release has been established. Automatic monitoring of groundwater with real-time data transmission is being trialled in 3 pilot areas, and the system of hierarchical management and information release of monitoring data is being refined.

1 Introduction

1.1 Groundwater Resources in China

Generally speaking, China is characterized by highly complex regional hydrogeological conditions and can be roughly divided into six hydrogeological zones (Fig. 1):

I. The Great East Plain: including the Songliao Plain and the Huang-Huai-Hai Plain, with enormously thick unconsolidated sediments forming multiple aquifers chiefly recharged by rainfall.
II. The Inner Mongolian Plateau and Loess Plateau: an intermediate zone between the semi-humid region in the east and the desert region in the west.
III. The Western Inland Basins: consisting of the Hexi Corridor, Zhungeer Basin, Talimu Basin and Chaidamu Basin; typical arid desert land, usually with plenty of groundwater in broad piedmont plains.
IV. The Southeast and Central-South Hilly Land: characterized by widely exposed rocks of different types and dominated by fissure water.
V. The Southwest Karst Mountainous Area: carbonate rocks are distributed extensively, and karst water and subterranean rivers are well developed.
VI. The Qing-Zang Plateau: with an average elevation of around 4000 m, aquifers mainly of permafrost or glacial genesis, and groundwater vertically zoned.



Fig. 1. Sketch of the six hydrogeological zones in China. I. The Great East Plain: I1 Songliao Plain, I2 Huang-Huai-Hai Plain. II. Inner Mongolian Plateau (II1) and Loess Plateau (II2). III. The Western Inland Basins: III1 Hexi Corridor, III2 Zhungeer Basin, III3 Talimu Basin, III4 Chaidamu Basin. IV. The Southeast and Central-South Hilly Land. V. The Southwest Karst Mountainous Area. VI. Qing-Zang Plateau: VI1 Permafrost Plateau, VI2 Mountainous Territory. (Modified from Mengxiong Chen and Zuhuang Cai, 2000)

Table 1. Classification of groundwater resources in China

Area                              Category of groundwater   Amount (billion m³)   % of total groundwater resources
Flatland                          Pore water                250.354               29
Hilly land and mountainous area   Fissure water             417.363               43
                                  Karst water               203.967               28
Total                                                       871.684               100

Table 2. Distribution of the three types of water in China

Area          Pore water                 Fissure water               Karst water
              Amount (billion m³)   %    Amount (billion m³)   %     Amount (billion m³)   %
North China   177.317               70   113.955               35.6  19.264                27
South China   73.037                30   303.408               64.4  184.703               73
Total         250.354                    417.363                     203.967


The total annual precipitation in China is about 6×10¹² m³. Groundwater (natural recharge) in China is about 871.6 billion m³ per year, roughly one third of the country's water resources. Natural groundwater resources in flatland (chiefly pore water) are around 250.3 billion m³ per year, and groundwater in hilly land (chiefly fissure water and karst water) is about 621.3 billion m³ per year (Table 1). If China is divided into North China and South China, Table 2 lists how the three types of groundwater are distributed.

1.2 Development and Utilization of Groundwater

The exploitable groundwater is approximately 352.7 billion m³ per year, 153.6 billion m³ in the north and 199.1 billion m³ in the south. Groundwater is indispensable for socio-economic development as an important source of water supply and ecological support, especially in the north. According to statistics, the average annual extraction of groundwater throughout China is 75.8 billion m³, about 11.6% of the total groundwater resources. 76% of the exploited groundwater is in flatland and the rest is in hilly land or mountainous areas. In terms of distribution, about 86% of the extraction occurs in North China and 14% in South China. The North China Plain is the most intensive area of groundwater extraction: 72% of the total extraction takes place there, chiefly for agricultural irrigation. Table 3 shows groundwater extraction in China.

Table 3. Groundwater extraction in China

Area          Pore water                  Fissure water               Karst water                 Total
              Amount (billion m³)   %     Amount (billion m³)   %     Amount (billion m³)   %     Amount (billion m³)   %
North China   54.035                71.24  0.551                0.73   9.463                12.48  64.049               84.45
South China   2.636                 3.48   1.227                1.62   7.942                10.45  11.805               15.55
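As a quick consistency check of the figures in Tables 1 and 3 (this check is ours, not part of the original paper), the totals can be recomputed directly:

pore, fissure, karst = 250.354, 417.363, 203.967          # Table 1, billion m³/year
assert abs((pore + fissure + karst) - 871.684) < 1e-3      # matches the stated total

north = 54.035 + 0.551 + 9.463                             # Table 3, North China extraction
south = 2.636 + 1.227 + 7.942                              # Table 3, South China extraction
print(round(north + south, 3))                             # 75.854, the ~75.8 billion m³ cited in the text
print(round(100 * north / (north + south), 1))             # 84.4, consistent with the 84.45% in Table 3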

Fig. 2. Average groundwater exploitation in principal cities of China during the 1970s, 1980s and 1999, respectively


With growing water demand from population expansion and the rapid development of industry and agriculture, groundwater has become increasingly significant for water supply. By a conservative estimate, 61 of 181 medium and large cities rely on groundwater for water supply, and another 40 rely on a joint supply of groundwater and surface water. 20% of municipal water supply in China comes from groundwater; in the north the figure exceeds 70%. Moreover, 40% of the farmland in China is irrigated by groundwater, and groundwater is the primary source of drinking water in rural areas. Groundwater exploitation in China has risen markedly over the last 30 years: average exploitation in the 1970s was 57.2 billion m³ per year, in the 1980s it amounted to 74.8 billion m³ per year, and in 1999 it reached 111.6 billion m³ per year. The share of groundwater in urban water supply has increased to 20%, from 14% in the early 1980s.

1.3 Groundwater Issues

In the absence of unified planning and a strict management system, arbitrary exploitation of groundwater in some regions has led to consistent declines of the groundwater level in areas with serious over-extraction. Over-exploitation of groundwater has occurred in the North China Plain for many years, and also in the Guanzhong Plain, the Northeast Plain, some watersheds in the northwest inland, the Yangtze River Delta and the southeast coastal area. By a rough estimate, more than 100 large cones of depression have been detected, covering an area of 150,000 km². The over-exploited area has reached 620,000 km², involving more than 60 cities.

Continuous groundwater over-exploitation has caused geo-environment problems and even geo-hazards. In the North China Plain and the Yangtze River Delta, unreasonable groundwater development has caused regional land subsidence. In karst areas, 1,400 karst collapses have occurred, mainly induced by groundwater over-exploitation. In coastal areas it brings the problem of seawater intrusion. In the arid and semi-arid northwest inland of China, decreasing recharge to groundwater has caused deterioration of the ecological environment.

On the other hand, groundwater in 195 cities has been polluted to various degrees by industrial and agricultural activities; 16 principal cities in the north and 3 in the south are in a worse situation. In some places the polluted groundwater has affected the safety of drinking water. In Zibo City of Shandong Province, a large well field providing 510 thousand m³ per day is going to be abandoned because of heavy oil pollution. Even in the capital city of Beijing, organic pollutants with great potential hazards, such as hexachlorocyclohexane and DDT, have been detected in shallow groundwater. Furthermore, groundwater with a high content of arsenic and fluorine and a low content of iodine has caused endemic diseases in some regions; for instance, 900,000 people in Inner Mongolia suffer from arsenic poisoning.

2 Groundwater Monitoring in China

2.1 Current Status

At present, groundwater monitoring is chiefly organized by the Ministry of Land and Resources; the Ministry of Water Resources and the Ministry of Construction also conduct partial groundwater monitoring for their own specific purposes.


    Fig. 3. Map of the current groundwater monitoring network in China

The Ministry of Land and Resources first carried out groundwater monitoring in the 1950s. After over 50 years of effort, it has built a groundwater monitoring network at the national, provincial and local levels, together with institutes of groundwater monitoring and research: there is one national and 31 provincial groundwater monitoring institutes. The current monitoring wells, distributed over 31 provinces or regions, number more than 23,800 and control an area of nearly 1 million km². The network covers all the provincial capitals and the important plains and basins with great agricultural interests. 217 groundwater-consuming cities in North China and the medium-large well fields are within the scope of the primary monitoring network (Fig. 3). Monitoring wells are primarily the boreholes constructed during groundwater resources investigations, supplemented by all kinds of production wells, spare wells and disused production wells. Almost every aquifer is under monitoring, as are some important springs and subterranean rivers. The monitored contents include groundwater level, temperature, quality, and the discharge of springs and subterranean rivers. Groundwater monitoring initially served groundwater evaluation, development and management; its objectives have now expanded to preventing groundwater over-exploitation and pollution, providing information support to environment protection agencies, and offering early warnings of geo-hazards.

2.2 Contents of Groundwater Monitoring

Water level, temperature and quality of groundwater, and the flux of springs and subterranean rivers, constitute the contents of monitoring. The measuring work is done by the local monitoring agencies.

2.2.1 Water Level
The static water level and buried depth of groundwater are measured. In addition, in the central parts of depression cones, important well fields and exploited sections that are easily


depleted, the stable dynamic water level is also required. Wells in the regional network are measured every 10 days, and wells in cities are measured every 5 days. The level of surface water that has a hydrological connection with groundwater is measured simultaneously.

2.2.2 Water Quality
Wells for quality monitoring are located in recharge areas, discharge areas, significant aquifers, and areas with geo-environment problems such as groundwater pollution, seawater intrusion and endemic disease. 50% of the level-monitoring wells in the regional network and 80% of those in cities are also used for long-term monitoring of water quality. Groundwater is sampled once in the dry season and once in the wet season for shallow pumped aquifers and areas with large fluctuations of water quality, and once in the peak extraction season for deep pumped aquifers and areas with little fluctuation of water quality. There are 20 required items of quality analysis, such as pH, ammonia nitrogen, NO3-, NO2-, volatile phenols, CN-, As, Hg, Cr6+, total hardness, Pb, F, Cd, Fe, Mn, TDS, KMnO4 index, sulfate, chloride and coliform group.

2.2.3 Temperature
Temperature monitoring is conducted in typical areas of a hydrogeological unit, areas with a strong hydrological connection between groundwater and surface water, artificial recharge areas, and areas with thermal pollution or thermal anomalies. For long-term monitoring wells the temperature is measured every 10 days, and for wells with auto-monitoring equipment it is measured twice a day; in other areas it is measured once a year, accompanying the level measurement in the dry season.

2.2.4 Flux of Springs and Subterranean Rivers
Flux measurement of springs and subterranean rivers is done once a month for new monitoring points, twice a season for stable springs and subterranean rivers (stable coefficient > 0.5), twice a month for moderately stable ones (stable coefficient 0.1~0.5), three times a month for unstable ones (stable coefficient