PRICAI 2000 Topics in Artificial Intelligence: 6th Pacific Rim International Conference on Artificial Intelligence Melbourne, Australia, August 28 - ... (Lecture Notes in Computer Science, 1886) 3540679251, 9783540679257



English Pages 832 [858] Year 2000


Lecture Notes in Artificial Intelligence Subseries of Lecture Notes in Computer Science Edited by J. G. Carbonell and J. Siekmann

Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis and J. van Leeuwen

1886

Springer Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo

Riichiro Mizoguchi

John Slaney (Eds.)

PRICAI 2000 Topics in Artificial Intelligence 6th Pacific Rim International Conference on Artificial Intelligence Melbourne, Australia, August 28 - September 1, 2000 Proceedings

Springer

Series Editors Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA Jörg Siekmann, University of Saarland, Saarbrücken, Germany

Volume Editors
Riichiro Mizoguchi
Osaka University, Institute of Scientific and Industrial Research
8-1 Mihogaoka, Ibaraki, Osaka, 567-0047, Japan
E-mail: [email protected]
John Slaney
Australian National University, Computer Sciences Laboratory
Research School of Information Sciences and Engineering
Canberra, ACT 0200, Australia
E-mail: [email protected]
Cataloging-in-Publication Data applied for
Die Deutsche Bibliothek - CIP-Einheitsaufnahme
Topics in artificial intelligence : proceedings / PRICAI 2000, 6th Pacific Rim International Conference on Artificial Intelligence, Melbourne, Australia, August 28 - September 1, 2000. Riichiro Mizoguchi; John Slaney (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Singapore ; Tokyo : Springer, 2000
(Lecture notes in computer science ; Vol. 1886 : Lecture notes in artificial intelligence)
ISBN 3-540-67925-1

CR Subject Classification (1998): I.2
ISSN 0302-9743
ISBN 3-540-67925-1 Springer-Verlag Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.
Springer-Verlag Berlin Heidelberg New York, a member of BertelsmannSpringer Science+Business Media GmbH
© Springer-Verlag Berlin Heidelberg 2000
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Steingraber Satztechnik GmbH, Heidelberg
Printed on acid-free paper
SPIN 10722476 06/3142 5 4 3 2 1 0

Preface

PRICAI 2000, held in Melbourne, Australia, is the sixth Pacific Rim International Conference on Artificial Intelligence and is the successor to the five earlier PRICAIs held in Nagoya (Japan), Seoul (Korea), Beijing (China), Cairns (Australia) and Singapore in the years 1990, 1992, 1994, 1996 and 1998 respectively. PRICAI is the leading conference in the Pacific Rim region for the presentation of research in Artificial Intelligence, including its applications to problems of social and economic importance. The objectives of PRICAI are: to provide a forum for the introduction and discussion of new research results, concepts and technologies; to provide practising engineers with exposure to and an evaluation of evolving research, tools and practices; to provide the research community with exposure to the problems of practical applications of AI; and to encourage the exchange of AI technologies and experience within the Pacific Rim countries.

PRICAI 2000 is a memorial event in the sense that it is the last one in the 20th century. It reflects what researchers in this region believe to be promising for their future AI research activities. In fact, some salient features can be seen in the papers accepted. We have 12 papers on agents, while PRICAI 96 and 98 had no more than two or three. This suggests to us one of the directions in which AI research is going in the next century. It is true that agent research provides us with a wide range of research subjects, from basic ones to applications. In contrast, the number of papers on knowledge discovery and data mining is not increasing, though this may be because a new conference, PAKDD, was established in 1997. We have a good number of papers on basic AI and see steady activity on that topic in our region. Compared to the theoretical papers, we have had a smaller number of application- or system-oriented papers in the past two conferences. However, we can find a new movement in AI applications to Web technology, on which we have 4 papers this year. This also suggests a promising direction to pursue.

The technical program comprised two days of workshops and tutorials, followed by paper and poster sessions and invited plenary lectures. We had five invited speakers: Nick Jennings, Shun-ichi Amari, Randy Goebel, Qiang Yang and Jae Kyu Lee. Their topics included agents, brain science, knowledge representation, search on the Internet and AI in electronic commerce. These talks nicely reflect the themes of the contributed technical papers and provoked interesting discussion. There were 207 submissions from 25 countries. The program committee worked hard to ensure that the conference would be of high quality and that every paper was seen by at least two and in most cases three expert reviewers.


The overall standard of submitted papers was high, and owing to space and time constraints several interesting papers could not be accepted for full presentation and publication. As a result, in addition to the 72 full papers in this volume, the conference made a feature of 44 poster presentations, abstracts of which can also be found in these proceedings. We have many people to thank, starting with the members of the program committee and the many reviewers who worked hard to get more than 200 papers carefully reviewed under quite severe time constraints. The conference chair Geoff Webb, the organising chair Chengqi Zhang and the rest of their committees deserve our grateful thanks and are duly acknowledged below. Finally, we wish to thank administrative assistant Diane Kossatz and program committee member Sylvie Thiebaux without whose help in the practical matter of dealing with the submitted papers, and in the latter case in the process of allocating them to reviewers, the technical program of PRICAI 2000 would not have existed.

August 2000

Riichiro Mizoguchi and John Slaney Program Co-chairs PRICAI 2000

Organization

PRICAI 2000 was organized by the Department of Computer Science, Deakin University, and held at the Melbourne Convention Centre from 28 August to 1 September, 2000. It was co-located with RoboCup 2000 and with two other conferences: the Symposium on the Application of Artificial Intelligence in Industry and the Australian Conference on Robotics and Automation (ACRA 2000).

Conference Committee
Conference Chair: Geoff Webb (Deakin University, Australia)
Program Chairs: Riichiro Mizoguchi (Osaka University, Japan), John Slaney (Australian National University)
Organizing Chair: Chengqi Zhang (Deakin University)
Treasurer: Douglas Newlands (Deakin University)
Publicity Chair: Achim Hoffmann (University of New South Wales, Australia)
Workshop Chair: Huan Liu (National University of Singapore)
Tutorial Chair: Eric Tsui (CSC)

Program Committee Edward Altman Sung-Bae Cho John Debenham Norman Foo Scott Goodwin Jieh Hsiang Mitsuru Ishizuka David Israel Shyam Kapur Shigenobu Kobayashi Alfred Kobsa Jae Kyu Lee Dayou Liu H Lee-Kwang Hing Yan Lee Chee-Kit Looi Yuji Matsumoto

Satoru Miyano Riichiro Mizoguchi Hideyuki Nakashima Fred Popowich Ramakoti Sadananda M Sasikumar Zhongzhi Shi John Slaney Keith Stenning Leon Sterling Sylvie Thiebaux Benjamin Watson Albert Wu Takahira Yamaguchi Wai-kiang Yeap Alex Zelinsky Ingrid Zukerman


Referees
Jun Arima Samer Abdallah Akinori Abe Tamas Abraham Sushil Acharya Mark Ackerman Jing-jun Ai David Albrecht Ayman Ammoura Elisabeth Andre K.S.R. Anjaneyulu Grigoris Antoniou Hiroki Arimura Laura Arns Minoru Asada Hideki Asoh Noboru Babaguchi Hideo Bannai Caroline Barriere Anup Basu Jonathan Baxter Bettina Berendt Jean Berger Walter F. Bischof Yngvi Bjornsson Natashia Boland Maria Paola Bonacina Paul Brna Bhavesh M. Busa Terry Caelli Sandra Carberry Alison Cawsey Javaan Chahl François Charpillet Shiva Chaudhuri Gordon Cheng Lee-Feng Chien Paul Chung Carolina Cruz-Neira Jirapun Daengdej Kerstin Dautenhahn Arnaud Delhay Yasuharu Den Gamini Dissanayake

Alan Dorin Fadi Dornaika Norberto Eiji Nawa Dan Fass Matthias Fuchs Xiaoying Gao Chris Gaskett Hector Geffner Ian Gent David Gerhard S.B. Goschnick Simon Goss R. Greiner Tom Gross Yan Guo Corin Gurr Eli Hagen Masateru Harao Michael Harries Clint Heinze Steve Helmreich Achim Hoffmann Eric Horvitz Guan Shieng Huang Mitsuru Ikeda Katsumi Inoue Kentaro Inui Alka Irani Koji Iwanuma Noriaki Izumi Anthony Jameson Ray Jarvis M.E. Jefferies Su Jian Wenpin Jiao Xu Jinhui Julia Johnson Arne Jonsson Jean-Pierre Jouannaud Soonchul Jung Hitoshi Kanoh Kamran Karimi S. Karthik Akihiro Kashihara

Susumu Katayama Hirofumi Katsuno Concepcion L. Khan Philip Kilby Jonathan Kilgour Young-il Kim Hajime Kimura Hajime Kita Yasuhiko Kitamura Les Kitchen Lindsay Kleeman Kevin Knight Alistair Knott Jürgen Könemann Sven Koenig Kevin Korb Benjamin Korvemaker Miyuki Koshimura Sarit Kraus Frederick Kroon C. Indira Kumari Wa Labuschagne Chris Leckie Seungsoo Lee Wee Kheng Leow Neal Lesh James C. Lester Yuefeng Li Churn-Jung Liau Tan Chew Lim Huan Liu James Liu Yaxin Liu John Lloyd Wong Lung Hsiang Xudong Luo Guangwei Ma Cara MacNish Raj Madhavan Victor Marek Tomoko Matsui Satoshi Matsumoto Hiroshi Matsuno Yutaka Matsuo

Brendan McCane Jon McCormack Eric McCreath Paul McFetridge Jean McKendree Vibhu Mittal Takashi Miyata Kazuteru Miyazaki Philippe Mulhem Shigeru Muraki Sivakumar Nagarajan Masaaki Nagata Akira Namatame Kanlaya Naruedomkul Monty Newborn Katsumi Nitta Emma Norling Nasser Noroozi Kevin Novins Jon Oberlander Tsukasa Ogasawara Miho Ohsaki Yukio Ohsawa Takashi Okada Manabu Okumura Patrick Olivier Isao Ono Mehmet Orgun Ji-hong OuYang Dan-tong Ouyang Mandar Padhye Maurice Pagnucco Wanlin Pang Seihwan Park Simon Parsons S.C. Patodi Adrian Pearce P. Ravi Prakash Wanda Pratt Helmut Prendinger Wolfgang Prinz Teodor C. Przymusinski

F. van Raamsdonk Kanagasabai Rajaraman Arthur Ramer Durgesh Rao M.R.K. Krishna Rao Bhavani Raskutti Magnus Rattray Terry Regier Fiorella de Rosis Sebastien Rougeaux Holly Rushmeier Michael Rusinowitch R.A. Russell Chiaki Sakama Claude Sammut K. Samudravijaya Taisuke Sato Yoichi Sato Ken Satoh Jonathan Schaeffer Richard Segal Yeon-Gyu Sec Shinichi Shimozono Ayumi Shinohara Ye Shiwei Alexander Sigel Zoltan Somogyi Liz Sonenberg Von-Wun Soo Takahiro Sugiyama Ji-Gui Sun Jiping Sun Erkki Sutinen Einoshin Suzuki Hirokazu Taki Hai-Ying Tang Tan Boon Tee Michael Thielscher Takenobu Tokunaga Simon Thompson John Thornton Qijia Tian Gordon Tisher Janine Toole Eric Tsang Shusaku Tsumoto Hsieh-Chang Tu Cristina Urdiales Olivier de Vel Oleg Veryovka Toby Walsh Fei Wang Song-Xin Wang Xizhao Wang Takashi Washio Zhang Wei Kay C. Wiese David E. Wilkins Richard Willgoss Sartra Wongthanavasu Vilas Wuwongse Gordon Wyeth Toru Yamaguchi Masayuki Yamamura Susumu Yamasaki David Yarowsky Tralvex Yeap Naoki Yonezaki Soe-Tsyr Yuan Osmar R. Zaiane Byoung-Tak Zhang Chengqi Zhang Dongmo Zhang Hong Zhang Jianping Zhang Wei Zhang Zili Zhang Qiangfu Zhao Jackson Y.S. Zhu Kenny Zhu Yanqiu Zhu Cao Zining


Sponsors
Deakin University, Geelong, Australia
Mindbox Inc., Greenbrae, California, USA
Cooperative Research Centre for Distributed Systems Technology, Brisbane-Sydney-Melbourne, Australia
University of Melbourne, Australia
Griffith University, Brisbane, Australia

Table of Contents

Invited Talks Automated Haggling: Building Artificial Negotiators N. Jennings

1

Information Geometry of Neural Networks S.-i. Amari

2

Knowledge Representation, Belief Revision, and the Challenge of Optimality R. Goebel

3

Artificial Intelligence Applications in Electronic Commerce J.K. Lee

4

Towards a Next-Generation Search Engine Q. Yang, H.-F. Wang, J.-R. Wen, G. Zhang, Y. Lu, K.-F. Lee, H.-J. Zhang

5

Logic and Foundations of AI The Gap between Symbol and Non-symbol Processing - An Attempt to Represent a Database by Predicate Formula S. Ohsuga

16

Argumentation Semantics for Defeasible Logics G. Governatori, M.J. Maher, G. Antoniou, D. Billington

27

A Unifying Semantics for Causal Ramifications M. Prokopenko, M. Pagnucco, P. Peppas, A. Nayak

38

Inconsistency and Preservation P. Wong

50

Induction and Logic Programming Inductive Inference of Chess Player Strategy A.R. Jansen, D.L. Dowe, G.E. Farr Compiling Logical Features into Specialized State-Evaluators by Partial Evaluation, Boolean Tables and Incremental Calculation T. Kaneko, K. Yamaguchi, S. Kawai

61

72

Using Domain Knowledge in ILP to Discover Protein Functional Models .. 83 T. Ishikawa, M. Numao, T. Terano


The Hyper System: Knowledge Reformation for Efficient First-Order Hypothetical Reasoning H. Prendinger, M. Ishizuka, T. Yamamoto Determination of General Concept in Learning Default Rules K. Ohara, H. Taka, N. Babaguchi, T. Kitahashi

93 104

Reinforcement Learning A Theory of Profit Sharing in Dynamic Environment S. Kato, H. Matsuo

115

Experience-Based Reinforcement Learning to Acquire Effective Behavior in a Multi-agent Domain S. Arai, K. Sycara, T.R. Payne

125

A Region Selecting Method Which Performs Observation and Action in the Multi-resolution Environment T. Matsui, H. Matsuo, A. Iwata

136

Machine Learning RWS (Random Walk Splitting): A Random Walk Based Discretization of Continuous Attributes M. Hanaoka, M. Kobayashi, H. Yamazaki The Lumberjack Algorithm for Learning Linked Decision Forests W.T.B. Uther, M.M. Veloso

146

156

Efficient Iris Recognition System by Optimization of Feature Vectors and Classifier S. Lim, K. Lee, 0. Byeon, T. Kim

167

A Classifier Fitness Measure Based on Bayesian Likelihoods: An Approach to the Problem of Learning from Positives Only A. Skabar, A. Maeder, B. Pham

177

Evaluating Noise Correction C.M. Teng An Efficient Learning Algorithm Using Natural Gradient and Second Order Information of Error Surface H. Park, K. Fukumizu, S.-i. Amari, Y. Lee

188

199

Knowledge Discovery Fast and Robust General Purpose Clustering Algorithms V. Estivill-Castro, J. Yang

208


An Algorithm for Checking Dependencies of Attributes in a Table with Non-deterministic Information: A Rough Sets Based Approach H. Sakai, A. Okuma

219

Tropical Cyclone Intensity Forecasting Model: Balancing Complexity and Goodness of Fit G. W. Rumantir

230

Bayesian Networks Trading Off Granularity against Complexity in Predictive Models for Complex Domains I. Zukerman, D.W. Albrecht, A.E. Nicholson, K. Doktor

241

Recognizing Intentions from Rejoinders in a Bayesian Interactive Argumentation System I. Zukerman, N. Jitnah, R. McConachy, S. George

252

Efficient Inference in Dynamic Belief Networks with Variable Temporal Resolution T.A. Wilkin, A.E. Nicholson

264

Beliefs and Intentions in Agents Epistemic States Guiding the Rational Dynamics of Information J. Heidema, I.C. Burger

275

Merging Epistemic States T. Meyer

286

Perceiving Environments for Intelligent Agents Y. Li, C. Zhang

297

A Preference-Based Theory of Intention T. Sugimoto

308

Autonomous Agents Autonomy of Autonomous Agents D. Zhang, N. Foo

318

Constructing an Autonomous Agent with an Interdependent Heuristics . . . 329 K. Moriyama, M. Numao Unified Criterion of State Generalization for Reactive Autonomous Agents 340 T. Yairi, K. Hori, S. Nakasuka From Brain Theory to Autonomous Robotic Agents A. Weitzenfeld

351


Agent Systems A Multi-agent Approach for Optical Inspection Technology T. Buchheim, G. Hetzel, G. Kindermann, P. Levi

362

The Use of Mobile Agents in Tracing an Intruder in a Local Area Network 373 M. Asaka, T. Onabuta, T. Inoue, S. Goto A Framework to Model Multiple Environments in Multiagent Systems . . . . 383 J.-C. Soulie, P. Marcenac Task Models, Intentions, and Agent Conversation Policies R. Elio, A. Haddadi, A. Singh

394

Genetic Algorithms Genetic Algorithm with Knowledge-Based Encoding for Interactive Fashion Design H.-S. Kim, S.-B. Cho

404

Designing Wastewater Collection Systems Using Genetic Algorithms L.Y. Liang, R.G. Thompson, D.M. Young

415

Hybrid Genetic Algorithms Are Better for Spatial Clustering V. Estivill-Castro

424

Genetic Programming Improving Performance of GP by Adaptive Terminal Selection S. Ok, K. Miyashita, S. Nishihara Evolving Neural Networks for Decomposable Problems Using Genetic Programming B. Talko, L. Stern, L. Kitchen

435

446

Constraint Satisfaction Dual Encoding Using Constraint Coverings S. Nagarajan, S.D. Goodwin, A. Sattar

457

Consistency in General CSPs W. Pang, S.D. Goodwin

469

Neural Networks Need for Optimisation Techniques to Select Neural Network Algorithms for Process Modelling of Reduction Cell V. Karri, F. Frost

480


Productivity Improvements through Prediction of Electrolyte Temperature in Aluminium Reduction Cell Using BP Neural Network . . . . 490 F. Frost, V. Karri Pruned Neural Networks for Regression R. Setiono, W.K. Leow

500

Optimal Design of Neural Nets Using Hybrid Algorithms A. Abraham, B. Nath

510

Markov Decision Processes A POMDP Approximation Algorithm That Anticipates the Need to Observe V. Bayer Zubeck, T. Dietterich

521

Generating Hierarchical Structure in Reinforcement Learning from State Variables B. Hengst

533

Robotics Humanoid Active Audition System Improved by the Cover Acoustics K. Nakadai, H.G. Okuno, H. Kitano

544

Overcoming the Effects of Sensory Delay by Using a Cerebellar Model . . . . 555 D. Collins, G. Wyeth Layered Specification of Intelligent Agents P. Scerri, J. Ydren, N. Reed

566

Image Processing and Pattern Recognition Sub-pixel Precise Edge Localization: A ML Approach Based on Color Distributions R. Hanek Efficient Joint Detection Considering Complexity of Contours M. Kanoh, S. Kato, H. Itoh

577

588

Feature-Based Face Recognition: Neural Network Using Recognition-by-Recall W. Zhang, Y. Guo

599

Segmentation of Connected Handwritten Chinese Characters Based on Stroke Analysis and Background Thinning S. Zhao, P. Shi

608


A Framework of Two-Stage Combination of Multiple Recognizers for Handwritten Numerals K. Lee, Y. Lee

617

Natural Language Processing Aligning Portuguese and Chinese Parallel Texts Using Confidence Bands . . 627 A. Ribeiro, G. Lopes, J. Mexia Interactive Japanese-to-Braille Translation Using Case-Based Knowledge on the Web S. Ono, Y. Hamada, Y. Takagi, S. Nishihara, K. Mizuno

638

Speech and Spoken Language Psychological Effects Derived from Mimicry Voice Using Inarticulate Sounds N. Suzuki, Y. Takeuchi, M. Okada

647

Statistical Model Based Approach to Spoken Language Acquisition N. Iwahashi

657

AI in Web Technology Discovery of Shared Topics Networks among People - A Simple Approach to Find Community Knowledge from WWW Bookmarks H. Takeda, T. Matsuzuka, Y. Taniguchi

668

Collaborative Filtering with the Simple Bayesian Classifier K. Miyahara, M.J. Pazzani

679

Supervised and Unsupervised Learning Algorithms for Thai Web Pages Identification B. Kijsirikul, P. Sasiphongpairoege, N. Soonthomphisaj, S. Meknavin

690

Solving the Personal Computer Configuration Problems as Discrete Optimization Problems: A Preliminary Report V. Tam, K.T. Ma

701

Intelligent Systems Improved Efficiency of Oil Well Drilling through Case Based Reasoning . . . 712 P. Skalle, J. Sveen, A. Aamodt Functional Understanding Based on an Ontology of Functional Concepts . . 723 Y. Kitamura, T. Sano, R. Mizoguchi Probabilistic Modeling of Alarm Observation Delay in Network Diagnosis . 734 K. Hashimoto, K. Matsumoto, N. Shiratori


A Diagnosis Function of Arithmetical Word Problems for Learning by Problem Posing T. Hirashima, A. Nakano, A. Takeuchi

745

Combining Kalman Filtering and Markov Localization in Network-Like Environments S. Thiebaux, P. Lamb

756

AI and Music Microbes and Music F. Soddell, J. Soddell

767

A Lightweight Multi-agent Musical Beat Tracking System S. Dixon

778

Posters Frame-Structure Logic with Extended Attribute Relations K. Komatsu, N. Nishihara, S. Yokoyama

789

Fast Hypothetical Reasoning by Parallel Processing Y. Matsuo, M. Ishizuka

790

TURAS: A Personalised Route Planning System L. McGinty, B. Smyth

791

Analysis of Phase Transitions in Graph-Coloring Problems Based on Constraint Structures K. Mizuno, A. Hayashimoto, S. Nishihara

792

Minimal Model Generation with Factorization and Constrained Search . . . 793 M. Koshimura, M. Kita, R. Hasegawa Method of Ideal Solution in Fuzzy Set Theory and Multicriteria Decision Making G. Beliakov

794

A New Axiomatic Framework for Prioritized Fuzzy Constraint Satisfaction Problems X. Luo, H.-f. Leung, J.H.-m. Lee

795

Constraint Satisfaction over Shared Multi-set Value Domains M.J. Sanders

796

Algorithms for Solving the Ship Berthing Problem K.S. Goh, A. Lim

797

Temporal Interval Logic in Data Mining C.P. Rainsford, J.F. Roddick

798


Data Mining in Disease Management - A Diabetes Case Study H. He, H. Koesmamo, T. Van, Z. Huang

799

A Limited Lattice Structure for Incremental Association Mining Y. Zhao, J. Shi, P. Shi

800

Markov Modelling of Simple Directional Features for Effective and Efficient Handwriting Verification A. McCabe

801

Texture Analysis and Classification Using Bottom-Up Tree-Structured Wavelet Transform Y. Miyamoto, M.N. Shirazi, K. Uehara

802

A Stereo Matching Algorithm Using Adaptive Window and Search Range . 803 H.-S. Koo, C.-S. Jeong A Design of Rescue Agents for RoboCup-Rescue M. Ohta, N. Ito, S. Tadokoro, H. Kitano

804

A Cooperative Architecture to Control Multi-agent Based Robots M. Becht, R. Lafrenz, N. Oswald, M. Schule, P. Levi

805

Automatic Development of Robot Behaviour Using Monte Carlo Methods . 806 J. Brusey Adapting Behavior by Inductive Prediction in Soccer Agents T. Matsui, N. Inuzuka, H. Seki

807

Computing the Local Space of a Mobile Robot M.E. Jefferies, W.-K. Yeap, L.I. Smith

808

Learning Situation Dependent Success Rates of Actions in a RoboCup Scenario S. Buck, M. Riedmiller

809

Cooperative Bidding Mechanisms among Agents in Multiple Online Auctions T. Ito, N. Fukuta, R. Yamada, T. Shintani, K. Sycara

810

Framework of Distributed Simulation System for Multi-agent Environment 811 I. Noda Dependence Based Coalitions and Contract Net: A Comparative Analysis . 812 M. Ito, J.S. Sichman A Tracer for Debugging Multi-agent System Based on P-Q Signal Method 813 T. Ozono, T. Shintani

A Multi-agent Approach for Simulating Bushfire Spread W. Magill, X. Li

814

Multi-agent Cooperative Reasoning Using Common Knowledge and Implicit Knowledge L. He, Y. Chao, K. Yamada, T. Nakamura, H. Itoh

815

Life-Like Agent Design Based on Social Interaction Y. Takeuchi, T. Takahashi, Y. Katagiri

816

Agent-Oriented Programming in Linear Logic: An Example A. Al Amin, M. Winikoff, J. Harland

817

Emotional Intelligence for Intuitive Agents P. Ray, M. Toleman, D. Lukose

818

Formalization for the Agent Method by Using π-Calculus K. Iwata, N. Ito, N. Ishii

819

Utilization of Coreferences for the Translation of Utterances Containing Anaphoric Expressions M. Paul, E. Sumita

820

Word Alignment Using a Matrix E. Sumita

821

Deterministic Japanese Word Segmentation by Decision List Method H. Shinnou

822

Criteria to Choose Appropriate Graph-Types H. Yonezawa, M. Matsushita, T. Kato

823

A Document Classifier Based on Word Semantic Association X. Li, J. Liu, Z. Shi

824

Incorporation of Japanese Information Retrieval Method Using Dependency Relationship into Probabilistic Retrieval H. Fujitani, T. Mine, M. Amamiya

825

A Step Towards Integration of Learning Theories to Form an Effective Collaborative Learning Group A. Inaba, T. Supnithi, M. Ikeda, R. Mizoguchi, J. Toyoda

826

Model-Based Software Requirements Design T. Aida, S. Ohsuga

827

Acquiring Factual Knowledge through Ontological Instantiation H. Shin, S. Koehler

828


Intrusion Detection by Combining Multiple Hidden Markov Models J. Choy, S.-B. Cho Conceptual Classification and Browsing of Internet FAQs Using Self-Organizing Neural Networks H.-D. Kim, J.-H. Ahn, S.-B. Cho

829

830

The Role of Abduction in Internet-Based Applications A. Abe

831

FERRET: An Intelligent Assistant for Internet Searching J. Zhou, J. Baltes

832

Author Index

833

Automated Haggling: Building Artificial Negotiators Nick Jennings Dept of Electronics and Computer Science University of Southampton Southampton SO17 1BJ

nrj@ecs.soton.ac.uk

Abstract. Computer systems in which autonomous software agents negotiate with one another in order to come to mutually acceptable agreements are likely to become pervasive in the next generation of networked systems. In such systems, the agents will be required to participate in a range of negotiation scenarios and exhibit a range of negotiation behaviours (depending on the context). To this end, this talk explores the issues involved in designing and implementing a number of automated negotiators for real-world electronic commerce applications.

R. Mizoguchi and J. Slaney (Eds.): PRICAI 2000, LNAI 1886, p. 1, 2000. © Springer-Verlag Berlin Heidelberg 2000

Information Geometry of Neural Networks Shun-ichi Amari Laboratory for Information Synthesis Brain-Style Information Systems Research Group RIKEN Brain Science Institute 2-1, Hirosawa, Saitama 351-0198, Japan

amari@brain.riken.go.jp

Abstract. Japan has launched a big Brain Science Program which includes theoretical foundations of neurocomputing. Mathematical foundation of brain-style computation is one of the main targets of our laboratory in the RIKEN Brain Science Institute. The present talk will introduce the Japanese Brain Science Program, and then give a direction toward mathematical foundation of neurocomputing. A neural network is specified by a number of real free parameters (connection weights or synaptic efficacies) which are modifiable by learning. The set of all such networks forms a multi-dimensional manifold. In order to understand the total capability of such networks, it is useful to study the intrinsic geometrical structure of the neuromanifold. When a network is disturbed by noise, its behavior is given by a conditional probability distribution. In such a case, Information Geometry gives a fundamental geometrical structure. We apply information geometry to the set of multi-layer perceptrons. Because it is a Riemannian space, we are naturally led to the Riemannian or natural gradient learning method, which proves to give a strikingly fast and accurate learning algorithm. The geometry also proves that various types of singularities exist in the manifold, which are not peculiar to neural networks but common to all hierarchical systems. These singularities have a severe influence on learning behavior. All of these aspects are analyzed mathematically.
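The natural gradient method mentioned in this abstract preconditions the ordinary gradient with the inverse Fisher information matrix, so each step follows the steepest descent direction on the Riemannian parameter manifold. As a hedged sketch (a toy model, not the multi-layer perceptron setting of the talk), the following fits the mean and log-scale of a Gaussian, for which the Fisher matrix is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=1000)

# Model: Gaussian N(mu, sigma^2) with parameters theta = (mu, log_sigma).
def nll_grad(theta, x):
    """Gradient of the average negative log-likelihood."""
    mu, log_sigma = theta
    sigma2 = np.exp(2 * log_sigma)
    g_mu = (mu - x.mean()) / sigma2
    g_log_sigma = 1.0 - ((x - mu) ** 2).mean() / sigma2
    return np.array([g_mu, g_log_sigma])

def fisher_inverse(theta):
    # Fisher information of N(mu, sigma^2) in (mu, log_sigma) coordinates
    # is F = diag(1/sigma^2, 2); its inverse rescales the raw gradient.
    _, log_sigma = theta
    sigma2 = np.exp(2 * log_sigma)
    return np.diag([sigma2, 0.5])

theta = np.array([0.0, 1.0])  # deliberately poor initial guess
for _ in range(200):
    grad = nll_grad(theta, data)
    theta = theta - 0.1 * fisher_inverse(theta) @ grad  # natural gradient step

print(theta)  # mu near 2.0, log_sigma near 0.0
```

Note how, for this model, the natural gradient step on the mean reduces to mu ← mu − 0.1·(mu − x̄), independent of the current scale: an instance of the parametrization invariance that motivates the method.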


Knowledge Representation, Belief Revision, and the Challenge of Optimality Randy Goebel Department of Computing Science University of Alberta Edmonton, Canada, T6G 2H1 goebel@cs.ualberta.ca

Abstract. The fields of KR and BR are closely related, and abstractly circumscribe the two requirements of articulating and consistently accumulating knowledge. The application to particular problem solving tasks provides further constraints on the articulation and accumulation of knowledge, many of which are complex, conflicting, and difficult to formalize. Beginning with the idea that belief revision provides the most general framework for accumulating knowledge, we review recent experience in optimization problem solving, where the difficulties include specifying the problem, the objective function for a solution, and the knowledge of how to search a large search space. The experience reveals ideas for a general optimization problem solving framework, in which belief revision, constraint programming, and heuristic optimization all work together.


Artificial Intelligence Applications in Electronic Commerce Jae Kyu Lee Graduate School of Management Korea Advanced Institute of Science and Technology 207-43 Cheongryang, Seoul, Korea 130-012 jklee@msd.kaist.ac.kr

Abstract. There are various applications of AI in the web-based electronic commerce (EC) environment. However, the application of intelligence in EC is confined by the state of the art of AI, although EC has a high potential for AI deployment. In this talk, we will review the status of AI applications in EC, and research opportunities for future applications. The key AI technologies applied in EC include agents for search, comparison, and negotiation; search by configuration and salesman expert systems; thesauri of EC terms; knowledge-based processing of workflow; personalized e-catalog directory management; data mining in customer relationship management; natural language conversation, voice recognition and synthesis, and machine translation. 1. Agent: Concerning agents, we will discuss the trend of XML standards such as ebXML for search, communication, and price negotiation. We will also review the status of learning from the seller and buyer agents' points of view. 2. Search by configuration: Most searches seek standard commodities, while many products like electronic goods need optional parts added to fulfill the required specification. This implies that we need to find the most similar template first, and then to adjust the optional parts so as to offer the minimum cost. 3. Thesaurus to aid comparison shopping: Product specifications should be comparable to each other whether they are represented in numbers, symbols, or words. To comprehend the specifications for comparison, we need a thesaurus of the application domain. 4. Personalized e-catalog directory management: A buyer site defines a personalized e-catalog directory out of a common standard catalog. Anomalies such as unbalanced directories are defined and automatic remedies are developed. 5. Other issues like data mining in CRM, natural language, voice recognition, and machine translation on the web will be demonstrated. The talk will end with the AI research opportunities in EC.
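The template-plus-options idea in point 2 can be sketched as a small search: choose the catalog template whose optional parts can cover the requirements still missing, at minimum total cost. The catalog entries and the `configure` helper below are hypothetical, invented purely for illustration:

```python
# Hypothetical catalog: each template has base features, a base price,
# and optional parts that can be added to meet missing requirements.
templates = {
    "desktop-basic": {"features": {"cpu", "hdd"}, "price": 500,
                      "options": {"dvd": 40, "speakers": 25}},
    "desktop-media": {"features": {"cpu", "hdd", "dvd"}, "price": 620,
                      "options": {"speakers": 25, "tv-tuner": 90}},
}

def configure(required):
    """Pick the template that can cover all requirements, adding the
    options for whatever is missing, at minimum total cost."""
    best = None
    for name, t in templates.items():
        missing = required - t["features"]
        # Every missing feature must be available as an option.
        if not missing <= t["options"].keys():
            continue
        cost = t["price"] + sum(t["options"][f] for f in missing)
        if best is None or cost < best[1]:
            best = (name, cost, sorted(missing))
    return best

print(configure({"cpu", "hdd", "dvd", "speakers"}))
# → ('desktop-basic', 565, ['dvd', 'speakers'])
```

The cheaper base template wins here even though it matches fewer features outright, because the added options still cost less than the richer template; a production configurator would also need similarity ranking and compatibility constraints, which this sketch omits.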


Towards a Next-Generation Search Engine Qiang Yang¹, Hai-Feng Wang, Ji-Rong Wen, Gao Zhang, Ye Lu¹, Kai-Fu Lee, and Hong-Jiang Zhang Microsoft Research China 5F, Beijing Sigma Center No. 49 Zhichun Road, Haidian District Beijing 100080 P.R. China {qyang, yel}@cs.sfu.ca, {i-haiwan, i-jrwen, i-gzhang, kfl, hjzhang}@microsoft.com

Abstract. As more information becomes available on the World Wide Web, it has become an acute problem to provide effective search tools for information access. Previous generations of search engines are mainly keyword-based and cannot satisfy many informational needs of their users. Search based on simple keywords returns many irrelevant documents that can easily swamp the user. In this paper, we describe the system architecture of a next-generation search engine that we have built with the goal of providing accurate search results on frequently asked concepts. Our key differentiating factors from other search engines are a natural language user interface, accurate search results, an interactive user interface, and multimedia content retrieval. We describe the architecture, design goals, and experience in developing the search engine.

1 Introduction

With the explosive growth of information on the World Wide Web, there is an acute need for search engine technology to keep pace with users' needs for searching speed and precision. Today's popular search engines such as Yahoo! and MSN.com are used by millions of users each day to find information, but the main method of search has remained the same as when the first search engines appeared years ago, relying mainly on keyword search. This has resulted in unsatisfactory search results, as a simple keyword may not be able to convey the complex search semantics a user wishes to express, returning many irrelevant documents and, eventually, disappointing users. The purpose of this paper is to sketch a next-generation search engine, which offers several key features that make it more natural and efficient for users to search the web.

¹ The research was conducted while the author was visiting Microsoft Research China, on leave from the School of Computing Science, Simon Fraser University, Burnaby, BC, Canada V5A 1S6

R. Mizoguchi and J. Slaney (Eds.): PRICAI 2000, LNAI 1886, pp. 5-15, 2000. © Springer-Verlag Berlin Heidelberg 2000


Search has so far experienced two main evolutions. The first is keyword-based search engines [4, 11], as is currently the case with the majority of search engines on the web (e.g., Yahoo! and MSN.com). These engines accept a keyword-based query from a user and search one or more index databases. Despite their simplicity, these engines typically return millions of documents in response to a simple keyword query, which often makes it impossible for a user to find the needed information. In response to this problem, a second generation of search engines aims at extracting frequently asked questions (FAQ's) and manually indexing these questions and their answers; the result is one added level of indirection whereby users are asked to confirm one or more rephrased questions in order to find their answer. A prime example of this style of search engine is Askjeeves.com. An advantage of this style of interaction and cataloging is much higher precision: whereas keyword-based search engines return thousands of results, Askjeeves often gives a few very precise results as answers. It is plausible that this style of FAQ-based search engine will enjoy remarkable success in limited-domain applications such as web-based technical support. While the Askjeeves search engine generates impressive results, its design and architecture are not available for other researchers to experiment with due to its proprietary nature. We feel, however, a pressing need in the search-engine research community to exchange ideas in order to move towards newer-generation search engines. We observe that although FAQ-based second-generation search engines have improved search precision, much remains to be desired. A third-generation search engine will be able to deal with the concepts that the user intends to query about, by parsing an input natural language query and extracting syntactic as well as semantic information.
The parser should be robust in the sense that it will be able to return partial parse results whenever possible, and the results will be more accurate when more information is available. It will have the capability to deal with languages other than English, as, for example, the Chinese language poses additional difficulty in word segmentation and semantic processing. When facing ambiguity, it will interact with the user for confirmation of the concept the user is asking about. The query logs are recorded and processed repeatedly, providing a powerful language model for the natural language parser as well as indexing the frequently asked questions and providing relevance-feedback learning capability. The basis for our approach is an important hypothesis we call the concept-space coverage hypothesis: a small subset of concepts can cover most user queries. If this hypothesis is true, then we can simply track this small subset and use a semi-automated method to index the concepts precisely; this results in a search engine that satisfies most users most of the time. To support our hypothesis, we took a one-day log from the MSN.com query log and manually mapped queries to pre-defined concept categories. In Fig. 1, the horizontal axis represents the 27 predefined categories of concepts or keywords, and the vertical axis is the coverage of all queries under consideration by the corresponding subset of concepts or keywords. Examples of the concepts are: "Finding computer and internet related products and services", "Finding movies and toys on the Internet", and so on. We took the top 3000 distinct user queries, representing 418,248 queries on Sept 4, 1999 (a date we chose arbitrarily), and then classified these queries. The keywords are sorted by frequency, such that the


i-th value on the horizontal axis corresponds to the subset of all i previous keywords in the sorted list. As can be seen, both the keyword and the concept distributions obey the pattern that the first few popular categories cover most of the queries. Furthermore, the concept distribution converges much faster than the keyword distribution, as we expected. As an example, 30% of the concepts in fact cover about 80% of all queries in the selected query pool. This preliminary result shows that our hypothesis stands, at least for the MSN.com query log data. We are currently conducting more experiments to further confirm this hypothesis.
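The cumulative-coverage computation behind this analysis can be sketched as follows (our own reconstruction for illustration, not the authors' code; the category labels and toy log are invented):

```python
# Reconstruction sketch of the coverage curve: sort categories by query
# frequency and accumulate the fraction of the log covered by the top-i.
from collections import Counter

def coverage_curve(query_categories):
    """query_categories: one (manually assigned) category per logged query.
    Returns cumulative coverage after each category, most frequent first."""
    counts = Counter(query_categories)
    total = sum(counts.values())
    covered, curve = 0, []
    for _, freq in counts.most_common():  # categories sorted by frequency
        covered += freq
        curve.append(covered / total)
    return curve

# Hypothetical toy log: the most popular category alone covers 60%.
print(coverage_curve(["travel"] * 6 + ["movies"] * 3 + ["finance"]))
# → [0.6, 0.9, 1.0]
```

On a real log, an early plateau of this curve is exactly the behaviour the concept-space coverage hypothesis predicts.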

[Figure 1 appears here: coverage of queries (vertical axis, 0.0 to 1.0) plotted against the 27 concept categories (horizontal axis, labeled "Concept Category", ticks 1 to 27), with one curve for the concept distribution and one for the keyword distribution; the concept curve reaches roughly 90% coverage and the keyword curve roughly 80% within the plotted range.]

Fig. 1. Plotting the query distribution against concept categories

In this paper, we describe our design of a prototype search engine based on FAQ analyses. We show that it is possible to build such search engines based on data mining and natural language processing, as well as well-designed user interfaces. This prototype system, shown in Fig. 2, demonstrates many exciting new opportunities for research in next-generation intelligent computing. We focus only on two main components of the system, the natural language subsystem and the log data mining subsystem, leaving the evaluation of the system to future discussions. Code-named "Brilliant", the search engine has the ability to robustly parse natural language based on grammatical knowledge obtained through analysis of query log data. In order to assemble answers to questions, it has a methodology to process query logs for the purpose of obtaining new question templates with indexed answers. It also has relevance-feedback capability for improving its indexing and ranking functions. This capability allows the system to record users' actions in browsing and in selecting search results, so that the ranking of these results and the importance of each selection can be learned over time.


[Figure 2 appears here: the system architecture, in which a question entered at the user interface is passed to the NLP parser, whose output (keywords and parse structures) is matched against the template database and the answer database to produce the search result shown back to the user.]

Fig. 2. Brilliant system architecture

2 Natural Language Processing with LEAP

In this section, we first introduce the concept of robust parsing and how to apply it to natural language understanding in search engines. We then describe several challenges of using robust parsing in our search engine design.

2.1 Using Robust Parsing in a Search Engine

Spoken language understanding (SLU) researchers have been working on robust parsing to handle ill-formed inputs. Usually, a robust parser attempts to overcome extra-grammaticality by ignoring un-parsable words and fragments and by searching for the maximal subset of the original input that is covered by the grammar [9, 13]. Natural language understanding shares many features with SLU: input sentences are almost always ungrammatical or incomplete, and it is difficult to parse such sentences using a traditional natural language parser. One advantage of search engine queries is that they are short - again a feature shared with spoken language. Therefore we chose a robust parser as the natural language parser in our search engine. Robust parsers have several main advantages. First, if an input sentence contains words that are not parsable, a robust parser can skip these words or phrases and still output an incomplete result. In contrast, a traditional parser will either completely break down or need revision of the grammar rules. Second, if a given sentence is incomplete such that a traditional parser cannot find a suitable rule to match it


exactly, robust parsers can provide multiple interpretations of the parsing result and associate a confidence level with each output. In the Brilliant search engine, this confidence level is built based on statistical training. LEAP (Language Enabled Applications) is an effort of the speech technology group at Microsoft Research that aims at spoken language understanding [14]. LEAP's robust parsing algorithm is an extension of the bottom-up chart-parsing algorithm [1]. The framework of the algorithm is similar to the general chart-parsing algorithm. The differences include: 1) traditional parsers require that a hypothesis h and a partial parse p cover adjacent words in the input; in robust parsers this is relaxed, which makes it possible for the parser to omit noisy words in the input; 2) different hypotheses can result from partial parses by skipping some symbols in parse rules, which allows non-exact matching of a rule. A LEAP grammar defines semantic classes. Each semantic class is defined by a set of rules and productions. For example, we can define a semantic class for the travel path from one place to another. This class is represented as follows:

TravelPath {
    => @from @to @route;
    @from => from | ...;
}
Place {
    Beijing | Shanghai | ...;
}

In the semantic classes above, the head of a rule defines its return class type, and TravelPath is a semantic class that contains a number of rules (the first line) and productions (the second line). In this class, @from must parse a piece of the input sentence according to a production as shown in the second line. The input item after the @from object must match according to its semantic class. If there are input tokens that are not parsable by any part of the rule, they are ignored by the parser. In this case, the score of the parse result is correspondingly discounted to reflect a lower level of confidence in the parse result. Suppose the input query is: How to go from Beijing to Shanghai? The LEAP parser will return a parse tree whose root semantic class covers the query, with "Beijing" and "Shanghai" parsed as instances of the semantic class Place. Note that this input query cannot be parsed completely using the first rule in the semantic class TravelPath if we used a traditional parser. In our implementation, we calculated the score of the parsing


result by discounting the number of input items and rule items that are skipped during the parsing operation. This value is normalized to give a percentage confidence value.
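This skip-and-discount scoring can be illustrated with a small sketch (our own simplification, not the LEAP implementation; the rule items, penalty values, and alignment scheme are assumptions):

```python
# Sketch of robust rule matching: align rule items against query tokens,
# allowing either side to be skipped at a penalty, then normalize the best
# score into a confidence value. All numbers here are invented.

def robust_match_score(rule_items, tokens, rule_skip_penalty, token_skip_penalty=0.1):
    """Best alignment score via edit-distance-style dynamic programming:
    a match scores 1.0, a skip subtracts its penalty."""
    n, m = len(rule_items), len(tokens)
    NEG = float("-inf")
    dp = [[NEG] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if dp[i][j] == NEG:
                continue
            if i < n and j < m and rule_items[i] == tokens[j]:
                dp[i + 1][j + 1] = max(dp[i + 1][j + 1], dp[i][j] + 1.0)
            if i < n:  # skip a rule item (skipping @route would be costly)
                dp[i + 1][j] = max(dp[i + 1][j], dp[i][j] - rule_skip_penalty[rule_items[i]])
            if j < m:  # skip a noisy input token such as "how"
                dp[i][j + 1] = max(dp[i][j + 1], dp[i][j] - token_skip_penalty)
    return max(0.0, dp[n][m]) / len(rule_items)  # confidence in [0, 1]

# "how to go" is skipped cheaply; all four rule items are matched.
tokens = ["how", "to", "go", "from", "<place>", "to", "<place>"]
rule = ["from", "<place>", "to", "<place>"]
penalties = {"from": 0.2, "<place>": 0.5, "to": 0.2}
score = robust_match_score(rule, tokens, penalties)
```

The per-item penalty dictionary plays the role of the trained penalty values discussed in Sect. 2.2: cheap skips for noise, expensive skips for semantically important items.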

[Figure 3 appears here: a flowchart in which input questions undergo word segmentation (consulting a dictionary and a language model built from the query log), after which the parser applies the grammar rules to produce a parse tree; when parsing fails, stop words are removed and keywords are extracted instead.]

Fig. 3. The flowchart of natural language processing

We have adapted the LEAP system in our search engine using grammar rules trained through query logs. The system architecture is shown in Fig. 3. In this figure, we first perform Chinese word segmentation. The segmented sentence is then passed to LEAP. If LEAP parses the sentence successfully, the module outputs a parse tree. Otherwise, keywords are simply extracted and output. The template-matching module of the search engine is able to use both types of output.

2.2 Evaluating the Parsing Result

In our search engine, we adapt LEAP so that it evaluates the output based on the coverage of a rule against the input query. A parse result is selected if it covers the most words in the query and the most parts of the rules. In order to improve the scoring strategy, we learn probabilities from query logs, including:
• probabilities of the rules;
• penalties for robust rule matching (insertion, deletion, substitution);
• probabilities of "non-matching" words;
• term probabilities according to their frequency in the query log.
Consider the rule in the semantic class TravelPath:

=> @from @to @route;


We illustrate how to train the probability values associated with this rule. A rule having a high probability value means that using this rule to parse a query is more reliable. This is similar to PCFG [8]. We can also train the penalty values for robust matching: for an item in either a rule or the query sentence, if the item is skipped during parsing, a penalty is exacted. In the above rule, if @from is skipped, the penalty can be set relatively low. However, if @route is skipped, the penalty should be high. In the sentence "How to go from Beijing to Shanghai", if "how to go" is skipped, the penalty should be high. In our search engine, penalty and score statistics are gathered using the query log files.

3 Question Matching and Query Log Mining

In this section, we describe the question-matching module. We first describe the question matching process used in the module. We then discuss issues related to obtaining the template and answer databases from the query log data.

3.1 Question Matching Process

The purpose of the question-matching module is to find a set of relevant concepts and their related answers for a given user query. To accomplish this, we set up a collection of FAQ concepts and a mapping from these concepts to answers. These answers can be further compressed by grouping together their similar parameters; the result is what we call templates. To illustrate this mapping process, we consider a realistic example. Suppose that the user asked "How to go from Beijing to Shanghai?". We store in our FAQ database a collection of concepts indexed on "Route" and "Travel", and possibly other related concepts. We include synonyms to allow a broader coverage of the concepts in the FAQ database in order to increase the recall of the matching operation (see the "Concept-FAQ table"). We can achieve a further improvement by including a concept hierarchy to map user queries into a collection of related concepts. The rest of the processing is decomposed into three steps.

Step 1. Mapping from the question space to the concept space. The natural language processing module returns a parse tree that contains a semantic class and its parameters. The LEAP parser returns a concept "Route" as a semantic class, which can then be used to search the concept-FAQ table. The concept-FAQ table is the core data structure of the whole database. Every FAQ is assigned a FAQ-ID, which uniquely distinguishes it from the others. An FAQ is made up of a few concepts that are in fact represented by certain terms such as "Route". Every (Concept, FAQ-ID, Weight) record denotes that the FAQ is related to the concept with a correlation Weight factor, which is learned from a later analysis of the query log file. Every FAQ is related to one or more concepts and every concept is related to one or more FAQ's. Thus there is a many-to-many relationship between FAQ's and concepts. Using the concept-FAQ table, we can compute a correlation factor between a concept set D = (concept_1, concept_2, ..., concept_n) and an FAQ with ID x as

Σ_{i=1}^{n} Weight(concept_i, x).

Hence, given a concept set, it is straightforward to obtain the top n best-matched FAQ's.

Step 2. Mapping from the FAQ space to the template space. A template represents a class of standard questions and corresponds to a semantic class in the LEAP module. Every template has one or more parameters with values. Once all the parameters in a template are assigned a value, a standard question is derived from the template. For example, "air flights to *" is a template representing a class of questions about flights from or to a certain location. Here the wild card "*" denotes a parameter in the template that can be assigned an arbitrary place name. If "Shanghai" is chosen, this template is transformed into the standard question "what are the air flights to Shanghai?". We cannot construct a template for every question since there are many similar questions. We solve this problem by collecting all similar questions and preparing a single template for them. This effectively compresses the FAQ set.

Step 3. Mapping from the template space to the answer space. All answers for a template are stored in advance in a separate answer table, indexed by the parameter values of the template. When a match is made, the best parameter is calculated and passed to the GUI component to be shown to the user. Every answer is made up of two parts: a URL and its description.
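The concept-to-FAQ correlation of Step 1 can be sketched as follows (a hypothetical illustration: the table contents and weights are invented, whereas the paper learns the Weight factors from query-log analysis):

```python
# Toy Concept-FAQ table: (concept, FAQ-ID) -> Weight. An FAQ's score for a
# query is the sum of weights over the concepts extracted from the query.
CONCEPT_FAQ = {
    ("route", 1): 0.9, ("travel", 1): 0.7,   # FAQ 1: how to go from A to B
    ("route", 2): 0.4, ("flight", 2): 0.8,   # FAQ 2: air flights to A
}

def best_faqs(concepts, top_n=2):
    """Return the top-n (FAQ-ID, correlation) pairs for a concept set."""
    scores = {}
    for (concept, faq_id), weight in CONCEPT_FAQ.items():
        if concept in concepts:
            scores[faq_id] = scores.get(faq_id, 0.0) + weight
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

# A query parsed to the concepts {route, travel} matches FAQ 1 best.
ranked = best_faqs({"route", "travel"})
```

The many-to-many relationship between concepts and FAQ's is captured directly by the keyed table, and the top-n selection mirrors the ranking handed on to Step 2.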

3.2 Query Log Mining

Our system is purely data-driven: the most important information is derived from the query logs and the World Wide Web. In our current work, we address the following critical questions:
a. How to find the frequently asked questions among a large number of user questions? The challenge here is to address the time-variant nature of the questions, because some questions important today may become unimportant tomorrow.
b. How to find answers for a template automatically or semi-automatically?
c. How to determine the weights between concepts and FAQ's, and between FAQ's and templates?


Our system for query log mining consists of statistical query co-occurrence analysis, clustering and classification analyses. In our experience, we have found these tools to offer the search engine index maintainer very effective help in keeping the FAQ and template databases up to date.
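As a hypothetical illustration of the co-occurrence analysis (our own sketch, not the authors' tooling), pairs of terms that repeatedly appear together in logged queries can be counted as a first step towards clustering similar questions:

```python
# Count term co-occurrence across logged queries; frequent pairs hint at
# recurring question topics worth turning into FAQ templates.
from collections import Counter
from itertools import combinations

def cooccurrence(queries):
    pairs = Counter()
    for q in queries:
        terms = sorted(set(q.lower().split()))
        pairs.update(combinations(terms, 2))  # each unordered pair once per query
    return pairs

counts = cooccurrence(["flight beijing", "flight shanghai", "cheap flight beijing"])
```

High-count pairs would then be candidates for the manual or semi-automatic template construction described above.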

4 User Interface

Figs. 4 and 5 show the process in which users ask natural language queries. Much research has shown that natural language is more powerful than keywords in expressing users' intentions. Users usually have difficulty formulating good search keywords even when they have clear ideas about what they need [12].


... G(y)] and to the more general cases. If the premise of an implicative formula includes two predicates with the same variable, as in (∀x/D)[F1(x)∧F2(x) → G(x)], then two independent states Sf1 and Sf2 of D are made, corresponding to F1(x) and F2(x) respectively. Then a compound state Sf = Sf1 × Sf2 is made as the Cartesian product. From its compound probability vector Pf, a probability vector Pg for the state Sg is derived in the same way as above. In this case the number of states in Sf is 2^(2n) and the transition matrix T becomes a 2^(2n) × 2^n matrix. Alternatively, it can be represented in a three-dimensional space by a 2^n × 2^n × 2^n matrix, called a cubic matrix. Each of the three axes represents a predicate in the formula, that is, F1, F2 or G. This is a convenient way for visual understanding and for making the matrix consistent with the logical definition. The (I, J)-th element in each plane is made in such a way that it represents a relation consistent with the definition of logical implication when the states of D with respect to F1 and F2 are I and J respectively. For example, in a plane of the state vector Sg including G(a)=0, the (I, J)-th element corresponding to states I and J of Sf1 and Sf2 including F1(a)=1, F2(a)=1

S. Ohsuga

must be zero. This prevents the contradictory case of F1(a)=1, F2(a)=1 and G(a)=0 from occurring. There can be cases in which more than two predicates are included in the premise. But in principle, these cases are decomposed into the case of two predicates. For example, F1(x)∧F2(x)∧F3(x) → G(x) can be decomposed into F1(x)∧F2(x) → K(x) and K(x)∧F3(x) → G(x).

[Figure 1 of this paper appears here: a 16 × 16 transition matrix whose rows and columns are indexed by the states p0, p1, ..., p15; entries marked x are non-negative values with each row summing to 1, and the remaining entries are 0.]

Fig. 1. Transition matrix to represent the logical expression (∀x/D)[F(x) → G(x)], D = (a1, a2, a3, a4)

Further extension is necessary for more than two variables, for example, (∀x/D)(∀y/E)[F1(x)∧F2(x, y) → G(y)]. In this case a new variable z defined over the set D × E is introduced and a cubic matrix can be made. The subsequent treatment is similar to the above case. In this way the set of logical implicative forms with the corresponding transition matrices is extended to include very practical expressions. The computation Pg_J = Σ_I Pf_I × t_IJ for Pg = Pf × T is formally the same as that included in a neural network whose input and output vectors are Pf and Pg respectively and where the weight of the arc between nodes I and J is t_IJ. A neural network includes a non-linear transformation after this linear operation, usually a function called a sigmoid function. At the moment this part is ignored. A transition matrix representing predicate logic has many restrictions compared with a matrix representing a neural network. First, since the former represents a probabilistic process, every element of the matrix must be zero or a positive number less than or equal to one, while the weight values of a neural

The Gap between Symbol and Non-symbol Processing


network are not restricted to the interval [0,1]. But this is to some extent a matter of measurement. By preprocessing the input values, a different neural network in which the range of every input value becomes similar to a probability may be obtained with substantially the same functionality as the original. Thus the first difference is not a substantial one. Second, in order for a matrix to keep the same relation as logical implication, it has to fulfil a further restriction as shown in Figure 1, while a matrix representing a neural network is free from such a restriction. A neural network can represent an object at a very fine level, in many cases down to a continuous level. In other words, the granularity of representation is very fine in the framework of a neural network. But the framework is rigid, and expanding its scope is difficult. Therefore integration of two or more neural networks, or of a neural network with other non-symbolic processing, is not easy. A person must define an ad hoc method of integration for every specific case. The granularity of logical predicates, on the other hand, is very coarse. Predicate logic can expand its scope at the sacrifice of granularity of representation at the fine level. Therefore predicate logic cannot represent a neural network correctly. In general, it is difficult for symbolic systems to represent non-symbolic processing precisely. For the purpose of discovering knowledge automatically, a new style of information processing with a new representation scheme and its processing method must be defined. It is desirable to have the advantages of both non-symbolic and symbolic processing systems. An approach that expands the syntax of orthodox predicate logic to include a probabilistic measure is taken in the sequel. One of the problems of this method is the rapid increase of computational complexity by combination.
It is discussed in section 5 that it is possible to reduce the number of computations in some cases of discovery in databases.

4. Extending Syntax of Logical Expression Predicate logic is lacking a quantitative measure for representing non-symbolic processing. In this section therefore the syntax of predicate logic is expanded to include probability of truth of a logical expression while preserving its advantages of expandability. In the representation of matrix form a probability vector Pf of the state vector Sf represented an occurrence probability of logical states. In the formal syntax of classical first order logic however only two cases of Pf actually appear. These are (0,0,0,—,1) and (0, *, *,—,*) that correspond to (Ax/D)F(x) and (Ex/D)F(x) respectively. Here * denotes any value in [0, 1]. Since a set D= {a, b, c, ~, z} is assumed finite, (Ax/D)F(x) - F(a) A F(b) A - A F(Z) . Even if the probability of T(x); True' is different for every element, that is, for x = a or x = b or — or x = z, ordinary first order logic cannot represent it. In order to improve it a probability measure is introduced there. Let a probability of T(x):True' be p(x) for D 3 x. Then the syntax of logical fact expressions (Vx/n)F(x) is expanded to (Vx/n){F(x), p(x)}meaning Tor every x of • , F(x) is true with probability p(x)\ Since p(x) is a distribution over the set D, it is different from Pf that is a distribution over the set of states Sf . It is possible to obtain Pf from p(x) and vice

22

S. Ohsuga

versa. Every state in Sf is defined as the combination of T(x); True' or T(x); False' for all elements in D. I-th element of Sf is SFj. An element in Pf corresponding to SF] is Pfg. Let T(x); True' for the element x; i, j , -- and 'F(y); False' for y; k, 1, - - in SFj. Then Pfn= p(i) DpO) D- - n(l-p(k)) n(l-p(l)) D- -. On the other hand, let an operation to sum up all positive components with respect to i in Pf beZ.jGi Pf i- Here the 'positive component with respect to i' is Pfn corresponding to SFj in which 'F(i);True'. This represents a probability that i -th element x in D is in the state 'F(x); True". That is.Z.jniPf i = p(x). Implicative formula is also expanded. Let an extension of an implicative formula (Vx/D)[F(x)^G(x)] be considered as an example. The detail of the quantitative measure is discussed later. Whatever it may be it generates from (Vx/n){F(x), p(x)} a conclusion in the same form as the premise with its own probability distribution, i.e. (Vx/n){G(x), r(x)}. In general r(x) must be different from p(x) because an implicative formula may also have some probabilistic uncertainty and it affects the probability distribution of the consequence. The matrix introduced in section 3 gives a basis for extension of implicative formula. Figure 1 showed an example of transition matrix that generated a logical formula as conclusion for logical premise. If one intends to introduce a probabilistic measure in the inference, the restriction imposed to the matrix is released in such a way that any positive value in [0, 1] is allowed to every element under only the constraint that row sum is one for every row. With this matrix and an extended fact representation as above, it is possible to represent extended logical inference as follows. (1) Generate Pf fromp(x) of (Vx/n){F(x), p(x)}. (2) Obtain Pg as the product of Pf and the expanded transition matrix. (3) Obtain r(x) of (Vx/n){G(x), r(x)}from Pg . 
This is a process to obtain a conclusion given (Vx/n){F(x), p(x)} and the transition matrix. Thus if matrix representation is available in predicate logic, it seems a good extension of predicate logic because it includes continuous values and appears to represent the same process as non-symbolic operation. But it has drawback in two aspects. First, it needs to keep a large matrix to every implicative representation and second, and the more important, it loses modularity that was the largest advantage of predicate logic for expanding the scope autonomously. Modularity comes from the mutual independence of elements in D in a logical expression. The mutual independence between elements in D is lost in the operation Pg = Pf x T and it causes the loss of modularity. This is an operation to derive Pgj by Pgj - EnPfoX tu = PfiX ty + Pf2X t2j + — + PfNX INJ • That is, J-th element of Pg is affected by the other elements of Pf than J-th element. If this occurs, any logical predicate is affected by the other predicates. There is no modularity any more. In order to keep modularity it is desirable to represent logical implication in the same form as the fact representation like (Vx/D){[F(x)—>G(x)], q(x)}. It is read 'for every x of D, F(x) —>G(x) with probability q(x)'. In this expression q(x) is defined to each element in D independently. Then logical inference is represented as follows. (Vx/D) {F(x), p(x)} A (VxyD){[F(x)^G(x)], q(x)}Dn(Vx/D){G(x), r(x)}, r(x) = f(p(x),q(x)).


If it is possible to represent logical inference in this form, the actual inference operation can be divided into two parts. The first part is the ordinary logical inference

(∀x/D)F(x) ∧ (∀x/D)[F(x)→G(x)] ⊢ (∀x/D)G(x).

The second part is the probability computation r(x) = f(p(x), q(x)). This is the operation to obtain r(x) as a function only of p(x) and q(x) with the same variable, and it is performed in parallel with the first part. Thus the logical operation is possible simply by adding the second part to the ordinary inference operation. This is the largest possible extension of predicate logic to include a quantitative evaluation while meeting the condition of preserving modularity. This extension reduces the gap between non-symbolic and symbolic expression to a large extent, but it cannot reduce the gap to zero; a distance remains between them. If this distance can be made small enough, then predicate logic can approximate non-symbolic processing. Here arises the problem of evaluating the distance between non-symbolic processing and this expanded predicate logic. Coming back to the matrix operation, the probability of the consequence of inference is obtained for the i-th element as

r(x_i) = Σ_{I∋i} Pg_I = Σ_{I∋i} (Σ_J Pf_J × t_JI), where x_i is the i-th element of D.

This expression belongs to non-symbolic processing. Then an approximation is made that produces an expression of the form (*). First the following quantities are defined:

q(x_k) = Σ_{J∋k} t_NJ, r'(x_k) = (Σ_{J∋k} t_NJ)(Σ_{I∋k} Pf_I), where x_k is the k-th element of D.

r'(x) is obtained by replacing every IJ-th element by the NJ-th element, that is, by the replacement t_IJ → t_NJ ...

... flies, which contradict one another, no conclusive decision can be made about whether a bird with a broken wing can fly. But if we introduce a superiority relation > with r' > r, then we can indeed conclude that the bird cannot fly. The superiority relation is required to be acyclic.
It is not possible in this short paper to give a complete formal description of the logic. However, we hope to give enough information about the logic to make the discussion intelligible. We refer the reader to [19,6,17,2] for more thorough treatments.

Argumentation Semantics for Defeasible Logics


A rule r consists of its antecedent (or body) A(r), which is a finite set of literals, an arrow, and its consequent (or head) C(r), which is a literal. Given a set R of rules, we denote the set of all strict rules in R by R_s, the set of strict and defeasible rules in R by R_sd, the set of defeasible rules in R by R_d, and the set of defeaters in R by R_dft. R[q] denotes the set of rules in R with consequent q. If q is a literal, ~q denotes the complementary literal (if q is a positive literal p then ~q is ¬p; and if q is ¬p, then ~q is p). A defeasible theory D is a triple (F, R, >) where F is a finite set of facts, R a finite set of rules, and > a superiority relation on R. A conclusion of D is a tagged literal; in our original defeasible logic there are two tags, ∂ and Δ, that may have positive or negative polarity (further tags for defeasible logic variants will be introduced shortly):

+Δq, which is intended to mean that q is definitely provable in D (i.e., using only facts and strict rules).
−Δq, which is intended to mean that we have proved that q is not definitely provable in D.
+∂q, which is intended to mean that q is defeasibly provable in D.
−∂q, which is intended to mean that we have proved that q is not defeasibly provable in D.

Provability is based on the concept of a derivation (or proof) in D = (F, R, >). A derivation is a finite sequence P = (P(1), ..., P(n)) of tagged literals satisfying four conditions (which correspond to inference rules for each of the four kinds of conclusion). Here we briefly state the conditions for positive defeasible conclusions [6]. The structure of the inference rules for negative literals is the same as that for the corresponding positive ones, but the conditions are negated in a certain sense. The purpose of the −Δ and −∂ inference rules is to establish that it is not possible to prove the corresponding positive tagged literal.
These rules are defined in such a way that all the possibilities for proving +∂q (for example) are explored and shown to fail before −∂q can be concluded. Thus conclusions with these tags are the outcome of a constructive proof that the corresponding positive conclusion cannot be obtained. In this paper we present the inference rules in a simplified form instead of the general one; in particular, we do not consider the superiority relation. In fact, in [1] we proved that the superiority relation can be simulated in terms of the other elements of defeasible logic, and we provided an effective translation transforming a defeasible theory into an equivalent one with an empty superiority relation. The use of the simplified conditions makes our formal considerations much simpler. In the following, P(1..i) denotes the initial part of the sequence P of length i.
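As a concrete illustration (ours, not from the paper), a defeasible theory with an empty superiority relation can be encoded directly, with the string '~p' standing for the complement of 'p':

```python
from dataclasses import dataclass

# Hypothetical encoding of a defeasible theory D = (F, R, >) with an
# empty superiority relation. Literals are strings; '~p' is the
# complement of 'p'. Rule kinds: 'strict', 'defeasible', 'defeater'.

@dataclass(frozen=True)
class Rule:
    body: frozenset          # antecedent A(r): a finite set of literals
    head: str                # consequent C(r): a single literal
    kind: str = 'defeasible'

@dataclass
class Theory:
    facts: frozenset         # the finite set of facts F
    rules: tuple             # the finite set of rules R

    def rules_for(self, q):
        """R[q]: the rules in R with consequent q."""
        return [r for r in self.rules if r.head == q]

def complement(q):
    """~q: the complementary literal of q."""
    return q[1:] if q.startswith('~') else '~' + q
```

For instance, the bird example would use the fact `bird` and a defeasible rule from `{'bird'}` to `flies`.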

+∂: If P(i+1) = +∂q then either
  (1) +Δq ∈ P(1..i) or
  (2.1) ∃r ∈ R_sd[q] ∀a ∈ A(r): +∂a ∈ P(1..i) and
  (2.2) −Δ~q ∈ P(1..i) and
  (2.3) ∀s ∈ R[~q] ∃a ∈ A(s): −∂a ∈ P(1..i)

−∂: If P(i+1) = −∂q then
  (1) −Δq ∈ P(1..i) and
  (2.1) ∀r ∈ R_sd[q] ∃a ∈ A(r): −∂a ∈ P(1..i) or
  (2.2) +Δ~q ∈ P(1..i) or
  (2.3) ∃s ∈ R[~q] such that ∀a ∈ A(s): +∂a ∈ P(1..i)
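Read operationally, these conditions suggest a bottom-up computation. The following sketch is ours, not the authors' implementation: it covers only facts and defeasible rules (no strict rules, defeaters, or superiority), treats the facts as the definite part, and computes −∂ by fixpoint, which is adequate only for loop-free theories since a genuine −∂ conclusion is a constructive failure proof.

```python
# Sketch (ours): fixpoint computation of +∂ / −∂ for a theory with
# facts and defeasible rules only. Rules are (body_set, head) pairs;
# '~p' denotes the complement of 'p'.

def neg(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def defeasible_conclusions(facts, rules):
    plus, minus = set(facts), set()          # +∂ and −∂ tagged literals
    lits = set(facts)
    for body, head in rules:
        lits |= set(body) | {head}
    lits |= {neg(l) for l in set(lits)}
    changed = True
    while changed:
        changed = False
        for q in sorted(lits):
            if q in plus or q in minus:
                continue
            supp = [b for b, h in rules if h == q]        # rules for q
            att = [b for b, h in rules if h == neg(q)]    # rules for ~q
            # +∂q: a supporting rule fires (2.1), ~q is not definite
            # (2.2), and every attacking rule is discarded (2.3)
            if (any(b <= plus for b in supp)
                    and neg(q) not in facts
                    and all(b & minus for b in att)):
                plus.add(q)
                changed = True
            # −∂q: q is not definite, and either no supporting rule can
            # ever fire, ~q is definite, or an attacking rule fires
            elif q not in facts and (
                    all(b & minus for b in supp)
                    or neg(q) in facts
                    or any(b <= plus for b in att)):
                minus.add(q)
                changed = True
    return plus, minus
```

For example, with the fact bird and the rules bird ⇒ flies and penguin ⇒ ¬flies, the computation yields +∂flies: penguin cannot be proved (−∂penguin), so the attacking rule is discarded.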

G. Governatori et al.

Let us work through the condition for +∂. To show that q is provable defeasibly we have two choices: (1) we show that q is already definitely provable; or (2) we argue using the defeasible part of D as well. In particular, we require that there be a strict or defeasible rule with head q which can be applied (2.1). But now we need to consider possible "attacks", that is, reasoning chains in support of ~q. To be more specific: to prove q defeasibly we must show that ~q is not definitely provable (2.2). And finally (2.3), we need to show that all rules with head ~q are inapplicable. In [2] we presented a framework for defeasible logic, in which we showed how to tune defeasible logic to define variants able to deal with different nonmonotonic phenomena. In particular, we proposed different ways in which conclusions can be obtained. One of the properties most discussed in the literature is whether ambiguities should be propagated or blocked. In the logic above, ambiguities are blocked; in the following we introduce an ambiguity propagating variant. The result of [1] can easily be extended to this variant; thus the appropriate inference rules will be presented in simplified form, without reference to the superiority relation. The first step is to determine when a literal is "supported" in a defeasible theory D. Support for a literal p (+Σp) consists of a chain of reasoning that would lead us to conclude p in the absence of conflicts. This leads to the following inference conditions:

+Σ: If P(i+1) = +Σp then
  (1) p ∈ F, or
  (2) ∃r ∈ R_sd[p] ∀a ∈ A(r): +Σa ∈ P(1..i)

−Σ: If P(i+1) = −Σp then
  (1) p ∉ F, and
  (2) ∀r ∈ R_sd[p] ∃a ∈ A(r): −Σa ∈ P(1..i)

A literal that is defeasibly provable is supported, but a literal may be supported even though it is not defeasibly provable. Thus support is a weaker notion than defeasible provability. A literal p is ambiguous if there is a chain of reasoning that supports the conclusion that p is true, and another that supports that ¬p is true. We can achieve ambiguity propagating behaviour by making a minor change to the inference condition for +∂: instead of requiring that every attack on p be inapplicable in the sense of −∂, we now require that each rule for ~p be inapplicable because one of its antecedents cannot be supported. Thus we impose a stronger condition for proving a literal defeasibly. Here is the formal definition:
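The support tag has a particularly simple operational reading: +Σ is the least fixpoint of forward rule application with all conflicts ignored. A sketch under the same simplifications as before (facts and defeasible rules only; the function names are ours):

```python
# Sketch (ours): +Σ as a least fixpoint over (body_set, head) rules,
# with '~p' the complement of 'p'. Conflicts are deliberately ignored.

def support(facts, rules):
    supported = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            # a rule fires as soon as all its antecedents are supported
            if head not in supported and body <= supported:
                supported.add(head)
                changed = True
    return supported

def ambiguous(lit, supported):
    """A literal is ambiguous if both it and its complement are supported."""
    comp = lit[1:] if lit.startswith('~') else '~' + lit
    return lit in supported and comp in supported
```

For instance, with the fact e and the rules e ⇒ p and e ⇒ ~p, both p and ~p are supported, so p is ambiguous even though neither is defeasibly provable.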

+∂_ap: If P(i+1) = +∂_ap q then either
  (1) +Δq ∈ P(1..i) or
  (2.1) ∃r ∈ R_sd[q] ∀a ∈ A(r): +∂_ap a ∈ P(1..i) and
  (2.2) −Δ~q ∈ P(1..i) and
  (2.3) ∀s ∈ R[~q] ∃a ∈ A(s): −Σa ∈ P(1..i)

−∂_ap: If P(i+1) = −∂_ap q then
  (1) −Δq ∈ P(1..i) and
  (2.1) ∀r ∈ R_sd[q] ∃a ∈ A(r): −∂_ap a ∈ P(1..i) or
  (2.2) +Δ~q ∈ P(1..i) or
  (2.3) ∃s ∈ R[~q] such that ∀a ∈ A(s): +Σa ∈ P(1..i)

Consider the theory containing, for each i = 0, ..., n, the rules

  ⇒ ¬a_i    b_{i−1} ⇒ a_i    a_i ⇒ ¬b_i

and the fact b_0. This theory produces the following conclusions: −∂a_i, −∂¬a_i, +∂b_i, −∂¬b_i, for i = 0, ..., n. For each i, consider the following arguments

  A_i : true ⇒ ¬a_i
  B_i : true ⇒ b_{i−1} ⇒ a_i ⇒ ¬b_i

and their subarguments. Notice that:
- each argument A_i is attacked by B_i at a_i;
- each argument B_i is attacked by B_{i−1} at b_{i−1}.
Eventually, both A_i and B_i will be rejected, since neither can defeat the other, but this cannot be done until the status of b_{i−1} is determined. As noted above, this depends on B_{i−1}. Thus the situation incorporates some sequentiality, where B_{i−1} must be resolved before resolving B_i, and this suggests that a characterization of RArgs_D must be iterative, even after all the justified literals have been identified.

4 Related Work

[16] proposes an abstract defeasible reasoning framework that is achieved by mapping elements of defeasible reasoning into the default reasoning framework of [7]. While this framework is suitable for developing new defeasible reasoning languages, it is not appropriate for characterizing defeasible logic because: - [7] does not address Kunen's semantics of logic programs which provides a characterization of failure-to-prove in defeasible logic [18].


- The correctness of the mapping needs to be established if [16] is to be applied to an existing language like defeasible logic. In fact, the representation of priorities is inappropriate for defeasible logic.

Two more systems characterized by Dung's grounded semantics, even if developed with different design choices and motivations, are those proposed by Simari and Loui [23] and by Prakken and Sartor [21,20]. Both are similar to the ambiguity blocking variant of defeasible logic, but their superiority relations are different: the first is argument based instead of rule based, while the second does not deal with teams of rules. The abstract argumentation framework of [24] addresses both strict and defeasible rules, but not defeaters. However, the treatment of strict rules in defeasible arguments is different from that of defeasible logic, and there is no concept of team defeat. There are structural similarities between the definitions of inductive warrant and warrant in [24] and J_i and JArgs_D, but they differ in that acceptability is monotonic in S whereas the corresponding definitions in [24] are antitone. The resulting semantics is not sceptical, and is more closely related to stable semantics than to Kunen's semantics. The framework does have a notion of ultimately defeated argument similar to our rejected arguments, but the definition is not iterative, possibly because the framework does not have a directly sceptical semantics. Among other contributions, [8] provides a sceptical argumentation-theoretic semantics and shows that LPwNF - which is weaker than, but very similar to, defeasible logic [5] - is sound with respect to this semantics. However, neither LPwNF nor defeasible logic is complete with respect to this semantics.

5 Conclusion

Defeasible logic is a simple but efficient rule-based nonmonotonic logic. So far defeasible logic has been defined only proof-theoretically. In this paper we presented an argumentation-theoretic semantics for defeasible logic and for an ambiguity propagating variant. This paper is part of our ongoing effort to establish close connections between defeasible reasoning and theories of argumentation.

Acknowledgments. We thank Alejandro Garcia for fruitful discussions on defeasible logic and argumentation. This research was supported by the Australian Research Council under Large Grant No. A49803544.

References

1. G. Antoniou, D. Billington, G. Governatori and M.J. Maher. Representation Results for Defeasible Logic. Technical Report, CIT, Griffith University, 2000.
2. G. Antoniou, D. Billington, G. Governatori and M.J. Maher. A Flexible Framework for Defeasible Logic. In Proc. American National Conference on Artificial Intelligence (AAAI-2000).


3. G. Antoniou, D. Billington and M.J. Maher. On the analysis of regulations using defeasible rules. In R.H. Sprague (ed.), Proc. of the 32nd Annual Hawaii International Conference on System Sciences. IEEE Press, 1999.
4. G. Antoniou, D. Billington, M.J. Maher and A. Rock. Efficient Defeasible Reasoning Systems. In Proc. Australian Workshop on Computational Logic, 2000.
5. G. Antoniou, M.J. Maher and D. Billington. Defeasible Logic versus Logic Programming without Negation as Failure. Journal of Logic Programming 42 (2000): 47-57.
6. D. Billington. Defeasible Logic is Stable. Journal of Logic and Computation 3 (1993): 379-400.
7. A. Bondarenko, P.M. Dung, R. Kowalski and F. Toni. An Abstract, Argumentation-Theoretic Framework for Default Reasoning. Artificial Intelligence 93 (1997): 63-101.
8. Y. Dimopoulos and A. Kakas. Logic Programming without Negation as Failure. In Proc. ICLP-95, MIT Press, 1995.
9. P.M. Dung. An Argumentation Semantics for Logic Programming with Explicit Negation. In Proceedings of the Tenth Logic Programming Conference. MIT Press, Cambridge, 616-630.
10. P.M. Dung. On the Acceptability of Arguments and Its Fundamental Role in Nonmonotonic Reasoning, Logic Programming, and n-Person Games. Artificial Intelligence 77 (1995): 321-357.
11. G. Governatori and M.J. Maher. An Argumentation-Theoretic Characterization of Defeasible Logic. In W. Horn (ed.), ECAI 2000. Proceedings of the 14th European Conference on Artificial Intelligence. IOS Press, Amsterdam, 2000.
12. B.N. Grosof. Prioritized Conflict Handling for Logic Programs. In J. Maluszynski (ed.), Proc. Int. Logic Programming Symposium, 197-211. MIT Press, 1997.
13. B.N. Grosof, Y. Labrou and H.Y. Chan. A Declarative Approach to Business Rules in Contracts: Courteous Logic Programs in XML. In Proceedings of the 1st ACM Conference on Electronic Commerce (EC-99). ACM Press, 1999.
14. J.F. Horty. Some Direct Theories of Nonmonotonic Inheritance. In D.M. Gabbay, C.J. Hogger and J.A. Robinson (eds.), Handbook of Logic in Artificial Intelligence and Logic Programming, Vol. 3, 111-187. Oxford University Press, 1994.
15. H. Jakobovits and D. Vermeir. Robust Semantics for Argumentation Frameworks. Journal of Logic and Computation 9(2) (1999): 215-261.
16. R. Kowalski and F. Toni. Abstract Argumentation. Artificial Intelligence and Law 4 (1996): 275-296.
17. M.J. Maher, G. Antoniou and D. Billington. A Study of Provability in Defeasible Logic. In Proc. Australian Joint Conference on Artificial Intelligence, 215-226. LNAI 1502, Springer, 1998.
18. M.J. Maher and G. Governatori. A Semantic Decomposition of Defeasible Logics. In Proc. American National Conference on Artificial Intelligence (AAAI-99), 299-305.
19. D. Nute. Defeasible Logic. In D.M. Gabbay, C.J. Hogger and J.A. Robinson (eds.), Handbook of Logic in Artificial Intelligence and Logic Programming, Vol. 3, 353-395. Oxford University Press, 1994.
20. H. Prakken. Logical Tools for Modelling Legal Argument: A Study of Defeasible Reasoning in Law. Kluwer Academic Publishers, 1997.
21. H. Prakken and G. Sartor. Argument-based Extended Logic Programming with Defeasible Priorities. Journal of Applied and Non-Classical Logics 7 (1997): 25-75.
22. D.M. Reeves, B.N. Grosof, M.P. Wellman and H.Y. Chan. Towards a Declarative Language for Negotiating Executable Contracts. In Proceedings of the AAAI-99 Workshop on Artificial Intelligence in Electronic Commerce (AIEC-99). AAAI Press / MIT Press, 1999.
23. G.R. Simari and R.P. Loui. A Mathematical Treatment of Defeasible Reasoning and Its Implementation. Artificial Intelligence 53 (1992): 125-157.
24. G. Vreeswijk. Abstract Argumentation Systems. Artificial Intelligence 90 (1997): 225-279.

A Unifying Semantics for Causal Ramifications

Mikhail Prokopenko^1, Maurice Pagnucco^1, Pavlos Peppas^2, and Abhaya Nayak^1

^1 Computational Reasoning Group, Department of Computing, Macquarie University, NSW 2109, Australia
^2 Sisifou 27, Korinthos 20100, Greece

Abstract. A unifying semantic framework for different reasoning approaches provides an ideal tool to compare these competing alternatives. However, it has been shown recently that a pure preferential semantics alone is not capable of providing such a unifying framework. On the other hand, variants of preferential semantics augmented by additional structures on the state space have been successfully used to characterise some influential approaches to reasoning about action and causality. The primary aim of this paper is to provide an augmented preferential semantics that is general enough to unify two prominent frameworks for reasoning about action and causality - Sandewall's causal propagation semantics [4] and Thielscher's causal relationships approach [5]. There are indications that these and other augmented preferential semantical approaches can be unified into a general framework, providing the unified semantics that has been lacking so far.

1 Introduction

Preferential style semantics have always been seen as a critical step on the way towards a concise solution to the frame and ramification problems. However, it has been argued in recent literature that an explicit representation of causal information is required to solve these problems in a concise manner. It has been shown that some approaches demand a more complex semantics than a pure preferential semantics. For example, McCain and Turner's causal theory of action [1] was recently characterised by an augmented preferential semantics, using an appropriately constructed binary relation on states in addition to a preference relation [2]. This additional relation captured the causal context of action systems by translating individual causal laws into state transitions. Another causal theory of action - that of Thielscher [5] - has been characterised by a variant of an augmented preferential semantics [3]. Here the minimality component was complemented by a binary relation on states of higher dimension. The standard state-space of possible worlds was extended to a hyper-space, and action effects (including indirect ones) were traced in the hyper-space. Again, the purpose of these hyper-states was to supply extra context to the process of causal propagation. The hyper-space semantics [3] can clearly be seen to employ a component of minimal change coupled with causality. On the other hand, another rather general semantical approach - the causal propagation semantics proposed by Sandewall [4] - deals with causal ramifications without explicitly relying on the principle of minimal change. This work introduces a preferential style semantics, augmented with a causal transition relation on states, that is general enough to unify two of the above-mentioned frameworks for reasoning about action and causality - Sandewall's causal propagation semantics [4] and Thielscher's causal relationships approach [5]. This is achieved by observing that the principle of minimal change is hidden behind action invocation and causal propagation in both proposals.

R. Mizoguchi and J. Slaney (Eds.): PRICAI 2000, LNAI 1886, pp. 38-49, 2000. © Springer-Verlag Berlin Heidelberg 2000

2 Causal Propagation Semantics

The causal propagation semantics introduced by Sandewall [4] uses the following basic concepts. The set of possible states of the world, formed as a Cartesian product of the finite value sets of a finite number of state variables, is denoted by R. E is the set of possible actions. The causal propagation semantics extends a basic state transition semantics with a causal transition relation. The causal transition relation C is a non-reflexive relation on states in R. A state r is called stable if it does not have any successor s such that C(r, s); we will denote the set of stable states {r ∈ R : there is no s ∈ R with C(r, s)} by S_C. Another component, R_C ⊆ S_C, is a set of admitted states. Another important concept, introduced by Sandewall, is an action invocation relation G(e, r, r'), where e ∈ E is an action, r is the state where the action e is invoked, and r' is "the new state where the instrumental part of the action has been executed" [4]. In other words, the state r' satisfies the direct effects of the action e. It is required that every action is always invokable; that is, for every e ∈ E and r ∈ R there must be at least one r' such that G(e, r, r') holds. Of course, this requirement is not meant to guarantee that every action results in an admitted state - on the contrary, the intention is to trace the indirect effects of the action, possibly reaching an admitted (and, therefore, stable) state. A finite transition chain (the infinite case is omitted) for a state w ∈ R_C and an action e ∈ E is a finite sequence of states r_1, r_2, ..., r_k, where G(e, w, r_1) and C(r_i, r_{i+1}) for every i, 1 ≤ i < k, and where r_k is a stable state. The last element of a finite transition chain is called a result state of action e performed in state w. These basic concepts define an action system as a tuple (R, E, C, R_C, G). The following definition strengthens action systems based on the causal propagation semantics.

Definition 1.
If three states w, p, q are given, we say that the pair p, q respects w, denoted O_w(p, q), if and only if p(f) ≠ q(f) implies p(f) = w(f) for every state variable f that is defined in R, where r(f) is the valuation of variable f in state r. An action system (R, E, C, R_C, G) is called respectful if and only if, for every w ∈ R_C and every e ∈ E, w is respected by every pair r_i, r_{i+1} in every transition chain for the state w, and the last element of the chain is a member of R_C. According to Sandewall [4], respectful action systems are intended to ensure that in each transition there cannot be changes in state variables which have changed previously, upon invocation or in the causal propagation sequence. This requirement, of course, guarantees that a result state is always consistent with the direct effects of the action (which cannot be cancelled by indirect ones), and that there are no cycles in transition chains. As with many other state transition action systems, the intention is to characterise a result state in terms of an initial state w and action a, without "referring explicitly to the details of the intermediate states" [4]. In other words, it is desirable to define a selection function Res(w, a). For a respectful action system (R, E, C, R_C, G), a selection function can be given as the set of result states of action a performed in state w.
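As an illustration (ours, with hypothetical function names), transition chains, result states, and the respects condition can be sketched as follows, with `invoke` standing for the relation G and `causal` returning the C-successors of a state:

```python
# Sketch (ours): collect the stable result states reached by finite
# transition chains G(e, w, r1), C(r1, r2), ..., ending in a stable r_k.

def result_states(invoke, causal, e, w, max_len=100):
    """invoke(e, w): states satisfying the direct effects of e in w;
    causal(r): the set of causal successors of r (empty iff r is stable)."""
    results = set()

    def follow(r, steps):
        succs = causal(r)
        if not succs:                # r is stable: the chain ends here
            results.add(r)
        elif steps > 0:              # follow every causal transition
            for s in succs:
                follow(s, steps - 1)

    for r1 in invoke(e, w):
        follow(r1, max_len)
    return results

def respects(w, p, q):
    """The pair (p, q) respects w: every state variable f with
    p(f) != q(f) satisfies p(f) = w(f). States are dicts over variables."""
    return all(p[f] == w[f] for f in w if p[f] != q[f])
```

In a respectful action system, every consecutive pair in every chain from w must satisfy `respects(w, ., .)`, so a variable can change at most once along a chain.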

3 Causal Relationships Approach

Let F be a finite set of symbols from a fixed language B, called fluent names. A fluent literal is either a fluent name f ∈ F or its negation, denoted ¬f. Let L_F be the set of all fluent literals defined over the set of fluent names F. We will adopt the following notation from Thielscher [5]. If e ∈ L_F, then |e| denotes its affirmative component, that is, |f| = |¬f| = f, where f ∈ F. This notation can be extended to sets of fluent literals as follows: |S| = {|l| : l ∈ S}. By the term state we mean a maximal consistent set of fluent literals. We will denote the set of all states by W, and call the number m of fluent names in F the dimension of W. By [φ] we denote the set of all states consistent with the sentence φ ∈ B (i.e., [φ] = {w ∈ W : w ⊢ φ}). Domain constraints are sentences which have to be satisfied in all states. Thielscher's [5] causal theory of action consists of two main components: action laws, which describe the direct effects of an action performed in a given state, and causal relationships, which determine the indirect effects of an action. Every action law contains a condition C, a set of fluent literals all of which must be contained in an initial state where the action is intended to be applied, and a (direct) effect E, also a set of fluent literals, all of which must hold in the resulting state after the action has been applied. An action may result in a number of state transitions.

Definition 2. Let F be the set of fluent names and let A be a finite set of symbols called action names, such that F ∩ A = ∅. An action law is a triple (C, a, E) where C, called the condition, and E, called the effect, are individually consistent sets of fluent literals composed of the very same set of fluent names (i.e., |C| = |E|), and a ∈ A. If w is a state, then an action law a = (C, a, E) is applicable in w iff C ⊆ w. The application of a to w yields the state (w \ C) ∪ E (where \ denotes set subtraction).
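Definition 2 translates directly to set operations. The sketch below is ours, with states as frozensets of fluent literals and a hypothetical loading action as the example law:

```python
# Sketch (ours): applicability and application of an action law
# (C, a, E), with states as frozensets of fluent literals.

def applicable(law, w):
    C, _name, _E = law
    return C <= w                    # the condition must hold: C ⊆ w

def apply_law(law, w):
    C, _name, E = law
    return (w - C) | E               # yields (w \ C) ∪ E

# Hypothetical example law: loading flips '~loaded' to 'loaded'.
load = (frozenset({'~loaded'}), 'load', frozenset({'loaded'}))
state = frozenset({'alive', '~loaded'})
```

Note that the requirement |C| = |E| is what guarantees that removing C and adding E maps a maximal consistent state to a maximal consistent state.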
Causal relationships are specified as e causes p if Φ, where e and p are fluent literals and