Intelligence and Security Informatics: First NSF/NIJ Symposium, ISI 2003, Tucson, AZ, USA, June 2-3, 2003, Proceedings (Lecture Notes in Computer Science, 2665) 354040189X, 9783540401896

Since the tragic events of September 11, 2001, academics have been called on for possible contributions to research relating to national (and possibly international) security.


English Pages 404 [406] Year 2003


Table of contents :
Intelligence and Security Informatics
Preface
ISI 2003 Organizing Committee
Table of Contents
Using Support Vector Machines for Terrorism Information Extraction
1 Introduction
1.1 Motivation
1.2 Research Objectives and Contributions
1.3 Paper Outline
2 Related Work
3 Problem Definition
4 Proposed Method
4.1 Overview
4.2 Feature Acquisition
4.3 Extraction Model Construction
5 Experiments and Results
5.1 Datasets
5.2 Results
6 Conclusions
References
Criminal Incident Data Association Using the OLAP Technology
1 Introduction
2 Brief Review of OLAP and OLAP-Based Data Mining
3 Method
3.1 Rationale
3.2 Definitions
3.3 Outlier Score Function (OSF) and the Crime Association Method
4 Application
4.1 Attribute Selection
4.2 Evaluation Criteria
4.3 Result and Comparison
5 Conclusion
References
Names: A New Frontier in Text Mining
1 Introduction
2 Database Name Matching
3 Named Entity Extraction
4 Intra- and Inter-document Coreference
5 Name Text Mining Support for Visualization, Link Analysis, and Deception Detection
6 Procedure for a Name Extraction and Matching Text Mining Module
7 Research Issues
8 Conclusion
References
Appendix: Comparison of LAS MetaMatch Search Engine Returns with SQL-Soundex Returns
Web-Based Intelligence Reports System
1 System Concept
2 Objectives
3 Relationships to Other Systems and Sources of Information
3.1 Calls for Service
3.2 Messages
4 PPDR System Architecture
5 PPDR Structural WEB Design
5.1 System Security
5.2 Regular Reports
5.3 AD HOC Reports
5.4 Public Reports
5.5 CAD/MDT Messages Reports
5.6 Updater's Block
5.7 Administrative Functionality
Fig. 9. Edit and Update Response Time
6 Database Solutions
6.1 Multiple Databases
6.2 Summary Tables
6.3 File Storage and Messages MetaDatabase
7 Data Transition
8 Conclusion
References
Authorship Analysis in Cybercrime Investigation
1 Introduction
2 Literature Review
2.1 Authorship Analysis
2.2 Feature Selection
2.3 Techniques for Authorship Analysis
3 Applying Authorship Analysis in Cybercrime Investigation
4 Experiment Evaluation
4.1 Testbed
4.2 Implementation
4.3 Experiment Design
4.4 Results & Analysis
5 Conclusion & Future Work
References
Behavior Profiling of Email
1 Introduction
1.1 Applying Behavior-Based Detection to Email Sources
1.2 EMT as an Analyst Workbench for Interactive Intelligence Investigations
2 EMT Features
2.1 Attachment Models
2.2 Email Content and Classification
2.3 Account Statistics and Alerts
2.4 Group Communication Models: Cliques
3 Conclusion
References
Detecting Deception through Linguistic Analysis
1 Introduction
2 Background
3 Method
4 Results
4.1 Individual Cue Analysis
4.2 Cluster Analysis by C4.5
5 Discussion
Acknowledgement
References
Appendix: Individual Cue Comparisons by Modality and Deception Condition*
A Longitudinal Analysis of Language Behavior of Deception in E-mail
1 Introduction
2 Theoretical Foundation and Hypotheses
2.1 Media Richness Theory
2.2 Interpersonal Deception Theory (IDT)
2.3 Interpersonal Adaptation Theory (IAT)
2.4 Hypotheses
3 Method
3.1 Experiment Design
3.2 Independent Variables
3.3 Dependent Variables
4 Results
4.1 Repeated Measures Analyses
4.2 Analyses of Variance
5 Discussion
6 Conclusion
Acknowledgement & Disclaimer. Portions of this research were supported by funding from the U.S. Air Force Office of Scientific Research under the U.S. Department of Defense University Research Initiative (Grant #F49620-01-1-0394). The views, opinions, and/or findings in this report are those of the authors and should not be construed as an official Department of Defense position, policy, or decision.
References
Evacuation Planning: A Capacity Constrained Routing Approach
1 Introduction
2 Problem Formulation
3 Capacity Constrained Routing Approach
3.1 Single-Route Capacity Constrained Routing Approach
3.2 Multiple-Route Capacity Constrained Routing Approach
4 Comparison and Cost Models of the Two Algorithms
5 Solution Quality and Performance Evaluation
5.1 Experiment Design
5.2 Experiment Setup and Results
6 Conclusion and Future Work
References
Locating Hidden Groups in Communication Networks Using Hidden Markov Models
1 Introduction
1.1 Example
2 Probabilistic Setup
2.1 The Maximum Likelihood Approach
2.2 General Maximum Likelihood Formulation
3 Concluding Remarks
References
Automatic Construction of Cross-Lingual Networks of Concepts from the Hong Kong SAR Police Department
1 Introduction
2 Automatic Construction of Parallel Corpus
2.1 Title Alignment
2.2 Experiment
3 A Corpus-Based Approach: Automatic Cross-Lingual Concept Space Generation
3.1 Automatic English Phrase Extraction
3.2 Chinese Phrase Extraction
3.2.1 Automatic Phrase Selection
3.2.2 Co-occurrence Weight
3.2.3 The Hopfield Network Algorithm
4 Concept Space Evaluation
4.1 Experimental Design
4.2 Experimental Result
4.3 Evaluation Provided by 10 Graduate Subjects
4.4 Translation Ability of the Concept Space
5 Conclusion
References
Decision Based Spatial Analysis of Crime
1 Introduction
2 Problem Statement
3 Model Development
3.1 Spatial Choice Patterns
3.2 Specification of Prior Probability
3.3 Spatial Misspecification
3.4 Clustering Methods
4 Application of Spatial Choice Model for Real Crime Analysis
4.1 Crime Data Set
4.2 Feature Selection by Similarities
4.3 Model Estimation and Prediction
4.4 Model Comparisons
5 Conclusion
References
CrimeLink Explorer: Using Domain Knowledge to Facilitate Automated Crime Association Analysis
1 Introduction
2 Literature Review
2.1 Link Analysis
2.2 Domain Knowledge Incorporation Approaches
2.3 Shortest-Path Algorithms
3 System Design
3.1 Crime Incident Reports
3.2 Concept Space Approach
3.3 Heuristic Approach
3.4 Association Path Search
3.5 User Interface
4 System Evaluation
5 Conclusions and Future Work
References
A Spatio Temporal Visualizer for Law Enforcement
1 Introduction
2 Background and Motivation
3 Literature Review
3.1 Periodic Data Visualization Tools
3.2 Timeline Tools
3.3 Crime Mapping Tools
4 Features of STV
4.1 Technologies Used
4.2 Components
Control Panel. The control panel (figure 1.c) maintains central control over temporal aspects of the data.
Periodic View. The main purpose of the periodic view (figure 1.d) is to give the crime analyst a quick and easy way to search for crime patterns.
Timeline View. The timeline view (figure 1.a) is a 2D timeline with a hierarchical display of the data in the form of a tree.
GIS View. The GIS view (figure 1.b) displays a map of the city of Tucson on which incidents can be represented as points of a specific color.
5 A Crime Analysis Example
6 Lessons Learned
6.1 Current Strengths of STV
6.2 Areas of Improvement for STV
7 Conclusions and Future Directions
References
Tracking Hidden Groups Using Communications
1 Introduction
2 Detecting Frequent Patterns
3 Visual Exploration
4 Related Work
5 Conclusion
References
Examining Technology Acceptance by Individual Law Enforcement Officers: An Exploratory Study
1 Introduction
2 Literature Review and Motivation
3 Overview of COPLINK Technology
4 Research Model and Hypotheses
5 Instrument Development and Validation
6 Data Analysis Results
7 Discussion
References
Appendix: Listing of Questions Items
"Atrium" - A Knowledge Model for Modern Security Forces in the Information and Terrorism Age
1 Introduction
2 The "Atrium" as Colleague and Institutional Memory
3 The Core – Main Operational Knowledge Creation and Application Hierarchies
4 The Task Forces – Responses in Knowledge Creation and Security Applications
5 Advantages – Surprise-Oriented, Scalable Knowledge-Enabled Institutions
References
Untangling Criminal Networks: A Case Study
1 Introduction
2 Related Work
2.1 Network Creation
2.2 Structural Analysis
Discovery of Patterns of Interaction. Patterns of interaction between subgroups can be discovered using an SNA approach called blockmodel analysis [2]. Given a partitioned network, blockmodel analysis determines the presence or absence of an association between a pair of subgroups by comparing the density of the links between them at a predefined threshold value. In this way, blockmodeling introduces summarized individual interaction details into interactions between groups so that the overall structure of the network becomes more apparent.
2.3 Network Visualization
3 System Architecture
3.1 Network Creation Component
3.2 Structural Analysis Component
Centrality Measures. We used all three centrality measures to identify central members in a given subgroup. The degree of a node could be obtained by counting the total number of links it had to all the other group members. A node's score of betweenness and closeness required the computation of shortest paths (geodesics) using Dijkstra's algorithm [7].
Blockmodeling. At a given level of a cluster hierarchy, we compared between-group link densities with the network's overall link density to determine the presence or absence of between-group relationships.
3.3 Network Visualization Component
4 Case Study
4.1 Data Preparation
4.2 Result Validation
Detection of Subgroups. Since our system could partition a network into subgroups at different levels of granularity, we selected the partition that the crime investigators considered to be closest to their knowledge of the network organizations. The result showed that our system could detect subgroups from a network correctly:
4.3 Usefulness of System
5 Conclusions and Future Work
Acknowledgement. This project has primarily been funded by the National Science Foundation (NSF), Digital Government Program, "COPLINK Center: Information and Knowledge Management for Law Enforcement," #9983304, July 2000–June 2003, and the NSF Knowledge Discovery and Dissemination (KDD) Initiative. Special thanks go to Dr. Ronald Breiger from the Department of Sociology at the University of Arizona for his kind help with the initial design of the research framework. We would like also to thank the following people for their support and assistance during the entire project development and evaluation processes: Dr. Daniel Zeng, Michael Chau, and other members at the University of Arizona Artificial Intelligence Lab. We also appreciate important analytical comments and suggestions from personnel from the Tucson Police Department: Lieutenant Jennifer Schroeder, Sergeant Mark Nizbet of the Gang Unit, Detective Tim Petersen, and others.
References
Addressing the Homeland Security Problem: A Collaborative Decision-Making Framework
1 Introduction
2 Research Foundations
3 Argument Structure and Connectionism
3.1 Argument Structure
3.2 The Connectionist Formalism
4 Inference Rules
4.1 Transitive Sequence
4.2 Common Successor
4.3 Common Predecessor
4.4 Argument Consistency
5 Computational Analysis
5.1 An Example
6 Effects of Information and Argument Reliability
7 Conclusion
References
Collaborative Workflow Management for Interagency Crime Analysis
1 Introduction
2 Characteristics of Crime Analysis Process: A Field Study
2.1 Crime Analysis Processes in a Major Police Department
2.2 The Need for Greater Support in Collaborative Workflow
3 A Conceptual Model of Collaborative Workflow for Interagency Crime Analysis
4 A Collaborative Workflow Management System for Interagency Crime Analysis
4.1 A Three-Layer Framework
4.2 Web Services Enabled System Architecture for Interagency Crime Analysis Workflow
5 Event-Based Workflow and Event Management Language
5.1 Meta-level and Instance-Level Workflow Models
5.2 A Workflow Event Language and Associated Operators
5.3 Uniqueness of Event-Based Workflows
6 Conclusions
References
COPLINK Agent: An Architecture for Information Monitoring and Sharing in Law Enforcement
1 Introduction
2 Research Background
2.1 Information Systems in the Law Enforcement Domain
2.2 Information Monitoring and Sharing
3 Research Questions
4 COPLINK Agent System Architecture
4.1 System Architecture Overview
4.2 Searching and Monitoring Module
4.3 Collaboration Module
4.4 Alerting Module
5 Sample User Sessions with COPLINK Agent
5.1 Searching and Collaborating
5.2 Information Monitoring
5.3 Managing Search Sessions
6 Evaluation
6.1 Methodology
6.2 Participants
6.3 Data Collection Procedures
6.4 Summary of Evaluation Study
6.5 Discussions
7 Conclusions and Future Directions
Acknowledgement. The work described in this report was substantially supported by the following grants: (1) NSF Digital Government Program, "COPLINK Center: Information and Knowledge Management for Law Enforcement," #9983304, July 2000–June 2003; (2) National Institute of Justice, "COPLINK: Database Integration and Access for a Law Enforcement Intranet," July 1997–January 2000; (3) NSF/CISE/CSS, "An Intelligent CSCW Workbench: Personalization, Visualization, and Agents," #9800696, June 1998–June 2001. We also would like to thank all of the personnel from TPD who participated in this study. In particular, we would like to thank Lt. Jenny Schroeder, Det. Tim Petersen, and Dan Casey. Lastly, we also would like to thank members of the COPLINK Team at the University of Arizona and Knowledge Computing Corporation (KCC) for their support.
References
Active Database Systems for Monitoring and Surveillance
1 Introduction
2 Background and Related Research
3 The Proposal
3.1 Extended Triggers
3.2 Implementation
4 Conclusion and Further Research
References
Integrated "Mixed" Networks Security Monitoring - A Proposed Framework
1 Introduction
2 Security Overview
3 Conceptual Model
3.1 Obtainment of Security Data
3.2 Standardization of Data
3.3 Calculation of the Overall System Score
3.4 Structure of Web-Based Interface
4 Conclusions
References
Bioterrorism Surveillance with Real-Time Data Warehousing
1 The Threat of Bioterrorism
2 Bioterrorism Surveillance Systems
2.1 Multidimensional Indicators
2.2 Real-Time Information
2.3 Pattern Recognition and Alarm Thresholds
3 The Decision Making Context
4 Flash Data Warehousing
5 Demonstration Bioterrorism Surveillance System in Florida
5.1 Bioterrorism Threat Indicators
5.2 Real-Time Data Feeds
5.3 Pattern Recognition and Alarm Thresholds
5.4 Surveillance Dashboards
6 Conclusions
References
Privacy Sensitive Distributed Data Mining from Multi-party Data
1 Introduction
2 Privacy Preserving Correlation Computation and Orthogonal Matrices
3 Random Projection Matrices for Correlation Computation
4 Computing Correlation from Distributed Data
5 Experimental Results
6 Conclusions and Future Work
References
PROGENIE: Biographical Descriptions for Intelligence Analysis
1 Introduction
2 Motivation and Relevance
3 System Description
4 Final Remarks
References
Scalable Knowledge Extraction from Legacy Sources with SEEK
1 Introduction
2 Overview of the SEEK Approach
3 Conclusion
References
"TalkPrinting": Improving Speaker Recognition by Modeling Stylistic Features
1 Introduction
2 Overview of Task and Baseline System
3 High-Level Speaker Features
4 Results
5 Effect of Test Segment Duration
6 Conclusions
Acknowledgments. This work was funded by a KDD supplement to NSF IRI-9619921. We thank Gary Kuhn for helpful discussion and technical suggestions.
References
Emergent Semantics from Users' Browsing Paths
1 Introduction
2 Our Approach
3 Preliminary Results
References
Designing Agent99 Trainer: A Learner-Centered, Web-Based Training System for Deception Detection
1 Introduction
2 Background
3 System Design and Development
4 Experiment
5 Results and Discussion
References
Training Professionals to Detect Deception
1 Introduction
2 Study Design and Procedures
3 Findings
4 Discussion
References
An E-mail Monitoring System for Detecting Outflow of Confidential Documents
1 Introduction
2 Our Approach
3 Experimental Results
4 Conclusions
Acknowledgements. The present research was conducted by the research fund of Dankook University in 2003.
References
Intelligence and Security Informatics: An Information Economics Perspective
1 Introduction
2 Incentives and Credibility of Information Sources
3 Conclusion
References
An International Perspective on Fighting Cybercrime
1 Introduction
2 An Overview of Cybercrime
3 Fighting Cybercrime in Different Countries
3.1 Within-Country Strategy
3.2 Across-Country Strategy: Collaborative Fighting of Cybercrime
4 Recommendations
5 Conclusions and the Future
References
Hiding Traversal of Tree Structured Data from Untrusted Data Stores
Criminal Record Matching Based on the Vector Space Model
Database Support for Exploring Criminal Networks
Hiding Data and Code Security for Application Hosting Infrastructure
Secure Information Sharing and Information Retrieval Infrastructure with GridIR
Semantic Hacking and Intelligence and Security Informatics (Extended Abstract)
References
Author Index

Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen

2665


Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo

Hsinchun Chen Richard Miranda Daniel D. Zeng Chris Demchak Jenny Schroeder Therani Madhusudan (Eds.)

Intelligence and Security Informatics First NSF/NIJ Symposium, ISI 2003 Tucson, AZ, USA, June 2-3, 2003 Proceedings


Volume Editors Hsinchun Chen Daniel D. Zeng Therani Madhusudan University of Arizona Department of Management Information Systems Tucson, AZ 85721, USA E-mail: {hchen/zeng/madhu}@eller.arizona.edu Richard Miranda Jenny Schroeder Tucson Police Department 270 S. Stone Ave., Tucson, AZ 85701, USA E-mail: [email protected] Chris Demchak University of Arizona School of Public Administration and Policy Tucson, AZ 85721, USA E-mail: [email protected] Cataloging-in-Publication Data applied for A catalog record for this book is available from the Library of Congress. Bibliographic information published by Die Deutsche Bibliothek Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available in the Internet at .

CR Subject Classification (1998): H.4, H.3, C.2, I.2, H.2, D.4.6, D.2, K.4.1, K.5, K.6.5 ISSN 0302-9743 ISBN 3-540-40189-X Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2003 Printed in Germany Typesetting: Camera-ready by author, data conversion by PTP-Berlin GmbH Printed on acid-free paper SPIN: 10927359 06/3142 543210

Preface

Since the tragic events of September 11, 2001, academics have been called on for possible contributions to research relating to national (and possibly international) security. As one of the original founding mandates of the National Science Foundation, mid- to long-term national security research in the areas of information technologies, organizational studies, and security-related public policy is critically needed. In a way similar to how medical and biological research has faced significant information overload and yet also tremendous opportunities for new innovation, law enforcement, criminal analysis, and intelligence communities are facing the same challenge. We believe, similar to “medical informatics” and “bioinformatics,” that there is a pressing need to develop the science of “intelligence and security informatics” – the study of the use and development of advanced information technologies, systems, algorithms and databases for national security related applications, through an integrated technological, organizational, and policy-based approach. We believe active “intelligence and security informatics” research will help improve knowledge discovery and dissemination and enhance information sharing and collaboration across law enforcement communities and among academics, local, state, and federal agencies, and industry. Many existing computer and information science techniques need to be reexamined and adapted for national security applications. New insights from this unique domain could result in significant breakthroughs in new data mining, visualization, knowledge management, and information security techniques and systems. This first NSF/NIJ Symposium on Intelligence and Security Informatics (ISI 2003) aims to provide an intellectual forum for discussions among previously disparate communities: academic researchers (in information technologies, computer science, public policy, and social studies), local, state, and federal law enforcement and intelligence experts, and information technology industry consultants and practitioners. Several federal research programs are also seeking new research ideas and projects that can contribute to national security. Jointly hosted by the University of Arizona and the Tucson Police Department, the NSF/NIJ ISI Symposium program committee was composed of 44 internationally renowned researchers and practitioners in intelligence and security informatics research. The 2-day program also included 5 keynote speakers, 14 invited speakers, 34 regular papers, and 6 posters. In addition to the main sponsorship from the National Science Foundation and the National Institute of Justice, the meeting was also cosponsored by several units within the University of Arizona, including the Eller College of Business and Public Administration, the Management Information Systems Department, the Internet Technology, Commerce, and Design Institute, the NSF COPLINK Center of Excellence, the Mark and Susan Hoffman E-Commerce Lab, the Center for the Management of


Information, and the Artificial Intelligence Lab, and several other organizations including the Air Force Office of Scientific Research, SAP, and CISCO. We wish to express our gratitude to all members of the conference Program Committee and the Organizing Committee. Our special thanks go to Mohan Tanniru and Joe Hindman (Publicity Committee Co-chairs), Kurt Fenstermacher, Mark Patton, and Bill Neumann (Sponsorship Committee Co-chairs), Homa Atabakhsh and David Gonzalez (Local Arrangements Co-chairs), Ann Lally and Leon Zhao (Publication Co-chairs), and Kathy Kennedy (Conference Management). Our sincere gratitude goes to all of the sponsors. Last, but not least, we thank Gary Strong, Art Becker, Larry Brandt, Valerie Gregg, and Mike O’Shea for their strong and continuous support of this meeting and other related intelligence and security informatics research.

June 2003

Hsinchun Chen, Richard Miranda, Daniel Zeng, Chris Demchak, Jenny Schroeder, Therani Madhusudan

ISI 2003 Organizing Committee

General Co-chairs: Hsinchun Chen Richard Miranda

University of Arizona Tucson Police Department

Program Co-chairs: Daniel Zeng Chris Demchak Jenny Schroeder Therani Madhusudan

University of Arizona University of Arizona Tucson Police Department University of Arizona

Publicity Co-chairs: Mohan Tanniru Joe Hindman

University of Arizona Phoenix Police Department

Sponsorship Co-chairs: Kurt Fenstermacher Mark Patton Bill Neumann

University of Arizona University of Arizona University of Arizona

Local Arrangements Co-chairs: Homa Atabakhsh David Gonzalez

University of Arizona University of Arizona

Publication Co-chairs: Ann Lally Leon Zhao

University of Arizona University of Arizona


ISI 2003 Program Committee

Yigal Arens Art Becker Larry Brandt Donald Brown Judee Burgoon Robert Chang Andy Chen Lee-Feng Chien Bill Chu Christian Collberg Ed Fox Susan Gauch Johannes Gehrke Valerie Gregg Bob Grossman Steve Griffin Eduard Hovy John Hoyt David Jensen Judith Klavans Don Kraft Ee-Peng Lim Ralph Martinez Reagan Moore Clifford Neuman David Neri Greg Newby Jay Nunamaker Mirek Riedewald Kathleen Robinson Allen Sears Elizabeth Shriberg Mike O’Shea Craig Stender Gary Strong Paul Thompson Alex Tuzhilin Bhavani Thuraisingham Howard Wactlar Andrew Whinston Karen White

University of Southern California Knowledge Discovery and Dissemination Program National Science Foundation University of Virginia University of Arizona Criminal Investigation Bureau, Taiwan Police National Taiwan University Academia Sinica, Taiwan University of North Carolina, Charlotte University of Arizona Virginia Tech University of Kansas Cornell University National Science Foundation University of Illinois, Chicago National Science Foundation University of Southern California South Carolina Research Authority University of Massachusetts, Amherst Columbia University Louisiana State University Nanyang Technological University, Singapore University of Arizona San Diego Supercomputing Center University of Southern California Tucson Police Department University of North Carolina, Chapel Hill University of Arizona Cornell University Tucson Police Department Corporation for National Research Initiatives SRI International National Institute of Justice State of Arizona National Science Foundation Dartmouth College New York University National Science Foundation Carnegie Mellon University University of Texas at Austin University of Arizona

Jerome Yen Chris Yang Mohammed Zaki

Chinese University of Hong Kong Chinese University of Hong Kong Rensselaer Polytechnic Institute

Keynote Speakers

Richard Carmona Gary Strong Lawrence E. Brandt Mike O’Shea Art Becker

Surgeon General of the United States National Science Foundation National Science Foundation National Institute of Justice Knowledge Discovery and Dissemination Program

Invited Speakers

Paul Kantor Lee Strickland Donald Brown Robert Chang Pamela Scanlon Kelcy Allwein Gene Rochlin Jane Fountain John Landry John Hoyt Bruce Baicar Matt Begert John Cunningham Victor Goldsmith

Rutgers University University of Maryland University of Virginia Criminal Investigation Bureau, Taiwan Police Automated Regional Justice Information Systems Defense Intelligence Agency University of California, Berkeley Harvard University Central Intelligence Agency South Carolina Research Authority South Carolina Research Authority and National Institute of Justice National Law Enforcement & Corrections Technology Montgomery County Police Department City University of New York

Table of Contents

Part I: Full Papers Data Management and Mining Using Support Vector Machines for Terrorism Information Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Aixin Sun, Myo-Myo Naing, Ee-Peng Lim, Wai Lam

1

Criminal Incident Data Association Using the OLAP Technology . . . . . . . Song Lin, Donald E. Brown

13

Names: A New Frontier in Text Mining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Frankie Patman, Paul Thompson

27

Web-Based Intelligence Reports System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Alexander Dolotov, Mary Strickler

39

Authorship Analysis in Cybercrime Investigation . . . . . . . . . . . . . . . . . . . . . . Rong Zheng, Yi Qin, Zan Huang, Hsinchun Chen

59

Deception Detection Behavior Profiling of Email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Salvatore J. Stolfo, Shlomo Hershkop, Ke Wang, Olivier Nimeskern, Chia-Wei Hu

74

Detecting Deception through Linguistic Analysis . . . . . . . . . . . . . . . . . . . . . . Judee K. Burgoon, J.P. Blair, Tiantian Qin, Jay F. Nunamaker, Jr

91

A Longitudinal Analysis of Language Behavior of Deception in E-mail . . . 102 Lina Zhou, Judee K. Burgoon, Douglas P. Twitchell

Analytical Techniques Evacuation Planning: A Capacity Constrained Routing Approach . . . . . . . 111 Qingsong Lu, Yan Huang, Shashi Shekhar Locating Hidden Groups in Communication Networks Using Hidden Markov Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 Malik Magdon-Ismail, Mark Goldberg, William Wallace, David Siebecker


Automatic Construction of Cross-Lingual Networks of Concepts from the Hong Kong SAR Police Department . . . . . . . . . . . . . . . . . . . . . . . . . 138 Kar Wing Li, Christopher C. Yang Decision Based Spatial Analysis of Crime . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 Yifei Xue, Donald E. Brown

Visualization CrimeLink Explorer: Using Domain Knowledge to Facilitate Automated Crime Association Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 Jennifer Schroeder, Jennifer Xu, Hsinchun Chen A Spatio Temporal Visualizer for Law Enforcement . . . . . . . . . . . . . . . . . . . 181 Ty Buetow, Luis Chaboya, Christopher O’Toole, Tom Cushna, Damien Daspit, Tim Petersen, Homa Atabakhsh, Hsinchun Chen Tracking Hidden Groups Using Communications . . . . . . . . . . . . . . . . . . . . . . 195 Sudarshan S. Chawathe

Knowledge Management and Adoption Examining Technology Acceptance by Individual Law Enforcement Officers: An Exploratory Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 Paul Jen-Hwa Hu, Chienting Lin, Hsinchun Chen “Atrium” – A Knowledge Model for Modern Security Forces in the Information and Terrorism Age . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223 Chris C. Demchak Untangling Criminal Networks: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . 232 Jennifer Xu, Hsinchun Chen

Collaborative Systems and Methodologies Addressing the Homeland Security Problem: A Collaborative Decision-Making Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249 T.S. Raghu, R. Ramesh, Andrew B. Whinston Collaborative Workflow Management for Interagency Crime Analysis . . . . 266 J. Leon Zhao, Henry H. Bi, Hsinchun Chen COPLINK Agent: An Architecture for Information Monitoring and Sharing in Law Enforcement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281 Daniel Zeng, Hsinchun Chen, Damien Daspit, Fu Shan, Suresh Nandiraju, Michael Chau, Chienting Lin


Monitoring and Surveillance Active Database Systems for Monitoring and Surveillance . . . . . . . . . . . . . . 296 Antonio Badia Integrated “Mixed” Networks Security Monitoring – A Proposed Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308 William T. Scherer, Leah L. Spradley, Marc H. Evans Bioterrorism Surveillance with Real-Time Data Warehousing . . . . . . . . . . . 322 Donald J. Berndt, Alan R. Hevner, James Studnicki

Part II: Short Papers Data Management and Mining Privacy Sensitive Distributed Data Mining from Multi-party Data . . . . . . . 336 Hillol Kargupta, Kun Liu, Jessica Ryan ProGenIE: Biographical Descriptions for Intelligence Analysis . . . . . . . . . 343 Pablo A. Duboue, Kathleen R. McKeown, Vasileios Hatzivassiloglou Scalable Knowledge Extraction from Legacy Sources with SEEK . . . . . . . . 346 Joachim Hammer, William O’Brien, Mark Schmalz “TalkPrinting”: Improving Speaker Recognition by Modeling Stylistic Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350 Sachin Kajarekar, Kemal S¨ onmez, Luciana Ferrer, Venkata Gadde, Anand Venkataraman, Elizabeth Shriberg, Andreas Stolcke, Harry Bratt Emergent Semantics from Users’ Browsing Paths . . . . . . . . . . . . . . . . . . . . . . 355 D.V. Sreenath, W.I. Grosky, F. Fotouhi

Deception Detection Designing Agent99 Trainer: A Learner-Centered, Web-Based Training System for Deception Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358 Jinwei Cao, Janna M. Crews, Ming Lin, Judee Burgoon, Jay F. Nunamaker Training Professionals to Detect Deception . . . . . . . . . . . . . . . . . . . . . . . . . . . 366 Joey F. George, David P. Biros, Judee K. Burgoon, Jay F. Nunamaker, Jr. An E-mail Monitoring System for Detecting Outflow of Confidential Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371 Bogju Lee, Youna Park


Methodologies and Applications Intelligence and Security Informatics: An Information Economics Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 375 Lihui Lin, Xianjun Geng, Andrew B. Whinston An International Perspective on Fighting Cybercrime . . . . . . . . . . . . . . . . . . 379 Weiping Chang, Wingyan Chung, Hsinchun Chen, Shihchieh Chou

Part III: Extended Abstracts for Posters Data Management and Mining Hiding Traversal of Tree Structured Data from Untrusted Data Stores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385 Ping Lin, K. Selçuk Candan Criminal Record Matching Based on the Vector Space Model . . . . . . . . . . . 386 Jau-Hwang Wang, Bill T. Lin, Ching-Chin Shieh, Peter S. Deng Database Support for Exploring Criminal Networks . . . . . . . . . . . . . . . . . . . 387 M.N. Smith, P.J.H. King Hiding Data and Code Security for Application Hosting Infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388 Ping Lin, K. Selçuk Candan, Rida Bazzi, Zhichao Liu

Security Informatics Secure Information Sharing and Information Retrieval Infrastructure with GridIR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389 Gregory B. Newby, Kevin Gamiel Semantic Hacking and Intelligence and Security Informatics . . . . . . . . . . . . 390 Paul Thompson

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391

Using Support Vector Machines for Terrorism Information Extraction

Aixin Sun (1), Myo-Myo Naing (1), Ee-Peng Lim (1), and Wai Lam (2)

(1) Centre for Advanced Information Systems, School of Computer Engineering, Nanyang Technological University, Singapore 639798, Singapore
[email protected]
(2) Department of Systems Engineering and Engineering Management, Chinese University of Hong Kong, Shatin, New Territories, Hong Kong SAR
[email protected]

Abstract. Information extraction (IE) is of great importance in many applications including web intelligence, search engines, text understanding, etc. To extract information from text documents, most IE systems rely on a set of extraction patterns. Each extraction pattern is defined based on the syntactic and/or semantic constraints on the positions of desired entities within natural language sentences. The IE systems also provide a set of pattern templates that determines the kind of syntactic and semantic constraints to be considered. In this paper, we argue that such pattern templates restrict the kind of extraction patterns that can be learned by IE systems. To allow a wider range of context information to be considered in learning extraction patterns, we first propose to model the content and context information of a candidate entity to be extracted as a set of features. A classification model is then built for each category of entities using Support Vector Machines (SVM). We have conducted IE experiments to evaluate our proposed method on a text collection in the terrorism domain. From the preliminary experimental results, we conclude that our proposed method can deliver reasonable accuracies.

Keywords: Information extraction, terrorism-related knowledge discovery.

1 Introduction

1.1 Motivation

Information extraction (IE) is a task that extracts relevant information from a set of documents. IE techniques can be applied to many different areas. In the intelligence and security domains, IE can allow one to extract terrorism-related information from email messages, or identify sensitive business information from  

This work is partially supported by the SingAREN 21 research grant M48020004. Dr. Ee-Peng Lim is currently a visiting professor at Dept. of SEEM, Chinese University of Hong Kong, Hong Kong, China.


news documents. In some cases where perfect extraction accuracy is not essential, automated IE methods can replace the manual extraction efforts completely. In other cases, IE may produce first-cut results, reducing the manual extraction effort.

As reported in the survey by Muslea [9], the IE methods for free text documents are largely based on extraction patterns specifying the syntactic and/or semantic constraints on the positions of desired entities within sentences. For example, from the sentence “Guerrillas attacked the 1st infantry brigade garrison”, one can define the extraction pattern subject active-attack to extract “Guerrillas” as a perpetrator, and active-attack direct object to extract “1st infantry brigade garrison” as a victim.1 The extraction pattern definitions currently used are very much based on some pre-defined pattern templates. For example, in AutoSlog [12], the above subject active-attack extraction pattern is an instantiation of the subject active-verb template. While pattern templates reduce the combinations of extraction patterns to be considered in rule learning, they may potentially pose obstacles to deriving other more expressive and accurate extraction patterns. For example, IBM acquired direct-object is a very pertinent extraction pattern for extracting company information but cannot be instantiated by any of AutoSlog's 13 pattern templates. Since it will be quite difficult to derive one standard set of pattern templates that works well for any given domain, IE methods that do not rely on templates become necessary.

In this paper, we propose the use of Support Vector Machines (SVMs) for information extraction. SVM was proposed by Vapnik [16] and has been widely used in image processing and classification problems [5]. The SVM technique finds the best surface that can separate the positive examples from the negative ones; the two classes are separated by the maximum margin measured along a normal vector w. SVM classifiers have been used in various text classification experiments [2,5] and have been shown to deliver good classification accuracy. When SVM classifiers are used to solve an IE problem, two major research challenges must be considered.

– Large number of instances: IE for free text involves extracting from document sentences target entities (or instances) that belong to some pre-defined semantic category(ies). A classification task, on the other hand, is to identify candidate entities from the document sentences, usually in the form of noun phrases or verb phrases, and assign each candidate entity to zero, one or more pre-defined semantic categories. As a large number of candidate entities can potentially be extracted from document sentences, this can lead to overheads in both the learning and classification steps.
– Choice of features: The success of SVM very much depends on whether a good set of features is given in the learning and classification steps. There should be adequate features that distinguish entities belonging to a semantic category from those outside the category.

1 Both extraction patterns have been used in the AutoSlog system [12].


In our approach, we attempt to establish the links between the semantic category of a target entity and its syntactic properties, and reduce the number of instances to be classified based on their syntactic and semantic properties. A natural language parser is first used to identify the syntactic parts of sentences and only those parts that are desired are used as candidate instances. We then use both the content and syntax of a candidate instance and its surrounding context as features.

1.2 Research Objectives and Contributions

Our research aims to develop new IE methods that use classification techniques to extract target entities, while not using pattern templates and extraction patterns. Among the different types of IE tasks, we have chosen to address the template element extraction (TE) task, which refers to extracting entities or instances in a free text that belong to some semantic categories.2 We apply our new IE method on free text documents in the terrorism domain. In the terrorism domain, the semantic categories that are interesting include victim, perpetrator, witness, etc. In the following, we summarize our main research contributions.

– IE using Support Vector Machines (SVM): We have successfully transformed IE into a classification problem and adopted SVM to extract target entities. We have not come across any previous papers reporting such an IE approach. As early exploratory research, we only try to extract the entities falling under the perpetrator role. Our proposed IE method, nevertheless, can be easily generalized to extract other types of entities.
– Feature selection: We have defined the content and context features that can be derived for the entities to be extracted/classified. The content features refer to words found in the entities. The context features refer to those derived from the sentence constituents surrounding the entities. In particular, we propose a weighting scheme to derive context features for a given entity.
– Performance evaluation: We have conducted experiments on the MUC text collection in the terrorism domain. In our preliminary experiments, the SVM approach to IE has been shown to deliver performance comparable to the published results by AutoSlog, a well-known extraction pattern-based IE system.

1.3 Paper Outline

The rest of the paper is structured as follows. Section 2 provides a survey of the related IE work and distinguishes our work from them. Section 3 defines our IE problem and the performance measures. Our proposed method is described in Section 4. The experimental results are given in Section 5. Section 6 concludes the paper. 2

The template element extraction (TE) task has been defined in the Message Understanding Conference series (MUC) sponsored by DARPA [8].

2 Related Work

As our research deals with IE for free text collections, we only examine related work in this area. Broadly, the related work can be divided into extraction pattern-based and non-extraction pattern-based. The former refers to approaches that first acquire a set of extraction patterns from the training text collections. The extraction patterns use the syntactic structure of a sentence and semantic knowledge of words to identify the target entities. The extraction process is very much a template matching task between the extraction patterns and the sentences. The non-extraction pattern-based approaches are those that use some machine learning techniques to acquire extraction models. The extraction models identify target entities by examining their feature mix, which includes features based on syntax, semantics and others. The extraction process is very much a classification task that involves accepting or rejecting an entity (e.g. a word or phrase) as a target entity.

Many extraction pattern-based IE approaches have been proposed in the Message Understanding Conference (MUC) series. Based on 13 pre-defined pattern templates, Riloff developed the AutoSlog system capable of learning extraction patterns [12]. Each extraction pattern consists of a trigger word (a verb or a noun) to activate its use. AutoSlog also requires a manual filtering step to discard some 74% of the learned extraction patterns as they may not be relevant. PALKA is another representative IE system that learns extraction patterns in the form of frame-phrasal pattern structures [7]. It requires each sentence to be first parsed and grouped into multiple simple clauses before deriving the extraction patterns. Both PALKA and AutoSlog require the training text collections to be tagged. Such tagging requires much manual effort. AutoSlog-TS, an improved version of AutoSlog, is able to generate extraction patterns without a tagged training dataset [11]. An overall F1 measure of 0.38 was reported for both AutoSlog and AutoSlog-TS for entities in the perpetrator category, and around 0.45 for the victim and target object categories in the MUC-4 text collection (terrorism domain). Riloff also demonstrated that the best extraction patterns can be further selected using a bootstrapping technique [13]. WHISK is an IE system that uses extraction patterns in the form of regular expressions. Each regular expression can extract either a single target entity or multiple target entities [15]. WHISK has been evaluated on a text collection in the management succession domain. SRV, another IE system, constructs first-order logical formulas as extraction patterns [3]. The extraction patterns also allow relational structures between target entities to be expressed.

There has been very little IE research on non-extraction pattern-based approaches. Freitag and McCallum developed an IE method based on Hidden Markov models (HMMs), a kind of probabilistic finite-state machine [4]. Their experiments showed that the HMM method outperformed the IE method using SRV for two text collections in the seminar announcements and corporate acquisitions domains.


TST1-MUC3-0002 SAN SALVADOR, 18 FEB 90 (DPA) -- [TEXT] HEAVY FIGHTING WITH AIR SUPPORT RAGED LAST NIGHT IN NORTHWESTERN SAN SALVADOR WHEN MEMBERS OF THE FARABUNDO MARTI NATIONAL LIBERATION FRONT [FMLN] ATTACKED AN ELECTRIC POWER SUBSTATION. ACCORDING TO PRELIMINARY REPORTS, A SOLDIER GUARDING THE SUBSTATION WAS WOUNDED. THE FIRST EXPLOSIONS BEGAN AT 2330 [0530 GMT] AND CONTINUED UNTIL EARLY THIS MORNING, WHEN GOVERNMENT TROOPS REQUESTED AIR SUPPORT AND THE GUERRILLAS WITHDREW TO THE SLOPES OF THE SAN SALVADOR VOLCANO, WHERE THEY ARE NOW BEING PURSUED. THE NOISE FROM THE ARTILLERY FIRE AND HELICOPTER GUNSHIPS WAS HEARD THROUGHOUT THE CAPITAL AND ITS OUTSKIRTS, ESPECIALLY IN THE CROWDED NEIGHBORHOODS OF NORTHERN AND NORTHWESTERN SAN SALVADOR, SUCH AS MIRALVALLE, SATELITE, MONTEBELLO, AND SAN RAMON. SOME EXPLOSIONS COULD STILL BE HEARD THIS MORNING. MEANWHILE, IT WAS REPORTED THAT THE CITIES OF SAN MIGUEL AND USULUTAN, THE LARGEST CITIES IN EASTERN EL SALVADOR, HAVE NO ELECTRICITY BECAUSE OF GUERRILLA SABOTAGE ACTIVITY.

Fig. 1. Example Newswire Document

Research on applying machine learning techniques to named-entity extraction, a subproblem of information extraction, has been reported in [1]. Baluja et al. proposed the use of four different types of features to represent an entity to be extracted: word-level features, dictionary features, part-of-speech tag features, and punctuation features (surrounding the entity to be extracted). Except for the last feature type, the other three types of features are derived from the entities to be extracted.

To the best of our knowledge, our research is the first that explores the use of classification techniques in extracting terrorism-related information. Unlike [4], we represent each entity to be extracted as a set of features derived from the syntactic structure of the sentence in which the entity is found, as well as the words found in the entity.

3 Problem Definition

Our IE task is similar to the template element (TE) task in the Message Understanding Conference (MUC) series. The TE task was to extract different types of target entities from each document, including perpetrators, victims, physical targets, event locations, etc. In MUC-4, a text collection containing newswire documents related to terrorist events in Latin America was used as the evaluation dataset. An example document is shown in Figure 1. In the above document, we could extract several interesting entities about the terrorist event, namely location (“SAN SALVADOR”), perpetrator (“MEMBERS OF THE FARABUNDO MARTI NATIONAL LIBERATION FRONT [FMLN]”), and victim (“SOLDIER”). The MUC-4 text collection consists of a training set (with 1500 documents) and two test sets (each with 100 documents). For each document, MUC-4 specifies for each semantic category the target entity(ies) to be extracted.


In this paper, we choose to focus on extracting target entities in the perpetrator category. The input of our IE method consists of the training set (1500 documents) and the perpetrator(s) of each training document. The training documents are not tagged with the perpetrators. Instead, the perpetrators are stored in a separate file known as the answer key file. Our IE method therefore has to locate the perpetrators within the corresponding documents. Should a perpetrator appear in multiple sentences in a document, his or her role may be obscured by features from these sentences, making it more difficult to perform extraction. Once trained, our IE method has to extract perpetrators from the test collections. As the test collections are not tagged with candidate entities, our IE method has to first identify candidate entities in the documents before classifying them.

The performance of our IE task is measured by three important metrics: Precision, Recall and the F1 measure. Let n_tp, n_fp, and n_fn be the number of entities correctly extracted, the number of entities wrongly extracted, and the number of entities missed, respectively. Precision, Recall and the F1 measure are defined as follows:

Precision = n_tp / (n_tp + n_fp)

Recall = n_tp / (n_tp + n_fn)

F1 = 2 · Precision · Recall / (Precision + Recall)
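As a quick worked illustration (our own sketch, not code from the paper), the short Python function below computes the three measures from the raw counts; the counts passed in the example call are rough back-calculations from the Tst3 row of Table 3.

```python
def ie_scores(n_tp, n_fp, n_fn):
    """Precision, recall and F1 for one extraction run.

    n_tp: entities correctly extracted
    n_fp: entities wrongly extracted
    n_fn: entities missed
    """
    precision = n_tp / (n_tp + n_fp) if n_tp + n_fp else 0.0
    recall = n_tp / (n_tp + n_fn) if n_tp + n_fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Roughly the Tst3 figures of Table 3: about 51 correct, 116 wrong, 66 missed
print(ie_scores(51, 116, 66))   # -> (0.305..., 0.435..., 0.359...)
```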

4 Proposed Method

4.1 Overview

Like other IE methods, we divide our proposed IE method into two steps: the learning step and the extraction step. The former learns the extraction model for the target entities in the desired semantic category using the training documents and their target entities. The latter applies the learnt extraction model on other documents and extracts new target entities. The learning step consists of the following smaller steps.

1. Document parsing: As the target entities are perpetrators, they usually appear as noun-phrases in the documents. We therefore parse all the sentences in the document. To break up a document into sentences, we use the SATZ software [10]. As a noun-phrase could be nested within another noun-phrase in the parse tree, we only select the simple noun-phrases as candidate entities. The candidate entities from the training documents are further grouped as positive entities if their corresponding noun-phrases match the perpetrator answer keys. The rest are used as negative entities.


2. Feature acquisition: This step refers to deriving features for the training target entities, i.e., the noun-phrases. We will elaborate this step in Section 4.2.
3. Extraction model construction: This step refers to constructing the extraction model using some machine learning technique. In this paper, we explore the use of SVM to construct the extraction model (or classification model).

The classification step performs extraction using the learnt extraction model following the steps below:

1. Document parsing: The sentences in every test document are parsed and simple noun phrases in the parse trees are used as candidate entities.
2. Feature acquisition: This step is similar to that in the learning step.
3. Classification: This step applies the SVM classifier to extract the candidate entities.

By identifying all the noun-phrases and classifying them into positive entities or negative entities, we transform the IE problem into a classification problem. To keep our method simple, we do not use co-referencing to identify pronouns that refer to the positive or negative entities.
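The Python sketch below illustrates how such a classification-based extraction step can be organized; it is our own illustration, and all helper names (parse_sentences, simple_noun_phrases, entity_features, svm_score) are hypothetical placeholders rather than components named in the paper.

```python
def extract_perpetrators(document, parse_sentences, simple_noun_phrases,
                         entity_features, svm_score):
    """Classification-based extraction step (all helpers are hypothetical).

    parse_sentences(document) -> list of sentence parse trees
    simple_noun_phrases(tree) -> candidate noun-phrase entities in one tree
    entity_features(np, tree) -> content + context feature vector
    svm_score(features)       -> real-valued SVM output for one candidate
    """
    extracted = []
    for tree in parse_sentences(document):
        for np in simple_noun_phrases(tree):
            # a candidate is extracted only when the classifier returns a positive score
            if svm_score(entity_features(np, tree)) > 0.0:
                extracted.append(np)
    return extracted
```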

4.2 Feature Acquisition

We acquire for each candidate entity the features required for constructing the extraction model and for classification. To ensure that the extraction model will be able to distinguish whether or not an entity belongs to a semantic category, it is necessary to acquire a wide spectrum of features. Unlike earlier work that focuses on features mainly derived from within the entities [1] or from the linear sequence of words surrounding the entities [4], our method derives features from the syntactic structures of the sentences in which the candidate entities are found. We divide the entity features into two categories:

– Content features: These refer to the features derived from the candidate entities themselves. At present, we only consider terms appearing in the candidate entities. Given an entity e = w_1 w_2 ... w_n, we assign the content feature f_i(w) = 1 if word w is found in e.
– Context features: These features are obtained by first parsing the sentences containing a candidate entity. Each context feature is defined by a fragment of the syntactic structure in which the entity is found and the words associated with that fragment.

In the following, we elaborate the way our context features are obtained. We first use the CMU Link Grammar Parser to parse a sentence [14]. The parser generates a parse tree such as the one shown in Figure 2. A parse tree represents the syntactic structure of a given sentence. Its leaf nodes are the word tokens of the sentence and its internal nodes represent the syntactic constituents of the sentence. The possible syntactic constituents are S (clause), VP (verb phrase), NP (noun phrase), PP (prepositional phrase), etc.


(S (NP Two terrorists)
   (VP (VP destroyed
           (NP several power poles)
           (PP on (NP 29th street)))
       and
       (VP machinegunned (NP several transformers)))
   .)

Fig. 2. Parse Tree Example

For each candidate entity, we can derive its context features as a vector of term weights for the terms that appear in the sentences containing the noun-phrase. Given a sentence parse tree, the weight of a term is assigned as follows. Terms appearing in the sibling nodes are assigned weights of 1.0. Terms appearing at a higher or lower level of the parse tree are assigned smaller weights as they are further away from the candidate entity. The feature weights are reduced by half for every level further away from the candidate entity in our experiments. The 50% reduction factor has been chosen arbitrarily in our experiments. A careful study needs to be further conducted to determine the optimal reduction factor. For example, the context features of the candidate entity "several power poles" are derived as follows.3

Table 1. Context features and feature weights for "several power poles"

Label   Terms            Weight
PP      on               1.00
NP      29th street      0.50
VP      destroyed        0.50
NP      Two terrorists   0.25

To ensure that the included context features are closely related to the candidate entity, we do not consider terms found in the sibling nodes (and their subtrees) of the ancestor(s) of the entity. Intuitively, these terms are not syntactically very related to the candidate entity and are therefore excluded. For example, for the candidate entity “several power poles”, the terms in the subtree “and machinegunned several transformers” are excluded from the context feature set. 3

More precisely, stopword removal and stemming are performed on the terms. Some of them will be discarded during this process.


If an entity appears in multiple sentences in the same document, and the same term is included as context features from different parse trees, we will combine the context features into one and assign it the highest weight among the original weights. This is necessary to keep one unique weight for each term.
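The sketch below reproduces this weighting scheme on the Figure 2 parse tree using NLTK's Tree class. It reflects our own reading of the scheme (terms at the candidate's level weighted 1.0, halved per level, duplicates keeping the maximum weight); the paper's additional rule that discards subtrees hanging off the candidate's ancestors, as well as the stopword removal and stemming mentioned in the footnote, are not reproduced here.

```python
from nltk import Tree

def context_weights(parse, cand_pos):
    """Context-term weights for one candidate entity (our reading of Sect. 4.2).

    parse    : nltk.Tree of the whole sentence
    cand_pos : tree position (tuple) of the candidate noun phrase
    """
    cand_leaf_depth = len(cand_pos) + 1      # simple NPs: words sit one level below the NP node
    weights = {}
    for leaf_pos in parse.treepositions('leaves'):
        if leaf_pos[:len(cand_pos)] == cand_pos:
            continue                         # skip the candidate's own words
        term = parse[leaf_pos]
        w = 0.5 ** abs(len(leaf_pos) - cand_leaf_depth)
        weights[term] = max(w, weights.get(term, 0.0))
    return weights


sentence = Tree.fromstring(
    "(S (NP Two terrorists) (VP (VP destroyed (NP several power poles)"
    " (PP on (NP 29th street))) and (VP machinegunned (NP several transformers))) .)")
print(context_weights(sentence, (1, 0, 1)))  # (1, 0, 1) is "several power poles"
# Matches Table 1: 'on' -> 1.0, 'destroyed'/'29th'/'street' -> 0.5, 'Two'/'terrorists' -> 0.25;
# terms from the subtree the paper excludes also appear here, since that rule is omitted.
```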

4.3 Extraction Model Construction

To construct an extraction model, we require both positive training data and negative training data. While the positive training entities are available from the answer key file, the negative training entities can be obtained from the noun phrases that do not contain any target entities. Since pronouns such as “he”, “she”, “they”, etc. may possibly be co-referenced with some target entities, we do not use them as either positive or negative training entities. From the training set, we also obtain an entity filter dictionary that consists of noun-phrases that cannot be perpetrators. These are non-target noun-phrases that appear more than five times in the training set, e.g., “dictionary”, “desk” and “tree”. With this filter, the number of negative entities is reduced dramatically. If a larger threshold is used, fewer noun-phrases will be filtered, causing a degradation of precision. On the other hand, a smaller threshold may increase the risk of getting a lower recall. Once an extraction model is constructed, it can perform extraction on a given document by classifying candidate entities in the document into the perpetrator or non-perpetrator category. In the extraction step, a candidate entity is classified as a perpetrator when the SVM classifier returns a positive score value.
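A rough sketch of how the training entities and the entity filter dictionary could be assembled is given below. It is our own illustration: the data layout is hypothetical, the matching of noun-phrases against answer keys is a simple substring heuristic of our choosing (the paper does not spell out its matching rule), and the pronoun list is likewise an assumption.

```python
from collections import Counter

def build_training_entities(candidates_by_doc, answer_keys, filter_threshold=5):
    """Assemble positive/negative training entities (hypothetical data layout).

    candidates_by_doc : dict doc_id -> list of candidate noun-phrase strings
    answer_keys       : dict doc_id -> set of perpetrator strings from the key file
    """
    pronouns = {"he", "she", "they", "him", "her", "them"}   # assumed list
    positives, negatives, counts = [], [], Counter()

    for doc_id, phrases in candidates_by_doc.items():
        keys = {k.lower() for k in answer_keys.get(doc_id, set())}
        for np in phrases:
            lowered = np.lower()
            if lowered in pronouns:
                continue                      # pronouns may co-refer with targets
            if any(lowered in k or k in lowered for k in keys):
                positives.append((doc_id, np))
            else:
                negatives.append((doc_id, np))
                counts[lowered] += 1

    # non-target phrases seen more than filter_threshold times form the filter dictionary
    filter_dict = {p for p, c in counts.items() if c > filter_threshold}
    negatives = [(d, np) for d, np in negatives if np.lower() not in filter_dict]
    return positives, negatives, filter_dict
```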

5 Experiments and Results

5.1 Datasets

We used the MUC-4 dataset in our experiments. Three files (muc34dev, muc34tst1 and muc34tst2) were used as the training set and the remaining two files (muc34tst3 and muc34tst4) were used as the test set. There are 1500 news documents in total in the training set and 100 documents in each of the two test files. For each news document, zero, one or two perpetrators are defined in the answer key file; most of the noun phrases are therefore negative candidate entities. To avoid severely unbalanced training examples, we only considered the training documents that have at least one perpetrator defined in the answer key files; there are 466 such training documents. We used all 100 news documents in each test file, since the classifier should not know whether a test document contains a perpetrator. The numbers of documents and of positive and negative entities in the training and test sets are listed in Table 2. From the table, we observe that negative entities contribute about 90% of the entities in the training set and around 95% in the test set.

5.2 Results

Table 2. Documents and positive/negative entities in the training and test sets

  Dataset  Documents  Positive Entities  Negative Entities
  Train    466        1003               9435
  Tst3     100        117                2336
  Tst4     100        77                 1943

We used SVMlight as our classifier in our experiments [6]. SVMlight is an implementation of Support Vector Machines (SVMs) in C and has been widely used in text classification and web classification research. Because of the unbalanced training examples, we set the cost-factor (parameter j) of SVMlight to the ratio of the number of negative entities to the number of positive ones. The cost-factor denotes the proportion of cost allocated to training errors on positive entities relative to errors on negative entities. We used the polynomial kernel function instead of the default linear kernel function, and we set the classification threshold to 0.0 as suggested. The results are reported in Table 3.

Table 3. Results on the training and test datasets

  Dataset  Precision  Recall  F1 measure
  Train    0.7752     0.9661  0.8602
  Tst3     0.3054     0.4359  0.3592
  Tst4     0.2360     0.5455  0.3295

As shown in the table, the SVM classifier performed very well on the training data, achieving both high precision and high recall. The classifier did not perform equally well on the two test sets: about 43% and 54% of the target entities were extracted for Tst3 and Tst4, respectively. The results also indicate that many non-target entities were extracted as well, causing the low precision values. The overall F1 measures are 0.36 for Tst3 and 0.33 for Tst4. Compared to the known results reported in [11], these figures are reasonable: the latter also showed no more than 30% precision for both AutoSlog and AutoSlog-TS,⁴ and [11] reported an F1 measure of 0.38, which is not very different from ours. The rather low F1 measures suggest that this IE problem is quite a difficult one. We are nevertheless optimistic about our preliminary results, as they clearly show that the IE problem can be handled as a classification problem.

⁴ The comparison cannot be taken in absolute terms since [11] used a slightly different experimental setup for the MUC-4 dataset.
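For readers who want to reproduce a setup of this kind without SVMlight, the following sketch uses scikit-learn's SVC as a stand-in: the per-class weight plays the role of SVMlight's cost-factor j, and the polynomial kernel and 0.0 decision threshold follow the description above. Feature extraction is assumed to have been done elsewhere; this is an analogous configuration, not the authors' exact tool chain.

```python
from sklearn.svm import SVC

def train_perpetrator_classifier(X_train, y_train):
    """X_train: content+context feature vectors; y_train: 1 = perpetrator, 0 = not.
    The per-class weight below plays the role of SVMlight's cost-factor j
    (the ratio of negative to positive training entities)."""
    n_neg = sum(1 for y in y_train if y == 0)
    n_pos = sum(1 for y in y_train if y == 1)
    j = n_neg / max(n_pos, 1)
    clf = SVC(kernel="poly", class_weight={1: j})   # polynomial kernel, as described above
    clf.fit(X_train, y_train)
    return clf

def extract_perpetrators(clf, X_test, candidates, threshold=0.0):
    """Label a candidate as a perpetrator when its decision score exceeds 0.0."""
    scores = clf.decision_function(X_test)
    return [cand for cand, s in zip(candidates, scores) if s > threshold]
```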

6 Conclusions

In this paper, we extract perpetrator entities from a collection of untagged news documents in the terrorism domain. We propose a classification-based method to handle this IE problem. The method segments each document into sentences, parses the sentences into parse trees, and derives features for the entities within the documents. The features of each entity are derived from both its content and its context. Using SVM classifiers, our method was applied to the MUC-4 dataset. Our experimental results show that the method performs at a level comparable to some well-known published results. As part of our future work, we would like to continue our preliminary work and explore additional features for training the SVM classifiers. Since the number of training entities is usually small in real applications, we will also try to extend our classification-based method to handle IE problems with a small number of seed training entities.

References

1. S. Baluja, V. Mittal, and R. Sukthankar. Applying machine learning for high performance named-entity extraction. Computational Intelligence, 16(4):586–595, November 2000.
2. S. T. Dumais, J. Platt, D. Heckerman, and M. Sahami. Inductive learning algorithms and representations for text categorization. In Proceedings of the 7th International Conference on Information and Knowledge Management, pages 148–155, Bethesda, Maryland, November 1998.
3. D. Freitag. Information extraction from HTML: Application of a general machine learning approach. In Proceedings of the 15th Conference on Artificial Intelligence (AAAI-98) / 10th Conference on Innovative Applications of Artificial Intelligence (IAAI-98), pages 517–523, Madison, Wisconsin, July 1998.
4. D. Freitag and A. K. McCallum. Information extraction with HMMs and shrinkage. In Proceedings of the AAAI-99 Workshop on Machine Learning for Information Extraction, pages 31–36, Orlando, FL, July 1999.
5. T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning, pages 137–142, Chemnitz, Germany, 1998.
6. T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods – Support Vector Learning. MIT Press, 1999.
7. J.-T. Kim and D. I. Moldovan. Acquisition of linguistic patterns for knowledge-based information extraction. IEEE Transactions on Knowledge and Data Engineering, 7(5):713–724, 1995.
8. MUC. Proceedings of the 4th Message Understanding Conference (MUC-4), 1992.
9. I. Muslea. Extraction patterns for information extraction tasks: A survey. In Proceedings of the AAAI-99 Workshop on Machine Learning for Information Extraction, pages 1–6, Orlando, Florida, July 1999.
10. D. D. Palmer and M. A. Hearst. Adaptive sentence boundary disambiguation. In Proceedings of the 4th Conference on Applied Natural Language Processing, pages 78–83, Stuttgart, Germany, October 1994.

11. E. Riloff. Automatically generating extraction patterns from untagged text. In Proceedings of the 13th National Conference on Artificial Intelligence (AAAI-96), pages 1044–1049, Portland, Oregon, 1996.
12. E. Riloff. An empirical study of automated dictionary construction for information extraction in three domains. Artificial Intelligence, 85(1-2):101–134, 1996.
13. E. Riloff and R. Jones. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the 16th National Conference on Artificial Intelligence, pages 1044–1049, 1999.
14. D. Sleator and D. Temperley. Parsing English with a link grammar. Technical Report CMU-CS-91-196, School of Computer Science, Carnegie Mellon University, October 1991.
15. S. Soderland. Learning information extraction rules for semi-structured and free text. Machine Learning, 34(1-3):233–272, 1999.
16. V. N. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, Heidelberg, 1995.

Criminal Incident Data Association Using the OLAP Technology

Song Lin and Donald E. Brown

Department of Systems and Information Engineering, University of Virginia, VA 22904, USA
{sl7h, brown}@virginia.edu

Abstract. Associating criminal incidents committed by the same person is important in crime analysis. In this paper, we introduce concepts from OLAP (online analytical processing) and data mining to address this problem. The criminal incidents are modeled as an OLAP data cube, and a measurement function, called the outlier score function, is defined over the cube cells. When the score is significant enough, we say that the incidents contained in the cell are associated with each other. The method can be used with a variety of criminal incident features, including the locations of the crimes for spatial analysis. We applied this association method to the robbery dataset of Richmond, Virginia. Results show that this method can effectively solve the problem of criminal incident association. Keywords. Criminal incident association, OLAP, outlier

1 Introduction

Over the last two decades, computer technologies have developed at an exceptional rate and have become an important part of our lives. Consequently, information technology now plays an important role in the law enforcement community. Police officers and crime analysts can access much larger amounts of data than ever before. In addition, various statistical methods and data mining approaches have been introduced into the crime analysis field, and crime analysis personnel are able to perform complicated analyses more efficiently.
People committing multiple crimes, known as serial criminals or career criminals, are a major threat in modern society. Understanding the behavioral patterns of these career criminals and apprehending them is an important task for law enforcement officers. As a first step, identifying criminal incidents committed by the same person and linking them together is of major importance for crime analysts.
According to the rational choice theory [5] in criminology, a criminal evaluates the benefit and the risk of committing an incident and makes a "rational" choice to maximize the "profit". In the routine activity theory [9], a criminal incident is considered the product of an interactive process of three key elements: a ready criminal, a suitable target, and the lack of effective guardians. Brantingham and Brantingham [2] claim that the environment sends out signals, or cues (physical, spatial, cultural, etc.), about its characteristics, and the criminal uses these cues to evaluate the target and make the decision.

A criminal incident is usually an outcome of a decision process involving a multi-staged search in the awareness space. During the search phase, the criminal associates these cues, clusters of cues, or cue sequences with a "good" target. These cues form a template for the criminal, and once the template is built, it is self-reinforcing and relatively enduring. Because the search ability of a human being is limited, a criminal normally does not have many decision templates. We can therefore observe criminal incidents with similar temporal, spatial, and modus operandi (MO) features, which possibly come from the same template of the same criminal, and it is possible to identify the serial criminal by associating these similar incidents.
Different approaches have been proposed, and several software programs have been developed, to address the crime association problem. They can be classified into two major categories: suspect association and incident association. The Integrated Criminal Apprehension Program (ICAP) developed by Heck [12] enables police officers to match suspects against arrested criminals using MO features; the Armed Robbery Eidetic Suspect Typing (AREST) program [1] employs an expert-system approach to perform suspect association and classify a potential offender into three categories: probable, possible, or non-suspect. The Violent Criminal Apprehension Program (ViCAP) [13] developed by the Federal Bureau of Investigation (FBI) is an incident association system in which MO features are primarily considered. In the COPLINK project [10], undertaken by researchers at the University of Arizona, a novel concept space model is built that can be used to associate search terms with suspects in the database. A total similarity method was proposed by Brown and Hagen [3], which can handle both incident association and suspect association. Besides these theoretical methods, crime analysts normally use SQL (Structured Query Language) in practice: they build an SQL query and have the system return all records that match their search criteria.
In this paper, we describe a crime association method that combines OLAP concepts from the data warehousing area with outlier detection ideas from the data mining field. Before presenting our method, we briefly review some concepts in OLAP and data mining.

2 Brief Review of OLAP and OLAP-Based Data Mining

OLAP is a key aspect of many data warehousing systems [6]. Unlike its ancestor, the OLTP (online transaction processing) system, OLAP focuses on providing summary information to the decision-makers of an organization. Aggregated data, such as sums, averages, maxima, or minima, are pre-calculated and stored in a multi-dimensional database called a data cube. Each dimension of the data cube consists of one or more categorical attributes, and hierarchical structures generally exist in the dimensions. Most existing OLAP systems concentrate on the efficiency of retrieving the summary data in the cube; in many cases, the decision-maker still needs to apply his or her domain knowledge, and sometimes common sense, to make the final decision.
Data mining is a collection of techniques that detect patterns in large amounts of data. Quantitative approaches, including statistical methods, are generally used in data mining. Traditionally, data mining algorithms have been developed for two-way datasets; more recently, researchers have generalized some data mining methods to multi-dimensional OLAP data structures.

Imielinski et al. proposed the "cubegrade" problem [14]. The cubegrade problem can be treated as a generalized version of association rules: Imielinski et al. observe that an association rule can be viewed as the change of a count aggregate when imposing another constraint, or, in OLAP terminology, when making a drill-down operation on an existing cube cell. They argue that other aggregates such as sum, average, max, or min can also be incorporated, and that cubegrades could better support "what if" analysis. Similar to the cubegrade problem, the constrained gradient analysis was proposed by Dong et al. [7]. Constrained gradient analysis focuses on retrieving pairs of OLAP cube cells that differ considerably in their aggregates while being similar in their dimensions (usually one cell is an ascendant, descendant, or sibling of the other); more than one aggregate can be considered simultaneously. The discovery-driven exploration problem was proposed by Sarawagi et al. [18]; it aims at finding exceptions among the cube cells. They build a formula to estimate the anticipated value and the standard deviation (σ) of a cell; when the difference between the actual value of the cell and the anticipated value is greater than 2.5σ, the cell is flagged as an exception.
Similar to the above approaches, our crime association method also focuses on the cells of the OLAP data cube. We define an outlier score function to measure the distinctiveness of a cell, and incidents contained in the same cell are determined to be associated with each other when the score is significant. The definition of the outlier score function and the association method are given in Section 3.

3 Method

3.1 Rationale

The rationale of this method is as follows. Although, in theory, the template (see Section 1) is unique to each serial criminal, the data collected by the police department does not capture every aspect of the template. Some observed parts of the templates are "common", so we may see a large overlap in these common templates; the creators (criminals) of those "common" templates are not separable. Other templates are "special", and for these "special" templates we are more confident in saying that the incidents come from the same criminal. For example, consider the weapon used in a robbery incident. We may observe many incidents with the value "gun" for the weapon used. However, no crime analyst would say that the same person committed all these robberies, because "gun" is a common template shared by many criminals. If we observe several robberies with a "Japanese sword" – an uncommon template – we are more confident in asserting that these incidents result from the same criminal. (This "Japanese sword" example was first proposed by Brown and Hagen [3].) In this paper, we define an outlier score function to measure this distinctiveness of the template.

3.2 Definitions

In this section, we give the mathematical definitions used to build the outlier score function. Readers familiar with OLAP concepts will see that our notation derives from terms used in the OLAP field. A1, A2, …, Am are m attributes that we consider relevant to our study, and D1, D2, …, Dm are their respective domains. Currently, these attributes are confined to be categorical (categorical attributes such as MO are important in crime association analysis). Let z(i) be the i-th incident, and z(i).Aj be the value of incident i on the j-th attribute; z(i) can be represented as z(i) = (z1(i), z2(i), …, zm(i)), where zk(i) = z(i).Ak ∈ Dk, k ∈ {1, …, m}. Z denotes the set of all incidents.

Definition 1. Cell
A cell c is a vector of attribute values with dimension t, where t ≤ m, and can be written as c = (ci1, ci2, …, cit). To standardize the definition of a cell, for each Di we add a "wildcard" element "*" and let D'i = Di ∪ {*}. The cell c = (ci1, ci2, …, cit) can then be represented as c = (c1, c2, …, cm), where cj ∈ D'j, and cj = * if and only if j ∉ {i1, i2, …, it}. C denotes the set of all cells. Since each incident can also be treated as a cell, we define a function Cell: Z → C, with Cell(z) = (z1, z2, …, zm) for z = (z1, z2, …, zm).

Definition 2. Contains relation
Cell c = (c1, c2, …, cm) contains incident z if and only if z.Aj = cj or cj = *, for j = 1, 2, …, m. For two cells, cell c' = (c1', c2', …, cm') contains cell c = (c1, c2, …, cm) if and only if cj' = cj or cj' = *, for j = 1, 2, …, m.

Definition 3. Count of a cell
The function count is defined on a cell and returns the number of incidents that cell c contains.

Definition 4. Parent cell
Cell c' = (c1', c2', …, cm') is the parent cell of cell c on the k-th attribute when ck' = * and cj' = cj for j ≠ k. The function parent(c, k) returns the parent cell of cell c on the k-th attribute.

Definition 5. Neighborhood
P is called the neighborhood of cell c on the k-th attribute when P is a set of cells that take the same values as cell c on all attributes but the k-th and do not take the wildcard value * on the k-th attribute, i.e., P = {c(1), c(2), …, c(|P|)}, where cl(i) = cl(j) for all l ≠ k, and ck(i) ≠ * for all i = 1, 2, …, |P|. The function neighbor(c, k) returns the neighborhood of cell c on attribute k. (In the OLAP field, the neighborhood is sometimes called the siblings.)

Definition 6. Relative frequency
We call freq(c, k) = count(c) / count(parent(c, k)) the relative frequency of cell c with respect to attribute k.

Definition 7. Uncertainty function
We use a function U to measure the uncertainty of a neighborhood. This uncertainty measure is defined on the relative frequencies. If we use P = {c(1), c(2), …, c(|P|)} to denote the neighborhood of cell c on attribute k, then

  U(c, k) = U(freq(c(1), k), freq(c(2), k), …, freq(c(|P|), k))

Obviously, U should be symmetric in c(1), c(2), …, c(|P|), and U takes a smaller value if the "uncertainty" in the neighborhood is low. One candidate uncertainty function is the entropy from information theory:

  U(c, k) = H(c, k) = − Σ_{c' ∈ neighbor(c, k)} freq(c', k) · log(freq(c', k))

For freq = 0 we define 0 · log(0) = 0, as is common in information theory.

3.3 Outlier Score Function (OSF) and the Crime Association Method

Our goal is to build a function that measures the confidence, or the significance level, of associating crimes. This function is built over the OLAP cube cells. We start building this function by analyzing the requirements it needs to satisfy. Consider the following three scenarios:

I. We have 100 robberies; 5 take the value "Japanese sword" for the weapon used attribute, and 95 take "gun". Obviously, the 5 "Japanese sword" incidents are of more interest than the 95 "gun" incidents.
II. Now we add another attribute, the method of escape, and assume it has 20 different values ("by car", "by foot", etc.), each with 5 incidents. Although both "Japanese sword" and "by car" cover 5 incidents, they should not be treated equally: "Japanese sword" stands out because all the other incidents are "gun", or, in other words, the uncertainty level of the weapon used attribute is smaller.
III. If some incidents take "Japanese sword" on the weapon used attribute and "by car" on the method of escape attribute, then the combination of "Japanese sword" and "by car" is more significant than either "Japanese sword" alone or "by car" alone, because we have more evidence.

Now we define the function f as follows:

  f(c) = max over all non-* dimensions k of c of [ f(parent(c, k)) + (− log freq(c, k)) / H(c, k) ],
  f(c) = 0 if c = (*, *, …, *).                                                                  (1)

When H(c, k) = 0, we take (− log freq(c, k)) / H(c, k) = 0.

It is simple to verify that f satisfies the above three requirements. We call f the outlier score function. (The term "outlier" is commonly used in statistics: outliers are observations significantly different from other observations, possibly generated by a unique mechanism [11].) Based on the outlier score function, we give the following rule for associating criminal incidents: given a pair of incidents, if there exists a cell containing both incidents whose outlier score is greater than some threshold value τ, we say that the two incidents are associated with each other. This association method is called the OLAP-outlier-based association method, or the outlier-based method for short.
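Continuing the sketch started after the definitions (it reuses STAR, parent, freq, and entropy from there), the outlier score of Eq. (1) and the association rule could be written as follows; the single-cell check in `associated` relies on the observation that replacing values by wildcards can only lower the score.

```python
import math

def outlier_score(cell, incidents, _memo=None):
    """Outlier score function f of Eq. (1): the best chain of -log(freq)/H
    increments from the all-wildcard cell down to this cell."""
    _memo = {} if _memo is None else _memo
    if cell in _memo:
        return _memo[cell]
    if all(v == STAR for v in cell):
        return 0.0
    best = 0.0
    for k, v in enumerate(cell):
        if v == STAR:
            continue
        h = entropy(cell, k, incidents)
        step = 0.0 if h == 0 else -math.log(freq(cell, k, incidents)) / h
        best = max(best, outlier_score(parent(cell, k), incidents, _memo) + step)
    _memo[cell] = best
    return best

def associated(z1, z2, incidents, tau):
    """Associate two incidents if a cell containing both scores above tau.
    The most specific such cell keeps the attributes on which the incidents
    agree; coarser containing cells cannot score higher under Eq. (1)."""
    common = tuple(a if a == b else STAR for a, b in zip(z1, z2))
    return outlier_score(common, incidents) > tau
```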

4 Application

We applied this criminal incident association method to a real-world dataset containing information on robbery incidents that occurred in Richmond, Virginia in 1998. The dataset consists of two parts: an incident dataset and a suspect dataset. The incident dataset has 1198 records and stores the temporal, spatial, and MO information of each incident; the name (if known), height, and weight of the suspect are recorded in the suspect dataset. We applied our method to the incident dataset and used the suspect dataset for verification. Robbery was selected for two reasons: first, compared with violent crimes such as murder or sexual attack, serial robberies are more common; second, compared with breaking-and-entering crimes, more robbery incidents were "solved" (criminal arrested) or "partially solved" (the suspect's name is known). These two points make robbery favorable for evaluation purposes.

4.1 Attribute Selection

We used three types of attributes in our analysis. The first set consisted of MO features, which are primarily considered in crime association analysis; 6 MO attributes were picked. The second set consisted of census attributes (the census data was obtained directly from the census CD held in the library of the University of Virginia). Census data represents the spatial characteristics of the location where a criminal incident occurred, and it may help to reveal the spatial aspect of the criminals' templates; for example, some criminals prefer to attack "high-income" areas. Lastly, we chose some distance attributes: distances from the incident location to spatial landmarks such as a major highway or a church. Distance features are also important in analyzing criminals' behaviors; for example, a criminal might prefer to initiate an attack within a certain distance of a major highway, so that the offense cannot be observed during the attack and he or she can leave the crime scene as soon as possible afterwards. There were a total of 5 distance attributes. The names of all attributes and their descriptions are given in Appendix I; they have also been used in a previous study on predicting breaking-and-entering crimes by Brown et al. [4].
An attribute selection step was performed on all numerical attributes (census and distance attributes) before applying the association method, because some attributes were redundant, and these redundant attributes were unfavorable to the association algorithm in terms of both accuracy and efficiency. We adopted a feature-selection-by-clustering methodology to pick the attributes. In this method, we used the correlation coefficient to measure how similar, or close, two attributes were, and then clustered the attributes into a number of groups according to this similarity measure. The attributes in the same group were similar to each other and quite different from attributes in other groups. For each group, we picked a representative; the final set of representative attributes was considered to capture the major characteristics of the dataset. A similar methodology was used by Mitra et al. [16]. We picked the k-medoid clustering algorithm (for more details about the k-medoid algorithm and other clustering algorithms, see [8]), because the k-medoid method works on a similarity/distance matrix (some other methods only work on coordinate data) and it tends to return spherical clusters. In addition, k-medoid returns a medoid for each cluster, based upon which we could select the representative attributes. After making a few slight adjustments and checking the silhouette plot [15], we finally obtained three clusters, as shown in Fig. 1. The algorithm returned three medoids: HUNT_DST (housing unit density), ENRL3_DST (public school enrollment density), and TRAN_PC (expenses on transportation: per capita). We then made some adjustments: we replaced ENRL3_DST with the attribute POP3_DST (population density, age 12–17), which relates more directly to potential attackers and victims, and, for similar reasons, we replaced TRAN_PC with MHINC (median household income).

Fig. 1. Result of k-medoid clustering

There were a total of 9 attributes used in our analysis: the 6 MO attributes (categorical) and the 3 numerical attributes picked by the attribute selection procedure. Since our method is developed for categorical attributes, we converted the numerical attributes to categorical ones by dividing them into 11 equally sized bins; the number of bins was determined by Sturges' rule [19][20].
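The attribute selection and binning steps could be sketched as follows. The correlation-based distance, a small PAM-style k-medoid routine (standing in for the implementation the authors used, cf. [8], [15]), and Sturges-style binning are shown for illustration only, with array shapes and names assumed.

```python
import numpy as np

def correlation_distance(X):
    """Attribute-to-attribute distance: 1 - |correlation| over the columns of X."""
    return 1.0 - np.abs(np.corrcoef(X, rowvar=False))

def k_medoids(D, k, n_iter=100, seed=0):
    """A small PAM-style k-medoid pass on a precomputed distance matrix D."""
    rng = np.random.default_rng(seed)
    medoids = [int(m) for m in rng.choice(D.shape[0], size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)            # assign to nearest medoid
        new_medoids = []
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                new_medoids.append(medoids[j])
                continue
            costs = D[np.ix_(members, members)].sum(axis=0)  # cost of each member as medoid
            new_medoids.append(int(members[np.argmin(costs)]))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids

def select_representatives(X, attribute_names, k=3):
    """Cluster the attributes and keep one representative (the medoid) per cluster."""
    medoids = k_medoids(correlation_distance(X), k)
    return [attribute_names[m] for m in medoids]

def sturges_bins(n_records):
    """Sturges' rule, 1 + log2(n): about 11 bins for ~1200 incidents."""
    return int(round(1 + np.log2(n_records)))

def discretize(column, n_bins):
    """Equal-width binning of one numerical attribute into categorical codes 0..n_bins-1."""
    edges = np.linspace(column.min(), column.max(), n_bins + 1)
    return np.digitize(column, edges[1:-1])
```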

4.2 Evaluation Criteria

We wanted to evaluate whether the associations determined by our method correspond to the true result, where the information in the suspect database is considered the "true result". 170 incidents with the names of the suspects were used for evaluation. We generated all incident pairs; if the two incidents in a pair had suspects with the same name and date of birth, we said that the "true result" for this incident pair was a "true association". There were 33 true associations. We used two measures to evaluate our method. The first measure is called "detected true associations": we expect the association method to detect a large portion of the "true associations". The second measure is called the "average number of relevant records". This measure is built on an analogy with a search engine such as Google: for each search string we give, it returns a list of documents considered "relevant" to the search criterion. Similarly, for the crime association problem, given an incident, the algorithm returns a list of records considered "associated" with that incident. A shorter list is preferred in both cases. The average "length" of these lists provides the second measure, which we call the "average number of relevant records"; the algorithm is more accurate when this measure has a smaller value.

In the information retrieval area [17], two commonly used criteria for evaluating a retrieval system are recall and precision: the former is the ability of a system to present relevant items, and the latter is the ability to present only the relevant items. Our first measure is a recall-type measure, and our second measure is equivalent to a precision-type measure. These two measures are not specific to our approach; they can be used to evaluate any association algorithm, and we can therefore use them to compare the performance of different association methods.

4.3 Result and Comparison

Different threshold values were set to test our method. Obviously, if we set the threshold to 0, the method detects all "true associations" and the average number of relevant records is 169 (given 170 incidents for evaluation); if we set the threshold, τ, to infinity, the method returns 0 for both measures. As the threshold increases, we expect a decrease in both the number of detected true associations and the average number of relevant records. The results are given in Table 1.

Table 1. Results of the outlier-based method

  Threshold  Detected true associations  Avg. number of relevant records
  0          33                          169.00
  1          32                          121.04
  2          30                           62.54
  3          23                           28.38
  4          18                           13.96
  5          16                            7.51
  6           8                            4.25
  7           2                            2.29
  ∞           0                            0.00
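The two measures reported in these tables can be computed as in the sketch below, assuming the association decisions are available as a set of unordered incident-id pairs; the data representation is ours, while the measures are the ones defined in Section 4.2.

```python
def evaluate_associations(predicted_pairs, true_pairs, incident_ids):
    """predicted_pairs / true_pairs: sets of frozenset({id1, id2}) incident pairs.
    Returns (detected true associations, average number of relevant records)."""
    detected = len(true_pairs & predicted_pairs)

    returned = {i: 0 for i in incident_ids}      # length of the "relevant" list per incident
    for pair in predicted_pairs:
        for i in pair:
            returned[i] += 1
    avg_relevant = sum(returned.values()) / len(incident_ids)
    return detected, avg_relevant
```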

We compared this outlier-based method with the similarity-based crime association method proposed by Brown and Hagen [3]. Given a pair of incidents, the similarity-based method first calculates a similarity score for each attribute and then computes a total similarity score as the weighted average of the individual similarity scores; the total similarity score is used to determine whether the incidents are associated. Using the same evaluation criteria, the results of the similarity-based method are given in Table 2.

Table 2. Results of the similarity-based method

  Threshold  Detected true associations  Avg. number of relevant records
  0          33                          169.00
  0.5        33                          112.98
  0.6        25                           80.05
  0.7        15                           45.52
  0.8         7                           19.38
  0.9         0                            3.97
  1           0                            0.00

If we plot the average number of relevant records on the X-axis and the detected true associations on the Y-axis, the comparison can be illustrated as in Fig. 2. In Fig. 2, the curve of the outlier-based method lies above that of the similarity-based method for most cases. That means that, given the same "accuracy" (detected true associations) level, the outlier-based method returns fewer relevant records, and if we keep the number of relevant records (the average length of the returned list) the same for both methods, the outlier-based method is more accurate. The curve of the similarity-based method sits slightly above that of the outlier-based method only when the average number of relevant records is above 100; since the size of the evaluation incident set is 170, no crime analyst would consider further investigating a set of over 100 incidents. The outlier-based method is therefore generally more effective.

Fig. 2. Comparison: the outlier-based method vs. the similarity-based method (X-axis: avg. number of relevant records; Y-axis: detected true associations)

5 Conclusion

In this paper, an OLAP-outlier-based method is introduced to solve the crime association problem. The criminal incidents are modeled as an OLAP cube, and an outlier score function is defined over the cube cells; the incidents contained in a cell are determined to be associated with each other when the outlier score is large enough. The method was applied to a robbery dataset, and the results show that it can provide significant improvements for crime analysts who need to link incidents in large databases.

References

1. Badiru, A.B., Karasz, J.M., and Holloway, B.T., "AREST: Armed Robbery Eidetic Suspect Typing Expert System", Journal of Police Science and Administration, 16, 210–216 (1988)
2. Brantingham, P.J. and Brantingham, P.L., Patterns in Crime, New York: Macmillan (1984)
3. Brown, D.E. and Hagen, S.C., "Data Association Methods with Applications to Law Enforcement", Decision Support Systems, 34, 369–378 (2003)
4. Brown, D.E., Liu, H., and Xue, Y., "Mining Preferences from Spatial-temporal Data", Proc. of the First SIAM International Conference on Data Mining (2001)
5. Clarke, R.V. and Cornish, D.B., "Modeling Offenders' Decisions: A Framework for Research and Policy", Crime and Justice: An Annual Review of Research, Vol. 6, ed. by Tonry, M. and Morris, N., University of Chicago Press (1985)
6. Chaudhuri, S. and Dayal, U., "An Overview of Data Warehousing and OLAP Technology", ACM SIGMOD Record, 26 (1997)
7. Dong, G., Han, J., Lam, J., Pei, J., and Wang, K., "Mining Multi-Dimensional Constrained Gradients in Data Cubes", Proc. of the 27th VLDB Conference, Roma, Italy (2001)
8. Everitt, B., Cluster Analysis, John Wiley & Sons, Inc. (1993)
9. Felson, M., "Routine Activities and Crime Prevention in the Developing Metropolis", Criminology, 25, 911–931 (1987)
10. Hauck, R., Atabakhsh, H., Onguasith, P., Gupta, H., and Chen, H., "Using Coplink to Analyze Criminal-Justice Data", IEEE Computer, 35, 30–37 (2002)
11. Hawkins, D., Identification of Outliers, Chapman and Hall, London (1980)
12. Heck, R.O., Career Criminal Apprehension Program: Annual Report, Sacramento, CA: Office of Criminal Justice Planning (1991)
13. Icove, D.J., "Automated Crime Profiling", Law Enforcement Bulletin, 55, 27–30 (1986)
14. Imielinski, T., Khachiyan, L., and Abdul-ghani, A., "Cubegrades: Generalizing Association Rules", Technical Report, Dept. of Computer Science, Rutgers University, Aug. (2000)
15. Kaufman, L. and Rousseeuw, P., Finding Groups in Data, Wiley (1990)
16. Mitra, P., Murthy, C.A., and Pal, S.K., "Unsupervised Feature Selection Using Feature Similarity", IEEE Trans. on Pattern Analysis and Machine Intelligence, 24, 301–312 (2002)
17. Salton, G. and McGill, M., Introduction to Modern Information Retrieval, McGraw-Hill Book Company, New York (1983)
18. Sarawagi, S., Agrawal, R., and Megiddo, N., "Discovery-Driven Exploration of OLAP Data Cubes", Proc. of the Sixth Int'l Conference on Extending Database Technology (EDBT), Valencia, Spain (1998)
19. Scott, D., Multivariate Density Estimation: Theory, Practice and Visualization, New York, NY: Wiley (1992)
20. Sturges, H.A., "The Choice of a Class Interval", Journal of the American Statistical Association, 21, 65–66 (1926)

Appendix I. Attributes used in the analysis

(a) MO attributes

  Name        Description
  Rsus_Acts   Actions taken by the suspects
  R_Threats   Method used by the suspects to threaten the victim
  R_Force     Actions that suspects force the victim to do
  Rvic_Loc    Location type of the victim when the robbery was committed
  Method_Esc  Method of escape from the scene
  Premise     Premise of the crime

(b) Census attributes

  Attribute name  Description

  General
  POP_DST     Population density (density means that the statistic is divided by the area)
  HH_DST      Household density
  FAM_DST     Family density
  MALE_DST    Male population density
  FEM_DST     Female population density

  Race
  RACE1_DST   White population density
  RACE2_DST   Black population density
  RACE3_DST   American Indian population density
  RACE4_DST   Asian population density
  RACE5_DST   Other population density
  HISP_DST    Hispanic origin population density

  Population Age
  POP1_DST    Population density (0-5 years)
  POP2_DST    Population density (6-11 years)
  POP3_DST    Population density (12-17 years)
  POP4_DST    Population density (18-24 years)
  POP5_DST    Population density (25-34 years)
  POP6_DST    Population density (35-44 years)
  POP7_DST    Population density (45-54 years)
  POP8_DST    Population density (55-64 years)
  POP9_DST    Population density (65-74 years)
  POP10_DST   Population density (over 75 years)

  Householder Age
  AGEH1_DST   Density: age of householder under 25 years
  AGEH2_DST   Density: age of householder 25-34 years
  AGEH3_DST   Density: age of householder 35-44 years
  AGEH4_DST   Density: age of householder 45-54 years
  AGEH5_DST   Density: age of householder 55-64 years
  AGEH6_DST   Density: age of householder over 65 years

  Household Size
  PPH1_DST    Density: 1 person households
  PPH2_DST    Density: 2 person households
  PPH3_DST    Density: 3-5 person households
  PPH6_DST    Density: 6 or more person households

  Housing, misc.
  HUNT_DST    Housing unit density
  OCCHU_DST   Occupied housing unit density
  VACHU_DST   Vacant housing unit density
  MORT1_DST   Density: owner occupied housing unit with mortgage
  MORT2_DST   Density: owner occupied housing unit without mortgage
  COND1_DST   Density: owner occupied condominiums
  OWN_DST     Density: housing unit occupied by owner
  RENT_DST    Density: housing unit occupied by renter

  Housing Structure
  HSTR1_DST   Density: occupied structure with 1 unit, detached
  HSTR2_DST   Density: occupied structure with 1 unit, attached
  HSTR3_DST   Density: occupied structure with 2 units
  HSTR4_DST   Density: occupied structure with 3-9 units
  HSTR6_DST   Density: occupied structure with 10+ units
  HSTR9_DST   Density: occupied structure, trailer
  HSTR10_DST  Density: occupied structure, other

  Income
  PCINC_97    Per capita income
  MHINC_97    Median household income
  AHINC_97    Average household income

  School Enrollment
  ENRL1_DST   School enrollment density: public preprimary
  ENRL2_DST   School enrollment density: private preprimary
  ENRL3_DST   School enrollment density: public school
  ENRL4_DST   School enrollment density: private school
  ENRL5_DST   School enrollment density: public college
  ENRL6_DST   School enrollment density: private college
  ENRL7_DST   School enrollment density: not enrolled in school

  Work Force
  CLS1_DST    Density: private for-profit wage and salary workers
  CLS2_DST    Density: private non-profit wage and salary workers
  CLS3_DST    Density: local government workers
  CLS4_DST    Density: state government workers
  CLS5_DST    Density: federal government workers
  CLS6_DST    Density: self-employed workers
  CLS7_DST    Density: unpaid family workers

  Consumer Expenditures
  ALC_TOB_PH  Expenses on alcohol and tobacco: per household
  APPAREL_PH  Expenses on apparel: per household
  EDU_PH      Expenses on education: per household
  ET_PH       Expenses on entertainment: per household
  FOOD_PH     Expenses on food: per household
  MED_PH      Expenses on medicine and health: per household
  HOUSING_PH  Expenses on housing: per household
  PCARE_PH    Expenses on personal care: per household
  REA_PH      Expenses on reading: per household
  TRANS_PH    Expenses on transportation: per household
  ALC_TOB_PC  Expenses on alcohol and tobacco: per capita
  APPAREL_PC  Expenses on apparel: per capita
  EDU_PC      Expenses on education: per capita
  ET_PC       Expenses on entertainment: per capita
  FOOD_PC     Expenses on food: per capita
  MED_PC      Expenses on medicine and health: per capita
  HOUSING_PC  Expenses on housing: per capita
  PCARE_PC    Expenses on personal care: per capita
  REA_PC      Expenses on reading: per capita
  TRANS_PC    Expenses on transportation: per capita

(c) Distance attributes

  Name        Description
  D_Church    Distance to the nearest church
  D_Hospital  Distance to the nearest hospital
  D_Highway   Distance to the nearest highway
  D_Park      Distance to the nearest park
  D_School    Distance to the nearest school

Names: A New Frontier in Text Mining

Frankie Patman¹ and Paul Thompson²

¹ Language Analysis Systems, Inc., 2214 Rock Hill Rd., Herndon, VA 20170
  [email protected]
² Institute for Security Technology Studies, Dartmouth College, Hanover, NH 03755
  [email protected]

Abstract. Over the past 15 years the government has funded research in information extraction, with the goal of developing the technology to extract entities, events, and their interrelationships from free text for further analysis. A crucial component of linking entities across documents is the ability to recognize when different name strings are potential references to the same entity. Given the extraordinary range of variation international names can take when rendered in the Roman alphabet, this is a daunting task. This paper surveys existing technologies for name matching and for accomplishing pieces of the cross-document extraction and linking task. It proposes a direction for future work in which existing entity extraction, coreference, and database name matching technologies would be harnessed for cross-document coreference and linking capabilities. The extension of name variant matching to free text will add important text mining functionality for intelligence and security informatics toolkits.

1 Introduction

Database name matching technology has long been used in criminal investigations [1], in counter-terrorism efforts [2], and in a wide variety of government processes, e.g., the processing of visa applications. With this technology a name is compared to names contained in one or more databases to determine whether there is a match. Sometimes this matching operation is a straightforward exact match, but often the process is more complicated: two names may not match exactly for a wide variety of reasons and yet still refer to the same individual [3]. Often a name in a database comes from one field of a more complete database record, and the values in other fields, e.g., social security number or address, can be used to help match names which are not exact matches. The context from the complete record helps the matching process. In this paper we propose the design of a system that would extend database name matching technology to the unstructured realm of free text.
Over the past 15 or so years the federal government has funded research in information extraction, e.g., the Message Understanding Conferences [4], Tipster [5], and Automatic Content Extraction [6]. The goal of this research has been to develop the technology to extract entities, events, and their interrelationships from free text, so that the extracted entities and relationships can be stored in a relational database, or knowledge base, to be more readily analyzed. One subtask during the last few years of the Message Understanding Conference was the Named Entity Task, in which personal and company names, as well as other formatted information, were extracted from free text. The system proposed in this paper would extract personal and company names from free text for inclusion in a database, an information extraction template, or automatically marked-up XML text [7]. It would expand link analysis capabilities by taking into account a broader and more realistic view of the types of name variation found in texts from diverse sources. The sophisticated name matching algorithms currently available for matching names in databases are equally suited to matching name strings drawn from text. Analogous to the way in which the context of a full database record can assist in the name matching process, in the free text application the context of the full text of the document can be used not only to help identify and extract names, but also to match names, both within a single document and across multiple documents.

2 Database Name Matching

Name matching can be defined as the process of determining whether two name strings are instances of the same name. It is a component of entity matching but is distinct from that larger task, which in many cases requires more information than a name alone. Name matching serves to create a set of candidate names for further consideration: those that are variants of the query name. 'Al Jones', for example, is a legitimate variant of 'Alfred Jones', 'Alan Jones', and 'Albert Jones'. Different processes from those involved in name matching will often be required to equate entities, perhaps by relation to a particular place, organization, event, or numeric identifier. However, without a sufficient representation of a name (the set of variants of the name likely to occur in the data), different mentions of the same entity may not be recognized.
Matching names in databases has been a persistent and well-known problem for years [8]. In the context of the English-speaking world alone, where the predominant model for names is a given name, an optional middle name, and a surname of Anglo-Saxon or Western European origin, a name can have any number of variant forms, and any or all of these forms may turn up in database entries. For example, Alfred James Martin can also be A. J. Martin; Mary Douglas McConnell may also be Mary Douglas, Mary McConnell, or Mary Douglas-McConnell; Jack Crowley and John Crowley may both refer to the same person; the surnames Laury and Lowrie can have the same pronunciation and may be confused when names are taken orally; jSmith is a common typographical error entered for the name Smith. These familiar types of name variation pose non-trivial difficulties for automatic name matching, and numerous systems have been devised to deal with them (see [3]). The challenges to name matching are greatly increased when databases contain names from outside the Anglo-American context. Consider some common issues that arise with names from around the world.

• In China or Korea, the surname comes first, before the given name. Some people may maintain this format in Western contexts, others may reverse the name order to fit the Western model, and still others may use either. The problem is compounded further if a Western given name is added, since there is no one place in the string of names where the additional name is required to appear. Ex: Yi Kyung Hee ~ Kyung Hee Yi ~ Kathy Yi Kyung Hee ~ Yi Kathy Kyung Hee ~ Kathy Kyung Hee Yi
• In some Asian countries, such as Indonesia, many people have only one name; what appears to be a surname is actually the name of the father. Names are normally indexed by the given name. Ex: former Indonesian president Abdurrahman Wahid is Mr. Abdurrahman (Wahid being the name of his father).
• A name from some places in the Arab world may have many components showing the bearer's lineage, none of which is a family name. Any one of the name elements other than the given name can be dropped. Ex: Aziz Hamid Salim Sabah ~ Aziz Hamid ~ Aziz Sabah ~ Aziz
• Hispanic names commonly have two surnames, but it is the first of these rather than the last that is the family name. The final surname (which is the mother's family name) may or may not be used. Ex: Jose Felipe Ortega Ballesteros ~ Jose Felipe Ortega, but is less likely to refer to the same person as Jose Felipe Ballesteros
• There may be multiple standard systems for transliterating a name from a native script (e.g. Arabic, Chinese, Hangul, Cyrillic) into the Roman alphabet, individuals may make up their own Roman spelling on the fly, or database entry operators may spell an unfamiliar name according to their own understanding of how it sounds. Ex: Yi ~ Lee ~ I ~ Lie ~ Ee ~ Rhee
• Names may contain various kinds of affixes, which may be conjoined to the rest of the name, separated from it by white space or hyphens, or dropped altogether. Ex: Abdalsharif ~ Abd al-Sharif ~ Abd-Al-Sharif ~ Abdal Sharif; al-Qaddafi ~ Qaddafi

Systems for overcoming name variation search problems typically incorporate one or more of: (1) a non-culture-specific phonetic algorithm (like Soundex¹ or one of its refinements, e.g. [9]); (2) allowances for transposed, additional, or missing characters; (3) allowances for transposed, additional, or missing name elements and for initials and abbreviations; and (4) nickname recognition. See [10] for a recent example. Less commonly, culture-specific phonetic rules may be used. The most serious problem for name-matching software is the wide variety of naming conventions represented in modern databases, which reflects the multicultural composition of many societies. Name-matching algorithms tend to take a one-size-fits-all approach, either by underestimating the effects of cultural variation or by assuming that names in any particular data source will be homogeneous.

¹ Soundex, the most well-known algorithm for variant name searching in databases, is a phonetics-based system patented in 1918. It was devised for use in indexing the 1910 U.S. census data. The system groups consonants into sets of similar sounds (based on American names reported at the time) and assigns a common code to all names beginning with the same letter and sharing the same sequence of consonant groups. Soundex does not accommodate certain errors very well, and groups many highly dissimilar names under the same code. See [11].

This may give reasonable results for names that fit one model, but may perform very poorly with names that follow different conventions. In the area of spelling variation alone, which letters are considered variants of which others differs from one culture to the next. In transcribed Arabic names, for example, the letters "K" and "Q" can be used interchangeably; "Qadafi" and "Kadafi" are variants of the same name. This is not the case in Chinese transcriptions, however, where "Kuan" and "Quan" are most likely to be entirely different names. What constitutes similarity between two name strings depends on the culture of origin of the names, and typically this must be determined on a case-by-case basis rather than across an entire data set.
Language Analysis Systems, Inc. (LAS) has implemented a number of approaches to coping with the wide array of multi-cultural name forms found in databases. Names are first submitted to an automatic analysis process, which determines the most likely cultural/linguistic origin of the name (or, at the discretion of the user, the culture of origin can be chosen manually). Based on this determination, an appropriate algorithm or set of rules is applied to the matching process. LAS technologies include culturally sensitive search systems and processes for generating variants of names, among others. Some of the LAS technologies are briefly described below.
Automatic Name Analysis: The name analysis system (NameClassifier) contains a knowledge base of information about name strings from various cultures. An input name is compared to what is known about name strings from each of the included cultures, and the probability of the name's being derived from each of the cultures is computed. The culture with the highest score is assigned to the input name. The culture assignment is then used by other technologies to determine the most appropriate name-matching strategy.
NameVariantGenerator: Name variant generation produces orthographic and syntactic variants of an input string. The string is first assigned a culture of origin through automatic name analysis. Culture-specific rules are then applied to the string to produce a regular expression. The regular expression is compared against a knowledge base of frequency information about names drawn from a database of over 750,000,000 names, and variant strings with a high enough frequency score are returned in frequency-ranked order. This process creates a set of likely variants of a name, which can then be used for further querying and matching.
NameHunter: NameHunter is a search engine that computes the similarity of two name strings based on orthography, word order, and the number of elements in the string. The thresholds and parameters for comparison differ depending on the culture assignment of the input string. If a string from the database has a score that exceeds the thresholds for the input name's culture, the name is returned. Returns are ranked relative to each other, so that the highest-scoring strings are presented first. NameHunter allows for noisy data; thresholds can be adjusted by the user to control the degree of noise in the returns.
MetaMatch: MetaMatch is a phonetic-based name retrieval system. Entry strings are first submitted to automatic name analysis for a culture assignment. Strings are then transformed to phonetic representations based on culture-specific rules, which are stored in the database along with the original entry. Query strings are similarly processed, and the culture assignment is retained to determine the particular


parameters and thresholds for comparison. A similarity algorithm based on linguistic principles is used to determine the degree of similarity between query and entry strings [12]. Returns are presented in ranked order. This approach is particularly effective when name entries have been drawn from oral sources, such as telephone conversations.
NameGenderizer: This module returns the most likely gender for a given name, based on the frequency with which the name is assigned to males or females.
A major advantage of the technologies developed by LAS is that a measure of similarity between name forms is computed and used to return names in order of their degree of similarity to the query term. An example of the effectiveness of this approach over a Soundex search is provided in Fig. 1 in the Appendix.
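For reference, here is a minimal sketch of the classic Soundex coding that such culture-aware matching is compared against (see footnote 1 and the Appendix). This is the commonly cited simplified formulation; real implementations differ in small details such as the treatment of 'H' and 'W'.

```python
def soundex(name):
    """Classic Soundex: keep the first letter, map remaining consonants to digit
    classes, drop vowels, collapse adjacent repeats, pad/truncate to four characters."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [c for c in name.lower() if c.isalpha()]
    if not letters:
        return ""
    digits = [codes.get(c, "") for c in letters]
    out, prev = [], digits[0]
    for d in digits[1:]:
        if d and d != prev:
            out.append(d)
        prev = d
    return (letters[0].upper() + "".join(out) + "000")[:4]
```

For instance, soundex("Qaddafi") and soundex("Kadafi") return "Q310" and "K310": phonetically equivalent transliterations fall into different Soundex classes simply because the first letter differs, which is one of the weaknesses a culture-sensitive matcher is meant to avoid.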

3 Named Entity Extraction

The task of named entity recognition and extraction is to identify strings in text that represent names of people, organizations, and places. Work in this area began in earnest in the mid-eighties, with the initiation of the Message Understanding Conferences (MUC). MUC is largely responsible for the definition of, and specifications for, the named entity extraction task as it is understood today [4]. Through MUC-6 in 1995, most systems performing named entity extraction were based on hand-built patterns that recognized various features and structures in the text. These were found to be highly successful, with precision and recall figures reaching 97% and 96%, respectively [4]. However, the systems were trained exclusively on English-language newspaper articles with a fixed set of domains, leaving open the question of how they would perform on other text sources. Bikel et al. [13] found that rules developed for one newswire source had to be adapted for application to a different newswire service, and that English-language rules were of little use as a starting point for developing rules for an unrelated language like Chinese. These systems are labor-intensive and require people trained in text analysis and pattern writing to develop and maintain rule sets.
Much recent work in named entity extraction has focused on statistical/probabilistic approaches (e.g., [14], [15], [13], [16]). Results in some cases have been very good, with F-measure scores exceeding 94%, even for systems gathering information from the least computationally expensive sources, such as punctuation, dictionary look-up, and part-of-speech taggers [15]. Borthwick et al. [14] found that by training their system on outputs tagged by hand-built systems (such as SRA's NameTag extractor), scores improved to better than 97%, exceeding the F-measure scores of hand-built systems alone and rivaling the scores of human annotators. These results are very promising and suggest that named entity extraction can be usefully applied to larger tasks such as relation detection and link analysis (see, for example, [17]).


4 Intra- and Inter-document Coreference

The task of determining coreference can be defined as "the process of determining whether two expressions in natural language refer to the same entity in the world" [18]. Expressions handled by coreference systems are typically limited to noun phrases of various types (including proper names) and pronouns. This paper considers only coreference between proper names. For a human reader, coreference processes take place within a single document as well as across multiple documents when more than one text is read. Most coreference systems deal only with coreference within a document (see [19], [20], [21], [18], [22]); recently, researchers have also begun work on the more difficult task of cross-document coreference ([23], [24], [25]).
Bagga [26] offers a classification scheme for evaluating coreference types and systems for performing coreference resolution, based in part on the amount of processing required. Establishing coreference between proper names was determined to require named entity recognition and the generation of syntactic variants of names. Indeed, the coreference systems surveyed for this paper treat proper name variation (apart from synonyms, acronyms, and abbreviations) largely as a syntactic problem. Bontcheva et al., for example, allow name variants to be an exact match, a word-token match that ignores punctuation and word order (e.g., "John Smith" and "Smith, John"), a first-token match for cases like "Peter Smith" and "Peter", a last-token match for, e.g., "John Smith" and "Smith", a possessive form like "John's", or a substring in which all word tokens in the shorter name are included in the longer one (e.g., "John J. Smith" and "John Smith").
Depending on the text source, name variants within a single document are likely to be consistent and limited to syntactic variants, shortened forms, and synonyms such as nicknames.² One would expect intra-document coreference results for proper names under these circumstances to be fairly good. Bontcheva et al. [19] obtained precision and recall figures ranging from 94%–98% and 92%–95%, respectively, for proper name coreference in texts drawn from broadcast news, newswire, and newspaper sources.³ Bagga and Baldwin [23] also report very good results (F-measures up to 84.6%) for tests of their cross-document coreference system, which compares summaries created for extracted coreference chains. Note, however, that their reported research looked only for references to entities named "John Smith", and that the focus of the cross-document coreference task was maintaining distinctions between different entities with the same name. Research was conducted exclusively on texts from the New York Times. Nevertheless, their work demonstrates that context can be effectively used for disambiguation across documents. Ravin and Kazi [24] focus on both distinguishing different entities with the same name and merging variant names

3

Note, however, that even within a document inconsistencies are not uncommon, especially when dealing with names of non-European origin. A Wall Street Journal article appearing in January 2003 referred to Mohammed Mansour Jabarah as Mr. Jabarah, while Khalid Sheikh Mohammed was called Mr. Khalid. When items other than proper names are considered for coreference, scores are much lower than those reported by Bontcheva et al. for proper names. The highest F-measure score for coreference at the MUC-7 competition was 61.8%. This figure includes coreference between proper names, various types of noun phrases, and pronouns.

Names: A New Frontier in Text Mining

33

referring to a single entity. They use the IBM Context Thesaurus to compare the contexts in which similar names from different documents are found. If there is enough overlap in the contextual information, the names are assumed to refer to the same entity. Their work was also limited to articles from the New York Times and the Wall Street Journal, both of which are edited publications with a high degree of internal consistency. Across documents from a wide variety of sources, consistent name variants cannot be counted on, especially for names originating outside the Anglo/Western European tradition. In fact, the many types of name variation commonly found in databases can be expected. A recent web search on Google for texts about Muammar Qaddafi, for example, turned up thousands of relevant pages under the spellings Qathafi, Kaddafi, Qadafi, Gadafi, Gaddafi, Kathafi, Kadhafi, Qadhafi, Qazzafi, Kazafi, Qaddafy, Qadafy, Quadhaffi, Gadhdhafi, al-Qaddafi, Al-Qaddafi, and Al Qaddafi (and these are only a few of the variants of this name known to occur). A coreference system that can be of use to agencies dealing with international names must be able to recognize name strings with this degree of variation as potential instances of a single name. Cross-document coreference systems currently suffer from the same weakness as most database name search systems. They assume a much higher degree of source homogeneity than can be expected in the world outside the laboratory, and their analysis of name variation is based on an Anglo/Western European model. For the coreference systems surveyed here, recall would be a considerable problem within a multi-source document collection containing non-Western names. However, with an expanded definition of name variation, constrained and supplemented by contextual information, these coreference technologies can serve as a starting point for linking and disambiguating entities across documents from widely varying sources.
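As a purely illustrative sketch of what an "expanded definition of name variation" can buy, the Python fragment below collapses romanized spellings to a crude key by dropping vowels and unifying a few consonant renderings. It is a toy, not the approach of LAS or of any system surveyed here, and the specific substitutions are assumptions chosen for this one example; it simply shows how spellings as different as "Gadhdhafi" and "Qazzafi" can be brought together before contextual evidence is consulted.

    import re

    def rough_key(name: str) -> str:
        # Collapse a romanized name to a crude phonetic key (illustration only).
        s = name.lower()
        s = re.sub(r"^(al[- ]|el[- ])", "", s)        # drop the definite article
        s = s.replace("th", "d").replace("dh", "d")   # unify renderings of one consonant
        s = s.replace("q", "k").replace("g", "k").replace("z", "d")
        s = re.sub(r"[haeiouy]", "", s)               # h and vowels vary most across spellings
        return re.sub(r"(.)\1+", r"\1", s)            # collapse doubled letters

    variants = ["Qaddafi", "Qathafi", "Kaddafi", "Gadhdhafi", "Quadhaffi", "al-Qaddafi", "Qazzafi"]
    print({v: rough_key(v) for v in variants})        # every spelling maps to the same key, 'kdf'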

5 Name Text Mining Support for Visualization, Link Analysis, and Deception Detection

Commercial and research products for visualization and link analysis have become widely available in recent years, e.g., Hyperbolic Tree, or Star Tree [27], SPIRE [28], COPLINK [29], and InfoGlide [30]. Visualization and link analysis continue to be an active area of on-going research [31]. Some current tools have been incorporated into systems supporting intelligence and security informatics. For example, COPLINK [29] makes use of several visualization and link analysis packages, including i2's [32] Analyst Notebook. Products such as COPLINK and InfoGlide also support name matching and deception detection. These tools make use of sophisticated statistical record linkage, e.g., [33], and have well-developed interfaces to support analysts [32, 29]. Chen et al. [29] note that COPLINK Connect has the built-in capability for partial and phonetic-based name searches. It is not clear from the paper, however, what the scope of coverage is for phonetically spelled names, or how this is implemented. Research software and commercial products have been developed, such as those presented in [34, 30], which include modules that detect fraud in database records. These applications model the ways that criminals, or terrorists, typically alter records to disguise their identity. The algorithms used by these systems could be augmented by taking into account a deeper multi-cultural analysis of names, as discussed in section 2.

6 Procedure for a Name Extraction and Matching Text Mining Module

In this section a procedure is presented for name extraction and matching within and across documents. This algorithm could be incorporated in a module that would work with an environment such as COPLINK. The basic algorithm is as follows.

Within document:
1. Perform named entity extraction.
2. Establish coreference between name mentions within a single document, creating an equivalence class for each named entity.
3. Discover relations between equivalence classes within each document.
4. Find the longest canonical name string in each equivalence class.
5. Perform automatic name analysis on canonical names using NameClassifier; retain culture assignment.
6. Generate variant forms of canonical names according to culture-specific criteria using NameVariantGenerator.

Across documents:
7. For each culture identified during name analysis, match sets of canonical name variants belonging to that culture against each other; for each pair of variant sets considered, if there are no incompatible (non-matching) members in the sets, mark as potential matches (e.g., Khalid bin (son of) Jamal and Khalid abu (father of) Jamal would be incompatible).
8. For potential name set matches, use a context thesaurus like that described in [24] to compare contexts where the names in the equivalence classes are found; if there are enough overlapping descriptions, merge the equivalence classes for the name sets (which will also expand the set of relations for the class to include those found in both documents); combine variant sets for the two canonical name strings into a single set, pruning redundancies.
9. For potential name set matches where overlapping contextual descriptions do not meet the minimum threshold, mark as a potential link, but do not merge.
10. Repeat the process from #7 on for each pair of variant sets, until no further comparisons are possible.

This algorithm could be implemented within a software module of a larger text mining application. The simplest integration of this algorithm would be as a module that extracted personal names from free text and stored the extracted names and relationships in a database. As discussed by [7], it would also be possible to use this algorithm to annotate the free text, in addition to creating database entries. This automatic markup would provide an interface for an analyst which would show not only the entities and their relationships, but also preserve the context of the surrounding text.
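A minimal Python sketch of the cross-document portion (steps 7-10) is given below. The data structures and the compatibility test are assumptions made for illustration (in particular, shared variant forms stand in for the fuller "no incompatible members" test of step 7), and the context-overlap function is left to the caller, as in [24].

    def merge_pass(classes, context_overlap, threshold=0.5):
        # One pass over per-document equivalence classes; the caller repeats the pass
        # until no merge occurs (step 10). Each class is a dict with hypothetical keys
        # 'culture', 'variants' (set of variant name forms) and 'contexts' (set of
        # contextual descriptors extracted around the mentions).
        links = []
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                a, b = classes[i], classes[j]
                if a["culture"] != b["culture"] or not (a["variants"] & b["variants"]):
                    continue                                   # not a potential match (step 7)
                if context_overlap(a["contexts"], b["contexts"]) >= threshold:
                    a["variants"] |= b["variants"]             # merge classes and variant sets (step 8)
                    a["contexts"] |= b["contexts"]
                    classes.pop(j)
                    return classes, links, True                # a merge happened; repeat the pass
                links.append((i, j))                           # potential link only, no merge (step 9)
        return classes, links, False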


7 Research Issues

This paper proposes an extension of linguistically-based, multi-cultural database name matching functionality to the extraction and matching of names from full text documents. To accomplish such an extension implies an effective integration of database and document retrieval technology. While this has been an on-going topic in academic research [35, 36] and has received attention from major relational database vendors such as Oracle, Sybase, and IBM, effective integration has not yet been achieved, in particular in the area of intelligence and security informatics [37]. Achieving the sophistication of database record matching for names extracted from free text implies advances in text mining [38, 39, 40, 41]. One useful structure for supporting cross-document name matching would be an authority file for named entities. Library catalogs maintain authority files which have a record for each author, showing variant names, pseudonyms, and so on. An authority file for named entity extraction could be built which would maintain a record for each entity. The record could start with information about the entity extracted from database records. When the named entity was found in free text, contextual information about the entity could be extracted and stored in the authority file with an appropriate degree of probability in the accuracy of the information included. For example, a name followed by a comma-delimited parenthetical expression is a reasonably accurate source of contextual information about an entity, e.g., "X, president of Y, resigned yesterday". A further application of linguistic/cultural classification of names could be to track interactions between groups of people where there is a strong association between group membership and language. For example, an increasing number of police reports in which both Korean and Cambodian names are found in the same documents might indicate a pattern in Asian crime ring interactions. Finally, automatic recognition of name gender could be used to support the process of pronominal coreference. Work is underway to provide a quantitative comparison of key-based name matching systems (such as Soundex) with other approaches to name matching. One of the hindrances to effective name matching system comparisons is the lack of generally accepted standards for what constitutes similarity between names. Such standards are difficult to establish in part because the definition of similarity changes from one user community to the next. A standardized metric for the evaluation of degrees of correlation of name search results, and a means for using this metric to measure the usefulness of different name search technologies, are sorely needed. This paper has focused on personal name matching. Matching of other named entities, such as organizations, is also of interest for intelligence and security informatics. While different matching algorithms are needed, extending company name matching, or other entity matching, to free text will also be useful. One promising research direction integrating database, information extraction, and document retrieval that could support effective text mining of names is provided by work on XIRQL [7].


8 Conclusion

Effective tools exist for multi-cultural database name matching and this technology is becoming available in analytic tool kits supporting intelligence and security informatics. The proportion of data of interest to intelligence and security analysts that is contained in databases, however, is very small compared to the amount of data available in free text and audio formats. The extension of name extraction and matching to free text and audio will add important text mining functionality for intelligence and security informatics toolkits.

References

1. Taft, R.L.: Name Search Techniques. Special Rep. No. 1. Bureau of Systems Development, New York State Identification and Intelligence System, Albany (1970)
2. Verton, D.: Technology Aids Hunt for Terrorists. Computer World, 9 September (2002)
3. Borgman, C.L., Siegfried, S.L.: Getty's Synoname and Its Cousins: A Survey of Applications of Personal Name-Matching Algorithms. Journal of the American Society for Information Science, Vol. 43 No. 7 (1992) 459–476
4. Grishman, R., Sundheim, B.: Message Understanding Conference – 6: A Brief History. In: Proceedings of the 16th International Conference on Computational Linguistics. Copenhagen (1999)
5. DARPA: Tipster Text Program Phase III Proceedings. Morgan Kaufmann, San Francisco (1999)
6. National Institute of Standards and Technology: ACE – Automatic Content Extraction. Information Technology Laboratories. http://www.itl.nist.gov/iad/894.01/tests/ace/index.htm (2000)
7. Fuhr, N.: XML Information Retrieval and Extraction [to appear]
8. Hermansen, J.C.: Automatic Name Searching in Large Databases of International Names. Georgetown University Dissertation, Washington, DC (1985)
9. Holmes, D., McCabe, M.C.: Improving Precision and Recall for Soundex Retrieval. In: Proceedings of the 2002 IEEE International Conference on Information Technology – Coding and Computing. Las Vegas (2002)
10. Navarro, G., Baeza-Yates, R., Azevedo Arcoverde, J.M.: Matchsimile: A Flexible Approximate Matching Tool for Searching Proper Names. Journal of the American Society for Information Science and Technology, Vol. 54 No. 1 (2003) 3–15
11. Patman, F., Shaefer, L.: Is Soundex Good Enough for You? On the Hidden Risks of Soundex-Based Name Searching. Language Analysis Systems, Inc., Herndon (2001)
12. Lutz, R., Greene, S.: Measuring Phonological Similarity: The Case of Personal Names. Language Analysis Systems, Inc., Herndon (2002)
13. Bikel, D.M., Schwartz, R., Weischedel, R.M.: An Algorithm that Learns What's in a Name. Machine Learning, Vol. 34 No. 1-3 (1999) 211–231
14. Borthwick, A., Sterling, J., Agichtein, E., Grishman, R.: NYU: Description of the MENE Named Entity System as Used in MUC-7. In: Proceedings of the Seventh Message Understanding Conference. Fairfax (1998)
15. Baluja, S., Mittal, V.O., Sukthankar, R.: Applying Machine Learning for High Performance Named-Entity Extraction. Pacific Association for Computational Linguistics (1999)
16. Collins, M.: Ranking Algorithms for Named-Entity Extraction: Boosting and the Voted Perceptron. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia (2002) 489–496


17. Zelenko, D., Aone, C., Richardella, A.: Kernel Methods for Relation Extraction. Journal of Machine Learning Research [to appear]
18. Soon, W.M., Ng, H.T., Lim, D.C.Y.: A Machine Learning Approach to Coreference Resolution of Noun Phrases. Association for Computational Linguistics (2001)
19. Bontcheva, K., Dimitrov, M., Maynard, D., Tablan, V., Cunningham, H.: Shallow Methods for Named Entity Coreference Resolution. TALN (2002)
20. Hartrumpf, S.: Coreference Resolution with Syntactico-Semantic Rules and Corpus Statistics. In: Proceedings of CoNLL-2001. Toulouse (2001) 137–144
21. Ng, V., Cardie, C.: Improving Machine Learning Approaches to Coreference Resolution. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia (2002) 104–111
22. McCarthy, J.F., Lehnert, W.G.: Using Decision Trees for Coreference Resolution. In: Mellish, C. (ed.): Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence (1995) 1050–1055
23. Bagga, A., Baldwin, B.: Entity-Based Cross-Document Coreferencing Using the Vector Space Model. In: Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (1998) 79–85
24. Ravin, Y., Kazi, Z.: Is Hillary Rodham Clinton the President? Disambiguating Names Across Documents. In: Proceedings of the ACL'99 Workshop on Coreference and Its Applications (1999)
25. Schiffman, B., Mani, I., Concepcion, K.J.: Producing Biographical Summaries: Combining Linguistic Knowledge with Corpus Statistics. In: Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (2001) 450–457
26. Bagga, A.: Evaluation of Coreferences and Coreference Resolution Systems. In: Proceedings of the First International Conference on Language Resources and Evaluation (1998) 563–566
27. Inxight: A Research Engine for the Pharmaceutical Industry. http://www.inxight.com
28. Hetzler, B., Harris, W.M., Havre, S., Whitney, P.: Visualizing the Full Spectrum of Document Relationships. In: Structures and Relations in Knowledge Organization. Proceedings of the 5th International ISKO Conference. ERGON Verlag, Wurzburg (1998) 168–175
29. Chen, H., Zeng, D., Atabakhsh, H., Wyzga, W., Schroeder, J.: COPLINK: Managing Law Enforcement Data and Knowledge. Communications of the ACM, Vol. 46 No. 1 (2003)
30. InfoGlide Software: Similarity Search Engine: The Power of Similarity Searching. http://www.infoglide.com/content/images/whitepapers.pdf (2002)
31. American Association for Artificial Intelligence Fall Symposium on Artificial Intelligence and Link Analysis (1998)
32. i2: Analyst's Notebook. http://www.i2.co.uk/Products/Analysts_Notebook (2002)
33. Winkler, W.E.: The State of Record Linkage and Current Research Problems. Technical Report RR99/04. U.S. Census Bureau. http://www.census.gov/srd/papers/pdf/rr99-04.pdf
34. Wang, G., Chen, H., Atabakhsh, H.: Automatically Detecting Deceptive Criminal Identities [to appear]
35. Fuhr, N.: Probabilistic Datalog – A Logic for Powerful Retrieval Methods. In: Proceedings of SIGIR-95, 18th ACM International Conference on Research and Development in Information Retrieval (1995) 282–290
36. Fuhr, N.: Models for Integrated Information Retrieval and Database Systems. IEEE Data Engineering Bulletin, Vol. 19 No. 1 (1996)
37. Hoogeveen, M., van der Meer, K.: Integration of Information Retrieval and Database Management in Support of Multimedia Police Work. Journal of Information Science, Vol. 20 No. 2 (1994)
38. Institute for Mathematics and Its Applications: IMA Hot Topics Workshop: Text Mining. http://www.ima.umn.edu/reactive/spring/tm.html (2000)


39. KDD-2000 Workshop on Text Mining. The Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Boston (2000) http://www2.cs.cmu.edu/~dunja/WshKDD2000.html
40. SIAM Text Mining Workshop. http://www.cs.utk.edu/tmw02 (2002)
41. Text-ML 2002 Workshop on Text Learning. The Nineteenth International Conference on Machine Learning ICML-2002. Sydney (2002)

Appendix: Comparison of LAS MetaMatch™ Search Engine Returns with SQL-Soundex Returns

Fig. 1. These searches were conducted in databases containing common surnames found in the 1990 U.S. Census data. The surnames in the databases are identical. The MetaMatch database differs only in that the phonetic form of each surname is also stored. The exact match "Sadiq" was 54th in the list of Soundex returns. "Siddiqui" was returned by Soundex in 26th place. "Sadik" was 109th.
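For readers unfamiliar with the baseline being compared against, the following Python sketch implements textbook Soundex (slightly simplified; it is not the MetaMatch algorithm, and the LAS engine's stored phonetic forms are something else entirely). It shows why "Sadiq," "Sadik," and "Siddiqui" all fall into the same undifferentiated bucket, S320, leaving the ordering of returns within that bucket essentially arbitrary.

    def soundex(name: str) -> str:
        # Textbook Soundex, simplified: first letter kept, consonants mapped to digits,
        # vowels dropped, adjacent duplicate codes collapsed, padded/truncated to 4 chars.
        codes = {c: str(d) for d, group in enumerate(
            ["bfpv", "cgjkqsxz", "dt", "l", "mn", "r"], start=1) for c in group}
        name = name.lower()
        out, prev = name[0].upper(), codes.get(name[0])
        for c in name[1:]:
            d = codes.get(c)
            if d and d != prev:
                out += d
            if c not in "hw":            # h and w do not reset the duplicate check
                prev = d
        return (out + "000")[:4]

    for n in ["Sadiq", "Sadik", "Siddiqui"]:
        print(n, soundex(n))             # all three print S320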

Web-Based Intelligence Reports System

Alexander Dolotov and Mary Strickler

Phoenix Police Department, 620 W. Washington Street, Phoenix, Arizona 85003
{alex.dolotov, mary.strickler}@phoenix.gov

Abstract. Two areas for discussion will be included in this paper. The first area targets a conceptual design of a Group Detection and Activity Prediction System (GDAPS). The second area describes the implementation of the WEB-based intelligence and monitoring reports system called the Phoenix Police Department Reports (PPDR). The PPDR System could be considered the first phase of a GDAPS System. The already operational PPDR system's goal is to support data access to heterogeneous databases, provide a means to mine data using search engines, and to provide statistical data analysis with reporting capabilities. A variety of static and ad hoc statistical reports are produced with the use of this system for interdepartmental and public use. The system is scalable, reliable, portable and secure. Performance is supported on all system levels using a variety of effective software designs, statistical processing and heterogeneous databases/data storage access.

1 System Concept

The key to the effectiveness of a law enforcement agency and all of its divisions is directly related to its ability to make informed decisions for crime prevention. In order to prevent criminal activity, a powerful multilevel analysis tool based on a mathematical model could be used to develop a system that would target criminals and/or criminal groups and their activities. Alexander Dolotov and Ziny Flikop have proposed an innovative conceptual design of such a system. The fundamental idea of this project involves information collection and access to the information blocks reflecting the activities of an individual or groups of individuals. This data could then be used to create a system that would be capable of maintaining links and relationships for both groups and individuals in order to predict any abnormal activity. Such a system might be referred to as the "Group Detection and Activity Prediction System" (GDAPS). The design of the GDAPS would include maintaining all individuals' and groups' activity trends simultaneously in real-time mode. This system would be "self-educating," meaning it would become more precise over a longer time period and as more information is gathered. The design would be based on the principles of statistical modeling, fuzzy logic, open-loop control and many-dimensional optimizations. In addition, the latest software design technologies would be applied. The ultimate goal of this system would be to produce notifications alerting users of any predicted abnormal behavior. The initial plan for the design of GDAPS would be to break the system into three subsystems. The subsystems would consist of the PPDR system renamed to the "Data Maintaining and Reporting Subsystem" (DmRs), the "Group Detection Subsystem" (GTeS), and the "Activity Prediction Subsystem" (APreS).


The first phase of the GDAPS System would be the PPDR System, which is currently operational within the Phoenix Police Department. This system is explained in detail in the remaining sections of this document. The DmRs subsystem (renamed from PPDR) supports access to heterogeneous databases using data mining search engines to perform statistical data analysis. Ultimately, the results are generated in report form. The GTeS subsystem would be designed to detect members of the targeted group or groups. In order to accomplish this, it would require monitoring communications between individuals using all available means. Intensity and duration of these communications can define relationships inside the group and possibly define the hierarchy of the group members. The GTeS subsystem would have to be adaptive enough to constantly upgrade information related to each controlled group, since every group has a life of its own. GTeS would provide the basic foundation for GDAPS. The purpose of the APreS subsystem is to monitor, in time, the intensity and modes of multiple groups' communications by maintaining a database of all types of communications. The value of this subsystem would be the ability to predict groups' activities based upon the historical correlation between abnormalities in the groups' communication modes and intensities, along with any previous activities. APreS is the dynamic subsystem of GDAPS. To accelerate the GDAPS development, methodologies already created for other industries can be modified for use [1], [2], [3], [7]. Because of the complexity of the GDAPS system, a multi-phase approach to system development should be considered. Taking into account time and resources, this project can be broken down into manageable sub-projects with realistic development and implementation goals. The use of a multi-dimensional mathematical model will enable developers to assign values to different components, and to determine relationships between them. By using specific criteria, these values can be manipulated to determine the outcome under varying circumstances. The mathematical model, when optimized, will produce results that could be interpreted as "a high potential for criminal activity". The multi-dimensional mathematical model is a powerful "forecasting" tool. It provides the ability to make decisions before a critical situation or uncertain conditions arise [4], [5], [6], [8]. Lastly, accumulated information must be stored in a database that is supported/serviced by a specific set of business applications. The following is a description of the PPDR system, the first phase of the Group Detection and Activity Prediction System (GDAPS).

2 Objectives

A WEB-based intelligence and monitoring reports system called Phoenix Police Department Reports (PPDR) was designed in-house for use by the Phoenix Police Department (PPD). Even though this system was designed specifically for the Phoenix Police Department, it could easily be ported for use by other law enforcement agencies. Within seconds, this system provides detailed, comprehensive, and informative statistical reports reflecting the effectiveness and responsiveness of any division, for any date/time period, within the Phoenix Police Department. These reports are designed for use by all levels of management, both sworn and civilian, from police chiefs' requests to public record requests.


The statistical data from these reports provides information for use in making departmental decisions concerning such issues as manpower allocation, restructuring and measurement of work. Additionally, PPDR uses a powerful database mining mechanism, which would be valuable for use in the future development of the GDAPS System. In order to satisfy the needs of all users, the PPDR system is designed to meet the following requirements:
- maintain accurate and precise up-to-date information;
- use a specific mathematical model for statistical analysis and optimization [5], [6];
- perform at a high level with quick response times;
- support different security levels for different categories of users;
- be scalable and expandable;
- have a user-friendly presentation; and
- be easy to maintain, with reliable and optimized databases and other information storage.
The PPDR system went into production in February 2002. This system contains original and effective solutions. It provides the capability to make decisions which will ultimately have an impact on the short- and long-term plans for the department, the level of customer service provided to the public, overall employee satisfaction and organizational changes needed to achieve future goals. The PPDR system could be considered the first phase of a complex Intelligence Group Detection and Activity Prediction System.

3 Relationships to Other Systems and Sources of Information

3.1 Calls for Service

There are two categories of information that are used for the PPDR. They are calls for service data and text messages sent by Mobile Data Terminal (MDT) and Computer Aided Dispatch (CAD) users. Both sources of information are obtained from the Department's Computer Aided Dispatch and Mobile Data Terminal (CAD/MDT) System. The CAD/MDT System is operating on three redundant Hewlett Packard (HP) 3000 N-Series computers. The data is stored in HP's proprietary Image database for six months. Phoenix Police Department's CAD/MDT System handles over 7,000 calls for service daily from citizens of Phoenix. Approximately half of these calls require an officer to respond. The other half are either duplicates or ones where the caller is just asking for general information or wishing to report a non-emergency incident. Calls for Service data is collected when a citizen calls the emergency 911 number or the Department's crime stop number for service. A call entry clerk enters the initial call information into CAD. The address is validated against a street geobase which provides information required for dispatching such as the grid, the beat and the responsible precinct where the call originated. After all information is collected, the call is automatically forwarded to a dispatcher for distribution to an officer or officers in the field. Officers receive the call information on their Mobile Data Terminals (MDT). They enter the time they start on the call, arrive at the scene and the time they complete the call.


Each call for service incident is given a disposition code that relates to how an officer or officers handled the incident. Calls for service data for completed incidents are transferred to a SQL database on a daily basis for use in the PPDR System. Police officers and detectives use calls for service information for investigative purposes. It is often requested by outside agencies for court purposes or by the general public for their personal information. It is also used internally for statistical analysis.

3.2 Messages

The messages are text sent between MDT users, between MDT users and CAD users, and between CAD users. The MDT system uses a Motorola radio system for communications, which interfaces to the CAD system through a programmable interface computer. The CAD system resides on a local area network within the Police Department. The message database also contains the results of inquiries on persons, vehicles, or articles requested by officers in the field from their MDTs or by a CAD user from any CAD workstation within the Department. Each message stored by the CAD system contains structured data, such as the identification of the message sender, date and time sent and the free-form body of the message. Every twenty-four hours, more than 15,000 messages are passed through the CAD System. Copies of messages are requested by detectives, police officers, the general public and court systems, as well as outside law enforcement agencies.

4 PPDR System Architecture

The system architecture of the PPDR system is shown in Figure 1.

5 PPDR Structural WEB Design

PPDR has been designed with seven distinctive subsystems incorporated within one easy-to-access location. The subsystems are as follows: Interdepartmental Reports; Ad Hoc Reports; Public Reports; Messages Presentation; Update Functionality; Administrative Functionality; and System Security. Each subsystem is designed to be flexible as well as scalable. Each subsystem has the capability of being easily expanded or modified to satisfy user enhancement requests.

Fig. 1. PPDR Architecture

5.1 System Security

Security begins when a user logs into the system and is continuously monitored until the user logs off. The PPDR security system is based on the assignment of roles to each user through the Administrative function. Role assignment is maintained across multiple databases. Each database maintains a set of roles for the PPDR system. Each role has the possibility of being assigned to both a database object and a WEB functionality. This results in a user being able to perform only those WEB and database functions that are available to his/her assigned role. When a user logs onto the system, the userid and password are validated against the database security information. Database security does not use custom tables but rather database tables that contain encrypted roles, passwords, userids and logins. After a successful login, the role assignment is maintained at the WEB level in a secure state and remains intact during the user's session.


The PPDR System has two groups of users: those that use the Computer Aided Dispatch System (CAD) and those that do not. Since most of the PPDR users are CAD users, it makes sense to keep the same userids and passwords for both CAD and PPDR. Using a scheduled Data Transfer System (DTS) process, CAD userids and passwords are relayed to the PPDR system on a daily basis, automatically initiating a database upgrade process in PPDR. The non-CAD users are entered into the PPDR system through the Administrative (ADMIN) function. This process does not involve the DTS transfer, but is performed in real time by a designated person or persons with ADMIN privileges. Security for non-CAD users is identical to that of CAD users, including transaction logging that captures each WEB page access. In addition to transaction logging, another useful security feature is the storage of user history information on a database level. Anyone with ADMIN privileges can produce user statistics and historical reports upon request.

5.2 Regular Reports

In general, Regular Reports are reports that have a predefined structure, based on input parameters entered by the user. In order to obtain the best performance and accuracy for these reports, the following technology has been applied: a special design of multiple databases which includes "summary" tables (see Section 6, Database Solutions); the use of cross-table reporting functionality which allows for creating a cross-table recordset on a database level; and the use of a generic XML stream with XSLT processing on the client side instead of the use of ActiveX controls for the creation of reports. Three groups of Regular Reports are available within the PPDR system. The three groups are Response Time Reports, Calls for Service Reports and Details Reports.

Response Time Reports. Response Time Reports present statistical information regarding the average response time for calls for service data obtained from the CAD System. Response time is the period between the time an officer was dispatched on a call for service and the time the officer actually arrived on the scene. Response time reports can be produced on several levels, including but not limited to beat, squad, precinct and even citywide level. Using input parameters such as date, time, shift, and squad area, a semi-custom report is produced within seconds. Below is an example of the "Average Quarterly Response Time By Precinct" report for the first quarter of 2002. This report calculates the average quarterly response time for each police precinct based on the priorities assigned to the calls for service. The right-most column (PPD) is the citywide average, again broken down by priority.

Calls for Service Reports. Calls for Service Reports are used to document the number of calls for service in a particular beat, squad, precinct area or citywide. These reports have many of the same parameters as the Response Time Reports. Some reports in this group are combination reports, displaying both the counts for calls for service and the average response time.


Fig. 2. Response Time Reports

Below is an example of a "Monthly Calls for Service by Squad" report for the month of January 2002. This report shows a count of the calls for service for each squad area in the South Mountain precinct, broken down by calls that are dispatched and calls that are handled by a phone call made by the Callback Unit.
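As an illustration of the aggregation behind a report such as "Average Quarterly Response Time By Precinct," the short Python sketch below groups call records by precinct and priority and averages dispatch-to-arrival times. The field names are hypothetical; the PPDR system itself performs this work in the database layer with summary tables rather than in application code.

    from collections import defaultdict

    def average_response_times(calls):
        # calls: iterable of dicts with assumed keys 'precinct', 'priority',
        # 'dispatch_time' and 'arrival_time' (times in minutes since midnight).
        totals = defaultdict(lambda: [0.0, 0])                # (precinct, priority) -> [sum, count]
        for c in calls:
            key = (c["precinct"], c["priority"])
            totals[key][0] += c["arrival_time"] - c["dispatch_time"]
            totals[key][1] += 1
        return {key: s / n for key, (s, n) in totals.items()}  # average per report cell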

Fig. 3. Calls For Service Report

Details Reports. These reports are designed to present important details for a particular call for service. Details for a call for service include such information as call location, disposition code (action taken by officer), radio code (type of call for service - burglary, theft, etc.), received time and responding officer(s).


From a Detail Report, other pertinent information related to a call for service is obtained quickly with a click of the mouse. Other available information includes unit history information. Unit history information is a collection of data for all the units that responded to a particular call for service, such as the time the unit was dispatched, the time the unit arrived and what people or plates were checked.

5.3 AD HOC Reports

The AD HOC Reports subsystem provides the ability to produce "custom" reports from the calls for service data. To generate an AD HOC report, a user should have basic knowledge of SQL queries using search criteria as well as basic knowledge of the calls for service data. There are three major steps involved in producing an AD HOC report:
- Selecting report columns
- Selecting search criteria
- Report generation

Selecting report columns and selecting search criteria use an active dialog. The report generation uses XML/XSLT processing.

Selecting Report Columns. The first page that is presented when entering the AD HOC Reports subsystem allows the user to choose, from a list of tables, the fields that are to be displayed in the desired report. OLAP functionality is used for accessing a database's schema, such as available tables and their characteristics, column names, formats, aliases and data types. The first presented page of the AD HOC Reports is displayed below. A selection can be made for any required field by checking the left check box. Other options such as original value (Orig), count, average (Averg), minimum (Minim), and maximum (Maxim) are also available to the user. Count, average, minimum and maximum are only available for numeric fields. As an example, if a user requires a count of the number of calls for service, a check is required in the Count field. When the boxes are checked, the SELECT clause is generated by a DHTML script. For instance, if the selected fields for an AD HOC report are 'Incident Number', 'Date', 'Address' and 'Average of the Response Time' (all members of the Incidents table), the following SELECT clause will be generated:

    Date',Incidents.Inc_Location AS 'Address',Incidents.Inc_Time_Rec AS 'Received time',
    Avg(Incidents.Inc_Time_Rec) AS 'Avg Of Received time' FROM Incidents

Syntax checking of the generated SELECT clause is maintained at the business logic level using COM+ objects.
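The sketch below illustrates, in Python rather than the DHTML/COM+ combination the PPDR system actually uses, how such a clause can be assembled from the user's check-box selections. The helper function and the column metadata are assumptions made for this example.

    def build_select(table, selections):
        # selections: list of (column, alias, aggregate) tuples, where aggregate is
        # None for the original value or e.g. 'Avg', 'Count', 'Min', 'Max'.
        parts = []
        for column, alias, aggregate in selections:
            expr = f"{table}.{column}" if aggregate is None else f"{aggregate}({table}.{column})"
            label = alias if aggregate is None else f"{aggregate} Of {alias}"
            parts.append(f"{expr} AS '{label}'")
        return f"SELECT {', '.join(parts)} FROM {table}"

    print(build_select("Incidents", [
        ("Inc_Location", "Address", None),
        ("Inc_Time_Rec", "Received time", None),
        ("Inc_Time_Rec", "Received time", "Avg"),
    ]))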

Selecting Search Criteria. When all desired fields have been selected, click on "Submit" and the following search criteria page is presented:


Fig. 4. Selecting Report Columns

This page will allow the user to build the search criteria necessary for the generation of the desired report. Most available criteria and their combinations are available to the user (i.e., >,

... and edge capacities at certain time points along the route. The detailed pseudocode and algorithm description are as follows.

Algorithm 1 Single-Route Capacity Constrained Planner (SRCCP)
    ... among routes from s to all destinations d ∈ D,
        (where n0 = s and nk = d);                                               (2)
    Sort routes Rs by total travel time, increasing order;                       (3)
    for each route Rs in sorted order do {                                       (4)
        Initialize next start node on route Rs to move: st = 0;                  (5)
        while not all evacuees from n0 reached nk do {                           (6)
            t = next available time to start move from node n_st;                (7)
            n_end = furthest node that can be reached from n_st without stopping; (8)
            flow = min( number of evacuees at node n_st,
                        Available_Edge_Capacity(all edges between n_st and n_end on Rs),
                        Available_Node_Capacity(all nodes from n_st+1 to n_end on Rs) );  (9)
            for i = st to end - 1 do {                                           (10)
                t' = t + Travel_time(e(n_i, n_i+1));                             (11)
                Available_Edge_Capacity(e(n_i, n_i+1), t) reduced by flow;       (12)
                Available_Node_Capacity(n_i+1, t') reduced by flow;              (13)
                t = t';                                                          (14)
            }                                                                    (15)
            st = closest node to destination on route Rs with evacuees;          (16)
        }                                                                        (17)
    }                                                                            (18)
    Postprocess results and output evacuation plan;                              (19)

In the first step (lines 1-2), for each source node s, we find the route Rs with the shortest total travel time among routes between s and all the destination nodes. The total travel time of route Rs is the sum of the travel time of all edges on Rs. For example, in Figure 2, RN1 is N1-N3-N4-N6-N10-N13 with a total travel time of 14 time units. RN2 is N2-N3-N4-N6-N10-N13 with a total travel time of 14 time units. RN8 is N8-N10-N13 with a total travel time of 4 time units. This step is done by a variation of Dijkstra's algorithm [7] in which edge travel time is treated as edge weight and the algorithm terminates when the shortest route from s to one destination node is determined.


Table 2. Result Evacuation Plan of the Single-Route Capacity Constrained Planner

Group of People ID  Origin  No. of People  Start Time  Route                     Exit Time
A                   N8      6              0           N8-N10-N13                4
B                   N8      6              1           N8-N10-N13                5
C                   N8      3              2           N8-N10-N13                6
D                   N1      3              0           N1-N3-N4-N6-N10-N13       14
E                   N1      3              0           N1-N3(W1)-N4-N6-N10-N13   15
F                   N1      1              0           N1-N3(W2)-N4-N6-N10-N13   16
G                   N1      2              1           N1-N3(W1)-N4-N6-N10-N13   16
H                   N1      1              1           N1-N3(W2)-N4-N6-N10-N13   17
I                   N2      2              0           N2-N3(W3)-N4-N6-N10-N13   17
J                   N2      3              0           N2-N3(W4)-N4-N6-N10-N13   18

The second step (line 3) is to sort the routes we obtained from step 1 in increasing order of total travel time. Thus, in our example, the order of the routes will be RN8, RN1, RN2. The third step (lines 4-18) is to reserve capacities for each route in the sorted order. The reservation for route Rs is done by sending all the people initially at node s to the exit along the route in the least amount of time. The people may need to be divided into groups and sent in waves due to the constraints of the capacities of the nodes and edges on Rs. For example, for RN8, the first group of people that starts from N8 at time 0 is at most 6 people because the available edge capacity of N8-N10 at time 0 is 6. The algorithm makes reservations for the 6 people by reducing the available capacity of each node and edge at the time point that they are at each node and edge. This means that available capacities are reduced by 6 for edge N8-N10 at time 0 because the 6 people travel through this edge starting from time 0; for node N10 at time 3 because they arrive at N10 at time 3; and for edge N10-N13 at time 3 because they travel through this edge starting from time 3. They finally arrive at N13 (EXIT1) at time 4. The second group of people leaving N8 has to wait until time 1 since the first group has reserved all the capacity of edge N8-N10 at time 0. Therefore, the second group leaves N8 at time 1 and reaches N13 at time 5. Similarly, the last group of 3 people leaves N8 at time 2 and reaches N13 at time 6. Thus all people from N8 are sent to exit N13. The next two routes, RN1 and RN2, will make their reservations based on the available capacities that the previous routes left. The final step of the algorithm is to output the entire evacuation plan, as shown in Table 2, which takes 18 time units.
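A compact Python sketch of this reservation step is shown below. It is an illustration under assumed data structures (residual capacities kept as lists indexed by time), not the authors' implementation; the example numbers are the ones used in the text for route N8-N10-N13.

    def reserve(route, start_time, flow, travel, edge_cap, node_cap):
        # Walk the route, subtracting `flow` from the residual edge capacity at each
        # departure time and from the residual node capacity at each arrival time.
        t = start_time
        for u, v in zip(route, route[1:]):
            edge_cap[(u, v)][t] -= flow     # the group occupies edge (u, v) from time t
            t += travel[(u, v)]
            node_cap[v][t] -= flow          # ... and arrives at node v at the new time t
        return t                            # time at which the group reaches the route's end

    # With travel times N8-N10 = 3 and N10-N13 = 1, reserving flow 6 from time 0
    # reduces edge N8-N10 at time 0, node N10 and edge N10-N13 at time 3, and
    # returns 4, the group's arrival time at N13 (EXIT1), as in the text.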

3.2 Multiple-Route Capacity Constrained Routing Approach

The Multiple-Route Capacity Constrained Planner (MRCCP) is an iterative approach. In each iteration, the algorithm re-computes the earliest-time route from any source to any destination, taking the previous reservations and possible on-route waiting time into consideration. Then it reserves the capacity for this route in the current iteration. The detailed pseudo-code and algorithm description are as follows.

Algorithm 2 Multiple-Route Capacity Constrained Planner (MRCCP)
Input:
1) G(N, E): a graph G with a set of nodes N and a set of edges E;
   Each node n ∈ N has two properties:
     Maximum_Node_Capacity(n): non-negative integer
     Initial_Node_Occupancy(n): non-negative integer
   Each edge e ∈ E has two properties:
     Maximum_Edge_Capacity(e): non-negative integer
     Travel_time(e): non-negative integer
2) S: set of source nodes, S ⊆ N;
3) D: set of destination nodes, D ⊆ N;
Output: Evacuation plan
Method:
    while any source node s ∈ S has evacuees do {                                    (1)
        find route R = <n0, n1, ..., nk> with earliest destination arrival time
            among routes between all s,d pairs, where s ∈ S, d ∈ D, n0 = s, nk = d;  (2)
        flow = min( number of evacuees still at source node s,
                    Available_Edge_Capacity(all edges on route R),
                    Available_Node_Capacity(all nodes from n1 to nk on route R) );   (3)
        for i = 0 to k - 1 do {                                                      (4)
            t' = t + Travel_time(e(n_i, n_i+1));                                     (5)
            Available_Edge_Capacity(e(n_i, n_i+1), t) reduced by flow;               (6)
            Available_Node_Capacity(n_i+1, t') reduced by flow;                      (7)
            t = t';                                                                  (8)
        }                                                                            (9)
    }                                                                                (10)
    Postprocess results and output evacuation plan;                                  (11)

The MRCCP algorithm keeps iterating as long as there are still evacuees at any source node (line 1). Each iteration starts with finding the route R with the earliest destination arrival time from any source node to any exit node based on the current available capacities (line 2). This is done by generalizing Dijkstra's shortest path algorithm [7] to work with the time series capacities and edge travel times. Route R is the route that reaches an exit in the least amount of time and through which at least one person can be sent to the exit.


Table 3. Result Evacuation Plan of the Multiple-Route Capacity Constrained Planner

Group of People ID  Origin  No. of People  Start Time  Route                 Exit Time
A                   N8      6              0           N8-N10-N13            4
B                   N8      6              1           N8-N10-N13            5
C                   N8      3              0           N8-N10-N14            5
D                   N1      3              0           N1-N3-N4-N6-N10-N13   14
E                   N1      3              1           N1-N3-N4-N6-N10-N13   15
F                   N1      3              0           N1-N3-N5-N7-N11-N14   15
G                   N1      1              2           N1-N3-N4-N6-N10-N13   16
H                   N2      3              1           N2-N3-N5-N7-N11-N14   16
I                   N2      2              2           N2-N3-N5-N7-N11-N14   17

For example, at the very first iteration, R will be N8-N10-N13, which can reach N13 at time 4. The actual number of people that will travel through R is the smallest number among the number of evacuees at the source node and the available capacities of each of the nodes and edges on route R (line 3). Thus, in the example, this amount will be 6, which is the available edge capacity of N8-N10 at time 0. The next step is to reserve capacities for the people on each node and edge of route R (lines 4-9). The algorithm makes reservations for the 6 people by reducing the available capacity of each node and edge at the time point that they are at each node and edge. This means that available capacities are reduced by 6 for edge N8-N10 at time 0, for node N10 at time 3, and for edge N10-N13 at time 3. They finally arrive at N13 (EXIT1) at time 4. Then, the algorithm goes back to line 2 for the next iteration. The iteration terminates when the occupancy of all source nodes is reduced to zero, which means all evacuees have been sent to exits. Line 11 outputs the evacuation plan, as shown in Table 3.
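The route search in line 2 can be sketched as a Dijkstra-style label-setting search over (node, time) states, as below. This is an illustrative Python fragment under assumed data structures (residual capacities as lists indexed by time, and waiting modeled as a unit-time hold at a node); it is not the authors' implementation.

    import heapq

    def earliest_arrival_route(adj, travel, sources, exits, edge_cap, node_cap, horizon):
        # Returns the earliest arrival time and route for one unit of flow, honoring
        # the time-indexed residual capacities left by earlier reservations.
        pq = [(0, s, (s,)) for s in sources]            # (arrival time, node, route so far)
        heapq.heapify(pq)
        seen = set()
        while pq:
            t, u, route = heapq.heappop(pq)
            if u in exits:
                return t, route                          # first exit popped = earliest arrival
            if (u, t) in seen or t >= horizon:
                continue
            seen.add((u, t))
            if node_cap[u][t + 1] > 0:                   # wait one time unit at node u
                heapq.heappush(pq, (t + 1, u, route))
            for v in adj[u]:
                ta = t + travel[(u, v)]
                if ta <= horizon and edge_cap[(u, v)][t] > 0 and node_cap[v][ta] > 0:
                    heapq.heappush(pq, (ta, v, route + (v,)))
        return None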

4 Comparison and Cost Models of the Two Algorithms

It can be seen that the key difference between the two algorithms is that the SRCCP algorithm produces only a single route for each source node, while MRCCP can produce multiple routes for groups of people at each source node. MRCCP can produce an evacuation plan with a shorter evacuation time than SRCCP because of its flexibility in adapting to the available capacities left after previous reservations. However, MRCCP needs to re-compute the earliest-time route in each iteration, which incurs more computational cost than SRCCP. We now provide simple algebraic cost models for the computational cost of the two proposed heuristic algorithms. We assume the total number of nodes in the graph is n, the number of source nodes is n_s, and the number of groups generated in the resulting evacuation plan is n_g.

120

Q. Lu, Y. Huang, and S. Shekhar

The cost of the SRCCP algorithm consists of three parts: the cost of computing the shortest-time route from each source node to any exit node, denoted by C_sp; the cost of sorting all the pre-computed routes by their total travel time, denoted by C_ss; and the cost of reserving capacities along each route for each group of people, denoted by C_sr. The cost model of the SRCCP algorithm is given as follows:

    Cost_SRCCP = C_sp + C_ss + C_sr = O(n_s × n log n) + O(n_s log n_s) + O(n × n_g)    (1)

The MRCCP algorithm is an iterative approach. In each iteration, the route for one group of people is chosen and the capacities along the route are reserved. The total number of iterations is determined by the number of groups generated. In each iteration, the route with the earliest destination arrival time from each source node to any exit node is re-computed with a cost of O(n_s × n log n). Reservation is made for the node and edge capacities along the chosen route with a cost of O(n). The cost model of the MRCCP algorithm is given as follows:

    Cost_MRCCP = O((n_s × n log n + n) × n_g)    (2)

In both cost models, the number of groups generated for the evacuation plan depends on the network configuration, which includes the maximum capacities of nodes and edges, and on the number of people to be evacuated at each source node.

5 Solution Quality and Performance Evaluation

In this section, we present the experiment design, our experiment setup, and the results of our experiments on a building dataset.

5.1 Experiment Design

Figure 3 describes the experimental design used to evaluate the impact of parameters on the algorithms. The purpose is to compare the quality of solution and the computational cost of the two proposed algorithms with those of EVACNET, which produces an optimal solution. First, a test dataset which represents a building layout or road network is chosen or generated. The dataset is an evacuation network characterized by its route capacities and its size (number of nodes and edges). Next, a generator is used to generate the initial state of the evacuation by populating the network with a distribution model to assign people to source nodes. The initial state is converted to EVACNET input format to produce the optimal solution via EVACNET, and converted to node-edge graph format to evaluate the proposed two heuristic algorithms. The solution qualities and algorithm performance are then analyzed in the analysis module.

Fig. 3. Experiment Design

5.2 Experiment Setup and Results

The test dataset we used in the following experiments is the floor map of Elliott Hall, a 6-story building on the University of Minnesota campus. The dataset network consists of 444 nodes (with 5 exit nodes), 475 edges, and a total node capacity of 3783 people. The generator produces initial states by varying the source node ratio and the occupancy ratio from 10% to 100%. The experiment was conducted on a workstation with an Intel Pentium III 1.2GHz CPU, 256MB RAM and the Windows 2000 Professional operating system. The initial state generator distributes P_n people to S_n randomly chosen source nodes. The source node ratio is defined as S_n / (total number of nodes) and the occupancy ratio is defined as P_n / (total capacity of all nodes). We want to answer two questions: (1) How does the people distribution affect the performance and solution quality of the algorithms? (2) Are the algorithms scalable with respect to the number of people to be evacuated?

Experiment 1: Effect of People Distribution. The purpose of the first experiment is to evaluate how the people distribution affects the quality of the solution and the performance of the algorithms. We fixed the occupancy ratio and varied the source node ratio to observe the quality of the solution and the running time of the two proposed algorithms and EVACNET. The experiment was done with the occupancy ratio fixed at values from 10% to 100% of total capacity. Here we present the experiment results with the occupancy ratio fixed at 30% and the source node ratio varying from 30% to 100%, which shows a typical result over all test cases. Figure 4 shows the total evacuation time given by the three algorithms and Figure 5 shows their running times. As seen in Figure 4, at each source node ratio, MRCCP produces a solution with a total evacuation time that is within 10% longer than the optimal solution produced by EVACNET. The quality of solution of MRCCP is not affected by the distribution of people when the total number of people is fixed. For SRCCP, the solution is 59% longer than the EVACNET optimal solution when the source node ratio is 30% and drops to 29% longer when the source node ratio increases to 100%. This shows that the solution quality of SRCCP increases when the source node ratio increases. In Figure 5, we can see that the running time of EVACNET grows much faster than the running time of SRCCP and MRCCP when the source node ratio increases.
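A minimal Python sketch of such an initial state generator is shown below. It makes the simplifying assumptions that evacuees are assigned to the chosen source nodes uniformly at random and that per-node capacity limits are not enforced, since the paper does not specify the distribution model in detail.

    import random

    def generate_initial_state(nodes, node_cap, source_ratio, occupancy_ratio, seed=0):
        # nodes: list of node ids; node_cap: dict node -> maximum node capacity.
        rng = random.Random(seed)
        s_n = max(1, round(source_ratio * len(nodes)))          # number of source nodes
        p_n = round(occupancy_ratio * sum(node_cap.values()))   # number of evacuees
        sources = rng.sample(nodes, s_n)
        occupancy = {n: 0 for n in nodes}
        for _ in range(p_n):
            occupancy[rng.choice(sources)] += 1
        return occupancy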


Fig. 4. Quality of Solution With Respect to Source Node Ratio

Fig. 5. Running Time With Respect to Source Node Ratio

This experiment shows: (1) SRCCP produces a solution closer to the optimal solution when the source node ratio is higher. (2) MRCCP produces a close-to-optimal solution (less than 10% longer than optimal) in less than half the running time of EVACNET. (3) The distribution of people does not affect the performance of the two proposed algorithms when the total number of people is fixed.

Experiment 2: Scalability with Respect to Occupancy Ratio. In this experiment, we evaluated the performance of the algorithms when the source node ratio is fixed and the occupancy ratio is increasing. Figure 6 and Figure 7 show the total evacuation time and the running time of the three algorithms when the source node ratio is fixed at 70% and the occupancy ratio varies from 10% to 70%, which is a typical case among all test cases. As seen in Figure 6, compared with the optimal solution by EVACNET, the solution quality of SRCCP decreases when the occupancy ratio increases, while the solution quality of MRCCP still remains within 10% longer than the optimal solution. In Figure 7, the running time of EVACNET grows significantly as the occupancy ratio grows, while the running time of MRCCP remains less than half that of EVACNET and grows only linearly.


Fig. 6. Quality of Solution With Respect to Occupancy Ratio

Fig. 7. Running Time With Respect to Occupancy Ratio

This experiment shows: (1) The solution quality of SRCCP goes down when the total number of people increases. (2) MRCCP is scalable with respect to the number of people.

6 Conclusion and Future Work

In this paper, we proposed and evaluated two heuristic algorithms that take a capacity constrained routing approach. Cost models and experimental evaluations using a real building dataset are presented. The proposed SRCCP algorithm can produce a plan instantly, but the quality of its solution suffers when the number of evacuees grows. The MRCCP algorithm produces a solution within 10% of the optimal solution, while its running time is scalable with the number of evacuees and is reduced to half that of the optimal algorithm. Both algorithms are scalable with respect to the number of evacuees. Currently, we choose the shortest-travel-time route without considering the available capacity of the route. In many cases, a longer route with larger available capacity may be a better choice. In our future work, we would like to explore heuristics with a route ranking method based on weighted available capacity and travelling time when choosing the best routes.


We also want to extend and apply our approach to vehicle evacuation in transportation road networks. Modelling vehicle traffic during evacuation is a more complicated job than modelling pedestrian movements in building evacuation because modelling vehicle traffic at intersections and the cost of taking turns are challenging tasks. Current vehicle traffic simulation tools, such as DYNASMART [14] and DynaMIT [2], use an assignment-simulation method to simulate the traffic based on origin-destination routes. We plan to extend our approach to work with such traffic simulation tools to address vehicle evacuation problems.

Acknowledgment. We are particularly grateful to Spatial Database Group members for their helpful comments and valuable discussions. We would also like to express our thanks to Kim Koffolt for improving the readability of this paper. This work is supported by the Army High Performance Computing Research Center (AHPCRC) under the auspices of the Department of the Army, Army Research Laboratory under contract number DAAD19-01-2-0014. The content does not necessarily reflect the position or policy of the government and no official endorsement should be inferred. AHPCRC and the Minnesota Supercomputer Institute provided access to computing facilities.

References 1. Hurricane Evacuation web page. http://i49south.com/hurricane.htm, 2002. 2. M. Ben-Akiva et al. Deveopment of Dynamic Traffic Assignment System for Planning Purposes: DynaMIT User’s Guide. ITS Program, MIT, 2002. 3. S. Browon. Building America’s Anti-Terror Machine: How Infotech Can Combat Homeland Insecurity. Fortune, pages 99–104, July 2002. 4. The Volpe National Transportation Systems Center. Improving Regional Transportation Planning for Catastrophic Events(FHWA). Volpe Center Highlights, pages 1–3, July/August 2002. 5. L. Chalmet, R. Francis, and P. Saunders. Network Model for Building Evacuation. Management Science, 28:86–105, 1982. 6. C. Corman, T. Leiserson and R. Rivest. Introduction to Algorithms. MIT Press, 1990. 7. E.W. Dijkstra. A Note on Two Problems in Connexion with Graphs. Numerische Mathematik, 1:269–271, 1959. 8. ESRI. GIS for Homeland Security, An ESRI white paper. http://www.esri.com/library/whitepapers/pdfs/homeland security wp.pdf, November 2001. 9. R. Francis and L. Chalmet. A Negative Exponential Solution To An Evacuation Problem. Research Report No.84-86, National Bureau of Standards, Center for Fire Research, October 1984. 10. B. Hoppe and E. Tardos. Polynomial Time Algorithms For Some Evacuation Problems. Proceedings of the 5th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 433–441, 1994.



11. B. Hoppe and E. Tardos. The Quickest Transshipment Problem. Proceedings of the 6th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 512–521, January 1995.
12. T. Kisko and R. Francis. EVACNET+: A Computer Program to Determine Optimal Building Evacuation Plans. Fire Safety Journal, 9:211–222, 1985.
13. T. Kisko, R. Francis, and C. Nobel. EVACNET4 User's Guide. University of Florida, http://www.ise.ufl.edu/kisko/files/evacnet/, 1998.
14. H.S. Mahmassani et al. Development and Testing of Dynamic Traffic Assignment and Simulation Procedures for ATIS/ATMS Applications. Technical Report DTFH61-90-R-00074-FG, CTR, University of Texas at Austin, 1994.

Locating Hidden Groups in Communication Networks Using Hidden Markov Models

Malik Magdon-Ismail¹, Mark Goldberg¹, William Wallace², and David Siebecker¹

¹ CS Department, RPI, Rm 207 Lally, 110 8th Street, Troy, NY 12180, USA. {magdon,goldberg,siebed}@cs.rpi.edu
² DSES Department, RPI, 110 8th Street, Troy, NY 12180, USA. [email protected]

Abstract. A communication network is a collection of social groups that communicate via an underlying communication medium (for example, newsgroups over the Internet). In such a network, a hidden group may try to camouflage its communications amongst the typical communications of the network. We study the task of detecting such hidden groups given only the history of the communications for the entire communication network. We develop a probabilistic approach using a Hidden Markov model of the communication network. Our approach does not require the use of any semantic information regarding the communications. We present the general probabilistic model, and show the results of applying this framework to a simplified society. For 50 time steps of communication data, we can obtain greater than 90% accuracy in detecting both whether or not there is a hidden group, and who the hidden group members are.

1 Introduction

The tragic events of September 11, 2001 underline the need for a tool capable of detecting groups that hide their existence and functionality within a large and complicated communication network such as the Internet. In this paper, we present an approach to identifying such groups. Our approach does not require the use of any semantic information pertaining to the communications. This is preferable because communication within a hidden group is usually encrypted in some way, so the semantic information will be misleading or unavailable. Social science literature has developed a number of theories regarding how social groups evolve and communicate [1,2,3]. For example, individuals have a higher tendency to communicate if they are members of the same group, in accordance with homophily theory. Given some of the basic laws of how social groups evolve and communicate, one can construct a model of how the communications within the society should evolve, given the (assumed) group structure.



If the group structure does not adequately explain the observed communications, but the addition of an extra, hidden group does explain them, then we have grounds to believe that there is a hidden group attempting to camouflage its communications within the existing communication network. The task is to determine whether such a group exists, and to identify its members. We use a maximum likelihood approach to solving this task. Our approach is to model the evolution of a communication network using a Hidden Markov Model. A Hidden Markov model is appropriate when an observed process (in our case the macroscopic communication structure) is naturally driven by an unobserved, or hidden, Markov process (in our case the microscopic group evolution). Hidden Markov models have been used extensively in such diverse areas as speech recognition [4,5]; inferring the language of simple grammars [6]; computer vision [7]; time series analysis [8]; and biological sequence analysis and protein structure prediction [9,10,11,12,13]. Our interpretation of the group evolution giving rise to the observed macroscopic communications evolution makes it natural to model the evolution of communication networks using a Hidden Markov model as well. Details about the general theory of Hidden Markov models can be found in [4,14,15].

In social network analysis there are many static models of, and static metrics for, the measurement and evaluation of social networks [16]. These models range from graph structures to large simulations of agent behavior. They have been used to discover a wide array of important communication and sociological phenomena, from the small world principle [17] to communication theories such as homophily and contagion [1]. These models, as good as they are, are not sufficient to study the evolution of social groups and the communication networks that they use; most focus on the study of the evolution of the network itself. Few attempt to explain how the use of the network shapes its evolution [18], and few can be used to predict the future of the network and communication behavior over that network. Though there is an abundance of simulation work in the field of computational analysis of social and organizational systems [2,19,3] that attempts to develop dynamic models for social networks, none has employed the proposed approach and few incorporate sound probability theory or statistics [20] as the underlying model.

The outline of the paper is as follows. First we consider a simplified example, followed by a description of the general framework. We then present some results to illustrate proof of concept on an example, and we end with some concluding remarks.

1.1 Example

A simple, concrete example will help to convey the details of our method; a more detailed formulation will follow. Consider newsgroups, for example alt.revisionism or alt.movies. A posting to a newsgroup in reply to a previous posting is a communication between two parties. Now imagine the existence of a hidden group that attempts to hide its communications, as illustrated in the figure below. Figure 1(a) shows the group structure. There are 4 observed groups; a fifth, hidden group also exists, whose members are unshaded. We do not observe the actual group composition, but rather the communications (who is posting and replying to posts in a given newsgroup).

Fig. 1. Illustration of a society: (a) the group structure, (b) communications between members of the same group, (c) a broadcast within the hidden group.

Fig. 2. Communication time series of two societies: the communication graphs at t = 1, ..., 5 for a society with one hidden group and for a society with no hidden groups.

This is illustrated in Figure 1(b), where all the communications are between members of the same group. Figure 1(c) illustrates the situation when the hidden group members need to broadcast some information among themselves. The hidden group member who initiates the broadcast (say X) communicates with all the other hidden group members who are in the same visible groups as X. The message is then passed on in a similar manner until all the hidden members have received the broadcast. Notice that no communication needs to occur between members who are not in the same group, yet a message can be broadcast across the whole hidden group. In order to maintain the appearance of being a bona fide member of a particular newsgroup, a hidden node will participate in the "normal" communications of that group as well. Only occasionally will a message need to be broadcast through the hidden group, resulting in a communication graph as in Figure 1(c). The matter is complicated by the fact that the communications in Figure 1(c) will be overlaid onto the normal group communications of Figure 1(b).



What we observe is a time series of node-to-node communications, as illustrated in Figure 2, which shows the evolving communications of two hypothetical communities. The individuals are represented by nodes in the graph. An edge between two nodes represents communication during that time period, and the thickness of the edge indicates the intensity of the communications. The dotted lines indicate communications between the hidden group members. The task is to take the communication history of the community (for example, the one above) and to determine whether or not there exists a hidden group functioning within this community, and to identify its members. It would also be useful to identify which members belong to which groups. The hidden community may or may not be functioning as an aberrant group trying to camouflage its communications. In the above example the hidden community is trying to camouflage its broadcasts; however, the hidden group could just as well be a new group that has suddenly arisen, and we would like to discover its existence. We assume that we know the number of observed groups (for example, the newsgroups are known), and that we have a model of how the society evolves. We do not know who belongs to which newsgroup, and all communications are aggregated into the communication graph for a given time period. We will develop a framework to determine the presence of a hidden group that does not rely on any semantic information regarding the communications. The motivation for this approach is that even if the semantics are available (which is not likely), the hidden communications will usually be encrypted and designed to mimic the regular communications anyway.
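To make the broadcast mechanism of the example concrete, the following sketch (our own illustration, not code from the paper; the membership matrix F, the hidden-member set, and the function name are assumptions) checks whether a message starting at one hidden member can reach every other hidden member using only edges between members who share a visible group.

    # Minimal sketch: broadcast through a hidden group using only within-group edges.
    from collections import deque
    import numpy as np

    def broadcast_reach(F, hidden, start):
        """F: binary (n x Ng) membership matrix for the visible groups.
        hidden: set of node indices forming the hidden group.
        start: hidden node that initiates the broadcast.
        Returns the set of hidden members reached when messages are only
        passed between hidden members who share a visible group."""
        F = np.asarray(F)
        shared = F @ F.T              # shared[i, j] = number of common visible groups
        reached = {start}
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for j in hidden:
                if j not in reached and shared[i, j] > 0:
                    reached.add(j)    # pass the message along a within-group edge
                    queue.append(j)
        return reached

    # Hypothetical example: 6 nodes, 2 visible groups; nodes 0, 2, 4 are hidden members.
    F = np.array([[1, 0], [1, 0], [1, 1], [0, 1], [0, 1], [0, 1]])
    print(broadcast_reach(F, hidden={0, 2, 4}, start=0))   # {0, 2, 4}

In this toy case nodes 0 and 4 share no visible group, yet the broadcast reaches both via node 2, mirroring the observation that no cross-group communication is needed for a message to cover the whole hidden group.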

2 Probabilistic Setup

We will illustrate our general methodology by first developing the solution of the simplified example discussed above. The general case is similar, with only minor technical differences. The first step is to build a model for how individuals move from group to group. More specifically, let N_g be the number of observed groups in the society, and denote the groups by F_1, ..., F_{N_g}. Let n be the number of individuals in the society, and denote the individuals by x_1, ..., x_n. We denote by F(t) the micro-state of the society at time t. The micro-state represents the state of the society. In our case, F(t) is the membership matrix at time t, a binary n × N_g matrix that specifies who is in which group,

    F_{ij}(t) = \begin{cases} 1 & \text{if node } x_i \text{ is in group } F_j, \\ 0 & \text{otherwise.} \end{cases}    (1)

The group membership may change with time. We assume that F(t) is a Markov chain; in other words, the members decide which groups to belong to at time t+1 based solely on the group structure at time t. In determining which groups to join in the next period, the individuals may have their own preferences, thus there is some transition probability distribution

    P[F(t+1) \mid F(t), \theta],    (2)

where θ is a set of (fixed) parameters that determine, for example, the individual preferences.



This transition matrix represents what we define as the micro-laws of the society, which determine how its group structure evolves. A particular setting of the parameters θ is a particular realization of the micro-laws. We will assume that the group membership is static, which is a trivial special case of a Markov chain in which the transition matrix is the identity matrix. In the general case this need not be so; we pick this simplified case to illustrate the mechanics of determining the hidden group without complicating it with the group dynamics. Thus, the group structure F(t) is fixed, so we will drop the t dependence. We do not observe the group structure, but rather the communications that are a result of this structure. We thus need a model for how the communications arise out of the groups. Let C(t) denote the communications graph at time t; C_{ij}(t) is the intensity of the communication between node x_i and node x_j at time t. C(t) is the "expression" of the micro-state F. Thus, there is some probability distribution

    P[C(t) \mid F(t), \lambda],    (3)

where λ is a set of parameters governing how the group structure gets expressed in the communications.
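As a concrete illustration of the micro-state just defined (a sketch under our own assumptions, not code from the paper), the hidden state of the model can be held as a binary n × N_g membership matrix; the static-membership assumption then makes the transition a trivial identity map. The helper names are hypothetical.

    import numpy as np

    def make_membership(n, Ng, assignments):
        """Build the binary micro-state F of equation (1).
        `assignments` maps a node index to an iterable of group indices."""
        F = np.zeros((n, Ng), dtype=int)
        for node, groups in assignments.items():
            for g in groups:
                F[node, g] = 1
        return F

    def transition_static(F):
        """Static group membership: the identity transition, F(t+1) = F(t)."""
        return F

    # Hypothetical society: 6 nodes, 2 observed groups.
    F = make_membership(6, 2, {0: [0], 1: [0], 2: [0, 1], 3: [1], 4: [1], 5: [1]})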

Since F(t) is a Markov chain, C(t) follows a Hidden Markov process governed by the two probability distributions P[F(t+1) | F(t), θ] and P[C(t) | F(t), λ]. In particular, we will assume that there is some parameter 0 < λ < 1 that governs how nodes in the same group communicate. We assume that the communication intensity C_{ij}(t) has a Poisson distribution with parameter Kλ, where K is the number of groups that both nodes are members of. If K = 0, we set the Poisson parameter to λ² ≪ 1; otherwise it is Kλ. Thus, nodes that are in no group together will tend not to communicate. The Poisson distribution is often used to model such "arrival" processes. Thus,

    P[C_{ij} = k] = \begin{cases} \mathcal{P}(k; K\lambda) & x_i \text{ and } x_j \text{ are in } K > 0 \text{ groups together}, \\ \mathcal{P}(k; \lambda^2) & x_i \text{ and } x_j \text{ are in no groups together}, \end{cases}    (4)

where \mathcal{P}(k; \lambda) is the Poisson probability distribution function,

    \mathcal{P}(k; \lambda) = \frac{e^{-\lambda} \lambda^k}{k!}.    (5)
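The sketch below (our own, with an assumed function name and numpy as the assumed tooling) samples one communication graph C(t) from a membership matrix F according to equations (4)-(5): each unordered pair of nodes draws its intensity from a Poisson distribution whose parameter is Kλ if the pair shares K > 0 groups and λ² otherwise.

    import numpy as np

    def sample_communications(F, lam, rng=None):
        """Sample one communication graph C(t) under equations (4)-(5).
        F: binary (n x Ng) membership matrix; lam: the parameter 0 < lambda < 1.
        Returns a symmetric integer matrix of communication intensities."""
        rng = np.random.default_rng() if rng is None else rng
        F = np.asarray(F)
        n = F.shape[0]
        shared = F @ F.T                                     # K_ij: common group counts
        rate = np.where(shared > 0, lam * shared, lam ** 2)  # Poisson parameter per pair
        C = np.zeros((n, n), dtype=int)
        for i in range(n):
            for j in range(i + 1, n):                        # each unordered pair once
                C[i, j] = C[j, i] = rng.poisson(rate[i, j])
        return C

    # e.g. C_t = sample_communications(F, lam=0.5) for a membership matrix F as above.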

We will assume that the communications between different pairs of nodes are independent of each other, as are communications at different time steps. Suppose we have a broadcast hidden group in the society as well, as illustrated in Figure 1(c). We assume a particular model for the communications within the hidden group, namely that every pair of hidden nodes that are in the same visible group communicate. The intensity of these communications, B, is assumed to follow a Poisson distribution with parameter β; thus

    P[B = k] = \mathcal{P}(k; \beta).    (6)
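Continuing the same hypothetical sketch, the hidden-group model of equation (6) can be overlaid on a sampled graph by adding β-intensity communications between every pair of hidden members who share a visible group. The function name and the set representation of the hidden group are assumptions.

    import numpy as np

    def add_hidden_broadcast(C, F, hidden, beta, rng=None):
        """Overlay the hidden-group broadcast of equation (6) on a sampled graph C.
        Every pair of hidden members sharing a visible group gets extra
        communications drawn from a Poisson distribution with parameter beta."""
        rng = np.random.default_rng() if rng is None else rng
        C = np.asarray(C).copy()
        shared = np.asarray(F) @ np.asarray(F).T
        members = sorted(hidden)
        for a, i in enumerate(members):
            for j in members[a + 1:]:
                if shared[i, j] > 0:
                    b = rng.poisson(beta)
                    C[i, j] += b
                    C[j, i] += b
        return C

Overlaying these extra edges on the normal within-group traffic reproduces the complication noted for Figure 1: the broadcast is hidden inside the ordinary communication graph.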



We have thus fully specified the model for the society and how the communications will evolve. The task is to use this model to determine, from the communication history (as in Figure 2), whether or not there exists a hidden group, and if so, who the hidden group members are.

2.1 The Maximum Likelihood Approach

For simplicity we will assume that the only unknown is F, the group structure. Thus F is static and unknown, and λ and β are known. Let H be a binary indicator variable that is 1 if a hidden group is present, and 0 if not. Our approach is to determine how likely the observed communications would be if there were a hidden group, l_1, and compare this with how likely the observed communications would be if there were no hidden group, l_0. To do this, we use the model describing the communications evolution with a hidden group (resp. without a hidden group) to find what the best group structure F would be if this model were true, and compute the likelihood of the communications given this group structure and the model. Thus, we have two optimization problems,

    l_1 = \max_{F, v} P[\text{Data} \mid F, v, \lambda, \beta, H = 1],    (7)

    l_0 = \max_{F} P[\text{Data} \mid F, \lambda, H = 0],    (8)

where Data represents the communication history of the society, namely \{C(t)\}_{t=1}^{T}, and v is a binary indicator variable that indicates who the hidden and visible members of the society are. If l_1 > l_0, then the communications are more likely if there is a hidden group, and we declare that there is a hidden group. As a by-product of the optimization we obtain F and v, hence we identify not only who the hidden group members are, but also the remaining group structure of the society. In what follows, we derive the likelihood function that needs to be optimized for our example society; what remains is then to solve the two optimization problems to obtain l_1 and l_0. The simpler case is when there is no hidden group, which we analyze first. Suppose that F is given. Let f_{ij} be the number of groups that nodes x_i and x_j are both members of,

    f_{ij} = \sum_{k} F_{ik} F_{jk}.    (9)

Let λ_{ij} be the Poisson parameter for the intensity of the communication between nodes x_i and x_j,

    \lambda_{ij} = \begin{cases} \lambda^2 & f_{ij} = 0, \\ \lambda f_{ij} & f_{ij} > 0. \end{cases}    (10)

Let P(t) be the probability of obtaining the observed communications C(t) at time t.



Since the communications between nodes are assumed independent, and each is distributed according to a Poisson process with parameter λ_{ij}, we have that

    P(t) = P[C(t) \mid F, \lambda, H = 0]    (11)
         = \prod_{i < j}^{n} \mathcal{P}(C_{ij}(t); \lambda_{ij}).    (12)
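The following sketch (ours, not the paper's implementation; scipy is an assumed dependency and the function names are hypothetical) computes the pairwise Poisson parameters of equations (9)-(10) and the resulting no-hidden-group log-likelihood of equations (11)-(12), summed over the independent time steps.

    import numpy as np
    from scipy.stats import poisson

    def pair_rates(F, lam):
        """Poisson parameters lambda_ij of equations (9)-(10)."""
        F = np.asarray(F)
        f = F @ F.T                                  # f_ij: shared group counts
        return np.where(f > 0, lam * f, lam ** 2)

    def log_likelihood(C_series, F, lam):
        """log prod_t P(t) with P(t) as in equations (11)-(12):
        the no-hidden-group likelihood for a fixed membership matrix F."""
        rates = pair_rates(F, lam)
        n = rates.shape[0]
        iu = np.triu_indices(n, k=1)                 # each unordered pair once
        total = 0.0
        for C in C_series:                           # independent time steps
            total += poisson.logpmf(np.asarray(C)[iu], rates[iu]).sum()
        return total

In the full method, l_0 would be the maximum of this quantity over F as in equation (8), and l_1 the analogous maximum over F and v for the model that adds the β-intensity broadcast communications, as in equation (7); a hidden group is declared when l_1 > l_0.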