GeoSpatial Semantics: Third International Conference, GeoS 2009, Mexico City, Mexico, December 3-4, 2009. Proceedings [1 ed.] 3642104355, 9783642104350

This book constitutes the refereed proceedings of the Third International Conference on GeoSpatial Semantics, GeoS 2009, held in Mexico City, Mexico, in December 2009.



Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board

David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

Lecture Notes in Computer Science 5892

Krzysztof Janowicz, Martin Raubal, Sergei Levashkin (Eds.)

GeoSpatial Semantics

Third International Conference, GeoS 2009
Mexico City, Mexico, December 3-4, 2009
Proceedings

Springer

Volume Editors

Krzysztof Janowicz
The Pennsylvania State University, Department of Geography
University Park, PA 16802, USA
E-mail: [email protected]

Martin Raubal
University of California, Department of Geography
Santa Barbara, CA 93106, USA
E-mail: [email protected]

Sergei Levashkin
National Polytechnic Institute, Centro de Investigación en Computación
07738 Mexico City, Mexico
E-mail: [email protected]

Library of Congress Control Number: 2009939281
CR Subject Classification (1998): J.2, J.4, H.2.8, H.3, H.2, H.4, I.2.9
LNCS Sublibrary: SL 3 – Information Systems and Application, incl. Internet/Web and HCI
ISSN: 0302-9743
ISBN-10: 3-642-10435-5 Springer Berlin Heidelberg New York
ISBN-13: 978-3-642-10435-0 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2009 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12800599 06/3180 543210

Preface

GeoS 2009 was the third edition of the International Conference on Geospatial Semantics. It was held in Mexico City, December 3-4, 2009. In recent years, geospatial semantics has become a prominent research field in GIScience and related disciplines. It aims at exploring strategies, computational methods, and tools to support semantic interoperability, geographic information retrieval, and usability. Research on geospatial semantics is a multidisciplinary and heterogeneous field, which combines approaches from the geosciences with philosophy, linguistics, cognitive science, mathematics, and computer science.

With the increasing popularity of the Semantic Web and especially the advent of linked data, the need for semantic enablement of geospatial services becomes even more pressing. In general, semantic interoperability plays a role whenever data are acquired in a context different from the one in which they are finally used. This is the case when shifting from the document Web to the data Web. The core idea of linked data is to make information contributed by various actors, with different cultural backgrounds and different applications in mind, available to the public. Understanding, matching, and translating between the conceptualizations underlying these data becomes a key challenge for future research on geospatial semantics.

This volume contains the full research papers selected from among 19 submissions received in response to the Call for Papers. Each submission was reviewed by three or four Program Committee members, and 10 papers were chosen for presentation. The papers focus on foundations of geo-semantics, the formal representation of geospatial data, semantics-based information retrieval and recommender systems, spatial query processing, as well as geo-ontologies and applications. Overall, a diverse body of research was presented, coming from institutions in Austria, Germany, Mexico, The Netherlands, Spain, Taiwan, and the USA.

We are indebted to the many people who made this event happen. The members of the Program Committee offered their help with reviewing submissions. Our thanks go also to Miguel Matinez, Nahun Montoya, Walter Renteria, Iyeliz Reyes, and Linaloe Sarmiento, who formed the Local Organizing Committee and took care of all the logistics. The Centro de Investigación en Computación, Mexico City, Mexico, was the local host and co-sponsored GeoS 2009. Finally, we would like to thank all the authors who submitted papers to GeoS 2009; Christoph Stasch, Arne Bröring, and Pascal Hitzler for giving tutorials about Sensor Web Enablement and rules in OWL, respectively; as well as our keynote speakers Andrew Frank and Pascal Hitzler.

December 2009

Krzysztof Janowicz Martin Raubal Sergei Levashkin

Organization

Organizing Committee

Conference Chair

Sergei Levashkin, Centro de Investigación en Computación, Mexico City, Mexico

Program Chairs

Krzysztof Janowicz, The Pennsylvania State University, USA
Martin Raubal, University of California, Santa Barbara, USA

Organizing Chairs

Centro de Investigación en Computación, Mexico City, Mexico:
Miguel Matinez (Chair), Nahun Montoya, Walter Renteria, Iyeliz Reyes, Linaloe Sarmiento

GeoS 2009 Program Committee

Neeharika Adabala, Microsoft Research India, India
Ola Ahlqvist, The Ohio State University, USA
Naveen Ashish, University of California, Irvine, USA
Brandon Bennett, University of Leeds, UK
Ioan Marius Bilasco, Laboratoire d'Informatique Fondamentale de Lille, France
Stefano Borgo, National Research Council, Italy
Boyan Brodaric, Geological Survey of Canada, Canada
Gilberto Camara, INPE, Brazil
Isabel Cruz, University of Illinois at Chicago, USA
Clodoveu Davis, Universidade Federal de Minas Gerais, Brazil
Andrew Frank, Technical University Vienna, Austria
Mark Gahegan, The University of Auckland, New Zealand
Brent Hecht, Northwestern University, Chicago, USA
Cory Henson, Wright State University, USA
Stephen Hirtle, University of Pittsburgh, USA
Pascal Hitzler, Wright State University, USA
Prateek Jain, Wright State University, USA
Tomi Kauppinen, Helsinki University of Technology, Finland
Marinos Kavouras, National Technical University of Athens, Greece
Carsten Keßler, University of Münster, Germany
Alexander Klippel, The Pennsylvania State University, USA
Craig Knoblock, University of Southern California, USA
Margarita Kokla, National Technical University of Athens, Greece
Dave Kolas, BBN Technologies, USA
Werner Kuhn, University of Münster, Germany
Felix Mata, Centro de Investigación en Computación, Mexico City, Mexico
Marco Painho, ISEGI, Universidade Nova de Lisboa, Portugal
Christine Parent, École Polytechnique Fédérale de Lausanne, Switzerland
Vasily Popovich, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, Russia
Sudha Ram, University of Arizona, USA
Christoph Schlieder, University of Bamberg, Germany
Angela Schwering, University of Münster, Germany
Shashi Shekhar, University of Minnesota, USA
Kathleen Stewart Hornsby, The University of Iowa, USA
Nancy Wiegand, University of Wisconsin-Madison, USA
Stephan Winter, The University of Melbourne, Australia

Sponsoring Institutions

Instituto Politécnico Nacional (IPN), Mexico
Centro de Investigación en Computación (CIC), Mexico
National Council on Science and Technology (CONACYT), Mexico

Table of Contents

Keynotes

Multi-cultural Aspects of Spatial Knowledge ..... 1
    Andrew U. Frank
Towards Reasoning Pragmatics ..... 9
    Pascal Hitzler

Foundations of Geo-semantics

A Functional Ontology of Observation and Measurement ..... 26
    Werner Kuhn
The Case for Grounding Databases ..... 44
    Simon Scheider

Formal Representation of Geospatial Data

Towards a Semantic Representation of Raster Spatial Data ..... 63
    Rolando Quintero, Miguel Torres, Marco Moreno, and Giovanni Guzmán
Bottom-Up Gazetteers: Learning from the Implicit Semantics of Geotags ..... 83
    Carsten Keßler, Patrick Maué, Jan Torben Heuer, and Thomas Bartoschek

Semantics-Based Information Retrieval and Recommender Systems

Ontology-Based Integration of Sensor Web Services in Disaster Management ..... 103
    Grigori Babitski, Simon Bergweiler, Jörg Hoffmann, Daniel Schön, Christoph Stasch, and Alexander C. Walkowski
A Spatial User Similarity Measure for Geographic Recommender Systems ..... 122
    Christian Matyas and Christoph Schlieder

Integration of Semantics into Spatial Query Processing

SPARQL Query Re-writing Using Partonomy Based Transformation Rules ..... 140
    Prateek Jain, Peter Z. Yeh, Kunal Verma, Cory A. Henson, and Amit P. Sheth
iRank: Ranking Geographical Information by Conceptual, Geographic and Topologic Similarity ..... 159
    Felix Mata

Geo-ontologies and Applications

Towards an Ontology for Reef Islands ..... 175
    Stephanie Duce
Narrative Geospatial Knowledge in Ethnographies: Representation and Reasoning ..... 188
    Chin-Lung Chang, Yi-Hong Chang, Tyng-Ruey Chuang, Dong-Po Deng, and Andrea Wei-Ching Huang

Author Index ..... 205

Multi-cultural Aspects of Spatial Knowledge

Andrew U. Frank
Geoinformation, TU Wien, Gusshausstrasse 27-29/E127
[email protected]

It is trivial to observe differences between cultures: people use different languages, have different modes of building houses, and organize their cities differently, to mention only a few. Differences in the culture of different peoples were and still are one of the main reasons for travel to foreign countries. The question whether cultural differences are relevant for the construction of Geographic Information Systems is longstanding (Burrough et al. 1995) and is of increasing interest now that geographic information is widely accessible using the web and users volunteer information to be included in the system (Goodchild 2007). Reviewing how the question of cultural differences was posed at different times reveals a great deal about the conceptualization of GIS at those times and makes a critical review interesting.

At the heart of the discussion of cultural differences relevant for GIScience is a Whorfian hypothesis: that different cultural backgrounds could be responsible for differences in the way space and spatial relations are conceived. Whorf claimed that people using a language with more differentiation, for example in terms describing different types of snow, also perceive reality differently from people using a language with less differentiation (Carroll 1956). An early contribution picked up on suggestions made by Mark and others (1989b) and identified several distinct issues that could be investigated individually (Campari et al. 1993):

1. the cultural assumptions that are built into the GIS software may differ from those of the user;
2. the influence of the decision context in which a GIS is used;
3. the conceptualization of space and time may differ;
4. differences in the administrative processes and how they structure space;
5. the sense of territoriality, ownership or dominance of space, is different between peoples, again citing ethnographic examples;
6. the influence of the material culture, the ecosystem, economy, and technology.

In this early paper, Campari and Frank asked whether a single or a few GIS software packages could serve universally, or whether the local (national) development of GIS software, which still existed at that time, was justified by cultural differences.

1 Initial Focus on Cognitive Cultural Differences

Montello (1995) concentrated on cultural differences in the conceptualization of space and argued that a large share of spatial cognition is universal, i.e., the same for all human beings, because the problems the environment posed to humans during their
development, and to which their cognitive apparatus adapted, are basically the same for all humans; Montello refers to substantial empirical evidence for this claim. Evidence to the contrary has, despite efforts, not been found. For example, the study by Freundschuh investigated whether growing up in a regular "Manhattan" grid would influence spatial cognition compared with teenagers who grew up in a modern, curved-road suburban setting; the results were not conclusive (Freundschuh 1991).

Linguistics has explored the different ways that the languages of the world express spatial relations. Well known are the central–periphery organizations used in Hawaii (Mark et al. 1989b) and the use of up–down in valleys or on slopes (Bloom et al. 1994). Montello also addresses the Whorfian hypothesis and mentions the lack of evidence that 'language structures space' in a direct way (as a paper title by Talmy (1983) may be misunderstood to suggest). The differences in methods to express spatial situations in different languages, e.g., the preference for egocentric or absolute frames (Levinson 1996; Frank 1998a; Klatzky 1998), are observed in situations where no best solution exists, and are preferences rather than absolute choices: Western cultures prefer an egocentric expression (the glass on the left), whereas others, often rural groups, prefer cardinal directions (Pederson (1993) for India; observe also the use of cardinal directions in a fight between two men in Synge's play 'The Playboy of the Western World' in Anglo-Irish); these are only preferences for one method, and the other method is available as well. Montello's contribution suggests for GIS development that multiple software modules that recognize and work with spatial situations could be useful universally, and that no cultural adaptation for differences in spatial conceptualization is likely necessary.

Egenhofer gave mathematical definitions for spatial terminology to express topological relations between regions and to make possible precise studies of what natural language terms like 'touch' mean (Egenhofer 1989; Egenhofer et al. 1991). He defined, for example, a large number of differentiable topological relations between a region and a line. With Mark he observed how people would group these relations into clusters that are differentiated in verbal expressions (Mark et al. 1992; Mark et al. 1994). The testing situation asked questions about a road (the line) and a park (the region); it must be suspected that the context created in the testing situation affects the grouping, separating cases that contain differences that are practically relevant in the situation. More tests could be worthwhile to see how context influences the grouping, but tests to discover cultural differences were not successful (Mark et al. 1995a) and revealed more commonality (Ragni et al. 2007).

Comparable to the formalization of topological relations are efforts to construct qualitative distance and direction relations (Frank 1992; Freksa 1992; Zimmermann 1993; Hernández et al. 1995; Zimmermann et al. 1996), from which qualitative spatial reasoning emerged as a subfield of spatial information theory. This line of research typically produced tables showing the result of composing two (or more) relations: e.g., Santa Barbara is west of Los Angeles and Los Angeles is west of New York, therefore we can conclude that Santa Barbara is west of New York.
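To make the idea of a composition table concrete, the following minimal Python sketch composes coarse cardinal-direction relations. The rules are deliberately simple and illustrative only; they are much coarser than the published calculi cited above (Frank 1992; Freksa 1992).

```python
# Illustrative composition of cardinal-direction relations, in the
# spirit of qualitative spatial reasoning calculi. The rules below are
# a deliberately coarse sketch, not a published composition table.
DIRS = {"N", "E", "S", "W"}

def compose(r1: str, r2: str) -> set[str]:
    """Derive the possible relations A->C from A->B (r1) and B->C (r2)."""
    if r1 == r2:
        return {r1}                       # west of west is still west
    if {r1, r2} in ({"N", "S"}, {"E", "W"}):
        return set(DIRS)                  # opposite directions: no constraint
    return {r1, r2}                       # orthogonal: coarse disjunction

# Santa Barbara is west of Los Angeles; Los Angeles is west of New York.
print(compose("W", "W"))                  # {'W'}: Santa Barbara is west of New York
```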
The research in spatial cognition applied to GIS was driven by a hope that natural-language-like communication with humans would become the way we interact with a GIS; the influential paper 'Naive Geography' by Mark and Egenhofer discussed the differences between formal and human conceptualizations (Mark et al. 1995a). It was expected that computer programs could correct for typical incorrect human conceptualizations (e.g., the alignment error (Stevens et al. 1978) in the coastline around Santa Barbara, which runs conceptually north-south but geographically east-west). Despite well-documented, regularly observed cases where human and formal definitions systematically differ, no formalization has been published so far; but it was found that these typical human spatial reasoning errors are independent of culture. For example, Xiao found similar effects of regionalization in China as are observed in Western cultures (Xiao et al. 2007).

2 Linguistic Differences

Users of GIS whose native language differs from the language used for the user interface of the software can encounter difficulties, and errors and misunderstandings result. Campari (1994) investigated the command language of GIS and the differences that result when non-native speakers are confronted with command language terms originating from English. The study showed in detail that a native Italian speaker could misunderstand the meaning of translated command language terms because the connotations and metaphors evoked are different. For example, the English term 'layer' has different connotations than the corresponding Italian term 'copertura'. The concern in the early 1990s was that spatial professionals would have only a basic knowledge of English and would use translated manuals and command languages; today, GIS specialists learn the English-based GIS commands as they learn other computer terms, fully aware of the limits of metaphorical transfer of common-sense knowledge to the virtual realm. The difficulty Campari pointed to is absorbed by the trained GIS specialist who builds the application for users and bridges the differences from the English-based technical GIS vocabulary to the users' descriptions of operations in their language.

The differences between vocabularies appear simpler: different vocabulary terms seem to describe the same class of things, and translation a simple mapping: from French 'chien' to German 'Hund' to English 'dog'. Unfortunately, for most terms, translation is not as simple: English uses the two terms 'in' and 'on', whereas German differentiates 'in', 'an', and 'auf' (Frank 1998b), and a direct mapping fails: Germans ride 'in' the train or bus, whereas the English ride 'on' the train or bus (a small sketch at the end of this section makes this concrete).

In a landmark paper, Mark (1993) compares natural language terms for landscape features in English, French, and Spanish; it becomes apparent that these closely related languages use different distinctions (Frank 2006) for comparable (but not strictly translatable) landscape terms. Despite the clear-cut definitions in dictionaries, the pragmatics of landscape terms, i.e., their actual use, is strongly influenced by the ecological context. His work was based on dictionary definitions, but comparing the actual use of such terms in toponyms casts doubt on the strictness of the definitions (Mark, personal communication).

Current research by Mark and his colleagues in ethnophysiography investigates landscape terminology used by indigenous peoples in different parts of the world (Mark et al. 2007). They are careful to select peoples in similar ecological situations (arid regions in the southwestern USA and in the northwest of Australia) to reduce the effects of ecotopes. An observation surprising to us is that a stream-bed and the water flowing in it can be strongly separated conceptually. They also observe a strong tendency to 'populate' the landscape with spirits (ghosts), which is a reflection of a polytheistic religion and thus a cultural difference.
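Returning to the 'in'/'an'/'auf' example above, the failure of word-to-word mapping can be sketched in code: translation must be keyed by the image schema of the ground object, along the lines of the 'container', 'surface', and 'link' schemata formalized in Frank (1998b). The table entries below are illustrative (the extra 'vehicle' case covers the train/bus example from the text), not a complete account.

```python
# Sketch: preposition translation keyed by (preposition, image schema)
# rather than by the preposition alone; entries are illustrative.
TRANSLATE = {
    ("in", "container"): "in",    # water in the glass   -> im Glas
    ("on", "surface"):   "auf",   # book on the table    -> auf dem Tisch
    ("on", "vehicle"):   "in",    # riding on the train  -> im Zug
    ("on", "link"):      "an",    # picture on the wall  -> an der Wand
}

def translate(preposition: str, schema: str) -> str:
    return TRANSLATE[(preposition, schema)]

# A direct on->auf mapping fails exactly where the schemata differ:
print(translate("on", "vehicle"))  # 'in', not 'auf'
```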

3 Differences in the Spatial Structure and the Physical Environment

GIS still force our understanding of the world into fixed, exactly bounded objects. This is, on one hand, the effect of the use of coordinate geometry, and on the other, the inheritance of a land use planning tradition, where land use is planned for non-overlapping, clearly bounded regions. Legal traditions differ in how sharply they create boundaries; current European law varies between a concept of general boundary, formed by a (possibly wide) hedge, in England and Wales, and geometrically sharp boundary lines fixed by coordinates in Austria. A variety of aspects were discussed in a workshop (Burrough et al. 1996b), resulting in reports that show the counter-intended effects sharp boundaries can have (Burrough et al. 1996a).

Campari (1996) discussed the conflict between clear-cut two-dimensional planning regions, as applicable in the physical environment for which the earliest planning GIS were built in the 1970s, i.e., U.S. Midwest suburban towns and their planning, and applications of GIS to capture the reality of traditional towns built on steep inclines, e.g., in southern Europe. The limited two-dimensional view is insufficient, and a three-dimensional representation is necessary, but it will not likely result in sharply delimited, single-use regions that can be entered in a GIS; the 'open' space in a town serves for transportation and access to sunlight, but also for rainwater runoff. The application of GIS in other cultures, with other building styles, climates, etc., may require other deviations from the two-dimensional, sharp-boundary model. Efforts to define approximate spatial relations between vague regions are important to bridge the gap between the geometric reasoning of GIS and its human users. Sharma described in his PhD thesis an approximate calculus for distance and directions (1996), following the approach of Frank (1992); see also Rezayan et al. (2005).

Specialized systems for spatial navigation in cars, and increasingly also for pedestrians, have become very popular, and questions of how humans give directions are now practically important (Lovelace et al. 1999). Studies in the 1980s (Denis 1997) have shown (small) differences between genders, but no differences between, say, Europe and the USA (unpublished thesis, TU Wien). State-of-the-art commercial devices give satisfactory wayfinding instructions in simple cases, but they are not satisfactory in complex situations where the spatial reasoning of the system differs widely from the way humans conceptualize a situation. Giving precise verbal instructions to navigate a complex multi-bifurcation is assisted by graphical displays, distracting the driver in a situation where his attention should be on the other moving cars around him; equally distracting are the differences between the system's view of where a turn instruction is necessary and where not, indicating that the concept of 'following a road' based on the road classification and numbering scheme conflicts with the visually perceived reality.

It is apparent that geographic information could be used to improve search on the web. In many cases, a query has a spatial focus, and objects satisfying the conditions but far away are not relevant (e.g., a search for a pizza place, or an ATM). To process queries like 'show me the pizza places downtown' or 'find a hotel in the Black Forest', we require a definition of where 'downtown' (Montello et al. 2003) or the 'Black Forest' is. Efforts to glean this information statistically from the use of such terms on the web are underway, often reported as using 'vernacular' location terms (to differentiate them from the toponyms collected in official gazetteers) (Twaroch et al. 2008); a toy sketch of the idea follows at the end of this section.

The context dependence of qualitative spatial relations is well known but poorly understood. Most of the above efforts to make GIS more usable and more 'user friendly' depend on understanding how the present context influences the meaning of the terms used. Linguists have studied the context dependence of semantics in general and have, unfortunately, not come up with a satisfactory answer. A recent publication by Gabora, Rosch and Aerts (2008) gives a very precise account of the difficulties of previously proposed approaches and sketches a novel method; it uses quantum mechanics as a calculus for transforming expressions between different contexts and claims that it corresponds to empirical observations. The application to spatial situations is a promising, but open, question.
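The following minimal sketch shows how a 'vernacular' region such as 'downtown' could be gleaned from web data: collect the coordinates at which the term is used and take the densest cells of a grid as the region's core. The coordinates below are synthetic, and the method is deliberately simple; the published approaches (Montello et al. 2003; Twaroch et al. 2008) are more sophisticated.

```python
import numpy as np

# Synthetic stand-in for geotagged web mentions of 'downtown' (lon, lat).
rng = np.random.default_rng(0)
mentions = rng.normal(loc=[-119.70, 34.42], scale=0.01, size=(500, 2))

# Count mentions per grid cell; cells above a threshold form the vague
# region's core, with no claim to a crisp boundary.
counts, lon_edges, lat_edges = np.histogram2d(
    mentions[:, 0], mentions[:, 1], bins=20)
core = counts >= 0.5 * counts.max()
print(f"{core.sum()} of {core.size} cells form the core of 'downtown'")
```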

4 Conclusions

Differences between cultures affect how GIS is used, and 'cultural differences' form a major obstacle in the application of GIS. The initial fears of substantial differences in the cognition of space by human beings from different cultures have not been confirmed. Similarly, the way spatial relations are described appears more situation (context) dependent than culturally different. Studies formalizing spatial reasoning showed very substantial "cultural" differences between the way a computer system treats geometry and human performance, a field of research where many questions are still open, but fortunately simplified insofar as we may expect that human spatial cognition and human spatial reasoning are universal. Differences in the conceptualization of spatial situations, independent of the socio-economic (cultural) context, are not documented, but large differences exist in the language expressions used to communicate spatial situations. A possible explanation is that conceptualization and mental classification are much finer, and are mapped to the coarser verbal expressions only for communication. Early research on spatial cognition in GIS assumed a close connection between mental concepts and verbal expressions (Mark et al. 1989a) and followed a linguistic tradition in reporting the verbal expression as the spatial concept.

Large differences exist in the way spatial information is used in different cultures. The practice of spatial decision making is different, because the cultural (social, legal, economic) situation provides a different context and requires different distinctions (Frank 2006) between objects to form classes of situations that can be dealt with similarly. Such cultural differences are visible between countries, especially those using the same language, but are also observable between different agencies in a single city, or even between different parts of a single organization. To understand multicultural influences in geographic information, research today could be focused on the following aspects:


• Differences in vocabulary: terminology differs (lake vs. lac) in the distinctions used to separate the terms, e.g., small vs. large to separate pond from lake, vs. l'étang separated from lac by man-made vs. natural;
• Differences in graphical style, perhaps most obvious in the cartographic styles found in the maps of different National Mapping Agencies. Some of the differences follow from the landscape, ecological, economical, and political situation;
• Cultural meaning of terms; evident are the differences even between countries sharing the same language (USA, Canada, India, UK, etc., or Germany, Austria, and (parts of) Switzerland). The cultural (legal) environment defines concepts and terms which are meaningful in this social-cultural context (X counts as Y in the context Z (Searle 1995)) and differ widely, even when the same word is used.
• Differences in conversation style (Grice 1989): Is it acceptable to anthropomorphize computers? Levels of politeness are required even in a computer dialog. Length of turns between the partners in a conversation.

References

Bloom, P., Peterson, M.A., Nadel, L., Garrett, M.F. (eds.): Language and Space. Language, Speech and Communication. MIT Press, Cambridge (1994)
Burrough, P., Couclelis, H.: Practical Consequences of Distinguishing Crisp Geographic Objects. In: Masser, I., Salgé, F. (eds.) Geographic Objects with Indeterminate Boundaries, pp. 333–335. Taylor & Francis, London (1996a)
Burrough, P.A., Frank, A.U.: Concepts and Paradigms in Spatial Information: Are Current Geographic Information Systems Truly Generic? International Journal of Geographical Information Systems 9(2), 101–116 (1995)
Burrough, P.A., Frank, A.U. (eds.): Geographic Objects with Indeterminate Boundaries. GISDATA Series. Taylor & Francis, London (1996b)
Campari, I.: GIS Commands as Small Scale Space Terms: Cross-Cultural Conflicts of Their Spatial Content. In: SDH 1994, Sixth International Symposium on Spatial Data Handling, Association for Geographic Information, Edinburgh, Scotland (1994)
Campari, I.: Uncertain Boundaries in Urban Space. In: Burrough, P.A., Frank, A.U. (eds.) Geographic Objects with Indeterminate Boundaries, vol. 2, pp. 57–69. Taylor & Francis, London (1996)
Campari, I., Frank, A.U.: Cultural differences in GIS: A basic approach. In: EGIS 1993, Genoa, March 29 - April 1, EGIS Foundation (1993)
Carroll, J.B.: Language, Thought and Reality - Selected Writings of Benjamin Lee Whorf. The MIT Press, Cambridge (1956)
Denis, M.: The Description of Routes: A cognitive approach to the production of spatial discourse. Cahiers de psychologie cognitive 16(4), 409–458 (1997)
Egenhofer, M.J.: Spatial Query Languages. PhD thesis, University of Maine (1989)
Egenhofer, M.J., Franzosa, R.D.: Point-Set Topological Spatial Relations. International Journal of Geographical Information Systems 5(2), 161–174 (1991)
Frank, A.U.: Qualitative Spatial Reasoning about Distances and Directions in Geographic Space. Journal of Visual Languages and Computing (3), 343–371 (1992)
Frank, A.U.: Formal models for cognition - taxonomy of spatial location description and frames of reference. In: Freksa, C., Habel, C., Wender, K.F. (eds.) Spatial Cognition 1998. LNCS (LNAI), vol. 1404, pp. 293–312. Springer, Heidelberg (1998a)
Frank, A.U.: Specifications for Interoperability: Formalizing Spatial Relations 'In', 'Auf' and 'An' and the Corresponding Image Schemata 'Container', 'Surface' and 'Link'. In: Proceedings of 1st Agile-Conference, ITC, Enschede, The Netherlands (1998b)
Frank, A.U.: Distinctions Produce a Taxonomic Lattice: Are These the Units of Mentalese? In: International Conference on Formal Ontology in Information Systems (FOIS), Baltimore, Maryland. IOS Press, Amsterdam (2006)
Freksa, C.: Using Orientation Information for Qualitative Spatial Reasoning. In: Frank, A.U., Formentini, U., Campari, I. (eds.) GIS 1992. LNCS, vol. 639, pp. 162–178. Springer, Heidelberg (1992)
Freundschuh, S.M.: Spatial Knowledge Acquisition of Urban Environments from Maps and Navigation Experience. PhD thesis, Buffalo (1991)
Gabora, L., Rosch, E., Aerts, D.: Toward an Ecological Theory of Concepts. Ecological Psychology 20(1), 84–116 (2008)
Goodchild, M.: Citizens as Sensors: the World of Volunteered Geography. GeoJournal 69(4), 211–221 (2007)
Grice, P.: Logic and Conversation. In: Studies in the Way of Words, pp. 22–40. Harvard University Press, Cambridge (1989)
Hernández, D., Clementini, E., Di Felice, P.: Qualitative Distances. In: Kuhn, W., Frank, A.U. (eds.) COSIT 1995. LNCS, vol. 988, pp. 45–57. Springer, Heidelberg (1995)
Klatzky, R.L.: Allocentric and egocentric spatial representations: definitions, distinctions, and interconnections. In: Freksa, C., Habel, C., Wender, K.F. (eds.) Spatial Cognition 1998. LNCS (LNAI), vol. 1404, pp. 1–17. Springer, Heidelberg (1998)
Levinson, S.C.: Frames of Reference and Molyneux's Question: Crosslinguistic Evidence. In: Bloom, P., Peterson, M.A., Nadel, L., Garrett, M.F. (eds.) Language and Space, pp. 109–170. MIT Press, Cambridge (1996)
Lovelace, K.L., Hegarty, M., Montello, D.R.: Elements of Good Route Directions in Familiar and Unfamiliar Environments. In: Freksa, C., Mark, D.M. (eds.) COSIT 1999. LNCS, vol. 1661, p. 65. Springer, Heidelberg (1999)
Mark, D.M.: Toward a Theoretical Framework for Geographic Entity Types. In: Campari, I., Frank, A.U. (eds.) COSIT 1993. LNCS, vol. 716, pp. 270–283. Springer, Heidelberg (1993)
Mark, D.M., Comas, D., Egenhofer, M.J., Freundschuh, S.M., Gould, M.D., Nunes, J.: Evaluating and Refining Computational Models of Spatial Relations Through Cross-Linguistic Human-Subjects Testing. In: Kuhn, W., Frank, A.U. (eds.) COSIT 1995. LNCS, vol. 988, pp. 553–568. Springer, Heidelberg (1995a)
Mark, D.M., Egenhofer, M.J.: An Evaluation of the 9-Intersection for Region-Line Relations. In: GIS/LIS 1992 Proceedings, ACSM-ASPRS-URISA-AM/FM, San Jose (1992)
Mark, D.M., Egenhofer, M.J.: Calibrating the Meanings of Spatial Predicates from Natural Languages: Line-Region Relations. In: Sixth International Symposium on Spatial Data Handling, Edinburgh, Scotland (1994)
Mark, D.M., Frank, A.U.: Concepts of Space and Spatial Language. In: Auto-Carto 9, ASPRS & ACSM, Baltimore, MA (1989a)
Mark, D.M., Frank, A.U., Egenhofer, M.J., Freundschuh, S.M., McGranaghan, M., White, R.M.: Languages of Spatial Relations: Initiative Two Specialist Meeting Report. National Center for Geographic Information and Analysis (1989b)
Mark, D.M., Turk, A.G., Stea, D.: Progress on Yindjibarndi Ethno-Physiography. In: Winter, S., Duckham, M., Kulik, L., Kuipers, B. (eds.) COSIT 2007. LNCS, vol. 4736, pp. 1–19. Springer, Heidelberg (2007)
Montello, D.R.: How Significant are Cultural Differences in Spatial Cognition? In: Kuhn, W., Frank, A.U. (eds.) COSIT 1995. LNCS, vol. 988, pp. 485–500. Springer, Heidelberg (1995)
Montello, D.R., Goodchild, M., Gottsegen, J., Fohl, P.: Where's Downtown? Behavioral Methods for Determining Referents of Vague Spatial Queries. Spatial Cognition and Computation 3(2&3), 185–204 (2003)
Pederson, E.: Geographic and Manipulable Space in Two Tamil Linguistic Systems. In: Campari, I., Frank, A.U. (eds.) COSIT 1993. LNCS, vol. 716, pp. 294–311. Springer, Heidelberg (1993)
Ragni, M., Tseden, B., Knauff, M.: Cross-Cultural Similarities in Topological Reasoning. Springer, Heidelberg (2007)
Rezayan, H., Frank, A.U., Karimipour, F., Delavar, M.R.: Temporal Topological Relationships of Convex Spaces in Space Syntax Theory. In: International Symposium on Spatio-Temporal Modeling 2005, Beijing, China, Hong Kong Polytechnic University (2005)
Searle, J.: The Construction of Social Reality. The Free Press, New York (1995)
Sharma, J.: Integrated Spatial Reasoning in Geographic Information Systems. PhD thesis, University of Maine (1996)
Stevens, A., Coupe, P.: Distortions in judged spatial relations. Cognitive Psychology 10, 422–437 (1978)
Talmy, L.: How Language Structures Space. In: Pick, H., Acredolo, L. (eds.) Spatial Orientation: Theory, Research, and Application. Plenum Press, New York (1983)
Twaroch, F., Jones, C.B., Abdelmoty, A.I.: Acquisition of a vernacular gazetteer from web sources. In: Boll, S., Jones, C.B., Kansa, P., et al. (eds.) Proceedings of the First International Workshop on Location and the Web, LocWeb, vol. 300, pp. 61–64. ACM, New York (2008)
Xiao, D., Liu, Y.: Study of Cultural Impacts on Location Judgments in Eastern China. In: Winter, S., Duckham, M., Kulik, L., Kuipers, B. (eds.) COSIT 2007. LNCS, vol. 4736, pp. 20–31. Springer, Heidelberg (2007)
Zimmermann, K.: Enhancing Qualitative Spatial Reasoning - Combining Orientation and Distance. In: Campari, I., Frank, A.U. (eds.) COSIT 1993. LNCS, vol. 716, pp. 69–76. Springer, Heidelberg (1993)
Zimmermann, K., Freksa, C.: Qualitative Spatial Reasoning Using Orientation, Distance, and Path Knowledge. Applied Intelligence 6, 49–58 (1996)

Towards Reasoning Pragmatics

Pascal Hitzler
Kno.e.sis Center, Wright State University, Dayton, Ohio
http://www.pascal-hitzler.de/

Abstract. The realization of Semantic Web reasoning is central to substantiating the Semantic Web vision. However, current mainstream research on this topic faces serious challenges, which force us to question established lines of research and to rethink the underlying approaches.

1 What Is Semantic Web Reasoning?

The ability to combine data, mediated by metadata, in order to derive knowledge which is only implicitly present, is central to the Semantic Web idea. This process of accessing implicit knowledge is commonly called reasoning, and formal model-theoretic semantics tells us exactly what knowledge is implicit in the data.¹

Let us attempt to define reasoning in rather general terms: Reasoning is about arriving at the exact answer(s) to a given query. Formulated in this generality, this encompasses many situations which would classically not be considered reasoning – but it will suffice for our purposes. Note that the definition implicitly assumes that there is an exact answer. In a reasoning context, such an exact answer would normally be defined by a model-theoretic semantics.²

Current approaches to Semantic Web reasoning, however, which are mainly based on calculi drawn from predicate logic proof theory, face several serious obstacles.

– Scalability of algorithms and systems has been improving drastically, but systems are still incapable of dealing with amounts of data on the order of magnitude that can be expected on the World Wide Web. This is aggravated by the fact that classical proof theory does not readily allow for parallelization, and that the amount of data present on the web increases at a growth rate similar to the efficiency gains of hardware.
– Realistic data, in particular on the web, is generally noisy. Established proof-theoretic approaches (even those including uncertainty or probabilistic methods) are unable to cope with this kind of data in a manner which is ready for large-scale applications.

– It is a huge engineering effort to create web data and ontologies which are of sufficiently high quality for current reasoning approaches, and usually beyond the abilities of application developers. The resulting knowledge bases are furthermore severely limited in terms of reusability for other application contexts.

The state of the art shows no indications that approaches based on logical proof theory would overcome these obstacles anytime soon in such a way that large-scale applications on the web can be realized. Since reasoning is central to the Semantic Web vision, we are forced to rethink our traditional methods, and should be prepared to tread new paths. A key idea to this effect, voiced by several researchers (see e.g. [3,23]), is to explore alternative methods for reasoning. These may still be based more or less closely on proof-theoretic considerations, or they may not. They could, e.g., utilize methods from statistical machine learning or from nature-inspired computing.

Researchers who are used to thinking in classical proof-theoretic terms are likely to object to this thought, arguing that a relaxation of strict proof-theoretic requirements on algorithms, such as soundness and completeness, would pave the way for arbitrary algorithms which do not perform logical reasoning at all, and thus would fail to adhere to the specification provided by the formal semantics underlying the data – and thus jeopardize the Semantic Web vision. While such arguments have some virtue, it needs to be stressed that the nature of the underlying algorithm is, effectively, unimportant, as long as the system adheres to the specification, i.e. to the formal semantics.

Imagine, as a thought experiment, a black box system which performs sound and complete reasoning in all application settings it is made for – or at least up to the extent to which standard reasoning systems are sound and complete.³ Does it matter then whether the underlying algorithm is provably sound and complete? I guess not. The only important thing is that its performance is sound and complete. If the black box were orders of magnitude faster than conventional reasoners, but somebody told you that it is based on statistical methods, which one would you choose to work with? Obviously, the answer depends on the application scenario – if you'd like to manage a bank account, you may want to stick with the proof-theoretic approach, since you can prove formally that the algorithm does what it should; but if you use the algorithm for web search, the quicker algorithm might be the better choice. Also, your choice will likely depend on the evidence given as to the correctness of the black box algorithm in application settings.

This last thought is important: If a reasoning system is not based on proof theory, then there must be a quality measure for the system, i.e., the system must be evaluated against the gold standard, which is given by the formal semantics, or equivalently by the provably sound and complete implementations [23].

If we bring noisy data, as on the web, into the picture, it becomes even clearer why a fixation on soundness and completeness of reasoning systems is counterproductive for the Semantic Web: In the presence of such data, even the formal model-theoretic semantics breaks down, and it is quite unclear how to develop algorithms based on proof theory for such data. The notions of soundness and completeness of reasoning in the classical sense appear to be almost meaningless. But only almost, since alternative reasoning systems which are able to cope with noisy data can still be evaluated against the gold standard on non-noisy data, for quality assurance.

In the following, we revisit the role of soundness and completeness for reasoning, and argue further for an alternative perspective on these issues (Section 2). We also discuss key challenges which need to be addressed in order to realise reasoning on and for the Semantic Web, in particular the questions of expressivity of ontology languages (Section 3), roads to bootstrapping (Section 4), knowledge acquisition (Section 5), and user interfacing (Section 6). We conclude in Section 7.

¹ It is rather peculiar that a considerable proportion of so-called Semantic Web research and publications ignores formal semantics. Even most textbooks fail to explain it properly. An exception is [7].
² Simply referring to a formal semantics is too vague, since this would also include procedural semantics, i.e. non-declarative approaches, and thus would include most mainstream programming languages.
³ Usually, they are not sound and complete, although they are based on underlying algorithms which are, theoretically, sound and complete. Incompleteness comes from the fact that resources, including time, are limited. Unsoundness comes from bugs in the system.
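The evaluation against the gold standard just described can be reduced to comparing answer sets. The following minimal sketch computes degrees of soundness and completeness; the metric names and systems are placeholders for illustration, not the statistical framework of [23].

```python
# Degree of soundness and completeness of an approximate reasoner,
# measured against a provably sound and complete reference reasoner.
def soundness(approx: set, gold: set) -> float:
    """Fraction of the returned answers that are actually entailed."""
    return len(approx & gold) / len(approx) if approx else 1.0

def completeness(approx: set, gold: set) -> float:
    """Fraction of the entailed answers that were actually returned."""
    return len(approx & gold) / len(gold) if gold else 1.0

gold = {"a", "b", "c", "d"}        # answers from the gold standard
approx = {"a", "b", "e"}           # answers from a fast black-box system
print(round(soundness(approx, gold), 2))     # 0.67: somewhat unsound
print(round(completeness(approx, gold), 2))  # 0.5:  somewhat incomplete
```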

2 The Role of Soundness, Completeness, and Computational Complexity

Computational complexity has classically been a consideration in the development of description logics, which underlie the Web Ontology Language OWL – currently the most prominent ontology language for Semantic Web reasoning. In particular, OWL is a decidable logic. The currently ongoing revision OWL 2 [6] furthermore explicitly defines fragments, called profiles, with lower (in fact, polynomial) computational complexity. Soundness and completeness are central properties of classical reasoning algorithms for logic-based knowledge representation languages, and are thus central notions in the development of Semantic Web reasoning around OWL.

However, performance issues have prompted researchers to advocate approximate reasoning for the Semantic Web (see e.g. [3,23]). Arguing for this approach provokes radically different kinds of reactions: some logicians appear to be abhorred by the mere thought, while many application developers find it the most natural thing to do. Often it turns out that the opposing factions misunderstand the arguments: counterarguments usually state that leaving the model-theoretic semantics behind would lead to arbitrariness and thus loss of quality. So let it be stated again explicitly: approximate reasoning shall not replace sound and complete reasoning in the sense that the latter would no longer be needed. Quite the contrary: approximate reasoning in fact needs the sound and complete approaches as a gold standard for evaluation and quality assurance. The following shall help to make this relationship clear.

2.1 Sound but Incomplete Reasoning

There appears to be not much argument against this in the Semantic Web community, even from logicians: they are used to this, since some KR languages, including first-order predicate logic, are only semi-decidable,⁴ i.e. completeness can only be achieved with unlimited time resources anyway. For decidable languages, however, a sound but incomplete reasoner should always be evaluated against the gold standard, i.e., against a sound and complete reasoner, in order to show the amount of incompleteness incurred versus the gain in efficiency. Interestingly, this is rarely done in a structured way, which is, in my opinion, a serious neglect. A statistical framework for evaluation against the gold standard is presented in [23].

2.2 Unsound but Complete Reasoning

Allowing for reasoning algorithms to be unsound appears to be much more controversial, and the usefulness of this concept seems to be harder to grasp. However, there are obvious examples. Consider, e.g., fault detection in a power plant in case of an emergency: the system shall determine (quickly!) which parts of the factory need to be shut down. Obviously, it is of highest importance that the critical part is contained in the shutdown, while it is less of a problem if too many other parts are shut down, too.⁵

Another obvious example is semantic search: in most cases, users would prefer to get a quick set of replies, among which the correct one can be found, rather than wait longer for one exact answer. Furthermore, sound-incomplete and unsound-complete systems can sometimes be teamed up and work in parallel to provide better overall performance (see e.g. [24]).
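The power-plant example can be phrased as deliberate over-approximation: answer with a superset that is guaranteed to contain the exact answer (complete), accepting some unnecessary shutdowns (unsound). A sketch with a hypothetical plant topology:

```python
# Shut down everything reachable from the faulty part: fast and complete
# (the critical parts are certainly contained), but unsound (it may shut
# down parts the exact answer would spare). The topology is hypothetical.
PLANT = {
    "pump": ["boiler"],
    "boiler": ["turbine"],
    "turbine": ["generator"],
    "generator": [],
}

def shutdown_superset(fault: str) -> set[str]:
    seen, stack = set(), [fault]
    while stack:
        part = stack.pop()
        if part not in seen:
            seen.add(part)
            stack.extend(PLANT[part])
    return seen

print(shutdown_superset("boiler"))  # {'boiler', 'turbine', 'generator'}
```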

2.3 Unsound and Incomplete Reasoning

Following the above arguments to their logical conclusion, it should become clear why unsound and incomplete reasoning has its place among applications. Remember that there is the gold standard against which such systems should be evaluated. And obviously there is no reason to stray from a sound and complete approach if the knowledge base is small enough to allow for it.

The most prominent historic example of an unsound and incomplete yet very successful reasoning system is Prolog. Traditionally, the unification algorithm, which is part of the SLD-resolution proof procedure used in Prolog [13], is used without the so-called occurs check, which, depending on the exact implementation, can cause unsoundness [16].⁶ This omission was made for reasons of efficiency, and turned out to be feasible since it rarely causes a problem for Prolog programmers.

⁴ Some non-monotonic logics are not even semi-decidable.
⁵ This example is due to Frank van Harmelen, personal communication.
⁶ To obtain a wrong answer, execute the query ?-p(a). on the logic program consisting of the two clauses p(a) :- q(X,X). and q(X,f(X))., e.g. under SWI-Prolog. The example is due to Markus Krötzsch.
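The effect described in footnote 6 can be reproduced with a small unification sketch: the clauses p(a) :- q(X,X) and q(X,f(X)) succeed exactly because unification without the occurs check binds X to f(X). This is a toy implementation for illustration, not Prolog's actual machinery.

```python
# Unification with an optional occurs check. Variables are capitalized
# strings; compound terms are tuples like ("f", "X").
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    return t == v or (isinstance(t, tuple)
                      and any(occurs(v, a, s) for a in t[1:]))

def unify(t1, t2, s, occurs_check=True):
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        if occurs_check and occurs(t1, t2, s):
            return None                  # sound: X cannot equal f(X)
        return {**s, t1: t2}             # without the check: cyclic binding
    if is_var(t2):
        return unify(t2, t1, s, occurs_check)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and len(t1) == len(t2) and t1[0] == t2[0]):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s, occurs_check)
            if s is None:
                return None
        return s
    return None

# Unifying q(X, X) with q(Y, f(Y)), as needed for the body of p(a):
print(unify(("q", "X", "X"), ("q", "Y", ("f", "Y")), {}, occurs_check=True))   # None: fails
print(unify(("q", "X", "X"), ("q", "Y", ("f", "Y")), {}, occurs_check=False))  # cyclic binding
```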

Likewise, it is not unreasonable to expect that carefully engineered unsound and incomplete reasoning approaches can be useful on the Semantic Web, in particular when sound and complete systems fail to provide results within a reasonable time span. Furthermore, there is nothing wrong with using entirely alternative approaches to this kind of reasoning, e.g., approaches which are not based on proof theory.

To give an example of the latter, we refer to [2], where the authors use a statistical learning approach based on support vector machines. They train their machine to infer class membership in ALC, which is a description logic related to OWL, and achieve 90% coverage. Note that this is done without any proof theory, other than to obtain the training examples. In effect, their system learns to reason with high coverage without performing logical deduction in the proof-theoretic sense. For a statistical framework for evaluation against the gold standard we refer again to [23].

2.4 Computational Complexity and Decidability

Considerations on computational complexity and decidability have driven research around description logics, which underlie OWL, from the beginning. At the same time, there are more and more critical voices concerning the fixation of that research on these issues, since it is not quite clear how practical systems would benefit from them. Indeed, theoretical (worst-case) computational complexity is hardly a relevant measure for the performance of real systems. At the same time, decidability is only guaranteed assuming bug-free implementations – which is an unrealistic assumption – and given enough resources – which is also unrealistic, since the underlying algorithms often require exponential time in the worst case.

The misconception underlying these objections is that computational complexity and decidability are not practical measures which have a direct meaning in application contexts. They are rather a priori measures for language and algorithm development, and the recent history of OWL language development indicates that these a priori measures have indeed done a good job. It is obviously better to have such theoretical means for the conceptual work in creating language features than to have no measures at all. And indeed this has worked out well, since, e.g., reasoning systems on realistic OWL knowledge bases currently seem to behave rather well despite the high worst-case computational complexity.

Taking the perspective of approximate reasoning algorithms as laid out earlier, it is actually a decisively positive feature of Semantic Web knowledge representation languages that systems exist which can serve as a gold standard reference. Considering the difficulties other disciplines (like Information Retrieval) face in creating gold standards, we are in fact delivered the gold standard on a silver plate. We can use this to our advantage.

3 Diverse Knowledge Representation Issues

Within 50 years of KR research, many issues related to the representation of non-classical knowledge have been investigated. Many of the research results obtained in this realm are currently being carried over to ontology languages around OWL, including abductive reasoning, uncertainty handling, inconsistency handling and paraconsistent reasoning, closed world reasoning and non-monotonicity, belief revision, etc. However, all these approaches face the same problems that OWL reasoning faces, foremost scalability and dealing with realistic noisy data.⁷ Indeed, under most of these approaches, runtime performance becomes worse, since the reasoning problems generally become harder in terms of computational complexity.

Nevertheless, research on the logical foundations of these knowledge representation issues, as currently being carried out, is needed to establish the gold standard. At this time there is a certain neglect, however, in combining several paradigms; e.g., it is quite unclear how to marry paraconsistent reasoning with uncertainty handling.

Research into enhancing the expressivity of ontology languages can roughly be divided into the following.

– Classical logic features: This line of research follows the well-trodden path of extending e.g. OWL with further expressive features, while attempting to retain decidability and in some cases low computational complexity. Some concrete suggestions for next steps in this direction are given in the appendix.
– Extralogical features: These include datatypes and additional data structures, like e.g. Description Graphs [19].
– Supraclassical logic: Logical features related to commonsense reasoning like abduction and explanations (e.g., [8]), paraconsistency (e.g., [15]), belief revision, closed-world reasoning (e.g., [4]), uncertainty handling (e.g., [10,14]), etc.

There is hardly any work investigating approximate reasoning solutions for supraclassical logics. Investigations into these issues should first establish the gold standard following sound logical principles, including computational complexity issues. Only then should extensions towards approximate reasoning be done.

⁷ I'm personally critical about fuzzy logic and probabilistic logic approaches in practice for Semantic Web issues. Dealing with noisy data on the web does not seem to easily fall in the fuzzy or probabilistic category. So probably new ideas are needed for these.

4 Bootstrapping Reasoning

How do we get from A (today) to B (reasoning that works on the Semantic Web)? I believe that a promising approach lies in bootstrapping existing applications which use little or no reasoning, based e.g. on RDF. The idea is to enhance these applications very carefully with a bit more reasoning, in order to clearly understand the added value and the difficulties one faces when doing this. A (very) generic workflow for the bootstrapping may be as follows (a minimal code sketch follows the list).

1. Identify an (RDF) application where enhanced reasoning would bring added value.
2. Identify ontology language constructs which would be needed for expressing the knowledge needed for the added value.
3. Identify an ontology language (an OWL profile or an OWL+Rules hybrid) which covers these additional language constructs.
4. Find a suitable reasoner for the enhanced language.
5. Enhance the knowledge base and plug the software components together.

The point of these exercises is not only to show that more reasoning capabilities bring added value, but also to identify obstacles in the bootstrapping process.
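A minimal sketch of steps 1-5, assuming the Python libraries rdflib and owlrl are available (the vocabulary and data are hypothetical): plain RDF data is enhanced with a single schema axiom, and an OWL 2 RL reasoner materializes the added value.

```python
from rdflib import Graph, Namespace, RDF, RDFS
import owlrl

EX = Namespace("http://example.org/")                  # hypothetical vocabulary

g = Graph()
g.add((EX.luigis, RDF.type, EX.Pizzeria))              # step 1: existing RDF data
g.add((EX.Pizzeria, RDFS.subClassOf, EX.Restaurant))   # steps 2-3: one added schema axiom

# Steps 4-5: run an OWL 2 RL reasoner over the enhanced knowledge base.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The added value: an implicit fact is now queryable.
print((EX.luigis, RDF.type, EX.Restaurant) in g)       # True
```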

5 Overcoming the Ontology Acquisition Bottleneck

The ontology acquisition bottleneck for logically expressive ontologies comes partly from the fact that sound and complete reasoning algorithms work only on carefully devised ontologies, which in many cases require an ontology expert to develop. Creating such high-quality ontologies is very costly. A partial solution to this problem is related to (1) noise handling and (2) the bootstrapping idea.

With the current fixation on sound and complete reasoning, it cannot be expected that usable ontologies (in the sound and complete sense) will appear in large quantities, e.g., on the web. However, it is conceivable that, e.g., Linked Open Data⁸ (LoD) could be augmented with more expressive schema data to allow, e.g., for reasoning-based semantic search. The resulting extended LoD cloud would still be noisy and not readily usable with sound and complete approaches, so reasoning approaches which can handle noise are needed. This is also in line with the bootstrapping idea: we already have a lot of metadata available, and in order to proceed we need to make efforts to enhance this data, and to find robust reasoning techniques which can deal with this real-world noisy data.

⁸ http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/

6 Human Interfacing

Classical ontology engineering often has the appearance of expert system creation, if used off the web. On the web, it often lacks the reasoning component. As argued in this paper, in order to advance the Semantic Web vision – on and off the web – we need to find ways to reason with noisy and incomplete data in a realistic and pragmatic manner. This necessity is not reflected by current ontology engineering research.

8 http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData/


In order to advance towards ontology reasoning applications, we need ontology engineering systems which

– support reasoning bootstrapping,
– include multiple reasoning support (i.e. multiple reasoning algorithms, classical and non-classical), and
– are made to cope with noisy and uncertain data.

We need to get away from thinking about ontology creation as coding in the programming sense. This can only be achieved by relaxing the reasoning algorithm requirements, i.e. by realising reasoning systems which can cope with noisy and uncertain data.

7 Putting It All Together

In this paper I argue for roads to realising reasoning on and for the Semantic Web. Efforts on several fronts are put forth:

– The excellent research results and ongoing efforts in establishing sound and complete proof-theory-based reasoning systems need to be complemented by investigations into alternative reasoning approaches, which are not necessarily based on proof theory, and can handle noisy and uncertain data.
– Reasoning bootstrapping should be investigated seriously and on a broad front, in order to clearly show added value, and in order to identify challenges in adopting ontology reasoning in applications.
– Ontology engineering environments should systematically accommodate reasoning bootstrapping and the support of multiple reasoning paradigms, classical and non-classical.

Let us recall a main point of this paper: reasoning algorithms do not have to be based on proof theory. But they have to perform well when compared with the gold standard. In a sense, the lines of research laid out here lead us a bit further away from knowledge representation (KR), and at the same time they take a small step towards non-KR-based intelligent systems: not all the intelligence must be in the knowledge base (with corresponding sound and complete reasoning algorithms). We must facilitate intelligent solutions, including machine learning, data mining, and statistical inductive methods, to achieve the Semantic Web vision.

Acknowledgement. Many thanks to Cory Henson, Prateek Jain, and Valentin Zacharias for feedback and discussions. The appendix is taken from [5].


References

1. Boley, H., Kifer, M. (eds.): RIF Framework for Logic Dialects. W3C Working Draft, July 30 (2008), http://www.w3.org/TR/rif-fld/
2. Fanizzi, N., d’Amato, C., Esposito, F.: Statistical learning for inductive query answering on OWL ontologies. In: Sheth, A.P., Staab, S., Dean, M., Paolucci, M., Maynard, D., Finin, T., Thirunarayan, K. (eds.) ISWC 2008. LNCS, vol. 5318, pp. 195–212. Springer, Heidelberg (2008)
3. Fensel, D., van Harmelen, F.: Unifying reasoning and search to web scale. IEEE Internet Computing 11(2), 96, 94–95 (2007)
4. Grimm, S., Hitzler, P.: A preferential tableaux calculus for circumscriptive ALCO. In: Proceedings of the Third International Conference on Web Reasoning and Rule Systems, Washington D.C., USA. LNCS. Springer, Heidelberg (to appear, 2009)
5. Hitzler, P.: Suggestions for OWL 3. In: Proceedings of OWL – Experiences and Directions, Sixth International Workshop, Washington D.C., USA (October 2009) (to appear)
6. Hitzler, P., Krötzsch, M., Parsia, B., Patel-Schneider, P.F., Rudolph, S. (eds.): OWL 2 Web Ontology Language: Primer. W3C Proposed Recommendation, September 22 (2009), http://www.w3.org/TR/2009/PR-owl2-primer-20090922/
7. Hitzler, P., Krötzsch, M., Rudolph, S.: Foundations of Semantic Web Technologies. Chapman & Hall/CRC (2009)
8. Horridge, M., Parsia, B., Sattler, U.: Laconic and precise justifications in OWL. In: Sheth, A.P., Staab, S., Dean, M., Paolucci, M., Maynard, D., Finin, T., Thirunarayan, K. (eds.) ISWC 2008. LNCS, vol. 5318, pp. 323–338. Springer, Heidelberg (2008)
9. Horrocks, I., Patel-Schneider, P.F., Boley, H., Tabet, S., Grosof, B., Dean, M.: SWRL: A Semantic Web Rule Language. W3C Member Submission, May 21 (2004), http://www.w3.org/Submission/SWRL/
10. Klinov, P., Parsia, B.: Optimization and evaluation of reasoning in probabilistic description logic: Towards a systematic approach. In: Sheth, A.P., Staab, S., Dean, M., Paolucci, M., Maynard, D., Finin, T., Thirunarayan, K. (eds.) ISWC 2008. LNCS, vol. 5318, pp. 213–228. Springer, Heidelberg (2008)
11. Krötzsch, M., Rudolph, S., Hitzler, P.: Description logic rules. In: Ghallab, M., Spyropoulos, C.D., Fakotakis, N., Avouris, N. (eds.) Proceedings of the 18th European Conference on Artificial Intelligence, ECAI 2008, pp. 80–84. IOS Press, Amsterdam (2008)
12. Krötzsch, M., Rudolph, S., Hitzler, P.: ELP: Tractable rules for OWL 2. In: Sheth, A.P., Staab, S., Dean, M., Paolucci, M., Maynard, D., Finin, T., Thirunarayan, K. (eds.) ISWC 2008. LNCS, vol. 5318, pp. 649–664. Springer, Heidelberg (2008)
13. Lloyd, J.W.: Foundations of Logic Programming. Springer, Heidelberg (1987)
14. Lukasiewicz, T., Straccia, U.: Managing uncertainty and vagueness in description logics for the semantic web. Journal on Web Semantics 6(4), 291–308 (2008)
15. Ma, Y., Hitzler, P.: Paraconsistent reasoning for OWL 2. In: Proceedings of the Third International Conference on Web Reasoning and Rule Systems, Washington D.C., USA, October 2009. LNCS. Springer, Heidelberg (to appear, 2009)
16. Marriott, K., Sondergaard, H.: On Prolog and the occur check problem. SIGPLAN Not. 24(5), 76–82 (1989)
17. McGuinness, D.L., van Harmelen, F. (eds.): OWL Web Ontology Language Overview. W3C Recommendation, February 10 (2004), http://www.w3.org/TR/owl-features/


18. Motik, B., Cuenca Grau, B., Horrocks, I., Wu, Z., Fokoue, A., Lutz, C. (eds.): OWL 2 Web Ontology Language: Profiles. W3C Proposed Recommendation, September 22 (2009), http://www.w3.org/TR/2009/PR-owl2-profiles-20090922/
19. Motik, B., Cuenca Grau, B., Horrocks, I., Sattler, U.: Representing Ontologies Using Description Logics, Description Graphs, and Rules. Artificial Intelligence 173(14), 1275–1309 (2009)
20. Motik, B., Patel-Schneider, P.F., Parsia, B. (eds.): OWL 2 Web Ontology Language: Structural Specification and Functional-Style Syntax. W3C Candidate Recommendation, September 22 (2009), http://www.w3.org/TR/2009/PR-owl2-syntax-20090922/
21. Motik, B., Sattler, U., Studer, R.: Query-answering for OWL-DL with rules. Journal of Web Semantics 3(1), 41–60 (2005)
22. Rudolph, S., Krötzsch, M., Hitzler, P.: Cheap Boolean role constructors for description logics. In: Hölldobler, S., Lutz, C., Wansing, H. (eds.) JELIA 2008. LNCS (LNAI), vol. 5293, pp. 362–374. Springer, Heidelberg (2008)
23. Rudolph, S., Tserendorj, T., Hitzler, P.: What is approximate reasoning? In: Calvanese, D., Lausen, G. (eds.) RR 2008. LNCS, vol. 5341, pp. 150–164. Springer, Heidelberg (2008)
24. Tserendorj, T., Rudolph, S., Krötzsch, M., Hitzler, P.: Approximate OWL-reasoning with Screech. In: Calvanese, D., Lausen, G. (eds.) RR 2008. LNCS, vol. 5341, pp. 165–180. Springer, Heidelberg (2008)
25. W3C OWL Working Group: OWL 2 Web Ontology Language: Document Overview. W3C Working Draft, September 22 (2009), http://www.w3.org/TR/2009/PR-owl2-overview-20090922/


A Appendix: Suggestions for OWL 3

Abstract. With OWL 2 about to be completed, it is the right time to start discussions on possible future modifications of OWL. We present here a number of suggestions in order to discuss them with the OWL user community. They encompass expressive extensions of the polynomial OWL 2 profiles, a suggestion for an OWL Rules language, and expressive extensions for OWL DL.

A.1 Introduction

The OWL community has grown with breathtaking speed in the last couple of years. The improvements coming from the transition from OWL 1 [17] to OWL 2 [25] are an important contribution to keeping the language alive and in synch with the users. While the standardization process for OWL 2 is currently coming to a successful conclusion, it is important that the development process does not stop, and that discussions on how to improve the language continue.

In this appendix, we present a number of suggestions for improvements to OWL DL,9 which are based on some recent work. We consider it important that such further development is done in alignment with the design principles of OWL, and in particular with the description logic perspective which has governed its creation. Indeed, this heritage has been respected in the development of OWL 2, and is bringing it to a fruitful conclusion. There is no apparent reason for straying from this path. In particular, the following general rationales should be adhered to, as has happened for OWL 1 and OWL 2.

– Decidability of OWL DL should be retained.
– OWL DL semantics should be based on a first-order predicate logic semantics (and as such should, in particular, be essentially open-world and monotonic).
– Analysis of computational complexities shall govern the selection of language features in OWL DL.

Obviously, there are other important issues, like basic compatibility with RDF, having an XML-based syntax, backward-compatibility, etc., but we take these for granted and do not consider them to be major obstacles as long as future extensions of OWL are developed along the inherited lines of thinking. The suggestions which we present below indeed adhere to the design rationales just laid out. They concern different aspects of the language, and are basically independent of each other, i.e. they can be discussed separately. At the same time, however, they are also closely related and compatible, so that it is reasonable to discuss them together.

9 OWL DL has always played a special role in defining OWL – it is the basis from which OWL Full and other variants, like OWL Lite or the OWL 2 profiles, are developed. So in this appendix we focus on OWL DL.


In Section A.2, we suggest a rule-based syntax for OWL. The syntax is actually of a hybrid nature, and allows e.g. class descriptions inside the rules. Nevertheless, it captures OWL with a syntax which is essentially a rule syntax. In Section A.3, we suggest the addition of Boolean role expressions to the arsenal of language constructs available in OWL. We also explain which cautionary measures need to be taken when this is done, in order to not lose decidability and complexity properties. In Section A.4, we suggest considerably extending OWL by including the DL-safe variable fragment of SWRL [9] together with the DL-safe fragment [21] of SWRL. In Section A.5, we propose a tractable profile, called ELP, which encompasses OWL 2 EL, OWL 2 RL, most of OWL 2 QL, and some expressive means which are not contained in OWL 2. It is currently the most expressive polynomial language which extends OWL 2 EL and OWL 2 RL, and is still relatively easy to implement. In Section A.6, we conclude.

Obviously, we do not have the space to define all these extensions in detail, or to discuss all aspects of them exhaustively. We thus strive to convey the main ideas and intuitions, and refer to the indicated literature for details. In the definitions and discussions, we will sometimes drop details, or remain a bit vague (and thus compromise the completeness of our exhibition), in order to be better able to focus on the main arguments. We believe that this serves the discussion better than being entirely rigorous on the formal aspects.

A.2 An OWL Rules Language

The alignment of rule languages with OWL (and vice versa) has been a much (and sometimes heatedly) discussed topic. The OWL paradigm is quite different in underlying intuition, modelling style, and expressivity from standard rule language paradigms. Recent efforts involving OWL and rules attempt to merge the paradigms in order to get the best of both worlds. The advance from OWL 1 to OWL 2 indeed brings the two paradigms closer together. In particular, a considerable variety of rules, understood as Datalog rules with unary and binary predicates under a first-order predicate logic semantics, can be translated with some effort directly into OWL 2 DL. This observation paves the way for a rule-based syntax for OWL, which we will briefly present below. The suggestions in this section are based on [11].

Given any description logic D, a D-rule is a rule of the form

A1 ∧ ··· ∧ An → A,

where A and the Ai are expressions of the form C(x) or R(x, y), where the C are (possibly non-atomic) concept expressions over D, the R are role names (or role expressions if allowed in D), and x, y are either variables or individual names (y may also be a datatype value if this is allowed in D), and the following conditions are satisfied.

– The pattern of variables in the rule body forms a tree. This is to be understood in the sense that whenever there is an expression R(x, y) with a role R and two variables x, y in the rule body, then there is a directed edge from x to y – hence each body gives rise to a directed graph, and the condition states that this graph must be a tree. Note that individuals are not taken into account when constructing the graph.10 Note also that the rule body must form a single tree.
– The first argument of A is the root of the just mentioned tree.

Semantically, SROIQ-rules come with the straightforward meaning under a first-order predicate logic reading, i.e., the implication arrow is read as first-order implication, and the free variables are considered to be universally quantified. A D-Rules knowledge base consists of a (finite) set of D-rules,11 which satisfies additional constraints, which depend on D. These constraints guarantee that certain properties of D, e.g., decidability, are preserved. For OWL 2 DL, these additional constraints specify regularity conditions and restrictions on the use of non-simple roles, similarly to SROIQ(D) – we omit the details. Examples for SROIQ-rules are given in Figure 1.

Man(x) ∧ hasBrother(x, y) ∧ hasChild(y, z) → Uncle(x)
ThaiCurry(x) → ∃contains.FishProduct(x)
kills(x, x) → PersonCommittingSuicide(x)
PersonCommittingSuicide(x) → kills(x, x)
NutAllergic(x) ∧ NutProduct(y) → dislikes(x, y)
dislikes(x, z) ∧ Dish(y) ∧ contains−(z, y) → dislikes(x, y)
worksAt(x, y) ∧ University(y) ∧ supervises(x, z) ∧ PhDStudent(z) → professorOf(x, z)
Mouse(x) ∧ ∃hasNose.TrunkLike(y) → smallerThan(x, y)

Fig. 1. A SROIQ-Rules knowledge base

The beauty of SROIQ-rules lies in the fact that any SROIQ-Rules knowledge base can be transformed into a SROIQ knowledge base – and that the transformation algorithm is polynomial. This means that SROIQ-rules are nothing more or less than a sophisticated kind of syntactic sugar for SROIQ. It is easy to see that, in fact, any SROIQ-axiom can also be written as a SROIQ-rule, so that modelling in SROIQ can be done entirely within the SROIQ-Rules paradigm.

In order to be a useful language, it is certainly important to develop convenient web-enabled syntaxes. Such a syntax could be based on the Rule Interchange Format (RIF) [1], for example, which is currently in the final stages of becoming a W3C Recommendation. A SROIQ-Rules syntax could also be defined as a straightforward extension of the OWL 2 Functional Style Syntax [20].
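As a worked illustration (our addition, based on the first rule of Figure 1): in Man(x) ∧ hasBrother(x, y) ∧ hasChild(y, z) → Uncle(x), the role atoms yield the edges x → y and y → z, so the body graph is a tree rooted at x, and x is indeed the first argument of the head atom Uncle(x); the rule is therefore a valid SROIQ-rule. Adding an atom such as knows(z, x) to the body would close the cycle x → y → z → x and violate tree-shapedness (unless the offending variable is treated as DL-safe, as discussed in Section A.4).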

10 The exact definition is a bit more complicated; see [11].
11 Notice the difference in spelling: uppercase vs. lowercase.


Proposal: OWL 3 should have a rule-based syntax based on Description Logic Rules.

A.3 Boolean Role Constructors

Boolean role constructors, i.e., conjunction, disjunction, and negation for roles, can be added to description logics around OWL under certain restrictions, without compromising language complexity. Since they provide additional modelling features which are clearly useful in the right circumstances (see Figure 2), there is no strong reason why they shouldn’t be added to OWL. The following summarizes results from [22].

∃(testifiesAgainst ⊓ relativeOf).⊤ ⊑ ¬UnderOath
hasParent ⊑ hasFather ⊔ hasMother
hasDaughter ⊑ hasChild ⊓ ¬hasSon

Fig. 2. Examples for Boolean role constructors

All Boolean role constructors can be added to SROIQ without compromising its computational complexity, as long as the constructors involve only simple roles – the resulting description logic is denoted by SROIQBs. Likewise, OWL 2 EL can be extended with role conjunction without losing the polynomial complexity of the language. Regularity requirements coming from SROIQ can be dropped (they are also not needed for polynomiality of the description logic EL++, which is well known). Likewise, the extension of OWL 2 RL with role conjunctions is still polynomial.

While the complexity results just given are favorable, it has to be noted that suitable algorithms for reasoning with SROIQBs still need to be developed. Algorithms for the respective extensions of OWL 2 EL and OWL 2 RL, however, can easily be obtained by adjusting known algorithms for these languages – see also Section A.5.
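For convenience, the standard model-theoretic reading of these constructors (assumed but not restated in the text above) interprets them set-theoretically on role extensions: for an interpretation I with domain Δ^I,

(R ⊓ S)^I = R^I ∩ S^I,   (R ⊔ S)^I = R^I ∪ S^I,   (¬R)^I = (Δ^I × Δ^I) \ R^I.

Under this reading, the axiom hasDaughter ⊑ hasChild ⊓ ¬hasSon of Figure 2 says that every hasDaughter pair of individuals is a hasChild pair but not a hasSon pair.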

Proposal: OWL 3 should allow the use of Boolean role constructors wherever appropriate.

A.4 DL-Safe Variable SWRL

SWRL [9] is a very natural extension of description logics with first-order predicate logic rules. Despite being a W3C Member Submission rather than a W3C Recommendation, it has achieved an extremely high visibility. However, in its original form SWRL is undecidable, i.e., it does not closely follow the design guidelines we have listed in the introduction.

A remedy for the decidability issue is the restriction of SWRL rules to so-called DL-safe rules [21]. Syntactically, DL-safe rules are rules of the form

A1 ∧ ··· ∧ An → A


as in Section A.2, but without the requirements on tree-shapedness. Semantically, however, they are read as first-order predicate logic rules, but with the restriction that variables in the rules may bind only to individuals which are present in the knowledge base.12 In essence, this limits the usability of DL-safe SWRL to applications which do not involve TBox reasoning.

It is now possible to generalize DL-safe SWRL without compromising decidability. The underlying idea has been spelled out in a more limited setting in [12] (see also Section A.5), but it obviously carries over to SROIQ. In order to understand the generalization, we need to return to SROIQ-rules as defined in Section A.2. Recall that the tree-shapedness of the rule bodies is essential, but that role expressions involving individuals are ignored when checking for the tree structure. The idea behind DL-safe variable SWRL is now to identify those variables in rule bodies which violate the required tree structure, and to define the semantics of the rules such that these variables may only bind to individuals present in the knowledge base – these variables are called DL-safe variables. The other variables are interpreted as usual under the first-order predicate logic semantics. An alternative way to describe the same thing is to say that a rule qualifies as DL-safe variable SWRL if replacing all DL-safe variables in the rule by individuals results in an allowed SROIQ-rule.

As an example, consider the rule

C(x) ∧ R(x, w) ∧ S(x, y) ∧ D(y) ∧ T(y, w) → V(x, y),

which violates the requirement of tree-shapedness because there are two different paths from x to w. Now, if we replace w by an individual, say o, then the resulting rule

C(x) ∧ R(x, o) ∧ S(x, y) ∧ D(y) ∧ T(y, o) → V(x, y)

is a SROIQ-rule.13 Hence, the rule

C(x) ∧ R(x, ws) ∧ S(x, y) ∧ D(y) ∧ T(y, ws) → V(x, y),

12 The original definition is different, but equivalent. It required that each variable occurs in an atom in the rule body which is not an atom of the underlying description logic knowledge base. The usual way to achieve this is by introducing an auxiliary class O which contains all known individuals, and adding O(x) to each rule body, for each variable in the rule. Our definition instead employs a redefinition of the semantics, which appears to be more natural in this case. Essentially, the two formulations are equivalent.
13 This rule can be expressed in SROIQ by the knowledge base consisting of the three statements C ⊓ ∃R.{o} ⊑ ∃R1.Self, D ⊓ ∃T.{o} ⊑ ∃R2.Self, and R1 ◦ S ◦ R2 ⊑ V. See [11].


where ws is a DL-safe variable, is a DL-safe variable SWRL rule. Note that the other variables can still bind to elements whose existence is guaranteed by the knowledge base but which are not present in the knowledge base as individuals, which would not be possible if the rule were interpreted as DL-safe. In principle, naive implementations of this language could work with multiple instantiations of rules containing DL-safe variables, but no implementations exist yet. In principle, they should not be much more difficult to deal with than DL-safe SWRL rules.
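To illustrate the difference with a hypothetical example (not from the original text): consider a knowledge base containing ThaiCurry(x) → ∃contains.FishProduct(x) and ThaiCurry(c), together with the trivially tree-shaped rule FishProduct(y) → SeafoodBased(y). Read as a SROIQ-rule (equivalent to the axiom FishProduct ⊑ SeafoodBased), the rule also applies to the anonymous fish product contained in c. Read as a DL-safe rule, y may bind only to named individuals, so nothing is derived about that anonymous element. DL-safe variable SWRL restricts only the tree-violating variables in this way and keeps the first-order reading for all others.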

Proposal: OWL 3 DL should incorporate DL-safe SWRL and DL-safe variable SWRL.

A.5 Pushing the Tractable Profiles

The OWL 2 Profiles document [18] describes three designated profiles of OWL 2, known as OWL 2 EL, OWL 2 RL, and OWL 2 QL. These three languages have been designed with different design principles in mind. They correspond to different description logics, have different expressive features, and can be implemented using different methods. The three profiles have in common that they are all of polynomial complexity, i.e., they are rather inexpressive languages, despite the fact that they have already found applications. While having three polynomial profiles is fine due to their fundamental differences, the question about maximal expressivity while staying in polynomial time naturally comes into view.

The ELP language [12] is a language with polynomial complexity which properly contains both OWL EL and OWL RL. It also contains most of OWL QL.14 Furthermore, it still features rather simple algorithms for reasoning implementations. More precisely, ELP has the following language features.

– It contains OWL 2 EL Rules, i.e. EL++-rules as defined in Section A.2.15 Note that EL++-rules cannot be converted to EL++ (i.e. OWL 2 EL) using the algorithm which converts SROIQ-rules to SROIQ.
– It allows role conjunctions for simple roles.
– It allows the use of DL-safe variable SWRL rules, in the sense that replacement of the safe variables by individuals in a rule must result in a valid EL++-rule.
– General DL-safe Datalog rules are allowed.16

The last point – allowing general DL-safe Datalog rules – is a bit tricky. As stated, it destroys polynomial complexity. However, if there is a global bound on the number of variables allowed in Datalog rules, then polynomiality is retained. Obviously, one would not want to enforce such a global bound; nevertheless the result indicates that a careful and limited use of DL-safe Datalog rules in conjunction with a polynomial description logic should not in general have a major impact on reasoning efficiency.

14 Role inverses cannot be expressed in ELP.
15 EL++-rules are D-rules with D = EL++.
16 One could also simply allow DL-safe SWRL rules.

Since ELP is fundamentally based on EL++-rules, it features rules-style modelling in the sense in which SROIQ-rules provide a rules modelling paradigm for SROIQ. An example knowledge base can be found in Figure 3.

NutAllergic(x) ∧ NutProduct(y) → dislikes(x, y)
Vegetarian(x) ∧ FishProduct(y) → dislikes(x, y)
orderedDish(x, y) ∧ dislikes(x, y) → Unhappy(x)
dislikes(x, vs) ∧ Dish(y) ∧ contains(y, vs) → dislikes(x, y)
orderedDish(x, y) → Dish(y)
ThaiCurry(x) → contains(x, peanutOil)
ThaiCurry(x) → ∃contains.FishProduct(x)
→ NutProduct(peanutOil)
→ NutAllergic(sebastian)
→ ∃orderedDish.ThaiCurry(sebastian)
→ Vegetarian(markus)
→ ∃orderedDish.ThaiCurry(markus)

Fig. 3. A simple example ELP rule base about food preferences. The variable vs is assumed to be DL-safe.

As for implementability, reasoning in ELP can be done by means of a polynomial-time reduction to Datalog, using standard Datalog reasoners. Note that TBox reasoning can be emulated even if the Datalog reasoner has no native support for this type of reasoning. The corresponding algorithm is given in [12]. An implementation is currently under way.
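To give a flavour of how DL-safe variables surface in such a reduction (a hedged sketch of ours, not the actual algorithm of [12]): a DL-safe variable can be guarded by an auxiliary predicate O enumerating exactly the named individuals, in the spirit of the auxiliary class mentioned in footnote 12. The fourth rule of Figure 3 would then roughly become the Datalog rule

dislikes(x, y) :- dislikes(x, vs), Dish(y), contains(y, vs), O(vs).

together with one fact O(a) for each individual a occurring in the rule base, so that vs ranges over named individuals only, while the remaining variables are handled by the (more involved) translation of the EL++ part.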

Proposal: OWL 3 should feature a designated polynomial profile which is as large as possible, based on ELP.

A.6 Conclusions

Following the basic design principles for OWL, we made four suggestions for OWL 3.

– OWL 3 should have a rule-based syntax based on Description Logic Rules.
– OWL 3 should allow the use of Boolean role constructors.
– OWL 3 should incorporate DL-safe SWRL and DL-safe variable SWRL.
– OWL 3 should feature a designated polynomial profile which is as large as possible, based on ELP.

We are aware that these are only first suggestions, and that a few open points remain to be addressed in research. We hope that our suggestions stimulate discussion which will in the end lead to a favorable balance between application needs and language development from first principles.

A Functional Ontology of Observation and Measurement

Werner Kuhn

Institute for Geoinformatics, University of Muenster
Weselerstr. 253, 48151 Münster, Germany
[email protected]

Abstract. An ontology of observation and measurement is proposed, which models the relevant information processes independently of sensor technology. It is kept at a sufficiently general level to be widely applicable as well as compatible with a broad range of existing and evolving sensor and measurement standards. Its primary purpose is to serve as an extensible backbone for standards in the emerging semantic sensor web. It also provides a foundation for semantic reference systems by grounding the semantics of observations, as generators of data. In its current state, it does not yet deal with resolution and uncertainty, nor does it specify the notion of a semantic datum formally, but it establishes the ontological basis for these as well as other extensions.

Keywords: observation, measurement, ontology, semantics, sensors.

1 Introduction

Given that observation is the root of information, it is surprising how little we understand its ontology. Measurement theory, the body of literature on the mathematics of measurements, is only representational, treating questions of how to represent observed phenomena by symbols and how to manipulate these. Ontological questions like „what can be observed“ or „how do observations relate to reality?“ are not answered by it. Consequently, the semantics of information in general, and of observations in particular, rests on shaky ground.

With sensor observations becoming ubiquitous and major societal decisions (concerning, for example, climate, security, or health) being taken based on them, an improved understanding of observations as information has become imperative. Answering some of the deepest and most pressing questions in geographic information science, such as how to model and monitor change, also requires progress in this direction. Furthermore, issues of scale, quality, trust, and reputation are all intimately linked to observation processes.

In response to these needs, this paper proposes a first cut at an ontology of observation. The ontology specifies observation as a process, not only as a result, and treats it as an information item with semantics that are independent of observation technology. The goal of this work is to understand the information processes involved in observations, not the details of physical, psychological, or technological processes.


The ontology is kept at a sufficiently general level to be widely applicable as well as compatible with a broad range of existing methods and standards. Its primary purpose is to serve as a backbone for a seamless semantic sensor web. It also provides a foundation for semantic reference systems, systematizing the semantics of observations, which are the basic elements of spatial information. More generally, it serves as an ontological account of data production.

Observations link information to reality and provide the building blocks of conceptualizations. As such they ground communication and relate data to the world and its observers. The paper shows that they hold together the four top-level branches in the foundational DOLCE ontology [1]: they are afforded by changes in the environment (stimuli), which involve endurants and perdurants, and their results consist of abstract symbols, which stand for qualities inhering in these endurants and perdurants. Studying the ontology of observation is, thus, also likely to clarify fundamental categories in ontology as well as their mutual interaction.

The proposed ontology is presented in the form of a simulation of observation processes, providing a testable model as an additional benefit. To achieve this, it is written in the typed functional language Haskell [2], providing the expressiveness required to capture and distinguish concepts like endurants, perdurants, qualities, qualia, stimuli, signals, and values. The ontology is built and tested by specifying algebraic theories and models for these concepts. Its development combines a bottom-up strategy, working from actual use cases, with a top-down structure, taken from DOLCE. It does not force the expressive limitations of semantic web languages and reasoners on the modeling and representation of observations, though it allows for subsequent translation into these representations.

The technological focus of today’s sensor standards, putting encoding before modeling, motivates an effort to „lift sensors from their platforms“, so to speak. The proposed ontology generalizes from technical and human sensors [3] to the role of an observer. It models observation as an action afforded to observers by their environment. People, devices, sensor systems and sensor networks can then all realize this affordance, leading to a vast generalization of sensing behavior, which simplifies software architectures [4, 5].

The paper reviews previous work relevant to the ontology of observation (section 2), states the ontological commitments taken and their implications (section 3), presents the core concepts of observation (section 4) followed by a series of examples (section 5), and walks the reader through the observation ontology and its formalization (section 6), before ending with conclusions and an outlook (section 7).

2 Previous Work

Observation and measurement processes have received attention from several perspectives, including physics, mathematics and statistics, ontology, and information sciences. They also have been the subjects of recent and ongoing standardization efforts. Yet, the ontology of observation processes remains surprisingly underdeveloped. The focus is typically on endurants (in particular, physical objects) rather than on perdurants and qualities, and we lack an understanding of observation in general, as opposed to specific kinds of observations in various application areas.


This section reviews some results from science and engineering that are relevant to a more general observation ontology. The broader topic of the ontology of qualities is beyond the scope of this paper and will only be touched upon where it is necessary to understand the ontology of observation. The narrower problem of the ontology of measurement units is factored out, as it represents the focus of parallel research and standardization activities. The question whether there are observations that are ontologically more basic than others is also not addressed here.

2.1 Physics

Physics (and metaphysics) has asked questions about the nature of observation all the way back to (at least) Aristotle. Relevant core ontological distinctions have resulted from these analyses. Modern physics has revealed challenges (like the interaction of the observer with the observed and the limitations to precise knowledge) that affect all theorizing and some of the practice of observing.

In his insightful book Physics as Metaphor, Roger S. Jones [6] pointed out that consciousness plays a much larger role in constructing physical reality than commonly accepted and that the separability of mind and matter is untenable even from the point of view of measurement. He argued that "the celebrated ability to quantify the world is no guarantee of objectivity and that measurement itself is a value judgment created by the human mind." Basic measurements like that of length are not well defined, according to Jones, but have a built-in unspecifiable uncertainty, in the case of length due to the circularity of demarcating what is being measured using length itself (p. 22). While one can take different stances on such philosophical underpinnings of observation and measurement, it is clear that only a careful ontological specification will make these stances explicit and testable.

2.2 Mathematics and Statistics

Mathematics and statistics have addressed measurement from a representational point of view: what properties do symbols need to have to represent observations, and how can they be manipulated to reveal statistical properties? [7]. This led to the well-known measurement scales, i.e. classifications of measurement variables according to their algebraic properties. Statistical views of measurement are largely orthogonal to ontological perspectives. Measurement theory does ask ontological questions about measurement scales (regarding, for example, the presence of order relations or of an absolute zero), but the answers to these questions are derived from analytical needs rather than ontological analyses of observed phenomena. Thus, it remains up to a data analyst, for example, to assign the scale levels requested by statistics or visualization programs.

Stevens’ measurement scales, with their underdeveloped link to ontology, sometimes get replaced by representational distinctions, such as continuous, discrete, categorical, and narrative attributes [8]. While this is useful for assigning probability distributions, it loosens the connection to what is being measured. As a consequence, the assumptions made in statistics about observed variables are sometimes at odds with ontological analysis. This is not only the case for the assigned measurement scales, but, more fundamentally, for the claims that a measurement variable (like the temperature in a room) has a „true“ value. Since the „truth“ of this value is defined by the observation


procedure (involving sampling processes, spatial and temporal resolutions, interference with the measured phenomenon, etc.), any procedure has its own „truth“. Presupposing a true measurement scale or a true value of a measurement variable begs the ontological questions about what is being measured and how (see also [9]). When observations from different sources, holding different „truths“, are combined, such discrepancies show up at best (and then need to be accounted for post hoc, often without the necessary information) or they get absorbed in biased statistical measures (by inadvertently mixing multiple sampling processes).

2.3 Ontology

A thorough ontological analysis of measurements has been proposed in [10]. It provides the first explicit specification of measurement quantities and engineering models in the form of a formalized ontology. Its focus is on the core distinctions of scalar, vector, and tensor quantities, physical dimensions, units of measure, functions of quantities, and dimensionless quantities. As such, the ontology isolates the measured quantities (expressed by symbols and units) from the measurement process and from the bearers of the measured qualities. This decoupling of concerns is entirely valid and supports a combination with ontologies of measured entities and measurement processes.

Again from an engineering perspective, [11] has proposed a taxonomic approach to sensor science. Their scope includes the physics and technology of measurement, and their taxonomy can be seen as a weak form of an ontology, coming close to the information-centric view of sensing and measurement advanced here. They also make the case for a unified treatment of technical and human sensors, and point out that sensor science has (at least at that time) not placed enough emphasis upon the sensing function.

The measurement ontology proposed in [12] is meant to extend to aspects of measurement left out in previous attempts, such as sampling and quality evaluation. It emphasizes the use of measurements more than their semantics and takes a simplified view of measurement as a function from an object to a numerical value. The deeper question about how a measurement result relates to an object (if any) is left open. The ontology is based on an ontology engineering method, but not on a foundational ontology. It introduces some complex notions (like “traceable resource units”), which are difficult to understand and to relate to other ontologies.

A recent proposal for ontological extensions to sensor standards [13] rests on representational distinctions from [14] without developing or refining them further. An ontological analysis of observations and measurements that goes beyond an RDF or UML level representation of XML data types is still lacking and is proposed in this paper.

2.4 Geographic Information Science

Geographic Information Science has paid considerable attention to fundamental questions about the nature of information and observation in space and time. Chrisman [15] was the first to call for attribute reference systems, to complement spatial and temporal reference systems in supporting the interpretation of data. This was essentially a call for ontologies of observation and measurement. But Chrisman’s notion of reference system remained anchored in measurement scales and their


extensions, without providing a theoretical basis for assigning the scales in the first place or an ontological account of measurands. A generalized notion of semantic reference systems, encompassing space, time, and theme, and establishing a link to ontologies was proposed in [16, 17]. This research program raises the question of how to define a semantic datum and how to ontologize observations, as the basic components of geodata. Probst [18] provided the first definition of a semantic datum and advanced the ontology of qualities, in particular of spatial qualities, which he related to the dimensions of their carriers [19]. Building on the notion of qualities from [20], he formalized DOLCE’s quality spaces (which are in turn based on Gärdenfors’ conceptual spaces [21]) and introduced reference spaces as quality spaces that have been partitioned by symbols denoting measurement values. Schade has recently extended this work to enable semantic translation between attribute values [22]. The question of how measured qualities relate to their carriers remained open in this line of work, apart from the dimensional restrictions identified in [19]. In a recent paper [23], we have presented a first account of how observations relate to endurants and perdurants, leading to a revised and simpler definition of a semantic datum based on Gibson’s ecological psychology [24]. The present paper complements this account with an ontological analysis of observations and measurements from the point of view of their acquisition.

From a data model perspective, efforts have been undertaken to anchor geodata in fundamental models of observation, typically based on the physics notion of fields (for the latest example, see [25]), and then to provide ontological accounts of subsequent abstraction levels [26, 27]. These efforts typically presuppose absolute space and time and introduce the convenient abstraction of point observations, which can be seen as a surrogate ontology of observation. Given their different scopes (data models on the one hand, more general geographic information ontologies on the other), they are compatible with the approach presented here. They differ from it by tying observation to location (e.g., a measure of temperature at some point in a space-time continuum) instead of to endurants and perdurants (e.g., a measure of temperature of the amount of air surrounding a thermometer).

From a data processing perspective, measurement-based systems have been proposed for geospatial information [28, 29], with the purpose of establishing a foundation for spatial analysis through the collection of maximally original data. Positions and their uncertainties, for example, can more reliably and even incrementally be determined if the original terrestrial or satellite measurements are maintained in a system. A generalization of measurement-based approaches toward models that trace their concepts back to measurements requires an ontology of measurement. This will allow for data integration and for computational techniques to be applied to collections of measurements (such as incremental least-squares adjustment [30, 31]). One could even argue that the limited take-up of the idea of measurement-based systems until now has to do with its lacking ontological foundation, preventing a link between measurements and other data about objects and processes.

Recently, Goodchild has proposed the notion of Volunteered Geographic Information (VGI), and the idea of Citizens as Sensors going hand in hand with it [3]. Such a conceptualization of sensors as roles played by machines as well as humans lies at the heart of the approach presented here, which has the practical advantage of


turning the observation ontology into a solid foundation for volunteered information, and more generally for the social web.

2.5 Standardization

The ontological ambiguities inherent in current sensor and observation standards prevent an integration of observation data from multiple sources and their appropriate interpretation in models. The envisioned Semantic Sensor Web [32, 33] needs a stronger ontological foundation to become reality, as evidenced by today’s lack of access to sensor data on the web. Broadly speaking, three aspects of observation and measurement have been the subjects of standardization so far:

• measurement units (leading to the SI system of measurement units as well as some ontologies of measurement units);
• measurement uncertainty (leading to standard ways of applying probability theory and statistics to measurements as well as early standardization efforts on exchange of data on uncertainty);
• measurement technology (leading to various standards of instrumentation and communication).

Where such standardization efforts are not based on an ontology, problems can arise in using and combining them. For example, the Observations and Measurements standard of the Open Geospatial Consortium (OGC, [14]) provides a model for describing and XML schemas for encoding observations and measurements, with the goal of supporting interoperability. However, it states that all observed properties are properties of “features of interest”, which it defines as “the real-world object regarding which the observation is made” [14]. Apart from the notorious confusion between real world and information objects, this restricts observations unnecessarily and inconveniently to properties of objects. Where no such object can be identified (e.g., for weather data), even a sensor can become the „feature of interest“.

Probst [34] has shown some contradictions and interoperability impediments arising from such ontologically unfounded standards and showed how to remedy the situation, for example by treating the feature of interest as a role. The work presented here extends this idea further, to treat other observation concepts as roles as well, in particular sensors and stimuli. Sensors are modeled from an information-centric point of view, rather than from the technological and information encoding perspective of geospatial standards. A recent effort toward a sensor ontology [35] takes a similar direction, though it remains closely tied to technology.

3 Ontological Commitments

The ontology of observation is hindered by, among other factors, the naive idea of a measurement instrument being an objective reporter of the mind-independent state of the world. This commonly held view neglects the fact that instruments are built and calibrated by human beings. Neither the choice of the observed entity, nor the quality assigned to it, nor its link to a stimulus, nor the value assigned to the quality are


mind-independent. All of them involve human conceptualizations, though these are more amenable to grounding and agreement than anything else. Therefore, an ontology of observation requires ontological commitments of one sort or another, and those taken for this work are spelt out in this section.

As ontological foundation, DOLCE’s distinction of four top-level categories of particulars is adopted here [1]: endurants, perdurants, abstracts, and qualities (which can be physical, temporal, or abstract). Endurants, for example lakes, participate in perdurants, for example rainfalls. The categorization of an entity as endurant or perdurant is often a matter of the desired temporal resolution. On closer analysis, many phenomena involve both categories. For example, a water body can be conceptualized as an endurant, neglecting the flow of water, or as a mereological sum of endurants (amounts of water, terrain features) participating in a water flow perdurant.

Qualities inhere in particulars and map them to abstracts (regions in quality spaces). Physical qualities inhere in physical endurants, temporal qualities in perdurants. For example, a temperature quality inheres in an amount of matter and a duration quality inheres in an event. Quality universals shall be admitted, so that a quality can be abstracted from multiple instances to a quality type (e.g., air temperature) and even further to a generalized quality type (e.g., temperature).

As in [18], an observation process is seen here as invoking first a quale in the observer’s mind, or an analog signal in a technical sensor. Our notion of qualia is slightly different from the one in [20], but in line with the one in philosophy of mind (for a good overview, see http://en.wikipedia.org/wiki/Qualia): it denotes a quality experienced by an observer and is not abstracted from the carrier of the quality. The red of the rose experienced by an observer belongs to that particular rose as well as to the observer; it is not abstracted from either. Thus, observation involves firstly the production of a quale (analog signal) and secondly its symbolization, i.e., a sequence of impression and expression.

Measuring is distinguished here from observing by requiring measurements to have numeric results. Stevens considered measurement as „the assignment of numerals to objects and events according to rule“ (p. 677 of [7]), but included names in his measurement scales. This required, at least in theory, to turn names into numbers by some rules. Here, the term measurement is restricted to quantification, and the term observation is used for sensing processes with results symbolized in any form, not just numerically. The defining property of both observation and measurement is that they map qualia to well-defined symbol algebras, whether these are numeric or not.

The result of an observation (as well as of a measurement) process is an information object, which is commonly referred to as observation as well. It is a non-physical endurant, expressed by abstract symbols. Apart from the value of the observed quality, it can contain temporal and location as well as uncertainty information. Our ontology uses the terms observe for the process and Observation for its result.
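The observation/measurement distinction just drawn can be made operational in code. The following minimal Haskell sketch is our own illustration, anticipating the Value type of Section 6 with a simplified placeholder for Unit; it is not part of the paper’s ontology:

-- Simplified stand-ins for types defined in Section 6.
data Unit = Celsius | Metre deriving Show

data Value = Boolean Bool
           | Count Int
           | Measure Float Unit
           | Category String
           deriving Show

-- Measurements are exactly the observation results with numeric values;
-- treating counts as measurements is an assumption on our part.
isMeasurement :: Value -> Bool
isMeasurement (Count _)     = True
isMeasurement (Measure _ _) = True
isMeasurement _             = False

For example, isMeasurement (Measure 21.5 Celsius) yields True, while isMeasurement (Category "stormy") yields False.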

4 Core Observation Concepts

This section defines the core concepts of observation, using the ontological foundation introduced in the previous section: the notions of observable, stimulus,


observer, observation value, and observation process. The guideline for these choices has been to remain as compatible as possible with existing standards, with the literature (such as [36]), and with ordinary language. For example, the common distinction between observation and measurement (the latter having a numeric result) has been retained, and the term sensor denotes technical devices, while the term observer is used for the generalization over humans and devices. On the other hand, ambiguous terms in current sensor standards have been made more precise. For example, sensors have no knowledge of objects and therefore cannot observe object properties. The use of the term observation for both, the process of observing and the result, is common and should not cause confusion.

An observable is a physical or temporal quality to be observed, for example the temperature of an amount of air or the duration of an earthquake. Ontologically, an observable combines the quality with the entity it inheres in. If the quality to be measured is temperature, the physical endurant it inheres in can be any amount of matter (air, water, etc.) holding heat energy. The choice of a quality bearing endurant (say, of an air mass surrounding a thermometer) determines the spatial resolution of an observation. If the quality is duration, the perdurant it inheres in can be any event, such as an earthquake, a chess game, or the reign of a dynasty.

Since observables per se cannot be detected (how would information about them enter the observer?), the idea of a stimulus is needed, explaining how a signal (and eventually, information) is generated. A stimulus is defined in physiology as a „detectable change in the internal or external environment“ of an observer (http://en.wikipedia.org/wiki/Stimulus_(physiology)) or as a “physical or chemical change in the environment that leads to a response controlled by the nervous system” (http://www.emc.maricopa.edu/faculty/farabee/BIOBK/BioBookglossS.html). Simple examples of stimuli are the heat energy flowing between an amount of air and a thermometer or the seismic waves of an earthquake. Stimuli need to have a well-defined physical or chemical relationship to observables. A detectable change is a perdurant and can be a process (periodic or continuous) or an event (intermittent), playing the role of a stimulus when an observer detects it. An observer can also produce the necessary stimulus itself (e.g., a sonar producing a sound wave to measure distance). The stimulus can itself be an observation process (changing the observed value). This recursion allows for observations to combine individual observation results into symbols representing aggregate qualities.

Detecting a stimulus requires that an endurant in the internal or external environment of an observer participates in the stimulus. For example, heat flow can be detected by an amount of gas expanding in a thermometer. When the measurand inheres in a perdurant rather than an endurant, there still needs to be a participating endurant; for example, an inertial mass in a seismometer, to be moved by seismic waves. Thus, to be detectable, a stimulus needs to provide one of the following:

• a changing physical quality of a participating endurant (e.g., the volume of gas);
• a temporal quality of the stimulus or of a perdurant coupled with it (e.g., the duration of the motion of an inertial mass);
• an abstract quality of an observation result (e.g., a temperature value).


An observer provides a symbol for a quality, in two steps. First, it detects

• a quality of a stimulus or
• one or more proxy qualities (also known as signal variables) of endurants or perdurants internal to the observer.

Proxy qualities have to co-vary with the observable in a well-defined way, either through a participation of their carrier endurants in the stimuli or through a process coupling between their carrier perdurants and the stimuli. Second, the observer expresses the analog signal(s) obtained this way through a symbol for the value of the observation. This value is either a Boolean, a count, a measure with a unit, or a category (as in [37]) and can get „stamped“ with the position and time of the observation. The observer role can be played by devices (technical sensors) or humans or animals, either individually or in groups.

The observation process can now be conceptualized as consisting of the following steps (the first two required only once, to determine the observed phenomenon):

1. choose an observable;
2. find one or more stimuli that are causally linked to the observable;
3. detect the stimuli, producing analog signals (“impression”);
4. convert the signals to observation values (“expression”).

This sequence contains the ontologically significant elements influencing the semantics of observation values. It is consistent with the definitions of observations in standards for sensors and for geographic information, such as OGC’s Observations and Measurements standard ("An Observation is an action with a result which has a value describing some phenomenon." [14]) or OGC’s Reference Model ("An observation is an act associated with a discrete time instant or period through which a number, term or other symbol is assigned to a phenomenon." [38]).
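Steps 3 and 4 amount to a function composition, which can be sketched in Haskell (a minimal illustration of ours; the types Stimulus, Signal, and Value below are simplified placeholders, not the definitions of the actual ontology in Section 6):

-- Minimal sketch of observation as the composition of impression
-- (stimulus -> analog signal) and expression (signal -> symbol).
-- All types are simplified placeholders for those of the full ontology.
type Signal = Float                  -- an analog signal (quale), simplified
data Stimulus = HeatFlow Float       -- a detectable change, e.g. heat flow
data Value = Measure Float String    -- a symbolized result with a unit
  deriving Show

impress :: Stimulus -> Signal        -- step 3: detect the stimulus
impress (HeatFlow energy) = energy * 0.5   -- hypothetical sensor response

express :: Signal -> Value           -- step 4: symbolize the signal
express s = Measure s "Celsius"

observe :: Stimulus -> Value         -- observing = expressing an impression
observe = express . impress

Evaluating observe (HeatFlow 43.0) in ghci then yields Measure 21.5 "Celsius".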

5 Observation Examples

This section lists examples of technical and human observers taking a wide variety of observations. It illustrates the notions introduced in section 4 and provides the test cases with which the ontology has been developed and tested. Each example identifies an observer, observable, stimulus, and value.

A thermometer measures the temperature of an amount of air using heat flow as a stimulus. The stimulus causes an expansion of an amount of gas, the amount of which (relative to its container) is the signal that gets converted to a number of degrees on the Celsius scale.

A sonar measures water depth on a lake using sound waves it generates as stimulus and converting the time until they return from the ground (signal) into a measure of distance.

A CCD camera observes its visible environment using sunlight reflected from the surfaces in the environment as stimulus. It integrates the received radiation intensity at each of its pixels over some time interval (signal), and returns an image as observation.


A weather station reports the observable “type of weather” by combining temperature, pressure, and humidity measurements (each of them a stimulus producing a signal) and aggregating their values.

A sailor observes wind speed by watching the frequency and size of ripples on the sea as stimulus and reporting a Beaufort number expressing his impression (quale) of the wind force.

A nomad in the desert reports the presence of water in a well by observing sunlight reflected from patches of water (stimulus), getting the impression (quale) that there is some water, and calling a number on a cell phone signifying „water available“ [39].

An epidemiologist collects data on dengue fever risk by separating and counting mosquito eggs in a bucket (their presence being the stimulus, their number the observable, their individuation the qualia).

A doctor observes a patient’s mood by talking to the patient and describing her impressions (qualia) obtained from the patient’s behavior, which serves as stimulus.

6 A Walk through the Observation Ontology

The observation ontology presented in this section takes the form of an algebraic specification. It is an ontology, because it specifies observation concepts axiomatically; at the same time, it is a simulation, because it has an executable model that can be used for testing. The section walks the reader through the formalization, explaining its form and contents together. The full ontology is available from http://musil.uni-muenster.de/publications/ontologies/.

The software engineering technique of algebraic specification [40] uses many-sorted algebras to specify conceptualizations. These consist of sets of values from multiple sorts with associated operations. Logical axioms, in the form of equations over terms formed from these operations, constrain the interpretation of the symbols. For example, an axiom in an algebraically specified ontology might state that the time stamp of an observation should be interpreted as the mid point of an observation period.

For reasons of expressiveness and ease of testing, the functional language Haskell is used here as an algebraic specification language. It offers a powerful development and testing environment for ontologies, without the restriction to subsets of first-order logic and binary relations typical for ontology languages. In particular, it allows for specifying the behavior of observing as a relation over observer, observed entity, and quality types. Introductions to Haskell as a programming language, together with interpreters and compilers, can be found at http://www.haskell.org. An introductory text and language reference is [2], accessible at http://book.realworldhaskell.org/read/. All development and testing of the observation ontology has been done using the interpreter coming with the Glasgow Haskell Compiler, ghci.

6.1 Data Types for Universals

By considering types as theories [41], with operation signatures defining the syntax and equational axioms defining the semantics for a vocabulary, one can write theories

By considering types as theories [41], with operation signatures defining the syntax and equational axioms defining the semantics for a vocabulary, one can write theories of intended meanings, i.e., ontologies, in Haskell. Universals (a.k.a. categories, classes or concepts) are modeled as data types and individuals (a.k.a. instances) as values. Universals are not just flat sets of individuals, but algebraic structures with characteristic operations (a.k.a. methods or relations). Let us declare a type symbol for each universal (e.g., Person), a function type for each kind of process (e.g., constructing a value for Person from an Id and Position value), and equations on them (e.g., stating that the position of a person remains the one at construction time until it gets changed by a move operation).

The Haskell syntax for type declarations uses the keyword data (Haskell keywords are boldfaced throughout the paper) followed by a name for the type and a right-hand side introducing a constructor function for values, possibly taking arguments. A simple example is the declaration of data types for the universals Person and WeatherStation with constructor functions of the same name and two arguments for a name (Id) and a position, which are both data types as well:

  data Person = Person Id Position
  data WeatherStation = WeatherStation Id Position

Type synonyms can be declared using the keyword type. For example, the type Id is declared as a synonym of the predefined type String:

  type Id = String

The Position type (not shown here) contains alternative constructors for fixed and mobile positions, with the former encoded as coordinates (with a reference system) or as a toponym. A core universal in our ontology is that of observation values. Its various constructors are separated by “|” and take more basic types (Bool etc.) as arguments:

  data Value = Boolean Bool | Count Int | Measure Float Unit | Category String

An observation consists of an observation value combined with a position and time:

  data Observation = Observation Value Position ClockTime

ClockTime is a predefined Haskell type for system clock time. An example of a universal for a quality-carrying endurant is the type AmountOfAir. It takes two arguments here, one for the amount of heat energy and the other for the amount of moisture it contains (implying that each amount of air has heat and moisture):

  data AmountOfAir = AmountOfAir Heat Moisture

With a definition of the heat and moisture parameters, one can then define individual values, for example as follows:

  muensterAir = AmountOfAir 10.0 70.0

Additional data types specifying endurants like other sensor types, quality carriers or measurement units are defined in the full code.
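The excerpt above omits the Position type and the Heat and Moisture parameters. A minimal sketch of how they could be declared – the constructor and synonym names here are our own invention, not those of the full code – might read:

  -- Hypothetical sketch only; the full ontology uses its own names.
  type Toponym = String
  type ReferenceSystem = String
  data Position = Coordinates ReferenceSystem Float Float  -- fixed, by coordinates
                | Named Toponym                            -- fixed, by toponym
                | Mobile Id                                -- a mobile position
  -- Plausible synonyms, assuming Float arguments as in muensterAir above:
  type Heat = Float
  type Moisture = Float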

6.2 Type Classes for Behavior and Subsumption

By organizing categories along the subsumption relation, one can transfer behavior from super- to sub-categories. Standard ontology languages define subsumption in terms of instantiation: persons are agents if every person is also an agent. Haskell does not permit this instantiation of an individual to multiple types (because every value has exactly one type), but offers a more powerful form of subsumption, using type classes. These are sets of types sharing some behavior. Type classes are named with upper case letters here, to distinguish them visually from types, and are followed by a parameter for the types belonging to the class (using the same name in lower case):

  class ENDURANTS endurant

Sub-categories are derived from their super-categories using a so-called context (=>):

  class ENDURANTS physicalEndurant => PHYSICAL_ENDURANTS physicalEndurant
  class PHYSICAL_ENDURANTS amountOfMatter => AMOUNTS_OF_MATTER amountOfMatter
  class PHYSICAL_ENDURANTS physicalObject => PHYSICAL_OBJECTS physicalObject

Behavior can be added at any level of such a class hierarchy. For example, the ability of agents to tell their position distinguishes the class of agents (called agentive physical objects here, APOS) and then gets passed on to derived classes:

  class PHYSICAL_OBJECTS apo => APOS apo where
    getPosition :: apo -> Position

To state that persons and weather stations are agentive physical objects and inherit the behavior of these, the Person and WeatherStation types are declared instances of the APOS class, specifying how each of them realizes the getPosition behavior:

  instance APOS Person where
    getPosition (Person iD pos) = pos
  instance APOS WeatherStation where
    getPosition (WeatherStation iD pos) = pos

Note that Haskell's instance relation is one between a type and a type class. Types can instantiate classes (with the same context syntax), without creating dubious cases of multiple is-a relations. An example is the OBSERVERS class introduced below, combining behavior of agents and qualities. Type classes furthermore allow for exceptions in inheritance, so that penguins can be birds without flying behavior, or some APOS types (e.g., sensors without locating capacity) can be declared unable to tell their position.
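For illustration, individuals of both types can then be constructed and asked for their position through the shared APOS behavior; the values below are invented and assume the Position sketch given earlier:

  -- Hypothetical individuals:
  alice = Person "Alice" (Named "Muenster")
  ws1 = WeatherStation "WS-1" (Named "Muenster Airport")
  -- In ghci (given a Show instance for Position):
  --   getPosition alice  ==>  Named "Muenster"
  --   getPosition ws1    ==>  Named "Muenster Airport"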

6.3 Constructor Functions for Qualities

Qualities are the subject of observation. Each observable quality inheres in an endurant or perdurant, i.e., it is a dependent entity. This suggests a specification of quality types as functions and, more specifically, as data constructor functions (constructing values of a certain quality from individual endurants or perdurants). Individual quality values are then the results of applying a quality constructor to an endurant or perdurant. Note that this does not imply that endurants or perdurants be individuated prior to observation processes, only as parts of them.

Quality types can be generalized even further, by turning the type they inhere in into a type parameter. For example, the temperature quality type is specified independently of the kind of physical endurant it describes, using a class context to constrain its parameter:

  data PHYSICAL_ENDURANTS physicalEndurant => Temperature physicalEndurant
    = Temperature physicalEndurant

We have seen data constructors before, for example the function Person, which takes an Id and a Position value and constructs a value of type Person. What is different here is that the constructor can take a value of several types, satisfying the context (i.e., belonging to the type class PHYSICAL_ENDURANTS). The declaration establishes a type template, which generates different quality types for different bearer parameters. The temperature of air and that of water, for example, have two signatures with the same constructor function (Temperature), but different parameter types. Air temperature can then be specialized through a type synonym:

  type AirTemperature = Temperature AmountOfAir

Following DOLCE, an individual quality was defined in Section 3 as a region in a quality space. It is specified here by the term formed by applying a quality constructor to a particular. The following term specifies the air temperature at Münster (using the term muensterAir specified above):

  Temperature muensterAir

For a quality of an endurant or perdurant that is internal to a human observer, this term can be seen as representing a quale (e.g. "Temperature myBody"). For a technical sensor, it specifies the analog signal generated by the stimulus. In both cases, the term stands for the result of the first step of an observation (i.e., a sense impression), preceding its symbolization in an observation value (i.e., an expression). There is no need and no possibility to evaluate the term further. Its symbolization is specified in the following subsection.

6.4 The Observer Role

Putting it all together, let us now specify the role of an observer, as a class of three kinds of types: the observing agent, the observed quality, and the entity bearing the quality. This so-called multi-parameter type class can be seen as a relation over its types, defining which agents can observe which qualities of which entities. It is characterized by the observe behavior, which uses the express operation to symbolize the observed value:

  class (APOS agent, QUALITIES quality entity) => OBSERVERS agent quality entity where
    observe :: quality entity -> agent -> IO Observation
    express :: quality entity -> agent -> Value

The observe operation feeds a quality of an entity to an observing agent to produce an observation. Since this involves input from the system clock (to time stamp the observation), one can use the Haskell IO monad to wrap the result of these two operations. Because the observe behavior is the same for all agent and quality types, it can already be implemented in the class specification. Its specification uses the Haskell do notation, which allows for sequencing the execution of operations. In a first step, the clock time is read; then the Observation is returned as a triple of the result of the express operation, the position of the agent at the time, and the clock time:

  observe quale agent = do
    clockTime <- getClockTime
    return (Observation (express quale agent) (getPosition agent) clockTime)
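To make the multi-parameter class concrete, one could imagine an instance along the following lines. This is a sketch only: it assumes that Temperature of AmountOfAir has been declared an instance of the QUALITIES class and that the Unit type offers a Celsius constructor, neither of which is shown in this excerpt:

  -- Hypothetical instance: a weather station expressing the heat parameter
  -- of an amount of air as a Celsius measure.
  instance OBSERVERS WeatherStation Temperature AmountOfAir where
    express (Temperature (AmountOfAir heat _)) _ = Measure heat Celsius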

if dem(i,j) > MAX-ALTITUDE(zle) then etqij ← “elev” else etqij ← “llan”

EXTRACTION(dem)
1  ext ← ∅
2  nExt ← 0
3  for i = 0 to NUM-COLUMNS(dem)
4    for j = 0 to NUM-ROWS(dem)
5      if dem(i,j) > 0 then
6        nExt++
7        e ← REG8CONN(i,j,nExt)
8        MARK(e, nExt)
9        ext ← ext ∪ {e}

Table 2. Pseudo-code for Description Algorithm

DESCRIPTION(ext, sign)
1  props ← GET-PROPS(sign)
2  vals ← MEASURE-PROPS(ext, props)
3  plant ← GET-TEMP(sign)
4  desc ← FILL-TEMP(plant, vals)
5  return desc

Fig. 8. First iteration of the segmentation step

Table 3. Results of Description Stage

RASTER SPATIAL DATA SET OF DIGITAL ELEVATION MODEL FROM “GRAND CANYON - E AZ”, HAVING SPATIAL RESOLUTION OF 30 SECONDS-ARC. MIN ALTITUDE: 548.000000 METERS, MAX ALTITUDE: 2838.000000 METERS. EXTREME COORDS: (129600.000000, -406800.000000) AND (133200.000000, -403200.000000) SECONDS-ARC, PROJECTION: GEOGRAPHIC. THIS RSDS HAS:

A MOUNTAIN WITH AREA: 168300 SQUARE SECONDS-ARC, MIN ALTITUDE: 2699.301185 METERS, MAX ALTITUDE: 2774.861966 METERS, EXTREME COORDS: (132540.000000, -405660.000000) AND (136680.000000, -401460.000000) SECONDS-ARC, TOP: (132930.000000, -405540.000000) AT A HEIGHT OF 2774.861966 METERS ...

A HILL WITH AREA: 7200 SQUARE SECONDS-ARC, MIN ALTITUDE: 2640.503674 METERS, MAX ALTITUDE: 2650.418398 METERS, EXTREME COORDS: (133050.000000, -405750.000000) AND (136800.000000, -402090.000000) SECONDS-ARC, TOP: (133170.000000, -405720.000000) AT A HEIGHT OF 2650.418398 METERS ...

Fig. 9. Extraction of features under signatures “eee”, “eel” and “eed” (e=elev, l=llan and d=depr)

4 Conclusions

In this work, we have described a methodology for producing semantic descriptions of raster spatial data sets. The conceptualization methodology is the most important part of this research, because we propose to build the conceptualization using only three axiomatic relations, which allows the “classic” relationships to be moved into the conceptualization itself, giving them finer granularity and semantic richness. As part of the case study, three ontologies were developed: the Kaab ontology for the conceptualization of the geographic domain, the Hunxeet ontology for the conceptualization of the landforms domain, and the Wiinkil ontology for the conceptualization of our application.

The synthesis stage follows an image-processing approach, with preprocessing, processing and post-processing phases. The description stage uses the conceptualization and applies templates for describing geospatial knowledge.

As future work, we consider it necessary to analyze and conceptualize geographic relationships (topologic and geometric, for instance) between the concepts identified and described in this work. It is also important to consider methods for measuring the quality of the description. We propose the use of building blocks (basic landforms) for building a synthetic model and comparing it to the original data set.

We will also propose describing the data set using formal first-order logic and comparing the resulting logical descriptions, in order to obtain a quality metric.

Acknowledgments

The authors of this paper wish to thank the CIC, SIP Projects 20082563, 20082580, 20082480, 20080971, 20091264, 20090320, 20091018, 20090775, IPN and CONACYT for their support.


Bottom-Up Gazetteers: Learning from the Implicit Semantics of Geotags

Carsten Keßler, Patrick Maué, Jan Torben Heuer, and Thomas Bartoschek

Institute for Geoinformatics, University of Münster, Germany
{carsten.kessler,patrick.maue,jan.heuer,bartoschek}@uni-muenster.de

Abstract. As directories of named places, gazetteers link the names to geographic footprints and place types. Most existing gazetteers are managed strictly top-down: entries can only be added or changed by the responsible toponymic authority. The covered vocabulary is therefore often limited to an administrative view on places, using only official place names. In this paper, we propose a bottom-up approach for gazetteer building based on geotagged photos harvested from the web. We discuss the building blocks of a geotag and how they relate to each other to formally define the notion of a geotag. Based on this formalization, we introduce an extraction process for gazetteer entries that captures the emergent semantics of collections of geotagged photos and provides a group-cognitive perspective on named places. Using an experimental setup based on clustering and filtering algorithms, we demonstrate how to identify place names and assign adequate geographic footprints. The results for three different place names (Soho, Camino de Santiago and Kilimanjaro), representing different geographic feature types, are evaluated and compared to the results obtained from traditional gazetteers. Finally, we sketch how our approach can be combined with other (for example, linguistic) approaches and discuss how such a bottom-up gazetteer can complement existing gazetteers.

1 Introduction and Motivation

The amount of geotagged user-generated content on the Social Web has been soaring in the last years. Cheaper and smaller GPS chips as well as easy-to-use tools for manual geotagging have led to a sharp increase, particularly in the number of geotagged photos. The sheer amount of geotagged pictures – currently over 100 million on Yahoo's Flickr service alone1 – makes them a very attractive source for geographic information retrieval [1,2]. As such, geotagged photos can be regarded as an implicit kind of Volunteered Geographic Information (VGI) [3]. Merging professional data sources with such VGI is attractive for a number of reasons, such as rapid updates and enrichment with data typically not contained in professional data sets. Examples include the extraction of footprints [1] and grounding of vague geographic terms [4] such as downtown Mexico City, or mapping of non-geographic terms [5] to determine the regional use of words like soda or pop [6].

1 According to http://blog.flickr.net/2009/02/05/

One promising use of VGI – and geotagged photos in particular – is the enrichment of gazetteers with vernacular names and vague places [7]. Gazetteers have been developed as directories of named places with information on geographic footprints and place types to facilitate geographic information organization and retrieval. Most gazetteers follow a strict top-down approach, i.e., the gazetteer data is administered by the organization running the gazetteer. Only this toponymic authority can add places or place types to the gazetteer and correct erroneous entries, which slows down updates and hampers the inclusion of local and often tacit knowledge. Moreover, in most gazetteers information on geographic footprints is limited to a single coordinate pair, representing the centre of a city, administrative district or street. Extraction of footprints from geotagged information on the web is thus a promising way to automatically generate polygonal footprints for these gazetteer entries. Although a number of approaches have been developed for this task [5,8,9,10], they are hardly implemented in existing gazetteers. Apart from the GeoNames gazetteer2, which complements its database with geotagged information from Wikipedia, strict top-down management of gazetteers is still prevalent.

In this paper, we present an approach to build gazetteers entirely from volunteered geographic information. We discuss the challenges posed by automatically establishing the foundations of such a gazetteer based on geotagged photos harvested from the web. The implemented algorithms for retrieving geotags and clustering the corresponding locations to generate footprints are well-established. However, the emergent semantics [11] of such a collection of geotagged photos is still largely unspecified. Hence, the main contribution of this paper will be the formal definition of geotags. We explain the relation between the attached label (tag) and the information object (e.g., a photo), the label's author, as well as creation time and coordinates. We discuss the implicit semantics hidden in this relation, and how gazetteer entries can emerge from collections of such geotags using the presented implementation.

Inferring knowledge about places from a source like geotagged photos – usually tagged with subjective keywords – can be seen as a social knowledge building process [12, chapter 9]. Ideally, this process leads to a representation of the group cognition [12] and can thus be regarded as a cognitive engineering [13] process which lets traditional GI applications benefit from the Wisdom of the Crowds [14]. Gazetteers exposing the collaborative perspective on place differ significantly from traditional gazetteers with administrative focus [15]. It is thus not the aim of this research to replace today's gazetteers, which have already proven useful for countless applications building on geocoding, geoparsing and natural language processing. Instead, we argue for a separation of these different views into separate gazetteers, which can then be accessed through a gazetteer infrastructure as outlined in [7,16].

2 See http://www.geonames.org

In order to demonstrate the feasibility of our approach, we have set up an application which retrieved and processed geotags associated with photos published on Flickr, Panoramio and Picasa3. While there is also other geotagged content online, such as videos, blog posts or Wikipedia entries, we chose to limit this experiment to photos. Photos are inherently related to the real world, since every photo has been taken somewhere. Moreover, as mentioned above, there is already a substantial amount of geotagged photos available online. By analyzing the coordinate pairs attached to the pictures, the time they were taken, as well as the tags added by their owners, we are able to compute geographic footprints representing specific keywords. The collection of these keywords, derived from all tags of all retrieved photos, is further analyzed to differentiate between toponyms and tags without spatial relation. We test a repository built up this way with queries for Soho, Camino de Santiago (Way of St. James) and Kilimanjaro. We compare the results to those obtained from the same query on GeoNames. This evaluation focuses on the question whether our bottom-up gazetteer can already take on established gazetteers in terms of completeness and accuracy of geographic footprints. The next section points to relevant related work. Section 3 introduces a formal definition of geotags and establishes the relation between gazetteers and geotags. Section 4 describes the crawling and filtering approach implemented in the prototype. Section 5 analyzes the results obtained for the three exemplary queries, followed by conclusions and an outlook on potential applications and future work in Section 6.

2 Related Work

This section points to related work from gazetteer research, tagging, and bottom-up generation of geographic information.

2.1 Gazetteer Building and Learning

Gazetteers are knowledge organization systems that consist of triples (N, F, T), where N corresponds to the place name, F to the geographic footprint and T to the place type [17]. Since neither N, F nor T are unique, all three components are required to fully represent and unambiguously identify a named place [17, p. 92]. In the context of gazetteers, a clear distinction is made between place as a social construct based on perceivable characteristics or convention [18], and the actual real-world feature it refers to [19]. Feature types are mostly organized in semi-formal thesauri with natural language descriptions. Recent research demonstrates how gazetteers could benefit from more rigorous, formal place type definitions [16] and develops methods for gazetteer conflation [20]. Existing gazetteers have generally been developed based on databases provided by administrative authorities, or by merging existing gazetteers [17]. More recently, the ever-growing amount of information available on the web has been identified as a promising resource of knowledge about named places.

3 See http://flickr.com/, http://panoramio.com/ and http://picasaweb.com/

Jones et al. [1] introduce a linguistic approach to enrich gazetteers with knowledge about vague places. They use documents harvested via web search and analyze them for co-occurrences of vague place names with more precise co-located places. In another linguistics-based approach presented by Uryupina [21], a bootstrapping algorithm is applied to automatically classify places into predefined categories (e.g. city, mountain). The machine learning techniques employed in this research enabled a high precision of about 85%, despite the comparably small training data sets of only 100 samples per category. Henrich and Lüdecke [5] introduce a process based on the results retrieved from a web search engine to derive geographic representations for both geographic and non-geographic terms at query time. Goldberg et al. [22] developed an agent-based system that crawls structured online sources such as the USPS zip code database and online phone books. The authors demonstrate that this approach is capable of creating detailed regional, land-parcel level gazetteers with a high degree of completeness.

2.2 User-Generated Geographic Information

Online mapping tools with open APIs such as Google Maps have enabled the creation of the huge amounts of user-generated geographic information – also dubbed collaborative [23] or volunteered GI (VGI) [3] – in the first place. While this mainly refers to projects like OpenStreetMap4, we argue that geotags, and more importantly the geographic footprints derived from them, can also be filed into this category. Similar approaches have already been sketched in previous research to derive landscape regions [24] or imprecise definitions of boundaries of urban neighborhoods [8] from such geotagged content. We build on this previous work and show how geographic information collected this way can be processed for integration with existing gazetteers.

3 What Is a Geotag?

We have introduced geotags as particular examples of volunteered geographic information. Before discussing the idea of inferring semantics from the geotag, we are going to formally define it.

3.1 Tagging Geographic Information Objects

Humans adding items like pictures to their collections use individual ordering schemes (besides time) to group similar items, keep different items apart, and consequently simplify recovery. We order books in our (real) book shelf according to various criteria, including topic, age, thickness, or even color. Such individual preferences re-appear in virtual collections. Using tags – words or combinations of words people associate with virtual items – is a well-accepted approach to sort items on the virtual shelf. Tags, however, can vary significantly from person to person.

4 See http://www.openstreetmap.org/

The formal definition of a tag therefore has to include both the user and the tagged information object. Gruber [25] suggests to model the tag as the process Tagging = (L, U, I, S), which establishes an immediate relation between the Label L, coming from the User's (U) vocabulary, and the associated information Item I. This definition includes a Source S, which enables sharing across applications. In the following, we leave this source aside, since it has no direct impact on the presented approach. The following rule states that, if a label is associated with an item by some user, it is regarded as a tag. More importantly, it also states that a tag is always bound to its author and the item:

∀l (Label(l) ∧ ∃i (Item(i) ∧ associatedTo(l, i)) ∧ ∃u (User(u) ∧ createdBy(l, u)) → Tag(l))    (1)


Any information object which is inherently hard to classify – basically all non-textual information – requires a solution for its categorization. Tagging is commonly accepted for such contents, such as photos or videos, but also for bookmarks, scientific articles, and many more. In the remainder of this paper, we focus on photos with an identifiable geographical context, e.g. a picture of La Catedral in Mexico City. The items in question are therefore related to objects in the geographic landscape [26]. Goodchild's “geographic reality” [27] as formal definition of geographical information takes the spatio-temporal nature of the physical (field-based) reality into account. Humans, however, do not perceive reality as continuous fields. They identify individual objects, either directly or indirectly by looking at photos created by camera sensors. In this World of Individual Objects [26] we only consider particulars (entities existing in space and time) with an observable spatial and temporal extension. Objects on the photo have per se no meaning; in Frank's World of Socially Constructed Reality we eventually associate semantics to be able to reference the particulars [28] in spoken language. Such a reference can either be a proper name, which is used as a unique identifier [29], e.g., Catedral Metropolitana de la Ciudad de México, or it links to a category5 which groups objects sharing common properties, e.g. cathedral. We finally identify individual particulars according to their spatial or temporal characteristics, by either referring to complex objects (e.g., downtown) or to the homogenous spatial or temporal region the object is a proper part of, e.g., Mexico City.

So far, this follows the definition of gazetteer entries from Section 2.1. The place type T and place name N in the discussed triple (N, F, T) both refer to the particular's semantics; the geographic footprint F, on the other hand, is related to its spatial extension in physical reality. The same applies to the labels used to tag a photo, which function as references to particulars in geographic space. The nature of this reference, however, cannot be explicitly described: although it appears to be obvious for the mentioned proper names or category names, most tags associated to photos do not have an objective relation to the geographic object. The label vacation09 makes perfect sense for the user, who might have sorted all pictures of his Mexico trip using this tag.

5 The reference is then again the proper name of the object's type.

Once the items are shared, however, such personal tags lose any usefulness. Other examples which have no immediate relation to the depicted particular are labels naming properties of the item itself (e.g. blue, high-resolution), the process of creating the item (e.g. nikon), its potential use (e.g. wallpaper), or simply the author's opinion (interesting). Note that we assume that it is the user's intention to improve the item's findability; hence, we do not expect to encounter deliberate errors (which is obviously not true in real-world settings; we propose an effective solution for this problem in Section 3.3). Once we have identified the references, we can use them to locate the referred-to object in space and time. The following rule makes this dependency between the tag and its role as a reference to the depicted particular explicit:

∀l ∃i (Tag(l) ∧ Item(i) ∧ associatedTo(l, i) ∧ ∃p (Particular(p) ∧ represents(i, p)) → refersTo(l, p))    (2)


The rule does not (and cannot) further specify the reference type. Taking our example of the cathedral, the label Catedral Metropolitana is immediately referencing the particular – here as a proper name. We can then further specify the tag as a proper name:

∀l ∃p (Tag(l) ∧ Particular(p) ∧ names(l, p) → ProperName(l))    (3)


The open question here is obviously how to infer whether the label is a proper name and, even more importantly, how to ensure that it is really the proper name of the depicted geographic object. The clustering and filtering approach introduced in the next sections provides answers to both questions. Labels like Mexico or Summer 2009 are indirect references. They point to a region containing the particular (spatially and temporally, respectively). The following rule formalizes our assumption that, if the tag is a toponym referring to a certain geographic region, we can infer that our depicted object is spatially related to that region:

∀l ∃p (Tag(l) ∧ Particular(p) ∧ refersTo(l, p) ∧ ∃r (GeographicRegion(r) ∧ names(l, r)) → spatiallyRelated(p, r))    (4)

We can only assume that there is a spatial relation between the depicted particular and the place name. By looking only at the labels we cannot infer what kind of spatial (or temporal, for that matter) relation exists, and hence what spatial character this specific label has. In the following section we introduce the concept of a geotag as an extension of the traditional tag. Geotags give us the opportunity to make use of geographic coordinates and points in time to identify the spatio-temporal character of the associated labels.

3.2 A Formal Definition of Geotag

The tagging process establishes the relation between the user, the information item, and the label.

If the information item represents one or more geographic objects, the associated label may (but does not have to) refer to either dimension of the depicted object: either its semantics (including a proper name of the individual or category) or its spatio-temporal extension (naming, for example, the containing region). A geotag extends the notion of the tag by adding an explicit location in space and time to the information item. In the case of digital photos, a time stamp with the creation date is usually added by the camera automatically. Geographic coordinates are either provided by built-in GPS modules, or added manually by the user. Building on Gruber's definition of tagging as a relation, we add the time stamp T and the coordinates C to the relation (and omit the source S): Geotagging = (L, U, C, I, T). By extending our rule-based definition of a tag (Eq. 1), the following rule reclassifies a label as a geotag:


∀l ∃i (Label(l) ∧ Item(i) ∧ associatedTo(l, i) ∧ ∃c (Coordinate(c) ∧ associatedTo(c, i)) ∧ ∃t (Timestamp(t) ∧ associatedTo(t, i)) ∧ ∃u (User(u) ∧ createdBy(l, u)) → Geotag(l))    (5)

Note that we do not assume that a label reclassified as a geotag is per se a place name. The tag blue is not necessarily related to the depicted object, nor does it have a spatial or temporal character. In our understanding, it is still a geotag, since it is the label used by one user on some occasion to tag an item with an associated location and date. In the following Section 3.3, we introduce an approach which reliably computes whether a label is spatially related to the particular.
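For the illustrative code sketches in the remainder of this section, the Geotagging relation can be rendered in Haskell – the specification language used elsewhere in this volume – as a simple record; the field names are our own choice and not part of the formalization:

  -- A hypothetical Haskell rendering of Geotagging = (L, U, C, I, T).
  data Geotag = Geotag
    { label :: String            -- L: the label
    , user  :: String            -- U: the label's author
    , coord :: (Double, Double)  -- C: geographic coordinates
    , item  :: String            -- I: identifier of the tagged photo
    , time  :: Integer           -- T: creation time stamp
    }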

3.3 A Clustering Approach to Categorize Geotags

The definition of geotags introduced in the previous section has substantial implications on the conceptual level. An information item is linked to a coordinate and time stamp, and labelled by one or more individuals. If we want to extract one particular aspect, e.g. the spatial coverage of geotags, we have to consider the other four properties as well. Using the definition of a geotag as the relation Geotagging = (L, U, C, I, T), we use the tuple relational calculus6 [30] in the remainder to specify the queries used to retrieve different kinds of clouds. For example, the query {g.C | g ∈ Geotagging ∧ g[L] = Li} returns the coordinates of all tuples g where the label (the field L) has the value Li. We call the result of this query a point cloud of a label. A folksonomy – the aggregation of all tags from all users into one (uncontrolled) vocabulary – is then simply formalized as {g.L | g ∈ Geotagging}. The resulting tag cloud can also be reduced to the vocabulary of one particular user Ui with the query {g.L | g ∈ Geotagging ∧ g[U] = Ui}. Her spatio-temporal activity – the user's movement across space and time – is queried using the statement {g.C, g.T | g ∈ Geotagging ∧ g[U] = Ui}.

6 TRC is a concise declarative query language for the relational model; the presented examples can also be expressed in SQL.
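Under the Geotag record sketched above, these calculus queries correspond to plain list comprehensions; again an illustration of our own, not part of the formalization:

  -- Point cloud of a label Li: {g.C | g ∈ Geotagging ∧ g[L] = Li}
  pointCloud :: String -> [Geotag] -> [(Double, Double)]
  pointCloud li gs = [coord g | g <- gs, label g == li]

  -- Folksonomy: {g.L | g ∈ Geotagging}
  folksonomy :: [Geotag] -> [String]
  folksonomy gs = [label g | g <- gs]

  -- Spatio-temporal activity of user Ui: {g.C, g.T | g ∈ Geotagging ∧ g[U] = Ui}
  activity :: String -> [Geotag] -> [((Double, Double), Integer)]
  activity ui gs = [(coord g, time g) | g <- gs, user g == ui]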

We suggest to make use of the point cloud of one label to compute its spatial footprint. A gazetteer built on top of this approach could then return geometries and centroids for proper (potentially unofficial) names of geographical objects. The information we derive from geotags, however, is inherently noisy: many tags do not have an immediate relation to the particular represented by the geotagged item. Only significant occurrences of geotags should therefore be considered for this approach. We define one occurrence of a geotag g = (Li, Ui, Ci, Ii, Ti) as significant if the following two conditions are fulfilled:

1. At least two tuples gi and gj exist where gi[L] = gj[L] and gi[U] ≠ gj[U]. Since names in geotags are subjective, this rule assures that only names which are used by different persons are taken into account.
2. The spatial distribution {g.C | g ∈ Geotagging ∧ g[L] = Li} can be clustered.

In the following section we describe the algorithm which applies filters checking for these conditions to extract the relevant candidates for toponyms from the large set of tags. The semantic analysis of the two preceding sections can be easily realized as executable rules, for example expressed in the Semantic Web Rule Language (SWRL) [31]. SWRL supports built-ins; the algorithm presented in the following pages can therefore be integrated as geotag:significant and used to extend and clarify rule 2:

∀l ∃i (Tag(l) ∧ Item(i) ∧ associatedTo(l, i) ∧ geotag:significant(l) ∧ ∃p (Particular(p) ∧ represents(i, p)) → refersTo(l, p))    (6)


A reasoning engine triggers the execution of the clustering algorithm once it processes the added built-in. The algorithm returns true if the given label occurs significantly (and false otherwise). Once we have applied the filtering and clustering, our gazetteer can provide the point clouds (and the regions covered by the point clouds) for given place names. For some place names, the clustering process results in multiple clusters (see the example of Soho in the following sections). This does not impair the efficacy of the presented approach as long as the clustering algorithm produces reasonable results (which depends mostly on the number of available geotags). For cases such as Soho, multiple gazetteer entries are generated.

Although we introduced time as a fundamental component of the geotag, we have not discussed the implications for the targeted gazetteer. With the presented approach, the tag GEOS 2007 would also be classified as a place name. While we cannot discuss this issue here in detail for lack of space, distinguishing between toponyms and labels naming temporal events can be implemented by applying the clustering approach both to the spatial and temporal dimensions.
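Condition 1 of the significance test admits a direct rendering; condition 2 (clusterability) is delegated to the triangulation step of Section 4.2. As before, the Geotag record is our own illustration:

  import Data.List (nub)

  -- A label is significant only if at least two different users attached it.
  usedByDifferentUsers :: String -> [Geotag] -> Bool
  usedByDifferentUsers li gs =
    length (nub [user g | g <- gs, label g == li]) >= 2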

3.4 Extraction of Gazetteer Entries

Section 2.1 defines gazetteer entries as triples (N, F, T). This notion has to be further specified for a gazetteer based on geotags.

Since, in our case, the underlying data consist of a large collection of photos geo-located with exactly one coordinate pair, the given place name N maps to a point cloud as geographic footprint: F = {g.C | g ∈ Geotagging ∧ g[L] = Li}. Each point in the cloud represents one significant occurrence of the given place name as a tag for a photo. Since the footprint is no longer a single coordinate pair, the gazetteer's mapping from place name to footprint N → F should now result in three different mappings. N → Fr maps the place name to the raw footprint consisting of the corresponding point cloud. N → Fp maps to the polygon which approximates the region occupied by the point cloud. N → Fc finally maps a place name to the footprint's centroid, i.e., to a single coordinate pair as returned by conventional gazetteers. The centroid is the mean of all coordinate pairs in the point cloud and is thus specifically (and intentionally) biased towards areas that contain high numbers of geotags. Fc can thus be regarded as the point of interest best representing a place name, based on the number and location of corresponding geotags.
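The centroid mapping N → Fc is then just the coordinate-wise mean over a label's point cloud; a sketch in the same illustrative style:

  -- Centroid of a (non-empty) point cloud: the mean of all coordinate pairs.
  centroid :: [(Double, Double)] -> (Double, Double)
  centroid ps = (mean (map fst ps), mean (map snd ps))
    where mean xs = sum xs / fromIntegral (length xs)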

Fig. 1. Geotagged photos are crawled from the web (1) and fed into an RDF triple store. The tags are filtered based on occurrences to retrieve a subset of toponyms (2). For each place name, regions and centroids are calculated (3). Finally, every place name is categorized using linguistic classification (4). The part outlined in grey has been implemented for this paper (adapted from [7]).

While the derivation of the gazetteer entries from geotags allows for enhanced functionality in the mapping from place name to footprint, the mapping to place type N → T remains unchanged. The experimental setup presented in Section 4 leaves the place type unspecified. Potential combinations with linguistic approaches [21] as sketched in Figure 1, however, would allow for a semi-automatic classification of the gazetteer entries based on a predefined typing scheme. This scheme could be adopted from existing gazetteers. Due to the limited reliability of any data coming from such collaborative platforms, such an approach would at least require quality control mechanisms. A fully automatic strong typing of place names with such a bottom-up approach is clearly not feasible here. While this is out of scope for this paper, the grouping of a resource's tags into place names, place types and other tags does appear feasible. Moreover, it is worth asking whether such a tag-based typing is a more practical approach for a community-driven gazetteer [32].

4 Workflow and Algorithm

This section describes the crawling approach implemented in our prototype. The different aspects of the resources that play a role in the filtering process are discussed.

4.1 Crawling Approach

A reliable extraction of geographic footprints requires a sufficiently large number of geotagged resources. We have limited ourselves to photos as resources for various reasons. People sharing their creations on the web want others' recognition. Community-based web sites take this aspect into account by ranking the photos by popularity, which relies on the findability of the photos. Photo-sharing web sites all provide various means to find a photo: one can use a keyword-based search engine, browse a map with overlaid pictures, browse pictures by date, and so on. Users spend a considerable amount of time annotating the pictures to cover all these aspects. Since every photo is implicitly located, assigning an explicit location by linking the photo to a point on a base map is a common annotation procedure. Accordingly, digital photos do not only carry detailed metadata in their Exif tags, they are also exceptionally well described by their creators. The last and most important reason to consider only photos as resources for extracting the spatial footprints of place names is their abundant availability. It is therefore reasonable to assume that the crawling yields a large enough sample of geotagged resources to achieve a significant result.

The crawling algorithm is conceptually straightforward. Starting from a specific tag, the algorithm requests all geotagged resources which have been annotated with this tag. All three services used for our study provide this functionality through their APIs. For every tag attached to a retrieved photo, we store a separate complete geotag tuple (L, U, C, I, T) in our RDF triple store. In the next step, the conditions detailed in Section 3.1 are applied to filter out tags which we have identified as not important. The resulting set of geotag tuples is taken as input for the clustering method described in the following.
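The expansion of one retrieved photo into geotag tuples can be pictured as follows; the Photo record and its field names are hypothetical stand-ins for whatever the respective service APIs return:

  -- One retrieved photo, as a hypothetical normalized record.
  data Photo = Photo
    { photoId   :: String
    , owner     :: String
    , location  :: (Double, Double)
    , taken     :: Integer
    , photoTags :: [String]
    }

  -- One complete geotag tuple (L, U, C, I, T) per tag attached to the photo.
  toGeotags :: Photo -> [Geotag]
  toGeotags p = [Geotag t (owner p) (location p) (photoId p) (taken p) | t <- photoTags p]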

4.2 Geotag Extraction Algorithm

A place name either refers to one unique place (e.g. Kilimanjaro) or to multiple regions (e.g. the districts called Soho in London and New York). The geotag tuples resulting from the crawling algorithm are used to identify clusters of high point density. We consider the point cloud (explained in Section 3.3) as geographic footprint for the label Li if many people used this keyword to annotate their photos taken nearby. Such clusters can have any shape; they are not necessarily convex and can contain holes. Point clouds derived from geotags are not equally distributed over space, but have some tendency to follow structures like trails or streets. In [10], the Delaunay triangulation has been identified as a candidate algorithm to find clusters within point clouds. This method is not restricted to places with certain geometries. It computes the smallest possible triangles between three adjacent points; each point is connected to its nearest neighbors by an edge. A Delaunay triangulation for the tag Soho in New York is depicted in Figure 2. In order to split the graph of points and edges into clusters of high density (short edges), we remove all edges longer than a given threshold. Remaining adjacent triangles are merged into one or more polygons. They represent Fp, the polygonal geographic footprint of the gazetteer's place name N. A more advanced way to extract polygonal footprints from single locations is the Alpha Shape [33,34], which has also been used to generate the Flickr shape files7. For reasons of simplicity, we stuck to a Delaunay triangulation for this experiment. The next section shows that even with such a comparably simple clustering approach one can already obtain usable results.
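The cluster-splitting step itself reduces to a threshold filter over the triangulation's edges; the Delaunay triangulation is assumed to be computed elsewhere (e.g. by a computational geometry library), so only the thresholding is sketched:

  type Point = (Double, Double)
  type TriEdge = (Point, Point)

  -- Euclidean edge length, in the units of the coordinates.
  edgeLength :: TriEdge -> Double
  edgeLength ((x1, y1), (x2, y2)) = sqrt ((x1 - x2)^2 + (y1 - y2)^2)

  -- Keep only the short edges; the connected components of the remaining
  -- graph are the clusters shown in Figure 2.
  shortEdges :: Double -> [TriEdge] -> [TriEdge]
  shortEdges threshold = filter ((<= threshold) . edgeLength)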

Fig. 2. Cluster graph after the Delaunay triangulation for the place name Soho. The screen shot shows the clustering result depending on the edge length threshold: a small value results in several small clusters shown in blue. When the threshold increases, the fragments start to join into the large black cluster.

7 See http://code.flickr.com/blog/2008/10/30/

5 Experimental Results and Evaluation

This section presents the results obtained by our prototype implementation. The results are discussed and compared to those obtained from conventional gazetteers.

5.1 Soho, Camino de Santiago and Kilimanjaro

We retrieved geotagged photos annotated with Soho, Camino de Santiago and Kilimanjaro. These three place names were chosen because they represent different geometries: Soho as a city district represents polygonal real-world features up to a few kilometers in diameter. Moreover, we chose this example because there is not “the one” Soho, but both the districts in London and New York can be regarded as equally well-known. Camino de Santiago refers to a number of pilgrimage routes leading to the Cathedral of Santiago de Compostela8 in northwestern Spain. It usually refers to the Camino Francés, the medieval route along Jaca, Pamplona, Estella, Burgos and León, but it is also used for a number of other ways to Santiago de Compostela across Europe and is thus a prime example of an ambiguous linear real-world feature. The third example, Kilimanjaro, is an example of a large-scale natural feature that can be seen (and hence photographed) from far away, but is hard to reach. Using this example, we want to investigate how well our approach is apt to derive useful results for such features.

Table 1. Figures on the RDF repository used for this study. The numbers include a negligible number of entries added during the testing phase.

  Geotag Tuples            560,834
  Filtered Geotag Tuples   471,393
  Unique Names               9,917
  Filtered Unique Names      2,035
  Resources                 10,603
  Users                      1,103

Table 1 gives an overview of the number of resources and tags obtained by crawling the three photo sharing websites for the three given examples. Only around 15 percent of the tuples were removed during the filtering process; the ratio of ∼0.84 is surprisingly high. The ratio of filtered to unfiltered unique names, on the other hand, is ∼0.21; this shows that our filtering approach identified almost 80% of the names as irrelevant, since they were used by only one user. The difference between the two ratios means that the remaining 20% of filtered unique names appear in 80% of all geotag tuples. Our rather simple approach of not further considering tags that only occur once thus proves very effective: most tags are noise, but those which remain are used and accepted by many users. Table 2 contains the specific numbers per place name.

For Soho, the two biggest clusters emerge as expected in central London and in New York (see Figure 3).

8 Tradition has it that the cathedral contains apostle Saint James the Great's gravesite.

Table 2. Figures on the three case studies. The last column indicates the distance from the cluster's centroid to the corresponding footprint in GeoNames (a: London, b: New York).

  Place name           Geotag Tuples   Resources   Users   Dates   Distance
  Soho                         11916        3124     446    3087   0.26a / 0.16b km
  Camino de Santiago            5132        1304      75    1255   285.3 km
  Kilimanjaro                   2536         825      72     808   3.7 km

Fig. 3. The clusters generated for Soho. The left screen shot shows the cluster in London, the right one shows the cluster in Manhattan, New York.

Apart from these two main clusters, a number of smaller clusters appear at different locations around the world. An analysis of the corresponding resources showed that most of them correspond to smaller places called Soho, thus representing valid gazetteer entries. The small outlying clusters south of the main cluster in Figure 3, however, are clearly not meaningful results. Such outliers occur frequently when users tag whole photo sets with the name of the place where most of them were taken. This inevitably tags some photos with the wrong place name and will require an improved filtering approach.

For Camino de Santiago, the generated clusters give a good impression of the main trail to the Cathedral of Santiago de Compostela (see Figure 4). One apparent problem here is that the clustering algorithm splits up the route into distinct segments. Future research should focus on the development of “intelligent” clustering approaches that take the shape of the cluster into account, in order to enable a more reliable clustering.

For Kilimanjaro, the emerging clusters (see Figure 5) expose the main problem with an approach based on tagged and geolocated photos: users often do not tag the picture with the place name of the location where the picture was taken, but with the name of the real-world feature shown in the picture.

Fig. 4. The clusters generated for Camino de Santiago give a good impression of the trail of the route

Fig. 5. The clusters generated for Kilimanjaro are distributed over a large area and show the problem of photos tagged with the names of features shown in the pictures, although they were taken from far away

This becomes especially apparent for very large real-world features, as in this example. Several smaller clusters expose the high number of pictures taken at these locations, which apparently offer a good view of Mount Kibo, the highest peak of the Kilimanjaro massif. Future work needs to investigate how clusters referring to such real-world features can be detected, for example by identifying ring-shaped clusters such as the one in Figure 5.

5.2 Geographic Footprints

The footprints extracted by our approach provide additional useful information compared to the point-based footprints provided by conventional gazetteers. For comparison with GeoNames, we also computed the corresponding centroid as the mean of all coordinates in every cluster (or cluster group, as for Kilimanjaro). This centroid points to what can be described as a named cluster's group-cognitive centre. In contrast to the geometric centre point, it gives an estimate of the common point of interest of the users providing the photos retrieved in the crawling step. In the following, we discuss the extracted footprints and how the group-cognitive centre and the geometric centre point differ for our three examples.

For Soho and Kilimanjaro, the distance between the GeoNames footprint and the centroid of our cluster is comparably small, given the respective scale of the cluster (and the size of the corresponding real-world feature). The footprint for Soho, London, in GeoNames is about 260 m away from the centroid of our cluster. The cluster itself represents the common notion of Soho very well9, although it extends across Oxford Street in the north, which is usually taken as Soho's northern border. The same applies to the eastern extension of the cluster; the southern and western extensions match the common notion of Soho very well. Similar observations can be made for Soho, New York: the area that is commonly referred to as Soho10 is completely covered, but the cluster exceeds the actual area in all four directions. This exceeding problem can probably be addressed by adjusting the cutoff length during triangulation and fetching more input data. The centroid of the cluster is only 160 m away from the footprint of the corresponding GeoNames entry.

The clusters generated for Camino de Santiago stretch very well along the actual trail of the route, despite the gaps discussed above. The calculation of the centroid shows that it is in most cases meaningless to represent linear real-world features by points. While the centroid represents a mean value for all coordinates in the clusters, the footprint from GeoNames is located at one end of the route. Selecting the destination of the pilgrimage trail as footprint certainly makes sense in this case (the coordinate refers to Santiago de Compostela); however, this selection will be completely arbitrary for linear features that lack such a clear destination (such as most roads). For Kilimanjaro, the clusters represent the areas with a view on Kilimanjaro's highest peak, rather than the mountain itself (due to the problems discussed above). This also causes a distance of almost 4 km between the clusters' centroid and the GeoNames footprint, which is nevertheless still within an acceptable range given the size of the real-world feature.

9 See http://en.wikipedia.org/wiki/Soho#Streets for comparison.
10 See http://en.wikipedia.org/wiki/SoHo#Geography

6 Conclusions

This section summarizes the paper and points to different applications of the presented approach, as well as directions for future work.

6.1 Discussion

In this paper, we have presented an experiment to test the feasibility of the idea to build a gazetteer completely from geotagged photos crawled from the web. We have introduced the theoretical foundations to capture the emergent semantics of geographic information extracted from geotagged resources on the web. A theoretically sound definition of a geotag has been introduced and related to the classical definition of a gazetteer. Using the implementation which clustered and filtered geotags of photos, we have demonstrated how the geographic footprint for a given place name can be derived. The results of our queries for Soho, Camino de Santiago and Kilimanjaro showed that it is possible to derive meaningful geographic footprints from geotagged content, even with comparably simple clustering approaches. Both the footprints as well as their centroids shed a different light on named places than conventional gazetteers.

As pointed out in [22], every gazetteer extracted from online information can only be as good as the information it builds on. However, our experiment has demonstrated that useful results can already be obtained with very straightforward means to extract a group-cognitive perspective [12] on place names. Hence, we do not propose to replace existing gazetteers by our approach, but to complement them within a gazetteer infrastructure [7,16]. Further improvements can be expected from implementing models of trust in the harvesting process, which would allow for an estimation of the quality of the geotags used for clustering [7,23].

From a visual inspection, the generated regions were judged to be plausible representations of the place names' geographic footprints. In particular, the algorithm showed the capability to recognize different places carrying the same name, as shown in the Soho example. Moreover, the filtering algorithm has successfully sorted the crawled tags into toponyms and other tags based on the notion of significant occurrences. The example of Kilimanjaro has shown that very large real-world features are problematic for our approach, since they often appear in the context of photos that show them, but that were taken far away from the actual feature. Evidently, the results could be improved by more sophisticated crawling, filtering and clustering approaches.

Applications

While the crawling approach presented in this paper has been developed with the recursive generation of a bottom-up gazetteer in mind, the underlying algorithms


are also potentially useful in a number of other applications. The user component, for example, could be used to derive communities and their vocabulary by analyzing how groups of users tag certain real-world features. The temporal component has only been used to identify occurrences and to filter events that might corrupt the place name recognition. Instead of treating these filtered events as noise, however, one could also imagine an application that specifically looks for such events based on temporal clusters. This would enable an automatic calculation of geographic footprints for such events, which could eventually be merged into event gazetteers [35,36]. The fact that every resource carries a time stamp and a user's name can also be used to extract individual space-time prisms [37,38]. This may provide insight into real-world social interactions between the users of photo sharing platforms, such as "who travelled together" or "who went to this party". The implications for privacy, however, are obvious and would require a careful consideration of ethical issues. From this perspective, the photo sharing platforms used in this paper might require more fine-grained mechanisms to give their users control over what information they want to reveal to whom. One method to prevent the automatic generation of such profiles would be to allow users to exclude specific metadata (or combinations thereof) from access through the respective APIs.

6.3 Future Work

The next step in this research will be the combination of the filtering and clustering algorithm presented in this paper with linguistic web crawling approaches. This would make it possible to go beyond place names and their geographic footprints and also extract the corresponding place type, as demonstrated by Uryupina [21]. It is, however, unlikely that a strong place typing can be extracted from user tags. While straightforward types such as city, street or river may still be found frequently enough in the tags for a reliable extraction, it is unlikely that a user tags a picture taken in Soho with section of populated place – the associated feature class (i.e., place type) in GeoNames. However, as for footprints and centroids, such a bottom-up typing scheme would reflect the place types used in common language, as opposed to the often somewhat artificial administrative place types used in current gazetteers. This bottom-up approach should also allow for a more flexible categorization that does not force every named place into exactly one category [32], in order to fully capture the emergent semantics of collections of geotagged content. We also plan to extend the existing implementation to take the temporal nature of geotags into account. This would eventually result in the identification not only of place names, but also of names of events and processes with a spatial character.

Acknowledgments

This research has been partly funded by the SimCat project (DFG Ra1062/2-1 and DFG Ja1709/2-2, see http://sim-dl.sourceforge.net) and the GDI-Grid


project (BMBF 01IG07012, see http://www.gdi-grid.de). Figure 1 contains geotag icons under a Creative Commons license from http://geotagicons.com.

References

1. Jones, C.B., Purves, R.S., Clough, P.D., Joho, H.: Modelling vague places with knowledge from the web. International Journal of Geographical Information Science 22(10), 1045–1065 (2008)
2. Larson, R.R.: Geographic information retrieval and spatial browsing. GIS and Libraries: Patrons, Maps and Spatial Information, 81–124 (April 1996)
3. Goodchild, M.F.: Citizens as voluntary sensors: Spatial data infrastructure in the world of web 2.0. International Journal of Spatial Data Infrastructures Research 2, 24–32 (2007)
4. Bennett, B., Mallenby, D., Third, A.: An ontology for grounding vague geographic terms. In: Eschenbach, C., Gruninger, M. (eds.) Proceedings of the 5th International Conference on Formal Ontology in Information Systems (FOIS 2008). IOS Press, Amsterdam (2008)
5. Henrich, A., Lüdecke, V.: Determining geographic representations for arbitrary concepts at query time. In: LOCWEB 2008: Proceedings of the First International Workshop on Location and the Web, pp. 17–24. ACM, New York (2008)
6. McConchie, A.: The great pop vs. soda controversy (2002), http://popvssoda.com (last visited August 1st, 2009)
7. Keßler, C., Janowicz, K., Bishr, M.: An agenda for the next generation gazetteer: Geographic information contribution and retrieval. In: ACM GIS 2009, Seattle, WA, USA, November 4–6. ACM, New York (2009)
8. Wilske, F.: Approximation of neighborhood boundaries using collaborative tagging systems. In: Pebesma, E., Bishr, M., Bartoschek, T. (eds.) GI-Days 2008. ifgiPrints, vol. 32, pp. 179–187 (2008)
9. Guo, Q., Liu, Y., Wieczorek, J.: Georeferencing locality descriptions and computing associated uncertainty using a probabilistic approach. International Journal of Geographical Information Science 22(10), 1067–1090 (2008)
10. Heuer, J.T., Dupke, S.: Towards a spatial search engine using geotags. In: Probst, F., Keßler, C. (eds.) GI-Days 2007 – Young Researchers Conference. ifgiPrints, vol. 30, pp. 199–204 (2007)
11. Aberer, K., Mauroux, P.C., Ouksel, A.M., Catarci, T., Hacid, M.S., Illarramendi, A., Kashyap, V., Mecella, M., Mena, E., Neuhold, E.J., et al.: Emergent semantics principles and issues. In: Lee, Y., Li, J., Whang, K.-Y., Lee, D. (eds.) DASFAA 2004. LNCS, vol. 2973, pp. 25–38. Springer, Heidelberg (2004)
12. Stahl, G.: Group Cognition: Computer Support for Building Collaborative Knowledge (Acting with Technology). MIT Press, Cambridge (2006)
13. Raubal, M.: Cognitive engineering for geographic information science. Geography Compass 3(3), 1087–1104 (2009)
14. Surowiecki, J.: The Wisdom of Crowds. Anchor, New York (2005)
15. Schlieder, C.: Modeling collaborative semantics with a geographic recommender. In: Hainaut, J.-L., Rundensteiner, E.A., Kirchberg, M., Bertolotto, M., Brochhausen, M., Chen, Y.-P.P., Cherfi, S.S.-S., Doerr, M., Han, H., Hartmann, S., Parsons, J., Poels, G., Rolland, C., Trujillo, J., Yu, E., Zimányi, E. (eds.) ER Workshops 2007. LNCS, vol. 4802, pp. 338–347. Springer, Heidelberg (2007)


16. Janowicz, K., Keßler, C.: The role of ontology in improving gazetteer interaction. International Journal of Geographical Information Science 22(10), 1129–1157 (2008)
17. Hill, L.L.: Georeferencing: The Geographic Associations of Information (Digital Libraries and Electronic Publishing). MIT Press, Cambridge (2006)
18. Casati, R., Varzi, A.C.: Parts and Places. The Structures of Spatial Representation. MIT Press, Cambridge (1999)
19. Goodchild, M.F., Hill, L.L.: Introduction to digital gazetteer research. International Journal of Geographical Information Science 22(10), 1039–1044 (2008)
20. Hastings, J.T.: Automated conflation of digital gazetteer data. International Journal of Geographical Information Science 22, 1109–1127 (2008)
21. Uryupina, O.: Semi-supervised learning of geographical gazetteers from the internet. In: Proceedings of the HLT-NAACL 2003 Workshop on Analysis of Geographic References, Morristown, NJ, USA, pp. 18–25. Association for Computational Linguistics (2003)
22. Goldberg, D.W., Wilson, J.P., Knoblock, C.A.: Extracting geographic features from the internet to automatically build detailed regional gazetteers. International Journal of Geographical Information Science 23(1), 93–128 (2009)
23. Bishr, M., Kuhn, W.: Geospatial information bottom-up: A matter of trust and semantics. In: Fabrikant, S., Wachowicz, M. (eds.) The European Information Society – Leading the Way with Geo-information (Proceedings of AGILE 2007), Aalborg, DK. Lecture Notes in Geoinformation and Cartography, pp. 365–387. Springer, Heidelberg (2007)
24. Guszlev, A., Lukács, L.: Folksonomy & landscape regions. In: Probst, F., Keßler, C. (eds.) GI-Days 2007 – Young Researchers Conference. ifgiPrints, vol. 30, pp. 193–197 (2007)
25. Gruber, T.: Ontology of folksonomy: A mash-up of apples and oranges. International Journal on Semantic Web & Information Systems 3 (2007), http://tomgruber.org/writing/ontology-of-folksonomy.htm (November 2005)
26. Frank, A.: Ontology for spatio-temporal databases. In: Sellis, T.K., Koubarakis, M., Frank, A., Grumbach, S., Güting, R.H., Jensen, C., Lorentzos, N.A., Manolopoulos, Y., Nardelli, E., Pernici, B., Theodoulidis, B., Tryfona, N., Schek, H.-J., Scholl, M.O. (eds.) Spatio-Temporal Databases. LNCS, vol. 2520, pp. 9–77. Springer, Heidelberg (2003)
27. Goodchild, M.F.: Geographical data modeling. Computational Geosciences 18(4), 401–408 (1992)
28. Saeed, J.I.: Semantics (Introducing Linguistics). Wiley-Blackwell (2003)
29. Searle, J.R.: Proper names. Mind 67(266), 166–173 (1958)
30. Codd, E.F.: A relational model of data for large shared data banks. Communications of the ACM 13(6), 377–387 (1970)
31. O'Connor, M., Tu, S., Nyulas, C., Das, A., Musen, M.: Querying the semantic web with SWRL, pp. 155–159 (2007)
32. Shirky, C.: Ontology is overrated – categories, links, and tags. Essay (2005), http://shirky.com/writings/ontology_overrated.html
33. Edelsbrunner, H., Kirkpatrick, D., Seidel, R.: On the shape of a set of points in the plane. IEEE Transactions on Information Theory 29(4), 551–559 (1983)
34. Edelsbrunner, H., Mücke, E.: Three-dimensional alpha shapes. ACM Transactions on Graphics 13(1), 43–72 (1994)


35. Allen, R.: A query interface for an event gazetteer. In: Proceedings of the 2004 Joint ACM/IEEE Conference on Digital Libraries, pp. 72–73 (2004)
36. Mostern, R., Johnson, I.: From named place to naming event: Creating gazetteers for history. International Journal of Geographical Information Science 22(10), 1091–1108 (2008)
37. Hägerstrand, T.: What about people in regional science? Papers in Regional Science 24(1), 6–21 (1970)
38. Miller, H.J.: A measurement theory for time geography. Geographical Analysis 37, 17–45 (2005)

Ontology-Based Integration of Sensor Web Services in Disaster Management

Grigori Babitski1, Simon Bergweiler2, Jörg Hoffmann1, Daniel Schön3, Christoph Stasch4, and Alexander C. Walkowski4

1 SAP Research, Karlsruhe, Germany {grigori.babitski,joe.hoffmann}@sap.com
2 DFKI, Saarbrücken, Germany [email protected]
3 Itelligence AG, Köln, Germany [email protected]
4 Institute for Geoinformatics, Münster, Germany {staschc,walkowski}@uni-muenster.de

Abstract. With the specifications defined through the Sensor Web Enablement initiative of the Open Geospatial Consortium, flexible integration of sensor data is becoming a reality. Challenges remain in the discovery of appropriate sensor information and in the real-time fusion of this information. This is important, in particular, in disaster management, where the flow of information is overwhelming and sensor data must be easily accessible for non-experts (fire brigade officers). We propose to support, in this context, sensor discovery and fusion by “semantically” annotating sensor services with terms from an ontology. In doing so, we employ several well-known techniques from the GIS and Semantic Web worlds, e.g., for semantic matchmaking and data presentation. The novel contribution of our work is a carefully arranged tool architecture, aimed at providing optimal integration support, while keeping the cost for creating the annotations at bay. We address technical details regarding the interaction and functionality of the components, and the design of the required ontology. Based on the architecture, after minimal off-line effort, on-line discovery and integration of sensor data is no more difficult than using standard GIS applications.

1 Introduction

Disasters may be caused by flooding, earthquakes, technical malfunctions, or terrorist attacks, to name a few. The efficient handling of such emergencies, i.e., the management of the measures taken to fight them, is a key aspect of public security. This is especially true in an increasingly tightly interlinked world, where problems in one area may quickly cause problems in connected areas. This phenomenon often causes disasters to exhibit an explosive growth, especially during their early stages. Defensive measures in such a stage are still premature, leading in combination with the explosive growth to what has been termed the "chaos-phase" [22]. Methods for shortening that phase are widely believed to be essential for limiting the damage caused by the disaster. One of the characteristics of the chaos-phase is the overwhelming flow of information that must be managed by the defense organizations, such as fire brigades and


the police. Depending on the scale of the disaster, each organization establishes a crisis team, i.e., a committee of officers deciding which actions to take, and monitoring their execution. To come up with informed decisions, members of the crisis team must process an enormous amount of heterogeneous information, such as messages from the public, feedback from their own forces in the field or from partner organizations, and – last but not least – Geospatial information such as weather conditions and water levels. Our focus herein is on the latter. In the SoKNOS project1, we develop a service-oriented system facilitating, amongst other things, the integration of Geospatial information. This integration is realized in a Geographic Information component (GI Plugin), which offers functionalities to query data from several geospatial web services, to visualize the data in a map component, and to analyze the data through integrated GIS functionalities. Additional analysis capabilities (e.g. simulations) can be integrated by adding external processing services. The difficulty of integrating new information into the map depends on the form the information comes in. Our most basic assumption is that the information is encapsulated in Web services conforming to the standard specifications of the Open Geospatial Consortium (OGC). The integration of basic maps is realized through adding data from Web Mapping Services (WMS). Vector data (e.g. risk objects) can be accessed through Web Feature Services (WFS) and hence requires the creation of suitable queries, which poses serious challenges; indeed, given the stress and pressure of the targeted scenario, pre-specified queries are necessary. An interesting and important middle ground are sensors, accessible through, e.g., the Sensor Observation Services (SOS) as specified by the Sensor Web Enablement (SWE) initiative of the OGC. As sensor data is time-dependent, what the user needs to provide is, essentially, the desired Geographic area, the desired time interval, and the desired properties to be observed. The SOS specification lays the basis for doing so in an interoperable manner. Areas and time points are fully covered by standards. The main problems remaining are:

(I) For identifying observed properties, mediation is required between the terminology of the user and that of the Web service design.
(II) The user may not even know a technical term for the observed property she is looking for, necessitating an option to search by related terms.
(III) For fusing the information of several sensors, data transformation (e.g. units of measurement) is needed, and duplicate data needs to be detected and removed.
(IV) Sensors may become dysfunctional and in such a case need to be replaced with suitable alternative sensors.

Characteristic properties of disaster management are that (II) and (IV) are likely to occur, that the number and types of required sensor information are manifold, that the persons needing them act under high pressure, and that these persons have hardly any IT knowledge. Given this, (I)–(IV) constitute a serious difficulty.

1 Service-oriented architectures supporting networks in the context of public security; http://www.soknos.de


In our work, we have developed and implemented a tool architecture that addresses (I)–(IV), up to a point where discovery and integration of sensor data is no more difficult than using standard GIS applications. The key technique is to make use of semantic annotations in a purpose-designed ontology. The technicalities will be summarized directly below, and detailed later on in the paper. First, we need to clarify that our approach encompasses a separate service registration activity, which contrasts with service usage. These correspond to the two fundamentally different phases in our domain, off-line (prior to the disaster) vs. on-line (during the disaster). On-line, pressured and hectic users need to comfortably discover and integrate sensor data. As the basis for that, our approach assumes that – off-line, in peace and with ample time – each service has previously been registered. Such registration means to acquire the service (finding it in the Web), to create a description including the semantic annotation, and to store that description within a local registry.2 Apart from exploiting the off-line phase in a suitable preparatory way, the distinction between service registration and service usage also serves to decouple these activities, allowing them to be performed by different people. The person performing the registration will also be associated with the fire brigade/police, but she may well have more IT knowledge than typical crisis team members. (That said, this person will clearly not be a logics expert, so creating the semantic annotations needs to be reasonably easy; if it is not, the effort for creating them is very likely to lead to non-acceptance anyhow.) A commonly used definition is that an ontology is a formal, explicit specification of a shared conceptualization [7]. In our context, we define an ontology called the Geosensor Discovery Ontology (GDO). The GDO defines a terminology suitable for describing sensor observations and related entities. Put in simple terms, the GDO contains:

(a) A taxonomy of phenomena, i.e., of properties that can be observed by sensors.
(b) A taxonomy of substances to which phenomena (a) may pertain.
(c) A taxonomy of Geographic objects to which phenomena (a) may pertain.
(d) The relations between (a), (b), and (c).

To ensure sustainable modeling, the GDO design follows the guiding principles of the DOLCE foundational ontology [16,5]. Simply put, DOLCE corresponds to a kind of widely accepted "best practice" for ontological modelling, serving to avoid common modelling flaws and shortcomings. The semantic annotations associate, for a SOS service, each of the service's observed properties with a concept from (a). Clearly, these annotations are easy to create. Our architecture provides a simple user interface for doing so via drag-and-drop. In the obvious manner, the annotations solve problem (I). Since the phenomena (a) are organized in a taxonomy (enabling us to find more general/more specialized sensors), the GDO also provides sophisticated support for problem (IV). Substances and Geographic objects are likely candidates a fire brigade officer will use as related terms; hence (b), (c), and (d) together serve to solve problem (II). Problem (III), finally, is solved by standard transformations and straightforward usage of the SOS output information. It is also required to make the entire functionality easily accessible to the user. Our Graphical User Interface does so via standard paradigms, and intuitive extensions

2 Hence the term "discovery" in this paper refers to finding a suitable sensor, on-line, in a (potentially huge) local registry, not in the Web.


thereof. For service discovery, the area of interest is marked by mouse movements as a rectangle on a map; the desired time points are given by manipulating the boundaries of a time interval; search in the GDO – which from the user's perspective corresponds to selecting the desired observations – is realized by text search combined with taxonomy browsing and following links (given by the relations between pairs of concepts in the ontology). Once services are discovered, fusing and displaying their data amounts to a single drag-and-drop action for the user. The architecture was successfully demonstrated to an evaluation team of German fire brigade and police officers, obtaining a very positive rating; we give some more details on this in Section 6. The paper is organized as follows. Section 2 provides a brief background on the OGC Sensor Observation Service and the Semantic Web. Section 3 introduces concrete use cases that we will use for illustration. Section 4 covers our architecture, detailing, after an overview, the design of the GDO, the semantic annotations, as well as sensor discovery and fusion. Section 5 discusses related work, and Section 6 concludes with summary remarks and a discussion of open issues.

2 Background

We briefly give the most relevant background on the SOS service specification and the Semantic Web domain.

2.1 Sensor Observation Service

The goal of the OGC Sensor Web Enablement initiative is to enable the creation of web-accessible sensor assets through common interfaces and encodings [2]. To this end, the SWE initiative defines standards for the encoding of sensor data as well as standards for web service interfaces to access sensor data, task sensors, or send and receive alerts. The Sensor Observation Service (SOS) is part of the SWE framework and offers pull-based access to observations and sensor descriptions [18]. The SOS operations are grouped into three different profiles: the core profile for retrieving the service descriptions, sensor descriptions and observations; the transactional profile for registering new sensors and inserting new observations; the enhanced profile for offering additional service functionalities. In this work, we focus on the basic operations of the SOS defined in the core profile. The core profile comprises the GetCapabilities, DescribeSensor and GetObservation operations. The GetCapabilities operation returns a description of the service containing information about the supported operations and parameters, as well as the observations which are provided, e.g. the spatial and temporal extent of the observations, the producing sensors, and the observed properties. Sensor metadata like sensor position, calibration information or sensor administrator can be retrieved using the DescribeSensor operation. The sensor descriptions are usually encoded in the Sensor Model Language (SensorML), a data model and XML encoding for sensor metadata [1]. The central operation of the SOS is the GetObservation operation. It offers the possibility to query observations filtered by spatial and temporal extent, producing sensors, certain observed properties, and/or value filters.
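For illustration, a client can retrieve the service description with a simple key-value-pair GET request; the endpoint URL below is hypothetical. GetObservation requests, in contrast, are typically sent as XML-encoded POST bodies carrying the filters just named (spatial and temporal extent, sensors, observed properties, value filters).

    from urllib.parse import urlencode
    from urllib.request import urlopen

    SOS_ENDPOINT = "http://example.org/sos"  # hypothetical SOS endpoint

    def get_capabilities(endpoint):
        # Returns the XML capabilities document: supported operations and
        # parameters, plus the offered observations (spatial/temporal
        # extents, producing sensors, observed properties).
        params = {"service": "SOS", "request": "GetCapabilities"}
        with urlopen(endpoint + "?" + urlencode(params)) as response:
            return response.read()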


The Observations and Measurements (O&M) specification [3] is utilized by the SOS to encode the data gathered by sensors. It defines a model describing sensor observations as an act of observing a certain phenomenon. The basic observation model contains five components: the procedure provides a link to the sensor which generates the value for the observation; the observedProperty references the phenomenon which was observed; the Feature of Interest (FOI) refers to the real-world entity (e.g., a river) which was the target of the observation; the time when the observation was made is indicated by the samplingTime attribute; the result element contains the observation value. The observation acts as a property value provider for a feature: it provides a value (e.g. 27 Celsius) for an observed property (e.g. temperature) of the FOI (e.g. a weather station) at a certain timestamp. The location to which the observation belongs is indirectly referenced by the geometry of the FOI.

2.2 Semantic Web

In a nutshell (and as far as relevant for this paper), the Semantic Web community is concerned with the investigation of how annotations within a formal language can help with performing many tasks in a more flexible and effective way. Specifically, we are herein concerned with a form of semantic service discovery. The idea is that each Web service of interest is annotated with (an abstract representation of) its meaning – what does it do? – and services are discovered by matching this annotation against a discovery query – what kind of service is wanted? – given in the same logic. Since the annotations and queries, formulated relative to a formal domain model encoding complex dependencies, can be far more precise than free text descriptions, this approach has the potential to dramatically improve precision and recall. Semantic discovery is, by the standards of the field, a long-standing topic in the Semantic Web. Earlier approaches were often based on annotating with, and reasoning about, complex logic languages such as 1st-order logic or rich subsets thereof. See e.g. [13] for a classical Description Logics formalization. Arguably, most of these approaches suffer from the prohibitive complexity of creating semantic annotations and discovery queries (and from the prohibitive computational complexity of the required reasoning). A more recent trend in the Semantic Web community is to use more "lightweight" approaches putting less of a burden on these activities, at the cost of reduced generality and power – the slogan being "a little semantics goes a long way" [8]. Our approach falls into this class, with carefully designed technology targeted at providing added value, while keeping the complexity at a level that will lead to actual acceptance by end users (fire brigades etc.) in the relevant domain.
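To make the O&M observation model from Section 2.1 concrete: its five components map naturally onto a record type. A minimal sketch, with field names paraphrasing the components described above and invented example values:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Observation:
        # The five components of the basic O&M observation model.
        procedure: str            # link to the sensor that generated the value
        observed_property: str    # the phenomenon that was observed
        feature_of_interest: str  # real-world entity targeted by the observation
        sampling_time: datetime   # when the observation was made
        result: float             # the observation value

    # E.g., 27 Celsius for the observed property "temperature" of a weather station:
    obs = Observation("urn:example:sensor:ws-42", "temperature",
                      "weather-station-42", datetime(2009, 12, 3, 14, 0), 27.0)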

3 Example Scenario

In our example scenario, the floodwater level of the Rhine river in Germany rises immensely during a long-lasting thunderstorm. Cologne and the industry park of Dormagen are affected by the flood. People have to be evacuated and organizations from other German federal states are called in to support the disaster management. After a dike has broken and a chemical plant near the Rhine river is flooded, explosions occur which


release pollutants into the air and the water. The emergency staff as well as residential areas around the chemical plant are threatened by the released air and water pollutants. We consider the following use cases for the proposed architecture:

(A) Discovery and fusion of heterogeneous water level measurements. To get a more precise overview, all water gauges along the Rhine upstream of Cologne shall be integrated into the SoKNOS system. The sensor data is provided by different SOS services, using different identifiers for the observed phenomenon (e.g. water level, water gauge, gauge height), using different units of measurement, and partially overlapping each other. The challenges addressed by our architecture are to mediate between the identifiers and the terminology of the non-expert user, to make the sensors easy to find among a huge set of available sensors, to merge multiple data points, and to recognize redundant data.

(B) Replacement of a water level measurement sensor. The data displayed to the crisis team must of course be up-to-date. Since access to SOS services is pull-based, the map component sends new queries periodically. One of the sensors may have become damaged, and hence may now be out of order. The challenge addressed by our architecture is to recognize this, and to discover and integrate a suitable replacement sensor automatically.

(C) Discovery and fusion of heterogeneous air pollutant concentration measurements. With conventional methods, the monitoring of air pollutant concentration is a time-consuming and complicated task. There are only a few vehicles with appropriate sensors. Hence the spatial resolution of the measured values is rather coarse-grained. It takes considerable time for the vehicles to arrive at the area of interest, and the measurements are transferred through verbal communication, prone to delays and misunderstandings. This can be improved considerably by leveraging resources – SOS services – that happen to be available in the particular scenario: the monitoring systems of chemical plants near the flooding. These SOS services could of course also be integrated off-line into conventional systems. But our approach allows to discover and use them with ease, based on minimal integration effort. Indeed, since registering a service requires hardly more effort than knowing where the service is and which phenomena it observes (see Section 4.3 below), it is conceivable that the integration is performed on-line, e.g. by a system administrator, upon demand by the crisis team members.

4 Semantic Sensor Integration

We now explain in detail our architecture, its individual components, and their design and functionality. We begin in Section 4.1 with an overview, giving a rough picture of the components and their interaction. We then delve into the details, describing in Section 4.2 the design of our ontology, explaining in Section 4.3 our semantic annotations and how they are created, describing in Section 4.4 our methods for sensor discovery, and describing in Section 4.5 our methods for sensor data extraction and fusion. All user interactions are illustrated with screen shots, and all methods are exemplified with the use cases introduced in the previous section.


4.1 Architecture

Figure 1 shows an overview of our architecture. There are six components. Two of these are graphical user interfaces (GUIs, shown in the top left part of the figure), two are backend components (shown in the bottom left part), and two are data stores (shown on the right).


Fig. 1. An overview of our architecture

The Geographic Information System (GIS) GUI is basically a standard GIS map component, extended to cater for the required interactions with the Web Service Registry (WSR) GUI and the Joined Sensor Engine (JSE). The Web Service Registry GUI is the user interface of the Web Service Registry, which serves for registering and discovering Web service descriptions – in our case, descriptions of SOS services, including their semantic annotations. The Joined Sensor Engine extracts the data from a set of discovered services. It performs the required data transformations and detects duplicate data. Most importantly, it monitors the performance of the services, and replaces them – by posing a suitable discovery query to the WSR – fully automatically in case of failure. The Geosensor Discovery Ontology (GDO) is a formalization of the domain, i.e., of the relevant terminology relating to sensor data, as outlined in the introduction. The Web Service Registry database (DB), finally, is the storage container for service descriptions. A brief summary of the interactions is as follows:

– GIS GUI with Web Service Registry GUI. The user specifies a bounding box by marking a rectangle on the map within the GIS GUI; the bounding box is sent to the Web Service Registry GUI, to form part of the discovery query. The discovery query is completed in the Web Service Registry GUI, and the discovered services are sent back to the GIS GUI. From that point on, the GIS GUI is responsible for displaying the data of these services.
– Web Service Registry GUI with Web Service Registry. Discovery queries are created in the Web Service Registry GUI, comprising the desired area (the bounding box), the desired time interval, as well as the desired kind of phenomenon to be


observed. The queries are sent to the Web Service Registry, which performs the discovery and sends the discovered service descriptions back to the Web Service Registry GUI. Additionally, the user may enter a new service description (possibly including a semantic annotation) in the Web Service Registry GUI, which is then sent to the Web Service Registry for storage.
– GIS GUI with Joined Sensor Engine. Whenever the GIS GUI needs to extract up-to-date data from the discovered sensors, it sends their descriptions to the Joined Sensor Engine. Based on the descriptions, the Joined Sensor Engine connects to the services, and extracts and fuses their data, which is then sent back (as a single data set) to the GIS GUI.
– Joined Sensor Engine with Web Service Registry. Whenever service monitoring inside the Joined Sensor Engine finds that a sensor has failed, it queries the Web Service Registry for replacement services, delivering equivalent data.
– Web Service Registry with Web Service Registry DB. The Web Service Registry connects to the database for storage and retrieval of service descriptions.
– Web Service Registry GUI with Geosensor Discovery Ontology. For specifying a discovery query, the user needs to find the desired concepts in the Geosensor Discovery Ontology, i.e., suitable phenomena or related entities. For that, the Web Service Registry GUI uses the structure of the Geosensor Discovery Ontology, which is read from the storage.
– Web Service Registry with Geosensor Discovery Ontology. Discovery is made not only directly on the concepts in the query, but also indirectly through the connections within the Geosensor Discovery Ontology, read from the storage.
– Joined Sensor Engine with Geosensor Discovery Ontology. For the purpose of data transformation, the Joined Sensor Engine needs information from the Geosensor Discovery Ontology in order to detect equivalent observed properties.
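The extract-fuse-replace behaviour of the Joined Sensor Engine can be pictured as in the following minimal sketch. All class and method names here (get_observation, discover, and the fusion helpers) are hypothetical stand-ins for the SoKNOS interfaces, not the actual API:

    def convert_units(data):
        # Placeholder for the standard unit transformations applied during fusion.
        return data

    def deduplicate(data):
        # Placeholder for detecting and removing duplicate data points.
        return list(dict.fromkeys(data))

    class JoinedSensorEngine:
        def __init__(self, registry):
            self.registry = registry  # the Web Service Registry (WSR)

        def fetch_fused(self, services, query):
            # Extract data from all services; replace failed ones via the WSR.
            data = []
            for service in list(services):
                try:
                    data.extend(service.get_observation(query))
                except IOError:
                    # The sensor has failed: discover services delivering
                    # equivalent data (same annotated quality, area and time).
                    services.remove(service)
                    for replacement in self.registry.discover(query):
                        services.append(replacement)
                        data.extend(replacement.get_observation(query))
            return deduplicate(convert_units(data))  # one fused data set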

These functionalities and interactions will now be explained in detail. We start by detailing the structure of the GDO, which lies at the heart of our approach.

4.2 Ontology Design

The GDO is formalized in F-Logic [12], a logic-based programming language which we chose mainly for practical reasons: F-Logic provides sufficient modelling power for our purposes, while at the same time being computationally efficient in the reasoning tasks we require.3 In what follows, we do not delve into the details of the formalization. Instead, we describe the design of the GDO at an intuitive level. The GDO is designed to support the discovery of SOS services, so, naturally, it builds on the relevant specifications [3,18]. SOS service descriptions contain keywords (called "observed properties" in O&M [3]) indicating the properties measured by the sensor. These properties are not standardized, but the CF Metadata4 contains an (incomplete) collection. The GDO models those properties relevant for our application, as well as

3 There is also a version of the GDO formulated in the standard Description Logic based language OWL [17]. In our work, this version mainly serves as a reference model. For the sake of simplicity, we do not discuss the OWL version and its relation to the F-Logic version.
4 NetCDF Climate and Forecast (CF) Metadata Convention (http://cf-pcmdi.llnl.gov).


some supplementary entities, in the form of taxonomies of categories. Our technology connects those to real sensors via F-Logic rules. An important aspect of the GDO is that it follows well-established ontological design principles. We align the GDO with the well-known DOLCE foundational ontology. DOLCE essentially is a kind of widely accepted "best practice" for ontological modelling. This serves to avoid common modelling flaws and shortcomings. For details regarding DOLCE, we refer the reader to the literature [16,5,6]. In what follows, a rough understanding of the following four concepts will suffice. Endurants and perdurants are distinct regarding their behavior in time. Endurants are wholly present at any time they exist, whereas perdurants extend in time by accumulating different temporal parts. Perdurants embrace entities generally classified as events, processes, and activities. An endurant "lives" in time by participating in some perdurant(s). For example, a building (endurant) participates in its lifespan (perdurant). In the GDO, we use two sub-categories of endurant: "non-agentive physical object" and "amount of matter". Qualities are the basic entities we can perceive or measure, for example the volume of a lake, the color of a rose, or the length of a street. DOLCE distinguishes physical and temporal qualities, which pertain to endurants and perdurants, respectively. Roles are played by endurants. For example, a physical object may play the role "observed object", but it may also play the role, e.g., of an "operation site" or of a "target". To exemplify the importance of such ontological precision: in O&M, some vital concepts are under-specified or ambiguously defined. For example, "observed property" and "phenomenon" are defined vaguely and used more or less as synonyms. According to DOLCE, they would be a mixture of endurant, perdurant, and quality (see a detailed discussion in [19]). Similarly, "feature of interest" is not perceived as a role (which is done according to DOLCE), but instead as an endurant – although, quite clearly, being observed is not a characteristic property of an object. The Rhine is a river; will it become a different object because it is being observed? Such terminological imprecision is unproblematic amongst members of a closed community who know what is meant, but may cause problems when crossing community boundaries – e.g. during a disaster. That said, the GDO is not dogmatic in its alignment to DOLCE; we follow the DOLCE guidelines where sensible, and opt for pragmatic solutions in cases where a full solution would unnecessarily complicate matters. The GDO is based on the design pattern depicted in Figure 2. That is, the ontology is built as a specialization of that pattern, extending the pattern's high-level categories with whole taxonomies, i.e., with hierarchies of more concrete categories, and instantiating the high-level relations with relations between such concrete categories. In what follows, we briefly explain the main aspects of the design. At first glance, one sees that the pattern does not only cover sensor observations – observable qualities – but also weather phenomenon, substance, geosphere region, and boundary of geosphere regions. This enables search by related terms: rather than laboriously searching through a huge set of observable qualities, the user may select a related concept which pertains to the desired quality.5 The advantage is that the

5 The relation may be direct or indirect; hence the has quality and has indirect quality relations in Figure 2. To exemplify the difference: water (directly) has a temperature; in contrast, pressure is not a property of the atmosphere, but is often (indirectly) associated with it.


Fig. 2. The design pattern underlying the GDO (Geosensor Discovery Ontology), slightly simplified for presentation. Concepts inherited from DOLCE are marked by inscription and color.

taxonomies of related concepts tend to be much smaller than that of the possible sensor observations. For example, for a non-expert user, "wind direction" (or "water level") is probably much easier to find via "wind" (or "river") than via browsing the taxonomy of observable qualities. That said, browsing is of course also an option in our system. In the GDO, weather phenomenon captures things such as rain shower, wind, fog; substance is oriented towards chemical terminology, distinguishing between pure substances and blended substances, covering things such as oxygen and nitrogen monoxide (pure substances), and salt water (a mixture of substances); geosphere region covers things such as atmosphere, ground, body of water; boundary of geosphere regions covers things such as earth surface, water surface. If needed, these four top-level categories can easily be augmented by additional ones. One simply adds the new categories, classifies them according to DOLCE, and gives them the played by relation to observed object – which is defined as a role, cf. the above discussion. In accordance with DOLCE, observable qualities are distinguished into temporal ones (e.g. speed, flow rate) and physical ones (e.g. temperature, distance). Another aspect worth noting is that observable qualities may be related – one quality informs about another – or even equivalent – one quality informs exactly about another. An example of the former is fog density, which informs about range of sight. The two ways of observing wind direction (from where vs. whereto) exemplify the latter.

4.3 Semantic Annotation

As stated, our semantic annotations are simple, in order to ensure practicality for organizations such as fire brigades. The precise form of the annotations is as follows:

Definition 1. Assume that s is a SOS service. A service description of s is any set D that contains the URL of s as well as a semantic annotation α of s, defined as follows.


Assume that OP(s) = {op_1, . . . , op_k} is the set of observed properties supported by s, across offerings, and assume that OQ is the set of concepts in the GDO that are sub-concepts of observable quality. Then a semantic annotation of s is a partial function α : OP(s) → OQ.

Sub-concept here refers to the taxonomic structure of the GDO: a concept c_1 is a sub-concept of a concept c_2 iff c_1 lies below c_2 (directly or indirectly) in the tree of concepts. In practice, and in our prototype, the form of the service descriptions (i.e., the precise set of attributes stored for each service) is of course fixed. What that form is – other than that it complies with Definition 1 – is not important to this work. Note that α is a partial function, hence allowing the annotation to be incomplete. This makes it possible to register a service without giving it a full semantic annotation. In order to use a particular output (a particular observed property) of a service with our architecture, that output must be annotated, i.e., be in the domain of the annotation function α. Each observed property is characterized by a single concept of the GDO. This is appropriate because it complies well with the intended meaning of the SOS specification: each sensor output corresponds to one atomic category of possible observations. It is important to note that such a simple correspondence would not be valid for more complex OGC services. For example, it would make no sense to restrict the annotation of a WFS service to a single concept in an ontology: since WFS services are databases that may contain a whole variety of data, a description of their data content would definitely need to be some sort of combination of concepts (see also [15]). From a Semantic Web perspective, ours is a classical example of a lightweight approach, cf. Section 2.2. In our architecture, the simple semantic annotations as per Definition 1 suffice to conveniently discover and, where needed, replace SOS services (details follow in the next sub-sections). Creating the annotations can, obviously, be supported in a straightforward manner using classical GUI paradigms. Figure 3 shows a screenshot of our implemented tool, in a situation corresponding to use case (C) of Section 3, i.e., the annotation of air pollutant concentration measurements with concepts from the ontology. As can be seen in Figure 3, the WSR GUI contains a tab for annotating sensor services. The WSR displays the service's observed properties, as well as any α assignments that have already been made. In a separate part of the window ("Konzepte"), the ontology is displayed. One can search for concepts in the ontology via several options that will be detailed in the next section, when we describe how to create discovery queries. Once the desired concept is found, one simply drags it onto the corresponding observed property – in Figure 3, the concept "Lufttemperatur" is dragged onto the output property "airtemperature". The new assignment is stored in the service's annotation α. If the output was already assigned previously, that assignment is overwritten. Clearly, this annotation process requires no more expertise than a basic familiarity with computers, as well as some familiarity with SOS service observations and with the GDO. It is realistic to assume that such expertise will be available, or easy to create, within the relevant organizations and their partners.

Fig. 3. A screen shot of our GUI for creating semantic annotations. Since our tool is built in cooperation with (and for the use of) German disaster defence organizations, the inscriptions are in German; explanations are given in the text.
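In code, an annotation as per Definition 1 amounts to little more than a dictionary mapping observed properties to ontology concepts. The following sketch is illustrative only; the concept names are made up rather than taken from the GDO:

    OBSERVABLE_QUALITIES = {"observable quality", "physical quality",
                            "temperature", "air temperature", "water level"}  # made-up sub-concepts

    class ServiceDescription:
        def __init__(self, url, observed_properties):
            self.url = url
            self.observed_properties = set(observed_properties)  # OP(s)
            self.annotation = {}  # the partial function alpha: OP(s) -> OQ

        def annotate(self, observed_property, concept):
            # Assign (or overwrite) the concept for one observed property.
            if observed_property not in self.observed_properties:
                raise ValueError("not an observed property of this service")
            if concept not in OBSERVABLE_QUALITIES:
                raise ValueError("annotations must map to observable qualities")
            self.annotation[observed_property] = concept

    s = ServiceDescription("http://example.org/sos", {"airtemperature"})
    s.annotate("airtemperature", "air temperature")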
4.4 Sensor Discovery

As is common in semantic service discovery, cf. Section 2.2, the discovery is formulated as a process of matching the available services against a discovery query. In our


approach, the semantic annotations serve for terminology mediation and for allowing indirect matches. The latter enables the user to find the desired services via intuitively related terms, rather than having to laboriously search for the actual technical term. Service descriptions and the semantic annotations they contain were defined already in Definition 1. Discovery queries and matches are defined as follows:

Definition 2. Assume that CO is the set of all concepts in the GDO. A semantic discovery query sQ is a subset sQ ⊆ CO. Assume that D is the description of a service s, that OP(s) = {op_1, . . . , op_k} is the set of observed properties supported by s, and that α ∈ D is the semantic annotation of s. Then sQ and s match in op_i iff op_i is in the domain of α, α(op_i) = c_0, and there exists q_0 ∈ sQ such that q_0 is connected to c_0. The latter notion is defined inductively as follows:

(1) Every c ∈ CO is connected to itself.
(2) If the GDO contains a relation with domain c_1 ∈ CO and range c_2 ∈ CO, then c_1 is connected to c_2.
(3) If c_1 ∈ CO is a super-concept of c_2 ∈ CO, then c_1 is connected to c_2.
(4) If c_1 ∈ CO is connected to c_2 ∈ CO, and c_2 is connected to c_3 ∈ CO, then c_1 is connected to c_3.

In words, a discovery query is just some collection of terms from the ontology. What the discovery does is to look for services s whose annotation contains a term c_0 to which one of the query terms (namely q_0 in the definition) is "connected". All these services s – along with the relevant observation op_i and ontology term c_0 – are returned, provided the spatial and temporal aspects match as well (see below).
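Operationally, checking whether a query concept is connected to an annotation concept is a reachability test over the GDO's relation edges and downward taxonomy edges, as the next paragraph explains in detail. A minimal sketch on a toy ontology fragment (all concept names and edges are invented):

    RELATIONS = {("river", "water level")}       # e.g. a has-quality edge
    SUBCONCEPTS = {("body of water", "river")}   # super-concept -> sub-concept

    def connected(q, c):
        # True iff q is connected to c per items (1)-(4) of Definition 2.
        frontier, seen = [q], {q}
        while frontier:
            x = frontier.pop()
            if x == c:  # item (1): every concept is connected to itself
                return True
            for (a, b) in RELATIONS | SUBCONCEPTS:  # items (2) and (3)
                if a == x and b not in seen:
                    seen.add(b)
                    frontier.append(b)  # item (4): transitivity via the search
        return False

    print(connected("body of water", "water level"))  # True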


Connected in Definition 2 refers to a combination of the relations in, and the taxonomic structure of, the GDO. It is best understood as defining a set of possible paths through the ontology. Item (1) in Definition 2 says that empty paths are allowed: a query concept q is, of course, relevant to itself. Item (2) says that a path may follow a relation between two concepts c_1 and c_2 – if c_1 is relevant to the query, then c_2 is as well, because c_1 relates to c_2. For example, c_1 may be the concept river, the relation may be has quality, and c_2 may be water level; cf. use case (A) of Section 3. Item (3) in Definition 2 says that a path may go downwards in the taxonomy, i.e., go from c_1 to c_2 if c_1 lies above c_2 in the taxonomy. This is so because, if c_1 is relevant to the query and c_2 is a special case of c_1, then clearly c_2 is relevant to the query as well. For example, the query concept may be body of water, which is a super-concept of river, from which by item (2) we may get to water level. Item (4) states transitivity, a technical vehicle for expressing concisely whether or not there exists a path between two concepts. Items (1)–(4) in Definition 2 are implemented in a straightforward way using F-Logic rules. Such a rule takes the form rule-head ⇐ rule-body, meaning that truth of the rule body (right hand side) implies truth of the rule head (left hand side). Rule head and body are composed of F-Logic atoms. Item (4), e.g., is implemented by the rule ∀X,Y,Z connected(X,Z) ⇐ connected(X,Y) ∧ connected(Y,Z).

A Spatial User Similarity Measure for Geographic Recommender Systems
C. Matyas and C. Schlieder

The count decisions_n(u) of decisions a user u made at a position n is at least 1 by definition. The popularity of that position is calculated as follows, where U_n is the set of users that made decisions at the same position:

popularity(n) = Σ_{u ∈ U_n} (1 + log decisions_n(u))    (3)
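A minimal sketch of Equation (3); the decision counts are invented, and the base of the logarithm (natural, here) is an assumption since the text leaves it unspecified:

    from math import log

    # For each user who made at least one decision at position n, their count.
    decisions_at_n = {"user_a": 1, "user_b": 5, "user_c": 2}

    popularity_n = sum(1 + log(count) for count in decisions_at_n.values())
    print(round(popularity_n, 2))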

Fig. 2. Sets of spatial choices made by two users

In order to use the popularity as a diagnostic value, we make the following assumption: people that share more lower-ranked decisions are more similar than

people that share more higher-ranked decisions. Figures 3a and 3b illustrate the impact of that assumption on the similarity. If we weighted the decisions equally, the Tanimoto measure would be the same for both examples: sim(A, B) = |A∩B| / |A∪B| = 3/6 = 0.5. Depending on the weighting of the different ranks, the second case (Figure 3b) would be much more similar, because the value of the overlapping lower-ranked decisions should be greater than in the first case (Figure 3a). The problem of the ranking approach is its high dependency on the geographic context, which can hide regional differences. The larger we extend the geographic context, the smaller the relative significance of the individual ranking becomes. The most important issue for the similarity of two users is the relative ranking with respect to an overall context. It is difficult to capture the differences between two users who only made spatial decisions in a city when considering the whole country. The use of different levels of spatial separation offers the possibility to better model the user behavior. What we are looking for are clearly separable environments in which users usually make decisions. If we identify the typical environments in which users circulate in order to make spatial decisions, we can reflect them in the partonomy as separate regions. For example, tourists, a main source of images in public image galleries, commonly visit a few particular cities, resulting in a very high coverage of individual cities. In this case we profit from a representation of cities in our partonomy. There are a number of predefined partonomies, like the Nomenclature of Territorial Units for Statistics (NUTS), that can be used as background knowledge for defining these geographic contexts. NUTS results in a hierarchical partitioning of space where each level in the partonomy comprises a tessellation. The regions of a tessellation cover the complete space without overlap; see Figure 4 for an example. An advantage of using a tessellation is also the complete coverage of all possible spatial decisions, which is advisable. We have not investigated overlapping regions yet, which would lead to slightly different results, as some spatial decisions would be considered multiple times. We decided that a hierarchical partonomy gives us the necessary levels of granularity. This allows a broader interpretation of spatial decisions, as mentioned before. We are also able to accumulate decisions made in lower layers to obtain a more general notion of spatial decisions, such as the cities that a user visited instead of point-based locations. The hierarchical partonomy can be described as a graph G(N, E), where N defines a set of nodes and E a set of edges. The nodes n ∈ N of the graph represent regions and each edge e ∈ E represents a part-of

Fig. 3. Different ranking combinations: (a) low diagnostic value; (b) high diagnostic value

Fig. 4. Tessellation partitioning: (a) spatial space; (b) decision space

relationship of two regions. The lowest level in our partonomy contains regions of cities; see Table 1 for an example. For the recognition of common spatial decisions we have to introduce another layer in our partonomy. This is due to the fact that even when two users visit the same location, their GPS coordinates will show slight differences. To compensate for this, the lowest layer consists of clusters of point-based features that permit recognition of the same spatial decision. We implemented a software application called the heatmapper that uses a geographic approach to cluster these features. We can also use the results of different approaches that extract places from such datasets (Ahern et al. [7], Snavely et al. [9]). Even an additional hand-made modeling of clusters of spatial choices is imaginable. Whatever the case, if the model clusters all similar decisions, we can just choose any one of them to represent the cluster, e.g. a random image for a cluster of images. The regions occupied by the clusters, and not the points, are now the smallest geographic objects of interest and serve as leaves in the hierarchical tree. We introduced the term cluster of points of view (CPV) for a cluster of spatial decisions made while shooting images. Generally, we will talk about clusters of spatial decisions (CSD). For each node we calculate two different values in order to support our similarity measurements: its popularity and, derived from that, its diagnostic value. The popularity is defined recursively: we first measure the popularity as described in Equation 3 for every leaf; the popularity of every other node is defined as the sum of the popularities of its children:

popularity(n) = Σ_{c ∈ children(n)} popularity(c)    (4)

The second value we measure can be seen as the inverse of the popularity, as it captures the diagnostic value of the node, reflecting Tversky's notion of diagnosticity mentioned above. The weight w(n) of a node reflects that certain choices occur more often or are considered more important than others, while the more common decisions receive a lower value than the more personal ones. As a supporting value we calculate the information content of a node by the following formula, where rp(n) is the relative popularity of a node in relation to the popularity of its siblings (siblings are nodes that share the same parent):

information(n) = −log_2 rp(n)    (5)


Table 1. Partonomy used for evaluation

Region                   Popularity   Images   Users
World                                           5766
  Germany                                       2763
    Baden-Württemberg                           1410
      Stuttgart             2904.76     3471     808
      Freiburg              1684.90     2047     505
      Tübingen               544.14      638     171
    Bavaria                                     1170
      Munich                2649.25     2977     637
      Nürnberg               138.72      195      44
      Bamberg               1844.70     4692     261
      Würzburg              1054.94     1209     298
    Berlin                                       350
      Berlin                2854.12     3258     350
  Italy                                         2042
    Toscana                                     1693
      Pisa                   747.14      793     494
      Florence              3238.81     3988     876
      Lucca                 1328.80     1569     498
    Lazio                                        508
      Rome                  2535.42     2858     430
      Santa Marinella         44.04       49      24
      Fiumicino               88.42       99      45
      Aprilia                241.35      297      18
  France                                        1461
    North France                                1461
      Paris                 3487.95     3784     598
      Le Mans                315.35      390      89
      Caen                   624.69      691     182
      Saint-Malo            1618.83     1735     658

This scales the relative popularity on a logarithmic scale and flattens big differences amongst the popularity values. The information content quantifies the value of information about a user's decision if that user participates in the corresponding node. We say that a user participates in a node if one of his spatial choices was made inside the region represented by the node. Obviously, a user participated in a node if he participated in at least one of its children nodes. The weight function w(n) for a node n is measured as follows and scales the information content to a value between zero and one:

w(n) = information(n) / max_{s ∈ siblings(n)} information(s)    (6)
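Equations (4)–(6) can be realized in a few lines. The partonomy fragment below uses two leaf popularities taken from Table 1, and the sibling set of a node is assumed to include the node itself:

    from math import log2

    children = {"Bavaria": ["Munich", "Bamberg"], "Munich": [], "Bamberg": []}
    leaf_popularity = {"Munich": 2649.25, "Bamberg": 1844.70}
    parent = {"Munich": "Bavaria", "Bamberg": "Bavaria"}

    def popularity(n):
        kids = children[n]
        return leaf_popularity[n] if not kids else sum(popularity(c) for c in kids)  # Eq. (4)

    def information(n):
        siblings = children[parent[n]]
        rp = popularity(n) / sum(popularity(s) for s in siblings)  # relative popularity
        return -log2(rp)  # Eq. (5)

    def weight(n):
        siblings = children[parent[n]]
        return information(n) / max(information(s) for s in siblings)  # Eq. (6)

    print(weight("Bamberg"))  # 1.0: the less popular sibling is the most diagnostic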

The different participation patterns – in the sense of collections of spatial decisions for different spatial contexts – can be evaluated to calculate the similarity between


two users. We discuss different approaches in the next section. The tree structure allows us to exclude lower layers from consideration, which has a number of benefits. It reduces the complexity of the calculation, at the cost of reducing the accuracy of the similarity. Additionally, we can find similarity that would otherwise not be evident due to a lack of any common participation on the lowest level: two users may have visited the same city without having participated in the same places in that city, or, as a more extreme example, users may have participated in the same country but not in the same cities. Each measurement should be able to differentiate between such different levels of possible overlap of spatial decisions.

4 User-Based Collaborative Filtering

The initial task for this section is to generate a personal recommendation for users without explicit semantic information. To this end we use an implementation of prototype theory as stated by Rosch [28]. Every user has provided prototypes in different geographic contexts of the partonomy, and these have been accumulated by the weighting of section 3 into a semantic model of the location: the ranked children of a node denote a typical conceptualization of that region. Using that ranking we can give a rather impersonal recommendation by returning the most popular decisions of this set of children; we will use this approach as a baseline for the evaluation in section 5. In order to give a more personal recommendation, we base the calculation of a concept not on the whole community but on the users most similar to the initiator. Following Rosch, we use the prototypes of the users that shared the same experiences before to generate a concept of the region for that specific user group alone. This idea of using implicit user semantics can be seen in our implementation of user-based collaborative filtering, which we adopted for the recommendation of geographic objects. Since user-based collaborative filtering was introduced by the GroupLens system [5], it has always followed the same principles. Mandatory for the method is constant feedback from the user; in the original work, ratings of news articles were used as user feedback. Based on these ratings r \in R, the first step towards a recommendation is the identification of the most similar users. The GroupLens system used the Pearson correlation coefficient of the users' rating vectors R_u = (r_1, \ldots, r_n) as the similarity measurement. As a final step, the opinions of the most similar users N_u \subset U about a yet unrated item i are aggregated and used as a predicted rating \hat{r}_{u,i}, where \bar{R}_u is the average rating of user u:

\hat{r}_{u,i} = \bar{R}_u + \frac{\sum_{u' \in N_u} (r_{u',i} - \bar{R}_{u'}) \cdot sim(u, u')}{\sum_{u' \in N_u} |sim(u, u')|}    (7)

We take the same basic steps, but we use different similarity measurements based on the observations made in section 3 and an adapted aggregation approach. Both steps (finding similar users and aggregating their experiences) are described in the following.
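As a concrete illustration of this aggregation step, the following is a minimal sketch of the prediction rule in equation (7). The data structures (a ratings dictionary and a similarity function) are assumptions of the sketch, not the authors' implementation.

def predict_rating(user, item, neighbors, ratings, sim):
    # ratings: {user: {item: rating}}; neighbors: the set N_u of most
    # similar users; sim(u, v): any user-similarity function.
    mean = lambda u: sum(ratings[u].values()) / len(ratings[u])
    num = den = 0.0
    for v in neighbors:
        if item in ratings[v]:
            s = sim(user, v)
            num += (ratings[v][item] - mean(v)) * s
            den += abs(s)
    # Fall back to the user's mean rating if no neighbor rated the item.
    return mean(user) + (num / den if den else 0.0)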


The first question we have to answer is: how similar are two users with respect to the spatial choices they made in the weighted partonomy tree? We specify four possible measurements, which are based on different levels of the derived semantic model (sketches of the corresponding computations follow after the list).

1. Single-layer feature similarity (SFS): In a typical user-based recommender, the feature vector I_u of each user is measured against the feature vector of another user, using a correlation metric like the cosine similarity or the Pearson correlation coefficient, to find their similarity value. In our case we take the nodes of one level as the features of a user's feature vector (for example, all CSDs). The values of the vector are the numbers of decisions a user made in each node. The similarity of two users u_a, u_b \in U is the comparison of these vectors using the cosine similarity. We chose the cosine in order to compensate for differences in the number of images in one node, as we are more interested in the relative distribution of images over the nodes:

sim_{SFS}(u_a, u_b) = \frac{I_{u_a} \cdot I_{u_b}}{|I_{u_a}| \cdot |I_{u_b}|}    (8)

2. Two-layer feature similarity (TFS): This measurement focuses on a single layer of the partonomy and calculates the cosine similarity for each node independently, using the children of that node as the feature set. The similarity in each node, a value in the interval [0,1], is used to scale the weight w(n) of the node; the sum of the scaled node weights is then divided by the sum of the original weights. Nodes are thus weighted depending on their relative popularity, and we only take nodes into account that both users have visited. sim_{SFS}(u_a, u_b, n) is the cosine similarity restricted to the children of node n as feature vectors:

sim_{TFS}(u_a, u_b) = \frac{\sum_{n \in cities} sim_{SFS}(u_a, u_b, n) \cdot w(n)}{\sum_{n \in cities} w(n)}    (9)

Generally, we can take any level in the partonomy graph and calculate the cosine similarity on the next lower level; for example, we can calculate the feature similarity based on the cities to measure the similarity on the level of federal states, as shown in Table 1.

3. Two-layer information similarity (TIS): This measurement uses the same approach as the two-layer feature similarity, but is based on a similarity that takes the information content (equation 5) of each CSD into account, as introduced in section 3. The measure sim_i(u_a, u_b, n) computes similarity based on the information values in a Tanimoto-style scheme in the context of one specific node n, where A_n and B_n are the sets of participating children of the parent node n with respect to user u_a and user u_b:

sim_i(u_a, u_b, n) = \frac{\sum_{c \in A_n \cap B_n} information(c)}{\sum_{c \in A_n \cup B_n} information(c)}    (10)

sim_{TIS}(u_a, u_b) = \frac{\sum_{n \in cities} sim_i(u_a, u_b, n) \cdot w(n)}{\sum_{n \in cities} w(n)}    (11)


4. Geographic coverage similarity (GCS): This measurement takes the common coverage on every level of the partonomy into account. As most users show some common behavior at the higher levels of the partonomy, this measurement yields relatively larger values than the other measurements. However, we are able to find a similarity even if the number of common decisions in the CSDs is low or zero. N_{u_a}, N_{u_b} \subset N are the subsets of nodes in the partonomy graph in which user u_a and user u_b, respectively, made a spatial decision:

sim_{GCS}(u_a, u_b) = \frac{\sum_{n \in N_{u_a} \cap N_{u_b}} w(n)}{\sum_{n \in N_{u_a} \cup N_{u_b}} w(n)}    (12)
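As announced above, the following sketch illustrates the building blocks of these measures. It is a minimal illustration under assumed data structures (decision counts per node, information contents per node, node weights), not the authors' code; all names are hypothetical.

import math

def sim_sfs(counts_a, counts_b):
    # Eq. (8): cosine similarity of two users' decision-count vectors
    # over the nodes of one partonomy level.
    nodes = set(counts_a) | set(counts_b)
    dot = sum(counts_a.get(n, 0) * counts_b.get(n, 0) for n in nodes)
    norm_a = math.sqrt(sum(v * v for v in counts_a.values()))
    norm_b = math.sqrt(sum(v * v for v in counts_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def sim_i(children_a, children_b, info):
    # Eq. (10): Tanimoto-style ratio over information content, where
    # children_a / children_b are the children of one node in which
    # user a / user b participated.
    inter = sum(info[c] for c in children_a & children_b)
    union = sum(info[c] for c in children_a | children_b)
    return inter / union if union else 0.0

def sim_gcs(nodes_a, nodes_b, w):
    # Eq. (12): weighted overlap of all partonomy nodes in which the
    # two users participated.
    inter = sum(w[n] for n in nodes_a & nodes_b)
    union = sum(w[n] for n in nodes_a | nodes_b)
    return inter / union if union else 0.0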

Basically, two users will be highly similar if they visited the same European countries, within these countries chose similar regions, and within the regions comparable cities. Having established the similarity to other users, we are finally able to calculate a personal weighting of the nodes in the graph. In order to give each node a personal weighting w_{personal} we use the following function, where the binary function \gamma(u, n) equals one if the user u participates in the node n and zero otherwise, and U_{sim} \subset U is the set of nearest neighbors of user u:

w_{personal}(u, n) = \sum_{u_s \in U_{sim}} sim(u, u_s) \cdot \gamma(u_s, n)    (13)
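A minimal sketch of this personal weighting and the resulting re-ranking, assuming a mapping from each nearest neighbor to its similarity value (hypothetical names, not the authors' code):

def personal_weight(node, neighbor_sims, participates):
    # Eq. (13): sum the similarities of all nearest neighbors that
    # participate in the node (gamma acts as a binary filter).
    return sum(s for u_s, s in neighbor_sims.items()
               if participates(u_s, node))

# Re-ranking the clusters of a city by the personal weighting:
# ranked = sorted(clusters,
#                 key=lambda n: personal_weight(n, neighbor_sims,
#                                               participates),
#                 reverse=True)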

It is obvious that for every cluster in which none of the nearest neighbors participated, the personal weighting will be zero. The more of the nearest neighbors participate in a node, the higher the value of that node will be, and the weight of the node rises even faster if a participating neighbor is very similar to the initiator. This measurement is an adaptation of the general aggregation approach (equation 7), with some modifications. The original measure was scaled by the sum of all similarities of the user; as we are only interested in the ranking of the nodes, we can ignore this factor, since it does not change the ordering. The differences to the average are replaced by the binary decision function \gamma(u, n). Figures 5a and 5b show the impact of the personal weighting in relation to the original weighting using the popularity of the nodes. The example shows the first 80 CSDs in Bamberg that have been reweighted for one user. Figure 5a shows the typical rank-popularity distribution based on the popularity measure described in section 3. We see that it follows a power law typical for user-generated content [29]. We also see that many places are very prominent and have a very high popularity, while most places are observed by just a few users; Anderson [30] calls this a long-tail distribution. Most of the items in the long tail are only relevant for some people, and the recommender system should be able to speculate which elements of the long tail are relevant for the current user. In figure 5b we see that some previously lower-ranked clusters of the long tail become much more prominent after the new personal weighting. An evaluation will now have to show how many of the items in the top-n of the new ranking match the user's preferences.


Fig. 5. First 80 clusters of Bamberg: (a) ranked by the popularity measurement; (b) ranked by the popularity measurement, but weighted by the personal collaborative filtering method

5 Evaluation

In order to illustrate the full approach, we demonstrate it on the use case of recommending geotagged images from a public collaborative image library. In June 2009, over 13 million images were accessible on Panoramio. Every image on Panoramio is geo-referenced with latitude and longitude information, either from a GPS device or by self-positioning on a map interface. The dataset fulfills all constraints we discussed in section 3 for collaborative geographic datasets. As a result, we are able to find multiple images for most of the tourist highlights all over the world (e.g., about 13,000 images of the Eiffel Tower and about 700 images of the Spire of Dublin). One advantage of Panoramio is the focus of the collected data on images of places ("Panoramio is different from other photo sharing sites because the photos illustrate places", as written in the help text on the site). This suggests that most users share a common motivation for uploading images; other sites would have to compensate for more diverse motivations by filtering out images without a real relation to their GPS position, such as photos of families. We previously showed that we can expect a power-law pattern in the popularity of the objects found in a specific region; take figure 5a as an example. Our aim is the recommendation of a set of ten images of a specific city. The user selects a node in the partonomy, and the system ranks the children of this city node, in our case the calculated CSDs, using their popularity value. We are now able to give a first recommendation using the impersonal baseline approach, shown as the Popularity (Top10) approach in figure 7a; this baseline is later compared to the personalized recommendation results. If our assumption holds true, the personalized results should correlate better with the user's actual decisions. The test was performed on a subset of images from Panoramio, 33,947 images from 5,766 users in total. We identified 19 different cities in 3 different countries and fetched the images using the public API of Panoramio. We chose the cities based on the users of Bamberg, so that most users who made spatial choices in Bamberg also uploaded images in one of the other cities; this characteristic made them good test candidates for recommendations of Bamberg, as we expect high overlap in the lower levels of the partonomy among these users. The partonomy graph reads as shown in Table 1, where the popularity and image counts of inner nodes are the sums over the lower nodes. The evaluation uses a cross-validation approach [31], which splits the available dataset into two separate, non-overlapping datasets: a training set that is used to calculate a recommendation and a test set that is used for comparison. We repeatedly selected a user who uploaded at least 5 images for Bamberg as a test candidate; the training set consisted of the remaining decisions (see figure 6). For every test candidate, we first excluded his images taken in the city of Bamberg and calculated a recommendation on the rest of his images. If we were not able to produce a recommendation, we did not take the candidate into consideration when calculating the precision value: for the first three measures of section 4 (SFS, TFS and TIS) we need at least one overlap on one of the CSDs with another user, otherwise the similarity between the initiator and all other users is zero and the recommendation will be empty. In the case of the geographic coverage similarity we can always find a similar user, as all decisions have at least the world node in common. Using this evaluation method, we tested the recommendation on a collected test set of 31 users who took nearly 500 images in Bamberg; on average, every user took 15 images in Bamberg, scattered among eight different spatial choices. As performance metrics, precision (and recall) are commonly used in evaluations of recommender systems in conjunction with cross-validation [14][11][12][17]. Precision is defined as the number of relevant items divided by the number of recommended items; because ten items are always recommended, we work with precision at rank 10, or P@10 for short. An evaluation at higher ranks is not very interesting because of the low average count of eight spatial decisions per user. We consider the CSDs found in the test set to constitute the relevant items, as we suppose that a user only uploads an item if it is relevant to him. Precision is therefore the percentage of the recommended items that are found in the selection of the user; recall measures the percentage of discovered relevant items against the count of all relevant items. We take a user's own feedback in the hidden test set to evaluate the precision and recall of his recommendation. The precision and recall of each top-10 recommendation based on the different user similarity measurements can be seen in figures 7a and 7b.
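The scoring step of this cross-validation can be summarized in a short sketch of P@10 and recall as defined here; this is an illustration, not the authors' evaluation code, and all names are hypothetical.

def precision_recall_at_k(recommended, relevant, k=10):
    # recommended: ranked list of CSDs computed from the training set;
    # relevant: set of CSDs the held-out user actually visited.
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    precision = hits / len(top_k)
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall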


Fig. 6. Steps taken for the cross validation of a single user A

Fig. 7. Precision and recall: (a) precision measures; (b) recall measures. Black is the precision/recall at rank 10; gray is for recommendations without objects of a rank lower than 10

The recommendation approach performs significantly better than the baseline. Reading the figure, we see that each of the proposed similarity measurements is able to improve the precision (black bars in figure 7a) from the initial precision of 0.24 up to 0.41. The recall value (black bars in figure 7b) increased from 0.35 up to 0.62, which indicates a real improvement in combination with the rising precision. The user similarity measurement that achieved the best results was the two-layer information similarity, which exploits the full potential of the generated semantic model. The second precision value (gray bars in figures 7a and 7b) describes the precision of the recommendation after excluding every object that was already found in the top 10 from the list of recommended objects. This measure gives a hint of how good our approach is at recommending objects that are not seen by a simple popularity ranking, that is, how good the results are in the long tail. The best values for precision as well as recall were again achieved by the two-layer information similarity, 71% and 75% better than the baseline, respectively.

6 Conclusion and Outlook

We identified geographic metadata as a possible user feedback for a geographic recommender system that is able to suggest geographic objects. Based on this feedback we added explicit semantics to a partonomy. We proposed four different user similarity measures based on the spatial choices users made in different geographic contexts. The evaluation of these similarity measures for the recommendation of georeferenced images from Panoramio showed that the described two-layer information similarity (TIS) provides the best personalization results. In conclusion, we may say that notions of spatial similarity that are useful for improving geographic recommending should take into account data about the frequency of spatial choices mapped to a partonomy. Our approach shows that data from the semantic Web can be combined with data from the social Web to support a recommendation system. Because of our success with one source of implicit semantics, we believe that there are other, as yet undiscovered, sources. One promising direction could be the use of the temporal context of spatial choices, such as their order, duration or temporal frequency. Additionally, we intend to investigate how recommenders could profit from explicit semantics attached to the objects. This could help to better separate the objects into categorized CSDs or to express the semantics behind user-based selections of images. Recommender techniques offer a variety of different approaches that can be used in conjunction with a spatial partonomy. Item-based collaborative filtering can be used to exploit items that show a high relevance to one another; we are able to identify nodes in the partonomy that users most likely associate together. Item similarity allows recommendations in a geographic context, like suggesting an additional city in Italy when the user has already visited a few other cities there. Another situation would be the recommendation of places in the immediate environment. Combining recommendation results from various approaches could lead to a hybrid geographic recommendation that answers more advanced queries in the future.

Acknowledgments. The authors gratefully acknowledge support by the European Commission, which funded parts of this research within the Tripod project under contract number IST-FP6-045335. We also wish to thank Neil Crossley for fruitful discussions about geographic recommendations.


References

1. Goodchild, M.: Citizens as sensors: the world of volunteered geography. GeoJournal 69(4), 211–221 (2007)
2. Scharl, A., Tochtermann, K., Jain, L., Wu, X.: The Geospatial Web: How Geobrowsers, Social Software and the Web 2.0 are Shaping the Network Society. Springer, London (2007)
3. Schlieder, C.: Modeling collaborative semantics with a geographic recommender. In: Hainaut, J.-L., Rundensteiner, E.A., Kirchberg, M., Bertolotto, M., Brochhausen, M., Chen, Y.-P.P., Cherfi, S.S.-S., Doerr, M., Han, H., Hartmann, S., Parsons, J., Poels, G., Rolland, C., Trujillo, J., Yu, E., Zimányi, E. (eds.) ER Workshops 2007. LNCS, vol. 4802, pp. 338–347. Springer, Heidelberg (2007)
4. Schlieder, C., Matyas, C.: Photographing a city: An analysis of place concepts based on spatial choices. Spatial Cognition & Computation 9(3), 212–228 (2009)
5. Resnick, P., Iacovou, N., Suchak, M., Bergstrom, P., Riedl, J.: GroupLens: an open architecture for collaborative filtering of netnews. In: CSCW 1994: Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, pp. 175–186. ACM, New York (1994)
6. Girardin, F., Fiore, F.D., Blat, J., Ratti, C.: Understanding of tourist dynamics from explicitly disclosed location information. In: The 4th International Symposium on LBS & TeleCartography (2007)
7. Ahern, S., Naaman, M., Nair, R., Yang, J.H.I.: World explorer: visualizing aggregate data from unstructured text in geo-referenced collections. In: JCDL 2007: Proceedings of the 7th ACM/IEEE-CS Joint Conference on Digital Libraries, pp. 1–10. ACM, New York (2007)
8. Rattenbury, T., Naaman, M.: Methods for extracting place semantics from flickr tags. ACM Trans. Web 3(1), 1–30 (2009)
9. Snavely, N., Seitz, S.M., Szeliski, R.: Modeling the world from internet photo collections. Int. J. Comput. Vision 80(2), 189–210 (2008)
10. Simon, I., Seitz, S.M.: Scene segmentation using the wisdom of crowds. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part II. LNCS, vol. 5303, pp. 541–553. Springer, Heidelberg (2008)
11. Burke, R.: Hybrid recommender systems: Survey and experiments. User Modeling and User-Adapted Interaction 12(4), 331–370 (2002)
12. McLaughlin, M.R., Herlocker, J.L.: A collaborative filtering algorithm and evaluation metric that accurately model the user experience. In: SIGIR 2004: Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 329–336. ACM, New York (2004)
13. Zhang, J., Pu, P.: A recursive prediction algorithm for collaborative filtering recommender systems. In: RecSys 2007: Proceedings of the 2007 ACM Conference on Recommender Systems, pp. 57–64. ACM, New York (2007)
14. Karypis, G.: Evaluation of item-based top-n recommendation algorithms. In: CIKM 2001: Proceedings of the Tenth International Conference on Information and Knowledge Management, pp. 247–254. ACM, New York (2001)
15. Park, Y.J., Tuzhilin, A.: The long tail of recommender systems and how to leverage it. In: RecSys 2008: Proceedings of the 2008 ACM Conference on Recommender Systems, pp. 11–18. ACM, New York (2008)
16. Zhang, M., Hurley, N.: Avoiding monotony: improving the diversity of recommendation lists. In: RecSys 2008: Proceedings of the 2008 ACM Conference on Recommender Systems, pp. 123–130. ACM, New York (2008)


17. Ziegler, C.N., Lausen, G., Schmidt-Thieme, L.: Taxonomy-driven computation of product recommendations. In: CIKM 2004: Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management, pp. 406–415. ACM, New York (2004)
18. Linden, G., Smith, B., York, J.: Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing 7(1), 76–80 (2003)
19. Tversky, A.: Features of similarity. Psychological Review 84, 327–352 (1977)
20. Jones, C.B., Alani, H., Tudhope, D.: Geographical information retrieval with ontologies of place. In: Montello, D.R. (ed.) COSIT 2001. LNCS, vol. 2205, pp. 322–335. Springer, Heidelberg (2001)
21. Rodríguez, M.A., Egenhofer, M.J.: Comparing geospatial entity classes: An asymmetric and context-dependent similarity measure. International Journal of Geographical Information Science 18, 229–256 (2004)
22. Schwering, A.: Approaches to semantic similarity measurement for geo-spatial data: A survey. Transactions in GIS 12(1), 5–29 (2008)
23. Janowicz, K., Raubal, M., Schwering, A., Kuhn, W. (eds.): Special Issue on Semantic Similarity Measurement and Geospatial Applications. Transactions in GIS 12(6) (2008)
24. Resnik, P.: Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language. Journal of Artificial Intelligence Research 11, 95–130 (1999)
25. Bae-Hee, L., Heung-Nam, K., Jin-Guk, J., Geun-Sik, J.: Location-based service with context data for a restaurant recommendation. In: Bressan, S., Küng, J., Wagner, R. (eds.) DEXA 2006. LNCS, vol. 4080, pp. 430–438. Springer, Heidelberg (2006)
26. Horozov, T., Narasimhan, N., Vasudevan, V.: Using location for personalized poi recommendations in mobile environments. In: SAINT 2006: Proceedings of the International Symposium on Applications on Internet, Washington, DC, USA, pp. 124–129. IEEE Computer Society, Los Alamitos (2006)
27. Tanimoto, T.T.: An Elementary Mathematical Theory of Classification and Prediction (1958)
28. Rosch, E.: Principles of Categorization, pp. 27–48. John Wiley & Sons Inc., Chichester (1978)
29. Guy, M., Tonkin, E.: Folksonomies: Tidying up tags? D-Lib Magazine 12 (2006)
30. Anderson, C.: The Long Tail: Why the Future of Business Is Selling Less of More. Hyperion (2006)
31. Herlocker, J.L., Konstan, J.A., Terveen, L.G., Riedl, J.T.: Evaluating collaborative filtering recommender systems. ACM Trans. Inf. Syst. 22(1), 5–53 (2004)

SPARQL Query Re-writing Using Partonomy Based Transformation Rules*

Prateek Jain1, Peter Z. Yeh2, Kunal Verma2, Cory A. Henson1, and Amit P. Sheth1

1 Kno.e.sis, Computer Science Department, Wright State University, Dayton, OH, USA
{prateek,cory,amit}@knoesis.org
2 Accenture Technology Labs, San Jose, CA, USA
{peter.z.yeh,k.verma}@accenture.com

* The evaluation components related to this work are available for download at http://knoesis.wright.edu/students/prateek/geos.htm

Abstract. Often the information present in a spatial knowledge base is represented at a different level of granularity and abstraction than the query constraints. For querying ontologies containing spatial information, the precise relationships between spatial entities have to be specified in the basic graph pattern of a SPARQL query, which can result in long and complex queries. We present a novel approach that helps users write SPARQL queries over spatial data intuitively, rather than relying on knowledge of the ontology structure. Our framework re-writes queries, using transformation rules to exploit part-whole relations between geographical entities and thereby address the mismatches between query constraints and the knowledge base. Our experiments were performed on completely third-party datasets and queries: evaluations were performed on the Geonames dataset using questions from the National Geographic Bee serialized into SPARQL, and on the British Administrative Geography Ontology using questions from a popular trivia website. These experiments demonstrate high precision in the retrieval of results and ease in writing queries.

Keywords: Geospatial Semantic Web, Spatial Query Processing, SPARQL, Query Re-writing, Partonomy, Transformation Rules, Spatial information retrieval.

1 Introduction

Recently, spatial information has become widely available to consumers through a number of popular sites such as Google Maps, Yahoo Maps and Geonames.org [1]. In the context of the Semantic Web, Geonames has provided an RDF [2] encoding of its knowledge base. One issue that makes the Geonames ontology, or any non-trivial spatial ontology, difficult to use is that users have to completely understand the structure of the ontology before they can write meaningful queries. To illustrate our point, consider the following question from the National Geographic Bee [3]: "In which country is the city of Pamplona?"




This seems to be a straightforward question, and one would assume that the logic for encoding it into a SPARQL [4] query would be: return a country which contains a city called Pamplona. However, it turns out that such a simple query does not work, because Pamplona is a city within a state within the country of Spain. The correct logic for encoding the question is therefore: return a country which contains a state, which contains a county, which contains a city called Pamplona. Unless the user fully understands the structure of the ontology, it is not possible to write such queries. In this paper, we describe a system called PARQ (Partonomical Relationship Based Query Rewriting System) that automatically bridges the gap between the constraints expressed in a user's query and the actual structured representation of the information in the ontology. We leverage existing work on the classification of partonomic relationships [5] to re-write queries. To study the accuracy of our approach, we tested it on (1) 120 randomly selected questions from the National Geographic Bee, evaluated on the Geonames ontology, and (2) 46 randomly selected trivia questions related to British villages and counties from a trivia website [22], evaluated on the British Administrative Geography Ontology [23]. For both evaluations, users were instructed to read the questions and to write SPARQL queries for them; PARQ then rewrote the queries using partonomical relationships. The results were encouraging: for evaluation 1, PARQ was able on average to re-write and answer 84 of the 120 queries posed by users, whereas a plain SPARQL processing system could answer only 20 such queries; for evaluation 2, PARQ was able to re-write and answer 41 of the 46 queries posed by users. For both evaluations, we also compare the performance of PARQ with another well-known system, PSPARQL [24], which extends SPARQL with path expressions to allow the use of regular expressions with variables in the predicate position of SPARQL. The contributions of this work are the following:

1. This work focuses on rewriting SPARQL queries written from a user's perspective, without worrying about the underlying representation of information.
2. Our work utilizes partonomic transformation rules to re-write SPARQL queries.
3. PARQ has been completely evaluated on third-party data (queries and dataset) and is able to re-write and answer queries not answered by a plain SPARQL processing system; we demonstrate that PARQ can significantly improve precision without any recall loss.

The rest of the paper is organized as follows: Section 2 discusses background work, and Section 3 describes our approach, followed by the evaluation in Section 4. In Section 5 we discuss related work, and we conclude in Section 6.

2 Background

All spatial entities are fundamentally part of some other spatial entity. Hence, spatial query processing systems often encounter queries such as (1) queries for the parts of spatial entities (for example, give me all counties in Ohio), and (2) queries for wholes which encompass spatial parts (for example, return a country which contains a city called Pamplona).


By identifying which relationships between spatial entities are partonomic in nature, it becomes feasible to identify whether queries involving those relationships fail because of a part-whole mismatch, and it becomes possible to fix the mismatches using transformation rules that leverage the partonomic relationships. In this section, we provide a brief overview of work related to partonomic relationships. Our query re-writing work removes these mismatches by using well-accepted partonomic relationships to address mismatches between a user's conceptualization of a domain and the actual information structure. The part-whole relation, or partonomy, is an important fundamental relationship which manifests itself across all physical entities, such as human-made objects (Cup-Handle) and social groups (Jury-Jurors), as well as conceptual entities such as time intervals (the 5th hour of the day). Its frequent occurrence results in the manifestation of part-for-whole and whole-for-part mismatches within many domains, especially spatial datasets. Winston [5] created a categorization of part-whole relations which identifies and covers part-whole relations from a number of domains, such as artifacts, geographical entities, food and liquids. We believe it is one of the most comprehensive categorizations of partonomic relationships, and other works in a similar spirit, such as [6], analyze his categorization. This categorization has been created using three relational elements:

1. Functional/Non-Functional (F/NF): Parts are in a specific spatial/temporal relation with respect to each other and to the whole to which they belong. Example: Belgium is a part of NATO partly because of its specific spatial position.
2. Homeomerous/Non-Homeomerous (H/NH): Parts are similar to each other and to the whole. Example: a slice of pie is similar to the other slices and to the pie itself [5].
3. Separable/Inseparable (S/IN): Parts are separable or inseparable from the whole. Example: a card can be separated from the deck to which it belongs.

Table 1 illustrates the six resulting categories, their descriptions in terms of the relational elements, and examples of partonomic relationships covered by them. Using this classification and the relational elements, a relation between two entities can be marked as partonomic or non-partonomic in nature; if it is partonomic, the category to which it belongs is identified. Finally, appropriate transformation rules can be defined for each category to fix these mismatches. For the purpose of this work, we have focused our attention on the last category, "Place-Area". Places are not parts of an area because of any functional contribution to the whole, and they are similar to the other places in the area; moreover, places cannot be separated from the area to which they belong. Hence, this classification allows appropriate ontological relationships to be mapped to the Place-Area category, such as those found in Geonames.


Table 1. Six types of partonomic relation with relational elements

Category                    Description                                                                   Example
Component-Integral Object   Parts are functional, non-homeomerous and separable from the whole.          Handle-Cup
Member-Collection           Parts are non-functional, non-homeomerous and separable from the whole.      Tree-Forest
Portion-Mass                Parts are non-functional, homeomerous and separable from the whole.          Slice-Pie
Stuff-Object                Parts are non-functional, non-homeomerous and not separable from the whole.  Gin-Martini
Feature-Activity            Parts are functional, non-homeomerous and not separable from the whole.      Paying-Shopping
Place-Area                  Parts are non-functional, homeomerous and not separable from the whole.      Everglades-Florida

3 Approach

At the highest level of abstraction, PARQ takes in a SPARQL query and transforms it with the help of transformation rules. This section provides the details of our system. We describe the modules of the system, the technologies used to build it, the transformation rules utilized for transforming SPARQL queries and the motivation behind them. Finally, we describe the underlying algorithm that explains how the transformation rules are utilized by PARQ for re-writing queries.

3.1 System Architecture

PARQ consists of the following three major modules: 1) Mapping Repository, 2) Transformation Rule Generator and 3) Query Re-writer. Figure 1 illustrates the overall architecture of the system.

Mapping Repository. This module stores mappings of ontological properties to Winston's categories. These mappings are utilized by the Transformation Rule Generator to generate domain-specific rules, which are consumed by the Query Re-writer. This is the only module in our system which requires user interaction (other than for query submission); in other words, the user has to specify these mappings. Each mapping is encoded as a rule in Jena's rule engine format, where the antecedent is a triple specifying an ontological property to be mapped and the consequent is a triple specifying the Winston category that the property is mapped to. For example, the following mapping:


[parentFeature: (?a geo:parentFeature ?b) => (?a place_part_of ?b)]

maps "parentFeature" – a property from the Geonames ontology – to "place_part_of" – Winston's category of Place-Area.

Transformation Rule Generator. This module automatically generates domain-specific transformation rules using the mapping repository and pre-defined meta-level transformation rules based on Winston's categories of part-whole relations, which we explain below. For example, given the following meta-level transformation rule:

[transitivity_placePartOf: (?a place_part_of ?b)(?b place_part_of ?c) => (?a place_part_of ?c)]

this module will utilize the parentFeature mapping defined above to generate the following domain-specific transformation rule:

[transitivity_parentFeature: (?a geo:parentFeature ?b)(?b geo:parentFeature ?c) => (?a geo:parentFeature ?c)]

The resulting rule is used by the Query Re-writer to re-write the graph pattern of SPARQL queries in the event of a partonomic mismatch. This design enables PARQ to be easily used with a wide range of ontologies: the knowledge engineer only needs to specify the mappings between the properties of these ontologies and Winston's categories, which requires less effort than generating the domain-specific transformation rules themselves. This design also allows the transformation rules to be extended in an ontology-agnostic manner. We implemented this module using Jena's [7] rule engine API. Like the mappings, the meta-level transformation rules and the generated rules are encoded in the format accepted by the Jena rule engine API. The rule engine allows reading, parsing and processing of rules, along with the creation and serialization of new rules.

Query Re-writer. This module re-writes a SPARQL query in case of a partonomic mismatch between the query and the knowledge base to which the query is posed. It is implemented using the Jena and ARQ APIs [8]. Jena and ARQ provide functionality to convert a query into an algebraic representation and vice versa. The triples specified in the query are identified; if a triple maps to a partonomic relation according to the mapping repository, an appropriate transformation is performed on it with the domain-specific transformation rules, using Jena's rule engine API. These transformations are then utilized to re-write the triples exhibiting the mismatch, using the features provided by the ARQ API. We believe that including transitivity as part of the reasoner can result in significant overhead for large datasets such as Geonames, where transitivity applies to almost all entities. By including it as part of the query re-writing method, (1) mismatches are resolved on an "on demand" basis, and (2) it is easy to plug in support for resolving other kinds of mismatches.
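The rule-generation step amounts to instantiating a transitivity template with a mapped property. The following Python sketch shows the idea only; PARQ itself does this through Jena's rule engine API, and the names here are hypothetical.

META_TRANSITIVITY = "[transitivity_{name}: (?a {p} ?b)(?b {p} ?c)=>(?a {p} ?c)]"

def generate_domain_rule(prop, local_name):
    # e.g. prop="geo:parentFeature", local_name="parentFeature" yields
    # the transitivity_parentFeature rule shown above.
    return META_TRANSITIVITY.format(name=local_name, p=prop)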


Original Query:

SELECT ?schoolname {
  ?school geo:parentFeature ?state .
  ?state geo:featureCode A.ADM1 .
  ?school geo:featureCode S.SCH .
  ?school geo:name ?schoolname .
  ?state geo:name "Ohio" . }

PARQ System – Mapping Repository (Mappings):
1. (a geo:parentFeature b) -> (a place_part_of b)
2. (a lubm:subOrganizationOf b) -> (a component_part_of b)
3. (a wine:consistsOf b) -> (a stuff_part_of b)
...

Transformation Rules Generator (Meta Level Rules):
1. (a place part b), (b place part c) => (a place part c)
2. (a component part b), (b component part c) => (a component part c)
3. (a stuff part b), (b stuff part c) => (a stuff part c)
...

Query Re-writer (Domain Specific Rules):
1. (a geo:parentFeature b), (b geo:parentFeature c) => (a geo:parentFeature c)
2. (a lubm:subOrganizationOf b), (b lubm:subOrganizationOf c) => (a lubm:subOrganizationOf c)
3. (a wine:consistsOf b), (b wine:consistsOf c) => (a wine:consistsOf c)
...

Re-written Query:

SELECT ?schoolname {
  ?school geo:parentFeature ?county ;
          geo:featureCode S.SCH ;
          geo:name ?schoolname .
  ?county geo:parentFeature ?state .
  ?state geo:featureCode A.ADM1 ;
         geo:name "Ohio" . }

Fig. 1. PARQ System Architecture (in the original figure, the relevant rules and mappings for the queries shown are highlighted in bold)


3.2 Meta-level Transformation Rules

Meta-level transformation rules are used to generate domain-specific rules that resolve mismatches between the granularity of the query constraints and the knowledge base, by transforming the encoding of the constraints in the query to match the knowledge base. These meta-level rules are defined at the level of Winston's categories, and a rule defined for a particular category applies only to the partonomic relations covered by that category; for example, rules defined for the Component-Integral Object category cover only relations between machines and their parts, organizations and their members, etc. We used the following methodology to define the meta-level rules used by our system. First, we leveraged previous work by Varzi [9] and Winston, who both showed that the semantics of transitivity holds as long as it is applied within the same category of partonomic relation. From this result, we defined the meta-level transitive transformation rules shown in Table 2, which correspond to Winston's six part-whole categories.

Table 2. Transitivity for Winston's categories

ID   Antecedent 1             Antecedent 2             Consequent
1    a component part of b    b component part of c    a component part of c
2    a member part of b       b member part of c       a member part of c
3    a portion part of b      b portion part of c      a portion part of c
4    a stuff part of b        b stuff part of c        a stuff part of c
5    a feature part of b      b feature part of c      a feature part of c
6    a place part of b        b place part of c        a place part of c

Next, we investigated the interaction between Winston's categories by examining all possible combinations of these categories for additional transformation rules. This investigation, however, resulted only in frivolous rules, which were not useful for resolving mismatches. For example, the following transformation rule resulted from composing the Feature-Activity category with the Place-Area category:

(a place_part_of b) (b feature_part_of c) => (a feature_part_of c)

However, given the following query and triples in an ontology (given in English for brevity):

QUERY: "What state was attacked in WW-II?"
TRIPLE 1: Florida is a place part of USA (Place-Area).
TRIPLE 2: USA was attacked in WW-II (Feature-Activity).

the rule incorrectly transformed this query to match the ontology, which resulted in an incorrect answer (i.e., Florida) being returned.


The reason for these frivolous rules is that Winston's categories are mutually exclusive, as they are defined using the relational elements. Hence, our meta-level transformations consist of only transitive rules. Despite this small number of rules, we found – through our evaluation – that transitivity by itself provides significant leverage in resolving part-whole mismatches.

3.3 Algorithm

The algorithm used in applying transitivity for resolving mismatches is as follows:

SPR = set of partonomic relations
If the query is not well formed
    return
else
    Convert the query Q into its algebraic representation (AR).
    Identify the graph pattern (GP) and the query variables (QV).
    For every triple t ∈ GP
        if t.property ∈ SPR
            If t.subject is a variable
                Identify other triples with t.subject and use them to unify t.subject
                Insert the unified values in s.List
            else
                Insert t.subject in s.List
            If t.object is a variable
                Identify other triples with t.object and use them to unify t.object
                Insert the unified values in o.List
            else
                Insert t.object in o.List
            for each s ∈ s.List
                for each o ∈ o.List
                    path = find path between s and o using the transformation rule
                    If (path != null)
                        Replace the resources in the path such that
                            path.source = t.subject
                            path.destination = t.object
                        The intermediate nodes are replaced such that the object and
                        subject of contiguous triples carry the same variable names.
                        Replace the triple in the graph pattern with the path
                        containing the variables.
Return the re-written query Q' to the user
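To make the core of this procedure concrete, the following Python sketch (not the PARQ implementation, which operates on Jena/ARQ algebra objects) shows the path search between the unified end points and the re-writing of a triple into a chain with fresh variables. All names are hypothetical.

from collections import deque

def find_path(graph, whole, part):
    # Breadth-first search from the whole down to the part over the
    # triples of one partonomic property; graph maps a subject to its
    # list of objects (partonomies are assumed acyclic).
    queue = deque([[whole]])
    while queue:
        path = queue.popleft()
        if path[-1] == part:
            return path
        for nxt in graph.get(path[-1], []):
            queue.append(path + [nxt])
    return None

def rewrite_triple(prop, path, subj_var, obj_var):
    # Turn a resource path into a chain of triple patterns whose
    # intermediate nodes are fresh SPARQL variables.
    names = [subj_var] + [f"?v{i}" for i in range(1, len(path) - 1)] + [obj_var]
    return [(names[i], prop, names[i + 1]) for i in range(len(names) - 1)]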

Explanation. Let us explain the algorithm using the query "In which county can you find the village of Crook that is full of lakes?" Suppose the SPARQL query submitted by the user for this question is:


SELECT ?countyName
WHERE {
  ?village ord:hasVernacularName "Crook" .
  ?county rdf:type ord:County ;
          ord:hasVernacularName ?countyName ;
          ord:spatiallyContains ?village .
}

Step 1: The system compiles the query to verify that it is well formed. Since in this case it is a well-written query, the system moves on to Step 2.

Step 2: The query is converted into its algebraic representation, and the system iterates through its list of triples to identify triples containing a partonomic relationship, using the mapping file provided by the user. In this case the last triple, t = ?county ord:spatiallyContains ?village, contains the "spatiallyContains" property, which indicates that the object is part of the subject. Hence, this triple is identified as a triple for re-writing.

Step 3: The other triples which contain the variables mentioned in t, such as ?village ord:hasVernacularName "Crook", ?county rdf:type ord:County and ?county ord:hasVernacularName ?countyName, are utilized for unifying the values of the variables of t (i.e., ?village and ?county). Using these, ?village = { osr7000000000013015 }, which is the resource for "Crook" in the Administrative Geography Ontology, and ?county = { the set of resources belonging to counties } are computed.

Step 4: The set of unified values from Step 3 is then utilized to compute a path by executing the transformation rule of transitivity involving the properties "tangentiallySpatiallyContains" and "completelySpatiallyContains", with ?village = { osr7000000000013015 } and ?county = { list of counties }. This results in the following path being returned:

1. osr7000000000013244 tangentiallySpatiallyContains osr7000000000012934
2. osr7000000000012934 completelySpatiallyContains osr7000000000013015

Step 5: In the path, the source and destination are replaced as mentioned in the original query, and the intermediate node is consistently replaced by a variable:

1. ?county ord:tangentiallySpatiallyContains ?var
2. ?var ord:completelySpatiallyContains ?village


Step 6: In the original query, the last triple is replaced by these two triples, resulting in the following query:

SELECT ?countyName
WHERE {
  ?village ord:hasVernacularName "Crook" .
  ?county rdf:type ord:County ;
          ord:hasVernacularName ?countyName ;
          ord:tangentiallySpatiallyContains ?var .
  ?var ord:completelySpatiallyContains ?village .
}

There can be cases where a number of paths are computed between two end points because of transitivity, which results in the generation of multiple re-written queries. We rank these generated queries using the following parameters: (1) re-written queries that generate results are ranked higher than ones which do not; (2) if both queries generate results, queries requiring the minimum amount of re-writing are ranked higher.

There can be certain cases where a number of paths are computed between two end points because of transitivity. This will result in generation of multiple re-written queries. We try to rank these generated queries using the following parameters: (1) Re-written queries generating results are given higher ranking than ones which do not (2) If both queries generate results, in those scenarios queries requiring minimum amount of re-writing are given a higher ranking.

4 Evaluation We present two evaluations to assess the performance of our approach on resolving partonomic mismatches between SPARQL queries written by users and the ontology’s to which these queries are posed. We perform these evaluations using: (1) Questions from National Geographic Bee on Geonames Ontology (2) Questions from a popular trivia website which hosts quiz related to “British Villages and Counties” on British Administrative Geography Ontology. 4.1 Evaluation Objective and Setup Our objective is to determine whether our approach enables users to successfully pose queries about partonomic information to ontology where the users are not familiar with its structure and organization. This lack of familiarity will result in many mismatches that need to be resolved in order to achieve good performance. To evaluate our objective, we chose Geonames [1] and British Ordinance SurveyAdministrative Geography Ontology [23] as our ontology’s because: (1) they are one of the richest sources of partonomic information available to the semantic web community. (2) they are rich in spatial information. Geonames has over 8 million place names – such as countries, monument, cities, etc. – which are related to each other via partonomic relationships corresponding to Winston’s category of Place-Area. For example, cities are parts of provinces and provinces are parts of countries. Table 3 shows some key relationships found in Geonames.

150

P. Jain et al. Table 3. Geonames important properties

Property http://www.geonames.org/ontology#name http://www.geonames.org/ontology#featureCode http://www.geonames.org/ontology#parentFeature

Description Name of the place Identifies if the place is a country, city, capital etc. Identifies that the place identified by domain is located within the place identified by the range

Similarly, Administrative Geography Ontology provides data related to location of villages, counties and cities of the United Kingdom which again map to Winston’s place-area relation. Table 4 shows the description of key administrative geography ontology properties. Namespace has been omitted for brevity. Table 4. Administrative Geography important properties

Property spatiallyContains

tangentiallySpatiallyContains

completelySpatiallyContains

Description The interior and boundary of one region is completely contained in the interior of the other region, or the interior of one region is completely contained in the interior or the boundary of the other region and their boundaries intersect. The interior of one region is completely contained in the interior or the boundary of the other region and their boundaries intersect. It is a sub-property of spatiallyContains. The interior and boundary of one region is completely contained in the interior of the other region. It is a sub-property of spatiallyContains.

For evaluating our approach on Geonames ontology, we constructed a corpus of queries for evaluation by randomly selecting 120 questions from previous editions of National Geographic Bee[3], an annual competition organized by the National Geographic Society which tests students from across the world on their knowledge of world geography. For British Administrative Geography ontology, we selected 46 questions from a popular trivia website [22] that hosts a number of quizzes related to British geography. We chose these questions for evaluation because: • • •

These questions are publicly available, so others can replicate our evaluation. Each question has a well-defined answer, which avoids ambiguity when grading the performance of our approach. These questions are of places and their partonomic relationship to each other. Hence, there is significant overlap with Geonames and Administrative Geography Ontology.


Examples of such questions include:

• The Gobi Desert is the main physical feature in the southern half of a country also known as the homeland of Genghis Khan. Name this country.
• In which English county, also known as "The Jurassic Coast" because of the many fossils to be found there, will you find the village of Beer Hackett?

Once the questions were selected, we employed four human respondents (computer science students at a local university) to encode the corresponding SPARQL query for each question. These respondents were familiar with SPARQL (familiarity ranged from intermediate to advanced) but were not familiar with Geonames or the Administrative Geography Ontology; these two conditions meet our evaluation objective. For the National Geographic Bee questions, each subject was given all 120 questions along with a description of the properties in the Geonames ontology, and was then instructed to encode the SPARQL query for each question using these properties and classes. For the trivia questions, we employed only one human respondent to encode the corresponding SPARQL queries because of limitations in time and resources. This respondent was given all 46 questions along with a description of the properties in the Administrative Geography Ontology. These instructions, the original queries, the responses and our source code are available for download at http://knoesis.wright.edu/students/prateek/geos.htm

4.2 Geonames Results and Discussion

We compared our approach to PSPARQL and SPARQL. PSPARQL [24] extends SPARQL with path expressions to allow the use of regular expressions with variables in the predicate position of SPARQL. The regular expression patterns allowed in the PSPARQL grammar can be constructed over the set of URIs, blank nodes and variables. For example, the following query, when posed to PSPARQL, returns all cities connected to the capital of France by a plane or train:

SELECT ?City2
WHERE {
  ?City1 ex:capital ex:France .
  ?City1 (ex:plane | ex:train) ?City2 .
}

We posed the queries encoded by the human respondents (see the previous subsection) to SPARQL and PARQ. We graded the performance of each approach using the metrics of precision (i.e., the number of correct answers over the total number of answers given by an approach) and recall (i.e., the number of correct answers over the total number of answers for the queries). We say an approach correctly answered a query if its answer was the same as the answer provided by the National Geographic Bee. Table 5 shows the result of this evaluation for PARQ and SPARQL. PARQ on average correctly re-writes and answers 84 of the 120 queries posed by users, performing significantly better than the plain SPARQL processing system across all respondents (p < 0.01 for the χ² test in each case).

Table 5. Comparison of re-written queries vs. original SPARQL queries

Respondent     System   # of queries answered   Precision   Recall
Respondent 1   PARQ     82                      100%        68.3%
Respondent 1   SPARQL   25                      100%        20.83%
Respondent 2   PARQ     93                      100%        77.5%
Respondent 2   SPARQL   26                      100%        21.6%
Respondent 3   PARQ     61                      100%        50.83%
Respondent 3   SPARQL   19                      100%        15.83%
Respondent 4   PARQ     103                     100%        85.83%
Respondent 4   SPARQL   33                      100%        27.5%

The low performance for respondent 3 (61 queries using PARQ and 19 using SPARQL) can be attributed to this subject having the least familiarity with writing queries in SPARQL and writing improper SPARQL queries. The high performance for respondent 4 (103 queries using PARQ and 33 using SPARQL) can be attributed to this subject having the most experience with SPARQL. For each respondent, the difference between 120 and the number of re-written queries is the number of queries not re-written by PARQ. For this comparison, we also compared the execution time of PARQ to PSPARQL, as shown in Table 6. Because of limitations in time and resources, we were able to employ only one respondent to encode the queries posed to PSPARQL; we selected Respondent 4, who has the most experience and familiarity with SPARQL.

Table 6. Comparison of PSPARQL and PARQ for Respondent 4

System    Precision   Recall   Execution time/query (seconds)
PARQ      100%        86.7%    0.3976
PSPARQL   6.414%      86.7%    37.59

Although PARQ and PSPARQL deliver the same recall (86.7%), we clearly illustrate that PARQ performs much better than PSPARQL in precision (p