Enterprise Interoperability III
Kai Mertins • Rainer Ruggaber • Keith Popplewell • Xiaofei Xu (Editors)
Enterprise Interoperability III New Challenges and Industrial Approaches
Editors:

Professor Dr.-Ing. Kai Mertins, Fraunhofer IPK, Pascalstraße 8–9, 10587 Berlin, Germany
Dr. Rainer Ruggaber, SAP AG, Vincenz-Prießnitz-Straße 1, 76131 Karlsruhe, Germany
Professor Keith Popplewell, Future Manufacturing Applied Research Centre, Coventry University, Priory Street, Coventry CV1 5FB, UK
Professor Xiaofei Xu, School of Computer Science and Technology, Harbin Institute of Technology, 92 West Da-Zhi Street, Harbin, P.R. China
ISBN 978-1-84800-220-3
e-ISBN 978-1-84800-221-0
DOI 10.1007/978-1-84800-221-0

British Library Cataloguing in Publication Data: A catalogue record for this book is available from the British Library.

Library of Congress Control Number: 2008922193

© 2008 Springer-Verlag London Limited

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.

The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant laws and regulations and therefore free for general use. The publisher makes no representation, express or implied, with regard to the accuracy of the information contained in this book and cannot accept any legal responsibility or liability for any errors or omissions that may be made.

Printed on acid-free paper.

springer.com
Preface

Kai Mertins (1), Rainer Ruggaber (2), Keith Popplewell (3) and Xiaofei Xu (4)

(1) Fraunhofer IPK Berlin, Pascalstr. 8-9, 10587 Berlin, Germany; General Chairperson of I-ESA'08; [email protected]
(2) SAP AG, Vincenz-Prießnitz-Straße 1, 76131 Karlsruhe, Germany; General Co-Chairperson of I-ESA'08; [email protected]
(3) Future Manufacturing Applied Research Centre, Coventry University, Priory Street, Coventry, CV1 5FB, UK; Chairperson of I-ESA'08 International Programme Committee; [email protected]
(4) School of Computer Science and Technology, Harbin Institute of Technology, 92 West Da-Zhi Street, Harbin, P.R. China 150001; Co-Chairperson of I-ESA'08 International Programme Committee; [email protected]
Interoperability, in the context of enterprise applications, is the ability of a system or an organisation to work seamlessly with other systems or organisations without any special effort. The capability to interact and exchange information both internally and with external organisations (partners, suppliers, customers) is a key issue in the global economy. It is fundamental in order to speed up the production of goods and services at lower cost, while ensuring higher levels of quality and customisation. Despite the many efforts spent over the past decade to overcome interoperability barriers in industry, non-interoperability still causes enormous costs for all business partners. Studies show that more than 40% of IT costs are devoted to solving interoperability problems. This book provides knowledge for cost savings and business improvement as well as new technical solutions.

I-ESA'08 (Interoperability for Enterprise Software and Applications) is the fourth of a series of conferences, this time under the motto "Science meets Industry". The I-ESA'08 conference was organised by Fraunhofer IPK and DFI (Deutsches Forum für Interoperabilität e.V.), jointly promoted by INTEROP-VLab (European Virtual Laboratory for Enterprise Interoperability - www.interopvlab.eu) and the EIC (Enterprise Interoperability Centre - www.eiccommunity.org). The world's leading researchers and practitioners in the area of Enterprise Interoperability contributed to this book. You will find integrated approaches from different disciplines: Computer Science, Engineering and Business Administration.
The structure of this book, Enterprise Interoperability III: New Challenges and Industrial Approaches, was inspired by the ATHENA Interoperability Framework. The House of Enterprise Interoperability is designed top-down from Business to Process and Service Execution, aligned with the important topics of Semantics and Ontologies. The common basis of all levels is formed by the aspects of Systems Engineering, Modelling, and Architectures and Frameworks.
Fig.: Enterprise Interoperability House of I-ESA'08 (levels from top to bottom: Business and Strategies, Cases; Cross-organisational Collaboration and Cross-sectoral Processes; Service Design and Execution; Ontologies and Semantics for Interoperability; Interoperability in Systems Engineering; Modelling and Meta-modelling Methods and Tools for Interoperability; Architectures and Frameworks for Interoperability)
Kai Mertins, Berlin
Rainer Ruggaber, Karlsruhe
Keith Popplewell, Coventry
Xiaofei Xu, Harbin

January 2008
Acknowledgements
We would like to thank all the authors, invited speakers, reviewers, Senior Programme Committee members, and participants of the conference who made this book a reality and I-ESA'08 a success. We express our gratitude to all organisations which supported the preparation of I-ESA'08, especially Interop V-Lab and its German pole DFI, as well as the Enterprise Interoperability Centre. We are deeply thankful to the local organisation team, notably Thomas Knothe, Amparo Roca de Togores, Anett Wagner, Sebastian Zabre and Nikolaus Wintrich, for their excellent work in preparing and managing the conference.
Contents
Part I - Business and Strategies, Cases

An Approach for the Evaluation of the Agility in the Context of Enterprise Interoperability (S. Izza, R. Imache, L. Vincent, and Y. Lounis) ... 3
Industrialization Strategies for Cross-organizational Information Intensive Services (Ch. Schroth) ... 15
SME Maturity, Requirement for Interoperability (G. Benguria, I. Santos) ... 29
Information Security Problems and Needs in Healthcare – A Case Study of Norway and Finland vs Sweden (R.-M. Åhlfeldt and E. Söderström) ... 41
Impact of Application Lifecycle Management – A Case Study (J. Kääriäinen, A. Välimäki) ... 55

Part II - Cross-organizational Collaboration and Cross-sectoral Processes

A Service-oriented Reference Architecture for Organizing Cross-Company Collaboration (Ch. Schroth) ... 71
Patterns for Distributed Scrum – A Case Study (A. Välimäki, J. Kääriäinen) ... 85
Understanding the Collaborative Workspaces (G. Gautier, C. Piddington, and T. Fernando) ... 99
Models and Methods for Web-support of a Multi-disciplinary B2(B2B) Network (H. Weinaug, M. Rabe) ... 113
Platform Design for the B2(B2B) Approach (M. Zanet, St. Sinatti) ... 125
Trust and Security in Collaborative Environments (P. Mihók, J. Bucko, R. Delina, D. Palová) ... 135
Prototype to Support Morphism between BPMN Collaborative Process Model and Collaborative SOA Architecture Model (J. Touzi, F. Bénaben, H. Pingaud) ... 145
Heterogeneous Domains' e-Business Transactions Interoperability with the use of Generic Process Models (S. Koussouris, G. Gionis, A. M. Sourouni, D. Askounis, K. Kalaboukas) ... 159
Matching of Process Data and Operational Data for a Deep Business Analysis (S. Radeschütz, B. Mitschang, F. Leymann) ... 171
Methods for Design of Semantic Message-Based B2B Interaction Standards (E. Folmer, J. Bastiaans) ... 183

Part III - Service Design and Execution

An Adaptive Service-Oriented Architecture (M. Hiel, H. Weigand, W.-J. Van Den Heuvel) ... 197
FUSE: A Framework to Support Services Unified Process (N. Protogeros, D. Tektonidis, A. Mavridis, Ch. Wills, A. Koumpis) ... 209
Adopting Service Oriented Architectures Made Simple (L. Bastida, A. Berreteaga, I. Cañadas) ... 221
Making Service-Oriented Java Applications Interoperable without Compromising Transparency (S. De Labey, E. Steegmans) ... 233
A Service Behavior Model for Description of Co-Production Feature of Services (T. Mo, X. Xu, Z. Wang) ... 247
An Abstract Interaction Concept for Designing Interaction Behaviour of Service Compositions (T. Dirgahayu, D. Quartel, M. van Sinderen) ... 261
Preference-based Service Level Matchmaking for Composite Service (Y. Shiyang, W. Jun, H. Tao) ... 275
Ontological Support in eGovernment Interoperability through Service Registries (Y. Charalabidis) ... 289
Towards Secured and Interoperable Business Services (A. Esper, L. Sliman, Y. Badr, F. Biennier) ... 301

Part IV - Ontologies and Semantics for Interoperability

Semantic Web Services based Data Exchange for Distributed and Heterogeneous Systems (Q. Wang, X. Li, Q. Wang) ... 315
Ontology-driven Semantic Mapping (D. Beneventano, N. Dahlem, S. El Haoum, A. Hahn, D. Montanari, M. Reinelt) ... 329
Self-Organising Service Networks for Semantic Interoperability Support in Virtual Enterprises (A. Smirnov, M. Pashkin, N. Shilov, T. Levashova) ... 343
Semantic Service Matching in the Context of ODSOI Project (S. Izza, L. Vincent) ... 353
Ontology-based Service Component Model for Interoperability of Service Systems (Z. Wang, X. Xu) ... 367
Supporting Adaptive Enterprise Collaboration through Semantic Knowledge Services (K. Popplewell, N. Stojanovic, A. Abecker, D. Apostolou, G. Mentzas, J. Harding) ... 381

Part V - Interoperability in Systems Engineering

Semantic Web Framework for Rule-Based Generation of Knowledge and Simulation of Manufacturing Systems (M. Rabe, P. Gocev) ... 397
Semantic Interoperability Requirements for Manufacturing Knowledge Sharing (N. Chungoora, R.I.M. Young) ... 411
Collaborative Product Development: EADS Pilot Based on ATHENA (N. Figay, P. Ghodous) ... 423
Contribution to Knowledge-based Methodology for Collaborative Process Definition: Knowledge Extraction from 6napse Platform (V. Rajsiri, A-M. Barthe, F. Bénaben, J-P. Lorré, H. Pingaud) ... 437
SQFD: QFD-based Service Quality Assurance for the Lifecycle of Services (S. Liu, X. Xu, Z. Wang) ... 451
Coevolutionary Computation Based Iterative Multi-Attribute Auctions (L. Nie, X. Xu, D. Zhan) ... 461
Knowledge Integration in Global Engineering (R. Anderl, D. Völz, Th. Rollmann) ... 471

Part VI - Modelling and Meta-modelling Methods and Tools for Interoperability

A Framework for Executable Enterprise Application Integration Patterns (Th. Scheibler, F. Leymann) ... 485
Experiences of Tool Integration: Development and Validation (J-P. Pesola, J. Eskeli, P. Parviainen, R. Kommeren, M. Gramza) ... 499
Interoperability – Network Systems for SMEs (K. Mertins, Th. Knothe, F.-W. Jäkel) ... 511
Engineer to Order Supply Chain Improvement based on the GRAI Meta-model for Interoperability: an Empirical Study (A. Errasti, R. Poler) ... 521
Proposal for an Object Oriented Process Modeling Language (R. Anderl, J. Malzacher, J. Raßler) ... 533
Enterprise Modeling Based Application Development for Interoperability Problem Solving (M. Jankovic, Z. Kokovic, V. Ljubicic, Z. Marjanovic, Th. Knothe) ... 547
IS Outsourcing Decisions: Can Value Modelling Be of Help? (H. Weigand) ... 559
Process Composition in Logistics: An Ontological Approach (A. De Nicola, M. Missikoff, L. Tininini) ... 571
Interoperability of Information Systems in Crisis Management: Crisis Modeling and Metamodeling (S. Truptil, F. Bénaben, P. Couget, M. Lauras, V. Chapurlat, H. Pingaud) ... 583
A Novel Pattern for Complex Event Processing in RFID Applications (T. Ku, Y. L. Zhu, K. Y. Hu, C. X. Lv) ... 595

Part VII - Architectures and Frameworks for Interoperability

Enterprise Architecture: A Service Interoperability Analysis Framework (J. Ullberg, R. Lagerström, P. Johnson) ... 611
Logical Foundations for the Infrastructure of the Information Market (M. Heather, D. Livingstone, N. Rossiter) ... 625
Meeting the Interoperability Challenges of eTransactions among Heterogeneous Business Partners: The Advantages of Hybrid Architectural Approaches for the Integrating Middleware (G. Gionis, D. Askounis, S. Koussouris, F. Lampathaki) ... 639
A Model-driven, Agent-based Approach for a Rapid Integration of Interoperable Services (I. Zinnikus, Ch. Hahn, K. Fischer) ... 651
BSMDR: A B/S UI Framework Based on MDR (P. Zhang, S. He, Q. Wang, H. Chang) ... 665
A Proposal for Goal Modelling Using a UML Profile (R. Grangel, R. Chalmeta, C. Campos, R. Sommar, J.-P. Bourey) ... 679

Index of Contributors ... 691
Index of Keywords ... 693
Part I
Business and Strategies, Cases
An Approach for the Evaluation of the Agility in the Context of Enterprise Interoperability

S. Izza (1), R. Imache (2), L. Vincent (1), and Y. Lounis (3)

(1) Ecole des Mines de Saint-Étienne, Industrial Engineering and Computer Science Laboratory, OMSI Division, 158 cours Fauriel, 42023 Saint-Etienne Cedex 2, France. {izza, vincent}@emse.fr
(2) University of Boumerdes, Department of Informatics, LIFAB, Boumerdes, Algeria. [email protected]
(3) University of Tizi-Ouzou, Department of Informatics, Tizi-Ouzou, Algeria. [email protected]
Abstract. Today the concept of agility has become popular and is still growing in popularity in the management and technology literatures. How do we really define an agile information system? How do we know if an information system is agile? And how do we evaluate agility in the context of enterprise information systems? This paper tries to answer some of these questions. It presents the concept of agility of information systems and provides an approach for its evaluation, namely the POIRE approach, which evaluates agility as an amalgamation function combining the agility measures of five complementary aspects: Process, Organizational, Informational, Resource and Environmental. Finally, this paper studies the role of interoperability in achieving agility and the rapprochement of the concept of interoperability with the concept of agility.

Keywords: Enterprise Information System; Business; Information Technology; Agility; POIRE; Agility Evaluation; Fuzzy Logic; Interoperability.
1 Introduction

In the last few years, the concept of agility has become popular and is still growing in popularity in the management and technology literatures. On the management front, enterprises are daily facing pressures to demonstrate agile behaviour in a dynamic environment. On the technology front, the resources deployed to manage and automate information systems are facing flexibility and leanness issues in order to ease the demands of the management front.
The purpose of this paper is to investigate the notion of agility and the role of interoperability in achieving business agility. This paper also provides some agility measurement principles that can be exploited in the context of enterprise interoperability, and some discussion of the relation between interoperability and agility. The paper is organized as follows: Section 2 introduces some of the important related work on the concept of agility. Section 3 presents the POIRE (Process, Organization, Information, Resource and Environment) approach for measuring agility in general, with a focus on the relation that exists between interoperability and agility. Section 4 describes the agility evaluation in the context of enterprise interoperability. Finally, Section 5 presents some conclusions and outlines some important future work.
2 Related Work

2.1 The Concept of Agility

The concept of agility originated at the end of the eighties and the early nineties in the manufacturing area in the United States. Agile Manufacturing was first introduced with the publication of a report by [8] entitled "21st Century Manufacturing Enterprise Strategy". Since then, the concept has been used for agile manufacturing and agile corporations, and was extended to supply chains and business networks [2], to enterprise information systems [15] and also to software development [1]. Despite the age of the concept, there is no consensus yet on a definition of agility. According to [4], most of the agility concepts are adaptations of elements such as flexibility and leanness, which originated earlier. In developing their definitions, [4] draw on the concepts of flexibility and leanness to define agility as the continual readiness of an entity to rapidly or inherently, proactively or reactively, embrace change, through high quality, simplistic, economical components and relationships with its environment. According to [5], being agile generally results in the ability to (i) sense signals in the environment, (ii) process them adequately, (iii) mobilize resources and processes to take advantage of future opportunities, and (iv) continuously learn and improve the operations of the enterprise. Along the same lines, [9] interpreted agility as creativity and defined enterprise agility as the ability to understand the environment and react creatively to both external and internal changes. In the same way, [11] interpret agility of information systems as the ability to become vigilant. Agility can also be defined in terms of the characteristics of the agile enterprise [20]: (i) sensing: the ability to perceive environmental conditions, gather useful information from the system, readily detect changes and anticipate changes; (ii) learning: the ability to effectively modify organizational behaviours and beliefs through experience and the ability to use information to improve the organization; (iii) adaptability: the ability to effect changes in systems and structures in response to (or in anticipation of) changes in environmental conditions; (iv) resilience: robustness to diversity and variability, and the ability to recover from changes; (v) quickness: the ability to
accomplish objectives in a short period of time, the pace at which changes are accomplished, and the rate of movement within organizational systems; (vi) innovation: the ability to generate many solutions to a problem; (vii) concurrency: the ability to effectively perform related activities at the same time with the same assets; and (viii) efficiency: the ability to use minimal resources to accomplish desired results.

Currently it is well established that agility can be studied through two complementary perspectives: the management (or business) perspective and the IT (information technology) perspective [3] [5]. First, the concept of agility in the management perspective is often synonymous with the concept of alignment, of effective and efficient execution of business processes [7]. Another view of agility can be expressed in terms of an enterprise's ability to continually improve business processes [18], or as "the capacity to anticipate changing market dynamics, adapt to those dynamics and accelerate change faster than the rate of change in the market, to create economic value". Second, in the IT perspective, agility of information systems is often studied through the IT solutions that compose information systems. Currently, it is well established that IT and business are related, and enterprises invest more and more in IT to drive current business performance, enable future business initiatives, and create business agility [17]. There are many works, in both academia and industry, that studied the impact of IT on business agility [17]. These works present two distinct and opposing views in relation to the impact of IT investment on business agility. The first view is that IT can stabilize and facilitate business processes but can also obstruct the functional flexibility of organizations, making them slow and cumbersome in the face of emerging opportunities; in such systems, business processes are often hardwired by rigid predefined process flows. The second view portrays IT applications as disruptive influences, often dislodging efficiently working processes and leading to widespread instability that reduces the effectiveness of the organization's competitive efforts [13]. These two opposing views need resolution for managers who face a continually changing IT-driven business environment. Being agile is a compelling catch cry, but may lead to a complex and confusing interaction between stability and flexibility [17].

2.2 Approaches for Characterizing and Measuring Agility

There are also some works that treat agility issues within enterprises; they mainly concern the strategizing of agility [7], the identification of the capabilities of agility [19], the identification of agility levels [14], the proposition of conceptual agility frameworks [16], and the measurement of agility [21]. [7] studied agility from the strategy point of view and mentions that there are three main points for strategizing agility: (i) the exploitation strategy, (ii) the exploration strategy, and (iii) the change management strategy. The exploitation strategy concerns the environmental and organizational analysis, the enterprise information and knowledge systems, the standardized procedures and rules, and the information services. The exploration strategy is related to the alternative futures of information systems, the existing communities of practice, the flexibility of project teams, the existence of knowledge brokers, and the possibility of cross-
project learning. The change management strategy depends on the ability to incorporate ongoing learning and review. [19] distinguish three interrelated capabilities of agility: (i) operational agility: the ability to execute the identification and implementation of business opportunities quickly, accurately, and cost-efficiently; (ii) customer agility: the ability to learn from customers, identify new business opportunities and implement these opportunities together with customers; and (iii) partnership agility: the ability to leverage a business partner's knowledge, competencies, and assets in order to identify and implement new business opportunities. This distinction is in line with the multiple initiatives proposed in the literature: (i) internally focused initiatives (operational agility), (ii) demand-side initiatives (customer agility), and (iii) supply-side initiatives (partnership agility). Concerning the identification of agility levels, [14] argues that systems can be agile in three different ways: (i) by being versatile, (ii) by reconfiguration, and (iii) by reconstruction. Being versatile implies that an information system is flexible enough to cope with changing conditions as it is currently set up. If current solutions are not versatile enough, reconfiguration will be needed; this can be interpreted as pent-up agility released by a new configuration. If reconfiguration is not enough, reconstruction will be needed; this means that changes or additions have to be made to the information system. Furthermore, [14] proposed a framework that discusses how agility is produced and consumed. This is closely related to the level of agility, which can be interpreted as the result of an agility production process to which resources are allocated. These agility levels are then used to consume agility when seizing business opportunities. Additionally, he outlines that when consuming agility within a business development effort, agility is in many situations reduced. This means that we are confronted with negative feedback that indicates how much the enterprise's agility is reduced by this business development effort. An important agility framework, which concerns the management perspective, is that proposed by [16]. In this framework, we begin with the analysis of the change factors, where a required response of the enterprise is related to the enterprise's IT capability. Then, an enterprise's agility readiness is determined by its business agility capabilities. These latter are the reasons behind the existence or non-existence of agility gaps. If there is a mismatch between the business agility needs and the business agility readiness, there is a business agility gap. This has implications for the business agility IT strategy. Another important work is by [12], who studied agility from the socio-technical perspective. In this perspective, the information system is considered as composed of two sub-systems: a technical system and a social system. The technical subsystem encompasses both technology and process. The social subsystem encompasses the people who are directly involved in the information system and the reporting structure in which these people are embedded. To measure information system agility using the socio-technical perspective, [12] use the agility of the four components: technology agility, process agility, people agility, and structure agility.
Hence, [12] argue that agility is not a simple sum of the agility of the four components, but depends on their nonlinear relationship. Furthermore, [22] mention the importance of the preservation of agility through audits and people education. This
latter aspect is important because most organizations continually need education for continuous agility. Finally, [21] proposed a fuzzy-logic, knowledge-based framework to evaluate manufacturing agility. The value of agility is given by an approximate reasoning method taking into account the knowledge included in fuzzy IF-THEN rules. By utilizing these measures, decision-makers have the opportunity to examine and compare different systems at different agility levels. For this purpose, agility is evaluated according to four aspects: (i) production infrastructure, (ii) market infrastructure, (iii) people infrastructure and (iv) information infrastructure. Although all these works are important, our work is most closely related to those proposed by [12] and [21]. In the following, we propose to extend these lines of research to the evaluation of the agility of information systems, in particular in the context of the enterprise interoperability perspective.
3 The POIRE Framework

Basing our research on the work of [12] and [21], we suggest the following framework, called POIRE (Process, Organization, Information, Resource and Environment). In the following, we will briefly expose the main dimensions of POIRE and then focus on the evaluation of agility in the context of enterprise interoperability.
3.1 POIRE Dimensions

We suggest for our agility framework (POIRE) the following dimensions, which are necessary in the context of measuring the agility of enterprise information systems (Figure 1):
Fig. 1. POIRE dimensions for enterprise information system agility (diagram relating the five dimensions Environment (E), Organization (O), Information (I), Resource (R) and Process (P) through relations such as "interacts with", "uses", "provides", "contains" and "manipulates")
- Process dimension (P): This dimension deals with the enterprise behaviour, i.e. business processes. It can be measured in terms of the time and cost needed to counter unexpected changes in the processes of the enterprise. An agile process infrastructure enables in-time response to unexpected events, such as correction and reconfiguration. Processes can be measured by their precision, exhaustivity, non-redundancy, utility, reliability, security, integrity, actuality, efficiency, effectiveness, and feasibility.
- Organization dimension (O): This dimension deals with all the organizational elements involved in industry, i.e. structure, organization chart, etc. These can be measured by their hierarchy type, management type, range of subordination, organizational specialization, intensity of their headquarters, non-redundancy, flexibility, turnover, and exploitability.
- Information dimension (I): This dimension deals with all the information stored and manipulated within the enterprise. It concerns the internal and external circulation of information. It can be assessed from the level of information management tasks, i.e. the ability to collect, share and exploit structured data, and measured by accuracy, exhaustivity, non-redundancy, utility, reliability, security, integrity, actuality, publication, and accessibility.
- Resource dimension (R): This dimension is about the resources used within the enterprise. It mainly concerns people, IT resources, and organizational infrastructures, which can be measured by their usefulness, necessity, use, reliability, connectivity and flexibility. Concerning people, who constitute in our opinion the main key to achieving agility within an enterprise, agility can be assessed by the level of training of the personnel, the motivation/inspiration of employees and the data accessible to them.
- Environment dimension (E): This dimension deals with the external factors of the enterprise, including customer service and marketing feedback. It can be measured by the ability of the enterprise to identify and exploit opportunities, customize products, enhance services, deliver them on time and at lower cost, and expand its market scope, as well as by its reactivity, proactivity and accuracy.
3.2 POIRE Metamodel

Figure 2 shows the POIRE metamodel. As illustrated, agility is evaluated according to a certain number of agility factors, which are determined using a set of agility evaluation criteria for each dimension of the information system. These criteria are measured thanks to identified metrics that concern a given aspect (or dimension) of the information system. The evaluation of the metrics is based in practice on the evaluation of a certain number of questions that are defined within the questionnaire of the corresponding dimension. Furthermore, agility factors and criteria are not independent, in the sense that they may mutually influence each other.

Fig. 2. POIRE metamodel (entities: Information System, Information System Dimension, Questionnaire, Question, Agility Grid, Agility Factor, Agility Criteria, Metric; relations: "concerns", "contains", "evaluates" and "influences")
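To make the metamodel concrete, the following is a minimal sketch of its entities as Python data structures. The class names mirror the entities of Fig. 2; the simple mean used as an amalgamation in score() is our own simplifying assumption for illustration, since the POIRE approach itself evaluates the metrics with fuzzy linguistic variables, as described in the next subsection.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    """A measurable quantity evaluated from questionnaire answers (0..1)."""
    name: str
    value: float  # normalized score obtained for this metric

@dataclass
class AgilityCriterion:
    """An evaluation criterion aggregating several metrics."""
    name: str
    metrics: list[Metric] = field(default_factory=list)

    def score(self) -> float:
        # Assumption: simple mean; POIRE may use a fuzzy amalgamation instead.
        return sum(m.value for m in self.metrics) / len(self.metrics)

@dataclass
class AgilityFactor:
    """An agility factor determined by a set of criteria."""
    name: str
    criteria: list[AgilityCriterion] = field(default_factory=list)

    def score(self) -> float:
        return sum(c.score() for c in self.criteria) / len(self.criteria)

@dataclass
class Dimension:
    """One POIRE dimension (P, O, I, R or E) with its factors."""
    name: str
    factors: list[AgilityFactor] = field(default_factory=list)

    def score(self) -> float:
        return sum(f.score() for f in self.factors) / len(self.factors)

# Hypothetical fragment of the Process dimension.
precision = Metric("precision", 0.8)
reliability = Metric("reliability", 0.6)
process = Dimension("Process", [AgilityFactor("responsiveness",
    [AgilityCriterion("quality", [precision, reliability])])])
print(process.score())  # 0.7
```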
3.3 POIRE Methodology

In order to evaluate agility, we begin with the analysis of the information system and the determination of the target information-system agility grid. Then, we customize the questionnaire and apply the linguistic-variable concept of fuzzy logic to evaluate the different metrics, which allows determining the agility criteria and the real agility grid. Once the real and target grids are compared, we conclude with an HAIS (High Agility of the Information System) message, or we make the necessary adjustments in the case of an AAIS (Average Agility of the Information System) or LAIS (Low Agility of the Information System). Figure 3 briefly illustrates the main principle of the proposed methodology; a code sketch of the grid-comparison step is given after the figure.
Fig. 3. POIRE methodology (analyze and/or modify the information system; determine the target agility grid; customize and fill in the questionnaire; evaluate the real agility metrics and the real agility grid; compare the target and real agility grids; conclude with HAIS, or define corrections and adjustments in the case of AAIS or LAIS)
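As a minimal sketch of the evaluation and comparison steps just described, the following code maps crisp metric scores to the linguistic terms of a fuzzy variable "agility level" and classifies the gap between the real and target grids as HAIS, AAIS or LAIS. The triangular membership functions, the thresholds and the example grid values are illustrative assumptions, not values taken from the paper.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic variable "agility level" over normalized scores in [0, 1].
LEVELS = {
    "low":     lambda x: tri(x, -0.01, 0.0, 0.5),
    "average": lambda x: tri(x, 0.0, 0.5, 1.0),
    "high":    lambda x: tri(x, 0.5, 1.0, 1.01),
}

def linguistic(score: float) -> str:
    """Map a crisp score to its best-fitting linguistic term."""
    return max(LEVELS, key=lambda term: LEVELS[term](score))

def classify(real: dict, target: dict) -> str:
    """Compare real vs. target agility grids and emit HAIS/AAIS/LAIS."""
    gap = sum(max(0.0, target[d] - real[d]) for d in target) / len(target)
    if gap < 0.1:
        return "HAIS"  # the real grid essentially meets the target
    return "AAIS" if gap < 0.3 else "LAIS"  # else adjustments are needed

real   = {"P": 0.7, "O": 0.5, "I": 0.8, "R": 0.6, "E": 0.4}
target = {"P": 0.8, "O": 0.7, "I": 0.8, "R": 0.7, "E": 0.7}
print({d: linguistic(real[d]) for d in real})  # e.g. 'I' maps to 'high'
print(classify(real, target))                  # 'AAIS' for these grids
```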
4 Agility Evaluation in the Context of Enterprise Interoperability

A basis for assessing agility in a context of interoperability is still in its beginnings; the concept of agility is still being investigated and refined with the time and context of the enterprise. In the present work, according to the preceding sections, agility evaluation concerns all the dimensions of agility; hence, for each agility dimension, we suggested a list of pertinent points that must be considered in order to evaluate the agility of enterprise information systems. In the same way, we conducted a study in order to learn about the role of interoperability in achieving agility, and also in order to better identify the rapprochement that may exist between these two concepts, which in our opinion has not yet been discussed in the literature. Let us recall that interoperability is the ability of two heterogeneous components to communicate and cooperate with one another despite differences in languages, interfaces, and platforms [23] [24]. In order to ensure true
interoperability between components or systems, both syntactic interoperability [25] and semantic interoperability [10] [23] must be specified. For the purpose of studying the rapprochement between agility and interoperability, we identified several metrics within the five above dimensions that are related to the interoperability aspect. We then evaluated them using maturity grids, based on the CMMI model (with five maturity levels) and using sample examples taken from industrial realities. The idea is to understand how to endow the enterprise with the ability to maintain and adjust its interfaces, in terms of agility of interoperability, at different levels of abstraction of its dimensions under unexpected changes and conditions. Figure 4 illustrates the principle of the rapprochement between interoperability and agility using maturity grids.

Fig. 4. Evaluating the role of interoperability in achieving agility with grids (two maturity grids plot metrics A-I against interoperability levels 0-5 and agility levels 0-5; the legend distinguishes real, target and ideal profiles)
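The following is a minimal sketch of such a maturity-grid comparison. The metric names and the levels assigned to the real and target profiles are invented for illustration; only the five-level, CMMI-style scale and the real/target/ideal distinction come from the text.

```python
# Maturity grid: each interoperability-related metric gets a level 0..5.
METRICS = ["data formats", "interfaces", "process alignment", "semantics"]

real   = {"data formats": 2, "interfaces": 1, "process alignment": 3, "semantics": 1}
target = {"data formats": 4, "interfaces": 3, "process alignment": 4, "semantics": 3}
ideal  = {m: 5 for m in METRICS}  # the ideal profile sits at the top level

def gaps(profile_a: dict, profile_b: dict) -> dict:
    """Per-metric maturity gap between two profiles of the same grid."""
    return {m: profile_b[m] - profile_a[m] for m in profile_a}

for metric, gap in gaps(real, target).items():
    print(f"{metric:18s} real->target gap: {gap} level(s)")
```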
Due to the complexity of the data and the large number of details concerning the conducted experiment, we only briefly describe the main results obtained. First of all, we can retain that interoperability can be seen as one important property of agility. Furthermore, agility can be considered as a non-linear function of interoperability. This may be explained in theory by the fact that interoperability, which is correlated with complexity, influences agility in two ways: with no interoperability there will be no agility, while with excessive interoperability, and thus with excessive complexity, agility will also decline, as illustrated in Figure 5. We also notice that there is an asymptotic equilibrium, defined as the tendency of the agility of the system during its life cycle.
Fig. 5. Relating agility and interoperability - theoretical tendency (agility plotted against interoperability, with an asymptotic equilibrium line)
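Purely as an illustration of the qualitative shape described above (no agility without interoperability, decline under excessive interoperability, and an asymptotic equilibrium), the tendency could be modelled with a unimodal curve; the functional form and the parameters below are our own assumptions and do not appear in the paper.

```python
import math

def agility(i: float, a: float = 1.5, c: float = 0.8, equilibrium: float = 0.3) -> float:
    """Illustrative curve: zero at i=0, a peak at moderate interoperability,
    then decay towards the asymptotic `equilibrium` as complexity grows."""
    return equilibrium * (1.0 - math.exp(-c * i)) + a * i * math.exp(-c * i)

for i in [0.0, 1.0, 2.0, 5.0, 10.0]:
    print(f"interoperability={i:4.1f}  agility={agility(i):.2f}")
```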
In addition, we notice that in practice there is some turbulence in the agility grids due to the occurrence of certain changes (technology changes, business changes, strategic changes, organizational changes) in the enterprise (Figure 6). These changes lead to breaking zones that need appropriate management in order to steer the enterprise through this state of transition.
Fig. 6. Relating agility and interoperability - practical tendency (as Fig. 5, but with breaking zones, e.g. changes in technology or in the IT teams, disturbing the curve before it returns towards the asymptotic equilibrium)
5 Conclusions

We have presented in this paper an approach for the evaluation of agility in the context of enterprise interoperability. We have presented the main principles of the POIRE framework, and we have studied the role of interoperability and its rapprochement with agility. We notice that implementing syntactic and semantic interoperability yields an increase in agility by easing the process of reconfiguration when adapting the information system to unpredictable changes. However, there is an asymptotic equilibrium for the agility level beyond a certain degree of interoperability. Future work will concern the exploitation of this framework in large-scale realities in order to validate it, and will also concern the investigation in more detail of other forms of agility properties, such as vigilant information systems, lean information systems, outsourced information systems, and information systems in the context of the Enterprise 2.0 wave.
References

[1] Abrahamsson P., Warsta J., Siponen M.T. and Ronkainen J., "New Directions on Agile Methods: A Comparative Analysis". Proceedings of ICSE'03, 2003, pp. 244-254.
[2] Adrian E., Coronado M. and Lyons A. C., "Investigating the Role of Information Systems in Contributing to the Agility of Modern Supply Chains". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, ISBN-10: 0-7506-8235-3, 2007, pp. 150-162.
[3] Ahsan M. and Ye-Ngo L., "The Relationship between IT Infrastructure and Strategic Agility in Organizations". In Proceedings of the Eleventh Americas Conference on Information Systems, N. C. Romano, Jr. (ed.), Omaha, NE, 2005.
[4] Conboy K. and Fitzgerald B., "Towards a Conceptual Framework of Agile Methods: A Study of Agility in Different Disciplines". ACM Workshop on Interdisciplinary Software Engineering Research, Newport Beach, CA, November 2004.
[5] Desouza K. C., "Preface". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, ISBN-10: 0-7506-8235-3, 2007.
[6] Dove R., "Response Ability: the Language, Structure and Culture of Agile Enterprise". New York, Wiley, 2001.
[7] Galliers R. D., "Strategizing for Agility: Confronting Information Systems Inflexibility in Dynamic Environments". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 1-14.
[8] Goldman S. et al., "21st Century Manufacturing Enterprise Strategy". Bethlehem, PA: Iacocca Institute, Lehigh University, 1991.
[9] Goranson H. T., "The Agile Virtual Enterprise, Cases, Metrics, Tools". Quorum Books, 1999.
[10] Heiler S., "Semantic Interoperability". ACM Computing Surveys, vol. 27, issue 2, 1995, pp. 271-273.
[11] Houghton R. J. et al., "Vigilant Information Systems: The Western Digital Experience". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 222-238.
[12] Lui T-W. and Piccoli G., "Degrees of Agility: Implications from Information Systems Design and Firm Strategy". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 122-133.
[13] Lyytinen K. and Rose G. M., "The Disruptive Nature of IT Innovations: The Case of Internet Computing in Systems Development Organizations". MIS Quarterly, 27 (4), 2003, p. 557.
[14] Martensson A., "Producing and Consuming Agility". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 41-51.
[15] Mooney J. G. and Ganley D., "Enabling Strategic Agility Through Agile Information Systems". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 97-109.
[16] Oosterhout M. V., Waarts E., Heck E. V. and Hillegersberg J. V., "Business Agility: Need, Readiness and Alignment with IT Strategies". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 52-69.
[17] Ross J. W. and Beath C. M., "Beyond the Business Case: New Approaches to IT Investment". MIT Sloan Management Review, 43 (2), 2002, pp. 51-59.
[18] Rouse W. B., "Agile Information Systems for Agile Decision Making". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 16-30.
[19] Sambamurthy V., Bharadwaj A. and Grover V., "Shaping Agility through Digital Options: Reconceptualising the Role of Information Technology in Contemporary Firms". MIS Quarterly, 27 (2), 2003, pp. 237-263.
[20] Stamos E. and Galanou E., "How to Evaluate the Agility of Your Organization: Practical Guidelines for SMEs". VERITAS, 2006. Available at: http://www.veritaseu.com/files/VERITAS_D6_1_Agility_Evaluation_Handbook.pdf
[21] Tsourveloudis N. et al., "On the Measurement of Agility in Manufacturing Systems". Journal of Intelligent and Robotic Systems, 33 (3), Kluwer Academic Publishers, Hingham, MA, USA, 2002, pp. 329-342.
[22] Wensley A. and Stijn E. V., "Enterprise Information Systems and the Preservation of Agility". In Desouza K. C. (ed.), Agile Information Systems: Conceptualization, Construction, and Management. Elsevier, Burlington, USA, 2007, pp. 178-187.
[23] Wegner P., "Interoperability". ACM Computing Surveys, vol. 28, issue 1, 1996.
[24] Wileden J. C. and Kaplan A., "Software Interoperability: Principles and Practice". Proceedings of the 21st International Conference on Software Engineering (ICSE), ACM, 1999, pp. 675-676.
[25] Wileden J. C. et al., "Specification Level Interoperability". Proceedings of the 12th International Conference on Software Engineering (ICSE), ACM, 1990, pp. 74-85.
Industrialization Strategies for Cross-organizational Information Intensive Services

Christoph Schroth (1, 2)

(1) University of St. Gallen, MCM Institute, Blumenbergplatz 9, 9000 St. Gallen, Switzerland
(2) SAP Research CEC St. Gallen, Blumenbergplatz 9, 9000 St. Gallen, Switzerland
[email protected]
Abstract. Cross-organizational collaboration is about to gain significant momentum and facilitates the emergence of a globally networked service economy. However, the organization and implementation of business relationships which span company boundaries still show considerable weaknesses with respect to productivity, flexibility and quality. New concepts are therefore required to facilitate a comprehensive industrialization by improving the formalization, standardization and automation of related concepts and methodologies. In this work, we briefly elaborate on a specific case of governmental administration in Switzerland, which represents a cross-organizational service industry with significant potential for performance enhancement. We present a reference architecture for service-oriented business media which allow the different involved stakeholders to organize and implement cross-company collaboration as efficiently as possible. By applying this reference architecture to the case of public administration, we show that "Lean" service consumption and provision between organizations can be realized: similar to the manufacturing context, the seven major kinds of "waste" (defects, overproduction, excessive inventory, transportation, waiting, motion, over-processing) are reduced.

Keywords: Engineering interoperable systems, Requirements engineering for the interoperable enterprise, Interoperability issues in electronic markets, Interoperability performance analysis, Interoperable enterprise architecture
1 Introduction

Cross-organizational collaboration is about to gain significant momentum and facilitates the emergence of a globally networked service economy. The relentless march of improvements in the cost-performance ratio of information technology already provides companies with the opportunity to execute such cross-organizational business relationships electronically and thus to extend market
reach, save time, cut costs and respond to customer queries more agilely [1, 2]. However, the organization of such business relationships still shows considerable weaknesses with respect to productivity, flexibility and quality. Business processes are often unstructured, unclear terminology prevents a common understanding, functional as well as non-functional parameters are rarely formalized and standardized, and the frequently manual execution leads to high variability and poor manageability of results. New concepts are therefore required to facilitate a comprehensive industrialization by improving the formalization, standardization and automation of related concepts and methodologies. In this work, we first of all present clear definitions of relevant terms and elaborate on the research approach applied (Section 2). We also briefly revisit traditional product manufacturing with respect to the major advancements which facilitated its industrialization over time. In Section 3, we present a reference architecture for service-oriented, electronic business media which allow for improving the organization and implementation of cross-organizational business relationships. Section 4 is devoted to the application of this reference architecture to an exemplary use case in the field of public administration. We show that "Lean" service consumption and provision between organizations can be realized: similar to the manufacturing context, the seven major kinds of "waste" (defects, overproduction, excessive inventory, transportation, waiting, motion, over-processing) are reduced significantly. Section 5 concludes the work with a brief summary and an overview of selected related research work.
2 Definition of Terms and Research Approach

Numerous scholars [3, 4, 5] have worked on differentiating services from products, resulting in various kinds of characteristics which they consider unique to the world of services. The service definition underlying this work is as follows: a service is considered an activity that is performed to create value for its consumer by inducing a change of the consumer himself, his intangible assets or his physical properties. Specifically, information intensive services (IIS) can be described as activities that are conducted by either machines or humans and deal with the "manipulation of symbols, i.e. collection, processing and dissemination of symbols - data, information and decisions" [4, p.492]. The focus of this study lies on IIS which are provided and consumed across corporate boundaries (e.g., the collaborative creation of documents or order and invoice processes). Studies of cross-organizational collaboration can be traced back several centuries: in the 18th century, Smith [6] argued that the division of labour among agents facilitates economic wealth. Through their specialization on limited scopes of duties, productivity can be improved, as the repeated execution of the same activity helps tap existing potential for optimization. Malone [1] describes how decreasing costs for communication and transportation (also referred to as transaction costs [7]) enabled this division of labour and made organizations evolve from isolated, small businesses over large hierarchies to decentralized markets. Today, organizations try to lower the costs of collaborating through information
exchange even more. The term industrialization is used in various contexts and with very different meanings. In this work, we refer to industrialization as the application of scientific means to formalize, standardize and automate the creation of customer value, with the final goal of improving performance (measured on the basis of the indicators productivity, quality and flexibility [8]).

Fig. 1. Industrialization of traditional product manufacturing (18th/19th century - Watt, Whitney, Bessemer: power machinery, interchangeable parts, division of labour, but no interrelation of different processes and no consideration of the single workers; early 20th century - Taylor, Ford, Sloan: holistic view on production processes, introduction of the "production-line approach", separation of process planning and execution, worker specialization, focus on both productivity and flexibility (Sloan); late 20th century - Ohno, Krafcik: Lean Manufacturing to reduce waste (superfluous overhead), simultaneous improvement of productivity, quality and flexibility)
The industrialization of traditional manufacturing (Fig. 1) can trace its roots back to the work of inventors such as James Watt in the 18th century, who developed techniques to make steam engines significantly faster, safer, and more fuel-efficient than existing devices. During the 18th and 19th centuries, the general focus of industrialization was mainly on the development of power machinery, interchangeable parts, and the division of labour. A more holistic perspective that takes into account overall production processes and also the management of workers did not emerge until the early 20th century: Taylor [9], Ford and Sloan introduced scientific means to the analysis of production processes, promoted the production-line approach (to enable mass production) and focused on both productivity and manufacturing flexibility. In the second half of the 20th century, the first "Lean" manufacturing systems emerged [8]. As opposed to the traditional, buffered approach (where inventory levels are high, assembly lines have built-in buffers against partial system breakdowns, and utility workers are kept on the payroll to buffer against unexpected demand), the Lean Manufacturing approach aims to reduce superfluous overhead (represented by the seven kinds of "waste": defects, overproduction, excessive inventory, transportation, waiting, motion, over-processing). In this way, all relevant performance indicators (productivity, quality, flexibility) could be improved.
3 A Reference Architecture for Seamless Cross-Organizational Collaboration

As briefly introduced above, electronic, cross-organizational collaboration shows significant weaknesses with respect to performance indicators such as quality, flexibility and productivity. The reference architecture introduced in this section facilitates the organization and implementation of electronic media for efficient and effective collaboration. For this purpose, we leverage Schmid's Media Reference Model (MRM) [10]: the concept of media can first of all be defined as an enabler of interaction, i.e. media allow for exchange, particularly communicative exchange, between agents. The MRM comprises the three major components Physical Component (physical basis of the medium), Logical Component (logical sphere between interacting agents) and Organizational Component (social interaction organization). It further consists of a dedicated layer and phase model which builds upon these three components and allows for systematically modelling, understanding and reorganizing media.

Fig. 2. Service-Oriented Business Medium Reference Architecture (Organizational Component: information objects, interaction rules, registry, roles and coordination; Logical Component: formats; Physical Component: a service bus providing routing, a subscription directory, error handling and an event catalogue, connecting public services 1..n as well as software and Web clients)
Fig. 2 visualizes the major components of our service-oriented reference architecture. As a part of the MRM's Organizational Component, the structural as well as the process-oriented organization of the agents' interaction is defined ("Coordination" in Fig. 2). For the structural organization, electronic media need to provide a registry of participating agents as well as a role model defining rights and obligations for each of them. Also, the information objects (e.g., documents) need to be specified in a common way to ensure seamless interaction between the agents (who assume the above-mentioned roles). In terms of process-oriented organization, a cross-organizational status and action approach is leveraged: each of the roles may execute certain actions (consume or provide services) depending on the current status of the collaboration and on a set of generally valid business rules. Adhering to the principle of modularization, atomic interaction patterns are introduced which can be assembled towards more high-level business processes [11]. Pieces of the process-oriented organization can be automated on the basis of finite state machines (FSM): perceived events may change their state and trigger a sequence of further events, representing a sub-process; a sketch of this mechanism follows.
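As a minimal sketch of such an event-driven finite state machine, the following assumes a hypothetical two-step review sub-process; the states, events and transition table are illustrative and are not prescribed by the reference architecture itself.

```python
# Event-driven FSM: (state, event) -> (next state, events to emit).
# The transition table encodes a hypothetical document-review sub-process.
TRANSITIONS = {
    ("awaiting_document", "document_received"): ("under_review", ["review_requested"]),
    ("under_review", "review_passed"): ("approved", ["approval_notified"]),
    ("under_review", "review_failed"): ("awaiting_document", ["rework_requested"]),
}

class CoordinationFSM:
    """Automates one piece of the process-oriented organization."""
    def __init__(self, state: str = "awaiting_document"):
        self.state = state

    def perceive(self, event: str) -> list[str]:
        """Consume a perceived event; return the follow-up events it triggers."""
        key = (self.state, event)
        if key not in TRANSITIONS:
            return []  # event not permitted in the current status
        self.state, emitted = TRANSITIONS[key]
        return emitted

fsm = CoordinationFSM()
print(fsm.perceive("document_received"))  # ['review_requested']
print(fsm.perceive("review_passed"))      # ['approval_notified']
print(fsm.state)                          # 'approved'
```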
This organizational component is complemented by the Logical Sphere (L-Component), which ensures that agents can seamlessly interact on the basis of a common understanding. The semantics and structure of exchanged pieces of information represent one major element of this component; the description and the underlying understanding of interaction patterns as well as role model semantics represent further elements. As a third component, the C-Component enables the interaction of the different stakeholders by providing the technology (the physical means) for exchanging messages, and relies on the principles of Event-Driven Architectures. It may be considered a cross-organizational operating system, as it provides services for information routing, error and exception handling, data validation and transformation, security, a user directory (supporting the retrieval of adequate business partners), event/catalogue services (storing information about all supported events and related user rights) as well as services allowing for diverse kinds of information dissemination (unicast, publish-subscribe, etc.). Besides these operational services, the C-Component also features coordination services which implement the logic of the above-outlined finite state machines (devoted to automating specified pieces of the process-oriented organization). Finally, the actual agents (stakeholders interacting with the help of an electronic medium) encapsulate the services they offer with the help of Web Services-based, standardized adapters which enable them to connect and thus use existing, often proprietary applications. For stakeholders who do not use local software applications, Web-based clients are provided. This reference architecture is elaborated in more detail in [12].

Adapters play a crucial role with respect to both scalability and flexibility of our reference architecture: in the course of the HERA project [13], different "ecosystems" of agents have been found to interact on the basis of proprietary information objects, different role models and interaction patterns as well as rules. To account for such individual requirements in electronic collaboration while still ensuring global connectivity and interoperability, adapters are essential to mediate between the diverse spheres. Adapters may thus not only be used to shield the specificities of one agent from the common medium (see Fig. 2), but may also act as mediators between two or more different electronic business media: in this way, users which are connected to different media (e.g., because they belong to different business ecosystems) can interact with the help of an adapter which translates different data formats or other elements of our reference architecture. In [12], a thorough analysis of this cross-media connectivity (architectural recursivity) is provided.
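As an illustration of the C-Component's publish-subscribe dissemination and of the adapter concept just described, here is a minimal sketch; the topic name and the adapter's format translation are hypothetical, and a real service bus would add the routing, validation, security and error-handling services named above.

```python
from collections import defaultdict
from typing import Callable

class ServiceBus:
    """Minimal event bus: publish-subscribe dissemination of messages."""
    def __init__(self):
        self.subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(message)  # a real bus would also validate and route here

class Adapter:
    """Shields an agent's proprietary format from the common medium."""
    def __init__(self, bus: ServiceBus, agent_name: str):
        self.bus, self.agent_name = bus, agent_name

    def to_common(self, proprietary: dict) -> dict:
        # Hypothetical translation into the commonly agreed message format.
        return {"sender": self.agent_name, "payload": proprietary}

    def send(self, topic: str, proprietary: dict) -> None:
        self.bus.publish(topic, self.to_common(proprietary))

bus = ServiceBus()
bus.subscribe("tax.declaration.submitted", lambda m: print("canton received:", m))
Adapter(bus, "accountant").send("tax.declaration.submitted", {"form": "F-2008"})
```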
4 Industrialization of IIS in the Case of Swiss Public Administration

4.1 The Collaborative Process of Creating Tax Declarations in Switzerland

In this section, we elaborate on a case study conducted in the course of the Swiss government-funded project HERA [13], which aims at improving the tax declaration procedure in Switzerland. It serves as a specific case of the interaction of certain stakeholders who mutually consume and provide IIS in order to achieve a common goal. There are mainly four stakeholders involved in the cross-organizational creation of a tax declaration. First, the company itself aims at submitting, in as efficient a way as possible, a tax declaration that complies with the law, is consistent with the forms issued by the various cantons (Swiss states) and is optimized with respect to the resulting tax load. Accountants may be company-internal departments or external service providers; they create comprehensive financial statements and also provide consulting services with respect to profit appropriation strategies. Auditors must by law be organizationally separated from accountants to ensure their independence; they examine and verify the compliance of financial statements and profit appropriations. Finally, the cantons receive the completed tax declaration and initiate the assessment/enactment process. Municipalities play a certain role within the tax declaration process in some of the Swiss cantons, but are left out of this work due to space constraints. During this procedure of creating a tax computation, the division of labour among the players induces the need for coordination and information exchange between them, which follows certain choreographies. As a consequence, numerous documents are passed from one stakeholder to another and are thereby processed in different ways until they reach the end of their respective “lifecycles”. Today, the stakeholders interact with each other via different communication channels: some information is exchanged on paper; other documents are transferred via e-mail or proprietary electronic interfaces. The resulting media breaks, the lack of standardized interfaces and the strong involvement of humans in information processing induce high transaction costs and increase the risk of errors, thereby limiting service quality. Moreover, services are only rarely subject to quantifiable performance metrics: the study has shown that especially non-functional properties of services, such as delivered quality or the exact time required for completion, are usually not specified in a clear, formal and quantifiable way. Also, the cross-organizational process varies from canton to canton, as the individual states determine the boundary conditions of the tax declaration procedure. This heterogeneity prevents standardization with respect to terminology, processes and pieces of information, and therefore deteriorates the productivity of collaboration across the stakeholders’ boundaries. Frequently, decisions have been found to be made on the basis of best practices instead of formalized rule sets.
4.2 “Lean” IIS Management through the Reduction of Waste

By applying the reference architecture introduced above and deploying an electronic business medium (referred to as the HERA Bus) to support the collaborative tax declaration scenario, “waste” can be reduced significantly. The following paragraphs elaborate on the respective reductions, which are to be achieved in the course of the HERA project [13].

Waste of defects: In the governmental administration case outlined above, defects are one of the most relevant sources of waste and thus of reduced service performance. Paper-based information exchange, proprietary interfaces and application interconnections, as well as a high degree of human involvement, induce the risk of information processing errors. In particular, the mapping between different data formats (which requires significant effort) may lead to the incorrect interpretation of certain pieces of information (as they are rarely subject to widely accepted standards), which in turn results in defective documents (“waste”). The introduction of an electronic business medium which complies with the reference architecture outlined above will reduce error rates dramatically. As part of the O-Component, the information objects the agents deal with are clearly specified and standardized; through formalization and standardization, uncertainty in the mutual interaction, and thus also errors, are reduced. The semantics of exchanged pieces of information (part of the L-Component), for example, are also clearly determined through a standard created and agreed upon prior to the medium’s establishment. The “semantic gap” which frequently exists between the information objects the diverse agents are used to working with is bridged by the above-mentioned software adapters, which account for the translation from proprietary, locally used standards into the commonly comprehensible format and vice versa. Besides these elements of the O- and L-Components of our framework, the C-Component also contributes to the reduction of “defective” results of electronic interaction: the service bus acts similarly to a cross-organizational operating system and features services for error detection (on a physical level, e.g. XML schema processing errors, time limit overruns, routing errors), error removal and exception handling. Data validation services account for the integrity of transmitted data.

Overproduction and excessive inventory: As opposed to other scholars [4], we found that inventory exists not only in the case of physical goods, but also in the information processing context. According to our definition, service inventory represents all the work-in-progress and steps toward service accomplishment that have been performed and stored before the actual service consumer arrives. According to [14], such inventory must be reduced to a minimum to avoid costs such as those incurred for storage, management, processing and maintenance, as well as the decrease of its value over time. On the other hand, it allows firms to “buffer their resources from the variability of demand and reap benefits from economies of scale” [14, p.56] and to avoid a phenomenon which is frequently referred to as “Lean Sinking”. To find an adequate balance between the two, the “push-pull boundary” [14, p.56] must be easily adjustable (and should be placed in a way ensuring that as much work as possible is done only in response to actual demand). The reference architecture currently being set up in the course of the HERA project supports firms in optimally managing their information service inventory and preventing it from becoming excessive. The business medium, which can be considered an “intermediary”, will, for example, provide accountants with services that reduce their information inventory while increasing their responsiveness to customer demands: a dedicated service allows them to access up-to-date, electronic forms for completing tax computations (which is within the scope of the service they perform for their clients). In this way, they are supported in streamlining their inventory (which usually consists of large piles of possibly outdated, paper-based forms that need to be filled in manually). For their key customers, they may already pre-fill relevant information, which reduces service lead times on the one hand, but also costs time (for the pre-performed process step) and may eventually represent waste in case the client decides to engage another accountant. With the help of our novel architecture, agents such as accountants may also improve the quality of their services over time: old client data can be stored at the intermediary to allow for plausibility checks and the automated pre-filling of forms the next time the service is consumed. However, this information needs to be maintained, requires storage capacity and security mechanisms, must be processed correctly and may finally go to waste, inducing the need for a thorough and individual trade-off once again. The increased automation through service orchestration (e.g., by means of the presented finite state machines), high transparency and adherence to clear interaction rules (part of the O-Component) make certain activities redundant which are today done ahead of demand. The creation of a proposal for the tax “Ausscheidung” (the allocation of taxable income) across the different states which host at least one of a taxable company’s premises, for example, is today often prepared although it is not necessarily needed by the entities involved in later process steps. This kind of overproduction can be avoided by the establishment of the institutional intermediary, which enforces the application of clear rules and also provides high transparency with respect to the work that has already been done and the process steps that still need to be performed.

Transportation: In the traditional product manufacturing context, the transportation of goods is regarded as an activity that must be reduced to a minimum, as it does not add customer value but costs time and bears the risk that goods are lost or damaged. In the field of IIS, transportation means the transfer of symbols (or pieces of information) from one agent to another. Transportation can also occur internally at a specific agent, as data is transmitted from one employee to another before being sent out to an external party. Especially during the transportation of paper-based information, as it is frequently conducted today, information may get lost. Apart from that, the routing of the different dossiers (comprising all the documents currently relevant for one dedicated client) represents considerable effort, as people need to seek and contact the next destination. Our approach reduces the transportation of information considerably: first of all, paper-based information representation is fully avoided and replaced by
electronic information systems (the electronic service bus and the software adapters for connecting local applications to it). In this way, the transportation of information becomes standardized and also considerably quicker. Intermediate information storage is reduced as well: in the heterogeneous IT application landscape in place today, documents often reside in diverse folders, databases and outboxes until they are transferred (and possibly also transformed) for processing in other information systems on the path to the actual receiver. In this way, both the above-discussed risk of errors and the time required for “transportation” are increased, limiting the quality and productivity of the agents’ interaction. The electronic business medium implemented in the course of the HERA project will provide a uniform means for exchanging documents: events (encapsulating pieces of information) are created by local applications, transformed into the commonly comprehensible format, and then transmitted via the bus without any further intermediate stops. Besides this, the reduction of routing uncertainty also contributes to improved transportation conditions. Today, significant time is spent identifying the desired receiver of a certain piece of information among a number of agents. With the help of a comprehensive role model (featuring related rights and obligations) and formalized business rules which constrain the agents’ interactions, in combination with an unambiguous user registry and corresponding addressing schemes, routing uncertainty can be avoided: agents can only send specified pieces of information to determined receivers, which is, from a C-Component perspective, performed automatically by a routing service, thereby reducing the time required for information transportation.

Waiting: The idle time between single steps towards service accomplishment, leading to human resources or equipment that do not operate at full capacity, represents another kind of waste. In the IIS context, such waiting times also exist and are often induced by the unpredictability of the approaching workload through customer requests, as well as by the lack of formally defined non-functional properties of services. As opposed to the highly productive manufacturing sector, formal and quantifiable parameters defining service quality or the exact time required for completion rarely exist [14]. Our reference architecture, which is now being applied in the public administration context, provides all stakeholders involved in a collaborative tax computation with a high degree of transparency and thus allows them to schedule their activities accordingly. Again, different architectural elements contribute to the overall reduction of this kind of waste: as part of the O-Component, formalized and standardized service descriptions (regarding functional as well as non-functional parameters) will facilitate more accurate collaboration. The electronic business medium is able to automatically monitor and also enforce such parameters and can take immediate action (such as sending reminders or offering alternative services) in case a certain service does not work appropriately. This transparency of service characteristics, combined with services for monitoring and enforcement (provided as part of the C-Component), enhances the predictability and dependability of electronic interaction and thus reduces the “waste” of waiting.
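The routing mechanism just described (role model, business rules, registry and addressing scheme) can be sketched as follows; all roles, document types, rules and addresses here are hypothetical examples rather than the project’s actual rule set.

```python
# Illustrative routing service: the role model, a rule set and the user
# registry jointly determine the unique receiver of each information object.
# All roles, document types, rules and addresses are hypothetical examples.

REGISTRY = {"accountant": "agent-17", "auditor": "agent-42", "canton": "agent-03"}

# Business rules: (sender role, document type) -> permitted receiver role.
ROUTING_RULES = {
    ("accountant", "financial_statement"): "auditor",
    ("auditor", "audit_report"): "canton",
}

def route(sender_role: str, doc_type: str) -> str:
    """Return the address of the one permitted receiver, or fail loudly."""
    receiver_role = ROUTING_RULES.get((sender_role, doc_type))
    if receiver_role is None:
        raise PermissionError(f"{sender_role} may not send {doc_type}")
    return REGISTRY[receiver_role]

print(route("accountant", "financial_statement"))  # -> agent-42
```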
Waste of motion: In manufacturing, ineffective work procedures are one of the causes of manual effort that does not add value from a customer’s point of view. In the context of IIS, it is not physical motion but the superfluous processing of information (which can well be compared to motion), whether by human users or machines, that represents a significant source of waste. One example of the reduction of inefficient information processing is a service which orchestrates all the above-discussed operational services of the medium as soon as a message is to be transferred via the bus. The sequence of services for security (e.g. decryption), data validation, routing, publish-subscribe message dissemination, re-encryption and transfer to the software adapter of the message receiver is highly automated and thus very efficient. Human involvement and superfluous information processing can thus be reduced to a minimum.

Over-processing: Over-processing, induced for example by the over-fulfilment of customer requirements, may result in the useless consumption of resources such as time or inventory. As described above, the definition of clearly formalized services which inform their potential consumers about non-functional properties will reduce the risk of over-processing.
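The orchestrated service sequence described in the previous paragraph can be pictured as a fixed pipeline. In the following sketch every service is a stub and only the order of the steps reproduces the text; the receiver address is an invented placeholder.

```python
# Illustrative orchestration of the bus's operational services for a single
# message transfer. Every function body is a stub; only the sequence of
# steps reproduces the description above.

def decrypt(msg): return msg                              # security service
def validate(msg): assert "payload" in msg; return msg    # data validation
def route(msg): msg["receiver"] = "agent-42"; return msg  # routing service
def disseminate(msg): return [msg]                        # e.g. publish-subscribe
def encrypt(msg): return msg                              # re-encryption
def deliver(msg): print(f"handed to adapter of {msg['receiver']}")

def transfer(message: dict) -> None:
    """Run the automated service sequence without human involvement."""
    for copy in disseminate(route(validate(decrypt(message)))):
        deliver(encrypt(copy))

transfer({"payload": "tax dossier"})
```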
• Waste of defects: electronic, standardized encapsulation of services, interoperable data formats and uniform terminologies will reduce information processing errors.
• Overproduction: process automation, high transparency and adherence to clear rules make certain process steps redundant which are today done ahead of demand.
• Excessive inventory: services that allow for the electronic download and filling of tax computation forms reduce existing, paper-based inventory and allow for pre-performing certain steps.
• Transportation: information transportation is automated, controlled centrally and performed according to strict, previously defined routes.
• Waiting: formalized service parameters, combined with monitoring and enforcement services provided by the medium, will improve the transparency of the process-oriented organization.
• Waste of motion: superfluous processing of information can be reduced through standardization and partial automation of service orchestrations.
• Over-processing: the formalization and standardization of non-functional as well as functional service properties will facilitate the reduction of over-processing.

Fig. 3. The reference architecture facilitates the reduction of seven kinds of waste
5 Conclusion

The growing relevance of information intensive services for most developed countries’ economies on the one hand, and their weaknesses with respect to key performance indicators on the other, induce an immediate need to identify and apply means to facilitate a comprehensive wave of industrialization [15, 16, 17, 18, 19]. Challenges such as high variability or unquantifiability do not represent inherent characteristics, but major hurdles which need to be taken down on the path
to a global industrialization. To identify proper means for the industrialization of IIS, we first analyzed methods that have been successfully applied in the traditional product manufacturing context; in particular, we elaborated on the key principles and the central impacts of the Lean Manufacturing concept. Diverse scholars have already presented interesting approaches to the challenges of increasing the productivity, flexibility and quality of services in general, and of IIS in particular. In [20], the authors present an elaborate framework for measuring service performance which supports the enhancement of service productivity. In [21], process analysis, a method typically applied in traditional operations management, is transferred to the field of information intensive services; in this way, the authors argue, business processes that are performed as parts of a service can be analyzed and reorganized in a systematic and detailed fashion. In [14], Chopra and Lariviere introduce the notion of “service inventory” and elaborate on its significant role for the level of service performance. According to the authors, basically three factors can be considered general determinants of service performance: the placement of the push-pull boundary (the size of inventory, i.e. work-in-progress or unfinished, stored information which is created prior to a customer’s request), the level and composition of resources (i.e. the people and the equipment that the provider utilizes to perform a service), and finally the service access policies (used to govern how customers are able to make use of a service). In [22], Levitt proposed a comprehensive production-line approach “as one by which services could significantly improve their performance on both cost, largely through improved efficiency, as well as quality” [23, p. 209]. He believed that service businesses which adopt the (mass) production-line approach could gain a competitive advantage by leveraging a low-cost leadership strategy. Levitt [22] regarded the following four key principles as crucial for the realization of his idea. First, employees should perform limited discretionary action to drive “standardization and quality, i.e. consistency in meeting specifications” [23, p. 209]. Second, by following the principle of division of labour, processes are broken down into groups of tasks which allow for the specialization of skills: “This narrow division of labour made possible limited spans of control and close supervision. It also minimizes both worker skill requirements and training time.” [23, p. 209] Third, the systematic “substitution of equipment for labour aligned with well conceived use and positioning of technology led to efficient, high-volume production at acceptable quality levels” [23, p. 209]. Last, standardization “allows predictability, preplanning, and easier process control which, in turn, provides uniformity in service quality” [23, p. 209]. In this work, we follow a rather operational approach and propose a reference architecture for the adequate organization and implementation of electronic media for the provision and consumption of services across company boundaries.
The architecture comprises an organizational component (role models, associated rules, registries and information objects), a logical component (enabling a common understanding between interacting agents rather than only allowing for the exchange of mere data) as well as a physical component (the actual services which implement the “ideas” of the organizational component). We applied this reference architecture to the case of governmental administration in Switzerland and showed
that thereby defects, overproduction and excessive amounts of inventory, unnecessary transportation, waste of time, superfluous effort of employees as well as over-processing can be significantly reduced (see Fig. 3). Future work will be devoted to applying and investigating the reference architecture and its benefits in other scenarios in order to gain more insights into optimal organization and implementation of cross-organizational information intensive services.
References

[1] Malone, T. (2001). The Future of E-Business. Sloan Management Review, 43 (1), 104.
[2] Porter, M. (2001). Strategy and the Internet. Harvard Business Review, 79 (3), 63-78.
[3] Hill, T. P. (1977). On goods and services. Review of Income and Wealth, 23, 315-338.
[4] Apte, U., Goh, C. (2004). Applying Lean Manufacturing Principles to Information-Intensive Services. International Journal of Service Technology and Management, 5 (5/6), 488-506.
[5] Fitzsimmons, J. A., Fitzsimmons, M. J. (2003). Service Management. New York: McGraw-Hill.
[6] Smith, A. (1776). An Inquiry into the Nature and Causes of the Wealth of Nations. UK.
[7] Coase, R. H. (1937). The Nature of the Firm. Economica, 4 (16), 386-405.
[8] Krafcik, J. F. (1988). Triumph of the lean production system. Sloan Management Review, 30 (1), 41-52.
[9] Taylor, F. W. (1911). The Principles of Scientific Management. New York: Harper Bros.
[10] Schmid, B. F., Lechner, U., Klose, M., Schubert, P. (1999). Ein Referenzmodell für Gemeinschaften und Medien. In: Englien, M., Homann, J. (eds.): Virtuelle Organisation und neue Medien. Workshop GeNeMe 99, Gemeinschaften in neuen Medien, Dresden. Lohmar: Eul Verlag, ISBN 3-89012-710-X, 125-150.
[11] Müller, W. (2007). Event Bus Schweiz. Konzept und Architektur, Version 1.5. Eidgenössisches Finanzdepartement EFD, Informatikstrategieorgan Bund (ISB).
[12] Schmid, B. F., Schroth, C. (2008). Organizing as Programming: A Reference Model for Cross-Organizational Collaboration. Proceedings of the 9th IBIMA Conference on Information Management in Modern Organizations, Marrakech, Morocco.
[13] HERA project, available online at: http://www.hera-project.ch, accessed in 2007.
[14] Chopra, S., Lariviere, M. A. (2005). Managing Service Inventory to Improve Performance. MIT Sloan Management Review, 47 (1), 56-63.
[15] Schmid, B. F. Elektronische Märkte – Merkmale, Organisation und Potentiale, available online at: http://www.netacademy.org, accessed in 2007.
[16] McAfee, A. (2004). Will Web Services Really Transform Collaboration? Sloan Management Review, 46 (2), 78-84.
[17] Karmarkar, U. S. (2004). Will you survive the services revolution? Harvard Business Review, 82 (6), 100-110.
[18] Malone, T. W. (2004). The Future of Work: How the New Order of Business Will Shape Your Organization, Your Management Style, and Your Life. Boston, MA: Harvard Business School Press.
[19] Schroth, C. (2007). Web 2.0 and SOA: Converging Concepts Enabling Seamless Cross-Organizational Collaboration. Proceedings of IEEE CEC'07/EEE'07, Tokyo, Japan.
[20] Harmon, E., Hensel, S., Lukes, T. (2006). Measuring performance in services. The McKinsey Quarterly, No. 1.
[21] Karmarkar, U. S., Apte, U. M. (2007). Operations management in the information economy: Information products, processes, and chains. Journal of Operations Management, 25, 438-453.
[22] Levitt, T. (1972). Production-line approach to service. Harvard Business Review, 50 (5), 20-31.
[23] Bowen, D. E., Youngdahl, W. E. (1998). “Lean” service: in defense of a production-line approach. International Journal of Service Industry Management, 9 (3), 207-225.
SME Maturity, Requirement for Interoperability Gorka Benguria, Igor Santos European Software Institute, Parque tecnológico de Zamudio # 204, E-48170 Zamudio, Spain [email protected], [email protected]
Abstract. Nowadays and in the future, the capacity to use the latest means of interaction has become a critical factor, not only for increasing earnings but also for staying in the market. The objective of this paper is to present a strategy for becoming and staying interoperable in SME (Small and Medium Enterprise) environments. SMEs tend to be flexible, adaptable and innovative in their products, but they have difficulties adopting ICT in disciplines not directly related to their products. The strategy relies on three pillars: an Improvement Cycle to guide the establishment and maintenance of the interoperable status; an Interoperability Maturity Model as a repository of good practices for being interoperable; and an Assessment Method to measure the level of interoperability and to be able to establish feasible goals.

Keywords: Strategy and management aspects of interoperability, Managing challenges and solutions of interoperability, Business models for interoperable products and services
1 Introduction

The development and delivery of products and services implies the establishment of relationships with external parties such as providers, partners, administrations and clients. This has not changed since the appearance of the ancient Sumerian civilization, and even before (some authors [1] date the history of commerce back some 150,000 years). What has changed considerably is the way in which those relationships are carried out, and their complexity. In most cases, the motivation for this evolution lies in the business need to increase earnings and stay competitive. The evolution of the way in which the different roles interact to complete the delivery of a product or service introduces further prerequisites that the interacting roles must meet in order to take part in the information exchange. It is not that long ago that it was only required to be able to speak and perform some basic
calculations to engage in any kind of trade. However, the introduction of written documents, the more recent introduction of the telephone, and finally the advent of information and communication technologies (ICT) have brought about a large set of prerequisites for some trading models (e.g. eCommerce, eBanking). This set of prerequisites has made these relationships extremely complex but, on the other hand, has increased effectiveness and profitability dramatically. The prerequisites may include the acquisition of hardware and software, the establishment of new procedures, the modification of existing ones, and even changes in the organisational culture. Nowadays and in the future, the capacity to use the latest means of interaction has become a critical factor, not only for increasing earnings but also for staying in the market. This is true for virtually any kind of organisation, from large to micro enterprises. It is important to underline that the adoption of new means of interaction is not a one-shot effort: new means of interaction will appear in the future, and organizations should be able to integrate them in line with their business objectives.

Interoperability is defined as “the ability of two or more systems or components to exchange information and to use the information that has been exchanged” [2], but how can we support SMEs in interacting better with their partners, clients, providers, administrations, and so on? The objective of this paper is to present a strategy for becoming and staying interoperable in SME (Small and Medium Enterprise) environments. In this paper, the definition of an SME given by the European Commission is taken as a reference, i.e. an enterprise which employs fewer than 250 persons and which has either an annual turnover not exceeding 50 million euro or an annual balance sheet total not exceeding 43 million euro [3][4]. SMEs are a key element of the European economy [5], as they employ half of the workforce and generate half of the value added. The European Commission has stressed their importance several times, as they are the focus of the second action of its strategy in line with the Lisbon strategy [6]. Besides, SMEs have some characteristics that require specific strategies, different from those applicable to large companies: they tend to be flexible, adaptable and innovative in their products, but they have difficulties adopting ICT [7] in disciplines not directly related to their products (e.g. how they sell them, how they offer them).

The strategy for becoming and staying interoperable relies on three pillars, as shown in the next figure (Fig. 1):

• An Improvement Cycle to guide the establishment and the maintenance of the interoperable status. SMEs are flexible by definition, but even if they are used to change, they do not perform that evolution in a managed way. Becoming and staying interoperable is an ambitious goal, and SMEs do not usually have the budget to address such huge improvement initiatives in one step; it should therefore be approached progressively, with small and result-oriented iterations.
• An Interoperability Maturity Model as a repository of good practices for being interoperable: practices that can be adopted by organisations in the continuous effort to be interoperable. The maturity model establishes a roadmap capable of guiding the SME from chaos to the higher stages of interoperability. It takes into account the characteristics of SMEs and prioritises outdoor interoperability over indoor interoperability.
• An Assessment Method to measure the level of interoperability of an organisation, in order to know the current state and to be able to establish feasible goals. This method takes the SME constraints into account and establishes mechanisms to reduce the time and effort required to complete the assessment. Instead of performing one huge assessment, it focuses the assessment on different stages of interoperability; the focus can be determined through a short, questionnaire-based self-evaluation.
Fig. 1. Elements of the Strategy
The paper is structured as follows: the next three sections present the three pillars of the strategy; then, the case study that will be used to validate the strategy is presented; finally, the paper concludes with a description of the expected results and benefits, the related work, and the preliminary conclusions from the strategy definition.
2 Improvement Cycle

Increasing an organisation's capability of being interoperable is not a question of jumping directly from the bottom level to the top level. Although often perceived as such, it is imperative that organisational personnel refrain from viewing process improvement as a one-time investment. The marketplace is constantly changing its needs, which means that organisational requirements must change to meet the needs of the market [8]. Improvement cycles have long been used to support organisations in constantly adapting their way of working whenever business needs change, and they help organisations perform this continuous evolution in a systematic and controlled way.
Becoming and staying interoperable is an ambitious goal, but SMEs do not usually have the budget to address such huge improvement initiatives; it should therefore be approached progressively, with small and result-oriented iterations. Improvement cycles describe the set of phases to be carried out by organizations willing to enhance their capabilities. Besides, they usually provide a set of good practices for each phase that increase the probability of succeeding with the improvement initiative. Over time, several improvement cycles have appeared in different domains:

• PDCA (Plan-Do-Check-Act): In 1939, the statistician Walter Shewhart published a continuous cycle (the Shewhart Cycle) [9] for following improvement efforts at any stage. W. Edwards Deming popularised this cycle by applying it to the manufacturing industry in Japan in the 1950s. The Japanese referred to the approach as the Deming Cycle, and nowadays it is often called the Plan-Do-Check-Act (PDCA) cycle [10].
• IDEAL Model (Initiating, Diagnosing, Establishing, Acting and Learning) [11]: The Software Engineering Institute (SEI) developed the IDEAL model for managing CMM®-based (Capability Maturity Model) software process improvement programmes. The IDEAL model has gained widespread acceptance in the CMM® community based on its close link to the Capability Maturity Model for Software.
• RADAR (Results, Approach, Deploy, Assessment and Review): This is the improvement cycle of the EFQM excellence model. In the same way that IDEAL is closely related to CMM®, this improvement cycle is closely related to the EFQM excellence model [12].
Taking the Shewhart cycle as a basis and adding inputs from other improvement cycles, such as IDEAL and RADAR, a customised improvement cycle was developed as the strategy for becoming and staying interoperable in SME environments. This cycle takes into account the features of SMEs (specialisation, constrained resources, etc.) and the interoperability issues (security, changing standards, etc.) in the identification and definition of the good practices.

• Identify Objectives: In line with the business needs and priorities, this phase identifies the interoperability objectives that will be addressed in the next improvement initiative. SMEs do not have the budget or patience to address long-term improvement projects, so these objectives must be valuable, concrete and achievable in the short term. Of course, these small initiatives should be managed and aligned in a common direction.
• Measure Current State: It is necessary to clearly understand the current situation with respect to the capability of being interoperable, both to establish coherent improvement objectives and to check the advance in the interoperability practices once the improvement initiative has finished. For SMEs, this measurement should be fast and focussed on specific issues, avoiding extensive and expensive assessments of all organisational aspects.
• Plan: Once the current state and the interoperability objectives are known, it is possible to define and schedule the set of activities that will bring the organisation from the current to the objective situation. In SMEs, the most probable situation is not to have dedicated resources: shared resources will have to be used, i.e. people who should continue performing their day-to-day activity while participating in the initiative. In the same line, it is useful to plan activities that keep attention on the initiative, such as prototype evaluations and presentations, to avoid a loss of interest due to day-to-day pressures.
• Act: Follow the defined plan, monitor the performance and re-plan when needed. In SME environments, it is crucial to keep the motivation of the people working on the initiative and the interest of the people who could potentially be affected.
• Check Results: Once the improvement activities have been carried out, the next step is to confirm that those activities have achieved the expected results.
• Transfer Results: At the end of each improvement initiative, it is advisable to stop for a while and gather the lessons learnt from the improvement initiative. Besides, based on the analysis of the experience, in some special cases it may be decided to institutionalise the improvement to other areas of the organisation.
Usually, in large organisations, the most critical aspect of an improvement initiative is obtaining management commitment. Fortunately, in SMEs the improvement initiative usually starts from the management, so that commitment is guaranteed from the beginning. In those special cases in which this condition is not met, it is important to gain and maintain that commitment; the key is to ensure the alignment of the improvement initiative with the business objectives. On the other side, there is the commitment of the workers, which is also difficult to gain and maintain. Here, the key is to ensure the alignment of the improvement initiative with the personal and professional objectives of the staff affected by it.
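As a compact summary, the customised cycle described above can be read as a repeated sequence of its six phases; the following sketch merely encodes that reading, and the iteration logic (several small, result-oriented passes instead of one big initiative) is an illustrative assumption.

```python
# The six phases of the customised improvement cycle, run as small,
# result-oriented iterations. Phase names come from the text; the loop
# structure is an illustrative assumption.

PHASES = [
    "Identify Objectives",
    "Measure Current State",
    "Plan",
    "Act",
    "Check Results",
    "Transfer Results",
]

def run_improvement_initiative(execute_phase, iterations=3):
    """Repeat the full cycle a few times instead of one big initiative."""
    for i in range(1, iterations + 1):
        for phase in PHASES:
            execute_phase(i, phase)

run_improvement_initiative(lambda i, p: print(f"iteration {i}: {p}"))
```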
3 Maturity Model

The term maturity model was popularized by the SEI when it developed the CMM® in 1986. These models are aimed at evaluating and measuring processes within organizations and at identifying best practices that help them increase the maturity of their processes. Several models have been developed in different disciplines and focusing on different levels of the enterprise: the Service-Oriented Architecture Maturity Model [13], the European Interoperability Framework [14], the Extended Enterprise Architecture Maturity Model [15], the Levels of Information Systems Interoperability [16] and the Organisational Interoperability Maturity Model [17] are the main models surveyed for this work. Unfortunately, existing maturity models do not address the issue of interoperability directly or, if they do, they focus on certain levels (e.g. organisational, systems) and are not directly applicable to SMEs. The maturity model introduced here establishes a roadmap for the adoption of better interoperability practices. The model takes into account that an SME could be in a chaotic state where neither processes nor roles are defined, and it defines a staged approach that guides the SME from the lower, chaotic levels to the interoperable state. It depicts six interoperability stages of the organization and four process areas subject to improvement. The next figure (Fig. 2) shows the structure of this model. The process areas are common to each interoperability level. At each level, different objectives are set within each process area. Each objective is achieved through the fulfilment of certain practices that in the end determine the interoperability level. These practices are in turn defined by sub-practices and work products.
Fig. 2. Structure of the Maturity Model
The process areas have been defined according to the main dimensions of the enterprise defined in [18]:

• Business processes: specification, execution, improvement and alignment of business strategy and processes.
• Organization: identification, specification, enactment and improvement of all organizational structures.
• Products and services: specification and design of the organisation's products and services.
• Systems and technology: identification, specification, design, construction or acquisition, operation, maintenance and improvement of systems.
As shown in the previous figure (Fig. 2), the following maturity levels are proposed:

• Initial: no processes have been defined; work is performed from memory.
• Performed: informal definitions of strategies and processes on a per-project/department basis.
• Modelled: definition of the collaboration and e-business strategy on a per-project/department basis and the consequent definition of most processes.
• Integrated: definition of the strategy concerning interoperability and the institutionalization of formal and complete processes.
• Interoperable: processes have well-defined measures, and data from monitoring is analyzed and used for further improvement iterations.
The process areas contain the good practices for being interoperable, while the interoperability levels propose an improvement roadmap. The roadmap is necessary because the number of good practices contained in the model is too large for them all to be adopted at the same time. Moreover, as already mentioned, attempting to do so is not advisable for an SME, as it would take a lot of time, halt business activity, consume a great amount of resources and have a high probability of failure.
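The staged structure described above lends itself to a simple encoding: the process areas are constant across levels, while each (level, area) pair carries its own objectives. The following sketch uses the levels and areas named in the text, but the example objectives are invented placeholders, not the model's actual content.

```python
# Illustrative encoding of the model's structure: the four process areas
# are common to all levels, while each (level, area) pair carries its own
# objectives. The example objectives are invented placeholders.

PROCESS_AREAS = ["Business processes", "Organization",
                 "Products and services", "Systems and technology"]

MATURITY_LEVELS = ["Initial", "Performed", "Modelled",
                   "Integrated", "Interoperable"]

# (level, process area) -> objectives, each fulfilled by certain practices.
OBJECTIVES = {
    ("Performed", "Business processes"):
        ["informal per-project process definitions"],
    ("Integrated", "Organization"):
        ["institutionalized interoperability roles"],
}

def objectives_for(level: str, area: str) -> list:
    """Look up the objectives set for one process area at one level."""
    return OBJECTIVES.get((level, area), [])

print(objectives_for("Performed", "Business processes"))
```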
4 Assessment Method

This section describes the method that will be used to assess the degree to which an organisation has adopted the good practices contained in the interoperability maturity model introduced in this paper. The assessment method defines the activities, roles and work products to be used during the assessment in order to produce a profile representing the interoperability maturity of the organisation. Besides, the assessment produces a set of strengths and improvement actions that can be used to identify improvement opportunities that foster the organisation's capacity of being interoperable. When the assessment method is used inside the improvement cycle, it provides a sound basis for the initial evaluation and the final confirmation of the results. The method takes the SME constraints into account and establishes mechanisms to reduce the time and effort required to complete the assessment: instead of performing one huge assessment, it focuses the assessment on different stages of interoperability, and the focus can be determined through a short, questionnaire-based self-assessment.

There exist other formal assessment methods that have been used as a basis for the definition of this method, such as those used with CMMI® [19][20][21] and SPICE [23]. CMMI® and SPICE are software maturity models, and they provide formal assessment methods to support the evaluation of a software-intensive organisation against the model practices; a similar approach will be used for the evaluation of the interoperability maturity model. There are other assessment approaches usually used in business improvement initiatives, such as the SWOT analysis (Strengths, Weaknesses, Opportunities and Threats) [24], but these approaches are not specifically designed for working with a reference model. Therefore, they are not useful for benchmarking the improvements achieved by an initiative against an exemplar model. The activities of the method are structured in four main phases:

• Assessment Preparation: The purpose of the assessment preparation is to ensure that all aspects of the assessment planning have been carried out properly. Taking SME constraints into account, this phase includes activities to reduce the scope of the assessment in order to be able to complete it in less than a week. This is performed through a self-assessment that helps to obtain a fast image of the strengths and weaknesses of the SME.
• Assessment Execution: The purpose is to determine, in a systematic way, the capability of the processes under evaluation.
• Assessment Reporting: The purpose is to ensure that the results of the assessment have been correctly communicated and archived.
• Assessment Follow-up: The purpose of the assessment follow-up is to learn from the experiences gathered during the evaluation in order to improve future evaluations.
Of these four main phases, the first three are mandatory and should be performed in any evaluation. The last phase is optional but recommended: it is possible to perform an assessment without it, but recording and archiving the lessons learnt will improve the quality of subsequent assessments.
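The questionnaire-based self-assessment used during preparation to narrow the scope can be sketched as follows; the questions, the per-area scoring and the focus rule are all invented for illustration and do not reproduce the method's actual material.

```python
# Illustrative questionnaire-based self-assessment used in the preparation
# phase to narrow the assessment scope. Questions, scoring and the focus
# rule are invented for the example.

QUESTIONS = {
    "Business processes": ["Are collaboration processes documented?"],
    "Organization": ["Are interoperability roles assigned?"],
    "Products and services": ["Are product data formats standardised?"],
    "Systems and technology": ["Can systems exchange orders electronically?"],
}

def self_assess(answers: dict) -> list:
    """Return the process areas with the lowest share of 'yes' answers,
    i.e. where a focused, less-than-a-week assessment should look first."""
    scores = {
        area: sum(answers.get(q, False) for q in qs) / len(qs)
        for area, qs in QUESTIONS.items()
    }
    weakest = min(scores.values())
    return [area for area, score in scores.items() if score == weakest]

print(self_assess({"Are interoperability roles assigned?": True}))
```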
5 Case Study

Nowadays, one of the trends in many markets is the customized provisioning of end products to the final customer: the perception that the purchased product perfectly meets their needs and desires is an essential value for customers. For example, in the automotive industry, OEMs (Original Equipment Manufacturers) can produce up to 100,000 distinct cars, and there is a choice of 10,000 different combinations of colour, engine, equipment and other aspects for each model [22]. Similar cases apply to industries like furniture, textile, etc. Besides, organisations continue to have the constant need to increase earnings and stay competitive. The implications of this market trend are the increasing complexity of products and of the relationships among providers. In order to keep pace with this demanding environment, organizations are forced to increase efficiency along the supply chain. The main objective is to minimize costs through reduced stocks, achieved through the integration of the different partners in the supply chain, from the raw material suppliers to the final vendors. Information and communication technologies focused on B2B (Business to Business) processes can be really helpful at this stage. Large firms adopted standards for data interchange like EDI (Electronic Data Interchange) years ago and now, with the widespread use of web technologies, they are starting to adopt web-enabled B2B platforms. In this scenario, the smaller companies providing materials to these large enterprises are obliged to conform to the newly established rules if they do not want to be ruled out of the market. The focus of this case study is the middle-tier SME that provides assembled materials (doors) to OEMs or retailers (furniture development organisations) (Fig. 3). This SME in turn also interacts with upper-tier suppliers, which may provide it with raw materials (wood) or components (handles). All the companies in this chain comprise the extended supply chain of industries like automotive, furniture, textile and electronics.
Fig. 3. Mid-tier SME supplier
The final aim of the case study is to understand the interoperability barriers of the mid-tier supplier with its upper-tier suppliers and the final customer, and to provide a solution that overcomes the identified problems. For this purpose, the following steps will be followed:

• Identification of the business objectives of the SME, in this case the door development company: those related to the improvement of its relationship with its clients, the furniture development companies.
• Identification of the different types of interoperability barriers at different enterprise levels, e.g. mechanisms and channels for issuing RFQs (requests for quotations), and mechanisms for monitoring the state of issued orders.
• Definition of an improvement plan based on the desired situation.
• Deployment of the necessary ICT infrastructure for the fulfilment of the plan.
• Verification and validation of the results.
• Transfer of the results through dissemination activities.
This case study is primarily focused on the testing of the interoperability maturity model and the assessment method. Therefore, only one improvement cycle will be performed in the SME. The testing of the improvement cycle is a much longer activity that could not be performed in the context of the project.
6 Results and Business Benefits

The expected result of applying the strategy for becoming and staying interoperable in SME environments is the modification of the organisational culture towards a more interoperability-oriented attitude: the organisation should proactively find, evaluate and integrate new ways of interacting with other organisations in order to support its business objectives. Besides, the strategy acknowledges the changing nature of the business context and provides a continuous improvement cycle that supports the organisation in this constant activity. The usage of an interoperability maturity model, and the performance of assessments against that model, will allow the organisation to benchmark its progress over time towards the ideal situation represented by the interoperability maturity model.
7 Related Work

The approach introduced in this paper is in line with many existing quality models. These quality models are composed of a set of good practices that characterize leading organizations in a certain domain. Initiatives like the MBNQA (Malcolm Baldrige National Quality Award) [25], EFQM (European Foundation for Quality Management) [12], ISO 9000 [26] and Six Sigma [27] are focused on business process quality. On the other hand, initiatives like ITIL (Information Technology Infrastructure Library) are focused on the delivery of good-quality IT services. Other initiatives like CMMI (Capability Maturity Model Integration) [28] or SPICE [29], and the other examples mentioned in previous sections of this document, are models focused on improving certain areas of organizations, and their focus is not always on interoperability. The strategy introduced here aims to serve as a holistic approach to interoperability in SMEs. It introduces interoperability practices embracing all the areas within an organization and structures them in a staged approach to support all kinds of SMEs, from those that remain in chaos to those that have already adopted some of the best practices that support interoperability. Besides, it provides practices that take the characteristics of SMEs into account when applying the model to make them more interoperable.
8 Conclusion

SMEs are a key element of the European community: they employ the majority of the workforce and produce half of the gross value added. However, they seem to have problems adopting ICTs. ICTs are evolving very fast, and they are changing the way business is done. Moreover, today there are certain kinds of trade that cannot be conducted without ICT (e.g. eCommerce, eBanking). The capacity to use the latest means of interaction has become a critical factor, not only for increasing earnings but also for staying in the market. This is true for virtually any kind of organisation, from large enterprises to micro enterprises. It would be an error to think that buying the proper computers and software is enough to support these new ways of doing business. New ways of doing business usually require changes in the way the organisation works; in fact, they can affect the processes, the people and the infrastructures of the organisation. Therefore, an improvement effort for increasing the interoperability of an organisation should be carefully evaluated, planned and executed. It is important to underline that this is not a one-shot effort: new means of interaction will appear in the future, and organizations should be able to integrate them in line with their business objectives.
This paper has presented a strategy for becoming and staying interoperable in SME environments. The motivation for the development of this strategy was the lack of a suitable strategy for introducing interoperability practices into an SME in a continuous way. The strategy rests on three pillars that have been shown to be successful in other domains:

• an Improvement Cycle to guide the improvement activities;
• a Maturity Model as a source of good practices;
• an Assessment Method to establish a basis for comparison.

These three pillars have been adapted to the introduction of interoperability practices in SMEs, taking into account the interoperability issues and the SMEs' advantages and difficulties. In order to verify the adequacy of these instruments as building blocks of the strategy for becoming and staying interoperable in SME environments, a scenario for the use case has been identified. It has been decided to centre the validation of the strategy on an SME taking part in a supply chain, since supply chain management is nowadays one of the most demanding domains with respect to interoperability-related technologies and approaches.
Acknowledgement

This paper summarises some early results from the ServiciosB2B project. This work was partially funded by the Ministry of Industry, Commerce and Tourism of the Spanish Government through the Programme for the Promotion of Technical Research (PROFIT). We would like to thank our partners for their great feedback and collaboration during the execution of the project.
References

[1] Watson, P. (2005). Ideas: A History of Thought and Invention, from Fire to Freud. HarperCollins. ISBN 0-06-621064-X.
[2] IEEE Standard Computer Dictionary: A Compilation of IEEE Standard Computer Glossaries. Institute of Electrical and Electronics Engineers, New York, NY, 1990.
[3] European Commission. The new SME definition: User guide and model declaration. http://ec.europa.eu/enterprise/enterprise_policy/sme_definition/sme_user_guide.pdf
[4] European Commission. Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (2003/361/EC).
[5] Schmiemann, M. (2006). SMEs and entrepreneurship in the EU. Statistics in Focus, European Commission, Industry, Trade and Services, 24/2006.
[6] European Commission (2006). Time to move up a gear: The new partnership for growth and jobs. Communication from the Commission to the spring European Council.
[7] European Commission (2005). i2010 – A European Information Society for growth and employment. Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions.
[8] Reo, D. A., Quintano, N., Buglione, L. (1999). Measuring software process improvement: there's more to it than just measuring processes. ESI, FESMA 99, September 1999.
[9] Shewhart, W. A. (1939). Statistical Method from the Viewpoint of Quality Control. New York: Dover. ISBN 0-486-65232-7.
[10] Deming, W. E. (1986). Out of the Crisis. MIT Center for Advanced Engineering Study, Cambridge, MA. ISBN 0262541157.
[11] McFeeley, B. (1996). IDEAL: A User's Guide for Software Process Improvement. CMU/SEI-96-HB-001, February 1996.
[12] EFQM (1999). The EFQM Excellence Model – Improved Model. European Foundation for Quality Management.
[13] SONIC. SOA Maturity Model. http://www.sonicsoftware.com/soamm
[14] European Commission (2004). European Interoperability Framework for Pan-European e-Government Services.
[15] Schekkerman, J. (2006). Extended Enterprise Architecture Maturity Model, Version 2.0. http://www.enterprise-architecture.info/Images/E2AF/Extended%20Enterprise%20Architecture%20Maturity%20Model%20Guide%20v2.pdf
[16] C4ISR Interoperability Working Group, Department of Defense (1998). Levels of Information Systems Interoperability (LISI). Washington, DC.
[17] Clark, T., Jones, R. (1999). Organisational Interoperability Maturity Model for C2. 1999 Command and Control Research and Technology Symposium, June 1999.
[18] ATHENA Integrated Project (2005). Framework for the Establishment and Management Methodology. Deliverable D.A1.4.
[19] SCAMPI Upgrade Team. Standard CMMI® Appraisal Method for Process Improvement (SCAMPI), Version 1.2. CMU/SEI-2006-HB-002.
[20] Masters, S., Bothwell, C. (1995). CMM Appraisal Framework (CAF), Version 1.0. CMU/SEI-95-TR-001, ESC-TR-95-001, February 1995.
[21] Dunaway, D. K., Masters, S. (2001). CMM®-Based Appraisal for Internal Process Improvement (CBA IPI), Version 1.2. CMU/SEI-2001-TR-033, ESC-TR-2001-033, November 2001.
[22] Sachon, M., Albiñana, D. (2004). Sector español del automóvil: ¿Preparado para el e-SCM? EBCenter PwC-IESE. Online at www.ebcenter.org
[23] ISO/IEC TR 15504-3:1998(E). Information technology – Software process assessment – Performing an assessment.
[24] Learned, E., Christensen, C., Andrews, K., Guth, W. (1969). Business Policy: Text and Cases. Homewood: R. Irwin.
[25] NIST. Baldrige National Quality Program: Criteria for Performance Excellence. http://www.quality.nist.gov/Business_Criteria.htm
[26] ISO/IEC (2005). Quality management systems – Fundamentals and vocabulary. ISO 9000:2005.
[27] Pande, P. S., Neuman, R. P., Cavanagh, R. R. (2000). The Six Sigma Way: How GE, Motorola, and Other Top Companies are Honing Their Performance. McGraw-Hill, April 2000. ISBN 0071358064.
[28] CMU/SEI-2006-TR-008 (2006). CMMI® for Development, Version 1.2 (CMMI-DEV, V1.2). ESC-TR-2006-008.
[29] ISO/IEC TR 15504-2:1998(E). Information technology – Software process assessment – Reference model.
Information Security Problems and Needs in Healthcare – A Case Study of Norway and Finland vs Sweden
Rose-Mharie Åhlfeldt and Eva Söderström
School of Humanities and Informatics, University of Skövde, P.O. Box 408, Skövde, Sweden
rose-mharie.ahlfeldt;[email protected]
Abstract. In healthcare, the right information at the right time is a necessity in order to provide the best possible care for a patient. Patient information must also be protected from unauthorized access in order to protect patient privacy. It is also common for patients to visit more than one healthcare provider, which implies a need for cross-border healthcare and a focus on the patient process. Countries work differently with these issues. This paper focuses on three Nordic countries, Norway, Sweden and Finland, and their information security problems and needs in healthcare. Data was collected via case studies, and the results were compared to show both similarities and differences between the countries. Similarities include the too wide availability of patient information, an obvious need for risk analysis, and a tendency to focus more on patient safety than on patient privacy. Differences include the extent to which patients are involved in their own care and the approach to exchanging patient information.
Keywords: Information security, healthcare informatics, patient safety, patient privacy
1 Introduction
Information Technology (IT) in healthcare has the potential to increase the welfare of citizens as well as improve the efficiency of healthcare organizations. The demands on the healthcare sector in the Nordic countries come from an aging population, a need for seamless service processes, an increasing demand for care in patients' homes, a demand for more information and participation, etc. [1]. Even though the Nordic countries are at the forefront with regard to the use of IT and the Internet in society as a whole, the implementation of IT in healthcare has been slow [1]. When IT solutions are applied in healthcare, especially in a distributed fashion, information security is a critical issue [2], [3].
The aim of this paper is to identify similarities and differences concerning the problems and needs of information security in a distributed healthcare domain. The research method used was two minor case studies in Norway and Finland. The results were compared with a previous, similar study in Sweden [4], also incorporating an existing information security model. The contribution is a holistic view of information security, which is necessary when preparing for and alleviating problems and needs in the future. In particular, the holistic view is a necessary contribution when patient information is transferred between different healthcare providers. The paper is structured as follows: Information security and the Info Sec model are presented in chapter 2, while chapter 3 introduces the state of the art of information security in healthcare. Chapter 4 describes the research approach, and chapter 5 presents the results of the study. Chapter 6 discusses and compares the results with the Swedish study. A summarizing discussion is given in chapter 7.
2 Information Security Model
Information security concerns security issues in all kinds of information processing. According to SIS [5], information security is defined as the protection of information assets, and its aim is to maintain the confidentiality, integrity, availability and accountability of information. In order to achieve these four characteristics, both technical and administrative security measures are required. Administrative security concerns the management of information security: strategies, policies, risk assessments, etc. It includes the structured planning and implementation of security work, and concerns the organizational level and thus the organization as a whole. Technical security concerns the measures to be taken in order to achieve the overall requirements. It is subdivided into physical security (physical protection of information) and IT security (security for information in technical information systems). IT security is further divided into computer security (protection of hardware and its contents) and communication security (protection of networks and media used for communication). A visual overview of the characteristics and security measures is presented in an information security model (Figure 1). The model combines the mentioned concepts.
Fig. 1. Information Security Model
With information security in the middle, the four characteristics are at the top, and the security measures are placed at the bottom. The latter are taken directly from the SIS conceptual classification [5].
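To make the classification used later in this paper concrete, the following minimal Python sketch (our illustration, not part of the original study) models the SIS-based taxonomy as data types and shows how one coded interview answer could be filed into it. All identifiers and the example finding are assumptions chosen for illustration.

    # Minimal sketch of the Info Sec model as data types (illustrative only).
    from dataclasses import dataclass
    from enum import Enum

    class Characteristic(Enum):
        CONFIDENTIALITY = "confidentiality"
        INTEGRITY = "integrity"
        AVAILABILITY = "availability"
        ACCOUNTABILITY = "accountability"

    class SecurityMeasure(Enum):
        ADMINISTRATIVE = "administrative security"
        PHYSICAL = "physical security"            # subarea of technical security
        COMPUTER = "computer security"            # subarea of IT security
        COMMUNICATION = "communication security"  # subarea of IT security

    @dataclass
    class Finding:
        """One coded interview answer, e.g. 'B1d' in the study's notation."""
        respondent: str                      # 'A'..'D'
        question: str                        # '1a', '3', '9', ...
        text: str
        measure: SecurityMeasure
        characteristics: list[Characteristic]

    # Hypothetical classification of one answer from Section 5.1:
    finding = Finding(
        respondent="B", question="1d",
        text="No automatic tool for checking logs; access levels too wide.",
        measure=SecurityMeasure.COMPUTER,
        characteristics=[Characteristic.ACCOUNTABILITY, Characteristic.CONFIDENTIALITY],
    )
    print(finding.measure.value, [c.value for c in finding.characteristics])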
3 Information Security in Healthcare
Swedish healthcare is currently undergoing extensive changes. It is to some extent moving out of the hospitals and is instead being carried out in other forms and locations. Patients with complex care needs require extensive care activities, for instance, in their homes. They have contact with several different healthcare providers, both within municipalities and county councils. These changes increase the requirements for secure communication, as well as for cooperation between several different organizations and authorities. The National Board of Welfare [6] identifies cooperation and information transfer between different healthcare providers as a risk area for patient safety. Computerized systems, as well as all media used for transmitting patient information between different healthcare providers, constitute risk factors. The Board claims that healthcare providers must adopt systematic measures in order to achieve sufficiently secure routines for the exchange of patient information [6]. IT systems are extended to more and more users, but proper functions that control unauthorized access to patient information are still lacking. The Swedish Data Inspection Board declares in its report that county councils, in practice, have little or no control over who has access to information about specific patients [7]. Depending on the authorization method used, various additional measures and routines may be required to force users into making active choices when accessing sensitive data about a specific patient. These demands are not unique to Swedish healthcare. Even if traditions and legal aspects differ between countries, the main problems are the same. Strong authentication and derived services such as authorization, access control, accountability, integrity and confidentiality are pressing demands to achieve [8], [9], [10]. In a distributed healthcare domain, there is also a particular need for
a process-oriented approach [11]. Our research is focused on the patient process, since it expands beyond the boundaries of one organization and consequently leads to patient information being available to more actors. There is a need for better awareness, improved procedures, improved software, and so on [12]. The main purpose of information security in healthcare is twofold: to achieve a high level of patient safety, i.e. to provide patients with the best care with the right information at the right time; and to achieve a high level of patient privacy, i.e. to protect sensitive patient information from unauthorized access. It is difficult to achieve both aims simultaneously, and one of them is often compromised. Hence, a balance between them is necessary in healthcare [13]. Patient safety is here related to availability and integrity, while patient privacy is related to confidentiality and accountability. We return to the relationship between the two concepts in the results.
4 The Case Studies
The research was constructed as two minor case studies with a total of four interviews. The aim of the case studies was twofold: to investigate how healthcare management perceives current information security from a holistic perspective, and to explore their view of information security when patient information is exchanged. The studies were conducted at a national level in Norway and Finland. Three groups of questions were in focus: 1) What main problems and needs, alternatively positive effects, of information security exist in healthcare from a national perspective? 2) What problems and needs, alternatively positive effects, exist when patient information is exchanged between different healthcare providers? 3) How can the present balance between patient safety and patient privacy be characterized, and what tendency can be discerned for the future? From each country, one interest organization (in Norway KITH, the Norwegian Centre for Informatics in Health and Social Care [14]; in Finland STAKES, the National Research and Development Centre for Welfare and Health [15]) and one information security manager from a large hospital were selected. The respondents were selected for their good knowledge and experience of Norwegian and Finnish healthcare respectively. The studies used semi-structured interviews and were based on a set of main questions. Nine questions, derived from the Info Sec model, concern information security from a holistic view, the exchange of patient information between different healthcare providers, as well as the balance between the need for availability of patient information and the protection of patient privacy. The questions are numbered from 1 to 9. The first one is divided into four parts: 1a - 1d. The four respondents are noted as A - D, with A and B from Norway, and C and D from Finland. The answers from respondent A to the first question are hence noted as A1a, A1b and so on in Figure 2. All the answers are structured in a similar manner.
5 Results
This chapter presents a summary of the results for the three main question groups of the case studies.

5.1 Problems and Needs
The questions were structured to address general problems and needs first, before the specific questions about security during patient information exchange. We follow this structure when accounting for the results. The answers to questions 1, 3, 4, 6, 7 and 8, concerning problems and needs, have been classified according to the Info Sec model and are illustrated in Figure 2.

1a Information Security Problems – spontaneous
Both Norwegian respondents mentioned the problem of unavailable patient information, within as well as across organizations. They also emphasized that only those who need the information should have access to it. Respondent C identified employees as the weakest link, but noted that this is difficult to define. There are too many deficiencies concerning, for example, audit logs, regulations for the encryption of information inside healthcare provider organizations, and medical doctors being reluctant to ask for consent. Respondent D claimed that the major problem is a lack of knowledge concerning the applications' security level. Suppliers may have said one thing, but the reality is something else.
Fig. 2. Results from questions 1, 3, 4, 6, 7 and 8 indexed and classified in the Info Sec Model
1b Information Security Problems – administrative
The respondents mentioned that lack of knowledge, human behavior and comprehension are the main administrative problems. There is a lack of resources
for information security activities and insufficient routines to meet the security requirements. Furthermore, systems monitoring is also a problem. It is too expensive to have 24-hour monitoring, although this really is necessary.

1c Information Security Problems – physical security
"Open" hospitals are a major physical security problem in both Norway and Finland. Although new hospitals in Norway equip their entrances with cameras etc., it is still very easy to enter the wards, where much information is visible, both physically and on screens, according to respondent B. Hospitals in Finland use group logins to buildings, and computers have been stolen from the central store, according to respondent D. Respondent C pointed out that physical security in small and medium-sized healthcare centers is not adequate.

1d Information Security Problems – IT security
Authentication and access level problems are obvious in both Norway and Finland. Norway has implemented smart cards for the network, but it is an expensive technique with usability problems, such as users forgetting the cards at home and not bringing them to other computers. Norway also lacks an automatic tool for checking logs, which is unacceptable since the access levels are too wide, according to respondent B. Respondent C claims that their whole password system is not very secure at all. Outsourcing is also a tricky area: "You cannot easily see what you can outsource. You can outsource the technology but you cannot outsource the rest of the responsibility. It has to be included in the contract" (respondent C). Respondent D mentioned problems such as too many uncontrolled output sockets in the systems and external partner access to the systems.

3 Present substantial security problems with cross-border healthcare information from a patient process view
In Norway, the information is attached to one organization, and its exchange is largely limited for legal reasons. Technically, systems are hard to integrate, and structured information is difficult to transfer between the systems. According to respondent B, although they can transfer information, this is not allowed according to their legislation. In one region, all healthcare organizations (except municipalities) have the same system, making it easy to transfer information between them. Finland also has interoperability problems, which are technical in a short-term sense. The main problems are semantic, according to respondent C. Respondent D claims that sending information between hospitals and healthcare centers is a minor problem. Instead, the main problem is maintaining the systems of the many organizations.

4 and 6 Future substantial security problems with cross-border healthcare information from a patient process view, and the security measures for solutions
The Norwegian respondents agree on the importance of the technology working and being thought through from the start. Risk analysis is needed for the whole healthcare sector, and the consequences should be dealt with. The main problem is the daily work of the ordinary co-workers, according to respondent A: "It is important to find a balance". Responsibility issues are also important for the
future, such as obtained consent, documentation and information duty, and distribution to the research community. In Finland, respondent C states that the next generation of health record systems will be more consistent and more accessible, but changing IT will take more than one generation. According to respondent C, finding one general definition for merging data is too difficult. Information overflow is another problem: "The medical doctors still have a maximum of 15 minutes for the patient and five minutes for discussion. They do not have time to check all the information" (respondent C). Respondent D also mentioned the problem of a high service level agreement. To solve these problems, the Norwegian respondents suggested supporting as well as presenting tools, improved analysis tools for checking logs, and more education concerning network applications. The Finnish respondents emphasized the need for electronic healthcare information, and respondent C suggested a reorganization of the whole healthcare sector. Furthermore, tools for checking quality will become more common and useful, as well as necessary, for the healthcare sector in order to improve the quality of care. Respondent D suggested the need for standards and service agreements.

7 and 8 Future security problems concerning availability of health information, and measures for solutions
Confidentiality aspects are a common risk factor concerning the availability of healthcare information for all respondents. The Norwegians stress that the availability of many patients' health records is too broad, revealing a need to revise authorization allowances. Respondent A pointed out the risk of criminal blackmail, particularly for well-known persons: "At present, we have a peaceful regime both in Norway and Sweden, but in other types of political regimes, our openness can be of great importance and be misused". The Finnish respondents also worry about how to control access levels, for example: "We have to think about how we control the access to patient information and other data because the access is very wide" (respondent C). The political situation was also considered: "Political situations outside Scandinavia, health and fitness records and of course the access, is controlled by law, but we have seen that a law can be changed in some weeks like in the UK" (respondent C). The Norwegian respondents suggest maintaining basic principles concerning privacy and data protection. The access control systems must be improved: "We must implement a 'need-to-know model' even if it is administratively both intensive and expensive" (respondent A); and "We have to do risk analyses in order to set requirements for privacy" (respondent B). In Finland, respondent D emphasizes that patients must have control over their own records, while respondent C is more focused on technical issues: "New technology must be implemented, for instance the new generation of PKI".
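To illustrate the "need-to-know model" that respondent A calls for, the following minimal Python sketch (a hedged illustration with hypothetical record structures, not any country's actual system) grants access to a patient record only where an active care relationship exists, and writes every decision to an audit log that an automated log-checking tool could later analyse.

    # Minimal need-to-know access check with audit logging (illustrative only).
    from datetime import datetime, timezone

    care_relationships = {("nurse_17", "patient_42")}  # maintained by the EHR system (assumed)
    audit_log = []

    def may_access(staff_id: str, patient_id: str) -> bool:
        """Allow access only with an active care relationship; log every decision."""
        allowed = (staff_id, patient_id) in care_relationships
        audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "staff": staff_id,
            "patient": patient_id,
            "granted": allowed,
        })
        return allowed

    assert may_access("nurse_17", "patient_42")       # involved in care: allowed
    assert not may_access("clerk_03", "patient_42")   # no care relationship: denied, but logged

Note that the denied attempt is logged as well; recording every decision is what makes the administratively "intensive and expensive" model auditable afterwards.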
5.2 Positive Aspects
This section follows the same structure as the previous one: the results are first presented in Figure 3, addressing general positive aspects, before proceeding with the specifics about the exchange of patient information. The answers to questions 2 and 5, concerning the positive aspects, are presented in Figure 3.
Fig. 3. Results from questions 2 and 5 concerning the positive aspects
2 Present positive aspects of information security in healthcare
In both Norway and Finland, patients are allowed to read their own patient records, implying "power of citizens" (respondent C). When paper-based records were used, patients did not have this opportunity. The computerization of patient records is the most positive aspect of information security in healthcare according to all the respondents: "More people can have access to information quickly and simultaneously, and depending on IT, we can log everyone who has accessed the record and compared with the paper based system, this is very positive" (respondent B); "People cannot do whatever they want, with IT you have more orderliness" (respondent D).

5 Future positive aspects of information security in healthcare
Future positive aspects will be more or less the same as those previously mentioned. Even though improvements are still needed, there is a sense of faith that the healthcare sector will succeed. Information is available to the healthcare actors who really need it, both internally and externally. The duty of information is improved and easier to implement with IT than with paper-based systems. Hence, IT development must continue in healthcare.

5.3 Patient Safety and/or Patient Privacy?
The answers to question 9 are classified according to patient safety and patient privacy. These concepts have extended the upper part of the Info Sec
model and are related to the four characteristics of information security at its top (Figure 4). The results indicate how patient safety and patient privacy often contradict one another, but also the desire to balance them. Three respondents claim the current focus is on patient privacy, since legislation is focused on protecting this issue. One respondent considers that patient safety dominates healthcare, while patient privacy dominates legislation.
Fig. 4. Result from question 9 - the balance between patient safety and patient privacy
All the respondents state the necessity of achieving a balance between the two concepts, even if two of the respondents want it to lean more towards safety. It should be possible to achieve a high level of patient safety and an acceptable level of patient privacy. Respondents A and D claim that in the future there will be a balance between privacy and safety: "We must try to find the right way. Sometimes we have to focus more on patient safety and sometimes on patient privacy" (respondent A); and "Patient safety is more technically focused and therefore easier to solve. The privacy part is more complex" (respondent D). Respondents B and C state that the focus will move towards patient safety in the future: "Safety and quality is coming and will be measured" (respondent C). Respondent B adds that even if privacy seems to be less in focus at the present time, in the near future the pendulum will probably swing back.
6 Discussion and Comparison with the Swedish Study
The results from the case studies are now compared to a similar study in Sweden. For details about the Swedish study, we refer to [4]. The analysis is organized according to the categories: information security problems and needs, positive aspects of information security in healthcare, and patient safety and patient privacy.

6.1 Problems and Needs
The three countries have approximately the same social structure. They also basically have the same problems and needs concerning information security in healthcare. With regard to technical security, they share the problems concerning
authentication techniques, even if Norway is somewhat further along with its smart cards. All three countries also lack authorization techniques and tools for log management. In administrative security, the main problems are: too wide availability of patient information, incomplete work routines, a lack of security awareness and a lack of risk analyses. The integration dilemma is also a common problem. Legislation must be adapted to the new requirements, and the responsibilities for patient information must be revised. Thus far, Norway and Finland have implemented more standards and policies than Sweden, while the Swedish study did identify the need for strategies, policies and standards. In Norway and Finland, patients have the right to access their logs and consequently have more control over their own information. This is currently not possible in Sweden, but the proposed "Data Protection Act" will allow patients to be granted access to their logs, although they will not have the right to demand them.

6.2 Positive Aspects
All countries agree that the most positive aspect concerning information security in healthcare is the computerization of healthcare records. Security issues still remain, but the positive aspects are clear: information availability for healthcare staff both internally and externally, a more efficient care flow, and improved quality of care and patient safety. Norway and Finland emphasize the positive aspect of more power to the patients, but this is, as mentioned, not possible in Sweden.

6.3 Patient Safety and Patient Privacy
Our research shows similarities concerning the balance between patient safety and patient privacy, even though the respondents disagree on where the present focus is directed. They agree that legislation is focused on privacy, while organizations focus on safety. The Norwegian and Finnish respondents all state the necessity of achieving a balance between the two concepts. The Swedish respondents are divided; three want a balance while two claim the focus should be on patient safety. All the countries claim that in the future the focus will shift to patient safety, while one also mentioned the importance of privacy.

6.4 Comparison Summary
The similarities between the countries concern problems and needs, positive aspects, as well as patient safety versus patient privacy, as illustrated in Table 1.

Table 1. Similarities between the three countries

Problems and needs:
- Lack of authorization techniques (technical security)
- Lack of log management techniques (technical security)
- Too wide availability of patient information (adm. security)
- Incomplete work routines (adm. security)
- Lack of security awareness (adm. security)
- Need for risk analyses (adm. security)

Positive aspects:
- Computerization of healthcare records

Patient safety and patient privacy:
- No consensus on the current focus between the countries
- Legislation is focused on privacy while organizations emphasize safety
- Tendency to focus more on safety in the future
The computerization of healthcare records, which is a main issue in all three countries, results in the availability of information, a better care flow, and improved quality of care. There are also differences, illustrated in Table 2.

Table 2. Differences between the three countries

Authentication technique problems:
- Sweden: Ongoing pilot project with smart cards
- Norway: Further advanced with smart card implementation, e.g. communications networks
- Finland: Further advanced with smart card implementation

Exchange of patient information:
- Sweden: Few standards and policies
- Norway: More standards and policies
- Finland: More standards and policies

Patients' own access to information:
- Sweden: Patients lack the right to access their logs, thus little power to patients
- Norway: Patients have the right to access their logs, thus more power to patients
- Finland: Patients have the right to access their logs, thus more power to patients

Patient privacy and patient safety:
- Sweden: No consensus among respondents
- Norway: Balance is needed
- Finland: Balance is needed
The first two rows of the table indicate problems and needs, the third row includes positive aspects, and the fourth concerns the discussion of privacy versus safety. Even though the countries share the problems of authentication techniques, they differ in how far advanced they are in addressing them. Information exchange is also, in part, a common dilemma, but the countries again differ in how advanced they are in dealing with the issue. Interestingly, Norway and Finland both allow patients to access their logs. The general similarities should allow Sweden to do the same, but as mentioned, this will require legislative changes.
7 Summarizing Discussion
The aim of this paper is to identify problems and needs concerning information security in a distributed healthcare domain. Two minor case studies were conducted in Norway and Finland, and the results were compared to a similar pre-existing Swedish study. Consequently, this research provides a more holistic, international overview. The results were presented and classified according to the Info Sec model. Furthermore, the need to relate and analyze patient safety and patient privacy was also emphasized. There are many similarities between Norway, Finland and Sweden. The differences are mainly found at a more abstract level. Sweden has only recently adopted a National Strategy for IT in Healthcare, while both Norway and Finland have worked in a more structured way for a longer period of time. Both Norway and Finland are also more centralized in their healthcare IT development. Another interesting difference is the lack of patient involvement in Sweden compared to both Norway and Finland. The respondents claim that patient involvement is a useful preventive measure to protect patient privacy. When patients have the right to see who has accessed their records, the healthcare staff is more careful about avoiding misuse. However, protecting privacy is a very individual task, because people have different attitudes about privacy, and even personal opinions can change depending on the situation. The involvement of patients could therefore be a good complement to the protection of their own privacy. This is a challenge for all three countries, but particularly for Sweden, since it does not allow patients to access their records in the same way as Norway and Finland. Patient safety and patient privacy in healthcare must both be accomplished in order to achieve good quality of care and maintain the trust of the patients. This implies that information security must be taken seriously and that the balance between these two concepts should be given careful consideration. However, the healthcare sector in general needs to look at, and implement more extensively, the existing security standards, frameworks and best practices, such as ISO/IEC 17799, ITIL, etc. Ongoing work exists to incorporate ISO/IEC 17799 into the healthcare sector (ISO 27799), but further efforts are needed to adopt other frameworks into the sector as well. Furthermore, future research needs to follow how the countries address both the problems and the differences nationally, as well as how they collaborate to identify common solutions. In addition, guidelines for establishing and achieving successful collaboration across national borders should be developed, which can also help establish better links with other countries. Such activities will be useful not least because the mobility of students, the work force and organizations keeps increasing.
References
[1] Norden, 2005. Health and Social Sectors with an "e". A Study of the Nordic Countries. TemaNord 2005:531. Nordic Council of Ministers, Copenhagen. ISBN 92-893-1157-6.
[2] Computer Sweden, 2006. Computer Sweden/Dagens Medicin, IT i vården, 2006-11-22 (in Swedish).
[3] Ministry of Health and Social Affairs, 2006. Nationell IT-strategi för vård- och omsorg. ISBN 91-631-8541-5 (in Swedish).
[4] Åhlfeldt, R-M. and Söderström, E., 2007. Information Security Problems and Needs in a Distributed Healthcare Domain - A Case Study. In Proceedings of the Twelfth International Symposium on Health Information Management Research (iSHIMR 2007), Sheffield, UK, July 18-20, 2007, pp. 97-108. ISBN 0 903522 40 3.
[5] SIS, 2003. SIS Handbok 550. Terminologi för informationssäkerhet. SIS Förlag AB, Stockholm (in Swedish).
[6] National Board of Health and Welfare, 2004. Patientsäkerhet vid elektronisk vårddokumentation. Rapport från verksamhetstillsyn 2003 i ett sjukvårdsdistrikt inom norra regionen. Artikelnr 2004-109-11 (in Swedish).
[7] Data Inspection Board, 2005. Ökad tillgänglighet till patientuppgifter. Rapport 2005:1. Available from: http://www.datainspektionen.se [Accessed 1 November 2005] (in Swedish).
[8] CEN TC 251, prENV 13729, 1999. Health Informatics - Secure User Identification - Strong Authentication using Microprocessor Cards (SEC-ID/CARDS).
[9] Smith, E. and Eloff, J. H. P., 1999. Security in healthcare information systems - current trends. International Journal of Medical Informatics, 54, pp. 39-54.
[10] Blobel, B. and Roger-France, F., 2001. A systematic approach for analysis and design of secure health information systems. International Journal of Medical Informatics, 62, pp. 51-78.
[11] Poulymenopoulou, M., Malamateniou, F. and Vassilacopoulos, G., 2003. Specifying Workflow Process Requirements for an Emergency Medical Service. Journal of Medical Systems, 27(4), pp. 325-335.
[12] Louwerse, K., 1998. Availability of health data; requirements and solutions - Chairpersons' introduction. International Journal of Medical Informatics, 49, pp. 9-11.
[13] Utbult, M., Holmgren, A., Larsson, R. and Lindwall, C. L., 2004. Patientdata - brist och överflöd i vården. Teldok rapport nr 154. Almqvist & Wiksell, Uppsala (in Swedish).
[14] KITH, 2007. Web page. Available from: http://www.kith.no [Accessed Sep 2007].
[15] STAKES, 2007. Web page. Available from: http://www.stakes.fi [Accessed Sep 2007].
[16] ISO/IEC 17799, 2000. Information Technology - Code of practice for information security management. International Organization for Standardization, Geneva, Switzerland.
[17] ITIL, 2008. Web page. Available from: http://www.itilofficialsite.com/home/home.asp [Accessed Jan 2008].
Impact of Application Lifecycle Management – A Case Study
J. Kääriäinen1, A. Välimäki2
1 VTT, Technical Research Centre of Finland, Oulu, Finland, [email protected]
2 Metso Automation Inc, Tampere, Finland, [email protected]
Abstract. Lifecycle management provides a generic frame of reference for the systems and methods that are needed for managing all product-related data during the product's lifecycle. This paper reports experiences from a case study performed in the automation industry. The goal was to study the concept of Application Lifecycle Management (ALM) and to gather and analyse first experiences as a company moves towards distributed application lifecycle management. The results show that several benefits were gained when introducing an ALM solution in the case company. This research also produced a first version of an ALM framework that can be used to support practical ALM improvement efforts. In this case, the experiences show that lifecycle management should cover the artefacts produced at different stages of a project lifecycle and keep all activities synchronised. The challenge resides in how to generate efficient company-specific implementations of ALM for complicated real-life situations.
Keywords: Industrial case studies and demonstrators of interoperability, Interoperability best practice and success stories, Tools for interoperability
1 Introduction
The ability to produce quality products on time and at competitive cost is important for any industrial organization. Nowadays, companies are seeking systematic and more efficient ways to meet these challenges. Globalization and the use of subcontractors are becoming the norm in current business environments. This shift from traditional one-site development to a networked development environment means that product development is becoming a global, complex undertaking with several stakeholders and various activities. In this kind of environment, people are likely to have difficulties in understanding each other and in sharing a common view of product-related data. Therefore,
companies have to search for more effective procedures and tools to coordinate their increasingly complex development activities. Modern approaches to product development need to take into account the business environment, and the product's whole lifecycle must be covered, from the initial definition up to maintenance. Such a holistic viewpoint means efficient deployment of lifecycle management. Setting up a comprehensive, smoothly running and efficiently managed product development environment requires effective lifecycle processes and tool support. In practice, this means the deployment of the concepts of Product Lifecycle Management (PLM) and, from a SW development point of view, Application Lifecycle Management (ALM). In the literature and among tool vendors, the term PLM has become established [1]. Within the past few years, the concept of "Application Lifecycle Management" (ALM) has emerged to indicate the coordination of activities and the management of artefacts during a SW product lifecycle, such as requirements, designs, source code, builds, documents, etc. ALM is quite a new term and therefore there are no extensive publications that deal with it. However, tool vendors have recently used the term for tool suites or approaches that provide support for the various phases of a SW development lifecycle. As ALM is quite a new term, there is an apparent need to report practical ALM experiences from industry. This paper reports experiences from a case study performed in a company operating in the automation industry. The case study is part of a broader ALM improvement effort in the case company, where the aim is to systematize ALM in distributed development projects that develop SW for SW-intensive systems. The previous ALM solution was not good enough for the company's future needs. The company expects that the new ALM solution will improve project visibility and efficiency. In this case study, the goal was to gather and analyse first experiences as the company moves towards distributed ALM. The whole concept of ALM is also somewhat unclear, and therefore a further aim of the research was to create an ALM framework that can be used to evaluate the current state of an ALM solution in a target organization and to detect ALM elements that possibly need to be improved. The experience data was collected from the real users of ALM in the case company. The results reported in this paper present the results of the current state analysis and the perceptions of the project members when they estimated the impact of the new ALM solution on their daily work. This paper is organised as follows: In the next section, the background and concept of ALM are identified. Then, in Section 3, the research approach is described, comprising a description of the industrial context and research settings. In Section 4, the results are introduced. Finally, the results are discussed and conclusions are drawn with some final remarks.
2 Related Research
Application Lifecycle Management (ALM) is quite a new term and therefore it is difficult to find definitions for it. One starting point is standards, such as ISO/IEC 12207, that present lifecycle approaches for software development. Doyle [2] argues that ALM is a set of tools, processes and practices that enable a
development organization to implement and deliver according to such software lifecycle approaches. In practice, this means that some kind of ALM solution exists in every company, even though toolsets specially marketed as ALM suites were introduced only a few years ago. Schwaber [3] defines the three pillars of ALM as traceability, process automation and reporting. The role of ALM is to act as a coordination and product information management discipline. The purpose of ALM is to provide integrated tools and practices that support project cooperation and communication throughout a project's lifecycle (Fig. 1). It breaks the barriers between development activities with collaboration and fluent information flow. For management, it provides an objective means to monitor project activities and to generate real-time reports from project data.
Fig. 1. Application Lifecycle Management facilitates project cooperation and communication.
Doyle & Lloyd [4] identify requirements definition and management as one of the most critical disciplines in ALM. One focal point of requirements management is requirements traceability, which was actively researched in the 1990s and early 2000s, for example in [5, 6, 7]. Traceability provides a means to identify and maintain relationships between developmental artefacts and therefore facilitates reporting, change impact analysis and information visibility through the product lifecycle. The roots of ALM tools are in Configuration Management (CM) and Integrated Development Environments [8]. However, the problem with CM systems is that they are overly focused on code, and fail to provide a more comprehensive view of the project, especially in terms of its requirements [9]. An important landmark on the road towards ALM tools was the rethinking through which requirements and other developmental artefacts started being taken seriously by CM tool vendors [10]. Therefore, to meet the changing needs of the industry, configuration management systems had to be merged into infrastructures that support the entire SW development lifecycle [9, 11, 12].
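As an illustration of what such trace links enable, the following minimal Python sketch (illustrative only, not a description of any specific ALM product; the artefact identifiers are invented) stores directed links between developmental artefacts and answers a simple change-impact query over them.

    # Minimal trace-link store with a change-impact query (illustrative only).
    from collections import defaultdict

    links = defaultdict(set)  # directed links: requirement -> task -> file -> build

    def trace(src: str, dst: str) -> None:
        links[src].add(dst)

    trace("REQ-12", "TASK-7")
    trace("TASK-7", "src/alarm.c")
    trace("src/alarm.c", "BUILD-305")

    def impacted(artefact: str) -> set:
        """All downstream artefacts reachable from `artefact` (change impact)."""
        seen, stack = set(), [artefact]
        while stack:
            for nxt in links[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    print(impacted("REQ-12"))  # {'TASK-7', 'src/alarm.c', 'BUILD-305'}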
Shaw [13] argues that with traditional toolsets, traceability and reporting across disciplines is extremely difficult. The same data is often duplicated in various applications, which complicates the traceability and maintenance of data. According to Doyle & Lloyd [4], when the results of processes are recorded in the same repository in which the development artefacts are managed, it is easy to produce a relevant set of related metrics that span the various development phases of the lifecycle. Traditionally, vendors have attempted to support lifecycle management with ALM tools by moving to a single repository or by suggesting that practitioners move to a single vendor for all ALM tools [13]. These solutions can be very comprehensive and well integrated. Their downside is that they lock a company into a single vendor. Another approach, which aims to be more vendor-independent, is called an "application integration framework". Such frameworks are used as platforms to integrate the several applications needed during the software development lifecycle. Examples of this kind of framework are the Eclipse and ALF projects [14]. Schwaber [3] states that ALM does not necessarily require tools. Traditionally, lifecycle activities have been handled partly using manual operations and solutions. For example, handovers between different functions can be controlled using a paper-based approval process. However, these activities can be made more efficient through tool integration with process automation.
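To illustrate the automation of such a handover, the following minimal Python sketch (hypothetical states and roles, not the workflow of any particular tool) replaces a paper-based approval with an explicit state machine whose transitions are validated and recorded instead of circulating on paper.

    # Minimal approval workflow as a validated, recorded state machine (illustrative only).
    TRANSITIONS = {
        ("draft", "submit"): "in_review",
        ("in_review", "approve"): "approved",
        ("in_review", "reject"): "draft",
    }

    def advance(state: str, action: str, actor: str, history: list) -> str:
        """Apply one workflow action, rejecting anything the process does not allow."""
        new_state = TRANSITIONS.get((state, action))
        if new_state is None:
            raise ValueError(f"{action!r} is not allowed from state {state!r}")
        history.append((state, action, actor, new_state))
        return new_state

    history = []
    state = "draft"
    state = advance(state, "submit", "developer", history)
    state = advance(state, "approve", "project_manager", history)
    print(state, history)  # 'approved', with a full audit trail of the handover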
3 Research Approach
This section discusses the industrial context, phases and methods used in this research.

3.1 Industrial Context
The case company operates in the automation industry. The company produces complex automation systems in which SW development is a part of system development. Product development in the case company is organized according to product lines. This research focuses on two product lines that consist of several products which are partly developed at different sites. The case was further focused on two teams, each having 5 to 6 projects running in parallel. Depending on the project, development is geographically distributed over 2 or 3 sites. The same processes and tools are used in all projects. Previously, projects followed a partly iterative development process. Currently, projects have adopted the agile development method Scrum. The company is in its first iteration of ALM improvement. Previously, the company's ALM solution for distributed development comprised several somewhat isolated databases for managing project-related data, such as version control, document management and fault management systems. The geographic distribution, as well as increasing complexity and efficiency demands, forced the company to seek more integrated solutions to coordinate distinct project phases and to provide a centralised project database for all project-related data. In practice, this meant the deployment of a commercial ALM tool together with the Scrum process.
3.2 Research Settings
This research has been carried out in the following phases: research planning, questionnaire, interviews and analysis of results. A two-step approach was used for data collection. First, a questionnaire was made for project managers and project members. The respondents were asked about current practices related to ALM, opinions on how the introduced ALM solution has affected daily work compared to previous solutions, and opinions about things that are important for efficient application lifecycle management. They were also asked about the kinds of challenges a distributed environment would set for operation. Based on the related research, the elements of ALM were identified. This framework has been used for organizing the questionnaire and interviews and for analysing the case results. The elements of the preliminary ALM framework are:

- Creation and management of project artefacts: How are different data items created, identified, stored and versioned in the various phases of a project lifecycle? All project data should be securely and easily shared with all stakeholders. Team communication should be supported.
- Traceability of lifecycle artefacts: How is traceability handled in a project lifecycle? Traceability provides a means to identify and maintain relationships between artefacts and, therefore, facilitates reporting, change impact analysis and information visibility through the product lifecycle.
- Reporting of lifecycle artefacts: How does the solution support reporting on a project lifecycle? The solution should facilitate the gathering, processing and presentation of process and configuration item-related information for an organization.
- Process automation and tool integration: How well do the tools support lifecycle processes and what kind of tool integrations are there? An ALM solution should support the procedures of the project and facilitate fluent data exchange and queries between various development and management tools.
After the questionnaire, project managers from the projects were selected as representatives who were interviewed using semi-structured interviews. The main structure of the interview framework was similar to the structure of the questionnaire discussed above.
4 Results
This section presents the results obtained from the industrial case study. Fig. 2 and Fig. 3 illustrate the respondents' satisfaction with the earlier and the new solution based on the ALM elements. After that, each element is analyzed and presented as follows:

- ALM framework element: The issue that needs to be addressed in an ALM solution.
- Solution: A description of how the issue is supported by the previous ALM solution and the new ALM solution.
- Experiences: Experiences gained from the usage of the solution; strengths, weaknesses and improvement ideas for the solution in general, based on the case data.

Fig. 2. Respondents' satisfaction with the previous and new solutions.
Fig. 3. Respondents' satisfaction with the previous and new solutions based on the ALM elements (creation and management of project artefacts, traceability of lifecycle artefacts, reporting of lifecycle artefacts, and process automation and tool integration).
ALM framework element: Creation and management of project artefacts.
Solution:
- Previous solution: Local CM systems, a central document management database, programming tools, a central system/SW fault management database and a central test document database. In addition, e.g. design tools, word processing tools and project management tools were used.
- New ALM solution: An integrated ALM suite with a central database for storing and versioning lifecycle artefacts. It covers project management, requirements management, configuration management, the programming environment, a project portal and document management. In addition, there is a central test document database and a central system fault management database. Design tools and word processing tools, for example, are also used.
Experiences: The new ALM solution is a single-vendor integrated solution. Previously, projects used several somewhat isolated databases, e.g. for document management, configuration management and task management, which was found to be complicated and difficult to keep synchronised. The ALM tool selection was affected by the fact that the vendor of the new solution was the same as the vendor of the old CM system, and therefore the terminology of the new tool was somewhat familiar to the users. In the new solution, SW project artefacts such as management documents, requirements, tasks, design files, source code, builds and test documents can be stored in a central repository. Fig. 4 presents ratings of how well the respondents felt that the previous and new solutions supported the management of different project artefacts.
Fig. 4. Respondents' satisfaction with the previous and new solutions based on the management of SW project artefacts (project planning and management documents; requirements specifications; design documents; source code and builds; defects and bugs; test plans, cases and reports).
The respondents felt that the solution facilitates information sharing and communication because there is a single project portal for sharing the project data. At this point, it is worth noticing that globalization is nowadays a must; there is no real alternative. Therefore, the company had the challenge of working in a global environment "locally", i.e. without the problems of geographical dispersion. Here, information technology had a big role in providing the means to reduce geographic barriers. One downside of the solution is that its requirements management features are insufficient, even if improved considerably compared to the old solution. The templates used in the solution for data capturing were not sufficient, and they will possibly need to be extended or tailored in the near future for the needs of the project. This is understandable because each organisation, product or project is unique, with its own needs and characteristics. For example, attributes for requirements classification are missing from the standard process template, and the visualisation of the hierarchy of the information (requirements – tasks) was poor. The results also show that the use of communication tools can facilitate informal team communication and, in some cases, also store discussion history (e.g. a chat tool). Furthermore, since the overall management has to coordinate development efforts and resources, it is important to be able to synchronize and prioritize different development projects with each other.
ALM framework element: Traceability of lifecycle artefacts.
Solution:
- Previous solution: Traceability information was collected and stored as logical links (e.g. embedded into file/item names or a comment field) or physical links (physical links between databases) between artefacts. Instructions for traceability existed.
- New ALM solution: The solution provides technical capabilities for linking the various lifecycle artefacts stored in a central ALM database. In addition, there are logical and physical links between other databases. Instructions for traceability exist.
Experiences: According to the results, traceability was previously more difficult, since project data was dispersed over several somewhat isolated databases. In practice, this meant that traceability information was mostly collected and stored as logical links between artefacts. Respondents stated that insufficient traceability can cause various problems: for example, it slows down testing, complicates bug fixing and makes it more difficult to find out the reasons for changes. To make traceability workable, trace capture and reporting should be as easy and automated as possible; otherwise the capture and maintenance of traces is too time-consuming, which leads to out-of-date information of little value. Respondents also felt that a distributed development environment requires that instructions for traceability be well documented and communicated to ensure that traceability information remains up to date. Respondents stated that the new ALM solution provides good possibilities for more automated traceability once the process templates are tailored to better support traceability and reporting according to the needs of the company and project. This provides a possibility to gather traceability information as part of the development process. A central project database provides a means for central trace storage in a global environment. This means that project personnel have a consistent view of project data and its interrelations. Examples of lifecycle artefact traceability needs that came up in this study were: requirements – tasks – subtasks – code – build, bug – code – build, and traceability of changes. In this case, SW is just one part of a complex system product. Therefore, the ALM solution should interface with HW and system level information management systems, e.g. with a system requirements management system and a system version database.
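One common way to make trace capture "as easy and automated as possible", as the respondents ask, is to harvest links as a side effect of normal work. The following minimal Python sketch (the commit-message convention and identifiers are our assumptions, not the case company's actual rules) parses task identifiers out of commit messages and records task-to-file trace links automatically.

    # Minimal automated trace capture from commit messages (illustrative only).
    import re

    TASK_REF = re.compile(r"\b(TASK-\d+)\b")  # assumed convention: 'TASK-<n>' in messages

    def capture_traces(commit_msg: str, changed_files: list) -> list:
        """Return (task, file) trace links found in one commit."""
        return [(task, f)
                for task in TASK_REF.findall(commit_msg)
                for f in changed_files]

    print(capture_traces("Fix overflow in alarm filter (TASK-7, TASK-9)",
                         ["src/alarm.c", "test/test_alarm.c"]))
    # [('TASK-7', 'src/alarm.c'), ('TASK-7', 'test/test_alarm.c'),
    #  ('TASK-9', 'src/alarm.c'), ('TASK-9', 'test/test_alarm.c')]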
Previous solution: Predefined and tailored reports from databases. New ALM solution: The ALM solution provides predefined reports, e.g. based on process templates. It is also possible to modify existing reports and create new reports based on the items stored in a central repository.
Experiences: The ALM solution generates predefined or tailored project lifecycle reports from a central ALM database and thus facilitates data consolidation. There are now better project management reports than before. Previously, reports were based on schedules and feature statuses. Now, Scrum-method-based reports
are based, e.g., on effort, remaining effort, iterative development and working SW. Report sharing and visibility are supported in the distributed development environment, and therefore project personnel and management can easily follow the project status. However, the overall process template was not sufficient for the project, and therefore the reports also need to be extended in the future to better meet the project's needs. Respondents gave some examples of project reports that are useful for their work: sprint burn-down chart, product backlog composition, found/resolved/inspected bugs, and a report on remaining work (hours).
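As an illustration of such reporting from a central repository, the following minimal Python sketch (the task records and field names are invented for the example) computes the total remaining effort per sprint day, i.e. the data behind a sprint burn-down chart.

    # Minimal sprint burn-down computation over central task records (illustrative only).
    tasks = [
        {"id": "TASK-7", "remaining_hours": {"day1": 16, "day2": 10, "day3": 4}},
        {"id": "TASK-9", "remaining_hours": {"day1": 8,  "day2": 8,  "day3": 2}},
    ]

    def burndown(tasks: list, days: list) -> dict:
        """Total remaining effort per sprint day, ready for charting."""
        return {d: sum(t["remaining_hours"].get(d, 0) for t in tasks) for d in days}

    print(burndown(tasks, ["day1", "day2", "day3"]))  # {'day1': 24, 'day2': 18, 'day3': 6}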
Previous solution: Separate process guide, tool-specific process support and automation. Some integrations between databases (point-to-point integrations built in a company). New ALM solution: Tight integration between different tools provided by ALM tool: requirements/task management, version control, build, defect management, project management, word processing, spreadsheet application. Solution provides standard process templates and their tailoring possibilities. Tailored templates can be used for configuring ALM system. Process guidance can be accessed from project portal (integrated into ALM tool). Wiki-based company-internal process instructions. Sprint retrospectives according to Scrum to continuously improve practices.
Experiences: The respondents felt that an integrated toolset with process support is important. The new ALM solution supports this by providing an integrated environment that utilises a central project repository. Furthermore, it was stated that tool integration should also work in a distributed development environment. For example, developers should have fluent access to a distributed version control system via their local SW development tools. The solution also enables the use of predefined or tailored process templates that are used to configure the ALM system. According to the respondents, the use of standard process templates is problematic, since the needs of the company and project may vary. The current standard templates represent both ends of the process spectrum: agile (a lightweight process that adheres to the principles of the Agile Alliance) and CMMI (supporting appraisal to CMMI level 3) [19]. Respondents felt that some kind of combination of the basic templates and practices could be the answer. One respondent noted that tailoring could also cause problems if the ALM solution is updated (a new version) in the future and the new version does not work with a tailored template. Furthermore, the project produces SW that is part of a broader system product, and therefore some respondents also raised the question of integration with system/HW-level information management solutions (e.g. the system fault database, test document database and system version database). Thus, ALM interfaces with system lifecycle management systems (i.e. PLM systems). Scrum process instructions can be accessed from the ALM solution; however, they were too general, and therefore there are also wiki-based company-internal process instructions.
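To illustrate how a standard process template could be tailored rather than replaced, the following minimal Python sketch (the template fields are illustrative assumptions, not the actual tool's schema) treats the template as data and merges project-specific overrides into it, here adding the requirements-classification attribute the respondents found missing.

    # Minimal template tailoring: standard template as data plus project overrides (illustrative only).
    STANDARD_SCRUM_TEMPLATE = {
        "work_item_types": ["product backlog item", "task", "bug"],
        "requirement_attributes": ["title", "description", "priority"],
        "reports": ["sprint burndown", "backlog composition"],
    }

    def tailor(standard: dict, overrides: dict) -> dict:
        """Merge project-specific additions into a copy of the standard template."""
        merged = {k: list(v) for k, v in standard.items()}
        for key, extra in overrides.items():
            merged.setdefault(key, [])
            merged[key].extend(x for x in extra if x not in merged[key])
        return merged

    project_template = tailor(STANDARD_SCRUM_TEMPLATE, {
        "requirement_attributes": ["classification"],
        "reports": ["found/resolved/inspected bugs"],
    })
    print(project_template["requirement_attributes"])
    # ['title', 'description', 'priority', 'classification']

Keeping the overrides separate from the standard template would also limit the upgrade risk one respondent mentioned: when a new tool version ships a new standard template, the same overrides can be re-applied to it.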
5 Discussion
This paper presents a case study in which the impact of application lifecycle management was evaluated. Previously, the teams used several separate systems to manage project-related data. This caused challenges in coordinating development work and maintaining product-related data. Therefore, the case company decided that the previous solution was not good enough for its future needs. The company tries to face these challenges with the new ALM solution, which provides a more integrated frame for the management of project data. The company expects that the new ALM solution will improve efficiency through better project decisions and a decrease in quality costs in distributed projects. This research also aimed to clarify the concept of ALM by producing a framework that can be used to evaluate the current state of an ALM solution in a target organization and to detect ALM elements that possibly need to be improved. Based on the results presented in the previous sections, the ALM framework elements are presented in Fig. 5. The "Creation and management of project artefacts" element is the foundation for ALM. The other elements enable an efficient cooperation and communication environment. Future research should study the framework's position in relation to maturity and appraisal models, since such models, e.g. SW-CMM (Capability Maturity Model) or ISO 15504, provide the ability for an organization to assess its work process practices.

Fig. 5. Principal elements of the ALM framework: creation and management of project artefacts as the foundation, surrounded by traceability of lifecycle artefacts, reporting of lifecycle artefacts, tool integration, process automation and communication.
The results of the case study show that the successful implementation of a centralized ALM solution has many advantages for SW projects. The project members are more satisfied with the new ALM solution than with the previous solution, even though there are still some challenges that need to be solved in future improvement efforts. The results show that a central project database was the foundation for successful ALM. The most successful ALM issues were traceability and reporting. The importance of these issues for ALM has also been discussed in [4] and [2]. Also, communication-related arguments were strong in this study. Communication solutions and practices are needed to facilitate informal communication and project data visibility. The project used a predefined Scrum process template to configure the ALM solution to support agile working methods.
However, the responses show that predefined process templates may need to be adapted according to the needs of the project. The worst deficiencies are in the requirements management activities of the solution. This is critical since requirements management has been identified as one of the most critical disciplines in ALM [4]. The adaptation of product information management practices to a project environment has been discussed in the literature, e.g. from an SCM point of view in [15] and [16]. Therefore, the challenge is how to generate efficient company-specific implementations of ALM. The results also show that if SW development is part of system product development, then interfaces with system/HW lifecycle management (i.e. PLM, CM) need to be handled, or e.g. the traceability of SW to system or HW entities remains insufficient. This viewpoint has also been noted e.g. in [17] and [18].
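As an illustration of the lifecycle traceability that the study found most successful, the following Java sketch stores artefact links in a central map and answers transitive queries such as which builds a requirement flows into. It is our own simplification of the central-project-database idea, with hypothetical artefact identifiers; it does not reproduce the case company's actual tool.

import java.util.*;

public class TraceabilityDemo {
    // artefact id -> ids of artefacts it is linked to (e.g. requirement ->
    // task -> changeset -> build)
    static final Map<String, List<String>> links = new HashMap<>();

    static void link(String from, String to) {
        links.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    // Transitive closure: everything reachable from one artefact, which is
    // the kind of query behind lifecycle reports such as "which builds
    // contain requirement REQ-1?".
    static Set<String> traceFrom(String id) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> todo = new ArrayDeque<>(List.of(id));
        while (!todo.isEmpty()) {
            for (String next : links.getOrDefault(todo.pop(), List.of()))
                if (seen.add(next)) todo.push(next);
        }
        return seen;
    }

    public static void main(String[] args) {
        link("REQ-1", "TASK-7");
        link("TASK-7", "CHANGESET-42");
        link("CHANGESET-42", "BUILD-2008-03");
        System.out.println("REQ-1 traces to: " + traceFrom("REQ-1"));
    }
}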
6 Conclusions

Planning and deployment of lifecycle management is challenging. It is a common belief that lifecycle management solutions are restricted to supporting the creation of very large and complicated systems. The important aspect of lifecycle management is its conception as a generic frame of reference for the systems and methods that are needed to facilitate product lifecycle activities. This concept provides a rich framework for the coordination of all kinds of engineering and development work, from agile types of development to traditional waterfall types of development. Since each organization and project tends to have its own characteristics and needs, the challenge resides in how to generate efficient, company-specific implementations of lifecycle management for complicated, real-life situations.

This paper reports experiences from a case study performed in a company operating in the automation industry. The case study is part of an improvement effort in the case company, where the aim is to systematize Application Lifecycle Management (ALM) in distributed development projects that develop SW for SW-intensive systems. Even though the new ALM solution is considerably better than the previous solution and the direction is right, there remain some challenges that need to be handled in upcoming ALM improvement efforts. As a next step, the projects will study the possibilities to tailor standard process templates in the near future. According to the results, the following benefits of ALM were gained in this case:

- Central storage place for all project-related data provided a foundation for project cooperation and communication.
- Better communication within the team members of the project (data sharing, visibility even in a distributed environment).
- Better lifecycle traceability of development artefacts to facilitate e.g. bug fixing and testing.
- Easier lifecycle reporting to increase project transparency (what is the real status of the project).
- Clear processes with fluent tool support that streamline time-consuming lifecycle activities.
- Better integration of the applications that are used for producing, managing and reporting project-related data.
References

[1] Sääksvuori A, Immonen A, (2004) Product Lifecycle Management, Springer-Verlag, Berlin
[2] Doyle C, (2007) The importance of ALM for aerospace and defence (A&D), Embedded System Engineering (ESE magazine), Vol. 15, Issue 5, 28-29
[3] Schwaber C, (2006) The Changing Face of Application Life-Cycle Management, Forrester Research Inc., White paper, August 18
[4] Doyle C, Lloyd R, (2007) Application lifecycle management in embedded systems engineering, Embedded System Engineering (ESE magazine), Vol. 15, Issue 2, 24-25
[5] Gotel O, Finkelstein A, (1994) An Analysis of the Requirements Traceability Problem, Proceedings of the First International Conference on Requirements Engineering, 94-101
[6] Ramesh B, Dhar V, (1992) Supporting systems development by capturing deliberations during requirements engineering, IEEE Transactions on Software Engineering, Vol. 18, No. 6, 498-510
[7] Ramesh B, Jarke M, (2001) Toward Reference Models for Requirements Traceability, IEEE Transactions on Software Engineering, Vol. 27, No. 1, 58-93
[8] Weatherall B, (2007) Application Lifecycle Management - A Look Back, CM Journal, CM Crossroads - The configuration management community, January 2007, http://www.cmcrossroads.com/articles/cm-journal/application-lifecycle-management%11-a-look-back.html (available 18.10.2007)
[9] Kolawa A, (2006) The Future of ALM and CM, CM Journal, CM Crossroads - The configuration management community, January 2006, http://www.cmcrossroads.com/articles/cm-journal/the-future-of-alm-and-cm.html (available 18.10.2007)
[10] Estublier J, Leblang D, van der Hoek A, Conradi R, Clemm G, Tichy W, Wiborg-Weber D, (2005) Impact of software engineering research on the practice of software configuration management, ACM Transactions on Software Engineering and Methodology (TOSEM), ACM Press, New York, USA, Vol. 14, Issue 4, 383-430
[11] Heinonen S, Kääriäinen J, Takalo J, (2007) Challenges in Collaboration: Tool Chain Enables Transparency Beyond Partner Borders, In proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications 2007, Funchal, Portugal
[12] Yang Z, Jiang M, (2007) Using Eclipse as a Tool-Integration Platform for Software Development, IEEE Software, Vol. 24, Issue 2, 87-89
[13] Shaw K, (2007) Application lifecycle management for the enterprise, Serena Software, White Paper, April 2007, http://www.serena.com/Docs/Repository/company/Serena_ALM_2.0_For_t.pdf (available 18.10.2007)
[14] Eclipse web-pages, www.eclipse.org (available 18.10.2007)
[15] Buckley F, (1996) Implementing configuration management: hardware, software, and firmware, IEEE Computer Society Press, Los Alamitos
[16] Leon A, (2000) A Guide to software configuration management, Artech House, Boston
[17] Crnkovic I, Dahlqvist AP, Svensson D, (2001) Complex systems development requirements - PDM and SCM integration, Asia-Pacific Conference on Quality Software
[18] Välimäki A, Kääriäinen J, (2007) Product Managers' Requirement Management Practices As Patterns in Distributed Development, 8th International PROFES (Product Focused Software Development and Process Improvement) conference, Latvia
[19] Guckenheimer S, (2006) Software Engineering with Microsoft Visual Studio Team System, Addison-Wesley
Part II
Cross-organizational Collaboration and Cross-sectoral Processes
A Service-oriented Reference Architecture for Organizing Cross-Company Collaboration

Christoph Schroth

University of St. Gallen, MCM Institute and SAP Research CEC, Blumenbergplatz 9, 9000 St. Gallen, Switzerland
[email protected]
Abstract. Today, cross-company collaboration is about to gain significant momentum, but still shows weaknesses with respect to productivity, flexibility and quality: Point-to-point, Electronic Data Interchange (EDI)-based approaches or Managed File Transfer installations provide only limited functional richness and low reach, as they often represent proprietary island solutions. A new generation of providers of software (Multienterprise/Business-to-Business (B2B) Gateway solutions) and services (Integration-as-a-Service and B2B Project Outsourcing) for multienterprise interaction is emerging today and allows for richer interaction while heavily reducing the costs of electronic transactions. However, these products and services still exhibit weaknesses with respect to both managerial and technological aspects. In this work, we present and thoroughly discuss a service-oriented reference architecture for business media that overcome the drawbacks of today's B2B software products and services. Based on the IEEE Recommended Practice for Architectural Description (IEEE 1471-2000) in combination with Schmid's Media Reference Model, this reference architecture provides four main views: community (structural organization), process (process-oriented organization), services and infrastructure.

Keywords: Support for cross-enterprise co-operative work, Service-oriented architectures for interoperability, Interoperable enterprise architecture, Architectures and platforms for interoperability, Interoperable inter-enterprise workflow systems
1 Motivation

Cross-organizational electronic collaboration is about to gain significant momentum, but still shows weaknesses with respect to productivity, flexibility and quality [1, 2, 3, 4]. Existing point-to-point, EDI-based approaches or Managed File Transfer (MFT) installations provide only limited functional richness and low reach, as they often represent proprietary island solutions. Such message-oriented
B2B communities are frequently industry-driven or led by a large, dominant participant, which has led to a multitude of different standards of different scope and granularity over time [5]. These substantially different standards prevent a common understanding of exchanged data among a wide mass of organizations, while the high cost and complexity of existing solutions impede fast adoption by potential users. Today, a new generation of providers of software (Multienterprise/B2B Gateway solutions) and services (Integration-as-a-Service and B2B Project Outsourcing) for multienterprise interaction is emerging and allows for richer interaction while heavily reducing the costs of electronic transactions. Integration service providers such as Advanced Data Exchange, GxS, Seeburger, Sterling Commerce and TietoEnator already offer hosted multitenant environments for reliable and secure communication, trading-partner management, technical integration services and application services [6, 7, 8, 9]. However, these products and services still exhibit weaknesses with respect to both managerial and technological aspects. Limited functional richness, a focus on automation rather than business innovation, as well as an inherent enterprise- rather than multienterprise perspective represent only some of the remaining challenges towards business media for efficient and effective cross-organizational interaction. In this work, we present and thoroughly discuss a service-oriented reference architecture for business media that overcome the drawbacks of today's B2B software products and services. Based on the IEEE Recommended Practice for Architectural Description (IEEE 1471-2000) in combination with Schmid's Media Reference Model, this reference architecture provides four main views: community (structural organization), process (process-oriented organization), services and infrastructure. The remainder of this work is organized as follows: In section two, we elaborate on the state-of-the-art in multienterprise electronic interaction and also discuss the major shortcomings of existing solutions. In section three, after presenting an adequate formal foundation, we propose our service-oriented reference architecture. Section four concludes the work with a brief summary and an outlook on future work.
2 State-of-the-Art in Multienterprise Electronic Interaction and Shortcomings

A first differentiation (see Fig. 1) of existing business media for the support of cross-organizational electronic interaction can be made between e-commerce solutions (automating a company's information transactions with its buyers and sellers) and solutions for IT extension (extensions of corporate IT infrastructures to third-party service providers, e.g. Business Process Outsourcing (BPO), Software-as-a-Service (SaaS)). Besides this differentiation by intended purpose, a second differentiation can be made along functional capabilities. Batch-oriented point-to-point integration has been the predominant integration scenario for many years [7]. Stand-alone EDI translators and Managed File Transfer solutions (MFTs) represent major examples of solutions from this era: "A stand-alone EDI translator is a software application that an enterprise typically licenses from an EDI software provider or subscribes
from a Value-Added Network (VAN). The translator interprets incoming EDI information [...] and converts it into the format or formats used by the enterprise's in-house systems and applications" [8, p. 8-9]. Such approaches stem from the time when EDI was the predominant cross-organizational transaction message format (as opposed to the now proliferating XML standard). "MFT software enables companies to automate, compress, restart, secure, log, analyze and audit the transfer of data from one endpoint to another" [8, p. 7]. Such message-oriented B2B software products have a limited functional scope and are frequently tailored to proprietary needs; they are not adequate to support heterogeneous processes or applications. To allow for richer functionality and thus high "reach" to a multitude of trading partners with different requirements, generic multienterprise/B2B gateway software is currently gaining momentum [8]. "Multienterprise/B2B gateway software (see Fig. 1) is a form of integration middleware that is used to consolidate and centralize a company's multienterprise data, application and process integration and interoperability requirements with external business partners." [8, p. 5]

[Figure: companies A, B and C interact via a business medium; offerings are classified into software (EDI Translators, Managed File Transfer, Multienterprise/B2B Gateway SW) and services (IaaS, B2B Project Outsourcing), spanning supply chain/e-commerce and IT extension (SaaS, BPO).]
Fig. 1. Classification of cross-organizational electronic interaction
A remarkable market has emerged for “B2B integration capabilities that are hosted in a multitenant environment and delivered as a service rather than as software. Traditionally known as EDI VANs, we now call these hosted offerings IaaS [Integration-as-a-Service], in the spirit of SaaS, and we call vendors that offer such services (usually in one role relative to other roles) integration service providers. By definition, to be considered an integration service provider, a vendor must offer hosted multienterprise integration and interoperability services.” [8, p.
9]. IaaS providers usually offer services in the fields of communication, trading partner management, technical integration (towards internal systems) and different application services. As opposed to the mere outsourcing of technical infrastructure, B2B Process Outsourcing (B2BPO) also comprises the outsourcing of a complete B2B project (including the workforce and their structural as well as process-oriented organization).

Table 1. B2B Software and Services Market Overview (extracted from [6, 7, 8, 9])
[Matrix table, flattened in extraction: 19 vendors (Accenture, Advanced Data Exchange, Axway, Click Commerce, Covisint, Crossgate, DICentral, E2Open, EasyLink, eZCom Software, GxS, Hubspan, Inovis, nuBridges, Seeburger, Sterling Commerce, SupplyOn, TietoEnator, Tumbleweed) marked against five offering categories: Multienterprise/B2B Gateway SW, MFT Suites, EDI Translators, IaaS and B2BPO; the individual cell markings are not recoverable.]
Table 1 provides an overview of 19 selected vendors (mainly drawing on an analysis of more than 100 vendors conducted by Gartner [8]) which are active in at least one of the five fields of activity discussed above. A clear trend can be identified towards IaaS, i.e. the Internet-based, hosted offering of B2B integration capabilities (ideally in a multitenant environment). More and more vendors (worldwide already more than 100) are entering this promising market, which is expected to grow significantly. In this work, we focus on managerial as well as technological aspects of IaaS. Many of the existing solutions in this area show weaknesses mainly with regard to the following three aspects:
Cross-enterprise view: Many solutions focus on shared business functionality rather than on bridging existing gaps between enterprises and allowing them to interact seamlessly.

Innovation potential: As elaborated in [7], existing approaches support the automation of cross-corporate interaction but do not enable users to tap the full potential of business innovation. First trends towards innovation enablement can be "observed by looking at communities of data exchange where third parties are beginning to offer a range of business services as value added to the [mere] document exchange services" [7, p. 8].

"Richness" and "Reach": Many "B2B communities" are still being set up as stand-alone island solutions for specific purposes. However, the frustration of organizations in establishing and supporting multiple, single-purpose portals and partner communities grows. "These communities and their underlying networks are not configured to support heterogeneous processes or applications." [7, p. 6]. According to Gartner research, firms desire integration services supporting "multiple protocols, multiple data formats, multiple onboarding approaches, higher-order integration features (for example, in-line translation, data validation and business process management), BAM (for example, process visibility and compliance management) and hosted applications (for example, catalogues and global data synchronization)" [6, p. 4].

Summing up, besides a lack of cross-enterprise perspective and innovation potential, existing solutions only provide limited richness (functional scope) and reach (amount of connected organizations).
3 A Service-Oriented Reference Architecture

3.1 The IEEE Recommended Practice for Architectural Description

To thoroughly elaborate on our reference architecture, we leverage the "IEEE Recommended Practice for Architectural Description" (IEEE 1471-2000 [10]), which has also been used by Greunz [11] to describe electronic business media. The standard considers architecture as the "fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution" [10, p. 3]. As depicted in Fig. 2, systems are subject to an architecture. To describe these adequately and systematically, architectural descriptions are used. A central element is the viewpoint: In order to reduce complexity and to maximize the benefit for certain groups of stakeholders, an architectural description selects one or more viewpoints, from which the system is then analyzed. A viewpoint is considered as a "pattern or template from which to develop individual views by establishing the purposes and audience for a view and the techniques for its creation and analysis" [10, p. 4]. The goal is to codify a set of concepts and interrelations in order to be able to adequately present and analyze certain concerns. "A concern expresses a specific interest in some topic pertaining to a particular system under consideration [...]. Concerns arise in relation to human interests. These humans are called system stakeholders" [11, p. 16], who are
individuals, teams or whole organizations who have a specific interest in a system. A viewpoint defines the conventions which underlie the creation, symbolization and analysis of a view "by determining the language (and notations) to be used to describe the view, and any associated modelling methods or analysis techniques to be applied to these representations of the view" [11, p. 17]. Corresponding to the selection of viewpoints, an architectural description is divided into one or more views. Each view is thereby a representation of the overall system, but from the perspective of a set of specific, interrelated concerns. The relation between a viewpoint and an actual view can thus be compared with the relation between a class and one of its instances in the programming context. Due to space constraints, the remaining IEEE 1471-2000 artefacts shall not be discussed in detail. For the description of the reference architecture presented in this paper, the differentiation between viewpoints, concerns and actual views is of central importance.

[Figure: meta-model relating Mission, Environment, System, Architecture, Architectural Description, Stakeholder, Rationale, Library Viewpoint, Concern, Viewpoint, View and Model.]
Fig. 2. IEEE Recommended Practice for Architectural Description (IEEE 1471-2000, [10])
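The class/instance analogy mentioned above can be made concrete in a few lines of code. The following Java sketch is purely illustrative (the type and field names are ours, not the standard's): a Viewpoint fixes the conventions and notation, and each View is one instantiation of those conventions for a concrete system.

class Viewpoint {
    final String name;
    final String notation; // conventions for creating and analyzing views

    Viewpoint(String name, String notation) {
        this.name = name;
        this.notation = notation;
    }

    // Instantiation: a view applies this viewpoint to one concrete system.
    View apply(String systemName) {
        return new View(this, systemName);
    }
}

class View {
    final Viewpoint viewpoint;
    final String system;

    View(Viewpoint viewpoint, String system) {
        this.viewpoint = viewpoint;
        this.system = system;
    }
}

public class ViewpointDemo {
    public static void main(String[] args) {
        Viewpoint community = new Viewpoint("Community", "role/agent diagrams");
        View v = community.apply("Business medium for tax declarations");
        System.out.println(v.viewpoint.name + " view of " + v.system);
    }
}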
3.2 Schmid's Media Reference Model

After discussing the essentials of describing architectures, Schmid's Media Reference Model shall be used to specify and adapt the so far generic IEEE 1471-2000 meta-model to electronic business media, in particular in cross-organizational contexts. According to Schmid [12, 13, 14], media can basically be defined as follows: They are enablers of interaction, i.e. they allow for exchange, particularly the communicative exchange between agents. Such interaction enablers can be structured into three main components (Fig. 3): First, a physical component (C-Component) allows for the actual interaction of physical agents. This component
can also be referred to as carrier medium or channel system. Second, a logical component (L-Component) comprises a common "language", i.e. symbols used for the communication between agents and their semantics. Without such a common understanding, the exchange of data is possible (with the help of the C-Component), but not the exchange of knowledge. Third, an organizational component (O-Component) defines a structural organization of agents, their roles, rules which impact the agents' behaviour, as well as the process-oriented organization of agents' interactions.

[Figure: four layers as viewpoints - Community Viewpoint (structural organization), Process Viewpoint (processes/interactions), Service Viewpoint (services/interfaces) and Infrastructure Viewpoint (channel/service platform) - mapped to the O-, L- and C-Components and crossed with four media-use phases: Inform, Signal, Negotiate and Execute.]
Fig. 3. Schmid's Media Reference Model [12]
Together, these three basic components have been identified as constituting various kinds of media. Among others, they are appropriate for describing electronic media such as those deployed to support cross-organizational collaboration. Based on these components, which already represent a first, scientific approach to modelling, understanding and reorganizing media, a layer/phase reference model has been introduced as well. The Media Reference Model (MRM) [12, see Fig. 3] comprises four different layers (which all represent dedicated views on media) and structures the use of media into four sequential phases. Similar to the emerging field of software engineering in the software context, the MRM aims to provide a comprehensive, coherent and systematic framework for the description and analysis of various media. The Community View (first layer) thereby accounts for the set of interacting agents, the organization of the given agents' population, i.e. the specific roles of involved stakeholders, the situations in which they act, as well as the objects with which they deal. Summing up, it models the structure of the social community sphere in a situation-dependent, but static fashion. The Process View (Implementation Aspects) deals with the modelling of the process-oriented organization of agents and can also be referred to as "Interaction Programming" [12]. It is also called implementation view as it connects the needs of the
community with the means provided by the carrier medium and thus implements the "community-plot" on the basis of the carrier medium. The Service View (Transaction View) models the services provided by the carrier medium which can be used in the different interaction steps to reach the respective interactions' goals. The Infrastructure View models the production system, which creates the services provided by the service view, i.e. in the case of electronic media the actual underlying information technology. The three major components discussed above can seamlessly be integrated into the MRM: The upper two views (Community Aspects and Implementation Aspects) represent the organizational component (O-Component), which accounts for the structural as well as process-oriented organization. The lower two layers are mapped to the physical component (C-Component), which focuses on the creation and provision of services. Last, the logical component (L-Component) concerns all four layers as it ensures that the interaction of agents is based on a common understanding of exchanged symbols.

3.3 Service-Oriented Reference Architecture

In this section, we describe the service-oriented reference architecture from the four views which have been identified as essential in the context of electronic media: Community, Processes, Services and Infrastructure.

3.3.1 Community View

The concerns [10] addressed by the community view comprise the following:

Who are the agents interacting over the electronic business medium? As a first element of our reference architecture, a registry of the different stakeholders must be available to ensure that organizations can publish their business profiles and are also enabled to find adequate trading partners. Particularly in multitenant environments, groups of agents frequently emerge which are also referred to as B2B communities: They represent a set of organizations that have negotiated certain business conditions as well as access rights to possibly sensitive information. As a part of IaaS solutions, applications are thus required to manage profiles and access rights.

What are their exact roles and the rights/obligations associated with the roles? The different agents using the medium are assigned certain roles in order to reduce the complexity of community management. Different agents may assume the same role, depending on their properties and capabilities. Associated with these roles are clearly formalized rights and obligations which impose constraints on the agents' activities.

Which services are provided/consumed by these roles? Besides the agents' identities/profiles and the specified roles which they may assume, the services they provide need to be determined to ensure an efficient organization. The mere process-oriented view which is often leveraged when analyzing an organization is not adequate to model services in the context of our reference architecture. In fact, collaborative tasks which have been identified shall be structured and decomposed into subtasks (i.e. services) according to the criteria proposed by Parnas [15] in the software programming context: First, rather than starting with a process or workflow and determining subtasks/services as sequential parts of this process, organizational engineers are supposed to
encapsulate those design decisions which are difficult or likely to be subject to change in the future: "We have tried to demonstrate by examples that it is almost always incorrect to begin the decomposition of a system into modules on the basis of a flowchart. We propose instead that one begins with a list of difficult design decisions or design decisions which are likely to change. Each module is then designed to hide such a decision from the others. Since, in most cases, design decisions transcend time of execution, modules will not correspond to steps in the processing" [15]. Second, organizational services need to hide as much proprietary information as possible. By shielding information and complexity from the outside world, services are quickly exchangeable (since service interaction is conducted only via simple interfaces) and their specialization is facilitated. As a third important criterion which we transfer from software engineering to the organizational context, hierarchies need to be adopted where appropriate: Tasks are first of all broken down into subtasks/services which reside on a first level. Similar to composite Web Services, such services can then often be composed out of other, more basic and focused services. "We have a hierarchical structure if a certain relation may be defined between the modules or programs and that relation is a partial ordering. The relation that we are concerned with is "uses" or "depends on" [...]. The partial ordering gives us two additional benefits. First, parts of the system are benefited (simplified) because they use the service of lower levels. Second, we are able to cut off the upper levels and still have a usable and useful product" [15].

Which information objects are exchanged between the agents? Today, pieces of information are frequently specified from a rather technical point of view. Countless standards already exist for XML-based electronic documents. In our reference architecture, we propose a holistic view which comprises both semantic and syntactical considerations and considers actual implementation (e.g., on the basis of XML) a separate issue (which is to be coped with in the service view). In [16], we thoroughly describe the meta-model as well as an implementation of a novel approach to modelling electronic documents based on generic, semantic building blocks (thereby complying with the principle of modularization).

3.3.2 Process View

The process view mainly deals with two major concerns: How do individual agents behave, and how is the overall service choreography (i.e. the interaction from a comprehensive, bird's eye view) defined? With respect to this process-oriented organization, a first differentiation must be made between imperative and declarative organizational styles: The collaboration of agents (across corporate boundaries) can be organized by prescribing the logic and the control (adhering to an imperative style) or by only pre-determining the logic (i.e. "what" has to be accomplished). The choice for an imperative or rather declarative process-oriented organization depends on certain external requirements. In case stable, standardized processes determine the interaction of agents, an imperative organization is appropriate.
In cases where the actual interaction of stakeholders depends on various situational, unforeseeable factors, context-sensitive parameters or individual preferences and restrictions which vary over time, a process-oriented organization which prescribes both logic and control elements is not adequate.
In such environments, more declarative "interaction programming" is needed, which is based merely on specifications of "what" has to be achieved during the interaction rather than "how" (e.g. in which exact order). Independent of the decision for an imperative or a declarative process-oriented organization, our reference architecture foresees the utilization of atomic and generic process building blocks (also referred to as interaction patterns) which can be assembled into complex cross-organizational processes. In this way (again complying with the principle of modularization), both operational flexibility and interoperability can be improved. On the basis of commonly agreed building blocks (such as those used in the case of the UN/CEFACT Modelling Methodology [16]), different parties can seamlessly model and negotiate their collaborative business processes. In the context of the HERA research project [17], an exemplary, declarative process-oriented organization is currently being set up with the help of fine-granular interaction patterns defined in the "Event Bus Schweiz" standard [18].

3.3.3 Services View

The two major concerns addressed by the services view are: Which are the services provided by the electronic medium, and which services are required to connect to it? Second, which are the interfaces of these services and how are they described? In the course of several studies, two basically different service classes have been identified. Operational services enable interaction (they provide basic communication functionality), while coordination services facilitate interaction (they read and interpret exchanged information and act accordingly). We propose the following (non-exhaustive) set of operational services [18] as part of our reference architecture:

- services supporting diverse information dissemination patterns (e.g., publish/subscribe, unicast, etc.),
- directory services (allowing for publishing and retrieving business partners and their respective profiles),
- event catalogue services (especially in the case of declarative process-oriented organizations, the documentation of all events (messages) which may be disseminated via the medium is of central importance),
- transformation services (accounting for mediation of electronic artefacts which adhere to different format standards),
- security services (encryption and decryption),
- operating services (for media administration purposes),
- error services (automatic failure detection and removal),
- routing services, and
- validation services (e.g., for evaluation of correctness and integrity of exchanged information).

Besides these enabling services, a set of coordination services is required to partially automate the process-oriented organization and to provide certain application functionality. In the course of the HERA project [17], which aims at improving the collaborative scenario of creating tax declarations, coordination services such as automated document completeness control, due date monitoring (each exchanged document is evaluated with respect to due dates for a response message; in case time limits are not met, the service issues reminders and may also take other, pre-determined action) and process visibility (visualization of status and other key parameters of the cross-organizational interaction in order to improve transparency and thus also manageability) have been employed.
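To illustrate one of the coordination services named above, the sketch below implements due date monitoring in Java. The data model, identifiers and reminder mechanism are our own assumptions for illustration; the actual HERA implementation may differ.

import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;

class DueDateMonitor {
    // exchanged document id -> due date of the expected response message
    private final Map<String, LocalDate> expected = new HashMap<>();

    void expectResponse(String documentId, LocalDate dueDate) {
        expected.put(documentId, dueDate);
    }

    void responseReceived(String documentId) {
        expected.remove(documentId);
    }

    // Called periodically by the medium; issues reminders for overdue documents.
    void check(LocalDate today) {
        expected.forEach((doc, due) -> {
            if (today.isAfter(due))
                System.out.println("Reminder: response to " + doc
                        + " overdue since " + due);
        });
    }
}

public class CoordinationServiceDemo {
    public static void main(String[] args) {
        DueDateMonitor monitor = new DueDateMonitor();
        monitor.expectResponse("tax-doc-17", LocalDate.of(2008, 3, 1));
        monitor.check(LocalDate.of(2008, 3, 15)); // -> reminder issued
    }
}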
3.3.4 Infrastructure View

The two main concerns addressed by the infrastructure view are: Which technology shall be used to implement the medium as defined before? Which design principles are to be employed during implementation? Again, to account for the principle of modularization, our reference architecture foresees a decentralized organization of electronic media which are primarily devoted to fulfilling the requirements of their respective ecosystems. In the HERA case, for example, a dedicated medium is currently being established for the purpose of collaborative tax declaration scenarios in Switzerland. The requirements with regard to all four views of our architecture vary highly between different ecosystems and (today) prevent a globally standardized common electronic medium for seamless interaction. We rather rely on the principles of decentralization and modularization. To allow for connectivity across the different media, though, we propose the standardization of a minimal set of common services. The "Event Bus Schweiz" initiative [14, 18] represents a first infrastructural example of a nationwide network of (sub-) event buses which only need to adhere to common routing protocols and message formats to allow for interaction across sub-buses. As a further part of the infrastructure view, we propose the separation of actual media technology and access channel technology. For accessing instances of our reference architecture, dedicated software adapters (for connecting legacy software to the medium) or Web-based interfaces exist.

[Figure: the organizational component (registry, roles, interaction rules, information objects) and the logical component (formats) govern a physical component realized as a service bus offering public services and bus services such as routing, subscriptions, directory, error handling and event catalogue; agents connect via software adapters or Web clients.]
Fig. 4. Service-Oriented Reference Architecture
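As a rough illustration of the publish/subscribe style of interaction that such an event-bus infrastructure enables, consider the following Java sketch. The bus API and the event names are invented for this example and are not the actual interfaces of the "Event Bus Schweiz" standard; the point is only that agents react declaratively to published events rather than being driven by a central control flow.

import java.util.*;
import java.util.function.Consumer;

class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    // Declarative style: a subscription states "what" is reacted to.
    void subscribe(String eventType, Consumer<String> handler) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(handler);
    }

    void publish(String eventType, String payload) {
        for (Consumer<String> h : subscribers.getOrDefault(eventType, List.of()))
            h.accept(payload);
    }
}

public class DeclarativeProcessDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // Each subscription is one atomic interaction pattern; the overall
        // cross-organizational process emerges from their composition.
        bus.subscribe("TaxDeclarationSubmitted",
                doc -> bus.publish("CompletenessChecked", doc + " [checked]"));
        bus.subscribe("CompletenessChecked",
                doc -> System.out.println("Authority notified: " + doc));

        bus.publish("TaxDeclarationSubmitted", "declaration-4711");
    }
}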
4 Conclusion

In this work, we have analyzed the weaknesses of existing approaches to supporting electronic interaction across corporate boundaries. On the basis of the IEEE Recommended Practice for Architectural Description as well as Schmid's Media Reference Model, we then proposed a comprehensive reference architecture for business media which surpass today's solutions. Fig. 4 visually summarizes the main artefacts of this architecture. As part of the organizational component, a registry, role models, business rules and specified information objects are
leveraged to determine the structural and the process-oriented organization. The Logical Component ensures that all artefacts are subject to standards and thus ensures seamless interaction on the basis of knowledge exchange rather than mere data exchange. As a third component, the service bus actually enables and facilitates interaction on the basis of operational as well as coordination services. Finally, the actual agents encapsulate the services (and their related specifics) they offer with the help of standardized software adapters (alternatively, the medium can be accessed with the help of Web clients).
References

[1] McAfee, A. (2004). Will Web Services Really Transform Collaboration. Sloan Management Review, 46 (2), 78-84.
[2] Malone, T. (2001). The Future of E-Business. Sloan Management Review, 43 (1), 104.
[3] Porter, M. (2001). Strategy and the Internet. Harvard Business Review, 79 (3), 63-78.
[4] Schroth, C. (2007). Web 2.0 and SOA: Converging Concepts Enabling Seamless Cross Organizational Collaboration. Proceedings of the IEEE Joint Conference CEC'07 and EEE'07, Tokyo, Japan.
[5] Frenzel, P., Schroth, C., Samsonova, T. (2007). The Enterprise Interoperability Center - An Institutional Framework Facilitating Enterprise Interoperability. Proceedings of the 15th European Conference on Information Systems, St. Gallen, Switzerland.
[6] Lheureux, B. J., Malinverno, P. (2006). Magic Quadrant for Integration Service Providers, 1Q06. USA: Gartner Research Paper.
[7] White, A., Wilson, D., Lheureux, B. J. (2007). The Emergence of the Multienterprise Business Process Platform. USA: Gartner Research Paper.
[8] Lheureux, B. J., Biscotti, F., Malinverno, P., White, A., Kenney, L. F. (2007). Taxonomy and Definitions for the Multienterprise/B2B Infrastructure Market. USA: Gartner Research Paper.
[9] Lheureux, B. J., Malinverno, P. (2007). Spider Diagram Ratings Highlight Strengths and Challenges of Vendors in the B2B Infrastructure Market. USA: Gartner Research Paper.
[10] IEEE Computer Society (2000). IEEE Recommended Practice for Architectural Description; IEEE Std. 1471-2000.
[11] Greunz, M. (2003). An Architecture Framework for Service-Oriented Business Media. Dissertation, University of St. Gallen. Bamberg, Germany: Difo-Druck.
[12] Schmid, B. F., Lechner, U., Klose, M., Schubert, P. (1999). Ein Referenzmodell für Gemeinschaften und Medien. In: M. Englien, J. Homann (eds.), Virtuelle Organisation und neue Medien. Lohmar: Eul Verlag - Workshop GeNeMe 99, Gemeinschaften in neuen Medien, Dresden. ISBN 3-89012-710-X, 125-150.
[13] Schmid, B. F. Elektronische Märkte - Merkmale, Organisation und Potentiale, available online at: http://www.netacademy.org, accessed in 2007.
[14] Schmid, B. F., Schroth, C. (2008). Organizing as Programming: A Reference Model for Cross-Organizational Collaboration. Proceedings of the 9th IBIMA Conference on Information Management in Modern Organizations, Marrakech, Morocco.
[15] Parnas, D. L. (1972). On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15 (12), 1053-1058.
[16] Schroth, C., Pemptroad, G., Janner, T. (2007). CCTS-based Business Information Modelling for Increasing Cross-Organizational Interoperability. Proceedings of the 3rd International Conference on Interoperability for Enterprise Software and Applications, Madeira, Portugal.
[17] HERA project, available online at: http://www.hera-project.ch, accessed in 2007.
[18] Müller, W. (2007). Event Bus Schweiz. Konzept und Architektur, Version 1.5, Eidgenössisches Finanzdepartement (EFD), Informatikstrategieorgan Bund (ISB).
Patterns for Distributed Scrum – A Case Study

A. Välimäki1, J. Kääriäinen2

1 Metso Automation Inc, Tampere, Finland, [email protected]
2 VTT, Technical Research Centre of Finland, Oulu, Finland, [email protected]
Abstract. System products need to be developed faster in a global development environment. More efficient project management becomes more important in order to meet strict time-to-market and quality constraints. The goal of this research is to study and find best practices for distributed Scrum, which is an agile project management method. The paper describes the process of mining distributed Scrum organizational patterns. The experiences and improvement ideas of distributed Scrum have been collected from a global company operating in the automation industry. The results present issues that were found important when managing agile projects in a distributed environment. The results are further generalized in the form of organizational patterns, which makes it easier for other companies to reflect on and to apply the results to their own cases.

Keywords: Industrial case studies and demonstrators of interoperability, Interoperability best practice and success stories, Tools for interoperability, The human factor in interoperability
1 Introduction

Globalization is the norm in current business environments. This shift from traditional one-site development to a networked development environment means that product development is becoming a global, complex undertaking with several stakeholders and various activities. People operating in global development environments may have difficulties communicating with and understanding each other. Furthermore, the overall coordination of activities is more complicated. Therefore, companies have to search for more effective procedures and tools to coordinate their increasingly complicated development activities. Distributed development has recently been actively researched, for instance in [1] and [2], and lately in a special issue of IEEE Software [3]. From a project management point of view, distributed development has been studied e.g.
in [4]. They present in their study the drivers, constraints and enablers that are leading organizations to invest in project management systems in today's increasingly distributed world. As one interesting result, they divide the top issues in distributed development as follows: strategic issues, project and process management issues, communication issues, cultural issues, technical issues, and security issues. One response to an ever more complicated environment is the rise of so-called agile methods [5]. These methods or approaches value the following things [6]:

- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
At first glance, these issues seem to be suitable only for small teams operating in a local environment. For some time agile methods were applied just to local development, but nowadays their potential for supporting more effective global development environments has been recognized. The usage of agile methods in distributed development environments has been studied and reported e.g. in [7, 8, 9]. Ramesh et al. [7] report case studies conducted in three companies. The companies combined agile methods with distributed development environments. The article first lists the challenges in agile distributed development that relate to the aspects of communication, control and trust. Then the article reports successful practices observed in three organizations that address these challenges. The conclusion is that the success of distributed agile development relies on the ability to blend the characteristics of both environments, agile and distributed. Sutherland et al. [8] report a case study of a distributed Scrum project. They analyze and recommend best practices for distributed agile teams. They report integrated Scrum as a solution for distributed development. Farmer [9] reports experiences from developing SW in a large, distributed team. The team worked according to slightly modified Extreme Programming (XP) practices. Farmer [9] states that the major factors that contributed to their success were:

- The team consisted of some of the top people in the company
- The team was permitted to find its own way
- The team got support from management when it was needed
Our paper reports experiences from a case study performed in a company operating in the automation industry. The goal of our research is to study the challenges and find practices to support project management in distributed Scrum projects (Fig. 1) which use an Application Lifecycle Management (ALM) solution (Fig. 2). The Scrum approach has been developed for managing the systems development process. It is an empirical approach applying the ideas of industrial process control theory to systems development, resulting in an approach that reintroduces the ideas of flexibility, adaptability and productivity. In the development phase the system is developed in Sprints. Sprints are iterative cycles where the functionality is developed or enhanced to produce new increments. Each
Sprint includes the traditional phases of software development: requirements, analysis, design, evolution and delivery phases [5]. Doyle [10] states that ALM is a set of tools, processes and practices that enable a development organization to implement software lifecycle approaches.

[Figure: Scrum flow with a pregame phase (planning; high-level design/architecture covering standards, conventions, technology, resources; a product backlog list with requirements, priorities and effort estimates), a development phase of iterative Sprints (sprint backlog list, goals of the next Sprint; analysis, design, evolution, testing, delivery; regular backlog updates; each Sprint yields a new product increment) and a postgame phase (no more requirements; integration, system testing, documentation, final release).]
Fig. 1. Scrum overview [5].
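As a toy illustration of the flow in Fig. 1, the following Java sketch pulls product backlog items into successive sprints, each of which delivers an increment. All item names and the capacity value are invented for the example.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

public class ScrumFlowDemo {
    public static void main(String[] args) {
        // Product backlog filled during the pregame phase.
        Deque<String> productBacklog = new ArrayDeque<>(
                List.of("login", "reporting", "audit trail", "export"));
        int sprintCapacity = 2, sprint = 1;

        // Development phase: iterate until no more requirements remain.
        while (!productBacklog.isEmpty()) {
            System.out.println("Sprint " + sprint + " backlog:");
            for (int i = 0; i < sprintCapacity && !productBacklog.isEmpty(); i++)
                System.out.println("  - " + productBacklog.poll());
            System.out.println("  -> new product increment delivered");
            sprint++;
        }
        System.out.println("Postgame: system testing and final release");
    }
}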
[Figure: the ALM layer - project management, requirements management, configuration management, traceability, reporting, process automation and tool integration - connects lifecycle activities (requirements definition, design, coding and unit testing, integration and verification, release, maintenance) and roles (product manager, project manager, architect, developer, tester).]
Fig. 2. Application Lifecycle Management facilitates project cooperation and communication.
This paper is organised as follows: In the next section, the research approach is described, comprising a description of the industrial context and the research settings. In Section 3, the results are presented. Finally, the results are discussed and conclusions are drawn.
2 Research Approach

This section discusses the industrial context and the research settings used in this research.

2.1 Industrial Context

This case study has been carried out in a global company operating in the automation industry. Previously, projects followed a partly iterative development process. Based on the positive experiences, the case project has gone further and deployed the Scrum method. The company operates in a multi-site environment, and in the future the work will be globalizing more and more. Therefore, the challenges of a global development environment were also under investigation. This study reports first experiences of distributed Scrum in the case company. Distributed Scrum practices are applied with the Application Lifecycle Management (ALM) solution. The paper discusses experiences gained from distributed Scrum projects and reports successful practices as patterns to make them more easily exploitable for other companies.

2.2 Research Settings

The research has been carried out as a case study with the following phases: research planning, questionnaire, interviews and analysis of results. A two-step approach was used for data collection. First, a questionnaire was sent to scrum masters and other team members. The questionnaire was organized according to the following Scrum practices and artifacts:

- Product backlog
- Sprint planning
- Sprint backlog
- Sprint and Daily Scrum
- Sprint review
- Scrum of Scrums
The respondents were asked about their opinions on how Scrum practices and artifacts have affected their daily work in a distributed development environment. They were also asked about their ideas regarding what kinds of challenges a distributed environment would set for operation. After the questionnaire, a few key persons from the scrum teams were selected as representatives who were interviewed using semi-structured interviews.
The results from the questionnaire and interviews were filtered and organized according to Scrum practices and artifacts and analysis viewpoints as follows (analysis framework):

- Practice or artifact:
  - People: Issues that relate to people and organization.
  - Process: Issues that relate to working methods and practices.
  - Tool: Issues that relate to tools.

After this, patterns [11] were used to present suggested solutions to overcome the challenges in the organization. Patterns make it easier for other companies to reflect on and to apply the results to their own cases. The patterns will be presented according to the following format:

- ID: An ID number of a pattern.
- Name: A short name of a pattern.
- Problem: A detailed description of a problem.
- Context: A context where a problem exists.
- Solution: Activities that will solve the problem.
- Consequences: Results and trade-offs when the pattern is applied.
3 Results

This section presents the results obtained from an industrial case study. Issues that present needs or challenges for distributed Scrum are classified according to the analysis framework presented in the previous section. Patterns that describe suggested practices to overcome the challenges of distributed project management were developed based on these results and a literature inventory. In this paper, some patterns are presented as examples.

3.1 Experiences from an Industrial Case Study

This section presents the results of the questionnaire and interviews organized according to the analysis framework. The results reflect issues that the respondents and interviewees found important for distributed project management. For each Scrum practice or artifact, issues are mapped with the people, process or tool viewpoint based on the analysis (see Tables 1 to 6). At the end of each issue there is a link to a related organizational pattern; e.g. (P Org) is an abbreviation for the pattern "Organize needed Scrum roles in each site", (P Kick-off) is "Have a kick-off meeting", (P Rel) is "Make a release plan with some sprints", (P ALM) is "Establish Application Lifecycle Management tool", (P Infra) is "Establish a fast and reliable infra", (P Comm) is "Establish efficient communication methods", (P Know) is "Knowledge transfer", (P Visual) is "Visualize status of project", and (P Daily) is "Have many Daily Scrums".
Table 1. Scrum artifact: Product Backlog.

People:
- The Product Owner role is very important since it acts as an internal customer for a development project.
- In distributed development the replication of Scrum roles through sites is essential to ensure fully functional Scrum teams in every site. (P Org)
- Knowledge transfer is especially important in a distributed environment. More detailed information about the domain and product backlog items is needed to ensure understanding in a global environment. (P Know)
- In a global environment, team members have to rely on more formal communication and IT means. (P Know & P Comm)
- Project personnel need to learn new community working methods in order to make Scrum work. (P Org & P Comm & P Kick-off)

Process:
- The product backlog process should allow collecting and processing ideas. Ideas, if accepted for development, will be used for creating backlog items. (P ALM)

Tool:
- Common secure and integrated repositories for all project data. (P ALM)
- Items need to be classified and organised hierarchically. (P ALM)
- Process support should enable process and form tailoring, e.g. new item attributes, tailored status models, adding hierarchy, etc. (P ALM)
- Infrastructure and efficient as well as reliable network connections are a must when using a central project database in a distributed environment. (P Infra)

Table 2. Scrum practice: Sprint Planning.
People:
- Estimation of a task is challenging, especially for new developers, when team support or support from an architect is needed.
- It is challenging to make a task estimation when there are other responsibilities like maintenance work.
- All participate, all exchange ideas and information and all share responsibility, which is good team work.
- Constant travelling of team members is needed to have a good Sprint Planning meeting in distributed development. (P Comm)

Process:
- A presentation by the product owner is important for understanding the goal of a sprint. (P Kick-off)
- Preparation for the next sprint planning is needed, and all related information is needed to clarify items in distributed development. (P Know)

Tool:
- Better video conference and net meeting tools will decrease the need for travelling. (P Comm)

Table 3. Scrum artifact: Sprint Backlog.
People:
- Different concepts are in use among architects and programmers. (P Know)

Process:
- Task splitting is a good method to clarify the size of a sprint.

Tool:
- The sprint backlog list makes it easy to see what is needed. (P ALM)

Table 4. Scrum practice: Sprint and Daily Scrum.
People:
- The "Daily Scrum" practice was found useful for internal communication and information sharing (e.g. ongoing tasks, problems etc.). (P Daily)
- The Sprint practice was found good since it increases the attitude that SW needs to work (for every sprint).

Process:
- These practices enforce the scrum team to follow the procedure, which has many advantages (e.g. values functional SW, increases communication, increases personal responsibility).

Tool:
- Respondents felt that a common sprint overview report would be good (a common template to show the status). (P Daily)
- If a distributed Daily Scrum is practiced, then adequate IT means are needed to facilitate formal and informal communication (e.g. chat tool, project news, central ALM). (P Comm)

Table 5. Scrum practice: Sprint Review.
People:
- The Product Owner is a very important person in a review.

Process:
- A review provides a good possibility to find weaknesses in the software and plans. It is easier to illustrate how the system works and what kinds of deficiencies there are.
- A sprint retrospective collects improvement ideas for the project (e.g. process).

Tool:
- Efficient communication tools are needed. (P Comm)
Table 6. Scrum practice: Scrum of Scrums.

People: -

Process:
- If the team is large, then it is more efficient to have local daily scrums and then a global scrum of scrums with key persons present (network connection). (P Daily)
- In a distributed environment, some sort of summary report would be good to show and share the project status between teams. This is especially important when people use different accents although the language is the same. (P Daily)

Tool:
- Net meeting is a good tool to use. (P Comm)
- A chat tool could be an alternative to a conference phone. (P Daily)
Fig. 3 illustrates respondents' opinions related to the following Scrum claims, each rated on a scale from 1 (disagree) to 5 (fully agree):

1. Distributed Scrum improves the visibility of the project status in relation to the previous procedure.
2. Distributed Scrum accelerates the handling of changes in relation to the previous procedure.
3. Distributed Scrum improves the management of features through the entire project lifecycle in relation to the previous procedure.
4. Distributed Scrum improves understanding of requirements in relation to the previous procedure.
5. Distributed Scrum improves the team communication in relation to the previous procedure.
6. Distributed Scrum improves the utilisation of the knowledge possessed by the entire team in relation to the previous procedure.
7. Distributed Scrum improves the commitment to the goals of the project in relation to the previous procedure.

Fig. 3. Respondents' opinions about Scrum-related claims (bar chart of ratings for claims 1-7).
3.2 Preliminary Patterns Derived from a Case Study

The answers were analysed, and organisational patterns (hereafter "patterns") were then created based on the gathered information and related research material. Some of the important patterns from the viewpoint of distributed project management and Scrum are described below.

ID: P Kick-off (1) & Name: Have a kick-off meeting (only name)

ID: P Rel (2) & Name: Make a release plan with some sprints
- Problems: One big project plan can be a risk in distributed development.
- Solution: Split your project into many sprints. Iterative and incremental sprints improve the visibility of a project and the motivation of team members. Working software is also a good measure of a sprint.
- Consequences: Sprints make it easier to plan your project and to change it if needed. The visibility of a project is also better when you can see the results at the end of each sprint, in a sprint review meeting. At the end of a sprint it is also possible to have a retrospective meeting, in which the process can be changed if really needed.
ID: P Org (3) & Name: Organize needed Scrum roles at each site
- Problems: Communication between different sites.
- Solution: Replicate the needed Scrum roles at every site (e.g. Scrum Master, a colleague of the Product Owner, architect, IT support, quality assurance etc.). A person with the same kind of role can communicate efficiently between sites and inside his/her own site. The architecture of a product also has an effect on the division of responsibilities and roles, as well as on which phase of the development work is done at which site.
- Consequences: Distributed development needs formal communication and a clear communication organization. A colleague of the Product Owner helps, for example, with the understanding of feature lists and related specifications. A Scrum Master at each site is also a key person. One person can have many roles in a project.
ID: P ALM (4) & Name: Establish an Application Lifecycle Management tool
- Problems: Separate Excel files are difficult to manage, and project data is difficult to find, manage and synchronize between many sites.
- Solution: A common Application Lifecycle Management (ALM) solution for all information in a project, e.g. Product Backlog, Sprint Backlog and Sprint Burndown Chart, but also source code, fault data, requirements, test cases, other documents etc. An ALM solution includes, for example, storage places for artifacts, guidelines for common processes, effective user-rights methods, and role-based views for seeing certain data. It also provides global access regardless of time and place. ALM can consist of database-based tools, which might have too simple a GUI and other limited properties, or it can be a group of dedicated tools that have been integrated with each other.
- Consequences: An ALM solution costs quite a lot of money, and the use of ALM requires training. With ALM, time is saved because information is found more quickly, and common processes make work more efficient. However, a lot of work is needed to ensure that only the needed information is visible to different user groups.
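To illustrate the kind of role-based visibility this pattern calls for, the following minimal sketch (our illustration, not taken from the case company's tools; all names are hypothetical) shows hierarchically organised backlog items filtered by role:

```python
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    title: str
    item_type: str                       # e.g. "feature", "task", "fault"
    visible_to: set                      # roles allowed to see this item
    children: list = field(default_factory=list)

def visible_items(item, role):
    """Yield the items of a hierarchy that a given role is allowed to see."""
    if role in item.visible_to:
        yield item
    for child in item.children:
        yield from visible_items(child, role)

feature = BacklogItem("Order handling", "feature", {"product_owner", "developer"})
feature.children.append(BacklogItem("Implement order form", "task", {"developer"}))

# The Product Owner sees only the feature; the developer would see both items.
print([i.title for i in visible_items(feature, "product_owner")])
```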
ID: P Infra (5) & Name: Establish a fast and reliable infrastructure (only name)

ID: P Comm (6) & Name: Establish efficient communication methods (only name)

ID: P Know (7) & Name: Knowledge transfer
- Problems: Lack of knowledge about the domain and the features to be developed.
- Solution: Domain knowledge can be distributed by visitors to the different sites. Features with short specifications and possible diagrams in the Sprint Backlog also improve knowledge transfer. Detailed specifications should be made at the latest during the preceding sprint.
- Consequences: Distributed development requires more travelling and exchange of information. Specifications are also needed to clarify the contents of each feature.
ID: P Visual (8) & Name: Visualize the status of the project
- Problems: The status of the project is not known.
- Solution: Use sprint burndown charts and trends of bugs, tasks, test cases etc. from the ALM solution. Working software at the end of a sprint is also a good measurement.
- Consequences: Iterative development with good measurements and the requirement of working software at the end of each sprint visualize the status of a project very well.
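As a minimal sketch of the burndown report this pattern refers to (the data layout is our assumption, not the case study's ALM schema), the remaining effort per day can be derived from task updates as follows:

```python
def burndown(task_updates, sprint_days):
    """Return the total remaining effort per day of a sprint.

    task_updates: list of (task_id, day, remaining_hours) tuples.
    """
    remaining = []
    latest = {}                              # task id -> last known remaining hours
    for day in range(1, sprint_days + 1):
        for task_id, d, hours in task_updates:
            if d == day:
                latest[task_id] = hours
        remaining.append(sum(latest.values()))
    return remaining

updates = [("t1", 1, 8), ("t2", 1, 5), ("t1", 2, 4), ("t2", 3, 0), ("t1", 4, 0)]
print(burndown(updates, 5))                  # [13, 9, 4, 0, 0]
```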
ID: P Daily (9) & Name: Have many Daily Scrums
- Problems: One Daily Scrum is not always enough.
- Solution: A Scrum of Scrums is used to manage big groups. One Daily Scrum can be enough for a small group with good communication tools, e.g. conference phones, web cameras, video conferences and chat tools. With non-native speakers, written logs can be one solution, e.g. chat logs or common documents.
- Consequences: Written logs are sometimes easier to understand. Scrum can be scaled with the use of a Scrum of Scrums in distributed development.
5 Discussion

The results obtained from this case study indicate some important issues for distributed Scrum in distributed development. The importance of distributed Scrum
has also been reported in other industrial studies, e.g. in [8]. Fairly simple solutions are sufficient when operating in a local environment with Scrum. However, in this distributed Scrum case study, secure shared information repositories such as an ALM solution and electronic connections (e-meetings, teleconferencing, web cameras, chat, wiki) were seen as essential solutions to support a collaborative mode of work. This has also been indicated in other case studies related to global product development, for instance in [1] and [7] (e.g. intranet data sharing, teleconferencing). The results of this case study show that a successful implementation of distributed Scrum has many benefits for the case projects. The team members are more satisfied with the new distributed Scrum than with the previous solution, even though there are still some challenges that need to be solved in future improvement efforts. The results show that the most successful aspects of distributed Scrum have been improvements in visibility, management of features, communication, and commitment to the goals of the project. The importance of these issues for distributed development has also been discussed in [12]. Communication problems have been resolved by utilizing Scrum of Scrums, Daily Scrums and other Scrum practices as well as other communication means. These issues have also been discussed in [13] and [14]. Other major issues emerged from the need for more effective distributed agile project management:

- Enforcement of a global common process that is tool-supported.
- The same ALM solution for every site.
- Training in new methods and tools is important for the consistent usage of tools and practices.
- The Product Owner should act as a representative of the customer and interact with the development personnel.
- Resolve knowledge transfer problems in a distributed development environment:
  - Local Product Owners and Scrum Masters are needed at each site to facilitate knowledge sharing.
  - Clear Product/Sprint Backlog items that are specified in more detail during the previous sprint.
  - Specifications should be able to contain rich information in order to facilitate understanding (figures, diagrams).
  - The ability to travel when needed.
  - Efficient communication means.
- Resolve visibility problems using Scrum-based reports, e.g. burndown charts, trends of bugs, tasks, test cases, remaining effort etc. Working software at the end of a sprint visualizes the status of a project very well.
6 Conclusions

An efficient project management process is very important for companies. The goals and plans of a project must be communicated, and working software must be developed faster and more efficiently than before, even in a distributed development environment. This paper presents a research process for mining organizational patterns in order to improve an agile project management process, Scrum, in distributed development. The research method has included a framework based on Scrum practices and artifacts, and the results of the questionnaires and interviews have been described in the form of tables. Furthermore, the results have been generalized in the form of organizational distributed Scrum patterns and a list of important issues. The important issues from the viewpoints of people, processes and tools in different phases were studied. The results emphasise that distributed Scrum is possible to implement with the help of an ALM solution and distributed Scrum patterns. A key point is that the ALM solution provides global access, which partly changes distributed development back into centralized development. Of course, not all distributed Scrum and other distributed agile patterns have been found yet, and distributed Scrum does not solve all the problems in distributed development. But at least this is a start for research to find distributed Scrum patterns. Organizational patterns seem to be one good method for describing solutions to a problem. Generalized patterns make it easier for other companies to reflect on the results and to apply them to their own cases. Future research directions will be the analysis of experiences with the current patterns in future development projects, the improvement of the patterns and the creation of new patterns according to the feedback gained from projects.
References
[1] Battin RD, Crocker R, Kreidler J, Subramanian K, (2001) Leveraging resources in global software development, IEEE Software, Vol. 18, Issue 2, 70-77
[2] Herbsleb JD, Grinter RE, (1999) Splitting the organisation and integrating the code: Conway's law revisited, Proceedings of the 1999 International Conference on Software Engineering, 16-22 May 1999, 85-95
[3] Damian D, Moitra D, (2006) Guest Editors' Introduction: Global Software Development: How Far Have We Come?, IEEE Software, Sept.-Oct. 2006, Vol. 23, Issue 5, 17-19
[4] Nidiffer E, Dolan D, (2005) Evolving Distributed Project Management, IEEE Software, September/October 2005
[5] Abrahamsson P, Salo O, Ronkainen J, Warsta J, (2002) Agile software development methods: Review and Analysis. Espoo, Finland: Technical Research Centre of Finland, VTT Publications 478, http://www.inf.vtt.fi/pdf/publications/2002/P478.pdf
[6] www.agilealliance.org, (available 10.07.2007)
[7] Ramesh B, Cao L, Mohan K, Xu P, (2006) Can distributed software development be agile? Communications of the ACM, Vol. 49, No. 10
[8] Sutherland J, Viktorov A, Blount J, Puntikov J, (2007) Distributed Scrum: Agile Project Management with Outsourced Development Teams, Proceedings of the 40th Annual Hawaii International Conference on System Sciences (HICSS)
[9] Farmer M, (2004) DecisionSpace Infrastructure: Agile Development in a Large, Distributed Team, Proceedings of the Agile Development Conference (ADC'04)
[10] Doyle C, (2007) The importance of ALM for aerospace and defence (A&D), Embedded System Engineering (ESE magazine), June 2007, Vol. 15, Issue 5, 28-29
[11] Coplien JO, Harrison NB, (2005) Organizational Patterns of Agile Software Development, Pearson Prentice Hall
[12] Leffingwell D, (2007) Scaling Software Agility, Addison-Wesley
[13] Schwaber K, (2004) Agile Project Management with Scrum, Microsoft Press
[14] Schwaber K, (2007) The Enterprise and Scrum, Microsoft Press
Understanding the Collaborative Workspaces

Gilles Gautier, Colin Piddington, and Terrence Fernando

Future Workspaces Research Centre, University of Salford, Salford, M5 4WT
{g.gautier, c.piddington, t.fernando}@salford.ac.uk
Abstract. Despite the concept of working collaboratively being very popular, its implications for organisational structures and for human requirements are not completely understood. As a result, the advanced technologies used to enhance collaborative work have often had a limited or even a negative impact on work efficiency. Common issues often involve human factors, such as the acceptance of the new product by the users, the lack of communication between users, and the working culture. Consequently, a better understanding of human needs would facilitate the development of better tools to support group work as well as more efficient organisations. Therefore, this paper compares the organisational and human perspectives on collaboration in order to identify the barriers to implementing it. In particular, it focuses on issues related to life cycles, organisation structure, information flow and human motivation. It also introduces the case of virtual organisations and their difficulty in generating efficient collaboration.

Keywords: Socio-technical impact of interoperability, The human factor in interoperability, Knowledge transfer and knowledge exchange
1 Introduction

Due to a lack of understanding of the implications of collaboration, it is often believed that new technologies will enhance collaboration by supporting the organisational processes [1]. But psycho-sociologists have demonstrated, since the 19th century, the importance of the social relationships between workers [2]. Indeed, it appears clearly in the literature that social interactions are necessary to build trust and to generate a shared understanding of a project's aims and processes. These are among the main features of collaboration [3] and [4]. Efficient technological workspaces for collaboration should therefore support both the organisation's processes and the workers. So far, these two objectives are usually considered independently because of the difficulty of integrating both [5]. Consequently, the social and psychological features of the
workers are not represented in enterprise models, where humans are only seen through their roles in supporting the processes. As a result, workers are often reluctant to integrate new technologies in their workspaces, because they are associated with a change of their working culture. A better understanding of the working environment and its influence on work efficiency would therefore permit the creation of better workspaces and the integration of more user-friendly technologies. In addition, it might be necessary to limit the intrusion of IT in the workspace in order to better respect human nature. These issues are discussed in this document, which starts with an introduction to collaboration before presenting the organisation's and the workers' viewpoints on the need for collaborative workspaces. Finally, the role of technology in supporting group work is discussed in the last section. The discussions of this paper are based on a competency-oriented decomposition of the organisation structure that is presented in the third chapter.
2 Understanding Collaboration

As defined by Montiel-Overall [4], "collaboration is a trusting, working relationship between two or more equal participants involved in shared thinking, shared planning and shared creation". This notion of collaboration is difficult for enterprises to implement because it implies quite open communications and social relationships between the participants, as explained in the group working theories. Consequently, organisations rarely achieve collaboration when they try to generate it. Instead, as will be explained in the next section, their inflexible structures and IPRs limit them to working cooperatively, which is not as beneficial for group work.

Collaboration necessitates the building of a team over the four stages (Fig. 1) identified by Tuckman [6]. First, the forming phase is when people join and establish each other's roles. This is an observation phase strongly influenced by the leaders of the group and the structure of the participants' roles [7]. In the second phase, called the storming phase, everyone tries to secure his position and the authority of the leaders is challenged. Conflicts arise between the members of the group. In the norming phase, people become closer and often start sharing a social life. The roles are clear in the group, and discussions become productive. Finally, the performing stage is when the collaborators are fully aware of the group structure, of each other's roles and objectives, as well as of their capacities and ways of working. Like other researchers before him, Tuckman finally realised that as the group becomes closer, the efficiency of the work increases in parallel with the motivation of the collaborators.

Those stages can be explained partially by the need to share an awareness of the working group between the collaborators, as explained by the Johari window [8]. This common knowledge, which is called the open area, is limited when the group forms, and it soon integrates a shared understanding of the roles of everyone (forming phase), and the objectives of all the participants related to the collaborative work (storming phase). In the norming phase, the group becomes stronger and shares some social experiences outside work that allow them to
understand each other better, so that they can communicate more efficiently and trust more easily. This social need is the main difference between collaboration and cooperation, where only the understanding of the working group structure is essential. As a consequence, in cooperation the performing phase can hardly be achieved.
Forming → Storming → Norming → Performing

Fig. 1. The evolution of the roles in a collaborative team
Of course, not any set of people can become a highly collaborative group. Among others, Kurt Lewin has shown that there is a need for an interdependence of fates and tasks between collaborators [9]. This corresponds to Vroom's theory [10], which will be introduced later. Indeed, people not only need to share a common objective, but also a shared reward. If the work of everyone benefits only a few, this contradicts the instrumentality element of Vroom's theory. The interdependence of the collaborators is then called negative, because their objectives conflict with each other. This can lead to the end of the group work due to a lack of communication and a lack of information sharing. On the contrary, positive interdependence enhances participation, discussions and friendship, and decreases aggressive behaviours. It enables the collaborators to achieve Tuckman's performing phase and the high productivity observed by Fayol [2] and Mayo [11].
3 The Organisational Layers

Collaboration is usually required for particular projects in order to support interoperability between the stakeholders. During these projects, organisations usually adopt traditional pyramidal structures, with managerial roles at the top and functional roles at the bottom. The top-level roles mainly concentrate on the project objectives and processes, while the bottom ones usually focus on one skill or competence. Consequently, project organisational structures can be decomposed into three layers, which are introduced here from top to bottom in order to explain the environment of the collaboration:

- The Project Manager (PM) has a management and control function in the overall project. He is responsible for the achievements of the project, and he must make sure that the organisation's project objectives will be met. Therefore, he defines the processes that will be followed during the project in
order to meet a customer or product demand (time and cost), and he also defines the conditions that allow each phase of the project to be started or finalised. Finally, he is responsible for finding and managing the functional resources necessary to support the project processes (Fig. 2). He is not usually concerned with human factors and mainly considers the employees at functional levels as the actors of the processes.

Identify a set of objectives → Define the project processes → Determine resource allocation

Fig. 2. Simplistic view of the Project Manager's successive tasks
- The Management Team (MT), led by the project manager, is in charge of the integration of several competencies/functions during a particular phase of the project. A Competency Representative is present in the team for each competence involved in this phase. The first task of the management team is to match the processes defined by the project manager with the corporate competencies. Consequently, the MT must build a common understanding between its members in order to allow them to make better decisions when a conflict has to be solved. If some competencies are missing in the organisation, then a virtual organisation is created by acquiring competencies through outsourcing or sub-contracting (Fig. 3). The management team is where collaboration is the most likely to appear, because a shared understanding is required to make better decisions and the competency representatives have full control over the IPRs of their competencies.

Build up a shared understanding → Identify relevant competencies → Build up a VO if required

Fig. 3. Simplistic view of the Management Team's successive tasks
- The Skills Silos are vertical structures responsible for the development of a competence. Thus, they contain the organisation's knowledge for each competency involved in the project. The competency representative, who represents the silo in the management team, is at the top of the silo, possibly supported by other levels of managers. At the bottom of the silo, the employees with the functional competencies/skills act according to the decisions made by their managers. They have a limited understanding of the overall project and usually have little contact with other silos, which prevents the building of an inter-silo team. Consequently, the functional
levels do not usually get involved in any collaboration with other silos, and most of the wrong decisions and interoperability issues appear at these levels.
The above components permit a simplistic representation of an organisation structure for one phase of a project. This simplistic view is nonetheless sufficient to highlight the need for collaboration in the management team, in order for its members to share a common understanding of the project and to ensure that every viewpoint concerned is considered equally throughout the project. In fact, late problem discovery has proved to be extremely costly in many industrial cases, and it is regularly due to a limited understanding of the consequences of decisions made during previous management meetings. Similarly, collaboration at the silo level is essential to permit the workers to share cross-project knowledge and experience in order to allow faster problem resolution or innovation [12]. The organisation structure presented here corresponds to a snapshot in time, but the management teams change throughout the project, as does the involvement of the skills silos.
4 The Life Cycles

The Product Life Cycle is the timeline representation of a set of sequential phases from the birth to the death of a product or service. It addresses conceptualisation through design, production, deployment and, finally, replacement (Fig. 4). Product Life Cycles are extended by maintenance and recycling to face the increasing demand for "through-life services" [13] and [14] as well as environmental issues. In this line, some advanced work on standards and PLCS (Product Life Cycle Support) has already been done in the US Department of Defense and in the UK, as traditional manufacturers change their business from product manufacture to through-life service (leasing). In addition, the building sector is following a similar path with PPPs (Public-Private Partnerships). Therefore, the outcome of a life cycle is often not only a product but also a service. This demands a far-reaching collaborative way of working across organisations in order to consider the requirements at the maintenance and recycling levels as early as the design phase.

Conceptualisation → Design → Production → Deployment → Maintenance → Recycling

Fig. 4. Example of product life cycle
In order to save time, these phases are often parallelised, which means that a phase can start before the previous one has been completed. This specific adaptation of the product life cycle is used by the project management processes. The overlap between the start of a phase and the termination of the previous one is only possible after a Maturity Gate (Fig. 5) meeting, where the
representatives of several competencies meet to assess the risk of change involved in starting a new phase of the project. Each overlapping phase corresponds to a high-risk period, where any change in the previous phase can become extremely time-consuming and costly to accommodate in later phases. The chairman of these meetings is the project manager, so that he can control the project from the client's perspective.
Conceptualisation →M→ Design →M→ Production →M→ Deployment →M→ Maintenance →M→ Recycling (M = Maturity Gate)

Fig. 5. Example of project management process
Decisional Gates occur within planned meetings chaired by the project manager, and their outcomes are critical for the project because they aim at solving or avoiding problems. As a consequence, they usually happen within a phase of the project, and they might even involve representatives of later phases, such as maintenance managers. The intent of decisional gates is to limit delays in the project's progress by identifying optimised ways to proceed. Nevertheless, the participants of these meetings are the competency representatives, and they usually need to consult with their skills silos in order to resolve specific difficulties or assemble additional data. Since these consultations cannot usually be made during the meetings, the decisional gates often consist of a series of meetings that result in action lists for the competency representatives and their skills silos. If the meeting workspace could be connected to the skills silos' workspaces, then it would be possible to make decisions faster by discussing solutions with the relevant silos during the course of the meeting. Two types of meetings can be differentiated: the planned meetings that are held throughout the project management process to assess progress and to mediate on problem resolution, and the reactionary ones that are needed to address urgent issues. They only influence the technology used and not the roles of the stakeholders, as the solution requires near real-time performance. For reactionary meetings, mobile technologies might be useful to improve the reactivity of the organisation where meeting participants are not available or needed locally, even if these technologies could limit the level of interaction between the participants in the meeting.
5 The Processes and the Motivation

Once the objectives of each phase of the project and the conditions to parallelise phases have been defined, the project manager aims at generating the processes that will have to be followed within each phase of the project in order to achieve the corresponding objectives. Roles are defined and assigned in accordance with the management teams. The view of the organisation, which is often that of the project manager, is project-based and sees the employees as the actors of the processes. As a consequence, human factors are given little consideration by the higher levels of the organisation, and the structure of the organisation tends to become rigid.

However, as early as the end of the 19th century, Henri Fayol [2] identified the role of the administrative part of an organisation as supporting the employees. He also highlighted the importance of giving structure to the employees, and the benefits of self-organisation, which is often supported by social relationships. As an example, his miners organised themselves in small groups according to their friendships and relationships. The groups were more stable and more productive thanks to the increased motivation of the employees. This is consistent with the group building theories discussed above, which note the increasing importance of informal relationships over time, while structure is essential at the beginning (Fig. 1). In contrast, the scientific management of work proposed by Taylor [15], which corresponds to the decomposition of work into processes, has shown some limits. Indeed, it increases sickness and absenteeism due to the lack of motivation of the employees. Nowadays, the French automotive sector and its high suicide rate is a good example of the consequences of such an approach. Therefore, it is crucial to support both structured and informal relationships when trying to develop collaboration.

It is also essential to make the distinction between an employee and his roles in the organisation. This permits, for example, the consideration of the worker's motivation by understanding his personal objectives, interests or culture. Indeed, it is sometimes believed that humans are not naturally inclined to work and must see a personal gain in order to do it [10]. The most studied theory in motivation science is the hierarchy of needs developed by Maslow [16]. He presents the human needs in a five-level hierarchy. Those needs are determinant for human behaviours because they act as motivators as long as they are not fulfilled. From bottom to top, the five levels are: physiological needs, safety needs, love needs, esteem needs, and self-actualisation needs. The physiological needs are the ones that trigger homeostatic and instinctive behaviours. The safety needs include security, order and stability. The love needs correspond to our social lives and look for relationships. The esteem needs involve appreciation and recognition concerning abilities, capabilities and usefulness. Finally, the self-actualisation needs refer to our desire for self-fulfilment. The needs in the highest levels only act as motivators if the lower levels are already fulfilled. After a level of needs is fulfilled, lower needs become less important and higher ones predominate. Even if the hierarchy is supposedly independent from any culture, the importance of each need can depend on the past
experiences of the people. As an example, a lack of love during childhood can lead to psychological illness and to a depreciation of love needs. As demonstrated by Herzberg et al. [17], the lowest-level needs are often seen as prerequisites by the workers. He considers those as hygiene factors, which are responsible for disaffection at work. As a consequence, unfulfilled hygiene factors can lead an employee to perform his tasks poorly or to leave his job. The highest levels of Maslow's hierarchy of needs act as motivators that make the employees put more effort into achieving the objectives associated with their roles in the organisation. In order to keep employees motivated, any work must therefore target the three factors that influence human motivation according to Vroom [10]: expectancy (being sure that one's role is valuable in the project, and that others' participation will lead to the achievement of the general objective), instrumentality (effort is rewarded according to the effort produced) and valence (the reward is appropriate to the employee's needs). To summarise, supporting the motivation of the employee must be considered as important as supporting the processes he is involved in when implementing collaborative tools. However, motivation evolves over time, and it is difficult to predict. It starts with a need for security before moving to self-development. Besides, an innovative organisation will benefit from more flexibility, because a major issue is to integrate innovative methods and techniques in organisations. Not being able to integrate the innovation could limit the user's motivation, because he would have the feeling that his efforts were in vain, which contradicts Vroom's requirements.
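In the motivation literature, Vroom's expectancy theory is commonly summarised (this formula does not appear in the original text) as the product of the three factors, so that motivation collapses if any one of them is zero:

\[ \mathit{MF} = E \times I \times V \]

where \(\mathit{MF}\) is the motivational force, \(E\) the expectancy that effort leads to performance, \(I\) the instrumentality linking performance to reward, and \(V\) the valence of that reward for the individual.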
6 Virtual Organisations

Contractual terms surround most collaborations and describe the constraints that protect the organisation and its customers and build trust. Indeed, from an organisation's viewpoint, trust mainly refers to the respect of the contractual agreement, because this data is quantifiable and therefore tangible [18]. As part of the contract, the organisation must define what can be shared, which is usually easier than identifying what needs to be protected (Intellectual Property Rights). However, this limits collaboration, because companies tend to share the minimum of information, and it becomes difficult to build a common understanding of the project. Consequently, innovation is reduced and decisions are easily made with an incomplete understanding of their consequences. This is particularly relevant in cross-organisational collaborations.
Fig. 6. The roles of an employee
Additional issues appear when considering the employee's viewpoint, such as the workload or conflicts between roles for several organisations. Indeed, enterprise models are often project-based, and they cannot capture the conflicts between organisations. The example of an employee having to share his time between his main organisation and a virtual one is given in Fig. 6. He can play roles in both organisations independently of each other, and these roles can conflict with each other in terms of workload, IPR or objectives. The decisions of the employee, based on his manager's advice or on his motivation, will then determine his choices. Therefore, when representing Virtual Organisations there is a need to add details about the involvement of the worker in external activities.

Virtual Organisations are mainly necessary when an organisation does not have the full resources required for a successful project. Two configurations are presented in Fig. 7 to illustrate some extremes of inter-silo relationships. In the first one, the Virtual Organisation is formed by the combination of skill silos that still work for their main organisations (Fig. 7-a). As explained before, the information exchanges between the stakeholders are controlled and limited to the minimum. Each organisation occasionally allocates some working hours for the workers, who mainly work for their own organisation. As a consequence, innovation is also limited by the lack of understanding between silos, and it is usually derived from the collaboration objectives. This is typical of supply chains and short-term collaborations (partnering). In the second configuration, the virtual organisation tends to become autonomous from its parent organisations (Fig. 7-b). It creates its own working culture, and the communications between its skill silos are less constrained. The workers tend to be fully allocated to the Virtual Organisation, and IPR protection becomes less important. As a consequence, innovation is facilitated by the shared understanding between the silos. Since the links with the original organisations are weaker and less controlled, the Virtual Organisation ultimately becomes independent and works as a new organisation (merger or new business).
(a) Organisation A and Organisation B contributing skill silos to a joint Virtual Organisation; (b) an autonomous Virtual Organisation A+B

Fig. 7. Examples of Virtual Organisation
Partnering intends to limit the possible drawbacks of collaboration due to the compartmentalisation of knowledge and the development of a blame culture, by building a dependency of faith between organisations. This approach promotes values similar to those of inter-personal collaboration, such as equality, transparency and mutual benefits [19], and it also aims at generating win-win co-operations based on each organisation's strengths.
7 Information Flows

When considering the information flows, Virtual Organisations are where collaboration has the most impact, because this is where the shared information is the most carefully selected and protected, for commercial or privacy reasons. The competency representatives, due to their managerial roles, usually have the responsibility for defining what can be shared with other silos. Consequently, any sharing of new information must be validated by the relevant competency representative. However, as discussed before, the need for inter-silo collaboration often involves the participation of employees at the bottom levels. Consequently, data needs to be exchanged in the shared data environments via the demilitarised zones of the parent organisations, which must keep control of it through their competency representatives, even in emergency situations.

The IT collaborative workspaces must allow communication across the organisation, but the IT tools developed on top of them often support specific competences and their culture-based organisations. As a consequence, they store their data in a way adapted to these competences, taking into consideration the processes during which the data has to be accessed, used and manipulated by the corresponding disciplines. APIs are then used to extract the required view of the information and to translate the stored data into standardised or contractually agreed information, so that it can be shared with other silos (Fig. 8). However, translations are often accompanied by loss of information, and inter-silo interoperability is then not fully achieved. Data incompatibility remains a major
problem, even between silos sharing similar applications, because the software version, the platform used or the configuration can be responsible for incompatibilities at the data level [1].
[Three skills silos, each consisting of an Application, an API and an Information store]

Fig. 8. Distribution of the product data between the skills silos
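As an illustrative sketch of the silo-side translation step in Fig. 8 (our example; the record layout and field names are invented, not taken from any described system), an API can export an agreed view of a local store, dropping competence-specific fields and thereby losing information:

```python
LOCAL_RECORD = {
    "part_no": "A-42", "desc": "bracket", "mass_g": 125,
    "cad_layer": 7,                # competence-specific detail, not shared
}

# Mapping from local field names to the contractually agreed exchange format.
AGREED_FIELDS = {"part_no": "partNumber", "desc": "description", "mass_g": "massGrams"}

def export_view(record):
    """Translate a local record into the agreed format; unmapped fields
    (e.g. cad_layer) are dropped -- this is where information is lost."""
    return {shared: record[local]
            for local, shared in AGREED_FIELDS.items() if local in record}

print(export_view(LOCAL_RECORD))
# {'partNumber': 'A-42', 'description': 'bracket', 'massGrams': 125}
```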
One solution to skills silo interoperability issues proposed by IT is to specify the processes more carefully and completely, in order to be sure that every situation has been considered, and to limit human initiative, which is unpredictable but innovative. However, this theory rests on the strong assumption that any task can be detailed as a set of processes: the workers involved in these processes would trust them and would lose the ability to identify problems. As a consequence, any issue would be discovered very late and have huge financial consequences, as was the case for the production of the Airbus A380. Besides, people would lose their ability to innovate, because the structure of the organisation would be too rigid to integrate any development. Finally, the workers would not need to communicate with each other in order to achieve their objectives, because of the efficiency of the processes. Their isolation would not allow them to build a shared understanding of their project and would compromise any attempt to implement collaboration. Consequently, the implementation of tools to support processes should be modified in order to allow for the development of informal relationships between the workers. The support tools should focus on intra-silo work, where communications are already easy and efficient because people share similar backgrounds and working cultures.
8 Conclusion

The above decomposition of organisations' structures during projects highlights that the real challenge for supporting collaborative organisations is to generate social interactions between skills silos. Not only would this increase the motivation of the workers, but it would also facilitate innovation and
interoperability. IT is indeed extremely useful for automating processes and supporting personal work, but its current scientific approach seems to be quite inefficient at supporting the social aspects of collaboration, and its faults have increasing financial consequences. Limiting the role of IT to that of a knowledge sharing tool and inter-silo communication enhancer would therefore allow for the development of informal interactions between the stakeholders at any level. This could facilitate early problem identification, innovation and the building of trust. However, social computing must still progress and prove that supporting the workers through IT is not only a dream but can become reality. This will be an objective of the CoSpaces project, which plans to develop user-centric technological workspaces based on the user's working context.
Acknowledgments

The results of this paper are partly funded by the European Commission under contract IST-5-034245 through the project CoSpaces. We would also like to thank Sylvia Kubaski and Nathalie Gautier for their useful resources concerning the psycho-sociological area, as well as all the industrial partners of the CoSpaces project for their information about current industrial practices and challenges, and the research partners for their enlightening knowledge sharing.
References
[1] Prawel, D.: Interoperability isn't (yet), and what's being done about it. In: 3rd International Conference on Advanced Research in Virtual and Rapid Prototyping, pp. 47-49. Leiria, Portugal (2007)
[2] Reid, D.: Fayol: from experience to theory. Journal of Management History, vol. 1, pp. 21-36 (1995)
[3] Schrage, M.: Shared minds: The new technologies of collaboration. Random House, New York (1990)
[4] Montiel-Overall, P.: Toward a theory of collaboration for teachers and librarians. School Library Media Research, vol. 8 (2005)
[5] Klüver, J., Stoica, C., Schmidt, J.: Formal Models, Social Theory and Computer Simulations: Some Methodical Reflections. Journal of Artificial Societies and Social Simulation, vol. 6, no. 2 (2003)
[6] Tuckman, B.: Developmental sequence in small groups. Psychological Bulletin, vol. 63, pp. 384-399 (1965)
[7] Belbin, M.: Management Teams, Why they Succeed or Fail. Heinemann, London (1981)
[8] Luft, J., Ingham, H.: The Johari Window: a graphic model for interpersonal relations. University of California Western Training Lab (1955)
[9] Brown, R.: Group Processes. Dynamics within and between groups. Blackwell, Oxford (1988)
[10] Vroom, V.: Work and Motivation. John Wiley & Sons, New York (1964)
[11] Mayo, E.: The Social Problems of an Industrial Civilization. Ayer, New Hampshire (1945)
[12] Lu, S., Sexton, M.: Innovation in Small Construction Knowledge-Intensive Professional Service Firms: A Case Study of an Architectural Practice. Construction Management and Economics, vol. 24, pp. 1269-1282 (2006)
[13] Gautier, G., Fernando, T., Piddington, C., Hinrichs, E., Buchholz, H., Cros, P.H., Milhac, S., Vincent, D.: Collaborative Workspace For Aircraft Maintenance. In: 3rd International Conference on Advanced Research in Virtual and Rapid Prototyping, pp. 689-693. Leiria, Portugal (2007)
[14] Ward, Y., Graves, A.: Through-life management: the provision of integrated customer solutions by Aerospace Manufacturers. Report available at: http://www.bath.ac.uk/management/research/pdf/2005-14.pdf (2005)
[15] Taylor, F.W.: Scientific Management. Harper & Row (1911)
[16] Maslow, A.: A theory of human motivation. Psychological Review, vol. 50, pp. 370-396 (1943)
[17] Herzberg, F., Mausner, B., Snyderman, B.B.: The motivation to work. Wiley, New York (1959)
[18] TrustCom: D14 Report on Socio-Economic Issues. TrustCom European project, FP6, IST-2002-2.3.1.9 (2005)
[19] Tennyson, R.: The Partnering toolbook. International Business Leaders Forum and Global Alliance for Improved Nutrition (2004)
Models and Methods for Web-support of a Multi-disciplinary B2(B2B) Network

Heiko Weinaug, Markus Rabe

FhG IPK, Pascalstrasse 8-9, 10587 Berlin, Germany
{heiko.weinaug, markus.rabe}@ipk.fraunhofer.de
Abstract. B2B manufacturing networks offer significant potentials for service providers, which can offer and operate services with the complete network instead of connecting to each single company involved. The term multi-disciplinary B2(B2B) characterises this business-to-network relationship. The FLUID-WIN project is developing and using new business process models and methods as a base for the development of related tools for the smooth integration of logistic and financial services into a B2B manufacturing network.

Keywords: Business Aspects of Interoperability, Business Process Reengineering in interoperable scenarios, Enterprise modelling for interoperability, Modelling cross-enterprise business processes, Modelling methods, tools and frameworks for (networked) enterprises
1 Introduction

European enterprises have steadily increased their degree of outsourcing in the last decade, leading to many more partners along the supply chain. In combination with smaller batch sizes and delivery requests on shorter notice, the coordination requirements are tremendous for all the companies involved. The number of processes to be synchronised has increased significantly. Nevertheless, investigations demonstrate that up to 30% of the administrative effort can be saved by establishing electronic interchange mechanisms in the supply chain [10]. A very efficient additional effect can be achieved by connecting the B2B network with service providers. This approach has been described by Rabe and Mussini and introduced as the business-to-B2B or B2(B2B) mechanism [6,12,13]. Obviously, financial service providers can offer services based on the electronic invoices circulating in the B2B network. Also, logistic service providers can provide services based on the electronic transportation documents available through the B2B network, taking into account current order information, which includes announced or expected delays as well as forecast information.
However, a systematic approach and an easily applicable instrument are prerequisites for establishing multi-disciplinary B2(B2B) services and for developing supporting tools for multi-enterprise B2B networks. In particular, the approach has to handle the large set of enterprises included in B2B manufacturing networks, each of which has its own wishes and constraints and might be embedded in various other B2B networks. Moreover, the processes of the service providers have to be synchronised and integrated with the network processes. Besides the organisational aspects, the approach has to manage the different local and network-bound IT solutions as well as their relation to each other, to the processes and to the actors during the use of the services.

The FLUID-WIN project, partially funded in the Sixth European Framework Programme, aims at the development of a service-integrating B2(B2B) model and a prototype of the related e-commerce application. The FLUID-WIN consortium has decided to use business process modelling (BPM) as the main instrument, related to certain steps in the development sequence, to achieve the objectives. For this purpose, the Integrated Enterprise Modelling (IEM) method [3] was used to set up the B2(B2B) business process model. For example, during the field study phase a "Template Model" and various "As-is Models" were created to document the results of the interviews conducted with the companies, thus helping to identify the user requirements, potentials and restrictions. All processes, constraints and potentials were merged into one "General Model", which in turn was the base for the definition of the "FLUID-WIN B2(B2B) Business Process Model". The FLUID-WIN B2(B2B) Business Process Model represents the definition of a "To-be Model" that describes the new processes and methods as a base for the process and platform implementation through upcoming software development tasks. Based on the BPM models, the "Interdisciplinary Service Model" and the "Network Model" were developed as instruments to set up and configure the service workflows and their related business actors in the FLUID-WIN platform.
2 The B2(B2B) Background and its Interoperability Challenges

The FLUID-WIN consortium members found that the integration of the business processes among companies belonging to a manufacturing network and companies providing logistic or financial services has major potentials, especially for manufacturing networks that are geographically distributed across state borders, where transportation durations are significant (days to weeks) and where cross-border payment and trade finance are far from smooth. This is fostered by connecting these service providers to the manufacturers' B2B platform. This B2(B2B) approach is in contrast to the "traditional" way of connecting each single company with each related service provider. The effort saving through the new approach is significant, as the interfaces with customers, suppliers, financial service providers and logistic service providers (figure 1) are replaced by one single interface to the new B2(B2B) platform. In this context, "multidisciplinary" indicates that the services to be integrated into the network are from other domains,
such as finance or logistics. Moreover, the new approach has to tackle cross-discipline processes. This, of course, induces a strong focus on the interoperability topic at its various levels. Generally, interoperability is defined as "the ability of two or more systems or components to exchange information and to use the information that has been exchanged" [2]. In this context, "system" can mean IT systems, but of higher importance are enterprises and other organisational units interacting with each other. To explain the challenges related to interoperability, a framework for modelling cross-organisational business processes (CBP) from the ATHENA research project is used. The CBP framework was developed to provide modelling support for the business and technical levels as well as for the transformation to executable models [1]. With respect to the modelling dimension, this framework incorporates three levels:
Fig. 1. Network relations addressed by the B2(B2B) Approach
The Business level represents the business view on the cooperation and describes the interaction of the partners in their cross-organisational business process as a base for the analysis of business aspects, like the involved partners and their responsibilities. At this level, the FLUID-WIN process models cover the processes of 1) the B2B collaboration within a supply chain (manufacturing), 2) the business interaction with logistic service providers (B2B to LSP), 3) the business interaction with financial service providers (B2B to FSP), and 4) the support of new integrative services by functionalities of the FLUID-WIN Platform.
The Technical level represents the complete CBP control, including the message exchange. Thereby, different task types can be distinguished: those which are executable by IT systems and those that are executed manually. The challenge for the B2(B2B) Business Process Model is to use, as far as possible, standardised descriptions for the information exchange, like the XML Common Business Library [11], which can later be reused for platform programming and implementation. Furthermore, the model has to serve as 1) a base for the technical definition of services, 2) a supporting documentation for the development of the FLUID-WIN Platform concepts and software, and 3) a base for the generation of cross-domain classes and for the customisation and orchestration of services.

On the Execution level the CBP is modelled in the modelling language of a concrete business process engine, e.g. a model based on the Business Process Execution Language (BPEL). It is extended with platform-specific interaction information, e.g. the concrete message formats sent or received during process execution. For example, on the Execution level the B2(B2B) Business Process Model has to deliver 1) support for FLUID-WIN Platform execution (e.g. help functionalities) and 2) integration of model concepts and platform architecture.

Besides the results from the ATHENA project, existing reference models and available standards were investigated for use in FLUID-WIN. There are few commonly accepted approaches for models in the manufacturing supply network area. The Supply Chain Operations Reference (SCOR) model was designed for effective communication among supply chain partners [8]. SCOR is a reference model, as it incorporates elements like standard descriptions of processes, standard metrics and best-in-class practices. However, SCOR still has no broad and common acceptance in industry, and the authors have experienced in their studies that most of the companies under consideration, especially the smaller ones, did not have any skills with respect to SCOR. The Value Reference Model (VRM) version 3.0 provided by the Value Chain Group [9] follows a broader and more integrative approach than SCOR and supports the seamless and efficient management of distributed research, development, sales, marketing, sourcing, manufacturing, distribution, finance and other processes. Summarising, VRM has a more substantial approach than SCOR. However, VRM has not even reached the industrial awareness of SCOR.
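Returning to the Technical and Execution levels described above, the following minimal sketch (our illustration; the task names and the representation are invented, not part of the ATHENA framework or the FLUID-WIN specification) shows the distinction between IT-executable and manual tasks in a CBP, and how only the former would be handed to a process engine:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    partner: str          # organisation responsible for the task
    executable: bool      # True: executable by an IT system; False: manual

cbp = [
    Task("Send transport order", "Manufacturer", True),
    Task("Confirm pickup date", "LSP", False),
    Task("Propagate logistic forecast", "Platform", True),
]

# Only the IT-executable tasks would be transformed to the Execution level,
# e.g. into a BPEL process; manual tasks remain as user actions.
engine_tasks = [t.name for t in cbp if t.executable]
print(engine_tasks)       # ['Send transport order', 'Propagate logistic forecast']
```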
3 Template Model, As-is Models and the General Model

The authors have developed an approach for the analysis of supply chain networks, based on reference models and a guideline with several components [5]. This approach has been successfully applied for the analysis of requirements and potentials in regional supply networks [4]. For a number of field studies performed in the first months of the FLUID-WIN project, the approach was adapted to the specific needs of the service workflow investigation. In four different cross-European networks, the field studies involved 6 manufacturers, 2 Logistic Service Providers (LSP) and 5 Financial Service Providers (FSP) [6]. Different types of models have been used as efficient instruments to reach the
project aims. During the field studies these were the "Template Model", the various "As-is Models" and the merged "General Model", respectively. The Template Model provided orientation during the interviews as a "guide" to the relevant processes and topics. It also served as a base for the modelling of the As-is Models. The As-is Models describe and document the results of the interviews conducted with the companies and thus helped to identify the user requirements, potentials and restrictions. Finally, the General Model includes the outcome of all the As-is Models and forms a unique interdisciplinary model, enabling an overview of the processes in all three domains (manufacturing, finance and logistics) and their relations to each other.
4 B2(B2B) Business Process Model

The FLUID-WIN B2(B2B) Business Process Model is based on the requirements that have been derived from the field studies. It represents the definition of a "To-be Model" that describes the new processes and methods as a base for the process and platform implementation through upcoming software development tasks. The model is an integrative framework in which all the workflows, interactions between the domains, services, functionalities and software modules involved are visualised. Hence, the description of the model follows a "service-functionality-module" approach. In order to clearly understand the model concept, it is necessary to have a clear impression and definition of the terms "service", "functionality" and "module" as they are used in this context.

Services define the typical and ideal workflow provided by business service providers to other business partners, namely the interactions with their clients. Services are under the responsibility of service providers such as LSPs or FSPs and have their own workflows.

Functionalities are the supporting elements which enable the users of the services to carry out electronically the activities supported by the B2(B2B) platform. The FLUID-WIN project develops a prototype of such a B2(B2B) platform, which is called the FLUID-WIN Platform. The definition of the functionalities is, obviously, an important prerequisite for the FLUID-WIN Platform specification, as they define the workflow within the FLUID-WIN Platform. Functionalities describe how the FLUID-WIN Platform supports the service workflow and its participants. For example, the functionality "Logistic Forecast Propagation" supports a certain step within the logistic Transportation Service by gaining, transferring and providing logistical forecast information. Functionalities and services are strongly interrelated and depend on each other.

A module is simply a piece of software within the B2(B2B) platform (i.e. user interface, database, logic etc.) that realises the supporting functionalities of FLUID-WIN.

According to the "service-functionality-module" approach, first the services were defined comprehensively during the development phase. Then the necessary functionalities of the FLUID-WIN Platform were identified and methodologically
specified depending on the requirements of the services. Finally, based on the generated functionalities, the modules were indicated. Based on these prerequisites, the framework structure of the FLUID-WIN B2(B2B) Business Process Model has been developed. The highest level of the model is shown in figure 2. It contains six main areas, which are highlighted by numbers. Area 1 describes three symbolic enterprises related to different roles of manufacturing companies. Together they form a typical supply chain and represent the B2B network. These manufacturers use the services provided by the service providers. The influence of these services on the physical material flow is displayed in area 6 of figure 2 (e.g. transportation of goods performed either by the LSPs or by the companies themselves). The manufacturing supply chain is assumed to be organised by a supply chain execution tool which communicates with the local ERP software (relates to area 3). For this purpose, any SCE-supporting tool such as i2, SAP APO, SPIDER-WIN etc. is suitable [7]. The control and information exchange integration through the supply chain is enhanced by the logistic as well as the financial participants using their own techniques, software and methods. Hence, the model areas 2 and 4 contain the process description of the services from the providers’ point of view, but related to the cross-domain information flow during the complete service. The model contains the LSP-provided Transportation, Warehouse and Combined Logistics services. Related to the financial services provided by the FSPs, the model contains the Invoice Discounting and Factoring services. Based on the field studies, these services have been selected for the prototypical implementation as they promise high market relevance. However, the architecture of the model and the platform allow for the definition of additional services at any time.
Fig. 2. FLUID-WIN B2(B2B) Business Process Model (Level 0)
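To make the “service-functionality-module” approach more tangible, the following minimal Java sketch shows one possible way to represent the three concepts and their relations; all class and member names are our own illustration and are not taken from the FLUID-WIN specification.

import java.util.ArrayList;
import java.util.List;

// A software module of the platform, e.g. user interface, database, logic.
class Module {
    final String name;
    Module(String name) { this.name = name; }
}

// A platform functionality, e.g. "Logistic Forecast Propagation",
// realized by one or more software modules.
class Functionality {
    final String name;
    final List<Module> realizedBy = new ArrayList<>();
    Functionality(String name) { this.name = name; }
}

// One activity of the ideal service workflow, supported by functionalities.
class ServiceWorkflowStep {
    final String activity;
    final List<Functionality> supportedBy = new ArrayList<>();
    ServiceWorkflowStep(String activity) { this.activity = activity; }
}

// A service, e.g. the LSP-provided Transportation service, owned by a
// service provider and defined as an ordered workflow of activities.
class Service {
    final String provider;   // the LSP or FSP responsible for the workflow
    final List<ServiceWorkflowStep> workflow = new ArrayList<>();
    Service(String provider) { this.provider = provider; }
}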
Each service is described and modelled in a separate workflow. Moreover, the modelling of the service workflows in domain-oriented model areas enables the identification of common activities between the services as well as of domain interactions. This is especially important for the common understanding and consensus building among the various involved parties (domain experts, platform users, and software developers). Figure 3 demonstrates the general structure of modelling services on an exemplary cut-off view from the transportation service workflow (detailing area 2 in figure 2). The service workflow is clustered into four particular areas:
1. The ideal service workflow describes each activity along the whole business process. This information is mainly used to define the workflow templates in the Interdisciplinary Service Model.
2. The responsible person or organization is identified for each activity of the business process. This is important, because the user interface definition of the platform has to support those actions. The information in relation to the service work sequence is also used to define the Network Model parameters later.
3. The identification of local IT and B2B support helps to define the right gateway functions to bridge between the FLUID-WIN Platform and the local IT, and to identify the right place to source required data.
4. The most important part is the alignment of functionalities to certain processes within each service workflow. Each activity in the service workflows is interrelated with certain functionalities managing the service task supported by the FLUID-WIN Platform.
The processes in area 5 (figure 2) explain these functionalities, their related information exchange, their configuration and dependencies in detail. The structure of the FLUID-WIN B2(B2B) support is indicated in figure 4, which consists of three different parts.
Fig. 3. A cut-off from the LSP’s Transport Service Workflow with Activities and Levels
Fig. 4. Areas and described Functionalities in the Functionality Level of the FLUID-WIN B2(B2B) Business Process Model
In the top part of figure 4, the enabler processes for the FLUID-WIN Platform are indicated. There are enabler processes ensuring trust and security mechanisms, and processes which enable the addition of new services into the Platform. The second part of the B2(B2B) support level explains the processes for the platform configuration and customization. The responsibility of these processes is to compile, configure and coordinate the whole network, including the services, according to the data entered by the users. Hence, these processes mainly relate to the Network Model and the Interdisciplinary Service Model. Once a service is ready to use, the third part of the B2(B2B) support level defines all 13 functionalities in detail. The aim of the modelling task within this area was the specification of the content and the operation of each functionality, as well as of the information exchange and the interdependencies between different functionalities. The methods to be developed (calculations, logic etc.) and the related software modules were identified.
5 Interdisciplinary Service Model and Network Model
The FLUID-WIN B2(B2B) model aims to flexibly support service processes attached to a B2B network with low customization effort. For this purpose, generalized models are needed as well as company- and service-specific models. In order to establish a general, re-usable base for the services in the FLUID-WIN platform, an Interdisciplinary Service Model is defined that specifies
workflow templates, which implement the functionalities, and the parameters that can be adjusted within these templates.
Fig. 5. Relation between the B2(B2B) Model, the Interdisciplinary Service Model and the Network Model
The instantiation, parameterization, and connection of the workflow templates is conducted with the support of the Interdisciplinary Service Modeller, which in turn results in the definition of operating services; these are used by a Service Engine to drive the B2(B2B) business processes in the FLUID-WIN Platform (figure 5). The service workflows are implemented without relationship to the companies, and are therefore templates which form the Interdisciplinary Service Model and need to be customized before they can be used in operation. In order to establish concrete B2(B2B) services, the Network Model with information about the network members is required (e.g. company name, location, users’ roles and access rights, interoperability parameters, etc.). This information is static, i.e. it does not depend upon the services that the company uses or will use in the future. In order to operate on the platform, Template Workflows are instantiated and configured with respect to the involved companies as well as with data specific for the service. The Interdisciplinary Service Modeller is used to enter this information.
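As a hedged illustration of this instantiation step (hypothetical names, not project code), the sketch below separates the static Network Model data from the service-specific configuration that turns a template into an operating service:

import java.util.Map;
import java.util.Set;

// Static Network Model entry: independent of the services used.
class Company {
    final String name, location;
    final Set<String> roles;     // e.g. "LSP", "FSP", "Manufacturer"
    Company(String name, String location, Set<String> roles) {
        this.name = name; this.location = location; this.roles = roles;
    }
}

// A company-neutral workflow template from the Interdisciplinary Service Model.
class TemplateWorkflow {
    final String serviceType;    // e.g. "Transportation"
    TemplateWorkflow(String serviceType) { this.serviceType = serviceType; }
}

// The result of instantiation: a configured service run by the Service Engine.
class OperatingService {
    final TemplateWorkflow template;
    final Company provider, customer;
    final Map<String, String> parameters;   // service-specific settings
    OperatingService(TemplateWorkflow template, Company provider,
                     Company customer, Map<String, String> parameters) {
        this.template = template; this.provider = provider;
        this.customer = customer; this.parameters = parameters;
    }
}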
6 Summary and Outlook
The area addressed by FLUID-WIN is one of the areas where companies still have significant room for cost improvements. For example, although transport costs are still rather low, logistic processes and the related information processing represent
a considerable and growing factor. Performance indicators of a supply chain should be extended beyond the traditional ones like availability of goods, low inventory, physical transportation cost and IT investment. More emphasis should be put on low coordination and connectivity cost, improved reactivity to unpredicted changes, and transparency of selective information while hiding information from unauthorized partners. The FLUID-WIN B2(B2B) model and the supporting platform constitute a significant step forward in optimizing the synchronisation of money flow and delivery. This paper highlights that supply chain control platforms for multi-disciplinary B2(B2B) networks are a good option to increase flexibility and reduce cost, and that the most efficient way to create such a coordination service is Business Process Modelling. It can be used very efficiently in distributed environments such as supply networks. Thereby, differently styled and structured models can be used in different development phases for specific purposes (e.g. gaining information from field studies; structuring service workflows and their relation to the supporting B2(B2B) platform; setting templates for service configuration). Within modelling, SCOR is a good basis for defining the terminology and the basic supply network model elements, and compatibility with SCOR is also advantageous for more widely accepted templates. Methods to combine models enable Europe-wide and even transcontinental collaboration while designing and optimizing supply chains.
Acknowledgements
This work has been partly funded by the European Commission through the IST Project FLUID-WIN (IST-2004-027083). The work reported here has involved all project partners: Joinet (Italy), Régens Information Technology (Hungary), AcrossLimits (Malta), Lombardini (Italy), TS Motory (Slovakia), Fundación Labein (Spain), Technical University of Kosice (Slovak Republic), mb air systems (UK), Tecnicas de calentamiento (Spain) and ITW Metalflex (Slovenia).
References
[1] Greiner, U.; Lippe, S.; Kahl, T.; Ziemann, J.; Jäkel, F.-W. (2007) Designing and Implementing Cross-Organizational Business Processes – Description and Application of a Modelling Framework. In: Doumeingts, G.; Müller, J.; Morel, G.; Vallespir, B. (eds.): Enterprise Interoperability – New Challenges and Approaches. London: Springer, pp. 137-147
[2] IEEE (2007) IEEE Standard computer dictionary: A compilation of IEEE standard computer glossaries. Institute of Electrical and Electronics Engineers, New York
[3] Mertins, K.; Jochem, R. (1999) Quality-oriented design of business processes. Kluwer Academic Publishers, Boston
[4] Rabe, M.; Mussini, B. (2005) Analysis and comparison of supply chain business processes in European SMEs. In: European Commission (eds.): Strengthening competitiveness through production networks – A perspective from European ICT research projects in the field ‘Enterprise Networking’. Luxembourg: Office for Official Publications of the European Communities, pp. 14-25
[5] Rabe, M.; Weinaug, H. (2005) Methods for Analysis and Comparison of Supply Chain Processes in European SMEs. 11th International Conference on Concurrent Enterprising (ICE), München, pp. 69-76
[6] Rabe, M.; Weinaug, H. (2007) Distributed analysis of financial and logistic services for manufacturing supply chains. In: Pawar, K.S.; Thoben, K.-D.; Pallot, M. (eds.): Concurrent innovation: An emerging paradigm for collaboration & competitiveness in the extended enterprise. Proceedings of the 13th International Conference on Concurrent Enterprising (ICE’07), Sophia Antipolis (France), pp. 245-252
[7] Rabe, M.; Gocev, P.; Weinaug, H. (2006) ASP supported execution of multi-tier manufacturing supply chains. In: Proceedings of the International Conference on Information Systems, Logistics and Supply Chain (ILS’06), Lyon (France), CD-ROM publication
[8] SCC (2006) Supply-Chain Operations Reference Model. Supply-Chain Council, http://www.supply-chain.org, visited 10.11.2006
[9] VCG (2007) Value Reference Model. Value Chain Group, http://www.value-chain.org/, visited 08.11.2007
[10] VDI (2005) Datenaustausch per EDV läuft oft mangelhaft. VDI Nachrichten No. 24, 17th June 2005, p. 24
[11] XCBL (2007) XML Common Business Library. www.xcbl.org/, visited 30.05.2007
[12] Rabe, M.; Mussini, B. (2007) ASP-based integration of manufacturing, logistic and financial processes. In: XII. Internationales Produktionstechnisches Kolloquium, Berlin, 11.-12. October 2007, pp. 473-487
[13] Rabe, M.; Mussini, B.; Weinaug, H. (2007) Investigations on Web-based integrated services for manufacturing networks. In: Rabe, M.; Mihok, P. (eds.): New Technologies for the Intelligent Design and Operation of Manufacturing Networks. IRB, Stuttgart
Platform Design for the B2(B2B) Approach
Michele Zanet1, Stefano Sinatti2
1 Joinet, Bologna (Italy) [email protected]
2 University of Bologna, PhD Student (Italy) [email protected]
Abstract. Business-to-business or “B2B” is a term commonly used to describe the transaction of goods or services between businesses, enabled through a platform that allows for the interaction between customers and suppliers. Today, the new challenge is the integration of financial and logistics services into the business relationship without having to install thousands of peer-to-peer interfaces. The authors follow an approach that introduces a new level of business, that has been called B2(B2B). The purpose of this paper is to describe briefly the design of the FLUID-WIN project that targets this new approach, and to sketch the components to be developed in order to integrate the activities between significantly different business entities. Keywords: Decentralized and evolutionary approaches to interoperability, Design methodologies for interoperable systems, Interoperability of E-Business solutions, Interoperable inter-enterprise workflow systems, Tools for interoperability
1 Introduction
The FLUID-WIN project aims to achieve a web platform for providing financial and logistic services to a manufacturing network that already uses a B2B platform for supply chain management [1]. This paper aims to present a particular aspect of this project, namely the design of the FLUID-WIN platform. From the point of view of technology, the platform offers the management of an information flow connected to a material flow and the associated financial and logistic flows. Each single flow is related to a series of activities that, without the support of a platform such as that offered by FLUID-WIN, would require the installation of a series of one-to-one interfaces for a single report. Every member of the network would require as many interfaces as the number of partners in the network to which it relates, and this level of cost is unacceptable.
The significant progress that FLUID-WIN implies involves switching from a one-to-one network to a new model where the platform is the only channel of communication, through which it will be possible to efficiently implement the exchange of information between the three main domains: manufacturing, logistics and finance.
Fig. 1. Information categories exchanged between the platform and the sectors
Therefore, the main objective of FLUID-WIN is to enable interaction on a single platform among manufacturing B2B network providers and logistical and financial service providers (Fig. 1), applying the new B2(B2B) approach [2]. This paper was prepared before the project had been completed, while the detailed platform design was still in progress. Nevertheless, the software architecture has been defined, and the components that implement the platform are the Network Modeler, the User Interface, the Interdisciplinary Modeler, and the Service Engine. In the following, the architecture of the platform is described, the base technologies used are characterized, and the software components of the platform are explained.
2 Architecture Overview
A logical scheme of the platform is depicted in Fig. 2. The general architecture is composed of the following modules:
• Network Modeler
• Interdisciplinary Modeler
• User Interface
• Service Engine
The first two components are designed to model the entities involved in the communication and the rules by which they interact. The User Interface and the Service Engine components are designed to manage certain documents and events within the network.
Fig. 2. The FLUID-WIN platform
In addition to the platform, another set of software components called “gateways” is part of the FLUID-WIN architecture. The gateways aim to implement domain-dependent communication. Thus, the three gateways relate to the FLUID-WIN domains manufacturing (B2B), logistics and finance. Gateways have to face and to solve the interoperability challenges of the FLUID-WIN approach. There are two major levels of interoperability:
• Internal interoperability concerns the communication among the FLUID-WIN modules (for instance, communication between the FLUID-WIN platform and the logistic gateway).
• External interoperability concerns the communication between the different FLUID-WIN modules and the legacy systems of the FLUID-WIN users.
The FLUID-WIN interoperability is realized by the implementation of gateways and adapters that work between the FLUID-WIN Platform and the external application domains.
3 Technologies for Development and Technologies for Design
The choice of technology for the platform development has a strong impact on interoperability. Especially for the internal interfaces, the services must therefore be:
• Abstracted: the service is abstracted from its implementation.
• Published: there is a precise, published specification of the functionality of the service (not of the implementation of the service).
• Formal: a formal contract between endpoints places obligations on provider and consumer.
• Relevant: functionality is presented at a granularity recognized by the user as a meaningful service.
The modeling of these properties is possible through the use of the technology offered by Web services. Thus, the platform itself was developed with the following instruments:
• Eclipse 3.2, with the Web Tools Platform 1.5 development environment [3]
• J2EE 1.4, which fully supports the use of Web services [4]
For the deployment, the following tools have been selected:
• The FLUID-WIN Platform will be implemented in Java, Enterprise Edition 1.4.
• The J2EE Application Server will be BEA WebLogic Server 9.2 [5].
• The application will run on a server with Linux or Sun Solaris as OS.
• The B2B gateway and the FLUID-WIN Platform have a DBMS layer (IBM Informix).
Thus, the platform is a Web service which exposes a number of methods available through gateways that are mandated to route messages to the outside world. In Fig. 3, a conceptual scheme of the protocols and tools used for communication among the network entities is reported:
[Figure 3 sketches the platform infrastructure: a browser reaches the web server over HTTPS through the outer firewall; the gateways communicate via SOAP/HTTPS; behind the inner firewall, the FLUID-WIN Platform runs on the BEA WebLogic application server with an Apache Axis SOAP server, an authorization component and a database server.]
Fig. 3. Platform infrastructure
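To give an impression of what such exposed methods could look like, the following minimal J2EE 1.4 (JAX-RPC) service endpoint interface is a sketch only; the interface and method names are hypothetical and not taken from the FLUID-WIN specification.

// Hypothetical JAX-RPC service endpoint interface (J2EE 1.4 style):
// the methods declared here are exposed as SOAP operations over HTTP(S)
// and invoked by the gateways on behalf of external systems.
import java.rmi.Remote;
import java.rmi.RemoteException;

public interface PlatformService extends Remote {

    // A gateway pushes a business document (e.g. a logistic order)
    // into the platform; the platform returns a document identifier.
    String submitDocument(String domain, String documentXml) throws RemoteException;

    // A gateway polls the current workflow state of a document.
    String getDocumentState(String documentId) throws RemoteException;
}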
For the design tasks at the FLUID-WIN consortium partners, the Integrated Enterprise Modelling (IEM) method [6] was used to define and exchange the business processes. The Unified Modelling Language (UML) [7] is used to define and describe the workflow protocols for the exchange of messages between the gateways, the platform and the outside world. The development environment for UML is Enterprise Architect, an advanced software modelling tool for UML 2.1 [8].
4 Components for Modelling
The software components for modelling are the Network Modeler and the Interdisciplinary Service Modeller. These two components have the main objective to define the parties involved in the B2(B2B) network, the parties involved in specific workflows and the “rules of the game” applied when operating the platform’s functionalities. In particular, the Network Modeler is used to model the players in the B2(B2B) process, assigning roles and defining constraints that will drive the collaborative activities. The Network Modeler is expected to have structural similarities with the modelling engines of the SPIDER-WIN project, which facilitated the exchange of relevant manufacturing execution information along the supply chain [9, 10]. Therefore, the partners expect that they can re-use the structural parts of the specification from the SPIDER-WIN project, while the content parts will need to be newly specified, leading also to the development of a completely new modeller.
The Interdisciplinary Service Modeller has been conceived for the modelling of the domain concepts to be handled in a given network context, where this context is defined through the Network Modeller. The Interdisciplinary Service Modeller will enable the definition of the interdisciplinary process activities and their mapping to the process elements and events that trigger the actions managed by the Service Engine. Another fundamental aspect to be considered is the modelling of the processes through which certain documents can be obtained. This is possible through the so-called Workflow and Report Templates that are included within the Interdisciplinary Service Modeller. The above mentioned processes are defined by a number of states and methods implemented in the platform, and used by workflows whose final output will be a document (e.g. logistic order, request for quotation, quality measurement document, etc.). A Workflow Template is composed of a set of states, transitions, rights, outgoing and incoming events, functions, notifications, constraints and changes to the database. Through customizable external input it is possible to obtain a workflow as shown in Fig. 4, which is an example of a Workflow Template for a logistic order, specifying the possible states and the transitions between these states.
[Figure 4 shows the logistic order workflow template with the states CREATED, CHANGE by LSP, CHANGE by LSP’s CUSTOMER, CONFIRMED, CANCELLED, FULFILLED and CLOSED; each transition is annotated with the triggering functions (f), incoming events (i), required rights (r) and outgoing events (o).]
Fig. 4. Exemplary workflow template (logistic order)
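As a minimal sketch (illustrative names only, not project code; modern Java is used for brevity), such a template can be represented as a set of states and role-guarded transitions:

import java.util.ArrayList;
import java.util.List;

enum OrderState { CREATED, CHANGE_BY_LSP, CHANGE_BY_CUSTOMER,
                  CONFIRMED, CANCELLED, FULFILLED, CLOSED }

class Transition {
    final OrderState from, to;
    final String function;      // platform function fired, e.g. "F2"
    final String requiredRole;  // role allowed to trigger it, e.g. "LCU" or "LSP"
    Transition(OrderState from, OrderState to, String function, String requiredRole) {
        this.from = from; this.to = to;
        this.function = function; this.requiredRole = requiredRole;
    }
}

class WorkflowTemplate {
    private final List<Transition> transitions = new ArrayList<>();
    void add(Transition t) { transitions.add(t); }

    // A state change is legal only if a matching transition exists
    // and the requesting user holds the required role.
    boolean canFire(OrderState from, OrderState to, String role) {
        for (Transition t : transitions)
            if (t.from == from && t.to == to && t.requiredRole.equals(role))
                return true;
        return false;
    }
}

A concrete template would then be populated with entries such as add(new Transition(OrderState.CREATED, OrderState.CONFIRMED, "F4", "LSP")), mirroring the annotated arcs of Fig. 4.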
5 Components for Managing
The components for managing are the Service Engine and the User Interface. The main task of these components is the management of documents and events within the network. The Service Engine manages all messages that are exchanged with B2B and legacy applications, storing and updating the central repositories, performing the elaboration required to propagate message data through the single-discipline domains and towards the interested network players. Moreover, the Service Engine collaborates with software agents in charge of detecting events and transporting messages, and it defines the basic routing elements for control flow semantics, based on XML/XSL.
The User Interface allows authorized users to interact through a series of actions granted to them by their partners. An example of the first draft User Interface is shown in Fig. 5.
Fig. 5. Example User Interface
6 Summary
The step from the “classical” B2B approach to the new B2(B2B) concept requires the development of a platform that centralizes services and therefore the communication between the various entities. Therefore, a technology is mandatory that offers a single, language-independent development environment. The project has selected Web services for this purpose. The successful exploitation of the B2(B2B) concept requires:
• a business process model of the B2(B2B) concept, which forms the base of the implementation and supports the customization in a concrete network,
• an Interdisciplinary Service Model that defines the Workflow Templates implementing the states and transitions on the platform,
• a Network Model that defines the players in a concrete platform,
• a Service Engine that operates the workflows,
• a B2B gateway that connects the platform to a B2B platform (which then connects to a multiplicity of manufacturers),
• finance and logistic gateways that allow for the attachment of specific IT systems from the service domains, and
• user interfaces to directly access the platform, taking into account that the majority of services will be used without direct access to the platform, as information is exchanged through the gateways and platforms among the existing legacy systems.
References
[1] FLUID-WIN: Finance, Logistic and Production Integration Domain by Web-based Interaction Network. European Project FP6-IST-027083, www.fluid-win.de
[2] Rabe, M.; Mussini, B.: ASP-based integration of manufacturing, logistic and financial processes. In: XII. Internationales Produktionstechnisches Kolloquium, Berlin, 11.-12. October 2007, pp. 473-487
[3] Eclipse Foundation, Eclipse 3.2: http://www.eclipse.org/, visited 05.11.2007; documentation at http://help.eclipse.org/help32/index.jsp
[4] Sun Microsystems, J2EE 1.4, 2004. http://java.sun.com/j2ee/1.4/docs/tutorial/doc/
[5] BEA WebLogic Server 9.2: http://edocs.bea.com/wls/docs92/index.html, visited 05.11.2007
[6] Mertins, K.; Jochem, R.: Quality-oriented design of business processes. Kluwer Academic Publishers, Boston, 1999
[7] UML 2.1: http://www.uml.org/, visited 05.11.2007
[8] Enterprise Architect: http://www.sparxsystems.com.au/, visited 05.11.2007
[9] Rabe, M.; Mussini, B.; Gocev, P.; Weinaug, H.: Requirements and potentials of new supply chain business processes in SME networks – Drivers for solution developments. In: Cunningham, P.; Cunningham, M. (eds.): Innovation and the Knowledge Economy: Issues, Applications, Case Studies. Amsterdam: IOS Press 2005, Vol. 2, pp. 1645-1652
[10] Rabe, M.; Gocev, P.; Weinaug, H.: ASP Supported Execution of Multi-tier Manufacturing Supply Chains. In: Proceedings of the International Conference on Information Systems, Logistics and Supply Chain (ILS’06), Lyon (France), 14.-17. May 2006, CD-ROM publication
[11] FLUID-WIN consortium: New Business Processes Specifications. Deliverable D13, 28.09.2007, www.fluid-win.de
Trust and Security in Collaborative Environments
Peter Mihók, Jozef Bucko, Radoslav Delina, Dana Palová
Faculty of Economics, Technical University Košice, Němcovej 32, 040 01 Košice, Slovakia
{peter.mihok, jozef.bucko, radoslav.delina, dana.palova}@tuke.sk
Abstract. It is often stated in literature that trust is of critical importance in creating and maintaining information sharing systems. The rapid development of collaborative environments over the Internet has highlighted new open questions and problems related to trust in web-based platforms. The purpose of this article is to summarize how trust can be considered in collaborative environments. Partial results of the field studies of two European IST projects, FLUID-WIN and SEAMLESS, are presented. Identity management problems and trusted operational scenarios are treated. Key words: trust, security, information sharing, collaboration, web-based platform, identity management, trusted scenario
1 Introduction Trust is considered as a basic success factor for collaboration. Modern ICT based collaboration environments allow companies to realize a number of competitive advantages by using their existing computer and network infrastructure for the collaboration of persons and groups. The collaborating actors (manufacturers, suppliers, customers, service providers) must feel confident that their transaction processes are available, secure and legal. Trust building mechanisms vary according to their complexity and acceptability, especially among companies with low IT skills. Appropriate selection and user-friendly implementation can enhance trust and efficient use of web-based business platforms. In this contribution we examine trust and trust building mechanisms in different contexts. Organizations and projects are looking for ways to optimize their supply chains in order to create a competitive advantage. Consequently, the same organizations are modifying their business processes to accommodate the demands that information sharing requires. Information sharing can reduce the cost of failure and operational cost. Furthermore, it can improve the scheduling quality and the efficiency of current resources. It also provides intangible benefits such as
improved quality with increased customer and shareholder satisfaction. However, integrating and sharing information in inter-organizational settings involves a set of complex interactions. The organizations and institutions involved must establish and maintain collaborative relationships in which information and data of a sensitive and critical nature are transferred outside of the direct control of the organization. The sharing processes can involve significant organizational adaptation and maintenance. Trust and security mechanisms are often stated in literature as being of critical importance in the creation and maintenance of an information sharing system. In the past decade there has been a rapid increase in information sharing systems based on different electronic services (e-services) offered through web-based platforms. Trust and security aspects in the development of such platforms are at the center of the research activities of European FP6 and FP7 projects, e.g. the networks of Living Labs and Digital Ecosystems, and the projects SECURE, SERENITY, SWAMI, HYDRA, etc. In this paper we restrict our attention to two types of collaborative environments: electronic marketplaces and manufacturing networks. Our research is based on our results and experiences in the FP6 IST projects SEAMLESS and FLUID-WIN. The SEAMLESS project studies, develops and tests an embryo of the Single European Electronic Market (SEEM) network, where a number of e-registries are started in different countries and sectors. The SEEM vision is a web-based marketplace where companies can dynamically collaborate without cultural, fiscal and technological constraints. The FLUID-WIN project targets business-to-business (B2B) manufacturing networks and their interactions with logistics and financial service providers. This cross-discipline service integration concept is called business-to-(B2B), or shorter B2(B2B) [19]. Within that context, the project aims to develop an innovative platform which can integrate data and transfer them among all the various partners, in order to improve competitiveness and to make the business processes of the integrated network as efficient as possible. After introducing the basic concepts related to trust and security, we deal with the problem of secure access to the platforms and additional trust mechanisms considered within the projects.
2 Basic Concepts
In the context of collaboration it is important to differentiate between trust and security. The basic concepts and terms are defined as a base for the further discussion. Trust is a seemingly very abstract factor and, as a complex notion synonymous with confidence, it has many meanings depending on the context in which it is considered. According to WordNet [28], the word trust relates to:
• reliance, certainty based on past experience
• allowing without fear
• believing, being confident about something
• the trait of believing in the honesty and reliability of others
• confidence, a trustful relationship
Another definition describes trust as confident reliance. “We may have confidence in events, people, or circumstances, or at least in our beliefs and predictions about them, but if we do not in some way rely on them, our confidence alone does not amount to trust. Reliance is a source of risk, and risk differentiates trusting in something from merely being confident about it. When one is in full control of an outcome or otherwise immune from disappointment, trust is not necessary” [27]. Trust is usually specified in terms of a relationship between a trustor, the subject that trusts a target entity, and a trustee, i.e. the entity that is trusted. Trust forms the basis for allowing a trustee to use or manipulate resources owned by a trustor, or may influence a trustor’s decision to use a service provided by a trustee. Based on the survey of Grandison and Sloman [13], trust, in the e-services context, is defined as: “the quantified belief by a trustor with respect to the competence, honesty, security and dependability of a trustee within a specified context”. Correspondingly, distrust is defined as: “the quantified belief by a trustor that a trustee is incompetent, dishonest, not secure or not dependable within a specified context”. The level of trust has an approximately inverse relationship to the degree of risk with respect to a service or a transaction. In many current business relationships, trust is based on a combination of judgment or opinion based on face-to-face meetings or recommendations of colleagues, friends and business partners. However, there is a need for a more formalized approach to trust establishment, evaluation and analysis to support e-services, which generally do not involve human interaction. Comprehensive surveys on the meaning of trust can be found e.g. in [17] and [13], as well as in the book on Trust in E-Services [24]. Whereas security is “a wish” of being free from danger, the goal being that “bad things don’t happen”, computer security is the effort to create a secure computing platform, designed in a way that users or programs can only perform actions that have been allowed. This involves specifying and implementing a security policy. The actions in question are often reduced to operations of access, modification and deletion. Schneier [21] describes security as being “like a chain; the weakest link will break it. You are not going to attack a system by running right into the strongest part. You can attack it by going around that strength, by going after the weak part, i.e., the people, the failure modes, the automation, the engineering, the software, the networks, etc.” In the context of information-sharing computer systems, everything reduces to access to appropriate information. Provision (or disclosure) of information is the key element. A simple transfer of data occurs between two parties, a sender and a receiver, and includes the following key steps: preparation of the data; transfer of a copy of the prepared data; use of the copy of the data received. More complex transactions can be composed from such simple data transfers. Dependability is the ability to avoid failures that are more frequent or more severe than is acceptable (to avoid wrong results, results that are too late, no results at all, or results that cause catastrophes). The attributes of dependability are:
• Availability – readiness for correct service
• Reliability – continuity of correct services
• Safety – absence of catastrophes
• Integrity – absence of improper results
• Maintainability – ability to undergo modifications and repairs
Security can be defined [14] as the combined characteristics of confidentiality (i.e., the absence of unauthorized disclosure of information), availability to conduct authorized actions, and integrity (i.e., the absence of unauthorized system alterations). Security and dependability overlap, and both are required as a base for trust. Unfortunately and most confusingly, the terms dependability and security are sometimes used interchangeably or, else, either term is used to imply their combination. In fact, because security and dependability are distinct but related and partially overlapping concepts, the term trustworthiness is increasingly being used to denote their combination. The main technical mechanisms that have a strong influence on trust in network-based systems include:
• Identity management
• Access control
• Data encryption and security
The identity management systems provide tools for managing partial identities in the digital world. Partial identities are subsets of attributes representing a user or company, depending on the situation and the context. Identity management systems must support and integrate both techniques for anonymity and authenticity in order to control the liability of users. Access control is the enforcement of specified rules that require the positive identification of the users, the system and the data that can be accessed. It is the process of limiting access to resources or services provided by the platform to authorized users, programs, processes or other systems according to their identity authentication and associated privilege authorization. Finally, data encryption and security are related to cryptographic algorithms, which are commonly used to ensure that no unauthorized user can read and understand the original information. The concept of asymmetric cryptography (also called Public Key Cryptography) was described for the first time by Diffie and Hellman [6]. In contrast to symmetric cryptography, in which we have the same secret key for encryption and decryption, we now have one public key eP (encryption key) and one private key dP (decryption key) for each person P. While the public key eP can be published to the whole world, the private key dP is to be treated as a secret and only person P knows it. An important characteristic of such a cryptography system is that it is computationally infeasible to determine the private key given the corresponding public key. The advantage of asymmetric cryptography is the enormously reduced effort for key management. A disadvantage is its speed. Asymmetric cryptography can serve as the base for the digital signature. The Public Key Infrastructure (PKI) provides the identification of a public key with a concrete person via a certificate. The PKI is the system of technical and administrative arrangements associated with issuing, administering, using and revoking public key certificates. The PKI supports reliable authentication of users across networks and can be extended to distributed systems that are contained within a single administrative domain or within a few closely collaborating domains.
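As a small, self-contained illustration of these principles (our own sketch, not code from either project), the standard Java security API can generate such a key pair and sign and verify a document:

import java.security.*;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // Generate the key pair: pair.getPublic() plays the role of eP,
        // pair.getPrivate() the role of dP.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        byte[] document = "example business document".getBytes("UTF-8");

        // Person P signs with the private key dP ...
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(document);
        byte[] signature = signer.sign();

        // ... and any partner can verify with the published public key eP.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(document);
        System.out.println("Signature valid: " + verifier.verify(signature));
    }
}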
3 Trust and Security on Web-based Platforms
In an open and unknown marketplace with a high number of unknown participants, assurance and trust are difficult to achieve but very important. There is a growing body of research literature dealing with online trust, in which e-commerce is one prominent application. Several studies contend that e-commerce cannot fulfil its potential without trust (e.g. [8], [11], [15], [20]). Lee and Turban [16] highlight lack of trust as the most commonly cited reason in market surveys why consumers do not shop online. In an open consultation on “Trust barriers for B2B e-marketplaces” [7] conducted by the Enterprise DG Expert Group in 2002, several important barriers were identified. From the report we find that the most important trust barriers are issues regarding the technology (security and protection), the absence of trust marks and dispute resolution, online payment support, lack of relevant information about partners and products, and contract and standardization issues. A trust building process must be set up to resolve these issues. Trust is usually conceptualised as a cumulative process that builds on several successful interactions [18]. Each type of process increases the perceived trustworthiness of the trustee, raising the trustor’s level of trust in the trustee [3]. It is not known exactly which trust-building processes are relevant in an e-commerce context. It is suggested that, in this setting, trust building is based on the processes of prediction, attribution, bonding, reputation and identification [3]. Reputation has a very high relevance in a trust-building process in e-commerce markets [1]. According to the Chopra and Wallace classification, identification-based trust refers to one party identifying with the other, for example in terms of shared ethical values. Identification builds trust when the parties share common goals, values or identities. In e-commerce, these attributes may relate to corporate image [2] or codes of conduct. These results are more focused on the impact of trust than on the factors which build trust. According to several research activities, research on the significance and acceptance of trust building mechanisms is still missing and is necessary for future development in this field. This absence has been examined in the Seamless project [22]. The results are presented in Deliverable D1.2 “Trusted Operational Scenarios” of the project, see also [5]. Though operating within a closed supply chain system, locally spread information technology destinations (users at manufacturers, suppliers and financial service institutions) need to be linked, which brings up the need for trust, privacy and security. It is to be expected that security is at least of equal importance as in an open system, as limitation of access plays a vital role. There are several trust and security best practices scattered throughout the Internet, and material is constantly updated daily, if not hourly, based on the latest threats and
vulnerabilities. Security standards are not “one size fits all.” Responsible, commercially reasonable standards vary, depending on factors such as company size and complexity, industry sector, sensitivity of data collected, number of customers served, and use of outside vendors. Security standards exist for several types of transactions conducted, and new ones are on the way all the time. A further checklist to meet trust and security requirements is to meet local legislation in terms of data protection and privacy regulation. Financial transactions need to meet local and also foreign standards if they are to be accepted by a provider. With respect to the FLUID-WIN project and its platform, mainly two security and trust building mechanisms can be differentiated:
1. Mechanisms based on workflow design, policies and contractual issues.
2. Technical solutions ensuring a safe login and data exchange.
Both mechanisms are strongly related and build upon each other. Based upon the results of a mail-based survey, the following ten key success factors for an established information sharing system were determined [10]:
1. Centralized Information Sharing Control
2. Maintain and Update Information Sharing Rules
3. Significant Exchange of Information
4. Defined Use of Information
5. Collaboration with Suppliers
6. Cooperative Competition
7. End-to-End Connectivity
8. Formed Supply Alliances
9. Replace Traditional Communication with IT
10. Share Frequently with Suppliers
Trust, which did not occur as a factor among them, is replaced by contractual agreements defining the limitations of the usage of the transferred information. However, the challenge within the FLUID-WIN idea is the high number of actors from different domains, as well as their technical connection within the B2(B2B) concept. For secure access to the FLUID-WIN Platform it could be convenient to use digital signatures (private keys), which are stored on a secure device (chip card, USB key) and protected by further safety elements (PIN and static password). It is necessary to assume the existence of a pair of digital signature keys, the first one for access and cryptography and the second one for signing. The strength of this form of securing is the fact that the method of digital signature cannot be breached by “brute” force at the present time. The weakness of this security method lies in insufficient knowledge of the method and in infringements of the safety rules related to the physical security of the digital signature storage site and of the safety elements which allow its operation. Therefore, it is necessary to work out a security policy in which the method of usage, the security principles and the risks of improper use of this method are exactly specified. The digital world uses the same principles for the identification of electronic signature data (in the case of a digital signature it is the public key):
• Uniqueness of the link – based on an agreement about a public key between the key owner and the verifier of the signature. This form of binding is unequivocal in the opinion of both parties. A duly conducted agreement protects both parties and is one of the arguments in a case of dispute. This application of the digital signature is possible in so-called closed systems. This method is used in electronic banking services at present.
• Triangle of trust – in open systems, the owner of the key often has no possibility to meet the verifier of the signature to make an agreement concerning the relevant public key. In this case it is suitable to use the third party principle.
• Certification – a dedicated authority assures the unequivocal identification of the public key with a concrete person (its owner), on the basis of a certified application of the owner. In this application the basic identification data and the relevant public key are listed.
Therefore, the necessary condition for the active employment of electronic signature technology, which allows the transition to electronic document interchange in open systems, is the existence of certification authorities. The basic services defined by the e-signature mechanisms include:
• Registration services – contact with the certificate applicant, verification of data conformity (data in the application form for the issue of the certificate and data concerning the identity of the applicant).
• Issuing of certificates – on the basis of an agreement with the applicant and verification of all necessary data, the certificate for the public key is issued.
• Revocation of certificates and publication of the cancelled certificates list – in case an unauthorized person obtains the private key, it is required to cancel the certificate before its validity date. The certification authority is obliged to keep and publish lists of valid and cancelled certificates.
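To illustrate the certificate handling side (again only our own sketch under the assumption of standard X.509 certificates, not project code), the Java platform API can load a certificate and check it against its validity dates; checking it against a cancelled certificates list would additionally require a CRL lookup:

import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class CertCheck {
    public static void main(String[] args) throws Exception {
        // Load an X.509 certificate from the file given as first argument.
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert = (X509Certificate)
                cf.generateCertificate(new FileInputStream(args[0]));

        cert.checkValidity();   // throws if expired or not yet valid
        System.out.println("Subject: " + cert.getSubjectDN());
        System.out.println("Issuer:  " + cert.getIssuerDN());
    }
}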
However, financial service providers have their own security policies. Especially the larger financial institutions are hard to convince to adapt their policies as a precondition for using the FLUID-WIN Platform. Rather, it is likely that the FLUID-WIN Platform has to accept the policies of the respective financial service providers, even if this means that FLUID-WIN has to implement a set of different security mechanisms depending on the requests of each financial service provider.
4 Conclusion
E-services like web-banking, web-shopping, web-auctions, e-government, e-health, e-manufacturing and e-learning are becoming part of everyday life for citizens everywhere. A lack of trust, as a basis for deciding to use a service, is becoming a major impediment. Filling the gap between identities and their level of trust is one of the eight major issues in developing identity management for the next generation of distributed applications and use of e-services [24]. A lot of interesting questions and problems are considered in the recent publications [4], [5], [10], [12] and can be found in the public deliverables of the projects [9], [22], [23], [25] and [26].
Acknowledgements
This work has been partly funded by the European Commission through the IST Project SEAMLESS: Small Enterprises Accessing the Electronic Market of the Enlarged Europe by a Smart Service Infrastructure (No. IST-FP6-026476) and the IST Project FLUID-WIN: Finance, logistics and production integration domain by web-based interaction network (No. IST-FP6-027083). The authors wish to acknowledge the Commission for their support. We also wish to express our gratitude and appreciation to all the Seamless and Fluid-Win project partners for their contribution during the development of the research presented in this paper.
References
[1] Atif, Y.: Building Trust in E-Commerce. IEEE Internet Computing, Jan-Feb (2002) 18-24
[2] Ba, S., Pavlou, P.A.: Evidence of the effect of trust building technology in electronic markets: Price premiums and buyer behaviour. MIS Quarterly Vol. 26 No. 3 (2002) 243-268
[3] Chopra, K., Wallace, W.: Trust in Electronic Environments. Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS) (2003)
[4] Delina, R., Azzopardi, J., Bucko, J., Frank, T., Mihok, P.: Financial Services in Web-based Platforms. In: Managing Worldwide Operations and Communications with Information Technology, Proc. IRMA Conference Vancouver (Canada), ed. Khosrow-Pour, M. (2007) 1273-1274
[5] Delina, R., Mihok, P.: Trust building processes on web-based information-sharing platforms. In: Proceedings of the 13th International Conference on Concurrent Enterprising (ICE’2007), eds. K.S. Pawar, K.-D. Thoben and M. Pallot, Sophia Antipolis, France (2007) 179-186
[6] Diffie, W., Hellman, M.: New directions in cryptography. IEEE Transactions on Information Theory (1976). Available at: http://crypto.csail.mit.edu/classes/6.857/papers/diffie-hellman.pdf (visited 11.10.2007)
[7] EU Commission: Open consultation on “Trust barriers for B2B e-marketplaces”. Presentation of the main results (2002). Available at: http://europa.eu.int/comm/enterprise/ict/policy/b2bconsultation/consultation_en.htm (visited 25.05.2007)
[8] Farhoomand, A., Lovelock, P.: Global e-Commerce – Texts and Cases. Prentice Hall (2001)
[9] FLUID-WIN: Finance, logistics and production integration domain by web-based interaction network. FP6 IST STREP 27083 funded by the European Commission. Available at: www.fluid-win.de
[10] Frank, T.G., Mihók, P.: Trust within the Established Inter-Organizational Information Sharing System. In: Managing Worldwide Operations and Communications with Information Technology, Proc. IRMA Conference Vancouver (Canada), ed. Khosrow-Pour, M. (2007) 132-135
[11] Friedman, B., Kahn, P., Howe, D.: Trust Online. Communications of the ACM, Vol. 43, No. 12 (2000) 34-40
[12] Giuliano, A., Azzopardi, J., Mihók, P., Bucko, J., Ramke, Ch.: Integration of financial services into multidisciplinary Web platforms. To appear in: Ambient Intelligence Technologies for the Product Lifecycle: Results and Perspectives from European Research, IRB Stuttgart (2007)
[13] Grandison, T., Sloman, M.: A survey of trust in Internet applications. IEEE Communications Surveys and Tutorials, 4(4) (2000) 2-16
[14] IEEE Standard computer dictionary: A compilation of IEEE standard computer glossaries. Institute of Electrical and Electronics Engineers, New York (2007)
[15] Jones, S., Wilikens, M., Morris, P., Masera, M.: Trust requirements in e-business: A conceptual framework for understanding the needs and concerns of different stakeholders. Communications of the ACM, Vol. 43, No. 12 (2000) 81-87
[16] Lee, M., Turban, E.: A Trust Model for Consumer Internet Shopping. International Journal of Electronic Commerce, Vol. 6, No. 1 (2001)
[17] McKnight, D.H., Chervany, N.L.: The Meanings of Trust. MISRC 96-04, Management Information Systems Research Center, University of Minnesota (1996)
[18] Nicholson, C., Compeau, L., Sethi, R.: The Role of Interpersonal Liking in Building Trust in Long-Term Channel Relationships. Journal of the Academy of Marketing Science, Vol. 29, No. 1 (2001) 3-15
[19] Rabe, M., Mussini, B.: ASP-based integration of manufacturing, logistic and financial processes. In: XII. Internationales Produktionstechnisches Kolloquium, Berlin, 11.-12. October 2007, pp. 473-487
[20] Raisch, W.: The E-Marketplace – Strategies for Success in B2B Ecommerce. McGraw-Hill (2001)
[21] Schneier, B.: Security in the real world. How to evaluate security technology. Computer Security 15/4 (1999) 1-14
[22] SEAMLESS: Small enterprises accessing the electronic market of the enlarged Europe by a smart service infrastructure. FP6 IST STREP 26476 funded by the European Commission. Available at: www.seamless-eu.org
[23] SECURE: Secure environments for collaboration among ubiquitous roaming entities. Available at: http://www.dsg.cs.tcd.ie/dynamic/?category_id=-30 (visited 01.11.2007)
[24] Song, R., Korba, L., Yee, G.: Trust in E-Services: Technologies, Practices and Challenges. Idea Group Publishing (2007)
[25] SWAMI: Safeguards in a world of ambient intelligence, final report (2006). Available at: http://swami.jrc.es (visited 01.06.2007)
[26] TRUSTe: Security guidelines 2.0 (2005). Available at: http://www.truste.org/pdf/SecurityGuidelines.pdf (visited 23.05.2007)
[27] UNMC: Glossary (2007). www.unmc.edu/ethics/words.html (visited 31.05.2007)
[28] WordNet: WordNet search: Trust (2007). Available at: http://wordnet.princeton.edu/perl/webwn?s=trust (visited 21.05.2007)
Prototype to Support Morphism between BPMN Collaborative Process Model and Collaborative SOA Architecture Model
Jihed Touzi, Frederick Bénaben, Hervé Pingaud
Centre de Génie Industriel, Ecole des Mines d’Albi-Carmaux, Route de Teillet, 81013 ALBI Cedex 9
{Jihed.touzi, Frederick.benaben, Herve.pingaud}@enstimac.fr
Abstract. In a collaborative context, the integration of industrial partners deeply depends on the ability to use a collaborative architecture to interact efficiently. In this paper, we propose to tackle this point under the assumption that the partners of the collaboration respect SOA (Service-Oriented Approach) concepts. We propose to design such a collaborative SOA architecture according to MDA (Model-Driven Approach) principles. We aim at using a business model (the needs) to design a logical model of a solution (logical architecture). The business model is a collaborative business model (in BPMN, at the CIM level), while the logical model is a collaborative architecture model (in UML, at the PIM level). This paper presents the theoretical aspects of this subject, the mechanisms of morphism and the dedicated translation rules. Finally, we show the prototype of a demonstration tool embedding the transformation rules and running those principles. Keywords: transformation, ATL prototype, collaborative process, meta-model, morphism.
1 Introduction
The application of model-driven development facilitates faster and more flexible integration by separating system descriptions into different levels of abstraction. The global MDA approach shows that it is possible to separate concerns by splitting implementation choices from the specifications of business needs. Specifically, the Model Driven Interoperability (MDI) paradigm [1][2][3] proposes to start from an Enterprise Modelling level, that is the Computation Independent Model (CIM) level, defining the collaboration needs of a set of partners, and to reach a Platform Independent Model (PIM) level defining a logical architecture of a collaborative solution. Finally, the Platform Specific Model (PSM) can be generated. The three models are in close connection, and passing from one layer to another must be facilitated by vertical transformation rules. Previous research works have shown the benefits of this new paradigm. The PIM4SOA project [4] defines a PIM meta-model based on SOA. In this work, it is possible to generate a PIM model from a Processes, Organization, Product, * (POP*) model. The weak point of this work is that there is no description of the needed morphism between the CIM model and the PIM model. Other research works like [5] and [6] focus principally on the identification of the two meta-models (CIM and PIM). The morphism (which contains the definition of the transformation rules) is missing or only briefly described. In this paper we intend to identify a morphism between a collaborative process model and a collaborative SOA architecture. In our PhD work [3] we are currently developing a prototype that transforms a BPMN collaborative process into a collaborative SOA architecture. The generated UML model can be enriched with additional knowledge about the collaboration (services description, exchanged messages details, etc.). This paper is organized as follows. Section 2 defines the morphism between two models, while section 3 defines the needed meta-models of the BPMN-to-collaborative SOA architecture transformation. The transformation rules are illustrated in section 4, and finally we present in section 5 the architecture of the prototype developed to support the defined morphism.
2 Definition of a Morphism
We aim to establish a morphism between a BPMN collaborative process model and a collaborative SOA architecture model. Fig. 1 shows a schematic representation of the notion of morphism: A and B represent the source and target models respectively, and M is the morphism.
Fig. 1. Morphism between A and B models
A morphism allows a model B to be obtained from a model A. It is based on the concepts of mapping and transformation [7]. If we consider that a model is composed of a set of elements:
- Mapping is a relation that establishes correspondences between the elements of two models without modifying them. Defining a mapping requires the availability of both models, and first of all the meta-models that define those models.
- Transformation is a function that transforms (or modifies) a source model into a target model. The source model is left unchanged and a new model, the result of the transformation, is generated.
Mappings can be of different types [7]: a 1-to-1 mapping puts one element of a model in correspondence with exactly one element of the other model. However, there are other cases in which a single element of the source model is mapped to a sub-graph of the second model (1-to-n) or, even, a sub-graph to a sub-graph (m-to-n).
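To make the distinction concrete, the following Java sketch (our own illustration, not part of the prototype; all type and element names are hypothetical) represents a mapping as a relation between sets of source elements and sets of target elements, so that the 1-to-1, 1-to-n and m-to-n cases are all covered:

    import java.util.*;

    // Hypothetical, minimal representation of a mapping between two models.
    // A model is treated as a set of named elements; the mapping relates
    // groups of source elements (sub-graphs) to groups of target elements.
    final class Mapping {
        // m = n = 1 gives a 1-to-1 mapping, m = 1 and n > 1 gives a 1-to-n
        // mapping, and m > 1 with n > 1 gives an m-to-n mapping.
        record Correspondence(Set<String> sourceElements, Set<String> targetElements) {}

        private final List<Correspondence> correspondences = new ArrayList<>();

        void relate(Set<String> source, Set<String> target) {
            correspondences.add(new Correspondence(source, target));
        }

        public static void main(String[] args) {
            Mapping m = new Mapping();
            // 1-to-1: a BPMN pool corresponds to one partner service.
            m.relate(Set.of("Pool:Supplier"), Set.of("Service:SupplierService"));
            // 1-to-n: a BPMN task corresponds to an activity plus a service.
            m.relate(Set.of("Task:CheckPayment"),
                     Set.of("BasicActivity:CheckPayment", "Service:PaymentService"));
        }
    }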
3 Definition of the Meta-Models
In this section, we define the meta-models needed for the BPMN-to-collaborative-architecture transformation.
3.1 Definition of the BPMN Collaborative Process Meta-Model
The first meta-model is that of the collaborative process. The meta-model of Fig. 2 regroups basic BPMN elements (like gateways, events, message flows, etc.) and other specialized components (like pools or tasks that explicitly refer to collaboration entities). The BPMN formalism aims to support process management for technical and business stakeholders by providing a graphical notation that is rather intuitive and able to represent complex process semantics. As a specialized element, the Collaborative Information System (CIS) pool refers to a mediation actor of the collaboration that offers a set of added-value services, for example choosing a supplier from a list of suppliers concerned with a customer order, checking payment transactions, etc. The defined meta-model respects our vision of the collaboration, based on the use of a mediator that handles the interoperability issues between partners.
Fig. 2. Collaborative process meta-model (source meta-model)
3.2 Definition of the Collaborative SOA Architecture Meta-Model
The collaborative SOA architecture meta-model described in Fig. 3 is close to, and inspired by, the PIM4SOA meta-model [4]. Three packages are proposed, corresponding to three views:
- Services view: the services that are used in the collaboration are described; they are business-reachable computing functionalities with a known location on the communication network.
- Information view: data are exchanged by messages between services; they are defined here in their structure by a data model, and also as a communication utility by identification of the emission and reception services.
- Process view: interaction amongst services and coordination aspects are specified by the control of the processes described here.
Fig. 3. Meta-model of the SOA collaborative architecture (target meta-model)
4 Definition of the Transformation Rules
The source and target meta-models of the morphism are further detailed and justified in [3]. Due to space constraints, it is not possible to explain here the objects of each meta-model and the relations that link them. However, we hope that the following discussion, dedicated to the mappings between those meta-models that lead to the transformation rules, will shed some light on the equivalences that are used. The transformation rules are classified in two categories:
1. Basic generation rules are applied first, to create the elements of the target model. Most of these rules are defined by a direct mapping between meta-model elements.
2. Binding rules are applied second, to draw the links between the elements resulting from the previous phase. Existing relations in the source model are transformed into relations in the target model, as illustrated by the sketch below.
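The following Java sketch (our own naming assumptions, not the actual prototype code) illustrates this two-phase application: a transformation engine first runs all generation rules to create the target elements, and only then runs the binding rules over the generated elements:

    import java.util.*;

    // Hypothetical two-phase transformation driver: generation first, binding second.
    final class TwoPhaseTransformation {
        interface GenerationRule { List<String> generate(String sourceElement); }
        interface BindingRule { void bind(List<String> targetElements); }

        List<String> run(List<String> sourceModel,
                         List<GenerationRule> generationRules,
                         List<BindingRule> bindingRules) {
            List<String> targetModel = new ArrayList<>();
            // Phase 1: basic generation rules create target elements from source elements.
            for (String element : sourceModel)
                for (GenerationRule rule : generationRules)
                    targetModel.addAll(rule.generate(element));
            // Phase 2: binding rules draw the links between the generated elements,
            // reproducing the relations that exist in the source model.
            for (BindingRule rule : bindingRules)
                rule.bind(targetModel);
            return targetModel;
        }
    }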
4.1 Basic Generation Rules
Fig. 4, Fig. 5 and Fig. 6 summarise the set of rules (also called derivation laws) that are applied during the transformation. The rules are represented by circles located between two class diagrams. The class diagrams are subgraphs of the original meta-models: on the left part of each figure is the subgraph of the source meta-model, and on the right part is the subgraph of the target meta-model. The rules have to be interpreted in the following manner: when an object identified in the collaborative process model belongs to the meta-model class of the left-side subgraph linked to the rule, it will be transformed into an object instantiated from the class on the right side of the figure; that is, it will become an object of the SOA collaborative architecture.
The service view of the SOA collaborative architecture is represented in Fig. 4. On the left part, the pool and lane classes are mapped onto the different service entities of the right part (partner or CIS services). Rule Rs1 gives the links from tasks in the collaborative process model to services listed in the registries, either specific to the collaboration or generic ones. Rules Rs2 to Rs4 provide solutions for the structure and organisation of services. Rs5 shows the need for additional knowledge to complete the service descriptions.
With the same logic, Fig. 5 introduces the two transformation rules applied for the information view. As indicated before, the transformation is not sufficiently developed in this domain (a business process is not the right place to model the information of the collaboration). The transformation provides syntactic indications that help to create business objects (rule Ri1 and part of Ri2). However, the problem of translation refers to semantic interpretation, which we do not include in this part of the study (the remaining part of Ri2 is probably not a robust solution).
Fig. 4. Localisation of transformation rules for basic generation of the service view
Fig. 5. Localisation of transformation rules for basic generation of the information view
In contrast, Fig. 6 is the most developed part of the transformation procedure, with nine rules. The "process view" package has been designed using specifications of the BPEL meta-model language. BPEL (Business Process Execution Language) is one of the most popular candidates for the specification of web service process execution. Some of the rules in Fig. 6 are adaptations of recommendations provided by BPMI to address the problem of converting BPMN graphs into well-defined BPEL XML sentences [8]. This concerns rules Rp3 to Rp6 and rules Rp8 to Rp9. Rules Rp1, Rp2 and Rp7 participate in the definition of coordination activities.
Fig. 6. Localisation of transformation rules for basic generation of the process view
4.2 Binding Rules
The binding rules are used to build the interactions between the elements of the target model resulting from the application of the basic generation rules. The links may be located inside one package of the target model or between two different packages (dependence). The goal is to define in the target model the necessary relations that exist in the source model. The relations may be of different types, such as inheritance, composition, aggregation or simple association. Three binding rules, Rb1 to Rb3, are given in the following as an example:
Rb1 (sequence ordering): a sequence element issued from rule Rp3 is associated with two basic activities in the same process package.
Rb2 (information processing): a service from the service package is related to a business object of the information package.
Rb3 (service identification): a basic activity of the process package is linked to a service of the service package.
5 Prototype Development
Fig. 7. Technical architecture of the developed prototype
Fig. 7 shows the technical architecture of the prototype, developed to implement our proposition. It is based on three open source tools that run on the Eclipse platform. Intalio Designer is a BPM tool that helps the user to specify the BPMN model. The ATLAS Transformation Language (ATL) engine takes as input a process model in XML format coming from the BPM tool, and produces as output the UML model of the collaborative architecture. It is the heart of our transformation prototype. The TOPCASED tool is a computer-aided software environment that provides graphical edition of the UML model. The ATL tool allows generating a UML model from an XML file that represents the process model. The rules presented previously are coded in ATL. The following gives an example of the ATL code.
5.1 Examples of ATL Code
The following ATL code generates the structure of the collaborative architecture (three packages: services, information and processes):

    rule generatePackages {
        from a : BPMN!Collaborativeprocess
        to out : UML2!Model (
            name <- a.name,
            packagedElement <- Sequence {srv, inf, prc}
        ),
        srv : UML2!Package ( name <- 'services' ),
        inf : UML2!Package ( name <- 'information' ),
        prc : UML2!Package ( name <- 'processes' )
    }
<Pi,k, {AttrExpi,k}> + <Pj,l, {AttrExpj,l}>
  = <Pi,k + Pj,l, {AttrExpi,k} ∪ {AttrExpj,l}>
  = <Pi,k + Pj,l, {AttrExpi,k ∧ AttrExpj,l}>
  = <Pi,k + Pj,l, {(Attri,k AttrOPi,k AttrValuei,k) ∧ (Attrj,l AttrOPj,l AttrValuej,l)}>
  = <Pi,k + Pj,l, {Attri,k AttrOPi,k (AttrValuei,k ⊕ AttrValuej,l)}>

The semantics of AttrValuei,k ⊕ AttrValuej,l is also introduced in Table 1.
3) Semantics of Service Level Choice Composition
QServicek ::= QServicei | QServicej
QServicei | QServicej = Qi ∪ Qj
  = {<Pi,1, {AttrExpi,1}>, …, <Pi,n, {AttrExpi,n}>} ∪ {<Pj,1, {AttrExpj,1}>, …, <Pj,m, {AttrExpj,m}>}
  = {<Pi,1, {AttrExpi,1}>, …, <Pi,n, {AttrExpi,n}>, <Pj,1, {AttrExpj,1}>, …, <Pj,m, {AttrExpj,m}>}
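As a minimal Java sketch of the sequential composition above (our own rendering, assuming that a service level carries a price and a set of attribute constraints, and that sequential composition adds the prices while conjoining the constraint sets; all names are hypothetical):

    import java.util.*;

    // Hypothetical sketch of service level composition.
    record AttrExp(String attr, String op, double value) {}

    record ServiceLevel(double price, Set<AttrExp> constraints) {
        // Sequential composition: <P1, {E1}> + <P2, {E2}> = <P1 + P2, {E1 ^ E2}>.
        ServiceLevel sequence(ServiceLevel other) {
            Set<AttrExp> merged = new HashSet<>(constraints);
            merged.addAll(other.constraints);
            return new ServiceLevel(price + other.price, merged);
        }
    }

    class Composition {
        // Choice composition: the union of the two service level sets.
        static Set<ServiceLevel> choice(Set<ServiceLevel> a, Set<ServiceLevel> b) {
            Set<ServiceLevel> union = new HashSet<>(a);
            union.addAll(b);
            return union;
        }
    }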
6 Experiments and Evaluation
We have implemented this approach in OnceServiceExpress, a Web Service platform developed by the Institute of Software, Chinese Academy of Sciences. On OnceServiceExpress, we study the performance of our preference model and the effectiveness of our service level matchmaking mechanism.
6.1 Experimental Setup
Service consumers and service providers run on two machines respectively, each with a Pentium IV 2.8 GHz CPU, 512 MB of memory, the Windows Professional operating system and JDK 5.0. The number of service consumers ranges from 1 to 200, and each service consumer has different preferences for a particular service level. The composite service is composed of four component services. The number of available services for a component service ranges from 1 to 200. Every available component service has 10 service levels.
6.2 Experiments and Evaluation
First, we study the performance of our preference model compared with WS-Policy. In the first experiment, we fix the number of available services of each component service at 200 and study how the performance changes as the number of service consumers increases from 1 to 200. In the second experiment, we fix the number of service consumers at 200 and study how the performance changes as the number of available services of each component service increases from 1 to 100. Figure 3 a) and b) show the experiment results respectively.
Fig. 3. Matchmaking time (ms) of our preference model versus the WS-Policy model: a) influence of the number of consumers; b) influence of the number of available services
Figure 3 a) shows that the matchmaking time using our preference model increases linearly as the number of service consumers increases, while the matchmaking time using the WS-Policy model increases exponentially. Figure 3 b) shows a similar result when the number of available services increases. As illustrated in Figure 3, the preference model presented in this paper is very efficient.
Then, we verify the effectiveness of our service level matchmaking approach. We evaluate the service consumer's preference by varying the available services and by varying the service levels of the available services. Each component service has 200 available services and each available service has 10 different service levels. As shown in Figure 4(a), we vary the changing rate of available services from 1 to 100 per second. We notice that as the changing rate increases, our service level matchmaking model maintains a high level of consumer preference without any reduction, while in the static model the service consumer preference decreases constantly. This is mainly because, when the changing rate of available services increases, the service levels provided by the composite service change rapidly, and the probability that the service level matched by the static model conforms to the service consumer's preference becomes ever smaller. Figure 4(b) shows the influence of changing the service levels of the available services. The changing rates of service levels are set in the range from 1 to 100 per second. We can see that with our model the service consumer preference remains at a high level as the changing rate of service levels increases. Compared with Figure 4(a), the service consumer preference decreases more slowly in this situation in the static model. The reason is that the service levels provided by the composite service change more slowly when only the service levels of the available services change. As a result, the probability that the service level selected by the static model conforms to the service consumer's preference decreases slowly.
Fig. 4. Consumer preference of our matchmaking model versus the static matchmaking model: a) number of changing available services per second; b) number of changing service levels of available services per second
As shown in the above experiments, our proposed service level matchmaking mechanism can effectively satisfy the service consumer's preference.
7 Conclusion
In this paper, we present a service consumer preference model. The preference model considers both the service level and the price of the service level by introducing the notion of acceptable price. Moreover, the model achieves better efficiency by avoiding service level enumeration. We also propose an automatic service level
generation approach for composite services. Based on the service levels of the component services and the composite service structure expression, this approach can determine the service levels of a composite service automatically and dynamically. After that, a service level matchmaking model is proposed. In contrast to other work in this area, our approach can maintain a high level of consumer preference because it considers the changing of available component services and determines their service levels dynamically. Experiment results show that this approach is efficient and effective. In future work, we plan to optimize our service matchmaking mechanism. On one hand, we plan to study the influence boundary when an available component service or a service level becomes invalid; on the other hand, we plan to analyze the incremental influence when adding a new available service.
Acknowledgement
This work is supported by the National Natural Science Foundation of China under Grant No. 60673112, the National Grand Fundamental Research 973 Program of China under Grant No. 2002CB312005 and the High-Tech Research and Development Program of China under Grant No. 2006AA01Z19B.
Ontological Support in eGovernment Interoperability through Service Registries Yannis Charalabidis National Technical University of Athens, 9 Iroon Polytechniou, Athens, Greece [email protected]
Abstract. As electronic Government is increasing its momentum internationally, there is a growing need for the systematic management of newly defined and constantly transforming services. eGovernment Interoperability Frameworks usually cater for the technical standards of eGovernment systems interconnection, but do not address service composition and use by citizens, businesses or other administrations. An Interoperability Registry is a system devoted to the formal description, composition and publishing of traditional or electronic services, together with the relevant document and process descriptions in an integrated schema. Through such a repository, the discovery of services by users or systems can be automated, resulting in an important tool for managing eGovernment transformation towards achieving interoperability. The paper covers the methodology and tools used for developing such a system for the Greek Government, and goes on to its population with services and documents, its application, and the extraction of useful conclusions for electronic Government transformation at a global level. Keywords: eGovernment Interoperability, Enterprise modeling for interoperability, Metadata for interoperability, eGovernment Ontology
1 Introduction
In the dawn of the 21st century, where system complexity, multiplicity and diversity in the public sector are posing extreme challenges to common interoperability standards, eGovernment Interoperability Frameworks (eGIFs) stand as a cornerstone for the provision of one-stop, fully electronic services to businesses and citizens [16]. Such interoperability frameworks aim at outlining the essential prerequisites for joined-up and web-enabled Pan-European e-Government Services (PEGS), covering their definition and deployment over thousands of front-office and back-office systems in an ever extending set of public administration organizations.
Embracing central, local and municipal government, eGovernment Interoperability assists Public Sector modernization at the business, semantic and technology layers. As more and more complex information systems are put into operation every day, the lack of interoperability appears as the most long-lasting and challenging problem for governmental organizations, a problem that emerged from the proprietary development of applications, the unavailability of standards, or heterogeneous hardware and software platforms.
2 Background and Scope
In order to effectively tackle the transformation of Public Administration, the European Union has set key relevant priorities in its "i2010 eGovernment Action Plan" [15]. At national level, most European Union Member States have produced their own National Digital Strategies (e.g. the Greek Digital Strategy 2006-2013 [25], or the Estonian Digital Strategy [11]), which include measures and strategic priorities aimed at developing eGovernment. Within this context, most countries have tried to face the interoperability challenge with the adoption of national e-GIFs covering areas such as data integration, metadata, security, confidentiality and delivery channels, which fall into the technical interoperability layer. Such frameworks have issued "sets of documents" guiding system design, but have not developed to date appropriate infrastructures, such as repositories of XML schemas for the exchange of specific-context information throughout the public sector – observed only partially in the United Kingdom's e-GIF Registry [6] and the Danish InfoStructureBase [8]. Furthermore, as shown in recent eGovernment Framework reviews [7, 14], there exists no infrastructure proposal for constructing, publishing, locating, understanding and using electronic services by systems or individual users. In order to take full advantage of the opportunities promised by e-Government, a second-generation interoperability frameworks era, launching "systems talking about systems" and addressing issues related to unified governmental service and data models, needs to commence. As presented in the next sections of this paper, such an Interoperability Registry infrastructure should consist of:
- An eGovernment Ontology, able to capture the core elements and their relations, thus representing services, documents, providing organizations, service users, systems, web services and so on.
- A metadata schema, extending the eGovernment Ontology and providing various categorization facets for the core elements, so as to cover information insertion, structuring and retrieval.
- Formal means for describing the flow of processes, either still manual or electronic, and the structure and semantics of the various electronic documents exchanged among public administrations, citizens and businesses.
- An overall platform integrating data storage, ontology management, enterprise modelling and XML authoring, data input and querying mechanisms, as well as access control and presentation means.
- The population of the eGovernment Ontology database with information about administrations, their systems, services and documents, which is an important step. Since this task usually involves gathering huge amounts of information, an initial set of data should be considered first: this way, population achieves a critical mass, while automatic knowledge acquisition tools are being developed.
3 Defining the eGovernment Ontology
The representation means of the proposed system should first capture the core elements of the domain, together with their main relationships. Most existing approaches for eGovernment ontologies cover neighboring domains, such as Public Administration Knowledge [12, 28], Argumentation in Service Provision [22], eGovernment projects [23], or types and actors in national governments [24]. As depicted in Fig. 1, the proposed eGovernment Ontology provides for the representation of the following core elements:
- Services, of various types, provided by Administrations towards citizens, businesses or other administrations.
- Documents, in electronic or paper form, acting as inputs or outputs of services.
- Information Systems, being back-office, front-office or web portals providing the electronic services.
- Administrations, nested at infinite hierarchical levels, being ministries, regions, municipalities, organizations or their divisions and departments.
- Web Services, being electronically provided services, either final or intermediate ones (contributing to the provision of final services).
- XML (eXtensible Markup Language) Schema Definitions, for linking the formal representation of data elements and documents.
- BPMN (Business Process Modelling Notation) Models, for linking services with their workflow models.
- WSDL (Web Services Definition Language) Descriptions, linking Web Services with the respective systematic, machine-readable description of their behaviour.
[Figure: the core elements Administration, Information System, Document, Service, Web Service, XML Schema, BPMN Model and WSDL Description, connected by the relations isControlledBy, isPartOf, operates, issues, provides, isSupportedBy, isProvidedBy, relatesTo and hasDescription.]
Fig. 1. Core Elements of the Ontology
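For illustration only (class and field names are our own, not part of the registry implementation), the core elements and relations of Fig. 1 could be rendered as a simple Java object model:

    import java.util.*;

    // Hypothetical object rendering of the core ontology elements of Fig. 1.
    class Administration {
        Administration isPartOf;                 // nested hierarchical levels
        List<InformationSystem> operates;
        List<Document> issues;
        List<Service> provides;
    }
    class InformationSystem { List<Service> supports; }
    class Document  { Document isPartOf; String xmlSchema;  List<Service> relatesTo; }  // hasDescription: XML Schema
    class Service   { Service isPartOf;  String bpmnModel;  List<WebService> relatesTo; } // hasDescription: BPMN Model
    class WebService { String wsdlDescription; }                                          // hasDescription: WSDL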
Additional objects complementing the core ontology elements are Citizens (as various types of citizens requesting services), Enterprises (both as service recipients and as contractors for government projects), Legal Framework Elements (that guide service provision), Life Events and Business Episodes that may trigger a service request, and Technical Standards affecting the provision of electronic services.
3.1 Metadata standards for multi-faceted classification
The eGovernment Ontology is supported by numerous categorization facets and standardized lists of values for systematically structuring database contents during the population phase, including types of services and documents (according to the Government Category List – GCL categorization). All the core elements of the eGovernment ontology have predefined metadata, so that their description, search and retrieval can be assisted. The implemented metadata structure is based on, and extends, a number of existing metadata structures in literature and practice, namely:
- The Dublin Core metadata standard [10], which provides a generic set of attributes for any government resource, be it document or system, including various extensions [20].
- The European Interoperability Framework – EIF (Version 1.0) [16], published by the IDABC Programme.
- The United Kingdom e-Government Interoperability Framework [3] and its relevant specifications (e.g. the e-Government Metadata Standard [4], the e-Government Strategy Framework Policy and Guidelines [5], and the relevant Schema Library [6]).
- The German Standards and Architectures for e-Government Applications (SAGA) Version 3.0 (October 2006) [17], which identifies the necessary standards, formats and specifications, sets forth conformity rules and updates them in line with technological progress.
- The Danish Interoperability Framework (Version 1.2.14) [9], released in 2006, which includes recommendations and status assessments for selected standards, specifications and technologies used in e-government solutions, while the collaboration tool InfoStructureBase [8] includes an international standards repository containing business process descriptions, data model descriptions, interface descriptions, complex XML schemas and schema fragments.
- The Belgian Interoperability Framework (BELGIF) [2], which is built on a wiki collaborative environment and has released recommendations on web accessibility and on the realization of XML Schemas, apart from a list of approved standards.
The resulting metadata definitions cover all the important facets for classifying and querying the elements of the ontology, so as to provide answers to important questions around the status of electronic provision of services, the existence and structure of documents, the relation of services with public administrations, the characteristics of the various governmental information systems and so on. Table 1 shows the metadata definitions for the Service element, indicating which of them are represented as strings, numbers, lists of values or structured elements themselves.

Table 1. Services Metadata

Field: Description, Type
Code: The Service Code, Unique, String
Title: The Service Title, Unique, String
Providing Administration: The Administration (organization, department or office providing the service), Element
Engaged Administration: Other Administrations, taking part in the service provision, Multi-Element
Final Service: Yes/No, if it is a final service, giving output to the citizens or businesses, List
Beneficiary Type: Citizens or Businesses or subtypes of them, Element
Type: The Service Type (e.g. Registration, Benefit, Application, Payment, etc.), List
Category: Service Category, according to GCL (e.g. social service, taxation, education), Element
Life Event: The associated Life Event, Element
Business Event: The associated Business Event, Multi-Element
Legal Framework: The applying legal framework for the service, Multi-Element
Ways of Provision: Manual, Internet, SMS, I-TV, etc., Multi-List
Electronic Provision Level: Level of electronic provision (1 to 4), according to the EC standardization
Multilingual Content: Languages in which the content for the service exists, Multi-List
Manual Authentication Type: Type of authentication needed when the service is provided in a manual way (e.g. presence in person, id-card, proxy), List
Electronic Authentication Type: Type of authentication needed when the service is provided electronically (e.g. username/password, token, digital signature), List
Frequency: Frequency with which the service is requested, by means of High-Medium-Low, List
Web Site: The URL of the portal providing the service, String
International Policy: Yes/No, if the service is included in the i2010 20 core-services set, List
National Policy: Yes/No, if the service is included in the National Digital Strategy core-services set, List
Information Source: The source(s) of information for the service, String
Date of last Update: The date of last sampling, Date
Relevant, extensive metadata description fields exist for Documents, Administrations, Information Systems and Web Services, providing an indication of the descriptive power of the Ontology.
4 Combining Processes and Data
The description of Services and Documents cannot be complete without a formal representation of the service flow and of the documents' internal structure. The importance of a formal, combined description of services and document schemas has been properly identified in the current literature [22, 13]. Business modeling and analysis of the processes, and of the public documents that take part in their execution, is done using the BPMN notation [18] and the ADONIS modeling tool, provided by BoC International [21]. On top of the ADONIS tool, integration with the eGovernment Ontology (Services, Documents, Administrations) has been implemented, ensuring a complete, interoperable data schema. As shown in Fig. 2, eGovernment processes are modeled using the BPMN notation, resulting in easy identification of the documents to be exchanged, the decisions taken during the service flow by citizens/businesses or administrations, and the specific activities or information systems that take part in the overall process execution – in this case the electronic VAT declaration from an enterprise towards the Tax Authority.
Fig. 2. VAT Declaration Model
The design of the data schemas involved in the execution of the processes under consideration has been performed with the use of the UN/CEFACT CCTS methodology [27], for the creation of common components among the various governmental documents that have been identified through process modeling. Then, following the modeling and homogenization of data components, Altova XML authoring tools [1] have been used for defining the final XSD descriptions representing business documents of all types. The final XSD files have been linked with the respective governmental documents of the ontology, resulting in a comprehensive and easy-to-navigate semantic network structure.
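For instance (the file names below are hypothetical), a governmental document instance can be checked against its published XSD with the standard Java validation API:

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.*;

    // Validate a business document instance against its CCTS-based XSD.
    public class DocumentValidation {
        public static void main(String[] args) throws Exception {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("VATDeclaration.xsd"));
            Validator validator = schema.newValidator();
            // Throws SAXException if the instance does not conform to the schema.
            validator.validate(new StreamSource(new File("vat-declaration.xml")));
            System.out.println("Document conforms to the schema.");
        }
    }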
5 The Interoperability Registry Platform
The architecture that implements the Interoperability Registry comprises three layers: (a) the Web-based and UDDI (Universal Description, Discovery and Integration) interfaces for the various groups of users, (b) the tools layer, including ontology management and process and data modeling, and (c) the information repository for interconnected data elements, process models, XML schemas and Web Services descriptions. These three layers, as shown in Fig. 3, are integrated through a Relational Database Management System and the Common Access Control and Application Engine.
[Figure: the Registry Web Site (public access: citizens and businesses), the Registry Intranet (limited access: public administrations and portal builders) and the Registry UDDI Interface (limited access: systems) sit on top of the Common Access Control and Application Engine; the tools layer comprises Process Modeling Tools, Ontology Management, Population & Reporting Tools and XML Management Tools (incl. COTS software); a Relational Database Management System stores the BPMN Process Models, the Services, Documents, Systems & Organisations Metadata, the XML Schemas & Core Components, and the Web Services.]
Fig. 3. Platform Architecture
The front-end platform components are the following:
- The Registry Web Site, found within the Greek eGIF Web Site [26], which publishes the various documents of the eGovernment Framework but also gives access to citizens and businesses for publicly available data.
- The Registry Intranet, accessible to pre-selected public administrations and portal builders, which gives access to the Registry Tools (processes, ontology, XML).
- The Registry UDDI interface, where administrations publish their Web Services or find existing, available Web Services to use through their information systems, constructing truly interoperable, one-stop services, as illustrated by the sketch below.
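As an illustrative sketch of how an administration's system might discover services through the UDDI interface (the endpoint URL and the query pattern are hypothetical), the standard J2EE registry API, JAXR, could be used:

    import java.util.*;
    import javax.xml.registry.*;
    import javax.xml.registry.infomodel.*;

    // Hypothetical JAXR lookup against the Registry UDDI interface.
    public class RegistryLookup {
        public static void main(String[] args) throws JAXRException {
            Properties props = new Properties();
            // Assumed inquiry endpoint of the Interoperability Registry.
            props.setProperty("javax.xml.registry.queryManagerURL",
                              "http://registry.example.gov.gr/uddi/inquiry");
            ConnectionFactory factory = ConnectionFactory.newInstance();
            factory.setProperties(props);
            Connection connection = factory.createConnection();
            BusinessQueryManager bqm =
                connection.getRegistryService().getBusinessQueryManager();
            // Find published web services whose name matches "VAT".
            BulkResponse response = bqm.findServices(
                null, null, Collections.singleton("%VAT%"), null, null);
            for (Object o : response.getCollection()) {
                Service service = (Service) o;
                System.out.println(service.getName().getValue());
            }
            connection.close();
        }
    }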
The Tools layer consists of the process modelling facilities, based on the ADONIS engine, the XML management facilities, based on the Altova XML platform, and the custom-developed ontology management, data entry and reporting tools that integrate all representations and models. Finally, the Data Storage layer incorporates connected database schemas for the ontology instances, the Web Service descriptions in WSDL, the process models, and the XML schemas and Core Components. The development and integration of the whole platform has been performed with the use of the Microsoft Visual Studio .NET suite, using the ASP 2.0/AJAX development paradigm. A parallel installation has also been performed using Java/J2EE/MySQL components.
6 Population of the Repository
The initial population of the Interoperability Registry Repository was greatly assisted by the existence of data in electronic form, through the Greek Ministry of Public Administration. As shown in Table 2, even for a country close to the average European Union Member State population (11,000,000 citizens), the size of the domain is significant, involving thousands of governmental points, services and document types. Furthermore, a plethora of information systems are currently under development, during the new Greek Digital Strategy plan, aiming to achieve full electronic operation of the State by 2013.

Table 2. Size of the Domain in Greece

Organisational Aspect: 18 ministries, 13 prefectures, 52 districts, 1,024 municipalities, 690 public sector organizations; 2,500 Governmental "Points of Service"
Services and Data Aspect: 3,000 non-interoperable Service Types; 4,500 Document Types exchanged; 1,000 IT companies supporting IT systems
Systems Aspect: 300 Central Government Internet Portals; 1,000 Municipal Government Internet Portals; 2,500 Public Administration Back Office Systems
Users Aspect: 750,000 Enterprises (small, medium and large); 11,000,000 Citizens; 18,000,000 Tourists per year
Population of the repository was achieved through the following automated and semi-automated activities:
- Automated import of more than 1,797 administrations, including ministries, prefectures, districts, municipalities and public sector organisations.
- Automated import of 1,009 public service definitions, with core metadata descriptions and frequency indications, stemming out of 3,000,000 service requests by citizens and businesses during the last year.
- Modelling of the core 100 governmental services (including all i2010 services and the services amounting to 85% of the yearly service requests).
- Modelling of the core XML schemas and WSDL for the Web Services to be developed – an activity that is still going on.
The resulting platform is now being maintained and further populated with the assistance of engaged public administrations. The acceptance of the Interoperability Registry by the Public Administration is following a three-stage approach: (a) the core team, including the Ministry of Public Administration and the National eGIF team, (b) the main Public Sector stakeholders, including key ministries, organisations and local administrations and (c) eGovernment project managers and implementation teams, from the public and private sector. Currently, registry users, with various levels of access, exceed 100.
7 Conclusions
The new Greek Interoperability Registry presented in this paper introduces a new system (not a paper-based specification) that will interact with e-Government portals and back-office applications, administration stakeholders, businesses and citizens, guiding eGovernment transformation and ensuring interoperability by design, rework or change. The initial application of the system, as well as the relevant evolutions of other European eGIFs, indicate that new perspectives should be taken into consideration in eGovernment Frameworks from now on, analysed as follows:
x
x
Importance and adequate effort should be put in defining standard, formally described electronic services for businesses and citizens, thus providing clear examples to administrations and service portal developers. The paper-based specification should give way to system-based presentation of the framework, incorporating service descriptions, data definitions, unified domain representation ontologies and metadata in a common repository. Organisational interoperability issues should be supported by a more concrete methodology of how to transform traditional services to electronic flows, with the use of decision-making tools. In this direction, the Interoperability Registry infrastructure presented can be of great assistance as it contains all the necessary information in a comprehensive, welldefined and connected semantic network. The collaboration among European e-Government Interoperability Frameworks is particularly beneficial for the ongoing efforts of individual
countries, since it ensures that lessons from the pioneers' experience are learnt and that the same mistakes will not be repeated.
Future work along the Greek eGIF and the Interoperability Registry includes both organisational and technical tasks, since the proper maintenance and usage of the registry is now the crucial issue. Efforts will therefore be targeting the following objectives:
- Binding with the Central Governmental Portal for citizens and businesses, so that the registry can be used for locating and enrolling in electronic services.
- Completion and publication of additional XML Schemas based on the Core Components methodology.
- Initial training of key staff within administrations for using and extending the registry.
Finally, it has been identified that no system can work without the engagement of the public servants: more effort is to be put towards encouraging stakeholders to interact with the registry and among themselves, building synergies across the public sector authorities in a truly interdisciplinary way, hopefully extending the eParticipation features of the registry.
References
[1] Altova XML-Spy Authorware Tools, www.altova.com
[2] Belgian Interoperability Framework, Belgif, (2007), http://www.belgif.be/index.php/Main_Page
[3] Cabinet Office – e-Government Unit: e-Government Interoperability Framework, Version 6.1, Retrieved February 5, 2007 from http://www.govtalk.gov.uk/documents/e-GIF%20v6_1(1).pdf
[4] Cabinet Office – Office of the e-Envoy: e-Government Metadata Standard, Version 3.1, Retrieved February 5, 2007 from http://www.govtalk.gov.uk/documents/eGMS%20version%203_1.pdf
[5] Cabinet Office – Office of the e-Envoy: Security - e-Government Strategy Framework Policy and Guidelines, Version 4.0, Retrieved February 2007 from http://www.govtalk.gov.uk/documents/security_v4.pdf
[6] Cabinet Office, UK GovTalk Schema Library, http://www.govtalk.gov.uk/schemasstandards/schemalibrary.asp, (2007)
[7] Charalabidis Y., Lampathaki F., Stassis A.: "A Second-Generation e-Government Interoperability Framework", 5th Eastern European e|Gov Days 2007, Prague, Austrian Computer Society, April 2007
[8] Danish e-Government Project, InfostructureBase (2007), http://isb.oio.dk/info
[9] Danish Interoperability Framework, Version 1.2.14, Retrieved February 2007 from http://standarder.oio.dk/English/Guidelines
[10] Dublin Core Metadata Element Set, Version 1.1, Retrieved January 25, 2007 from http://dublincore.org/documents/dces/
[11] Estonian Information Society Strategy 2013, ec.europa.eu/idabc/en/document/6811/254, 30 November 2006
[12] Fraser J., Adams N., Macintosh A., McKay-Hubbard A., Lobo T.P., Pardo P.F., Martinez R.C., Vallecillo J.S.: "Knowledge management applied to e-government services: The use of an ontology", Knowledge Management in Electronic Government, Lecture Notes in Artificial Intelligence 2645: 116-126, Springer-Verlag, 2003
[13] Gong R., Li Q., Ning K., Chen Y., O'Sullivan D.: "Business process collaboration using semantic interoperability: Review and framework", Semantic Web - ASWC 2006 Proceedings, Lecture Notes in Computer Science, Springer-Verlag, 2006
[14] Guijarro L.: "Interoperability frameworks and enterprise architectures in e-government initiatives in Europe and the United States", Government Information Quarterly 24 (1): 89-101, Elsevier Inc, January 2007
[15] The i2010 eGovernment Action Plan, European Commission, http://ec.europa.eu/idabc/servlets/Doc?id=25286, 2007
[16] IDABC, European Interoperability Framework for pan-European e-Government Services, Version 1.0, Retrieved February 5, 2007 from http://europa.eu.int/idabc/en/document/3761
[17] KBSt unit at the Federal Ministry of the Interior, SAGA Standards and Architectures for e-Government Applications, Version 3.0, Retrieved February 5, 2007 from http://www.kbst.bund.de/
[18] OMG Business Process Modelling Notation (BPMN) Specification, Final Adopted Specification, http://www.bpmn.org/Documents/OMG%20Final%20Adopted%20BPMN%2010%20Spec%2006-02-1.pdf
[19] Seng J.L., Lin W.: "An ontology-assisted analysis in aligning business process with e-commerce standards", Industrial Management & Data Systems 107 (3-4): 415-437, Emerald Group Publishing, 2007
[20] Tambouris E., Tarabanis K.: "Overview of DC-based e- -driven eGovernment service architecture", Electronic Government, Proceedings, Lecture Notes in Computer Science 3591: 237-248, Springer-Verlag, 2005
[21] The ADONIS Modeling Tool, BoC International, http://www.boc-group.com/
[22] The DIP project eGovernment Ontology, http://dip.semanticweb.org/documents/D9-3improved-eGovernment.pdf, 2004
[23] The Government R&D Ontology, http://www.daml.org/projects/integration/projects20010811
[24] The Government type Ontology, http://reliant.teknowledge.com/DAML/Government.owl, CIA World Fact Book, 2002
[25] The Greek Digital Strategy 2006-2013 (2007), http://www.infosoc.gr/infosoc/enUK/sthnellada/committee/default1/top.htm
[26] The Greek eGIF Website, available at http://www.e-gif.gov.gr
[27] UN/CEFACT Core Components Technical Specification, Part 8 of the ebXML Framework, Version 2.01, Retrieved January 25, 2007 from http://www.unece.org/cefact/ebxml/CCTS_V2-01_Final.pdf
[28] Wimmer M.: "Implementing a Knowledge Portal for e-Government Based on Semantic Modeling: The e-Government Intelligent Portal (eip.at)", Proceedings of the 39th Annual Hawaii International Conference on System Sciences (HICSS'06), Track 4, p. 82b, 2006
Towards Secured and Interoperable Business Services A. Esper, L. Sliman, Y. Badr, F. Biennier LIESP, INSA-Lyon, F-69621, Villeurbanne, France {alida.esper, layth.sliman, youakim.badr, frederique.biennier}@insa-lyon.fr
Abstract. Due to structural changes in the market, from mass customisation to an increased interest in product-services management, an exponential growth of a service ecosystem will emerge in the coming years. This shift in the economy creates a need for instant and on-demand collaborative organisations, which involves radical changes in the organizational structure of enterprises, increasing the need for business interoperability. Unfortunately, existing enterprise engineering approaches and information systems technologies lack the intrinsic agility and adaptability features required by these service-based collaborative organisations. To overcome these limits, we introduce a new approach called the Enterprise Urbanism Concept to reorganize enterprises into sets of interoperable industrial services. This new approach relies on the extension of the concept of information system urbanism in order to take industrial constraints into account while reorganising service business units. Nevertheless, despite this intrinsic partner reorganisation, instant and on-demand collaborative organisations can be limited by a lack of trust between partners. To overcome these limits, we reinforce our approach by clearly assessing contextual security policies based on the patrimony of a company and on technological security components. These components can be dynamically added, according to the collaboration context, when organising a consistent chain of industrial services. Keywords: Security issues in interoperability, Interoperable enterprise architecture, Service oriented Architectures for interoperability, Business Process Reengineering in interoperable scenarios, Enterprise modeling for interoperability
1 Introduction
The need for increased customisation and service-oriented products has forced enterprises to adapt their organisational strategies. While focusing on their core business competencies, outsourcing and collaborative strategies must respond to market requirements, for example being able to provide a high-quality service level to consumers. These organisational trends enhance the enterprise's agility, such as its ability to quickly respond to structural changes, client requests,
technological or activity changes, and supplier management [24, 30], and to reduce waste, leading to a lean manufacturing organisation [42]. These organisations heavily use Information and Communication Technologies (ICT) [34], increasing the call for IT interoperability. Their performance level is related to efficient information sharing systems, so that deviation risks can be reduced [18]. Moreover, the increasing association between products and services creates a need to organize instant collaborations to fulfil a customer's request. Consequently, an exponential growth of a ubiquitous ecosystem of services will emerge in the coming years. The ecosystem of services in the economy will rely on software services and support which span multiple organizations and service providers. The dynamic interaction between services creates chains of organizations that will provide agile support for business applications, governments or, simply, the consumer. This global service vision leads us to identify two major statements:
1. As such service ecosystems rely on a collaborative industrial organisation, service models must be designed to take industrial constraints into account. Furthermore, firms must be re-organised to provide consistent and interoperable industrial services.
2. Building a dynamic service chain depends on the capability to link services together in arbitrary ways in order to meet customer needs. Nevertheless, the potential lack of trust between partners can be an obstacle to the global adoption of such distributed architectures. To overcome this limit, particular attention must be paid to both security requirements and management while specifying the context of the service and user preferences. The propagation of these constraints must be included in service compositions and orchestrations to respond to a particular service need.
The paper is organized as follows: after defining the extended context of services in Section 2, we will introduce a new service model in Section 3 to specify business interoperability constraints. Lastly, in Section 4, we will define how services can be dynamically orchestrated through the combination of adapted technological components that fulfil context requirements.
2 Challenges in Ecosystem of Services
The integration of services when selling a product increases the need for inter-firm collaborations and leads to the organization of industrial activities according to dynamic added-value networks instead of Michael Porter's traditional value chain model [40]. Such a vision involves being able to dynamically define the way that global services can be linked together in industry. Yet, the definition of a global service is in contrast with the traditional enterprise organisation in terms of business sectors (i.e. sales and production) supported by underlying corporate information systems. This approach imposes an organization of the enterprise in respect to generic models of business sectors that can be customised. Unfortunately, this approach often leads to monolithic top-down engineering
practices, gathering different engineering tools and methodologies in a common reference architecture [27]. In addition, it creates a vertical organisation of the enterprise in terms of customer relationship, supply management and production systems. This vertical organization, fully impacted when organising a dynamic collaboration, exhibits a poor level of agility.
2.1 Interoperable Business and Production Processes
Moreover, corporate information systems depend on a wide variety of technology-based support systems (i.e. DBMS, programming languages, etc.) and various business sectors. Examples include Enterprise Resource Planning, Product Life Management, Manufacturing Execution Systems and Supply Chain Management systems. Such corporate information systems exhibit a high level of technological complexity and lack system interoperability. They involve information redundancy, which causes inconsistencies and decreases the flexibility and scalability of their architectures. Although leaner approaches such as workflow management systems can be fruitfully used to support both business and production processes [35], these on-demand tools may adversely increase Information System complexity and inconsistency. We attempt to solve these problems by recalling the Information System Urbanization approach [33], which organises the Information System into different levels. The urbanization approach separates activities from the Information System and the technical infrastructure [12]. Conversely, the introduction of Service-Oriented Architecture (SOA) enables a flexible design and produces a reactive environment of applications interconnected through an applicative bus that orchestrates processes [38]. Coupling the urbanization paradigm and Service-Oriented Architecture in the design of Information Systems improves the consistency and the agility of the firm in reorganizing its activities. It is the first step towards organisational business interoperability between collaborative firms. This coupling approach focuses on activities as fine-grained components of information and production systems, in respect to the enterprise's functional organisational chart and without taking into account the logic of production processes or organisational constraints.
2.2 Propagation of Security Requirements
A lack of trust between partners can limit the emergence of collaborative organizations and weaken collaboration strategies in the way that partners share information. As information represents an important part of the enterprise patrimony [31, 15], security requirements must be taken into account while designing both business processes and the physical implementation [7]. The definition of a security policy is often limited to the implementation of a secure infrastructure. For this purpose, methodologies and industrial standards have been defined since the 1980s to propose international levels of certification, such as the DoD Rainbow series [19], ITSEC [21] and the Common Criteria [16]. Nevertheless, this reduced point of view provides technical solutions to threats and vulnerabilities caused by the technology. Early risk analysis approaches, such as
EBIOS [17] and MEHARI [14], allow the design and implementation of safe architectures of consistent and reliable systems based solely on technological solutions, as proposed in SNA [36] and OCTAVE [2]. Organisational vulnerability must also be considered when defining a consistent security policy, as proposed in the ISO 17799 standard [28]. Few approaches, such as UML-SEC [29] and web services standards, deal with the integration of security requirements when designing processes. Security requirements in web services can be achieved at the technical level in respect to a reference architecture [25] that integrates different standards, such as WS-Security [37] and WS-Federation [26], for securing SOAP messages and federating different security realms. The security policies of infrastructures and processes suffer from being locally limited to corporate information systems [20]. Policy requirements and implementation are generally bounded by the boundary of each firm. The dominant logic of an ecosystem of services implies global security roadmaps among chains of collaborative services. Security requirements and constraints should be propagated when orchestrating chains of distributed services. We attempt to deal with the problem of security propagation by examining Digital Rights Management architectures [23]. These architectures provide long life-cycle protection of audio and video contents, based on cryptographic techniques and associated players. Adapting such an approach to a dynamic ecosystem of services involves designing services to support contextual dynamic binding. The technology of Hippocratic databases [1], which respects the privacy of the data it manages, could ensure that disclosure concerns (i.e. privacy policy, security policy, legislation, etc.) are protected. Digital Rights Management architectures and Hippocratic databases support security policies and access rights management [32], including user preferences in respect to the industrial standard Platform for Privacy Preferences (P3P) [13]. In conclusion, the ecosystem of services urges solutions to build on-demand and secured interoperable service systems. In the next section, we introduce an extended service model to embed both business interoperability and security constraints. We then pay special attention to defining how contexts can be used to implement such a service framework. We emphasize the contextual binding and orchestration monitoring functions.
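As a purely illustrative Java sketch of this propagation principle (all names are ours, not part of any cited standard or implementation), each request flowing along a service chain could carry a disclosure policy that every hop must check before processing or forwarding:

    import java.util.*;

    // Hypothetical propagation of disclosure constraints along a service chain,
    // in the spirit of DRM-style long-lived protection and Hippocratic databases.
    record DisclosurePolicy(Set<String> allowedPurposes, Set<String> allowedPartners) {}

    record SecuredRequest(String payload, DisclosurePolicy policy) {}

    interface ChainService {
        String partnerId();
        SecuredRequest process(SecuredRequest request);
    }

    class Orchestrator {
        // Each hop is invoked only if the policy authorises that partner;
        // the policy itself travels unchanged with the request.
        SecuredRequest run(SecuredRequest request, List<ChainService> chain) {
            for (ChainService service : chain) {
                if (!request.policy().allowedPartners().contains(service.partnerId()))
                    throw new SecurityException("Partner not authorised: " + service.partnerId());
                request = new SecuredRequest(service.process(request).payload(), request.policy());
            }
            return request;
        }
    }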
3 Secure Interoperable Business Organisation
To organise interoperable business organisations, our work relies on the Enterprise Urbanism concept [5], which splits the enterprise organisation into several "business areas" coupled with the production organisation. In this organisation, the evolution of the corporate information system is guaranteed by the flexible and incremental design of processes [4]. In the enterprise urbanisation approach, the specification of business areas integrates industrial flows, competencies, decisions and information viewpoints. Because of the correlation between business areas and the production organisation, a given full process, including customer relationship management, production and purchase activities, belongs to the same autonomous
area [39]. This process can easily be integrated in dynamic collaborative organisations by binding and orchestrating industrial services.

3.1 Interoperability Constraints

While traditional information systems focus on management and business aspects, we pay particular attention to the industrial aspect in order to efficiently support co-production constraints and build an interoperable business organisation. This leads us to take into account several interoperability constraints:

- Organisational interoperability: enterprises must share the same goal and have compatible management strategies. This interoperability level is also related to security policy and security context management, so that a consistent end-to-end secured organisation can be built.
- Industrial interoperability: enterprises must share information regarding products and production processes, such as the process maturity level and the required real-time execution constraints.
- Technical interoperability: the different applications of the information systems must be able to exchange information.
3.2 Interoperable Business Services

Based on these constraints, we propose to extend the service model introduced in our previous work [8] into a multi-level architecture for building interoperable business services (Fig. 1):

- The enterprise level: consists of the organisation of traditional business areas.
- The business service level: consists of sets of shared services that take into account both organisational and industrial interoperability constraints. These two levels are interconnected by means of an urbanised composition of services gathering different group technology tools and the routing system. Roughly speaking, the group technology tools are used to identify consistent industrial units, while the routing system connects each industrial business area to the traditional "vertical" business areas.
- The collaborative level: defines the different collaboration policies and trust rules in order to simplify both the contractual enactment phase and the dynamic building of adapted security policies. Policy requirements depend on the partners engaged in the chain of services.
- The technological level: denotes the shared and accessible interfaces connecting the industrial services to the implemented services. This level includes the contextual composition and orchestration engine, the Enterprise Service Bus (ESB) and the connectors to the components of the information system.
[Figure: the enterprise traditional organisation (supply, customer, accounting, production, design) is mapped through urbanisation (group technology tools, business routing) onto industrial services (Services A-E) governed by composition/orchestration, policies/preferences, a context manager and security; a B2B gateway (mediation, transformation, routing) connects the service bus, the application and data access services, the business service choreographer and the service/business directories to the IT system (ERP, SCM, CRM, mainframe, Web) and to external service providers.]

Fig. 1. Multi-Level Architecture of Interoperable Industrial Services
3.3 The Extended Service Model

The organisation of interoperable industrial services relies on a multi-interface extended service model (see Fig. 2):

- The industrial service: describes the competencies of the service in terms of what the service can achieve (i.e. a "virtual" service product, a "real" manufactured product or a "product-service" offer connecting a manufactured product to related value-added services). This conceptual service includes a globally accessible interface defining exactly what the service will achieve (i.e. a stand-alone product or a service-included product).
- The manufacturing interface: includes two views describing how the manufacturing service will be achieved:
  - Product view: gathers information on materials and products (bills of material, management policy, quality indicators, etc.) as well as production management strategies. As far as value-added services are concerned, the required resources, including human, software and hardware resources, are described in a bill of resources together with the management strategies that define the way resources will be scheduled.
  - Process view: describes the routing specification, the process qualification according to the CMMI classification [41] and the potential resource constraints used when allocating the manufacturing service to resources.
- The Service Level Agreement control interface: real-time constraints are used to describe the global QoS constraints to be included in SLAs, whereas safety constraints are used to define pre-emptive or non-pre-emptive services. This interface is coupled with a dynamic monitoring system to dynamically define adapted QoS monitoring features [9].
- The security control interface: describes the global protection that should be applied to both data and processes, depending on the perceived risks and on partner trust levels. In order to take into account threats related to the pervasive infrastructure of services, a particular facet devoted to exchange protection is added.
- The implementation interface: indicates the semantic service to be executed by the implementation level. Adding this "virtual service layer" before composing and orchestrating "concrete services" allows us to remain independent of the IT system organisation, and to protect it, by publishing only the "embedded" service interface [10]. It consists of the conceptual service description and contextual policy information (security, QoS, etc.) defined by WS-Policy based ontologies [11]. Technological services devoted to information access, monitoring and security are then added before orchestrating the industrial services.

[Figure: the extended service model distinguishes what will be achieved (service competencies with pattern, indicator and SLA interface facets), how it will be achieved (manufacturing interface with product, process, material and management data, plus the SLA control and security interfaces) and how it is orchestrated (implementation interface: conceptual service, contextual policy patterns and implementations at the technical abstraction level).]

Fig. 2. Extended Service Model
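As an illustration only, the following hypothetical Python encoding (all class and field names are our own, not part of the paper) shows how the competencies, manufacturing views and control interfaces could be carried by one service object:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class ProductView:
    """Materials, bills and production management strategy."""
    bill_of_materials: List[str]
    bill_of_resources: List[str]      # human, software and hardware resources
    management_policy: str

@dataclass
class ProcessView:
    """Routing, CMMI-style qualification and resource constraints."""
    routing: List[str]
    maturity_level: int               # process qualification (CMMI classification)
    resource_constraints: List[str]

@dataclass
class IndustrialService:
    """One service object carrying the five interfaces of the extended model."""
    competencies: str                 # industrial service: what will be achieved
    product: ProductView              # manufacturing interface, product view
    process: ProcessView              # manufacturing interface, process view
    sla_controls: Dict[str, str]      # SLA control interface (QoS, pre-emption)
    security_controls: Dict[str, str] # security control interface (incl. exchange facet)
    implementation: str               # implementation interface: semantic service id

svc = IndustrialService(
    competencies="product-service offer: machined part plus maintenance",
    product=ProductView(["steel blank"], ["CNC cell", "operator"], "make-to-order"),
    process=ProcessView(["turning", "milling"], 3, ["single CNC cell"]),
    sla_controls={"qos": "48h max lead time", "preemptive": "no"},
    security_controls={"data": "encrypt", "exchange": "sign+encrypt"},
    implementation="urn:example:machining-service#v1",
)
print(svc.process.maturity_level)
```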
3.4 The Binding Mechanism

Binding industrial services is achieved according to the following steps:

1. The "semantic" selection of services first defines the potential candidates. The service competencies description is associated with the public process interface, without exactly specifying what will be processed or the granularity level of the returned information.
2. Industrial interoperability is then taken into account by refining the selection according to process maturity and management rules, so that a consistent service chain can be built.
3. The contextual information related to the service consumer is used to select the convenient "core process" associated with the private part of the process, adapting the granularity level of the returned information to the exact context. The duo of process and service description is only used to publish the visible part of the business service.
4. Lastly, the Quality of Service (QoS) requirements are confronted with the current context to define which potential candidates can be selected and which monitoring services must be set up to control the orchestration of services.

After these industrial service composition phases, the "conceptual service" and the contextual information are transmitted to the implementation level, so that technological services can be composed to support technical interoperability.
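To make the four steps concrete, here is a minimal, hypothetical selection pipeline in Python; the candidate and request fields, and the thresholds, are illustrative assumptions rather than part of the paper:

```python
def bind_industrial_service(candidates, request):
    """Toy version of the four binding steps over candidate descriptions (dicts)."""
    # 1. semantic selection on the published competencies description
    pool = [c for c in candidates if request["competency"] in c["competencies"]]
    # 2. industrial interoperability: process maturity and management rules
    pool = [c for c in pool if c["maturity"] >= request["min_maturity"]]
    # 3. consumer context selects the convenient "core process" variant
    for c in pool:
        c["core_process"] = c["variants"].get(request["context"], c["variants"]["default"])
    # 4. QoS confrontation, plus the monitoring services to set up
    pool = [c for c in pool if c["qos"] >= request["min_qos"]]
    monitors = ["qos-monitor:" + c["name"] for c in pool]
    return pool, monitors

candidates = [{"name": "millingA", "competencies": "milling of steel parts",
               "maturity": 3, "qos": 0.9,
               "variants": {"default": "public", "trusted-partner": "detailed"}}]
request = {"competency": "milling", "min_maturity": 2, "min_qos": 0.8,
           "context": "trusted-partner"}
print(bind_industrial_service(candidates, request))
```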
[Figure: at the business abstraction level, a single "published" business instance exposes the conceptual service interface; at the technical abstraction level, it is bound to several implementation instances chosen according to the collaboration strategy.]

Fig. 3. Embedded Conceptual Service and Core Process Relationships
4 Implementation Architecture

To fit the technical interoperability requirement, we propose an on-demand dynamic service binding process. Organised in a bottom-up way, the binding process consists of assembling "technological" sub-components so that the orchestrated services can be tuned to exactly fit environmental needs, such as implementing a convenient security policy and supporting semantic and syntactic interoperability. Security policies are defined according to the following contexts:

- Provider preferences: define the different security requirements of the information system according to the patrimonial values of data and processes, which describe the importance of the potentially shared element. Depending on the trust level of the service customer and on the underlying infrastructure, different security policies can be applied, including authentication and authorisation processes and cryptographic requirements on the information [6].
- Customer preferences: are described with respect to the P3P specification and define exactly which customer information can be processed (authentication, logged accesses, etc.) as well as mediation constraints (e.g. the data interchange format).
- Collaborative context: describes the contractual relationships between the service provider and the service customer. These contractual relationships are used to specify what the service customer is allowed to process directly, as well as Quality of Service requirements, through industrial SLAs [9].
- Infrastructure context: in a pervasive environment, different threats are caused by the communication infrastructure, so particular security components must be added to protect the underlying infrastructure.
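As a sketch of how these four contexts could be merged into one effective policy (the field names and merge rules below are our own illustrative assumptions, not the paper's mechanism):

```python
def derive_security_policy(provider, customer, collaboration, infrastructure):
    """Merge the four security contexts into one effective policy (toy rules)."""
    return {
        # provider preferences: protection follows the patrimonial value of the asset
        "encrypt_content": provider["patrimonial_value"] >= 3,
        # customer preferences: P3P-style restriction on logging of accesses
        "log_accesses": customer.get("allow_logging", False),
        # collaborative context: what the customer may invoke directly, QoS floor
        "allowed_operations": collaboration["allowed_operations"],
        "qos_floor": collaboration["sla_qos"],
        # infrastructure context: extra protection on untrusted transports
        "sign_messages": infrastructure["trust_level"] == "untrusted",
    }

policy = derive_security_policy(
    {"patrimonial_value": 4},
    {"allow_logging": True},
    {"allowed_operations": ["getOrderStatus"], "sla_qos": "gold"},
    {"trust_level": "untrusted"})
print(policy)
```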
To provide dynamic contextual composition and orchestration features, we consider the orchestrated services as autonomous components. Each contextual view is associated with generic patterns implemented as classes, in a similar way to the PIM4SOA framework [3]. The dynamic binding process then consists of a multi-model instantiation process conditioned by the different contexts and standard components (i.e. pattern components). The instantiation process selects and binds the technological component patterns and generates the convenient BPEL script with respect to the technical interoperability constraints. These autonomous services include:

- Information system access services: technological components that implement connectors to the different software of the corporate information system.
- Mediation and transformation services: technological components that support syntactic and semantic interoperability by translating information into formats understandable by the applications. They are based on B2MML and FDML [22] in order to implement information interface services towards business service providers and manufacturing resources, respectively.
- Access services: technological components that implement access controls, namely authentication, access filtering and reporting functions, according to the partners' collaboration scenario and based on the security policy and contextual information.
- Exchange security services: cryptographic components ensuring that SOAP messages can be secured according to the infrastructure contextual information. They provide encrypted content and its associated "playing" service, which is invoked each time the message content is accessed; the exchange security service checks whether the access request is defined in the "information licensing conditions" base of the service provider.
- Monitoring services: dynamically generate monitoring components based on QoS requirements and monitoring patterns.
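The instantiation step can be pictured as template assembly. The following hypothetical sketch emits a BPEL-like sequence (not real BPEL) from pattern components matching the autonomous service families above, driven by a context dictionary of our own design:

```python
PATTERNS = {  # pattern components, one per autonomous service family
    "access":  '<invoke service="access-control" mode="{mode}"/>',
    "crypto":  '<invoke service="exchange-security"/>',
    "mediate": '<invoke service="mediation" format="{fmt}"/>',
    "ia":      '<invoke service="information-access" operation="{op}"/>',
    "monitor": '<invoke service="qos-monitor" metric="{metric}"/>',
}

def instantiate(ctx):
    """Select and bind pattern components according to the context."""
    steps = [PATTERNS["access"].format(mode=ctx["auth"])]
    if ctx["infrastructure"] == "untrusted":     # infrastructure context
        steps.append(PATTERNS["crypto"])
    steps.append(PATTERNS["mediate"].format(fmt=ctx["format"]))  # e.g. B2MML
    steps.append(PATTERNS["ia"].format(op=ctx["operation"]))
    steps.append(PATTERNS["monitor"].format(metric=ctx["sla_metric"]))
    return "<sequence>\n  " + "\n  ".join(steps) + "\n</sequence>"

print(instantiate({"auth": "x509", "infrastructure": "untrusted",
                   "format": "B2MML", "operation": "getOrderStatus",
                   "sla_metric": "latency"}))
```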
5 Conclusion and Further Works

In this paper, we propose an extended Industrial Service-Based Architecture to support secured interoperable businesses, implemented in a Service-Oriented Enterprise Architecture. As trust and perceived risks are key points taken into account when defining a collaborative strategy, we pay particular attention to security policies and requirements when composing chains of services, with regard to the characteristics of the service consumer and of the infrastructure. The next steps will focus on the way a Lean Service Bus can be implemented in order to reduce IT structuring effects. Such a Lean Service Bus will adapt the orchestration process to a "pull flow" logic; coupled with a trigger system, this service bus will improve the way the industrial service architecture fits lean enterprise requirements.
Acknowledgements This work was supported by grants from the INSA BQR project “Enterprise urbanism” and the Rhône-Alpes Region Council through the GOSPI Cluster project INTERPROD.
References
[1] Agrawal R., Kiernan J., Xu Y., Srikant R., 2002. Hippocratic Databases. 28th VLDB Conference, 10 pages
[2] Alberts C., Dorofee A., 2001. An Introduction to the OCTAVE(SM) Method. CERT White Paper. Available online at http://www.cert.org/octave/methodintro.html [Last visited September 30, 2007]
[3] Benguria G., Larruceat X., Elvesaeter B., Neple T., Beardsmore A., Friess M., 2007. A platform independent model for service oriented architecture. In Enterprise Interoperability: New Challenges and Approaches. Doumeingts G., Müller J., Morel G., Vallespir B. (Eds.), Springer, pp. 23-32
[4] Biennier F., Favrel J., 2003. Collaborative Engineering in Alliances of SMEs. Proceedings of PRO-VE'03, Lugano (Switzerland), October 2003. In: Processes and Foundations for Virtual Organizations. Camarinha-Matos L., Afsarmanesh H. (Eds.), Kluwer Academic Publishers, pp. 441-448
[5] Biennier F., Buckard S., 2005. Organising Dynamic Virtual Organisation: Towards Enterprise Urbanism. APMS 2005
[6] Biennier F., Favrel J., 2005. Collaborative Business and Data Privacy: Toward a Cyber-Control. Computers in Industry, vol. 56, n° 4, pp. 361-370 (May 2005)
[7] Biennier F., Mathieu H., 2005. Security Management: Technical Solutions vs. Global BPR Investment. Schedae Informatica, vol. 14, pp. 13-34
[8] Biennier F., Mathieu H., 2006. Organisational Inter-Operability: Towards Enterprise Urbanism. In Enterprise Interoperability – New Challenges and Approaches. Doumeingts G., Müller J., Morel G., Vallespir B. (Eds.), Springer, pp. 377-386
[9] Biennier F., Ali L., Legait A., 2007. Extended Service Integration: Towards Manufacturing SLA. IFIP International Federation for Information Processing, Volume 246, Advances in Production Management Systems. Olhager J., Persson F. (Eds.), pp. 87-94
[10] Chaari S., Benamar C., Biennier F., Favrel J., 2006. Towards service oriented enterprise. IFIP International Conference on Programming Languages for Machine Tools, PROLAMAT 2006, 15-17 June, Shanghai, China, pp. 920-925 (ISBN: 978-0-387-34402-7)
[11] Chaari S., Badr Y., Biennier F., 2008. Enhancing Web Service Selection by QoS-Based Ontology and WS-Policy. Accepted at the 23rd ACM Symposium on Applied Computing, Ceará, Brazil, 16-20 March 2008
[12] CIGREF, 2003. Accroître l'agilité du système d'information. Livre blanc du CIGREF, September 2003
[13] Cranor L., 2001. Privacy with P3P. O'Reilly, 239 pages
[14] CLUSIF, 2000. MEHARI. Rapport technique, 91 pp. Available online at https://www.clusif.asso.fr/fr/production/ouvrages/pdf/MEHARI.pdf [Last visited September 30, 2007]
[15] CLUSIF, 2005. Enquête sur les politiques de sécurité de l'information et la sinistralité informatique en France en 2005. Available online at http://www.clusif.asso.fr/fr/production/sinistralite/docs/etude2005.pdf [Last visited September 30, 2007]
[16] Common Criteria Organisation, 1999. Common Criteria for Information Technology Security Evaluation – Part I: Introduction and General Model, version 2.1, CCIMB 99-031. Available online at http://www.commoncriteria.org/docs/PDF/CCPART1V21.PDF, 61 p. [Last visited September 30, 2007]
[17] Direction Centrale de la Sécurité des Systèmes d'Information (DCSSI), 2004. Expression des Besoins et Identification des Objectifs de Sécurité : EBIOS. Rapport technique. Available online at http://www.ssi.gouv.fr/fr/confiance/ebios.html [Last visited September 30, 2007]
[18] DeVor R., Graves R., Mills J.J., 1997. Agile Manufacturing Research: Accomplishments and Opportunities. IIE Transactions, n° 29, pp. 813-823
[19] Department of Defense (DoD), 1985. Trusted Computer Security Evaluation Criteria - Orange Book. DOD 5200.28-STD report
[20] Djodjevic I., Dimitrakos T., Romano N., Mac Randal D., Ritrovato P., 2007. Dynamic Security Perimeters for Inter-enterprise Service Integration. Future Generation Computer Systems (23), pp. 633-657
[21] EEC, 1991. Information Technology Security Evaluation Criteria. Available online at http://www.cordis.lu/infosec/src/crit.htm [Last visited September 30, 2007]
[22] Emerson D., Brandl D., 2002. Business to Manufacturing Markup Language (B2MML), version 01, 60 p.
[23] Erickson J.S., 2003. Fair Use, DRM and Trusted Computing. Communications of the ACM, vol. 46, n° 4, pp. 34-39
[24] Goldman S., Nagel R., Preiss K., 1995. Agile Competitors and Virtual Organisations. New York: Van Nostrand Reinhold
[25] IBM and Microsoft Corp., 2002. Security in a Web Services World: A Proposed Architecture and Roadmap. White paper, 28 pp. Available online at ftp://www6.software.ibm.com/software/developer/library/ws-secmap.pdf [Last visited September 30, 2007]
[26] IBM, Microsoft, BEA, Layer 7 Technology, Verisign, Novell Inc., 2006. Web Services Federation Language, Version 1.1. Available online at http://download.boulder.ibm.com/ibmdl/pub/software/dw/specs/ws-fed/WS-Federation-V1-1B.pdf [Last visited September 30, 2007]
[27] IFAC-IFIP, 1999. GERAM: Generalized Enterprise Reference Architecture and Methodology, Version 1.6.3. IFAC-IFIP Task Force on Architecture and Methodology
[28] ISO, 2000. ISO/IEC 17799:2000 standard - Information Technology. Code of Practice for Information Security Management
[29] Jürjens J., 2002. UMLsec: Extending UML for Secure Systems Development. Lecture Notes in Computer Science 2460, UML 2002 Proceedings, pp. 412-425
[30] Lee H.L., 2004. The Triple-A Supply Chain. Harvard Business Review, October 2004, pp. 102-112
[31] Levitin A.V., Redman T.C., 1998. Data as a Resource: Properties, Implications and Prescriptions. Sloan Management Review, Fall 1998, pp. 89-101
[32] Lin A., Brown R., 2000. The Application of Security Policy to Role-based Access Control and the Common Data Security Architecture. Computer Communications (23), pp. 1584-1593
[33] Longépé C., 2003. The Enterprise Architecture IT Project - The Urbanisation Paradigm. Elsevier, 320 p.
[34] Mahoué F., 2001. The E-World as an Enabler to Lean. MSc Thesis, MIT
[35] Martin J., 1992. Rapid Application Development. Prentice Hall, Englewood Cliffs
[36] Moore A.P., Ellison R.J., 2001. Architectural Refinement for the Design of Survivable Systems. Technical Note CMU/SEI-2001-TN-008, Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, October 2001. Available online at http://www.sei.cmu.edu/publications/documents/01.reports/01tn008.html [Last visited September 30, 2007]
[37] OASIS, 2004. Web Services Security: SOAP Message Security 1.0 (WS-Security 2004), 56 pages. Available online at http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf [Last visited September 30, 2007]
[38] Schmidt M.T., Hutchinson B., Lambros P., Phippen R., 2005. The Enterprise Service Bus: Making Service-Oriented Architecture Real. IBM Systems Journal, vol. 44, n° 4, pp. 781-797
[39] Sliman L., Biennier F., Servigne S., 2006. Urbanisation conjointe de l'entreprise et de son système d'information. Colloque IPI 2006 Proceedings: "Comprendre et piloter la mutation des systèmes de production", pp. 169-180
[40] Tekes, 2006. Sara - Value Networks in Construction 2003-2007. Sara Technology Programme. Available online at http://www.tekes.fi/english/programmes/sara [Last visited September 30, 2007]
[41] Williams R., Wegerson P., 2002. MINI CMMI(SM), SE/SW/IPPD/SS Ver 1.1, Staged Representation. Cooliemon
[42] Womack J.P., Jones D.T., 2003. Lean Thinking, 2nd edition. Simon & Schuster, 404 p.
Part IV
Ontologies and Semantics for Interoperability
Semantic Web Services based Data Exchange for Distributed and Heterogeneous Systems

Qingqing Wang1,2, Xiaoping Li1,2 and Qian Wang1,2

1 School of Computer Science and Engineering, Southeast University, Nanjing 210096, P.R. China
2 Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing 210096, P.R. China
[email protected], [email protected], [email protected]
Abstract. Data exchange is the process of exchanging data between systems online, in which heterogeneity, distribution and different semantic descriptions are the main difficulties. In this paper, a data exchange framework constructed on the architecture of Web services is proposed, in which the data provider deploys Web services for data exchange and publishes descriptions of the service function and exchange data on a register center, while the data requester searches for Web services on the register center according to its requirements on function and data. To improve the precision of data discovery, a semantic matching mechanism for matching the provided and requested Web services, based on OWL-S (Web Ontology Language for Services), is presented. The matching mechanism, which takes both the service function and the exchange data into account, calculates the semantic similarity of two Web services as the matching result. A prototype system is implemented to verify the proposed framework and matching mechanism.

Keywords: Interoperability for knowledge sharing, Ontologies and Semantic Web for interoperability, Interoperability for knowledge creation, transfer, and management, Design methodologies for interoperable systems, Enterprise Application Integration for interoperability
1 Introduction

Sharing information is difficult in systems with different operating systems, heterogeneous database management systems and distributed data sources with various semantic description abilities and isolation levels. Data exchange, which is one way to share information between such systems, is the process of exchanging data dynamically between systems [1] [2] and of transforming data under a source schema to data under a target schema [3].
Data exchange is a traditional problem, for which there are many different solutions according to different application scenarios, user requirements and technical environments. Here we present a brief survey of some methods for data exchange:

(I) EDI (Electronic Data Interchange). This is the electronic transfer, between separate computer applications, of commercial or administrative transactions, using an agreed standard to structure the transaction or message data. These methods [4] focus on the issues of data transfer and data format; other parts of the data exchange process are not taken into account.

(II) Traditional approaches to integrating heterogeneous data sources. These methods [5] mostly use wrapper components to shield the differences among data sources and focus on transforming data formats. Other issues, such as data discovery and data transfer, are not addressed. They suit the scenario in which the data sources to exchange data are few and do not change frequently.

(III) XML-based methods. These use XML as the medium of data exchange: XML files, whose schemas can be matched and whose formats can be transformed, are the common data format in the process. These methods [6] [7] focus on schema matching or format transforming, but data discovery on the Internet is not emphasized. Moreover, standards such as RosettaNet [8] and ebXML [9] also target data exchange and describe data with XML, but people in different industries define different XML tags according to their own customs, so these standards have become complex, mutually separate and lacking in compatibility.

Web services technology, which defines how Web applications interoperate, has three roles: service provider, service requester and service broker. Web services can solve the problem of distribution in data exchange and improve the efficiency and precision of data discovery, in which Web services matching is a key issue. Existing Web services matching mechanisms can be divided into two categories:

(1) Syntax level. These mechanisms usually publish Web services according to industry classification standards, describe Web service interfaces by WSDL and match Web services based on keywords. They can realize Web services discovery on the Internet but lack function descriptions for Web services as well as semantic information. Usually, these mechanisms are terse but yield low discovery precision. Typical applications are the UDDI systems developed by IBM, Microsoft, HP and others [10].

(2) Semantic level. These mechanisms usually describe Web services based on ontology theory, which solves the semantic heterogeneity of the syntax-level mechanisms and adds semantic descriptions of Web service functions. Service matching usually takes the IOPE (Inputs, Outputs, Preconditions and Effects) of Web services into account. Generally, the discovery precision of the semantic level is higher than that of the syntax level. Research on semantic-level matching mechanisms and some related matching algorithms is introduced in [11] [12] [13].

The weaknesses of existing data exchange methods are: (1) exchange data is not easy to discover on the Internet, and (2) few methods are capable of completing the whole process of data exchange on the Internet. To overcome these shortcomings, a semantic Web services based data exchange method is proposed in this paper. Our focus is realizing the whole process of data exchange, with easy data discovery, on the Internet.
In the method, a data exchange framework is constructed on the
architecture of Web services, in which information of exchange data can be published and discovered, data discovery can be realized based on semantic information, data can be obtained and transferred remotely and data schema matching as well as data format transforming can be realized too. In order to improve the precision of data discovery, a Web services matching mechanism of semantic level for data exchange is also presented.
2 Terms and Definitions

Data Exchange on Internet is the process of discovering needed data on the Internet, exchanging data between information systems and transforming data under a source schema to data under a target schema. The process consists of data publishing, data discovery, data getting, data transferring, schema matching and format transforming.

- Data schema: the description of the structures, compositions and properties of a data set. For a database, it is the description of the structures of tables, the relations between tables and the properties of fields.
- Data publishing: the process of publishing information about exchange data on the Internet so that it can be requested.
- Data discovery: the process of requesting needed data on the Internet.
- Data getting: the process of extracting the needed data from the database of the data provider's system.
- Data transferring: the process of transferring exchange data from the data provider's system to the data requester's system.
- Schema matching: the process of matching or mapping the data schemas of data provider and data requester according to certain rules.
- Format transforming: the process of transforming exchange data under the provider's schema to data under the requester's schema according to the result of schema matching.
- Service matching: the process of matching published and requested Web services, the result of which should reflect the requester's requirements to a certain extent.
- Web services for data exchange: a special kind of Web services which can provide exchange data.
3 Data Exchange Method

3.1 Architecture

Figure 1 shows the architecture of the WSDE (Web Services based Data Exchange method) proposed in this paper, which is constructed on the architecture of Web services. There are three participants: DPRO (Data Provider), DREQ (Data Requester) and the SUDDI (Semantic Universal Description Discovery and Integration) center.

[Figure: architecture diagram showing the SUDDI center (preceding API module with service publishing, service requesting and result displaying APIs; service matching module with OWL-S service/data descriptions, service matching engine, service and domain ontology bases; UDDI module based on the UDDI 2.0 criterion with a UDDI mapping engine), the DPRO side (service publishing, service deploying and data encapsulating modules, web server and database) and the DREQ side (service requesting, service invoking, schema matching with a schema matching engine and domain ontology base, and format transforming with an XSLT-based matching result engine, over the local database and application software); an enterprise service bus with transformation, routing, application and data access services connects to the IT systems (ERP, SCM, CRM, mainframe, Web) and external service providers.]

Fig. 1. Architecture of WSDE
DPRO is the system that can provide exchange data on the Internet. It deploys a local Web service and publishes the service description as well as information about the exchange data on the SUDDI center. When the service is invoked, DPRO extracts data from its database dynamically, transforms it into an XML file and transfers the file to DREQ. DREQ is the system that needs exchange data. It requests Web services for data exchange on the SUDDI center. When a suitable Web service has been found, DREQ invokes the service, gets the exchange data from the data provider, matches the data schemas of the provider and itself and transforms the exchange data from the provider's format to its own data format.
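Before turning to the SUDDI center, a rough illustration of the provider side (a hypothetical sketch: the table, columns and condition are invented, and a production version would have to validate the requester-supplied condition instead of interpolating it):

```python
import sqlite3
import xml.etree.ElementTree as ET

def encapsulate(db_path, table, condition):
    """Extract the requested rows and wrap them in an XML exchange document."""
    con = sqlite3.connect(db_path)
    cur = con.execute(f"SELECT * FROM {table} WHERE {condition}")
    columns = [d[0] for d in cur.description]        # doubles as a crude schema
    root = ET.Element("exchange", {"table": table})
    for row in cur:
        rec = ET.SubElement(root, "record")
        for col, val in zip(columns, row):
            ET.SubElement(rec, col).text = str(val)  # one element per field
    con.close()
    return ET.tostring(root, encoding="unicode")
```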
The SUDDI center accepts and matches the descriptions of published and requested services. It extends a common UDDI system founded on the UDDI 2.0 specification, to which it adds a semantic module. A semantic Web services matching mechanism for data exchange is implemented in the semantic module; it is introduced in Section 4.

3.2 DPRO (Data Provider)

The modules of DPRO are described as follows:

(1) Service Publishing Module. This module describes the functions of published Web services and the provided exchange data according to the service description model and the data description model respectively (introduced in Section 4), and publishes the function description as well as the data description on the SUDDI center.

(2) Service Deploying Module. This module deploys Web services that have been designed for data exchange to the local web server. The WSDL files of these Web services are uploaded to the web server too. When a service is invoked, DPRO extracts the requested data via the data encapsulating module and transfers the data to DREQ by SOAP.

(3) Data Encapsulating Module. This module extracts the data schema of the exchange data when the Web service is published, and extracts the exchange data dynamically from the provider's database according to the data requester's needs when the Web service is invoked.

3.3 DREQ (Data Requester)

The modules of DREQ are described as follows:

(1) Service Requesting Module. This module describes DREQ's requirements according to the service description model and the data description model, and requests Web services that could provide the needed exchange data on the SUDDI center. This module realizes data discovery by discovering Web services.

(2) Service Invoking Module. After a Web service has been discovered, the service invoking module obtains the provider's data schema, sets the query condition for extracting the needed exchange data, reads the WSDL file and invokes the chosen Web service. The exchange data and its schema are then downloaded to the local machine. This module realizes data getting and transferring by invoking Web services.

(3) Schema Matching Module. This module matches the data schemas of DPRO and DREQ. Firstly, it extracts the local data schema and transforms it into an XML Schema file; then it matches the two schemas according to a schema matching algorithm. In this paper, the hybrid matching algorithm for XML Schemas introduced in [14] is adopted. Matching results are stored in an XSLT file.

(4) Format Transforming Module.
This module transforms the format of the exchange data according to the schema matching result, so that the data can be stored in the local database or used by local applications.

3.4 SUDDI (Semantic Universal Description Discovery and Integration) Center

The modules of the SUDDI center are described as follows:

(1) Preceding API Module. This module consists of three kinds of APIs:

- Service Publishing APIs: accept descriptions of published Web service functions as well as exchange data, and send them to the service matching module.
- Service Requesting APIs: accept descriptions of requested Web service functions as well as demanded data, and send them to the service matching module.
- Result Displaying APIs: accept the Web services matching results from the service matching module and display them on the preceding interface.
(2) Service Matching Module. This module matches the published Web services against the requested service. Its functions are creating the OWL-S profile (i.e. the service description) for Web services, parsing OWL-S profiles and domain ontologies, and producing matching results according to the service matching mechanism introduced in Section 4.

(3) UDDI Module. This module, founded on the UDDI 2.0 specification, stores the WSDL file path and other common descriptions of published Web services. It also maintains mapping relations between the OWL-S profile base and the UDDI module so that the full information of a Web service can be published and requested.

3.5 Flow of Data Exchange

Figure 2 illustrates the flow of data exchange in WSDE.
[Figure: sequence of interactions between DPRO, the SUDDI center and DREQ: deploy Web services; publish Web services; match Web services; request Web services; query matching result; set invoking condition; invoke Web services; transfer exchange data and its schema; match data schemas; transform data format; store exchange data.]

Fig. 2. Flow of Data Exchange in WSDE
DPRO first deploys a Web service to its local web server, publishes the description of the service function and exchange data on the SUDDI center and waits for the Web service to be invoked. When DREQ needs data, it requests Web services and exchange data according to its own needs on the SUDDI center. The SUDDI center matches the requested service against all published services, produces matching results according to the service matching mechanism, sorts them and displays the result list. DREQ then looks over the Web services for data exchange in the matching result list, chooses the most appropriate service, sets the invoking condition for extracting the needed exchange data, reads the WSDL file and invokes the chosen Web service. DPRO responds to the invocation, executes the Web service and returns the exchange data as well as its schema. After obtaining the exchange data and its schema, DREQ extracts the local data schema, matches or maps the two data schemas and uses an XSLT engine to transform the data format according to the schema matching result. Finally, DREQ stores the transformed exchange data in its local database.
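In the prototype the matching result stored in the XSLT file drives the transformation; as an illustration, the hypothetical sketch below applies an equivalent element-renaming map directly instead of generating XSLT (the mapping entries are invented examples):

```python
import xml.etree.ElementTree as ET

# schema matching result as a provider-tag -> requester-tag map (invented values)
MATCHING = {"record": "axletree", "prod_no": "part_number", "dia": "diameter_mm"}

def transform(xml_text, mapping=MATCHING):
    """Rewrite provider element names into the requester's vocabulary."""
    root = ET.fromstring(xml_text)
    for elem in root.iter():
        elem.tag = mapping.get(elem.tag, elem.tag)   # keep unmapped tags as-is
    return ET.tostring(root, encoding="unicode")

print(transform("<exchange><record><prod_no>A42</prod_no></record></exchange>"))
```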
4 Web Services Matching Mechanism

The precision of Web services discovery depends on the services matching mechanism. In the proposed data exchange method, data discovery is realized by Web services discovery, so the services matching mechanism adopted in WSDE directly determines the precision of data discovery. Data discovery is the first and key step in the process of data exchange, and its result affects all following steps. It is therefore very important to choose an effective service matching mechanism for a data exchange method. The key task of Web services for data exchange, which differ in some respects from common Web services, is realizing data discovery.
Services matching between Web services for data exchange should therefore cover not only the IOPE that matching between common Web services takes into account, but also information about the exchange data. As this information is so important to data discovery, the proposed method extends existing semantic matching mechanisms and presents SWSM-DE (Semantic Web Services Matching mechanism for Data Exchange). SWSM-DE constructs both a function description model and a data description model, and integrates the results of function matching and data matching into the final services matching result. The following describes SWSM-DE in detail.

4.1 Services Description

Web services description is the basis of services publishing, discovery and matching. SWSM-DE describes services with OWL-S, the language published by the W3C for describing semantic Web services. However, the OWL-S profile only contains function items such as IOPE, so SWSM-DE extends the profile and adds a description of the exchange data to it.

Definition 1. SDM (Service Description Model) is the semantic description of the Web service function and exchange data in SWSM-DE, defined as SDM = <FM, DM>, where FM is the function description model and DM is the data description model.

Definition 2. FM (Function Model) is the semantic description of the name, text description, inputs and outputs of a Web service, defined as FM = <Sn, Sd, Sin, Sout>, where Sn is the service name, Sd is the text description of the service, Sin are the inputs of the service and Sout are the outputs of the service.

Definition 3. DM (Data Model) is the semantic description of the domain, source, application scene and content of the exchange data, defined as DM = <Dd, Ds, As, Dc>, where Dd is the domain of the exchange data, Ds is its source, As is its application scene and Dc is its content.

Figure 3 shows the structure of SDM.

[Figure: SDM (Service Description Model) is composed of FM (Function Model: service name Sn, text description Sd, service inputs Sin, service outputs Sout) and DM (Data Model: data domain Dd, data source Ds, application scene As, data content Dc).]

Fig. 3. Structure of SDM
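Definitions 1-3 translate directly into a nested record structure; a hypothetical Python sketch (field and example values are our own):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FunctionModel:          # FM = <Sn, Sd, Sin, Sout>
    name: str                 # Sn
    description: str          # Sd
    inputs: List[str]         # Sin
    outputs: List[str]        # Sout

@dataclass
class DataModel:              # DM = <Dd, Ds, As, Dc>
    domain: str               # Dd
    source: str               # Ds
    scene: str                # As
    content: str              # Dc

@dataclass
class ServiceDescriptionModel:  # SDM = <FM, DM>
    fm: FunctionModel
    dm: DataModel

sdm = ServiceDescriptionModel(
    FunctionModel("AxletreeInfo", "returns axletree product data",
                  ["type_code"], ["product_record"]),
    DataModel("manufacturing", "producer ERP", "procurement", "axletree catalogue"))
print(sdm.fm.name, "/", sdm.dm.domain)
```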
4.2 Matching Approach

Different people or systems have different semantic description abilities, even for the same service or concept, so "completely identical" is an impossible and insignificant criterion for service matching. We consider services matching successful when the descriptions of the two services are "similar to a certain extent"; the key issue is how to judge this similarity. The matching mechanism introduced in [15] calculates the total similarity of the OWL-S service profile together with the other two parts of the OWL-S description, the OWL-S service model and the OWL-S service grounding. Its matching result is more precise, but also more complex, because the similarities of the service model and service grounding are calculated additionally. The matching mechanism introduced in [16] matches services based on the OWL-S service profile alone, and obtains the matching result by combining the results of inputs matching, outputs matching, service category matching and user-defined matching. This mechanism is terser and easier to realize, but the matching result is distinguished by only four ranks, so it is not flexible and its ability to discriminate between matching results is limited. Besides, for data exchange Web services, both mechanisms lack a description of, and matching for, the exchange data. SWSM-DE combines the merits of the above mechanisms: it describes services by OWL-S, measures the similarity of Web services with a numerical value, and takes both the service function and the exchange data into account. SWSM-DE has two stages: the first stage is function matching and the second is data matching; services that do not satisfy certain conditions are filtered out after each stage. The first stage calculates the similarity of the service functions and applies a threshold to filter out useless services; services that pass it must be Web services for data exchange, able to provide exchange data and satisfying the conditions on inputs and outputs. The second stage calculates the similarity of the exchange data and applies a threshold to filter services again; services that pass it satisfy the conditions on data domain, data source, application scene and data content. For each service that passes both stages, SWSM-DE combines the similarity of the service functions and of the exchange data into the final matching result, a numerical value. The two stages can use different matching algorithms. Figure 4 illustrates the matching process of SWSM-DE.
Fig. 4. Matching Process of SWSM-DE
According to SDM, the similarity of the service Spro, which represents a published Web service, and the service Sreq, which represents the requested Web service, is defined as:

$$Sim(S_{pro}, S_{req}) = \theta_1\, Sim_{func}(S_{pro}, S_{req}) + \theta_2\, Sim_{data}(S_{pro}, S_{req}), \quad \theta_1 + \theta_2 = 1,\ 0 \le \theta_1, \theta_2 \le 1 \qquad (1)$$
$Sim_{func}(S_{pro}, S_{req})$ is the similarity of the service functions and $Sim_{data}(S_{pro}, S_{req})$ is the similarity of the exchange data. $\theta_1$ and $\theta_2$ are the weights of function matching and data matching in the overall service matching; both can be adjusted to the actual situation and are initialized to 0.5 in SWSM-DE. According to FM, the similarity of the service function is defined as:
$$Sim_{func}(S_{pro}, S_{req}) = \lambda_1\, Sim_{name}(S_{pro}, S_{req}) + \lambda_2\, Sim_{description}(S_{pro}, S_{req}) + \lambda_3\, Sim_{input}(S_{pro}, S_{req}) + \lambda_4\, Sim_{output}(S_{pro}, S_{req}),$$
$$\lambda_1 + \lambda_2 + \lambda_3 + \lambda_4 = 1, \quad 0 \le \lambda_1, \lambda_2, \lambda_3, \lambda_4 \le 1 \qquad (2)$$
$Sim_{name}$, $Sim_{description}$, $Sim_{input}$ and $Sim_{output}$ are the similarities of the service name, text description, inputs and outputs respectively. $Sim_{name}$ and $Sim_{description}$ can be calculated by MCMA (Multi-Concept Matching Algorithm) introduced in [14]; $Sim_{input}$ and $Sim_{output}$ can be measured by MIMA (Multi-Input Matching Algorithm) and MOMA (Multi-Output Matching Algorithm) introduced in [10], respectively. $\lambda_1$, $\lambda_2$, $\lambda_3$ and $\lambda_4$ are the weights of these items in function matching; they can be adjusted to the actual situation and are initialized to 0.25 each in SWSM-DE. According to DM, the similarity of the exchange data is defined as:
$$Sim_{data}(S_{pro}, S_{req}) = \mu_1\, Sim_{domain}(S_{pro}, S_{req}) + \mu_2\, Sim_{source}(S_{pro}, S_{req}) + \mu_3\, Sim_{scene}(S_{pro}, S_{req}) + \mu_4\, Sim_{content}(S_{pro}, S_{req}),$$
$$\mu_1 + \mu_2 + \mu_3 + \mu_4 = 1, \quad 0 \le \mu_1, \mu_2, \mu_3, \mu_4 \le 1 \qquad (3)$$
$Sim_{domain}$, $Sim_{source}$, $Sim_{scene}$ and $Sim_{content}$ are the similarities of the data domain, data source, application scene and data content respectively, which can be measured by MCMA. $\mu_1$, $\mu_2$, $\mu_3$ and $\mu_4$ are the weights of these items in data matching; they can be adjusted to the actual situation and are initialized to 0.15, 0.15, 0.15 and 0.55 in SWSM-DE. Let $C_{func}$ be the threshold value of the first stage and $C_{data}$ the threshold of the second stage. According to the matching process of SWSM-DE, if $Sim_{func}(S_{pro}, S_{req}) \ge C_{func}$, the second stage is executed and $Sim_{data}(S_{pro}, S_{req})$ is calculated. In the same way, if $Sim_{data}(S_{pro}, S_{req}) \ge C_{data}$, the final matching result $Sim(S_{pro}, S_{req})$ is calculated by formula (1). The exact formula for calculating $Sim(S_{pro}, S_{req})$ is therefore:
$$Sim(S_{pro}, S_{req}) = \begin{cases} 0, & Sim_{func}(S_{pro}, S_{req}) < C_{func} \ \text{or}\ Sim_{data}(S_{pro}, S_{req}) < C_{data} \\ \text{formula (1)}, & Sim_{func}(S_{pro}, S_{req}) \ge C_{func} \ \text{and}\ Sim_{data}(S_{pro}, S_{req}) \ge C_{data} \end{cases} \qquad (4)$$
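Formulas (1)-(4) amount to a short computation; a hypothetical Python sketch with the paper's initial weights, where `sim` stands in for the MCMA/MIMA/MOMA similarity measures (the trivial placeholder below is ours):

```python
def swsm_de(spro, sreq, sim,
            theta=(0.5, 0.5),
            lam=(0.25, 0.25, 0.25, 0.25),
            mu=(0.15, 0.15, 0.15, 0.55),
            c_func=0.5, c_data=0.5):
    """Two-stage SWSM-DE score; spro/sreq are SDM-like dicts, sim maps to [0,1]."""
    # stage 1: function matching (formula 2), filtered by threshold C_func
    f_keys = ["name", "description", "inputs", "outputs"]
    sim_func = sum(w * sim(spro[k], sreq[k]) for w, k in zip(lam, f_keys))
    if sim_func < c_func:
        return 0.0
    # stage 2: data matching (formula 3), filtered by threshold C_data
    d_keys = ["domain", "source", "scene", "content"]
    sim_data = sum(w * sim(spro[k], sreq[k]) for w, k in zip(mu, d_keys))
    if sim_data < c_data:
        return 0.0
    # final combination (formulas 1 and 4)
    return theta[0] * sim_func + theta[1] * sim_data

trivial = lambda a, b: 1.0 if a == b else 0.0   # placeholder concept similarity
s = {"name": "AxletreeInfo", "description": "axletree data", "inputs": "type",
     "outputs": "record", "domain": "manufacturing", "source": "ERP",
     "scene": "procurement", "content": "axletree catalogue"}
print(swsm_de(s, dict(s), trivial))   # identical descriptions -> 1.0
```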
5 Prototype System

The prototype system, developed with Borland JBuilder 9.0, uses three computers to simulate DPRO, DREQ and SUDDI respectively. DPRO runs on Windows NT and uses a SQL Server 2000 database and Tomcat 4.1 as the local web server. DREQ runs on Windows 2000 and uses a Microsoft Access 2000 database. SUDDI runs on Windows XP and uses the OWL-S API to create and parse OWL-S profiles, juddi0_9RC4 to realize the UDDI 2.0 APIs, MySQL 5.0 as the registration database, Tomcat 5.0 as the web server and JSP pages as user interfaces. The initial weight values in formulas (1), (2) and (3) are adopted. The system is based on the following application scenario: a machine factory in Beijing needs a special type of axletree, while enterprises in New York, London, Sydney and Tokyo can manufacture that very type. However, the factory in Beijing does not know which enterprises can manufacture them, nor how to inquire about them. If WSDE is adopted, a SUDDI center is constructed and:

- Enterprises that manufacture axletrees act as DPROs. They provide information about the axletrees through Web services and publish the service information on the SUDDI center (Figure 5(a)).
- The factory in Beijing, acting as DREQ, requests Web services from the SUDDI center based on its requirements on service function and exchange data.
- The SUDDI center executes services matching and returns a list of enterprises that can manufacture the required type of axletree (Figure 5(b)).
- The factory in Beijing chooses a suitable item associated with a certain enterprise, remotely invokes the Web service that the enterprise has deployed and gets the product information, i.e. the exchange data, locally. Data schema matching and format transforming are then executed. Finally, the transformed exchange data can be stored in the local database or used in applications.
[Figure: two screenshots of the prototype.]

Fig. 5. a. Interface of publishing Web services; b. Interface of displaying matching results.
6 Conclusions

In this paper, a semantic Web services based data exchange method is proposed, which can realize the whole process of data exchange on the Internet. In the proposed method, data publishing, data discovery and data getting are realized by publishing, discovering and invoking Web services respectively, while data schema matching and format transforming are realized by XML technology. In order to improve the precision of data discovery, a semantic-level Web services matching mechanism for data exchange is presented, which takes both the service function and the exchange data into account and calculates the semantic similarity of published and requested Web services based on OWL-S. Compared to existing data exchange methods, the proposed method can complete the whole process of data exchange, realize easy data discovery and provide or obtain exchange data dynamically on the Internet. The prototype system demonstrates an industrial use case by which the presented framework and mechanism are verified. The proposed data exchange method can also be applied to various other applications.
References
[1] ISO/TC184/SC5/WG4, (2002) ISO 16100-1 Industrial automation systems and integration - Manufacturing software capability profiling for interoperability - Part 1: Framework
[2] ISO/TC184/SC5/WG4, (2005) ISO 16100-3 Industrial automation systems and integration - Manufacturing software capability profiling for interoperability - Part 3: Interface services, protocols and capability templates
[3] Fagin R, Kolaitis PG, Popa L, (2005) Data Exchange: Getting to the Core. ACM Transactions on Database Systems, 174-210
[4] Meadors K, (2005) Secure Electronic Data Interchange over the Internet. IEEE Internet Computing, 82-89
[5] Wang N, Chen Y, Yu BQ, Wang NB, (1997) Versatile: A scalable CORBA 2-based system for integrating distributed data. In Proceedings of the 1997 IEEE International Conference on Intelligent Processing Systems, 1589-1593
[6] Pendyala VS, Shim SSY, Gao JZ, (2003) An XML Based Framework for Enterprise Application Integration. In IEEE International Conference on E-commerce, 128-135
[7] Zhang M, Xu QS, Shen XC, (2006) Data exchange and sharing platform model based on XML. Journal of Tsinghua University (Science and Technology), 105-107, 119
[8] RosettaNet, (2007-10) About RosettaNet Standards. http://portal.rosettanet.org/cms/site/RosettaNet/
[9] ebXML Technical Architecture Project Team, (2001) ebXML Technical Architecture Specification v1.0.4. http://www.ebxml.org/specs/ebTA.pdf, 1-39
[10] Hu JQ, Zou P, Wang HM, Zhou B, (2005) Research on Web Service Description Language QWSDL and Service Matching Model. Chinese Journal of Computers, 505-513
[11] Gao S, Omer FR, Nick JA, Chen DF, (2007) Ontology-based semantic matchmaking approach. Advances in Engineering Software, 59-67
[12] Paolucci M, Kawamura T, Payne TR, Sycara K, (2002) Semantic matching of web service capabilities. In Proceedings of the 1st International Semantic Web Conference, Springer-Verlag, 333-347
[13] Cui Y, (2005) Research on Service Matchmaking Model Based on Semantic Web. Master Dissertation, Dalian University of Technology, China
[14] Kajal TC, Vaishali H, Naiyana T, (2005) QMatch - A Hybrid Match Algorithm for XML Schemas. In Proceedings of the 21st International Conference on Data Engineering, IEEE Computer Society, 1281-1290
[15] Hau J, Lee W, Darlington J, (2005) A Semantic Similarity Measure for Semantic Web Services. http://www.ai.sri.com/WSS2005/final-versions/WSS2005-Hau-Final.pdf
[16] Jaeger MC, Rojec-Goldmann G, Liebetruth C, Muhl G, Geihs K, (2005) Ranked Matching for Service Descriptions using OWL-S. In Proceedings of Communication in Distributed Systems, 91-102
Ontology-driven Semantic Mapping

Domenico Beneventano1, Nikolai Dahlem2, Sabina El Haoum2, Axel Hahn2, Daniele Montanari1,3, Matthias Reinelt2

1 University of Modena and Reggio Emilia, Italy {domenico.beneventano, daniele.montanari}@unimore.it
2 University of Oldenburg, Germany {dahlem, elhaoum, hahn, reinelt}@wi-ol.de
3 Eni SpA, Italy [email protected]
Abstract. When facilitating interoperability at the data level one faces the problem that different data models are used as the basis for business formats. For example, relational databases are based on the relational model, while XML Schema is basically a hierarchical model (with some extensions, like references). Our goal is to provide a syntax- and data-model-neutral format for the representation of business schemata. We have developed a unified description of data models called the Logical Data Model (LDM) Ontology. It is a superset of the relational, hierarchical, network and object-oriented data models, represented as a graph consisting of nodes with labeled edges. For the representation of the different relationships between the nodes in the data model we introduce different types of edges, for example: is_a for the representation of the subclass relationship, identifies for the representation of unique key values, contains for the containment relationship, etc. In this paper we discuss the mapping process as proposed by the EU project STASIS (FP6-2005-IST-5-034980). We then describe the Logical Data Model in detail and demonstrate its use by giving an example. Finally, we discuss future research planned in this context in the STASIS project.

Keywords: business schema representation, business interoperability, meta-model
1 Introduction

Today's enterprises, no matter how big or small, have to meet the challenge of bringing together disparate systems and making their mission-critical applications collaborate seamlessly. One of the most difficult problems in any integration effort is the lack of interoperability at the data level. Frequently, the same concepts are embedded in different data models and represented differently. One difficulty is identifying and
mapping differences in naming conventions, whilst coping with the problems of polysemy (the existence of several meanings for a single word or phrase) and synonymy (the equivalence of meaning). A connected problem is identifying and mapping differences stemming from the use of different data models. For example, information expressed in a relational schema is based on the relational data model, while XML Schema is basically a hierarchical model (with some extensions, like references). Therefore we propose an ontology to describe a unified data model, called the Logical Data Model. The purpose of the Logical Data Model ontology is to provide a common representation able to encapsulate the substantial information coming from different sources and various schema formats. Data models represented by such an ontology can be considered a neutral specification which allows common processing in an interoperability framework. In the remainder we first discuss the related work (section 2). Then we describe the mapping process to provide the context for this work in section 3. Section 4 presents the Logical Data Model Ontology and gives an example. Section 5 discusses ontology-driven semantic mapping and section 6 concludes with a discussion and an outlook on future research.
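To fix ideas before turning to the related work, here is a hypothetical fragment of such an LDM graph, written as labelled edges in Python (node and edge names are chosen by us, following the edge types named in the abstract):

```python
# an LDM fragment as (subject, edge_label, object) triples
ldm_edges = [
    ("Order",     "contains",   "OrderLine"),  # containment (hierarchical sources)
    ("OrderLine", "references", "Product"),    # reference (e.g. a foreign key)
    ("Product",   "is_a",       "Item"),       # subclass relationship
    ("ProductId", "identifies", "Product"),    # unique key values
]

def neighbours(node, label, edges=ldm_edges):
    """Follow one edge type from a node, e.g. everything an Order contains."""
    return [o for s, e, o in edges if s == node and e == label]

print(neighbours("Order", "contains"))   # -> ['OrderLine']
```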
2 Related Work

The integration costs for the cooperation of enterprise applications are still extremely high, because of differing business processes, data organization and application interfaces that need to be reconciled, typically with great manual (and therefore error-prone) intervention. This problem has been addressed independently by MDA and by ontology-based approaches. The Model Driven Architecture (MDA) proposed by the Object Management Group (OMG) uses platform-independent models (PIMs) [1] as the context for identifying relations between different applications. Transformation is a central concept in MDA, addressing how to convert one model into another model of the same system, and further into executable code. MDA provides technologies to handle meta-models, constraints etc., which can be used for semantic enrichment and model transformation. In the model-based approach, the Unified Modelling Language [2] is used to express conceptual models. The meta-language Meta Object Facility (MOF) is defined as part of the solution in order to capture relationships between data elements. Transformation languages are used to create executable rules, and transformation techniques can be used in the process of detailing the information needed, converting from abstract MOF-compliant languages to more formal ones [3]. Today, ontology technologies have reached a good level of maturity and their applications to industrially relevant problems are proliferating. Ontologies are the key elements of the Semantic Web, a notion led by the W3C and defined as a "common framework allowing data to be shared and reused across application, enterprise and community boundaries" [4]. Ontologies support semantic mapping by providing an explicitly defined meaning of the information to be exchanged. The development of the LDM Ontology was
particularly inspired by related work on relational schema modeling and by the general goal of establishing mappings among (fragments of) domain ontologies. The latter has been an active field of research in the last ten years, exploring a number of approaches. The basic expression of mapping for ontologies modeled with description logic formalisms and the associated languages (like OWL) involves the use of basic language constructs or evolved frameworks to express the existence and properties of similarities and then mappings [5] [6] [7]. One significant result in this area is the MAFRA framework [8]. Research in the area of database schema integration has been carried out since the beginning of the 1980s, and schema comparison techniques are often well suited for translation into mapping techniques; a survey of such techniques is offered in [9]. One system extensively using these techniques is MOMIS (Mediator Environment for Multiple Information Sources); MOMIS creates a global virtual view of information sources, independent of their location and heterogeneity [10]. The discovery of mappings has been studied by means of general methods often derived from other fields. One such approach is graph comparison, which comprises a class of techniques that represent the source and target ontologies (or schemata) as graphs and try to exploit graph structure properties to establish correspondences; similarity flooding [11] and AnchorPrompt [12] are examples of such approaches. Machine learning techniques have also been used. One example is GLUE [13], where multiple learners look for correspondences among the taxonomies of two given ontologies, based on the joint probability distribution of the concepts involved and a probabilistic model for the combination of results by different learners. Another example is OMEN (Ontology Mapping Enhancer) [14], a probabilistic mapping tool using a Bayesian net to enhance the quality of the mappings. Linguistic analysis is also quite relevant, as linguistic approaches exploit the names of the concepts and other natural language features to derive information about potential mates for a mapping definition. For example, in [15] a weighted combination of similarities of features in OWL concept definitions is used to define a metric between concepts. Other studies in this area include ONION [16] and Prompt [17], which use a combination of interactive specifications and heuristics to propose potential mappings. Similarly, [18] uses a Bayesian approach to find mappings between classes based on text documents classified as exemplars of these classes. In Athena (ST-507849), a large European IST project, two different technologies have been applied to support model mapping. Semantic mapping involves the application of an ontology; however, the current literature does not provide a detailed description of how this is to be done, as pointed out by [19] and [20]. In Athena, a solution has been proposed based on semantic annotation (the A* tool), reconciliation rule generation (the Argos tool) and a reconciliation execution engine (Ares). In parallel, Athena also proposed a model-based approach, based on a graphical tool (Semaphore) aimed at supporting the user in specifying the mappings, and XSLT-based transformation rules.
Other European projects addressing mapping issues include the IST FP6 projects SWAP (IST-2001-34103) [21], SEKT (IST-2003-506826) [22], and DotKom (IST-2001-34038) [23].
3 Mapping of Business Schemata
When analyzing semantic relations between business schemata, we follow the approach of A* [24] to obtain a neutral representation of the schemata first. In a subsequent step this neutral representation is processed to identify mappings. These steps are discussed in the following two sections.
3.1 Obtaining a Neutral Schema Representation
The proposed mapping process works on a neutral representation, which abstracts from the specific syntax and data model of a particular business schema definition. Therefore, all incoming business schemata first need to be expressed in this neutral format. Fig. 1 shows the steps of this acquisition process.
Fig. 1. Schema acquisition process
Firstly, the incoming schema is expressed in terms of a corresponding structural ontology. Several parseable and non-parseable schema formats have already been analyzed and are supported, namely relational databases, XML Schema, EDIFACT-like EDI environments, and flat-file representations. For each of these formats a specific structural ontology is defined [25]. Then, in a second step, the model-specific structural ontology representation is transformed into a neutral representation which is based on the Logical Data Model. This transformation can be automated by applying a set of predefined rules.
3.2 Identification of Mappings
Once the schema information has been acquired and expressed in the unified model, further analysis and/or processing can be performed to identify a set of mappings between semantic entities being used in different business schemata. The goal is to provide such sets of mappings as input to translator tools to achieve interoperability between dispersed systems without modifying the involved schemata. The definition of the mappings is done through the acquisition of the crucial features in the schemata of the source and target, giving them a conceptual representation, enriching this representation with semantic annotations and then using the system functionalities to synthesize and refine the mappings. An overview of this process is given in Fig. 2.
Fig. 2. Mapping process
As shown in Fig. 2, the neutral representation of incoming schemata provides the basis for the identification of the relevant semantic entities on which the mapping process operates. This step is labeled "extraction", and the resulting semantic entities form the A-box of the LDM Ontology. Apart from the element identified as a semantic entity, a semantic entity holds metadata such as annotations, example values, keywords, owner and access information, etc. The analysis of the information encapsulated in semantic entities can support the finding of mapping candidates. A more advanced way to identify mappings between semantic entities is to derive them through reasoning on aligned ontologies. For this purpose the semantic entities need to be annotated with respect to some ontology, as proposed in A*. Based on the annotations made with respect to the ontologies and on the logical relations identified between these ontologies, reasoning can identify correspondences on the semantic entity level and support the mapping process. Beyond the capability of A*, this reasoning can also benefit from the conceptual information derived from the LDM Ontology, because all semantic entities carry this extra information by being instances of the concepts of the LDM Ontology.
4 Logical Data Model Ontology
This section contains a general description of the Logical Data Model Ontology, followed by an example to demonstrate its main characteristics.
4.1 General Description of the Model
The LDM Ontology contains generic concepts abstracting from syntactical aspects and different data models. As an intuitive example, in the relational model a foreign key expresses a reference between two tables; at a more abstract level we can consider the two tables as nodes of a graph and the foreign key as an edge from one table to another; more precisely, this is a directed (since a foreign key has a "direction") and labeled (since we want to distinguish two foreign keys between the same pair of tables) edge. In this way, the LDM Ontology corresponds to a graph with directed labeled edges, and it has the following types of concepts (a small structural sketch follows the list):
1. The Nodes of the graph, which are partitioned into SimpleNodes and ComplexNodes.
2. The edges of the graph, which represent Relationships between Nodes. The following types of Relationships can exist:
- Reference: a Reference is a directed labeled edge between ComplexNodes.
- Identification: a ComplexNode can be identified by a SimpleNode or a set of SimpleNodes.
- Containment: a ComplexNode can contain other Nodes, SimpleNodes and/or ComplexNodes.
- Qualification: a Node can be qualified by a SimpleNode.
- Inheritance: inheritance can exist between ComplexNodes.
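To make the graph structure concrete, the following minimal Java sketch (illustrative type names only; the actual LDM Ontology is expressed in OWL, as described next) encodes SimpleNodes, ComplexNodes and the Reference, Identification and Containment relationships:

// Minimal sketch (hypothetical names): the LDM Ontology as a typed graph.
// SimpleNodes and ComplexNodes partition the nodes; References are directed,
// labeled edges between ComplexNodes; Containment and Identification are
// modelled as node-level associations.
import java.util.ArrayList;
import java.util.List;

abstract class Node {
    final String name;
    Node(String name) { this.name = name; }
}

class SimpleNode extends Node {                 // e.g. a relational Column
    SimpleNode(String name) { super(name); }
}

class ComplexNode extends Node {                // e.g. a relational Table
    final List<Node> contains = new ArrayList<>();           // Containment
    final List<SimpleNode> identifiedBy = new ArrayList<>(); // Identification
    ComplexNode(String name) { super(name); }
}

class Reference {                               // directed, labeled edge
    final String label;                         // labels distinguish parallel edges
    final ComplexNode from, to;
    Reference(String label, ComplexNode from, ComplexNode to) {
        this.label = label; this.from = from; this.to = to;
    }
}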
The LDM Ontology has been represented as an OWL ontology. An overview of the concepts and their relations in the ontology is shown in Fig. 3. A detailed description of the LDM Ontology is provided in [25].
Fig. 3. Overview of the concepts in the LDM Ontology
4.2 Demonstration example
In this section an example is introduced to show how a relational database schema is first represented in terms of a structural ontology and then transformed into an LDM Ontology representation by means of the respective transformation rules. For the relational case the structural ontology has to provide concepts for the terms Database, Relation and Attribute, and a property consistsOf to create a hierarchy involving them. For this purpose the structural ontology contains the concepts Catalogue, Table and Column and the object property hasColumn. Consider the relational schema in Fig. 4. Expressed in terms of the structural ontology for the relational case (hereafter referred to as the relational structural schema), there are two Tables: Table "Order" and Table "OrderLine", with their Columns "number", "date", "customerID" and "articleNumber", "quantity", "lineNumber", "orderNumber" respectively. Additionally, the Column "number" is declared the PrimaryKey of the "Order" Table and the Column "lineNumber" the PrimaryKey of the Table "OrderLine". Further, an "OrderLine" is connected to one specific "Order" using the ForeignKey reference "FK_OrderLine_Order".
Fig. 4. Part of an exemplary relational database schema
The next step to achieve an LDM Ontology representation is to apply transformation rules to the structural ontology representation. A brief overview of the transformation rules is presented in Table 1 (a small illustrative sketch follows the table). Due to space limitations the table only gives an intuition of the rules; their detailed explanation is given in [25]. In general, in the LDM Ontology all Tables will be represented as ComplexNodes, Columns as SimpleNodes, and so on.

Table 1. Transformation rules from the relational structural ontology into an LDM Ontology representation

Entity in the relational structural ontology | Entity in the Logical Data Model | Comments
Table | ComplexNode | All Tables are represented as ComplexNodes
Column | SimpleNode | All Columns are represented as SimpleNodes
KeyConstraint | Identification | All KeyConstraints (i.e. PrimaryKeys and AlternativeKeys) are represented as Identifications
ForeignKey | Reference | All ForeignKeys are represented as References
hasColumns | Containment | The relationship between a Table and its Columns is represented as a Containment relationship
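Mechanically, applying these rules amounts to a straightforward traversal of the relational structural schema; the following is a minimal sketch, reusing the illustrative classes introduced above (the authoritative rule definitions are those of [25]):

// Illustrative application of the rules of Table 1 to the schema of Fig. 4.
class RelationalToLdm {
    // Table -> ComplexNode, Column -> SimpleNode, hasColumn -> Containment,
    // the key column -> Identification.
    static ComplexNode tableToNode(String table, String key, String... columns) {
        ComplexNode node = new ComplexNode(table);
        SimpleNode keyNode = new SimpleNode(key);
        node.contains.add(keyNode);
        node.identifiedBy.add(keyNode);         // KeyConstraint -> Identification
        for (String c : columns) node.contains.add(new SimpleNode(c));
        return node;
    }

    public static void main(String[] args) {
        ComplexNode order = tableToNode("Order", "number", "date", "customerID");
        ComplexNode orderLine = tableToNode("OrderLine", "lineNumber",
                "articleNumber", "quantity", "orderNumber");
        // ForeignKey -> Reference: FK_OrderLine_Order becomes "belongsTo"
        Reference belongsTo = new Reference("belongsTo", orderLine, order);
        System.out.println(belongsTo.from.name + " --belongsTo--> " + belongsTo.to.name);
    }
}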
The application of the transformation rules leads to an LDM Ontology-based representation of the example, as shown in Fig. 5. The notation used in this figure is described in [25].
Fig. 5. LDM Ontology representation of the exemplary schema
According to the graphical representation in Fig. 5, the example schema contains the two ComplexNodes "Order" and "OrderLine". For each Column a SimpleNode is introduced and connected with its Table/ComplexNode via the Containment relation. Identification relations are defined for the PrimaryKeys "number" and "lineNumber". The ForeignKey "FK_OrderLine_Order" is transformed into a Reference "belongsTo".
5 Ontology-driven Semantic Mapping
As discussed in Section 3, mappings between semantic entities can be derived from annotations linking the semantic entities with concepts belonging to an ontology. Annotating semantic entities with respect to an external ontology means that additional machine-processable knowledge is associated with them. As in A*, the ontology-driven process of deriving correspondences between semantic entities belonging to different schemata makes use of this additional knowledge. Our approach also benefits from structural knowledge on the data model, represented by linking the semantic entities to the concepts of the LDM Ontology. When the annotation of semantic entities belonging to different schemata is based on one common ontology and the LDM Ontology (see Fig. 6), the annotations can directly facilitate the discovery of semantic relations between the semantic entities. The definition of the semantic link specification (SLS) is based on [26]. The following semantic relations between semantic entities of two business formats are defined: equivalence (EQUIV); more general (SUP); less general (SUB); disjointness (DISJ). As in [26], when none of the relations holds, the special IDK (I do not know) relation is returned. Notice that IDK is an explicit statement that the system is unable to compute any of the declared (four) relations. This should be interpreted as follows: either there is not enough background knowledge, and therefore the system cannot
explicitly compute any of the declared relations or, indeed, none of those relations hold according to an application. The semantics of the above relations are the obvious set-theoretic semantics.
Fig. 6. Ontology-based schema mapping with a single common ontology
More formally, an SLS is a 4-tuple ⟨ID, semantic_entity1, R, semantic_entity2⟩, where ID is a unique identifier of the given mapping element; semantic_entity1 is an entity of the first format; R specifies the semantic relation which may hold between semantic_entity1 and semantic_entity2; and semantic_entity2 is an entity of the second format. Our discussion is based on examples. To this end we consider the following two business formats: business format 1 (bf1), whose semantic_entity1 is shown graphically in Fig. 7, and business format 2 (bf2), whose semantic_entity2 is shown in Fig. 8. We consider the annotation of the above business formats with respect to the Purchase_order Ontology (see Fig. 9).
Fig. 7. Business format specification (bf1) (derived from relational schema)
Fig. 8. Business format specification (bf2) (derived from an XML schema)
Fig. 9. The ontology of Purchase_order
The proposed "Ontology-based schema mapping with a single common ontology" is based on the annotation of a business format with respect to this single common ontology. Here we will use the following proposal. An annotation element is a 4-tuple ⟨ID, SE, concept, R⟩, where ID is a unique identifier of the given annotation element; SE is a semantic entity of the business format; concept is a concept of the ontology; and R specifies the semantic relation which may hold between SE and concept. The proposal is to use the following semantic relations between semantic entities of the business format and the concepts of the ontology: equivalence (AR_EQUIV); more general (AR_SUP); less general (AR_SUB); disjointness (AR_DISJ). Let us give some examples of annotation; in the examples, the unique identifier ID is omitted.
- (bf2:Address, AR_EQUIV, O:Address) may be considered as the output of automatic annotation
- (bf2:Address, AR_SUB, O:Address) may be considered as the output of a ranked automatic annotation/search: the AR_SUB relation is used instead of AR_EQUIV since the rank is less than a given threshold
- (bf2:Address, AR_EQUIV, O:Address and Billing-1.Purchase_Order) may be considered as a refinement by the user of (bf2:Address, AR_EQUIV, O:Address) to state that the address in the BF is exactly the "address of the Billing in a Purchase_Order"

Let us also consider the following possible annotations of bf1:
- (bf1:Address, AR_EQUIV, O:Address)
- (bf1:Address, AR_SUB, O:Address)
- (bf1:Address, AR_EQUIV, O:Address and Shipping-1.Purchase_Order)
- (bf1:Address, AR_EQUIV, O:Address and Shipping-1.Purchase_Order)
- (bf1:Address, AR_DISJ, O:Address and Billing-1.Purchase_Order)
Now, some examples of SLSs derived from annotations will be discussed. To this end, let us suppose that in bf2 there is the following annotation for address: (bf2:Address, AR_EQUIV, O:Address). We want to discuss which SLS is derived between bf2:Address and bf1:Address, by considering the following cases for the Address annotation in bf1.
Case 1) (bf1:Address, AR_EQUIV, O:Address). The following SLS can be derived: (bf1:Address, EQUIV, bf2:Address).
Case 2) (bf1:Address, AR_SUB, O:Address). The following SLS can be derived: (bf1:Address, SUB, bf2:Address).
Case 3) (bf1:Address, AR_EQUIV, O:Address and InverseOf(Shipping).Purchase_Order). The following SLS can be derived: (bf1:Address, SUB, bf2:Address), since Address and InverseOf(Shipping).Purchase_Order (the annotation of bf1:Address) is subsumed by Address (the annotation of bf2:Address).
This shows how the semantic mapping can be derived from the semantic entity specification. The information of the linkage to the LDM Ontology is used in the same way. One topic is still open: a possible extension [to be evaluated] w.r.t. the framework of [26] is the addition of the overlapping (OVERLAP) semantic relation. Formally, we need to evaluate whether with OVERLAP we can decide IDK relations; moreover, we need to prove that with OVERLAP "relations are ordered according to decreasing binding strength".
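The pattern behind the three cases can be condensed into a small sketch (a simplification under the assumption that each semantic entity carries a single ontology-expression annotation; in the actual system the subsumption tests are delegated to a description logic reasoner such as Racer):

// Sketch: deriving a semantic link (SLS) between two semantic entities
// from their annotations with respect to the common ontology.
enum Rel { EQUIV, SUP, SUB, DISJ, IDK }

final class Annotation {
    final String entity;        // e.g. "bf1:Address"
    final String concept;       // ontology expression, e.g. "O:Address and ..."
    final Rel rel;              // AR_EQUIV, AR_SUB, ... mapped onto Rel
    Annotation(String entity, String concept, Rel rel) {
        this.entity = entity; this.concept = concept; this.rel = rel;
    }
}

interface Reasoner {            // hypothetical facade over a DL reasoner
    boolean subsumes(String general, String specific);
    boolean equivalent(String a, String b);
}

class SlsDerivation {
    static Rel derive(Annotation a1, Annotation a2, Reasoner r) {
        if (a1.rel == Rel.EQUIV && a2.rel == Rel.EQUIV) {
            if (r.equivalent(a1.concept, a2.concept)) return Rel.EQUIV;  // case 1
            if (r.subsumes(a2.concept, a1.concept))   return Rel.SUB;    // case 3
            if (r.subsumes(a1.concept, a2.concept))   return Rel.SUP;
        }
        if (a1.rel == Rel.SUB && a2.rel == Rel.EQUIV
                && r.equivalent(a1.concept, a2.concept)) return Rel.SUB; // case 2
        return Rel.IDK;  // not enough knowledge to assert any declared relation
    }
}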
6 Discussion and Future Research
We provide a joint approach that integrates the benefits of MOF-based and ontology-based semantic mapping methods. Model entities of business formats/standards are described by a generic meta model which is made explicit by an ontology, called the Logical Data Model Ontology. By annotating these semantic entities w.r.t. business ontologies, an enriched knowledge base becomes available to reason on semantic links and to align the entities of business formats. These technologies are going to be integrated in an interoperability framework to share the semantic information in
peer groups to enrich the semantic basis for cooperation. This enhances the common ontology to provide an even better basis for the mapping process. This is accompanied by further approaches to simplify the definition of ontologies, their linkage to semantic entities (annotation), and the verification of the jointly generated semantic net.
Acknowledgments
The STASIS IST project (FP6-2005-IST-5-034980) is sponsored under the EC 6th Framework Programme.
References
[1] Kleppe, A., Warmer, J., Bast, W.: MDA Explained: The Model Driven Architecture - Practice and Promise. Addison-Wesley, Boston (2003)
[2] Rumbaugh, J., Jacobson, I., Booch, G.: The Unified Modeling Language Reference Manual. Second Edition, Addison-Wesley, Boston (2005)
[3] Karagiannis, D., Kühn, H.: Metamodelling Platforms. In Bauknecht, K., Min Tjoa, A., Quirchmayr, G. (eds.): Third Int. Conference EC-Web 2002, p. 182. Springer, Berlin (2002)
[4] W3C Semantic Web Activity, http://www.w3.org/2001/sw/ last accessed 2007-10-24
[5] Ehrig, M., Haase, P., Hefke, M., Stojanovic, N.: Similarity for Ontologies - a Comprehensive Framework. In Workshop Enterprise Modelling and Ontology: Ingredients for Interoperability, PAKM 2004 (2004)
[6] Weinstein, P., Birmingham, W.P.: Comparing Concepts in Differentiated Ontologies. In 12th Workshop on Knowledge Acquisition, Modelling, and Management (KAW99), Banff, Alberta, Canada (1999)
[7] Choi, N., Song, I-Y., Han, H.: A Survey of Ontology Mapping. In SIGMOD Record 35, Nr. 3 (2006)
[8] Mädche, A., Motik, B., Silva, N., Volz, R.: MAFRA - A Mapping Framework for Distributed Ontologies. In 13th Int. Conf. on Knowledge Engineering and Knowledge Management, vol. 2473 in LNCS, pp. 235--250 (2002)
[9] Rahm, E., Bernstein, P.: Survey of Approaches to Automatic Schema Matching. VLDB Journal 10 (2001)
[10] Beneventano, D., Bergamaschi, S., Guerra, F., Vincini, M.: Synthesizing an Integrated Ontology. IEEE Internet Computing, September-October (2003)
[11] Melnik, S., Garcia-Molina, H., Rahm, E.: Similarity Flooding: A Versatile Graph Matching Algorithm and its Applications to Schema Matching. In 18th International Conference on Data Engineering (ICDE-2002), San Jose, California (2002)
[12] Noy, N.F., Musen, M.A.: Anchor-PROMPT: Using non-local context for semantic matching. In Workshop on Ontologies and Information Sharing at the 17th International Joint Conference on Artificial Intelligence (IJCAI-2001), Seattle, WA, US (2001)
[13] Doan, A., Madhavan, J., Domingos, P., Halevy, A.: Learning to map between ontologies on the semantic web. In The 11th Int. WWW Conference, Hawaii, USA (2002)
[14] Mitra, P., Noy, N.F., Jawal, A.R.: OMEN: A Probabilistic Ontology Mapping Tool. LNCS, Vol. 3729/2005, Springer, Berlin (2005)
[15] Euzenat, J., Valtchev, P.: Similarity-based ontology alignment in OWL-Lite. In The 16th European Conference on Artificial Intelligence (ECAI-04), Valencia, Spain (2004)
[16] Mitra, P., Wiederhold, G., Decker, S.: A scalable framework for interoperation of information sources. In SWWS01, Stanford University, Stanford, CA, US (2001)
[17] Noy, N.F., Musen, M.A.: The PROMPT suite: Interactive tools for ontology merging and mapping. International Journal of Human-Computer Studies, 59, pp. 983--1024 (2003)
[18] Prasad, S., Peng, Y., Finin, T.: A tool for mapping between two ontologies using explicit information. In AAMAS 2002 Ws on Ontologies and Agent Systems, Bologna, Italy (2002)
[19] Wache, H., Vögele, T., Visser, U., Stuckenschmidt, H., Schuster, G., Neumann, H., Hübner, S.: Ontology-Based Integration of Information - A Survey of Existing Approaches. In: IJCAI-01 Workshop Ontologies and Information Sharing (IJCAI01), pp. 108--118 (2001)
[20] Gómez-Pérez, A., Fernández-López, M., Corcho, O.: Ontological Engineering with examples from the areas of Knowledge Management, e-Commerce and the Semantic Web. Springer (2004)
[21] SWAP Project Web Site, http://swap.semanticweb.org/ last accessed 2007-10-24
[22] SEKT Project Web Site, http://www.sekt-project.com/ last accessed 2007-10-24
[23] DotKom Project Web Site, http://nlp.shef.ac.uk/dot.kom/ last accessed 2007-10-24
[24] Callegari, G., Missikoff, M., Osimi, N., Taglino, F.: Semantic Annotation language and tool for Information and Business Processes, Appendix F: User Manual. ATHENA Project Deliverable D.A3.3 (2006), available at http://lekspub.iasi.cnr.it/Astar/AstarUserManual1.0 last accessed 2007-10-24
[25] STASIS Project Web Site, http://www.stasis-project.net/ last accessed 2007-10-24
[26] Giunchiglia, F., Yatskevich, M., Shvaiko, P.: Semantic Matching: Algorithms and Implementation. Journal on Data Semantics (JoDS), IX, LNCS 4601, pp. 1--38 (2007)
Self-Organising Service Networks for Semantic Interoperability Support in Virtual Enterprises Alexander Smirnov, Mikhail Pashkin, Nikolay Shilov, Tatiana Levashova St.Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, 39, 14 Line, 199178 St.Petersburg, Russia {smir, michael, nick, oleg}@iias.spb.su
Abstract. Virtual enterprises consist of a number of independent distributed members which have to collaborate in order to succeed. The paper proposes an approach to the creation of self-organising service networks to support semantic interoperability between virtual enterprise members. Since centralized control is not always possible, the presented approach relies on decentralized communication and on ad-hoc decision making based on the current situation state and its possible future development, using self-organising networks of knowledge sources and problem solvers. The paper is devoted to questions of semantic interoperability in such agent-based service networks. Ontologies are used for the description of knowledge domains, and the application of object-oriented constraint networks is proposed for ontology modelling. The approach uses such technologies as knowledge and ontology management, profiling, intelligent agents, Web services, etc.
Keywords: Self-organisation, semantic interoperability, agent, service, ontology, context
1 Introduction
Nowadays, complex decision making faces problems of management and sharing of huge amounts of information and knowledge from distributed and heterogeneous sources (experts, electronic documents, real-time sensors, etc.) belonging to virtual enterprise members, of personalization, and of the availability of up-to-date and accurate information provided by the dynamic environment. The problems include the search for the right sources, the extraction of content, the presentation of results in a personalized way, and others. As a rule, the content of several sources has to be extracted and processed (e.g., fused, converted, checked) to produce the required information. Due to such factors as different data formats, interaction protocols and others, this leads to a problem of semantic interoperability.
Ontologies are widely used for problem domain description in modern information systems to support semantic interoperability. An ontology is an explicit specification of the structure of a certain domain. It includes a vocabulary for referring to notions of the subject area, and a set of logical statements expressing the constraints existing in the domain and restricting the interpretation of the vocabulary [1]. Ontologies support the integration of resources that were developed using different vocabularies and different perspectives on the data. To achieve semantic interoperability, systems must be able to exchange data so that the precise meaning of the data is readily accessible and the data itself can be translated by any system into a form that it understands [2]. Centralized control in complex distributed systems is not always possible: for example, virtual enterprises consist of independent companies and do not have a central decision-making unit. Thus, decentralized self-organisation of distributed independent components is a promising architecture for such systems [3], [4], [5]. However, in order for self-organisation to operate, it is necessary to solve a number of problems including: (i) registration and cancelling of registration of network elements, (ii) preparation of the initial state, (iii) self-configuration: finding appropriate network elements [6], negotiation of conditions and assignment of links, and preparation of alternative configurations. Different research projects are devoted to the self-management of such networks: self-contextualization, self-optimization, self-organization, self-configuration, self-adaptation, self-healing, and self-protection [7]. The following major requirements for the approach to virtual enterprise interoperability support have been selected (some of the decision making processes in virtual enterprises have been identified in [8]): (i) intensive information exchange, (ii) distributed architecture, (iii) decentralised control, (iv) semantic-based information processing, (v) ad-hoc decision making support based on the current situation state and its possible future development. Self-configuration of heterogeneous sources providing knowledge, together with tools for processing this knowledge, is the basic idea of the presented approach. The developed methodology proposes the integration of environmental information in a certain context. A context is any information that can be used to characterize the situation of an entity, where an entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves [9]. The context is purposed to represent only the relevant information out of the large amount available. The relevance of information is evaluated on the basis of how it is related to the modelling of an ad-hoc problem. The problems already solved and still to be solved include interoperability at both the technological and the semantic level, situation understanding by the members via information exchange, and protocols of ad-hoc decision making for self-organization. The proposed technological framework incorporates such technologies as situation management, knowledge and ontology management, profiling, Web services, decision support, and negotiation protocols. The proposed methodology is based on the earlier developed concept of knowledge logistics [10] and includes such technologies as situation management, ontology management, profiling and intelligent agents [8].
Standards of information exchange (e.g., Web-service standards), negotiation protocols, decision making rules, etc. are used for information exchange and rapid
Self-Organising Service Networks for Semantic Interoperability Support
345
establishment of ad-hoc partnerships and agreements between the operation members. In the second section of the paper the developed methodology is presented. The technological framework is described in the third section. Some results are summarised in the Conclusion.
2 Proposed Approach
The main idea of the approach is to represent virtual enterprise members by sets of services provided by them (Fig. 1). This makes it possible to replace the interoperability between virtual enterprise members with the interoperability between their services.
Fig. 1. Representation of virtual enterprise members by sets of services.
At the first stage of the research, the lifecycle phases of the self-configuring network and the major requirements for them were defined (Table 1). Based on these requirements, the main ideas of the approach were formulated:
1. A common shared top-level ontology (application ontology) serves for terminology unification. Each service has a fragment of this ontology corresponding to its capabilities / responsibilities. This fragment is synchronized automatically when necessary (not during the operation).
2. Each service has a profile describing its capabilities and the appropriate ontological model.
3. Each service is assigned an intelligent agent representing it (together they will be called an "agent-based service"). The agent collects information required for situational understanding by the service and negotiates with other agents to create ad-hoc action plans. The agent has predefined rules to be followed during negotiation processes. These rules depend on the role of the appropriate member.
4. Web-service standards are used for interactions. External sources (e.g., medical databases, transport availability, weather forecasts) should also support these standards and the terminology defined by the application ontology. This is achieved by developing services for each particular source.
Table 1. Lifecycle phases of the self-configuring network, its needs and the services to fulfil them

Life cycle phase | Needs | Services
Community building (once; new members are added on a continuous basis) | Common infrastructure; Common communication standards and protocols; Common knowledge representation | Modelling goals and objectives; Identification, qualification, registration of members; Common modelling for community members
Formation (continuous, initiated by the situation, or a task as a part of the situation) | Task definition model (context); Rules of partner selection | Task modelling; Partner selection
Operation (continuous) | Rules of re-negotiation and solution modification if necessary | Coordination and synchronization; Update of the current solution
Discontinuation (continuous, initiated by members) | | Termination of the established agreements
The developed methodology proposes a two-level framework of context-driven information integration for decision making. The first level addresses the activities of the pre-starting procedure of the system, such as the creation of semantic models for its components (Fig. 2), the accumulation of domain knowledge, the linking of domain knowledge with the information sources, the creation of an application ontology describing a macro-situation, and the indexing of the set of available e-documents against the application ontology. This level is supported, if required, by subject experts and knowledge and ontology engineers. The second level focuses on decision making supported by the system. This level addresses the recognition of the problem presented by a user request, context creation, identification of relevant knowledge sources, generation of a set of problem solutions, and the making of a decision by the user.
Fig. 2. Models for system components.
The internal knowledge representation is supported by the formalism of object-oriented constraint networks (OOCN) [11]. All the system components and contexts are represented by means of this formalism. According to the formalism, the application ontology is represented as sets of classes, class attributes, attribute domains, and constraints. The set of constraints comprises constraints describing the "class, attribute, domain" relation; constraints representing structural relations such as the hierarchical relationships "part-of" and "is-a", classes compatibility, associative relationships, and class cardinality restrictions; and constraints describing functional dependencies. Examples of some constraints are given below (followed by a small illustrative sketch):
- "class, attribute, domain" relation: the attribute cost belongs to the class component and takes positive values;
- hierarchical relationship "is-a": the body production facility is a resource;
- hierarchical relationship "part-of": an instance of the class component can be a part of an instance of the class product;
- associative relationship: an instance of the class body can be connected to an instance of the class body production facility;
- classes compatibility: the class body is compatible with the class body production facility;
- functional dependence: the value of the attribute cost of an instance of the class body production facility depends on the values of the attribute cost of the instances of the class component connected to it, and on the number of such instances.
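For illustration only, such OOCN constraint kinds could be encoded as simple typed records (hypothetical names; the formalism itself is defined in [11]):

// Illustrative encoding of object-oriented constraint network (OOCN) elements.
// Each constraint kind from the list above becomes a typed record.
import java.util.List;
import java.util.function.Predicate;

record AttributeDomain(String clazz, String attribute,
                       Predicate<Double> domain) {}             // "class, attribute, domain"
record IsA(String subclass, String superclass) {}                // hierarchical "is-a"
record PartOf(String partClass, String wholeClass) {}            // hierarchical "part-of"
record Associative(String classA, String classB) {}              // associative relationship
record Compatibility(String classA, String classB) {}            // classes compatibility
record FunctionalDependence(String targetClass, String targetAttribute,
                            List<String> sourceAttributes) {}    // functional dependence

class OocnExample {
    public static void main(String[] args) {
        var cost = new AttributeDomain("component", "cost", v -> v > 0);
        var isA  = new IsA("body production facility", "resource");
        System.out.println(cost.domain().test(12.5) + " " + isA.superclass());
    }
}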
Fig. 3 represents the macro-level taxonomy of the application ontology for a virtual enterprise built by domain experts. The represented classes are the main concepts for the production network configuration problem.
Fig. 3. Taxonomy of the application ontology for a virtual enterprise.
3 Technological Framework The generic scheme of a self-organizing service network is presented in Fig. 4. Each enterprise member is represented as an intelligent agent acting in the system. The architecture of the agent is presented in Fig. 5. Each agent has its own knowledge stored in its knowledge base and is described by a portion of the common shared application ontology. A part of this knowledge related to the current agent’s (and member’s) tasks and capabilities is called “context” and is stored separately to provide for faster task performance (only relevant information is processed). Capabilities, preferences and other information about the agent are stored in its profile that is available for viewing by other agents of the community. It facilitates communication, which is performed via the communication module responsible for meeting protocols and standards that are used within the community.
Fig. 4. Generic scheme of a self-organising network.
Fig. 5. Agent architecture.
The agents communicate with other agents for two main purposes: (1) they establish links and exchange information for better situation awareness; and (2) they negotiate and make agreements for the coordination of their activities during the operation. The agents may also get information from various information sources; for example, the local road network for transportation can be acquired from a geographical information system (GIS). To make agent-based services independent, a component called the Service Directory Facilitator (SDF) has been proposed. It could be hosted by independent or governmental organisations (for example, CLEPA [12] or the regional associations existing in some European countries). The SDF is responsible for the registration and update of autonomous services (Fig. 6). Initially, an agent representing a service does not have any information about neighbouring services, and its slice of the application ontology is empty. An automatic tool or an expert assigns it a set of the application ontology elements related to the service. For knowledge source services these could be classes which can be reified or attributes whose values can be defined using the content of knowledge sources. For problem solving services these could be tasks and methods existing in the problem domain.
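The SDF responsibilities just listed could be captured by an interface of the following shape (a hypothetical sketch, not the authors' implementation):

// Sketch of a Service Directory Facilitator: it maintains the list of registered
// agent-based services with their application-ontology slices, and exposes the
// neighbour references used to initiate self-organisation.
import java.util.List;
import java.util.Set;

interface ServiceDirectoryFacilitator {
    void register(ServiceDescription d);                    // a new service joins
    void update(ServiceDescription d);                      // service announces a change
    void deregister(String serviceId);                      // service leaves the community
    List<ServiceDescription> neighbours(String serviceId);  // references to neighbour services
}

final class ServiceDescription {
    final String id;
    final String type;                     // e.g. "KSS" or "PSS"
    final Set<String> ontologySlice;       // assigned application-ontology elements
    ServiceDescription(String id, String type, Set<String> ontologySlice) {
        this.id = id; this.type = type; this.ontologySlice = ontologySlice;
    }
}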
Fig. 6. Registration and update of services (PSS – Problem Solving Service, KSS – Knowledge Source Service).
Agents inform the SDF about their appearance, their modifications, and their intention to leave the community, and send scheduling messages to update the internal repository. The task of the SDF is to build a slice of the application ontology for each service, to update references to the neighbouring services, and to maintain the list of services (type, location, schedule, and notification of services about changes in the network). The organization of references between services is a complex task and is out of the scope of this paper. Well-organized references result in a list of services which are used as initiators of the self-organisation process. Knowledge source services are responsible for (i) representation of the information provided by knowledge sources by means of the OOCN formalism, (ii) querying and extracting content from knowledge sources, (iii) transfer of the information, (iv) information integration, and (v) data conversion. Two types of strategies for interaction with knowledge sources have been proposed (contrasted in the sketch after Fig. 7):
- Pull: knowledge sources deliver content when it is requested. A temperature sensor can be mentioned as an example of such an information source: it measures temperature and supplies this information when needed (Fig. 7, left part).
- Push: knowledge sources send content to a certain Web service. For example, a fire alarm sensor can, in case of fire, send this information to a special agent that would activate a corresponding scenario (Fig. 7, right part).

Fig. 7. Pull (left part) and push (right part) strategies.
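The contrast between the two strategies can be made explicit in a short sketch (illustrative interfaces; "IS" stands for information source):

// Pull: the information source answers only explicit requests.
interface PullSource {
    String reply(String request);          // e.g. a temperature sensor read on demand
}

// Push: the information source informs a subscriber on its own initiative.
interface PushSubscriber {
    void inform(String content);
}

class FireAlarmSensor {                    // push-style information source
    private final PushSubscriber agent;
    FireAlarmSensor(PushSubscriber agent) { this.agent = agent; }
    void onFireDetected() { agent.inform("fire"); }  // no prior request needed
}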
When a situation that requires some action occurs, an appropriate context is built. The initiators of the self-organisation process receive a notification. Using references to neighbour services and developed rules / protocols an appropriate
network organization is built to produce a solution. In Fig. 8 a simplified example of a self-organizing network is presented. The initiators are the agents representing services KSS1, KSS2, and PSS1. The agents of KSS2 and PSS1 delegate the task of participation in problem solving to the agents of KSS3 and PSS2. The content of the knowledge sources presented by services KSS1 and KSS3 is sufficient, and the solvers presented by services PSS1 and PSS2 process this content. When the configuration is finished, peer-to-peer interactions between the members of the network take place.
4 Conclusion
The paper presents an approach and its technological framework for semantic interoperability in virtual enterprises. It is proposed that self-organization can resolve problems arising from failures of centralized control.
Fig. 8. Self-organizing of the service network (PSS – Problem Solving Service, KSS – Knowledge Source Service).
Ontologies have been proposed for the description of problem domain knowledge. In accordance with the selected model of interoperability, services directly connect and exchange information with one another, while service descriptions (in this paper, descriptions of the knowledge source content and of the tasks that can be solved) are mapped to the common shared application ontology. Further research activities will address the refinement of the algorithms for (i) definition of neighbour services, (ii) selection of the services that initialize the self-organisation of the network, (iii) traversal of the network, and (iv) generation and estimation of alternative network configurations. The authors believe that, once completed, the proposed architecture could efficiently work for a range of real-world problems.
Acknowledgments The paper is due to research projects supported by grants # 05-01-00151, # 06-0789242, and #07-01- 00334 of the Russian Foundation for Basic Research, projects
# 16.2.35 of the research program "Mathematical Modelling and Intelligent Systems", and # 1.9 of the research program “Fundamental Basics of Information Technologies and Computer Systems” of the Russian Academy of Sciences (RAS).
References
[1] Foundation for Intelligent Physical Agents (FIPA) Documentation, http://www.fipa.org.
[2] Heflin, J., Hendler, J.: Semantic Interoperability on the Web. In: Proceedings of Extreme Markup Languages 2000, pp. 111--120. Graphic Communications Association (2000).
[3] Viana, A.C., Amorim, M.D., Fdida, S., Rezende, J.F.: Self-organization in spontaneous networks: the approach of DHT-based routing protocols. Ad Hoc Networks J., special issue on Data Communications and Topology Control in Ad Hoc Networks, vol. 3, no. 5, 589--606 (2005).
[4] Hammer, B., Micheli, A., Sperduti, A., Strickert, M.: Recursive self-organizing network models. Neural Networks, vol. 17, no. 8-9, 1061--1085 (2004).
[5] Nakano, T., Suda, T.: Self-Organizing Network Services with Evolutionary Adaptation. In: IEEE Transactions on Neural Networks, vol. 16, no. 5 (2005).
[6] Chandran, R., Hexmoor, H.: Delegation Protocols Founded on Trust. In: KIMAS'07: Modeling, Exploration, and Engineering (proceedings of the 2007 International Conference on Integration of Knowledge Intensive Multi-Agent Systems), ISBN 1-4244-0945-4, pp. 328--335, IEEE (2007).
[7] Baumgarten, M., Bicocchi, N., Curran, K., Mamei, M., Mulvenna, M., Nugent, C., Zambonelli, F.: Towards Self-Organizing Knowledge Networks for Smart World Infrastructures. In: Teanfield, H. (ed.) International Transactions on Systems Science and Applications, ISSN 1751-1461, vol. 2, no. 2, 123--133 (2006).
[8] Smirnov, A., Shilov, N., Kashevnik, A.: Constraint-Driven Negotiation Based on Semantic Interoperability in BTO Production Networks. In: Panetto, H., Boudjlida, N. (eds.) Interoperability for Enterprise Software and Applications (proceedings of the Workshops and the Doctoral Symposium of the Second IFAC/IFIP I-ESA International Conference: EI2N, WSI, IS-TSPQ 2006), ISBN-13 978-1-905209-61-3, ISBN-10 1-905209-61-4, pp. 175--186, ISTE Ltd. (2006).
[9] Dey, A.K., Salber, D., Abowd, G.D.: A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications, Context-Aware Computing. In: T.P. Moran, P. Dourish (eds.) A Special Triple Issue of Human-Computer Interaction, 16, Lawrence-Erlbaum (2001).
[10] Smirnov, A., Pashkin, M., Levashova, T., Chilov, N.: Fusion-Based Knowledge Logistics for Intelligent Decision Support in Network-Centric Environment. In: George J. Klir (ed.) Int. J. of General Systems, ISSN 0308-1079, vol. 34, no. 6, pp. 673--690, Taylor & Francis (2005).
[11] Smirnov, A., Sheremetov, L., Chilov, N., Sanchez-Sanchez, C.: Agent-Based Technological Framework for Dynamic Configuration of a Cooperative Supply Chain. Multiagent-based supply chain management. In: Chaib-draa, B., Müller, J.P. (eds.) Series on Studies in Computational Intelligence, vol. 28, ISBN 3540338756, pp. 217--246, Springer (2006).
[12] CLEPA: Comité de liaison de la construction d'équipements et de pièces automobiles (the European Association of Automotive Suppliers), http://www.clepa.be.
Semantic Service Matching in the Context of ODSOI Project S. Izza and L. Vincent Ecole des Mines de Saint-Étienne, Industrial Engineering and Computer Science Laboratory, OMSI Division, 158 cours Fauriel, 42023, Saint-Etienne, Cedex 2, France. {izza, vincent}@emse.fr
Abstract. Matching services still constitutes a big challenge for most enterprises in general, and notably for large and dynamic ones. This paper delineates a service similarity approach in the context of the ODSOI (Ontology-Driven Service-Oriented Integration) project, which concerns intra-enterprise integration issues in the field of manufacturing industry. Our approach is based on an extension of OWL-S service similarity. It proposes a rigorous quantitative ranking method based on some novel semantic similarity degrees. An implementation of this ranking method is provided in the form of a prototype coded on the Java platform, exploiting some existing APIs, mainly the Racer OWL API and the OWL-S API.
Keywords: Information System; Integration; Ontology; Semantics; Service; Similarity; Matching; OWL-S.
1 Introduction
In the last few years, the matching of services has been a very active research field in the context of the semantic web in general and of enterprise information systems in particular. Service matching is generally considered as the process by which similarities between a service source (ideally candidate services) and a service target (ideally the client's objectives presented in the form of a service template) are calculated. From an architectural point of view, service matching involves three types of stakeholders: (i) service providers, which have the goal of providing particular types of services and publish or advertise their services to enable potential requestors to discover and utilize them; (ii) service requestors, which have the goal of finding services that can accomplish some internal objectives; (iii) matchmakers, which are middle agents that can be used to find the service providers that match the stated requirements of a requestor [1].
Today, service matching is particularly challenging in the context of intra-enterprise integration because of the large number of available web sources and services, and also because of the different user profiles that may be involved. The use of an efficient matching approach based on a pertinent ranking method becomes necessary in order to correctly facilitate and achieve certain integration tasks. This paper deals with these issues, and precisely with the service matching approach in support of intra-enterprise integration in the context of the ODSOI (Ontology-Driven Service-Oriented Integration) project [11]. It is organized as follows: Section 2 introduces some previous work in the domain of service similarity and matching. Section 3 presents the main principles of our service matching approach. Section 4 presents our semantic similarity approach. Section 5 presents some preliminary experimental results and lessons learned. Finally, Section 6 presents some conclusions and outlines some future work.
2 Related Work
Semantic matching constitutes an important field that has been widely investigated in the last few years in several areas of research. We present a brief survey of some approaches to semantic matching that are related to the context of matching services. These works are presented following four categories of approaches: (i) concept similarity approaches, (ii) resource matching approaches, (iii) service matching approaches, and (iv) OWL-S service matching approaches. The first category of approaches concerns concept similarity metrics, and the main representative ones with respect to our work are those proposed by [20], [19] and [7]. [20] proposed, in the context of semantic nets, a metric measuring the conceptual distance between concepts in hierarchical semantic nets as the minimum number of edges separating the involved concepts. [19] proposed, in the context of measuring the information content of a concept, a metric that associates probabilities with the concepts in a hierarchy to denote the likelihood of encountering an instance of a concept. [7] proposed, in the context of a similarity framework for ontologies, a metric for measuring the similarity of ontology concepts as an amalgamation function that combines the similarity measures of three complementary layers: (i) the data layer, which measures the similarity of entities by considering the data values of simple or complex data types such as integers and strings; (ii) the ontology layer, which considers and measures the semantic relations between the entities; and (iii) the context layer, which considers how entities of the ontology are used in some external context, most importantly the application context. Although these approaches are pertinent, within the ODSOI project we exploit a concept similarity approach based on OWL constructs. Another important work related to ours is [4], which proposed a taxonomy-based similarity measure for ontology concepts. The latter, which does not provide an efficient asymptotic behaviour of the similarity measure, is improved by our similarity approach. The second category of approaches is closely related to resource matching, such as text document, schema, and software component matching. Text document matching constitutes a long-standing problem in information retrieval, where most solutions are based on term frequency analysis. Schema matching is based on
methods that try to capture clues about the semantics of the schemas and suggest matches based on them. Such methods include linguistic analysis, structural analysis, the use of domain knowledge and previous matching experience. Software component matching is an important activity for software reuse and is generally based on the examination of signature matching and specification matching. However, most of the approaches of this category are insufficient in the web service context because they present differences from the information structure, granularity and coverage points of view. The third category of works close to ours concerns those that exploit the notion of service similarity in general. [5], [10], [2], [3], [22], [15] and [24] constitute some important works that propose techniques performing service matching by exploiting the notion of service similarity. [5] proposed a metric for similarity search for web services, in the context of the Woogle search engine, that exploits the structure of the web services and employs a novel clustering mechanism that groups parameter names into meaningful concepts. [10] proposed a metric for measuring the similarity of semantic services annotated with an OWL ontology; the similarity is calculated by defining the intrinsic information value of a service description based on the inferencibility of each of the OWL Lite constructs. [2] proposed, in the context of METEOR-S, a service similarity approach that is based on syntactic, operational and semantic information as a way to increase the precision of the matching of web services: syntactic similarity is based on the similarity of service names and service descriptions; operational similarity is based on the functionality of services; semantic similarity is based on the similarity of the concepts and properties that define the involved services. [2] proposed a matching algorithm that extended the work presented in [18]; precisely, they extended the subsumption-based matching mechanism by adding information retrieval techniques to find similarity between the concepts when it is not explicitly stated in the ontologies. [22] uses, in the context of the LARKS framework, five different types of matching: context matching, profile comparison, similarity matching, signature matching and constraint matching. [24] proposed an approach to rank services by their relative semantic order by defining three different degrees of matching: (i) exact match (all requests in a demand are available in the supply); (ii) potential match (where some requests in the demand are not specified in the supply); (iii) partial match (where some requests in the demand are in conflict with the supply). Although these approaches present important principles, they do not concern OWL-S services. Finally, the fourth category of works, the closest to ours, concerns those that discuss semantic discovery using the OWL-S ontology, such as [18], [3], [13] and [8], when the advertisement and the request use terms from the same ontology. Most of these works proposed similar approaches for service discovery using a semantic UDDI registry. They mainly enhanced the semantic search mechanism by enabling users to specify semantic inquiries based on web service capabilities. [18] proposed, in the context of OWL-S services, a service similarity approach based on the use of Service Profile similarity, mainly the similarity of inputs and outputs.
To achieve flexible matching, they define four degrees of service matches, which are: (i) exact, when there is an equivalence or a subsumption relationship between the two concepts, (ii) plug-in, when there is a subsumption relationship
between the two concepts; (iii) subsumes, when there is an inverse subsumption relationship between the two concepts; and (iv) fail, when no subsumption relation can be identified between the two concepts. [3] proposed, in the context of the MAIS project, a service similarity approach that extends the one proposed by [18] and that allows calculating a similarity between a published service and the user request, taking into account the semantics contained in the corresponding OWL-S ontologies. [13] refines the work of [18] and proposed a ranking method for OWL-S services. [8] introduced a complex semantic similarity measure for determining the similarity of OWL-S services as a linear combination of the functional similarity of the services and their textual similarities. The functional similarity of services is determined by measuring the semantic similarity which exists between their sets of inputs and outputs; in order to measure the input/output similarity of services, they introduced techniques for finding the similarity of the OWL concepts used for annotating the inputs and outputs of services. Our work is very close to this category of approaches and aims to propose a more rigorous ranking method that may be used for the matching of OWL-S services.
3 Main Principles of the Semantic Service Matching
The ODSOI (Ontology-Driven Service-Oriented Integration) project concerns intra-enterprise integration issues in the context of large and dynamic enterprises, with the aim of defining agile integration architectures. Within this project, service matching constitutes an important point that allows discovering pertinent services in order to compose them into more coarse-grained services and/or useful processes. In our approach [11][12], services are mainly characterized by four elements that are defined using ontology concepts (OWL-DL constructs): context, signature, constraint, and non-functional attributes. The service context is defined using one or more of the following properties: the service classification, which is the service category the considered service belongs to; the service cluster, which is the cluster, in terms of service area and domain, the considered service belongs to; and the enterprise view, which is the enterprise component exposed by the considered service. The service signature includes the inputs and the outputs of the service. The service constraint mainly includes constraints on the preconditions and postconditions of the service. The non-functional service parameters include the quality of service and the service visibility. In our approach, the defined service matching method improves the traditional matching of services [18], which is mainly based on signature matching. Our matching approach is based on the matching of all four of the above elements that describe enterprise services (a small sketch follows the list):
- Service context matching: the source service and the target service must share the same context, i.e. $context_{source} \sqsubseteq context_{target}$, where $context_{source}$ and $context_{target}$ are respectively the contexts of the source and the target service.
- Service signature matching: the source service and the target service must present comparable signatures, i.e. $(In_{target} \sqsubseteq In_{source}) \wedge (Out_{source} \sqsubseteq Out_{target})$, where $\sqsubseteq$ is the subsumption operator, $In_{source}$ and $Out_{source}$ are respectively the inputs and the outputs of the source service, and $In_{target}$ and $Out_{target}$ are respectively the inputs and the outputs of the target service.
- Service constraint matching: the source service and the target service must have compatible constraints on their preconditions and postconditions. This means that the following expression must be valid: $(precond_{source} \sqsubseteq precond_{target}) \wedge (postcond_{target} \sqsubseteq postcond_{source})$, where $precond_{source}$ and $postcond_{source}$ are the preconditions and postconditions of the source service, and $precond_{target}$ and $postcond_{target}$ are the preconditions and postconditions of the target service.
- Non-functional parameter matching: the source service and the target service must have comparable non-functional parameters, namely quality of service and service visibility. Precisely, the source and the target service must present comparable levels of quality, i.e. $QoS_{source} \sqsubseteq QoS_{target}$, where $QoS_{source}$ and $QoS_{target}$ are respectively the quality of the source and of the target service. Concerning the service visibility, we must also have compatible visibilities, i.e. $visibility_{source} \sqsubseteq visibility_{target}$, where $visibility_{source}$ and $visibility_{target}$ are respectively the visibilities of the source and the target service.
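Taken together, the four checks act as a conjunctive filter over candidate services; the following is a minimal sketch, with a hypothetical subsumption oracle standing in for the description logic reasoner used by the prototype:

// Sketch: a candidate (source) service matches a request (target) only if all
// four checks succeed. subsumes(a, b) stands for the DL subsumption test a ⊑ b.
interface Subsumption { boolean subsumes(String sub, String sup); }

class Service {
    String context, input, output, precondition, postcondition, qos, visibility;
}

class OdsoiMatcher {
    private final Subsumption dl;
    OdsoiMatcher(Subsumption dl) { this.dl = dl; }

    boolean matches(Service src, Service tgt) {
        return dl.subsumes(src.context, tgt.context)              // context
            && dl.subsumes(tgt.input, src.input)                  // signature: inputs
            && dl.subsumes(src.output, tgt.output)                // signature: outputs
            && dl.subsumes(src.precondition, tgt.precondition)    // constraints
            && dl.subsumes(tgt.postcondition, src.postcondition)
            && dl.subsumes(src.qos, tgt.qos)                      // non-functional
            && dl.subsumes(src.visibility, tgt.visibility);
    }
}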
4 Semantic Service Similarity
Our approach is based on the calculation of a similarity degree that measures the degree of satisfaction, evaluated using the similarity of ontology concepts. This similarity degree is based on some more elementary degrees that quantitatively refine the traditional inference relations between two concepts (equivalence, subsumption, inverse subsumption, intersection, and disjunction), such as those generally proposed in the literature [18].
4.1 Global Semantic Service Similarity
Formally, we propose to calculate the global similarity degree $sim_{global}(S, S')$ between two given services (a source service and a target service) $S$ and $S'$ as a weighted product, according to the following formula:

$$sim_{global}(S, S') = \prod_{x \in Filter} sim_x^{\omega_x}(S, S') \qquad (1)$$

where $Filter = \{context,\ signature,\ constraint,\ non\text{-}functional\}$ and $\omega_x$ is a positive real weight, and where $sim_{context}(S, S')$ denotes the similarity degree of the service contexts, $sim_{signature}(S, S')$ denotes the similarity degree of the service signatures, $sim_{constraint}(S, S')$ denotes the similarity degree of the service constraints, and $sim_{non\text{-}functional}(S, S')$ denotes the similarity degree of the non-functional attributes. As shown, this formula may be interpreted as a probability of similarity of the two services $S$ and $S'$. The choice of the weighted product in the formula is not fortuitous: it is justified by the fact that the global similarity must depend on the performance of all the elementary similarity degrees and not on the excellence of only certain degrees.
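Formula (1), and by the same pattern formulas (2)-(5) below, is a weighted geometric combination, which is straightforward to compute; a minimal sketch with hypothetical weight values:

// Sketch: global similarity as a weighted product of elementary degrees (formula 1).
// Each degree lies in [0, 1]; a single zero degree forces the global degree to zero,
// which is exactly the intended behaviour of the weighted product.
import java.util.Map;

class SimilarityAggregator {
    static double global(Map<String, Double> degrees, Map<String, Double> weights) {
        double sim = 1.0;
        for (var e : degrees.entrySet()) {
            double w = weights.getOrDefault(e.getKey(), 1.0);  // positive real weight
            sim *= Math.pow(e.getValue(), w);                  // sim_x ^ omega_x
        }
        return sim;
    }

    public static void main(String[] args) {
        var degrees = Map.of("context", 0.9, "signature", 0.8,
                             "constraint", 1.0, "non-functional", 0.7);
        var weights = Map.of("context", 1.0, "signature", 2.0,
                             "constraint", 1.0, "non-functional", 0.5);
        // 0.9 * 0.8^2 * 1.0 * 0.7^0.5 ~= 0.48
        System.out.println(global(degrees, weights));
    }
}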
6 Elementary Semantic Service Similarity

In a similar manner to the global similarity degree, all the above intermediary degrees are combinations of more elementary degrees. For example, the similarity degree of service contexts $sim_{context}(S, S')$ is calculated using the following formula:

$sim_{context}(S, S') = \prod_{x \in \{type,\, cluster,\, view\}} sim_x(S, S')^{\omega_x}$    (2)

In a similar manner, the other intermediary similarity degrees are calculated according to the following formulas:

$sim_{signature}(S, S') = \prod_{x \in \{input,\, output\}} sim_x(S, S')^{\omega_x}$    (3)

$sim_{constraint}(S, S') = \prod_{x \in \{precondition,\, postcondition\}} sim_x(S, S')^{\omega_x}$    (4)

$sim_{non\text{-}functional}(S, S') = \prod_{x \in \{QoS,\, visibility\}} sim_x(S, S')^{\omega_x}$    (5)

Therefore, we gradually define the global similarity degree as a combination of elementary degrees, where an elementary degree is one that cannot be divided further for its calculation. For example, in our approach we consider the two degrees $sim_{input}(S, S')$ and $sim_{output}(S, S')$ to be elementary because they are not composite. Elementary degrees are important in that they translate the problem of calculating the global similarity degree into a set of simpler problems of calculating elementary degrees.
7 Semantic Ontology Concept Similarity

For the calculation of the elementary similarity degrees, we exploit a variant that combines the structural topological dissimilarity [23] and the upward cotopic distance [16]. Based on these two measures, we define an appropriate measure that matches two ontology concepts of the same ontology and that takes into account sensitivity to the depth of the concepts. Given two concepts C and C' of the same ontology O with depth Depth(O), the similarity degree between these two concepts is calculated according to the following formula:

$sim(C, C') = \left(1 - \left(\frac{|d - d'|}{d + d'}\right)^{\alpha}\right) \left(1 - \frac{\max(d, d') - 1}{1 + Depth(O)}\right) \left(1 - \frac{\min(d, d') - 1}{1 + Depth(O)}\right) \frac{1}{(\min(d, d'))^{\beta}}$    (6)

where $\alpha$ and $\beta$ are parameters that control the flatness of the similarity function and are defined as follows:

$\alpha = 1 + \max(d, d'), \qquad \beta = \log_{10}(Depth(O))$    (7)

and where d and d' are respectively the distances (augmented by one in order to avoid null denominators) between the concept C (resp. C') and the smallest common ancestor $C_0$:

$d = dist(C, C_0) + 1, \qquad d' = dist(C', C_0) + 1$    (8)

The common ancestor $C_0$ of the two concepts C and C' is the solution of the optimization problem that defines the structural topological dissimilarity introduced in [23]:

$C_0 = \arg\min_{c} \left[ dist(C, c) + dist(C', c) \right]$    (9)

Of course, the global concept similarity function (sim) is mathematically a similarity function, because it verifies the conditions of positivity, maximality and symmetry [14]. Furthermore, this function is normalized, since its values lie in [0, 1].
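As a check on this formulation, the following Python sketch implements formulas (6)-(8), assuming the distances to the common ancestor $C_0$ are already known; with Depth(O) = 100 it reproduces the values of Table 1 below (e.g., 0.9534 for d = 1, d' = 2).

    from math import log10

    def concept_similarity(d, d_prime, depth_o):
        # Similarity of two concepts per formula (6); d and d_prime are the
        # distances to the smallest common ancestor, each augmented by one
        # (formula 8), and depth_o is Depth(O).
        alpha = 1 + max(d, d_prime)                      # formula (7)
        beta = log10(depth_o)                            # formula (7)
        f1 = 1 - (abs(d - d_prime) / (d + d_prime)) ** alpha
        f2 = 1 - (max(d, d_prime) - 1) / (1 + depth_o)
        f3 = 1 - (min(d, d_prime) - 1) / (1 + depth_o)
        f4 = 1 / min(d, d_prime) ** beta
        return f1 * f2 * f3 * f4

    print(round(concept_similarity(1, 2, 100), 4))  # 0.9534
    print(round(concept_similarity(2, 2, 100), 4))  # 0.2451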
Fig. 1. Curves of the concept similarity function (panels 1-a and 1-b show the surface from two different angles of view)
As can be noted, the global similarity formula (formula 6) contains four factors. The first factor, $1 - \left(\frac{|d - d'|}{d + d'}\right)^{\alpha}$, represents the global behaviour of the similarity calculation. It yields a measure that is sensitive to the depth of the two concepts: the nearer the concepts are to the common ancestor, the larger this quantity is, and it diminishes when the concepts move far away from the common ancestor. Its disadvantage is that it favours the cases where the distances d and d' are equal, even if one or both concepts are very far from the common ancestor along the bisecting plane (d, d'). This disadvantage is corrected by the second factor, $1 - \frac{\max(d, d') - 1}{1 + Depth(O)}$, which lowers the curve when the distances d and d' increase in the same manner. This produces the desired bell effect: it increases the similarity around the common ancestor and decreases it when the concepts move away from the common ancestor. The last two factors, $1 - \frac{\min(d, d') - 1}{1 + Depth(O)}$ and $\frac{1}{(\min(d, d'))^{\beta}}$, smooth the curve, notably along the contour where d = d'. Figures 1-a and 1-b show the curves of this similarity function from two different angles of view. Some values of the similarity function, with Depth(O) equal to 100, are given in Table 1, which quantitatively illustrates the asymptotic behaviour of this function. As shown, the function conforms to the desired behaviour, namely to provide an efficient measure that takes into account the depth of the concepts in the ontology, their neighbourhood, and the asymptotic behaviour of the measure when the concepts are far from the common ancestor.
Table 1. Some values of the concept similarity function (Depth(O) = 100)

d \ d'      1       2       3       4       5       6       7       8       9      10
  1     1.0000  0.9534  0.9189  0.8948  0.8761  0.8603  0.8464  0.8338  0.8219  0.8107
  2     0.9534  0.2451  0.2422  0.2392  0.2362  0.2334  0.2307  0.2280  0.2254  0.2229
  3     0.9189  0.2422  0.1068  0.1057  0.1046  0.1035  0.1024  0.1013  0.1002  0.0991
  4     0.8948  0.2392  0.1057  0.0588  0.0582  0.0576  0.0570  0.0564  0.0558  0.0552
  5     0.8761  0.2362  0.1046  0.0582  0.0369  0.0365  0.0361  0.0358  0.0354  0.0350
  6     0.8603  0.2334  0.1035  0.0576  0.0365  0.0251  0.0248  0.0246  0.0243  0.0240
  7     0.8464  0.2307  0.1024  0.0570  0.0361  0.0248  0.0181  0.0179  0.0177  0.0175
  8     0.8338  0.2280  0.1013  0.0564  0.0358  0.0246  0.0179  0.0135  0.0134  0.0132
  9     0.8219  0.2254  0.1002  0.0558  0.0354  0.0243  0.0177  0.0134  0.0105  0.0104
 10     0.8107  0.2229  0.0991  0.0552  0.0350  0.0240  0.0175  0.0132  0.0104  0.0083
8 Preliminary Experimental Results and Main Lessons Learned

We conducted a preliminary evaluation, analysing the efficiency and comparing the performance of our service matching approach, to show that exploiting such a method is pertinent and does not hinder the performance and scalability of the matching process. We implemented a matching prototype in Java under the Eclipse environment [6], using several APIs: mainly the OWL API [21] to extend and manage OWL ontologies, the OWL-S API [17] to extend and manipulate OWL-S descriptions, and Racer [9] to perform OWL inferences. Figure 2 illustrates the developed prototype. As shown, the graphical user interface contains graphical components for filling in the elements of the discovery request. In addition, the different weights and thresholds used for calculating the global similarity degree between the request and the published services can be changed.
Fig. 2. Snapshots of the developed prototype (the callouts highlight, among others, the FunctionService type, the MaintenanceArea cluster, and the EquipementQualification and EquipementQualificationPrediction concepts)
To facilitate formulation of the request, the prototype allows the user to browse stored ontologies and select the desired concepts. Once the request is complete, the user can execute the discovery process by clicking the "Discover" button. The prototype then returns the matched services, ranked according to the calculated similarity degrees. One simple discovery request that we experimented with searches for functional services within the cluster MaintenanceArea that produce Qualifications. The prototype executes the matching algorithm and returns answers ranked by service similarity degree. Within the returned answers, however, some services may present the same similarity degree (in this case we obtained two services with the same similarity degree, 0.8684). To differentiate them, we exploit other elements of the request. In particular, the enterprise view (or other request elements) makes it possible to specify, for example, the type of activity performed by the service (planning, prediction, reporting, etc.); this is important for increasing the performance (mainly precision and recall) of the prototype. By exploiting the Service View element, we can indicate precisely which kind of qualification management activity we want to retrieve. For example, when we enter in the Service View text zone the functional concept "QualificationQueryFunction", which is a concept from the Functional Ontology, we can differentiate between the above services and obtain two different
answers with two different degrees of similarity: 0.2921 and 0.8684. These similarity degrees may be further refined by exploiting the other request elements, or by choosing more restricted service clusters. Based on our experimentation with Semantic Web services, and more precisely with the semantic matching of enterprise services on a significant sample of such services, we believe that the existing technologies (specifically the existing open OWL-S APIs and OWL inference engines) are sufficiently mature for industrial use, even if they are sometimes heavyweight in the sense that they require considerable computing time. The suggested matching method is very promising in that it provides a pertinent quantitative ranking of enterprise services. We worked successfully with a sample of enterprise services, some evaluation of the performance was also made, and we are convinced of the value of the implemented prototype. An additional remark concerns the graphical user interface, which appears important for correctly guiding the process of discovering enterprise services. With such an interface, a user can formulate a request in a user-friendly manner: by clicking the browsing buttons, the user is shown the appropriate ontologies (from which ontology concepts can be selected) and gradually builds the request simply by choosing the desired concepts. Furthermore, the user can change the request weights (default values were set to one) in order to adapt them to the specific enterprise context, and can also change the different thresholds in order to better control the failure condition in the discovery process. Our experimentation shows that the values of the different weights and thresholds must be determined carefully. The experimentation also reveals some limitations that call for future improvements; at least two notable enhancements should be mentioned. The first concerns taking numerical values into account in the formulation of discovery requests (for example, requesting a service reliability higher than 0.9, which is not possible in the current version of the prototype because we use the simplifying hypothesis that quality is formalised as a set of discrete modalities); the second is the possibility of visualising the answers graphically, using the principle of a cartography of the enterprise services, which is more user-friendly.
9 Conclusions and Future Work

We have presented in this paper an approach for service matching in the context of intra-enterprise integration, and more precisely in the context of the ODSOI project. This approach, which extends existing matching methods, may improve service discovery so as to correctly search and rank services within and also outside the enterprise. Our service matching approach is based on the matching of all four elements that describe enterprise services: context, signature, constraint and non-functional attributes. Furthermore, our approach proposes a rigorous ranking method that provides an efficient measure taking into account the depth of the concepts in the ontology, their neighbourhood, and the asymptotic behaviour of the measure when the concepts are far from the common ancestor concept. However, our approach presents some limitations, the main one being its run-time cost, in the sense that it requires a longer execution path than a traditional matching approach. But the benefits are above all long-term, because the matching method gives more refined results. We have implemented a matching prototype and performed several experiments within a large enterprise, with good results. Future work will focus on reducing the time consumption by reducing loading and inference time, through the management and manipulation of small interrelated ontologies. It will also focus on testing the prototype in large real-world settings, in order to validate its scalability and its relevance in the context of large enterprises and Semantic Web services.
Disclaimer Certain commercial software products are mentioned in this paper. These products were used only for demonstration purposes. This use does not imply approval or endorsement by our institution, nor does it imply these products are necessarily the best available for the purpose.
References

[1] Burstein M. H., Bussler C., Zaremba M., Finin T. W., Huhns M. N., Paolucci M., Sheth A. P. and Williams S. K., "A Semantic Web Services Architecture". IEEE Internet Computing 9(5): 72-81, 2005.
[2] Cardoso A. J. S., "Quality of service and semantic composition of workflows". PhD Thesis, University of Georgia, Athens, Georgia, 2002.
[3] Corallo A., Elia G., Lorenzo G. and Solazzo G., "A Semantic Recommender Engine enabling an eTourism Scenario". ISWC2005, International Semantic Web Conference 2005, 6-10 November 2005, Galway, Ireland.
[4] Correla M. A. and Castels P., "A Heuristic Approach to Semantic Web Services Classification". 10th International Conference on Knowledge-Based & Intelligent Information & Engineering Systems (KES 2006), Invited Session on Engineered Applications of Semantic Web (SWEA), Bournemouth, UK, October 2006. Springer Verlag Lecture Notes in Computer Science, Vol. 4253, ISBN 978-3-540-46542-3, pp. 598-605.
[5] Dong X., Halevy A., Madhavan J., Nemes E. and Zhang J., "Similarity Search for Web Services". In Proceedings of the 30th VLDB Conference, Toronto, Canada, http://www.vldb.org/conf/2004/RS10P1.PDF, 2004.
[6] Eclipse, "Eclipse - an open development platform". 2005, http://www.eclipse.org (accessed 3 March 2005).
[7] Ehrig M., Haase P., Hefke M. and Stojanovic N., "Similarity for ontology - a comprehensive framework". In Workshop Enterprise Modelling and Ontology: Ingredients for Interoperability, 2004.
[8] Ganjisaffar Y., Abolhassani H., Neshati M. and Jamali M., "A Similarity Measure for OWL-S Annotated Web Services". Web Intelligence (WI'06), IEEE/WIC/ACM, pp. 621-624, 2006.
[9] Haarslev V. and Möller R., "RACER System Description". In Proceedings of the International Joint Conference on Automated Reasoning, June 18-23, 2001.
[10] Hau J., Lee W. and Darlington J., "A Semantic Measure for Semantic Web Services". WWW2005, May 10-14, 2005, Chiba, Japan.
[11] Izza S., Vincent L. and Burlat P., "A Framework for Semantic Enterprise Integration". In Proceedings of INTEROP-ESA'05, Geneva, Switzerland, pp. 78-89, 2005.
[12] Izza S., Vincent L., Burlat P., Lebrun P. and Solignac H., "Extending OWL-S to Solve Enterprise Application Integration Issues". Interoperability for Enterprise Software and Applications Conference (I-ESA'06), Bordeaux, France, 2006.
[13] Jaeger M. C., Rojec-Goldmann G., Liebetruth C., Mühl G. and Geihs K., "Ranked Matching for Service Descriptions Using OWL-S". Springer Berlin Heidelberg, 2005. ISBN: 978-3-540-24473-8.
[14] KnowledgeWeb, "Deliverables of KWEB Project". EU-IST-2004-507482, 2004, http://knowledgeweb.semanticweb.org/ (accessed 10 June 2006).
[15] Li L. and Horrocks I., "A software framework for matchmaking based on semantic web technology". International Journal of Electronic Commerce, 8(4): 39-60, 2004.
[16] Mädche A. and Zacharias V., "Clustering ontology-based metadata in the semantic web". In Proceedings of the 13th ECML and 6th PKDD, Helsinki (FI), 2002.
[17] Mindswap Group, "OWL-S API". 2005, http://www.mindswap.org/2004/owl-s/api/ (accessed 24 March 2006).
[18] Paolucci M., Kawamura T., Payne T. and Sycara K., "Semantic Matching of Web Service Capabilities". In Proceedings of the First International Semantic Web Conference, 2002.
[19] Resnik P., "Semantic similarity in a taxonomy: An information-based measure and its application to problems of ambiguity in natural language". Journal of Artificial Intelligence Research, volume 11, pages 95-130, July 1999.
[20] Rada R., Mili H., Bicknell E. and Blettner M., "Development and application of a metric on semantic nets". IEEE Transactions on Systems, Man, and Cybernetics, volume 19, Jan/Feb 1989.
[21] Sourceforge, "OWL API". http://owlapi.sourceforge.net/ (accessed April 2006).
[22] Sycara K., Widoff S., Klusch M. and Lu J., "Larks: Dynamic Matchmaking Among Heterogeneous Software Agents in Cyberspace". Autonomous Agents and Multi-Agent Systems, vol. 5, issue 2, pp. 173-203, June 2002.
[23] Valtchev P. and Euzenat J., "Dissimilarity measure for collections of objects and values". In X. Liu, P. Cohen and M. Berthold, editors, Proc. 2nd Symposium on Intelligent Data Analysis, Vol. 1280, pp. 259-272, 1997.
[24] Tommaso D. N., Di Sciascio E., Donini F. M. and Mongiello M., "A System for Principled Matchmaking in an Electronic Marketplace". In Proceedings of the Twelfth International Conference on World Wide Web (WWW), 2003.
Ontology-based Service Component Model for Interoperability of Service Systems Zhongjie Wang and Xiaofei Xu Research Center of Intelligent Computing for Enterprises and Services (ICES), School of Computer Science and Technology, Harbin Institute of Technology P.O.Box 315, No.92, West Dazhi Street, Harbin, China 150001 {rainy, xiaofei}@hit.edu.cn
Abstract. A service system, as the foundation for service providers and customers to create and capture value via a co-production relationship, is a complex socio-technological system composed of people, techniques and shared information. One of the most significant characteristics of a service system is that there is a mass of dynamic, stochastic and semantic interactions (i.e., interoperability) between heterogeneous service elements. For better consideration of interoperability during service system design, we import an OWL-based service ontology to precisely express service semantics. Based on this ontology, heterogeneous service elements are elaborately classified and encapsulated into a unified form, the "Service Component", with a set of uniform interfaces to expose semantics-based services to, and acquire them from, the outside. In this way, the right service components can be selected by ontology matching, and semantics conflicts between service elements can easily be discovered. Based on interface connections, selected service components are woven together to form a service system with good interoperability performance.

Keywords: Ontology based methods and tools for interoperability, Design methodologies for interoperable systems, Model Driven Architectures for interoperability
1 Introduction

Over the past three decades, services have become the largest part of most industrialized nations' economies (Rouse and Baba 2006, Spohrer et al 2007), and researchers and practitioners have devoted more and more R&D to service-related activities. In 2004, IBM first presented the concept of "Services Sciences Management and Engineering (SSME)" (IBM, 2004) and promoted the creation of a new discipline, "Services Sciences", which tries to integrate across traditional disciplinary areas to obtain globally effective solutions in a service business
environment (Chesbrough and Spohrer, 2006), so as to better facilitate the development of the service industry. Generally speaking, a service is a provider-to-client interaction that creates and captures value while sharing risks (Wikipedia, 2007). In order to support the execution of a service, there should be a well-designed supporting service system. Consider an education service system as an example, in which universities act as service providers that aim to transform student knowledge through agreements, relationships, and other exchanges among students and university faculty, including courses offered and taken, tuition paid, and work-study arrangements (Spohrer et al, 2007). A consensus has been reached that a service system is a value co-production configuration of people, technology, other internal and external service systems connected by value propositions, and shared information (such as language, processes, metrics, prices, policies, and laws) (Spohrer et al, 2007). It is a type of complex socio-technological system that combines features of natural systems and manufacturing systems (IBM, 2004). It is easy to see that a service system is not just a pure software system: it is additionally composed of various complex, heterogeneous and distributed service elements, e.g., people, techniques, resources, environments, etc. Besides, in order to fully accomplish service business, tight and continual collaborations between service elements provided by different participants cannot be avoided. From these two points of view, a service system essentially rests on interoperability between multiple sub-systems of different participants, or between multiple service elements. Formally speaking, we define service interoperability as the ability of people, resources, or behaviors to provide services to and accept services from other people, resources, or behaviors, and to use the services so exchanged to enable them to operate effectively together. During the service process, a smooth and effective interoperability channel between all service elements must be created and maintained to ensure coherent communication between participants, so as to realize the objective of co-producing and sharing value. Concerning the state of the art on the interoperability of service systems, current research mainly concentrates on the following two aspects: (1) Fundamental theories on service systems. Researchers try to borrow universal theoretical methods from other, similar complex systems (e.g., natural ecosystems, social networking systems, ESA, etc.) to analyze the interoperability mechanisms between service elements. Such methods include the ecosystem-based iDesign method (Tung and Yuan 2007), small world networks (Yuan and Zhang 2006), catastrophe theory (Liu et al 2006), etc. Unfortunately, at present these methods remain at the theoretical level, and concrete techniques to support their implementation in practical service systems are still lacking. (2) Methodology for service engineering. This line of work usually emphasizes step-by-step mapping from customer requirements to service systems following a model-driven approach to address interoperability issues (Tien and Berg 2003, Cai 2006). However, most of the related literature emphasizes the interoperability between softwarized services (e.g., web services, SCA, etc.) (Cox and Kreger 2005), in which ontology languages (e.g., OWL, RDF, DAML+OIL,
etc.) are adopted to eliminate the semantics gap between services (Vetere and Lenzerini 2005), while non-softwarized service elements are mostly ignored. In order to address the limitations mentioned above, in this paper we present a new concept, the "Service Component (SC)", as a basic and uniform building block to encapsulate heterogeneous service elements. Each SC has a set of unified interfaces to interact with others, and a pre-defined domain-specific ontology is imported to formally express the semantics of the service elements encapsulated in the SC. In this way, structural and semantic diversities are thoroughly eliminated. During the design and implementation of a service system, SCs can be precisely discovered, understood, invoked and composed together based on the ontological information attached to them. This paper is organized as follows. In section 2 we analyze the basic elements of a service system and the possible interoperability scenarios between them. In section 3, we define the corresponding ontology for describing the semantics of each type of service element. In section 4 the concept of the "Service Component" is put forward, including its classification and unified interfaces. Section 5 discusses how to deal with interoperability issues by SC composition. Finally, we conclude.
2 Service System and Typical Interoperability Requirements

2.1 Basic Constituents of a Service System
A service system is considered a socio-technological system composed of multiple types of service elements. Generally speaking, such elements are classified into two types, technological and non-technological, each of which depends closely on the other to achieve global optimization of service performance (IBM 2004). The distinctive characteristics of a service system have been specified as nine classes: customer, goals, input, output, process, human enabler, physical enabler, informatic enabler and environment (Karni and Kaner 2000). In our opinion, such a classification is too fine-grained and lacks clarity; we therefore re-classify typical service elements into the following four types, based on IBM's viewpoint (Spohrer et al 2007):

- People, mainly referring to service customers and service providers, including their organizations, roles, human resources, professional skills and capabilities;
- Resource, including technological (virtual) and physical material resources, e.g., software, hardware and equipment, physical environments, etc., that can be utilized during the service process;
- Shared information, including papery or electronic files, reports and documents which are created, processed by and exchanged between service customers and providers;
- Behavior, referring to the physical, mental and social behaviors that are provided by, or arise as the responses of, people or resources during services.
Fig. 1. Service elements and their relationships (a class diagram relating Organization, People, Behavior, Shared Information, Capability, Resource (Software, Hardware), Value, QoS/SLA Parameter, Risk and Environment)
The reason why the interoperability of a service system is quite complex is not only that it contains a mass of non-technological elements (e.g., people, whose behaviors are difficult to model and simulate), but also that such interoperability is stochastic, nonlinear, difficult to forecast, and changes frequently along with requirements and resource provision. Fig. 1 presents some typical service elements and the relationships between them.

2.2 Typical Interoperability Issues in Service Systems
Considering the different types of service elements, typical interoperability in service systems may fall into one of the following situations:

- Interoperability between people, i.e., verbal communication or behavior interactions, face-to-face or via a collaborative web-based environment;
- Interoperability between people and software, i.e., people learn to use software, provide input information to it and receive returned information from it;
- Interoperability between people and information, i.e., people receive and try to understand information, then take specific actions or produce new information based on it;
- Interoperability between people and hardware, i.e., service behaviors initiated by people require the support of specific hardware or equipment;
- Interoperability between people and environment, i.e., only when people are located in a specific environment can they properly provide services to others;
- Interoperability between software, i.e., software belonging to different service participants should interoperate via function calls or information sharing to automatically accomplish specific service tasks;
- Interoperability between resources, i.e., one resource may use other resources for its own functions;
- Interoperability between resource and environment, i.e., a resource should be configured in a specific environment so as to be utilizable by other service elements.
In real services, because the service elements provided by different participants are quite heterogeneous, the interoperability scenarios mentioned above might not be easily realized. We summarize the possible interoperability issues into the following three types:

- Function mismatch
  - Functions provided by a service element may be larger or smaller than the functions required by other elements. For example, a service behavior "Business Consulting" includes four packaged service activities (AS-IS modeling, component business analysis, business re-design, IT re-design), but a service customer might only require "AS-IS modeling";
  - Functions provided by a service element may require more input information than other elements can fully provide. For example, a service "Switch an ERP system from simulated running to formal online running" requires that complete BOM (Bill of Materials) and BOP (Bill of Process) data have been fully prepared, but customer enterprises usually cannot provide complete BOM/BOP at all;
  - The output of a service element may contain much information that is not required by other service elements.
- SLA mismatch
  Quality is a very important aspect of a service. When multiple service elements are composed together to form a service system, the quality of the different service elements should be mutually compliant so that the total service quality reaches an accepted level. Even if two service elements are both of high quality, the composite quality may be bad if they do not match. Typical SLA/QoS parameters include the response time, cost, price, security, competence, etc., of service behaviors, resources and capabilities.
- Semantics mismatch
  - The syntax format of a service element's representations/descriptions is inconsistent with that of others;
  - The semantics representations of service elements are inconsistent with those of other elements.
If such interoperability issues cannot be effectively eliminated, it is difficult for service providers and customers to establish favorable collaborations between their respective service elements; the performance of the service system will then remain at a low level, and some mandatory functional service requirements may not be covered at all. Since interoperability in service systems is quite similar to interoperability between software systems, we prefer to identify possible interoperability scenarios and solve them at the service model level, and then map them to the execution level of service systems. This is quite similar to the approach of Model-Driven Interoperability (MDI) in the domain of ESA interoperability (Elvesæter et al 2006).
3 Domain-specific Service Ontology

A common approach to addressing interoperability is to import an ontology by which concepts are consolidated into a unified semantics space. In this paper we adopt a similar approach: we define a domain-specific service ontology and use it to express the semantics of service components and to support interoperability during service component composition. Table 1 lists some typical concepts and their property-based associations. Due to limited space, we will not explain each concept and property in detail; however, their meanings can be understood from their names.

Table 1. Service ontology: concepts and properties
Concept                    Property            Associated Concepts
Organization               has                 OrganizationProfile
                           contains            People
                           provides            Resource
                           provides            Behavior
                           has                 Capability
                           inStateOf           State
People                     has                 PeopleProfile
                           belongsTo           Organization
                           provides            Behavior
                           provides            Capability
                           has                 FeasibleDate
                           inStateOf           State
Behavior                   has                 Profile
                           has                 PreCondition
                           has                 PostCondition
                           hasBehaviorType     BehaviorType
                           hasExecutionType    ExecutionType
                           in                  GranularityLevel
                           has                 SLAParameter
                           isProvidedBy        Organization or People
                           hasInput            Information
                           hasOutput           Information
                           requires            Resource
                           requires            Capability
                           interactWith        Behavior
                           isComposedOf        Behavior
                           hasImpactOn         Resource or Organization or People or Behavior or Capability
                           createValue         ServiceValue
                           followRisk          ServiceRisk
Capability                 has                 CapabilityProfile
                           has                 CapabilityRepresentation
                           requires            Capability
                           inStateOf           State
CapabilityMetrics          enumerate           {QualitativeMetrics, QuantitativeMetrics}
CapabilityRepresentation   enumerate           {Advantage, Knowledge, Experience, ControllingResCapability}
                           isRelatedTo         CapabilityMetrics
Resource                   has                 ResourceProfile
                           provides            Capability
                           requires            Capability
                           provides            Behavior
                           inStateOf           State
                           requires            Resource
Information                has                 InformationProfile
                           fromState           State
                           toState             State
ServiceValue               fromState           State
                           toState             State
ServiceRisk                (none listed)
State                      contains            StateParameter
StateParameter             hasName             Name
                           hasValue            Value
                           hasMetrics          Metrics
SLAParameter               (none listed)
With the flourishing of research on the Semantic Web, many ontology description techniques have appeared, e.g., RDF, DAML+OIL, OWL, KIF, etc., among which OWL (Web Ontology Language) (W3C 2004) has become the most popular and de facto standard ontology description language for semantics on the Web. Although web-based services are just one typical kind of service element in a service system, it is easy to use OWL to describe the other, non-softwarized services as well; we therefore adopt OWL as the tool for describing our service ontology. Fig. 2 is a snapshot of the service ontology defined in Protégé (Stanford 2007).
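As a hint of what such definitions look like programmatically, the sketch below builds a tiny fragment of Table 1 with the owlready2 Python library. This is for illustration only (the authors used Protégé and OWL directly), and the ontology IRI is hypothetical:

    from owlready2 import get_ontology, Thing, ObjectProperty

    onto = get_ontology("http://example.org/service-ontology.owl")  # hypothetical IRI

    with onto:
        class Organization(Thing): pass
        class People(Thing): pass
        class Behavior(Thing): pass

        # A few of the property-based associations from Table 1.
        class contains(ObjectProperty):      # Organization contains People
            domain = [Organization]
            range = [People]
        class belongsTo(ObjectProperty):     # People belongsTo Organization
            domain = [People]
            range = [Organization]
        class isProvidedBy(ObjectProperty):  # Behavior isProvidedBy Organization or People
            domain = [Behavior]
            range = [Organization | People]

    onto.save(file="service-ontology.owl")   # serializes to RDF/XML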
Fig. 2. A screen snapshot of service ontology definition by Protégé
Compared with other service elements, service behaviors place more emphasis on interactive processes. In the Semantic Web, OWL has been extended to OWL-S (OWL 2004) for defining interactive processes between web services, and here we also employ OWL-S as the tool for describing the inner detailed processes of service behaviors, each of which is represented as an OWL-S process model according to the pre-defined service behavior ontology. Fig. 3 shows an example of an IT consulting behavior process composed of five fine-grained service behaviors: (1) CBM (Component Business Modeling) based AS-IS modeling and analysis, (2) business value analysis to specify the value of each business component for the strategic goals of an enterprise, (3) business transformation design, (4) business re-deployment, and (5) IT asset re-planning, each of which can be further decomposed into finer-grained behaviors. The ontology concepts listed above are universal across service domains, but the concrete semantics of specific service domains are not included. When the ontology is applied to a given domain, it should be further refined and extended with more elaborate semantics.
Fig. 3. A screen snapshot of service behavior definition based on OWL-S
4 Interoperability-oriented Unified Service Component Model

In this section we introduce the concept of the "Service Component (SC)" to facilitate the encapsulation of heterogeneous service elements with a uniform structure and semantics representation. Inner descriptions and functions are exposed through interfaces, and their semantics are represented by the service ontology discussed in Section 3.

4.1 Classification of Service Components
A service component (SC) is defined as a reusable service element. Any service element can be mapped to a service component; service components are correspondingly classified into the following types:

- People-ware SC (PSC): a person with professional skills who provides behaviors to others during service execution, e.g., an IT consultant with ten years' experience in the cement manufacturing industry who can provide consulting services to such enterprises;
- Resource SC (RSC): a resource with specific capabilities to provide specific behaviors, including:
  - Software SC (SSC): a software entity with specific computation capabilities that provides specific service behaviors, e.g., a web service with WSDL-based interfaces, an SCA component, a database, or an encapsulated legacy system offering web-based service behaviors;
  - Hardware SC (HSC): a piece of hardware or physical equipment with specific capabilities, e.g., a machine for manufacturing, a computer server for hosting software, an instrument for measuring and checking, a GPS for indicating directions, a telephone for communicating with customers, etc.;
  - Environment SC (ESC): a location with specific facilities serving as a container where service behaviors take place, e.g., a meeting room with a council board, ten chairs, one projector and one whiteboard for Joint Application Design (JAD), or a call center with 120 seats and 250-line SPC telephones for providing help to customers;
- Behavior SC (BSC): processes, activities or actions that a person can perform to accomplish a service task, e.g., consulting, training, manipulating a machine, using a software system, reporting unanticipated problems, etc.;
- Information SC (ISC): a physical or electronic information entity that is exchanged and shared among software systems, people, etc., e.g., a sales order, a call center log, an ESA design document, a service manual for guidance, etc.
4.2 Architecture of Service Component
Similar to the architecture of a software component, a service component is designed as a black box: interfaces are the exclusive channels through which a service component communicates with the external environment or with other service components. There are two types of interfaces, the Providing Interface and the Required Interface. The former specifies the channel by which other components access the component's basic profiles and functions (behaviors, capabilities); the latter specifies the functions/information/resources/behaviors it requires from the environment or from other components, as shown in Fig. 4.

Fig. 4. Brief structure of a service component
For each type of service component, we have designed a set of specific interfaces, listed in Table 2.
Table 2. Interfaces of service components

SC type   Interface Type   Interface Name        Cardinality
PSC       Providing        Profile               1
                           ProvidingBehavior     n
                           ProvidingCapability   n
                           QueryFeasibleDate     1
                           ReserveBehavior       1
                           CancelReservation     1
          Required         (None)
BSC       Providing        Profile               1
                           BehaviorDescription   1
                           BehaviorInteraction   1
          Required         RequiredResource      n
                           RequiredCapability    n
                           RequiredBehavior      n
                           ImpactedResource      n
                           ImpactedPeople        n
                           InputInformation      1
                           OutputInformation     1
RSC       Providing        Profile               1
                           ProvidingCapability   n
                           ProvidingBehavior     n
                           QuerySchedule         1
                           ReserveResource       1
                           CancelReservation     1
          Required         RequiredResource      n
                           RequiredBehavior      n
                           RequiredCapability    n
ISC       Providing        Profile               1
                           CreateInfoEntity      1
                           QueryInfoEntity       1
          Required         (None)
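To illustrate how the interface declarations of Table 2 can drive composition, the following sketch (an illustrative Python rendering, not the authors' implementation) encodes an SC as a black box whose providing and required interfaces carry ontology concepts as their semantics:

    from dataclasses import dataclass, field

    @dataclass
    class ServiceComponent:
        # A service element encapsulated behind providing/required interfaces;
        # each interface maps its name to the ontology concept(s) it exposes
        # or demands.
        name: str
        sc_type: str                                   # 'PSC', 'BSC', 'RSC' or 'ISC'
        providing: dict = field(default_factory=dict)
        required: dict = field(default_factory=dict)

    # (providing, required) interface names per SC type, transcribed from Table 2.
    INTERFACES = {
        'PSC': (['Profile', 'ProvidingBehavior', 'ProvidingCapability',
                 'QueryFeasibleDate', 'ReserveBehavior', 'CancelReservation'], []),
        'BSC': (['Profile', 'BehaviorDescription', 'BehaviorInteraction'],
                ['RequiredResource', 'RequiredCapability', 'RequiredBehavior',
                 'ImpactedResource', 'ImpactedPeople',
                 'InputInformation', 'OutputInformation']),
        'RSC': (['Profile', 'ProvidingCapability', 'ProvidingBehavior',
                 'QuerySchedule', 'ReserveResource', 'CancelReservation'],
                ['RequiredResource', 'RequiredBehavior', 'RequiredCapability']),
        'ISC': (['Profile', 'CreateInfoEntity', 'QueryInfoEntity'], []),
    }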
5 SC Composition Based Service System Interoperability Solutions

When specific service requirements are jointly confirmed by service providers and customers, service components are selected from a repository and composed together to obtain the corresponding service system. During service system development, the service component model presented in section 4 can effectively support interoperability, where:

- the domain-specific service ontology is imported for solving semantics conflicts;
- unified interfaces are designed for identifying function mismatches;
- the SLA ontology is imported for solving SLA mismatches.
The process of interoperability-oriented service component composition includes the following steps (a simplified sketch follows the list):

Step 1: Specify the service requirements and SLA according to the negotiation results between service providers and customers;
Step 2: Express each service requirement in the form of the service ontology;
Step 3: Find BSCs that match the requirements by querying each BSC's Profile interface;
Step 4: For each selected BSC, query its RequiredResource, RequiredCapability, RequiredBehavior, InputInformation and OutputInformation interfaces to find what kinds of resources, capabilities, behaviors and information elements it requires;
Step 5: For each service element found in Step 4, query the corresponding SCs from the repository by interface semantics matching;
Step 6: Recursively query SCs from the repository until no further SCs are required;
Step 7: For the function and semantics mismatches between SCs, design adaptors between them to eliminate the gaps;
Step 8: Compose all selected SCs together.
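Steps 3-6 amount to a recursive closure over required interfaces. A minimal sketch, assuming a semantic match predicate as described above and the ServiceComponent structure sketched after Table 2:

    def select_components(requirement, repository, match):
        # Recursively select SCs for a requirement (Steps 3-6); match(request, sc)
        # is the ontology-based interface matching predicate (assumed given).
        selected, pending = [], [requirement]
        while pending:                 # Step 6: recurse until nothing more is required
            request = pending.pop()
            for sc in repository:      # Steps 3 and 5: query the components' interfaces
                if sc not in selected and match(request, sc):
                    selected.append(sc)
                    # Step 4: follow the component's required interfaces
                    pending.extend(sc.required.values())
        return selected                # Steps 7 and 8: adaptation and composition follow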
Fig. 5. A simple example of service component composition (two PSCs and two BSCs exchanging ISCs through their providing and required interfaces, supported by an SSC, an HSC and an ESC)
6 Conclusions

A service system is the fundamental infrastructure that ensures services are correctly and efficiently executed to co-produce and share value between service providers and customers by exerting their individual competitive advantages. It is a kind of complex socio-technological system composed of various heterogeneous service elements, and whether the interoperability channels between them remain smooth will, to a large extent, impact service execution quality and efficiency. In this paper, we carefully analyze the basic service elements in a service system and the possible
interoperability scenarios, and then construct a typical service ontology defined with OWL and Protégé. To support interoperability, service elements are defined as service components with unified interfaces, whose semantics are represented by the service ontology. Due to limited space, some technical details are not included in this paper. The key issue of how to find proper SCs by semantics matching during service component composition is a large topic in itself, which we will discuss in other papers.
Acknowledgement

The research reported in this paper is partially supported by the National Natural Science Foundation (NSF) of China (60673025), the National High-Tech Research and Development Plan of China (2006AA01Z167, 2006AA04Z165) and the Development Program for Outstanding Young Teachers of the Harbin Institute of Technology (HITQNJS.2007.033).
References

[1] B. Elvesæter, A. Hahn, A.-J. Berre, and T. Neple. Towards an interoperability framework for model-driven development of software systems. The 2nd International Conference on Interoperability of Enterprise Software and Applications. Springer London, 2006, 409-420.
[2] D.E. Cox and H. Kreger. Management of the service-oriented architecture life cycle. IBM Systems Journal, 2005, 44(4): 709-726.
[3] G. Vetere and M. Lenzerini. Models for semantic interoperability in service-oriented architectures. IBM Systems Journal, 2005, 44(4): 887-903.
[4] H. Cai. A two steps method for analyzing dependency of business services on IT services within a Service Life Cycle. ICWS'06. 877-884.
[5] H. Chesbrough and J. Spohrer. A research manifesto for services science. Communications of the ACM, 2006, 49(7): 35-39.
[6] IBM. Service Sciences, Management and Engineering (SSME). http://www.research.ibm.com/SSME
[7] J. Spohrer, P. Maglio, J. Bailey, and D. Gruhl. Steps towards a Science of Service Systems. IEEE Computer, 2007, 40(1): 71-77.
[8] J.M. Tien and D. Berg. Towards Service Systems Engineering. IEEE International Conference on Systems, Man and Cybernetics, 2003, 5(5): 4890-4895.
[9] Q.G. Liu, J. Zhou, and J.Y. Li. Catastrophe modeling for service system in the networked environment. 2006 Asia Pacific Symposium on Service Science, Management and Engineering. Nov. 30-Dec. 1, 2006, Beijing, China.
[10] R. Karni and M. Kaner. An engineering tool for the conceptual design of service systems. In Advances in Services Innovations. Springer Berlin Heidelberg, 2000, 65-83.
[11] R.D. Mascio. Service process control: a method to compare dynamic robustness of alternative service processes. Journal of Process Control, 2003, 13(7): 645-653.
[12] R.X. Yuan and X. Zhang. Spatial characteristics of agent behavior in small world networks. 2006 Asia Pacific Symposium on Service Science, Management and Engineering. Nov. 30-Dec. 1, 2006, Beijing, China.
[13] Stanford Medical Informatics. The Protégé ontology editor and knowledge acquisition system. http://protege.stanford.edu/
[14] The OWL Services Coalition. OWL-S: Semantic Markup for Web Services. http://www.daml.org/services/owl-s/1.0/owl-s.html
[15] W.B. Rouse and M.L. Baba. Enterprise transformation. Communications of the ACM, 2006, 49(7): 66-72.
[16] W.-F. Tung and S.-T. Yuan. iDesign: an intelligent design framework for service innovation. Proceedings of the 40th Hawaii International Conference on System Sciences (HICSS 40), Hawaii, USA, January 3-6, 2007. 64-73.
[17] W3C. OWL Web Ontology Language Overview. http://www.w3.org/TR/owl-features/
[18] Wikipedia. http://www.wikipedia.org/wiki/services
Supporting Adaptive Enterprise Collaboration through Semantic Knowledge Services

Keith Popplewell(1), Nenad Stojanovic(2), Andreas Abecker(2), Dimitris Apostolou(3), Gregoris Mentzas(3), Jenny Harding(4)

(1) Coventry University, Priory Street, Coventry CV1 5FB, United Kingdom, [email protected]
(2) Forschungszentrum Informatik, Haid-und-Neu-Str. 10-14, D-76131 Karlsruhe, Germany, {nstojano, abecker}@fzi.de
(3) Institute of Communication and Computer Systems, 157 80 Athens, Greece, {gmentzas, dapost}@mail.ntua.gr
(4) Loughborough University, Loughborough, Leicestershire, LE11 3TU, United Kingdom, [email protected]
Abstract. The next phase of enterprise interoperability will address the sharing of knowledge within a Virtual Organisation (VO) to the mutual benefit of all VO partners. Such knowledge will be a driver for new, enhanced collaborative enterprises, able to achieve the global visions of enterprise interoperability. This paper outlines the approach to be followed in the SYNERGY research project, which envisages the delivery of Collaboration Knowledge services through interoperability service utilities (ISUs): trusted third parties offering web-based, pay-on-use services. The aim of SYNERGY is to enhance support of the networked enterprise in the successful, timely creation of, and participation in, collaborative VOs by providing an infrastructure and services to discover, capture, deliver and apply knowledge relevant to collaboration creation and operation. The proposed approach aims to (a) provide semantic ontology-based modelling of knowledge structures on collaborative working; (b) develop a service-oriented self-adaptive solution for knowledge-based collaboration services; and (c) facilitate the testing and evaluation of the efficiency and effectiveness of the solution in concrete case studies.

Keywords: enterprise interoperability, semantic web, knowledge services, knowledge management, trust, virtual organisation.
1 Introduction In a recent roadmap of the European Commission (Enterprise Interoperability Research Roadmap – EIRR hereafter, [5]) four challenges were identified as
strategic directions of research in the area of Enterprise Interoperability: (i) interoperability service utility; (ii) web technologies for enterprise interoperability; (iii) knowledge-oriented collaboration; and (iv) a science base for enterprise interoperability. Here, we discuss the development of the necessary technological infrastructure for supporting the third grand challenge, i.e. the next phase of development of deeper enterprise interoperability functionality, which will allow the sharing of knowledge within virtual organizations (VOs) to the mutual benefit of the VO partners. Such research will help address two primary needs of enterprises in successfully forming and exploiting VOs: rapid and reliable formation of collaborative consortia to exploit product opportunities; and application of enterprise and VO knowledge in operational and strategic decision-making, thereby leading to enhanced competitiveness and profitability. In this paper, we claim that research on semantic web services [12] [19] has the potential to facilitate the satisfaction of these needs and to provide the underlying technological infrastructure for supporting adaptive enterprise collaboration through semantic knowledge services [14]. Specifically, we outline the major objectives and architectural directions of a multi-national research project (SYNERGY), which aims to enhance support for the successful and timely creation of, and participation in, collaborative virtual organizations by providing an infrastructure and services to discover, capture, deliver and apply knowledge relevant to collaboration creation and operation. Section 2 outlines the motivation of this research, while section 3 gives the overall objectives and conceptual architecture of the SYNERGY infrastructure, focusing on three categories of knowledge services to be developed: moderator services, collaboration pattern services, and knowledge evolution services. Section 4 discusses related work, and the final section 5 presents the main conclusions and further work.
2 Motivation

The last decades show a clear trend in business: away from big, comprehensive trusts which can cover all stages of a value creation chain, and away from long-standing, well-established and stable supply chains. Instead, companies increasingly focus on their core business competencies and often enter into flexible alliances for value creation and production. For example, in the automotive industry market speed demands flexible configuration and re-configuration of supply chains; in typical "knowledge-oriented businesses" (like management consulting and software engineering) more and more freelancers and small specialized companies form project-specific coalitions for customer-specific products or services; while in life sciences and biotech, technological progress comes from research-based companies in co-opetitive relationships which require flexible and ad-hoc co-operations. This growing demand for flexibly interactive and efficiently integrated businesses and services has already led to a huge amount of scientific and technological research in enterprise interoperability. Although such research has already achieved promising results and has partially led to first commercial products
and service offerings, as well as to operational, deployed applications, these remain nevertheless at the level of data interoperability and information exchange; they hardly reach the level of knowledge integration, and certainly fall short of knowledge-based collaboration. Seen from the business-process perspective, today's approaches to business interoperability mainly address support processes (for instance, how to manage ordering and buying a given product), but they hardly support the companies' core processes (e.g., reaching a decision about what product to buy), in which the companies' core knowledge assets are at the centre of value creation and competitive advantage. If we rely on typical definitions of the term "knowledge" as widely accepted in the Knowledge Management area [13], some of its key characteristics are that it is highly context-dependent, interlinked with other pieces of knowledge, action-oriented, and often either bound to people or expressed in complex, often logic-based, knowledge representation formalisms. This shows that today's business interoperability approaches usually address the level of information and application data, but clearly fail to achieve the "knowledge level". This situation is sketched in Figure 1.
Fig. 1. Current Forms of Knowledge-Oriented Collaboration
Some of the identified shortcomings of the current situation are as follows. Existing solutions for automated business interoperability address data interoperability for (more or less hard-coded) support of business processes as implemented, e.g., in ERP systems. All forms of "higher-level" interoperation in
knowledge-intensive processes ([1], [18]) usually take place in the form of isolated, selective, informal person-to-person contacts, such as e-mails, meetings, telephone conversations, etc. If the business partners do not already know each other and do not have deep insight into the other company's internal affairs, they cannot be aware of their partner's internal rules, regulations, experiences, core knowledge assets, etc., which easily leads to misunderstandings, wrong estimations, and so on. Even worse, "uncontrolled" and unsystematic collaboration on complex issues is not only subject to inefficiencies, misunderstandings, or wrong decisions because of missing knowledge about the business partner; it is also exposed to the risk of unaware, accidental disclosure of corporate secrets and all kinds of confidential information. Furthermore, unmanaged knowledge exchange not only causes direct problems such as inefficiency, mistakes, or confidentiality problems; there are also indirect problems, stemming from the fact that a systematic assessment of new opportunities, continuous collaboration-process improvement, etc., can only happen if there is some level of formality and documentation as a basis.
Fig. 2. SYNERGY Vision of Knowledge-Oriented Collaboration
Figure 2 illustrates the overall idea of the SYNERGY project in terms of the TO-BE situation which will be enabled by the SYNERGY project results. In this TO-BE situation, a web-based and service-oriented software infrastructure will help all kinds of companies which need to engage in collaborative businesses to
discover, capture, deliver and apply knowledge relevant to collaboration creation and operation, thus helping them to participate effectively and efficiently in Virtual Organizations (VOs) whilst avoiding the above-mentioned shortcomings and problems. The next section outlines in more detail the objectives and conceptual architecture of our approach.
3 The SYNERGY Approach

3.1 Objectives
Following the vision and approach of the IST Enterprise Interoperability Research Roadmap (EIRR, [5]), the SYNERGY architecture takes up and refines the challenge of the Interoperability Service Utility (ISU), i.e. of an open, service-oriented platform which allows companies to use an independently offered, intelligent infrastructure to help plan, set up, and run complex knowledge-based collaboration. The ISU services to be investigated, designed, prototypically implemented and tested in the SYNERGY project can be organized in three groups:

- basic collaboration support, including: collaboration registry services, which allow publication of and search for capabilities; and information and process interoperability services, which may include, e.g., data mediation at the message level, protocol mediation at the service orchestration level, process mediation at the business level, etc. [19];
- enhanced collaboration support, including: partner-knowledge management services, which help a company that wants to enter the collaboration space to efficiently build up and manage a knowledge base of collaboration-oriented internal knowledge, together with sharing and exchange services which guarantee adequate treatment of confidentiality concerns; collaboration pattern services, as a means to use and reuse proven, useful, experience-based ways of performing and organizing communication and collaboration activities in specific collaborative tasks; and moderator services, which implement the role of a trusted third party helping to establish and run a specific collaborative engagement, to employ collaboration patterns, to mediate conflicts and communication problems, and to implement intelligent services at the partner site, such as opportunity detection;
- collaboration evolution support, i.e. learning services which continuously accompany and evaluate ongoing activities in order to realize a continuous improvement of the knowledge residing both in the central services (such as the collaboration patterns) and at the partner sites (partner-specific collaboration knowledge).
The overall aim of SYNERGY is to enhance support of the networked enterprise in the successful, timely creation of and participation in collaborative
VOs by providing an infrastructure and services to discover, capture, deliver and apply knowledge relevant to collaboration creation and operation. The infrastructure must facilitate the sharing of knowledge within an enterprise, between potential and actual VO partner enterprises, and across industrial sectors, whilst allowing, and indeed actively promoting, the protection of individual and shared commercial interests in operating knowledge, expertise and intellectual property rights (IPR). Note that whilst the results of this research are aimed at providing services which could be integrated into the offerings of an ISU, it is not the intention to duplicate research, carried out elsewhere, into the policy, strategy, delivery and operation of ISUs in general. Rather, our research effort aims to define the way in which ISUs in general can provide the essential infrastructure for knowledge-oriented collaboration.
3.2 Conceptual Architecture

SYNERGY supports the collection and preservation of individual enterprise knowledge about inter-organizational collaboration and its secure integration and harmonization within the existing knowledge landscape of a networked enterprise, stored globally in the ISU or locally at the enterprise level. Through collaboration-knowledge services, SYNERGY provides an active platform for efficient maintenance of and access to shared and individual knowledge, and moderation of inter-organizational collaboration, including the ability to continually learn from experience. In this section, we present in detail how we plan to realize this idea. Figure 3 presents an overview of the SYNERGY conceptual architecture.
Fig. 3. Overview of SYNERGY Conceptual Architecture: distributed knowledge repositories, residing locally in an enterprise or globally in the ISU (right-hand side), which can both be maintained and accessed through collaboration-knowledge ISU services (left-hand side)
Each network will develop, through its lifetime, project-specific knowledge. This is in part knowledge specific to the network's product or service, and to the processes and technologies involved, but it is also related to the current state of the network in its life-cycle. In most cases, such knowledge needs to be maintained only for the network and its partners because it is of no use, and possibly even very confusing, outside that context. Nevertheless, there may be a need to analyse such knowledge and its evolution to provide improved patterns for the future, thus forming a basis for organisational learning. Such collaboration patterns may then enter the public domain to support future collaborations across, say, an industrial sector, but there will be (perhaps most) cases where it is precisely this knowledge which represents competitive advantage to the network or partners concerned. There is therefore a need to identify where this is the case, and how services might deliver new learning to a specified and appropriate audience of future users, perhaps only partners in the network generating this new knowledge.

Within SYNERGY we aim to deliver a Collaboration Knowledge Services Framework (CKSF): a structural framework for knowledge repositories and collaboration services defining mechanisms to manage correct sharing and protection of knowledge. In order to share information and knowledge effectively, it is essential to know when sharing is advantageous: the CKSF will embody knowledge to provide this capability. The maintenance of a library of appropriate collaboration patterns, available as process and service templates to be specialised as necessary and applied to networked enterprises either as they form or as they subsequently evolve, is central to the support of partner collaboration in a VO. The CKSF will therefore embody structures and services to define collaboration pattern templates and to select (according to the nature of a developing or existing/changing network), specialise and apply such templates.

It is envisaged that the distribution of repository knowledge will reflect its commercial sensitivity: at one extreme the most sensitive knowledge, perhaps enterprise core competence, would reside within the enterprise's own systems, whilst at the other extreme a set of collaboration patterns applicable to different scenarios common across an industrial sector would reside in the service provider's repository. Between these extremes, services may deposit or access knowledge from the service provider or from other partners, but the significant issues relate to the control of access regardless of the location of knowledge. Enterprise knowledge relevant to the formation and operation of collaborative ventures will include, though not necessarily be limited to, (i) enterprise core competence, (ii) process knowledge for VO formation, (iii) process knowledge for partner selection, and (iv) VO operations management knowledge.

The architecture will be mainly based on the development of a number of ISU services for collaboration: moderator services, pattern services and knowledge evolution services. They are examined in more detail in the following sections.
3.2.2 Collaboration Moderator Services

A critical aspect of effective knowledge sharing within VOs is the identification of the most appropriate knowledge for reuse or exploitation in a particular context, combined with the most efficient tools and mechanisms for its identification, sharing or transfer. Knowledge has a life-cycle and therefore, to maintain its value, it must evolve through ongoing maintenance and update. These issues are addressed through the identification of appropriate knowledge sources and the concept of a Collaboration Knowledge Model to support knowledge-sharing activities. However, these elements on their own are insufficient to actively support knowledge sharing and interactions between collaborating partners in the VO. Partners also need to be aware of when knowledge needs to be shared, the implications of doing so, and when their decisions are likely to affect other partners within the collaboration. Therefore, tools and methods are needed to support the identification, acquisition, maintenance and evolution of knowledge, and to support knowledge sharing by raising awareness of the possible consequences of actions and of other partners' needs during collaboration. SYNERGY addresses these issues by exploiting the identified sources of collaboration knowledge in the design of a Collaboration Moderator that raises awareness of needs, possible consequences and likely outcomes in collaboration activities between the partners of the VO. Collaboration Moderation Services comprise the process of identifying key knowledge objects and processes as well as understanding their relevance in context and their relationships. We will exploit previous research work [8], [11] and will also identify innovative knowledge acquisition approaches to extend existing moderator functionalities, enabling improved collaboration support through ongoing knowledge updating and maintenance.

3.2.3 Collaboration Pattern Services

The collaboration-computing experience is currently dominated by tools (e.g. groupware) and the boundaries they introduce to collaboration processes. As new integration methods (e.g. Web Services) enable users to work more seamlessly across tool boundaries and to mix and match services opportunistically as collaboration progresses, a new organisational model becomes desirable. The challenge is not simply to make integration easier, but also to help users deal with a multitude of collaboration-related information, tools and services. By adopting collaboration patterns as the organizational model of collaboration, users will work in a more complete context for their actions and be burdened by fewer manual integration tasks. At the same time, by dividing collaborative work into distinct collaboration activities, users can focus more readily on a particular activity and deal more easily with interruptions by suspending and resuming activities as needed. Collaboration patterns augment rather than replace collaboration services and tools. Through reference-based integration methods, collaboration patterns introduce new views and organizational schemes that cut across service and tool boundaries, thereby increasing the integrity of the representation of work and mitigating scatter.
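To make the pattern idea concrete, the following sketch (not part of the SYNERGY deliverables; the cp: vocabulary and the use of Python's rdflib library are our own illustrative assumptions) encodes one such pattern as ontology statements linking a triggering event, the roles involved, the services used and the artefact produced:

```python
# Hedged sketch: a collaboration pattern captured as RDF statements.
# All vocabulary terms below are invented for illustration only.
from rdflib import Graph, Namespace, RDF, Literal

CP = Namespace("http://example.org/collab-pattern#")  # hypothetical namespace
g = Graph()
g.bind("cp", CP)

# A pattern observed in practice: participants return feedback when an agenda goes out.
g.add((CP.AgendaFeedback, RDF.type, CP.CollaborationPattern))
g.add((CP.AgendaFeedback, CP.triggeredBy, CP.MeetingAgendaSent))
g.add((CP.AgendaFeedback, CP.involvesRole, CP.Participant))
g.add((CP.AgendaFeedback, CP.usesService, CP.EmailService))
g.add((CP.AgendaFeedback, CP.producesArtefact, CP.FeedbackDocument))
g.add((CP.AgendaFeedback, CP.description,
       Literal("Participants return feedback when a meeting agenda is sent out")))
```

Because such statements cut across tool boundaries, a query over the graph can recover the full context of an activity regardless of which tool holds each piece of it.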
SYNERGY will assess the appropriate level of pattern granularity (abstraction) and represent collaboration patterns using ontologies. We will then develop the Collaboration Patterns Editor, a software component for defining collaboration patterns. The editor will represent collaboration patterns as a collection of relationships that emerge between people, the resources they use, and the artefacts they work on, as well as the communication, coordination, and business processes that are used to complete their work. Collaboration patterns will link to services that are already exposed by collaborative tools, such as workflow tools, word processing, wikis, etc. We envisage three ways of generating collaboration patterns: manually, from best-practice data; semi-automatically, by detecting prominent usage patterns using folksonomy techniques (e.g. users typically tend to send an e-mail to all other users asking for feedback when a meeting agenda is sent out); and by community members themselves, either from scratch or as refinements of existing patterns. A collaboration pattern created with the editor can be used as a template that guides the informal collaborative process without constraining it. We will also develop a simulator that takes as input information about ongoing collaborations recorded in a collaboration-pattern knowledge base. The simulator will focus on visualizing collaborative networks as well as transactions inside them.

3.2.5 Knowledge Evolution Services

One of the unique features of the SYNERGY approach is the idea that an explicit management of knowledge-based collaborative work opens up completely new possibilities for (semi-)automatically verifying, evolving, and continuously improving the collaboration-knowledge bases. We aim at a comprehensive management of all dynamic aspects of the proposed knowledge bases, including: (i) automated searching for inconsistencies and problems; (ii) automated recommendation of possible new collaboration opportunities; (iii) propagation of evolutionary changes in the collaboration environment toward dependent artefacts in the codified information space; and (iv) implementing means for self-adaptive collaboration that enable learning from experience and continuous adaptation of collaboration patterns and their usage preconditions. Altogether, this will lead to a further development of the concept of the learning organization toward a "learning virtual organization" or, better, a "learning business ecosystem". Methodologically, we intend to extend ideas and approaches of the Learning Software Organization [9].
4 Related Work

The European Commission's EIRR [5] identifies the state of the art in a number of research areas relevant to enterprise interoperability in general and to SYNERGY in particular. For example, the EIRR recognises ontology definition as necessary to the sharing of knowledge, almost by definition: Gruber [7] defines a domain ontology as "a formal, explicit specification of a shared conceptualisation".
Similarly, the EIRR anticipates that delivery of enterprise interoperability capabilities will be achieved through Interoperability Service Utilities (ISUs) providing services to clients, possibly on a pay-per-use business model making them affordable to SMEs. SYNERGY aims to define collaboration knowledge services [14] explicitly to be delivered as services in this way. Specifically, two of the main innovations of SYNERGY are: (i) a reference architecture for an inter-organisational knowledge-sharing system, based on knowledge services; and (ii) the formal representation of meta-knowledge for networked enterprise collaboration, as well as for risk assessment, propagation and evaluation in networked enterprise collaborations.

Concerning moderator services, much research has been reported since the early work of Gaines et al. [6]. Although recent efforts advance the research agenda in this field (e.g. [10], [20], [23]), none of them has addressed interoperability and semantic heterogeneity issues. In SYNERGY, we will provide reference ontologies and example knowledge bases for Collaboration Moderator Services and will design and develop a prototype implementation of semantic Collaboration Moderator Services.

Collaboration between partners in a VO from a wide variety of domains results in the need to share knowledge from varied sources, with different data types, file formats and software tools. To cope with this, [16] proposed an ontology-based approach to enable semantic interoperability; the case study demonstrates the effectiveness of ontologies in collaborative product development in supporting product-data exchange and information sharing. However, for interoperability to be achieved effectively, it is essential that the semantic definitions of the knowledge objects, processes, contexts and relationships are based on mathematically rigorous ontological foundations [11]. Much current work utilises the Web Ontology Language (OWL) for the representation of semantic objects, but OWL has a very limited capability in terms of process definition. Similarly, the Process Specification Language (PSL) has a strong process representation capability, but is weak in its representation of objects. Researchers are therefore increasingly identifying the need for heavyweight ontologies and improved knowledge formalisms [3, 22]. Within SYNERGY, we will develop a blueprint (requirements, language, system architecture, runtime experience) for a knowledge representation and reasoning solution dedicated to knowledge-based collaboration support, as well as a prototype implementation dedicated to knowledge-based collaboration-support requirements.

Patterns and pattern languages are becoming increasingly important in various areas such as community informatics [2], activity management [15] and workflow management [21]. A pattern formalizes the structure and content of an activity and the integration methods it depends on, thereby making it reusable as a template in future activities. Collaboration patterns can be regarded as abstractions of classes of similar cases and thus describe best practices for the execution of specific collaboration activities. Collaboration patterns are useful because they may be used to suggest possible tasks to users, they can provide information about dependencies between tasks, and they provide insight into the roles that are required, the resources needed, etc. By sharing collaboration patterns, users can "socialize" best practices and reusable processes.
Recently, [4] provided a categorization of
collaboration patterns, while [17] presents a first collaboration-patterns ontology. However, to our knowledge there exist no software tools that exploit collaboration patterns as a means to support collaboration in real time. In SYNERGY, we intend to develop a collaboration-pattern-based system that gathers and manipulates many types of content without relying on their native applications. Specifically, we will develop: (i) a reference ontology for collaboration-pattern representation; (ii) methods and service-based tools for collaboration-pattern management and use; and (iii) novel methods of collaborative work and knowledge-task management based on collaboration-pattern support and awareness mechanisms.

The ATHENA European FP6 integrated research project considered aspects of enterprise models for interoperability and model-driven interoperability. ATHENA focused on application interoperability and on the necessary ontological and semantic requirements to support interoperability of enterprise information. Reports of results and pilot implementations can be accessed through the ATHENA web site [24].
5 Conclusions

In this paper we outlined the major objectives of the SYNERGY project, which aims to enhance support for the successful and timely creation of, and participation in, collaborative virtual organisations. Moreover, we presented the architectural directions for a software infrastructure and services supporting collaborative virtual organisations in discovering, capturing, delivering and applying knowledge relevant to collaboration creation and operation.

SYNERGY is expected to benefit collaborating organisations in many ways. As a "learning organisation", a collaborating partner is better able to exploit past experience in responding to future opportunities, and in particular to opportunities for participation in collaborative networks. Improved risk assessment may enable collaborating partners to participate in more networks than previously seemed safe, whilst minimising exposure to survival-critical risk. Enhanced sharing of knowledge, with dynamic access control and security, accelerates and improves network decision making, shortens time to market and reduces network operating costs, whilst improved capture and especially re-use of enterprise and network knowledge reduces the cost of repeating work of earlier projects, and of repeating past errors. Improved, risk-aware decision making reduces the costs of wrong decisions and failed collaborations.

The SYNERGY software infrastructure will be extensively evaluated against the sophisticated collaborations which arise in the pharmaceutical industry and the engineering domain. The participation in the project of collaborating organisations from more than one industrial sector will enable the evaluation of different aspects of collaboration and, at the same time, will ensure that SYNERGY is generic and not sector-specific.
References

[1] Abecker, A.: Business Process Oriented Knowledge Management – Concepts, Methods and Tools. PhD Thesis, University of Karlsruhe (2003)
[2] Chai, C.S., Khine, M.S.: An Analysis of Interaction and Participation Patterns in Online Community. Education Technology & Society, 9(1):250-261 (2006)
[3] Cutting-Decelle, A.-F., Young, B.I., Das, B.P., Case, K., et al.: A Review of Approaches to Supply Chain Communications: From Manufacturing to Construction. ITcon, vol. 12, pp. 73-102 (2007)
[4] den Hengst, M., Adkins, M.: Which Collaboration Patterns are Most Challenging: A Global Survey of Facilitators. In: Proc. 40th Hawaii Int. Conf. on System Sciences (2007)
[5] European Commission: Enterprise Interoperability: A Concerted Research Roadmap for Shaping Business Networking in the Knowledge-based Economy (2006)
[6] Gaines, B.R., Norrie, D.H., Lapsley, A.Z.: Mediator: An Intelligent Information System Supporting the Virtual Manufacturing Enterprise. In: Proc. 1995 IEEE Int. Conf. on Systems, Man and Cybernetics, pp. 964-969 (1995)
[7] Gruber, T.R.: Towards Principles for the Design of Ontologies used for Knowledge Sharing. Int. J. of Human-Computer Studies, vol. 43, pp. 907-928 (1993)
[8] Harding, J.A., Popplewell, K.: Driving Concurrency in a Distributed Concurrent Engineering Project Team: A Specification for an Engineering Moderator. Int. J. of Production Research, 34(3):841-861 (1996)
[9] Henninger, S., Maurer, F.: Advances in Learning Software Organizations: 4th Int. Workshop (LSO 2002). Springer, Heidelberg (2002)
[10] Huang, G.Q.: Web Based Support for Collaborative Product Design Review. Int. J. of Computers in Industry, 48(1):71-88 (2002)
[11] Lin, H.K., Harding, J.A.: A Manufacturing System Engineering Web Ontology Model on the Semantic Web for Inter-Enterprise Collaboration. Int. J. of Computers in Industry, 58(5):428-437 (2007)
[12] Martin, D., Domingue, J.: Semantic Web Services, Part 1. IEEE Intelligent Systems, September/October, pp. 12-17 (2007)
[13] Mentzas, G., Apostolou, D., Abecker, A., Young, R.: Knowledge Asset Networking: A Holistic Approach for Leveraging Corporate Knowledge. Springer, Heidelberg (2002)
[14] Mentzas, G., Kafentzis, K., Georgolios, G.: Knowledge Services on the Semantic Web. Comm. of the ACM, 50(10):53-58 (2007)
[15] Moody, P., Gruen, D., Muller, M.J., Tang, J., Moran, T.P.: Business Activity Patterns: A New Model for Collaborative Business Applications. IBM Systems Journal, 45(4):683-694 (2006)
[16] Mostefai, S., Bouras, A., Batouche, M.: Effective Collaboration in Product Development via a Common Sharable Ontology. Int. J. of Computational Intelligence, 2(4):206-216 (2005)
[17] Pattberg, J., Fluegge, M.: Towards an Ontology of Collaboration Patterns. In: Proceedings of Challenges in Collaborative Engineering 07 (2007)
[18] Remus, U.: Integrierte Prozess- und Kommunikationsmodellierung zur Verbesserung von wissensintensiven Geschäftsprozessen. In: Abecker, A., Hinkelmann, K., Maus, H., Müller, H.-J. (eds.): Geschäftsprozessorientiertes Wissensmanagement, pp. 91-122. Springer, Heidelberg (2002). In German.
[19] Studer, R., Grimm, S., Abecker, A.: Semantic Web Services: Concepts, Technologies, and Applications. Springer, Heidelberg (2007)
[20] Ulieru, M., Norrie, D., Kremer, R., Shen, W.: A Multi-Resolution Collaborative Architecture for Web-Centric Global Manufacturing. Information Sciences, 127:3-21 (2000)
[21] van der Aalst, W.M.P., ter Hofstede, A.H.M., Kiepuszewski, B., Barros, A.P.: Workflow Patterns. Distributed and Parallel Databases, 14(3):5-51 (2003)
[22] Young, R.I.M., Gunendran, A.G., Cutting-Decelle, A.-F., Gruninger, M.: Manufacturing Knowledge Sharing in PLM: A Progression Towards the Use of Heavy Weight Ontologies. Int. J. of Production Research, 45(7):1506-1619 (2007)
[23] Zhou, L., Nagi, R.: Design of Distributed Information Systems for Agile Manufacturing Virtual Enterprises Using CORBA and STEP Standards. J. of Manufacturing Systems, 2(1):14-31 (2002)
[24] ATHENA Project Public Website: http://www.athena-ip.org/
Part V
Interoperability in Systems Engineering
Semantic Web Framework for Rule-Based Generation of Knowledge and Simulation of Manufacturing Systems

Markus Rabe, Pavel Gocev

Fraunhofer Institut Produktionsanlagen und Konstruktionstechnik (IPK), Pascalstrasse 8-9, 10587 Berlin, Germany
{markus.rabe, pavel.gocev}@ipk.fraunhofer.de
Abstract. The development of new products and manufacturing systems is usually performed in the form of projects. Frequently, a project takes more time than planned due to inconsistency, incompleteness, and redundancy of data, which delays other project activities and influences the start of production (SOP). This paper proposes a semantic Web framework for cooperation and interoperability within product design and manufacturing engineering projects. Data and knowledge within the manufacturing domain are modelled within ontologies applying rule-based mapping. The framework facilitates the generation of new knowledge through rule-based inference that enriches the ontology. This enables a high level of model completeness in the early phase of product design and manufacturing system development, which is a basic prerequisite for the realisation of a proper simulation study and analysis. The simulation results can be integrated into the ontologies as knowledge that additionally extends the ontology.

Keywords: Semantic Web, Product Design, Manufacturing, Ontology, Knowledge Base, Rules, Inference, Modelling and Simulation.
1 Introduction

The design and development of new products and manufacturing systems is usually conducted as a project that involves numerous project members, who cooperate and exchange data and information. The design and development activities result in new findings and conclusions, which are knowledge applied to a particular situation. Usually, at the beginning of the project there are strategic decisions of the management about the new products, documented as plain text or simple tables describing the features of the products. Product designers deliver the first
design and "look and feel" models of the new products. Simultaneously, technologists and production experts are involved in order to determine the technologies required for production, to develop the manufacturing system and to estimate the product cost. Various concepts of the product and the manufacturing system are usually verified through discrete event simulation of the production and the logistics.

The process of data and information exchange has a very important aspect: the understanding of meaning. Studer et al. [27] give an overview of the understanding process supported by the Meaning Triangle, which describes the relations between symbols or words, thoughts and real objects. As the meaning of words highly depends on the context, the environment and the personal view of the project partner, the authors conclude that the lack of a common understanding leads to misunderstandings, wrong communication, inadequate interoperability and limited reuse of data and information within various software systems.

Most of the required data and information in the early project phase are informal and based on the experience from already existing products and manufacturing systems. Generally, the data and the information are taken from existing IT applications for Business Process Modelling (BPM), Enterprise Resource Planning (ERP), Product Data Management (PDM), Product Life Cycle Management (PLM), Computer Aided Process Planning (CAPP), Manufacturing Execution Systems (MES), and others. Usually, for project purposes, data are extracted into office-application formats such as text, spreadsheets, project plans, presentation slides, etc. The authors' experience confirms the practice of using different files in multiple IT applications with distinct structures and terminology, which causes inconsistency, redundancy, and incompleteness. Regular meetings of the project members govern the sharing and exchange of information, as well as the presentation of the achievements. The attendees of those meetings analyse, debate, reason and conclude in order to take decisions for their further project work. The different skills and abilities of the project members in knowledge perception, conclusion and context presentation are an additional reason for a possible extension of the project work, very often influencing the critical milestones of the project. The complexity grows when supply networks have to be analysed, developed and simulated, where the data and project members belong to several enterprises with different cultures and procedures for data management.

The complexity of cooperative project work can be reduced if project data and information are free of ambiguity and redundancy and presented in a format and structure that enables easy use by the project members. The explicit description and defined data structures facilitate the integration of various sources (data and information) and therewith the interoperability within the project. The necessary level of domain description can be achieved through classes which are organised as taxonomies with assigned individuals (instances) and defined relations between them in one model. These models are called ontologies, defined by Gruber [7] as an explicit, formal specification of a shared conceptualisation.
Ontologies can be used to develop a knowledge base for one particular domain under consensus of the involved partners, as a means to collect and structure the existing knowledge as well as to generate new knowledge through reasoning.
This paper gives an overview of existing technologies and approaches for ontology development to describe manufacturing systems. The focus is set on solutions that utilise the emerging semantic Web technologies (section 2.1) and on existing and emerging standards and open standards for the modelling of manufacturing systems (section 2.2). Section 3 summarises the challenges for development. Section 4 gives an overview of the proposed framework, which is based on semantic Web technologies and is utilised for the modelling of the manufacturing domain as well as for the generation of new knowledge.
2 Related Enabling Technologies and Developments

2.1 Semantic Web Technologies
Under the leadership of the World Wide Web Consortium (W3C) [12], new technologies suitable for data modelling and management have been developed. The definition of the Extensible Markup Language (XML) [13] provided a widely accepted base for the syntax of structured data. The modelling of semantics was realised with the Resource Description Framework (RDF) [14] and the Resource Description Framework Schema (RDFS) [15], which enable the description of relations among data objects as statements in the form of subject-predicate-object (triples).

The statements described with RDF can be processed by computers, but RDF does not offer a concept to model similarities, distinctions, cardinalities, restrictions, intersections, unions, characteristics of properties and other functionalities that are needed for the explicit description of a domain. These features are offered by the Web Ontology Language (OWL) [16]. However, OWL does not support the modelling of logical rules in the form of "If-Then" rules that are essential for reasoning and inference. The requirements for logical reasoning can be satisfied with the Semantic Web Rule Language (SWRL) [17], which is still at the proposal stage within the W3C. The rules can be expressed as OWL concepts (classes, properties and instances), which enables easy integration with OWL ontologies. In terms of OWL syntax, the rules are axioms that comprise an antecedent part (body) and a consequent part (head). Both consist of atoms built with variables, classes, properties, individuals or built-ins. If all antecedent atoms are true, then all consequent atoms will be true, too. Example:

IF car has wheels AND ModelX isA car THEN ModelX has wheels.
The example shows how to extend the knowledge about the individual ModelX. The result of inference upon the rules is a new statement in the form of a triple (ModelX has wheels), which enriches the ontology and represents the newly generated knowledge.
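To make the mechanism tangible, the following sketch emulates the rule with Python's rdflib library; the namespace is invented, and a SPARQL CONSTRUCT query stands in for a real SWRL inference engine, which is our own simplification rather than part of the W3C submission:

```python
# Minimal sketch: asserting RDF triples and emulating the
# "IF car has wheels AND ModelX isA car THEN ModelX has wheels" rule.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/vehicles#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Facts as subject-predicate-object triples.
g.add((EX.Car, EX.has, EX.Wheels))      # car has wheels
g.add((EX.ModelX, RDF.type, EX.Car))    # ModelX isA car

# Rule emulated as SPARQL: what a class "has", its instances also "have".
inferred = g.query("""
    PREFIX ex: <http://example.org/vehicles#>
    CONSTRUCT { ?x ex:has ?part }
    WHERE     { ?x a ?cls . ?cls ex:has ?part . }
""")
for triple in inferred:
    g.add(triple)   # assert the new statement into the ontology

print((EX.ModelX, EX.has, EX.Wheels) in g)  # True: knowledge was generated
```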
2.2 Standards and Initiatives for Modelling of Manufacturing Systems
An essential prerequisite for the integration of information and knowledge from different project members is the deployment of a common data structure. Since the processes and operations of manufacturing companies are supported by various IT applications, there are very often data redundancies, structural differences and semantic distinctions. Most companies are striving for an integration of the business domain activities with production planning, scheduling, performance reporting, and other related processes. As a result of these attempts, numerous standards, open standards and industrial initiatives have emerged or are still appearing. All objects and phases of the life cycle of manufacturing systems are considered by standards, but no single standard covers all aspects. The list of the most frequently applied standards for modelling and description of manufacturing systems includes ISA 95 (ISO 62264) [18], OAGIS [19], ISO 10303 (STEP) [20], MIMOSA [21], RosettaNet [22], PIDX [23], ISO 15926 [24], and ISO 18629 [25].

For the design, development and analysis of manufacturing systems, standards or open standards can be used. Very suitable are the models and data structures for manufacturing systems defined by the standard for vertical business-to-manufacturing integration ISA-95 Enterprise Control System Integration and by the open standard OAGIS. The main elements defined by both standards are:

- Hierarchy, functional and physical object models of manufacturing systems.
- Activity models for manufacturing operations management.
- Object models and attributes for information exchange.
- Transactions between applications performing business and manufacturing activities.
The latest developments for systems engineering within the Object Management Group (OMG) [28] resulted in the Systems Modeling Language (SysML) [29]. Seven diagrams from the Unified Modeling Language 2.0 (UML 2.0) [30] have been adapted and two new ones have been developed in order to support specification, design, analysis and testing of complex systems and systems-of-systems. Still, the applicability for an explicit domain description and inference seems to be limited, due to the fragmentation of the diagrams and the complexity of the language, especially for non-software engineers. As the metadata interchange standard XML Metadata Interchange (XMI) [31] for SysML is based on XML, it could be applied for data and information exchange with RDF/OWL-based files.

2.3 Ontologies for Manufacturing Systems
A comprehensive state-of-the-art survey of ontologies for modelling and simulation of manufacturing systems is given in [2]. Several scientific communities conduct research on the development of ontologies for discrete event simulation. Most of the work is focused on the architecture of the simulation model and on its parts, like entities, clock, resources, activities, queues, etc.
The most recent developments, at the University of Georgia (Athens), are moving towards ontology-based model generation for simulation purposes [3], where the authors develop an Ontology-Driven Simulation (ODS) Architecture based on a discrete event modeling ontology [4]. The suggested architecture generates simulation models for the healthcare domain.

Different ontologies have been developed for manufacturing domain descriptions; however, these ontologies are used for various purposes other than simulation. The project PABADIS'PROMISE [5] develops a manufacturing ontology [6] based on ISA-95, ISO 10303 and IEC 61499 [26]. The ontology has to provide a formal and unambiguous description of manufacturing systems that will support the new control architecture for manufacturing enterprises. A manufacturing system engineering (MSE) ontology [8] has been developed to provide a common understanding of manufacturing-related terms within globally extended manufacturing teams. The goal of the ontologies for Supply Chain Management (Onto-SCM) [9] is to provide terms within the SCM domain and to model general concepts and relationships; it therewith presents a neutral basis for effective cooperation between different partners. A core manufacturing ontology [10] has been developed as an object-oriented UML model to be used in the communication between agents within an adaptive holonic control architecture for distributed manufacturing systems (ADACOR) [11].

The attempts and solutions mentioned are a significant step in deploying ontologies for a description of the manufacturing domain. However, a solution that supports the cooperation between different project members and considers the multiple information structures and semantics has not been suggested. The solutions found consider specific parts of the manufacturing domain and are still not sufficient for the evaluation of manufacturing systems through simulation. The key performance indicators, which are essential for the analysis and evaluation of manufacturing systems, have not been considered either.
3 Functionalities Required for the Generation of Knowledge

In order to support the collaborative work within development projects and to reduce the development time, support is needed for the definition, structuring and generation of knowledge. Instead of having one or more glossaries as text documents, which are usually not related to the distributed data carriers, an explicit definition of the objects and terms within the manufacturing system and of the relations between them is required. This will enable an unambiguous manipulation of the concepts, and will facilitate the consideration of only related data and information within the collaboration activities in design, modelling and simulation of manufacturing systems as a part of the digital factory. Thus, the following functionalities emerge for a framework for the generation of knowledge, modelling and simulation of manufacturing systems:

- Embedding of domain knowledge within a single knowledge base.
- Inclusion of results from the daily work of project members involved in the design and development of new products and manufacturing systems.
- Discovery of relationships between the entities of the knowledge base.
- Generation of new knowledge through inference.
- Integration of generated knowledge into the knowledge base and therewith enrichment of the knowledge base.
- Facilitating simulation and evaluation of a manufacturing system and integration of simulation results back into the knowledge base.
- Extraction of data and information from the knowledge base for the users and project members in a format needed for daily project work (e.g. spreadsheets, process models, documents, drawings, etc.).
4 Framework for the Generation of Knowledge within the Manufacturing Domain

The framework for knowledge generation has to enable a knowledge base that describes products, materials, resources, technologies and processes. Their models have to hold the information and knowledge needed for flawless project activities of the users. Figure 1 presents the main components of the framework:

- A standard-based core ontology of the manufacturing domain.
- A rule-based interface for the integration of dispersed information about the products, processes, resources, technologies and organization of a particular manufacturing system.
- The Manufacturing Knowledge Base (MKB) as an entirety of information and knowledge for the particular manufacturing system.
- Rules for inference, generation of new knowledge and enrichment of the MKB.
- An interface for modelling and simulation of the manufacturing system and integration of the simulation results into the knowledge base.
Fig. 1. Semantic Web Framework for Manufacturing Systems.
4.1 Core Manufacturing Ontology
The Core Manufacturing Ontology (OWL-M) is under development at Fraunhofer IPK and is built using the RDF/OWL syntax upon the object models and structures defined within the ISA-95 standard series and the open standard OAGIS. OWL-M is a further development of the data model for simulation explained in [1], structured according to the classes of ISA-95:

- Process segments as a business view of the production.
- Product definition with bill of materials and production plans.
- Resources and their subclasses (personnel, equipment and material).
- Work description of production, maintenance, quality tests and inventory as:
  - Capabilities, as the highest sustainable output rate that could be achieved for a given product mix, raw materials, and resources;
  - Schedules for production, maintenance, quality tests and inventory;
  - Performance, as a report of the production responses.
Since ISA-95 and OAGIS are developed for the execution of manufacturing systems, OWL-M comprises additional classes and properties that describe shift patterns, spatial elements for the layout, manufacturing engineering project phases (installation, qualification and ramp-up), resource status, queues, transporters, performance indicators, etc. OWL-M provides a core structure that can be used for the development of an extensive MKB, comprising the individuals, for a particular manufacturing system.

4.2 Rule-based Mediation of Source Ontologies

The Manufacturing Knowledge Base (MKB) comprises several interfaced ontologies around the OWL-M. The input information from different IT applications is imported as XML files. These XML files can be transformed into OWL files with weak semantics, since they retain the original XML structure expressed in OWL syntax. The integration of the elements within these input OWL files into the structure of OWL-M can be realised through rule-based mediation. The mapping procedure yields the correspondences between the source OWL files and OWL-M. The alignment and matching of the ontologies are specifications of similarities, and those specifications are the basis for rule development by the engineer. The rules govern the merging and the creation of the MKB on the skeleton of OWL-M. An inference engine (software) applies the rules, reasons over the OWL-M structure, populates the existing classes, properties and instances in the form of statements, and generates new ones.
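As an illustration of such a mapping rule (the vocabularies and the use of Python's rdflib with a SPARQL CONSTRUCT query are our own assumptions; the paper does not prescribe a toolset), a weakly semantic source graph can be lifted onto the OWL-M skeleton as follows:

```python
# Hedged sketch of rule-based mediation: a source OWL file, transformed 1:1
# from XML, still uses its original vocabulary (src:Item, src:partNo); a
# mapping rule lifts it onto invented OWL-M terms.
from rdflib import Graph, Namespace, RDF, Literal

SRC = Namespace("http://example.org/erp-export#")   # weak-semantics source
OWLM = Namespace("http://example.org/owl-m#")       # hypothetical OWL-M terms

source = Graph()
source.add((SRC.Item_42, RDF.type, SRC.Item))
source.add((SRC.Item_42, SRC.partNo, Literal("SB_21")))

mkb = Graph()   # Manufacturing Knowledge Base on the OWL-M skeleton
mapping_rule = """
    PREFIX src:  <http://example.org/erp-export#>
    PREFIX owlm: <http://example.org/owl-m#>
    CONSTRUCT { ?i a owlm:Material ; owlm:hasMaterialID ?no . }
    WHERE     { ?i a src:Item ; src:partNo ?no . }
"""
for triple in source.query(mapping_rule):
    mkb.add(triple)   # populate the MKB with re-structured statements

for t in mkb:
    print(t)
```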
4.3 Inference and Enrichment of the Manufacturing Knowledge Base

Due to the variety of information sources upon which the MKB is generated, there is "hidden" knowledge within the MKB that is still not "visible" to the user (person or IT application). For example, the bill of material given by the product
designer frequently does not contain additional materials like consumables used in production. The list of fixtures and tools needed for a particular product is not given by the product designer and is not available in the product description. The production plan does not exist either. To produce all this information in a structure needed for further processing (e.g. bill of resources, production plan), discussions and information exchange between project members are usually needed. This goal can instead be achieved through rules and reasoning, performed again by an inference engine. The antecedent part of the rules consists of existing axioms and facts about the individuals (triples) from different classes or ontologies. The consequent part includes the facts and statements that have to be generated through inference.

An overview of the whole process is given below through an example. A product SiP_07 and the corresponding manufacturing system have to be designed. A product designer determines the list of the components that the new product has to include (Figure 2). Some of the product features are given, too; these are the result of the customer requirements and the experience knowledge of the designer. Facts: SiP_07 has the following materials: RS_1, D_3, C_7 and T_5. The product SiP_07 has to be processed on a pick-and-place machine with a speed of 300. The printing has to be performed according to the Jig principle, and SiP_07 has to have a value of 5 for the feature distance.
Fig. 2. Description of the Instance SiP_07 by the Product Designer
The technologist has the knowledge about the materials, components, and their features as a result of his or her experience, using his or her own terminology (Figure 3). Facts: RS_1 needs the following technologies: Modules_Test, Reflow, Pick_and_Place, Jig_Printing, Sieve_Printing and Stacking. RS_1 needs either SB_21 with a length of 5, or SB_22 with a length of 7.
Fig. 3. Description of the Instance RS_1 by the Technologist.
The modeler is the designer of the OWL-M and is responsible for the semantics within the ontology (Figure 4), as well as for the rules, which enable reasoning and inference. Rules can be modelled as a set of atoms.

hasLength owl:equivalentProperty hasDistance
Fig. 4. Equivalency of two Properties.
A rule example is given in Table 1.

Table 1. Example of a Rule for Extension of the Bill of Material

     Rule Atom                          Example                          Source
IF
     (?P :hasMaterial ?C)               SiP_07 hasMaterial RS_1          Product Designer
     (?C :needsMaterial ?M)             RS_1 needs SB_21                 Technologist
     (?P, ?PP, ?PPV)                    SiP_07 hasDistance 5             Product Designer
     (?M, ?MP, ?MPV)                    SB_21 hasLength 5                Technologist
     (?PP owl:equivalentProperty ?MP)   Distance and Length are equal    Modeler
     equal(?PPV, ?MPV)                  5 = 5                            Modeler
THEN
     (?P :hasMaterial ?M)               SiP_07 hasMaterial SB_21         Inference Engine
The first rule atom (?P :hasMaterial ?C) considers all products P and the instances C that are related with P through the property hasMaterial. The following atoms of the rule’s body are additional premises of the rule. The head of the rule contains just one atom (triple) that has to be generated by the inference engine that
reasons upon the ontology. The rule checks whether all premises given in the antecedent are satisfied for a set of particular individuals. Then the list of the materials that are related to SiP_07 through the property hasMaterial will be extended: all values for M that satisfy the conditions will be added to the list, and therewith the MKB will be augmented. In this example the technologist defined that RS_1 needs either SB_21 with a value of 5 for the feature hasLength, or SB_22 with a value of 7 for the same feature. As the last premise of the rule requires the values of hasDistance and hasLength to be equal, only SB_21 results from the inference. A new statement (SiP_07 :hasMaterial SB_21) enriches the bill of material of the individual SiP_07, and therewith new knowledge is generated (Figure 5).
Fig. 5. Augmented Description of the Instance SiP_07 after the Inference.
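For illustration, the rule of Table 1 can be emulated over an RDF rendering of this example with a SPARQL CONSTRUCT query; the namespace is invented, and the query merely stands in for the SWRL inference engine assumed by the paper:

```python
# Hedged sketch: the Table 1 rule expressed as a SPARQL CONSTRUCT query over
# the facts of Figures 2-4 (all URIs hypothetical).
from rdflib import Graph, Namespace, Literal, OWL

M = Namespace("http://example.org/mkb#")
g = Graph()
g.add((M.SiP_07, M.hasMaterial, M.RS_1))                    # product designer
g.add((M.SiP_07, M.hasDistance, Literal(5)))
g.add((M.RS_1, M.needsMaterial, M.SB_21))                   # technologist
g.add((M.SB_21, M.hasLength, Literal(5)))
g.add((M.RS_1, M.needsMaterial, M.SB_22))
g.add((M.SB_22, M.hasLength, Literal(7)))
g.add((M.hasDistance, OWL.equivalentProperty, M.hasLength)) # modeler

rule = """
    PREFIX :    <http://example.org/mkb#>
    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    CONSTRUCT { ?p :hasMaterial ?m }
    WHERE {
        ?p  :hasMaterial   ?c .
        ?c  :needsMaterial ?m .
        ?p  ?pp ?ppv .
        ?m  ?mp ?mpv .
        ?pp owl:equivalentProperty ?mp .
        FILTER (?ppv = ?mpv)
    }
"""
for triple in g.query(rule):
    g.add(triple)   # only (SiP_07 hasMaterial SB_21) is inferred; SB_22 fails the filter
```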
The same procedure can be applied to other axioms and facts within the MKB through the deployment of additional rules specified to reach a particular goal. Deploying the knowledge of the production engineer, a production plan for SiP_07 can be generated, too. After the inference, the results appear as triples (subject-predicate-object). Only those triples that are selected by the user constitute the generated knowledge and can be asserted into the MKB.
The enrichment of the MKB can yield a completion of information needed for simulation of the manufacturing system (bill of materials, production plans, available resources, production orders and shift plans). The more data about the manufacturing system are available, the more accurate and closer to reality the simulation can be performed. The triples from the MKB through transformation can be used as an input for the simulation model. After the end of the simulation, the results (e.g. throughput-time, resource utilisation, buffer time and needed capacities) can be imported into the MKB through XML-to-OWL transformation. Therewith the information and knowledge
gained from the simulation can be used for further inference and enrichment of the MKB.
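A minimal sketch of this round trip follows; the file name and the owlm: property names (hasProcessSegment, hasDuration, hasThroughputTime) are assumptions for illustration, not terms defined by OWL-M:

```python
# Hedged sketch: extract simulation input from the MKB, then write a
# simulation result back as a new statement available for further inference.
from rdflib import Graph, Namespace, Literal

OWLM = Namespace("http://example.org/owl-m#")   # hypothetical OWL-M terms
mkb = Graph()
mkb.parse("mkb.owl", format="xml")              # assumed serialized MKB

# Extract production-plan data as input for the simulation model.
rows = mkb.query("""
    PREFIX owlm: <http://example.org/owl-m#>
    SELECT ?product ?segment ?duration
    WHERE { ?product owlm:hasProcessSegment ?segment .
            ?segment owlm:hasDuration ?duration . }
""")
sim_input = [(str(r.product), str(r.segment), float(r.duration)) for r in rows]

# ... run the discrete event simulation, then import a result ...
mkb.add((OWLM.SiP_07, OWLM.hasThroughputTime, Literal(42.5)))
```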
5 Conclusions and Future Developments

An ontology for the manufacturing domain is needed as a skeleton for modelling the knowledge about a particular manufacturing system. The basis for the core manufacturing ontology (OWL-M) presented in this paper is taken from the structures and object models defined within the ISA-95 series and the open standard OAGIS. The distributed information about one particular manufacturing system can be transferred from the source files into several ontologies. The objects of those ontologies can be integrated into the OWL-M skeleton through rule-based ontology mediation, yielding the Manufacturing Knowledge Base (MKB) for a particular manufacturing system. Rules can generate new knowledge, and through assertion of inferred statements the MKB can be augmented. Since this paper describes a method and first applications, further developments are necessary to provide users with a friendlier interface for ontology modelling, as project members cannot be expected to be familiar with ontology modelling software.

The MKB as a repository of knowledge about the particular manufacturing system can be used for further analysis and developments such as simulation. Current practice is that the logical decisions at branching points of the material flow are embedded in the simulation models; examples are scheduling decisions, batching, matching, lot splitting, etc. These logic models could instead be part of the MKB and therewith free the simulation models from complex constructs of modules and logical elements. This approach would enable rapid building of simulation models for complex manufacturing systems. The results obtained from the simulation could be integrated into the MKB as an additional enrichment. The inclusion of the scenarios into the MKB could give a basis for a manufacturing system advisor, where a knowledge base storing expert knowledge can be queried later by other users or applications.

Further developments could enable the connection of the MKB with existing Manufacturing Execution Systems. Similar to the results from simulation, the real information from daily operation of the manufacturing system could augment the MKB. The described semantic Web framework can substitute a part of the project activities usually performed in the form of data and information exchange, understanding, agreeing, reasoning and knowledge generation. The modelling of the manufacturing domain with ontologies facilitates and accelerates the cooperation and collaboration processes within projects for product design and development of manufacturing systems.
References

[1] Gocev, P., Rabe, M.: Simulation Models for Factory Planning through Connection of ERP and MES Systems. Tagungsband 12. ASIM-Fachtagung Simulation in Produktion und Logistik, pp. 223-232. Kassel (2006)
[2] Gocev, P.: Semantic Web Technologies for Simulation in Production and Logistic – a Survey. Simulation und Visualisierung 2007 – Doktorandenforum Diskrete Simulation, pp. 1-10. Magdeburg (2007)
[3] Silver, G., Hassan, O., Miller, J.: From Domain Ontologies to Modeling Ontologies to Executable Simulation Models. Proceedings of the 2007 Winter Simulation Conference, pp. 1108-1117 (2007)
[4] Miller, J., Fishwick, P.: Investigating Ontologies for Simulation Modelling. Proceedings of the 37th Annual Simulation Symposium (ANSS'04), pp. 55-63 (2004)
[5] Project PABADIS'PROMISE. www.pabadis-promise.org
[6] Development of Product and Production Process Description Language (PPPDL). www.uni-magdeburg.de/iaf/cvs/pabadispromise/dokumente/Del_3_1_Final.pdf
[7] Gruber, T.: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 5(2):199-220 (1993)
[8] Lin, H.-K., Harding, J.A., Shahbaz, M.: Manufacturing System Engineering Ontology for Semantic Interoperability across Extended Project Teams. International Journal of Production Research, 42(24):5099-5118. Taylor & Francis (2004)
[9] Ye, Y., Yang, D., Jiang, Z., Tong, T.: Ontology-based Semantic Models for Supply Chain Management. The International Journal of Advanced Manufacturing Technology. Springer, London (2007)
[10] Borgo, S., Leitao, P.: Foundations for a Core Ontology of Manufacturing. In: Ontologies – A Handbook of Principles, Concepts and Applications in Information Systems, Vol. 14, Part 4, pp. 751-775. Springer (2007)
[11] Leitão, P., Colombo, A., Restivo, F.: ADACOR – A Collaborative Production Automation and Control Architecture. IEEE Intelligent Systems, 20(1):58-66 (2005)
[12] World Wide Web Consortium. www.w3.org
[13] Extensible Markup Language. www.w3.org/XML
[14] Resource Description Framework. www.w3.org/RDF
[15] Resource Description Framework Schema. www.w3.org/TR/rdf-schema
[16] Web Ontology Language. www.w3.org/2004/OWL
[17] Semantic Web Rule Language. www.w3.org/Submission/SWRL
[18] Instrumentation, Systems, and Automation Society: Enterprise-Control System Integration, Parts 1, 2, 3. Published 2000-2005. www.isa.org
[19] Open Applications Group Integration Specification. www.oagi.org
[20] Standard for the Exchange of Product Model Data. www.tc184-sc4.org/SC4_Open
[21] Machinery Information Management Open Systems Alliance. www.mimosa.org
[22] RosettaNet Standards. www.rosettanet.org
[23] Petroleum Industry Data Exchange (PIDX). www.pidx.org
[24] Industrial Automation Systems and Integration – Integration of Life-Cycle Data for Process Plants Including Oil and Gas Production Facilities. www.iso.org; http://15926.org
[25] Industrial Automation Systems and Integration – Diagnostics, Capability Assessment, and Maintenance Applications Integration, Part 1 (under development, 2006). www.iso.org
[26] Function Blocks for Industrial-Process Measurement and Control Systems. www.iec.ch
[27] Studer, R., et al.: Arbeitsgerechte Bereitstellung von Wissen – Ontologien für das Wissensmanagement. Technical Report, Institut AIFB, Universität Karlsruhe (2001). www.aifb.uni-karlsruhe.de/WBS/ysu/publications/2001_wiif.pdf
[28] Object Management Group (OMG). www.omg.org
[29] Systems Modeling Language (SysML). www.sysml.org
[30] Unified Modeling Language (UML). www.uml.org
[31] XML Metadata Interchange (XMI). www.omg.org/technology/documents/formal/xmi.htm
Semantic Interoperability Requirements for Manufacturing Knowledge Sharing

N. Chungoora and R.I.M. Young

Wolfson School of Mechanical and Manufacturing Engineering, Loughborough University, Loughborough, LE11 3TU, UK
[email protected], [email protected]
Abstract. Nowadays, sophisticated Computer Aided Engineering applications are used to support concurrent and cross-enterprise product design and manufacture. However, at present, problems are still encountered whenever manufacturing information and knowledge have to be communicated and shared in computational form. One of the most prominent of these problems concerns semantic mismatches, which impede the achievement of seamless manufacturing interoperability. In this paper, the possible configuration of frameworks to capture semantically enriched manufacturing knowledge for manufacturing interoperability is discussed. Ontology-driven semantic frameworks, based on explicit definitions of manufacturing terminology and knowledge relationships, offer an attractive approach to solving manufacturing interoperability issues. The work described in this paper defines Hole Feature ontological models in order to identify and capture preliminary semantic requirements by considering different contexts in which hole features can be described.

Keywords: interoperability, semantics, manufacturing knowledge sharing
1 Introduction

Information and Communications Technology (ICT) infrastructures, coupled with appropriate manufacturing strategies and practices, can bring considerable benefits towards the survival and integration of manufacturing enterprises. According to Ray and Jones [1], interoperability is “the ability to share technical and business data, information and knowledge seamlessly across two or more software tools or application systems in an error free manner with minimal manual interventions”. Seamless interoperability, although a fundamental requirement for ICT
infrastructures supporting efficient collaborative product development, is still not completely achievable. This lack of interoperability is costly to many globally distributed industries [2], where significant amounts of money are spent on overcoming interoperability problems [3]. Several problems are responsible for the lack of interoperability of manufacturing systems, the most common being incompatibility between the syntaxes of the languages and the semantics of the terms used by the languages of software application systems [4]. It has been asserted that the problems of interoperability are acute for manufacturing applications, as applications using process specifications do not necessarily share syntax and definitions of concepts [5]. Moreover, it has been emphasised that either common terms are used to mean different things or different terms are used to mean the same thing, which leads to potentially substantial interoperability problems [1].

Several authors, such as Prawel [6], Liu [7] and Cutting-Decelle et al. [8], have recognised the importance of product data exchange and information modelling as a means of obtaining a certain level of systems integration. Systems and process integration and interoperability work hand in hand; for example, at the manufacturing level, the integration of mechanical analysis into the design process is one of the most obvious and crucial requirements, particularly during the early stages of design [9]. In modern PLM systems, manufacturing knowledge handled by decision support systems has to be communicated effectively across the entire lifecycle. In the design and manufacture domain, this knowledge is developed in activities such as Design for Function, Design for Assembly and Disassembly, Design for Manufacture and Manufacturing Planning. Current limitations of semantic interoperability therefore inevitably affect manufacturing knowledge sharing capability.

Efforts pursued through ontological approaches provide attractive potential solutions to the problem of semantic interoperability. However, the greatest difference of significance between ontological approaches is the basis upon which the sharing of meaning is made, in relation to the level of rigour with which terms are defined [10]. Furthermore, it has been specified that interoperability between manufacturing activities is influenced by the context dependency of information [11]. Hence, an all-embracing framework to solve semantic manufacturing interoperability issues is likely to require rigorous ontological engineering which captures the contextual representations of information and knowledge.

This paper provides an understanding of the potential of ontology-driven frameworks to solve semantic interoperability problems in the manufacturing domain. A hole feature ontology example has been devised to illustrate some of the requirements for capturing semantics, as well as to identify key areas on which effort needs to be focused to solve semantic manufacturing interoperability and manufacturing knowledge sharing issues.
2 Manufacturing Information and Knowledge Support Systems

The quest for PLM decision support systems with increasing decision-handling capabilities has driven the progression from information support systems to knowledge-based systems. Relevant information support to product design and manufacturing has been pursued through the use of Product and Manufacturing Models [12]. A Product Model may be defined as an information model which stores information related to a specific product [13]. The Product Model paradigm has slowly been extended with time, for instance through the inclusion of additional dimensions such as product family evolution [14]. On the other hand, a Manufacturing Model is said to be a common repository of manufacturing capability information, whose use is justified in the way the relationships between all manufacturing capability elements are strictly defined [15].

Nowadays, new product development in large companies, operating for instance in the automotive and aerospace sectors, is supported by Knowledge-Based Engineering (KBE). KBE in industry is mostly used to automate design processes and to integrate knowledge and experience from different departments [16]; for example, in the design and manufacture domain, the generative technology of knowledge-based tools enables companies to create product definitions which incorporate the intuitive knowledge (experience) of designers and engineers about design and manufacturing processes [17]. The main claimed benefit of KBE lies in its ability to aid rapid product development in a collaborative way for increased productivity.

2.1 Integrating Product and Manufacturing Knowledge
The concept of acquiring manufacturing knowledge is partly based on having the appropriate system infrastructure to aid the integration of product and manufacturing information repositories. It has been noted that Manufacturing Information Models have not been shown to be fully integrated with each other or with a Product Information Model [18]. Manufacturing knowledge dissemination can be more specifically targeted at the useful interoperability of both product and manufacturing information repositories, in such a way that clear contexts, relationships, constraints and rules are defined. Previously, multi-view modelling has attracted attention as a framework for gathering manufacturing systems knowledge. However, multi-view modelling to acquire manufacturing knowledge has been developed into solutions based on the use of UML, and therefore uses a lightweight ontological approach which is inappropriate for inter-system interoperability [10]. Therefore, more stringent approaches need to be devised in order to capture and share manufacturing knowledge.

2.2 Ontology-Driven Frameworks for Knowledge Support Systems
The area of ontological representation of knowledge is a subset of the technologies for information and knowledge support [19], which implies that, in one way or another, ontological approaches can be sought in order to set up platforms for the knowledge-driven integration of Product and Manufacturing Models. In recent years, ontological engineering has been witnessed in the manufacturing domain: for instance, a Manufacturing System Engineering (MSE) ontology model has been proposed that has the capability to enable communication and information exchange in inter-enterprise, multi-disciplinary engineering design teams [20]. In another instance, a product ontology concerned with the development and refinement of the ISO 10303-AP236 standard to support information exchange for the furniture industry has been developed [21]. One fundamental observation made is that only a progression towards the use of heavyweight ontologies can provide greater confidence that the real meaning behind terms coming from different systems is the same [10]. Hence, heavyweight ontologies offer the potential of supporting semantic manufacturing interoperability.
3 Understanding Semantic Requirements for Knowledge Sharing

As an attempt to understand the need for semantic support in knowledge sharing between functional domains, Figure 1 has been proposed. A functional domain may be regarded as any group or community in which a particular body of knowledge is fully accepted, understood and shared for the realisation of a specific function. In a concurrent engineering environment, a functional domain could be synonymous with, for example, a team of people working in Design for Assembly or another group working in Design for Function. Having a communal acceptance of knowledge within a group implies that a specific ontology is being adopted within a functional domain. Therefore, it can be deduced that the common understanding of concepts, the communication and sharing of these concepts, and subsequent decision-making all depend on the semantics defined and used in a functional domain.

Assuming that a functional domain has a semantically well-defined ontology as a basis for sharing knowledge and the meaning behind the knowledge, it becomes feasible to suggest that different domains are likely to develop their own ontologies. Hence, two functional domains, regardless of whether they operate within similar areas or not, may not necessarily achieve consensus whenever knowledge is to be shared between the two groups. This is because, although the ontologies can be well-formed, accepted and semantically correct in both individual groups, the semantics from the two functional domains do not match when the groups have to communicate with each other. In concurrent engineering design and manufacture settings, semantic mismatches are primarily due to multiple manufacturing and product-related terminologies defined to mean similar things or to mean disparate concepts. As a consequence, a software system developed to suit the purpose of one functional domain, needing to communicate with another software system suited to another domain, does not always readily do so. These semantic problems can be carried downstream in the product lifecycle. At present, there still exist problems related to ambiguous semantics ([5], [10]) which prevent manufacturing knowledge from being captured and shared seamlessly. In Figure 1, the central ellipse
denotes the ongoing misfits in semantics, which lead to the problem of knowledge sharing.

Fig. 1. Knowledge Sharing and Semantics between Functional Domains
3.1 Hole Feature Ontology Model to Identify Semantic Requirements
To illustrate the issue identified previously, an example has been put forward in which aspects of two different but related domains (namely a design domain and a manufacture domain) have been captured. An ontological approach using the Protégé 3.3 tool has been exploited to model the two domains. The scope of the task is to identify a set of semantic requirements, which can be used as specifications for the design of frameworks to promote semantic interoperability of knowledge resources across disparate contextual representations of features. It has been acknowledged that feature-based engineering bridges the gap between CAD and knowledge-based engineering systems [22]. Features play a key role in providing integration links between design and manufacture [23]. For these reasons, it was considered appropriate to build an ontology around a feature so as to incorporate some level of manufacturing knowledge. The feature ontology proposed has been developed to identify semantic requirements related specifically to holes as features. This is partly because hole feature manufacture is problematic and sometimes costly to industries, as a result of the diverse contexts, manufacturing processes and poorly established best-practice methods associated with hole features. An example illustrating the prominence of contextual definitions in the designation of hole features is given in Figure 2. In the design functional domain, a counterbored hole accommodating a particular size of bolt may be
regarded as a bolt hole. In the manufacture functional domain, the functionality (of the hole acting as a bolt hole) can be of little importance; instead, the same hole may be designated as a counterbored hole. In the latter case, this could further imply that the counterbored hole needs to consist of an optional centre-drilling operation, a required drilling operation and a required counterboring operation. A hole feature may be considered from various contexts, and its semantics need to be defined for contexts such as functional, geometry, manufacturing, machining process and assembly [11].

Fig. 2. Considering a Counterbored Hole from Two Different Contexts
3.1.1 Design Hole Feature Ontology
A lightweight ontology for hole feature representation from a purely functional and geometry context, reflecting the design functional domain, was developed. Figure 3 depicts the class hierarchy from the Design Hole Feature ontology. This ontology may be regarded as one possible context in which hole features can be represented. The superclass “Design Hole Feature” has two subclasses, namely “Circular Cross-Section Hole” and “Shaped Hole”, implying shape and geometric property variations from these two parent classes.
Fig. 3. Class Hierarchy for the Design Hole Feature Ontology
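To make the structure concrete, the sketch below shows how the hierarchy of Figure 3, together with the inherited “depth” slot and the class-specific “primary diameter” slot discussed next, might be encoded using the rdflib Python library. This is a minimal illustration only: the namespace URI and property names are our assumptions, and the original work used Protégé frames rather than code.

```python
# A minimal rdflib sketch (illustrative, not the authors' Protégé project)
# of the Design Hole Feature hierarchy; namespace and slot names are assumed.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

DHF = Namespace("http://example.org/design-hole-feature#")  # hypothetical URI
g = Graph()
g.bind("dhf", DHF)

# The superclass and its two subclasses, as in Fig. 3
for cls in ("DesignHoleFeature", "CircularCrossSectionHole", "ShapedHole"):
    g.add((DHF[cls], RDF.type, OWL.Class))
g.add((DHF.CircularCrossSectionHole, RDFS.subClassOf, DHF.DesignHoleFeature))
g.add((DHF.ShapedHole, RDFS.subClassOf, DHF.DesignHoleFeature))

# "depth" is declared on the superclass, so both subclasses inherit it
g.add((DHF.depth, RDF.type, OWL.DatatypeProperty))
g.add((DHF.depth, RDFS.domain, DHF.DesignHoleFeature))
g.add((DHF.depth, RDFS.range, XSD.double))

# "primary diameter" only applies to circular cross-section holes
g.add((DHF.primaryDiameter, RDF.type, OWL.DatatypeProperty))
g.add((DHF.primaryDiameter, RDFS.domain, DHF.CircularCrossSectionHole))
g.add((DHF.primaryDiameter, RDFS.range, XSD.double))

print(g.serialize(format="turtle"))  # emit the ontology for inspection
```

Attaching “primary diameter” to the subclass rather than the superclass is exactly the design decision discussed below: a shaped hole has no single primary diameter, so the property must not be inheritable from “Design Hole Feature”.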
Protégé allows the user to define classes and specify the properties or slots of these classes. These properties, also known as attributes in the object-oriented environment, describe the information elements which are the building blocks of a class. The necessity for parent-to-child class property inheritance (i.e. the is-a relationship) is significant. For example, the “depth” property of the class “Design Hole Feature” is inherited by the subclass “Circular Cross-Section Hole” and subsequent child classes. It is also possible to define additional slots for specific classes; for example, the “Circular Cross-Section Hole” class has a “primary diameter” property. It would not be reasonable for the class “Shaped Hole” to possess the property “primary diameter”, since a shaped hole consists of two or more geometries which define its cross-section. In the proposed Design Hole Feature ontology, a few instances have been defined.

3.1.2 Machining Hole Feature Ontology
A similar approach to that used for the Design Hole Feature ontology has been adopted to devise a Machining Hole Feature ontology. The latter captures other contexts in which hole features can be represented, namely a machining and manufacturing process context, thus reflecting the manufacture functional domain. Provision for the geometric context has also been made, since it is impossible to describe a “Machining Hole Feature” without referring to the basic dimensions of the feature. Several classes and subclasses have been defined. Two main superclasses are present, namely “Machining Hole Feature” and “Hole Machining Operation”. The “Machining Hole Feature” class holds knowledge about particular classes of holes which can be encountered during production, while the “Hole Machining Operation” class holds knowledge about the capability of a particular process to machine a given hole feature. Figure 4 illustrates some of the relationships that can be made through slots. One type of relationship is the inverse relationship, whose semantics is well defined (in the current ontology the “produced by” and “produces” slots, pertaining to the classes “Machining Hole Feature” and “Hole Machining Operation” respectively, are inverse slots); Protégé allows the user to input this information through the “inverse-slot” option it makes available.
Fig. 4. Class Hierarchy and Knowledge Relationships in the Machining Hole Feature Ontology
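The sketch below, again using rdflib under the same illustrative naming assumptions, shows one plausible encoding of the two superclasses and of the inverse “produces”/“produced by” slots that Figure 4 depicts; owl:inverseOf plays the role of Protégé’s “inverse-slot” option, and “requires machining sequence”, discussed next, is modelled as a property linking an operation to its prerequisite operations.

```python
# A hedged sketch (assumed names, not the authors' artefact) of the Machining
# Hole Feature superclasses and their inverse slots, using rdflib.
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

MHF = Namespace("http://example.org/machining-hole-feature#")  # hypothetical
g = Graph()
g.bind("mhf", MHF)

g.add((MHF.MachiningHoleFeature, RDF.type, OWL.Class))
g.add((MHF.HoleMachiningOperation, RDF.type, OWL.Class))

# producedBy links a hole feature to the operation that makes it
g.add((MHF.producedBy, RDF.type, OWL.ObjectProperty))
g.add((MHF.producedBy, RDFS.domain, MHF.MachiningHoleFeature))
g.add((MHF.producedBy, RDFS.range, MHF.HoleMachiningOperation))

# produces is declared as its inverse: asserting one direction lets a
# reasoner infer the other, mirroring Protégé's "inverse-slot" option
g.add((MHF.produces, RDF.type, OWL.ObjectProperty))
g.add((MHF.produces, OWL.inverseOf, MHF.producedBy))

# requiresMachiningSequence links an operation to prerequisite operations
g.add((MHF.requiresMachiningSequence, RDF.type, OWL.ObjectProperty))
g.add((MHF.requiresMachiningSequence, RDFS.domain, MHF.HoleMachiningOperation))
g.add((MHF.requiresMachiningSequence, RDFS.range, MHF.HoleMachiningOperation))
```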
The property defined as “requires machining sequence” is a requirement for the “Hole Machining Operation” class. This is because, for selected processes, a processing sequence may be required before a complete machined hole feature can be obtained. As previously seen, attributes or properties can be defined so that they represent relationships stating the behaviour of information elements between classes, slots and instances, thereby capturing some knowledge within the domain ontology. In order to understand occurrence and process-dependency aspects in hole manufacturing operations, a reaming operation required to produce a reamed hole has been taken into account. A “Reamed Hole” may be described as being “produced by” a certain reaming operation, which “produces” the hole feature and achieves the necessary dimensional target. The reaming operation involves the use of an available manufacturing resource, “Machine Chucking 10.02mm”. Also, it is possible to define that a reaming operation “requires machining sequence” and to use this property to identify other manufacturing operations and resources pertinent to a reaming operation. The instances diagram in Figure 5 gives a clear idea of the level of semantic linking that needs to be defined through relationships.
Fig. 5. Implications of Producing a Reamed Hole from Knowledge Contained in Instances
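The following sketch reconstructs, under the same assumed vocabulary, the instance-level chain of Figure 5: a 10.02 mm reamed hole produced by a reaming operation that in turn requires a 9.50 mm stub-length drilling operation, which itself requires a 2.00 mm centre-drilling operation. Walking the “requires machining sequence” property then recovers the implied process chain.

```python
# A hedged reconstruction of the Fig. 5 instances using the vocabulary
# assumed above; identifiers and values follow the figure.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

MHF = Namespace("http://example.org/machining-hole-feature#")  # hypothetical
g = Graph()

reamed = MHF.ReamedHole_10_02mm
ream_op = MHF.MachineChucking_10_02mm
drill_op = MHF.StubLength_9_50mm
centre_op = MHF.CentreDrill_2_00mm

g.add((reamed, MHF.diameter, Literal(10.02, datatype=XSD.double)))
g.add((reamed, MHF.depth, Literal(35.0, datatype=XSD.double)))
g.add((reamed, MHF.producedBy, ream_op))
g.add((ream_op, MHF.produces, reamed))

# The reaming operation presupposes a drilled hole, which itself
# presupposes a centre-drilled hole
g.add((ream_op, MHF.requiresMachiningSequence, drill_op))
g.add((drill_op, MHF.requiresMachiningSequence, centre_op))

# transitive_objects walks the chain (starting node included), recovering
# the full set of prerequisite operations for the reaming operation
for op in g.transitive_objects(ream_op, MHF.requiresMachiningSequence):
    print(op)
```

Note that nothing in this encoding says whether the chain is ordered by dimensional target or by importance; that ambiguity is precisely the point taken up in the discussion below.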
4 Discussions and Conclusions

The feature-oriented ontologies developed with well-defined semantic relationships reflect a potential way forward for the integration and sharing of product and manufacturing knowledge. It is clear that the functional and geometry contexts from the Design Hole Feature ontology capture some relevant aspects of the Product Model perspective. On the other hand, the machining process context
based on manufacturing methods, witnessed in the Machining Hole Feature ontology, reflects the Manufacturing Model perspective. In Design for Manufacture, having context-specific hole feature representations is important, but these representations should not be used in isolation from each other. Thus, a basis needs to be defined to enable the successful management and matching of feature-oriented ontologies constructed from different contextual views for knowledge sharing. To meet the purpose of manufacturing interoperability, unambiguous semantic relationships need to be set up among these context-specific ontologies so that multi-context manufacturing knowledge becomes interoperable and subsequently shareable. In the experiment, the “requires machining sequence” property gives the relationship between multiple hole machining operations and introduces a knowledge element to the system. Although existent, the basis on which the “requires machining sequence” property has been defined is still not explicit enough, and it would be an advantage to create a more rigorous statement. Some of the questions which could be asked concerning this issue are: is the sequence an ordered sequence? Is the machining sequence in ascending order of dimensional targets? Is the sequence in descending order of importance? One highly promising direction for solving this and similar issues is to include meta-modelling of the classes and slots in such a manner that the semantics behind classes and properties are fully captured, thus removing the ambiguities present in semantic linking through property definitions. Given that the manufacturing knowledge in question is captured using different methods employed by different groups and cutting across varying contexts, two important questions need to be reviewed in the quest for an ontology-driven semantically interoperable framework for manufacturing knowledge sharing. These main questions are identified below:
- How can methods of manufacturing knowledge capture be refined in such a way that a level of semantic enrichment of the knowledge is achieved to enable comprehensive knowledge sharing?
- To what extent can a semantic framework verify what segments of manufacturing knowledge are appropriate for sharing? Conversely, what segments of manufacturing knowledge prove to be semantically dissimilar so that they cannot be shared?
At this stage, it is possible to list a number of distinct semantic requirements to be satisfied with the intention of promoting semantic interoperability for manufacturing knowledge sharing. These requirements are as follows:
- It is necessary to provide an adequate basis for sharing design and manufacture related meaning through comprehensive ontological frameworks (e.g. through the construction of domain ontologies).
- These frameworks should entail sufficient complexity in the way information and knowledge are structured (e.g. through meta-modelling).
- Semantic definitions need to cut across several contexts so as to provide a basis for matching semantics from different functional domains (e.g. geometric, assembly and machining contexts).
- Semantic linking should be made through well-defined knowledge relationships, thereby bridging the semantic gaps between contexts.
- It is essential to provide an underlying mathematical rigour to formalise semantic statements (e.g. where dependencies on options, sequences, activities and event-based parameters are present).
Future work, based on RDF(S)/OWL ontology markup languages and using the software tools Protégé OWL and Altova SemanticWorks, shall provide further insight into answering the above questions. Furthermore, the subsequent exploration of heavyweight feature-oriented ontologies using the Process Specification Language (PSL) shall provide additional support in overcoming the problem of semantic interoperability for manufacturing knowledge sharing.
References

[1] Ray SR, Jones AT, (2003) Manufacturing interoperability. Concurrent Engineering, Enhanced Interoperable Systems. Proceedings of the 10th ISPE International Conference, Madeira Island, Portugal: 535–540
[2] National Institute of Standards and Technology, (1999) Interoperability cost analysis of the U.S. automotive supply chain. http://www.nist.gov/director/prog-ofc/report991.pdf
[3] Brunnermeier SB, Martin SA, (2002) Interoperability costs in U.S. automotive supply chain. Supply Chain Management: An International Journal 7(2): 71–82
[4] Das B, Cutting-Decelle AF, Young RIM, Case K, Rahimifard S, Anumba CJ, Bouchlaghem N, (2007) Towards the understanding of the requirements of a communication language to support process interoperation in cross-disciplinary supply chains. International Journal of Computer Integrated Manufacturing 20(4): 396–410
[5] Pouchard L, Ivezic N, Schlenoff C, (2000) Ontology engineering for distributed collaboration in manufacturing. AIS2000 Conference. http://www.acmis.arizona.edu/CONFERENCES/ais2000/Papers.back/Papers/PDF/a026pouchardlc.pdf
[6] Prawel D, (2003) Interoperability best practices: advice from the real world. TCT 2003 Conference organised by Rapid News and Time Compression Technologies, NEC, UK
[7] Liu S, (2004) Manufacturing information and knowledge models to support global manufacturing coordination. PhD Thesis, Loughborough University, Loughborough, UK
[8] Cutting-Decelle AF, Das BP, Young RIM, Case K, Rahimifard S, Anumba CJ, Bouchlaghem NM, (2006) Building supply chain communication systems: a review of methods and techniques. Data Science Journal 5: 26–51
[9] Aifaoui N, Deneux D, Soenen R, (2006) Feature-based interoperability between design and analysis processes. Journal of Intelligent Manufacturing 17: 13–27
[10] Young RIM, Gunendran AG, Cutting-Decelle AF, Gruninger M, (2007) Manufacturing knowledge sharing in PLM: a progression towards the use of heavyweight ontologies. International Journal of Production Research 45(7): 1505–1519
[11] Gunendran AG, Young RIM, Cutting-Decelle AF, Bourey JP, (2007) Organising manufacturing information for engineering interoperability. Interoperability for Enterprise Software and Applications Conference, Madeira Island, Portugal
[12] Costa CA, Young RIM, (2001) Product range models supporting design knowledge reuse. IMechE Part B Journal of Engineering Manufacture 215(3): 323–337
[13] Molina A, Ellis TIA, Young RIM, Bell R, (1995) Modelling manufacturing capability to support concurrent engineering. Concurrent Engineering Research and Applications 3(1): 29–42
[14] Sudarsan R, Fenves SJ, Sriram RD, Wang F, (2005) A product information modelling framework for product lifecycle management. Computer Aided Design 37: 1399–1411
[15] Liu S, Young RIM, (2004) Utilizing information and knowledge models to support global manufacturing co-ordination decisions. International Journal of Computer Integrated Manufacturing 17(4): 479–492
[16] Liening A, Blount GN, (1998) Influences of KBE on the aircraft brake industry. Aircraft Engineering and Aerospace Technology 70(6): 439–444
[17] Kochan A, (1999) Jaguar uses knowledge-based tools to reduce model development times. Assembly Automation 19(2): 114–117
[18] Feng SC, Song EY, (2003) A manufacturing process information model for design and process planning integration. Journal of Manufacturing Systems 22(1): 1–16
[19] Chandra C, Kamrani AK, (2003) Knowledge management for consumer-focused product design. Journal of Intelligent Manufacturing 14: 557–580
[20] Lin HK, Harding JA, (2007) A manufacturing engineering ontology model on the semantic web for inter-enterprise collaboration. Computers in Industry 58(5): 428–437
[21] Costa CA, Salvador VL, Meira LM, Rechden GF, Koliver C, (2007) Product ontology supporting information exchanging in global furniture industry: 278–280. In: Goncalves RJ et al. (eds.) Enterprise interoperability II: new challenges and approaches, Springer-Verlag London Limited, London, UK
[22] Otto HE, (2001) From concepts to consistent object specifications: translation of a domain-oriented feature framework into practice. Journal of Computer Science and Technology 16(3): 208–230
[23] Brimson J, Downey PJ, (1986) Feature technology: a key to manufacturing integration. Computer Integrated Manufacture review
Collaborative Product Development: EADS Pilot Based on ATHENA

Nicolas Figay1, Parisa Ghodous2

1 EADS IW, 12 rue Pasteur, 92152 Paris Cedex, France
[email protected]
2 Bâtiment Nautibus, Université Claude Bernard Lyon 1, 43 bd du 11 novembre 1918, 69622 Villeurbanne Cedex, France
[email protected], URL: http://liris.cnrs.fr/
Abstract. When seeking to support collaboration within an enterprise or between enterprises, it is necessary to support the fast establishment of communication and interactions between numerous organizations, disciplines and actors. Increasingly, this implies being able to interconnect the enterprise applications that support the involved partners and their communication, authoring and management processes. This paper presents an innovative federation framework, and its associated usage, which composes in an effective way already existing enterprise, knowledge and application interoperability frameworks that are themselves standardized. The framework is defined according to the ATHENA vision, addressing interoperability at the enterprise, knowledge and information/communication technology levels and establishing links at the semantic level by means of ontologies for information, services and processes. The framework aims to address the governance, organizational and technological barriers identified in industrial contexts when seeking to establish fast and effective eCollaboration, providing modeling and execution platforms able to produce executable collaboration models and supporting round-trip development, which is a prerequisite when federating the legacy solutions of the partners involved in the collaboration. It proposes policies for the choice, composition and usage of de jure and de facto standards that will remove these barriers. It will be validated within the particular domain of collaborative product development in the aerospace sector.

Keywords: Interoperability of Enterprise Applications, Federation, Model Driven Interoperability, Semantic Preservation
1 Introduction: Interoperability Needs and Issues for the Emerging Networked Organization

In order to face an ever more competitive environment, enterprises rely more and more on enterprise applications, which support the main functions and processes of the enterprise: Enterprise Resource Planning applications, Human Resources management applications, and so on. A characteristic of these applications is that they support, enact, monitor, control and sometimes execute the business processes of the enterprise for numerous activities and categories of users, who are geographically distributed. In addition, they manage the informational resources of the enterprise, which are becoming more and more electronic (databases, documents, models, knowledge) and which are produced by authoring tools such as document authoring tools (e.g. Microsoft Word), relational database systems and Computer Aided Design tools (e.g. Dassault Systèmes Catia). As the functions of the enterprise need to be interconnected in order to support transversal processes, in particular those related to clients and attached to the creation of value and benefits, integration of the enterprise information system became a key issue in recent years, with integration frameworks, integration systems (e.g. Enterprise Application Integration systems), middleware (e.g. the Common Object Request Broker Architecture [3]), data/document exchange facilities (e.g. XML) and Service Oriented Architectures and systems as the means. In order to govern the evolution of the whole enterprise information system and to align it with the strategic objectives of the enterprise, new approaches have been created, such as controlled urbanization of the enterprise information system with associated enterprise modelling capabilities (e.g. business process modelling or decision modelling). Nevertheless, integration of enterprise applications remains difficult, due to the heterogeneity of legacy applications and to the fact that the software used was not initially designed to be interoperable (cf. the interoperability anti-patterns defined in ATHENA [13]). In addition, the integration solutions provided by the market are most of the time not interoperable between themselves, and have led to technological or software product silos being added to functional silos.

Because of the globalization of the economy and the necessity to focus on core high-value business activities, enterprises have also had to establish partnerships with other companies specialized in other domains, which are required to support their activities but lie outside their core activities. For example, in-house software development was replaced by the selection of commercial off-the-shelf (COTS) software products. Another example is the aerospace industry, where the amount of subcontracted activity is today targeted at 60%, including design activity. This led to the creation of what is called the Virtual Enterprise, where integrators have to coordinate the activities of all the partners and to bring them onto the information and communication system supporting product development (what is called the Extended Enterprise). The challenge of the Extended Enterprise is very often seen as being able to integrate the information systems of several enterprises. But as each enterprise information system was established independently, integration of enterprise information systems is a very difficult challenge.
Due to the new relationships established between enterprises, such integration is also most of the time neither wished for nor possible, in particular because partners and members of the supply chain work with several partners, clients and programmes that are independent, and also because internal information systems are based on heterogeneous core domains and activities. How, then, can an integrator constrain a partner to work with tools related to a domain in which the partner is not expert and which lies outside its core activity? In such a situation, collaboration is to be established between partners having heterogeneous core domain activities, processes, applications and information/communication technologies, targeting not integration but the fast, time-limited interconnection of the applications supporting the collaboration within a collaboration space. For such a challenge, the existence and usage of an accurate set of 'de jure' and 'de facto' standards addressing interoperability at the enterprise level, the knowledge domain level and the information and communication technologies level are critical. The competition of numerous standardization bodies and solution providers, pushing valuable but incompatible and overlapping solutions, does not facilitate the task.

The ATHENA research programme on interoperability of enterprise applications highlighted the difficulties of using solutions coming from these different communities simultaneously when trying to integrate them within piloting activities. For example, semantic mediation prototypes were based on Resource Description Framework schemas, while Service Oriented execution components were based on messages structured according to XML Schema [8]. It was therefore necessary to work on mappings between schema definition languages. Similar issues exist for model interchange formats (XML Metadata Interchange, XML Process Definition Language, etc.). From the enterprise piloting activities point of view, this was particularly important because the different models had to be coherent and projected onto a robust Service Oriented execution platform. Finally, business domain communities (such as manufacturing, health, etc.) are creating their own standardization communities collaborating with several information and technology interoperability initiatives. Some try to formalize their needs in a way that is independent of information and communication technology, developing their own specification frameworks (e.g. the ISO STEP community) that include bindings to important interoperability technologies (e.g. for the ISO STEP community, bindings to XML, UML [12], Java, etc.), but also contributing to vertical specification definitions concerning their domain in liaison with different other communities (e.g. PDM Enablers and PLM services [4] within the ManTIs [5] group of OMG, or the PLCS consortium at OASIS defining PLCS PLM services or Reference Data Libraries). An important difficulty is related to incompatibilities between the paradigms and solutions developed for each technological framework. This was highlighted, for example, within the SAVE project when trying to use STEP AP214 and PDM Enablers simultaneously. Another difficulty is related to incompatible implementations of the standards, due to insufficient conformance testing frameworks and certification processes. Consequently, there is a strong need today for new interoperability solutions enabling enterprises to collaborate in a federated way, and to interconnect their legacy integrated information systems using the different relevant technological interoperability frameworks together in a coherent way, avoiding technological silos.
2 State of the Art Concerning Standard-Based Technological Interoperability Frameworks to Reuse

The ATHENA research programme provided foundations for establishing interoperability of enterprise applications, but also highlighted a set of important issues that are still not solved. Existing interoperability frameworks can be assessed according to the different viewpoints defined by ATHENA.

2.1 ATHENA
The ATHENA Integrated Project proposed a vision for interoperability of enterprise applications, which should be addressed in a holistic way at the enterprise, knowledge and ICT (Information and Communication Technologies) levels, with semantic models as the glue between levels and enterprise applications. This vision is a basis for the establishment of the federation framework and for the identification of the relevant standards to use within it.
Fig. 1. Interoperability supported at all the layers of the enterprise by ATHENA
Within ATHENA, several sectors (aerospace, telecom, automotive and furniture) and domains (enterprise modelling, executable service oriented platforms) were involved. It is important to point out, however, that an open federative platform for several organizations was not really addressed by the researchers, who targeted one-to-one connections, as reflected in the reference model. Neither federation issues related to data (e.g. multiple identification, naming and typing rules) nor the usage of different legacy formalisms within a single modelling environment were addressed. This was an issue when trying to integrate the innovative solutions produced by ATHENA into an open collaborative framework.

2.2 Standard-Based Technological Interoperability Frameworks to Reuse
Numerous solutions and approaches have been developed over the last decade in order to address interoperability. A state of the art review was carried out to cover the different layers to be considered for enterprise application interoperability (enterprise, knowledge, ICT with semantic mediation), with coverage of the information, service and process aspects, and with openness, standardization and the existence of implementations as commodities on the Web (i.e. free, open source and robust implementations) as prerequisites. According to ATHENA, the different viewpoints to consider are domain knowledge, enterprise, and ICT (Information and Communication Technologies).

Domain knowledge related standards have been defined to address some interoperability issues, with specific technological frameworks and bindings to other technologies. This is the case for the manufacturing domain, which is used for validation of the federation framework through collaborative design business scenarios. This community has been producing, through the ISO 10303 STEP standard, a set of application protocols (formal and computational information models in EXPRESS [7]) in order to address the exchange, sharing and long-term retention of data describing a product (e.g. an aircraft or an automobile) between several organizations and software products. Bindings to existing technologies of interest are provided. This community is also producing sets of standardized object oriented interfaces (Product Data Management Enablers) and web services (Product Lifecycle Customer Support, or PLCS [24], and ManTIs Product Lifecycle Management, or PLM, services) based respectively on the Common Object Request Broker Architecture and the Web Services Definition Language [6]. In order to use these standards together, several initiatives, such as PLCS, have addressed the issue of federating different domain models. PLCS produced the STEP AP239 standard, PLCS PLM services and Reference Data Libraries standards and specifications, which attempt mixed usage of several technologies and provide ways to relate different domain ontologies to the product data model provided by AP239.

Enterprise modelling related standards are just emerging, and it is difficult to really identify those that should be used. Some address the modelling of an enterprise as a system, providing modelling constructs for an enterprise, such as the Unified Enterprise Modelling Language. But such standards are mainly used by consultants and by the functions of the enterprise dealing with organization, not for application engineering. Application specifications are often formalized by means of UML use cases, in combination with business process modelling. The usage of different modelling languages for application specifications, combined with project boundaries, leads to the “islandization” of applications, i.e. the creation of independent and non-interoperable applications within the enterprise. Finally, emerging approaches related to executable business process models and the related standardized modelling languages should be considered, as part of the enterprise modelling constructs, or as components of the enterprise models to be considered. Without a real consensus today on the usage of a single enterprise modelling “de jure” standard, it is difficult to select one. A more pragmatic approach would be to establish, for a given community, a set of collaboration processes, shared services and information models of reference for collaboration, within a federated environment. More and more communities are dealing with federation issues: federated authentication (the Liberty Alliance project), web services federation (OASIS’ SAML), the Federated Enterprise Reference Architecture (or FERA, by Collaborative Product Development Associates, LLC, or CPDA). FERA proposes a framework for loosely coupled business process integration as part of its product value management and product lifecycle management infrastructure research services, with an implementation on an ebXML platform.
It is consequently within a technological silo, and it creates a new overlapping model of reference in PLM domains which is not related to other PLM standards such as PLCS, PDM Enablers or the STEP application protocols. The FERA approach is nevertheless important, as it relates to product value management, which is not necessarily considered by the other PLM standardization communities.

At the information and technological level, it is important to distinguish the modelling platforms, the development platforms and the execution platforms. The most promising execution platforms are the service oriented platforms encompassing standardized application servers, integration frameworks, process enactment and execution platforms, presentation integration (portals), federated authentication and single sign-on components. Such an execution platform was already described by the author [1] [2] within the Networked Collaborative Product Development Platform of the ATHENA aerospace pilot. The federation platform will be an extension of this platform. For the development platform, the standards used should be mature enough open “de jure” and “de facto” standards for application modelling, programming and model transformation. An extension of the EUROPA [25] Eclipse platform, the Papyrus UML platform, was identified as an excellent candidate; it supports most of the latest versions of the OMG standards for UML modelling, interchange (diagrams and models) and transformation languages (Model to Model). In addition, it should be extended with Model to Text and Text to Model capabilities, to be able to import and export views of enterprise modelling platforms and operational execution platforms. For the modelling platforms, the most appropriate technologies and standards are those related to MDA [16] on the one hand, and to the semantic Web on the other hand, in conjunction with the usage of models relevant for a community (e.g. emerging enterprise/application modelling languages and models). The Papyrus platform, in conjunction with appropriate profiles and transformations, is a good candidate as an MDA platform. Some of these transformations are being developed by the author within the scope of the OpenDevFactory project [20], as components of the federation framework described later on. For ontological models, the modelling platform is the Protégé ontology editor, coupled with standards-based querying tools (SPARQL [11] based tools such as Virtuoso) and reasoning tools (DL [14] based tools such as Pellet [18]). Ways are required to federate heterogeneous models (different modelling languages and different viewpoints) within each environment and to interchange federated models between the two modelling environments. This issue is addressed within the proposed federation framework through the definition of extended multi-ground hyper-models.
3 Proposed Federation Framework

The proposed federation framework addresses the different interoperability needs and issues identified in the previous sections. It first aims to establish a federation of applications on a collaborative space that will allow collaboration between enterprises having heterogeneous internal private and specific processes, information, organizations, modelling platforms and execution platforms. The challenge is to identify eligible legacy solutions based on open standards which can be combined to provide organization, knowledge and application modelling platforms supporting model-driven collaboration and round-trip transformation between service oriented execution platforms and modelling platforms. The proposed federation framework encompasses a federated organization reference model; specifications and principles for enabling collaborative and B2B standard-based platforms; and, finally, innovative enabling concepts to break the technological silos: round-trip transformation and heterogeneous model aggregation.

3.1 The Federated Organisational Model
The federated organization is a network in which each member is an organization or a company with its own objectives, private business processes and specific business objects. Each information process is supported by applications that provide services to human end users or to other applications. Relationships between business processes and applications are established through business use cases, which define interactions between applications and users in a specific context, constrained by business rules coming from discipline methods, application usage guidelines and software product usage. User interfaces are sequenced in order to give access to basic information Create, Read, Update and Delete (CRUD) operations, or to provide service invocation templates and result consultation templates. Internally, an application is a business information container (data, documents) and a service provider, which implements some business logic. Interfacing with other applications is done by publishing sets of services that can be composed by means of executable business processes, defined by means of programs, batches, compositions or workflows. In order to be integrated with the outside, the application can support business data/document exchange and sharing, business service access for applications and business service access for human users (by means, for example, of portlets). Most of the time, members of the network do not have a model oriented approach, and no enterprise models or application models exist, only documents. The internal applications are also not structured this way. It is a prerequisite to consider that the front-office applications of the enterprise are structured this way, and can publish what is described. This represents the state of the practice, through the usage of application servers, portals and process enactment systems.
Fig. 2. Front Office Application of a Network Member
A federated organization is a network where the collaboration processes and business rules are not those of one member of the network but those of the network itself, and where each member of the network has its own specific internal private organization, processes and set of business objects. A collaborative platform for a federated organization is a place where collaboration processes can interconnect legacy application systems, through different means such as service publication and consumption or service composition. In addition, it should allow business information exchange and sharing in a secured way, on the basis of a common business language. It is important to point out that such a platform does not correspond to the model of the Extended Enterprise, which is just an extension of the enterprise boundary to actors external to the enterprise.
Fig. 3. Collaborative Platform for a Federated Organization
3.2 Collaborative Platform Specifications
The network collaboration platform for the federated organization provides a collaboration execution platform, with business logic containers including heterogeneous sets of shared and private aggregated business data and metadata, which can be used and accessed through application servers, process enactment systems and user application interaction enactment. In addition, it provides a shared governance platform based on computation independent enterprise business models. It also provides a collaboration modelling platform for executable business models, shared business services and business information models. It provides a development platform that makes the link between the business models and the application models, and also allows the business logic to be implemented when required. Two other components are a resource repository (services, models, processes) and a communication and transformation platform. All these platforms are interconnected in order to support round-trip transformations between enterprise models, application models and execution platforms. They each provide sets of services, such as governance services, modelling-time services, business runtime services and, finally, transversal enactment services. Complementary services that are not domain specific, such as security, authentication, transactions, etc., are also addressed through services that are plugged into the business containers (CCM/EJB models for application servers).
Fig. 4. Platform architecture requires semantic preservation and aggregation
The first principle is business logic alignment between the enterprise models of the governance, modelling and development platforms and the business logic container of the execution platform, with semantic preservation between the formal representations on each of these platforms. The second principle is the simultaneous existence within the collaboration system of aggregated heterogeneous business logic, data and schemas.

3.3 Enabling Concepts: Extended Multi-ground Hyper-Models for Heterogeneous Grounds
The modelling platform to provide is not a simple application modelling platform, as it should be able to produce annotated CIM, PIM and PSM models, with the capability to import and export models using different modelling languages. Round-trip transformation should be possible in order to maintain coherency between models, code and binaries. Finally, some semantic mediation and transformation should be possible to exchange information between the different members of the network, while still being able to perform reverse transformation and reconciliation of models and data. As part of the federation framework, concepts were defined for a meta-modelling workshop responding to all these needs.

First, the concept of a modelling “ground” was defined. A modelling ground is a concrete modelling environment, based on a modelling paradigm with an associated standardized language. Activities, services and tools are associated with a modelling ground, aiming to implement the vision of the community that defined the paradigm and its associated standards. An example is the object paradigm, with UML as its associated standard, dedicated to software engineering with representation of several aspects of an application (usage with use cases, objects with class diagrams, sequences with dynamic diagrams, and deployment with deployment diagrams). A concrete ground could consequently be the Eclipse environment with the UML2 plug-in, based on UML 2.1, XMI 1.4 [9] and EMF 2.0. Another modelling ground is, for example, Protégé 3.3, based on the semantic web paradigm and the OWL 1.0 [10], RDF Schema, SPARQL and OWL-S standards.

The second concept is the semantic preservation of the business concept. For the users of an application, the way a business object was logically defined in order to be interpreted by a computer is not important. For example, a “person” concept remains the same whether modelled as a UML class, an OWL class, an EXPRESS entity or an XSD entity. So when a conceptual business model is moved between different grounds, there is a risk of losing semantics due to “impedance mismatch”, i.e. the loss of information when translating from one language to another, as the languages are not equivalent. The idea for avoiding semantic loss is to extend the hypermodel approach, in order to support preservation of the semantics of business models when moving a model from one ground (e.g. a UML2 meta-workshop) to another (e.g. an OWL meta-workshop). The idea is also to be able to import business models formalized on other grounds (e.g. product data exchange based on EXPRESS models) while keeping track of the original modelling concepts. For example, when reusing an application protocol EXPRESS schema containing an entity “Person”, the resulting UML class Person should keep track of the fact that it can be considered as a STEP entity within the model. Of course, this is true not only for the class concept but also for all the other modelling concepts and their relationships.
Fig. 5. Semantic Preservation of Business Concept Person within different grounds
So in a collaboration space federating numerous heterogeneous systems, with important information flows and round-trip generation, it is very important not to lose information, in order to have a coherent system. This is almost impossible with business model translation from one language to another, except by keeping track of the original language constructs, together with additional information concerning the details lost during the transformation, allowing the reverse transformation to be made. A concrete federation framework is therefore considered as a set of grounds for modelling, coding or execution that have to be robust in terms of standards compliance, in order to allow the usage of several paradigms and their associated modelling languages within the collaboration space. Such an approach should resolve some issues related to technological silos and enable mixed usage of different technological frameworks.
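As a purely illustrative reading of this idea (our sketch, not ATHENA or EADS code), the fragment below keeps, for the business concept “Person”, one annotation per ground, so that a move from an EXPRESS schema to a UML class and onward to OWL can always be reversed; all class, ground and stereotype names are assumptions.

```python
# A minimal sketch of an extended hyper-model element: the business concept
# carries one annotation per modelling ground, preserving provenance and any
# details the target language could not express. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class HyperElement:
    name: str
    # ground -> {construct used there, plus details needed for the
    # reverse transformation}
    grounds: dict = field(default_factory=dict)

    def annotate(self, ground: str, construct: str, **details):
        self.grounds[ground] = {"construct": construct, **details}

person = HyperElement("Person")
person.annotate("EXPRESS", "ENTITY", schema="AP214")     # CIM provenance
person.annotate("UML2", "Class", stereotype="STEP_Entity")
person.annotate("OWL", "owl:Class")

# A UML tool sees a class, a Protégé export sees an owl:Class, and the
# EXPRESS annotation survives both moves, so the round trip stays reversible
print(person.grounds["EXPRESS"])
```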
4 Application to a Product Lifecycle Management Collaborative Platform

These principles have been applied to the establishment of a collaborative PLM federated organization, with STEP application protocols as business information models of reference, PLM services as business services of reference, and, as collaborative processes of reference, processes such as Engineering Data Package, Change and Configuration Management or co-review of a product. It is currently being evaluated through an industrial research project called SEINE [22]. A Networked Collaborative Product Development Platform is being defined and developed, extending the one defined in the Aerospace ATHENA pilot, with in particular an application server based on EJB3 as execution platform
and a model-based development platform based on AndroMDA and UML2 profiled modelling for enterprise applications on the Web. Finally, semantic preservation of STEP application protocols between the modelling and execution platforms is currently being addressed. An EXPRESS UML2 profile was defined, allowing application protocols to be transformed into UML models, stereotyped as EXPRESS models on the one hand (CIM provenance) and as EJB entities and Value Objects on the other hand (PSM targets). A similar way of annotating the model with an OWL UML2 profile, derived from the OMG ODM, is under development in order to be able to easily export the extended hypermodel onto the Protégé 3.3 ground.
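As an illustration of what such a profile buys, the sketch below shows a single concept carrying its EXPRESS provenance as a stereotype while PSM-side artifacts are derived from it. The dictionary layout, the naming rules and the AP214 schema name are assumptions made for this sketch, not the actual profile definition.

# Sketch of profile-based annotation: the same CIM element keeps its
# EXPRESS (STEP) origin while PSM targets (EJB3 entity, Value Object)
# are derived from it. Layout and naming rules are assumptions.

def profile_element(name, express_schema):
    """A UML-like element stereotyped with its EXPRESS origin."""
    return {"name": name,
            "stereotypes": {"EXPRESS entity": express_schema}}

def derive_psm_targets(element):
    """Derive PSM-side artifact names, AndroMDA-generation style."""
    n = element["name"]
    return {"ejb3_entity": f"{n}Entity", "value_object": f"{n}VO"}

person = profile_element("Person", express_schema="AP214")
print(person["stereotypes"])       # provenance survives in the model
print(derive_psm_targets(person))  # {'ejb3_entity': 'PersonEntity', ...}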
5 Conclusion and Perspectives

Numerous component solutions exist for the establishment of an open, standards-based collaboration platform responding to the emerging need for the fast establishment of collaboration within a federated organization, allowing interconnection of the enterprise information systems involved. The federation framework currently proposes ways to address this, including a federated organization model of reference, standards-based platform specifications, principles and, finally, innovative concepts for extended hypermodels allowing semantic preservation on heterogeneous grounds. This framework, which is being developed iteratively, needs to be extended in order to take into consideration not only information models, but also service and process models. Some investigation should also be done on the data themselves (managed individuals or object instances). These extensions will be described in future papers. Once robustness has been demonstrated, methodological approaches will be defined for the different actors involved in the establishment and usage of the collaboration space.
References

[1] Figay, N.: "Technical Enterprise Applications interoperability to support collaboration within the Virtual Enterprise all along lifecycle of the product", I-ESA 2006 Doctoral Symposium
[2] Figay, N.: "Collaborative Product Development: EADS Pilot Based on ATHENA results", 2006, within "Leading the web in concurrent engineering – Next Generation Concurrent Engineering", IOS Press, ISBN 1-58603-651-3
[3] CORBA, http://www.omg.org/gettingstarted/history_of_corba.htm
[4] PLM services, http://www.prostep.org/en/standards/plmservices/
[5] ManTIs, http://mantis.omg.org/index.htm
[6] WSDL, http://www.w3.org/TR/wsdl
[7] ISO 10303-11:1994 Industrial automation systems and integration – Product data representation and exchange – Part 11: Description methods: The EXPRESS language reference manual
[8] XML Schema, http://www.w3.org/TR/xmlschema-0/
[9] XML Metadata Interchange, http://www.omg.org/technology/documents/modeling_spec_catalog.htm#XMI
[10] OWL Web Ontology Language Overview, http://www.w3.org/TR/owl-features/, June 2006; Web Ontology Language – Description Logic, http://www.w3.org/2004/OWL/
[11] SPARQL, http://www.w3.org/TR/rdf-sparql-query/
[12] UML Unified Modeling Language, http://www.omg.org/technology/documents/formal/uml.htm
[13] ATHENA, http://www.athena-ip.org
[14] Baader, F., Calvanese, D., McGuinness, D., Nardi, D., Patel-Schneider, P. (eds): The Description Logic Handbook: Theory, Implementation and Applications, Cambridge University Press, United Kingdom (2003) 555 pp.
[15] ATHENA Aerospace piloting web site: http://nfig.hd.free.fr
[16] MDA: Model Driven Architecture official web site: http://www.omg.org/mda/
[17] Virtuoso web site: http://virtuoso.openlinksw.com/wiki/main/Main/VOSSPARQL
[18] Pellet web site: http://pellet.owldl.com/
[19] Object Management Group, http://www.omg.org
[20] OpenDevFactory as part of the Usine Logicielle project, http://www.usinelogicielle.org
[21] Papyrus UML, http://www.papyrusuml.org/
[22] S.E.I.N.E., http://seine-plm.org/
[23] Enterprise Java Beans specification, http://java.sun.com/products/ejb/docs.html
[24] PLCS web site, http://www.plcs-resources.org/
[25] Eclipse Europa, http://www.eclipse.org/
Contribution to Knowledge-based Methodology for Collaborative Process Definition: Knowledge Extraction from 6napse Platform

V. Rajsiri1, A-M. Barthe1, F. Bénaben2, J-P. Lorré1 and H. Pingaud2

1 EBM WebSourcing, 10 Avenue de l'Europe, 31520 Ramonville St-Agne, France {netty.rajsiri, anne-marie.barthe, jean-pierre.lorre}@ebmwebsourcing.com
2 Centre de Génie Industriel, Ecole des Mines d'Albi-Carmaux, 81000 Albi, France {benaben, pingaud}@enstimac.fr
Abstract. This paper presents a knowledge-based methodology dedicated to automating the specification of virtual organization collaborative processes. Our approach takes as input knowledge about collaborations coming from a collaborative platform called 6napse, developed by EBM WebSourcing, and produces as output a BPMN (Business Process Modeling Notation) compliant process. The 6napse platform provides the knowledge used to instantiate the ontology contributing to the collaborative process definition. The ontology covers the collaborative network domain, consisting of (i) collaboration attributes, (ii) descriptions of participants and (iii) collaborative processes inspired by the MIT Process Handbook. Keywords: Ontology based methods and tools for interoperability, Tools for interoperability, Open and interoperable platforms supporting collaborative businesses
1 Introduction

Nowadays companies tend to open themselves to their partners and to enter one or more networks in order to gain access to a broader range of market opportunities. The heterogeneity of partners (e.g. location, language, information system), long-term relationships and the establishment of mutual trust between partners form the context for the creation of collaborative networks. Interoperability is a possible way toward facilitating the integration of networks [6] [18]. A general issue for each company in a collaboration is to establish connections with its partners. Partners have no precise idea of what their collaboration will be, but they know what they expect from it. This means that
partners can express their collaboration requirements (knowledge) informally and partially. But how can these requirements be made more formal and complete? In principle, partners collaborate through their information systems. The concept of the collaborative information system (CIS) has evolved to deal with interoperability issues. According to [16], this concept focuses on combining the information systems of different partners into a unique system. Developing such a CIS involves the transformation of a BPMN collaborative process model into a SOA (Service Oriented Architecture) model of the CIS. This is comparable to the Model Driven Architecture (MDA) approach [9], as discussed in [17]. BPMN supports the Computation Independent Model (CIM) of the MDA, while the SOA-based CIS supports the Platform Independent Model (PIM). Consequently, our research interest concerns the CIM level. The main focus is to formalize the informal and partial knowledge expressed by the partners in the form of a BPMN-compliant process. But how do we obtain the BPMN process? The answer is sketched below:
Fig. 1. Our approach for defining a BPMN collaborative process
The schema above shows our approach, comprising (i) two gathering methods: interview and knowledge extraction, (ii) two repositories: collaboration characteristics (participant and collaboration) and collaborative processes, and (iii) a transformation. The approach starts by gathering knowledge, either by interview or by extraction from the 6napse platform. This knowledge is classified and kept in the corresponding repositories. The main difference between the two gathering methods is that the interview provides knowledge about the participants (e.g., name, role, business, service) and their collaborations (e.g., relationship, common objective) for the characteristics repository, while the extraction from 6napse provides not only the same knowledge as the interview, but also the collaborative process (e.g., CIS, CIS services). Both repositories allow knowledge to be analyzed, kept and constructed in the form of a collaborative process. Defining these two repositories requires implementing a knowledge-based methodology. This methodology uses an ontology and reasoning to automate the specification of collaborative processes. The ontology covers the collaborative network domain, which maintains the repositories of collaboration characteristics and collaborative processes, as shown in Fig. 1. The reasoning methodology
establishes the interactions between the repositories in order to accomplish the building of collaborative processes. The paper focuses firstly on introducing the 6napse platform. Secondly, the ontology describing the collaborative network domain is presented. Finally, the knowledge extraction from the platform and an application scenario are discussed.
2 6napse Collaborative Platform

Global evolution drives enterprises to open themselves to their partners. The necessity of creating a network depends on various elements, for example competition, communication and the complexity of products. The collaboration is set up around business tools corresponding to a collaborative process between the enterprises (e.g., group buying services, supplier-customer services). The current market offers many collaboration tools addressing various functionalities, for example communication (i.e. e-mail, instant messaging), document sharing (i.e. blogs), knowledge management (i.e. wikis, e-yellow pages) and project management (i.e. calendar sharing). However, one of the functionalities most required by users is the ability to integrate directly into the platform the functionalities emerging from their own activity domain, as mentioned in [7].

EBM WebSourcing, an open source software provider, was founded in late 2004. Its business focuses on editing and developing solutions dedicated to SME clusters. EBM WebSourcing is now developing a collaborative platform called "6napse". 6napse is a collaborative platform intended for enterprises that would like to work together. The main idea is to provide a trustable space where members can establish (or not) commercial relations among themselves. The platform plays the role of mediator between the information systems of the enterprises. It differs from the other products currently on the market because it integrates business services. The development of this platform is based on the social network paradigm. It aims at (i) creating a dynamic ecosystem of enterprises which communicate by using the services provided by the platform (i.e. send/receive documents, send mails, share documents), (ii) creating a network by viral propagation in the same way as Viadeo or LinkedIn, and (iii) being the first step towards integrating the information systems of the partners and defining more complicated collaborative processes (i.e. supply chain, group buying, co-design). The third development aim led to the concept of the CIS and to the collaborative process definition using the knowledge-based methodology (Fig. 1). The following are some examples of functionalities that the platform offers to its members:

- Registering the enterprise and the user.
- Login and logout of the user on the platform.
- Creating or consulting the profile of an enterprise or user.
- Inviting partners to join the network, creating a partnership and a collaboration.
- Searching for enterprises and services via keywords (i.e. service, localization, tag).
Through these functionalities, the members can create their partnerships and collaborate. A user of the 6napse platform is an individual belonging to an enterprise: an enterprise is considered as a frame grouping its individuals (employees). An enterprise also serves as a reference for external individuals, because one normally needs to recognize the enterprise before being able to identify the personnel belonging to it. The collaboration occurring on the platform is established at the individual level.
3 Collaborative Network Ontology (CNO)

In Artificial Intelligence, according to [3], knowledge representation and reasoning aim at designing computer systems that reason about a machine-interpretable representation of the world, similar to human reasoning. A knowledge-based system maintains a knowledge base which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Our knowledge-based methodology relies on this approach to deal with collaborative process design. Fundamentally, the methodology in our case starts by analyzing the input knowledge regarding the collaborative behaviors of the participants and ends by providing a corresponding BPMN collaborative process. The input knowledge we require concerns the collaborative characteristics or behaviors of all the involved partners of the network: for example, business sectors, services (competencies) and roles. This knowledge is extractable from the 6napse platform, discussed in the previous section; the knowledge extraction itself is presented in Section 4. What we expect as output of the methodology are the network participants, the exchanged data, the business services and the coordination services. These elements are essential for designing a BPMN collaborative process. Thus, to make the methodology able to produce these elements, we need (i) to define an ontology and rules describing the collaborative network domain and (ii) to use an inference engine to deduce these modeling elements from the input knowledge. According to [4], an ontology is a specification of a conceptualization. It contains a set of concepts relevant in a given domain, their definitions and their interrelationships. To define the domain and scope of an ontology, [8] suggests starting by answering several basic questions concerning, for example, the domain of interest, the users and the expected result of the ontology. Developing an ontology is often akin to defining a set of data and their structure for programs to use: problem-solving methods and domain-independent applications use ontologies, and knowledge bases built from them, as data. The domain of interest we focus on is the collaborative network domain, especially for designing collaborative processes. The
knowledge base built from this ontology will cover the two repositories shown in Fig. 1. It will be used in applications by the consultants of EBM WebSourcing to suggest to their clients a collaborative process relevant to given collaboration behaviors. There are three key concepts underlying the collaborative network ontology (CNO): (i) the participant concept, (ii) the collaboration concept and (iii) the collaborative process concept. What we need to define in the ontology is not only the concepts, relations and properties; we also need to define rules that reflect the notion of consequence. The following are examples of rules in the collaboration domain: if the decision-making power is equal and the duration is discontinuous, then the topology is peer-to-peer; or, if the role is seller, then the participant provides the delivering-goods service. The following paragraphs describe the three concepts with their relations, properties and rules. The participant concept, see Fig. 2, concerns the descriptions of a participant, which are the characterization criteria of a collaboration [13]. A participant provides several high-level services (discussed in the collaborative process concept) and resources (e.g., machine, container, technology), plays proper roles (e.g., seller, buyer, producer) and has business sectors (e.g., construction, industry, logistics).
Fig. 2. RDF graph representing the participant concept
From the above figure, reasoning by deduction can occur, for example, between role and service. A participant is not compelled to have both a role and services, but at least one of them is required, because each can be completed from the other by deduction: the related services can be derived from a given role and vice versa. For example, if the role is computer maker, then its services are making screens, making keyboards, and so on.
Fig. 3. RDF graph representing the collaboration attributes
A collaborative network has a common objective (e.g., grouping the same products to buy together) and a CIS. A CIS has its own CIS services, which can be generic (e.g., send documents/mails) or specific (e.g., a select-supplier service). A network can have several topologies, which can be star, peer-to-peer, chain, or a combination of these three structures. A topology has duration and decision-making power characteristics. The decision-making power can be central, equal or hierarchic; the duration can be continuous or discontinuous. A topology contains relationships, which can be group-of-interest, supplier/customer or competition relationships. Reasoning on topology by deduction works, for example, as follows: if the decision-making power is equal and the duration is discontinuous, then the topology is peer-to-peer; if the decision-making power is hierarchic, whatever the duration, then the topology is chain. The collaborative process concept, see Fig. 4, is an extension of the concepts developed by the MIT Process Handbook project [8] and of the value chain of [12]. The value concept provides a list of services describing competencies at a very high and generic level (e.g., vehicle manufacturing, software development), while the Process Handbook provides business services (i.e. assemble components of a computer) at the functional level. A service can be divided into business services and coordination services. A business service describes a task at the functional level. A service can derive the business services that correspond to it: for example, if the service is making keyboards, then the business services are assembling the circuit board, testing the board, and so on.
Fig. 4. RDF graph representing the service
The concept of dependencies (flows) of resources is also included. To deduce a dependency, according to [2], we consider the possible combinations of services using resources. Each dependency can be associated with a coordination service (i.e. managing the flow of material from one business service to another). The concepts of dependency and coordination are related because coordination is seen as a response to problems
caused by dependencies. This means a coordination service is in charge of managing a dependency. For example, if the placing-order service of a buyer produces a purchase order as output and the obtaining-order service of a seller uses a purchase order as input, then there is a resource dependency between these two services, and the forwarding-document coordination service can be used to manage it. Collaborative networks usually have several participants, resources and relationships, and a common objective. The common objective is achieved by services which use resources and are performed mostly through the proper roles of the participants. A relationship joins two participants, and its type depends on the roles of the participants (e.g. if two participants play the seller and buyer roles, the relationship will be supplier/customer). The following figure shows these expressions, which unite the three above concepts:
Fig. 5. Union of the participant, collaboration and collaborative process concepts.
Once the CNO has been informally defined, we need to formalize it in a language with rigorous syntax and semantics. OWL (Web Ontology Language), a W3C recommendation, is the most recent development in standard ontology languages. There are three OWL sublanguages, but the most appropriate one in our case is OWL-DL (Description Logics) because it is suited to automated reasoning: it guarantees the completeness of reasoning (all inferences are computable) and decidability. To use this language, we need an editor to create the ontology's elements (classes, relations, individuals and rules). We use Protégé, an open-source OWL editor developed by Stanford University [11]. To reason over the ontology, we use the inference engine Pellet, an open source OWL-DL inference engine in Java, developed at the University of Maryland's Mindswap Lab [15].
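To make the rule-based deduction concrete, here is a plain-Python restatement of two of the rules quoted above. In the methodology itself these rules live in the OWL-DL ontology and are evaluated by Pellet; the function names, the rule table and the example services below are assumptions made only for this sketch.

# Illustrative restatement of CNO rules in plain Python; in the framework
# itself they are OWL-DL axioms evaluated by Pellet, and all names below
# are assumptions made for the sketch.

def deduce_topology(decision_making_power, duration):
    """The topology rules quoted in the text."""
    if decision_making_power == "equal" and duration == "discontinuous":
        return "peer-to-peer"
    if decision_making_power == "hierarchic":   # whatever the duration
        return "chain"
    return None  # no rule fires; more knowledge is needed

# Role -> services completion (the role/service deduction of Fig. 2).
ROLE_SERVICES = {
    "computer maker": ["making screens", "making keyboards"],
    "seller": ["delivering goods"],
}

def complete_services(role, known_services):
    """A participant needs a role or services; each completes the other."""
    return sorted(set(known_services) | set(ROLE_SERVICES.get(role, [])))

print(deduce_topology("hierarchic", "continuous"))   # -> chain
print(complete_services("seller", ["invoicing"]))    # adds delivering goods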
4 Knowledge Extraction from 6napse Platform

The two previous sections described the CNO and the 6napse platform. This section is concerned with using the 6napse platform together with the CNO, as discussed in [1]. We focus on extracting knowledge from the platform, which is used to instantiate the CNO. The initial idea of using 6napse came from the observation that when partners have no clear perspective on what their collaboration is supposed to be, or would like to see it more clearly, they can try to collaborate through 6napse. In principle, we try to extract knowledge and to find patterns behind the collaborations occurring on 6napse. The extracted knowledge and patterns will
be used to improve existing collaborations and to define more complicated collaborative processes. In this section, we discuss the knowledge extraction from the 6napse platform. Then, an application scenario is presented using a supplier-customer use case.

4.1 2-level Knowledge Extraction
Knowledge extraction from 6napse can occur at two levels corresponding to the life cycle of an enterprise on the platform: individual registration and collaboration. The first level occurs when the enterprises register on the platform. The second level occurs when the enterprises collaborate or start exchanging data with their partners; it can take place only once the first level has been completed. We detail the extractable knowledge at each level of the platform in comparison with the requirements of the ontology defined in Section 3. The individual registration level provides knowledge about the description of the participants concerning the enterprise itself, which is seen as an organization. This knowledge is available as soon as the participants have individually registered themselves on the platform, and it does not vary with the collaboration. We can find this knowledge on the "my company" and "my profile" pages of the platform. Table 1 compares the information that the ontology requires as input with what we can extract from 6napse.

Table 1. Requirements of the ontology vs. extractable knowledge from 6napse

Requirements of the ontology | Extractable knowledge from 6napse
Name of participants | Name of the enterprise
Business sector | Activity sector
Services | List of the services
Business services | List of the business services (or function of the individual belonging to the enterprise)
Relationships | List of the partners
The following is an example of the information on the "service" tab of the "my company" page. The service tab shows the list of services which describe the competencies of the enterprise.
Fig. 6. Screenshot of the service tab of an enterprise
The collaboration level provides the source of knowledge concerning the enterprise considered as a member of the network. To enter the collaboration level, enterprises need to declare their partnerships and create their collaborations (networks) on the platform. This can occur once the individual registration of each enterprise has been done. After the individual registration, the enterprises can invite other 6napse members to be their partners in the network. Then, they can create their collaborations in the collaborative space of the platform. While the partners are collaborating (e.g., transferring documents) via the platform, we extract their collaboration knowledge. Through this knowledge, we can understand what is happening in the real collaboration. This knowledge is available in the collaborative space, which includes the "share service". The knowledge we expect to extract at this level includes, for example, the number of participants in the network, the CIS services they are using and the documents transferred from one participant to others; see the table below.

Table 2. Requirements of the ontology vs. extractable knowledge from 6napse

Requirements of the ontology | Extractable knowledge from 6napse
Number of participants | Number of members in a collaboration
CIS services | CIS generic services of the platform
Transferred resources | Documents shared on the platform
Business service | Shared by (individual who shares the document)
Common objective | Description of why the collaboration was created
Duration of the collaboration (continuous, discontinuous) | Measurement of the duration of the collaboration
The following is an example of the knowledge on the "share" tab of a collaboration occurring on the platform. The share tab shows the documents transferred between the partners of this collaboration.
Fig. 7. Screenshot of a collaboration on the 6napse platform.
The knowledge extracted at this level is, for example, that an individual named Laura shares the test case documents, just as Pascal shares the how-to text with their partners. Comments can be written for each shared document. However, some working environments force the users to reroute documents several times before finding the right information; for example, B needs to see the document shared by A before transferring it to C. We cannot study the exchange flows of documents on the platform without taking the network topology and the type of relationship into account. This is significant for improving collaborative processes or defining more complicated ones afterwards.

4.2 Supplier-customer Scenario
To illustrate the principles of knowledge extraction, we introduce a supplier-customer use case. The input knowledge is extracted from the 6napse platform. Some reasoning examples are briefly explained, and the output result is shown at the end. Tables 3 and 4 show the knowledge extracted from 6napse during registration and collaboration respectively.

Table 3. Description of each participant in the network

Participants | Business sector | Services | Relationships
M | Manufacturing | Making computers, Sales | S, W (supplier-customer)
S | Part Manufacturing | Supplying the parts of computers | M (supplier-customer)
W | Logistics | Stocking materials and transporting to customers | M (supplier-customer)
Table 4. The extracted knowledge from 6napse while collaborating

Knowledge | Extracted knowledge
Nb of participants | 3
CIS services used | Sharing documents, Mailing…
Resources transferred | Purchase order, bill, delivering detail, receipt, messages…
Business services | Identifying needs, transferring materials, paying…
Common objective | Fulfill the supply chain for manufacturing products to stock
Duration of collaboration | Continuous
The knowledge in Tables 3 and 4 is required as input to the ontology; it is used to instantiate the ontology defined in Section 3. Once we have the input knowledge (instances in the ontology), we can first derive the topology and the characteristics of the network. This network has two chain topologies because, by deduction, the decision-making power is hierarchic and the duration is continuous. We continue the collaboration definition by deducing the roles and business services. If any information is missing at this level, it is completed by the ontology. In this case, for example, since M provides making computers and sales, M plays the manufacturer role, which performs the identifying-needs, receiving-materials, paying and producing business services. Once the business services provided by the participants have been reasoned, the next step is to derive all the possible dependencies between business services belonging to different participants. These dependencies lead to the deduction of the coordination services and the CIS services in the collaborative process. After that, we can find whether there are services required in the collaboration that no participant can be in charge of. If this is the case, CIS services have to be created to perform them. For example, the control and evaluation service cannot be performed by any participant, so it belongs to the CIS. At the end, we deduce once again the dependencies between the CIS services and the coordination services. The result is shown in the following figure:
Fig. 8. A solution of the collaborative process of the network.
We should remark that the collaborative process obtained at the end is just one solution for the given use case. It is always possible that other solutions match the collaborative behaviors of the participants better than the proposed one.
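The dependency step of the walkthrough above can be sketched as follows: a dependency is deduced whenever a service of one participant outputs a resource that a service of another participant takes as input, and each deduced dependency is assigned a coordination service. This is a simplified Python illustration; the service descriptions and the fixed choice of the forwarding-document coordination service are assumptions for the sketch, not the ontology's actual reasoning.

# Simplified dependency deduction in the spirit of Sections 3 and 4.2:
# a cross-participant output/input match on a resource yields a
# dependency managed by a coordination service. Data are invented.

services = {
    ("M", "placing order"):          {"in": [],                 "out": ["purchase order"]},
    ("S", "obtaining order"):        {"in": ["purchase order"], "out": ["parts"]},
    ("W", "transferring materials"): {"in": ["parts"],          "out": ["delivering detail"]},
}

def deduce_dependencies(services):
    deps = []
    for (p1, s1), io1 in services.items():
        for (p2, s2), io2 in services.items():
            if p1 == p2:
                continue  # only cross-participant flows matter
            for resource in io1["out"]:
                if resource in io2["in"]:
                    deps.append({
                        "from": (p1, s1), "to": (p2, s2),
                        "resource": resource,
                        "coordination": "forwarding document",
                    })
    return deps

for d in deduce_dependencies(services):
    print(d["from"], "->", d["to"], "via", d["resource"])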
5 Conclusion

The 6napse collaborative platform is still in the development phase. It allows knowledge to be extracted before collaborations are set up. The contribution of 6napse is dedicated to partners who have no idea of what their collaboration is supposed to be, or who would like to see it more clearly. The partners can capitalize on their collaboration knowledge to collaborate better in the future. It also plays an important role in enriching the knowledge (instances) in the CNO, in order to improve existing collaborations between the partners and to define more complicated collaborative processes. The collaborative process obtained from the ontology (Fig. 8) is very close to a BPMN compliant process, but it is still not complete. Some elements are missing, such as gateways and events. These elements need to be added to actual collaborative processes because they make the processes more dynamic. Our current work focuses firstly on extracting knowledge from real collaborations occurring on 6napse. Secondly, we address adding the dynamic aspect to the current reasoning methodology by taking event and gateway elements into account. Also, the current knowledge-based methodology, including its concepts, rules and reasoning steps, needs to be finalized and validated.
References

[1] Barthe, A-M.: La plateforme 6napse – présentation et perspectives d'intégration de cet outil et du recueil d'informations qu'il autorise, Mémoire de Master Recherche, Génie Industriel de l'Ecole des Mines d'Albi-Carmaux (2007)
[2] Crowston, K.: A Taxonomy of Organizational Dependencies and Coordination Mechanisms (Working paper No. 3718-94), Massachusetts Institute of Technology, Sloan School of Management (1994)
[3] Grimm, S., Hitzler, P. and Abecker, A.: Knowledge Representation and Ontologies: Logic, Ontologies and Semantic Web Language, Semantic Web Services (2007)
[4] Gruber, T.R.: A translation approach to portable ontologies, Knowledge Acquisition, 5(2), pp. 199-220 (1993)
[5] Katzy, B., Hermann, L.: Virtual Enterprise Research: State of the art and ways forward, pp. 1-20 (2003)
[6] Konstantas, D., Bourrières, J.P., Léonard, M. and Boudjlida, N.: Interoperability of enterprise software and applications, INTEROP-ESA'05, Springer-Verlag (2005)
[7] Lorré, J-P.: Etat de l'art, EBM WebSourcing (2007)
[8] Malone, T.W., Crowston, K., Lee, J., Pentland, B.: Tools for inventing organizations: Toward a Handbook of Organizational Processes, Management Science, Vol. 45, No. 3 (1999). Process Handbook Online: http://ccs.mit.edu/ph/
[9] Miller, J. and Mukerji, J.: MDA Guide Version 1.0.1 (2003), available at http://www.omg.org
[10] Noy, N.F. and McGuinness, D.L.: Ontology Development 101: A Guide to Creating Your First Ontology, Stanford University, Stanford, CA, USA (2001)
[11] Protégé, http://protege.stanford.edu (2000)
[12] Porter, M.: L'avantage concurrentiel, InterEdition, Paris, pp. 52 (1986)
[13] Rajsiri, V., Lorré, J.P., Bénaben, F. and Pingaud, H.: Cartography for designing collaborative processes, Interoperability of Enterprise Software and Applications (2007)
[14] Rajsiri, V., Lorré, J.P., Bénaben, F. and Pingaud, H.: Cartography based methodology for collaborative process definition, Establishing the Foundation of Collaborative Networks, Springer, pp. 479-486 (2007)
[15] Sirin, E., Parsia, B., Grau, B.C., Kalyanpur, A. and Katz, Y.: Pellet: A practical OWL-DL reasoner, Journal of Web Semantics, 5(2) (2007)
[16] Touzi, J., Lorré, J.P., Bénaben, F. and Pingaud, H.: Interoperability through model based generation: the case of the Collaborative IS, Enterprise Interoperability (2006)
[17] Touzi, J., Bénaben, F., Lorré, J.P. and Pingaud, H.: A Service Oriented Architecture approach for collaborative information system design, IESM'07 (2007)
[18] Vernadat, F.B.: Interoperable enterprise systems: architectures and methods, INCOM'06 Conference (2006)
SQFD: QFD-based Service Quality Assurance for the Lifecycle of Services

Shu Liu, Xiaofei Xu and Zhongjie Wang

Research Centre of Intelligent Computing for Enterprises and Services (ICES), School of Computer Science and Technology, Harbin Institute of Technology, 150001, Harbin, China {sliu, xiaofei, rainy}@hit.edu.cn
Abstract. Service providers offer services to customers based on a service system, and the quality of the service system has a great influence on customers. Therefore, in order to provide better services to customers, it is necessary to assure quality across the lifecycle of services. With the experience accumulated in developing and implementing typical IT systems in manufacturing enterprises over the past decade, we have proposed a new service engineering methodology named SMDA to assist service providers in building better service systems. As part of SMDA, Service Quality Function Deployment (SQFD) has been proposed to address the quality aspects of a service system. SQFD, which is adapted from QFD, focuses on the design, evaluation and optimization of service quality across the lifecycle of services. The three phases of SQFD, i.e., build-time QFD-oriented service quality design, run-time service performance evaluation, and service performance optimization, are illustrated in this paper. Keywords: Quality and performance management of interoperable business processes, Service oriented Architectures for interoperability, Model Driven Architectures for interoperability
1 Introduction

In the past decade the world economy has changed rapidly. Especially in the advanced countries, a modern service-centered economy is emerging with the growing GDP share of the service sector and the advanced development of information technology. In order to gain advantage in the competition of this new economy, attention has focused on service quality. While there have been efforts to study service quality, there is no general agreement on an effective way to measure and improve it. One service quality measurement model that has been extensively applied is the SERVQUAL model developed by Parasuraman A., Zeithaml V. and Berry L.L. The SERVQUAL instrument has been the predominant method used to measure
consumers' perceptions of service quality. The SERVPERF model, another service quality measurement instrument, was developed later by Cronin and Taylor; it inherited from and expanded SERVQUAL. Both models focus on service quality measurement and pay little attention to service quality design and to quality assurance across the lifecycle of services. In this context, the research centre on Intelligent Computing for Enterprises and Services (ICES) of the School of Computer Science and Technology at Harbin Institute of Technology (HIT) is one of the pioneers. With the rich experience accumulated in developing and implementing typical IT systems (e.g., ERP, SCM, CRM) in manufacturing enterprises over the past decade, ICES proposed a new service engineering methodology named Service Model Driven Architecture (SMDA) to assist service providers in building their service systems in an MDA style, e.g., modeling customer requirements and gradually transforming the models into an executable service system [1][2]. As part of SMDA, Service Quality Function Deployment (SQFD) has been proposed to address the quality aspects of a service system. SQFD originated from Quality Function Deployment (QFD), which was developed at the Kobe Shipyard of Mitsubishi Heavy Industries, Ltd. as a way to expand and implement the view of quality. QFD has since been widely applied in many industries worldwide, such as automobile, electronics, food processing, and computer hardware and software. SQFD focuses on service quality assurance across the lifecycle of services, including service quality design, service system evaluation and optimization. The purpose of this paper is to illustrate the approaches of SQFD. The paper is organized as follows. Section 2 explains service quality and SQFD in detail. Section 3 discusses build-time QFD-oriented service quality design. Run-time service quality evaluation and service lifecycle optimization are introduced in Sections 4 and 5 respectively. Finally, the conclusion is given.
2 Service Quality and SQFD

2.1 Service Model Driven Architecture

In order to understand the mechanism of SQFD it is necessary to explain the architecture of SMDA first. SMDA, a new service engineering methodology, contains three layers of service models and a service system, as shown in Figure 1.
SQFD: QFD-based Service Quality Assurance for the Lifecycle of Services
Services Demand Acquirement
USML: Unified Services Modeling Language
Service Requirements Model
SRM: Service Requirement Model
Service Process Model Capability Model
Services Design
SBCM: Service Behavior/Capability Model Service Modeling Process
Service Execution Model
Selecting Services Components
(Build time)
Services Component Sets
SEM: Service Execution Model
Evaluation On-Demand Services
Mapping Services Quality Evaluation
453
Service Performance
Implementation and Execution of Services Systems (Run time)
Instances of Services Components SES: Service Execution Systems
Fig. 1. The architecture of SMDA
The first layer of SMDA is the Service Requirement Model (SRM), which captures the Voice of the Customer (VOC). The second layer, the Service Behavior and Capability Model (SBCM), is used to design service behaviors and capabilities by transforming the VOC defined in the SRM. The Service Execution Model (SEM), the third layer of SMDA, further transforms service behaviors and capabilities from the SBCM into executable service component sets by selecting appropriate service components. The SEM is then mapped to a Service Execution System (SES) of a specific service. There is a top-down transformation between the models in SMDA through three mappings: SRM to SBCM, SBCM to SEM, and SEM to SES. The first two mappings constitute the service modeling process and belong to build-time; the third mapping is the implementation and execution of the service system and belongs to run-time [2].

2.2 Service Quality in SMDA
Service quality is a concept that has aroused considerable interest and debate in the research literature because of the difficulties in both defining and measuring it, with no overall consensus emerging on either [3]. Among a number of definitions of service quality, one that is widely accepted defines service quality as the difference between customer expectations of a service and the perceived service. Analyzing and measuring the gap between customer expectations and perceived service is the starting point in designing and improving traditional service quality. Figure 2 shows a conceptual model of service quality with 5 major gaps, proposed by Parasuraman, Zeithaml and Berry. As shown in the figure, gap 5 is the discrepancy between customer expectations and their perceptions of the service delivered, and is the sum of the other 4 gaps. In order to improve service quality in traditional services, efforts have to be focused on reducing gap 5, which requires being able to measure gaps 1 to 4 and to reduce each of them.
Fig. 2. Gaps in Service Quality in Traditional Service
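Stated compactly, the relation described above (this merely restates the sentence in the text, not a formula taken from the cited papers) is:

Gap5 = Gap1 + Gap2 + Gap3 + Gap4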
With a similar approach, the conceptual model of service quality in SMDA is proposed with 5 major gaps, as shown in Figure 3.
Fig. 3. Gaps in Service Quality of SMDA
- Gap 1: the difference between the customers' expected quality of service and the quality characteristics captured in the SRM.
- Gap 2: the difference between the quality characteristics captured in the SRM and the quality characteristics transformed into the SBCM.
- Gap 3: the difference between the quality of the SBCM and that transformed into the SEM.
- Gap 4: the difference between the quality of the SEM and the external communications to customers.
- Gap 5: the difference between the customers' expected service quality and the customers' perceived service quality of the service system.
As seen in Figure 3, the key to improving service quality in SMDA is reducing gap 5. Since gap 5 is the sum of the other 4 gaps, reducing each gap becomes the ultimate goal. As part of SMDA, SQFD was proposed to capture the VOC and transform it through the SRM to the SBCM to the SEM, and finally to the service execution system (SES).

2.3 SQFD
Traditional quality evaluation systems aim at minimizing negative quality, such as eliminating defects or reducing operational errors. QFD is quite different in that it seeks out customer requirements and maximizes "positive" quality by designing and transforming quality factors throughout the product development process [4]. QFD facilitates the translation of a prioritized set of subjective customer requirements into a set of system-level requirements during system conceptual design [5]. These system-level requirements are then further translated into more detailed sets of requirements at each stage of the design and development process [6]. SQFD is proposed with a similar approach: it translates a set of customer requirements for a service into an executable service system through the service modeling process. SQFD consists of three phases:

- Build-time QFD-oriented service quality design
- Run-time service quality/performance evaluation
- Service quality/performance optimization
As shown in Figure 4, the service quality design process is depicted along the black arrows. The VOC is captured in a Service Level Agreement (SLA) through negotiation between customers and service providers. As input to the SRM, the quality aspect of the VOC is transformed through the SBCM and the SEM into the service system using the QFD approach. The red arrows depict the service optimization process of the service system, through performance monitoring, data collection and evaluation.
Fig. 4. Structure of SQFD
3 Build-time QFD-oriented Service Quality Design

As the core of QFD, the House of Quality (HoQ) can be used to incorporate the VOC into every manufacturing activity. Four levels of quality houses have been proposed for the typical manufacturing case, as shown in Figure 5. They help to trace what design engineers and manufacturers do back to what customers want [7].
Fig. 5. The Four Houses of QFD
With a similar approach, three levels of quality houses have been designed, with each level corresponding to one of the models in SMDA, as shown in Figure 6.
Fig. 6. The three levels of SHoQs
The three-level service house of quality (SHoQ) incorporates the customer's voice into every modeling activity of SMDA. It helps to make the customer an integral part of early design synthesis and evaluation activities. In building each level of the quality house of SQFD, the four dimension views of SMDA should be considered, as shown in Table 1.

Table 1. Modeling Views in SMDA

Models | Four dimension views of SMDA | HoQ of SQFD
SRM | Service-Provider-Customer View (S-PC View), Service Organization View (SO View), Service Resource View (SR View), Service Information View (SI View) | Level-1 SHoQ
SBCM | Service Behavior View (SB View), Service Role View (SL View), Service Capability View (SC View), Service Information View (SI View) | Level-2 SHoQ
SEM | Service Action View (SA View), Capability Configuration View (CC View), Service Participation View (SP View), Service Information View (SI View) | Level-3 SHoQ
The level-1 SHoQ is used to capture the "Voice of the Customer" for a specific service and transform it into the SRM with a top-down approach. The identification and extraction of the customer's essential needs is the first step in building a house of quality. In customer requirement acquisition, methods such as interviews, surveys, market investigation and trend analysis are often used; it is important to ensure that complete, consistent, non-redundant and true customer requirements are identified and specified [8][9]. When building the level-1 SHoQ, the four dimension views of the SRM should be considered: the Service-Provider-Customer View, the Service Organization View, the Service Resource View and the Service Information View. Figure 6 (a) depicts the level-1 SHoQ; it takes the values and risks extracted from the customers' requirements as inputs, and outputs the quality targets for each task of the specific service.
The level-2 SHoQ takes the outputs of level 1 as its inputs and transforms them into the SBCM with a top-down approach. Figure 6 (b) depicts the level-2 SHoQ of the four views; it takes the target values for each task output by level 1 as input, and outputs target values for each behavior of a specific task. The level-3 SHoQ takes the target values of each behavior from level 2 as its inputs and transforms them into target values for each action of a specific behavior, again top-down, as shown in Figure 6 (c). Meanwhile, a bottom-up approach is also used to select available service components of different granularities to meet the target values of tasks, behaviors and actions.
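The numeric core of this cascade can be illustrated as repeated weight propagation through relationship matrices, in the spirit of classical QFD. The sketch below is only an illustration: the 0/1/3/9 relationship scores, the matrix shapes and the customer weights are invented, and the real SHoQs also carry elements (such as correlation roofs and benchmarking data) that are omitted here.

# Sketch of the three-level SHoQ cascade as QFD weight propagation:
# customer priorities are pushed through a relationship matrix at each
# level, and each level's outputs become the next level's inputs.
# All weights and relationship scores below are invented.

def propagate(weights, matrix):
    """Importance of each output item = sum over inputs of
    input weight * relationship strength."""
    n_out = len(matrix[0])
    return [sum(w * row[j] for w, row in zip(weights, matrix))
            for j in range(n_out)]

voc_weights = [5, 3, 1]            # prioritised customer requirements (SRM)
level1 = [[9, 1], [3, 3], [0, 9]]  # requirements x service tasks
level2 = [[9, 3, 0], [1, 9, 3]]    # tasks x service behaviours (SBCM)
level3 = [[9, 0], [3, 9], [0, 3]]  # behaviours x actions/components (SEM)

task_targets = propagate(voc_weights, level1)        # level-1 SHoQ output
behaviour_targets = propagate(task_targets, level2)  # level-2 SHoQ output
action_targets = propagate(behaviour_targets, level3)  # level-3 SHoQ output
print(task_targets, behaviour_targets, action_targets)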
4 Run-time Quality/Performance Evaluation

Figure 7 depicts the run-time quality/performance monitoring mechanism. While the service system is running, the monitoring software automatically sends queries to, and periodically receives reports from, the different service component environments. The run-time Key Performance Indicators (KPIs) of the system are evaluated based on the collected data. The service system quality/performance can be evaluated from three aspects with different granularities, as shown in Figure 8: KPIs of service components are collected and evaluated; KPIs of service orchestration are collected and evaluated; and KPIs of service choreography are collected and evaluated. The results of the run-time quality/performance evaluation are analyzed for further service system optimization.
Fig. 7. Run-time quality/performance monitoring
Fig. 8. Aspects of service quality/performance evaluation
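A skeletal version of one monitoring-and-evaluation cycle might look as follows. The KPI names, target values and the query function are placeholders: the paper does not prescribe a concrete KPI set or query protocol.

# Skeletal run-time KPI evaluation in the spirit of Figs. 7 and 8:
# collect indicators at component / orchestration / choreography
# granularity and compare them with build-time targets. KPI names,
# targets and the query function are placeholders.

TARGETS = {  # KPI -> (aspect, target value, comparator)
    "component.response_ms":      ("service component",     200,  "<="),
    "orchestration.success_rate": ("service orchestration", 0.99, ">="),
    "choreography.cycle_h":       ("service choreography",  48,   "<="),
}

def query_kpis():
    """Stand-in for the monitor's periodic queries to component hosts."""
    return {"component.response_ms": 180,
            "orchestration.success_rate": 0.97,
            "choreography.cycle_h": 36}

def evaluate(measured):
    for kpi, (aspect, target, op) in TARGETS.items():
        value = measured[kpi]
        ok = value <= target if op == "<=" else value >= target
        status = "ok" if ok else "VIOLATION -> trace back for optimization"
        print(f"{aspect:22} {kpi:28} {value} vs {target}: {status}")

evaluate(query_kpis())  # one cycle; a real monitor repeats this periodically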
5 Service Quality/Performance Optimization

Service quality/performance optimization is the third phase of SQFD; it is the reverse of the service quality design process, as shown in Figure 9. Blue arrows represent the quality design process, and the red arrows in the opposite direction represent the optimization process. The gaps between the customers' expected service quality from the SRM and the customers' perceived service quality from the service system are identified, analyzed and traced backward to the earlier phases to reason about their causes. In order to improve the quality of the service system, optimization efforts are made by considering re-design, re-configuration and/or re-negotiation. This closes the loop of quality assurance for the lifecycle of services.
Fig. 9. Service quality/performance optimization of SMDA
6 Conclusion

In this paper, based on SMDA, an SQFD method is proposed to ensure service quality across the lifecycle of services. SQFD has three phases: service quality design, service quality evaluation and service quality optimization. The QFD method is adopted and applied to design service quality by building three levels of quality houses. Run-time service quality evaluation and optimization are also depicted briefly. Further research includes refining SQFD based on real case studies and defining quality parameter sets for specific typical service sectors.
Acknowledgement

Research work in this paper is partially supported by the National High-Tech Research and Development Plan of China (2006AA01Z167, 2006AA04Z165) and the National Natural Science Foundation (NSF) of China (60673025).
References

[1] Xiaofei Xu, Zhongjie Wang, Tong Mo: SMDA: a New Methodology of Service Engineering, 2006 Asia Pacific Symposium on Service Science, Management and Engineering, Nov. 30-Dec. 1, 2006, Beijing, China
[2] Xiaofei Xu, Tong Mo, Zhongjie Wang: SMDA: A Service Model Driven Architecture, the 3rd International Conference on Interoperability for Enterprise Software and Applications, Mar. 28-30, 2007, Madeira Island, Portugal
[3] Parasuraman, A., Zeithaml, V.A., Berry, L.L.: A conceptual model of service quality and its implication, Journal of Marketing, 1985, Vol. 49, Fall, pp. 41-50
[4] Xiaoqing Frank Liu: Software Quality Function Deployment, IEEE Potentials
[5] Jong-Seok Shin, Kwang-jae Kim, M. Jeya Chandra: Consistency check of a house of quality chart, International Journal of Quality & Reliability Management, Vol. 19, No. 4, 2002, pp. 471-484
[6] Kwai-Sang Chin, Kit-Fai Pun, W.M. Leung, Henry Lau: A quality function deployment approach for improving technical library and information services: a case study, Library Management, Jun 2001, Volume 22, Issue 4/5, pp. 195-204
[7] A. Ghobadian, A.J. Terry: How Alitalia improves service quality through quality function deployment, Managing Service Quality, 1995, Volume 5, Issue 5, pp. 31-35
[8] Anne M. Smith, Moira Fischbacher, Francis A. Wilson: New Service Development: From Panoramas to Precision, European Management Journal, October 2007, Vol. 25, No. 5, pp. 370-383
[9] Eleonora Bottani, Antonio Rizzi: Strategic Management of Logistics Service: A Fuzzy QFD Approach, Int. J. Production Economics 103 (2006) 585-599
Coevolutionary Computation Based Iterative Multi-Attribute Auctions

Lanshun Nie, Xiaofei Xu, Dechen Zhan

Harbin Institute of Technology, Harbin 150001, P.R. China {nls, xiaofei, dechen}@hit.edu.cn
Abstract. Multi-attribute auctions extend traditional auction settings. In addition to price, multi-attribute auctions allow negotiation over non-price attributes such as quality and terms of delivery, and promise to improve market efficiency. Multi-attribute auctions are central to B2B markets, enterprise procurement activity and negotiation in multi-agent systems. A novel iterative multi-attribute auction mechanism for reverse auction settings with one buyer and many sellers is proposed based on competitive equilibrium. The auction supports incremental preference elicitation and revelation for both the buyer and the sellers. A coevolutionary computation method is incorporated into the mechanism to support economic learning and strategies for the sellers. The myopic best-response strategy it provides is in equilibrium for the sellers assuming a truthful buyer strategy. Moreover, the auctions are nearly efficient. Experimental results show that the coevolutionary computation based iterative multi-attribute auction is a practical and nearly efficient mechanism. The proposed mechanism and framework can be realized as a multi-agent based software system to support supplier selection and/or deal decisions for both the buyer and the suppliers in B2B markets and supply chains. Keywords: Socio-technical impact of interoperability, Decentralized and evolutionary approaches to interoperability, Enterprise application Integration for interoperability
1 Introduction

Auctions are important mechanisms for allocating resources and services among agents [1]. They have found widespread use as a technique for supporting and automating negotiations in business-to-business online markets, industrial procurement and multi-agent systems. Multi-attribute auctions [2, 3] extend the traditional auction setting to allow negotiation over price and non-price attributes such as quality and terms of delivery, and promise to improve market efficiency in markets with configurable goods. Traditional auction mechanisms, such as the English, Dutch, and first- (or second-) price sealed-bid auctions, cannot be extended straightforwardly to
the multi-attribute setting, because there is private information on both sides of the auction. Several researchers have considered attributes other than price in the procurement setting from a purely theoretical perspective. Most of them adopt mechanism design theory and focus on the agents' best-response strategies. Che [2] first studies the optimal auction when there are only two attributes, price and quality. He proposes a buyer payoff-maximizing one-shot sealed-bid auction protocol with first-price and second-price payoff functions. Branco [4] extends this protocol to the case where the seller cost functions are correlated. Milgrom [5] has shown that efficiency can be achieved if the auctioneer announces his true utility function as the scoring rule and conducts a Vickrey (second-price sealed-bid) auction based on the resulting scores. Beil and Wein [6] propose an iterative payoff-maximizing auction procedure for a class of parameterized utility functions (with K parameters) with known functional forms and naive suppliers: the buyer uses K rounds to estimate the seller cost functions deterministically, and for the final round designs a scoring function that maximizes the buyer's payoff. Although theoretical progress has been made, there are some limitations: (1) agents are required to have complete information about their preferences, yet preference elicitation is often costly, and bidders would prefer not to determine an exact value tradeoff across all combinations of attribute levels; (2) agents are required to reveal a great deal of private information, yet they would prefer to reveal as little as possible about costs and preferences; (3) agents are often required to have complete rationality. Another approach to the auction problem is competitive equilibrium theory. In this model an agent plays a best response to the current price and allocation in the market, without considering either the strategies of other agents or the effect of its own actions on the future state of the market. Iterative auction mechanisms based on it, which allow agents to provide incremental information about their preferences, are especially important in applications of multi-attribute auctions. Research in this direction is just beginning: as far as we know, only Parkes and Kalagnanam [3] propose an iterative primal-dual based multi-attribute auction mechanism for reverse auction settings. Their auctions are price-directed; a myopic best-response strategy is in equilibrium for sellers assuming a class of consistent buyer strategies, and the auctions are efficient with a truthful buyer. In this study, we propose an iterative multi-attribute auction mechanism based on competitive equilibrium theory. A coevolutionary computation method [7] is incorporated into the mechanism to support economic learning and strategies for the sellers. Section 2 formulates the general multi-attribute auction problem. Section 3 introduces an iterative auction mechanism for it. The coevolutionary computation based multi-attribute auction method is presented in Section 4. Computational results are reported in Section 5. We discuss some characteristics of the proposed mechanism in Section 6.
2 Multi-attribute Auction Problem

In the multi-attribute auction problem there are N sellers, one buyer, and M attributes. Let I denote the set of sellers and J the set of attributes. Each attribute j ∈ J has a domain of possible attribute values, denoted Θj. The joint domain across all attributes is denoted Θ = Θ1 × Θ2 × … × ΘM. For an attribute bundle θ ∈ Θ, each seller i ∈ I has a cost function ci(θ) ≥ 0, and the buyer has a value function v(θ) ≥ 0. We restrict our attention to problems in which a single buyer negotiates with multiple sellers in a reverse auction and will eventually select a single seller. We also assume that agents have quasilinear utility functions. The utility to seller i of selling an item with attribute bundle θ at price p is the difference between the price and its cost, i.e. ui(θ, p) = p − ci(θ). Similarly, the utility to the buyer of buying an item with attribute bundle θ at price p is the difference between its value and the price, i.e. uB(θ, p) = v(θ) − p. We believe that efficiency is a more appropriate goal for multi-attribute auctions, since they are usually applied in business-to-business markets and procurement activities [3]. In addition to efficiency, individual rationality, budget balance and low rationality requirements on the agents are all desirable properties of a multi-attribute auction mechanism.
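The setting can be restated directly in code. The sketch below encodes the quasilinear utilities and computes the efficient (surplus-maximizing) seller/bundle pair that an efficient mechanism should find; the bundles, costs and values are invented toy data.

# Toy restatement of the quasilinear-utility setting: seller utility
# u_i(theta, p) = p - c_i(theta), buyer utility u_B(theta, p) = v(theta) - p.
# An efficient outcome maximises v(theta) - c_i(theta) over sellers and
# bundles. Bundles, costs and values below are invented toy data.

bundles = ["low quality, slow", "high quality, slow", "high quality, fast"]
value = dict(zip(bundles, [10.0, 16.0, 20.0]))   # buyer's v(theta)
cost = {                                         # sellers' c_i(theta)
    "seller1": dict(zip(bundles, [4.0, 9.0, 15.0])),
    "seller2": dict(zip(bundles, [5.0, 9.0, 12.0])),
}

def seller_utility(i, theta, p):
    return p - cost[i][theta]

def buyer_utility(theta, p):
    return value[theta] - p

# Efficient (surplus-maximising) seller/bundle pair:
winner, bundle = max(
    ((i, b) for i in cost for b in bundles),
    key=lambda ib: value[ib[1]] - cost[ib[0]][ib[1]],
)
print(winner, bundle, value[bundle] - cost[winner][bundle])
# -> seller2, "high quality, fast", surplus 8.0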
3 Iterative Multi-attribute Auction Mechanism

We propose the following iterative mechanism (IMA) for the multi-attribute auction problem.

Step 1. The buyer announces the minimal constraints on bids, the maximal number of rounds MAX_R, the number of bids OPT in each round, and the number of failed rounds QUIT_R after which a seller quits. The current round CUR_R = 0. Each seller submits an initial bid.

Step 2. The buyer determines the best bid for this round, Bestbid_CUR_R, and announces it to the sellers.

Step 3. CUR_R++. Each seller i, except the one providing Bestbid_CUR_R-1, proposes at most OPT new bids to the buyer, and the buyer returns the result of whether these bids are better than Bestbid_CUR_R-1. If some proposed bids are better than Bestbid_CUR_R-1, seller i selects one of them as its bid for this round. Otherwise, i.e. if no bids are better than Bestbid_CUR_R-1, count the number of consecutive rounds seller i has failed; if it is larger than QUIT_R, seller i loses.

Step 4. If there is only one active seller, this seller wins. The deal is Bestbid_CUR_R-1, and the whole auction terminates in success.

Step 5. If CUR_R