Service-Oriented Computing
Cooperative Information Systems
Michael P. Papazoglou, Joachim W. Schmidt, and John Mylopoulos, editors

Advances in Object-Oriented Data Modeling
Michael P. Papazoglou, Stefano Spaccapietra, and Zahir Tari, editors

Workflow Management: Models, Methods, and Systems
Wil van der Aalst and Kees Max van Hee

A Semantic Web Primer
Grigoris Antoniou and Frank van Harmelen

Meta-Modeling for Method Engineering
Manfred Jeusfeld, Matthias Jarke, and John Mylopoulos, editors

Aligning Modern Business Processes and Legacy Systems: A Component-Based Perspective
Willem-Jan van den Heuvel

A Semantic Web Primer, Second Edition
Grigoris Antoniou and Frank van Harmelen

Service-Oriented Computing
Dimitrios Georgakopoulos and Michael P. Papazoglou, editors
Service-Oriented Computing
edited by Dimitrios Georgakopoulos and Michael P. Papazoglou
The MIT Press Cambridge, Massachusetts London, England
© 2009 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

For information about special quantity discounts, please e-mail [email protected]

This book was set in Times Roman by SNP Best-set Typesetter Ltd., Hong Kong. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data
Service-oriented computing / edited by Dimitrios Georgakopoulos and Michael P. Papazoglou.
p. cm.—(Cooperative information systems)
Includes bibliographical references and index.
ISBN 978-0-262-07296-0 (hardcover : alk. paper)
1. Web services. I. Georgakopoulos, Dimitrios. II. Papazoglou, M., 1953–
TK5105.88813.S45 2009
006.7′6—dc22
2007039722
Contents

1 Overview of Service-Oriented Computing (Dimitrios Georgakopoulos and Michael P. Papazoglou)
2 Conceptual Modeling of Service-Driven Applications (Boualem Benatallah, Fabio Casati, Woralak Kongdenfha, Halvard Skogsrud, and Farouk Toumani)
3 Realizing Service-Oriented Architectures with Web Services (Jim Webber and Savas Parastatidis)
4 Service-Oriented Support for Dynamic Interorganizational Business Process Management (Paul Grefen)
5 Data and Process Mediation in Semantic Web Services (Adrian Mocan, Emilia Cimpian, and Christoph Bussler)
6 Toward Configurable QoS-Aware Web Services Infrastructure (Lisa Bahler, Francesco Caruso, Chit Chung, Benjamin Falchuk, and Josephine Micallef)
7 Configurable QoS Computation and Policing in Dynamic Web Service Selection (Anne H. H. Ngu, Yutu Liu, Liangzhao Zeng, and Quan Z. Sheng)
8 WS-Agreement Concepts and Use: Agreement-Based, Service-Oriented Architectures (Heiko Ludwig)
9 Transaction Support for Web Services (Mark Little)
10 Transactional Web Services (Stefan Tai, Thomas Mikalsen, Isabelle Rouvellou, Jonas Grundler, and Olaf Zimmermann)
11 Service Componentization: Toward Service Reuse and Specialization (Bart Orriens and Jian Yang)
12 Requirements Engineering Techniques for e-Services (Jaap Gordijn, Pascal van Eck, and Roel Wieringa)
13 E-Service Adaptation (Barbara Pernici and Pierluigi Plebani)
Contributors
Index
1 Overview of Service-Oriented Computing

Dimitrios Georgakopoulos and Michael P. Papazoglou
1.1 Introduction
Service-Oriented Computing (SOC) is a computing paradigm that utilizes services as the fundamental elements for supporting rapid, low-cost development of distributed applications in heterogeneous environments. The promise of Service-Oriented Computing is a world of cooperating services that are loosely coupled to flexibly create dynamic business processes and agile applications that may span organizations and computing platforms, and can adapt quickly and autonomously to changing mission requirements. Realizing the SOC promise involves developing Service-Oriented Architectures (SOAs) [13] [23] and corresponding middleware that enable the discovery, utilization, and combination of interoperable services to support virtually any business process in any organizational structure or user context. SOAs allow application developers to overcome many distributed enterprise computing challenges, including designing and modeling complex distributed services, performing enterprise application integration, managing (possibly cross-enterprise) business processes, ensuring transactional integrity and QoS, and complying with agreements, while leveraging various computing devices (e.g., PCs, PDAs, and cell phones) and allowing reuse of legacy systems [4]. SOA strives to eliminate these barriers so that distributed applications are simpler and cheaper to develop and run seamlessly. In addition, SOA provides the flexibility and agility that business users require, allowing them to define coarse-grained services which may be aggregated and reused to address current and future business needs. The design principles of an SOA [13] [3] are independent of any specific technology, such as Web Services or J2EE Enterprise JavaBeans. In particular, SOA prescribes that all functions of an SOA-based application are provided as services [2].
That is, SOA services include all business functions and related business processes that comprise the application, as well as any system-related function that is necessary to support the SOA-based application. In addition to providing for the functional decomposition of applications into services, SOA requires services to be:
• Self-contained
• Platform-independent
• Dynamically discoverable, invokable, and composable.
A service is self-contained when it maintains its own state independently of the application that utilizes it. Services are platform-independent if they can be invoked by a client using any network, hardware, and software platform (e.g., OS, programming language, etc.). Platform independence also implies that an SOA service has an interface that is distinct from, and abstracts the details of, the service implementation. The service interface defines the identity of a service and its invocation mechanism. The service implementation implements the SOA function that the service is designed to provide. Finally, SOA requires that services may be dynamically discovered, invoked, and composed. Dynamic service discovery assumes the availability of an SOA service that supports service discovery. This may include a service directory, taxonomy, or ontology that service clients query to determine which service(s) can provide the functions they need. To ensure invocability, SOA requires that service interfaces include mechanisms that allow clients to invoke services and/or be notified by services as needed. This implies that clients need not be aware of the network protocol used to perform the service invocation or of the middleware platform components required to establish the connection. The combination of service invocability and platform independence permits clients to invoke any service from anywhere, at any time the service is needed. Finally, services are composable if they can be combined and used by business processes that may span multiple service providers and organizations. Each service in an SOA-based application may implement a brand-new function, it may use parts of old applications that were adapted and wrapped by the service implementation, or it may combine new code and legacy parts. In any case, the developers of the service clients typically do not have access to the service implementation except indirectly through its interface.
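These three properties can be illustrated with a minimal sketch in Python. All names below (the currency-conversion service, its rate table, the `publish`/`discover` registry functions) are hypothetical and invented for illustration; the point is only the separation of interface from implementation, the self-contained state, and runtime discovery.

```python
from abc import ABC, abstractmethod

# Hypothetical service interface: defines the identity of the service and
# its invocation mechanism, independent of any implementation.
class CurrencyConversionService(ABC):
    @abstractmethod
    def convert(self, amount: float, source: str, target: str) -> float: ...

# One possible implementation. It is self-contained because it maintains
# its own state (the rate table) independently of the client application.
class FixedRateConverter(CurrencyConversionService):
    def __init__(self):
        self._rates = {("EUR", "USD"): 1.25, ("USD", "EUR"): 0.8}

    def convert(self, amount, source, target):
        return amount * self._rates[(source, target)]

# Hypothetical service directory enabling dynamic discovery: clients query
# by capability name instead of binding to an implementation class.
registry = {}

def publish(name: str, service: CurrencyConversionService) -> None:
    registry[name] = service

def discover(name: str) -> CurrencyConversionService:
    return registry[name]

publish("currency-conversion", FixedRateConverter())

# The client sees only the interface it discovered at runtime.
service = discover("currency-conversion")
print(service.convert(100.0, "EUR", "USD"))  # prints 125.0
```

Because the client holds only the abstract interface, the provider could replace `FixedRateConverter` with any other implementation behind the same registry name without the client noticing.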
For example, Web Services publish only their service interfaces, without revealing their implementation or the inner workings of their provider. Therefore, SOA permits enterprises to create, deploy, and integrate multiple services, and to choreograph new business functions and processes by combining new and legacy application assets encapsulated in services. Furthermore, due to its dynamic nature, SOA can potentially provide just-in-time integration of services that offer a new product, or a client- and/or time-dependent service that has never been provided to a client before. This is a key enabler for real-time enterprises. Web Services have become the preferred implementation technology for realizing SOAs [34]. Their success is due to basing their development on existing, ubiquitous infrastructure such as HTTP, SOAP, and XML. In this chapter, we survey the underpinnings of SOA and discuss technologies that can springboard enterprise integration projects. In addition, we review proposed enhancements of SOA, such as EDA and xSOA. The Event-Driven Architecture (EDA) is an event-driven extension of SOA that supports complex event processing and provides additional flexibility. The extended SOA (xSOA) provides SOA extensions for service composition and management. This chapter
is unique in that it unifies the principles, concepts, and developments in enterprise application integration, middleware, SOAs, and event-driven computing. It also explains how these contribute to an emerging distributed computing technology known as the Enterprise Service Bus. Moreover, this chapter introduces the remaining chapters of the book, which discuss enhancements to the conventional SOA, EDA, and xSOA.

1.2 Service Roles in SOA
SOAs and Web Services solutions support two key roles: a service requester (client) and a service provider, which communicate via service requests. A role thus reflects a type of participant in an SOA [13] [33]. Service requests are messages formatted according to the Simple Object Access Protocol (SOAP) [11]. SOAP is a lightweight protocol that allows RPC-like calls over the Internet using a variety of transport protocols, including HTTP, HTTP/S, and SMTP. In principle, SOAP messages may be conveyed using any protocol as long as a binding is defined. The SOAP request is received by a runtime service (a SOAP “listener”) that accepts the SOAP message, extracts the XML message body, transforms the XML message into a protocol that is native to the requested service, and delegates the request to the actual function or business process within an enterprise. After processing the request, the provider typically sends a response to the client in the form of a SOAP envelope carrying an XML message. Requested operations of Web Services are implemented using one or more Web Service components [55]. Web Service components may be hosted within a Web Services container [21] serving as an interface between business services and low-level infrastructure services. In particular, Web Service containers are similar to J2EE containers [3], and provide facilities such as location, routing, service invocation, and management. A service container can host multiple services, even if they are not part of the same distributed process. Thread pooling allows multiple instances of a service to be attached to multiple listeners within a single container [17]. SOAP is by nature a platform-neutral and vendor-neutral standard. These characteristics allow for a loosely coupled relationship between requester and provider, which is especially important over the Internet, where the two parties may reside in different organizations or enterprises.
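The request/listener round trip can be sketched with Python's standard `xml.etree.ElementTree`. The operation name (`getQuote`), its parameter, and the back-end `get_quote` function are all invented for illustration; only the SOAP 1.1 envelope namespace is real. The "listener" here does exactly what the text describes: it accepts the message, extracts the XML body, and delegates to the actual function.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

# Build a SOAP request: an Envelope whose Body carries the XML payload.
def make_request(operation: str, params: dict) -> str:
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        ET.SubElement(op, name).text = str(value)
    return ET.tostring(env, encoding="unicode")

# A minimal "listener": accept the SOAP message, extract the XML body,
# and delegate the request to the actual function behind the service.
def listener(message: str, dispatch_table: dict) -> str:
    body = ET.fromstring(message).find(f"{{{SOAP_NS}}}Body")
    op = body[0]  # the requested operation element
    args = {child.tag: child.text for child in op}
    return dispatch_table[op.tag](**args)

# Hypothetical back-end function the listener delegates to.
def get_quote(symbol: str) -> str:
    return f"quote for {symbol}: 42.0"

request = make_request("getQuote", {"symbol": "IBM"})
print(listener(request, {"getQuote": get_quote}))  # prints quote for IBM: 42.0
```

A real deployment would of course carry the envelope over HTTP and return the response wrapped in another envelope, but the mediation pattern is the same.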
However, SOA does not require the use of SOAP, and other service transports have been used in the past, for example in [45]. The interactions between service requesters and service providers can be complex, since they involve discovering/publishing, negotiating, reserving, and utilizing services from potentially different service providers. An alternative approach for reducing such complexity is to combine the service provider and requester functionality into a new role, which we refer to as the service aggregator [37]. The service aggregator thus performs a dual role. First, it acts as an application service provider, offering a complete “service” solution by creating composite, higher-level services. Service aggregators can accomplish this composition using specialized composition languages such as BPEL [5] and BPML [6]. A service aggregator also acts as a service requester by requesting and reserving services from other service providers. This process is shown in figure 1.1.

Figure 1.1 The role of the service aggregator.

Though service aggregation may offer these service composition benefits to the requester, it is also a form of service brokering that offers a convenience function, grouping all the required services “under one roof.” However, an important question that needs to be addressed is how a service requester selects a specific application service provider for its service offerings. The service requester can retain the right to select an application service provider from among those that can be discovered through a registry service, such as UDDI [1]. SOA technologies such as UDDI, and security and privacy standards such as SAML [40] and WS-Trust [80], introduce another role that aids service selection: the service broker [19]. Service brokers are trusted parties that force service providers to adhere to information practices that comply with privacy laws and regulations or, in the absence of such laws, industry best practices. In this way, broker-sanctioned service providers are guaranteed to offer services that comply with local regulations, and to create a more trusted relationship with customers and partners. A service broker maintains a registry of available services and providers, as well as value-added information about the provided services. This may include information about service reliability, trustworthiness, quality, and possible compensation, to name a few. Figure 1.2 shows an SOA where a service broker acts as an intermediary between service requesters and service providers. A UDDI-based service registry is a specialized instance of a service broker.
Under this configuration the UDDI registry serves as a broker where the service providers publish the definitions of the services they offer using WSDL, and the service requesters find information about the services available.
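Broker-mediated selection can be sketched as follows. The registry contents, field names, and the reliability-based selection rule are all invented for illustration; a real broker would hold WSDL definitions and richer value-added metadata, and the requester could apply any selection criterion it retains the right to choose.

```python
# Hypothetical broker registry: maps a service category to candidate
# providers together with value-added metadata (e.g., reliability).
broker_registry = {
    "invoicing": [
        {"provider": "AcmeInvoices",
         "endpoint": "http://acme.example/ws", "reliability": 0.95},
        {"provider": "BestBilling",
         "endpoint": "http://best.example/ws", "reliability": 0.99},
    ]
}

# The requester queries the broker and selects a provider itself,
# here by the highest advertised reliability.
def select_provider(category: str) -> dict:
    candidates = broker_registry[category]
    return max(candidates, key=lambda entry: entry["reliability"])

best = select_provider("invoicing")
print(best["provider"])  # prints BestBilling
```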
Figure 1.2 Service brokering.
1.3 Enterprise Service Bus
Though Web Services technologies are currently the most common means of implementing SOAs, many other conventional programming languages and enterprise integration platforms may be used in an SOA as well [51]. In particular, any technology that complies with WSDL and communicates with XML messages can participate in an SOA. Such technologies include J2EE and message queues, such as IBM’s WebSphere MQ. Since clients and services may be developed by different providers using different technologies and conceptual designs, there may be technological mismatches (e.g., they may use different communication protocols) and heterogeneities (e.g., in message syntax and semantics) between them. Dealing with such technological mismatches and heterogeneity involves two basic approaches:

• Implement clients to conform exactly with the technology and conceptual model (e.g., semantics and syntax) of the services they may invoke.
• Insert an integration layer providing reusable communication and integration logic between the services and their clients.

The first approach requires the development of a service interface for each connection, resulting in point-to-point interconnections. Such point-to-point interconnection networks are hard to manage and maintain because they introduce a tighter coupling between clients and services. This coupling involves significant effort to harmonize transport protocols, document formats, interaction styles, and so on [35]. This approach results in hard-to-change clients and services, since any change to a service may impact all of its clients. In addition, point-to-point integrations are complex and lack scalability. As the number of services and clients increases, they may quickly become unmanageable. To deal with this problem, existing Enterprise Application
Integration (EAI) middleware supports a variety of hub-and-spoke integration patterns [39]. This leaves the second approach as the more viable alternative. The second approach introduces an integration layer that provides for interoperability between services and their clients. The Enterprise Service Bus (ESB) [48] [17] addresses the need to provide an integration infrastructure for Web Services and SOA. The ESB exhibits two prominent features [24]. First, it promotes loose coupling of the clients and services. Second, the ESB divides the integration logic into distinct, easily manageable pieces. The ESB is an open, standards-based message bus designed to enable the implementation, deployment, and management of SOA-based solutions. To play this role the ESB provides the distributed processing, standards-based integration, and enterprise-class backbone required by the extended enterprise [24]. In particular, the ESB is designed to provide interoperability between large-grained applications and other components via standards-based adapters and interfaces. To accomplish this the ESB functions as both transport and transformation facilitator to allow distribution of these services over disparate systems and computing environments. Conceptually, the ESB has evolved from the store-and-forward mechanism found in middleware products (e.g., message-oriented middleware), and combines conventional EAI technologies with Web services, XSLT [96], and orchestration and choreography technologies (e.g., BPEL, WS-CDL, and ebXML BPSS). Physically, an ESB provides an implementation backbone for an SOA. It establishes proper control of messaging and also supports the needs of security, policy, reliability, and accounting in an SOA. The ESB is responsible for controlling message flow and performing message translation between services. 
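The scalability contrast between point-to-point integration and a shared integration layer can be made concrete with a little arithmetic. In this simplified model (real deployments mix both styles), point-to-point integration of n clients with m services needs one bespoke adapter per pair, whereas a bus that translates every message to and from a single canonical format needs only one adapter per participant:

```python
def point_to_point_adapters(clients: int, services: int) -> int:
    # one bespoke interface per client/service pair
    return clients * services

def bus_adapters(clients: int, services: int) -> int:
    # one adapter per participant, to/from the bus's canonical format
    return clients + services

print(point_to_point_adapters(10, 20))  # prints 200
print(bus_adapters(10, 20))             # prints 30
```

The gap widens multiplicatively as the enterprise grows, which is why the second approach is the more viable one at scale.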
The ESB facilitates pulling together applications and discrete integration components to create assemblies of services that form composite business processes, which in turn automate business functions in an enterprise. Figure 1.3 depicts a simplified architecture of an ESB that integrates a J2EE application using JMS, a .NET application using a C# client, an MQ application that interfaces with legacy applications, and other external applications and data sources.

Figure 1.3 Enterprise Service Bus connecting diverse applications and technologies.

An ESB, as portrayed in the upper and middle parts of figure 1.3, enables the more efficient, value-added integration of a number of different application components by positioning them behind a service-oriented facade and by applying Web Services technology. In this figure, a distributed query engine, typically based on XQuery [10] or SQL, enables the creation of data services to abstract the complexity of underlying data sources. Portals in the upper part of figure 1.3 are user-facing ESB aggregation points for a variety of resources represented as services. Endpoints in the ESB, depicted as small rectangles in figure 1.3, provide abstraction of physical destination and connection information (such as TCP/IP hostnames and port numbers), transcending the plumbing-level integration capabilities of traditional, tightly coupled, distributed software components. Endpoints allow services to communicate using logical connection names, which the ESB maps to actual physical network destinations at runtime. This destination independence allows the services connected to the ESB to be upgraded, moved, or replaced without having to modify code or disrupt existing ESB applications. For instance, an existing invoicing service could easily be upgraded by a new service without disrupting the execution of other applications. Additionally, duplicate processes can be set up to handle fail-over if a service is not available. Endpoints rely on the asynchronous and highly reliable communication between service containers. They can be configured to use several levels of quality of service, which guarantees communication despite network failures and outages [17].

To successfully build and deploy a distributed Service-Oriented Architecture, the following four primary aspects need to be addressed:

1. Service enablement: Each discrete application is exposed as a service.
2. Service orchestration: Distributed services are configured and orchestrated in clearly specified processes.
3. Deployment: As the SOA-based application is developed, completed services and processes must be transitioned from the testing to the production environment.
4. Management: Services must be monitored, and their invocations and selection may need to be adjusted to better meet application-specific goals.

Services can be assembled using a variety of application development tools (e.g., Microsoft .NET, Borland JBuilder, or BEA WebLogic Workshop) which allow new or existing distributed applications to be exposed as Web Services. Technologies such as the J2EE Connector Architecture (JCA) may also be used to create services by integrating packaged applications (such as ERP systems), which would then be exposed as services. To achieve its operational objectives, the ESB provides integration services such as connectivity and routing of messages based on business rules, data transformations, and application adapters [18].
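The logical-endpoint indirection described above can be sketched as a toy model (it is not any vendor's ESB API; the endpoint names, destinations, and invoice functions are invented): clients address services by logical name, and the bus resolves the name to a physical destination at runtime, so an implementation can be swapped without touching its clients.

```python
# Hypothetical ESB endpoint table: logical name -> physical destination.
endpoints = {"invoicing": "tcp://host-a:7001"}

# Deployed service implementations, keyed by physical destination.
deployed = {"tcp://host-a:7001": lambda msg: f"v1 invoice for {msg}"}

def send(logical_name: str, message: str) -> str:
    # The bus resolves the logical name at runtime and delivers the message.
    destination = endpoints[logical_name]
    return deployed[destination](message)

print(send("invoicing", "order-17"))  # prints v1 invoice for order-17

# Upgrade: deploy a new implementation and remap the endpoint.
# The client code (the send call) is unchanged.
deployed["tcp://host-b:7001"] = lambda msg: f"v2 invoice for {msg}"
endpoints["invoicing"] = "tcp://host-b:7001"
print(send("invoicing", "order-17"))  # prints v2 invoice for order-17
```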
These capabilities are themselves SOA-based in that they are spread out across the bus in a highly distributed fashion and are usually hosted in separately deployable service containers. This is a crucial difference from traditional integration brokers, which are usually highly centralized and monolithic [39]. The distributed nature of the ESB container model allows individual Web Services to plug into the ESB backbone as needed. This enables ESB containers to be highly decentralized and to work together in a highly distributed fashion, though they are scaled independently from one another. This is illustrated in figure 1.3, where applications running on different platforms are decoupled from each other, and can be connected through the bus as logical endpoints that are exposed as Web Services.

1.3.1 Event-Driven Architecture
In the enterprise context, business events (e.g., a customer order, the arrival of a shipment at a loading dock, or the payment of a bill) may affect the normal course of a business process at any point in time [36]. This implies that business processes cannot be designed a priori, assuming that events follow predetermined patterns, but must be defined more dynamically to permit process flows to be driven by asynchronous events. To support such applications, SOA must be enhanced into an event-driven extension, which we refer to as Event-Driven Architecture (EDA) [55] [17] [57]. EDA is thus a service architecture that permits enterprises to implement an SOA while respecting the highly volatile nature of business events. An ESB requires that applications and event-driven Web Services be tied together in the context of an SOA in a loosely coupled fashion. EDA allows applications and Web Services to operate independently of each other while collectively supporting business processes and functions [18]. In an ESB-enabled EDA, applications and services are treated as abstract service endpoints which can readily respond to asynchronous events [18]. EDA provides a means of abstracting away from the details of underlying service connectivity and protocols. Services in this SOA variant are not required to understand protocol implementations or to have any knowledge of the routing of messages to other services. An event producer typically sends messages through an ESB, and the ESB then publishes the messages to the services that have subscribed to the events. The event itself encapsulates an activity, constituting a complete description of a specific action. To achieve this functionality, the ESB supports established Web Services technologies, including SOAP, WSDL, and BPEL, as well as emerging standards such as WS-ReliableMessaging [43] and WS-Notification [52].
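The decoupling at the heart of EDA can be sketched as a toy publish/subscribe bus (the event type, payload fields, and handler below are invented for illustration): the producer publishes to the bus by event type, the bus delivers to whichever consumers have subscribed, and neither side holds a reference to the other.

```python
from collections import defaultdict

# Toy event bus: subscribers register by event type; producers publish
# events to the bus without knowing who (if anyone) will consume them.
subscribers = defaultdict(list)
log = []

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in subscribers[event_type]:
        handler(payload)

# Hypothetical consumer: a replenishment service reacting to low stock.
subscribe("stock.low", lambda event: log.append(f"reorder {event['sku']}"))

# The producer's only dependency is the bus and the event type name.
publish("stock.low", {"sku": "A-42", "level": 3})
print(log)  # prints ['reorder A-42']
```

The event-type string plays the role of the application-specific event taxonomy discussed below: it is the only contract the producer and consumer share.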
As was noted in the previous section, in a brokered SOA (depicted in figure 1.2) the only dependency between the provider and the client of a service is the service contract that is typically described in WSDL and is advertised by a service broker. The dependency between the service provider and the service client is a runtime dependency, not a compile-time dependency. The client obtains and uses all the information it needs about the service at runtime. The service interfaces are discovered dynamically, and messages are constructed dynamically. The service consumer does not know the format of the request message, or the location of the service, until it needs the service.
Service contracts and other associated metadata (e.g., about policies and agreements [20]) lay the groundwork for enterprise SOAs that involve many clients operating with a complex, heterogeneous application infrastructure. However, many of today’s SOA implementations are not that elaborate. In many cases, when small or medium enterprises implement an SOA, neither service interfaces in WSDL nor UDDI lookups are provided. This is often due either to the fact that the SOA in place provides only limited functionality or to the fact that sufficient security arrangements are not yet in place. In these cases an EDA provides a more lightweight, straightforward set of technologies for building and maintaining the service abstraction for client applications [9]. To achieve less coupling between services and their clients, EDA requires event producers and consumers to be fully decoupled [9]. That is, event producers need no specific knowledge of event consumers, and vice versa. Therefore, there is no need for a service contract (e.g., a WSDL specification) that explicates the behavior of a service to the client. The only relationship between event consumers and producers is through the ESB, to which services and clients subscribe as event publishers and/or subscribers. Despite the focus of EDA on decoupling event consumers and producers, the event consumers may require metadata about the events they may receive and process. To address this need, event producers often organize events on the basis of an application-specific event taxonomy that is made available to (and in some cases is mutually agreed upon with) event consumers. Such taxonomies typically specify event types and other event metadata that describe the published events consumers can subscribe to, including the format of the event-related attributes and the corresponding messages that may be exchanged between event producer and consumer services.

1.3.2 An ESB-Based Application Example
Figure 1.4 Simplified distributed procurement process.

As an example of an ESB-based application, consider the simplified distributed procurement process shown in figure 1.4, which has been implemented using an ESB. The process is initiated when an “Inventory service” publishes a replenishment event (in figure 1.4 this is indicated by the dashed arrow between the “Inventory service” and the “Replenishment” service). This event is received by the subscribing “Replenishment” service, as prescribed by EDA. On receipt of such an event, the “Replenishment” service starts the procurement process, which follows the traditional SOA (e.g.,
service invocations are depicted as solid arrows). In particular, the “Replenishment” service first invokes the “Supplier order” service, which chooses a supplier based on some criterion. Next, the purchase order is automatically generated by the “Purchase order” service (this service encapsulates an ERP purchasing module), and it is sent to the vendor of choice. Finally, this vendor uses an “Invoicing” service to bill the customer. The services that are part of the procurement business process in figure 1.4 interact via an ESB that is depicted in figure 1.5. This ESB supports all aspects of SOA and EDA needed for implementing this service-based application. In particular, the ESB receives the published event and delivers it asynchronously to the subscribing “Replenishment” service (the event publisher is not depicted in figure 1.5). When the “Replenishment” service invokes the “Supplier order” service, the ESB transports the invocation message. Although this figure shows only a single “Supplier order” service as part of the inventory, a plethora of supplier services may exist.

Figure 1.5 Enterprise Service Bus connecting remote services.

The “Supplier order” service, which executes a remote Web Service at a chosen supplier to fulfill the order, generates its response in XML, but this message format is not understood by the “Purchase order” service. To deal with this and other heterogeneity problems, the message from the “Supplier order” service leverages the ESB’s transformation service, which converts the XML message generated by the “Supplier order” service into a format that is accepted by the “Purchase order” service. Figure 1.5 also shows that legacy applications are placed onto the ESB through JCA resource adapters employed by the “Credit check” service. The capabilities of the ESB are discussed in more detail next.

1.3.3 Enterprise Service Bus Capabilities
Developing or adapting an application for SOA involves the following steps: 1. Creating service interfaces to existing or new functions, either directly or through the use of adapters; 2. routing and delivering service requests to the appropriate service provider; and 3. providing for safe substitution of one service implementation for another, without any effect to the clients of that service. The last step requires not only that the service interfaces be specified as prescribed by SOA, but also that the SOA infrastructure used to develop the application allow the clients to invoke services regardless of the service location and the communication protocol involved. Such service routing and substitution are among the key capabilities of the ESB. Additional capabilities/functional requirements for an ESB are described in the following paragraphs. We consider these capabilities as being necessary to support the functions of an effective ESB. Some of the functional capabilities described below have been discussed in other publications (e.g., [46] [15] [32] [17]). Service Communication Capabilities A critical ability of the ESB is to route service interactions through a variety of communication protocols, and to transform service interactions from one protocol to another where necessary. Other important aspects of an ESB implementation are the capacity to support service messaging models consistent with the SOA interfaces and the ability to transmit the required interaction context, such as security, transaction, or message correlation information. Dynamic Connectivity Capabilities Dynamic connectivity pertains to the ability to connect to services dynamically, without using a separate static API or proxy for each service. Most enterprise applications today operate on a static connectivity mode, requiring some static piece of code for each service. Dynamic service connectivity is a key capability for a successful ESB implementation. 
The dynamic connectivity API is the same, regardless of the service implementation mechanism (Web Services, JMS, EJB/RMI, etc.).
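As a concrete illustration of such a uniform connectivity API, here is a minimal sketch in Python. All class, method, and service names (`ServiceBus`, `invoke`, the two quote services) are hypothetical and not taken from any ESB product:

```python
# Sketch of a uniform service-invocation API: the caller never sees
# whether a service is reached over HTTP/SOAP, JMS, or a local object.
# All names here are illustrative, not from any real ESB.

class ServiceBus:
    def __init__(self):
        self._transports = {}   # service name -> callable transport stub

    def register(self, name, transport):
        """Bind a service name to a transport-specific invoker."""
        self._transports[name] = transport

    def invoke(self, name, payload):
        """Single entry point: the same call regardless of transport."""
        try:
            transport = self._transports[name]
        except KeyError:
            raise LookupError(f"no provider registered for {name!r}")
        return transport(payload)

# Two "implementations" behind the same interface: one might wrap a
# SOAP call, the other a JMS request/reply -- the client cannot tell.
def soap_quote_service(payload):
    return {"price": 10.0, "via": "soap"}

def jms_quote_service(payload):
    return {"price": 10.0, "via": "jms"}

bus = ServiceBus()
bus.register("quote", soap_quote_service)
print(bus.invoke("quote", {"symbol": "IBM"}))
bus.register("quote", jms_quote_service)        # provider swapped; the
print(bus.invoke("quote", {"symbol": "IBM"}))   # client code is unchanged
```

The point of the sketch is safe substitution: re-registering a different provider under the same name changes nothing in the client's code.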
12
Dimitrios Georgakopoulos and Michael P. Papazoglou
Endpoint Discovery with Quality of Service Capabilities
The ESB should support the discovery, selection, and binding to services. Increasingly these capabilities will be based on Web Services standards such as WSDL, SOAP, UDDI, and the WS-Policy framework. As many network endpoints can implement the same service contract, it may be desirable for the client to select the best endpoint at runtime, rather than hard-coding endpoints at build time. In addition, the ESB should be capable of supporting various qualities of service. Clients can query a Web Service, such as an organizational UDDI service, to discover the best service instance to use based on its QoS properties. Ideally, these capabilities should be controlled by declarative policies associated with the services involved, using a policy standard such as WS-Policy [26].

Integration Capabilities
To support SOA in a heterogeneous environment, the ESB needs to integrate with a variety of systems that do not directly support service-style interactions. These may include legacy systems, packaged applications, and other COTS components. When assessing the integration requirements for an ESB, several levels of integration must be considered, including application, process, and information integration. Each of these imposes specific technical requirements that need to be addressed by a service-oriented integration solution [32] [24]. Application integration is concerned with building and evolving an integration backbone capability that enables fast assembly and redeployment of business software components. Such integration is an integral part of the assembly process that facilitates strategies which combine legacy applications, acquired packages, external application subscriptions, and newly built components.
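Returning to the endpoint-discovery capability described above, runtime selection of the best endpoint by QoS might be sketched as follows; the registry contents, field names, and selection policy are all illustrative assumptions, not part of any discovery standard:

```python
# Sketch of runtime endpoint selection by quality of service.
# A registry holds several endpoints implementing the same contract;
# the client picks the "best" one at bind time instead of hard-coding
# an endpoint at build time. All names and figures are illustrative.

registry = {
    "CreditCheck": [
        {"endpoint": "http://a.example/credit", "latency_ms": 120, "availability": 0.999},
        {"endpoint": "http://b.example/credit", "latency_ms": 45,  "availability": 0.990},
        {"endpoint": "http://c.example/credit", "latency_ms": 60,  "availability": 0.999},
    ]
}

def select_endpoint(service, min_availability=0.995):
    """Filter by an availability policy, then prefer the lowest latency."""
    candidates = [e for e in registry[service]
                  if e["availability"] >= min_availability]
    if not candidates:
        raise LookupError(f"no endpoint of {service!r} meets the policy")
    return min(candidates, key=lambda e: e["latency_ms"])

best = select_endpoint("CreditCheck")
print(best["endpoint"])  # the 60 ms endpoint: fastest of the 99.9% ones
```

In a real deployment the availability threshold would come from a declarative policy attached to the service rather than a function argument.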
The ESB should focus on service-based application integration solutions that deliver (1) applications composed of interchangeable parts that are designed to be adaptable to accommodate business and technology changes; (2) evolutionary application portfolios that protect investment and can respond rapidly to new requirements and business processes; and (3) integration of various platform and component technologies. Process integration is concerned with the development of processes that combine other business processes, and with the integration of applications into business processes. Process-level integration at the level of the ESB generally involves Enterprise Application Integration (EAI), i.e., the integration of business processes and applications within the enterprise. It may also involve the integration of processes, not simply individual services, from external organizations, such as organizations participating in a supply chain or financial services that span multiple institutions. Information integration [42] is the process of providing consistent access to all the data in the enterprise, by all the applications that require it, in whatever form they need it, without being restricted by the format, source, or location of the data. This may require adapters and other data transformation facilities, aggregation of services to merge and reconcile disparate data (e.g., to merge two profiles for the same customer), and data validation to ensure data consistency (e.g., the minimum computed income should be greater than zero).
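The aggregation and validation steps of information integration can be sketched as follows; the profile fields are invented for illustration, and the income rule is the one from the example above:

```python
# Sketch of two information-integration steps: merge two profiles held
# for the same customer, then validate the merged record for
# consistency. Field names and rules are illustrative.

def merge_profiles(a, b):
    """Prefer non-empty values from `a`, fall back to `b`."""
    merged = dict(b)
    merged.update({k: v for k, v in a.items() if v not in (None, "")})
    return merged

def validate(profile):
    """Data-consistency checks, e.g. computed income must be positive."""
    errors = []
    if profile.get("income", 0) <= 0:
        errors.append("income must be greater than zero")
    if not profile.get("customer_id"):
        errors.append("missing customer_id")
    return errors

crm_view     = {"customer_id": "C42", "name": "A. Smith", "income": 52000}
billing_view = {"customer_id": "C42", "name": "",         "address": "1 Main St"}

profile = merge_profiles(crm_view, billing_view)
assert validate(profile) == []          # consistent merged record
print(profile["name"], profile["address"])
```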
Overview of Service-Oriented Computing
13
Portal-based integration is concerned with how to fabricate a standard portal framework that provides efficient, uniform, and consistent presentation of complex business functions and processes to Web users. It permits the ESB to provide one face to Web users, resulting in a consistent user experience and unified information delivery. This allows the underlying services and applications to remain distributed. Two complementary industry standards, JSR 168 and WSRP, are emerging in the portal space and can help in such integration efforts [44]. JSR 168 defines a standard way to develop portlets. It allows portlets to be interoperable across portal vendors. For example, portlets developed for BEA's WebLogic Portal can be interoperable with IBM Portal. This allows organizations to have a lower dependency on the portal product vendor. WSRP (Web Services for Remote Portlets) allows remote portlets to be developed and used in a standard manner and facilitates federated portals. It combines the power of Web services and portal technologies and is fast becoming the major enabling technology for distributed portals in an enterprise. JSR 168 complements WSRP by dealing with local rather than distributed portlets. A portal page may have certain local portlets which are JSR 168-compliant and some remote, distributed portlets that are executed in a remote container. With JSR 168 and WSRP maturing, the possibility of a truly federated portal can become a reality. It is important to note that all these integration levels must be considered when embarking on an ESB implementation. To be effective, ESB-based integration must rely on a methodology that facilitates reuse, eliminates redundancy, and simplifies integration, testing, deployment, and maintenance.

Message Transformation Capabilities
Legacy and new components that are integrated as services into the ESB typically have different expectations of messaging models and data formats.
A major source of value in an ESB is that it shields any individual component from any knowledge of the implementation details of any other component. The ESB transformation services make it possible to ensure that messages and data received by any component are in the format it expects, thereby removing the need to change the sender or the receiver. The ESB plays a major role in transforming heterogeneous data and messages, including converting legacy data formats (e.g., from a COBOL/VSAM application running on an OS/390 host) to XML, transforming XML to WSDL messages, and translating input XML to a different XML format.

Reliable Messaging Capabilities
Reliable messaging refers to the ability to queue service request messages and ensure guaranteed delivery of these messages to their destinations. It also includes the ability to provide message delivery notification to the message sender/service requester. Reliable messaging supports asynchronous store-and-forward delivery as well as guaranteed delivery capabilities. Primarily used for handling events, this capability is crucial for responding to clients in an asynchronous manner.
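A minimal sketch of such a transformation converts a fixed-width legacy record (as a COBOL/VSAM program might emit) into XML; the record layout and field names are invented for illustration:

```python
# Sketch of ESB message transformation: a fixed-width legacy record is
# converted to XML so the receiving service sees the format it expects.
# The layout (field name, start, end offsets) is illustrative.
import xml.etree.ElementTree as ET

LAYOUT = [("cust_id", 0, 6), ("name", 6, 26), ("balance", 26, 36)]

def legacy_to_xml(record):
    root = ET.Element("customer")
    for field, start, end in LAYOUT:
        ET.SubElement(root, field).text = record[start:end].strip()
    return ET.tostring(root, encoding="unicode")

# A sample record: 6-char id, 20-char name, 10-char balance.
record = "000042" + "Jane Doe".ljust(20) + "0000019999"
print(legacy_to_xml(record))
```

The inverse direction (XML back to the fixed-width form) would use the same layout table, which is why ESB transformation services are usually driven by declarative format descriptions rather than hand-written code.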
Topic/Content-Based Routing Capabilities
The ESB should be equipped with routing mechanisms to facilitate not only topic-based routing but also a more sophisticated content-based routing. Topic-based routing assumes that messages can be grouped into fixed, topical classes, so that subscribers can state interest in a topic and, as a consequence, receive messages associated with that topic [28]. Content-based routing permits the content of the message to determine its routing to different endpoints in the ESB. This requires subscriptions to be based on constraints involving properties or attributes of asynchronous messages and events. Therefore, content-based routing is particularly important for EDA. Content-based routing is often implemented with XML messages and JMS or other message-oriented middleware, or is based on emerging standards such as WS-Notification. WS-Notification defines a general, topic-based Web Service system for publish/subscribe interactions, which relies on the WS-Resource framework [30]. WS-Notification [52] is a family of related specifications that define a standard Web Services approach to notification using a topic-based publish/subscribe pattern. Its specification defines standard message exchanges to be implemented by service providers that wish to participate in notifications, and standard message exchanges that allow publication of messages from entities that are not themselves service providers. WS-Notification also allows expression of operational requirements for service providers and requesters that participate in notifications. It permits notification messages to be attached to WSDL PortTypes. The current WS-Notification specification provides support for both brokered and peer-to-peer publish/subscribe.

Security Capabilities
Enforcing security is a key success factor for ESB implementations.
The ESB needs to provide security to service consumers and to integrate with the (potentially different) security models of the service providers. Both point-to-point (e.g., SSL encryption) and end-to-end security capabilities are required. The latter include federated authentication, which intercepts service requests and adds the appropriate username and credentials; validation of each service request and authorization to make sure that the sender has the appropriate privilege to access the service; and encryption/decryption of XML message content. To address these intricate security requirements, trust models, WS-Security [8], and other security-related standards have been developed.

Long-Running Process and Transaction Capabilities
If the ESB supports long-running business processes and transactions, such as those encountered in an online reservation system that interacts with the users as well as various service providers (airline ticketing, car rental, hotel reservation, online payment such as PayPal, etc.), it is of vital importance that the ESB provide the necessary transactional correctness and reliability guarantees. More specifically, the ESB must provide mechanisms that isolate the side effects of concurrent transactions from each other and must support recovery from technical and process failures. The challenge at hand is to ensure that complex transactions are handled in a highly
reliable manner, and that ESB-supported transactions can roll back their processing to the original/prerequest state in the event of a failure.

Management and Monitoring Capabilities
Managing applications in an SOA environment is a serious challenge [7]. Examples of issues that need to be addressed include dynamic load balancing, fail-over when primary systems go down, and achieving topological or geographical affinity between the client and the service instance. Effective application management in an ESB requires a management framework that is consistent across a heterogeneous set of participating component systems and supports Service Level Agreements (SLAs). Enforcing SLAs requires the ability to select service providers dynamically, based on the quality of service they offer and the business value of individual transactions. An additional requirement for a successful ESB implementation is the ability to monitor the health, capacity, and performance of services. Monitoring is the ability to track service activities that take place via the ESB and to provide various metrics and statistics. Of particular significance is the ability to spot problems and exceptions in the business processes and to move toward resolving them as soon as they occur. Process monitoring capabilities are currently provided by tool sets in platforms for developing, deploying, and managing service applications, such as WebLogic Workshop.

Scalability Capabilities
With a widely distributed SOA, there is a need to scale some of the services or the entire infrastructure to meet integration demands. For example, transformation services are typically very resource-intensive and may require multiple instances across two or more computing nodes. The loosely coupled nature of an SOA requires that the ESB use a decentralized model to provide a cost-effective solution that promotes flexibility in scaling any aspect of the integration network.
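The SLA monitoring described under the management and monitoring capability might be sketched as follows; service names and thresholds are illustrative:

```python
# Sketch of ESB-side monitoring: record per-service response times and
# flag SLA violations. All names and thresholds are illustrative.
from collections import defaultdict

class Monitor:
    def __init__(self, sla_ms):
        self.sla_ms = sla_ms                  # service -> max allowed latency
        self.samples = defaultdict(list)

    def record(self, service, latency_ms):
        self.samples[service].append(latency_ms)

    def stats(self, service):
        xs = self.samples[service]
        return {"count": len(xs), "avg_ms": sum(xs) / len(xs), "max_ms": max(xs)}

    def violations(self):
        """Services whose average latency exceeds the agreed SLA."""
        return [s for s, limit in self.sla_ms.items()
                if self.samples[s] and self.stats(s)["avg_ms"] > limit]

m = Monitor({"payments": 200, "catalog": 100})
for t in (150, 180, 320):
    m.record("payments", t)
m.record("catalog", 40)
print(m.violations())   # the payments service: avg 216.7 ms > 200 ms SLA
```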
A decentralized SOA enables independent scalability of individual services as well as of the communications infrastructure itself.

1.4 xSOA
A basic SOA (i.e., the architecture depicted in figure 1.2) implements concepts such as service registration, discovery, and service invocation. ESB requirements, however, suggest that the basic SOA needs to be extended to support capabilities such as service orchestration, monitoring, and management. These are addressed by the extended SOA (xSOA) [37] [41]. The xSOA is an attempt to streamline, group together, and logically structure the functional requirements of complex applications that make use of the service-oriented computing paradigm. The xSOA is a stratified service-based architecture. Its architectural layers, which are depicted in figure 1.6, embrace a multidimensional separation of concerns [40] in such a way that each layer defines a set of constructs, roles, and responsibilities, and relies on constructs of a lower layer to accomplish its mission. The logical separation of functionality is based on the need to separate basic
[Figure 1.6 appears here as a diagram of the three xSOA layers and their roles. The foundation layer, description and basic operations (capability, interface, behavior, QoS; publication, discovery, selection, binding), connects service providers and service clients through basic services. The composition layer (coordination, conformance, monitoring, QoS) is where service aggregators create composite services. The management layer (market, certification, rating, SLAs, operations, assurance, support) is where service operators and market makers handle managed services.]

Figure 1.6 xSOA, an extended SOA.
service capabilities provided by the basic SOA (for example, building simple service-based applications) from the more advanced service functionality needed for composing, monitoring, and managing services. As shown in figure 1.6, the xSOA utilizes the basic SOA constructs as its foundational layer, and adds layers of service composition and management on top of it. The ESB middleware capabilities (communication, message routing and translation, service discovery, etc.) fall within the xSOA foundation layer. ESB capabilities that deal with service composition and management are in the composition and management layers of the xSOA. However, these layers include more advanced functionality than in an ESB. In a typical service-based application employing the bottom layer of xSOA, a service provider hosts a network-accessible software module. The service provider defines a service description, and publishes it to a client or a registry through which a service description is advertised. The service client (requester) discovers a service (endpoint) and retrieves the service description directly from the service or from a registry (e.g., a UDDI repository). The client uses the service description to bind with the service provider and invoke the service. The service provider and service client roles are logical constructs, and a service may exhibit characteristics of both. For reasons of conceptual simplicity, in figure 1.6 we assume that service providers and aggregators
can act as service brokers and advertise the services they provide. The role actions in this figure also indicate that a service aggregator is a special type of provider. The service composition layer in the xSOA encompasses a more advanced role of service aggregator and corresponding functionality for the aggregation of multiple services into a single composite service. Resulting composite services may be used by (1) service aggregators as basic services in further service compositions, or (2) clients as applications/solutions. Service aggregators thus become service providers in the bottom layer of xSOA by publishing the service descriptions of the composite services they create. The development of composite services at the composition layer of xSOA involves managing the following:

• Metadata, standard terminology, and reference models Web Services need to use metadata to describe what other endpoints need to know in order to interact with them. Metadata describing a service typically contain descriptions of the interfaces of a service, including vendor identifier, narrative description of the service, Internet address for messages, and format of request and response messages, as well as choreographic descriptions of the order of interactions. Service descriptions may range from simple service identifiers implying a mutually understood protocol to a complete description of the service interaction. Metadata describing services provide high-level semantic details regarding the structure and contents of the messages received and sent by Web Services, message operations, concrete network protocols, and endpoint addresses used by Web Services. They also describe abstractly the capabilities, requirements, and general characteristics of Web Services and how they can interoperate with other services.
Web Service metadata need to be accompanied by standard terminology, to address business terminology fluctuations, and by reference models, such as RosettaNet PIPs [56], to allow applications to define data and processes that are meaningful not only to their own businesses but also to their business partners, while maintaining interoperability (at the semantic level) with other business applications. The purpose of combining metadata, standard terminology, and reference models is to enable business processes to capture and convey their intended meaning and exchange requirements, identifying (among other things) the meaning, timing, sequence, and purpose of each business collaboration and associated information exchange.
• Conformance This involves managing service, behavior, and semantics conformance. Service conformance achieves the following: (1) it ensures the integrity of a composite service by matching its operations and parameter types with those of its constituent component services, (2) it imposes constraints on the component services (e.g., to enforce business rules), and (3) it performs data fusion activities. Service conformance has three parts: typing, behavioral, and semantic conformance. Typing (syntactic) conformance is performed at the data-typing level by using principles such as type safety, covariance, and contravariance for signature matching [53]. Behavioral conformance ensures the correctness of logical relationships between component operations that need to be blended into higher-level operations. Behavioral conformance guarantees that composite operations do not lead to spurious results and that the overall process
behaves in a correct and unambiguous manner. Finally, semantic conformance ensures that services and operations are annotated with domain-specific semantic properties (descriptions) so that they preserve their meaning when they are composed and can be formally validated. Service conformance is a research topic [53]. Concrete solutions exist only for typing conformance [38] [16], as they are based on conformance techniques for programming languages such as Eiffel or Java.

• Coordination This involves managing the execution order and dataflow between the component services (e.g., by specifying workflow processes and using a workflow engine for runtime control of service execution).
• Monitoring This allows monitoring of events or information produced by the component services, monitoring of instances of business processes, viewing process instance statistics, and suspending, resuming, or terminating selected process instances. Of particular significance is the ability to spot problems and exceptions in the business processes and to resolve them as soon as they occur. Process monitoring capabilities are currently provided by tools in platforms for developing, deploying, and managing service applications (e.g., BEA's WebLogic and Vitria's BusinessWare).
• Policy compliance This involves the management of the compliance of service compositions with policies (e.g., for security, information assurance, and QoS). In particular, policies [49] may be used to manage a system or organize the interaction between Web Services [4]. For example, knowing that a service supports a Web Services security standard such as WS-Security is not enough information to enable successful composition. The client needs to know what kind of security tokens the service is capable of processing, and which one it expects. The client must also determine whether the service requires signed messages and, if so, what token type must be used for the digital signatures. Finally, the client must determine when to encrypt the messages, which algorithm to use, and how to exchange a shared key with the service. Composing services without understanding these details and complying with the appropriate policies may lead to erroneous results.
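The typing (syntactic) conformance check discussed under the Conformance item can be sketched with the usual contravariant-parameter/covariant-result rule; the toy type hierarchy and operation signatures below are purely illustrative:

```python
# Sketch of typing conformance for signature matching: a provided
# component operation conforms to an expected signature if each
# parameter type is contravariant (same or more general) and the
# result type is covariant (same or more specific). The type
# hierarchy here is a toy example.

SUPERTYPES = {                     # child -> parent
    "PremiumCustomer": "Customer",
    "Customer": "Party",
    "Invoice": "Document",
}

def is_subtype(sub, sup):
    while sub is not None:
        if sub == sup:
            return True
        sub = SUPERTYPES.get(sub)
    return False

def conforms(expected, provided):
    """expected/provided are pairs (param_types, result_type)."""
    (exp_params, exp_result), (prov_params, prov_result) = expected, provided
    if len(exp_params) != len(prov_params):
        return False
    # Contravariance: the provider may accept broader input types.
    params_ok = all(is_subtype(e, p) for e, p in zip(exp_params, prov_params))
    # Covariance: the provider may return a narrower result type.
    result_ok = is_subtype(prov_result, exp_result)
    return params_ok and result_ok

expected = (("Customer",), "Document")
print(conforms(expected, (("Party",), "Invoice")))            # broader in, narrower out
print(conforms(expected, (("PremiumCustomer",), "Invoice")))  # parameter too narrow
```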
Standards such as BPEL and WS-Choreography [14] that operate at the service composition layer in xSOA enable the creation of large service-based applications that allow two companies to conduct business in an automated fashion. We expect to see much larger service collaborations spanning entire industry groups and other complex business relationships. These developments necessitate the use of tools and utilities that provide insights into the health of systems that implement Web Services and into the status and behavior patterns of loosely coupled applications. A related methodology is essential for leveraging a management framework for a production-quality, service-based infrastructure and applications. The rationale is very similar to the situation in traditional distributed computing environments, where systems administrators rely on programs/tools/utilities to make certain that a distributed computing environment operates reliably and efficiently.
Managing loosely coupled applications in an SOA inherently entails even more challenging requirements. Failure or change of a single application component can bring down numerous interdependent enterprise applications. The addition of new applications or components can overload existing components, causing unexpected degradation or failure of seemingly unrelated systems. Application performance depends on the combined performance of cooperating components and their interactions. To counter such situations, enterprises need to constantly monitor the health of their applications. The performance should be in tune at all times and under all load conditions. Managing critical Web Service-based applications requires sophisticated administration and management capabilities that are supported across an increasingly heterogeneous set of participating distributed component systems and provide complex aggregate (cross-component) service-level agreement enforcement and dynamic resource provisioning. Such capabilities are provided by the xSOA service management layer. We could define service management as the functionality required for discovering the existence, availability, performance, health, usage, control, configuration, life cycle support, and maintenance of Web Services or business processes in the context of an SOA. Service management encompasses the control and monitoring of SOA-based applications throughout their life cycle [22]. It spans a range of activities from installation and configuration to collecting metrics and tuning to ensure responsive service execution. The management layer in xSOA reflects the fact that the services themselves must be managed. In fact, the very same well-defined structures and standards that form the basis for Web Services also provide the foundation for managing and monitoring communications between services and their underlying resources, across numerous vendors, platforms, technologies, and topologies.
Service management includes many interrelated functions [25]. The most prominent functions of service management are summarized in the following categories:

1. Service-Level Agreement (SLA) management This includes QoS (e.g., sustainable network bandwidth with priority messaging service) [31]; service reporting (e.g., acceptable system response time); and service metering.
2. Auditing, monitoring, and troubleshooting This includes providing service performance and utilization statistics, measurement of transaction arrival rates and response times, measurement of transaction loads (number of bytes per incoming and outgoing transaction), load balancing across servers, measuring the health of services, and troubleshooting.
3. Dynamic service provisioning This includes provisioning resources, dynamic allocation/deallocation of hardware, installation/deinstallation of software on demand based on changing workloads, ensuring SLAs, and reliable SOAP message delivery.
4. Service life cycle/state management This includes exposing the current state of a service and permitting life cycle management, including the ability to start and stop a service, make specific configuration changes to a deployed Web Service, support different versions of Web
services, and notify the clients of a service about a change or impending change to the service interface.
5. Scalability/extensibility The Web Services support environment should be extensible and must permit discovery of supported management functionality in a given instantiation.

To manage critical applications, cross-organizational collaborations, and other complex business relationships, the xSOA service management layer is divided into two complementary categories [37] [41]:

1. Service operations management. This manages the service platform and the deployment of services and, more importantly, monitors the correctness and overall functionality of aggregated/orchestrated services.
2. Service market management. This typically supports integrated supply chain functions and provides a comprehensive range of services supporting an industry/trade, including providing services for business transaction negotiation, financial settlement, service certification and quality assurance, service rating, etc.

The xSOA's service operations management functionality is aimed at supporting critical applications that require enterprises to manage the service platform, the deployment of services, and the applications. xSOA's service operations management typically gathers information about the managed service platform, Web Services and business processes, and managed resource status and performance. It also supports specific service management tasks (e.g., root cause failure analysis, SLA monitoring and reporting, service deployment, life cycle management, and capacity planning). Operations management functionality may provide detailed application performance statistics that help assess the application effectiveness, permit complete visibility into individual business processes and transactions, guarantee consistency of service compositions, and deliver application status notifications when a particular activity is completed or a decision condition is reached.
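The service life cycle/state management functions listed above (function 4) might be sketched as a small state machine with change notification to subscribed clients; all names are illustrative:

```python
# Sketch of service life cycle/state management: expose the current
# state of a deployed service, allow start/stop transitions, and
# notify subscribed clients of each change. Names are illustrative.

class ManagedService:
    TRANSITIONS = {"stopped": {"start": "running"},
                   "running": {"stop": "stopped"}}

    def __init__(self, name):
        self.name, self.state = name, "stopped"
        self._listeners = []

    def subscribe(self, callback):
        """Clients register to be notified of state changes."""
        self._listeners.append(callback)

    def apply(self, action):
        nxt = self.TRANSITIONS[self.state].get(action)
        if nxt is None:
            raise ValueError(f"cannot {action} while {self.state}")
        self.state = nxt
        for cb in self._listeners:
            cb(self.name, nxt)

events = []
svc = ManagedService("quote")
svc.subscribe(lambda name, state: events.append((name, state)))
svc.apply("start")
svc.apply("stop")
print(events)   # each subscribed client saw both transitions
```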
xSOA refers to the role responsible for performing such operations management functions as the service operator. It is increasingly important for service operators to define and support active capabilities versus traditional passive capabilities [29]. For example, rather than merely raising an alert when a given Web Service is unable to meet the performance requirements of a given service-level agreement, the management platform should be able to take corrective action. This action could take the form of rerouting requests to a backup service that is less loaded, or provisioning a new application server with an instance of the software providing the service if no backup is currently running and available. Finally, service operations management should provide tools for the management of running processes that are comparable with those provided by BPM tools. Management in this case takes the form of real-time and historical reports, and the triggering of actions. For example, deviations from key performance indicator target values, such as the percentage of requests fulfilled within the limits specified by a service-level agreement, might trigger an alert and an escalation procedure.
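The active-management behavior described above, rerouting to a backup when the primary misses its SLA rather than merely raising an alert, might be sketched as follows; latency figures and names are illustrative:

```python
# Sketch of "active" operations management: monitor the primary
# provider and fail over to a less-loaded backup when it exceeds the
# SLA, recording an alert as well. All names are illustrative.

def dispatch(primary, backup, sla_ms, alerts):
    """Return a callable that monitors the primary and fails over."""
    def call(payload):
        latency, result = primary(payload)
        if latency > sla_ms:
            alerts.append(f"primary exceeded SLA ({latency} ms), rerouting")
            _, result = backup(payload)
        return result
    return call

def slow_primary(payload):    # pretend measurement: (latency_ms, result)
    return 450, "primary-result"

def healthy_backup(payload):
    return 80, "backup-result"

alerts = []
call = dispatch(slow_primary, healthy_backup, sla_ms=200, alerts=alerts)
print(call({"order": 1}))   # served by the backup
print(alerts)               # the alert was still raised
```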
Another aim of xSOA's service management layer is to provide support for specific markets by permitting the development of open service marketplaces. Currently, there exist several vertical industry marketplaces, such as those for the semiconductor, automotive, travel, and financial services industries. Their purpose is to create opportunities for buyers and sellers to meet and conduct business electronically, or to aggregate service supply/demand by offering added-value services and group buying power (just like a cooperative). The scope of such a service marketplace is limited only by the ability of enterprises to make their offerings visible to other enterprises and establish industry-specific protocols by which to conduct business. Service marketplaces typically support supply chain management by providing to their members a unified view of products and services, standard business terminology, and detailed business process descriptions. In addition, service markets must offer a comprehensive range of services supporting industry/trade, including services that provide business transaction negotiation and facilitation [12], financial settlement, service certification and quality assurance, rating services, and service metrics (such as the number of current service requesters and average turnaround time), and must manage the negotiation and enforcement of SLAs. The xSOA market management functionality, as illustrated in figure 1.6, is aimed at supporting these open service market functions. xSOA service markets introduce the role of a market maker. A market maker is a trusted third party or consortium of organizations that brings suppliers and vendors together. Essentially, a service market maker is a special kind of service aggregator that has added responsibilities, such as issuing certificates, maintaining industry standard information, introducing new standards, and endorsing service providers/aggregators.
The market maker assumes the responsibility for service market administration and performs maintenance tasks to ensure that the marketplace remains open and fair for business and, in general, provides facilities for the design and delivery of integrated service offerings that meet specific business needs and conform to industry standards.

1.5 Chapter Summary and Remaining Chapters in this Book
Modern enterprises need to streamline both internal and cross-enterprise business processes by integrating new, legacy, and home-grown applications. This requires an agile approach that allows enterprise business services (those offered to customers and partners) to be easily assembled from a collection of smaller, more fundamental business services. This challenge in automated business integration has driven major advances in technology within the integration software space. As a result, SOA has emerged to address the roles of service requesters, service providers, and service brokers, and to foster loosely coupled, standards-based, and protocol-independent distributed computing that offers solutions for achieving the desired business integration and mapping IT implementations more closely to the overall business process flow. Combining the SOA paradigm with event-driven processing lays the foundation for EDA. Both SOA and EDA can be effectively supported by an ESB that is capable of combining various
conventional distributed computing, middleware, BPM, and EAI technologies. Therefore the ESB offers a unified backbone on which enterprise services can be advertised, composed, planned, executed, and monitored. To capture essential ESB functions that include capabilities such as service orchestration, "intelligent" routing, provisioning, and integrity and security of messages, as well as service management, the conventional SOA was extended into the extended SOA (xSOA). In writing this chapter we have intentionally exposed the complexity of SOA-based application development in environments where SOA-based applications are not developed entirely from new functionality. Though this sacrifices some of the clarity in SOA, it faithfully represents the issues in the development of many real SOA-based applications. The remaining chapters in this book cover topics related to SOA, EDA, and xSOA, as well as engineering aspects of SOA-based applications. In particular, the book includes chapters on modeling of SOA-based applications, SOA architecture design, business process management, transactions, QoS and service agreements, service requirements engineering, reuse, and adaptation. The remaining chapters are outlined below.

Chapter 2: Conceptual Modeling of Service-Driven Applications
In this chapter, Boualem Benatallah, Fabio Casati, Woralak Kongdenfha, Halvard Skogsrud, and Farouk Toumani present a model-driven methodology for Web Services development and management. In particular, the proposed methodology is based on (1) UML and high-level notations to model key service abstractions, (2) operators and algebras for comparing and transforming such models, and (3) a CASE tool, partially implemented, that manages the entire service development life cycle. The proposed methodology focuses on the aspects of Web Services mentioned above, on the relationship among them, and on the relationship between them and the Web Service implementation.
The motivation behind this effort comes from the observation that there has been little concern so far about the potential complexity of service-based application development, despite initial results in supporting interoperability through a concerted standardization effort. There is no framework to help Web Services beginners, or even experienced developers, navigate the maze of available specifications and assess their usefulness for developing service artifacts and abstractions (e.g., interfaces, business protocols, composition models, and policies). In addition, there has been little support for model-driven development of Web Services, which is very useful for bridging the gap between requirements and the subsequent phases of the service artifact development process.

Chapter 3: Realizing Service-Oriented Architectures with Web Services

In this chapter Jim Webber and Savas Parastatidis present a message-oriented architectural paradigm for Web Services, consisting of three distinct views that decouple the architecture layer (service-oriented) from the protocol layer (message-oriented) and the implementation layer (event-driven). The importance of XML messages and message exchange patterns is emphasized across all three views. This approach is illustrated by describing the construction of a simple instant messaging application that highlights protocol design and implementation issues. In addition to setting out the architectural layers of message-oriented Web Services, a set of architectural and implementation guidelines is presented. These guidelines show how to avoid common software pitfalls by adhering to a number of deliberately simple design principles encompassing architecture, protocol, and implementation.

Chapter 4: Service-Oriented Support for Dynamic Interorganizational Business Process Management

Paul Grefen's chapter analyzes requirements for the support of business processes and puts these in the context of existing SOC technology. The chapter describes an application of SOC technology that provides dedicated support for dynamic business process management across the boundaries of organizations. The combination of SOC technology and workflow management (WFM) technology provides the basis for full-fledged dynamic interorganizational business process management. Grefen concludes that the current state of the art does not yet provide an integrated solution, but that many capabilities are available or under development.

Chapter 5: Data and Process Mediation in Semantic Web Services

This chapter by Adrian Mocan, Emilia Cimpian, and Christoph Bussler views the Web as a highly distributed and heterogeneous source of data and information. In this environment, Web Services extend the heterogeneity problems from the data level to the behavior level of business logic, message exchange protocols, and Web Service invocation. The chapter first identifies the need for mediator systems that are able to cope with these heterogeneity problems and offer the means for reconciliation, integration, and interoperability.
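The data-level side of this heterogeneity problem can be made concrete with a minimal field-mapping mediator. The sketch below is illustrative only: the message fields and the dictionary-based mapping style are invented, and real mediator systems such as WSMX operate on richer, ontology-based message representations rather than plain dictionaries.

```python
# A toy data mediator that reconciles two heterogeneous purchase-order
# message formats. Target field -> function of the source message.
MAPPING = {
    "customerName": lambda src: f"{src['firstName']} {src['lastName']}",
    "totalCents":   lambda src: int(round(src["total"] * 100)),  # amount -> cents
    "itemCodes":    lambda src: [item["sku"] for item in src["lineItems"]],
}

def mediate(source_message: dict) -> dict:
    """Translate a source message into the target schema."""
    return {field: rule(source_message) for field, rule in MAPPING.items()}

order = {
    "firstName": "Ada", "lastName": "Lovelace",
    "total": 12.50,
    "lineItems": [{"sku": "A-1"}, {"sku": "B-2"}],
}
print(mediate(order))
# {'customerName': 'Ada Lovelace', 'totalCents': 1250, 'itemCodes': ['A-1', 'B-2']}
```

The mapping is declarative, so adding a new source format means writing a new mapping table rather than new translation code; this separation of mapping rules from the mediation engine is the design idea that mediator systems develop at scale.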
Next, the chapter presents an overview of mediator systems, analyzes existing and future trends in this area, and describes the mediation architecture of the Web Service Execution Environment (WSMX). The chapter addresses both data and process mediation. It provides insight into the former, together with a survey of the multitude of existing approaches to data mediation. The chapter also explores and characterizes the largely unexplored topic of process mediation, which must be part of any mediation solution for the Semantic Web and Semantic Web Services.

Chapter 6: Toward Configurable QoS-Aware Web Services Infrastructure

L. Bahler, F. Caruso, C. Chung, B. Falchuk, and J. Micallef have long experience in the development of mission-critical enterprise systems, such as telecommunications network management systems. Such systems have stringent nonfunctional requirements for reliability, performance, availability, security, and scalability. Web Services technologies that address these essential quality of service (QoS) aspects are still in their infancy, with several emerging and often overlapping specifications that address only a fraction of QoS issues. This chapter addresses the challenges of designing Web Services with QoS requirements for mission-critical operations. In particular, this chapter provides an analysis and design methodology for Web Services that
can be transparently deployed on different transports. It focuses on the QoS requirements for the message exchanges that accomplish a business service, and describes an adaptive Web Services gateway that can be configured to provide security (and other capabilities). The chapter also advocates the use of Semantic Web technologies to support and automate deployment configurations that satisfy the solution's QoS requirements for a specific technology environment.

Chapter 7: Configurable QoS Computation and Policing in Dynamic Web Service Selection

In this chapter, Anne H. H. Ngu, Yutu Liu, Liangzhao Zeng, and Quan Z. Sheng cover QoS problems and solutions for supporting rapid and dynamic composition of Web Services. In a dynamic Web Service composition paradigm, Web Services that meet requesters' functional requirements must be located and bound dynamically from a large and constantly changing pool of service providers. To enable quality-driven Web Service selection, an open, fair, dynamic, and secure framework is needed to evaluate the QoS of a vast number of Web Services. The computation of fairness and the enforcement of the QoS of component Web Services should incur minimal overhead, yet achieve sufficient trust from both service requesters and providers. This chapter presents an open, fair, and dynamic QoS computation model for Web Service selection through the implementation of, and experimentation with, a QoS registry in a hypothetical phone service provisioning marketplace application.

Chapter 8: WS-Agreement Concepts and Use: Agreement-Based, Service-Oriented Architectures

This chapter by Heiko Ludwig outlines the concept of agreement-driven SOA, explains the elements of the WS-Agreement specification, and discusses conceptual and pragmatic issues in implementing an agreement-driven SOA based on WS-Agreement. Agreements, such as Service Level Agreements (SLAs), are typically used to define the specifics of a service delivered by a provider to a particular customer.
They include the service provider's obligations in terms of which services are delivered at which quality, the modalities of service delivery, and the quantity (i.e., the capacity) of the service to be delivered. Agreements also define what is expected of the service customer, typically the financial compensation and the terms of use. The chapter describes the WS-Agreement specification defined by the Grid Resource Allocation Agreement Protocol (GRAAP) Working Group of the Global Grid Forum (GGF). Such agreement specifications enable an organization to dynamically establish an SLA in a formal, machine-interpretable representation as part of an SOA. The chapter discusses an XML-based syntax for agreements and agreement templates, a simple agreement creation protocol, and an interface for monitoring the state of an agreement.

Chapter 9: Transaction Support for Web Services

This chapter by Mark Little provides an overview of various transaction models and specifications that have been proposed as standards for transactional composition of Web Services. In
particular, the chapter provides a tutorial on ACID transactions and the Business Transaction Protocol (BTP) proposed by the Organization for the Advancement of Structured Information Systems (OASIS). It explains the Web Services Coordination (WS-Coordination) protocol with its Web Services Atomic Transaction (WS-AT) and Web Services Business Activity (WS-BA) specifications, and provides an overview of the Web Services Composite Application Framework (WS-CAF) specification. Finally, the chapter compares these proposed standards by characterizing and comparing the transactional guarantees they provide.

Chapter 10: Transactional Web Services

This chapter by Stefan Tai, Thomas Mikalsen, Isabelle Rouvellou, Jonas Grundler, and Olaf Zimmermann addresses the problem of transactional coordination in Service-Oriented Computing (SOC). The chapter advocates the use of declarative policy assertions to advertise and match support for different transaction models, and to define the transactional semantics of Web Service compositions. It presents concrete, protocol-specific policies that apply to the relevant Web Services specifications. In particular, the chapter focuses on the Web Services Coordination (WS-Coordination) specification, which defines an extensible framework that can be used to implement different coordination models for Web Services. These include traditional atomic transactions and long-running business transactions, specified using the Web Services Atomic Transaction (WS-AT) and Web Services Business Activity (WS-BA) specifications, respectively. The chapter presents a policy-based approach that extends BPEL with coordination semantics and uses policies to drive and configure corresponding middleware systems to support transactional service compositions in an SOC environment.
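The flavor of such long-running business transactions can be sketched as compensation-based coordination, in the spirit of WS-BusinessActivity: each completed step registers a compensating action, and a business failure undoes completed work in reverse order. The step names and the in-process structure below are illustrative simplifications; real coordination is message-based and driven by WS-Coordination protocols, not local calls.

```python
# A toy compensation-based ("saga"-style) business transaction.
class Saga:
    def __init__(self):
        self.completed = []  # (name, compensate) pairs, in execution order

    def step(self, name, action, compensate):
        action()                                  # do the forward work
        self.completed.append((name, compensate)) # remember how to undo it

    def compensate_all(self):
        for name, undo in reversed(self.completed):
            undo()                                # undo in reverse order

log = []
saga = Saga()
try:
    saga.step("reserve-flight",
              lambda: log.append("flight reserved"),
              lambda: log.append("flight cancelled"))
    saga.step("reserve-hotel",
              lambda: log.append("hotel reserved"),
              lambda: log.append("hotel cancelled"))
    raise RuntimeError("payment declined")        # simulated business failure
except RuntimeError:
    saga.compensate_all()

print(log)
# ['flight reserved', 'hotel reserved', 'hotel cancelled', 'flight cancelled']
```

Unlike an ACID transaction, nothing here is rolled back automatically: each step's effects are durable until explicitly compensated, which is precisely why long-running business activities need a coordination protocol rather than a resource-level lock.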
Chapter 11: Service Componentization: Toward Service Reuse and Specialization

Bart Orriens and Jian Yang introduce the concept of a service component to facilitate Web Service component reuse, specialization, and extension, and discuss why the inheritance concepts developed in object-oriented programming language research cannot be applied directly to service component inheritance but must be modified. The chapter introduces service components as a packaging mechanism for developing Web-based distributed applications by combining existing (published) Web Services. Generally speaking, a service component can range from a small fragment of a business process to an entire complex business process. A component's interface specification is used when applications and other service compositions are built upon it. The chapter illustrates that service components have a recursive nature, in that they can be composed of published Web Services while in turn being Web Services themselves (albeit complex ones). Finally, the chapter describes how, once a service component is defined, it can be reused, specialized, and extended.

Chapter 12: Requirements Engineering Techniques for Web Services

In this chapter, Jaap Gordijn, Pascal van Eck, and Roel Wieringa focus on creating a shared understanding of Web Services, and on analyzing whether each Web Service is commercially
viable and technically feasible. Achieving these goals is complicated when many stakeholders representing different enterprises and different interests are involved. The chapter presents an approach, based on requirements engineering techniques, (1) to understand and analyze the Web Service, and (2) to develop a blueprint for a Web Service-based implementation. The chapter takes a multiple-perspective approach that includes a commercial value perspective, a process perspective, and an information systems perspective.

Chapter 13: Web Service Adaptation

In the last chapter of this book, Barbara Pernici and Pierluigi Plebani discuss Web Service adaptation. The solutions presented in the chapter are based on the results of the MAIS (Multichannel Adaptive Information System) project, in which the Web Service paradigm has been exploited in an adaptive way by studying (1) a set of models and methodologies that allow service provisioning through different channels, and (2) techniques for providing flexible services that are aware of the provisioning context. To this end, the chapter proposes a Web Service description that takes into account the service provisioning channel as an orthogonal dimension, independent of the user and the provider. The chapter considers Web Services both as stand-alone services and as services running inside a process. To offer a way to evaluate when and how adaptation can take place, the chapter describes a quality model for Web Services. In particular, this quality model allows the client to know how quality varies not only between different Web Services but also between the channels through which the same Web Service can be invoked. In this way, the client has the information needed to identify the best Web Service and the best channel to use.
2 Conceptual Modeling of Service-Driven Applications
Boualem Benatallah, Fabio Casati, Woralak Kongdenfha, Halvard Skogsrud, and Farouk Toumani
2.1 Introduction
Web Services technology is emerging as the main pillar of a service-oriented integration platform of unprecedented scale and agility. The basic principles of Web Services, and of service-oriented computing in general, consist of modularizing functions and exposing them as services that are typically specified using (de jure or de facto) standard languages and interoperate through standard protocols. Web Services (and Web Service development) have characteristics, challenges, and opportunities that to a great extent are similar to those found in conventional middleware and in the development of component-based applications. There are, however, some important differences between Web Services and conventional (e.g., CORBA-like) middleware services that are relevant especially with respect to conceptual modeling of Web Services, and to the use of such modeling in the context of Web Service development and management applications. The first is that, from a technology perspective, all interacting entities are considered to be (Web) services, even when they are in fact requesting and not providing services. This allows uniformity in the specification language (for example, the interfaces of both requesters and providers will be described using the Web Services Description Language (WSDL)) and in the development and runtime support tools. The second trend, which is gathering momentum and mind share, is that of including, as part of the service description, not only the service interface but also the business protocol supported by the service (i.e., the specification of which message exchange sequences, called conversations in the following, are supported by the service). This is important, as it rarely happens that service operations will be invoked independently from one another.
The interactions between clients and services are often structured in terms of a set of operation invocations, whose order typically has to obey certain constraints in order for clients to be able to obtain the service they need. Third, Web Services were originally intended for B2B interactions, and although they are popular in intraenterprise application integration, the support for B2B integration is still one of the main usage scenarios. In B2B, security is an issue, especially to realize the Web Services vision of having services interact with occasional (as opposed to predefined) clients. This raises
the need for security and trust management mechanisms, including the ability to define and execute trust negotiation protocols.

This chapter presents a model-driven methodology for Web Services development and management, and in particular describes models, abstractions, and tools supporting the automation of key Web Service life cycle activities. The proposed methodology is based on UML and high-level notations to model key service abstractions, on operators and algebras for comparing and transforming such models, and on a CASE tool, partially implemented, that manages the entire service development life cycle. In particular, the proposed methodology focuses on the innovative aspects of Web Services mentioned above, on the relationships among them, and on their relationship to the Web Service implementation. The motivation behind this effort comes from the observation that there has been little concern so far about the potential complexity of service-based application development, despite initial results in supporting interoperability through a concerted standardization effort. There is no framework to help Web Services beginners, or even experienced developers, navigate the maze of available specifications and their usefulness for developing service artifacts and abstractions (e.g., interfaces, business protocols, composition models, and policies). In addition, there has been very little support for model-driven development of Web Services, which is very useful for bridging the gap between requirements and the subsequent phases of the service artifact development process.

2.2 Toward a Model-Driven Methodology for Web Services Development and Management

The work presented in this chapter is part of a larger effort aimed at developing a fully fledged CASE tool for Web Service development and life cycle management. In this chapter we discuss key service abstractions and show how they can be modeled, analyzed, and managed.
We focus in particular on the business protocol, composition, and policy models, which were the early targets of our work and, as we will see, are important for facilitating Web Service development and management. Our methodology allows developers to model these abstractions using high-level, graphical notations. Specifications expressed in these graphical notations are mapped to XML-based notations, which can be used for lower-level processing (e.g., code generation). In the following, we first present an overview of related concepts and technologies, and then introduce the main components of the service development methodology and its potential benefits.

2.2.1 Overview of Web Service Technologies
One of the centerpieces of Web Services development today is WSDL, a language that allows the description of Web Services as collections of port types, or service interfaces in general. A service interface describes the operations supported by the service, and the structure of the messages required and provided by each operation. Service descriptions give clients an indication of which functionalities a service provides and of how to invoke them. Much like IDL
in CORBA, it can be used to automatically generate programming-language-specific stubs and skeletons that simplify service development by hiding the details of distribution. However, the description of a service interface is not enough, as it lacks a description of the set of message exchange sequences that are supported by the service, as well as other constraints on operation invocations. We use the term business protocol to refer to the definition of the sets of correct interactions between clients and services. Business protocols are very important because they allow programmers of client applications to develop clients that can interact correctly with a service. Hence, it is necessary to include the business protocol specification as part of the service description. Several efforts recognize the need for extending service description languages to enable the definition of constraints on the correct sequences of message invocations (also called conversations hereafter) and their properties, including WSCL, WSCI, BPEL4WS, WS-Coordination, and WS-Transaction. However, these efforts aim to provide XML-based notations for describing conversation properties rather than a framework to model and enforce protocol specifications. In addition, all these proposals lack abstraction mechanisms that allow richer descriptions of service characteristics and automation of the service life cycle. Service policies are another important part of Web Services abstractions. The need for an explicit representation of policies is more pronounced than in traditional application integration because Web Services will potentially operate in dynamic and autonomous environments. Efforts in this area include specifications such as WS-Security, WS-Policy, and WS-SecurityPolicy. These proposals define an XML-based syntax for specifying security requirements, but they do not address how to model or manage these requirements.
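The notion of a business protocol can be made concrete by modeling it as a state machine that accepts exactly the conversations the service supports. The sketch below is purely illustrative: the states, message names, and dictionary encoding are invented, not taken from any of the specifications above.

```python
# A toy business protocol for a hypothetical flight-booking service,
# encoded as a finite state machine: state -> {message: next_state}.
PROTOCOL = {
    "Start":     {"login": "LoggedIn"},
    "LoggedIn":  {"searchFlight": "Searching", "logout": "End"},
    "Searching": {"book": "Booked", "searchFlight": "Searching", "logout": "End"},
    "Booked":    {"pay": "End"},
}
FINAL_STATES = {"End"}

def is_valid_conversation(messages):
    """Check whether a message sequence is a conversation the protocol supports."""
    state = "Start"
    for msg in messages:
        transitions = PROTOCOL.get(state, {})
        if msg not in transitions:
            return False        # message not allowed in the current state
        state = transitions[msg]
    return state in FINAL_STATES

print(is_valid_conversation(["login", "searchFlight", "book", "pay"]))  # True
print(is_valid_conversation(["searchFlight", "book"]))                  # False
```

The point is that the interface alone (the set of operations) cannot distinguish these two sequences; only the protocol layer can reject the second one, where a client searches before logging in.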
Hence, there is a need for high-level frameworks and tools to help developers define and apply service policy specifications and to automate the service policy life cycle. Another important aspect of Web Services, which was lacking in conventional application integration, is that once applications' functionality is exposed as Web Services, heterogeneity is greatly reduced. When services are described and can interact in a standardized manner, the task of developing complex services by composing other (basic or composite) services is considerably simplified. Indeed, as service-related technologies mature, service composition is expected to play an increasingly important role in service development. Since services will be sought during the assembly of compositions and in business-to-business interactions, their functionality and policies need to be described in such a way that clients can discover them and evaluate their appropriateness for compositions and interactions. Languages and tools for service composition exist today, and they are more advanced and mature than what was mentioned above for protocols and policies. The leading specification in this space is BPEL. Despite this relative maturity, tools still lack the ability to manage service composition in relation to other aspects of Web Services, such as the above-mentioned protocols and policies. Exploiting this interaction is very important because it enables the automated generation of both external specifications and code skeletons (as discussed later in the chapter). In addition, in this space there is also a need for higher-level modeling than that offered by BPEL, as well as the ability to generate service composition specifications in
[Figure 2.1 appears here. The figure depicts a layered architecture. A conceptual layer holds UML-based notations (a conversation modeler, a trust negotiation modeler, and a composite service modeler) and model algebra operators for compatible composition (||C), intersection (||I), difference (||D), and projection (||P) over protocols of the form P = (S, s, F, M, R). A logical layer holds XML-based notations, routing and control tables, a service discovery engine, a conversation and trust negotiation controller, and a service composition orchestrator, connected by a communication bus. A physical layer holds the services S1-S4, backed by applications, workflows, databases, and Web applications.]

Figure 2.1 A layered architecture for service development and management.
languages other than BPEL. In summary, tools supporting service development today are mainly concerned with implementation aspects (e.g., mappings from WSDL to Java). There is no support for high-level modeling, analysis, and management of various service abstractions. 2.2.2
Web Services Development and Management: Abstraction Layers
Figure 2.1 shows the main abstraction layers that are relevant to the proposed service development and management methodology. Conceptual Layer The first layer addresses the identification of Web Services' relevant abstractions, including business protocol, composition model, trust negotiation, and privacy policies. We abstract these
characteristics from the languages and platforms in which a service may be implemented. These service abstractions are modeled using high-level graphical notations. In particular, we use state charts to model service compositions, and extended state machine models to represent business protocols and trust negotiation policies. These notations are endowed with formal semantics and provide useful and extensible constructs for modeling the main service abstractions. We also develop model management operators to analyze and manipulate service abstraction models. These operators form the basis for a protocol algebra (and, more generally, for an algebra to manage Web Services models) that we believe will have the same usefulness and impact that relational algebra has for data management. In particular, the business protocol model and management operators can be used to manipulate and compare protocol specifications, which allows the several types of analysis discussed below.

• Compatibility analysis This refers to checking if a requester protocol and a provider protocol are compatible (i.e., verifying whether they can interact and what the possible interactions are).

• Replaceability analysis This refers to checking if a service provider SR can replace another service provider S from a protocol point of view (i.e., if SR can support the same conversations that S supports).

• Compliance analysis This refers to checking if a protocol refinement conforms to the protocol specification (i.e., verifying whether an implementation of a service actually supports the protocol definition).

• Consistency analysis This refers to checking if a set of specifications that describe the same service from different points of view, such as protocol and composition models, are consistent (i.e., they do not contain contradictory concepts).
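As a rough illustration of compatibility analysis, two protocols can be modeled as finite-state machines over polarized messages and walked jointly, pairing each outgoing message on one side with an incoming message on the other. The sketch below is a simplified assumption of the idea, not the chapter's formal operators; all names and the representation are illustrative.

```python
# Sketch: a protocol is a map state -> [(message, polarity, next_state)].
# Polarity "+" means the message is received by the protocol's owner,
# "-" means it is sent. Representation is illustrative, not the book's model.

def compatible(p1, p2, s1, s2, f1, f2, seen=None):
    """Check whether the two protocols can jointly reach final states:
    every message one side sends, the other must be able to receive."""
    if seen is None:
        seen = set()
    if (s1, s2) in seen:
        return False
    seen.add((s1, s2))
    if s1 in f1 and s2 in f2:
        return True
    for (msg, pol, n1) in p1.get(s1, []):
        other = "+" if pol == "-" else "-"
        for (msg2, pol2, n2) in p2.get(s2, []):
            if msg2 == msg and pol2 == other:
                if compatible(p1, p2, n1, n2, f1, f2, seen):
                    return True
    return False

# Requester protocol: sends Login, then sends SearchBook, then done.
requester = {
    "start":  [("Login", "-", "logged")],
    "logged": [("SearchBook", "-", "done")],
}
# Provider protocol: receives Login, then receives SearchBook.
provider = {
    "start":  [("Login", "+", "logged")],
    "logged": [("SearchBook", "+", "done")],
}

print(compatible(requester, provider, "start", "start", {"done"}, {"done"}))  # True
```

A full implementation would also handle nondeterministic protocols and report the set of possible interactions rather than a single boolean.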
Logical Layer The models at the conceptual layer are abstract, and thus need to be made concrete and enforced to ensure that services behave as specified. At this layer, service abstraction models are represented using XML-based notations, including standards-based notations such as BPEL and WSDL. Specifications in these languages can be semiautomatically generated from high-level specifications. We also provide middleware components, called controllers, that sit between clients and services to enforce protocol and policy specifications by maintaining interaction states (conversation and negotiation states), as well as to verify that each operation invocation is performed in accordance with the definition. Similarly, a composition model is associated with an orchestration controller that manages the composition logic. We also propose a generative approach, in which a skeleton of a composite service can be generated from the protocol and policy specifications, and protocol specifications can be automatically generated from composition models. Physical Layer Web services are built on top of applications and resources such as database systems, legacy applications, and workflows. In fact, Web Services wrap these applications. Underlying
applications use different formats, communication protocols, and policy models. Thus, there is a need for adapters to bridge interactions between the logical and physical layers. We will not focus on this layer in this chapter.

2.2.3 Usages of Abstractions
Our methodology provides high-level support for service development and interoperability. In particular, it can be beneficial in facilitating and automating many activities in the service life cycle, such as dynamic discovery, service refinement, and protocol and policy enforcement. The methodology can also be beneficial for analyzing the degrees of commonality and difference between protocols, as well as the interoperation possibilities of interacting Web services. In the following we briefly discuss how our methodology can be leveraged to better support service life cycle management. Development-Time Support Our methodology provides several tools to support tasks at development time. First, visual editors allow users to graphically specify and model service abstractions. These models can be used to generate code skeletons that refine and conform to the specifications. Based on these models, we can check whether the implementation of the service complies with the specifications. In addition, protocol analysis can assist in assessing the compatibility of the newly created service (and service protocol) with the other services with which it needs to interact. Protocol analysis can also help users to identify which parts of the protocol are compatible and which are not, thereby suggesting possible areas of modification to tackle in order to increase the level of compatibility with a desired service. Runtime Support We provide middleware tools for enforcing the service specifications identified at development time. Our methodology also provides a verification tool that checks whether services are compatible for interoperation. In particular, this tool assists service discovery in that the search engine can restrict the returned services by checking their compatibility. Change Support Web Services operate autonomously within potentially dynamic environments. In particular, component or partner services may change their protocols, others may become unavailable, and still others may emerge. Our methodology provides a tool to statically and dynamically identify alternative services, based on behavior equivalence and substitution. In addition, a long-running service may change its policy during service execution. Hence, our methodology provides a tool that allows developers to modify their policies and verify whether the new version is compatible with the old one.
2.3 Conceptual Modeling of Web Services Abstractions
Conceptual modeling plays an important role in system development because it allows developers to describe a system independently of implementation details. This allows developers to focus on the most critical aspects of the design while leaving to the tools (or to other phases of the design) the task of deriving the detailed implementation and the final code to be executed. When models are sufficiently formal, their construction also provides the foundation for early analysis, error detection, and other useful model management activities. We next propose a methodology that provides notations for modeling the main Web Services characteristics at the conceptual level. Our methodology aims to provide richer descriptions of service properties to support (1) humans in understanding properties (e.g., to understand if they want to access a service and how to write a client that can interoperate with it), (2) client applications in searching for and binding to services, (3) middleware applications in enforcing constraints and factorizing the logic that manages constraint violations (much as transactional middleware supports transactional abstractions), and (4) CASE tools in supporting the development life cycle. The chapter focuses in particular on three abstractions that we believe to be critical in Web Services development and management: business protocol, composition logic, and trust negotiation. For each of these, we discuss its role and importance, describe its key aspects, and propose a model and a notation that are independent of the language used to describe services and of the platform used to implement services. The proposed notation is based on UML, with which the reader is assumed to be familiar. UML provides a number of predefined concepts that can be used to describe structural and behavioral aspects of a system.
The structural aspects of a system can be defined in terms of classes, which formalize a set of objects with common services, properties, and behavior. Services are described by methods, and properties by attributes and associations; behavior can be characterized, for example, by a state machine attached to the class. UML's predefined concepts can be extended to capture the relevant semantics of a particular domain or architecture. This formal extension mechanism includes stereotypes, which are used to introduce domain-specific primitives, and tagged values, which represent domain-specific properties. UML also supports the specification of additional constraints through the Object Constraint Language (OCL). In addition, there are a number of general-purpose UML tools that benefit subsequent stages of development: tools for code generation, simulation, analysis, and so on.

2.3.1 Business Protocol Model
A conversation is commonly defined as a sequence of message exchanges between a client and a service as part of a Web Service invocation. This exchange occurs according to a business protocol (i.e., a specification of the set of correct and accepted conversations). The business protocol model goes beyond just message exchange choreography by identifying and describing
important conversation abstractions, such as activation abstractions and completion abstractions [1]. Activation abstractions refer to the triggering features and preconditions of a transition, namely the activation mode, activation event, and precondition. The activation mode indicates whether the triggering is activated explicitly, by a service operation invocation from the client, or implicitly (i.e., the transition is automatically fired after a temporal event has occurred). A precondition specifies conditions that must be satisfied when triggering a transition, as a triple (O-condition, U-condition, T-condition). An O-condition specifies conditions on service request parameters. A U-condition specifies conditions on requester profiles (i.e., only certain requesters can invoke such an operation). A T-condition specifies temporal constraints as a time window within which a transition can be fired. In addition to the time conditions specified in a precondition, another kind of temporal constraint identified in activation abstractions describes temporal events that automatically and implicitly fire a transition. A completion abstraction describes the implications and effects of a transition from the requester's perspective (e.g., whether an operation can be canceled). The effects of a transition can be specified as effectless, compensatable, or definite. Effectless describes a transition that has no permanent effect from the client's perspective. Compensatable denotes a transition whose effect can be undone by explicitly invoking a compensation operation (an operation that cancels the effects of other operations). Definite refers to a transition whose effects are permanent (i.e., not compensatable). We extend the UML state machine to model these abstractions, as shown in figure 2.2. States are labeled with the logical stages in which a service can be in its interaction with a client, such as Logged or BookOrdered. Transitions among states are labeled with activation events, which are either operation invocations or, in the case of an implicit transition, temporal events. In particular, we extend the state machine with «explicit» or «implicit» stereotypes to represent the activation mode. For example, T2 denotes a transition that can be explicitly activated by invoking the SearchBook operation. We also extend the state machine by using tagged values to represent time conditions, both the T-condition of explicit transitions and the temporal events of implicit transitions. For example, T7 is an implicit transition that will be automatically fired thirty days after transition T5 has been completed. Moreover, tagged values are also used to denote the effects of a transition. For example, the effects of transition T3 can be compensated by invoking transition T4. Note that the compensation operation (i.e., CancelOrder) can be invoked only within seven days after the OrderBook operation has been performed, because the state BookOrdered will automatically be transferred to the BookShipped state after that time.
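The transition abstractions just described (activation mode, time conditions, and effects) can be captured in a small data model. The sketch below encodes a fragment of the figure 2.2 protocol under illustrative names; it is an assumption about one possible encoding, not the chapter's tooling.

```python
# Sketch of the extended state machine for business protocols: each transition
# carries an activation mode ("explicit"/"implicit"), an optional time condition,
# and an effect tag (effectless / compensatable / definite).
# Class and field names are illustrative, not a normative API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transition:
    name: str
    source: str
    target: str
    mode: str                           # "explicit" (operation call) or "implicit" (temporal event)
    operation: Optional[str] = None
    effect: str = "effectless"          # "effectless", "compensatable", or "definite"
    compensated_by: Optional[str] = None
    deadline_days: Optional[int] = None # time window / temporal event, relative to entering source

# A fragment of the bookshop protocol (figure 2.2):
protocol = [
    Transition("T1", "Start", "Logged", "explicit", operation="Login"),
    Transition("T2", "Logged", "BookFound", "explicit", operation="SearchBook"),
    Transition("T3", "BookFound", "BookOrdered", "explicit", operation="OrderBook",
               effect="compensatable", compensated_by="T4"),
    Transition("T4", "BookOrdered", "OrderCanceled", "explicit", operation="CancelOrder",
               deadline_days=7),
    Transition("T7", "BookShipped", "ShipmentConfirmed", "implicit", deadline_days=30),
]

def fireable(t: Transition, state: str, operation: Optional[str], days_in_state: int) -> bool:
    """Can t fire in `state`, given the invoked operation (None for a temporal
    event) and the days elapsed since the state was entered?"""
    if t.source != state:
        return False
    if t.mode == "explicit":
        in_window = t.deadline_days is None or days_in_state <= t.deadline_days
        return operation == t.operation and in_window
    # implicit: fires automatically once its temporal event has occurred
    return operation is None and days_in_state >= t.deadline_days

# CancelOrder compensates T3 only within seven days of ordering:
print(fireable(protocol[3], "BookOrdered", "CancelOrder", 5))   # True
print(fireable(protocol[3], "BookOrdered", "CancelOrder", 10))  # False
```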
2.3.2 Trust Negotiation Policy Model
We have previously identified trust negotiation as an access control model that is well suited to service-oriented environments [5]. Using the trust negotiation approach, access is granted on the basis of trust established in a negotiation between the service requester and the service provider. In this negotiation, requesters and providers exchange credentials that contain attributes
Figure 2.2 A business protocol model of a bookshop service. [State machine with states including Logged, BookFound, BookReturned, ShipmentConfirmed, and OrderCanceled, and transitions T1: Login(), T2: SearchBook(), T3: OrderBook() {compensatable(T4)}, T6: ReturnBook(), and the implicit transition T7: {>=, (end(T5) + 30)}.]
describing the owner. Credentials include membership cards, credit cards, and various forms of identity documents. Both requesters and providers use trust negotiation policies to restrict access to their credentials and to service operations. A trust negotiation policy specifies which operations to allow and which credentials to disclose at a given execution state of the negotiation, and the conditions for disclosing them. Figure 2.3 shows an example of a trust negotiation policy. We propose an extension of the UML state machine model to describe trust negotiation policies. States represent the level of trust achieved by the requester thus far in the negotiation. When the negotiation enters a new state, the requester gains access to more resources. Our trust negotiation model uses roles as an abstraction for resources. This means that instead of mapping resources directly to states, roles are mapped to states and access rights to the resources are assigned to these roles. Roles are cumulative, so when a requester reaches a new state, all previous role memberships are retained. States are labeled with the roles that are available at that state. Thus, in figure 2.3, a requester that has reached state A is a member of the Customer role, while a requester that has visited both states A and C will be a member of the roles Customer, Gold Customer, and Buyer. Transitions are labeled with conditions that restrict when they may be fired. When the negotiation is in a state S, and an event occurs which satisfies the condition of a transition T where
Figure 2.3 A trust negotiation policy model for a bookshop service. [State machine with states A (Customer), B (Reviewer), C (Gold Customer, Buyer), D (Buyer), F, and I, and transitions guarded by {ID}, {Address, Credit Card}, {Register}, {Gold Member}, and a {10 mins} time-out. The accompanying table maps roles to the operations and credentials they grant:

Role | Operations | Credentials
Customer | Register, Search | Verified by Visa
Reviewer | Write review |
Buyer | Purchase |
Gold customer | Special Offers |
]
S is the input state, then the negotiation moves to the output state of T. Requesters explicitly trigger events by invoking operations such as CredentialDisclosure. Transitions are extended beyond traditional state machines to capture security abstractions necessary for trust negotiation. We have identified four types of transition conditions: credential disclosures, provisions, obligations, and temporal events (time-outs). Briefly, credential disclosures require the other party to disclose one or more credentials. Provisions require the other party to perform some action (modeled as a Web Service operation invocation) before continuing. Obligations are similar to provisions, but they require the action to be taken at some point in the future. Finally, time-outs specify a time limit on responses, so that abandoned negotiations can be moved to a final state. This is why we call temporal events time-outs in our trust negotiation model. The extensions we added to the basic state machine model include transition stereotypes and tagged values. The stereotypes are used to show whether a transition is of type «credentialDisclosure», «provision», «obligation», or «time-out». In figure 2.3, the transition marked with
a dashed line is a provision, and the transition marked with a dotted line is a time-out. Dashed or dotted lines used to mark transitions do not add any meaning; they are merely used to aid in the description of the model. Tagged values are used to indicate properties associated with the transition types, such as the credential required or the time when a time-out transition will occur. Figure 2.3 shows a trust negotiation policy for a bookshop service. If a requester enters state A, that requester is allowed to activate the Customer role. Upon activating this role, the requester is given access to the service operations Register and Search, as well as to the provider credential Verified by Visa. To become a member of the Reviewer role, the requester must either disclose its ID credential or perform the provision of invoking the Register operation.
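The cumulative-role semantics of the policy in figure 2.3 can be simulated in a few lines. The sketch below replays negotiation events against an illustrative transition table; the Register-to-Reviewer mapping follows the text above, and all identifiers are assumptions.

```python
# Sketch of the trust negotiation state machine: states grant roles, roles are
# cumulative, and transitions are guarded by credential disclosures, provisions,
# or time-outs. Tables and names are illustrative, following figure 2.3.

# state -> roles granted on reaching that state
ROLES = {
    "A": {"Customer"},
    "B": {"Reviewer"},
    "C": {"Gold Customer", "Buyer"},
    "D": {"Buyer"},
}

# (state, condition type, condition value) -> next state
TRANSITIONS = {
    ("A", "credentialDisclosure", "ID"): "B",
    ("A", "provision", "Register"): "B",        # assumption: per the text, Register also leads to Reviewer
    ("A", "credentialDisclosure", "Gold Member"): "C",
    ("A", "time-out", "10 mins"): "F",
}

def negotiate(events):
    """Replay negotiation events from state A; return the accumulated role set."""
    state, roles = "A", set(ROLES["A"])
    for kind, value in events:
        nxt = TRANSITIONS.get((state, kind, value))
        if nxt is None:
            continue  # event satisfies no outgoing transition; ignore it
        state = nxt
        roles |= ROLES.get(nxt, set())  # roles are cumulative
    return roles

print(negotiate([("credentialDisclosure", "Gold Member")]))
# roles now include Customer, Gold Customer, and Buyer
```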
2.3.3 Composition Model
Service composition is the task of developing complex services by composing other elementary or composite services. An elementary service provides an access point to Internet-based applications that do not rely on other Web Services to fulfill external requests. A composite service aggregates other elementary and composite services that collaborate to implement a set of operations. Following our previous work [2], a state chart is used to express the business logic of a composite service. The reason we use state charts to express service composition, besides their formal semantics (as mentioned earlier), is that they offer most of the control-flow constructs found in existing process description languages [2]. A state chart is made up of states and transitions. A state can be either basic or compound. A basic state corresponds to a service operation, either elementary or composite. A compound state, which can be an OR-state or an AND-state, provides a mechanism for nesting one or several state charts inside a larger state chart. An OR-state relates its substates by an exclusive-or relationship (i.e., exactly one of its substates will be executed once the OR-state is entered). An AND-state contains several state charts, all of which are executed concurrently. Transitions between states in a state chart are labeled according to event-condition-action (ECA) rules. A transition can be fired if the specified event of the transition occurs and the specified condition is met. Figure 2.4 presents the state charts of two composite services: an eBookshop service and an OrderBook service. The OrderBook service is invoked within the eBookshop service. The state chart of the eBookshop service consists of an OR-state, in which an input parameter of the eBookshop composite service (i.e., the searchOption parameter) is used as a condition that determines whether to perform the SearchByAuthor or the SearchByTitle operation. The state chart of the OrderBook service consists of an AND-state, denoted by a dashed line, in which the VerifyCredit operation is performed in parallel with the CheckStock operation. As Web Services often operate in a highly dynamic environment, where providers remove, modify, or relocate their services frequently, we introduce the concept of service community at the composition model layer to facilitate the composition of a potentially large and changing set of services. A community is an aggregator of service offers with a unified interface. In other
Figure 2.4 A composition model for an eBookshop service. [The eBookshop service state chart runs Login, then an OR-state that performs SearchByAuthor if [searchOption=Author] or SearchByTitle if [searchOption=Title], then OrderBook, Shipment, and SendBill. The OrderBook service state chart is an AND-state executing VerifyCredit and CheckStock in parallel.]
words, a community defines a request for a service by making abstractions of the underlying providers. In order to be accessible through communities, service providers need to register themselves with such communities. This registration requires specifications of mappings between the operations of the service and those of the community. For example, consider the following mapping, in which the operation ItemSearch of the community BookSearchEngine is mapped to the operation BookSearch of the e-Bookshop service:

source service e-Bookshop EBS
target community BookSearchEngine BSS
operation mappings
  operation BSS.ItemSearch() is EBS.BookSearch()

The concept of service community enables dynamic provider selection by postponing the decision on which specific service handles a given invocation until the moment of invocation. Communities are services in themselves, so they are created, advertised, discovered, and invoked just as elementary and composite services are, and they exist independently of the services they aggregate. When a requester invokes a service community's operation, the community is responsible for selecting the actual service that will execute the operation. To conduct this dynamic selection, a service community uses a membership model and a scoring service to maintain its list of members and conduct multiattribute dynamic service selection. In particular, a community chooses a member by interpreting a selection policy, which can be defined in a service definition or can be derived from information such as execution logs.
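The community mechanism just described (a unified interface, member registration with operation mappings, and selection-policy-driven dispatch at invocation time) can be sketched as follows; all class and method names are illustrative assumptions, not the chapter's infrastructure.

```python
# Sketch of community-based dynamic provider selection: a community exposes one
# interface, keeps a member list with operation mappings and scores, and picks
# the actual provider at invocation time. Names are illustrative.

class Community:
    def __init__(self, name):
        self.name = name
        self.members = []  # (service object, operation mapping, score)

    def register(self, service, mapping, score):
        """A provider registers, supplying the mapping from community
        operations to its own operations (e.g. ItemSearch -> BookSearch)."""
        self.members.append((service, mapping, score))

    def invoke(self, operation, *args):
        """The selection policy (here simply: highest score) is interpreted
        only at the moment of invocation."""
        service, mapping, _ = max(self.members, key=lambda m: m[2])
        return getattr(service, mapping[operation])(*args)

class EBookshop:
    def BookSearch(self, title):
        return f"eBookshop result for {title!r}"

bss = Community("BookSearchEngine")
bss.register(EBookshop(), {"ItemSearch": "BookSearch"}, score=0.9)
print(bss.invoke("ItemSearch", "UML"))  # eBookshop result for 'UML'
```

A real community would derive scores from execution logs and member attributes rather than a fixed number, as the text notes.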
2.4 Tools Supporting Web Services Life Cycle Automation
The proposed methodology supports activities of the service life cycle by providing a set of tools for use at its various stages, including modeling support, code generation, compliance enforcement, and model analysis and management. Modeling tools allow users to visually edit several service characteristic models: composition, business protocol, choreography, trust negotiation, access control, and privacy policy models. These models can be used to generate code skeletons and control structures that enforce interactions according to the specifications and constraints defined in the models themselves. This section overviews the existing tools that we have developed. For more detail, see Sheng et al. [3] and Skogsrud et al. [4].

2.4.1 Modeling and Code Generation
The modelers are CASE-like tools that assist developers in specifying Web Services abstractions: business protocols, composition logic, and trust negotiation policies. These abstractions are edited through a visual interface and translated into an XML document for subsequent processing. The visual interface offers an editor for creating state machine and state chart diagrams of the abstractions. The modelers also provide means to describe the properties of transitions, such as effectless or compensatable for business protocols, and credential disclosure or provision for trust negotiation policies. In addition, the XML documents generated by the modelers can be used to generate code structures. In our system, for trust negotiations, the modeler generates complete execution instructions for controlling trust negotiations, whereas for business protocols, the modeler generates code skeletons for enforcing service specifications. Figure 2.5 shows a screen shot of the trust negotiation modeler tool. In the case of trust negotiations, the execution instructions are generated in full, which means that no additional effort by the developer is necessary to complete the specification. For each state, the trust negotiation modeler analyzes the resources (i.e., the roles) available at the state, as well as the outgoing transitions. This information is represented as a control table; there is one control table for each state except final states. In general, a control table is a set of event-condition-action (E[C]/A) rules that describe the triggering events and conditions for the outgoing state transitions, as well as the associated actions. The specific procedure to generate control tables for trust negotiations is as follows. For each role r available at a state, the tuple r[true]/null is added to the control table for that state. This tuple makes the role r available to the requester. Also, for each outgoing transition t of the state, the tuple t.event[t.condition]/Transition to t.target is added to the control table. This tuple means that when the event associated with the transition occurs (event), if the condition of the transition is satisfied (condition), the transition will be triggered (action). More specifically, the event is the receipt of a message, such as a credential disclosure. The condition applies to the event; for instance, it could specify constraints on attributes contained in the submitted credential if the event is a credential disclosure. When the transition is triggered, moving to a new state signifies the increase in trust that the provider grants to the requester.
Figure 2.5 A screen shot of the Trust Negotiation Modeler tool.
Table 2.1 Control Table for State A

Event | Condition | Action
Reader | true | null
CredentialDisclosure: ID | true | transition to state B
CredentialDisclosure: Gold Member | true | transition to state C
Time-out: 10 minutes | true | transition to state F
For example, consider the policy in figure 2.5. The control table for state A is shown in table 2.1. In the case of business protocols, the modeler generates code skeletons, BPEL4WS skeletons in particular, of a composite service that complies with the service specification [5]. Designers can then extend this skeleton with business logic, thereby completing the specification of the service up to the point where detailed, executable service specifications are obtained, typically in the form of a process definition. As an alternative approach, instead of generating and
extending the composition skeleton that deals with the service as a whole, the designer may wish to start the development by separately defining the composition logic for each operation. The code generation tool also supports this case by combining the specification of the operation implementations with the external specifications to produce a more complete, executable specification of a service that implements the service operations as specified. It also takes care of maintaining the interaction state in a way that complies with the business protocol specifications, by generating the appropriate logic and combining it with the operation implementation logic.
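The control-table generation procedure described earlier in this section can be sketched directly: one rule per role available at a state, plus one rule per outgoing transition. The representation below is an illustrative assumption, not the tool's actual output format; the role rule for state A follows the text (the Customer role) rather than the printed table.

```python
# Sketch of control-table generation for trust negotiations (section 2.4.1):
# for each role r at a state, emit r[true]/null; for each outgoing transition,
# emit event[condition]/"transition to <target>". Names are illustrative.

def control_table(state, roles, transitions):
    """Build the E[C]/A rules for one (non-final) state."""
    table = [(r, "true", "null") for r in roles.get(state, [])]
    for (src, event, condition, target) in transitions:
        if src == state:
            table.append((event, condition, f"transition to state {target}"))
    return table

roles = {"A": ["Customer"]}
transitions = [
    ("A", "CredentialDisclosure: ID", "true", "B"),
    ("A", "CredentialDisclosure: Gold Member", "true", "C"),
    ("A", "Time-out: 10 minutes", "true", "F"),
]

for row in control_table("A", roles, transitions):
    print(row)
# ('Customer', 'true', 'null')
# ('CredentialDisclosure: ID', 'true', 'transition to state B')
# ('CredentialDisclosure: Gold Member', 'true', 'transition to state C')
# ('Time-out: 10 minutes', 'true', 'transition to state F')
```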
2.4.2 Compliance Enforcement
Our methodology provides middleware tools to allow protocol and policy enforcement, runtime verification, and processing [6]. Business protocol and trust negotiation policies are enforced by controllers: middleware processes that maintain the state of interactions (conversation and negotiation), check whether messages are received and sent in accordance with the definitions, and return error messages to clients when messages are not compliant. The controllers conduct their tasks on the basis of control tables that are generated from the specification models, as described in the previous section. This middleware concept is also applied to service communities to conduct composition logic management. Figure 2.6 shows how trust negotiation controllers are deployed as middleware between the requester and the provider. Note that the requester and the provider may each have their own controllers and trust negotiation policies. The arrow between the requester and provider boxes symbolizes that the trust negotiation controllers are transparent to the services. The services see only the functional messages of the interaction, and the controllers process the messages that are specific to trust negotiation. A service may participate in several interactions (conversations and trust negotiations) simultaneously. New controller instances are spawned whenever new interactions begin, and each controller instance is dedicated to a particular interaction. This controller instance processes all the incoming and outgoing messages related to that interaction, according to the rules in the control table. This allows the controllers to monitor all messages for the entire duration of the interactions. By keeping track of the status of these requests and by having access to the relevant control tables, the controller object is able to check whether received and sent messages are in accordance with policy and to detect when a given transition should be triggered.
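A controller instance of the kind described above can be sketched as a small interpreter over control tables: it intercepts each message for its interaction, looks up the table for the current state, and either triggers a transition or rejects the message. The class name and the condition representation are illustrative assumptions.

```python
# Sketch of a controller instance enforcing a policy as middleware: one
# instance per interaction, driven by generated control tables. Illustrative.

class ControllerInstance:
    def __init__(self, control_tables, initial_state):
        # control_tables: state -> {event: (condition callable, target state)}
        self.tables = control_tables
        self.state = initial_state

    def on_message(self, event):
        """Process one intercepted message for this interaction."""
        rule = self.tables.get(self.state, {}).get(event)
        if rule is None:
            return f"error: {event!r} not allowed in state {self.state}"
        condition, target = rule
        if condition():  # e.g. validate attributes of a disclosed credential
            self.state = target
            return f"moved to state {target}"
        return "error: condition not satisfied"

tables = {"A": {"CredentialDisclosure: ID": (lambda: True, "B")}}
c = ControllerInstance(tables, "A")
print(c.on_message("CredentialDisclosure: ID"))  # moved to state B
print(c.on_message("Purchase"))                  # error: 'Purchase' not allowed in state B
```

In the architecture described here, only non-compliant messages produce errors; permitted invocations would be forwarded untouched to the service, keeping enforcement transparent.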
Figure 2.6 Trust Negotiation Controllers as middleware. [At the service level, the requester and the provider Web Service interact directly; at the controller level, a negotiation controller on each side enforces that side's policy.]
Figure 2.7 An interaction controller. [A single controller holds the conversation protocol / trust negotiation policy and spawns one instance per requester: instances for requesters R1, R2, and R3.]
Figure 2.7 shows an interaction controller that sits between a service requester and a service provider to intercept all messages being sent to the service and respond by taking the necessary action (e.g., requesting a credential). Once an invocation is permitted, it is forwarded to the service for processing. This means that the operation of the controllers is completely transparent to the Web Service. The service sees only the invocations of its service operations; all other messages related to conversations or trust negotiations are handled by the respective controllers. The advantage of this architecture is that no modifications need to be made to the service itself to add support for the enforcement of protocols. Instead, a middleware component handles the protocol enforcement. In the case of trust negotiation policies, the controller depends on other middleware components to provide additional functionality. For instance, validation of credentials is handled by a validator component. The validator performs the task of verifying expiration dates and signatures on credentials. In the implementation, this functionality may be programmed into the validator itself, or the validator may depend on other Web Services to provide it. This choice highlights some of the advantages that SOAs (Service-Oriented Architectures) provide for application integration. In summary, the concept of middleware can considerably simplify service development and enforcement efforts.

2.4.3 Analysis and Management
We define a set of operators—for compatible composition of protocols, intersection of protocols, differences between protocols, and projection of a protocol on a given role—to help assess
commonalities and differences between protocols, as well as whether two protocols can interact with each other [7]. The proposed operators take protocols as their operands and return a protocol as their result (e.g., a composition of two protocols leads to a new protocol that describes the possible interactions between the composed protocols). The proposed operators can be useful in several tasks related to management and analysis of business protocols, including protocols’ compatibility and replaceability checking. For example, the compatible composition operator allows one to understand if and how two services can conduct correct conversations by looking at their protocols. Other operators, such as intersection and difference, allow one to address the problem of understanding how “similar” two protocols are. They provide means to assess whether a service supporting a certain protocol can be used (provided) as an alternative to another service supporting a different protocol. Figure 2.8(a, b, and c) shows graphical representations of business protocols P1, P2, and P3. Each transition is labeled with a message name, followed by a message polarity that defines whether the message is incoming (+ sign) or outgoing (− sign). (Note that we omitted the message polarity in the previous section for readability reasons.) Figure 2.8(d) shows the common aspects of protocol P1 of figure 2.8(a) and protocol P3 of figure 2.8(c), which is a result of an intersection operator. This commonality describes part of the two protocols that can be used interchangeably. In addition, figure 2.8(e) shows all the possible interactions between protocol P1 of figure 2.8(a) and P2 of figure 2.8(b) by using a compatible composition operator. For trust negotiation policies, we have provided extensive change management support. Since trust negotiations may be long-lasting, policies may need to be changed while there are ongoing negotiations. 
[Figure 2.8: Examples of protocol management operators — (a) a protocol P1, (b) a protocol P2, (c) a protocol P3, (d) the intersection of P1 and P3, (e) the compatible composition of P1 and P2. Transitions carry message names with polarities (+ for incoming, − for outgoing), over states such as Logged, BookFound, Ordering, Invoicing, Payment Processing, and Shipped.]

These changes are often caused by changing business strategies (e.g., due to the emergence of new competitors). Changes to laws and regulations also force enterprises to update their policies. We provide a set of change operations that can be applied to modify trust negotiation policies, such as adding and removing states and transitions. These change operations satisfy consistency criteria, which means that when an operation is applied to a valid policy, the result is always a valid policy. In addition to the change operations, we have identified strategies that help developers to manage ongoing trust negotiations when there is a change in the policy. These strategies allow the negotiation to continue according to the old policy, abort the negotiation, or move the instance to the new policy. When there is a need or desire to change the trust negotiation policy, the following steps are taken. First, the old policy is transformed into the new policy, using change operations implemented in the modeling tool. Second, the control tables for the new policy are generated, using the code generation tool. Third, the new policy is loaded into the trust negotiation controller. At this point, new trust negotiations will commence according to the new policy. However, existing negotiations that followed the old policy must be taken care of. Though the easiest choice is simply to allow these negotiations to continue according to the old policy, this is not always appropriate, especially if the policy change was forced by law. Instead, a strategy selection policy is written by a developer. This selection policy contains a set of rules that determine which migration strategy to apply, freeing the developer from the burden of examining trust negotiation instances one by one. By applying the strategy selection policy, existing trust negotiations can be dealt with in a manner that suits the service provider.

2.5
Related Work
In this chapter, we argue that abstracting, modeling, and managing service artifacts will benefit several activities related to the service life cycle. Model-driven development of applications is a well-established practice [8]. For example, Basin et al. [9] propose a model-driven security approach, called SecureUML, to allow developers to specify system models along with their security requirements, and to use tools to automatically generate system architectures, including security infrastructures, from the models. Ceri et al. [10] introduce a Web modeling language, called WebML, which identifies layers of abstraction specific to Web applications and provides tools based on these abstractions to support the Web application life cycle. However, the contributions of these approaches are not specific to Web Service development.
In terms of managing the Web Service development life cycle, technology is still in the early stages. There has been little concern so far regarding methodologies and tools for conceptual modeling and development of services. Service development tools (e.g., BPWS4J, Collaxa) that support emerging standards and protocols have started to appear. Existing modeling approaches tend to embody a particular aspect of services. To the best of our knowledge, there is no existing work that considers a wide range of service life cycle activities. For example, Di Nitto et al. [11] focus on generating executable process descriptions from UML process models. An approach that provides generation rules from UML activity diagrams to BPEL processes has been proposed by Gardner et al. [12]. Several efforts recognize aspects of protocol specifications in component-based models [13] [14]. These efforts provide models (e.g., pi-calculus-based languages for component interface specifications) and mechanisms (e.g., compatibility checking) that can be generalized for use in Web Service protocol specifications and management. Indeed, several efforts in the general area of formalizing Web Service description and composition languages have emerged [15] [16] [17]. Again, existing approaches focus on a particular aspect of the service life cycle. While modeling and analyzing individual aspects helps to curb some complexity, modeling several aspects and analyzing relationships between models at different abstraction layers is important to ensure continuity, traceability, and consistency between models.

2.6
Conclusions
Web Services are emerging as a promising technology for the effective automation of interorganizational interactions. Although a lot of work has already been done and progress has been achieved in the area of Web Services in the last few years, the focus has been on basic service interoperability issues, such as description languages and interaction protocols. As the technology becomes more advanced, several issues must be addressed to provide Web Services with benefits similar to what traditional middleware brings to intraorganizational application integration. In our attempt to provide support for the service life cycle, we propose a methodology and tools for the model-driven development and management of Web Services. We focus on modeling and specifying service abstractions: composition logic, business protocols, and trust negotiation policies. Based on these models, the service life cycle can be automated, from development through analysis, management, and enforcement. We believe that this work will result in a comprehensive methodology and platform that can support large-scale interoperation of Web Services and facilitate the service life cycle. This will foster the widespread adoption of Web Service technology and of service-oriented applications.
References

[1] B. Benatallah, F. Casati, and F. Toumani. Web Service conversation modeling: A cornerstone for e-business automation. IEEE Internet Computing, 8(1):46–54 (January/February 2004).

[2] B. Benatallah, Q. Sheng, and M. Dumas. The Self-Serv environment for Web Services composition. IEEE Internet Computing, 7(1):40–48 (January/February 2003).

[3] Q. Z. Sheng, B. Benatallah, M. Dumas, and E. Mak. SELF-SERV: A platform for rapid composition of Web Services in a peer-to-peer environment (demo paper). In Proceedings of the 28th Conference on Very Large Data Bases (VLDB 2002), pp. 1051–1054. Morgan Kaufmann, Hong Kong, 2002.

[4] H. Skogsrud, B. Benatallah, F. Casati, and M. Q. Dinh. Trust-Serv: A lightweight trust negotiation service (demo paper). In Proceedings of the 30th Conference on Very Large Data Bases (VLDB 2004), pp. 1329–1332. Morgan Kaufmann, Toronto, 2004.

[5] K. Baïna, B. Benatallah, F. Casati, and F. Toumani. Model-driven Web Service development. In Advanced Information Systems Engineering: Proceedings (CAiSE), pp. 290–306. LNCS 3048. Springer, New York, 2004.

[6] B. Benatallah, F. Casati, H. Skogsrud, and F. Toumani. Abstracting and enforcing Web Service protocols. International Journal of Cooperative Information Systems, spec. iss. on service-oriented modeling, 13(4) (December 2004).

[7] B. Benatallah, F. Casati, and F. Toumani. Analysis and management of Web Services protocols. In Proceedings of the 23rd International Conference on Conceptual Modeling (ER 2004), pp. 524–541. LNCS 3288. Springer, Berlin, 2004.

[8] S. Mellor, A. N. Clark, and T. Futagami, eds. Spec. iss. on model-driven development. IEEE Software, 20(5) (2003).

[9] T. Lodderstedt, D. A. Basin, and J. Doser. SecureUML: A UML-based modeling language for model-driven security. In Proceedings of the Fifth International Conference on the Unified Modeling Language, pp. 426–441. LNCS 2460. Springer, 2002.

[10] S. Ceri, P. Fraternali, and A. Bongio. Web Modeling Language (WebML): A modeling language for designing Web sites. In Ninth International World Wide Web Conference, The Web: The Next Generation. Elsevier Science, New York, 2000.

[11] E. Di Nitto, L. Lavazza, M. Schiavoni, E. Tracanella, and M. Trombetta. Deriving executable process descriptions from UML. In International Conference on Software Engineering (ICSE 2002), pp. 155–165.

[12] T. Gardner et al. Draft UML 1.4 Profile for Automated Business Processes with a Mapping to the BPEL 1.0. IBM Alpha Works, 2003.

[13] C. Canal, L. Fuentes, E. Pimentel, J. M. Troya, and A. Vallecillo. Adding roles to CORBA objects. IEEE Transactions on Software Engineering, 29(3):242–260 (March 2003).

[14] D. M. Yellin and R. E. Strom. Protocol specifications and component adapters. ACM Transactions on Programming Languages and Systems, 19(2):292–333 (March 1997).

[15] T. Bultan, X. Fu, R. Hull, and J. Su. Conversation specification: A new approach to design and analysis of e-service composition. In Proceedings of the 12th International World Wide Web Conference (WWW2003), pp. 403–410. ACM, 2003.

[16] M. Mecella, B. Pernici, and P. Craca. Compatibility of e-services in a cooperative multi-platform environment. In Technologies for E-Services, Second International Workshop, TES 2001, Rome, Italy, Proceedings, pp. 44–57. LNCS 2193. Springer, New York, 2001.

[17] R. Hull, M. Benedikt, V. Christophides, and J. Su. E-services: A look behind the curtain. In Proceedings of the International Symposium on Principles of Database Systems (PODS), pp. 1–14. ACM Press, New York, 2003.
3
Realizing Service-Oriented Architectures with Web Services
Jim Webber and Savas Parastatidis
3.1
Introduction
(Note: This chapter was originally submitted in 2004–2005 and captures our research work and views at that time.) Service orientation is the contemporary architectural paradigm of choice for developing distributed applications. The paradigm is based on the concepts of “services” and “messages” where applications orchestrate message exchanges between their constituent services. Although the underlying services and message abstractions may seem trivial, they are the basis for constructing loosely coupled systems which are scalable and reliable. Though the principles of service orientation can be applied to the implementation of applications using many distributed computing technologies, it is the advent of XML, together with the set of existing and emerging Web Services technologies and their surrounding hype, that has refocused attention on service-oriented architectures. This chapter introduces a message-oriented architectural paradigm for Web Services, consisting of three distinct views that decouple the architecture layer (service-oriented) from the protocol layer (message-oriented) and the implementation layer (event-driven). The importance of messages and message exchange patterns is emphasized across all three views. This approach is exemplified by the construction of a simple instant messaging application which highlights protocol design and implementation issues. In addition to setting out the architectural layers of message-oriented Web Services, a set of architectural and implementation guidelines are presented. These guidelines show how to avoid common software pitfalls by adhering to a number of deliberately simple design principles which encompass architecture, protocol, and implementation. 
The chapter is structured as follows: section 3.2 discusses the evolution of Web Services technology from an object-oriented integration technology through to the beginning of service-oriented computing; section 3.3 concretizes what is meant by SOA and defines the key characteristics and constraints of that architectural style; section 3.4 shows how the principles of SOA are embodied by modern Web Services practices; section 3.5 layers a number of software engineering best practices on top of the MEST architectural style in order to facilitate the creation of dependable services and networks of services; section 3.6 exemplifies the approach outlined
in previous sections, using instant messaging (IM) to illustrate the salient concepts; section 3.7 highlights alternative approaches to the SOAP-centric method outlined in this chapter; and section 3.8 draws the chapter to a conclusion. 3.2
Historical Perspective
The origins of Web Services have been largely disregarded as the state of the art advances. However, early practice in the field has shaped the discipline as we know it today, and it is therefore important to understand how Web Services developed in order to understand current methods. In the early days of Web Services, the focus of the technology was point-to-point integration of applications in a heterogeneous computing environment. At that time Web Services tool kits encouraged developers to use SOAP [1] (usually employing the SOAP-RPC convention) to invoke methods on remote objects in a classic client-server pattern. The SOAP 1.0 and 1.1 specifications provided a set of rules for marshaling and unmarshaling of graphs of application-level objects, as well as the familiar envelope structure, which together formed a crude but workable XML-based RPC system. It was precisely the ability to bridge disparate systems using platform-independent technologies, such as SOAP and Web Services Description Language (WSDL) [2], which initially brought Web Services technology to prominence. Developers had access to a set of technologies that seemed similar to contemporary distributed-object technologies, where SOAP was considered to be an object invocation protocol and WSDL to be analogous to an object Interface Description Language. Initially this seemed to be a natural use of the technology, since the term SOAP was an acronym for Simple Object Access Protocol. (The SOAP 1.2 specification [3] subsequently stated that SOAP is no longer an acronym.) This familiar analogy was, and to a large extent still is, echoed by Web Services development tool kits. WSDL interfaces were used to generate class skeletons and stubs, WSDL operations were treated as being analogous to object methods, and Web Services development paralleled the development of distributed object systems.
As the state of the art in Web Services advanced, patterns began to emerge from the community which sought to avoid some of the known issues with object-based systems. In particular, scalability (because of state management issues) and application robustness (because of tight coupling between client and server due to shared abstractions such as classes) were thought to be problematic. The Web Services community looked to refine its techniques to better support large, loosely coupled applications, on the premise that while technologies such as SOAP and WSDL are sound, distributed object-based architectural patterns were untenable for Internet-scale computing [4]. As experience grew and tooling improved, the consensus became that a move to service orientation as the underlying architectural paradigm, coupled with appropriate programming models, might avoid some of the problems associated with large-scale distributed object-based systems.
3.3
Service Orientation
While service orientation is not a new architectural paradigm, the advent of Web Services has reinvigorated interest in the approach. However, as researchers and developers have rebranded their work to be in vogue with the latest buzzwords, the term Service-Oriented Architecture (SOA) has become overloaded. Therefore, before discussing how to build service-oriented applications using Web Services, the fundamental constituents of SOA must be pared from the hyperbole surrounding it. In a Service-Oriented Architecture there are two fundamental abstractions from which all higher-level functionality is built: services and messages. In the absence of an existing, well-accepted definition, we define a service in a deliberately minimal fashion, as follows: A service is the logical manifestation of some physical or logical resources (such as databases, programs, devices, humans, etc.) and/or some application logic that is exposed to the network. Service interaction is facilitated by message exchanges.
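This deliberately minimal definition can be made concrete with an equally minimal sketch. The names below (`Message`, `EchoService`, `process_message`) are illustrative inventions, not the chapter's code: a service is simply an entity that consumes and produces messages.

```python
# Minimal sketch of the two fundamental SOA abstractions: messages and
# services. All names here are hypothetical stand-ins.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Message:
    """A self-describing document passed between services."""
    action: str                       # what the sender asks of the recipient
    body: dict = field(default_factory=dict)


class EchoService:
    """A service is just an entity that sends and receives messages."""

    def process_message(self, msg: Message) -> Optional[Message]:
        # The single conceptual entry point: take delivery of a message
        # and decide what, if anything, to send back.
        if msg.action == "echo":
            return Message(action="echo-response", body=dict(msg.body))
        return None  # a one-way message: nothing to send back


reply = EchoService().process_message(Message("echo", {"text": "hello"}))
```

Note that the service's consumer sees only the message shapes it exchanges, never the service's internal types, which anticipates the schema-and-contract principle discussed later in this section.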
Using these fundamental abstractions, we can discuss the following:

1. The architecture of a computing system based on services.
2. The application and infrastructure protocols that use document-centric message exchanges to ferry information between those services.
3. The implementations for each service which embrace and exploit those protocols.

These three layers of a service-oriented computing system will be addressed in turn.

3.3.1
The Architecture View
The architecture view encompasses all of the major functional units of a system (the services and the administrative domains which host them) and a high-level view of that system's choreography (the message exchanges between the services), as exemplified in figure 3.1.

[Figure 3.1: The Service-Oriented Architecture of a system — services within administrative domains exchanging messages across domain boundaries over the network.]

The legacy of the last few decades of distributed computing is that most distributed systems are based on some form of remote procedure call. Though there have been advances, such as the REST architectural style [5] [6], which aim to support large-scale integration through a uniform interface, there is still the lingering assumption that services must also support "calls," "operations," or "actions" with application-specific semantics. Put differently, there exists a false impression that services are "callable" entities, and the fact that technologies for building SOAs continue to use terms such as "interfaces" and "operations" (e.g., WSDL 2.0 [7]) has helped to promote this view. However, such terms are unhelpful when thinking about service orientation, since a service is simply an entity that sends and receives messages. Instead, a service should be seen as an entity that supports a single logical operation, called (somewhat arbitrarily) "ProcessMessage," which enables message exchanges by taking delivery of the message from the network and making it available to the service for processing. This operation exists only conceptually and has a uniform semantics across all services; a call to the ProcessMessage operation represents the transfer of a message from a sender to a recipient and a request to process that message. It is a minor leap of intuition to see that ProcessMessage provides means for comprehending message exchange protocols, and is the fundamental building block for the creation of more sophisticated message exchange patterns. When a service-oriented system is realized, the ProcessMessage operation is replaced by an appropriate mechanism from the underlying transport technology (e.g., socket receive(), SMTP commands, HTTP POST, etc.), but the semantics of message transfer remains the same.

3.3.2
The Protocol View
Though the architecture view is a high-level model of the service-oriented system, the protocol view defines the message formats and message exchange patterns (schema and contract) for an individual service within the context of the larger computing system. The protocol view provides a more detailed view of a smaller aspect of the system architecture and is exemplified in figure 3.2. The design of protocols for message exchanges between services, such as that shown in figure 3.2, is governed by the following principles: boundaries are explicit; services are autonomous; services share only schema and contract; and policies determine service compatibility [8] [9]. These principles are intended to confer a number of beneficial characteristics on service-oriented applications including, but not limited to, robustness, ease of maintenance, scalability, and reliability. Ignoring one or more of the principles leads to the development of services which lack these vital characteristics [9]. These are now discussed in turn.
[Figure 3.2: Protocols constrain interactions with a service — SOAP envelopes, each with a header and a body, flow to and from the service in one-way and request-response message exchange patterns.]
Boundaries Are Explicit
The boundaries of a service are well defined. Other services do not see the internal workings, implementation details, or resources of a service. These remain private to the administrative domain which hosts the service. Communication between services takes place through the exchange of messages which explicitly cross service boundaries. The use of explicit message-passing helps to maintain the strictest levels of encapsulation, which keeps service implementations completely decoupled from their consumers.

Services Are Autonomous
Services are designed, developed, and evolve independently from each other, allowing their internals to change without affecting other services that make use of them. This places the additional requirement on each service that the formats of the messages and protocols not change over the service's lifetime. Deviating from this principle almost certainly ensures that a service's consumers will fail due to message and/or protocol incompatibility. Once a change in a service's implementation is reflected in its contract, it is effectively a new service. Maintenance and evolution of a service's implementation should not alter the observed behavior of the service.
Services Share Schema and Contract, Not Implementation
There is no single set of implementation-specific abstractions (i.e., type system) which spans an entire application. The format of the messages the service is prepared to send and receive is described using a schema language, which should be seen as a mechanism to constrain the layout of messages and not as a platform-neutral type system, even where the schema language supports both approaches (e.g., XML Schema [10]). The way in which sequences of messages form interactions or conversations (i.e., the message exchange patterns) is determined by the contract to which the service adheres. The schema and the contract of a service are made available beyond its boundaries and are shared with other services. It is critical to note that schema and contract are the only information proffered to other services. Failure to abide by this principle couples the implementation of services. Inevitably, changes in one aspect of such a tightly coupled system ultimately have a ripple effect throughout the remainder of the system. While this might be acceptable where a single administrative domain owns an entire application, this is not generally the case with Service-Oriented Architectures, where it should be inconceivable that a change in the implementation of one service will affect the execution of a service from another administrative domain.

Policies Determine Service Compatibility
Services interact with each other only after it has been determined that they can meaningfully exchange information. The metadata which describe the prerequisites for service interactions are known as policy, and services determine whether they can meaningfully interact based on the form of each service's policy assertions. Whereas schemas and contracts constrain messages and message exchange patterns, policies describe the nonfunctional requirements (e.g., for security, transactions, dependability, etc.) that must be satisfied by services if they are to attempt interaction. Failure to agree on policy will, in the general case, lead to services that are compatible in terms of schema and protocol being unable to participate in message exchanges (e.g., one service requires an encrypted message exchange which is unsupported by the other).

3.3.3
The Implementation View
The implementation level focuses on the functionality of an individual service and how that service is designed to facilitate the required message exchanges for the protocols it supports. The structure of a typical service is shown in figure 3.3. It consists of resources (e.g., data, programs, devices, etc.), service logic, and a message-processing layer.

[Figure 3.3: The anatomy of a service — a stack of resources, service logic, and message processing, with messages entering and leaving at the bottom.]

• The uppermost layer in the stack represents the collection of resources which may be used by the logic embodied in the service's implementation. Typically such resources consist of computer memory, operating system resources, (transactional) databases, devices, other computer systems and services, and even humans.

• The service logic layer contains the functionality which defines the service. The canonical behavior for the service logic is to receive a notification of message arrival from the messaging layer, which is used (perhaps in combination with data from its resource layer) to perform some service-specific work, which may result in further message exchanges. The granularity of that functionality (the sum of message processing, the execution of the application logic, and the state updates within the resource layer) may be of any scale, from a single operating system process to an organization-wide business process.

• The messaging layer provides programmatic abstractions for the service logic to exchange messages with other services. The arrival of a message at the service normally causes the message to be validated against the service's contract by the messaging layer (although there may be situations where validation does not occur until farther up the stack), then dispatched up the stack and delivered to the service logic layer. Importantly, the messaging layer exposes the underlying protocol to the service logic, such that the logic can be designed to tolerate the behaviors, imperfections, and subtleties inherent in the messaging protocol the service supports. It is possible that in local-area or tightly coupled systems, protocols could be hidden behind higher-level abstractions such as method calls, since message transmission has low latency and low risk of failure. However, in the general case, robustness is improved if the implementation is designed to tolerate latency, loss of messages, and so forth.

The layering of messaging, logic, and state is inherently similar to the classic three-tiered architecture, with the presentation layer replaced by a layer that deals with messaging. Although a canonical service may resemble that of figure 3.3, the internal architecture of any implementation of a service is kept hidden from the outside world. The point within a service where messages arrive and leave represents an "event horizon" beyond which no assumptions can be made. Following the path of a message which originates from the service logic and ultimately crosses that service's boundary leads us back to the protocol view of the system (section 3.3.2), and ultimately back to the architecture view (section 3.3.1).
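The layered anatomy just described can be sketched as follows. This is an illustrative toy, not the chapter's implementation: the layer classes, the contract set, and the message shapes are all hypothetical stand-ins. The messaging layer validates an arriving message against the service's contract before dispatching it up the stack to the service logic, which in turn may touch the resource layer.

```python
# Sketch of the service anatomy in figure 3.3: a message-processing layer
# that validates and dispatches, service logic, and a resource layer.
# All names here are hypothetical.

class ResourceLayer:
    """Stands in for databases, devices, etc.; here, a simple store."""
    def __init__(self):
        self.orders = {}


class ServiceLogic:
    """Service-specific work, possibly touching resources."""
    def __init__(self, resources):
        self.resources = resources

    def handle(self, message):
        order_id = message["body"]["order_id"]
        self.resources.orders[order_id] = "received"
        # Returning a message (or None) may trigger further exchanges.
        return {"action": "OrderAck", "body": {"order_id": order_id}}


class MessagingLayer:
    """Takes delivery of messages from the transport, validates them
    against the contract, and dispatches them up the stack."""
    CONTRACT = {"PurchaseOrder"}      # message types the service accepts

    def __init__(self, logic):
        self.logic = logic

    def deliver(self, message):
        if message.get("action") not in self.CONTRACT:
            return {"action": "Fault", "body": {"reason": "unknown message"}}
        return self.logic.handle(message)


resources = ResourceLayer()
service = MessagingLayer(ServiceLogic(resources))
ack = service.deliver({"action": "PurchaseOrder", "body": {"order_id": "42"}})
```

Note how `deliver` plays the role of the conceptual ProcessMessage operation from section 3.3.1: in a realized system it would be invoked by a transport mechanism (an HTTP POST handler, a queue listener), but the semantics of message transfer stays the same.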
3.4
Web Services
Until this point we have considered service orientation as an abstract architectural style, which has introduced services and messages as the fundamental abstractions, and discussed service orientation at the architectural, protocol, and implementation levels. In order for an SOA to be truly useful, as opposed to a software architectural plaything, we need to understand how to apply the principles which it embodies with real implementation technologies, such as Web Services. In this section we will examine how a typical Web Service is designed, implemented, and hosted in a runtime environment to provide scalability and loose coupling while promoting ease of deployment and robustness to change. We will also review how common Web Services specifications such as SOAP [3] [11] and WSDL [7] can be used to form the message-oriented underlay for Internet-scale software systems, and discuss how enterprise quality-of-service features can be integrated into the base architecture to provide features such as privacy, message integrity, and transactionality. 3.4.1
The Web Services Architecture
The Web Services Architecture (WSA) is the conceptual architecture which contextualizes the various Web Services technologies (often called WS-*) and clarifies their interrelationships (e.g., [12] [13]). A version of the WSA stack is presented in figure 3.4, which includes a representative subset of the plethora of Web Services protocols that have been proposed for supporting quality of service (QoS) in Web Services-based applications.

Realizing Service-Oriented Architectures with Web Services

Figure 3.4 The Web Services stack (adapted from [14]). [The stack, bottom to top: transport (inter-process communication, TCP, UDP, HTTP, SMTP, JMS, etc.); transfer (SOAP); messaging (addressing, notification, enumeration: WS-Addressing, WS-MessageDelivery, WS-Eventing, WS-Notification, WS-Enumeration); and quality of service (transactions: WS-AT/BA, WS-TransactionManagement; reliable messaging: WS-ReliableMessaging; security: WS-Security, WS-Trust, WS-SecureConversation; metadata: WSDL, Policy).]

SOAP is the technology which distinguishes the Web Services architecture from preceding distributed computing architectures, since SOAP is the default message transfer protocol for Web Services. Though it might seem contentious to some, SOAP is the defining protocol for Web Services and the lowest protocol in the WSA stack. The headers of a SOAP envelope make it possible for higher-level Web Services protocols (security, reliable messaging, transactions, and so forth) to be cleanly integrated into the underlying message exchanges. Header blocks for each protocol are processed independently, allowing software agents to perform protocol-defined actions upon arrival of a message at a Web Service. Similarly, when a service sends a message, protocol-specific agents may add any necessary header blocks and rewrite sections of the message where appropriate.

The contents of SOAP headers are not fixed, which allows Web Services to effectively determine their own protocol stack. The stack is expressed within the headers of the messages a service exchanges, whereas the list of supported or required protocols is advertised in the Web Service's metadata (e.g., policy documents, WSDL contracts).

The key aspects which unite the protocol specifications included in the Web Services architecture are the following:

• Composability Protocol specifications are created in isolation, though they can be composed to achieve some aggregate behavior (e.g., reliable transmission of private message exchanges within the scope of a transaction). That is, while a given specification may refer to other specifications in order to provide additional optional functionality, there are no mandatory dependencies between such specifications.

• Message orientation A protocol specification defines the protocol in terms of messages and message exchange patterns. The specification does not place any additional demands on the architecture or service implementations.
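The header-block mechanism can be made concrete with a small sketch. The following Python snippet builds a SOAP envelope in which two independent protocol header blocks coexist; the reliability header and its namespace are hypothetical inventions for illustration, and the WS-Addressing namespace shown is the later W3C one, which may differ from the versions cited in this chapter.

```python
import xml.etree.ElementTree as ET

SOAP = "http://www.w3.org/2003/05/soap-envelope"  # SOAP 1.2 envelope namespace
WSA = "http://www.w3.org/2005/08/addressing"      # WS-Addressing (W3C) namespace
RM = "urn:example:reliability"                    # hypothetical reliability protocol

def build_envelope(to, message_id, body_xml):
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    # Each protocol contributes its own header block; the blocks are
    # independent of one another and of the body, so a security or
    # transaction handler could add further blocks without touching these.
    ET.SubElement(header, f"{{{WSA}}}To").text = to
    ET.SubElement(header, f"{{{WSA}}}MessageID").text = message_id
    ET.SubElement(header, f"{{{RM}}}SequenceNumber").text = "1"
    body = ET.SubElement(env, f"{{{SOAP}}}Body")
    body.append(ET.fromstring(body_xml))
    return env

env = build_envelope("http://example.org/chat", "urn:uuid:1234",
                     "<chat xmlns='urn:example:im'>hi</chat>")
xml = ET.tostring(env, encoding="unicode")
```

Because every header block is self-describing, an intermediary that understands only the addressing block can process it and ignore the rest, which is exactly the composability property described above.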
Any WS protocol which possesses these properties can be used in combination with any other WS protocol to support the exchange of messages with specific quality-of-service characteristics defined by those protocols. However, the loosely coupled nature of the Web Services approach means that a given system has no central design authority which can enforce behavior consistent with the quality-of-service protocols across an entire application. Instead, it falls to each Web Service to interpret and process quality-of-service protocol messages correctly. Though this may initially seem daunting, the reality is that each Web Service provider must acquire or implement only those protocols which the Web Service under consideration requires. All other protocols and specifications can be safely ignored, hence reducing workload and simplifying the protocol stack of that Web Service.
Jim Webber and Savas Parastatidis
Transport Independence

Although the term "Web Services" may suggest the use of HTTP as the transport of choice, which was certainly true during the early stages of Web Services proliferation, today no such assumption exists. Though the term "Web Services" has been preserved for historical reasons, most Web Services specifications are defined only in terms of SOAP message exchanges, independent of any particular transport, as it is understood that Web Services are decoupled from the Web.

Transport-independent SOAP messaging creates the requirement for message-level addressing. Protocols such as WS-Addressing [15] and WS-MessageDelivery [16] allow the addressing information of the recipient of a message to be placed into the SOAP envelope as header blocks which, during message transfer, are bound to the addressing mechanisms of the underlying transport protocols. As a result, SOAP messages can navigate arbitrary networks, utilizing a variety of protocols, as exemplified in figure 3.5.

Message Routing

Building on transport independence and an extensible message transfer mechanism, other higher-level messaging protocols, such as multicast and reliable message delivery, can be defined (e.g., WS-Eventing [17], WS-Enumeration [18], WS-Notification [19]). Such protocols support useful patterns which allow Web Services to exchange information beyond traditional point-to-point message exchanges.

Metadata

The metadata and policy element of the stack in figure 3.4 governs the way in which information about services is described. In particular, WSDL describes the message formats and message exchange patterns in which a service is willing to participate, and optional policies define constraints on the content of the permissible messages (e.g., to support nonrepudiable, transacted, or encrypted message exchanges) or other quality-of-service characteristics.
Figure 3.5 Embedded addresses enable SOAP transport independence. [The figure shows a logical point-to-point SOAP message transfer between the message-processing layers of two Web Services, while the actual transfer crosses transport intermediates over HTTP, TCP, and JMS.]
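The embedded-address mechanism of figure 3.5 can be sketched as follows. The scheme-to-transport mapping and the function names are illustrative assumptions, not part of any WS specification; the point is that the envelope itself never changes, only the transport binding chosen from the address it carries.

```python
from urllib.parse import urlparse

# Stand-ins for real transport bindings; each returns a record of what it
# would have done with the unmodified envelope.
def send_via_http(endpoint, envelope): return ("http", endpoint)
def send_via_jms(endpoint, envelope):  return ("jms", endpoint)
def send_via_smtp(endpoint, envelope): return ("smtp", endpoint)

TRANSPORTS = {"http": send_via_http, "jms": send_via_jms, "mailto": send_via_smtp}

def send(envelope, wsa_to):
    # The recipient address travels inside the message (as a wsa:To header
    # block would); the transport is selected per hop from its URI scheme.
    scheme = urlparse(wsa_to).scheme
    return TRANSPORTS[scheme](wsa_to, envelope)

print(send("<soap:Envelope.../>", "jms://queue/chat"))  # → ('jms', 'jms://queue/chat')
```

An intermediary could rebind the same message to a different transport for the next hop without rewriting its addressing headers.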
Access to a Web Service's metadata has traditionally been an ad hoc affair, with the consensus being that issuing an HTTP GET on the Web Service's URL should retrieve a WSDL contract for that service. Such a mechanism, even if it were standardized, is transport-specific, and thus suboptimal for Web Services. In an attempt to rectify that shortcoming, the WS-MetadataExchange [20] specification has been proposed as a SOAP-friendly (and thus transport-neutral) means of retrieving WSDL contracts and policies from their associated Web Services.

Quality-of-Service Protocols

Web Services would not be an effective means for developing dependable computing systems without protocols that provide suitably robust levels of quality of service. These protocols address important aspects of dependable systems by adding message-based security (WS-Security [21]), reliable messaging (either WS-ReliableMessaging [22] or WS-Reliability [23]), and transactions (WS-AtomicTransaction/BusinessActivity [24] [25], WS-TransactionManagement [26], or BTP [27]), among others, to the underlying message-oriented architecture. The discussion here is deliberately confined to security, reliable message delivery, and transactions; however, the principles espoused are equally applicable to other quality-of-service protocols.

Though "security" is a broad term, in the Web Services arena discussions of security are necessarily confined to the context of message transmission. WS-Security enables private, tamperproof, authenticated, and nonrepudiable message exchanges between Web Services [28]. These properties are tremendously important when considering that message exchanges may occur via arbitrary networks, including the Internet, and it could be disastrous if confidential information were leaked or if the contents of a message could be altered en route.
It is similarly important to determine the sender of a message, since it may affect whether and how the receiving Web Service processes that message.

On the assumption that the contents of message headers can be made tamperproof (which they can, using WS-Security), schemes have been developed to allow Web Services middleware to mask failures in the underlying network in order to give best-effort delivery of messages to Web Services (e.g., WS-ReliableMessaging). The key to such schemes is once again the adornment of messages with metadata headers which the underlying protocol implementations use to organize transmission and, where appropriate, retransmission of messages.

In addition to reliable messaging and security, transactional support is a typical requirement for dependable computing. Since Web Services lack the tight coupling that enables classic ACID transaction processing, the meaning of "transaction" differs in this context. The arrival of a message bearing a transaction context at a Web Service generally means "do not (logically) act on this message until you receive confirmation to do so," with variations on that theme which dictate how soon the results of the message exchange may be seen in other messages the Web Service produces. This is the strongest semantic that a Web Services transaction protocol can enforce without the collusion of local resource managers, which a Web Service would generally be foolhardy to expose. Therefore, a Web Services-based application cannot expect to process consistent and up-to-date data, and must accept that data outside the service (conveyed in messages) are simply a snapshot of data within the service. The comfort afforded by classic transaction processing techniques is not available; only an indication of whether or not certain messages were logically processed is.

The WS protocols do not make Web Service implementations secure, reliable, or transacted; they define the information that must be included in message exchanges so that developers of Web Services can implement the desired behavior, which may differ according to the requirements of a specific implementation or deployment.

3.4.2 The Anatomy of a Web Service
The implementation of a Web Service combines a number of WS protocols with some service-specific logic in order to deliver the intended functionality to the network. Like the abstract architecture of a service (see figure 3.3), the canonical architecture of a Web Service is a multitiered artifact built from message processing, service logic, and resource layers hosted in a runtime environment, as shown in figure 3.6. The hosting environment can be anything from a single operating system process, to a Web server (e.g., IIS [29] or Apache [30]), to an entire application server farm deployment (e.g., based on WebSphere [31]). The hosting environment needs only to provide an execution context for the message-processing capabilities, such that processing can be instigated in response to the receipt of messages.

The transport layer deals with routing of messages to and from the messaging layer. It is typically a piece of infrastructure-specific middleware which implements one or more communications protocols (e.g., in-memory exchange, TCP, UDP, SMTP, HTTP, etc.).

The function of the message-processing layer is to take delivery of SOAP messages from the underlying transport and apply any necessary transformations or protocol-specific actions based on the contents of each message. It must also provide programmatic abstractions which reveal the underlying messaging activity. The service logic is able to bind to the underlying application and infrastructure protocols via those abstractions, as shown in figure 3.7. Additionally, the message-processing layer provides the necessary execution context for the Web Service logic by extracting and processing protocol-specific headers from SOAP messages as they arrive. It also introduces protocol-specific headers, according to deployment- or service-specific requirements, into all outgoing SOAP messages.
A typical message processor (see figure 3.7) will allow "handlers" or "plug-ins" to process SOAP messages along two logical message-processing pipes, one for incoming and one for outgoing messages. Any propagation of information from the handlers to the service implementation is performed out-of-band from the processing of the message body; typically this is achieved by sharing a programmatically accessible execution context between the handlers and the service logic. Each of the handlers typically implements a SOAP intermediary (as per the SOAP processing model), some deployment-specific requirement (e.g., logging of messages), the logic of a protocol from the Web Services architecture stack (e.g., WS-Security), or some other task, such as message validation (e.g., WSDL and policy validation).
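A minimal sketch of such a handler pipeline in Python (only the incoming pipe is shown; the handler names and the shape of the shared context are assumptions, and a real SOAP stack would operate on envelopes rather than dictionaries and strings):

```python
class Context(dict):
    """Out-of-band execution context shared between handlers and service logic."""

class SecurityHandler:
    # Stands in for a WS-Security handler: consumes its header block and
    # records the (purported) sender in the shared context.
    def process(self, headers, body, ctx):
        token = headers.pop("security-token", None)
        ctx["sender"] = token or "anonymous"

class LoggingHandler:
    # A deployment-specific handler: records message bodies as they pass.
    def process(self, headers, body, ctx):
        ctx.setdefault("log", []).append(body)

class Pipeline:
    def __init__(self, handlers):
        self.handlers = handlers
    def deliver(self, headers, body, service_logic):
        ctx = Context()
        for h in self.handlers:          # incoming pipe: one handler per protocol
            h.process(headers, body, ctx)
        return service_logic(body, ctx)  # logic sees the body plus the context

pipeline = Pipeline([SecurityHandler(), LoggingHandler()])
result = pipeline.deliver({"security-token": "savas"}, "hello",
                          lambda body, ctx: f"{ctx['sender']} said {body}")
print(result)  # savas said hello
```

Note that each handler touches only its own header and communicates with the service logic exclusively through the shared context, mirroring the out-of-band propagation described above.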
Figure 3.6 The canonical architecture of a Web Service. [The figure shows SOAP messages arriving from the transport (TCP, UDP, HTTP, SMTP, JMS, in-memory exchange, etc.), passing through a message-processing layer (SOAP, WSDL, and other WS-* protocol processing) to the Web Service logic (e.g., BPEL, Java, .NET), which draws on resources such as humans, programs, computational resources, databases, and devices, all within a hosting environment.]
Separating service logic and the processing of WS protocols allows the functional requirements of the Web Service to be addressed in relative isolation from the nonfunctional requirements. This decoupling permits the quality-of-service aspects of a Web Service to evolve independently from that service's implementation, and allows different quality-of-service characteristics to be applied to different Web Service endpoints which share the same logic and implementation.

3.4.3 Embracing Message Orientation in Web Services
The programming abstractions supported by the message-processing layer of a typical Web Service will vary across implementations. Though some Web Services tool kits are beginning to support message orientation as the appropriate paradigm for building distributed applications (e.g., WSE [32], Indigo [33], JWSP [34]), most existing tool kits still focus on hiding the messaging aspects of Web Services (e.g., Axis [35], ASP.NET [36]). Although abstractions such as representing services as objects and message exchanges as method invocations may appeal to many programmers because they are familiar and comfortable, such abstractions are inherently inappropriate in a large-scale distributed system where high latency, communications glitches, and partial failures are the norm. The method-call paradigm, for example, does not highlight the difference between invoking a method on a local object and exchanging messages with a remote Web Service. Such approaches potentially limit scalability, because of tight coupling between state and identity; robustness, because of inappropriate binding of method semantics onto arbitrary underlying protocols; and ease of maintenance, because tight coupling arises between implementations. Furthermore, the method invocation paradigm does not allow programmers to reason in terms of the extended message exchange patterns that are possible with Web Services, such as asynchronous delivery, notifications, third-party delivery, message streaming, and so forth.

Figure 3.7 The architecture of a SOAP message processor. [The figure shows a message processor within a hosting environment: incoming and outgoing SOAP message-processing pipes, each a chain of handlers operating on the SOAP envelope (headers and body), with programming abstractions exposing the messaging activity to the Web Service logic.]
It is important to understand that messages should not be considered "containers" for serialized objects, parameters, or their equivalents, but first-class abstractions in their own right. As messages are one of the two fundamental abstractions of a service-oriented architecture (the other being services), it is wrong to relegate them to mere plumbing or to misconstrue them as a marshaling format for higher-level application abstractions. It is from elevating (rather than hiding) the concept of messages and message exchanges in the Web Service implementation that the benefits of message orientation emerge.

Embracing message orientation within Web Services means that implementations should reason in terms of (programmatic abstractions of) messages and message exchange patterns. Message orientation emphasizes the importance of crossing service boundaries. Thus developers are encouraged to construct Web Services in terms of message interactions, and to develop implementations whose logic is tolerant of the underlying messaging environment.

The message-oriented approach is clearly a departure from location-transparent distributed object system implementations or the Web Service-as-object approach. The API through which the service implementation binds to the messaging infrastructure reflects the explicit message-passing nature of the underlying system. In this case, choosing explicit message-passing is beneficial [37] because it reinforces the difference between local object invocations, with their implied performance and reliability characteristics, and message exchanges, which may involve remote and potentially latent third parties with different performance and reliability characteristics. Though it may be argued that forcing developers to mix message-passing with their preferred local programming scheme adds complexity, that complexity is mitigated by the following:

1. The underlying message exchanges cannot, in the general case, be dependably abstracted behind facades such as synchronous method calls.

2. The extent to which message-oriented code pervades a software project is minimized to only those functional units which deal with messaging. All other code is unaffected.

Explicit message-oriented programming further helps to isolate the implementation of the Web Service logic from its consumers. This allows a service to be maintained without disrupting communication with its consumers, provided that the message exchanges which the service implementation supports do not change. If a service were to expose information about its implementation to consumers by using the network to directly transfer application-level objects (or pointers to application-level objects), those objects or pointers would have to be maintained for the remainder of the Web Service's lifetime. It is not difficult to see how such a burden is a significant load to carry throughout the maintenance phase of a potentially long-lived system. Therefore, though policy descriptions [38] and semantics [39] of an action may augment a service's WSDL contract, they should expose the intent of the message exchanges which a Web Service supports, and not physical service characteristics or resources.
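An explicit message-passing API of the kind described might look like the following Python sketch (class, method, and action names are invented for illustration): sending is a visible act on a message value, and the service logic is written as per-action handlers rather than methods on a proxy object.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    action: str                          # identifies the kind of message
    body: dict
    headers: dict = field(default_factory=dict)

class MessagingService:
    """Service logic binds per-action message handlers; sending is explicit,
    so crossing the service boundary is always visible in the code."""
    def __init__(self):
        self._handlers = {}
        self.outbox = []                 # stands in for the transport layer
    def on(self, action, handler):
        self._handlers[action] = handler
    def send(self, message):
        self.outbox.append(message)      # may fail, be delayed, be retried...
    def receive(self, message):
        reply = self._handlers[message.action](message)
        if reply is not None:            # in-out exchange: emit the reply
            self.send(reply)

svc = MessagingService()
svc.on("chat-invitation",
       lambda m: Message("chat-invitation-acceptance", {"name": "jim"}))
svc.receive(Message("chat-invitation", {"name": "savas"}))
print(svc.outbox[0].action)  # chat-invitation-acceptance
```

Because the handler consumes and produces whole messages, the same code accommodates one-way, request-response, and more elaborate exchange patterns without pretending to be a local method call.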
3.5 Best Practices for Building Web Services
The discussion on service orientation and its differences from other architectural paradigms has led to an implicit association between SOAs and scalability, robustness, and loose coupling. This is a misconception: such characteristics are not necessarily implicit in service-oriented architectures and, most important, cannot be assumed in any Web Services-based application. Careful consideration and good planning are necessary when designing a distributed application in order to satisfy such requirements. Next we identify and discuss a number of design guidelines which emerged during our investigation of service-oriented architectures and the use of Web Services technologies.

3.5.1 Statelessness
The architecture of a supremely loosely coupled and scalable system such as the Web [5] indicates the value of stateless interactions. Statelessness is one of the enabling principles of the World Wide Web infrastructure [40] where . . . each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. Session state is therefore kept entirely on the client. [5]
For the purposes of service orientation, we can usefully reinterpret the above principle as follows: Messages received by the service implementation should contain sufficient information for the implementation to establish the execution context for that message. Session state must be computable from the contents of the messages a service exchanges.
Statelessness does not mean that services cannot contain and manage state. Undoubtedly the existence of state is the reason for the existence of most Web Services. Instead, this principle simply advances the notion that messages carry the information required by the receiving services to reestablish the context of an interaction, in addition to the information required by the service logic associated with that interaction. The execution context may be embedded directly into the message (e.g., a session identification element) or may be derived by the service logic after the contents of the message have been processed (e.g., by retrieving data from the storage tier based on some business rules). In practice, statelessness constrains the implementation of the service. It means that the service is able to execute its logic using the contents of a message and any pertinent data in the state management layer. This implies that service implementations should actively manage only state which is related to the action that is currently being performed, and should be prepared to recompute that state safely in the event of a failure and subsequent retry. Any state which needs to persist between message exchanges should not be held within the service implementation itself, but should instead be devolved to a persistent data store from which the service can store and retrieve it.
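A small Python sketch of this constraint, under the assumption that context travels in the message and persistent state lives in a storage tier (all field names are illustrative): the handler recomputes its execution context from each message and is safe to retry after a failure, because duplicate deliveries are detected from the message itself.

```python
# Stands in for a recoverable enterprise data store; in a real deployment
# this would be shared by every replica of the stateless service.
session_store = {}

def handle_chat(message):
    chat_id = message["chat-id"]              # context travels in the message
    session = session_store.setdefault(chat_id, {"seen": set(), "transcript": []})
    if message["msg-id"] not in session["seen"]:   # idempotent under retry
        session["seen"].add(message["msg-id"])
        session["transcript"].append(message["text"])
    return list(session["transcript"])

handle_chat({"chat-id": "c1", "msg-id": "m1", "text": "hey"})
handle_chat({"chat-id": "c1", "msg-id": "m1", "text": "hey"})  # retried delivery
print(session_store["c1"]["transcript"])  # ['hey']
```

Since the handler keeps nothing between calls, any replica behind a router can process the next message for the same conversation.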
Figure 3.8 Scalability and fail-over fault tolerance for stateless service implementations. [The figure shows a server farm hosting stateless services behind a gateway/router within an administrative domain: each node stacks application logic, message processing, and Web Services middleware (SOAP stack), with all persistent state held in a shared recoverable data store.]
Service implementations should be stateless in order to enable simple deployment of multiple copies of the service, enhancing scalability and fail-over fault tolerance. If service implementations are designed to be stateless, then they can be deployed easily on server farms, and thus be implicitly scalable and resilient to individual server failures (as depicted in figure 3.8). If the service implementation manages its own persistent state, then this not only complicates the implementation but also reduces the scope for fail-over fault tolerance and scalability. Since the field of data management is mature, there are many enterprise-quality solutions available which implement storage functionality with the required quality-of-service characteristics (scalability, fault tolerance, etc.). Such characteristics are often difficult to implement, which reinforces the point that they should be devolved to the underlying enterprise data store, and not implemented in the service logic layer.
3.5.2 Dispatching
If the details of the mechanism used to dispatch messages to the service logic are reflected in the format of the messages or the message exchange patterns of the service (e.g., by using methods such as WSDL operations), then an unnecessary level of coupling is introduced. Effectively, details of an implementation are exposed beyond the boundaries of a service, which compromises the autonomy of the service. Anything which is declared in a WSDL contract must be supported throughout the lifetime of a Web Service; therefore, minimal contracts which reveal only messages and message exchange patterns are preferable.

3.5.3 Coupling
Architects should keep in mind that in a service-oriented architecture there are no actors such as "consumer," "provider," "client," or "server." These are roles that exist at the application level, not among the building blocks of the architecture. In SOAs there are only services which exchange messages. Treating a pair of services as client/server introduces a form of coupling which may ultimately be difficult to break.

Furthermore, though in theory any SOAP style is valid, we would advise against using SOAP-RPC (i.e., rpc/encoded SOAP), because it encourages transmission of application-level objects as parameters to a procedure call and lures developers back into the mind-set of treating services as objects. Instead, it is a more natural fit for the messages that are exchanged to resemble the kinds of business documents that the service's owner deals with. Thus, rather than encoding graphs of objects into SOAP messages, we suggest that unencoded documents be exchanged (i.e., use of document/literal-style SOAP). This advice is underscored by the fact that SOAP-RPC is optional (effectively deprecated) in SOAP 1.2 and the rpc/encoded style is not supported by the WS-I Basic Profile 1.0a [41].

3.6 Instant Messaging
An instant messaging (IM) application constitutes a useful exemplar for message-oriented Web Services. It enables two or more participants to exchange messages in an asynchronous manner.

3.6.1 Chatting Across the Internet
The protocol for instant messaging is very simple, consisting of three stages. The initial stage involves sending an invitation message to a participant to join a conversation, to which the remote user responds with a message of acceptance or declination (or does not reply at all). This initial exchange is also used to determine the code name of each participant, as shown in figure 3.9.

The second stage of message exchanges consists of the text exchanged between the participants in a conversation. A message is sent to all the participants in the conversation. This stage continues until the participants decide to end the conversation. Additional participants may be added to an existing conversation through an invitation, as shown in figure 3.10. A conversation in progress between two participants, along with the message exchanges, is shown in figure 3.11. In the final stage of message exchanges, one participant informs the others that it is exiting the conversation, as shown in figure 3.12.

Figure 3.9 Initiating a conversation. [savas to jim: "hey, do you want to chat? I am savas."; jim to savas: "Yes, why not. I am jim."]

Figure 3.10 Invitation to join an existing conversation. [savas to simon: "Hey, would you like to join the conversation with jim and savas?"; simon: "Yes, why not. I am simon."]

Figure 3.11 A conversation in progress. [savas: "Hey, how r u?"; jim: "Not too bad. Enjoying the weather."; savas: "How r things with u?"]

Having understood the basic concepts of exchanging messages in an IM conversation, we can now investigate how message orientation works in practice, using the IM protocol and implementation as an example.
Figure 3.12 A participant leaving the conversation (the message to only one of the other participants is shown). [simon to jim: "I have to go."]
Figure 3.13 Contract-first development process. [The figure shows development driven from the WSDL contract: Web Services middleware (SOAP stack), message processing, application logic, and a recoverable data store.]
3.6.2 Contract-First Development
When building applications using Web Services, we can rely only on their contracts and policies to determine the manner in which they can be composed (section 3.3.1). Therefore, it makes sense to think of the development of a Web Service in terms of the messages and the message exchange patterns that will be supported, and to use them as the starting point of the development process. For example, figure 3.13 illustrates how the IM application was developed. We started by distilling the message exchanges into a WSDL contract; we then selected the hosting environment and the required protocol specifications; and then we constructed the application logic. We ensured that any state maintained would be recoverable in case of failures by factoring state management code into a separate module, keeping the application logic stateless and thus easy to deploy and scale. Only then did we develop a graphical user interface through which participants in a conversation interact with the IM application. The approach to building other Web Services applications is similar, regardless of whether they are a sophisticated enterprise-to-enterprise integration or something as trivial as an instant messaging application.

The XML Schema element declaration in figure 3.14 describes the body of the chat-invitation message, which supports the invitation for a conversation and includes the code name of the initiating user, and of the chat-invitation-acceptance message, which captures the identity of the conversation and the details of the participant.
Figure 3.14 XML Schema elements for chat-invitation and chat-invitation-acceptance messages.
Figure 3.15 The XML Schema element declaration for the chat message.
An invitation to participate in an existing conversation between two or more participants does not require an additional message type. The same chat-invitation will have to be sent by the inviting participant to the invited participant, but this time the information on all existing participants will have to be conveyed, which is the reason for allowing an unbounded number of participant elements.

The structure of the message conveying the text of the conversation is defined through the complex type shown in figure 3.15. When a participant wishes to exit a conversation, it may send a chat-termination message, whose schema is presented in figure 3.16, to all other participants in the same conversation. On receipt of such a message, the exiting participant is removed from the list of active participants in the conversation.
Figure 3.16 The XML Schema elements for the chat-termination message.
Table 3.1 Message Exchanges

Messages                                          Interaction style
                                                  Message originator          Message receiver
chat-invitation and chat-invitation-acceptance    Out-In (solicit-response)   In-Out (request-response)
chat                                              Out (one-way)               In (one-way)
chat-termination                                  Out (one-way)               In (one-way)
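Since a single hosting environment may serve many simultaneous conversations, each incoming message from table 3.1 must be scoped to the right conversation by the unique chat-id it carries. A Python sketch of such content-based correlation (the field names and dispatch logic are illustrative, not the chapter's schemas): note that the conversation table is private to the service, so no object reference or resource pointer ever crosses the service boundary.

```python
class Conversation:
    def __init__(self):
        self.participants = set()
        self.messages = []

conversations = {}   # chat-id -> Conversation, held inside the service

def correlate(message):
    # Establish the execution context purely from message content.
    convo = conversations.setdefault(message["chat-id"], Conversation())
    if message["type"] == "chat-invitation-acceptance":
        convo.participants.add(message["participant"])
    elif message["type"] == "chat":
        convo.messages.append(message["text"])
    elif message["type"] == "chat-termination":
        convo.participants.discard(message["participant"])
    return convo

correlate({"chat-id": "c42", "type": "chat-invitation-acceptance", "participant": "jim"})
correlate({"chat-id": "c42", "type": "chat", "text": "hey, how r u?"})
print(len(conversations))  # 1
```

Messages bearing different chat-id values would populate separate Conversation entries without any change to the dispatch code.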
It should be noted that all of the messages described above include a chat-id element carrying a unique identifier, which is used to contextualize the messages. Given that a hosting environment might host multiple simultaneous conversations, the identifier is used to scope the message exchanges to a particular IM conversation. Put differently, the chat-id is used as a content-based message correlation mechanism, rather like correlation in WS-BPEL [42]. The chat-id does not act in any way as a pointer or reference to a specific resource inside the service; it is simply used by the Web Service to establish an execution context for its associated message exchange.

Finally, it is not suggested that the contract-first approach is the only way a Web Service may be implemented. There are many situations, especially when legacy systems have to be integrated using Web Services technologies, in which the implementations of the Web Service logic and the back-end stores already exist. However, even in such cases, the contract should be designed so that the architectural principles discussed in section 3.3.1 are not violated.

3.6.3 Capturing and Enforcing Message-Oriented Contracts
Having understood the structure, content, intent, and exchange patterns of the messages, we are now in a position to develop a more formalized contract for our IM Web Service. This is where WSDL comes into play, being the prominent notation for expressing message-level contracts for Web Services. For the IM example, our WSDL contract needs to capture the message exchanges enumerated in table 3.1.
Note that since any single interaction may involve two services which both play complementary roles, and hence both can be initiators of and reactors to messages, the message exchanges are symmetric (an out-only is matched by an in-only, etc.). These messages and message exchanges can be rendered as the WSDL 2.0 shown in figure 3.17.

To assist in the understanding of the separate application-specific roles that the Web Service must support, the (partial) WSDL contract shown in figure 3.17 consists of three interfaces: im:initiator:Participant, im:reactor:Participant, and im:service:InstantMessaging. Each interface captures a different aspect of the service's message-level behavior, as follows:

• im:initiator:Participant The im:initiator:Participant interface describes the "active" aspects of the service: the messages which the service may send without prior receipt of a message to act as a stimulus.

• im:reactor:Participant The im:reactor:Participant interface describes the "passive" aspects of the service: the messages which the service expects to receive from external sources, and possibly (as in the case of the ChatInvitation operation) the response messages with which the service is obliged to respond.

• im:service:InstantMessaging The im:service:InstantMessaging interface aggregates the im:initiator:Participant and im:reactor:Participant interfaces into a complete functional interface for all aspects of an instant messaging Web Service.
The key point in understanding this (and, for that matter, any other) WSDL contract is that the interface should be considered as an indivisible whole, not in a piecemeal operation-by-operation fashion (this applies equally to the equivalent mechanisms in WSDL 1.1). The emphasis is on an architectural model which describes message exchanges, rather than on operations which are invoked with parameters and return results. The WSDL contract shown in figure 3.17 dictates the following behavior:

• The im:initiator:Participant interface's ChatInvitation operation mandates that a chat-invitation message may be emitted from a service, which will then expect a corresponding chat-invitation-acceptance message in response.

• The im:reactor:Participant interface's ChatInvitation operation specifies that a chat-invitation message may be received by a service, in which case the service will respond with a corresponding chat-invitation-acceptance message.

• The Chat operation in the im:initiator:Participant interface states that a service may emit a chat message at any point.

• The Chat operation in the im:reactor:Participant interface states that a service may receive a chat message at any point.

• The im:initiator:Participant interface's ChatTermination operation asserts that a service may emit a chat-termination message at any point.
74
Jim Webber and Savas Parastatidis
Figure 3.17 WSDL elements describing the message exchange patterns for an IM Web Service.
Realizing Service-Oriented Architectures with Web Services
75
• The im:reactor:Participant interface's ChatTermination operation asserts that a service expects to receive a chat-termination message at any point.
It is noteworthy that the WSDL contract presented above is an example of the general use case for Web Services, where instead of having the fixed notion of client and server, Web Services adopt both consumer and service provider roles at different times during their deployment.

3.6.4 Mapping Message-Oriented Contracts onto an API
The form and nature of the programming abstractions which bridge the Web Services domain (messaging, contracts, and so forth) and the implementation of the service logic can have a significant impact on the implementation's architecture and robustness. Traditionally, Web Services tool kits have offered functionality for presenting remote Web Services as local objects, often leveraging WSDL to automatically generate stubs for the developer to code against. This model abstracts the underlying message exchanges into the method-call paradigm which is familiar to programmers. Like RPC systems, the Web Services-as-objects model treats messages as subordinate to the local programming experience. We know, however, that treating remote objects as local objects is considered harmful [37], and we can extrapolate that treating remote services as objects is similarly harmful. Instead, a metaphor which more closely reflects the actual activity in the underlying Web Services-based application should be used. Since Web Services rely on explicit message passing, it is natural to formulate a message-oriented API to bridge the gap between the service implementation and the wider Web Services-based application. Such a model not only reinforces the developer's view that a significant boundary is being crossed when a message is sent or received, but also encourages the development of code which is more resilient to the kinds of failures that interacting with remote autonomous entities may involve. Contrast the example code shown in figure 3.18 and figure 3.19. (We use C#-based pseudocode in the examples.) In figure 3.18, the abstraction presented to the developer of the Web Service is that of a class from which local objects can be instantiated. Message exchanges are hidden behind method calls, completely masking the behavior of the underlying system.
This makes it difficult for developers to explicitly reason in terms of the underlying message exchanges and write code that is aware of possible network delays or failures. There are similar problems when implementing a Web Service. A simple strategy for deploying Web Services in the past was to expose the methods of a class directly to the network as Web Services “operations.” This means that the WSDL contract and the implementation became tightly coupled, and thus any changes to the implementing class’s interface would break the contract, which is highly undesirable from a maintenance point of view. Also, the message exchanges were once again hidden from the developer. The synchronous nature of the method invocations presented in figure 3.18 hides necessary details from developers. For this reason, tool kits have now started to present asynchronous
// Consumer-side stub
public class ImServiceClient
{
    public ImServiceClient(Uri serviceUri) { /* ... */ }
    public ChatInvitationAcceptance ChatInvitation(ChatInvitation invitation) { /* ... */ }
    public void Chat(Chat chat) { /* ... */ }
    public void ChatTermination(ChatTermination termination) { /* ... */ }
}

// Service-side stub
[WebService]
public class ImService
{
    [WebMethod]
    public ChatInvitationAcceptance ChatInvitation(ChatInvitation invitation) { /* ... */ }

    [WebMethod]
    public void Chat(Chat chat) { /* ... */ }

    [WebMethod]
    public void ChatTermination(ChatTermination termination) { /* ... */ }
}
Figure 3.18 A traditional object-oriented abstraction for Web Services.
versions of the same methods in order to deal with network latencies. However, it is preferable to design the service's message exchanges with asynchrony and loose coupling in mind, and to use appropriate tooling for the implementation. By contrast, the API style presented in figure 3.19 is much more suitable. The messaging layer code presented in figure 3.19 exposes the underlying message exchanges without sacrificing ease of programming. Instead of hiding the details of the underlying message-oriented architecture behind an object facade, it requires the developer to consider the implementation in terms of the exchanged messages. For example, passing an instance of the ChatInvitationMessage class into the SendMessage() method causes that method to send a ChatInvitation message and return immediately. At some point later, the processing logic will asynchronously receive confirmation that the conversation has been started, through a subscription to the ChatInvitationAcceptanceMessageArrived event. Similarly, chat messages can be sent through the SendMessage() method (as and when the application logic sees fit) and received via the ChatMessageArrived event (sensibly leaving the application logic to determine whether a message is in sequence or not). Such decoupling means that at no point is the behavior of the Web Service code bound directly to the behavior of the underlying messaging protocols, which yields useful improvements in both stability and robustness. Such an approach does not mean that the information about the logical correlation of the messages implied by the WSDL operations is lost. The message-processing infrastructure could make this information available, where appropriate, to programmers through information passed
public class Message : IXmlSerializable
{
    // ...
}

public class ChatInvitationMessage : Message { /* ... */ }
public class ChatInvitationAcceptanceMessage : Message { /* ... */ }
public class ChatMessage : Message { /* ... */ }
public class ChatTerminationMessage : Message { /* ... */ }

public class ImWebService
{
    public void SendMessage(Message msg)
    {
        // ...
    }

    public event ChatInvitationDelegate ChatInvitationMessageArrived;
    public event ChatInvitationAcceptanceDelegate ChatInvitationAcceptanceMessageArrived;
    public event ChatMessageDelegate ChatMessageArrived;
    public event ChatTerminationMessageDelegate ChatTerminationMessageArrived;
}
Figure 3.19 A plausible tool-generated implementation of a messaging layer for the IM Web Service.
to the event handlers dealing with the arrival of messages (and a programmatic context). However, since programmers explicitly reason in terms of message exchanges, they can implement message correlation and advanced exchange patterns which cannot be captured in WSDL, by analyzing the content of the messages or by relying on service logic.
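The send-and-subscribe style and the content-based correlation just described can be sketched as follows. This is a minimal, language-neutral analogue (in Python) of the C# messaging layer in figure 3.19; the class names, the `conversation_id` field, and the dispatch mechanism are illustrative assumptions, not part of the chapter's API.

```python
# Illustrative sketch of an event-driven messaging layer with
# content-based correlation; all names are assumptions.

class Message:
    def __init__(self, conversation_id, body=""):
        self.conversation_id = conversation_id  # used to correlate exchanges
        self.body = body

class ChatInvitationMessage(Message): pass
class ChatInvitationAcceptanceMessage(Message): pass
class ChatMessage(Message): pass

class ImWebService:
    """Exposes message exchanges directly: sending returns immediately,
    and replies arrive later via subscribed event handlers."""
    def __init__(self):
        self.handlers = {}   # message type -> list of callbacks
        self.outbox = []

    def subscribe(self, msg_type, handler):
        self.handlers.setdefault(msg_type, []).append(handler)

    def send_message(self, msg):
        self.outbox.append(msg)  # fire-and-forget: no blocking on a reply

    def on_message_arrived(self, msg):
        # Dispatch by message type; matching a reply to the conversation
        # it belongs to is done by the handler from the message content.
        for handler in self.handlers.get(type(msg), []):
            handler(msg)

# Application logic correlates by inspecting message content:
service = ImWebService()
pending = {}  # conversation_id -> state

def on_acceptance(msg):
    pending[msg.conversation_id] = "established"

service.subscribe(ChatInvitationAcceptanceMessage, on_acceptance)
service.send_message(ChatInvitationMessage("conv-42"))
pending["conv-42"] = "awaiting-acceptance"

# Later, the infrastructure delivers the asynchronous reply:
service.on_message_arrived(ChatInvitationAcceptanceMessage("conv-42"))
print(pending["conv-42"])  # -> established
```

Note that the correlation key travels inside the message rather than being implied by a blocked method call, which is what lets the application logic survive delayed or out-of-order replies.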
3.7 Alternative Approaches
The notion of message orientation is not new, and indeed has played a major role in modern integration work; message-oriented middleware (MOM) [43] is well entrenched in modern enterprise systems. Enterprise Application Integration (EAI) [43] tool kits, and more recently the Enterprise Service Bus (ESB) [44], have been the subject of a great deal of debate, much of it stemming from the vendor community. While there is clearly merit in software which can deal with routing, transformations, reliable delivery of messages, and so forth, these technologies and SOA make for strange bedfellows.

• Centralized versus distributed integration The ESB approach logically centralizes integration onto a bus, and the bus handles issues of routing, security, reliable delivery, and so forth. Conversely, Web Services-based approaches decentralize integration. Each Web Service implements only the functionality it requires to complete its own work, and therefore may deliberately not support transactional coordination while simultaneously mandating that certain message encryption standards be used. The key to understanding how this works is that SOAP messages form the equivalent of a "bus" in a Service-Oriented Architecture, and that services, including their quality-of-service characteristics, evolve independently from one another and from any plumbing which underlies them.

• Proprietary processing models versus the SOAP processing model Communication between Web Services happens by the transfer of SOAP messages. Those messages can be manipulated by intermediaries according to the SOAP processing model, which sets out a set of global architectural constraints for behavior and interoperability for all Web Services. This is in stark contrast to proprietary integration technology, which, from a vendor perspective, has nothing to gain from ensuring seamless interoperability and standard processing models between rival software vendors.

• Adapters and connectors versus services The paradigm for building connected systems with EAI and ESB technology generally involves connectors and adapters which are hosted within the integration software. This means that the entire communications infrastructure of an enterprise may be held hostage by the owner of the integration technology, in contrast to SOAP-based solutions, which are highly commoditized and supported by an increasing (and increasingly important) set of enterprise applications.
Although EAI and ESB systems may be valid choices for local-area integration solutions [45], the features they offer are increasingly likely to be subsumed into the transport layer (e.g., to provide reliable delivery of messages) within the context of an SOA, rather than to be used as first-tier integration hubs in the future. While the debate rages as to whether EAI or SOA is the better approach, the increasing number of vendors who are opting to expose functionality through SOAP endpoints into their applications has all but ruled out justifying the use of proprietary integration technology in combination with their products. Effectively, EAI and ESB technologies will compete not with SOA and Web Services as an integration technology, but with TCP/IP and message queues, as they become one of many possible transport-level protocols for enabling end-to-end Service-Oriented Architectures.

3.8 Conclusions
This chapter has described the Service-Oriented Architecture in terms of its fundamental building blocks and governing principles, and has highlighted the importance of message-oriented protocols for Web Services. The protocols use descriptive SOAP messages, transported over arbitrary protocols, to support interactions between Web Services. It has also been shown how existing Web Services technologies can be used to design and develop services in this style. The discussion presented here has shown how the focus on message exchanges provides benefits in terms of architectural clarity, loose coupling, scalability, reliability, and robustness in the face of change, both for individual Web Services and for applications composed from networks of Web Services. The message-oriented architectural style is consistent with SOA and provides a clear, simple, and coherent model across all levels of a service-oriented application.

Acknowledgments

We thank Peter Lee, Paul Watson, and Simon Woodman, and our anonymous reviewers, for their valuable feedback during the preparation of this chapter. This work was made possible by funding from the UK Core e-Science Programme, DTI, EPSRC, and JISC, and the North East Regional e-Science Centre.

References

[1] D. Box et al. Simple Object Access Protocol (SOAP) 1.1. May 8, 2000. http://www.w3.org/TR/2000/NOTE-SOAP-20000508.
[2] W3C. Web Services Description Language (WSDL). http://www.w3.org/2002/ws/desc.
[3] W3C. SOAP Version 1.2 Part 1: Messaging Framework, M. Gudgin et al., eds. June 24, 2003. http://www.w3.org/TR/2003/REC-soap12-part1-20030624.
[4] W. Vogels. Web services are not distributed objects. IEEE Internet Computing, 7(6):59–66 (2003).
[5] R. T. Fielding. Architectural Styles and the Design of Network-based Software Architectures. Ph.D. dissertation, University of California, Irvine, 2000.
[6] R. T. Fielding and R. N. Taylor. Principled design of the modern Web architecture. ACM Transactions on Internet Technology, 2(2):115–150 (2002).
[7] W3C. Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language, R. Chinnici et al., eds. August 3, 2004. http://www.w3.org/TR/2004/WD-wsdl20-20040803.
[8] D. Box. Service-Oriented Architecture and Programming (SOAP): Part 1 & Part 2. 2003. MSDN TV archive.
[9] P. Helland. Data on the Outside vs. Data on the Inside: An Examination of the Impact of Service-Oriented Architectures on Data. 2004. http://msdn.microsoft.com/architecture/default.aspx?pull=/library/en-us/dnbda/html/dataoutsideinside.asp.
[10] W3C. XML Schema. 2001. http://www.w3.org/XML/Schema.
[11] W3C. SOAP 1.2 Part 2: Adjuncts. W3C recommendation, M. Gudgin et al., eds. June 24, 2003. http://www.w3.org/TR/2003/REC-soap12-part2-20030624.
[12] L. F. Cabrera, C. Kurt, and D. Box. An Introduction to the Web Services Architecture and Its Specifications. 2004. http://msdn.microsoft.com/webservices/default.aspx?pull=/library/en-us/dnwebsrv/html/introwsa.asp.
[13] W3C. Web Services Architecture, D. Booth et al., eds. February 11, 2004. http://www.w3.org/TR/2004/NOTE-ws-arch-20040211.
[14] Microsoft. Web Services Specifications Index. http://msdn.microsoft.com/webservices/understanding/specs.
[15] W3C. Web Services Addressing (WS-Addressing). http://www.w3.org/2002/ws/addr.
[16] W3C. WS-MessageDelivery Version 1.0, A. Karmarkar et al. W3C member submission, April 26, 2004. http://www.w3.org/Submission/2004/SUBM-ws-messagedelivery-20040426.
[17] D. Box et al. Web Services Eventing (WS-Eventing). 2004. http://msdn.microsoft.com/webservices/understanding/specs/default.aspx?pull=/library/en-us/dnglobspec/html/ws-eventing.asp.
[18] Web Services Enumeration (WS-Enumeration). 2004. http://msdn.microsoft.com/library/en-us/dnglobspec/html/ws-enumeration.pdf.
[19] OASIS. Web Services Notification (WS-Notification). http://www.oasis-open.org/committees/wsn.
[20] K. Ballinger et al. Web Services Metadata Exchange (WS-MetadataExchange). 2004. http://msdn.microsoft.com/library/en-us/dnglobspec/html/ws-metadataexchange.pdf.
[21] OASIS. Web Services Security (WS-Security). http://www.oasis-open.org/committees/wss.
[22] R. Bilorusets et al. Web Services Reliable Messaging (WS-ReliableMessaging). 2004. http://msdn.microsoft.com/webservices/understanding/specs/default.aspx?pull=/library/en-us/dnglobspec/html/ws-reliablemessaging.asp.
[23] OASIS. Web Services Reliable Messaging (WS-Reliability). http://www.oasis-open.org/committees/wsrm.
[24] L. F. Cabrera et al. Web Services Atomic Transaction (WS-AtomicTransaction). 2008. http://www.service-architecture.com/web-services/articles/web_services_atomictransaction_ws-atomictransaction.html; http://msdn.microsoft.com/webservices/understanding/advancedwebservices/default.aspx?pull=/library/en-us/dnglobspec/html/wsat.asp.
[25] L. F. Cabrera et al. Web Services Business Activity (WS-BusinessActivity). 2005. http://schemas.xmlsoap.org/ws/2004/10/wsba; http://msdn.microsoft.com/webservices/understanding/specs/default.aspx?pull=/library/en-us/dnglobspec/html/wsba.asp.
[26] OASIS. Web Services Transaction Management (WS-TXM). http://www.oasis-open.org/committees/ws-caf.
[27] OASIS. OASIS Business Transaction Protocol (BTP). http://www.oasis-open.org/committees/business-transaction.
[28] J. Rosenberg and D. Remy. Securing Web Services with WS-Security. SAMS, Indianapolis, 2004.
[29] Microsoft. Internet Information Services (IIS). http://www.microsoft.com/windowsserver2003/iis; http://www.microsoft.com/windowsserver2003/iis/default.mspx.
[30] Apache. HTTP Server Project. http://httpd.apache.org.
[31] IBM. WebSphere. http://www.ibm.com/software/info/websphere.
[32] Microsoft. Web Services Enhancements (WSE). http://msdn.microsoft.com/webservices/building/wse.
[33] Microsoft. Indigo. 2004. http://itmanagement.webopedia.com/TERM/Indio.html.
[34] Sun Microsystems. Java Web Services Developer Pack (Java WSDP). http://java.sun.com/webservices/downloads/previous/webservicespack.jsp.
[35] Apache. Axis. http://ws.apache.org/axis.
[36] Microsoft. .NET. http://www.microsoft.com/net/default.aspx.
[37] J. Waldo et al. A Note on Distributed Computing. Technical report TR-94-29. Sun Microsystems, Mountain View, Calif., 1994.
[38] S. Bajaj et al. Web Services Policy Framework (WS-Policy). 2004. http://msdn.microsoft.com/webservices/default.aspx?pull=/library/en-us/dnglobspec/html/ws-policy.asp.
[39] W3C. Semantic Web. http://www.w3.org/2001/sw.
[40] W3C. Architecture of the World Wide Web, First Edition, I. Jacobs, ed. July 5, 2004. http://www.w3.org/TR/2004/WD-webarch-20040705.
[41] WS-I. Web Services Interoperability (WS-I) Interoperability Profile 1.0a. http://www.ws-i.org.
[42] OASIS. OASIS Web Services Business Process Execution Language. http://www.oasis-open.org/committees/wsbpel.
[43] G. Hohpe and B. Woolf. Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions. Addison-Wesley, Boston, 2004.
[44] D. A. Chappell. Enterprise Service Bus. O'Reilly, Sebastopol, Calif., 2004.
[45] S. Baker. Web Services and CORBA. In On the Move to Meaningful Internet Systems 2002: CoopIS, DOA, ODBASE 2002, R. Meersman et al., eds. Springer, New York, 2002.
4
Service-Oriented Support for Dynamic Interorganizational Business Process Management
Paul Grefen
4.1
Introduction
In the modern business world, we see more and more dynamic cooperation between autonomous organizations. In the past, business organizations often operated in stand-alone mode or relied on rather static networks of cooperating organizations. Current business settings, however, often require more dynamic cooperation between organizations in order to survive. Competition has become fiercer because of the globalization of business, the advent of e-commerce, increased market transparency, et cetera. Products and services to be delivered to customers are, on the one hand, of quickly increasing complexity and, on the other hand, subject to increasingly frequent modifications and replacements. Consequently, we see the emergence of new business models that are focused on this new business context and rely on intensive collaboration between autonomous business entities based on dynamic partnering [26]. In markets where concentration on core business competences is essential, there is the deployment of the dynamic business process outsourcing paradigm, in which organizations outsource their secondary business activities to dynamically selected service providers that specialize in these activities. This paradigm is used, for instance, in financial, insurance, and logistics markets. For example, in the insurance market a firm dynamically outsources its subprocesses related to damage claim assessment [38], typically to increase its efficiency. In markets where the combination of highly specialized business activities is required, there is the formation of dynamic cooperative business process networks (or, seen from the process side, business network processes [27]). Examples of these markets are the automotive industry and the construction industry. In the automotive industry, there is dynamic outsourcing of business subprocesses related to design and delivery of car components [19] [27], typically to handle the growing complexity and change rate of automobiles. 
Essential to fruitful collaboration in dynamically forged business relations is an efficient integration of the business processes of the partners [26]. "Efficient" in this context refers both to the setup of the integration and to the execution of the integrated business process. Clearly, this efficient integration must rely on effective dynamic coupling of the information systems of the collaborating parties. Interorganizational business processes have to rely on interorganizational information processing in this information-intensive era. Whereas in the "old days" coupling of information systems across business boundaries could rely on static technology such as EDI, the new settings require technology targeted toward much more dynamic situations.

The technology that is most promising in this context is Service-Oriented Computing (SOC). As we will see in this chapter, SOC technology needs to be coupled to advanced workflow management (WFM) technology to obtain full-fledged support for dynamic interorganizational business processes. In this combination, SOC technology provides for dynamic, interorganizational interoperability, and WFM technology provides for core business process management. The application of SOC to dynamically integrating business processes can provide means for business organizations to dynamically search for compatible business partners, to set up a business relationship, to dynamically integrate their business process support systems, and to enact their integrated business processes. All of this should be supported with the proper means for monitoring and controlling the progress of matters, taking into account well-defined quality-of-service properties. It should also be supported with the proper means to guarantee business process integrity, taking into account the transactional characteristics of business processes. Quality-of-service and transaction management technology provide additional required functionality with respect to the basic SOC and WFM technology discussed above.

The current state of the art of SOC is focused mainly on integrating business functions and not so much on integrating business processes. Business functions typically have black-box, one-step functionality toward their invoker, whereas business processes have multistep functionality that may require complex interaction with their invoker.
Business functions may encapsulate business processes required to implement these functions, but encapsulated business processes are not visible to the outside world. Dynamic business process integration, however, requires that process characteristics be made explicitly visible on the integration interface. Why this is necessary and how this can be accomplished are the main issues addressed in this chapter. An integrated solution is not yet available, but most ingredients are available or under development in SOC.

The structure of this chapter is as follows. In section 4.2, we investigate the requirements of dynamic business process integration across organizational boundaries. In section 4.3, we compare these requirements with the current state of the art in Service-Oriented Computing—more specifically, the Web Service platform. In section 4.4, we present a model to apply the Web Service platform to include support for the requirements found in section 4.2. In section 4.5, we present a draft of an integration picture, showing how the elements of previous sections can be combined in an abstract architecture. We conclude this chapter in section 4.6, with an overview of the main issues and a brief outlook.

4.2 DIBPM Requirements
In this section, we analyze the requirements of dynamic interorganizational business process management (DIBPM) for systems that support this paradigm. We first describe the nature of
DIBPM. Next, we focus on the role of process structures and the various classes of process structures. Then, we turn our attention to two major aspects of business processes: transactionality and quality of service. Finally, we discuss requirements specific to the dynamic nature of DIBPM, and briefly address electronic contracting. (For a more elaborate discussion, see [25] and [5].)

4.2.1 The Nature of DIBPM
As indicated in the introduction of this chapter, dynamic interorganizational business processes (DIBPs) exist in two main flavors: the kind where one organization acts as a server to another client organization, and the kind where two organizations collaborate in a peer-to-peer fashion. In both models, the aspect of dynamism in collaboration is required to deal with fast-changing market conditions. Both models can be generalized to more than two partners, either by allowing multipartner constructs or by using multiple two-partner constructs (e.g., by transitive business process outsourcing). For the sake of simplicity and brevity, we will limit ourselves to the two-partner case.

In the client/server model, there are client organizations that require server organizations that can perform part of their business processes on their behalf. In other words, client organizations want to outsource business subprocesses to organizations that are more specialized in these subprocesses. Outsourcing allows client organizations to concentrate on their core business aspects. Together, client and server organizations operate in common business service outsourcing markets. These markets may be horizontal (i.e., offering a selection of general-purpose business services, such as logistics) or vertical (i.e., offering a selection of services specific to a certain business domain, such as the insurance or automotive industry).

In the peer-to-peer model, we have organizations that require coupling of their business processes to the processes of other organizations in order to obtain integrated business functionality that is too complex for a single organization. Collaborating organizations form dynamic cooperative business process networks in the contexts of highly specialized markets. These markets are usually vertical.
Although this model is more symmetric than the client/server model, there is one organization that requests (initiates) a collaboration and one organization that responds to this request. In both models, we see the collaborator roles of DIBP initiator (requesting a collaboration) and DIBP responder (answering the request). Initiator and responder share a view on their business process with external autonomous parties. As we will see, in a simple variant of dynamic service outsourcing, the initiator-provided view may be a black box. In other cases, both views will be more complex. These views allow the synchronization of the local (intraorganizational) business processes in a collaboration context. Here, this synchronization is equivalent to the management of the global (interorganizational) business process. Below, we investigate the nature of these views.
Figure 4.1 Three-level business process framework.
4.2.2
Explicit Process Structures
When thinking about models of interorganizational business processes, one should first realize that there are multiple levels at which these models exist. In this chapter, we use a three-level framework for interorganizational business processes [24] that was inspired by the well-known ANSI/SPARC three-level model for database models [41]. The framework is depicted in figure 4.1.

The middle level of the framework is the conceptual level for business process models. At this level, business processes are designed (i.e., their intended functionality is specified in abstract terms). The conceptual level is independent from both (internal) infrastructural specifics and (external) collaboration specifics.

The bottom level is the internal level, at which process models are directly interpreted by process management systems. Hence, process models at this level are in general technology-specific (e.g., described in the specification language of a specific workflow management system). Models at the conceptual level are mapped to the internal level for process enactment (e.g., by workflow management systems). For details of this mapping, see [24].

The top level is the external level, at which process interaction with external parties is modeled. At this level, process models are market-specific (i.e., they have to conform to standards and/or technology used in a specific electronic market). Models at the conceptual level are projected to the external level for integration with the processes of partner organizations to form an interorganizational business process. Projection is used here because only relevant parts of the conceptual model are of interest at the external level. In the projection, process details are hidden by aggregation/abstraction of process steps.
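The projection from the conceptual to the external level can be illustrated with a small sketch: groups of conceptual process steps are aggregated into coarser external-level activities, and steps that are irrelevant to the collaboration stay hidden. The process and the grouping below are invented for illustration; they are not taken from the chapter.

```python
# Illustrative sketch: projecting a conceptual process model onto the
# external level by aggregating/abstracting steps. All names are invented.

conceptual_process = [
    "receive_claim", "check_policy", "assess_damage",
    "calculate_payment", "approve_payment", "archive_file",
]

# Projection spec: external activity -> conceptual steps it aggregates.
# Steps not mentioned (e.g., internal archiving) stay hidden from partners.
projection = {
    "ClaimIntake": ["receive_claim", "check_policy"],
    "Assessment":  ["assess_damage"],
    "Settlement":  ["calculate_payment", "approve_payment"],
}

def project(process, spec):
    """Return the external-level process: one abstract activity per group,
    ordered by where its first conceptual step occurs in the process."""
    order = {step: i for i, step in enumerate(process)}
    return sorted(spec, key=lambda act: min(order[s] for s in spec[act]))

print(project(conceptual_process, projection))
# -> ['ClaimIntake', 'Assessment', 'Settlement']
```

The mapping from the conceptual to the internal level would go in the opposite direction, refining the same conceptual steps into the constructs of a specific workflow management system.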
External-level processes are also referred to as public processes, whereas conceptual-level and internal-level processes are referred to as private processes (see, e.g., [10]). Traditionally, business process structures for automatic processing are specified only at the conceptual and internal levels (or even only at the internal level). Process structures at the external level do not exist, are simple and implicit in software, or are described in textual documents for manual processing. To cater for DIBPM, however, explicit process structures at the external level are required to support automated process-based collaboration between partners, such that dynamically forged interorganizational business processes are adequately supported. We use the following definition of interorganizational business process:

An interorganizational business process is a business process enacted by two or more autonomous organizations, of which at least one exposes the explicit control-flow structure of a non-black-box process to the other organization(s).
This definition states that in an interorganizational business process, at least one party must make a nontrivial (consisting of more than one activity) process structure at the external level accessible to its collaborator(s). In the "more traditional" interorganizational service invocation (as found in the basic Service-Oriented Computing paradigm), we do not see explicit control-flow sharing between organizations (the control flow of a function implementation is kept at the conceptual and internal levels). Below, we describe what "accessible control-flow structure" means by exploring various control-flow interface levels.

4.2.3 Control-Flow Interface Levels
Given the DIBP model with initiator and responder, we distinguish four control-flow interface classes at the external level [24].

• Black box With a black-box interface, the initiator observes the process at the responder as a black box. This means that the consumer has no information about the way the service is executed. A black-box service is modeled by a single activity in the initiator process.

• Glass box With a glass-box interface, the initiator can access the process structure and observe the state of the responder process at the external level, but does not synchronize with it through explicit control flow in either direction. Synchronization (if existent) is implicit in the initiator process.

• Half-open box With a half-open box interface, the state of the responder process is synchronized with that of the initiator process through one or more explicit control-flow relations (i.e., arrows going from initiator to responder, but not in the opposite direction, except for the end-of-service control-flow dependency). This means that the progress of execution at the responder is influenced by that of the initiator, and that the responder's autonomy is consequently reduced.

• Open box In the open-box interface, there can be arbitrary explicit control-flow relations between initiator and responder. The execution progress of both parties depends on each party, and the execution autonomy of both parties is reduced.
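The four classes differ only in which explicit control-flow relations cross the organizational boundary at the external level, so the classification rule can be sketched directly. The edge representation below (pairs of organization names, with the start and end-of-service dependencies excluded) is an assumption made for illustration, not notation from the chapter.

```python
# Illustrative sketch: classify the external-level control-flow interface
# by the direction of explicit cross-organizational control-flow edges.
# Edges are (source_org, target_org) pairs; the start edge and the
# end-of-service dependency are excluded, per the definitions above.

def classify(cross_edges, structure_visible):
    init_to_resp = ("initiator", "responder") in cross_edges
    resp_to_init = ("responder", "initiator") in cross_edges
    if resp_to_init:
        return "open box"       # responder also drives the initiator
    if init_to_resp:
        return "half-open box"  # one-way sync; responder autonomy reduced
    if structure_visible:
        return "glass box"      # observable structure/state, no explicit sync
    return "black box"          # single opaque activity

assert classify([], structure_visible=False) == "black box"
assert classify([], structure_visible=True) == "glass box"
assert classify([("initiator", "responder")], True) == "half-open box"
assert classify([("initiator", "responder"),
                 ("responder", "initiator")], True) == "open box"
```

The asserts mirror the prose: visibility without synchronization gives a glass box, initiator-to-responder edges give a half-open box, and any responder-to-initiator edge (beyond end-of-service) makes the interface an open box.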
88
Paul Grefen
Figure 4.2 Black-box process.
Clearly, the black-box class is not very interesting in the context of the discussion in this chapter; this interface class is actually excluded by our definition of interorganizational business process given earlier. We include it, however, for reasons of completeness. Typically, the glass-box and half-open box classes are found in service outsourcing scenarios, where the glass box caters for the simpler scenarios, and the half-open box for the more complex ones. The open-box class is required for peer-to-peer processes with two-way interactivity at the external level between organizations. Figure 4.2 shows an example of a black-box process. The top half of the figure represents the process initiator, and the bottom half, the process responder. Filled circles represent process steps at the conceptual level, and white circles, the abstracted process steps at the external level. Arrows denote explicit control flow between activities. Dashed lines denote aggregation/abstraction relations between activities (as implied by the projection from the conceptual to the external level). In the black-box model, the process initiator views the responder process as one single activity; process details are completely encapsulated in this black box. Consequently, explicit interorganizational control flow at the external level is related only to the start and the end of the responder process at the external level. No fine-grained process synchronization between collaborators is supported. Figure 4.3 shows a glass-box process. At the external level, the process responder exposes a projection (an abstraction; see also figure 4.1) of its conceptual process to the process initiator. The initiator can observe the execution state of the abstract process and react to it internally. There are no explicit control-flow relations other than those for start and end of the responder process. The projection can aggregate and/or abstract activities at the conceptual level.
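The projection from the conceptual to the external level can be sketched as a simple aggregation mapping. The sketch below is illustrative only (activity names are made up); it mirrors the glass-box situation, in which groups of conceptual activities are aggregated into single external activities:

```python
# Illustrative sketch: project a conceptual-level process onto the
# external level by aggregating groups of activities (cf. figure 4.3).

conceptual = ["a1", "a2", "b1", "b2", "c1", "c2"]  # made-up activity names

# Aggregation map: each external activity abstracts a group of
# conceptual activities; the grouping is chosen by the responder.
projection = {"A": ["a1", "a2"], "B": ["b1", "b2"], "C": ["c1", "c2"]}

def external_view(projection):
    """External-level process: the aggregated activities, in order."""
    return list(projection)

def external_state(projection, completed):
    """An external activity counts as completed only when all conceptual
    activities it abstracts have completed."""
    done = set(completed)
    return {ext: all(a in done for a in group)
            for ext, group in projection.items()}

print(external_view(projection))                       # ['A', 'B', 'C']
print(external_state(projection, ["a1", "a2", "b1"]))  # only 'A' completed
```

The initiator observes only the external activities A, B, C and their (derived) state; the conceptual activities remain hidden.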
In the example responder process, three pairs of activities at the conceptual level are projected onto three single activities at the external level. Figure 4.4 shows an example of a half-open box process. Here, additional control flow exists from initiator to responder between intermediate process steps (not only between start and end steps), allowing the initiator to control the progress of process enactment at the responder in a
Support for Dynamic Business Process Management
Figure 4.3 Glass-box DIBP.
Figure 4.4 Half-open box DIBP.
more fine-grained way; the initiator determines when the responder is allowed to start its second activity at the external level. This is a simple example; interaction in real-world application scenarios may be more complex. In the CrossFlow project, for example, scenarios from the logistics and insurance domains have been elaborated [38]. To allow fine-grained synchronization at the external level, the initiator has to expose a projection of its conceptual process at the external level as well. The responder has no direct control over this external process, however. Finally, figure 4.5 shows an example of an open-box process. There is explicit control flow between intermediate steps of the business process both from initiator to responder and vice versa. Consequently, both parties have explicit control over the progress of the enactment at the “other side.” The responder determines when the initiator is allowed to start its second activity; the initiator determines when the responder can start its third activity (all at the external level). Hence, the open-box model is useful in business scenarios in which bidirectional process synchronization is indispensable, such as collaborative design scenarios. Apart from the role of initiator (origin of the process enactment), the open-box scenario is symmetric with respect to the collaborators.
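The mutual gating in the open-box example (the responder releases the initiator's second activity; the initiator releases the responder's third) can be simulated with a small dependency-driven scheduler. The sketch below is illustrative, with hypothetical activity names:

```python
# Illustrative sketch: open-box interaction as cross-organizational
# control-flow dependencies (cf. figure 4.5). Activity names are made up:
# i1..i3 belong to the initiator, r1..r3 to the responder.

# Local sequencing within each party, plus explicit cross dependencies:
# a responder activity gates the initiator's second activity ("i2"),
# and the initiator's second gates the responder's third ("r3").
deps = {
    "i1": [], "i2": ["i1", "r1"], "i3": ["i2"],
    "r1": [], "r2": ["r1"], "r3": ["r2", "i2"],
}

def run(deps):
    """Execute activities in a dependency-respecting order."""
    done, order = set(), []
    while len(done) < len(deps):
        ready = [a for a in deps
                 if a not in done and all(d in done for d in deps[a])]
        a = sorted(ready)[0]   # deterministic choice for the demo
        done.add(a)
        order.append(a)
    return order

trace = run(deps)
# Cross dependencies are respected: r1 precedes i2, and i2 precedes r3.
print(trace)
```

Note that neither party can complete on its own: execution progress alternates between the two organizations, which is exactly the reduced execution autonomy described above.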
Figure 4.5 Open-box DIBP.
In the classification above, we have been looking at control-flow structures only. Below, we turn our attention to additional requirements related to process enactment quality and dynamism.

4.2.4 Transactional Processes
Transactions are units of work (processes) that have certain reliability characteristics. Transaction management has long been applied in data management and process (workflow) management. To obtain reliable process support, we must also provide transaction management functionality to DIBPM. The requirements for transaction management in DIBPM are more complex than in function-oriented collaboration, in which we can view services as atomic units of functionality. The requirements are also more complex than in traditional business process (workflow) management because of the dynamic and interorganizational nature of DIBPM. This requires transactional structures to be "constructed on the fly" across organizational boundaries as processes are integrated. Finally, since DIBPM relies on process enactment in an e-commerce setting, business aspects of transaction management related to service trading have to be taken into account in some situations as well. Thus, full-fledged transaction management for DIBPM needs to have the following characteristics:

• To cope efficiently with the three-level process model introduced in section 4.2.2, we require a multilevel transaction model that takes into account both external and internal transaction semantics. External semantics specifies behavior observable by collaborators; internal semantics, the execution behavior encapsulated by a single collaborator. A model for external and internal levels has been proposed in the CrossFlow project [43]. Different levels may be given different transaction semantics (e.g., as in the WIDE workflow transaction model, which consists of an upper level based on compensatable transactions and a lower level based on nested transactions [46]).
Figure 4.6 Transactional process with external and internal levels.
• To cope efficiently with long-running interorganizational business processes that go through different phases from a business perspective, we require a multiphase model that allows transaction semantics to change depending on the progress of a process. Phases may be information exchange, contract preparation, contract establishment, contracted process execution, and postexecution analysis—each with its own transactional requirements. In this example, the third and fourth phases are typically the most strictly transactional, whereas the second can be "loosely" transactional and the first doesn't need any transaction support at all.
• To cope with reliability in both process structure and business process semantics, flexible transaction semantics may be required. An example of flexible transaction semantics is found in a set of "unconventional" atomicity criteria that describe process atomicity from a business-oriented point of view [37]. An example is payment atomicity, in which the all-or-nothing character of payments is supported.
• Specification of transactional structures must be coupled to specification of business process control flow. This coupling can be achieved in two ways [23]. The first way is to integrate both transaction and control-flow specification into one specification; this requires a transactional process specification language. The second way is to have a separate transaction specification language that complements a process specification language by allowing explicit links between specifications in both languages. In the latter case, both types of specifications can be integrated into a single electronic document. This approach has been followed in the CrossFlow project, where an electronic contract contains (among other elements) a process specification clause and a transaction specification clause that refers to the process specification (the transaction specification clause is termed "level of control clause" in the CrossFlow context) [31].

Figure 4.6 is an example of a transactional process. The top of the figure shows the transactional behavior as exposed at the external level; the bottom, the behavior at the conceptual level (or internal level in this case). The external-level process consists of five activities. The second and third activities are compensatable (indicated by the "c"), that is, they can be undone even after completion. The last two activities are to be executed in an atomic fashion (indicated by the "a"). At the conceptual level, there is a more detailed process with more compensatable activities. Compensation behavior at the conceptual level is invisible to a collaborator, but may
be used to implement compensation at the external level. Compensation of the second activity at the external level is, for example, implemented by compensation of the two activities it corresponds to at the conceptual level.

4.2.5 Quality-of-Service Agreements
In the DIBPM paradigm, parts of primary processes of organizations are performed by external, autonomous partners. These partners may have different business objectives than their collaborators, and hence different quality objectives for the execution of their processes. For example, an initiator may be interested in fast process execution, whereas a responder may prefer efficiency with respect to internal resource usage. Hence, explicit attention to quality-of-service (QoS) parameters is important in the collaboration between partners. QoS requirements must be specified in explicit service agreements that govern the collaboration between partners (i.e., that can be monitored during process enactment). We can distinguish between various dimensions (aspects) of process quality that are relevant in DIBPM. Some important example dimensions for QoS are the following:

• Execution times (of complete process, individual steps, or cumulative)
• Reaction/wait times to process or step activation
• Availability characteristics of resources (including actors) required for process enactment
• Quality/precision of process variables or (intermediate) results.
An example of the last class is found in the situation where a logistics service provider records the estimated delivery time as a process variable at the external level. In a QoS specification, the precision of this variable is defined: how precise is the estimate, and how current is it (how often is it updated)? In the DIBPM model, collaborations between autonomous organizations are dynamic, so there are no preset agreements on QoS parameters (these do exist in long-lasting static relations between partners). Therefore, dynamic QoS specification and interorganizational QoS management are necessary, taking a match between the requirements of one collaborator and the facilities of the other as a starting point. As with the specification of transaction structures (discussed above), the specification of QoS parameters should be integrated with or coupled to business process (control-flow) specification. Where electronic contracting is used between collaborators, QoS parameters form an important part of the contract. For example, in the CrossFlow approach to electronic contracting [31], QoS specification is an explicit clause in an electronic contract.

4.2.6 Dynamic Process Integration
So far in this section, we have looked at the requirements of interorganizational business process management. We have, however, not yet looked at the functionality required for the dynamic formation of collaborations. This last point is addressed here.
Earlier in this section, we gave a definition of an interorganizational business process. This definition does not take the dynamic nature of DIBPM into account, however. Therefore, we present a second definition that adds dynamism to the first definition: A dynamic interorganizational business process is an interorganizational business process that is formed dynamically by the (automatic) integration of the subprocesses of the involved organizations. Here, “dynamically” means that during process enactment, collaborator organizations are found by searching business process marketplaces and the subprocesses are integrated with the running processes.
Searching business process marketplaces implies the use of business process brokers. A process initiator's search for collaborators via a broker is based on two aspects:

• The runtime characteristics of a specific (set of) process instance(s) at the business process initiator
• The characteristics of process responders available and offering the required process functionality.
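A broker's matchmaking step based on these two aspects can be sketched as follows; the template fields and the matching rule are illustrative assumptions, not part of a specific broker standard:

```python
# Illustrative sketch of a business process broker's matchmaking step.
# Template fields (name, max_time, atomic) are assumptions for the example.

offers = [  # responder-side process templates registered at the broker
    {"name": "transport", "provider": "LogiCo", "max_time": 72, "atomic": True},
    {"name": "transport", "provider": "SlowCo", "max_time": 120, "atomic": True},
    {"name": "storage",   "provider": "StoreCo", "max_time": 24, "atomic": False},
]

def match(request, offers):
    """Name-based matching, refined by nonfunctional (QoS and
    transactional) requirements; structure-based matching would
    additionally compare control-flow templates."""
    return [o for o in offers
            if o["name"] == request["name"]                # functional aspect
            and o["max_time"] <= request["deadline"]       # QoS aspect
            and (o["atomic"] or not request["need_atomic"])]  # transactional

# Runtime characteristics of the initiator's process instance:
request = {"name": "transport", "deadline": 96, "need_atomic": True}
print([o["provider"] for o in match(request, offers)])  # ['LogiCo']
```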
Process templates with explicit control flow are used at the external level by both initiators and responders to provide input to process brokers. Process matching can be name-based or structure-based (we will revisit this point later in this chapter). Matching can also be based on nonfunctional aspects as described in this section: transaction and/or QoS characteristics. In this case, selection of business process collaborators is based not only on functionality aspects of offered processes, but also on quality aspects related to the execution of these processes.

4.3 Relation to SOC Technology
In section 4.2, we analyzed the requirements of DIBPM. In this section, we take a look at existing SOC technology to assess its use for the support of DIBPM. We start with a short discussion of early developments in DIBPM-specific technology, developed before the general adoption of the Web Services framework. These early developments lack the adoption of generally accepted standards. Therefore, we next turn our attention to the Web Services framework, which does provide a standardized platform. We compare the functionality offered by various components of the Web Services framework against the specific requirements identified in section 4.2.

4.3.1 Early Developments
Many developments in cross-organizational workflow management have targeted cooperation between organizations specified at process definition time. An early example is the WISE project [2]. The WISE project (Workflow-based Internet SErvices) at ETH Zürich aimed at providing a software platform for process-based business-to-business electronic commerce, focusing on support for networks of small and medium enterprises. WISE relies on a central workflow engine
to control cross-organizational processes (called virtual business processes). A virtual business process in the WISE approach consists of a number of black-box services linked in a workflow process [2]. A service is offered by an involved organization and can be a business process controlled by a workflow management system local to that organization—but this is completely orthogonal to the virtual business process. WISE relies on specific software for process design, composition, and enactment. An early development of DIBPM technology took place in the context of the CrossFlow project [21] [38] [29]. In this project, concepts and support have been developed for interorganizational (cross-organizational, in CrossFlow terminology) dynamic service outsourcing. A clear distinction is made between external and internal levels, but a conceptual level is not used. In CrossFlow, collaboration partners use a service broker to find each other on the basis of service templates that are transformed into electronic contracts upon collaboration agreement. In the project, explicit attention has been paid to interorganizational transaction management and quality-of-service management. Transaction management relies on a two-level transaction model with X-transactions on the external level and I-transactions on the internal level. Quality-of-service management allows the monitoring of services during their enactment, based on monitoring agreements specified in an electronic contract. The CrossFlow approach is focused on the service outsourcing business paradigm only. It relies on dedicated technology for interorganizational process orchestration at the external level that interfaces to standard workflow technology at the internal level (IBM MQSeries Workflow).

4.3.2 The Basis of the Web Services Stack
The basis of Web Services technology is formed by the SOAP and WSDL standards, the bottom two layers of the Web Services stack. These two standards allow service invocation over the Web, but do not contain any process elements. The Simple Object Access Protocol (SOAP) is a protocol specification for exchanging messages in a distributed environment [7]. The specification contains an envelope format for messages that captures context information of the message, a message-encoding format that can represent structured data, and a model for dealing with messages that follow the request-response pattern. The representation of SOAP is XML-based. SOAP can be used with transport protocols such as HTTP, HTTPS, or even SMTP e-mail, although HTTP is the most commonly used. WSDL is a generic XML-based language used to define interfaces of services in a distributed system [18]. Abstract interfaces are described as port types comprising a set of operations. An operation is defined by its input and output messages. Abstract interface definitions can be bound to a particular transfer protocol. In a binding, the specifics of the implementation of a Web Services interface are defined (e.g., for each operation, the encoding format and address information). Any binding can be defined, but the most common binding is to SOAP over HTTP. In this case, the address information may contain a URL, and the mapping of WSDL operation and message descriptions to SOAP runtime encoding is predefined.
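As a concrete illustration of the envelope format described above, the following sketch assembles a minimal SOAP 1.1 request with Python's standard XML library. The operation name and its namespace are made up for the example; only the envelope namespace is standard:

```python
# Illustrative sketch: build a minimal SOAP 1.1 request envelope for a
# hypothetical "startProcess" operation (operation and body namespace
# are invented; the envelope namespace is the SOAP 1.1 standard one).
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
APP_NS = "http://example.org/bpws"  # hypothetical application namespace

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
op = ET.SubElement(body, f"{{{APP_NS}}}startProcess")
ET.SubElement(op, f"{{{APP_NS}}}processName").text = "transport"

message = ET.tostring(envelope, encoding="unicode")
print(message)

# Such a message is typically POSTed over HTTP to the service endpoint;
# the response follows the same envelope structure.
```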
4.3.3 Process Specification in Web Services
When complex Web Service functionality is built by composing simpler Web Services, a composition language is required that allows the specification of the control flow between the invocation of the simpler Web Services (also called "orchestration" of Web Services). In the context of the Web Services stack, BPEL [20] is commonly used for service composition. BPEL is a "merge" between two predecessors, WSFL and XLANG. IBM has proposed the Web Services Flow Language (WSFL) as a graph-oriented Web Service composition language [32]. WSFL allows a workflow-like specification language style—the language shares many properties with the specification language of IBM's MQSeries Workflow system. Microsoft has proposed XLANG as an extension to WSDL to allow the orchestration of Web Services [40]. In XLANG, WSDL definitions can be extended by control-flow expressions that define the sequencing, repetition, and other interdependencies of operations defined in ports of a WSDL file. BPEL contains a rich set of control-flow primitives to describe (business) processes; it can be considered a workflow specification language in the Web Service context. But in the current use of BPEL, control-flow specifications are encapsulated by Web Services (that are again specified in WSDL); that is, the control flow of a composed Web Service is not visible to the "outside world." Clearly, in the DIBPM context, we need to "externalize" the control flow along the classes identified in section 4.2.3. (We will address this issue further in section 4.4.)

4.3.4 Web Service Coordination and Transactions
In the Web Services stack, the default approach for transaction management is specified in the WS-Transaction set of standards. We discuss this approach below. Alternative approaches are the Business Transactions Protocol (BTP) standard and the WS-Composite Application Framework (WS-CAF). BTP is a standard developed by OASIS which is not exclusively targeted toward the Web Services platform, but also covers other environments [35]. WS-CAF is also developed in the context of OASIS [11]. It is a framework for composite Web Services applications that includes the specifications WS-Context [12], WS-Coordination Framework [13], and WS-Transaction Management [14]. An overview that places all these standards in a historical perspective is provided in [44]. The first standard in the context of WS-Transaction is WS-Coordination (WS-C) [15], which describes a model for coordinating distributed Web Services by the use of coordinators. The WS-C standard defines a coordination architecture, but does not define a coordination protocol. The core WS-C mechanism is a coordination context, which determines the scope of coordination. Applications use coordinators to process coordination activities so that these applications do not need to understand coordination protocols. An application can have its own coordinator or use one that is shared with other applications. A coordinator exposes (1) an activation interface for an application to register for participation in the coordination, and (2) interfaces for particular coordination protocols. The WS-C interfaces are defined in WSDL. Coordinators can be part of a hierarchy, interacting with other coordinators at higher or lower levels in the hierarchy.
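The coordinator model can be sketched in a few lines. The class and method names below are our own illustration of the activation/registration pattern, not the WS-Coordination WSDL interfaces themselves:

```python
# Illustrative sketch of the WS-Coordination model: a coordinator hands
# out a coordination context that participants use to register for a
# protocol. All names are invented; the real interfaces are in WSDL.

class Coordinator:
    def __init__(self, protocol):
        self.protocol = protocol
        self.participants = []

    def create_context(self):
        """Activation: create a context that scopes the coordination.
        In WS-C, this context travels with application messages."""
        return {"coordinator": self, "protocol": self.protocol}

    def register(self, context, participant):
        """Registration: a participant joins the coordination that the
        context identifies, for the context's protocol."""
        assert context["coordinator"] is self
        self.participants.append(participant)

coord = Coordinator("atomic-transaction")  # e.g., a WS-AtomicTransaction use
ctx = coord.create_context()
coord.register(ctx, "service-A")
coord.register(ctx, "service-B")
print(coord.participants)                  # ['service-A', 'service-B']
```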
Based on WS-Coordination, the WS-Transaction standard defines two transactional coordination protocols: WS-AtomicTransaction [16] and WS-BusinessActivity [17]. These two coordination protocols provide the "implementation" of two transaction mechanisms: one for classic atomic transactions and one for compensation-based, loosely coupled transactions. The first protocol provides support for the standard distributed two-phase commit protocol as commonly used in the database management world. The second protocol provides a "business agreement protocol" for "business activities," which is the standard's term for loosely coupled process activities that can be rolled back only by compensation. Compensation is based on the execution of inverse activities that undo the effects of already completed activities from an application point of view [22]. The standards describe the models of two-phase commit and the business agreement protocol, and define the WSDL interfaces to the coordinators and the coordination contexts as extensions to the corresponding WS-Coordination elements.

4.3.5 Quality-of-Service Management
The Web Service Level Agreement (WSLA) framework provides a language for describing Service Level Agreements (SLAs) of Web Services and an architecture for distributed monitoring of compliance of services to SLAs [33] [30]. The focus of the WSLA framework is the performance aspect of services (e.g., response time and throughput). The performance aspect is certainly relevant for DIBPM, but as we have seen before, more aspects need to be taken into account when dealing with business processes. The WS-Agreement (WS-A) proposal [3] is under discussion in the Global Grid Forum's GRAAP Working Group. WS-A provides a specification of interfaces to publish agreement templates and request creation of new agreements. It addresses multiple perspectives on a service, such as the service interface description and QoS parameters. The agreement format consists of service description terms and guarantee terms. The content of the terms is expressed using specific languages (e.g., WSDL for an interface description, JSDL for a job description, and WSLA for expressions in guarantee clauses). Thus, WS-Agreement provides an infrastructure for SLA establishment but relies on other languages to define the actual content of the QoS guarantees.

4.3.6 Service Brokering
In the Web Service context, service brokering is based on the UDDI standard [42] [36]. UDDI describes a set of registries that are used to find compatible business partners in Web Service markets. The registries comply with an information structure consisting of a set of core data types and Web Services for querying registry information based on these data types. Both data types and services are being standardized via OASIS [36]. In this chapter, we do not go into the details of UDDI, as they are not too relevant in this context. A development that may influence brokering is that of the Semantic Web [6]. In the Semantic Web, ontology-based approaches are used to reason about the semantics of information found in the Web—in the context of this book, about Web Services. The development of the OWL-S
language [34] is specifically targeted at supporting Web Service descriptions. Using OWL-S service specifications will allow matching not only of syntactic (structure-oriented) aspects of services but also of semantic (meaning-oriented) aspects.

4.4 Applying SOC Technology for DIBPM
In this section, we apply SOC technology as described in section 4.3 to the application field of DIBPM. We do so by relating it to the requirements that were identified in section 4.2. We start by introducing the BP-WS concept, which is the basic concept for our approach to Web Service-based DIBPM support. We will see that the BP-WS concept relies on infusing explicit control flow at the external process level into the Web Service concept. Next, we introduce various BP-WS classes that coincide with the control-flow interface levels we introduced in section 4.2. Then, we move to the non-control-flow requirements, discussing transactional requirements, quality-of-service requirements, and brokering requirements. Thus, the structure of this section follows the structure of section 4.2.

4.4.1 BP-WS Concept
We use an application of the basic Web Services paradigm that caters for services with an internal process structure that has external visibility, in contrast with a "traditional" Web Service. This internal process coincides with a business service process offered by a service provider. The state of execution of the internal process can be observed, and specific control primitives can be applied to this state to allow external control over the execution. For this purpose, we introduce the concept of a Business Process Web Service (abbreviated to BP-WS), which includes a business process specification and a business process state that can be accessed externally. Access to specification and state is provided through a number of dedicated Web Service interfaces (ports) [25]. In our approach, the structure of a BP-WS is used for the implementation of business service responders in all cases. In the more complex control-flow interface classes, the BP-WS structure is also applied to the service initiator. BP-WSs can be combined like any "traditional" Web Service to enable process composition. In the context of DIBPM, composition means combining the process of the service initiator with that of the service responder. In a dynamic service outsourcing context, this implies that service composition can be interpreted as "virtually embedding" insourced business service processes executed by service providers into the business process of a service consumer.

4.4.2 Explicit Process Structures
In the BP-WS approach, control-flow specifications are contained in Web Services and made externally accessible. Following the standard approach in Web Services, BPEL is used for control-flow specification of a BP-WS. Extensions to BPEL are, however, required for transaction and QoS aspects, as discussed in section 4.2. We denote BPEL and these extensions together
as BPEL+. The nature of the extensions is discussed in sections 4.4.4 and 4.4.5. The extensions can be integrated into BPEL, or can be separate language(s) with cross-reference elements to BPEL. The functionality of the BP-WS control-flow interface depends on the interaction type required between service initiator and service responder; the more complex the required interaction, the richer the interface functionality must be. In the subsections below, we introduce the control-flow interface functionality for a BP-WS in four incremental steps, based on the classification introduced in section 4.2.3. Depending on the application context, a simpler or more complex BP-WS class can be used in practice.

4.4.3 BP-WS Classes
Below, we introduce four BP-WS classes following the four control-flow interface levels identified in section 4.2.3. Each successive class provides a richer control-flow interface than the previous class, thereby allowing more complex business process interaction.

Black-Box BP-WS

A black-box BP-WS has an internal business process implementing a service, but this process is completely encapsulated (i.e., not visible to the environment of the BP-WS). For the black-box control-flow interface class, we need only the "traditional" invoke-and-reply interface. The control-flow interface is not different from the situation where the BP-WS would have been a traditional "nonprocess" Web Service. We label the black-box type of BP-WS as BP-WS/B. The architecture of the BP-WS/B is shown in a stylized fashion in figure 4.7. The BP-WS contains a process specification in BPEL+ that is interpreted by a business process engine. The engine is controlled through the ACT interface, which contains functionality to activate the BP-WS process (i.e., to invoke the business service). The BP engine is only conceptually part of the BP-WS; in the actual software structure, it might be a component that is shared among BP-WS instances. (We address this further in section 4.5.)
Figure 4.7 BP-WS/B architecture.
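To make the incremental interface model of this subsection and the following ones concrete, here is a Python sketch; the class and method names are our own illustration, not the WSDL interfaces of the BP-WS proposal:

```python
# Illustrative sketch of the incremental BP-WS interface classes.
# ACT/SPEC/MON/CTRL/SYNC mirror the interfaces discussed in the text;
# all method names and internals are invented for this example.

class BPWS_B:                       # black box: ACT interface only
    def __init__(self, spec):
        self._spec = spec           # BPEL+ specification (opaque here)
        self._done = []
    def activate(self):             # ACT: invoke the business service
        self._done = list(self._spec)

class BPWS_G(BPWS_B):               # glass box: adds SPEC and MON
    def specification(self):        # SPEC: expose the external process
        return list(self._spec)
    def monitor(self):              # MON: completed activities so far
        return list(self._done)

class BPWS_H(BPWS_G):               # half-open box: adds CTRL
    def __init__(self, spec):
        super().__init__(spec)
        self._locked = set()
    def lock(self, activity):       # CTRL: hold back an activity ...
        self._locked.add(activity)
    def release(self, activity):    # ... until the initiator releases it
        self._locked.discard(activity)
    def activate(self):             # simplification: skip locked steps
        self._done = [a for a in self._spec if a not in self._locked]

class BPWS_O(BPWS_H):               # open box: adds SYNC
    def sync(self, activities, notify):
        # SYNC: the initiator asks to be notified (via its own CTRL
        # interface) when the listed activities complete.
        self._sync_acts = (set(activities), notify)

svc = BPWS_H(["pack", "ship", "bill"])
svc.lock("ship")                    # initiator gates the second activity
svc.activate()
print(svc.monitor())                # ['pack', 'bill'] ('ship' still locked)
```

A real BP-WS would of course block on a locked activity rather than skip it; the sketch only shows which operations each class adds.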
Figure 4.8 BP-WS/G architecture.
Glass-Box BP-WS

In the glass-box control-flow interface class, the process specification is visible to the environment of the BP-WS, and functionality is provided to monitor the progress of the execution of the process (i.e., the state of the process execution). For this functionality, we require two additional interfaces with respect to the BP-WS/B architecture. A stylized architecture of the glass-box BP-WS (BP-WS/G) is shown in figure 4.8. The SPEC interface is used to make the business process model available to the outside world. Through this interface, an initiator can obtain a process specification of the business process service in BPEL format. The SPEC interface can be considered a reflection interface, as it gives information about the behavior of a BP-WS. We treat the control-flow specification as a separate specification here, but it may be embedded in an electronic contract that is obtained via the SPEC interface. The MON interface is used to monitor the state of execution of the internal process. Through this interface, an initiator can obtain information regarding the state of execution after it has been invoked through the ACT interface. Information obtained is to be interpreted in the context of the process specification obtained through the SPEC interface. The MON interface provides the initiator with functions to obtain a list of completed/active activities, the status of a specified activity, and possibly the values of process variables defined in the business process specification. As discussed before, these process variables may be used for QoS monitoring.

Half-Open BP-WS

To cater for the half-open control-flow interface class, we have to add one more interface to the BP-WS/G architecture. The CTRL interface is used to control the execution of the internal process. Through this interface, an initiator can issue control primitives to influence the execution of a service on its behalf.
Invocation of control primitives is typically based on execution state information obtained through the MON interface. The BP-WS/H architecture is shown in figure 4.9.
Figure 4.9 BP-WS/H architecture.
We distinguish between control functions at the process (service) level and at the individual activity level. Control functions at the process level influence the execution of the entire process. Typical examples at the process level are "pause process," "resume process," "abort process," and "compensate process." Control functions at the activity level influence the execution of individual activities in a service process. Primary functions are "lock activity" and "release activity." The first informs the BP-WS not to start an activity until it is released through the second function. Together, these two functions provide an implementation of control-flow dependencies from service initiator to service responder.

Open-Box BP-WS

To support the open-box control-flow interface class, we must provide a BP-WS with the information required for control-flow dependencies from responder to initiator (on top of the previously discussed functionality). We use an interaction style that does not violate the asynchronous, loosely coupled architectural style of Web Services. This approach also does not require any extension to the process specification language (BPEL). We introduce a SYNC interface through which a service initiator (the caller of the BP-WS) informs the BP-WS about the activities whose completion it wants to be informed of, and how it wants to be informed. This list of activities can be constructed by the initiator on the basis of the service specification obtained using the SPEC interface of the responder. The received list of activities and requested notifications (synchronization activities) is stored by the responder BP-WS as an extension to the process specification and is interpreted by its BP engine (indicated as SYNC ACTS in figure 4.10). The BP engine uses the CTRL interface of the process initiator—more specifically, the "release activity" function (as discussed above)—to inform the caller of completion of the listed activities.

4.4.4 Transactional Processes
Support for Dynamic Business Process Management

Figure 4.10 BP-WS/O architecture.

As we have seen in section 4.2, DIBPM requires flexible transaction management that deals with business process semantics. Thus, a combination is needed of transaction management in a Web Services context and transaction management in an advanced interorganizational workflow context. Important here is the visibility of transactional semantics at the external level, related to explicit control flow (as discussed earlier in this section).

The elementary transaction mechanisms required for DIBPM must support atomic (sub)processes and compensation processes. In a basic form, these are supported by the WS-Transaction standards, as discussed in section 4.3. In the case of richer process structures, however, more flexible compensation mechanisms are required [22] [43]. These mechanisms should support the dynamic generation of context-dependent compensation subprocesses, including partial compensation based on dynamic selection of savepoints. The specification of these dynamically generated compensation subprocesses must be made available through the SPEC interface of a BP-WS, such that a collaborator can monitor and control the execution of compensation processes where appropriate. Extended transactional semantics may be required to deal with a business context, such as support for alternative business process atomicity criteria [37]. Also, transactional structures may have to be composed from various types of basic transactional components [45].

When we take a more detailed look at the transactional requirements related to the various BP-WS control-flow interface classes (as discussed in subsection 4.4.3), we can observe the following:

Black box: In the black-box class, there are no externally visible process structures. Consequently, interorganizational transactional semantics pertains only to the black-box process (i.e., it cannot refer to internal-level characteristics of the process offered). Hence, transactional semantics is simple (e.g., a process as a whole is guaranteed to be atomic).

Glass box: In the glass-box class, process structures are visible at the external level, but explicit interorganizational control flow is not offered.
Paul Grefen

Hence, explicit interorganizational transaction management also is not offered. The BPEL+ specification may contain transactional elements, however, such that a collaborator can observe the progress of transaction management (without being actively involved). A typical example is the progress of a compensation process, the specification of which is made available through the SPEC interface.

Half-open box: In the half-open box class, explicit interorganizational process control is supported. This implies that explicit interorganizational transaction management is also required to obtain reliable process enactment. More concretely, the initiator must be able to exercise control over transaction management at the responder. Therefore, the BPEL+ specification contains advanced transactional elements, such as the identification of atomicity spheres, compensating activities, and transactional savepoints. This specifies which specific functionality the BP-WS offers through its CTRL interface.

Open box: In the open-box class, we have a situation similar to that in the half-open box case, but now in a symmetrical fashion; the responder has control over transaction management at the initiator.

4.4.5 Quality-of-Service Agreements
Next to transaction management, management of quality of service caters for the quality of business service process execution in DIBPM. To support QoS management, it is placed in the context of the BP-WS approach. WS-Agreement (WS-A) [3] may be usable as a starting point for QoS management in DIBPM. WS-A is currently, however, targeted at low-level services in a grid computing context; its application in a DIBPM context therefore requires further development for a different (and possibly more complex) setting. QoS management has a clear link to e-contracting: in e-contracts, required QoS parameters are specified. In the process, QoS attributes are related to elements in explicit process specifications that are also included in the e-contract [31] [4].

When we take a detailed look at the various BP-WS classes from the QoS management viewpoint, we come to conclusions comparable with those from the transaction management viewpoint:

Black box: Interorganizational QoS aspects in the BPEL+ specification pertain only to the black-box process (i.e., they cannot refer to internal-level characteristics of the process offered). Hence, these aspects will typically be relatively simple.

Glass box: The BPEL+ specification contains detailed QoS elements that can be interpreted in the context of the process specification published through the SPEC interface.

Half-open box: The BPEL+ specification contains advanced QoS elements that can be interpreted in the context of the process specification published through the SPEC interface, and that are meant to provide input to explicit process control by the initiator (using the CTRL interface).

Open box: The BPEL+ specification contains advanced QoS elements, as in the BP-WS/H case, but now process control is symmetrical (i.e., the responder can also control the initiator based on QoS information).
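As an illustration of the QoS monitoring discussed above, the following Python sketch checks measured values against contracted QoS parameters. The parameter names, bounds, and data structures are invented for the example; they are not part of WS-Agreement or of any BP-WS interface.

```python
import operator

def find_qos_deviations(agreed, observed):
    """Return the QoS parameters whose observed values violate the agreed
    bounds. `agreed` maps a parameter name to (comparator, bound);
    `observed` maps a parameter name to a measured value."""
    deviations = {}
    for name, (comparator, bound) in agreed.items():
        value = observed.get(name)
        if value is not None and not comparator(value, bound):
            deviations[name] = (value, bound)
    return deviations

# Contracted QoS for a process offered through a BP-WS (illustrative values).
agreement = {
    "max_response_time_ms": (operator.le, 500),   # observed must be <= bound
    "min_availability":     (operator.ge, 0.99),  # observed must be >= bound
}
measured = {"max_response_time_ms": 720, "min_availability": 0.995}

print(find_qos_deviations(agreement, measured))
# Only the response time violates its bound in this example.
```

A deviation detected this way would then trigger the class-dependent reactions discussed next: intraorganizational action in the glass-box case, explicit interorganizational control in the open-box case.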
When deviations from agreed QoS parameters (as defined, for example, in an electronic contract) are observed across organizational boundaries, appropriate action can be taken. Depending on the BP-WS class used, this action can be of various natures. In the glass-box class, the action will be intraorganizational (interorganizational control is not supported). In the open-box class, explicit interorganizational control can be exerted, such as the rollback of a process using transactional protocols (as discussed in section 4.4.4).

4.4.6 Brokering of Business Process Services
Brokering of business process services is in general not an easy task because of the complexity of business process specifications (as compared with "traditional" black-box Web Services). Brokering is based on matching specific characteristics of the business processes of initiator and responder. This matching can be based on approaches ranging from simple to very sophisticated. The main approaches are the following:

Name-based: In the name-based approach, matching services are found based on the name of the service only. Clearly, this approach is applicable only in very simple cases or in highly standardized markets (in which "a name says it all").

Attribute-based: In this approach, matching is performed by comparing values of attributes of services. Attributes that are used in matching are standardized in a specific domain or market, such that there are no semantic conflicts. The name of a service is one of the attributes. Other attributes can be business-oriented (such as price) or based on transactional or QoS dimensions (as discussed earlier in this chapter). The attribute-based approach has been used, for example, in the CrossFlow project [29].

Semantics-based: The semantics-based approach is an extension of the attribute-based approach; however, the attributes to match need not be predefined, but are compared on the basis of ontologies. To realize this, Web Services technology needs to be combined with Semantic Web technology. Especially relevant in this context is the development of the OWL-S language [34], which allows the specification of the semantics of Web Services.

Structure-based: In the structure-based approach, matching is based not on attributes of a business process, but on the structure (or behavior) of the process. This approach can be applied in two ways: either the initiator specifies a control-flow interface that has to be matched by the responder, or the initiator specifies a template of the required responder process.
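To make the attribute-based approach concrete, here is a minimal Python sketch. The attribute names and the matching policy (exact match, with callables acting as predicates such as a price ceiling) are assumptions for illustration, not a standardized matching scheme.

```python
def matches(request, offer):
    """An offer matches when every requested attribute is present with an
    acceptable value. Callable request values act as predicates (e.g., a
    price ceiling); other values must match exactly."""
    for attr, wanted in request.items():
        if attr not in offer:
            return False
        if callable(wanted):
            if not wanted(offer[attr]):
                return False
        elif offer[attr] != wanted:
            return False
    return True

# The service name is one attribute among others, as in the text.
request = {"name": "transport-booking", "price": lambda p: p <= 100}
offers = [
    {"name": "transport-booking", "price": 120, "provider": "A"},
    {"name": "transport-booking", "price": 80,  "provider": "B"},
]
print([o["provider"] for o in offers if matches(request, o)])  # ['B']
```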
Structure-based matching must be combined with one of the other approaches above to be able to match process activities in a structure (i.e., to reason about the "contents" of process structures). Semantics-based and structure-based process matching are still under development; many of the issues involved require fundamental research. Early work on behavior-based matching is reported in [47], but this work is not yet placed in the context of the Web Services platform. Structure-based matching may be based on the notion of process inheritance analysis [1] (i.e., establishing whether one process is a subtype of another, and hence implements at least the same functionality).

Figure 4.11 An example of process matching.

Many questions remain to be answered here. Take figure 4.11 as an example (initiator on top, responder on bottom). If the initiator requests a responder structure as depicted on the left, will it be satisfied with the process on the right? The process on the right includes the required activities r1 to r4 and has the same control-flow interface. But there is a subtle yet important difference:

• In the left case, the start of r3 depends only on the completion of initiator activity i2 (r2 is always completed before i2).

• In the right case, the start of r3 also depends on the completion of responder activity r5 (thus, the initiator has less control over the progress of the responder process).
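The difference between the two cases can be checked mechanically. In the hedged Python sketch below, both processes are modeled as precedence edges ("x must complete before y starts") and we compare the transitive predecessor sets of a required activity. The concrete edge sets are an illustrative reading of figure 4.11, not the book's formal notion of process inheritance [1].

```python
def predecessors(edges, activity):
    """All activities that must complete before `activity` may start
    (transitive closure over the precedence edges)."""
    preds, frontier = set(), {activity}
    while frontier:
        nxt = {a for (a, b) in edges if b in frontier} - preds
        preds |= nxt
        frontier = nxt
    return preds

# Requested template (left in figure 4.11): i2 precedes r3, r2 precedes i2, etc.
requested = {("i1", "r1"), ("r1", "r2"), ("r2", "i2"),
             ("i2", "r3"), ("r3", "i3"), ("i3", "r4")}
# Offered process (right): an extra activity r5 also precedes r3.
offered = requested | {("r1", "r5"), ("r5", "r3")}

extra = predecessors(offered, "r3") - predecessors(requested, "r3")
print("r3 gains extra dependencies:", sorted(extra))  # ['r5']
```

The extra dependency {r5} is exactly the "subtle yet important difference" noted above: the initiator loses some control over the progress of the responder process.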
Advanced matching in the open-box BP-WS class is hard. WS-Policy [8] [9] expressions may be used as a basis to state requirements on the initiator with respect to the control-flow interface offered. This means that the responder control interface is checked against the requirements of the initiator, and vice versa.

4.5 An Integrated Architectural Picture
In this section, we provide an integrated picture of the elements introduced in this chapter by means of an abstract system architecture for DIBPM. The architecture is abstract because it needs to be made concrete based on a choice of specific software modules to be used. We first present the overall architecture. Next, we zoom in on the DIBPM module, which contains the main functionality for business process management as discussed in this chapter.

4.5.1 Overall Architecture
The overall architecture is shown in figure 4.12. The layering of this architecture is based on the three-level framework of section 4.2.2, identifying external, conceptual, and internal levels. Processes are designed at the conceptual level, using the Process Designer module. After completion, a conceptual process specification (CPS) is translated by the Converter module into an external process specification (EPS), an internal process specification (IPS), and a mapping specification (MS) that specifies the relation between the first two.

Figure 4.12 Overall integration architecture.

The Converter uses an infrastructure specification (IS) for the generation of the IPS. The IS contains the details about the internal business process management system (e.g., a workflow management system) required for the generation of process specifications that can be directly interpreted (enacted) by this system. The Converter uses a policy specification (PS) to generate the EPS. The PS contains details of allowed and preferred interaction patterns and related transactional and QoS aspects. The PS does not contain infrastructural details, since the EPS operates in a standard technological environment (based on the Web Service standards).

The actual business process enactment takes place at two levels. At the internal level, a business process engine, typically a workflow management (WFM) system, keeps the state and manages the progress of process instances. The WFM system interfaces to back-end information systems, which are typically application-oriented (in contrast to the process-oriented WFM system). At the external level, the DIBPM module is responsible for process management. It manages the state of process instances at the external level. It is also in charge of "external contacts" by interfacing to both the broker and the collaborator(s). Subsection 4.5.2 provides further details on the DIBPM module.

Relevant state changes of business processes at either the external or the internal level are mapped by the Mapper module to the other level, such that the states at both levels remain well synchronized. Major relevant state changes are caused by the start and end of activities. Any start or end of an activity at the external level implies a state change at the internal level.
A start (end) of an activity at the internal level implies a state change at the external level only if that activity is the first (last) activity in the combined projections from internal to conceptual to external level. Apart from control-flow synchronization, the Mapper may also have to support data synchronization (see [24] for more details).
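The synchronization rule just described can be sketched as follows. The projection table (which internal activities realize which external activity, in order) is an invented example; in the architecture above, such information would come from the mapping specification (MS).

```python
# External activity -> ordered internal activities it projects to
# (a made-up single-entry mapping for illustration).
projection = {"Ship": ["pick", "pack", "dispatch"]}

def external_events_for(internal_activity, event):
    """An internal start/end surfaces at the external level only when the
    activity is the first/last one in its external activity's projection;
    interior internal activities cause no external state change."""
    for ext, internals in projection.items():
        if internal_activity in internals:
            if event == "start" and internal_activity == internals[0]:
                return [(ext, "start")]
            if event == "end" and internal_activity == internals[-1]:
                return [(ext, "end")]
    return []

print(external_events_for("pick", "start"))     # first in projection
print(external_events_for("pack", "end"))       # interior: no external event
print(external_events_for("dispatch", "end"))   # last in projection
```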
A prototype of the execution infrastructure was developed in the CrossFlow project [21]. This prototype, however, was not targeted at a Web Service platform; it is based on a proprietary Java-based platform to achieve interorganizational interoperability. The mapping specification developed in CrossFlow is called the Internal Enactment Specification (IES) [38]. The IES describes the mapping between the process at the external level, specified in an electronic contract [31], and the process at the internal level, executed on IBM's MQSeries Workflow engine.

The architecture shown in figure 4.12 uses a direct mapping from the external to the internal level. A two-step mapping via the conceptual level would also have been possible (as proposed in [24]). The advantage of a two-step approach is a separation of concerns: it yields two simpler mappings and allows more freedom in changes to external- and internal-level characteristics. On the other hand, it introduces more complexity in the architecture design and possibly a run-time performance penalty (each relevant event has to be mapped twice between the external and the internal layer).

4.5.2 DIBPM Module Architecture
Figure 4.13 DIBPM module architecture.
Figure 4.13 zooms in on the DIBPM module of figure 4.12. Within the DIBPM module, there are two submodules: on the left, the submodule for finding (and contracting) collaborators (DIBPC), and on the right, the submodule for process execution (DIBPE). The DIBPC module contains a Contractor module that is responsible for finding collaboration partners and establishing relations (contracts) with them. It uses as input an external process specification (EPS) that specifies the process functionality that a collaborator must offer, and a policy specification (PS) that specifies what the company’s policy is for selecting collaborators.
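As a toy illustration of this selection step, the sketch below filters broker-supplied candidates by required process functionality (standing in for the EPS) and a policy predicate (standing in for the PS). All names and fields are invented for the example.

```python
# Hypothetical broker results: each candidate advertises the activities it
# offers plus some business attributes.
candidates = [
    {"name": "FastShip",  "offers": {"ship", "track"}, "certified": True,  "price": 90},
    {"name": "CheapShip", "offers": {"ship"},          "certified": False, "price": 40},
]

required_activities = {"ship", "track"}                    # from the EPS
def policy(c):                                             # from the PS
    return c["certified"] and c["price"] <= 100

# Select collaborators that cover the required functionality and satisfy policy.
selected = [c["name"] for c in candidates
            if required_activities <= c["offers"] and policy(c)]
print(selected)  # ['FastShip']
```

In a real DIBPC module, the selection would of course be followed by contract establishment with the chosen partner.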
To find collaboration partners, the Contractor uses an interface to a process broker (PBI). Typically, this interface will be based on the UDDI standard. But, as shown in section 4.4.6, advanced scenarios require advanced brokering approaches currently not offered by this standard.

The heart of the DIBPE module is the EP engine, a simple workflow engine that manages process enactment at the external level. In doing so, it interprets the external process specification (EPS). The EP engine is the implementation of the BP engine in the abstract BP-WS architecture. The EP engine interfaces to the Mapper module (see figure 4.12; not shown here) to synchronize the external process state with the internal process state. The EP engine also interfaces to a Transaction Manager module (TX) and a QoS Manager module. The TX module implements the functionality discussed in section 4.4.4, and the QoS module the functionality discussed in section 4.4.5. The QoS module may invoke functionality of the TX module to react to QoS deviations observed at a collaborator. Through the BP-WS interface (BP-WSI), the EP engine interfaces to the EP engines of business process collaborators. The BP-WSI module implements the SPEC/ACT/CTRL/MON/SYNC interfaces of the BP-WS classes (as discussed in section 4.4.3).

4.6 Conclusions and Outlook
In this chapter, we have explored service-oriented support for dynamic interorganizational business process management (DIBPM). We have seen that a combination of Service-Oriented Computing and workflow management technologies is essential for full-fledged DIBPM support. SOC provides for the dynamic aspects in this combination, and WFM for the process-oriented aspects. Additionally, we have to include advanced transaction (TX) and QoS support. So, in short, the conclusions can be formulated as a simple "equation": DIBPM = SOC + WFM + (TX/QoS).

As we have already stated, a complete, ready-to-use solution for full-fledged DIBPM does not yet exist, but many ingredients are available or under development, more or less "waiting to be integrated." A number of new developments add to the possibilities described in this chapter and thus provide input to new developments in DIBPM. We mention a few below as an illustration; it is hard to be complete here in a few sentences.

As we have seen, transaction management is important for providing reliable business process management. In current research, the use of parameterizable transaction building blocks is under investigation; this will allow even more flexibility in transaction support. A project in this context is the Dutch XTraConServe project [45], targeted at a framework in which complex transactional workflows can be composed from transactional components drawn from a transaction model taxonomy.

Agent technology may contribute to the infusion of more autonomous intelligence into interorganizational process construction and enactment. The use of agent technology for interorganizational workflows is studied in the CrossWork project, aiming at semiautomated construction of interorganizational workflows in dynamically formed virtual enterprises (see [41] for preliminary ideas). Agents in the CrossWork project are designed to reason about workflow semantics (making use of ontologies), thereby forming a link with developments in the Semantic Web.

DIBPM can also be taken into the world of grid computing, thereby viewing business process services as commodities that are traded between partners operating in a business process grid.

To conclude, we may answer the question "Is DIBPM good for every situation?" DIBPM as discussed in this chapter has two main characteristics that distinguish it from more "traditional" Enterprise Application Integration (EAI) approaches: (1) it is very process-based (hence all the attention to control flow in this chapter), and (2) it is geared toward very dynamic partnerships (hence the attention to matching and brokering). In other words, it is quite different from static, function-oriented, interorganizational application integration approaches, such as those based on EDI (see, e.g., [28]). In situations that are not process-centric and not dynamic, DIBPM may be a bit of overkill. Thus, the answer to the question above is "no," but as process orientation becomes more and more important in business applications and market developments drive dynamic partnering, the answer can be extended with "but it is good for more and more situations."

Acknowledgments

Part of this chapter is based on the work reported in [23] and [24]. The author therefore thanks the colleagues with whom this work was done: Heiko Ludwig and Asit Dan of the IBM T.J. Watson Research Center, and Samuil Angelov of Eindhoven University of Technology.

References

[1] W. M. P. van der Aalst. Inheritance of interorganizational workflows: How to agree to disagree without losing control? Information Technology and Management, 4(4) (2003).
[2] G. Alonso, U. Fiedler, C. Hagen, A. Lazcano, H. Schuldt, and N. Weiler. WISE: Business to Business e-commerce. In Proceedings of the Ninth International Workshop on Research Issues in Data Engineering, pp. 132–139. IEEE Computer Society, 1999.
[3] A. Andrieux et al. Web Services Agreement Specification (WS-Agreement): Version 1.1. Global Grid Forum. http://forge.gridforum.org/sf/projects/graap-wg (2008).
[4] S. Angelov and P. Grefen. The 4W framework for B2B e-contracting. International Journal of Networking and Virtual Organisations, 2(1):78–97 (2003).
[5] S. Angelov. Foundations of B2B Electronic Contracting. Ph.D. thesis, Technical University of Eindhoven, 2006.
[6] G. Antoniou and F. van Harmelen. A Semantic Web Primer. MIT Press, 2004.
[7] D. Box, D. Ehnebuske, G. Kakivaya, A. Layman, N. Mendelsohn, H. Frystyk Nielsen, S. Thatte, and D. Winer. Simple Object Access Protocol (SOAP) 1.1. W3C note, May 8, 2000. http://www.w3.org/TR/SOAP.
[8] D. Box, F. Curbera, M. Hondo, C. Kaler, D. Langworthy, A. Nadalin, N. Nagaratnam, M. Nottingham, C. von Riegen, and J. Shewchuk. Web Service Policy Framework, Version 1.0. 2002.
[9] D. Box, F. Curbera, M. Hondo, C. Kaler, H. Maruyama, A. Nadalin, D. Orchard, C. von Riegen, and J. Shewchuk. Web Service Policy Attachment, Version 1.0. 2002.
[10] C. Bussler. The application of workflow technology in semantic B2B integration. Journal of Distributed and Parallel Databases, 12(2/3):163–191 (2002).
[11] D. Bunting et al. Web Services Composite Application Framework (WS-CAF). OASIS, 2003.
[12] D. Bunting et al. Web Service Context (WS-CTX). OASIS, 2003.
[13] D. Bunting et al. Web Service Coordination Framework (WS-CF), Version 1.0. OASIS, 2003.
[14] D. Bunting et al. Web Services Transaction Management (WS-TXM), Version 1.0. OASIS, 2003.
[15] L. F. Cabrera et al. Web Services Coordination (WS-Coordination). BEA, IBM, and Microsoft, 2004.
[16] L. F. Cabrera et al. Web Services AtomicTransaction (WS-AtomicTransaction). BEA, IBM, and Microsoft, 2003.
[17] L. F. Cabrera et al. Web Services BusinessActivity (WS-BusinessActivity). BEA, IBM, and Microsoft, 2004.
[18] E. Christensen, F. Curbera, G. Meredith, and S. Weerawarana. Web Services Description Language (WSDL) 1.1. W3C note, March 2001. http://www.w3.org/TR/wsdl.
[19] Intra- and Inter-Organisational Business Models. CrossWork Project deliverable no. 1.1/2. Profactor, 2004.
[20] Web Services Business Process Execution Language (WSBPEL) 2.0. IBM, 2007. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsbpel.
[21] P. Grefen, K. Aberer, Y. Hoffner, and H. Ludwig. CrossFlow: Cross-organizational workflow management in dynamic virtual enterprises. International Journal of Computer Systems Science & Engineering, 15(5):277–290 (2000).
[22] P. Grefen, J. Vonk, and P. Apers. Global transaction support for workflow management systems: From formal specification to practical implementation. VLDB Journal, 10(4):316–333 (2001).
[23] P. Grefen. Transactional workflows or workflow transactions? In Proceedings of the Thirteenth International Conference on Database and Expert Systems Applications, pp. 60–69. LNCS 2453. Springer, 2002.
[24] P. Grefen, H. Ludwig, and S. Angelov. A three-level framework for process and data management of complex e-services. International Journal of Cooperative Information Systems, 12(4):487–531 (2003).
[25] P. Grefen, H. Ludwig, A. Dan, and S. Angelov. An analysis of Web Services support for dynamic business process outsourcing. Journal of Information and Software Technology, 48(11):1115–1134 (2006).
[26] P. Grefen. Towards dynamic interorganizational business process management. In Proceedings of the Fifteenth IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises, pp. 13–20. IEEE Computer Society, 2006.
[27] P. Grefen, N. Mehandjiev, G. Kouvas, G. Weichhart, and R. Eshuis. Dynamic Business Network Process Management in Instant Virtual Enterprises. Beta Working Paper 198. Technical University of Eindhoven, 2007.
[28] M. Hendry. Implementing EDI. Artech House, 1993.
[29] Y. Hoffner, S. Field, P. Grefen, and H. Ludwig. Contract driven creation and operation of virtual enterprises. Computer Networks, 37(2):111–136 (2001).
[30] A. Keller and H. Ludwig. The WSLA Framework: Specifying and monitoring service level agreements for Web Services. Journal of Network and Systems Management, 11(1) (March 2003).
[31] M. Koetsier, P. Grefen, and J. Vonk. Contracts for cross-organizational workflow management. In Proceedings of the First International Conference on Electronic Commerce and Web Technologies, K. Bauknecht et al., eds., pp. 110–121. LNCS 1875. Springer, 2000.
[32] F. Leymann. Web Services Flow Language (WSFL 1.0). IBM, 2001. http://www-4.ibm.com/software/solutions/webservices/pdf/WSFL.pdf.
[33] H. Ludwig, A. Keller, A. Dan, R. King, and R. Franck. A service level agreement language for dynamic electronic services. Electronic Commerce Research, 3(1–2):43–59 (2003).
[34] D. Martin, ed. OWL-S: Semantic Markup for Web Services. World Wide Web Consortium, 2004.
[35] OASIS. Business Transaction Protocol, Version 1.0. June 2002.
[36] OASIS. UDDI Specifications TC. 2003. http://www.oasis-open.org/committees/uddi-spec.
[37] M. Papazoglou. Web Services and business transactions. WWW Journal, 6(1):49–91 (March 2003).
[38] J. Saint-Blancat, ed. CrossFlow Deliverable D16: Final Report. CrossFlow Consortium, IBM, 2001. Available at www.crossflow.org.
[39] I. Stalker, N. Mehandjiev, G. Weichhart, and K. Fessl. Agents for decentralised process design (extended abstract). In International Conference on Cooperative Information Systems, R. Meersman and Z. Tari, eds., pp. 23–25. Springer, 2004.
[40] S. Thatte. XLANG: Web Services for Business Process Design. Microsoft, 2001. http://www.gotdotnet.com/team/xml_wsspecs/xlang-c/default.htm.
[41] D. Tsichritzis and A. Klug, eds. The ANSI/X3/SPARC DBMS Framework. AFIPS Press, 1977.
[42] UDDI Technical White Paper. 2002. http://www.uddi.org.
[43] J. Vonk and P. Grefen. Cross-organizational transaction support for e-services in virtual enterprises. Journal of Distributed and Parallel Databases, 14(2):137–172 (2003).
[44] T. Wang and P. Grefen. A Historic Survey of Transaction Management from Flat to Grid Transactions. Beta Working Paper 138. Technical University of Eindhoven, 2005.
[45] T. Wang, P. Grefen, and J. Vonk. Abstract transaction construct: Building a transaction framework for contract-driven, service-oriented business processes. In Proceedings of the Fourth International Conference on Service-Oriented Computing, A. Dan and W. Lamersdorf, eds., pp. 434–439. Springer, 2006.
[46] P. Grefen, J. Vonk, E. Boertjes, and P. Apers. Two-layer transaction management for workflow management applications. In Proceedings of the Eighth International Conference on Database and Expert Systems Applications, pp. 430–439.
[47] M. Mecella, B. Pernici, and P. Craca. Compatibility of e-services in a cooperative multi-platform environment. In Proceedings of the Second International Workshop on Technologies for E-Services, pp. 44–57.

5 Data and Process Mediation in Semantic Web Services

Adrian Mocan, Emilia Cimpian, and Christoph Bussler

5.1 Motivation
The Web is a highly distributed and heterogeneous source of data and information, and the emergence of Web Services extends the heterogeneity problems from the data level to the behavior level of business logics, message exchange protocols, and Web Service invocations. The need for mediator systems able to cope with these heterogeneity problems and to offer the means for reconciliation, integration, and interoperability is obvious, and the success of the Semantic Web vision depends directly on the solutions offered by these systems.

This chapter presents an overview of mediator systems, analyzing existing and future trends in this area, and describes the mediation architecture that is part of the Web Service Execution Environment (WSMX) [18] [26]. We provide an insight into data mediation, together with a survey of the multitude of existing approaches to it. In parallel, we explore and characterize the largely unexplored topic of process mediation, which must be part of the mediation solution for the Semantic Web and Semantic Web Services. The following sections provide an analysis of the Web Service heterogeneity problems and underline the benefits that ontologies can bring in resolving them. A set of mediation requirements is identified, followed by a brief overview of a Semantic Web Services framework containing built-in support for the specification of mediators.

5.1.1 Web Service Heterogeneity
One of the most difficult obstacles Web Services have had to overcome in the attempt to exploit the true potential of the World Wide Web and materialize the vision of Semantic Web Services is heterogeneity. Heterogeneity is present in several areas: data, business logics, message exchange protocols, and Web Services invocation [13]. It is caused by the nature of the Web itself: an open space for publishing the data and the information owned and created by any human or software agent. More recently, heterogeneity has extended beyond data through published Web Services, as a means to facilitate the interoperability and the integration of various systems. Obviously, heterogeneity is, and will remain, a core aspect and a core issue of the Web.
Our aim is to provide the solutions and technologies necessary to overcome heterogeneity as a problem and therefore turn it into an advantage of the Semantic Web. Naturally, the first problem that has to be tackled is data heterogeneity [5]. In the context of Web Services, a common problem is caused by the differences between the formats of the data being communicated between a Web Service invoker and a Web Service provider. The functionality of a Web Service might be easily achieved by combining or composing existing Web Services. But the interaction patterns of the integrated Web Services do not always match precisely, the direct cause of this being process heterogeneity. Additional heterogeneity problems may appear even at a lower level, when the communication technologies used are different, in the form of protocol heterogeneity.

The scope of this chapter is to cover the first two types of heterogeneity problems described above: to present existing approaches and future plans in the development of data and process mediators. We consider that intelligent, flexible, and dynamic mediation solutions can be achieved only if the semantics of the data is part of the mediation process; that is, if data from different sources are mediated on the basis of their semantic similarities as expressed by their reconciled conceptualizations.

Process mediation may be needed when a requester attempts to invoke and execute a Web Service. The communication pattern of the Web Service may be different from the one the requester expects, in which case one party has to adjust to the other's communication pattern: it has to change its process execution in order to match the other party's specifications. The adjustment of the different patterns in order to make them match is called process mediation.

5.1.2 Ontologies for Simplifying Integration
The main reason that integration problems are difficult to solve, no matter whether manual, semiautomatic, or automatic solutions are sought, is the lack of semantics. In most cases the semantics (of both data and processes) remains in the creator's or designer's mind. As a consequence, the party responsible for the integration task (human or machine) has to refer to this semantics without having an explicit representation of it. Humans can rely on their domain and background knowledge in order to come up with proper solutions. Machines, on the other hand, can in this case provide only brute computational power, which in most cases is used in executing and applying manually designed transformations.

Ontologies try to fill this gap: they are able to provide a conceptualization of the domain and to offer reference points during the integration process. Ontologies may be used in two ways: either by annotating data that already exist or by offering the conceptual model for data to be created. The effect can eventually be seen in a higher degree of automation during the integration stage, which could lower the human effort and the costs of the whole process.

From the mediation point of view, semantics plays a crucial role in the process. Heterogeneity is addressed by exploiting the semantic similarities of the entities subject to the mediation process, and this is usually done by acting at the ontological level and later applying the outcome to the actual data.
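As a toy illustration of this idea, the sketch below applies design-time concept mappings between a source and a target vocabulary to instance data. The two mini "ontologies" and the mapping table are invented for the example and are far simpler than real ontology alignment.

```python
# Design-time outcome of mapping two vocabularies:
# source attribute -> (target attribute, value converter).
mappings = {
    "familyName": ("surname", lambda v: v),
    "priceUSD":   ("price",   lambda v: {"amount": v, "currency": "USD"}),
}

def mediate(instance):
    """Run-time step: transform a source-vocabulary instance into the
    target vocabulary by applying the design-time mappings."""
    return {tgt: conv(instance[src])
            for src, (tgt, conv) in mappings.items() if src in instance}

print(mediate({"familyName": "Mocan", "priceUSD": 10}))
# {'surname': 'Mocan', 'price': {'amount': 10, 'currency': 'USD'}}
```

Note how the mapping is established once at the conceptual level and then applied automatically to any number of instances, mirroring the design-time/run-time split discussed in the next subsection.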
Data and Process Mediation in Semantic Web Services
5.1.3 Mediation Specification Overview
This section presents a general overview of what a mediation component should look like in the context of Semantic Web Services. In particular, it analyzes the general aspects of mediator systems in an attempt to provide an abstract view of a mediator. More details about the required functionality and the architectural elements are presented in the following sections. One of the most important issues in mediator systems is the involvement of the human expert in the mediation process. Even if the techniques for identifying, extracting, and making the semantics of both data and processes available to algorithms are becoming more effective,1 100 percent accuracy cannot be achieved. A human expert is always required for solving either the exceptional cases or the cases where the computer fails to make the correct decision (because of insufficient information or because the semantics is missing from the analyzed representations). Mediators typically contain two distinct parts: a design-time module and a run-time module (see figure 5.1). As its name indicates, the design-time module is used for designing the transformations that have to be applied to the mediated entities (the middle-top box of the mediator in figure 5.1). This is the phase where the human user can provide input. This module is usually represented by a user interface (in most cases, a graphical one) that offers support in various ways,
Figure 5.1 Overview of a mediator (design time: mapping rules created between the source and target conceptual specifications; runtime: a rules processor transforming source instances into target instances).
A. Mocan, E. Cimpian, and C. Bussler
from validations and conflict detection to user guidance and suggestions [27]. This module acts on the conceptual specifications of the entities to be mediated (for example, ontologies in the case of data mediation and process representations in the case of process mediation). The result of the design-time module is a set of mappings that define how the messages exchanged between the given source and target entities have to be transformed in order to accommodate the existing differences, both in the exchanged data and in behavior. In the case of process mediation, the mappings can even take the form of a complex process which, together with the source and target behaviors, can be composed and executed in a meaningful and heterogeneity-free way. The second stage has the role of executing the previously generated rules and of providing the actual data/messages that have to be delivered to the target party. This step takes place at runtime and acts on the entities' instances that need to be mediated (e.g., data instances or process instances). The most important feature of this step is that it can be executed automatically, without any human intervention, allowing the integration of the mediator into more complex processes and activities. Complete automation is possible because the rules have been fully specified and verified in the design-time step. There are cases where the role of the rules processor is more limited: it simply forwards the rules to the target party, on the assumption that the target has all the necessary means for executing them. For example, in the case of data mediation it could be the responsibility of the target to add the rules to its own ontology and to query the updated ontology (also considering aspects of the source ontology and the instances to be mediated) in order to obtain the desired, mediated data.

5.1.4 General Mediation Requirements
There are several requirements for a mediation system, some depending on the nature of the mediation process (as an intermediary process in the communication between two or more parties) and some depending on the implemented mediation approach (see section 5.2 for a description of currently existing mediation approaches). The main requirements envisioned for a mediation system are the following:

1. Transparent Being an intermediary process in the communication, mediation needs to be transparent for the involved parties. They do not need, and probably do not want, to know every detail of their interactions, either with the mediator system or with their partner.
2. Independent and decoupled The mediation system must not depend on a particular execution environment and should not be tied to any other components needed for the communication.

These requirements lead to a single one: the mediation must be available as a Web Service, which guarantees the desired independence and decoupling. Although there has been intense research activity in this area, mediation is still a semiautomatic process. This means that input from a human user is required. The best solution in this situation is to separate the operations that need human input (provided at design time) from the operations that can be completed automatically at runtime.
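The design-time/runtime separation described above can be sketched as follows; the mapping structure and all field names are illustrative assumptions, not part of any mediation standard.

```python
# Illustrative sketch of the design-time / runtime split.
# Design time (human-assisted): mappings between source and target elements
# are created and verified once.  Runtime (fully automatic): the stored
# mappings are applied to every incoming instance without user intervention.

# Design-time artifact: verified element-level mappings, each with an
# attached value transformation.
mappings = {
    "firstName": ("given_name", str.strip),
    "lastName": ("family_name", str.strip),
    "ageYears": ("age", int),
}

def execute_rules(source_instance: dict) -> dict:
    """Runtime rules processor: apply the design-time mappings automatically."""
    target_instance = {}
    for src_field, (tgt_field, transform) in mappings.items():
        if src_field in source_instance:
            target_instance[tgt_field] = transform(source_instance[src_field])
    return target_instance

print(execute_rules({"firstName": " Anna ", "ageYears": "29"}))
# {'given_name': 'Anna', 'age': 29}
```

Because everything requiring judgment is frozen into `mappings` at design time, `execute_rules` can run unattended, which is exactly the property that allows a mediator to be embedded in larger automated processes.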
A design-time tool must assist the user during the creation of the mappings (that is, the equivalences between the data used by the involved parties). The requirements for this tool are the following:

1. Provide a graphical user interface This interface supports the user in the creation of the mappings. The user interface must contain visual indicators for pointing out which parts of the message or process are already mapped, which equivalences are most probably true, which issue is currently under analysis, and so on.
2. Provide mapping suggestions This provision requires a mapping algorithm for calculating the probability that two ontologies' concepts or processes match. The proposed suggestions must have a high precision, which means that the ratio between the correctly identified matches and all identified matches has to be as close to 1 as possible [10].

A runtime component must generate valid mapping rules from the mappings provided at design time. It must be able to obtain, given instances of the source ontology or the source process, the corresponding target ontology instances. The requirements for this component are the following:

1. Scalability The number of clients that might need to use the services of a mediation system, as well as the number of heterogeneous data sources that need to be mediated, are continuously growing. The mediation system must be designed to be scalable in both directions.
2. Flexibility The effort needed to adapt the mediation system to changes that may appear in the structure of the ontologies must be minimal.
3. Correctness Based on the mappings created at design time, the runtime component must create and execute correct mapping rules. In particular, the technical solution chosen to represent the mappings in an executable form (e.g., mapping rules in a logical language) must not change the equivalences previously identified.
4. Consistency The generated rules have to reflect the semantic relationships in a consistent way. This means that the mediated data obtained by executing the mapping rules must be consistent with respect to the target ontology (and the potential constraints defined therein).
5. Completeness During the creation of the mappings, the user may make mistakes in selecting the proper correspondences. Unless the system is able to correct them (or to suggest to the user how to correct them), these mistakes will lead to faulty mapping rules. One aspect of this is completeness (i.e., all target elements have to be matched). There can be situations in which some of the target elements do not have any semantic correspondents in the source, in which case completeness is achieved by providing the appropriate default values, as imposed by the target ontology.

5.1.5 The Web Service Modeling Ontology
The Web Service Modeling Ontology (WSMO) [33], [35] aims to be the most complete Semantic Web Services specification available. It describes various aspects of Semantic Web Services:
ontologies, goals, Web Services, and mediators. Ontologies add semantics to the data used by all the other elements described in WSMO. WSMO takes as its starting point the Web Service Modeling Framework (WSMF) [13], refining and extending the concepts presented therein. WSMO introduces a conceptual model for ontologies, defining the essential components of an ontology. Goals express the aims and objectives a requester may have when intending to consume a Web Service. Web Services represent the functional part of WSMO, having the role of offering a semantically described functionality in order to enable their (semi)automatic use. Finally, mediators provide the interoperability means between the other components. WSMO defines four types of mediators: ooMediators (used to import ontologies and solve the possible mismatches between them), ggMediators (used to link two goals), wgMediators (for linking a Web Service with a goal), and wwMediators (used to link two Web Services). In this context, data mediation as described in section 5.1.1 is achieved by the ooMediators and used by all the other types of mediators described above. Process mediation has to be carried out in two of the three other types of mediators (wgMediators and wwMediators), in conjunction with data mediation. Please note that the ontology-to-ontology mediation process fully benefits from the conceptual model for ontologies offered by WSMO: the input ontologies have the same conceptual model (or metalevel), which makes it possible to use various algorithms and techniques for identifying the semantic similarities between them without it being necessary to first lift the ontologies (and data) to a different conceptual level. As for process mediation, one important concept defined by WSMO needs to be taken into consideration: choreography [5]. Choreography defines the way of interacting with a certain entity (a Semantic Web Service or a requester of a service), providing well-defined semantic public processes. By doing this, WSMO choreography represents a step forward for process representation, considering that most currently existing approaches (e.g., BPEL and UML) provide only syntactic, semantically undefined process descriptions. Process mediation is used to accommodate mismatches between the choreographies of the two participants in a conversation.

5.2 State of the Art in Mediation
As [40] illustrates, the definition of an algebra and of mismatch-resolving mechanisms for all the kinds of heterogeneity that can appear is still impossible. The purpose of this section is to present some of the existing approaches to data and information mediation, as well as the currently existing standards in process representation and process mediation. An important aspect that has to be taken into consideration is the continuously growing number of systems that need to be integrated. Assuming that each of them uses a different ontology and has a communication pattern different from all the others, the attempt to implement individual mediation mechanisms (i.e., mediation rules) for each pair of partners seems to be an unfeasible solution, since the number of such mediation rules has an n-square growth (figure 5.2a). Additionally, maintaining n sets of mediation rules in order to communicate with n partners is just too difficult and time-consuming. In these circumstances it is preferable to have a central system that is able to understand and to communicate with all the others (figure 5.2b). This central system has the role of facilitating the communication between any two partners. As it has an ontology itself, it reduces the number of required transformation rules from n square to n, considering n communication partners. By this we do not assume the necessity of a global semantic model that would enable the mediation of any local model in the world. It has become clear that such an agreed-upon model can never be created and maintained. But it is feasible to assume the existence of such global models for islands of users, for particular user communities that share and act in the same domain.

Figure 5.2 N-square problem: n-to-n mediation versus n-to-1 mediation.

5.2.1 Data Mediation
In the last decades, data mediation has been the subject of intense research; consequently, a large number of approaches have emerged, each proposing a different solution to this problem. In the following, an overview of these approaches is presented, considering both the types of targeted application domains and the adopted techniques and strategies. The first analysis of existing approaches in this chapter is based on the classification proposed by [20], which identifies three main classes of applications: information integration and the Semantic Web, data migration, and ontology merging.

Information Integration and Semantic Web The World Wide Web offers a multitude of information sources (e.g., knowledge bases, databases, and semistructured data), together with a multitude of representation formats for them. Users need effective and uniform strategies for retrieving these sources and using them to their full potential. [39] calls for "intelligent integration," which would allow the integration of a large variety of
data sources based on semantic means offered by ontologies, and would provide an advanced query processor. A few years earlier, [23] considered that one of the most critical problems in this area is the use of different vocabularies for describing similar information across domains. The solution proposed is a scalable system (called Observer) for vocabulary sharing, allowing the user to build queries using terms from a chosen ontology. [15] developed a tool called Infomaster, which aims to solve the two main problems that appear during integration: distribution and heterogeneity. Infomaster provides integrated access to distributed heterogeneous data sources, thus offering the illusion of a single, centralized, and homogeneous system. Usually such a system is based on a so-called mediated schema that in most cases does not contain any actual data. The data from the distributed sources are linked to this mediated schema by using transformation rules. [38] provides a formalism for expressing such rules and classifies them into integration rules (used for exchanging information) and context transformation rules (used for constructing new information) in order to better handle the possible heterogeneity problems. [30] present the vision of an electronic marketplace for business-to-business electronic commerce. The challenge is to provide efficient management of product, catalog, and document descriptions by creating a product integration framework. To make this possible, a three-layer approach is adopted: syntax, data models, and ontology. The first layer corresponds to the instance documents and their XML serialization; the second layer bridges the syntax and ontology layers; and the third layer defines the terminology that is further used for describing the documents and for aligning all the other representations. According to [20], the Semantic Web pushes this type of application to the extreme: no central mediated schema is available. The tasks also include actions in addition to queries, and the coordination should be achieved by using ontologies.

Data Migration Data migration deals with the translation of data and knowledge between two data sources that have different representations. The migration process is carried out on the basis of a set of mapping rules that covers the mismatches between the two sources. One approach in this direction is taken by the Clio project [31], which aims to provide a way of mapping XML and relational schemas without requiring the user to create queries for every translation problem. Clio is in fact a high-level schema mapping tool that guides the user to the mapping specification by using so-called value correspondences (these specify how the values of source attributes are mapped to the target attribute).

Ontology Merging Ontology merging is used when several ontologies that model overlapping domains, but were developed in different contexts, need to be integrated. The merging process usually starts with the establishment of a set of mapping rules, followed by an algorithm that generates the minimal ontology covering the initial ones [20].
[29] propose a merging algorithm (PROMPT) that takes two ontologies as input and guides the user through an iterative process that produces a merged ontology as its result. In the beginning the algorithm computes a set of suggestions based on lexical and structural similarities, presents them to the user, and waits for an operation to be selected. After an operation is chosen, the algorithm computes the possible conflicts and presents a new set of suggestions. This iterative process is repeated until the result (the merged ontology) meets the requirements. [Chalupsky, 2000] comes up with two powerful mechanisms that facilitate ontology merging and translation: syntactic rewriting and semantic rewriting. The first uses pattern-directed rewrite rules specifying sentence-level transformations through pattern matching. The second modulates syntactic rewriting by using semantic models and logical inference. The merging process should consist of the following steps (some of which have to be repeated until the desired result is obtained): finding the semantic overlap; designing transformations to bring the sources into mutual agreement; morphing the sources to carry out the transformations; and taking the union of the morphed sources and checking the result for consistency, uniformity, and nonredundancy. Another interesting approach to ontology merging is the one proposed by [37]: a bottom-up approach based on formal concept analysis [14]. The mechanism is based on application-specific instances of the two given ontologies and consists of three steps: instance extraction and computation of the two corresponding formal contexts; execution of the algorithm that derives the common context and computes a concept lattice; and generation of the final merged ontology. [28] proposes the Ontobuilder system, demonstrating the direct usage of ontology merging in the information-seeking domain. The extracted ontologies (candidate ontologies) are merged with an existing ontology (the target ontology) in order to refine and generalize the target ontology. This approach mainly considers syntactic merging strategies, such as textual matching, ignorable-character removal, dehyphenation, stop-term removal, substring matching, and thesaurus matching. In addition, content matching may be taken into consideration by computing a coefficient representing the match effectiveness, based on the number of options (i.e., allowed values for a given term) in one term that match options in the other term.

Another way of classifying the existing approaches to data mediation is to analyze the strategies they apply. We can distinguish two major trends: the first is based on schema-level mappings, and the second on machine learning. In the first case, the mediation process starts with the creation of mappings at the schema level, where each mapping denotes a certain degree of similarity between an element from the source ontology and one from the target ontology. This step is usually carried out at design time, because human intervention (by a domain expert) is always necessary to assure the correctness of the mappings, even for simple validations. The process continues with the translation of actual data (in most cases, ontology instances) from terms of the source ontology into terms of the target ontology. This last part of the process can be performed at runtime with no human intervention.
[32] provide a classification of the possible directions that can be taken for determining the right mappings at the schema level: the usage of heuristics based on the structures defined in the ontologies, on naming, or on domain-independent techniques. Structure-level matching refers to matching combinations of elements that appear together in a structure. Depending on how precise and complete a match of the structure is, we can have a full or a partial structural match. For complex cases, the usage of known equivalence patterns (stored in a library, for example) can increase the efficiency of the mapping process. The heuristics based on naming use linguistic techniques for determining semantically similar schema elements: name equality, canonical name representation equality, synonymy/hyponymy relations, and so on. The Onion tool kit [24] uses domain-external heuristics based on graph-oriented models in order to determine the proper mappings at the schema level. Onion uses rules that cross the semantic gap by creating an articulation between systems; it extends the graph-oriented model with a small algebraic operator set for representing ontologies, making them amenable to automatic composition. In the second case, the machine-learning approach considers instances of the two ontologies in the mediation process. Based on several training sets, the machine learns a set of mappings between the input ontologies' schemas; then, on the basis of these mappings, the actual data are translated. In some cases, the machine can learn classifiers in the same manner and then apply them to the data to be mediated. [19] describe a system called Caiman which is able to create mappings by using machine-learning techniques for text classification. The nodes in the ontology are considered to be described by the documents associated with them; as a consequence, a text classification algorithm may be applied for computing the degree of similarity of two nodes from two given ontologies.
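As a toy illustration of the naming heuristics mentioned above (canonical name representation plus string similarity; all element names are invented), a matcher might propose schema-level correspondences like this:

```python
# Toy naming heuristic for schema-level matching: canonicalize element
# names and score candidate pairs by string similarity.
from difflib import SequenceMatcher

def canonical(name: str) -> str:
    """Canonical name representation: lowercase, separators stripped."""
    return name.lower().replace("_", "").replace("-", "")

def suggest_matches(source: list, target: list, threshold: float = 0.8):
    """Return (source, target, score) suggestions above the similarity threshold."""
    suggestions = []
    for s in source:
        for t in target:
            score = SequenceMatcher(None, canonical(s), canonical(t)).ratio()
            if score >= threshold:
                suggestions.append((s, t, round(score, 2)))
    return suggestions

print(suggest_matches(["Postal_Code", "City"], ["postalcode", "city-name"]))
```

A production matcher would combine such lexical scores with structural evidence and synonym dictionaries before presenting suggestions to the user, precisely because pure string similarity misses semantically equivalent but lexically distant names.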
Another system developed to assist the ontology mapping process by employing machine-learning techniques is GLUE [11]. For two given ontologies, GLUE is able to find the most similar concepts, using probabilistic definitions for several practical similarity measures. The system uses multiple learning strategies, each of them exploiting a different type of information in either the data instances or the taxonomic structure of the ontologies.

5.2.2 Process Mediation
Process mediation is still a poorly explored research field in general, and specifically so in the context of Semantic Web Services. The existing work offers only visions of mediator systems able to resolve process heterogeneity problems in a (semi)automatic manner, without presenting elaborated details about their architectural elements. Still, these visions represent a starting point and valuable references for future concrete implementations. [13] identify two types of processes: private business processes, which define the internal business logic of Web Services, and public processes, which express the public, visible interaction patterns. From the mediation point of view, we are concerned only with the second type of process. The concept of public processes is refined by WSMO as choreography [36]. The internal behavior of the participants is not relevant for communication, and therefore we are interested only in the external behavior, expressed in terms of the message exchange sequence.
The solution envisioned by WSMF is that the business partners must agree on two matching processes. This means either that they have to mediate between their individual public processes (communication patterns) or that a process mediator has to accommodate the mismatches. For example, consider a virtual travel agency (VTA) service and a client who wants to invoke it. The public process of the service may state that the service needs to receive two stations (the starting point and the destination of the trip) in order to return a possible route. The computations done internally by the service in order to generate the trip are of no interest for the mediation; all that matters is that its public process defines the following communication pattern:

1. Receive station, the starting point of the trip.
2. Receive station, the destination of the trip.
3. Send route.

(See figure 5.3.) In order to communicate with the service, the client has to define an equivalent communication pattern (send two stations, receive a route), but it is often the case that the pattern defined by the client is different: for example, it may want to send a single message containing both stations, the date of travel, and perhaps the preferred departure time, and to receive a confirmation of the reservation directly. Since neither the client nor the service is willing to modify its communication pattern, an external process mediator has to be involved in the communication.
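A hypothetical sketch of the pattern adjustment a process mediator performs in this VTA scenario: it decomposes the client's single composite message into the two "station" messages the service expects, and relays the route back. The message format and all names are assumptions for illustration, not part of WSMF or WSMO.

```python
# Hypothetical process mediator for the VTA scenario: the client sends one
# composite message, while the service expects two separate "station"
# messages before it answers with a route.

def vta_service(messages):
    """Stand-in for the service: consumes two stations, emits a route."""
    origin, destination = messages
    return f"route:{origin}->{destination}"

def process_mediator(client_message: dict) -> str:
    # Decompose the composite message into the message sequence the service
    # expects; fields the service cannot consume (e.g., the travel date) are
    # retained by the mediator instead of being forwarded.
    to_service = [client_message["from"], client_message["to"]]
    # Invoke the service with the adjusted pattern and relay the answer.
    return vta_service(to_service)

print(process_mediator({"from": "Innsbruck", "to": "Vienna", "date": "2009-05-01"}))
# route:Innsbruck->Vienna
```

Neither party changes its own public process; the mediator alone absorbs the mismatch, which is what makes the mediation transparent to both sides.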
Figure 5.3 Communication pattern of VTA service (public process defining the communication pattern; the private process remains hidden).
Figure 5.4 Process-based integration [3] (hub-and-spoke integration: messages are extracted from one application, stored in and retrieved from a database management system under the control of a workflow management system, transformed, and inserted into the target application).
[3] identifies a process-based integration architecture meant to facilitate the communication between business partners on the basis of process integration (figure 5.4). The main component of the architecture is the workflow management system. After one of the applications (or business partners) sends a message, the message is extracted and then transmitted to the workflow management system. The system can start (create) a new workflow instance when it receives a message, or it can continue with a workflow instance created previously (continue an existing conversation). Any message received by the workflow management system can be stored in and retrieved from the database as required by the workflow instances. A workflow instance determines the processing of the messages received, as well as the appropriate target spoke. Any message can be stored and retrieved without being sent out. Once a message is ready to be sent out, the workflow management system retrieves it from the database. The message is then transformed before the insert into the target application happens. The workflow represents the integration process, and the steps of a workflow instance perform the sending of data to, and the receiving of data from, the applications.
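The store/transform/deliver cycle described above can be sketched as follows (a deliberately simplified stand-in; all class and function names are invented):

```python
# Illustrative hub-and-spoke flow: the hub stores incoming messages and a
# workflow step decides when to transform and deliver them.

class WorkflowHub:
    def __init__(self, transform, deliver):
        self.store = []          # stand-in for the database management system
        self.transform = transform
        self.deliver = deliver   # "insert" into the target application

    def receive(self, message):
        """Extract step: an application's message enters the hub and is stored."""
        self.store.append(message)

    def step(self):
        """Workflow instance step: retrieve, transform, and send out."""
        while self.store:
            message = self.store.pop(0)
            self.deliver(self.transform(message))

outbox = []
hub = WorkflowHub(transform=str.upper, deliver=outbox.append)
hub.receive("order placed")
hub.receive("order shipped")
hub.step()
print(outbox)  # ['ORDER PLACED', 'ORDER SHIPPED']
```

The point of the indirection is that storage and delivery are decoupled: a workflow instance may hold messages for arbitrarily long before deciding which spoke receives them, exactly as in the architecture above.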
5.2.3 Research Prototypes and Products
Two different directions have been taken in the development of mediation systems, the research direction and the industrial one, resulting in two different classes of tools: industrial products and research prototypes. In the research area, activity has concentrated on finding novel solutions that improve the quality of the results and reduce the required human input. In the industrial area, the focus has been on the rapid development of robust and reliable mediation systems suitable for particular applications. In this section we briefly present some of the existing research prototypes, as well as industrial mediation systems, in order to underline the current trends in both areas.

Research Prototypes In the following pages we analyze in more detail a number of research prototypes and specifications from both the data and the process mediation points of view. We present two prototypes that offer implementations for data mediation: PROMPT [29] and MAFRA [21]. The first is a tool for semiautomatic ontology merging and alignment, developed as a plug-in for Protégé,2 a well-known knowledge acquisition tool. The second, MAFRA, is a mapping framework for distributed ontologies, partially implemented as part of the Kaon Ontology and Semantic Web framework. We have chosen to present these tools because they exhibit the main characteristics and functionality that, in our view, a data mediation tool should have.

PROMPT This tool is based on the algorithm of the same name, which proposes a way of guiding the user through an iterative process that ends with a merged ontology as output. The process starts with the identification of classes with similar names and provides a list of initial matches. Then the following steps are repeated several times: the user selects an action (by choosing a suggestion or by editing the merged ontology directly), and the tool computes new suggestions and determines possible conflicts (see figure 5.5).
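The iterative loop can be abstracted as in the sketch below; this is an invented skeleton, not the published algorithm, and the helper functions are placeholders for PROMPT's real lexical and structural machinery.

```python
# Abstract skeleton of a PROMPT-style interactive merging loop; the helpers
# are placeholders for the real lexical/structural similarity machinery.

def initial_suggestions(onto_a, onto_b):
    """Pair up classes with identical (case-insensitive) names."""
    names_b = {c.lower(): c for c in onto_b}
    return [(a, names_b[a.lower()]) for a in onto_a if a.lower() in names_b]

def prompt_merge(onto_a, onto_b, choose_operation):
    """Iterate: user chooses an operation, tool applies it and re-evaluates."""
    suggestions = initial_suggestions(onto_a, onto_b)
    merged, conflicts = [], []
    while suggestions:
        # The user selects an operation (here: which suggested merge to apply).
        op = choose_operation(suggestions)
        suggestions.remove(op)
        merged.append(op)
        # A real tool would recompute suggestions and detect conflicts here.
    return merged, conflicts

merged, _ = prompt_merge(["Person", "Trip"], ["person", "Route"],
                         choose_operation=lambda s: s[0])
print(merged)  # [('Person', 'person')]
```

The essential design point survives the simplification: the tool never commits a merge on its own; every operation passes through the user, with automation confined to proposing and re-ranking candidates.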
Figure 5.5 The PROMPT algorithm [29] (make initial suggestions; select next operation; perform automatic updates; find conflicts; make suggestions).
PROMPT is able to handle ontologies conforming to the OKBC [4] format, having at the top level classes, slots, facets, and instances. The allowed operations include merging classes, merging slots, merging bindings between a slot and a class, and performing a deep or shallow copy of a class from one ontology to another (including or not including all the parents of the class up to the root of the hierarchy and all the classes and slots it refers to). Some of the conflicts identified by PROMPT are name conflicts, dangling references (a frame refers to another frame that does not exist), redundancy in the class hierarchy (more than one path from a class to a parent other than the root), and slot value restrictions that violate class inheritance. Being implemented as a plug-in for Protégé, PROMPT can take advantage of all the ontology engineering capabilities of that tool. Some of the features offered by this implementation are setting the preferred ontology (for automatic conflict solving in favor of one ontology), maintaining the user's focus (the suggestions the user sees first relate to frames in the same area), providing explanations for the suggestions made, and logging and reapplying operations.

MAFRA This conceptual framework (see figure 5.6) describes a set of modules organized in two dimensions, horizontal and vertical, depending on their usage in the mapping process. The horizontal modules (Lift and Normalization, Similarity, Semantic Bridging, Execution, and Postprocessing) represent distinct phases in the mapping process, whereas the vertical modules (Evolution, Domain Knowledge and Constraints, Cooperative Consensus Building, and Graphical User Interface) are used during the entire mapping process, in close interaction with the horizontal modules. Lift and Normalization copes with syntactical, structural, and language heterogeneity, raising all the data to be mapped to the same representation language.
Similarity deals with discovering similarities between entities, analyzing lexical similarities together with the entities' properties, such as attributes and relations. The Semantic Bridging module transforms the previously
Figure 5.6 Conceptual framework of MAFRA [21].
detected similarities to establish the correspondences between the entities from the source and target ontologies. The Execution module evaluates the semantic bridges, and the Postprocessing module checks and tries to improve the quality of the results obtained. The first vertical module, Evolution, focuses on keeping the bridges obtained by the Semantic Bridging module up to date, consistent with the transformations and evolutions that may take place in the source and target ontologies. Cooperative Consensus Building is responsible for establishing a consensus between two communities with respect to the semantic bridges. Domain Knowledge and Constraints may substantially improve the quality of the bridges by using background knowledge and domain constraints, such as glossaries, lexical ontologies, or thesauri. Finally, the Graphical User Interface is required because the creation of mappings is a difficult and time-consuming process that demands extensive graphical support.

Unfortunately, none of the research prototypes we considered offers support for process mediation, because communication pattern heterogeneity is not addressed. There are, however, two well-known specifications relevant to the process heterogeneity problem, OWL-S3 and the Web Services Business Process Execution Language (WS BPEL4), which we briefly describe in this chapter. Neither of them offers support for process mediation, but both provide means for process representation.

OWL-S OWL-S defines processes as the ways a client may interact with a service [22]. From this point of view, a direct correspondence can be identified between WSMO choreography and OWL-S processes, since both deal with the way a client may interact with a Web Service. OWL-S distinguishes between two types of processes: atomic processes and composite processes. An atomic process describes a service that expects one input and returns one output.
Input and output are messages received from and sent to the client. A composite process is one that maintains some state; each message the client sends advances it through the process. An OWL-S process is defined by its inputs, outputs, preconditions, and effects [22]. The inputs are the information required for the performance of the process; the outputs represent the information that the process provides to the requester; the preconditions are conditions that must all hold in order for the process to be successfully invoked; and the effects are the changes produced as a result of performing the process.
WS-BPEL
WS-BPEL distinguishes two ways of describing business processes: executable business processes and abstract business processes, also known as business protocols. The executable ones model the internal behavior of a participant in an interaction, and the abstract ones describe the message exchange behavior of the involved parties. The key distinction between the two classes is that the executable business processes model data in a private way that should not be described in the public protocols [1]. As with OWL-S, the WS-BPEL abstract processes can be directly related to WSMO choreography, since they are meant to perform the same task: allowing interaction with the client.
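The IOPE model (inputs, outputs, preconditions, effects) can be sketched in a few lines of Python. This is an illustration of the idea only, not OWL-S machinery; the BookTicket process and all field names are invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class AtomicProcess:
    # Illustrative stand-in for an OWL-S-style process description;
    # the class and field names are ours, not part of OWL-S itself.
    name: str
    inputs: list            # information required to perform the process
    outputs: list           # information the process provides to the requester
    preconditions: list = field(default_factory=list)  # must all hold before invocation
    effects: list = field(default_factory=list)        # changes produced by the process

    def invoke(self, message):
        # An atomic process receives one input message and returns one output.
        if any(i not in message for i in self.inputs):
            raise ValueError(f"{self.name}: missing input")
        if not all(p(message) for p in self.preconditions):
            raise ValueError(f"{self.name}: precondition failed")
        state = {}
        for effect in self.effects:
            effect(state)   # record the changes produced by the process
        return {out: f"{out} issued by {self.name}" for out in self.outputs}

# Hypothetical ticket-booking process.
book = AtomicProcess(
    name="BookTicket",
    inputs=["creditCard"],
    outputs=["ticket"],
    preconditions=[lambda m: len(m["creditCard"]) == 16],
)
```

Invoking `book` with a 16-digit card number yields a ticket output; omitting or shortening the card number makes a precondition fail, so the invocation is rejected.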
According to the WS-BPEL specification, the definition of a business protocol involves specifying all the visible messages exchanged between the involved parties, without any reference to internal behavior. The following activities are defined for describing WS-BPEL abstract processes [1]: receive, reply, invoke, assign, throw, wait, empty, sequence, switch, while, pick, flow, scope, and compensate.

5.2.4 Industrial Products
A number of tools for data and process mediation have been developed in industrial environments. Most of these systems are application-oriented and offer solutions for specific application scenarios. In this section we describe some of the best-known mediation systems: BizTalk Server, Contivo, and CrossWorlds.
BizTalk Server
Microsoft BizTalk Server 2006 [2] is an integration server that offers solutions for developing, deploying, and managing integrated business processes and XML-based Web Services. It also provides integration between messaging and orchestration, enhanced security, and support for industry standards. BizTalk Server 2006 is divided into two conceptual parts: the core engine and the services for information workers that are built on top of that engine. The core engine (figure 5.7) provides a way to specify the business process and a communication mechanism between the applications the business process uses.
Figure 5.7 The core engine of Microsoft BizTalk Server 2006 [2].
In general, the business processes implemented by orchestrations receive and send documents. Part of the information from the incoming documents has to be transferred to the outgoing documents by performing the appropriate transformations. For defining these transformations, the BizTalk Mapper is offered to the user. This tool can be used for creating mappings between two XML schemas, each mapping defining the relationships between elements in those schemas. Each mapping is expressed as a graphical correlation and is implemented as one or more XSLT transformations. The transformations can be simple (copying a name and an address from one document to another) or more complicated, expressed by using a functoid. Functoids are pieces of executable code that can define mappings of various complexities between XML schemas. They can be created by using .NET languages such as C# and Visual Basic .NET, or by expressing them directly in XSLT. Because some transformations are required very often, the BizTalk Mapper offers a set of common functoids, grouped in several categories: mathematical functoids, conversion functoids, logical functoids, cumulative functoids, and database functoids (more details can be found in [2]). These functoids can be combined in sequence, cascading the output of one into the input of another. After defining all the mappings between the two schemas, one more step remains: creating the business process that invokes the mappings and, based on the schemas, performs the transformations.
Contivo
Contivo, a leading provider of automated data integration, automates the design of platform-independent data transformation between applications. The Contivo Vocabulary Management Solution (see figure 5.8) provides tools for application implementation and integration projects. Its main components are the following:
1. Preconfigured data models and formats: a suite of data models and formats covering specific business processes
2. Contivo Repository: central storage and management for the business vocabulary, the data and document models, and the resulting data integration results
3. Contivo Analyst: user tools to customize the vocabulary, data models, and maps to meet the unique needs of the end user
The three main activities performed by the Contivo Vocabulary Management Solution are (1) capturing integration metadata, such as application interface definitions, standards, and data dictionaries; (2) managing dictionaries, interfaces, and mappings in a shared repository; and (3) delivering usable metadata and reports to new and follow-on projects, and maintenance to create transforms.
CrossWorlds
CrossWorlds is an IBM integration tool meant to facilitate B2B collaboration through business process integration. It may be used to implement various e-business models, including
Figure 5.8 Contivo Vocabulary Management Solution (http://www.contivo.com/products/vms.html).
enhanced intranets (improving operational efficiency within a business enterprise), extranets (facilitating electronic trading between a business and its suppliers), and virtual enterprises (allowing enterprises to link to outsourced parts of their organizations) [6]. The system consists of the InterChange Server, which represents the central infrastructure, and various modular components, connected in a hub-and-spoke architecture (figure 5.9). The InterChange Server provides the following services: event management, connector controllers, maintenance of the repository of all IBM CrossWorlds objects, database connectivity, transaction service, and transaction collaboration. All three systems described above offer support for both data and process mediation, but they do not take into consideration the semantic aspects of the data or processes they mediate.

5.3 Semantic Web Services Mediation
Mediation in the context of Semantic Web Services needs to address two aspects: data-level mediation and process-level mediation. In this section we illustrate what we understand by similarity/equivalence of data and processes, and we explain our approaches to solving these heterogeneity problems.
Figure 5.9 CrossWorlds architecture [7].
5.3.1 Data Similarity
Solving data heterogeneity problems has two main dimensions: (1) identifying the similarities between different data representations and aligning these representations syntactically and semantically, and (2) making use of these similarities in a concrete scenario that involves multiple data sources. We discuss the first issue in this subsection from the ontology alignment point of view, and the instance transformation scenario in the next subsection. As has been mentioned, an ontology is a formal, explicit specification of a shared conceptualization [16]. But it is usually the case that more than one party develops ontologies modeling the same domain, and applications using different conceptualizations of the same domain have to interoperate. The similarities between two ontologies are identified at the schema level and are used later as a reference point for handling the data expressed in terms of the two ontologies. The authors of [32] offer a survey of existing approaches to schema matching and introduce the so-called Match operator. Match is applied to two schemas, and the result is a set of mapping
Figure 5.10 Classification of schema matching approaches [32].
elements; each mapping element specifies that certain elements from the first schema correspond to (match) other elements from the second schema. Furthermore, each mapping element may have an associated mapping expression which specifies how the elements are related. The Match operator may be implemented in various ways, applying different techniques depending on the problem that has to be solved (see figure 5.10). The schema-only-based approaches take into consideration only the information available at the schema level, and they can apply linguistic techniques (e.g., to determine the lexical relations between elements’ names) or can analyze the matching of elements that appear together in the same structure (structure-level matching). Sometimes the information at the schema level is incomplete, and then instance-level data can provide important input for determining the semantics of the schema elements. Different techniques may be used together in order to obtain better mappings (composite matchers), either by combining the results obtained from applying different approaches or by combining the different approaches directly (hybrid matchers).
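As a rough illustration of the element-level, linguistic corner of this classification, the following sketch pairs schema elements by name similarity. The schemas and the threshold are invented, and difflib's string ratio stands in for a real linguistic matcher:

```python
from difflib import SequenceMatcher

def match(schema_a, schema_b, threshold=0.7):
    """A toy element-level Match operator: it pairs elements whose names are
    lexically similar. Real matchers combine linguistic, structural, and
    instance-based techniques, and may attach mapping expressions."""
    mappings = []
    for a in schema_a:
        for b in schema_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                mappings.append((a, b, round(score, 2)))
    return mappings

# Hypothetical element names from two address schemas.
source = ["street_name", "number", "zip", "country_name"]
target = ["street_name", "street_number", "zip_number", "country"]
```

On these inputs, identical names score 1.0 and `country_name`/`country` clears the threshold, while weakly related names such as `zip`/`zip_number` are (correctly or not) filtered out, which is exactly why practical matchers do not rely on a single technique.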
5.3.2 Data Mediation Rules
After the similarities between the input ontologies are identified, they have to be expressed as rules in a formal language, in order to be executed and to obtain the mediated data. That is, a rule has to specify all the transformations (structural transformations as well as value transformations) applied to the source data. The expressiveness and the power of these rules depend directly on the chosen mapping language. The authors of [8] identify a set of requirements that a mapping language has to meet. They include (1) specifying instance transformations; (2) using mapping patterns (mapping patterns capture the mapping similarities and provide the means for reusing them); (3) versioning support (in a context of evolving ontologies, the mappings also evolve); (4) treating classes as instances (the domains of the two ontologies may have different granularities); and (5) mapping of different cardinalities (one-to-one mappings are usually not enough). Another requirement we consider highly beneficial is a clear separation between the mappings (the mapping elements, as described in section 5.3.1) and the actual rules. The mappings should be represented in a language-independent way, whereas the rules are committed to a specific mapping language. Having such a separation enables the creation of different rules (expressed in different mapping languages) from the same set of mappings. Furthermore, even if there is only one mapping language, the reuse of the existing mappings in different rules (i.e., one rule is used for transforming only one type/instance of data) becomes trivial. Additionally, if the mediation takes place in an environment with evolving ontologies, this separation avoids the complicated process of rewriting the affected rules: the changes are accommodated by replacing the out-of-date mappings with new ones. The transition from mappings to rules can be done by a specialized software component as a completely automatic process. For a better understanding of this approach, we introduce a small example to illustrate the process from the identified similarities between two ontologies to the mediated data. The example we use (figure 5.11) is a classic one: solving the mismatches between two address formats.
Each of the two formats is represented by a concept in an ontology and described by a certain number of attributes. Each of the attributes has a range, which can also be a concept (or a data type). In order to keep the problem simple, some of the concepts playing the role of ranges in this example are not further described, and as a consequence they will be treated as common data types. In figure 5.11 a graphical representation of the identified similarities between two fragments of ontologies is provided. Each arrow shows exactly what elements were identified as similar.

Figure 5.11 Graphical representation of the similarities between two address formats.
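Following the separation argued for above, the similarities of figure 5.11 could first be captured as plain, language-independent pairs before any rules are generated. The encoding below is hypothetical and covers only a few of the arrows:

```python
# Hypothetical language-independent encoding of some of the figure 5.11
# similarities: plain pairs, not committed to any mapping language.
concept_mappings = [
    ("address", "personal_address"),
    ("address", "local_address"),
]
attribute_mappings = [
    # (source concept, attribute) -> (target concept, attribute)
    (("address", "street_name"), ("local_address", "street_name")),
    (("address", "number"), ("local_address", "number")),
]

def targets_of(concept):
    # A source concept may be mapped to one or more target concepts.
    return [t for s, t in concept_mappings if s == concept]
```

Because the pairs carry no execution semantics of their own, the same set can later be compiled into rules in any concrete mapping language.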
The aim is to transform these similarities into rules applicable to all incoming instances of the address concept (from the first ontology) in order to produce instances of the personal_address concept (from the second ontology). The first step we consider is to express these similarities as mappings, keeping in mind the requirements mentioned before: the mappings should be independent of the rules and not committed to any given mapping language. This can be done easily by dividing the identified similarities into concept similarities, attribute similarities, and concept-attribute similarities. They can then be expressed as concept mappings, attribute mappings, concept-attribute mappings, and attribute-concept mappings. Each mapping can be seen as a pair of concepts and attributes, and almost none of them has any semantics if taken separately (the exceptions are the mappings between data types, but their usage in this case is very limited). Only the whole set of mappings, sometimes together with the schema information, is able to fully describe the semantics of a particular mediation scenario. For our example, the mappings are presented in figures 5.12 through 5.15. For the concepts, the representation of mappings is simple: a source concept may be mapped to one or more concepts from the target ontology. For example, the concept address is part of two mappings, one of them having personal_address as the target concept, and the other one
Figure 5.12 Concept mappings.
Figure 5.13 Attribute mappings.
Figure 5.14 Attribute-to-concept mappings.
Figure 5.15 Concept-to-attribute mappings.
having local_address as the target. In the case of attribute mappings, each of the participants is represented by its name, range, and owner (the concept that is described by that particular attribute). Each attribute mapping may have different operations associated with it (not shown in this example) that can perform transformations of attribute values from the source ontology in order to obtain the desired values for the attributes from the target ontology (e.g., concatenation, multiplication by a constant, etc.). If there are no operations associated with the mappings, or the operations are reversible, then these mappings are bidirectional; otherwise, they are unidirectional. This step can be skipped by automatically generating the rules from the identified similarities, but, as mentioned in the previous paragraphs, this is not recommended. The next step is to generate the actual rules (i.e., the executable representation of these mappings). Unlike the mappings, the rules are committed to a certain language, and an appropriate engine (e.g., a reasoner) has to be used for their execution. Out of a given set of mappings, several rules can be generated, generally one for each existing mapping. Figure 5.16 shows some of the rules corresponding to the example in figure 5.11. The rules are represented using the Web Service Modeling Language (WSML) [9], a language providing formal syntax and semantics for WSMO. For example, one rule is generated to state that an instance of personal_address is created on the basis of an instance of address (lines 6 to 9), and another to state that the value of the attribute street_name from address is given to the attribute street_name in local_address (lines 20 to 24). As a technical detail, in this example the med function symbol is used to create unique identifiers for the new instances, based on the identifiers of the source instances.
It takes two parameters: (1) the instance from the source that is the basis on which that target instance is built and (2) the target concept that the source instance is mediated to. The rule caMappingRule28 (lines 32 to 38) exemplifies the need for the second parameter. As shown in the rules from lines 32 to 45, the mapping rules become more complex for the attribute-concept and concept-attribute mappings (emphasizing the crucial role of graphical tools in allowing the creation of complex mapping rules by nontechnical users).
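The effect of such rules can be imitated in ordinary Python. The sketch below is not WSML, and the instance encoding is invented, but it mirrors the med identifiers and the two rules described above:

```python
def med(source_id, target_concept):
    # Counterpart of the med function symbol: builds a unique identifier for
    # the new instance from the source instance and the target concept.
    return f"med({source_id},{target_concept})"

def mediate(instance):
    """Hypothetical Python rendering of two of the figure 5.16 rules."""
    src = instance["id"]
    # Rule 1: an instance of personal_address is created from an address;
    # the second med parameter keeps the two derived identifiers distinct.
    personal = {
        "id": med(src, "personal_address"),
        "concept": "personal_address",
        "has_local": med(src, "local_address"),
    }
    # Rule 2: street_name from address becomes street_name in local_address.
    local = {
        "id": med(src, "local_address"),
        "concept": "local_address",
        "street_name": instance["street_name"],
    }
    return personal, local

personal, local = mediate({"id": "addr1", "street_name": "Shop Street"})
```

Note how the same source identifier yields two different target identifiers, one per target concept, which is precisely why med needs its second parameter.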
Figure 5.16 The rule for the address—personal_address mapping.
In order to complete the mediation process, these rules have to be executed in the appropriate environment—in our example, by using a WSML reasoner. The role of the reasoner is to infer the target instances based on the source and target ontology schemas, the source instances, and the mapping rules. In fact, the reasoner acts on a pseudo-merged ontology containing the relevant elements from the source and target ontologies (all the elements coming from different ontologies are considered to have different namespaces, and thus they are distinct), the source instance, and the mapping rule. By using specific queries (e.g., “give me all the instances of the concept personal_address”), the reasoner returns exactly the mediated data as results. The example presented in this section is a small and simple one (actually, the reader can see that the mapping rules tend to get complex rather rapidly, even for small examples). However, the illustrated steps are common to most cases, regardless of the size of the problem, the exact representation of the mappings, or the language and the reasoner used to evaluate the rules.

5.3.3 Process Representation
The approach to process mediation presented in this chapter is based on the assumption that the processes are represented as choreographies, part of the interface definition of both participants in a conversation. In WSMO, every Web Service must declare a choreography interface, which specifies the way of invoking it. The reason for specifying the choreography interface is to clarify what the Web Service expects in terms of communication tasks [36]. Similarly, every client of a Web Service (the requester of the service) defines the way it wants to invoke the Web Service by specifying its own choreography. In order for communication to take place, the two choreographies must define equivalent processes, or an external mediation system has to be involved in the communication. The role of the mediator system is to make the conversation possible by using techniques such as message blocking, message splitting and aggregation, acknowledgment generation, and so on. The WSMO choreography representation is based on the Abstract State Machine (ASM) [17] methodology. In [34] the authors provide the rationale for using ASMs as the underlying model for choreography:
1. Minimality ASMs provide a minimal set of modeling primitives (i.e., they enforce minimal ontological commitments), and therefore do not introduce any ad hoc elements that would be questionable to include in a standard proposal.
2. Maximality ASMs are expressive enough to model any aspect of computation.
3. Formality ASMs provide a rigorous mathematical framework to express dynamics.
WSMO choreography consists of states (descriptions of the behavior of the service in its communication) and guarded transitions (transition rules that transform the states). A state represents a fragment of an ontology containing the information that the owner of the choreography considers necessary to be public in order for the communication to take place. Each concept
from a state must have a certain attribute that shows who has the right to create instances of this concept (only the owner, or also the environment), and specifies whether the environment will, at some point in time, receive instances of that concept. In [34], this attribute (named mode) can take the following values:
• Static, meaning that the extension of the concept, relation, or function cannot be changed. If not explicitly defined, the attribute mode takes this value by default.
• Controlled, meaning that the extension of the concept, relation, or function can be changed only by its owner.
• In, meaning that the extension of the concept, relation, or function can be changed only by the environment. A grounding mechanism for this item must be provided to implement write access for the environment.
• Shared, meaning that the extension of the concept, relation, or function can be changed by its owner and by the environment. A grounding mechanism for this item must be provided to implement read/write access for the environment.
• Out, meaning that the extension of the concept, relation, or function can be changed only by its owner. A grounding mechanism for this item must be provided to implement read access for the environment.
The mode attribute allows us to reason about which messages are expected at some point in time, and which messages can be discarded. Whereas the states are used only for representing which instances need to be created, by whom, and who has access to them, the transition rules express the conditions that need to be fulfilled in order to pass from one state to another. Consider the following example: a Web Service needs to receive the details of a credit card from a certain client in order to issue a train ticket. To specify these details, its choreography will contain two concepts: creditCard and ticket. The creditCard concept has the mode attribute set to in, and ticket has the mode attribute set to out. The transition rule that triggers the creation of an instance of ticket is

if creditCardInstance memberOf creditCard then
  add (_# memberOf ticket)
endIf

Any additional computations carried out internally by the service (for example, checking whether the credit card is valid) do not need to be visible to the client, and they are not mentioned in the service's interface.
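The behavior of such a guarded transition can be sketched as one Abstract State Machine step. The set-of-facts encoding and the fixed identifier ticket_1 (standing in for the fresh instance _#) are our own simplifications:

```python
def step(state, rules):
    """One guarded-transition step over a state, here modeled as a set of
    (concept, instance) facts; the encoding is illustrative only."""
    additions = set()
    for guard, action in rules:
        if guard(state):
            additions |= action(state)
    return state | additions

# The credit-card rule above: if some instance is a member of creditCard,
# add a new instance of ticket ("ticket_1" stands in for the fresh "_#").
rules = [(
    lambda s: any(concept == "creditCard" for concept, _ in s),
    lambda s: {("ticket", "ticket_1")},
)]

state = {("creditCard", "card_42")}   # written by the client (mode: in)
state = step(state, rules)
```

After the step, the state contains a ticket instance alongside the credit-card instance, which is all the client needs to observe; any internal validation would happen outside this public interface.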
5.3.4 Process Equivalence
By “process equivalence” we understand, in this context, the full matching of the communication patterns of the source and the target of the communication. Since a business communication usually consists of more than one exchanged message, finding the equivalences between the message exchange patterns of the two (or more) parties is not a trivial task. The authors of [13] identify three possible cases that may appear during message exchange.
Precise Match
The two partners have exactly the same pattern for realizing the business process, which means that each of them sends the messages in exactly the order in which the other requests them. This ideal case does not require a mediator, since the communication takes place without any problem.
Unsolvable Message Mismatch
In this case, one of the partners expects a message that the other does not intend to send. Since the mediator cannot provide this message, the communication reaches a dead end (one of the partners waits indefinitely).
Resolvable Message Mismatch
This case appears, for example, when a buyer sends all the line items of a PO in a single message, but the seller expects them separately. In this case the mediator can break up the initial message and send the line items one by one to the seller.
Inversing the order of messages (figure 5.17b)—If one of the partners sends the messages in a different order than the other partner expects, the messages that are not yet expected will be stored and sent when needed. c. Splitting a message (figure 5.17c)—If one of the partners sends in a single message multiple information that the other partner expects to receive in more than one message, the information can be split and sent in a sequence of separate messages. d. Combining messages (figure 5.17d)—If one of the partners expects a single message, but the message that is sent contains information sent in multiple messages, the information can be combined into a single message.
Figure 5.17 Mediation patterns [5].
e. Sending a dummy acknowledgment (figure 5.17e)—If one of the partners expects an acknowledgment for a certain message and the other partner does not intend to send one even though it receives the message, an acknowledgment can be automatically generated and sent to the partner that requires it.
This list illustrates only a minimal subset of the mismatches that can be automatically solved. Process mediation can also address combinations of these patterns; in addition, as the work evolves, more complicated scenarios may appear. Some of them will be solved automatically, and some will probably require human user input at some point.
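A minimal sketch of patterns (a) through (e), with an invented message encoding:

```python
class ProcessMediator:
    """Toy sketch of mediation patterns (a)-(e); not a real implementation."""

    def __init__(self):
        self.store = []                          # retained messages (a, b)

    def receive(self, message, expected):
        # (a)/(b): stop and store any message the partner does not yet expect.
        if message["type"] != expected:
            self.store.append(message)
            return None
        return message

    def release(self, expected):
        # (b): forward a stored message once it becomes expected.
        for m in list(self.store):
            if m["type"] == expected:
                self.store.remove(m)
                return m
        return None

    @staticmethod
    def split(message):
        # (c): one message carrying several items -> a sequence of messages.
        return [{"type": message["type"], "item": i} for i in message["items"]]

    @staticmethod
    def combine(messages):
        # (d): several messages -> one message carrying all their items.
        return {"type": messages[0]["type"], "items": [m["item"] for m in messages]}

    @staticmethod
    def ack(message):
        # (e): dummy acknowledgment generated on the partner's behalf.
        return {"type": "ack", "of": message["type"]}
```

Combinations of the patterns then reduce to composing these operations, e.g., storing an early message, splitting it when it becomes expected, and acknowledging it on the sender's behalf.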
5.3.5 Process Mediator Algorithm
For automatically solving the process heterogeneity patterns described in section 5.3.4, the process mediator (PM) needs to perform the following steps.
1. The PM loads the two choreography instances.
2. The PM mediates the incoming instances in terms of the targeted partner ontology (by using an external data mediator), and checks whether the targeted partner is expecting them at any phase of the communication process. This is done by checking the value of the mode attribute for the mediated instances' owners. If the attribute mode for a certain concept is set to in or shared, then this concept's instances may be needed at some point in time. The instances that are expected by the targeted partner are stored in an internal repository.
3. For all the instances from the repository, the PM has to check whether they are expected at this phase of the communication, which is done by evaluating the transition rules (an external reasoner may be used). The evaluation of a rule returns the first condition that cannot be fulfilled (i.e., the next expected instance for that rule). This means that an instance is expected if it can trigger an action (not necessarily a change of state, but possibly the elimination of one condition for changing a state). The possibility that various instances from the repository can be combined in order to obtain a single instance expected by the targeted business partner is also considered.
4. Each time the PM determines that an instance is expected, it sends it, deletes it from the repository, updates the targeted partner's choreography instance, and restarts the evaluation process (step 3). When a transition rule can be executed, it is marked as such and not reevaluated in further iterations. The PM only checks whether a transition rule can be executed, and does not execute it, since it cannot update either of the two choreography instances without receiving input from one of the communication partners. By evaluating a rule, the PM determines that one of the business partners can execute it without expecting any other inputs. This process stops when, after performing these checks for all the instances from the repository, no new message is generated.
5. For each instance forwarded to the targeted partner, the PM has to check whether the sender is expecting an acknowledgment. If the sender expects an acknowledgment, but the targeted partner does not intend to send it, the PM generates a dummy acknowledgment and sends it.
6. The PM checks all the sender's rules and marks the ones that can be executed.
7. The PM checks the requester's rules to see if all of them are marked; when all are marked, the communication is over and the PM deletes all the instances created during this conversation (together with the choreography instances) from its internal repository. This simple rule may not hold for more complicated choreographies (for example, ones with loops or conditional branches). In such cases the PM cannot determine whether the conversation is over, but this does not affect the conversation itself; the only difference is that the PM cannot immediately delete data from the repository, though it can do so based on other constraints (for example, time constraints). The only things that should be kept in internal storage are the actions the PM needs to take when receiving a message. These could be useful if the same two partners are later involved in a second conversation.
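Steps 2 through 4 can be sketched as a simple loop. The instance encoding, the mode table, and the expects predicate (standing in for transition-rule evaluation by a reasoner) are all invented for the example:

```python
def run_mediation(incoming, expects, mode):
    """Sketch of steps 2-4: store the instances the target may ever need
    (mode 'in' or 'shared'), then repeatedly forward whichever stored
    instance is currently expected, restarting after every send."""
    repository = [i for i in incoming if mode.get(i["concept"]) in ("in", "shared")]
    sent = []
    progress = True
    while progress:
        progress = False
        for inst in list(repository):
            if expects(inst, sent):          # step 3: evaluate the transition rules
                repository.remove(inst)      # step 4: send, delete, and restart
                sent.append(inst)
                progress = True
                break
    return sent, repository

# Hypothetical conversation: the target expects creditCard, then address.
order = ["creditCard", "address"]
mode = {"creditCard": "in", "address": "in", "log": "controlled"}
incoming = [{"concept": "address"}, {"concept": "log"}, {"concept": "creditCard"}]
expects = lambda inst, sent: len(sent) < len(order) and inst["concept"] == order[len(sent)]
sent, leftover = run_mediation(incoming, expects, mode)
```

Even though the address instance arrives first, the loop holds it back and forwards the creditCard instance first, which is exactly the message-reordering behavior of pattern (b); the log instance is filtered out because its mode makes it uninteresting to the target.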
5.4 Architecture and Implementation of Semantic Web Service Mediation
As described at the beginning of this chapter, mediation is needed in the context of Semantic Web Services (SWS) usage. That is, the mediation prototypes developed on the basis of the
previously described approaches must act in a framework that enables the usage of SWS from the discovery phase until the actual invocation phase. This section begins by defining the Web Service Modeling Execution Environment and continues with details on how the data and process mediators are designed, developed, and integrated into this framework.

5.4.1 The Web Service Modeling Execution Environment
The Web Service Modeling Execution Environment (WSMX) is a software framework that allows the runtime binding of a service requester and a service provider [18] [26]. It is a reference implementation of WSMO [35], and its aim is to enable the dynamic discovery, selection, mediation, and invocation of Web Services based on the formal descriptions of the requester and of the Web Services themselves. The framework architecture is presented in figure 5.18, together with its main elements. These elements are structured in three horizontal layers (the problem-solving layer, the application services layer, and the base services layer), augmented by vertical layers. The problem-solving layer consists of a set of services that use the underlying layers and enable programmers and end users to interact with WSMX (programmatically or by means of graphical interfaces). The application services are those services directly involved in solving or supporting the requests coming from the problem-solving layer. The base services layer provides low-level support for the two upper layers. In this way, the application services and problem-solving services are able to use the storage or the reasoning mechanisms provided by the services in this layer. In WSMX, vertical services are those services not directly involved in solving requests coming from the problem-solving layer (as the application services are); rather, their role is either to manage
Figure 5.18 WSMX architecture [26]. (The figure shows the Problem Solving Layer with end-user and developer tools; the Application Services Layer with adaptation, composition, data mediation, process mediation, discovery, communication, fault handling, choreography, and monitoring; the Base Services Layer with storage and reasoning; and the vertical services Execution Management and Security.)
Data and Process Mediation in Semantic Web Services
141
the execution of the application services or to augment them with extra functionality. The two vertical services considered by WSMX are Execution Management and Security.

The WSMX architecture adopts two main principles: decoupling of components and standardization of their external behavior. Component decoupling is a fundamental principle of Service-Oriented Architecture that WSMX adopts. Making components self-contained supports a clear separation of concerns: each component has a well-defined functionality that can be used by other components. By standardizing the external behavior of the components, the WSMX architecture separates the interfaces from the actual implementations, allowing components developed by third parties to be easily plugged in and out.

The data mediator and the process mediator play an important role during the entire execution. The data mediator may be needed during the discovery of a Web Service, during the selection of a Web Service, and during the actual invocation of the selected Web Service. The process mediator is not needed during discovery, but it is useful during selection (for checking whether the back-end application can actually communicate with the selected Web Service) and during invocation (for facilitating the communication between the two involved parties).

5.4.2 Data Mediator
Before describing the principles and the technical details of the data mediator used by WSMX, it is interesting to take a look at the desired functionality of this mediator and at the main usage scenario the authors had in mind (figure 5.19). The scenario is a very simple one: two parties decide to become partners in a business process. Each wants to be able to send messages to or receive messages from the other (e.g., purchase orders and purchase order acknowledgments) by means of their information systems. Most probably the two new partners are using different ontologies and, accordingly, they will express the exchanged messages in terms of their own ontologies. It is the role of the mediation component to transform the messages sent by one of the parties, in the terms of one ontology, into
Figure 5.19 Mediation Scenario. (The figure shows the information systems of two business partners communicating through WSMX: the Data Mediation component has Ontology 1 as source and Ontology 2 as target.)
Figure 5.20 Data mediator component. (The figure shows a design-time component that produces mappings between the source and target ontologies and stores them, and a runtime component in which the Mapping Rules Creator turns the stored mappings into mapping rules that the execution environment applies to transform source instances into target instances.)
terms of the ontology used by the target partner. From the technical point of view, the mediation component has to take as input the instances of one ontology (the source ontology) and to transform them into instances of the other ontology (the target ontology).

The WSMX data mediator is structured in two main components, from both the conceptual and the architectural points of view. The first component is the Ontology Mapping Tool, which supports the identification of similarities between the given ontologies. It is a graphical user interface used at design time and is not itself part of the WSMX architecture. The second component is a runtime module that appears in the WSMX architecture under the generic name of data mediator. This component, divided into two submodules (the Mapping Rules Creator and the Rules Execution Environment), uses the outputs produced by the design-time component to achieve its functionality. An overview of the whole WSMX mediator component is shown in figure 5.20.

Design-Time Module
The Ontology Mapping Tool has the role of identifying the similarities between the ontologies used by the two communicating parties. Since this task requires human intervention, the Ontology Mapping Tool provides a graphical user interface that eases the user's task by offering guidance and support during the entire process. The tool offers suggestions for possible mappings, based on the structural and lexical characteristics of the ontology
elements, and, using a top-down approach, it guides the domain expert toward all the mappings required by the expert's goal. During the mapping process, a bottom-up approach can be chosen in addition to the top-down approach mentioned earlier. In the bottom-up approach, the user starts by mapping the primitive concepts (concepts with no internal structure defined) or data types, and then continues with the mappings of the more complex concepts whose internal structures are already resolved. Based on the already created mappings (and, when possible, on the lexical characteristics), the system suggests the next steps to be taken to ultimately obtain the desired mappings. The top-down strategy is suitable when the user knows which concepts have to be mapped and wants to map only the necessary elements and nothing more. In this case, the system maintains so-called contexts (one for the source and one for the target) that contain only the elements relevant for that particular mapping. These contexts are updated accordingly, so that the user sees only the information required for a particular mapping; by following them, all the elements relevant to the given mapping are considered.

The goal of better isolating the domain expert from the burden of logical languages is also served by several perspectives on the ontologies, which help in identifying and capturing different types of mismatches by purely graphical means. (For more information regarding the mechanisms and techniques used here, consult [25].)

The outcome of this module is the set of identified similarities, expressed as mappings (represented in the format described in [12]). These mappings are language-independent and indicate the elements of the two ontologies that are related by some degree of similarity.
Each mapping can ultimately be seen as a pair of elements (one from the source ontology and the other from the target ontology), and a mapping by itself does not have any semantic meaning out of that context (i.e., without the other mappings and the information contained in the ontologies' schemas). This way of representing the similarities between two ontologies has advantages: it is not tied to any language or representation format, and it assures a high flexibility for the next steps of the mediation process. The mappings are stored in an external storage, to be used later in the mediation process or for further refinement.

Runtime Module
This module has the role of using the identified similarities to actually transform the input instances into instances expressed in terms of the target ontology. Owing to the chosen representation of the mappings, the runtime module is divided into two submodules: the Mapping Rules Creator and the Rules Execution Environment. The Mapping Rules Creator receives the data to be mediated, and its task is to identify all the mappings related to the input data. It uses the schema information to build executable rules that are able to transform the source instances into target instances. These rules are expressed in a concrete ontology representation language (i.e., WSML) and contain all the information the reasoner needs to build the target instances. The language chosen directly influences the second submodule of the runtime component: the Rules Execution Environment.
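This pipeline, in which language-independent mapping pairs are turned into an executable transformation, can be sketched as follows (illustrative Python; the ontology names, message shapes, and helper function are hypothetical, and they stand in for the pair-based mapping format only conceptually):

```python
# Illustrative sketch of the runtime pipeline (hypothetical names and data
# shapes; WSMX generates WSML rules evaluated by a reasoner, not Python code).

# Design-time output: language-independent (source element, target element) pairs.
mappings = [
    ("po:Buyer", "ord:Customer"),                 # concept-to-concept
    ("po:Buyer.name", "ord:Customer.fullName"),   # attribute-to-attribute
    ("po:Buyer.zip", "ord:Customer.postalCode"),
]

def create_rules(instance, mappings):
    """Mapping Rules Creator: select the mappings relevant to the input data
    and turn them into an executable transformation rule."""
    concept = dict(m for m in mappings if "." not in m[0])[instance["concept"]]
    attrs = {s.split(".", 1)[1]: t.split(".", 1)[1]
             for s, t in mappings
             if s.startswith(instance["concept"] + ".")}

    def rule(src):
        # Build the target instance from the source instance's attributes.
        return {"concept": concept,
                "attributes": {attrs[k]: v for k, v in src["attributes"].items()
                               if k in attrs}}
    return rule

# Applying the generated rule yields the target-ontology instance.
source = {"concept": "po:Buyer",
          "attributes": {"name": "Joe's Telco", "zip": "07960"}}
target = create_rules(source, mappings)(source)
# target == {"concept": "ord:Customer",
#            "attributes": {"fullName": "Joe's Telco", "postalCode": "07960"}}
```

In WSMX the created rules are WSML statements evaluated by a reasoner inside the Rules Execution Environment, rather than Python functions.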
The Rules Execution Environment takes as inputs the mapping rules generated by the Mapping Rules Creator and the data (instances) to be mediated, and returns the data expressed in terms of the target ontology. This submodule can also check constraints and validate the mediated data. We chose to give the runtime component the full functionality required to create the mediated data itself, in order to conform to the first requirement of a mediation component (see section 5.1.4): transparency. If the mediator merely sent the mapping rules and assumed that the target party had the means to make use of them, this requirement would not be fulfilled, even if the receiver indeed had that capability: the target party would become aware of the mediation process being carried out, and it would have to consider an entire set of additional strategies to cope with potential failures of the rules execution process. If the runtime component provides all the mediated data by itself, the target party's only responsibility is to handle the received data, applying the same methods as if the data came from a homogeneous environment. Currently, the WSMX Mapping Rules Creator implementation generates WSML [9] rules, and the Rules Execution Environment relies on a WSML reasoner integrated through a specialized Java API.8

Process Mediator
The process mediator (PM) component has the role of analyzing the choreographies of the two participants in a conversation and determining, based on these choreographies, whether an incoming message is expected by the targeted partner. When WSMX receives a message, either from the requester of the service or from a Web Service, it has to check whether it is the first message in a conversation. If it is, WSMX creates copies (instances) of the choreographies of both the sender and the targeted business partner, and stores them in a repository.
If it is not the first message of a conversation, WSMX has to determine the two choreography instances corresponding to the conversation (i.e., their IDs). These computations are all performed on the message by two WSMX components, communication and choreography [26]. After the IDs of the two choreography instances are obtained, the PM receives them, together with the message, which consists of instances of concepts from the sender's ontology. Based on the IDs, the PM loads the two choreography instances from the WSMX repository by invoking the WSMX resource manager. All the transformations performed by the PM are done on these instances. If different ontologies have been used for modeling the two choreographies, the PM has to invoke an external data mediator to transform the message into the terms of the target ontology (figure 5.21). After various internal computations (described in section 5.3.5), the PM determines whether, based on the incoming message, it can generate any messages expected by either of the partners. The generation of a message determines a transformation in the choreography instance of the party that receives the message. After sending the message, the process mediator reevaluates all the rules (by using the WSML reasoner) until no further updates are possible.
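This control flow can be sketched as follows (illustrative Python; the component interfaces are hypothetical simplifications of the WSMX resource manager, data mediator, and reasoner, not their actual APIs):

```python
# Hypothetical sketch of the process mediator's handling of one incoming
# message. The real WSMX components are modeled as plain callables.

def handle_message(message, ids, load_instances, mediate, evaluate_rules, send):
    """Mediate one message between two choreography instances.

    ids            -- IDs of the sender's and receiver's choreography instances
    load_instances -- loads choreography instances by ID (resource manager)
    mediate        -- transforms a message between ontologies (data mediator)
    evaluate_rules -- returns messages now expected by a partner, or [] (reasoner)
    send           -- delivers a generated message to its addressee
    """
    sender_chor, target_chor = load_instances(ids)

    # If the two choreographies are modeled in different ontologies,
    # the message must first be expressed in the target's terms.
    if sender_chor["ontology"] != target_chor["ontology"]:
        message = mediate(message, target_chor["ontology"])

    # Generate expected messages and reevaluate the transition rules
    # until no further updates are possible (a fixpoint).
    delivered = []
    pending = evaluate_rules(message, sender_chor, target_chor)
    while pending:
        for msg in pending:
            send(msg)                # sending updates the receiver's instance
            delivered.append(msg)
        pending = evaluate_rules(None, sender_chor, target_chor)
    return delivered
```

The fixpoint loop mirrors the reevaluation described above: each delivered message may enable further transition rules, so evaluation repeats until nothing new can be generated.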
Figure 5.21 Process mediation interaction with other components. (The figure shows the choreography engine passing the incoming message and the choreography instance IDs to the process mediator, which loads the choreography instances from the resource manager, obtains a mediated message from the data mediator, and returns the outgoing message.)
Figure 5.22 presents the steps performed during process mediation, as well as the components involved. The ovals represent actions, and the rectangles represent components. The components at the bottom of the figure are external components, not part of the process mediator. The links with the choreography engine are not represented in figure 5.22; the choreography engine triggers the entire process by invoking the PM, and the result of process mediation is returned to it. The process mediator consists of three subcomponents: the choreography parser, the WSML reasoner, and an internal repository.

Choreography Parser
The choreography parser has the role of determining whether any instance obtained after the data mediation process is expected by the targeted partner. That is, the choreography parser has to perform the following operations:

1. Determine the owner (concept) of a certain instance.
2. Determine the value of the mode attribute for the owner; if the value is set to in or shared, the instance will be stored in the internal repository for further usage.

The choreography parser does not have to return the value of the mode attribute, but only a Boolean value: true if the data are required by the targeted partner at some point in time (the mode is set to in or shared), or false otherwise.

Internal Repository
The internal repository is used for storing information that will be sent to one of the partners at some point in time. It offers the methods store, delete, and update.
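The choreography parser's two-step check can be sketched as follows (illustrative Python; the state-signature and repository structures are hypothetical stand-ins for WSMO choreography constructs):

```python
# Hypothetical sketch of the choreography parser's mode-attribute check.
# A choreography's state signature maps each concept to its mode
# ("in", "out", "shared", ...); instances are tagged with their owner concept.

def is_expected(instance, state_signature, repository):
    """Return True (and store the instance) iff the targeted partner
    expects data of this concept at some point in time."""
    owner = instance["concept"]            # step 1: determine the owner concept
    mode = state_signature.get(owner)      # step 2: look up its mode attribute
    if mode in ("in", "shared"):
        repository.append(instance)        # keep it for later delivery
        return True
    return False

signature = {"ord:Order": "in", "ord:Ack": "out", "ord:Status": "shared"}
repo = []
assert is_expected({"concept": "ord:Order"}, signature, repo) is True
assert is_expected({"concept": "ord:Ack"}, signature, repo) is False
assert repo == [{"concept": "ord:Order"}]
```

As described above, only the Boolean result is returned to the caller; the mode value itself stays internal to the check.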
Figure 5.22 Interaction diagram for the process mediator's components. (The figure shows the choreography parser loading the choreographies from storage, mapping the message by calling the external data mediation component, checking the mode attribute, and storing instances in the internal repository, while the WSML reasoner evaluates the transition rules and updates the repository.)
WSML Reasoner
The WSML reasoner has to extract the instances from the repository one by one and to check whether, by sending an instance, at least one condition of one transition rule can be fulfilled; this is done by evaluating the transition rule. The possibility that more than one instance from the repository must be combined into a single message also needs to be considered.

5.5 Summary
This chapter has provided an overview of data and process mediation in the context of Semantic Web Services, in terms of both the existing approaches and future trends. Data mediation is a thoroughly explored area, offering well-studied and elaborated strategies for coping with data heterogeneity. But in the context of Semantic Web Services, the heterogeneity problems extend even further, to business logic, message exchange protocols, and Web Service invocation; data mediation becomes only a subproblem of the reconciliation, interoperation, and integration aspects of the Semantic Web. Solutions that address process mediation are also required in order to obtain powerful mediators capable of supporting the requirements of Semantic Web Services. These mediators should accomplish their functionality automatically at runtime, even if they rely on manual or semiautomatic efforts in the design-time stage. As a consequence, the major challenges remain to minimize the human effort by developing efficient and effective strategies for the design-time phase, and to build flexible, highly decoupled runtime components that automatically accommodate the various interoperation and integration demands on the Web.
Notes

1. The Ontology Alignment Evaluation Initiative (http://oaei.ontologymatching.org) is one of the initiatives that organize a contest to evaluate the performance of automatic matching algorithms.
2. http://protege.stanford.edu.
3. OWL-S is an OWL-based Web Service ontology; more information about it is available at http://www.daml.org/services/owl-s/1.1.
4. Information about WS-BPEL can be found at http://www.oasis-open.org/apps/group_public/download.php/18714/wsbpel-specification-draft-May17.htm.
5. http://www.contivo.com.
6. CrossWorlds has been renamed WebSphere InterChange; more information is available at http://www-360.ibm.com/software/integration/wbiserver/ics/features/.
7. Usually the semantics of being "similar" varies from one mediation scenario to another. In our example, for the data (i.e., instance) transformation scenario, the term "similar" has the following meaning: each time data characterized by (or instances of) a specific entity in the source ontology are encountered, data characterized by (or instances of) the similar entity in the target ontology must be created.
8. See http://tools.deri.org/wsml2reasoner for more details about the available WSML reasoners.
References

[1] A. Alves, A. Arkin, S. Askary, B. Bloch, F. Curbera, Y. Goland, N. Kartha, C. K. Liu, D. König, V. Mehta, S. Thatte, D. van der Rijn, P. Yendluri, and A. Yiu, eds. Web Services Business Process Execution Language Version 2.0. OASIS committee draft, May 2006.
[2] Understanding BizTalk Server 2006. Microsoft, February 14, 2006. http://www.microsoft.com/technet/prodtechnol/biztalk/2006/understanding.mspx.
[3] C. Bussler. B2B Integration. Springer, 2003.
[4] V. K. Chaudhri, A. Farquhar, R. Fikes, P. D. Karp, and J. P. Rice. OKBC: A programmatic foundation for knowledge base interoperability. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), pp. 600–607. MIT Press, 1998.
[5] E. Cimpian and A. Mocan. WSMX process mediation based on choreographies. In First International Workshop on Web Service Choreography and Orchestration for Business Process Management. LNCS 3812. Springer, 2005.
[6] Technical Introduction to IBM CrossWorlds. IBM, 2002.
[7] Connectors Development Guide for Java. IBM, 2002.
[8] J. de Bruijn and A. Polleres. Towards an Ontology Mapping Language for the Semantic Web. Technical report DERI-2004-06-30. Digital Enterprise Research Institute (DERI), 2004.
[9] J. de Bruijn, H. Lausen, R. Krummenacher, A. Polleres, L. Predoiu, M. Kifer, and D. Fensel. The Web Service Modeling Language WSML. WSML working draft, October 2005. http://www.wsmo.org/TR/d16/d16.1/v0.2.
[10] H.-H. Do and E. Rahm. COMA: A system for flexible combination of schema matching approaches. In Proceedings of the 28th International Conference on Very Large Data Bases. Morgan Kaufmann, 2002.
[11] A. Doan, J. Madhavan, P. Domingos, and A. Halevy. Learning to map between ontologies on the Semantic Web. In Proceedings of the 11th International Conference on WWW 2002. ACM, 2002.
[12] J. Euzenat, F. Scharffe, and L. Serafini. Specification of the Delivery Alignment Format. Knowledge Web deliverable D2.2.6, 2006.
[13] D. Fensel and C. Bussler. The Web Service modeling framework WSMF. Electronic Commerce Research and Applications, 1(2) (2002).
[14] B. Ganter and R. Wille. Formal Concept Analysis: Mathematical Foundations, Cornelia Franzke, trans. Springer, 1999.
[15] M. Genesereth, A. Keller, and O. Duschka. Infomaster: An information integration system. In Proceedings of the ACM-SIGMOD International Conference on Management of Data. Association for Computing Machinery, 1997.
[16] T. R. Gruber. A translation approach to portable ontology specifications. Knowledge Acquisition, 5:199–220 (1993).
[17] Y. Gurevich. Evolving algebras 1993: Lipari guide. In Specification and Validation Methods, E. Börger, ed., pp. 9–36. Oxford University Press, 1994.
[18] A. Haller, E. Cimpian, A. Mocan, E. Oren, and C. Bussler. WSMX: A semantic Service-Oriented Architecture. In International Conference on Web Services, pp. 321–328. IEEE Computer Society, 2005.
[19] M. S. Lacher and G. Groh. Facilitating the exchange of explicit knowledge through ontology mappings. In Proceedings of the 14th FLAIRS Conference. AAAI Press, 2001.
[20] J. Madhavan, P. A. Bernstein, P. Domingos, and A. Y. Halevy. Representing and reasoning about mappings between domain models. In Eighteenth National Conference on Artificial Intelligence, pp. 80–86. AAAI Press, 2002.
[21] A. Maedche, B. Motik, N. Silva, and R. Volz. MAFRA: A mapping framework for distributed ontologies. In Proceedings of the 13th International Conference on Knowledge Engineering and Knowledge Management, A. Gómez-Pérez and V. R. Benjamins, eds. LNCS 2473. Springer, 2002.
[22] D. Martin, M. Burstein, J. Hobbs, O. Lassila, D. McDermott, S. McIlraith, S. Narayanan, M. Paolucci, B. Parsia, T. Payne, E. Sirin, N. Srinivasan, and K. Sycara. OWL-S: Semantic Markup for Web Services. November 22, 2004.
[23] E. Mena, V. Kashyap, A. Sheth, and A. Illarramendi. Observer: An approach for query processing in global information systems based on interoperability between pre-existing ontologies. In Proceedings of the First IFCIS International Conference on Cooperative Information Systems. IEEE Computer Society Press, 1996.
[24] P. Mitra, G. Wiederhold, and M. Kersten. A graph-oriented model for articulation of ontology interdependencies. In Proceedings of the Seventh International Conference on Extending Database Technology, C. Zaniolo et al., eds., pp. 86–100. LNCS 1777. Springer, 2000.
[25] A. Mocan and E. Cimpian. Mappings creation using a view-based approach. In First International Workshop on Mediation in Semantic Web Services (Mediate 2005), M. Hepp et al., eds. CEUR, 2005.
[26] A. Mocan, M. Zaremba, M. Moran, and E. Cimpian. Filling the gap: Extending Service-Oriented Architectures with semantics. In Second IEEE International Symposium on Service-Oriented Applications, Integration and Collaboration (SOAIC 2006). IEEE Computer Society, 2006.
[27] A. Mocan, E. Cimpian, and M. Kerrigan. Formal model for ontology mapping creation. In Proceedings of the Fifth International Semantic Web Conference, I. Cruz et al., eds. Springer, 2006.
[28] G. Modica, A. Gal, and H. Jamil. The use of machine-generated ontologies in dynamic information seeking. In Proceedings of the Ninth International Conference on Cooperative Information Systems, pp. 433–448. Springer, 2001.
[29] N. Noy and M. Musen. PROMPT: Algorithm and tool for automated ontology merging and alignment. In Proceedings of the Twentieth National Conference on Artificial Intelligence. American Association for Artificial Intelligence, 2000.
[30] B. Omelayenko and D. Fensel. An analysis of integration problems of XML-based catalogues for B2B e-commerce. In Semantic Issues in E-Commerce Systems: Proceedings of the Ninth IFIP 2.6 Working Conference on Database Semantics (DS-9). Kluwer, 2001.
[31] L. Popa, M. A. Hernandez, Y. Velegrakis, R. J. Miller, F. Naumann, and H. Ho. Mapping XML and relational schemas with Clio (demo). In Proceedings of the 18th International Conference on Data Engineering (ICDE), 2002.
[32] E. Rahm and P. Bernstein. A survey of approaches to automatic schema matching. VLDB Journal, 10:334–350 (2001).
[33] D. Roman, U. Keller, H. Lausen, J. de Bruijn, R. Lara, M. Stollberg, A. Polleres, C. Feier, C. Bussler, and D. Fensel. Web Service Modeling Ontology. Applied Ontology, 1(1):77–106 (2005).
[34] D. Roman, J. Scicluna, and C. Feier, eds. Ontology-Based Choreography and Orchestration of WSMO Services. WSMO working draft version 0.1, 2005. http://www.wsmo.org/TR/d14/v0.1.
[35] D. Roman, H. Lausen, and U. Keller, eds. D2v1.3 Web Service Modeling Ontology (WSMO). WSMO final draft, October 2006. http://www.wsmo.org/TR/d2/v1.3.
[36] D. Roman, J. Scicluna, and J. Nitzsche, eds. Ontology-Based Choreography of WSMO Services. WSMO final draft D14, 2006. http://www.wsmo.org/TR/d14.
[37] G. Stumme and A. Maedche. FCA-Merge: Bottom-up merging of ontologies. In 17th International Joint Conference on Artificial Intelligence (IJCAI '01), pp. 225–234. Morgan Kaufmann, 2001.
[38] H. Wache. Towards rule-based context transformation in mediators. In International Workshop on Engineering Federated Information Systems (EFIS 99), pp. 107–122. Sankt Augustin, 1999.
[39] H. Wache and D. Fensel. International Journal of Cooperative Information Systems, special issue on intelligent information integration, 9(4) (2000).
[40] R. Yerneni, C. Li, H. Garcia-Molina, and J. Ullman. Computing capabilities of mediators. In Proceedings of ACM SIGMOD, A. Delis et al., eds. ACM Press, 1999.
6
Toward Configurable QoS-Aware Web Services Infrastructure
Lisa Bahler, Francesco Caruso, Chit Chung, Benjamin Falchuk, and Josephine Micallef
6.1 Introduction
Communications service providers are reengineering their back-office systems and enterprise architectures to increase business agility and flexibility for revenue generation, and to integrate the supply chain for cost reduction. The same trend is seen in other industries that are driven by similar business objectives. A flexible, standards-based integration architecture is a key success factor for these initiatives, and therefore many of these re-architecture efforts are adopting (or looking to adopt) a Service-Oriented Architecture.

A Service-Oriented Architecture (SOA) is one in which software application functionality is exposed as a "service" with an associated machine-processable service description that documents the service's interface, transport bindings, and policy restrictions. SOA enables easy integration of the various components needed to deliver a business solution, as well as interoperability with existing (legacy) enterprise systems. Web Services standards and technologies play a key role in realizing an SOA, primarily owing to the interoperability of services implemented on different platforms (e.g., J2EE, .NET, CICS) and the use of Internet protocols to connect the enterprise to customers, suppliers, and partners.

Mission-critical enterprise systems, such as telecommunications network management systems, have stringent nonfunctional requirements for reliability, performance, availability, security, and scalability. Web Services technologies that address these essential quality of service (QoS) aspects are still in their infancy, with several emerging and often overlapping specifications that address some QoS feature; many gaps and limitations remain. In this chapter, we address the challenges of designing Web Services with the requisite QoS for mission-critical operations.
Business agility and cost-effective use of technology for implementing business solutions place an additional important requirement on a Web Services architecture: namely, that the solution can be configured and adapted to support diverse and evolving deployment environments without software code changes. Modern component architectures, such as J2EE, support this requirement through declarative specifications of component properties, such as security and transactionality, which can be configured at deployment. An additional objective for
QoS-aware Web Services architectures, therefore, is to enable service QoS to be configurable and composable at deployment.

The rest of the chapter is organized as follows. Section 6.2 introduces a real business scenario, which is used as a running example in subsequent sections. Section 6.3 provides an analysis and design methodology for Web Services that can be transparently deployed on different transports. Section 6.4 focuses on an important QoS aspect for Web Services: namely, the QoS requirements for the message exchange needed to accomplish a business service. An adaptive Web Services gateway that can be configured to provide security (and other capabilities) is the topic of section 6.5. Section 6.6 describes the use of Semantic Web technologies to support and automate deployment configuration so as to satisfy the solution's QoS requirements in a specific technology environment. Section 6.7 concludes the chapter. (Note: This work was originally submitted in 2004 and reflects our views at that time.)

6.2 Motivating Business Scenario
We present an illustrative business scenario to motivate and explore the problems addressed in this chapter. ACME Communications, a provider of communications services, has developed a state-of-the-art SOA to effectively provision and bill for services and manage its network, as shown in the center of figure 6.1. ACME's system design depends on certain infrastructure technology choices that support the business objectives; for example, a reliable messaging infrastructure is used so that applications do not have to reimplement logic for reliable intersystem communication. To expand its service offering and meet aggressive growth targets, ACME establishes a number of strategic partnerships with service resellers and suppliers. Joe's Telco is an ACME service reseller and needs access to ACME's service-ordering Web Service. ACME's service-provisioning applications also need to interact with other partners that provide network facilities and billing services. The ACME service and network management ecosystem, composed of ACME and its partners and suppliers, is illustrated in figure 6.1.

So what does this mean for the Web Services architecture that ACME has developed to run its business? First, the services must be able to work correctly with different middleware technologies. For example, the semantics of the interaction between the service-ordering and billing services should be unaffected whether communication is over a reliable transport (as is the case within ACME) or over HTTP when the billing service is provided by Pay Online. Second, there are different security trust domains. Information that comes through the service-ordering interface may not be targeted for ACME; for example, a credit card number on an order that originates from Joe's Telco, with billing to be provided by Pay Online, should not be visible to ACME. The following sections describe how to design Web Service-based solutions that address these business requirements.
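The trust-domain point can be made concrete with a toy sketch (illustrative Python; the message shape and party names follow the scenario only loosely, and a real solution would protect the sensitive fields cryptographically, with message-level security, rather than simply filter them):

```python
# Toy illustration of selective message visibility across trust domains:
# the billing section of an order is targeted to Pay Online, so ACME's
# view of the same message must not expose it.

order = {
    "service": "broadband-100",
    "customer": "Joe's Telco subscriber 42",
    "billing": {"target": "PayOnline", "card_number": "4111-1111-1111-1111"},
}

def view_for(party, message):
    """Return only the parts of the message the given party may see."""
    return {k: v for k, v in message.items()
            if not isinstance(v, dict) or v.get("target") == party}

acme_view = view_for("ACME", order)        # no billing section
pay_view = view_for("PayOnline", order)    # includes the card number
assert "billing" not in acme_view
assert "billing" in pay_view
```

In practice this selectivity is enforced by encrypting each section for its intended recipient, so that intermediaries forward data they cannot read.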
Figure 6.1 ACME Communications Service-Oriented Architecture. (The figure shows the ACME administrative domain, with its customer portal and its service-ordering, assurance, activation, inventory, and billing services, connected within the ACME ecosystem to the administrative domains of Joe's Telco and Pay Online; annotations note the need for a formal semantic service description, an Internet-friendly transport protocol, selective message-level security, and QoS guarantees.)
6.3 Designing Transparently Transport-Adaptable Web Services
Web Services are commonly thought to imply the use of HTTP as the transport in order to circumvent enterprise firewall issues and to expose services outside the enterprise. This impression that Web Services are only HTTP-based is reinforced by the strong and unbalanced support for HTTP-based Web Services offered by tool vendors. In reality, according to the WSDL specification [28] [29], Web Services are transport-independent. This means that a Web Service can be exposed through different transport protocols, such as HTTP, JMS, IIOP, and SMTP. It is important to note that being transport-independent does not necessarily mean being transparently adaptable to different transport middleware. A Web Service designed for a synchronous transport such as HTTP generally exhibits different interaction behavior than a Web Service designed to operate in an asynchronous messaging environment such as JMS. The difference lies in the interface and coordination mechanism (e.g., blocking vs. nonblocking) between the application and the transport middleware. This issue of application adaptation to different
transports is orthogonal to the issue of whether SOAP messages are used to package the information flowing between a Web Service and its clients; even if SOAP messages are carried within JMS messages, the distinction between the essentially asynchronous JMS transport and the synchronous HTTP transport still exists. (Note: In the rest of the chapter we focus on Web Services over SOAP.) Portability across transports may be compromised if particular care is not taken in designing the service. In the next section we describe a methodology for designing Web Service interfaces that are transparently adaptable to different transport middleware.

6.3.1 Methodology
There are four essential elements in our methodology: the business activity, the communication pattern, the communication style, and the message exchange pattern.

Business Activity
A business activity is a collaboration between two parties (i.e., a service consumer and a service provider) to achieve a business objective. A business activity is primitive (not decomposable into further business activities) and may be composed with other business activities (e.g., through a workflow engine) to realize a complex business flow. For example, if the objective is to acquire a snapshot of an inventory database, the getInventory business activity involves characterizing the scope of the inventory snapshot (i.e., what part of the inventory to retrieve) and querying the inventory system to retrieve the required data set.

Business Communication Pattern
The parties involved in a collaboration exchange messages to accomplish a business activity. A business communication pattern identifies the actors, the role of each in the communication, and the abstract type of messages sent and/or received (e.g., request, response, notification, error). Examples of communication patterns include request-response, multiple-batch-response, and notification. The design of Web Services realizing a business activity includes selecting one or more of these communication patterns. For example, the getInventory business activity is likely to require result sets to be partitioned into several chunks and sent to the service consumer according to the multiple-batch-response communication pattern. A communication pattern defines the collaboration as a high-level choreography without specifying how it is actually carried out. It is an abstract concept, analogous to the WSDL Transmission Primitive within the PortType in WSDL 1.1 [28] or the WSDL Message Exchange Pattern within the interface in WSDL 2.0 [29].
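The multiple-batch-response pattern just mentioned can be sketched as follows (illustrative Python; the getInventory partitioning and message shapes are hypothetical, not a WSDL-defined interface):

```python
# Hypothetical sketch of the multiple-batch-response communication pattern:
# the provider answers a single request with several response messages
# (batches), the final one marked so the consumer knows the exchange is over.

def get_inventory(scope, inventory, batch_size=2):
    """Provider side: yield the scoped inventory snapshot in batches."""
    items = [item for item in inventory if item["region"] == scope]
    for start in range(0, len(items), batch_size):
        chunk = items[start:start + batch_size]
        yield {"type": "response",
               "items": chunk,
               "last": start + batch_size >= len(items)}

inventory = [{"id": i, "region": "east"} for i in range(5)]

# Consumer side: accumulate batches until the final one arrives.
snapshot = []
for batch in get_inventory("east", inventory):
    snapshot.extend(batch["items"])
    if batch["last"]:
        break

assert len(snapshot) == 5
```

The pattern fixes only the abstract choreography (one request, n responses, a terminating marker); how the batches actually travel is left to the communication style and transport binding discussed next.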
Toward Configurable QoS-Aware Web Services Infrastructure

Figure 6.2 Communication styles for the request-response pattern. (The figure contrasts the RPC style, in which the service consumer blocks on a service receptacle until the response returns, with the MSG style, in which the request message carries a ReplyTo address and a correlation ID and the response message, carrying the same correlation ID, arrives at a callback receptacle without blocking.)

Communication Style
The communication style identifies the invocation mechanism used by the parties to exchange messages, and focuses on the exchange mechanism between the service logic (application) and the message-processing layer (SOAP processor) to send and receive messages. In our methodology,
we identify two communication styles: remote procedure call (RPC) and messaging (MSG). Figure 6.2 contrasts the two styles for the request-response communication pattern. In the RPC communication style, the service consumer invokes the service provider through a service receptacle and receives a response as the return argument. The call to the middleware is a blocking synchronous call and implements the RPC semantics. In the MSG communication style, the service consumer invokes the service by sending a request message through the service receptacle and, at the same time, exposing a callback receptacle. The service provider will respond by sending the reply message to the callback receptacle. The interaction is nonblocking. Besides the different coordination mechanisms, the significant difference between these two styles lies in the creation of the callback receptacle in the MSG style. This difference has implications in the abstract operation signature (or document structure) as well as in the choreography built on top of the communication style. Note that it is possible, and often desirable, to expose the same business activity through both communication styles by implementing two flavors of the service.

Lisa Bahler and Colleagues

Transport Binding
The binding concept, as defined in SOAP, describes the serialization mechanism and conventions used to serialize and exchange XML messages over a target transport. The binding adapter is the logical component implementing the bindings. With varying amounts of effort, both RPC and MSG communication styles can be mapped to transport fabrics that have different native characteristics. The transport's native capability to synchronously connect the parties or asynchronously store and forward the messages is a significant discriminator in mapping the two communication styles. Figure 6.3 summarizes the four different combinations of communication style and transport type.

Figure 6.3 Mapping communication style to transport type:

  Communication style   Synchronous transport            Asynchronous transport
                        (e.g., HTTP/S, IIOP)             (e.g., JMS, SMTP)
  RPC                   Maps natively                    Binding adapter must handle
                                                         message correlation
  MSG                   Binding adapter must handle      Maps natively
                        "store and forward" semantics

Mapping the RPC style to a synchronous transport, such as HTTP, is straightforward. A service request causes the establishment of a logical communication channel (for HTTP, a TCP/IP socket) to transport the request, and the response will be carried back over the same channel. The call from the service consumer is blocking, and therefore both service consumer and service provider need to be active simultaneously. Similarly, the MSG style maps natively to an asynchronous transport, such as JMS. A JMS message is sent by a service provider, queued by the JMS broker, and asynchronously dispatched to a JMS subscriber. Mapping the RPC style to an asynchronous transport requires synchronization to be implemented in the layer between the application and the transport. A service consumer will invoke the middleware with a blocking call according to the RPC style semantics. The binding code in the binding adapter will handle the dispatching and synchronization of the messages through an asynchronous transport. The correlation ID is not exposed to the application in the RPC style and needs to be explicitly managed in the binding adapter. In order to map an MSG style to a synchronous transport, the binding adapter needs to implement a "store and forward" mechanism to decouple the service consumer from the service provider. A similar approach is discussed in Section 6.7 for adding reliability guarantees.

Message Exchange Pattern
A message exchange pattern (MEP) is the combination of a business communication pattern and a communication style, and fully identifies the messages and the choreography (sequencing and cardinality of messages) independently of a business activity.
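The RPC-over-asynchronous-transport mapping described above can be sketched in a few lines: the binding adapter, not the application, generates and matches the correlation ID, presenting a blocking call on top of a queue-based transport. This is a hypothetical illustration; the queue stands in for a JMS-like fabric, and all names are assumptions.

```python
# Sketch (an assumption, not the chapter's code): a binding adapter mapping
# the blocking RPC style onto an asynchronous, queue-based transport.
# The correlation ID is managed entirely inside the adapter.
import queue
import threading
import uuid

request_queue = queue.Queue()   # stands in for an asynchronous transport (e.g., a JMS queue)

def provider_loop():
    """Provider side: consume requests asynchronously, send correlated replies."""
    while True:
        msg = request_queue.get()
        if msg is None:
            break
        reply = {"correlation_id": msg["correlation_id"],
                 "body": msg["body"].upper()}       # dummy business logic
        msg["reply_to"].put(reply)

def rpc_call(body, timeout=5.0):
    """Consumer-side binding adapter: a blocking call with hidden correlation."""
    correlation_id = str(uuid.uuid4())
    reply_to = queue.Queue()
    request_queue.put({"correlation_id": correlation_id,
                       "reply_to": reply_to, "body": body})
    while True:                                     # ignore replies that do not correlate
        reply = reply_to.get(timeout=timeout)
        if reply["correlation_id"] == correlation_id:
            return reply["body"]

t = threading.Thread(target=provider_loop, daemon=True)
t.start()
result = rpc_call("hello")     # blocks until the correlated reply arrives
request_queue.put(None)        # shut the provider loop down
```

The application sees only the blocking `rpc_call`; the dispatching, correlation, and synchronization all live in the adapter layer, as the text describes.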
An MEP can be equated to a SOAP MEP [21].

Putting It All Together
The association of a business activity with one or more MEPs fully specifies an interface in a technology-neutral manner, and corresponds to the notion of a PortType in WSDL 1.1 and an Interface in WSDL 2.0. We call this the abstract interface. Binding the transport protocol to the abstract interface results in a technology-specific interface we call the concrete interface. The binding encapsulates all the details needed to map an abstract service from the business level to the wire protocol. Figure 6.4 illustrates the various elements of the methodology and the boundary between the abstract and concrete interfaces.

Figure 6.4 Defining a concrete interface. (The figure shows a business activity associated with communication patterns such as request-response, multiple batch response, bulk response, and notification; each pattern is combined with an RPC or MSG communication style to yield the message exchange patterns that make up the abstract interface, and a binding to a transport such as JMS or HTTP/S yields the concrete interface.)

Starting from a business activity, our methodology advocates selecting one or more communication patterns that best fit the task, and then selecting one or both communication styles to optimize the service against a type of transport. The result is a portfolio of MEPs, each of which will be able to support any transport according to the provided bindings. Although each Web Service will be transparently adaptable to either style of transport, the choice of an MSG or RPC style will influence the performance and the cost of coupling to the transport.

6.3.2 Summary
In this section we have introduced the concepts of business activity, communication pattern, communication style, and MEP; explained their fundamental roles in achieving transparent adaptability to a communication transport; and explained the trade-offs. The design methodology presented in this section did not take into consideration the messaging quality of service; this is discussed in section 6.4, which also introduces two generic mechanisms to annotate the WSDL with metainformation related to the service QoS. The same mechanisms can easily be used to capture the metadata about the communication pattern, style, and MEP. Another key benefit of modeling, structuring, and documenting the service according to our methodology is the capability to automatically generate the binding adapters, as well as the ability to configure or optimize the service according to the requirements at deployment time. Section 6.6 will further address
how the service metadata can be used to understand and bridge gaps between the service requirements and the deployment infrastructure.

6.4 Messaging Quality of Service
Among the many different aspects of Web Service quality of service (QoS) are those associated with messaging, which is our focus here. Specifically, we focus on the delivery guarantee and the message order. The delivery guarantee can be one of the following:

• None: message loss and duplicates allowed.
• At least once: no message loss; duplicates allowed.
• At most once: message loss allowed; no duplicates.
• Exactly once: no message loss; no duplicates.

The order of the messages received can be one of the following:

• None: no particular message order is guaranteed.
• Time: messages from a particular consumer are delivered to the provider in the order determined by an ID established by the consumer. Message IDs are assumed to increase with time.
• Priority: the infrastructure will expedite the delivery of higher-priority messages, when possible. Queued messages waiting to be delivered to the provider will be sorted by priority. This order can be combined with time order, allowing messages of a particular priority to be sorted by time.
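The delivery guarantees above can be made concrete with a small simulation: a channel whose retries duplicate every message, received with and without duplicate filtering. This is an illustrative sketch only; the function and its message shapes are assumptions, not the chapter's code.

```python
# Sketch illustrating the delivery guarantees listed above: an unreliable
# channel that delivers every message twice (retries), with and without
# receiver-side duplicate elimination by message ID.

def deliver(messages, duplicates=True, dedup=False):
    """Simulate delivery of (msg_id, body) pairs over a duplicating channel."""
    arrived = []
    seen = set()
    for msg_id, body in messages:
        copies = 2 if duplicates else 1          # at-least-once: retries cause duplicates
        for _ in range(copies):
            if dedup and msg_id in seen:
                continue                          # exactly-once: drop repeated IDs
            seen.add(msg_id)
            arrived.append(body)
    return arrived

msgs = [(1, "a"), (2, "b")]
at_least_once = deliver(msgs)              # no loss, duplicates allowed
exactly_once = deliver(msgs, dedup=True)   # no loss, no duplicates
```

With filtering disabled the receiver sees each message twice (at least once); adding the ID filter yields exactly-once semantics over the same lossy retry behavior.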
Both sides of the communication must cooperate to achieve a particular QoS, which means that they must agree upon, or at least be aware of, what that QoS is. In addition to this abstract QoS, there must be a mutual understanding of the specific reliability protocol to be employed, along with its tunable parameters, such as time-outs; these form a reliability binding policy. The abstract QoS can be thought of as adding an extra dimension to the abstract interface portion of figure 6.5, and the reliability binding policy can be thought of as adding an extra dimension to the binding portion of that figure. A natural place to put information about the abstract QoS and reliability binding policy is in the WSDL file [28]. This has the effect of putting the service provider in control of the QoS for an interaction. The reliable messaging logic may be implemented directly within the provider and the consumer, but it is more likely to be implemented by their middleware infrastructures (i.e., SOAP processor and transport binding). In such a case, any QoS and reliability binding policy must be consistent with what the infrastructure can offer. Furthermore, assuming that the infrastructure
can honor a specified QoS and binding policy, it will need to be aware of how these are set for messages targeted toward a particular provider endpoint. This implies that the messaging infrastructure must be made aware of the QoS and reliability binding policy found in the WSDL file. Figure 6.5 illustrates these concepts.

Figure 6.5 Reliable Messaging scenario. (The figure shows a service consumer and a service provider sharing an abstract interface with an abstract QoS; a WSDL-driven reliable messaging configuration configures the reliable messaging infrastructure on each side, and the two infrastructures interact via a transport protocol, a reliable messaging protocol, and the RM protocol parameters.)

In the discussion that follows, the focus is on the configuration and deployment of providers, consumers, and their infrastructures according to a WSDL file, taking into account the capabilities of the infrastructures. The dynamic, negotiated configuration of entities that discover each other on the fly is not considered here; for insight into this issue, the reader is directed to [25].

6.4.1 Requirements upon the Provider, the Consumer, and Their Infrastructures
In this section we discuss the requirements upon the provider, the consumer, and their infrastructures to maintain the abstract and binding policies for exactly-once, time-ordered delivery within an MSG-style interaction pattern. As an example, consider the getInventory Web Service as defined in section 6.3.1. The provider specifies that requests to it must have an exactly-once delivery guarantee and must be delivered in time order from any particular consumer. The provider also identifies the Reliable Messaging protocol that is to be used.

The Web Service Description Language File
The WSDL file for this service must contain annotations to convey this information. The WSDL abstract interface element (PortType in WSDL 1.1 and Interface in WSDL 2.0) needs to be annotated with the requirements for exactly-once and time-ordered delivery. The binding element, which denotes a concrete binding of the service to a particular protocol and transport, needs to be annotated with the Reliable Messaging protocol to be used, along with the settings of its parameters. There are several ways in which this annotation may occur: via the use of policy elements [17] [15] or attachments [16]; the use of feature and property elements [29]; or
the use of a semantically richer representation based on an ontology (see section 6.6). The appropriate mechanism to use is the subject of current debate [3]. There are four actors to consider in this discussion: the provider, the provider-side reliability infrastructure, the consumer, and the consumer-side reliability infrastructure. What requirements would such a WSDL place upon each of these actors?

Consumer Responsibilities Toward Achieving the Desired QoS
The consumer will need to convey to the consumer-side infrastructure what the prevailing QoS and binding policies should be for its communication with the provider. This can occur implicitly via the use of a deployment descriptor, explicitly via API calls to the infrastructure component that provides reliable messaging, or via the use of message headers to convey the policy in effect for each request.

Consumer-Side Infrastructure Responsibilities
The consumer-side infrastructure must be able to safe-store any unacknowledged messages that it sends to the provider, along with their sequence identifiers and message IDs. It must also make sure that the reliability headers for the messages it sends are complete, which may mean adding to the header information already set by the consumer. The consumer-side infrastructure must also help establish, and cooperate in, the method by which the consumer side receives acknowledgments and responses from the provider side. It must keep track of acknowledgments and perform retries, as appropriate. It must also know how to send responses to requests back to the consumer, and it needs to safe-store these messages until they are acknowledged by the consumer. Last, the consumer-side infrastructure must know how to handle faults, notifying the consumer as appropriate.
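The consumer-side duties described above (safe-store until acknowledged, then retry what remains unacknowledged) can be sketched as a small class. The class and method names are assumptions for illustration; a real infrastructure would persist the safe-store and drive retries from timers.

```python
# Sketch (assumed names, not the chapter's code) of consumer-side
# infrastructure duties: safe-store each outgoing message until it is
# acknowledged, and retransmit anything still unacknowledged.

class ConsumerSideInfra:
    def __init__(self, transport):
        self.transport = transport      # callable that sends a message (possibly lossy)
        self.safe_store = {}            # message_id -> message, held until acked
        self.next_id = 0

    def send(self, body):
        self.next_id += 1
        msg = {"id": self.next_id, "body": body}
        self.safe_store[msg["id"]] = msg    # persist before first transmission
        self.transport(msg)
        return msg["id"]

    def on_ack(self, msg_id):
        self.safe_store.pop(msg_id, None)   # acknowledged: safe to release

    def retry_unacked(self):
        for msg in self.safe_store.values():    # retransmit only unacked messages
            self.transport(msg)

sent = []
infra = ConsumerSideInfra(transport=sent.append)
m1 = infra.send("req-1")
m2 = infra.send("req-2")
infra.on_ack(m1)          # provider side acknowledged the first message
infra.retry_unacked()     # only req-2 is retransmitted
```

Releasing the safe-store strictly on acknowledgment is what makes the retry loop safe: a retransmission can at worst create a duplicate, which the provider side filters out.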
Provider-Side Infrastructure Responsibilities
For the delivery guarantee of this example, the provider-side infrastructure needs to safe-store messages it receives from the consumer-side infrastructure until they are processed by the provider. Additionally, it needs to weed out duplicate messages coming from the consumer side. The provider-side infrastructure also needs to send acknowledgments or negative acknowledgments back to the consumer side. It will need to guarantee that the header placed on the response message is complete. Last, it will need to be able to relay fault messages to the consumer side.

Provider Responsibilities
As for the provider itself, if its infrastructure is to be configured with knowledge of the QoS and binding policies, it will need to convey these policies to the infrastructure, either implicitly via a deployment descriptor or via an explicit API call to the infrastructure component that provides the reliable messaging. For the exactly-once delivery guarantee of this example, the provider also needs to generate an acknowledgment to the provider-side infrastructure to let it know that a message from the consumer has been safely processed and can be released from the infrastructure's safe-store.
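The provider-side duties (safe-store incoming requests, weed out duplicates by message ID, and release the safe-store only on the provider's acknowledgment) can be sketched symmetrically. Again, all names are illustrative assumptions, not the chapter's code.

```python
# Sketch (assumed names) of provider-side infrastructure duties for the
# exactly-once guarantee: duplicate elimination plus safe-store release
# driven by the provider's processing acknowledgment.

class ProviderSideInfra:
    def __init__(self):
        self.safe_store = {}    # message_id -> message, until the provider acks
        self.seen_ids = set()   # for duplicate elimination

    def on_message(self, msg):
        """Return 'ack' for new messages, 'duplicate' for retransmissions."""
        if msg["id"] in self.seen_ids:
            return "duplicate"              # already received; do not redeliver
        self.seen_ids.add(msg["id"])
        self.safe_store[msg["id"]] = msg    # hold until processing completes
        return "ack"

    def provider_ack(self, msg_id):
        """Provider finished processing: release the message from safe-store."""
        self.safe_store.pop(msg_id, None)

infra = ProviderSideInfra()
first = infra.on_message({"id": 7, "body": "order"})
repeat = infra.on_message({"id": 7, "body": "order"})   # a retransmission
infra.provider_ack(7)
```

Note the two distinct acknowledgments in play: the transport-level ack returned to the consumer side, and the provider's processing ack that releases the safe-store.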
6.4.2 Configuration of the Provider, the Consumer, and Their Infrastructures
To make the example of section 6.4.1 more concrete, it will further be assumed that communication between the consumer and the provider will be implemented by SOAP messages over HTTP. Since this combination will not, by itself, guarantee the desired abstract reliability of exactly-once and time-ordered delivery, the WSDL binding element will need to be annotated to show that an additional reliability layer, such as WS-Reliability [30] or WS-ReliableMessaging [26], will need to be implemented. The binding element will also contain the tunables for this reliability protocol.

Abstract QoS Configuration
The abstract QoS settings directly control the interaction of the consumer and the provider with their respective infrastructures. Ideally, the API exposed by the infrastructure is generic and independent of the transport binding. In this example, we assume the use of a generic API based upon the Java Message Service (JMS) API. Because the delivery guarantee is exactly once, all messages must be persistently stored by the infrastructure. Additionally, the infrastructure must not redeliver any message that has already been acknowledged by an application (either consumer or provider). Each application should acknowledge a message as soon as it has been fully processed, but no sooner. Likewise, the requirement that requests made by the consumer be processed in time order implies that all consumer requests are issued within a single session to a single messaging destination and are received by the provider from this destination within a single session. (The session and destination are logical concepts taken from the JMS API.) The abstract QoS listed within the WSDL file must be translated into configuration files or deployment descriptors for the application code. This application code should be written in such a way as to be parameterized by the configuration.
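The translation from abstract QoS to messaging-API parameters discussed in this section can be sketched as a simple mapping. The function, keys, and values are assumptions for illustration; only the underlying rules (persistence plus client acknowledgment for exactly-once; a single session and a fixed priority for time order) come from the text.

```python
# Sketch (assumed names and values): translating the abstract QoS from the
# WSDL into parameters for a JMS-like messaging API.

def qos_to_messaging_config(delivery_guarantee, message_order):
    config = {}
    if delivery_guarantee == "exactly-once":
        config["persistent"] = True        # messages survive infrastructure restarts
        config["ack_mode"] = "client"      # app acks only after fully processing
    if message_order == "time":
        config["single_session"] = True    # one session to one destination
        config["fixed_priority"] = 4       # same priority for every request, so
                                           # priority ordering never reorders them
    return config

cfg = qos_to_messaging_config("exactly-once", "time")
```

A deployment tool could emit such a configuration as a descriptor consumed by the parameterized application code the text describes.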
It should at least understand all the possible values that the configuration may contain, even if it simply responds to an attempted configuration with an error message saying that it cannot satisfy it. For example, the code in this case should understand that "exactly once" translates into API calls to the infrastructure that have message persistence turned on and the acknowledgment mode set to "client acknowledgment." The code also needs to understand that "time-ordered" handling of consumer requests within the provider means that the consumer needs to generate the requests within a single session, and the provider needs to read them within a single session; all such requests also need to be set to the same priority by the consumer, to prevent them from being delivered out of order. The application code also needs to be written in such a way that explicit client acknowledgment of the completion of processing of each message can be turned on at will. The consumer-side infrastructure can be dynamically configured, via the API, by messages that are sent to it from the consumer. The infrastructure should safe-store messages when they are sent persistently, until they are acknowledged by the provider-side infrastructure. Additionally, when the consumer-side infrastructure knows that a consumer request is sent persistently,
it should safe-store the response coming back from the provider until it is acknowledged by the consumer. The time-order restriction is naturally satisfied by the consumer-side infrastructure when it creates increasing sequence numbers within a single session for the requests emanating from the consumer. The provider-side infrastructure will similarly be dynamically configured via the API by responses emanating from the provider. The provider-side infrastructure will need to be configured to know how to handle the requests coming from the consumer. Specifically, it must safe-store each request message from the consumer until it receives an acknowledgment from the provider that it has completely processed the request message. Additionally, the provider-side infrastructure will need to be told to pass the consumer requests to the provider in the proper sequence, in order to handle the time-order requirement.

Binding-Specific Configuration
For this example, WS-ReliableMessaging is chosen. This reliability protocol has tunable parameters particular to it that may be set. The information about the actual protocol used, and its tunable parameters, will exist as annotations on the WSDL binding. The possible policy assertions with which to annotate the binding are the following:

• SpecVersion: identifies the version of the ReliableMessaging specification that governs the exchange of messages between the consumer and provider sides.
• SequenceCreation: indicates that the provider side wishes to be in control of creating new sequences.
• SequenceExpiration: sets an absolute expiration time for a particular sequence.
• InactivityTimeout: sets a limit on the amount of time during which there is no activity of any sort associated with a sequence. At that point, an endpoint may consider the sequence closed.
• BaseRetransmissionInterval and ExponentialBackoff: control the amount of time a consumer side will wait for an acknowledgment before resending a message.
• AcknowledgementInterval: indicates the amount of time the provider side may queue up acknowledgments before it is forced to send an acknowledgment message to the consumer side.

For this example, a timing profile is used as the basis of a policy attachment for the binding:
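The policy XML itself is not reproduced in this text. A hypothetical reconstruction, using the assertion names listed above and the timing values discussed in this section, might look like the following; the namespace URIs, attribute names, and exact element spellings are assumptions, not the chapter's text.

```xml
<!-- Hypothetical reconstruction of the timing-profile policy attachment.
     Element names follow the assertions listed above; namespaces and
     spellings are assumed, not taken from the chapter. -->
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:wsrm="http://schemas.xmlsoap.org/ws/2005/02/rm/policy">
  <wsrm:RMAssertion>
    <wsrm:InactivityTimeout Milliseconds="86400000"/>        <!-- one day -->
    <wsrm:BaseRetransmissionInterval Milliseconds="3000"/>   <!-- three seconds -->
    <wsrm:ExponentialBackoff/>
    <wsrm:AcknowledgementInterval Milliseconds="1000"/>      <!-- one second -->
  </wsrm:RMAssertion>
</wsp:Policy>
```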
Additionally, this policy will be amended so that the specVersion will be explicitly identified, as in the following:
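The amended XML is likewise not reproduced in this text; a hypothetical reconstruction adding an explicit SpecVersion assertion to the same policy might look like the following (names, URIs, and spellings are assumptions):

```xml
<!-- Hypothetical reconstruction: the same policy amended with an explicit
     SpecVersion assertion. All names and URIs are assumed. -->
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
            xmlns:wsrm="http://schemas.xmlsoap.org/ws/2005/02/rm/policy">
  <wsrm:RMAssertion>
    <wsrm:SpecVersion Version="http://schemas.xmlsoap.org/ws/2005/02/rm"/>
    <wsrm:InactivityTimeout Milliseconds="86400000"/>
    <wsrm:BaseRetransmissionInterval Milliseconds="3000"/>
    <wsrm:ExponentialBackoff/>
    <wsrm:AcknowledgementInterval Milliseconds="1000"/>
  </wsrm:RMAssertion>
</wsp:Policy>
```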
This binding policy indicates that the provider and consumer infrastructures must both support the given version of WS-ReliableMessaging. Additionally, the consumer infrastructure, if it does not receive an acknowledgment of a message it sends within three seconds, will retransmit the message at that point and will continue to retransmit it according to an exponential back-off algorithm if the message continues to go unacknowledged. The provider infrastructure will hold on to its outgoing acknowledgments for at most one second before sending an acknowledgment message to the consumer infrastructure. The provider side will also consider any active consumer sequence to be closed once a day has passed with no further activity from that consumer. This binding-specific policy information can be translated into configuration files or deployment descriptors for the provider-side and consumer-side infrastructures. Having explored the issues of the specification of reliability characteristics for Web Services and the translation of such specifications into provider, consumer, and infrastructure configuration, we now turn our attention toward security characteristics.

6.5 Transparent Security Support for Web Services
Another aspect of Web Service QoS is messaging security. There are many facets to security, such as message confidentiality and integrity, authentication and authorization, nonrepudiation and auditing, and preventing denial and theft of service. This section focuses on a mechanism for supporting message confidentiality and integrity, but the mechanism is general and may be applied to other security facets (as well as nonsecurity aspects). In addition, this mechanism provides Web Service security for multiple services, thus eliminating the duplication of security support in each Web Service. Message integrity is the assurance that a message (or selected message elements) is not tampered with in any way, and is typically supported by digitally signing the message. To assure
message integrity, a recipient verifies the hash code in a signed message. Message confidentiality is the assurance that a message is not readable by anyone other than its intended recipient, and is typically provided by some form of encryption. The sender encrypts the sensitive data, and the receiver decrypts it, using a mutually agreed key and algorithm. The traditional solution is pairwise establishment of the security policy in both consumer and provider for each consumer-provider interaction. The problems with this solution are (1) lack of scalability as the number of services grows, (2) difficulty in ensuring that consistent policies are applied for related groups of services, and (3) the considerable time and effort required to change policies to meet rapidly evolving business needs. Our approach arranges services into trusted groups, where services within a group are trusted (i.e., do not have to be secured) and secure messaging is needed only when communicating externally. Each trusted group of services is managed as a single administration unit, called an administrative domain (AD). The security infrastructure is provided by an architectural component associated with each AD, thus obviating the need for a security implementation in each service. This architectural component for interacting Web Services is the Web Services Gateway.

6.5.1 Web Services Gateway
A Web Services Gateway (WSG), as illustrated in Figure 6.6, exposes a number of Web Services, acting as a single point of access to the underlying Web Services in the AD. The business activity exposed by the WSG is identical to that exposed by the underlying Web Service, whereas the Gateway enforces the security requirements for each underlying Web Service. While access to the underlying Web Services is constrained to go through the WSG, responses from the underlying Web Services may return via the WSG or directly back to the service consumer. The former is defined as a two-way WSG and the latter as a one-way WSG. There is also a hybrid WSG
which supports both types of Web Services.

Figure 6.6 Web Services Gateway. (The figure shows service consumers 1 through N accessing, through a WS Gateway containing referral, policy, and configuration components, Web Services 1 through M within an administrative domain.)

A WSG offers a single logical point of control, which enables central administration of all the underlying Web Services and facilitates management functions including starting/stopping, logging, monitoring, and debugging. Besides centralizing control and administration, a WSG offers several other benefits, described in the following sections: enhancement modularity, access restriction and policy enforcement, indirection and transparency, and reconfigurability and adaptability.

Enhancement Modularity
A WSG can enhance business domain services with QoS such as security (the focus of this section), reliability, or scalability. The interception of the request by the WSG makes it possible to enhance the basic functions offered by the underlying business Web Services without requiring each of them to implement the enhancements themselves. For example, a security policy may require all requests to be accompanied by a security credential (e.g., username and password) for authentication, but this policy may be changed to allow anonymous access. The underlying business Web Services, no matter how many there are, are not affected by this change in security policy. We refer to this characteristic as enhancement modularity.

Access Restriction and Policy Enforcement
A WSG may be deployed on the Internet, funneling service consumer requests to underlying business Web Services inside an enterprise intranet. A WSG may also funnel service consumer requests from inside the enterprise intranet to external Internet Web Services. The WSG can validate ingress/egress messages against service policies, rejecting messages if the policies are not met. This is called access restriction. However, a WSG is also capable of policy enforcement, modifying the message to meet specified policies. For example, if a WSG policy requires requests to be encrypted and digitally signed, the WSG may reject a request that does not satisfy the policy rules.
Alternatively, the WSG can encrypt and/or digitally sign them automatically. The WSG can be configured for either restriction or enforcement of specific policies.

Indirection and Transparency
The WSG offers a level of indirection between the service consumer and the service provider. This characteristic enables the WSG to virtualize the physical location and network topology of the actual hosts housing the underlying services. The physical location/host running a Web Service can be changed, and additional locations/hosts added, without the service consumer ever being aware of the change. This characteristic, which is referred to as service consumer transparency, enables load balancing and host maintenance. Similarly, the underlying Web Services work without knowledge of the existence of the WSG and, in fact, treat the WSG as just another service consumer. Any enhancements to the WSG are transparent to the underlying Web Services that the WSG exposes. This characteristic, known as service provider transparency, enables adaptation of the underlying Web Services, as will be discussed in the next section. Transparency in either direction does not mean a service consumer can make requests as if it were making them directly to the underlying Web Services. The consumer is interacting with
the WSG, and must comply with any additional requirements imposed by it. For example, any security enhancement added by a WSG will require the service consumer to provide the necessary security information (e.g., username/password).

Reconfigurability and Adaptability
In order to meet the evolving needs of an SOA environment, a WSG must be flexible and allow reconfiguration via some mechanism outside of the software code (e.g., a configuration file). Reconfiguration generally means enabling or disabling enhancement services. It naturally modifies the WSG's behavior but, more important, it results in modified behavior of the underlying Web Services. This is what distinguishes a WSG: the ability to dynamically adapt the underlying Web Services without explicitly changing them. For example, a service provider wishing to promote an introductory service may allow free access without security, and when the free trial period is over, reinstate security without changing the underlying Web Services. If during the trial period more requests come in than can be handled by a single server, the service can adapt by having the WSG load-balance the requests across multiple back-end servers providing the same service.

6.5.2 How Does a Web Services Gateway Work?
A WSG intercepts a Web Service request, processes the request according to prescribed policy rules, and then forwards the request to the underlying business Web Service to actually perform the requested function. This interception is possible without change to the underlying business Web Service because the WSG exposes a WSDL interface similar to that of the underlying business Web Service. The difference is that the service element contained in the concrete part of the WSDL is changed to reflect an endpoint on the WSG, rather than that of the underlying Web Service. To the service consumer looking at the modified WSDL, the WSG is the service, and to the underlying business Web Service, the WSG is the service consumer. The WSG utilizes the SOAP header to carry additional information, if needed, to perform any enhancements. This change may extend beyond the SOAP header to the SOAP body. For example, whereas WS-Addressing uses only header elements, WS-Security uses the SOAP header and also modifies the SOAP body [27] [31]. This is necessary, for example, when encryption of the SOAP body is required to secure the transmission of the message. Nevertheless, the WSG maintains transparency for the underlying Web Services because it is the entity that decrypts the message. A WSG that implements security may also be responsible for authentication of the service consumer as well as authorization of the request. This does not mean the WSG performs the authentication or authorization directly; there may be an authentication and authorization server providing this service. The underlying Web Services trust the WSG and allow requests from it subject to any additional application business security rules that they themselves may impose.
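The intercept, check, and forward flow described above can be sketched in miniature: the gateway verifies a message-integrity check (here a simple HMAC signature, standing in for WS-Security processing), maps the gateway URL to the real endpoint, and forwards the request. All names, the key material, and the URL mapping are assumptions for illustration only.

```python
# Sketch (all names assumed) of the WSG request flow: intercept the request,
# enforce an integrity policy (HMAC signature check, standing in for
# WS-Security), look up the real endpoint, and forward to the service.
import hashlib
import hmac

SHARED_KEY = b"demo-key"   # stand-in for the WSG's real key material

referral_store = {"/gateway/inventory": "/internal/inventory"}

def underlying_service(url, body):
    return f"handled {body} at {url}"     # stand-in for the real Web Service

def wsg_handle(url, body, signature):
    # Policy check: reject requests whose signature does not verify.
    expected = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return "fault: policy violation"
    # Referral: map the gateway URL to the real service endpoint.
    real_url = referral_store[url]
    return underlying_service(real_url, body)

sig = hmac.new(SHARED_KEY, b"getInventory", hashlib.sha256).hexdigest()
ok = wsg_handle("/gateway/inventory", "getInventory", sig)
bad = wsg_handle("/gateway/inventory", "getInventory", "tampered")
```

The underlying service never sees the signature or the gateway URL, which is the transparency property the text describes: security processing and endpoint virtualization live entirely in the gateway.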
Toward Configurable QoS-Aware Web Services Infrastructure
Lisa Bahler and Colleagues
The WSG uses referral and policy components. The referral component allows the WSG to map a given URL to another. When a service consumer request is received by the WSG on a given URL, the WSG looks up the real URL in the referral store and uses it to invoke the underlying service. The policy component allows the WSG to determine the policy rules that should be applied to a particular request.

6.5.3 Design Patterns Compared
As we have discussed, a WSG offers several advantages for Web Services architectures, and product offerings are starting to appear [33] [24]. It is thus interesting to compare this architectural element with classical design patterns in order to discern similarities and differences, and to gain a deeper understanding of its role in SOA. Although the WSG is an architectural element and not an object in the traditional sense, it possesses many of the characteristics of the following object patterns: adapter, wrapper, remote façade, base gateway, mapper, mediator, and proxy [6] [5].

The WSG is similar to the adapter and wrapper patterns in that all three sit between the back end and the service consumer. However, whereas the WSG presents an interface identical to that of the back-end service, the adapter or wrapper changes a service interface so it is usable by service consumers. A Web Service using the adapter or wrapper pattern would be a new Web Service, not a WSG. The adapter differs from the wrapper only in the intent of the new interface: the adapter interface is for service consumers known at the time of service creation, whereas the wrapper interface is for as yet unknown service consumers.

The remote façade is employed to provide a coarse-grained veneer on top of a fine-grained object to improve remote communications efficiency, and is often used in conjunction with the data transfer object (DTO) to transport more information with fewer requests to the server. The underlying business Web Services should certainly adhere to this pattern, mitigating network and communications inefficiencies by aggregating fine-grained methods into coarse-grained service operations, as espoused by SOA best practice. However, this would be done in the underlying Web Service and not in the WSG.

The WSG is more akin to Fowler's base gateway pattern [5].
However, the base gateway is designed as a client pattern merely to simplify the underlying services by repackaging the interface in such a way that the base gateway interface is friendlier to clients. The underlying interface is generally more complex because it is inherently tied to the underlying domain, which may be unfamiliar or too low-level. The base gateway should be kept as simple as possible for the sole benefit of the client. The WSG pattern, on the other hand, merely exposes an identical back-end interface without any (functional) changes to the business service. There are no additional exposed operations, although it is possible for the WSG to hide an underlying operation, or no-op it, as a matter of policy enforcement.

Other related design patterns are the mapper and the mediator, whose primary purpose is to insulate objects so they are separate and unaware of each other. The difference between them is that the objects separated by a mapper are unaware of the mapper, whereas the objects
separated by a mediator are aware of the mediator. The WSG certainly insulates the service consumer and the underlying Web Services, but whereas the service consumer is aware of the WSG (after all, the service consumer sends requests directly to it), the underlying Web Services are unaware of the WSG.

The proxy is a surrogate for an object and controls access to it. In general, a proxy shows up as two related components: a consumer-side stub and a provider-side skeleton. In practice, there are various kinds of proxies: the virtual proxy (a proxy that provides caching), the access proxy (a proxy that restricts access), and the smart proxy (a proxy that provides enhanced functions), among others. Perhaps the pattern that best matches the WSG is the smart proxy, which the WSG resembles when it provides enhancements for a single underlying Web Service. However, the WSG is, more often than not, a collection of smart proxies.

6.5.4 A WSG Security Example
This section describes a simple but elegant WSG implementation using Microsoft C# and the Microsoft Web Services Enhancements 2.0 tool kit (WSE 2.0), which provides support for the WS-Security standard. In addition, WSE 2.0 supports other WS-* specifications, among them WS-Addressing, WS-Referral, WS-Policy, and WS-SecurityPolicy (the last two are only partially supported) [17] [32]. Although this simple implementation uses Microsoft software, the WSG can equally serve non-Microsoft back-end Web Services and non-Microsoft clients—a key benefit of Web Services.

This WSG supports the policies asserted in a policy file. By deploying Web Services with varying QoS policies, the WSG adapts the supported Web Services according to the policy assertions. Each policy may be associated with a WSDL service address. In fact, as will be shown, the same Web Service can be deployed at two endpoints, with a unique policy for each (in the example, one with a security policy and the other without). The WSG AD and a couple of service consumers are shown in figure 6.7.

HelloWorld is a simple Web Service that listens for requests and returns "Hello World!" It is physically located on the host Internalhost with an endpoint of, say, http://internalhost/WSGw/Service1.asmx (URL). This Web Service is deployed on the WSG.

The WSG is a Web Service that exposes the HelloWorld Web Service at two virtual URLs, say, http://gateway/gwsoapreceiver/helloworld.ashx (URLv1) and http://gateway/gwsoapreceiver/requireUsername.ashx (URLv2). The WSG uses two files to store the referral and policy information. Deployment of a Web Service in the gateway requires configuring (a) the referral for the service, which consists of defining a virtual URL that consumers can access and mapping the virtual URL to the physical URL, and (b) the policy for the service exposed at the virtual URL, which defines additional processing that will be performed for requests received at the virtual URL.
The WSG Configuration Web Service implements the Referral and Policy configuration. The Referral component maintains the virtual URL-to-physical URL mapping, using a file in the WSG prototype. For this example, the Referral file contains URLv1→URL and URLv2→
Figure 6.7 Example of a WSG. (The figure shows Service Consumers 1 and 2 invoking the WS Gateway, with its Referral, Policy, and Configuration Web Service components, fronting Web Services 1 through M, including the HelloWorld Web Service, within an Administrative Domain.)
URL mappings. The format of this cache is XML, and the syntax conforms to WS-Referral. The ability to read the Referral information is built into WSE 2.0. The Policy component maintains the security policies, or any other processing rules, associated with each virtual URL; a file is used in the WSG prototype. When a request is received by the WSG, it checks any applicable policies and, if these are satisfied, retrieves the physical URL corresponding to the virtual URL from the Referral component. The WSG then routes the request.

In this example, the HelloWorld Web Service is deployed on the WSG at URLv1 with no policy, and at URLv2 with a simple policy that requires a request to be accompanied by a valid username and password (username token). The name of this policy is RequireUserNameToken. The format of this file is XML, and the syntax conforms to WS-Policy and WS-SecurityPolicy.

Service Consumer 1 makes a request to URLv1. The WSG looks up the policies for URLv1 and finds none are applicable. It consults the Referral to determine where to route the request—in this case, to the HelloWorld Web Service, which sends back the expected response to the consumer. Service Consumer 2 makes a request to URLv2. The WSG looks up the policies for URLv2 and determines that the request must be accompanied by a username and password. If the credentials are present and valid, the WSG looks up URLv2 in the Referral and routes the request to the HelloWorld Web Service, which sends back the expected response to the consumer. A SOAP fault is generated if a service consumer attempts to invoke the Web Service at the secured URL without supplying a valid username token.

This simple example shows how a WSG can provide a flexible and powerful mechanism for managing policies in a single AD. Using this architecture, the same Web Service can be deployed multiple times with different policies. A change in standard policy is easily effected by a change to the policy configuration.
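The referral store for this example would contain two mappings, both pointing at the physical HelloWorld endpoint. A sketch of such a file, using element names from the WS-Referral draft, might look roughly like the following (the refId values are placeholders, and the exact layout WSE 2.0 expects may differ):

```xml
<r:referrals xmlns:r="http://schemas.xmlsoap.org/ws/2001/10/referral">
  <!-- URLv1 (no policy) maps to the physical HelloWorld endpoint -->
  <r:ref>
    <r:for><r:exact>http://gateway/gwsoapreceiver/helloworld.ashx</r:exact></r:for>
    <r:if/>
    <r:go><r:via>http://internalhost/WSGw/Service1.asmx</r:via></r:go>
    <r:refId>uuid:00000000-0000-0000-0000-000000000001</r:refId>
  </r:ref>
  <!-- URLv2 (RequireUserNameToken policy) maps to the same endpoint -->
  <r:ref>
    <r:for><r:exact>http://gateway/gwsoapreceiver/requireUsername.ashx</r:exact></r:for>
    <r:if/>
    <r:go><r:via>http://internalhost/WSGw/Service1.asmx</r:via></r:go>
    <r:refId>uuid:00000000-0000-0000-0000-000000000002</r:refId>
  </r:ref>
</r:referrals>
```

Note that the two entries differ only in the virtual URL: the policy attached to each virtual endpoint lives in the separate policy file, keyed by that URL.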
Currently, there is only limited support in WSE 2.0 for predefined policies such as security policies requiring security tokens, encryption, and/or signatures. Although WS-Security allows partial content of a SOAP message to be encrypted, encryption (as well as signature) specified via policies must apply to the entire message. A custom policy verifier and enforcer must be created if any unsupported policy is to be enabled.

6.5.5 Beyond a Single AD
How can the WSG component be used to meet the security requirements of an SOA scenario such as that described in section 6.2? Recall from the scenario that Joe's Telco is required to forward personal information for credit card approval and prepayment arrangement, or just a credit card number, as part of the service order to ACME Communications. To minimize liability in case of personal information theft, ACME Communications delegates credit approval processing to Pay Online. The personal information in the service order must be encrypted with a Pay Online-specific key so that only Pay Online can decrypt it. This requirement may also be mandated by Joe's Telco, which wants assurance that private information is revealed only to a specific service provider, regardless of how many intermediary service providers are involved in completing the service order.

In this multistep scenario, ACME and Pay Online are in separate ADs, and there is a separate WSG for each domain—say, WSG1 for ACME and WSG2 for Pay Online. Joe's Telco must encrypt the message with two different keys: one specific to WSG1 to encrypt the service order, and one specific to WSG2 to encrypt the credit information. On receipt of the service order, WSG1, on behalf of ACME, decrypts the service order, but the credit information is not yet revealed. When the message is passed to Pay Online, WSG2 decrypts and reveals the credit information to Pay Online.

This type of security, in which the data are encrypted regardless of the protocol used to communicate them, is known as message-level security. Since the data themselves are encrypted and only the intended recipient has the key, a message protected by message-level security is also secure "at rest." Thus, regardless of where the service order may wind up, all participants—from Joe's Telco, to ACME, to Pay Online—can rest assured that this information is secure.
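The two-key layering can be illustrated schematically. The sketch below uses a toy XOR keystream as a stand-in for real XML Encryption (it is deliberately not secure cryptography), and all key and payload names are invented for illustration; the point is only that the inner credit blob remains opaque at ACME and is revealed only by WSG2.

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (NOT real crypto), standing in for XML Encryption.
    # Applying it twice with the same key restores the plaintext.
    out, counter = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Joe's Telco holds two recipient-specific keys:
key_wsg1 = secrets.token_bytes(32)   # key for ACME's gateway (WSG1)
key_wsg2 = secrets.token_bytes(32)   # key for Pay Online's gateway (WSG2)

credit_info = b"card=4111-xxxx-xxxx-1111"
order_body  = b"order=broadband-basic;"

# Inner layer: credit info readable only by WSG2 (Pay Online).
inner = xor_stream(key_wsg2, credit_info)
# Outer layer: whole order, including the opaque inner blob, for WSG1 (ACME).
outer = xor_stream(key_wsg1, order_body + inner)

# WSG1 decrypts the order; the credit blob is still opaque to ACME.
at_acme = xor_stream(key_wsg1, outer)
assert at_acme[:len(order_body)] == order_body
credit_blob = at_acme[len(order_body):]
assert credit_blob != credit_info            # still encrypted at ACME

# WSG2 reveals the credit info only inside Pay Online's domain.
assert xor_stream(key_wsg2, credit_blob) == credit_info
```

In a real deployment, the two layers would be XML Encryption elements keyed to the respective gateways' certificates, but the nesting structure is the same.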
The added burden of obtaining this level of privacy is multiple encryption operations by the originator and separate decryption operations by different WSGs in separate ADs.

6.5.6 Summary
SOA architectures employing the WSG pattern support enhanced services characterized by central administration, enhancement modularity, access restriction and policy enforcement, indirection and transparency, and, most important, reconfigurability and adaptability. A WSG is a reusable architectural pattern that resembles a number of design patterns but is most similar to a collection of smart proxies. The example illustrated a simple but powerful WSG and how it may be used to adapt a basic nonsecure Web Service with a simple security policy. Any nonfunctional QoS may be added with relative ease. A WSG restricts Internet client access to underlying Web Services hosted inside an enterprise. By reversing the architecture, an enterprise may control how internal Web Services, acting as clients, access external Internet Web Services. The WSG is a truly flexible and powerful architectural element that should be exploited in any Web Services-based SOA environment. In section 6.6, we explore how the policies that govern a WSG may be intelligently and consistently deduced from the complex and real-time requirements of dynamic SOA environments.

6.6 Semantics-Based Web Service Configuration
Following the presentation of important nonfunctional aspects of Web Services with respect to transport adaptation, messaging QoS, and security, this section introduces the technologies that enable the automated configuration of an SOA-based solution in accord with the business services' nonfunctional requirements and the infrastructure capabilities. By "infrastructure" we mean all the non-application-specific functionality, such as transport, reliable messaging, and security. Although today's XML and XML Schema specifications enable a great degree of interoperability, unless they are grounded in a shared understanding of the domain's semantics, subtle but serious problems can still occur. These problems often revolve around ambiguous semantics of computer representations. We assume that the infrastructure, and to some extent the Web Service interface to the infrastructure, are configurable through either APIs or configuration data.

Some of the principal issues relating to semantics and QoS-aware Web Services infrastructures include the following:

1. What is the big-picture view of my Web Service architecture? How are the providers, consumers, and infrastructure components interrelated?
2. Given a set of Web Service requirements (both functional and nonfunctional), can the infrastructure satisfy them? If so, can we infer a configuration that satisfies the requirements, and automate the deployment configuration?

Configuring the Web Services infrastructure (equivalently, answering the above two questions) for deployment is a challenge. This is repeatedly pointed out in the literature and is the raison d'être for much of the related work [4] [13] [10]. Sophisticated technologies—from constraint languages to analysis engines to rule systems—have been proposed as solution components. Some of the reasons for the challenging nature of this problem are discussed below.
• If there were only a few simple services to manage, a human administrator could deduce and continually manage their deployment. Today's SOA-based systems, however, consist of a large number of services, each with sophisticated, interdependent QoS requirements.

• The range of configurable characteristics of infrastructure software is expanding, but there is no consistent way of expressing and manipulating such characteristics (from the points of view of infrastructure and services) across software implementations and tools (e.g., reliable messaging software, security software, etc.). Underpinning technologies do now exist, however; for example, WS-Policy and OWL.

• A specification technology for application and infrastructure capabilities and requirements must be used that is both simple enough to be accessible to most administrators and expressive enough to allow the modeling of sophisticated (nontrivial) infrastructure aspects.
The remainder of this section briefly describes (1) the state of the art in semantic Web Services and (2) a novel infrastructure-wide application of semantic tools that makes the problems discussed above more tractable for Web Services infrastructure administrators.

6.6.1 Background
In recent years, sophisticated technologies and approaches have been proposed to help address the semantic problems discussed previously. These include, among others:

• Resource Description Framework (RDF) and RDF Schema (RDF-S), the latter extending RDF with higher-level "complex types" [20]

• DARPA Agent Markup Language and Ontology Inference Layer (DAML+OIL) [2]

• Web Ontology Language (OWL) and OWL-S, a vocabulary and semantics for Web ontologies and an upper ontology for Web Services, respectively [12].
OWL has become a W3C recommendation. Drawing inspiration from DAML+OIL and other earlier work, and providing capabilities not previously enabled by RDF and RDF Schema, OWL brings new levels of scalability, distribution, openness, and extensibility to ontology design [12]. Figure 6.8 illustrates Berners-Lee's Semantic Web Bus [1]. At the ontologies level, OWL can be used to capture the semantics of a domain. Once the semantics are captured, rules and logic engines exploit the ontology at a higher level. The logics associated with OWL are monotonic (i.e., the truth of a proposition does not change when new axioms are added to the system), and several reasoning engines are now widely available (e.g., Racer, F-OWL, and JTP).

Three variations of the OWL language have been proposed by W3C: Full, DL, and Lite. OWL Full should be "viewed as an extension of RDF, while OWL Lite and OWL DL can be viewed as extensions of a restricted view of RDF."1 OWL leverages description logic (DL), which is a "knowledge representation formalism unifying and giving a logical basis to ... semantic data models."2 While designing an OWL ontology, one might capture classes, properties, restrictions, enumerations, and instances of the domain. If QoS were the domain in question, for example, then OWL artifacts might include the following (adapted from [10]):

• Class—e.g., "Availability" is a type of "Quality"; "Throughput" is a type of "Performance."

• Property—e.g., a "Measurement" property relates "Quality" objects to "QualityMeasurements" objects.

• Restriction—e.g., a "Disaster" object must have exactly one "timeOfOnset" object property.

• Instance—e.g., "5 milliseconds" is an instance of "Latency."

Figure 6.8 Semantic Web Bus.
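For readers who prefer code to prose, the four artifact kinds can be mimicked with plain data structures. This is a minimal stand-in for an ontology, not OWL itself; the names are taken from the examples above or invented for illustration.

```python
# Toy model of the four OWL artifact kinds (classes, properties,
# restrictions, instances) for the QoS domain sketched above.
ontology = {
    "classes": {            # child -> parent (rdfs:subClassOf analogue)
        "Quality": None,
        "Availability": "Quality",
        "Performance": "Quality",
        "Throughput": "Performance",
        "Latency": "Performance",
    },
    "properties": {"Measurement": ("Quality", "QualityMeasurements")},
    "restrictions": [("Disaster", "timeOfOnset", "cardinality", 1)],
    "instances": {"5 milliseconds": "Latency"},
}

def is_a(cls, ancestor):
    """Class subsumption by walking the parent chain."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = ontology["classes"].get(cls)
    return False

assert is_a("Throughput", "Quality")
assert is_a(ontology["instances"]["5 milliseconds"], "Performance")
```

A real OWL reasoner does far more (property inheritance, cardinality checking, consistency), but the subsumption walk above is the core inference exploited later in this section.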
6.6.2 Current Approaches
Composition and matchmaking of semantic Web Services are functions of the language used to express service capabilities, the type of logic system used to make matches, and the algorithms used. However, the richer these technologies are, the more time-consuming processing becomes. Heuristics and ad hoc methods can also be used, particularly when matching and/or composing semantic services in a small, well-defined, well-scoped domain. Many computationally complex techniques rely on a service description that identifies not only Inputs and Outputs but also Preconditions and Effects (the four are often referred to as IOPEs) of the service invocation; DAML-S and OWL-S enable this.

In general, three steps are undertaken during matching: (1) matching within a context, (2) matching within the syntax, and (3) matching semantics. Various filtering and pruning approaches, which pare down possible matches, can be used at any step. In [7], which extends [13], an approach to matching queries to service advertisements is presented. Rather than simple Boolean matching, the authors propose different degrees of match: exact, plug-in, subsume, intersection, and disjoint. Such degrees are possible because of semantic markup. [8] employs a filtering approach to matchmaking very much in line with the three steps above—context, syntax, and semantics matching—and uses DAML-S technology for service descriptions. [18] presents
the METEOR-S framework and outlines enhancements to UDDI and WSDL that better enable service discovery and matching. The METEOR-S discovery engine uses class subsumption and metrics to find matches to queries. RETSINA and LARK [23] are, respectively, a multiagent framework and a semantically rich language for expressing agent capabilities. In [10], upper and middle OWL ontologies have been designed to capture the notions of QoS, including concepts of measurement, response time, integrity, and so on. Infosleuth is an architecture for agent-based applications, including matchmaking; its logic approach is based upon LDL++ [11]. Other approaches and information include [9] [13] [19] [22].

WS-Policy provides a syntax for expressing policy rules, and though its policy instances are well structured and resilient, the syntax itself is entirely explicit; no implicit information can be gained by analyzing the expressed policies. It is therefore not as rich an expression syntax as OWL, and we focus on an application of the richer technology (as much current research is doing).

6.6.3 Toward Automated Configuration and Analysis
Much of the current work in this domain relates to matching Web Service consumers and providers—that is, given a query, how to find a Web Service that satisfies it. We have had success in our lab extending the scope of the application of semantics (i.e., rich models and query support) to matching Web Service requirements to infrastructure capabilities. This has several benefits, including the following:

1. Nonfunctional aspects of Web Services, such as security and reliability, are increasingly important as Web Services are applied to mission-critical operations.
2. Deployment administrators can gain valuable insight into the configuration of the solution through tools and query engines.

Our framework helps to provide these benefits through administrator tools that are based on the semantics of the SOA architecture. Some high-level use cases are the following:

• Setup and markup of infrastructure Administrators describe infrastructure components with respect to an OWL ontology (an ongoing activity).

• Query and command Queries (e.g., "Can my infrastructure support these services?") or commands (e.g., "Configure my WSGs such that the security requirements of services are satisfied") are submitted. In response, tools compute (with inference enabled by OWL) over the current infrastructure model and return results. Processing may invoke the "matchmaking" algorithm (see below).

• Notification Guided by precomputed patterns, the tool proactively seeks out inconsistencies (or other undesirable "patterns") in the infrastructure (e.g., inadequate security technologies) and notifies administrators.
Approach In our approach, a Web browser-based Wizard tool can be used by the deployment administrator to view and refresh existing markup, upload new semantic markup, annotate existing components, query the infrastructure model, and perform configuration tasks upon the infrastructure. During a setup/markup phase, a suitably rich ontology is created that expresses valuable relationships between the solution's business services and the infrastructure components (described later in this section). With this ontology as a reference, business services and infrastructure components are marked up and managed via a simple-to-use Web interface. The Wizard pulls all the annotated information into a knowledge base. Among many other things, the ontology needs to be rich enough to capture the following:

1. Functional and nonfunctional aspects of the Web Services to be deployed on the infrastructure (e.g., concepts that capture nonfunctional requirements of services, such as delay, response time, creditPurchase, and cashPurchase)
2. The notions of messaging QoS (e.g., see [10]), such as the ability to mark an entity as either capable of or requiring exactly-once, at-least-once, or at-most-once message delivery
3. The notions of security QoS (e.g., tokens, signatures, security models, trust domains, etc.)
4. Instances of services and any other infrastructure entities (e.g., using OWL to assert that "the trust_domain 127.5.0.0 is a secured network requiring communications only via reliable message queues with X.509 certificates")

In essence, any aspect of the deployment configuration that one wishes to validate and automate should be captured. We have such a prototype ontology in the lab. OWL and the Stanford Protégé tool are used to model it. Figure 6.9 illustrates, at a high level, the infrastructure ontology as being made up of upper and domain-specific concepts, and shows some of the roots of these artifacts.
It also illustrates a small subset of the ontology as presented within the Protégé ontology editor. The subset in the figure calls out some of the concepts and relationships among security artifacts. For example, a part of the class tree captures the interrelated nature of encryption capabilities (such as AES and DAC) and their properties. Therefore (and this is true in general as well), the ontology serves as both a classifier of capabilities and a metamodel for creating a scenario of instances that correspond to the "real" infrastructure.

Ontology editors (such as Protégé) allow designers to import and extend existing ontologies. The ontology presented in [14] is therefore a suitable starting point in that it captures the semantics of the W3C Web Services architecture. In addition, the WS-* references cited in figure 6.9 and current work in QoS ontologies [10] are also good starting points.

Our matchmaking approach is similar to the state of the art, and is a combination of (1) filters, (2) reasoning, and (3) heuristics/rules. Filters effectively prune possible matches based on context, syntax, and semantics. Reasoning on an OWL model can exploit class subsumption (e.g., is one class or property a generalization of another?), transitivity, or disjointness. Heuristics
Figure 6.9 Infrastructure ontology design. Shaded parts comprise aspects of the "security" domain ontology (some security classes shown in the Protégé tool).
and rules are shortcuts that allow us to make certain assumptions when certain conditions hold. Our general matching approach follows.

General Algorithm Given an application's functional and nonfunctional requirements {r1, r2, r3, ...}, determine whether they can currently be satisfied on the infrastructure:

1. Determine whether any heuristics or rules apply to r1, r2, r3, ...
2. Filter r1, r2, r3, ... with respect to ontology context, syntax, and semantics.
3. Determine the requirements' classifications into the "upper-ontology" slots.
4. For each r1, r2, ...:
   Determine whether any heuristics or rules apply.
   Determine exact matches (i.e., do any components satisfy the requirement exactly?).
   Determine plug-in matches (i.e., do any components satisfy it more specifically?).
   Determine subsumed matches (i.e., do any components satisfy it more generally?).

Configuration Scenario Tool scenarios are presented in this section with appropriate detail. These and other scenarios document the success we have had in our lab environment.

Automating Security Configuration. In this demonstrable scenario a Web Services Gateway (WSG) has been instrumented to allow security-related configuration changes via a Web Service interface. The WSG administrator understands its capabilities with respect to the infrastructure ontology and documents those capabilities in OWL format. A stripped-down example of such a declaration would describe the gateway's intrinsic ability to perform various types of encryption.
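As an illustration, a gateway's encryption capabilities might be published in RDF/XML along the following lines. The namespace, class, and property names here are assumptions for illustration, not the chapter's actual ontology:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:infra="http://example.org/infra-ontology#">
  <!-- WSG1 declares its intrinsic encryption capabilities -->
  <infra:WSGateway rdf:about="http://example.org/infra#WSG1">
    <infra:hasEncryptionCapability rdf:resource="http://example.org/infra#AES"/>
    <infra:hasEncryptionCapability rdf:resource="http://example.org/infra#DAC"/>
    <infra:hasEncryptionCapability rdf:resource="http://example.org/infra#Kerberos_v5"/>
  </infra:WSGateway>
</rdf:RDF>
```

Each resource referenced here (AES, DAC, Kerberos_v5) would be a class in the infrastructure ontology, so a reasoner can walk the class tree when an exact match fails.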
A similar documentation phase has been completed for the various other infrastructure components as well as the resident Web Services. In the example, AES is a class reference to an ontology concept with other informative properties and relationships. In the simplest example, the Service Ordering Web Service may declare, again using the ontology as a grammar, its requirement for Kerberos encryption.

Using the Wizard tool, the deployment administrator requests that the model be refreshed; as a result, all the OWL metadata regarding infrastructure services and components are pulled into a knowledge base (KB). The KB can now be exercised through a reasoning engine, heuristic rule set, or other algorithms that make implicit knowledge explicit. For example, when the administrator chooses the "Configure my Security Aspects" option from the Wizard Web page, the security configuration algorithm begins to analyze and compare the security requirements of the services against the capabilities of the infrastructure. Seeing that Service Ordering requires Kerberos-style encryption, and that the infrastructure supports the three encryption types listed above, it must perform additional reasoning to determine whether an inexact (subsumption) match is possible. Applying subsumption reasoning to the Kerberos_v5 capability, it arrives at the parent concept in the ontology, Kerberos, which is a match for the service's requirement (i.e., if a WSG supports Kerberos_v5, then it also supports the technology Kerberos in general). The Wizard then (a) reasons that it must enable the Kerberos feature upon the WSG, (b) formulates the correct configuration, and (c) invokes the WSG Configuration Web Service to effect the configuration. At the WSG, upon receipt of the request, the appropriate configuration changes are made to enable Kerberos encryption for the Service Ordering service.
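The subsumption step just described (Kerberos_v5 satisfying a requirement for Kerberos because Kerberos is its parent concept) can be sketched without any OWL machinery. The tiny class tree and function names below are illustrative assumptions, not the prototype's actual code.

```python
SUBCLASS_OF = {            # tiny stand-in for the OWL class tree
    "Kerberos_v5": "Kerberos",
    "Kerberos": "Encryption",
    "AES": "Encryption",
    "DAC": "Encryption",
}

def ancestors(concept):
    """Walk up the class tree, yielding each parent concept."""
    while concept in SUBCLASS_OF:
        concept = SUBCLASS_OF[concept]
        yield concept

def match(requirement, capabilities):
    """Return an exact match, or an inexact (subsumption) match when a
    capability is a specialization of the required concept."""
    for cap in sorted(capabilities):
        if cap == requirement:
            return ("exact", cap)
        if requirement in ancestors(cap):
            return ("subsumption", cap)
    return (None, None)

# Service Ordering requires Kerberos; the WSG advertises AES and Kerberos_v5.
print(match("Kerberos", {"AES", "Kerberos_v5"}))
# -> ('subsumption', 'Kerberos_v5')
```

A production reasoner would also handle the plug-in and disjointness cases from the general algorithm, but this walk captures the inference that drives the Wizard's decision.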
A complementary subscenario (not described in full here) is one in which, for example, the Service Ordering service requires (and declares in a policy) username and password authentication. In this case the Wizard reasons about whether the WSG can support the username/password paradigm and, if so, enables it (by invoking the WSG Configuration Web Service) for the particular Web Service endpoint in the infrastructure.

Ensuring Configuration Consistency. A service called Billing has several nonfunctional requirements that can (in this case) be satisfied by the infrastructure. Before the WSG is configured by the Wizard, the Wizard applies heuristics to reduce the possibility of incompatible service policy ordering. Precedence properties on WSG capabilities, together with rules, enable such computation to occur automatically. For example, a WSG is capable of both decryption and content filtering; however, the former must occur before the latter. Without such systematic descriptions of infrastructure capabilities, maintaining order and ensuring consistency among configurable actions would be a tedious and error-prone human-in-the-loop process.
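The precedence computation reduces to a topological sort over declared before/after constraints. A brief sketch, with invented capability names (e.g., decryption before content filtering, as in the example above):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical precedence rules on WSG capabilities: "a" -> {"b", ...}
# means "a must run before b" (e.g., decrypt before content filtering).
precedes = {
    "decrypt": {"content_filter", "authorize"},
    "authenticate": {"authorize"},
}

# TopologicalSorter expects predecessor sets, so invert the relation.
deps = {}
for before, afters in precedes.items():
    for after in afters:
        deps.setdefault(after, set()).add(before)
    deps.setdefault(before, set())

order = list(TopologicalSorter(deps).static_order())
assert order.index("decrypt") < order.index("content_filter")
assert order.index("authenticate") < order.index("authorize")
```

A cycle in the declared precedences would raise `graphlib.CycleError`, which is exactly the kind of inconsistency the Wizard can surface to the administrator before configuring the WSG.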
6.6.4 Summary
Most related systems describe semantic Web tools for matching service requests to advertisements (based on the constructs identified in the operations, preconditions, or effects). Our approach moves toward extending the reach of semantics to the entire infrastructure. A unified ontology capturing nonfunctional service requirements and infrastructure characteristics (such as WSGs), together with a knowledge base that captures the current state of the Web Services and infrastructure (using instances of the ontology), allows our Wizard component to perform reasoning on the infrastructure. Such reasoning includes consistency checking, subsumption, classification, and heuristic rule application. Assuming a certain degree of scale and sophistication of components, such reasoning and automation help administrators move toward automated deployment configuration.

6.7 Conclusion
In section 6.2 we introduced an illustrative scenario to motivate and explore the problems addressed in this chapter. We now revisit that scenario, applying the methodology and techniques outlined in the chapter. We focus on the transformation of ACME’s SOA-based business systems, originally designed to operate within the ACME administrative domain, into a new ecosystem composed of ACME and its partners and suppliers. The introduction of a new external portal (Joe’s Telco) and the addition of the new billing system (Pay Online) introduce new requirements in terms of transport, messaging QoS, and security. Assuming the initial service ordering implementation was optimized for reliable internal communication (i.e., an asynchronous MOM infrastructure), it now needs to be exposed with a more Internet-friendly protocol (e.g., HTTP). As was discussed in section 6.3, we can configure the binding adapter to take care of the mismatch at the protocol level (i.e., perform store and forward). Moving from a MOM-based protocol to a lightweight HTTP will also have consequences on the messaging QoS. According to section 6.4, in order to guarantee the same QoS available within the administrative domain, the new Web Service needs to compensate for the lack of reliability natively provided by HTTP transport by layering it with a WS reliable messaging stack. Additionally, the reliable messaging infrastructure must be configured with the parameters appropriate to the reliable messaging stack in accordance with the services’ abstract QoS contract. In terms of security, the credit card information gathered by the new customer portal that is targeted for Pay Online should not be visible to ACME. In section 6.5 we demonstrated how this can be achieved by partitioning the SOA into separate administrative domains, and by using different keys to encrypt/decrypt information for each AD. 
Finally, the deployment configuration can be automated by the Wizard described in section 6.6, provided that both service requirements and the infrastructure capabilities of the ACME ecosystem are formally described and part of the knowledge base.
Lisa Bahler and Colleagues
Notes
1. See W3C OWL overview, http://www.w3.org/TR/owl-features.
2. See Description Logics homepage, http://dl.kr.org.
References
[1] T. Berners-Lee. Semantic Web. Talk at XML2000. Available at http://www.w3.org/2000/Talks/1206-xml2k-tbl.
[2] D. Connolly, F. van Harmelen, I. Horrocks, D. McGuinness, P. Patel-Schneider, and L. Stein. DAML+OIL Reference Description. W3C note. December 18, 2001.
[3] G. Daniels. Comparing Features/Properties and WS-Policy. W3C online archives. September 2004.
[4] A. Dearle, G. N. C. Kirby, and A. McCarthy. Middleware Framework for Constraint-based Deployment and Autonomic Management of Distributed Applications. Technical report CS/04/2. School of Computer Science, University of St. Andrews, 2004.
[5] Martin Fowler. Patterns of Enterprise Application Architecture. Addison-Wesley, 2003.
[6] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns. Addison-Wesley, 1995.
[7] L. Li and I. Horrocks. A software framework for matchmaking based on Semantic Web technology. In Proceedings of the 12th ACM International Conference on the World Wide Web. ACM Press, 2003.
[8] S. Ludwig and P. Santen. A grid service discovery matchmaker based on ontology description. In Proceedings of EuroWeb '02. British Computer Society, 2002.
[9] D. Martin, M. Paolucci, S. McIlraith, M. Burstein, D. McDermott, D. McGuinness, B. Parsia, T. Payne, M. Sabou, M. Solanki, N. Srinivasan, and K. Sycara. Bringing semantics to Web Services: The OWL-S approach. In Proceedings of the First International Workshop on Semantic Web Services and Web Process Composition, pp. 5–21. Springer, 2005.
[10] E. M. Maximilien and M. P. Singh. Framework and ontology for dynamic Web Services selection. IEEE Internet Computing, 8(5):84–93 (September–October 2004).
[11] M. Nodine, J. Fowler, T. Ksiezyk, B. Perry, M. Taylor, and A. Unruh. Active information gathering in InfoSleuth. International Journal of Cooperative Information Systems, 9(1–2):3–28 (2000).
[12] W3C, Web-Ontology Working Group. Conclusions and Future Work. 2004. http://www.w3.org/2001/sw/WebOnt.
[13] M. Paolucci, T. Kawamura, T. Payne, and K. Sycara. Semantic matching of Web Services capabilities. In Proceedings of the First International Semantic Web Conference, I. Horrocks and J. A. Hendler, eds. LNCS 2342. Springer, 2002.
[14] M. Paolucci, N. Srinivasan, and K. Sycara. OWL Ontology of Web Services Architecture Concepts. http://www.w3.org/2004/02/wsa.
[15] Web Services Policy Assertions Language (WS-PolicyAssertions), A. Nadalin, ed. May 2003. http://www.ibm.com/developerworks/library/specification/ws-polas.
[16] Web Services Policy Attachment (WS-PolicyAttachment). September 2004. http://www.ibm.com/developerworks/library/specification/ws-polatt.
[17] Web Services Policy 1.5-Framework (WS-Policy). September 2007. http://www.w3.org/TR/ws-policy.
[18] P. Rajasekaran, J. Miller, K. Verma, and A. Sheth. Enhancing Web Services description and discovery to facilitate composition. In Proceedings of the First International Workshop on Semantic Web Services and Web Process Composition, pp. 55–68. Springer, 2005.
[19] J. Rao and X. Su. A survey of automated Web Service composition methods. In Proceedings of the First International Workshop on Semantic Web Services and Web Process Composition, pp. 43–54. Springer, 2005.
[20] Resource Description Framework (RDF): Concepts and Abstract Syntax, G. Klyne and J. Carroll, eds. W3C recommendation. February 10, 2004. http://www.w3.org/TR/rdf-concepts.
[21] SOAP Version 1.2. W3C recommendation. April 27, 2007. http://www.w3.org/TR/soap12-part1.
[22] N. Srinivasan, M. Paolucci, and K. Sycara. Adding OWL-S to UDDI: Implementation and throughput. In Proceedings of the First International Workshop on Semantic Web Services and Web Process Composition, pp. 75–86. Springer, 2004.
[23] K. Sycara, J. Lu, M. Klusch, and S. Widoff. Matchmaking among heterogeneous agents on the Internet. In Proceedings of the AAAI Spring Symposium on Intelligent Agents in Cyberspace. ACM, 1999.
[24] Systinet Gateway. http://www.systinet.com/products/wasp_jserver/overview.
[25] E. Wohlstadter, S. Tai, T. Mikalsen, I. Rouvellou, and P. Devanbu. GlueQoS: Middleware to sweeten quality-of-service policy interactions. In 26th International Conference on Software Engineering (ICSE '04), pp. 189–199. IEEE Press, 2004.
[26] Web Services Reliable Messaging Protocol (WS-ReliableMessaging), R. Bilorusets et al. March 2004.
[27] Web Services Addressing (WS-Addressing). W3C member submission. August 10, 2004. http://www.w3.org/Submission/2004/SUBM-ws-addressing-20040810.
[28] Web Services Definition Language (WSDL) 1.1. W3C note. March 15, 2001. http://www.w3.org/TR/wsdl.
[29] Web Services Description Language (WSDL) Version 2.0 Part 1: Core Language. W3C working draft. August 3, 2004. http://www.w3.org/TR/wsdl20.
[30] WS-Reliability 1.1, K. Iwasa, ed. OASIS Web Services Reliable Messaging TC, committee draft 1.086. August 24, 2004.
[31] Web Services Security (WS-Security). OASIS standard 1.0. April 6, 2004. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss.
[32] Web Services Security Policy Language (WS-SecurityPolicy), version 1.1. Microsoft. July 2007. http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702.
[33] A. Yang. Setting up a Web Services security infrastructure. Business Integration Journal, September 13, 2004.
7 Configurable QoS Computation and Policing in Dynamic Web Service Selection
Anne H. H. Ngu, Yutu Liu, Liangzhao Zeng, and Quan Z. Sheng
7.1 Introduction
Web Services are self-describing software applications that can be advertised, located, and used across the Internet by means of a set of standards such as SOAP, WSDL, and UDDI [12]. Web Services encapsulate application functionality and information resources, and make them available through standard programmatic interfaces. They are viewed as one of the promising technologies that could help business entities automate their operations on the Web on a large scale through automatic discovery and consumption of services. Business-to-business (B2B) integration can be achieved on demand by aggregating multiple services from different providers into a value-added composite service. In the last few years, the dynamic approach to Web Service composition has gained considerable momentum and standardization [1]. However, with the ever-increasing number of functionally equivalent or similar Web Services being made available on the Internet, there is a need to distinguish them on the basis of their nonfunctional properties, such as quality of service (QoS). QoS is the set of quantitative and qualitative characteristics of a Web Service that determines the extent to which service delivery meets user expectations. Quantitative characteristics can be evaluated in terms of concrete measures such as service execution time and price, whereas qualitative characteristics describe expected properties of the Web Service, such as reliability and reputation. The QoS of a given Web Service is expressed as a set of nonfunctional properties, each of which is called a QoS parameter or a QoS metric. QoS parameters of Web Services can be evaluated from the perspective of the service providers or from the perspective of the service requesters. In this chapter, we are concerned with the QoS of Web Services from the perspective of service requesters; for example, qualities are perceived by the requesters in terms of cheaper and faster services.
In fact, the QoS perceived by its users is becoming a determining factor for the success of a Web Service. For example, Web Services with a lower execution price could be selected for a travel-planning task to meet a service requester's specific budget constraints [16]. Although QoS is gaining significant momentum in the area of Service-Oriented Computing [9] [12] [2], a service composition system that can leverage, aggregate, and make use of
individual components' QoS information to derive the optimal QoS of the composite service is still an ongoing research problem. This is partly due to the lack of a configurable QoS model and a reliable mechanism to compute and police QoS that is fair (biased toward neither service providers nor service requesters) and transparent to both service requesters and providers. Currently, most approaches that deal with QoS of Web Services address only some generic dimensions, such as execution price, execution duration, availability, and reliability [9] [16]. In some domains, such generic criteria might not be sufficient. A QoS model should also include domain-specific criteria and be extensible. Moreover, most of the current approaches rely on service providers to advertise their QoS information or to provide an interface for accessing the QoS values, which is subject to manipulation by the providers. Obviously, service providers may not advertise their QoS information in a "neutral" manner. For example, service providers may advertise their execution duration as being shorter than the real execution duration. Such QoS information should instead be collected by observation. In approaches where QoS values are collected solely through active monitoring, there is a high overhead, since QoS must be checked constantly for a large number of Web Services. On the other hand, an approach that relies on a third party to rate or endorse a particular service provider is expensive and static. In our framework, the QoS model is configurable, and QoS information can be provided by providers, computed on the basis of execution monitoring by the users, or collected via requesters' feedback, depending on the characteristics of each QoS criterion. In a nutshell, we propose a framework that aims at advancing the current state of the art in QoS modeling, computing, and policing. There are three key aspects to our work.
• Extensible QoS model In the presence of multiple Web Services with overlapping or identical functionality, service requesters need objective QoS criteria to distinguish one service from another. We argue that it is not practical to come up with a standard QoS model that can be used for all Web Services in all domains. This is because QoS is a broad concept that can encompass a number of context-dependent, nonfunctional properties such as privacy, reputation, and usability. Moreover, when evaluating the QoS of Web Services, we should also take domain-specific criteria into consideration. For example, in the domain of phone service provision, the penalty rate for early termination of a contract and compensation for non-service, offered in the Service-Level Agreement (SLA), are important QoS criteria. Therefore, we propose a configurable QoS model that includes both generic and domain-specific criteria. In our approach, new domain-specific criteria can be added and used to evaluate the QoS of Web Services without changing the underlying computation model.
• Preference-oriented service ranking Different users may have different preferences or requirements for QoS. It is important to be able to represent QoS from the perspective of service requesters' preferences. For example, service selection may be driven completely by price, regardless of the time it takes to execute the service. A different requester may be very service-sensitive. This means that criteria such as penalty rate or the ability to return the goods after purchase are viewed as more important than the price and the time. Another service selection may be driven completely by time because of tight deadlines. A QoS model should provide means for users to accurately express their preferences without resorting to complex coding of user profiles.
• Fair and open QoS computation Once a set of QoS criteria has been defined for a particular domain, we must ensure that QoS information is collected and computed in a fair manner. The approaches to collecting QoS information should be based on the nature of the QoS properties. In our framework, the value of a QoS criterion can be collected either from service properties that are published by providers, or from execution monitoring, or from requesters' feedback, based on the characteristics of the criterion. For example, though execution price can be provided by service providers, execution duration can be computed on the basis of service invocation instances, and service reputation is based on service requesters' feedback.
We also build a policing mechanism that prevents the manipulation of QoS values by a single service requester by requiring each requester to have a valid user ID and password to update the QoS registry. Furthermore, this ID-password pair must be verified by the service provider at the consumption of the service, to ensure that only the actual consumer of the service is allowed to give feedback. On the other hand, providers can review those criteria and improve their QoS if they desire to. Moreover, providers can update their QoS information (e.g., execution price, penalty rate) at any time. They can also check the QoS registry to see how their QoS is ranked among other service providers.
We believe that an extensible, transparent, open, and fair QoS computation and policing framework is necessary for the selection of Web Services. Such a framework can benefit all participants. All requesters actively police the QoS of a particular service; as a consequence, each requester can search the registry for his or her preferred services based on the most up-to-date QoS. On the other hand, service providers can benefit from such a framework by viewing their services' QoS properties at any time and adjusting their services correspondingly. Although this framework requires all requesters to record their usage experiences with a particular type of service in a registry, this overhead on the user is not large.
This chapter is organized as follows. In section 7.2, we give the details of an extensible QoS model and its computation. In section 7.3, we describe the implementation of the QoS registry and explain how QoS information can be collected on the basis of active monitoring and active user feedback. Section 7.4 discusses the experiments that we conducted to confirm the fairness and the validity of the various parameters used in QoS computation. Finally, we discuss related work in section 7.5 and conclude in section 7.6.
7.2 QoS-Based Service Selection
Currently, in most SOC frameworks, there is a service broker which is responsible for the brokering of functional properties of Web Services. In our framework, we extend the capability of
the service broker to support nonfunctional properties. This is done by introducing a configurable, multidimensional QoS model in the service broker. We aim to evaluate the QoS of Web Services using an open, fair, and transparent mechanism. In the following subsections, we first introduce the configurable QoS model, and then we give the details on how to evaluate Web Services based on our model.
7.2.1 Extensible QoS Model
In this section, we propose an extensible QoS model, which includes generic and domain- or business-specific criteria. The generic criteria are applicable to all Web Services (for example, their pricing and execution duration). Although the number of QoS criteria discussed in this chapter is limited (for the sake of illustration), our model is extensible. New criteria (either generic or domain-specific) can be added without fundamentally altering the underlying computation mechanism, as shown in section 7.2.2. In particular, it is possible to extend the quality model to integrate nonfunctional service characteristics such as those proposed in [11], or to integrate service QoS metrics such as those proposed in [10].
Generic Quality Criteria We consider three generic quality criteria which can be measured objectively for Web Services: (1) execution price, (2) execution duration, and (3) reputation. Criteria such as availability and reliability are indirectly measured in our model through the use of active user feedback and execution monitoring.
• Execution price This is the amount of money which a service requester has to pay the service provider to use a Web Service, such as checking a credit card, or the amount of money the service requester has to pay to the service provider to get a commodity, such as an entertainment ticket or a monthly phone service. Web Service providers either directly advertise the execution price of their services, or they provide means for potential requesters to inquire about it. Let s be a Web Service; then q_pr(s) is the execution price for using s.
• Execution duration The execution duration q_du(s) measures the expected delay in seconds between the moment when a request is sent and the moment when the service is rendered. The execution duration is computed as q_du(s) = T_process(s) + T_trans(s); that is, the execution duration is the sum of the processing time T_process(s) and the transmission time T_trans(s). Execution time is obtained via active monitoring.
• Reputation The reputation q_rep(s) of a service s is a measure of its trustworthiness. It depends mainly on end users' experiences with using the service. Different end users may have different opinions on the same service. The value of the reputation is defined as the average ranking given to the service by end users, that is, q_rep(s) = (Σ_{i=1}^{n} R_i(s))/n, where R_i(s) is the ith end user's ranking of the service's reputation and n is the number of times the service has been graded. Usually, end users are given a range to rank Web Services. For example, at Amazon.com, the range is [0, 5].
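The duration and reputation metrics above can be sketched directly; this is a minimal illustration, and the helper names are ours, not from the chapter:

```python
def execution_duration(t_process, t_trans):
    """q_du(s) = T_process(s) + T_trans(s), both in seconds."""
    return t_process + t_trans

def reputation(rankings):
    """q_rep(s): average of the end-user rankings R_i(s); 0.0 if never graded."""
    return sum(rankings) / len(rankings) if rankings else 0.0
```

For example, three users grading a service 0, 5, and 4 on the [0, 5] scale yield a reputation of 3.0.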
Business-Related Criteria The number of business-related criteria can vary in different domains. For example, in phone service provision, the penalty rate for early termination and the fixed monthly charge are important factors for users to consider when selecting a particular service provider. We use the generic term "usability" to group all business-related criteria. In our chosen application, we measure usability from three aspects: transaction support, compensation rate, and penalty rate.
• Transaction support Transaction support is used for maintaining data consistency. In prior QoS models, no transactional criteria are used in the computation of the QoS value. However, from the perspective of a requester, whether a Web Service provides an "undo" procedure to roll back the service execution within a certain period without any charges is an important factor that will affect his or her choice. Two different dimensions can be used to evaluate the transactional property: (1) whether the "undo" procedure is supported, expressed as q_tx(s), and (2) what the time constraint is on the "undo" procedure, expressed as q_cons(s). Here q_tx(s) ∈ {0, 1}, where 1 indicates that the Web Service supports the "undo" transaction and 0 indicates that it does not; q_cons(s) indicates the time frame in which the "undo" procedure is allowed.
• Compensation rate The compensation rate q_comp(s) of a Web Service indicates the percentage of the original execution price that will be refunded when the service provider cannot honor the committed service or deliver the ordered commodity.
• Penalty rate The penalty rate q_pen(s) of a Web Service indicates what percentage of the original price service requesters need to pay to the provider when they want to cancel the committed service or ordered commodity after the time-out period for the transaction to be rolled back has expired.
7.2.2 Web Service QoS Computation

The QoS registry is responsible for the computation of the QoS value for each service provider. Assuming that there is a set of Web Services S (S = {s_1, s_2, . . . , s_n}) that provide the same kind of service, we can use m QoS criteria to evaluate a Web Service and obtain the following matrix Q. Each row in Q represents a Web Service s_i, and each column represents one of the QoS criteria:

Q = \begin{bmatrix} q_{1,1} & q_{1,2} & \cdots & q_{1,m} \\ q_{2,1} & q_{2,2} & \cdots & q_{2,m} \\ \cdots & \cdots & \cdots & \cdots \\ q_{n,1} & q_{n,2} & \cdots & q_{n,m} \end{bmatrix} \tag{7.1}
In order to rank the Web Services, the matrix Q needs to be normalized. The purposes of normalization are (1) to allow for a uniform measurement of service qualities independent of units, (2) to provide a uniform index to represent service qualities for each provider in the registry, and (3) to allow setting a threshold regarding the qualities. The number of normalizations performed depends on how the quality criteria are grouped. In our example criteria given in
section 7.2, we need to apply two phases of normalization before we can compute the final QoS value. The second normalization is used to provide a uniform representation of a group of quality criteria (e.g., usability) and to set a threshold for a group of quality criteria.
First Normalization Before normalizing matrix Q, we need to define two arrays. The first array is N = {n_1, n_2, . . . , n_m}. The value of n_j (j ∈ [1, m]) can be 0 or 1: n_j = 1 is for the case where an increase of q_{i,j} benefits the service requester, and n_j = 0 is for the case where a decrease of q_{i,j} benefits the service requester. The second array is C = {c_1, c_2, . . . , c_m}, where c_j (j ∈ [1, m]) is a constant that sets the maximum normalized value. Each element in matrix Q is normalized using equations (7.2) and (7.3):

v_{i,j} = \begin{cases} \dfrac{q_{i,j}}{\frac{1}{n}\sum_{i=1}^{n} q_{i,j}} & \text{if } \frac{1}{n}\sum_{i=1}^{n} q_{i,j} \neq 0 \text{ and } \dfrac{q_{i,j}}{\frac{1}{n}\sum_{i=1}^{n} q_{i,j}} < c_j, \ n_j = 1 \\ c_j & \text{if } \frac{1}{n}\sum_{i=1}^{n} q_{i,j} = 0 \text{ or } \dfrac{q_{i,j}}{\frac{1}{n}\sum_{i=1}^{n} q_{i,j}} \geq c_j, \ n_j = 1 \end{cases} \tag{7.2}

v_{i,j} = \begin{cases} \dfrac{\frac{1}{n}\sum_{i=1}^{n} q_{i,j}}{q_{i,j}} & \text{if } q_{i,j} \neq 0 \text{ and } \dfrac{\frac{1}{n}\sum_{i=1}^{n} q_{i,j}}{q_{i,j}} < c_j, \ n_j = 0 \\ c_j & \text{if } q_{i,j} = 0 \text{ or } \dfrac{\frac{1}{n}\sum_{i=1}^{n} q_{i,j}}{q_{i,j}} \geq c_j, \ n_j = 0 \end{cases} \tag{7.3}

In the above equations, \frac{1}{n}\sum_{i=1}^{n} q_{i,j} is the average value of quality criterion j in matrix Q. Applying these two equations to Q, we get matrix Q′:

Q' = \begin{bmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,m} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,m} \\ \cdots & \cdots & \cdots & \cdots \\ v_{n,1} & v_{n,2} & \cdots & v_{n,m} \end{bmatrix} \tag{7.4}
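The first-normalization rule (divide by the column average, or invert the ratio when lower is better, capping the result at c_j) can be sketched in a few lines of Python; the function name and the use of min() to apply the cap are our own choices:

```python
def normalize_criterion(values, higher_is_better, c_max):
    """First normalization of one quality-criterion column.
    higher_is_better corresponds to n_j = 1; c_max corresponds to c_j."""
    avg = sum(values) / len(values)
    out = []
    for q in values:
        if higher_is_better:            # n_j = 1: v = q / avg, capped at c_j
            v = c_max if avg == 0 else min(q / avg, c_max)
        else:                           # n_j = 0: v = avg / q, capped at c_j
            v = c_max if q == 0 else min(avg / q, c_max)
        out.append(v)
    return out

# Price column (lower is better, c_j = 5): average is 32.5
print(normalize_criterion([25, 40], higher_is_better=False, c_max=5))
# [1.3, 0.8125]
```

Applied to the price column of example 1 below, this reproduces the first entries of Q′ (1.3 and roughly 0.813).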
Example 1. Data in the following example are taken from our QoS registry implementation. We assume that there are two Web Services in S, with the following values of the quality criteria:

Q = \begin{bmatrix} 25 & 1 & 60 & 0.5 & 0.5 & 100 & 2.0 \\ 40 & 1 & 200 & 0.8 & 0.1 & 40 & 2.5 \end{bmatrix}

The quality criteria are (in order) price, transaction, time-out, compensation rate, penalty rate, execution duration, and reputation. For array N, since the increase of price, penalty rate, and
execution duration doesn't benefit the service requester, and the increase of the remaining quality criteria does, the value of each element in array N is {0, 1, 1, 1, 0, 0, 1}. For array C, each element, considered a reasonable maximum normalized value in this implemented domain, is set to 5. Using equations (7.2) and (7.3), we have the normalized matrix:

Q' = \begin{bmatrix} 1.3 & 1.0 & 0.462 & 0.769 & 0.64 & 0.7 & 0.8894 \\ 0.8134 & 1.0 & 1.538 & 1.23 & 3.0 & 1.75 & 1.111 \end{bmatrix}

Second Normalization In our QoS model, quality criteria can also be represented and manipulated as groups. The usability criterion is such an example. Each group can contain multiple criteria; for example, both the compensation and penalty rates belong to the usability group. To compute the final QoS value for each Web Service, we introduce matrix D and matrix G. Matrix D is used to define the relationship between quality criteria and quality groups: each row in matrix D represents a quality criterion, and each column represents one quality group. Matrix G represents QoS information based on the values of the quality groups of the Web Services: each row in matrix G represents a Web Service, and each column represents one quality group. Matrices D and G are shown below:

D = \begin{bmatrix} d_{1,1} & d_{1,2} & \cdots & d_{1,l} \\ d_{2,1} & d_{2,2} & \cdots & d_{2,l} \\ \cdots & \cdots & \cdots & \cdots \\ d_{m,1} & d_{m,2} & \cdots & d_{m,l} \end{bmatrix} \tag{7.5}

G = \begin{bmatrix} g_{1,1} & g_{1,2} & \cdots & g_{1,l} \\ g_{2,1} & g_{2,2} & \cdots & g_{2,l} \\ \cdots & \cdots & \cdots & \cdots \\ g_{n,1} & g_{n,2} & \cdots & g_{n,l} \end{bmatrix} \tag{7.6}
Here, l is the total number of groups of quality criteria. For the value of each element in matrix D, if the ith quality criterion in Q′ is included in the jth group in G, the value of d_{i,j} is 1; otherwise, it is 0. By applying matrix D to Q′, we obtain matrix G:

G = Q' \cdot D \tag{7.7}
To normalize matrix G, two arrays are needed. In the first array, T = {t_1, t_2, . . . , t_l}, t_j (j ∈ [1, l]) is a constant which sets the maximum normalized value for group j. In the second array, F = {f_1, f_2, . . . , f_l}, f_j (j ∈ [1, l]) is a weight for group j, such as price sensitivity or service sensitivity; it is used to express users' preferences for the jth group. Each element in matrix G is normalized using equation (7.8).
190
Anne H. H. Ngu and Colleagues
h_{i,j} = \begin{cases} \dfrac{g_{i,j}}{\frac{1}{n}\sum_{i=1}^{n} g_{i,j}} & \text{if } \frac{1}{n}\sum_{i=1}^{n} g_{i,j} \neq 0 \text{ and } \dfrac{g_{i,j}}{\frac{1}{n}\sum_{i=1}^{n} g_{i,j}} < t_j \\ t_j & \text{if } \frac{1}{n}\sum_{i=1}^{n} g_{i,j} = 0 \text{ or } \dfrac{g_{i,j}}{\frac{1}{n}\sum_{i=1}^{n} g_{i,j}} \geq t_j \end{cases} \tag{7.8}

In the above equation, \frac{1}{n}\sum_{i=1}^{n} g_{i,j} is the average value of group criterion j in matrix G. Applying equation (7.8) to G, we get matrix G′:

G' = \begin{bmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,l} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,l} \\ \cdots & \cdots & \cdots & \cdots \\ h_{n,1} & h_{n,2} & \cdots & h_{n,l} \end{bmatrix} \tag{7.9}
Finally, we can compute the QoS value for each Web Service by applying array F to matrix G′:

QoS(s_i) = \sum_{j=1}^{l} (h_{i,j} \cdot f_j) \tag{7.10}
Example 2. This example continues the computation of QoS from example 1. For array T, each element is set to 5, which is considered a reasonable maximum normalized value in this implemented domain. For array F, each weight is set to 1; this means the user gives the same preference to every group. The matrix D that determines the grouping of the quality criteria in this example is shown below, with columns in the order price, time, usability, and reputation:

D = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}

First, by applying D to matrix Q′, we obtain

G = \begin{bmatrix} 0.813 & 1.75 & 7.068 & 1.111 \\ 1.3 & 0.7 & 3.531 & 0.889 \end{bmatrix}

Second, we apply equation (7.8) to G to obtain the normalized matrix

G' = \begin{bmatrix} 0.769 & 1.429 & 1.334 & 1.111 \\ 0.946 & 0.571 & 0.666 & 0.889 \end{bmatrix}

Finally, using equation (7.10), we have QoS(s_1) = 4.643 and QoS(s_2) = 3.072.
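The whole pipeline of equations (7.1)–(7.10) — first normalization, grouping via D, second normalization, and the weighted sum — can be sketched compactly. This is our own illustrative implementation, not the authors' code; matrix shapes follow the definitions above (Q is n×m, D is m×l):

```python
def qos_scores(Q, N, C, D, T, F):
    """Two-phase QoS computation sketch (equations 7.2-7.10).
    Q: n x m criterion matrix; N: direction flags (1 = higher is better);
    C, T: per-criterion / per-group caps; D: m x l grouping matrix;
    F: per-group preference weights."""
    n, m, l = len(Q), len(Q[0]), len(D[0])

    def norm(col, higher, cap):
        # Shared normalization rule: ratio against the column average, capped.
        avg = sum(col) / len(col)
        def one(q):
            if higher:
                return cap if avg == 0 else min(q / avg, cap)
            return cap if q == 0 else min(avg / q, cap)
        return [one(q) for q in col]

    # First normalization: Q -> Q' (per-criterion direction and cap)
    cols = [norm([Q[i][j] for i in range(n)], N[j] == 1, C[j]) for j in range(m)]
    Qp = [[cols[j][i] for j in range(m)] for i in range(n)]
    # Grouping: G = Q' * D, then second normalization (higher is better) -> G'
    G = [[sum(Qp[i][k] * D[k][j] for k in range(m)) for j in range(l)] for i in range(n)]
    gcols = [norm([G[i][j] for i in range(n)], True, T[j]) for j in range(l)]
    # Weighted sum: QoS(s_i) = sum_j h_{i,j} * f_j
    return [sum(gcols[j][i] * F[j] for j in range(l)) for i in range(n)]
```

With the example-1 data, the matrices Q, N, C, and D above can be passed in directly; varying the weights in F is what shifts the ranking between price-sensitive and service-sensitive searches.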
Configurable QoS Computation
191
[Figure 7.1 Architecture diagram of QoS registry. The diagram shows the QoS registry (database, QoS computation, QoS ranking) mediating between service requesters (Web browser, QoS monitoring, QoS feedback, get service) and service providers (publish and update services, WSDL service descriptions), with SOAP/XML message exchange between the requester and the Web Service.]
7.3 Implementation of the QoS Registry
To demonstrate our proposed QoS model, we implemented a QoS registry (see figure 7.1) within a hypothetical phone-provisioning marketplace called Universal Phone Service (UPS for short). The UPS marketplace is implemented using the BEA WebLogic Workshop Web Service tool kit [17]. It consists of various service providers that can register to provide various types of phone services, such as long distance, local, wireless, and broadband. The marketplace also has a few agencies that offer a credit-checking service. The UPS marketplace has Web interfaces that allow a customer to log in and search for phone services on the basis of his or her preferences. For example, the customer can specify whether the search for a particular type of service should be price- or service-sensitive. A price-sensitive search returns a list of the service providers who offer the lowest prices; a service-sensitive search returns a list of the service providers with the best-rated services (not necessarily the lowest prices). The marketplace also has Web interfaces that allow service providers to register their Web Services with the QoS registry, update their existing Web Services in the registry, or view their QoS ranking in the marketplace. The registry's other interfaces are available for requesters/end users to give feedback on the QoS of the Web Services that they have just consumed.
7.3.1 Collecting Service Quality Information
In our framework, we distinguish two types of criteria: deterministic and nondeterministic. "Deterministic" indicates that the value of a QoS criterion is known or certain when a service is invoked (for example, the execution price and the penalty rate); "nondeterministic" indicates that the value of a QoS criterion is uncertain when a Web Service is invoked (for example, execution duration).
For deterministic criteria, we assume that service providers have mechanisms to advertise the values, through means such as a Web Service quality-based XML language [3] and communities [6]. How all service providers come to an agreement on a common set of deterministic QoS criteria to advertise is beyond the scope of this chapter. In the next section, we discuss how all QoS criteria in our model can be collected in a fair, open, and objective manner via active monitoring and active users' feedback.
Collecting Quality Information from Active Execution Monitoring In our framework, some quality information is collected by active execution monitoring, wherein the monitoring is conducted by service requesters. For example, the actual service execution duration is collected by service requesters. This requires the interfaces used by all service requesters to implement mechanisms to log the actual execution time. Although this approach puts more burden on the service requester, it has the following advantages over approaches in which the service broker or QoS registry performs the monitoring: (1) it lowers the overhead of the QoS registry and simplifies its implementation; (2) data are collected from actual consumption of the service, and so are up-to-date and objective; and (3) it avoids the need to install expensive middleware to constantly poll the large number of service providers. Through active execution monitoring, when a service requester observes values different from the deterministic criteria advertised by the service provider, this difference can be logged. If this phenomenon is common in a particular domain, we can expand the reputation criterion to record the difference between the actual and the advertised deterministic quality criteria: the bigger the difference, the lower the reputation of that service provider.
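Requester-side duration logging of this kind can be sketched as a thin wrapper around the service invocation. The wrapper below is a hypothetical illustration (the service function and log list are placeholders, not part of any Web Service toolkit); it records the requester-observed duration, i.e., processing and transmission time combined:

```python
import time

def monitored_call(service_fn, *args, log=None):
    """Invoke a service and record the observed execution duration
    (T_process + T_trans combined, as seen from the requester side)."""
    start = time.perf_counter()
    try:
        return service_fn(*args)
    finally:
        duration = time.perf_counter() - start
        if log is not None:
            log.append(duration)  # later reported to the QoS registry
```

Durations are recorded even when the call raises an exception, so failed invocations can also feed indirect measures such as reliability.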
Collecting Quality Information from Users' Feedback

Each end user is required to update the QoS of the service that he or she has just consumed. This ensures that the QoS registry is fair to all end users, since QoS values can be computed on the basis of real user experience with up-to-date runtime execution data. To prevent the manipulation of QoS by a single party, the end user is given a pair of keys for each feedback. The service provider must authenticate the pair of keys before the user is allowed to update the QoS value. The update must take place within a limited time frame to prevent the system from being overloaded with unconsumed service requests. We assume that the service requester can either download a plug-in from the QoS registry for providing feedback on QoS, or access the services via a portal which must implement the active monitoring mechanism outlined above for QoS computation.

7.4 Experiments
We conducted a series of experiments to (1) investigate the relationship between QoS value and business criteria and (2) study the effectiveness of price and the service sensitivity factors in our
Configurable QoS Computation
193
Table 7.1 Comparing Service Quality Data of Two Providers

Provider  Price  Transaction  Time-Out  Compensation Rate  Penalty Rate  Execution Duration  Reputation
ABC       25     Yes          60        0.5                0.5           100                 2.0
BTT       40     Yes          200       0.8                0.1           40                  2.5
Table 7.2 Results of Sensitivity Experiments

Provider  Price Sensitivity  Service Sensitivity  QoS Value
ABC       1                  2                    4.675
BTT       1                  2                    5.437
ABC       2                  1                    4.281
BTT       2                  1                    3.938
QoS computation. The experiments were conducted on a Pentium computer with a 750 MHz CPU and 640 MB RAM. We first simulated 600 users searching for services, consuming the services, and providing feedback to update the QoS registry in the UPS application. This generated a set of test data for nondeterministic quality criteria, such as reputation and execution duration, for each service provider in the QoS registry. The deterministic quality criteria (price, penalty rate) had been advertised and stored in the QoS registry for each service provider prior to the simulation.

Table 7.1 shows the values of various quality criteria from two phone service providers with respect to local phone service. From this table, we can see that provider ABC has a better price, but its service criteria are not very good. Its time-out value is small, which makes it less flexible for end users. Its compensation rate is lower than BTT's, and its penalty rate is higher. Furthermore, its execution time is longer than BTT's, and its reputation is lower as well. Using our QoS computation algorithm, can we ensure that ABC will win a price-sensitive search and BTT will win a service-sensitive search? Table 7.2 shows that BTT has a QoS value of 3.938 with a price-sensitive search and a QoS value of 5.437 with a service-sensitive search. On the other hand, ABC has a QoS value of 4.281 with a price-sensitive search and a QoS value of 4.675 with a service-sensitive search. ABC wins the price-sensitive search (4.281 > 3.938), and BTT wins the service-sensitive search (5.437 > 4.675), which verifies our original hypothesis.

The following three figures show the relationships between the QoS value and the price, the compensation and penalty rates, and the service sensitivity factors. From figure 7.2, we can see that the QoS increases exponentially as the price approaches zero. As the price reaches 180 (i.e., 9 × 20), the QoS tends to level off.
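To make the sensitivity mechanism concrete, here is an illustrative weighted scoring function in Python. It is not the chapter's actual formula: the price term (a scale constant divided by price), the normalization constants, and the omission of the transaction criterion are all assumptions, so the absolute scores differ from Table 7.2. It does, however, reproduce the qualitative outcome: ABC wins the price-sensitive search and BTT wins the service-sensitive one.

```python
# Illustrative QoS scoring with price/service sensitivity factors.
# NOT the chapter's formula: PRICE_SCALE and the MAX normalizers are
# invented, and the "transaction" criterion is omitted.

PRICE_SCALE = 100.0                      # assumed scale for the price term
MAX = {"time_out": 200.0, "execution_duration": 100.0, "reputation": 5.0}

def qos_score(p, price_sens, service_sens):
    price_term = PRICE_SCALE / p["price"]               # cheaper -> higher score
    service_term = (p["compensation_rate"]              # higher is better
                    + (1.0 - p["penalty_rate"])         # lower is better
                    + p["time_out"] / MAX["time_out"]   # higher is better
                    + (1.0 - p["execution_duration"] / MAX["execution_duration"])
                    + p["reputation"] / MAX["reputation"])
    return price_sens * price_term + service_sens * service_term

ABC = {"price": 25, "time_out": 60, "compensation_rate": 0.5,
       "penalty_rate": 0.5, "execution_duration": 100, "reputation": 2.0}
BTT = {"price": 40, "time_out": 200, "compensation_rate": 0.8,
       "penalty_rate": 0.1, "execution_duration": 40, "reputation": 2.5}

# Price-sensitive search (price sensitivity 2, service sensitivity 1):
assert qos_score(ABC, 2, 1) > qos_score(BTT, 2, 1)
# Service-sensitive search (price sensitivity 1, service sensitivity 2):
assert qos_score(BTT, 1, 2) > qos_score(ABC, 1, 2)
```

The scale constant is the design knob here: the larger it is, the more the price term can dominate, which mirrors the bounded influence of price discussed around figure 7.2.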
[Figure 7.2: Relationship between QoS and price. X axis: price (1 unit = 20); Y axis: QoS.]

[Figure 7.3: Relationship between QoS and the compensation and penalty rates. X axis: rate (1 unit = 0.1); Y axis: QoS.]

This gives us the confidence that price can be used
very effectively to increase QoS within a certain range. It also shows that in our QoS computation model, QoS cannot be dominated indefinitely by the price component.

Figure 7.3 shows the relationship between QoS and the compensation and penalty rates. The QoS value decreases as the penalty rate increases, and increases as the compensation rate increases. Likewise, QoS computation cannot be dominated indefinitely by these two components.

Figure 7.4 indicates that the QoS value increases almost four times faster with the service sensitivity than with the price sensitivity. This shows that using the same sensitivity value for both price and service factors will not be effective for a price-sensitive search in our QoS registry. To obtain effective price and service sensitivity factors for a domain, we need to analyze the sensitivity factors after the QoS registry has been used for a while, and readjust the values by repeating the experiment described in section 7.4.
[Figure 7.4: Relationship between QoS and the price and service sensitivity factors. X axis: sensitivity (1 unit = 10%); Y axis: QoS; one curve per sensitivity factor.]
7.5 Related Work
Multiple Web Services may provide similar functionality, but with different nonfunctional properties. In the selection of a Web Service, it is important to consider both types of properties in order to satisfy the constraints or needs of users. Although it is common to have a QoS model for a particular domain, a configurable QoS model that allows the addition of new quality criteria without affecting the formula used for the overall computation of QoS values has not been proposed before. Moreover, very little research has been done on how to ensure that service quality criteria can be collected in a fair, dynamic, and open manner. Most previous work is centered on the collection of networking-level criteria such as execution time, reliability, and availability [16]. No model gives details on how business criteria such as compensation and penalty rates can be included in QoS computation and enforced in an objective manner.

In a dynamic environment, service providers can appear and disappear around the clock. They can change their services at any time in order to remain competitive. Previous work has not addressed the dynamic aspects of QoS computation. For example, there is no guarantee that the QoS obtained at runtime for a particular provider is indeed the most up-to-date one. Our proposed QoS computation, which is based on active monitoring and consumers' feedback, ensures that the QoS value of a particular provider is always up to date.

In [13], the author proposes a QoS model which has a certifier to verify published QoS criteria. It requires all Web Service providers to advertise their services with the certifier. This approach lacks the ability to meet the dynamics of a marketplace where the needs of both consumers and providers are constantly changing. For example, it does not provide methods for providers to update their QoS dynamically, nor does it give the most recently updated QoS to service consumers.
There are no details on how to verify the QoS with the service providers in an automatic and cost-effective way. In [14], the authors propose a QoS middleware infrastructure that requires a built-in tool to monitor metrics of QoS automatically. If such a tool can be built, it needs to poll all Web
Services to collect metrics of their QoS. Such an approach requires the willingness of service providers to surrender some of their autonomy. Moreover, if the polling interval is set too long, the QoS will not be up to date; if it is set too short, it may incur a high performance overhead. A similar approach which emphasizes service reputation is proposed in [8] [7]. A Web service agent proxy is set up to collect reputation ratings from previous users of Web services. The major problem with this approach, however, is that there is no mechanism in place to prevent false ratings from being collected. We have a mechanism to ensure that only the true user of the service is allowed to provide feedback.

In [4], the authors discuss a model with a Service-Level Agreement (SLA) between providers and consumers. The penalty concept is given in its definition of SLA. The main emphasis of this concept is what should be done if the service provider cannot deliver the service under the defined SLA, and the options for consumers to terminate the service under the SLA. Penalty is not included as a criterion which can be used in QoS computation. Similarly, in [5], the authors present an agreement management architecture that facilitates service binding by agreement. While both [4] and [5] focus on the management of agreements between service providers and service requesters, our work focuses on dynamic selection of Web Services based on trustworthy and fair service QoS.

Web Service Offerings Language (WSOL) [15] is a language that enables monitoring, metering, accounting, and management of Web Services. In WSOL, the specification of QoS metrics (e.g., how the QoS metrics are measured) is outsourced to external entities. This contrasts with our work, where the computation of QoS values of Web Services is done via an extensible, open, and fair QoS model. In this sense, WSOL and our work complement one another.

Finally, a language-based QoS model is proposed in [3].
Here, the QoS of a Web Service is defined using a Web Service QoS Extension Language. This XML-based language defines the components of QoS in a schema file which can be used across different platforms and is human-readable.

7.6 Conclusion
With the ever-increasing number of functionally equivalent or similar Web Services available on the Internet, quality-based selection of Web Services becomes of paramount importance for ensuring customer satisfaction. The main drawback of current work in quality-driven service selection, however, is its inability to guarantee that the QoS of Web Services remains open, trustworthy, and fair.

We have presented an extensible QoS model that achieves the dynamic and fair computation of QoS values of Web Services through secure active users' feedback and active monitoring. Our QoS model concentrates on criteria that can be collected and enforced objectively. In particular, we have shown how business-related criteria (compensation, penalty policies, and transactions) can be measured and included in the QoS computation. Our QoS model is extensible, and thus new domain-specific criteria can be added without changing the underlying computation model. In addition, the formulas for the QoS computation can
be varied for different domains by using different sensitivity factors or different groupings of criteria associated with the QoS computation. Service providers can also query their QoS and update their services accordingly in order to remain competitive in the market.

To validate the feasibility and benefits of our approach, we implemented a QoS registry based on our extensible QoS model. The experimental results show that our QoS computation algorithm performs very well in Web Services selection, and they also reveal the relationships between QoS and a number of QoS metrics. The approach presented in this chapter can be used in the design and development of middleware for dynamic quality-based Web Services selection that is open and transparent to service providers and service consumers. The design of the QoS registry, the QoS computation mechanism, and the QoS simulation can be reused in other service-oriented frameworks. However, one limitation of our approach is that its success depends on the willingness of end users to give feedback on the quality of the services they consume.

Our future work includes proposing mechanisms to automate the collection of feedback data. To ensure that all users provide feedback, we need to make giving feedback very straightforward. We also want the implemented QoS registry to become intelligent both internally and externally. Internal intelligence refers to the ability to infer the appropriate sensitivity factors in order to produce the list of providers that meets end users' requirements. External intelligence refers to the ability of the QoS registry to predict the QoS trend of particular providers at runtime. For example, we want a system that knows how much the QoS value of a Web Service decreases in the morning and increases in the evening, or during different times of the year, and takes action to notify its users.
For service providers, we want the ability to notify them when the QoS of their services falls below a certain threshold.

References

[1] Boualem Benatallah and Fabio Casati, eds. Special issue on Web Services. Distributed and Parallel Databases, 12(2/3) (2002).
[2] Marco Conti, Mohan Kumar, Sajal K. Das, and Behrooz A. Shirazi. Quality of service issues in Internet Web Services. IEEE Transactions on Computers, 51(6):593–594 (June 2002).
[3] Peter Farkas and Hassan Charaf. Web Services planning concepts. Journal of the WSCG, 11(1) (February 2003).
[4] Li-jie Jin, Vijay Machiraju, and Akhil Sahai. Analysis on Service Level Agreement of Web Services. Technical report HPL-2002-180, Software Technology Laboratories, HP Laboratories (June 2002).
[5] Heiko Ludwig, Asit Dan, and Robert Kearney. Cremona: An architecture and library for creation and monitoring of WS-Agreements. In Proceedings of the Second International Conference on Service Oriented Computing (ICSOC ’04). ACM, 2004.
[6] Carlo Marchetti, Barbara Pernici, and Pierluigi Plebani. A quality model for multichannel adaptive information systems. In Proceedings of the 13th International World Wide Web Conference, Alternate Track Papers & Posters (WWW ’04). ACM Press, 2004.
[7] E. Michael Maximilien and Munindar P. Singh. Conceptual model of Web services reputation. SIGMOD Record, 31(4):36–41 (October 2002).
[8] E. Michael Maximilien and Munindar P. Singh. Reputation and endorsement for Web services. ACM SIGecom Exchanges, 3(1):24–31 (Winter 2002).
[9] Daniel A. Menascé. QoS issues in Web services. IEEE Internet Computing, 6(6):72–75 (November/December 2002).
[10] Aad van Moorsel. Metrics for the Internet Age: Quality of Experience and Quality of Business. Technical Report HPL-2001-179, HP Labs, July 2001. Also published in Fifth International Workshop on Performability Modeling of Computer and Communication Systems, Friedrich-Alexander-Universität Erlangen-Nürnberg, 2001 (R. German, J. Luhti, and M. Telck, eds.).
[11] Justin O’Sullivan, David Edmond, and Arthur ter Hofstede. What’s in a service? Distributed and Parallel Databases, 12(2–3):117–133 (September–November 2002).
[12] Mike P. Papazoglou and Dimitrios Georgakopoulos. Service-Oriented Computing. Communications of the ACM, 46(10):25–65 (October 2003).
[13] Shuping Ran. A model for Web services discovery with QoS. ACM SIGecom Exchanges, 4(1):1–10 (Spring 2003).
[14] Amit Sheth, Jorge Cardoso, John Miller, and Krys Kochut. QoS for service-oriented middleware. In Proceedings of the Sixth World Multiconference on Systemics, Cybernetics and Informatics (SCI ’02), pp. 528–534. International Institute of Informatics and Systemics, 2002.
[15] Vladimir Tosic, Bernard Pagurek, Kruti Patel, Babak Esfandiari, and Wei Ma. Management applications of the Web Service Offerings Language (WSOL). Information Systems, 30(7):564–586 (November 2005).
[16] Liangzhao Zeng, Boualem Benatallah, Marlon Dumas, Jayant Kalagnanam, and Quan Z. Sheng. Quality driven Web services composition. In Proceedings of the Twelfth International Conference on the World Wide Web (WWW ’03). ACM Press, 2003.
[17] BEA WebLogic Workshop 8.1. http://commerce.bea.com/showallversions.jsp?family=WLW.
8 WS-Agreement Concepts and Use: Agreement-Based, Service-Oriented Architectures
Heiko Ludwig
8.1 Introduction
The use of Web Services in an enterprise environment often requires quality guarantees from the service provider. Providing service at a given quality-of-service (QoS) level consumes resources in proportion to the extent to which the service is used by one or more clients of a service provider (e.g., the request rate per minute in the case of a Web Service). Hence, a service client and a provider must agree on the time period in which a client can access a service at a particular QoS level for a given request rate. Based on this agreement, a service provider can allocate the resources necessary to live up to the QoS guarantees. In more general terms, the service provider commits to—and a service customer acquires—a specific service capacity for some time period. We understand service capacity as a service being provided at a given QoS for a specific client behavior constraint. This constraint is typically the number of requests per minute for specific Web Service operations, and may include constraints on the input data for computationally intensive operations.

In the information technology (IT) services industry, agreements—specifically, Service-Level Agreements (SLAs)—are a widely used way of defining the specifics of a service delivered by a provider to a particular customer. This includes the provider's obligations in terms of which services are delivered at which quality, the modalities of service delivery, and the quantity (i.e., the capacity of the service to be delivered). Agreements also define what is expected of the customer, typically the financial compensation and the terms of use. In the context of this chapter, we use the term SLA for any type of agreement between organizations, although others sometimes use it for only the part of an agreement that relates to quality of service.
Although SLAs have traditionally been applied to low-level services such as networking and server management, and to manual services such as help desks, software-as-a-service, accessed through Web or Web Services interfaces, is becoming more attractive to organizations. In a Saugatuck Technology study published in Network World, the CIOs surveyed estimated that 14 percent of their software application budget was spent on software-as-a-service in 2005, and believed that this proportion would increase to 23 percent by 2009.
The Web Services stack as defined by WSDL, SOAP, UDDI, the WS-Policy framework, and other specifications primarily addresses the issue of interoperability across application and domain boundaries. It enables a client to learn about a service and its usage requirements, bind to it in a dynamic way, and interact as specified (i.e., potentially in a secure, reliable, and transactional way). The organizational notions of service provider and service customer are beyond the scope of these specifications, and SLAs between autonomous domains are dealt with out of band, typically in a manual process.

The WS-Agreement specification is defined by the Grid Resource Allocation Agreement Protocol (GRAAP) working group of the Global Grid Forum (GGF). It enables an organization to dynamically establish an SLA in a formal, machine-interpretable representation as part of a Service-Oriented Architecture (SOA). WS-Agreement provides the standard specification for building an agreement-driven SOA in which service capacity can be acquired dynamically as part of the service-oriented paradigm and the corresponding programming model. The specification comprises an XML-based syntax for agreements and agreement templates, a simple agreement creation protocol, and an interface for monitoring the state of an agreement.

This chapter outlines the concept of agreement-driven SOA, explains the elements of the WS-Agreement specification, and discusses conceptual and pragmatic issues of implementing an agreement-driven SOA based on WS-Agreement.

8.1.1 Motivation
Web Services aim at flexibly and dynamically establishing client-service connections in a loosely coupled environment. Traditional SLAs between organizations define the quantity and the quality of a service that one organization provides for another, as well as the financial terms of the relationship. However, traditional SLAs are usually quite static, owing to their high cost of establishment and hence their typically long runtime. This traditional way of establishing SLAs hampers the dynamicity of Web Services in a cross-domain environment.

Consider the following simplified scenario. A financial clearinghouse provides third-party services to buyers and sellers at one or more securities exchanges. The customers of the clearinghouse maintain accounts with it and use a Web Services interface to manage their accounts (i.e., to inquire about the current balance and the list of completed and pending transactions, and to transfer funds to and from the account). This service is called the account management service. In addition, clients can submit trades they have made for settlement by the clearinghouse through a Web Services interface. They can monitor the state of the clearing process and request notification of events such as the completion of the clearing process or exceptions that occurred. This Web Service interface is called the settlement service.

For the clients of the clearinghouse, performance and availability of the service are essential. Quality-of-service (QoS) guarantees, along with pricing and penalties, are defined in an SLA. The performance parameters relate to the response time for account operations and clearing requests (the submission), notification times of clearing events, and the total time of the clearing
process (from submission to completion). For clearing requests, an absolute maximum response time per request is defined, as well as maximum average response times in five-minute and one-day windows. Availability is measured as a time-out of thirty seconds on a Web Service request.

The different clients of the clearinghouse have different requirements regarding the service's QoS parameters. Since better QoS guarantees require more resources to implement a Web Service, QoS guarantees are capped at a certain number of requests per time span (e.g., 1,000 requests per minute and 100,000 per trading day). If more requests are submitted, either another (lower) set of QoS guarantees applies or no performance guarantees are given. Finally, clients' service capacity demands vary over time. Triggered by certain events, such as end-of-year trades, tax dates, initial public offerings, and the acquisition of new customers by clients, settlement capacity requirements change temporarily or permanently over time. To differentiate the clients' Web Services requests, each client is given different endpoints.

The clearinghouse implements its Web Services on a multitiered cluster of servers comprising HTTP servers, application servers, and database servers. The cluster is connected to a storage area network (SAN) where data are managed. The clearinghouse also maintains an off-site backup data center to which all data are mirrored in real time. To connect to customers and to the off-site data center, the clearinghouse rents VPN bandwidth from three different networking companies. A monitoring system gathers response-time data from the application servers' instrumentation and checks compliance with each SLA's QoS guarantees. Penalties are deducted from the clients' monthly bills.
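A windowed response-time guarantee of this kind can be checked with a small sliding-window monitor. The Python sketch below is illustrative: the concrete limits (2 s per request, 0.5 s average, five-minute window) are assumptions, since the scenario does not fix numeric values.

```python
from collections import deque
import time

class WindowedResponseMonitor:
    """Checks a per-request response-time limit and a windowed average
    limit, as in the clearinghouse SLA (limit values are illustrative)."""

    def __init__(self, max_single=2.0, max_avg=0.5, window=300.0):
        self.max_single = max_single   # absolute max per request (seconds)
        self.max_avg = max_avg         # max average inside the window (seconds)
        self.window = window           # window length, e.g., five minutes
        self.samples = deque()         # (timestamp, duration) pairs

    def record(self, duration, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, duration))
        # Drop samples that have slid out of the window.
        while self.samples and self.samples[0][0] < now - self.window:
            self.samples.popleft()
        violations = []
        if duration > self.max_single:
            violations.append("single-request limit exceeded")
        avg = sum(d for _, d in self.samples) / len(self.samples)
        if avg > self.max_avg:
            violations.append("windowed average limit exceeded")
        return violations
```

Feeding each request's duration into `record` yields the list of guarantee violations for that moment, which a billing component could translate into penalties.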
The clearinghouse wants to keep costs low by minimizing resource consumption (i.e., the number of servers and the amount of storage it owns in its main and off-site data centers, and the amount of bandwidth it buys). To minimize resource consumption, it shares resources among its clients. All requests are routed to a central workload manager and then prioritized according to QoS level. If current demand exceeds the cluster's capacity, those requests which entail the least penalty are delayed. The cluster's capacity and the network are adjusted to maximize profit, not to serve peak demand.

Today, clients request changes in capacity by phone. A call center agent checks the management system to determine whether the change can be accommodated at the requested time. If additional capacity is needed and provisioning time permits, the agent checks whether it can buy the additional cluster servers, notification nodes, and networking capacity. Based on this information, the agent submits orders for the additional capacity, if required and possible, and schedules a configuration change of the central dispatcher node to take the additional workload into account. The clearinghouse wants to automate requests for additional Web Services capacity at varying QoS levels, based on a standard interface that all clients can use.

The clients of the clearinghouse are brokerage companies. Brokerage companies can use different clearinghouses at the securities market. They receive trades from various clients (e.g., private individuals, institutional investors such as mutual funds or pension funds, and companies issuing and managing bonds). Client demand is volatile in a brokerage house. Changes in demand can be foreseeable, such as the end-of-year business, or spontaneous, such as news that
triggers stock and bond market activity. For planned changes, brokerage companies want to buy clearing services capacity in advance. However, they also want to manage a sudden onset of demand. If market activity increases to or beyond the capacity currently acquired, brokerage companies want to buy additional capacity from the clearinghouse offering the best conditions at the time. This short-term acquisition process should be fully automated to accommodate unplanned increases in Web Service client traffic.

8.1.2 Requirements and Assumptions
The introductory discussion and the example scenario illustrate a number of interesting observations and requirements related to the capacity and performance of Web Services. First, performance QoS parameters such as response time vary with the client workload submitted to the Web Service, given that the set of resources stays constant. If a Web Service provider wants to guarantee a QoS level, it has to anticipate the client workload and add resources correspondingly. In an interorganizational scenario, an SLA is a viable means of conveying future capacity requirements to a service provider, and the mechanism for establishing SLAs must provide this function.

From a service provider's perspective, its ability to provide service at a given QoS level is bounded by the number and capacity of the resources available at the time of service. The service provider must be able to decline requests for further capacity if accepting them would endanger the SLAs that have already been established, and to reject the corresponding Web Services requests.

Given the resource dependency of performance QoS properties, a service customer wants to establish SLAs ahead of time to ensure that its clients will receive the QoS they require. If Web Service clients require more capacity at runtime, service customers will want to shop around multiple providers to find the required capacity at the best price. They need a standard mechanism to search for partners and create agreements. To accommodate short-term capacity requests as outlined in the example, the service capacity acquisition mechanism must be able to be fully automated, at least for cases of simple decision-making. Service customers must monitor their Web Service clients' activity internally to identify additional service capacity requirements. Finally, service customers must be able to direct Web client requests to Web Service providers according to the contracted capacity.
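On the customer side, directing client requests according to contracted capacity can be sketched as a small router. This is a Python illustration only: the per-round capacity model and the endpoint names are assumptions, not part of the scenario.

```python
class CapacityRouter:
    """Directs client requests to contracted provider endpoints while
    respecting each agreement's request cap (per scheduling round)."""

    def __init__(self, agreements):
        # agreements: {endpoint: requests allowed per round} (assumed model)
        self.caps = dict(agreements)
        self.used = {ep: 0 for ep in agreements}

    def route(self):
        # Pick the contracted endpoint with the most remaining capacity.
        ep = max(self.caps, key=lambda e: self.caps[e] - self.used[e])
        if self.used[ep] >= self.caps[ep]:
            return None  # all contracted capacity exhausted: queue or buy more
        self.used[ep] += 1
        return ep

    def new_round(self):
        self.used = {ep: 0 for ep in self.caps}
```

When `route` returns None, the customer's agreement management would be triggered to negotiate additional capacity, as in the brokerage scenario above.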
Given the requirements of Web Service providers and Web Service customers to manage Web Service capacity and to acquire capacity in a dynamic way, we need a means of establishing SLAs for Web Services in an automated and standardized way that is integrated into the Service-Oriented Architecture. In this chapter, we ignore all issues of security and malicious use of the SLA mechanism; we assume that these issues can be addressed in a manner orthogonal to the conceptual discussion here.
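The provider-side obligation noted above, declining capacity requests that would endanger established SLAs, reduces to an admission check against uncommitted resources. A minimal Python sketch, assuming a single aggregate capacity figure (real providers would model servers, storage, and bandwidth separately):

```python
class ProviderCapacity:
    """Provider-side admission check: accept an agreement offer only if the
    requested capacity still fits within resources not yet promised to
    existing SLAs (single-number resource model is an assumption)."""

    def __init__(self, total_capacity):
        self.total = total_capacity  # e.g., requests/minute the cluster supports
        self.committed = 0           # capacity promised in accepted SLAs

    def consider_offer(self, requested_capacity):
        if self.committed + requested_capacity > self.total:
            return False             # would endanger existing SLAs: decline
        self.committed += requested_capacity
        return True                  # schedule provisioning, return claim info
```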
8.1.3 Related Work
There are multiple approaches to using formalized agreements (contracts) in the context of electronic services, both in specifying contractual agreements and in architectures and systems that deal with agreements. A number of approaches address various issues of the process of agreement creation, fulfillment, and monitoring independently of the particular technology environment of the electronic service. In the context of the ODP Enterprise Language, a high-level model for the representation of contractual obligations has been proposed, but no explicit syntax has been specified by the ISO [10]. There are multiple nonstandardized proposals for specific languages which have not found widespread adoption (e.g., [6]). There are several other approaches to representing and formalizing contractual content independently of any standard (e.g., SORM [15], a model of contractual rights and obligations representing enforcement- and monitoring-related aspects of a contract), as well as policy-related work conducted by Milosevic et al. [4] [17]. Further work on architectures for agreement-based systems has been presented independently of Web Services contexts [16] [10].

Some approaches specifically address SLAs for Web Services. The Web Service Level Agreement (WSLA) approach proposes a language and monitoring framework for Web and other services [14] [11]. Parties can define service-level parameters and specify service-level guarantees as logic expressions over them. The semantics of service-level parameters are described by a functional specification of the way in which high-level metrics are computed from well-known resource-level metrics. It can be specified which party to an agreement—provider, customer, or third party—collects them, and when metric values and compliance and violation messages are to be sent to other parties. The WSLA Measurement Service interprets the WSLA specification and sets up a corresponding distributed measurement environment.
Although this approach comprises many of the elements required for monitoring SLAs, there is no establishment mechanism, and the mapping from an SLA to an implementation infrastructure requires substantial additional effort (as discussed in [3] and [5]).

The Web Service Offerings Language (WSOL) proposes a language to represent Web Services-related SLAs [19]. It covers a scope similar to that of WSLA but uses ontologies and low-level languages to define metric semantics. It has not been shown how to derive a service implementation from a WSOL representation. Bhoj et al. [1] propose another XML-based SLA representation. As in WSOL, the metric semantics are described in an executable language such as SQL or Java.

A number of publications address the issue of representing performance-oriented and other QoS parameters for the selection of services (e.g., in GlueQoS [20]) and offer related categorizations and ontologies (e.g., [16], [17], and [18]), used primarily for matchmaking services. Other related approaches propose agreement formats and infrastructure to facilitate interaction and coordination between parties (e.g., tpaML/BPF [2] and CrossFlow [7]). However, these approaches are not suitable for service capacity reservation.
8.2 Agreement-Driven Service-Oriented Architectures: Extending the SOA Paradigm
To address the issue of service capacity, we have to extend the architectural model of the service-oriented approach. In the traditional model of SOA, service properties are published to a directory such as UDDI. Clients can pick services depending on their capabilities, described in the directory or as metainformation at the service endpoint. Once they have decided, clients can bind to a service by establishing the transport protocol relationship, which can be as simple as sending SOAP messages over HTTP, or may involve establishing a secure connection (e.g., via SSL) prior to using it. In this traditional model of binding to a Web Service, it is assumed that any client can bind to a service if it has the capabilities required by the service's metainformation.

In a capacity-aware SOA, the organization owning the clients (our example's brokerages) and the organization owning the Web Service (e.g., the clearinghouse) agree prior to service usage on the conditions under which clients of the customer organization can use the service of the provider organization. Clients are not serviced without an agreement. The main elements of this agreement-driven Service-Oriented Architecture are outlined in figure 8.1. As in a traditional SOA, Web Services Client Applications run on a Service Client System, the execution environment of the client application (e.g., a managed network of PCs). The client application invokes a service application implementing a Web Service in a service delivery system (e.g., the managed server cluster of our clearinghouse). However, the client application's way of binding to a Web Service is different. In an agreement-driven SOA, Web Services and clients belong to organizations, and Web Services invocations between organizations are governed by an agreement. To obtain authorization to
(Figure 8.1: Agreement-driven Service-Oriented Architecture. The service customer side comprises client applications running on a service client system plus an agreement management component; the service provider side comprises service applications running on a service delivery system plus its own agreement management, monitoring, and provisioning functions. Providers advertise offers to a directory, customers search it, and the two agreement management components negotiate agreements and exchange claim information; service invocations flow directly between client and service applications.)
WS-Agreement Concepts and Use
use a Web Service, the client application derives its requirements for a service, including the QoS properties and the capacity it needs, and submits a request for service capacity (RSC) to the agreement management. The agreement management component of a service customer searches for suitable service providers and negotiates an SLA for the required service capacity. It returns the service-claiming information to the client application. The claiming information describes how to access the service according to the contract; it may name a specific endpoint or specify a mechanism such as adding a claiming token to the SOAP header of requests. Using the claiming information, the client application can then access the Web Service on the terms negotiated in the agreement (e.g., adhering to the request rate cap and receiving the agreed performance). If the service performance does not live up to what has been guaranteed, the client can report the problem to the agreement management component, which can handle the dispute with the service provider and seek either correction of the problem or financial compensation. If the problem cannot be corrected, a new agreement with another service provider can be negotiated. The agreement management component of a service provider advertises the service provider's capabilities and negotiates agreements. When receiving agreement offers from potential service customers, it assesses the available capacity of its service delivery system. If the capacity is available at the conditions specified in the agreement, it schedules the provisioning of the service at these conditions and returns the corresponding claiming information to the service customer. When the service is due to be delivered, the agreement manager provisions the service (i.e., allocates servers, installs service applications, and configures network components and workload managers).
The implementation of an agreement-driven SOA differs in a number of ways from the implementation of a traditional SOA. Client applications must be able to determine their QoS and capacity requirements. This is often the case in a high-performance, Grid-like application scenario such as gene sequencing. However, in the context of business applications such as the clearinghouse's clients, it is impractical to have each client determine its requirements independently. The capacity management function can be implemented separately from the actual business logic of the client and then take into account the collective requirements of all clients for a service. Furthermore, clients must be able to interpret claiming information and potentially add claiming details to their service requests. This function can be separated into a gateway that intercepts service requests from the client and adds the additional information. Service applications delivering Web Services also require some additional functionality. Service requests must be differentiated by their agreement. Resources must be assigned to agreements or groups of agreements (classes of service). Compliance with QoS guarantees must be monitored and, in case of conflict, yield management decisions must be made. Some service providers already implement different classes of service which they differentiate on bases other than agreements. Hence, preparing a service delivery system for an agreement-driven SOA typically requires less effort than the client-side changes.
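As a rough sketch of the client-side gateway idea described above, the gateway might decorate outgoing requests before they reach the provider. All type, method, and header names here are hypothetical, not defined by WS-Agreement:

```java
// Hypothetical client-side gateway that attaches claiming information
// (agreed endpoint and claiming token) to outgoing service requests.
import java.util.HashMap;
import java.util.Map;

final class ClaimingInfo {
    final String endpoint;      // endpoint agreed for this customer
    final String claimingToken; // token identifying the agreement
    ClaimingInfo(String endpoint, String claimingToken) {
        this.endpoint = endpoint;
        this.claimingToken = claimingToken;
    }
}

final class ClaimingGateway {
    private final ClaimingInfo claim;
    ClaimingGateway(ClaimingInfo claim) { this.claim = claim; }

    /** Intercepts an outgoing request: adds the claiming token and target endpoint. */
    Map<String, String> decorate(Map<String, String> headers) {
        Map<String, String> out = new HashMap<>(headers);
        out.put("X-Agreement-Claim", claim.claimingToken); // hypothetical header name
        out.put("X-Target-Endpoint", claim.endpoint);
        return out;
    }
}
```

Keeping this logic in a gateway means the business logic of the client application remains unaware of agreements, as the text suggests.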
The agreement management component is a new element of an SOA implementation architecture. Various issues have to be addressed. Managing the life cycle of agreements (creation, monitoring, expiration management, and renewal) is a common issue for both service customer and service provider. Also, both parties have to design the access to their respective service systems to monitor agreement compliance. A service provider must design a decision-making function that derives the resource requirements for a particular agreement and assesses whether the agreement is feasible and economically viable. The service customer has to devise a decision-making mechanism that decides how to allocate capacity among multiple service providers.

8.3 WS-Agreement Concepts
The WS-Agreement specification addresses the interfaces and interaction between service provider and service consumer at the agreement management level. These interfaces are based on the Web Services Resource Framework (WSRF), each interface being defined as a resource with properties and addressed by an endpoint reference. Though this is not important on the conceptual level of this section, we will use WSRF terminology in describing the main conceptual elements of the WS-Agreement specification, which are outlined in figure 8.2.
(Figure 8.2: WS-Agreement overview. The agreement initiator, on the service customer side, retrieves templates from the agreement responder's agreement factory via getProperty(), submits createAgreement(Offer), and may receive accept() or reject() via its agreement acceptance interface. Established agreements expose terms, state, and service references via getProperty(), while service invocations flow from the service client system to the service delivery system.)
WS-Agreement defines two parties in the dialogue to establish an agreement: the agreement initiator and the agreement responder. These roles are entirely orthogonal to the roles of a service provider and a service customer. Service providers and service customers can take either role, depending on the specific setup of an application domain. An agreement responder exposes an agreement factory. The agreement factory provides a method to create an agreement by submitting an agreement offer in the offer format defined by the WS-Agreement XML schema. The agreement factory can also expose a set of agreement templates that agreement initiators can retrieve to understand which kind of agreements the agreement responder is willing to enter into. Agreement templates are agreement prototypes that can be modified and completed according to rules by an agreement initiator. Upon receiving an agreement offer, an agreement responder decides whether it accepts or rejects an offer. If an agreement responder decides immediately, it returns the decision as a response to the synchronous call. Alternatively, it can use an agreement initiator’s agreement acceptance interface to convey the decision. A response to an agreement creation request contains an endpoint reference to an agreement. An agreement is created by an agreement responder and exposes the content of the agreement and its runtime state. The runtime state comprises the overall state of the agreement (pending, observed, rejected, completed), and the state of compliance with individual terms of the agreement, which will be discussed in detail in subsequent sections of this chapter. The runtime status information of the agreement can be used by both parties to manage their service delivery and service client systems, respectively. The agreement can also provide a list of references to the services that are subject to the agreement. 
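As an illustrative sketch, the agreement-level runtime states named above can be modeled as a small enumeration; the helper methods are assumptions, not part of the WS-Agreement specification:

```java
// Illustrative model of the agreement runtime states (pending, observed,
// rejected, completed) described in the text.
enum AgreementState {
    PENDING,   // offer received, no decision yet
    OBSERVED,  // offer accepted, agreement in force
    REJECTED,  // offer rejected
    COMPLETED; // obligations fulfilled or agreement expired

    /** Services should be invoked under an agreement only while it is observed. */
    boolean isUsable() { return this == OBSERVED; }

    /** Rejected and completed agreements never change state again. */
    boolean isTerminal() { return this == REJECTED || this == COMPLETED; }
}
```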
In addition to the agreement responder, the agreement initiator can also expose an agreement instance to make term compliance status information available to the agreement responder. This may be useful in cases where measurements determining the term compliance status are taken at the system of the agreement initiator. For example, when the agreement initiator is the service provider, throughput measurements and server-side response times are gathered by the service provider's instrumentation, and it may be easier to evaluate there whether a guarantee has been complied with. Based on the early experience gathered in WS-Agreement deployments, service providers typically assume the role of agreement responders and provide the agreement resource. This also holds for our scenario. The brokerage companies ask for capacity, and hence assume the role of the agreement initiator; the clearinghouse is the agreement responder. However, scenarios in which service providers take the initiative to create agreements are common in Grid computing environments, and we may see such a development in a Web Services context in which service providers monitor the service usage by their service customers, and suggest new agreements when customers reach the capacity limits of their existing agreements. WS-Agreement does not define the interaction between the agreement management level and the service level of the agreement-driven SOA, for either service providers or service customers. How to provision a service from an agreement depends on the particular application domain and
the specific implementation of the service delivery system used, and is beyond the domain-independent scope of WS-Agreement.

8.4 Agreement Model
To facilitate the negotiation, establishment, and monitoring of an agreement, the parties involved in the process must have a common understanding of the agreement's content. WS-Agreement defines a standard model and high-level syntax for the content of agreement offers and agreement templates. This section provides an overview of the agreement model; the details of the agreement elements are described in the language section. The main structure of an agreement (and of offers and agreement templates) is outlined in figure 8.3. An agreement has three structural components: its descriptive name; its context, which comprises those elements of an agreement that don't carry obligations; and the terms section, which contains the main part of the agreement. The context contains descriptions of the agreement responder and the agreement initiator. Furthermore, it defines which one of the parties is the service provider, and the expiration time of the agreement. The parties can add other elements to the context. The terms section contains the terms, grouped by connectors. Connectors express whether and how many of the contained terms must be fulfilled to make the agreement as a whole fulfilled. The connectors are modeled after their WS-Policy equivalents and comprise ExactlyOne, All, and OneOrMore.
(Figure 8.3: Agreement model. An Agreement has an optional Name, one Context, and one Terms section. The Context references the AgreementInitiator and the AgreementResponder and indicates which party provides the service; the ServiceProvider and ServiceCustomer roles carry the corresponding obligations. The Terms section groups Term elements by Connectors (ExactlyOne, All, OneOrMore); a Term is either a ServiceDescriptionTerm or a GuaranteeTerm, each relating to a Service.)
(Figure 8.4: Service state model. The top-level states are Not Ready, Ready, and Completed; Ready has the substates Processing and Idle.)
The WS-Agreement model defines two term types, service description terms and guarantee terms. Each term describes one service, and a service may be described by multiple terms. Service description terms primarily describe the functional aspects of a service (e.g., its interface description and the endpoint reference where it is available). A service follows the state model shown in figure 8.4, which is exposed as the state of its terms in the agreement resource representing it. Not Ready means the service cannot be used; Ready means that it can be, and potentially is, used; and Completed means that it cannot be used anymore. Each top-level state can have substates. By default, the Ready state has the substates Processing and Idle. Specific domains can introduce further substates as they see fit (e.g., to distinguish successful from unsuccessful completion). The service provider is obligated to deliver the services as a whole; service description terms cannot be violated individually. Individually monitorable aspects of a service that can fail independently of the functioning of the underlying "core" service are captured in guarantee terms. Guarantee terms can relate to a service as a whole or to a subelement (e.g., an operation of a Web Service). They have a qualifying condition that defines the circumstances under which the guarantee applies (e.g., during business hours); a logic expression defining the actual guarantee, referred to as the service-level objective; and a business value, usually a penalty for noncompliance, a reward, or a nonmonetary expression of priority. Guarantee terms follow the state model shown in figure 8.5. If the service is not ready or the guarantee term cannot be evaluated, its state is Not Determined. Otherwise, if the qualifying condition and the service-level objective (SLO) are both true, or the qualifying condition is false, the guarantee's state is Fulfilled; if the qualifying condition is true and the SLO is false, the guarantee term is Violated.
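The guarantee-state rules above can be captured in a few lines. This is a minimal sketch; the type and method names are illustrative, not taken from the specification:

```java
// Evaluates a guarantee term's state from the rules in the text:
// NOT_DETERMINED if the service is not ready or the term cannot be evaluated;
// FULFILLED if the qualifying condition is false, or both condition and SLO hold;
// VIOLATED if the qualifying condition holds but the SLO does not.
enum GuaranteeState { NOT_DETERMINED, FULFILLED, VIOLATED }

final class GuaranteeEvaluator {
    static GuaranteeState evaluate(boolean serviceReady,
                                   Boolean qualifyingCondition, // null = not evaluable
                                   Boolean slo) {               // null = not evaluable
        if (!serviceReady || qualifyingCondition == null
                || (qualifyingCondition && slo == null)) {
            return GuaranteeState.NOT_DETERMINED;
        }
        // If the condition does not apply, the guarantee is trivially fulfilled.
        if (!qualifyingCondition || slo) {
            return GuaranteeState.FULFILLED;
        }
        return GuaranteeState.VIOLATED;
    }
}
```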
How long or how often a guarantee term must be violated for a penalty to apply is defined in the details of the business value part. This state model can also be extended to the needs of a particular domain through substates.

(Figure 8.5: Guarantee term state model, with the states Not Determined, Fulfilled, and Violated.)

The agreement model of the WS-Agreement specification provides a high-level structure and state models that can be adapted to the needs of a specific application environment through additions to the context, the introduction of new term types, and specialization of the state models associated with terms. The representation of the model in an agreement offer or agreement template, as well as the specific mechanisms for applying domain-specific extensions, is discussed in the next section.

8.5 Offer and Template Language
The WS-Agreement language defines the representation of offers that an agreement initiator submits to an agreement responder, and of agreement templates. As discussed above, the agreement model is extensible; hence, the WS-Agreement language does not provide a complete syntax to describe every type of content in an agreement. It does provide, however, the main concepts of an agreement and language elements to describe the main elements at a top level. The details of agreement offers can then be filled in with other languages as seen fit. For example, WS-Agreement may contain a WSDL definition to define an interface, or use a domain-specific expression language to define a response time guarantee. By flexibly including existing languages in the overall model and syntax of WS-Agreement, it becomes useful to a wide range of applications while still providing enough structure to develop application-independent middleware.

8.5.1 Agreement Offer Structure
A WS-Agreement offer has three main elements:

1. An ID and a descriptive name.

2. A context element defining those parts of an agreement that don't have the character of rights and obligations: the parties to the agreement, which party has the role of a service provider, the ID of the template that was used to create the agreement, the agreement's expiration time, and other elements of the agreement that have the character of definitions.

3. The terms section, the core part of the agreement offer. It defines the obligations of the two parties to the agreement. The specification defines two types of terms, service description terms and guarantee terms, which are related using term compositors. Though the WS-Agreement specification defines these two types of terms, additional ones can be added by users of WS-Agreement.

Figure 8.6 outlines the structure of an agreement offer XML document.

(Figure 8.6: WS-Agreement offer structure. An agreement offer comprises a name and ID, a context, and a terms section containing service description terms and guarantee terms.)

The following example illustrates the XML structure of the content of an agreement offer.
<!-- XML markup reconstructed from the extracted text; element names follow
     the WS-Agreement schema, and elided details are kept as "..." -->
<wsag:AgreementOffer wsag:AgreementId="ClearingCapacity123">
  <wsag:Name>SupplementalAgreementInDecember</wsag:Name>
  <wsag:Context>
    <wsag:AgreementInitiator>http://www.abroker.com/</wsag:AgreementInitiator>
    <wsag:AgreementResponder>http://www.thisclearinghouse.com/</wsag:AgreementResponder>
    <wsag:ServiceProvider>AgreementResponder</wsag:ServiceProvider>
    <wsag:ExpirationTime>2005-11-30T14:00:00.000-05:00</wsag:ExpirationTime>
    ...
  </wsag:Context>
  <wsag:Terms>
    <wsag:All>
      <wsag:ServiceDescriptionTerm> . . . </wsag:ServiceDescriptionTerm>
      <wsag:ServiceDescriptionTerm> . . . </wsag:ServiceDescriptionTerm>
      <wsag:ExactlyOne>
        <wsag:GuaranteeTerm> . . . </wsag:GuaranteeTerm>
        <wsag:GuaranteeTerm> . . . </wsag:GuaranteeTerm>
      </wsag:ExactlyOne>
      ...
    </wsag:All>
  </wsag:Terms>
</wsag:AgreementOffer>
The agreement offer has the ID ClearingCapacity123, which must be unique between the parties. The name element can further describe the agreement's purpose. The example context contains the agreement initiator and the agreement responder, which can be described by any content; URIs are convenient. In our example, the agreement responder is the service provider. The expiration time is given in the XML date-time format. An agreement offer can contain any number of terms. The term compositors structuring them are equivalent to the WS-Policy compositors All, ExactlyOne, and OneOrMore. In our example, all service description terms and one of the guarantee terms must be observed.
Service Description Terms

Service description terms describe the services that the service provider will render to the service customer. This means the service provider, as defined in the context, is liable to deliver what is promised in all the service description terms. Multiple service description terms can be used to describe different aspects of a service (e.g., one term for its WSDL, one for an endpoint reference where the service will be available, and one for additional policy information that is not contained in the WSDL). The objective of service description terms is to specify to the two parties to the agreement what services will be rendered by the service provider. A service description term has a name and a service name attribute to indicate which service it describes, as the following example snippet outlines.
<!-- Markup reconstructed; the term and service names are illustrative. -->
<wsag:ServiceDescriptionTerm wsag:Name="SettlementServiceInterface"
                             wsag:ServiceName="SettlementService">
  <wsdl:definitions> ... </wsdl:definitions>
</wsag:ServiceDescriptionTerm>
A service description term may have any content, as does the WSDL definition in this example. A special type of service description term is the service reference. It contains a pointer or a handle to the service in question or a pointer to a description of a service, rather than including the description in the agreement. The following example outlines a service reference containing an endpoint reference to a Web Service.
<!-- Markup reconstructed; the reference property element name is illustrative. -->
<wsag:ServiceReference wsag:Name="ClearingServiceReference"
                       wsag:ServiceName="SettlementService">
  <wsa:EndpointReference>
    <wsa:Address>http://www.thisclearinghouse.com:9090/services/settlement/</wsa:Address>
    <wsa:ReferenceProperties>
      <clearing:BrokerName>abroker.com</clearing:BrokerName>
    </wsa:ReferenceProperties>
  </wsa:EndpointReference>
</wsag:ServiceReference>
In this example, the service reference contains an endpoint reference according to the WS-Addressing specification. It points to the URL of the service at the clearinghouse's service delivery system and uses the broker's name as a reference property. A further special type of service description term is the service properties term. Service properties are aspects of a service that can be measured and may be referred to by a guarantee term. In our clearinghouse example, this includes the average response time or the availability. The purpose of a service properties definition is to clarify what a particular service property relates to and what metric it represents. The following example illustrates the use of service properties.
<!-- Markup reconstructed; the variable and metric names follow the
     surrounding text, and the service properties name is illustrative. -->
<wsag:ServiceProperties wsag:Name="ClearingServiceProperties"
                        wsag:ServiceName="SettlementService">
  <wsag:VariableSet>
    <wsag:Variable wsag:Name="RequestRate"
                   wsag:Metric="clearing:RequestsPerMinute">
      <wsag:Location>//wsag:ServiceDescriptionTerm/[@Name="ClearingServiceReference"]</wsag:Location>
    </wsag:Variable>
    <wsag:Variable wsag:Name="AverageResponseTime"
                   wsag:Metric="clearing:ResponseTimeAveragePerMinute">
      <wsag:Location>//wsag:ServiceDescriptionTerm/[@Name="ClearingServiceReference"]</wsag:Location>
    </wsag:Variable>
  </wsag:VariableSet>
</wsag:ServiceProperties>
Each variable definition represents a service property. It is given a unique name and a metric that is commonly understood by the parties to the agreement. In our example, we assume that the clearinghouse establishes a set of metrics that it can measure and that it is willing to include
in guarantees, such as the clearing:RequestsPerMinute and clearing:ResponseTimeAveragePerMinute metrics. The location element contains XPath expressions that point to contents of service description terms and define what part of the service the variable relates to. In the example, it relates to the service reference pointing to the clearing service. The granularity of a whole service is often not sufficient; we want to be able to distinguish, for example, response times for different operations of a service. In this case, we can point to a specific operation of a service in its WSDL definition.

Guarantee Terms

One key motivation to enter SLAs is to acquire service capacity associated with specific performance guarantees. A guarantee term defines an individually measurable guarantee that can be fulfilled or violated. It has the following elements. The service scope defines the service or part of a service the guarantee applies to. This can be a Web Service endpoint as a whole (e.g., for guarantees on availability) or individual operations of a service, which is more suitable for response time guarantees. Referring to subelements of a service requires additional language elements not covered by WS-Agreement. The qualifying condition contains a Boolean expression that defines under which condition the guarantee applies. Again, the parties to the agreement use a suitable language applicable in their domain. The service-level objective defines what is guaranteed, using a suitable expression language. The business value list defines the valuation of this guarantee. In cross-organizational scenarios, penalty and reward are the most common forms of expressing value. There are also options to express importance abstractly as an ordinal number, or to express the relative importance of guarantees among the guarantee terms of an agreement. This helps to decide trade-offs if not all guarantees can be fulfilled. Guarantees can be given by both service providers and service customers.
The obliged party is defined in each guarantee term. This enables a service provider to give guarantees even when guarantee fulfillment also depends on the performance of the service customer. In a high-performance computing environment, for example, the commitment to complete a computation at a given time may depend on the service customer's providing the data input in time as the stage-in file. This dependency can be defined as a guarantee term owed by the service customer. Consider the following example.
<!-- Markup reconstructed; the exp: expression elements follow the PMAC-style
     namespace referenced later in this chapter, and nested element names are
     illustrative where the extraction lost them. -->
<wsag:GuaranteeTerm wsag:Name="SettlementResponseTime"
                    wsag:Obligated="ServiceProvider">
  <wsag:ServiceScope>SettlementService</wsag:ServiceScope>
  <wsag:QualifyingCondition>
    <exp:And>
      <exp:Predicate>clearing:BusinessHours</exp:Predicate>
      <exp:Less>
        <wsag:Variable>RequestRate</wsag:Variable>
        <wsag:Value>1000</wsag:Value>
      </exp:Less>
    </exp:And>
  </wsag:QualifyingCondition>
  <wsag:ServiceLevelObjective>
    <exp:Less>
      <wsag:Variable>AverageResponseTime</wsag:Variable>
      <wsag:Value>5</wsag:Value>
    </exp:Less>
  </wsag:ServiceLevelObjective>
  <wsag:BusinessValueList>
    <wsag:Penalty>
      <wsag:AssessmentInterval>
        <wsag:TimeInterval>PT60S</wsag:TimeInterval>
      </wsag:AssessmentInterval>
      <wsag:ValueUnit>USD</wsag:ValueUnit>
      <wsag:ValueExpression>100</wsag:ValueExpression>
    </wsag:Penalty>
  </wsag:BusinessValueList>
</wsag:GuaranteeTerm>
This example defines a response time guarantee for the settlement service. It applies to the entire service. The qualifying condition limits the guarantee to business hours and a request rate, as defined in the service properties, of less than 1000. All expressions in this guarantee term are represented in the PMAC expression language, which is a convenient language for logic and algebraic expressions [9] and can incorporate custom predicates and functions. In our example, we included the custom predicate BusinessHours defined by the clearinghouse. The service-level objective requires the average response time to be less than five seconds, as defined in the service properties. The business value list specifies a penalty: for each assessment interval of sixty seconds (in XML Schema duration syntax), $100 is charged if the service-level objective is not met.

8.5.2 Templates
The WS-Agreement offer language provides significant flexibility to define a rich set of offers, particularly because domain-specific languages are used to represent expressions and to describe services. Though this approach provides great expressiveness to the parties of an agreement, it makes it difficult for an agreement initiator to create an agreement offer that the agreement responder will understand, using a common subset of domain-specific languages. Also, interpreting an arbitrary agreement is complex, and it is generally difficult to derive resource requirements from an arbitrary set of guarantees, even if the semantics is well understood [4]. Templates simplify the process of creating commonly understood and acceptable agreement offers. As discussed in the previous section, templates are made available by an agreement responder at the agreement factory. Agreement templates are prototype agreements with an additional section, the creation constraints, which describes how the template content may be changed by an agreement initiator. Creation constraints contain individual fill-in items and global constraints. An item has a name; a location, which is a pointer to a particular part of the agreement prototype that can be modified; and an item constraint, which defines how this particular item may be changed. Item constraints are expressed in the XML Schema language, providing a standard means of restricting contents. Global constraints relate to more than one item and can be represented in any suitable language. In a global constraint we can express, for example, that for request rates greater than 100, the average response time guaranteed for an operation must be greater than two seconds. Figure 8.7 illustrates the structure of agreement templates and the concept of location pointers in the agreement prototype. An agreement template may contain a creation constraints section.
This is a section that specifies the constraints on possible values of terms for creating an agreement. Consider the following agreement template example.
(Figure 8.7: WS-Agreement template structure. An agreement template contains a Name, a Context, Terms, and a CreationConstraints section; each constraint item has a name (e.g., Throughput), a location pointing into the agreement prototype, and a constraint (e.g., < 10000).)
<!-- Markup reconstructed; item names other than ResponseTime are
     illustrative, and elided details are kept as "..." -->
<wsag:Template>
  <wsag:Name>...</wsag:Name>
  <wsag:Context>
    <wsag:AgreementResponder>http://www.thisclearinghouse.com/</wsag:AgreementResponder>
    <wsag:ServiceProvider>AgreementResponder</wsag:ServiceProvider>
    ...
  </wsag:Context>
  <wsag:Terms>
    <wsag:All>
      ...
      <wsag:GuaranteeTerm wsag:Name="SettlementResponseTime">
        <wsag:ServiceScope>SettlementService</wsag:ServiceScope>
        ...
        <wsag:ServiceLevelObjective>
          <exp:Less>
            <wsag:Variable>AverageResponseTime</wsag:Variable>
            <wsag:Value>3</wsag:Value>
          </exp:Less>
        </wsag:ServiceLevelObjective>
        ...
      </wsag:GuaranteeTerm>
      ...
    </wsag:All>
  </wsag:Terms>
  <wsag:CreationConstraints>
    <wsag:Item wsag:Name="AgreementName">
      <wsag:Location>/wsag:Template/wsag:Name</wsag:Location>
      ...
    </wsag:Item>
    <wsag:Item wsag:Name="Initiator">
      <wsag:Location>/wsag:Template/wsag:Context/wsag:AgreementInitiator</wsag:Location>
      ...
    </wsag:Item>
    <wsag:Item wsag:Name="ResponseTime">
      <wsag:Location>/wsag:Template/wsag:Terms/wsag:All/wsag:GuaranteeTerm[@Name="SettlementResponseTime"]/wsag:ServiceLevelObjective/exp:Less/wsag:Value</wsag:Location>
      <wsag:ItemConstraint>
        <!-- an xs:restriction enumerating the permitted values -->
        ...
      </wsag:ItemConstraint>
    </wsag:Item>
  </wsag:CreationConstraints>
</wsag:Template>
The template contains the usual elements of an agreement offer plus the creation constraints part. There are three items in the creation constraints. The first points to the name of the agreement offer, which an agreement initiator must set. The next item points to the agreement initiator element. Finally, the ResponseTime item points to the value of the guarantee term setting the threshold for the average response time. An agreement initiator can choose only among 1, 2, and 9, with the default set to 3 in the prototype.

8.5.3 Dealing with Extensions and Agreement Variety
Templates are a good mechanism to reduce choices and complexity in the agreement establishment process. Rather than analyzing an agreement template or offer from scratch, parties can rely on known structures and deal with a limited parameter set, the items. Dealing with a template having a small number of items enables parties to automate all or part of their decision-making and service system configuration functions. However, in a given application domain, there may be many service providers or customers offering templates through their agreement factories. This may entail that agreement initiators that interact with multiple agreement responders still have to analyze the structure of the template each time they encounter a new agreement responder. Due to the flexibility of the WS-Agreement language, this significantly increases decision-making complexity and reduces the capability to automate.
The solution to this problem is more organizational than technical. A particular industry can develop a set of standard agreement templates that can be provided centrally to all parties or published through agreement factories. Choosing the latter way, agreement responders could further restrict item and global constraints without increasing the decision complexity faced by agreement initiators.

8.6 Designing Agreement Management Interaction
The WS-Agreement specification provides a number of port types that must be configured to meet the requirements of a particular application domain. The main configuration approaches are combination, extension, and reduction (i.e., implementing only a subset of operations of a service). These approaches can be applied in any combination. There are multiple ways to design synchronous and asynchronous agreement establishment mechanisms based on the combination of available WS-Agreement port types. The WS-Agreement factory-related port types are AgreementFactory, which returns an immediate decision for createAgreement calls; PendingAgreementFactory, which creates an agreement resource but doesn't return a decision immediately; and AgreementAcceptance, which enables an agreement initiator to receive an accept or reject answer asynchronously. There are two port types related to agreements: the Agreement port type, which offers static information (i.e., the agreement content and the references to services), and the AgreementState port type, which exposes the agreement state. Figure 8.8 illustrates various alternatives of agreement establishment. In alternative (a), an agreement is created using a synchronous call. The approval decision is made synchronously, the agreement is established, and its EPR is returned. The creation of the agreement resource itself may be deferred, however. Alternative (b) outlines a simple asynchronous protocol. The agreement initiator requests an asynchronous createAgreement operation. The agreement responder creates an agreement state resource and returns an EPR. However, no decision is made yet, and the state of the agreement is pending. The agreement initiator can then retrieve the state of the agreement and proceed to use it if and when it is approved.
Finally, in alternative (c), the agreement initiator can add an AgreementAcceptance EPR to its asynchronous createAgreement request to be notified by the agreement responder when a decision is made. Furthermore, both agreement initiator and agreement responder can expose agreement state port types that enable the parties to monitor each other's view of guarantee compliance. This is particularly necessary if the agreement initiator is the service provider. In Grid environments, we often find a situation where an agreement is made for the execution of a single large compute job. In this case, it may not be necessary to expose a separate Web Service to submit jobs. One could either extend the agreement resource with operations that receive the input data of a compute job and start the execution, or include all job-relevant information in the agreement and interpret the createAgreement operation as also starting the service, if approved.
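The polling pattern of alternative (b) might be sketched as follows. AgreementStub stands in, as an assumption, for a generated WSRF client stub; the state strings are illustrative:

```java
// Sketch of an initiator polling a pending agreement until the responder decides.
interface AgreementStub {
    String getState(); // e.g., obtained via getProperty(AgreementState)
}

final class AgreementPoller {
    /** Polls until the agreement leaves the pending state or attempts run out. */
    static String awaitDecision(AgreementStub agreement, int maxAttempts) {
        String state = agreement.getState();
        for (int i = 0; i < maxAttempts && "Pending".equals(state); i++) {
            try {
                Thread.sleep(50); // fixed delay; real clients would back off
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
            state = agreement.getState();
        }
        return state;
    }
}
```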
(Figure 8.8: Agreement establishment alternatives. (a) Synchronous: the initiator calls createAgreement(Offer) on the responder's agreement factory, which creates the agreement and returns its EPR. (b) Asynchronous polling: createAgreement(Offer) returns the EPR of a pending agreement; the initiator retrieves getProperty(AgreementState) until a decision is made. (c) Asynchronous callback: the initiator passes an acceptance EPR with createAgreement(Offer, AcceptanceEPR) and is notified via accept() or reject() at its agreement acceptance interface.)
8.7 Implementing WS-Agreement-Based Systems
As discussed in section 8.2, implementing capacity-aware, agreement-driven systems requires a set of new functions from service providers and service customers. These comprise functions that manage the agreement life cycle for either party, functions that manage how service client requests consume acquired service capacity, and functions by which services associate service requests with agreements and treat them accordingly. WS-Agreement standardizes interaction for agreement establishment and compliance monitoring in a domain-independent way. Every WS-Agreement deployment, however, will take
place in a particular application domain, dealing with domain-specific languages embedded into agreement offers, and will connect to a specific service delivery system or service client system. Implementing WS-Agreement from scratch for each agreement initiator and agreement responder, however, is tedious. Providing a domain- and technology-independent implementation of the core features of WS-Agreement that can be extended or used by the specific parts of an implementation greatly reduces the effort of building a WS-Agreement system. A first implementation framework for WS-Agreement-based systems is Cremona (Creation and Monitoring of Agreements) [13]. Cremona is a middleware layer that can be used to create agreements and to access agreement state at runtime. We will discuss it in more detail in this section as an example of how domain-independent parts of WS-Agreement can be implemented, and can be used and extended by the specific aspects of an implementation. It has different structures for agreement responders and agreement initiators. The design objectives are to implement the synchronous WS-Agreement establishment and monitoring protocol, to make it suitable for service providers and customers, to separate domain-independent from system-specific and domain-specific components, and to provide interfaces for administrative tools. The agreement responder structure is outlined in figure 8.9 and comprises the following components. The agreement factory is a domain-independent implementation of the agreement factory port type. The template set maintains the collection of currently valid agreement templates that initiators can use to submit createAgreement requests. The agreement set component administers the collection of agreement instances and routes requests addressed to a specific agreement endpoint reference to the corresponding agreement instance.
An agreement instance exposes the terms and context of an agreement, as well as the runtime status of service description and guarantee terms. The agreement instance uses a status monitor interface to retrieve the status of its terms. The status monitor implementation is specific to the system providing or using the service. It accesses system instrumentation on the service provider or the service consumer side to gather relevant basic measurements, and derives from them the status of a term according to its state model. In the clearinghouse case, the status monitor implementation must access the instrumentation of the application servers of the clearinghouse and derive term status from low-level instrumentation data (e.g., time stamps and counters).

The decision-maker interface is used by the agreement factory to decide whether to accept a createAgreement request. The decision-maker implementation depends on the service role and is domain-specific. In our example scenario, the decision-maker component of the clearinghouse must assess the resource usage at the requested time of service. It must derive the set of resources needed (i.e., HTTP servers, application servers, and database servers) and make its decision dependent on whether these additional resources can be obtained at the time requested.

The agreement implementer interface is used to announce a new agreement. Its service role-specific implementation takes the necessary measures to provide or consume a service according to the agreement (e.g., provision a system or schedule the job). Applied to the clearinghouse services, this means a process to provision the
224
Heiko Ludwig
Figure 8.9 APRM—agreement responder structure, createAgreement flow.
set of servers and to configure central dispatchers and the backup mechanism. All objects are accessible through the administrative Web Service interface.

Figure 8.9 illustrates the interaction among components processing a createAgreement request by an agreement initiator. The interfaces Decision-Maker, Agreement Implementer, and Status Monitor provide access to domain- and technology-specific implementations that connect the agreement management to the service delivery or service client system. These implementations may bear significant complexity and may involve automated as well as manual process steps (e.g., for decision-making and provisioning).

Upon receiving a createAgreement request, the agreement factory asks the decision-maker whether the agreement can be accommodated. If so, it creates the agreement instance and registers it with the agreement set. Subsequently, the agreement is announced to the agreement implementer, which ensures that the system is set up to be used under the terms defined in the agreement. This does not require that the service be provisioned immediately; the system must only be ready when the agreement requires it, which can be much later. Finally, the request is returned to the agreement initiator. Operations to retrieve templates and to obtain term status and content on factory and agreement instances are implemented by simpler interaction sequences.

The Cremona components of an agreement initiator mirror those of the agreement provider and complement them with initiator-specific components. Figure 8.10 outlines these components. The agreement initiator component is the central element. It mediates the interaction on behalf of a component or user client that wants to create a new agreement. The factory set maintains the factories to be used. The agreement set maintains references to the agreements
Figure 8.10 APRM—agreement initiator structure, createAgreement flow.
that the agreement initiator can use to claim service. Factory proxy and agreement instance proxy maintain connections to their respective counterparts on the agreement provider side. The template processor facilitates the creation of agreement instance documents from agreement templates. It fills in values in constraint items and validates constraints. The agreement implementer interface is used to publish the availability of a new agreement, equivalent to its use in the provider APRM.

In the case illustrated in figure 8.10, a user client wanting to initiate a new agreement requests templates. The agreement initiator requests the set of templates from the factory set, which in turn receives it from the respective agreement providers through the factory proxies. Having decided on a template and its values, the client submits the chosen values through the agreement initiator to the template processor, which constructs an agreement instance document. If it is valid, the agreement initiator invokes the proxy of the factory in question to submit a createAgreement request. If the response is positive, it registers the endpoint reference of the new agreement with the agreement set, which then creates a proxy connected to the agreement provider's agreement interface. Otherwise, the client can revise the values set in the template based on the provider's response, and try again. Finally, the new agreement is announced through the agreement implementer interface, whose implementation must make sure that the agreement can be used.

The agreement initiator component can also be used by a component other than a user client (i.e., an automated component) if the decision-making task to fill in a template is simple. Beyond the createAgreement flow, the Cremona components can be used to add new factories
to the factory set and to use the agreement proxies to query the agreement terms and their current status.

As outlined, Cremona provides implementation support for the WS-Agreement protocol and the management of WS-Agreement artifacts such as templates and agreements. Furthermore, Cremona provides interfaces that can be implemented in a specific way (e.g., to trigger provisioning after having accepted a new agreement, or to implement status requests). By defining a set of narrow interfaces between the Cremona responder and initiator components and the domain-specific parts of a system, Cremona can be used in situations in which the service provider is either the agreement responder or the agreement initiator. Obviously, the set of components and interfaces and their interaction model defined by Cremona is just a first approach. Other component structures and corresponding programming models may provide implementation support beyond that offered by Cremona.

8.8 Conclusion
Agreement-based Service-Oriented Architectures enable service customers to reserve service capacity ahead of the time of service use, and thereby enable service providers to plan their resource consumption to meet performance requirements. This is a key requirement for scenarios of dynamic and performance-sensitive Web Services such as the clearinghouse example. Agreement-based SOAs add the notions of service capacity, agreement, and organization to the traditional SOA stack. Prior to using a Web Service, a service customer establishes an agreement with a service provider that its clients can access a service at a given performance level and request rate.

A key enabler for an agreement-based SOA is a standard way to represent agreements (SLAs) and to conduct agreement-level interaction for creating and monitoring agreements. WS-Agreement provides a standard definition of an agreement structure that can be amended by domain-specific concepts and language elements. Furthermore, WS-Agreement defines interfaces for establishing agreements and monitoring their compliance state according to their terms.

Though WS-Agreement addresses the agreement-related interaction between organizations, and implementation support (e.g., Cremona) is available, many issues related to the relationship between the agreement layer of an agreement-driven SOA and the service layer remain to be addressed. Service client systems have to manage the use of the acquired service capacity by their clients and respond to dynamic client request changes (e.g., by buying more capacity, routing capacity to a different provider, or delaying requests). A service delivery system has to manage its contractual obligations and the yield of its agreements, sometimes by incurring a small penalty on one agreement rather than a larger one on another. Foremost, though, a service provider has to derive a service delivery system from an agreement with a service customer in an efficient way.
In addition, there is no established standard to claim service against an agreement. Though there are multiple options, such as different endpoints for different agreements, shared service use will require some agreement identification in a SOAP header of a Web service
request in a universally understood way. Finally, important issues arise in the context of domain heterogeneity. How do parties know which domain-specific language another party understands? How can a service customer deal effectively with different service providers, and vice versa? These issues of ontology and heterogeneity present further research challenges. In summary, though, WS-Agreement provides the key enabler of capacity- and performance-aware, agreement-based Service-Oriented Architectures enabling meaningful interorganizational Web services.

References

[1] P. Bhoj, S. Singhal, and S. Chutani. SLA management in federated environments. In Proceedings of the Sixth IFIP/IEEE Symposium on Integrated Network Management (IM '99), pp. 293–308. IEEE Publishing, 1999.
[2] A. Dan, D. Dias, R. Kearney, T. Lau, T. Nguyen, F. Parr, M. Sachs, and H. Shaikh. Business-to-business integration with tpaML and a B2B protocol framework. IBM Systems Journal, 40(1) (February 2001).
[3] A. Dan, D. Davis, R. Kearney, A. Keller, R. King, D. Kübler, H. Ludwig, M. Polan, M. Spreitzer, and A. Youssef. Web services on demand: WSLA-driven automated management. IBM Systems Journal, 43(1) (2004).
[4] A. Dan, C. Dumitrescu, and M. Ripeanu. Connecting client objectives with resource capabilities: An essential component for grid service management infrastructures. In Service-Oriented Computing: Proceedings of the Second International Conference (ICSOC 2004), pp. 57–64. ACM, 2004.
[5] A. Dan, H. Ludwig, and G. Pacifici. Web Services Differentiation with Service Level Agreements. 2003. ftp://ftp.software.ibm.com/software/websphere/webservices/webserviceswithservicelevelsupport.pdf.
[6] J. Cole, J. Derrick, Z. Milosevic, and K. Raymond. Policies in an enterprise specification. In Policies for Distributed Systems and Networks: Proceedings of the Policy Workshop, M. Sloman, J. Lobo, and E. Lupu, eds. LNCS 1995. Springer, 2005.
[7] P. Grefen, K. Aberer, H. Ludwig, and Y. Hoffner. CrossFlow: Cross-organizational workflow management for service outsourcing in dynamic virtual enterprises. IEEE-CS Data Engineering Bulletin, 24(1):52–57 (2001).
[8] Y. Hoffner, S. Field, P. Grefen, and H. Ludwig. Contract-driven creation and operation of virtual enterprises. Computer Networks, 37(2):111–136 (2001).
[9] IBM. PMAC Expression Language Users Guide. Alphaworks PMAC distribution. 2005. www.alphaworks.ibm.com.
[10] ISO/IEC JTC 1/SC 7. Information Technology—Open Distributed Processing—Reference Model—Enterprise Language. ISO/IEC 15414 | ITU-T Recommendation X.911. Committee draft. July 8, 1999.
[11] A. Keller and H. Ludwig. The WSLA framework: Specifying and monitoring service level agreements for Web services. Journal of Network and Systems Management, 11(1) (March 2003). Special issue on e-business management.
[12] H. Ludwig. A conceptual framework for building e-contracting infrastructure. In Technology Supporting Business Solutions, R. Corchuelo, A. Ruiz-Cortés, and R. Wrembel, eds. Nova Science, 2003.
[13] H. Ludwig, A. Dan, and R. Kearney. Cremona: An architecture and library for creation and monitoring of WS-Agreements. In Service-Oriented Computing: Proceedings of the Second International Conference (ICSOC 2004), pp. 65–74. ACM, 2004.
[14] H. Ludwig, A. Keller, A. Dan, R. King, and R. Franck. A Service Level Agreement language for dynamic electronic services. Journal of Electronic Commerce Research, 3(1/2):43–59 (2003).
[15] H. Ludwig and M. Stolze. Simple obligation and right model (SORM) for the runtime management of electronic service contracts. In Web Services, E-Business, and the Semantic Web: 2nd International Workshop (WES 2003), C. Bussler, D. Fensel, M. Orlowska, and J. Yang, eds. Lecture Notes in Computer Science 3095. Springer, 2004.
[16] E. Maximilien and M. Singh. Toward autonomic Web services trust and selection. In Service-Oriented Computing: Proceedings of the Second International Conference (ICSOC 2004), pp. 212–221. ACM, 2004.
[17] Z. Milosevic, A. Barry, A. Bond, and K. Raymond. Supporting business contracts in open distributed systems. In Proceedings of the Workshop on Services in Open Distributed Systems (SDNE '95). IEEE Press, 1995.
[18] M. Tian, A. Gramm, T. Naumowicz, H. Ritter, and J. Schiller. A concept for QoS integration in Web Services. In Proceedings of the Web Information Systems Engineering Workshops: First Web Services Quality Workshop, pp. 149–155. IEEE Computer Society, 2003.
[19] V. Tosic, B. Pagurek, and K. Patel. WSOL: A language for the formal specification of classes of service for Web Services. In Proceedings of the International Conference on Web Services (ICWS '03), pp. 375–381. CSREA Press, 2003.
[20] E. Wohlstadter, S. Tai, T. Mikalsen, I. Rouvellou, and P. Devanbu. GlueQoS: Middleware to sweeten quality-of-service policy interactions. In Proceedings of the 26th International Conference on Software Engineering (ICSE 2004). IEEE, 2004.
9 Transaction Support for Web Services
Mark Little
9.1 Introduction
Distributed systems pose reliability problems not frequently encountered in more traditional centralized systems. A distributed system consisting of a number of computers connected by a communication network is subject to independent failure modes of its components, such as nodes, links, and operating systems. The lack of any centralized control can mean that part of the system fails while other parts are still functioning, leading to the possibility of abnormal behavior of application programs in execution.

Many enterprise applications require some kind of infrastructural support in order to guarantee consistent outcomes and correct execution. One often-used technique is implemented through atomic transactions, which have the well-known ACID properties (atomicity, consistency, isolation, and durability). Put simply, a transaction provides an "all-or-nothing" (atomic) property to work that is conducted within its scope, while at the same time ensuring that shared resources are isolated from concurrent users (often by locks that are acquired and maintained for the duration of the transaction). Importantly, application programmers typically only have to start and end a transaction; all of the complex work necessary to provide the transaction's properties is hidden by the transaction system, leaving the programmer free to concentrate on the functional aspects of the application at hand.

Web Services have emerged as a new application development and integration paradigm for business-to-business (B2B) interactions over the Internet. However, most B2B collaborative applications require transactional support in order to guarantee consistent outcomes and correct execution. These applications often involve long-running computations (e.g., lasting days or even weeks), loosely coupled systems, and components that do not share data, location, or administration, and it is thus difficult to incorporate ACID transactions within such architectures.
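The demarcation model described above, in which the programmer only starts and ends the transaction while the system handles the bookkeeping, can be sketched in a few lines. This is an illustrative in-memory toy, not a real transaction manager: a real one would also persist an undo/redo log to stable storage.

```python
# Minimal sketch of transaction demarcation: the application only marks the
# boundaries; commit/rollback handling is hidden inside the Transaction class.
class Transaction:
    def __init__(self):
        self._undo_log = []      # inverse operations, recorded as work is done
        self.state = "active"

    def do(self, apply, undo):
        """Perform a piece of work and remember how to undo it."""
        apply()
        self._undo_log.append(undo)

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.state = "committed"            # all effects take hold
        else:
            for undo in reversed(self._undo_log):
                undo()                          # all effects are undone
            self.state = "rolled back"
        return False                            # propagate any failure


# Usage: transfer money between two accounts as a single unit of work.
balances = {"savings": 100, "checking": 0}

with Transaction() as tx:
    tx.do(lambda: balances.update(savings=balances["savings"] - 30),
          lambda: balances.update(savings=balances["savings"] + 30))
    tx.do(lambda: balances.update(checking=balances["checking"] + 30),
          lambda: balances.update(checking=balances["checking"] - 30))
# balances is now {"savings": 70, "checking": 30} and tx.state == "committed"
```

If an exception is raised before the `with` block ends, the recorded undo operations run in reverse order and the transfer leaves no trace, which is the "all-or-nothing" property described above.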
Most collaborative business process management systems support complex, long-running processes in which undoing tasks that have already completed, or choosing another acceptable execution path, may be necessary for recovery. Furthermore, the fact that transactions in the back-end systems underpinning the business processes are assumed to exhibit ACID properties can lead to problems when exposing resources to third parties, since it presents opportunities
for those parties to lock resources and prevent other concurrent transactions from making progress. Though there are several extended transaction models that support complex transaction structures and relaxation of the ACID properties (e.g., releasing locks early within a transaction), most of these models are database-centric—preserving the consistency of shared data is the main objective of these models. They are therefore generally not applicable to applications consisting of loosely coupled, Web-based business services, where recovery of the business process is as important as data recovery.

Web Services workflow techniques, such as those epitomized by WS-BPEL [13] and WS-CDL [14], build upon earlier work in other distributed environments [12][10] where long-duration interactions (business transactions) have operated for many years. These systems use the notion of a compensatable interaction (with an associated compensation transaction); the work performed by a service can be undone later (usually by that same service) if the business interaction requires it. These compensation transactions are triggered by fault handlers in the event of system failures (e.g., a machine crash) or failures of the business logic (e.g., not enough funds available). However, as we shall see in this chapter, although specifications such as WS-BPEL define the requirement for compensation transactions, they do not define how they are provided to users. In fact, they rely upon the kinds of Web Services transaction work that we shall discuss throughout the rest of this chapter.

In this chapter we shall look at the important area of transactions as they apply to Web Services; without transaction capabilities, it is impossible to build complex composite applications that people can trust to ensure consistent state changes, even in the presence of failures.
However, before examining the kinds of Web Services transaction protocols that have been developed, we need to examine what are often referred to as traditional ACID transaction systems. These systems have formed the backbone of enterprise systems since the 1960s and have remained relatively unchanged since that time. Thus, it is informative to know about ACID transactions in order to understand why they have proven ineffective for the Web Services architecture.

9.2 ACID Transactions
Consider the case of an airline reservation system (shown in figure 9.1). In the figure the airline has many seats that can be reserved individually, and the state of a seat is either RESERVED or UNRESERVED. The reservation service exports two operations, reserveSeat and unreserveSeat. Finally, assume that there is a transaction manager service that will be used to manage transactions that are required in order to process the users' requests.

Figure 9.1 An airline reservation system.

Imagine that a user wants to reserve a block of seats for his family (1A, 1B, and 1C, as shown in the figure). The service allows only a single seat to be reserved through the reserveSeat operation, so this will require the user to call it once for each seat. Unfortunately, the reservation process may be affected by failures of software or hardware that could affect the overall consistency of the system in a number of ways. For example, if a failure occurs after reserving 1A, then obviously none of the other seats will have been reserved, and it may be that the reserved seat should be canceled before starting again upon recovery. What is required is the ability to reserve multiple seats as an indivisible block (i.e., despite failures and concurrent access, either all of the airline seats will be reserved or none will). At first glance this may seem fairly straightforward to achieve, but it actually requires a lot of effort to guarantee. Fortunately, atomic transactions possess the following (ACID) properties that make them suitable for this kind of scenario [7][1].

• Atomicity The transaction completes successfully (commits), or if it fails (aborts), all of its effects are undone (rolled back).

• Consistency Transactions produce consistent results and preserve application-specific invariants.

• Isolation Intermediate states produced while a transaction is executing are not visible to others. Furthermore, transactions appear to execute serially, even if they are actually executed concurrently.
• Durability The effects of a committed transaction are never lost (except by a catastrophic failure).

Figure 9.2 A two-phase commit protocol.
A transaction can be terminated in two ways: committed or aborted (rolled back). When a transaction is committed, all changes made within it are made durable (forced onto stable storage, e.g., disk). When a transaction is aborted, all of the changes are undone. Atomic transactions can also be nested, in which case the effects of a nested action are provisional upon the commit/abort of the outermost (top-level) atomic transaction.

9.2.1 Two-Phase Commit
Traditional transaction systems use a two-phase protocol to achieve atomicity between participants, as illustrated in figure 9.2. During the first (preparation) phase, an individual participant must make durable any state changes that occurred during the scope of the transaction, such that these changes can be either rolled back or committed later, once the transaction outcome has been determined. Assuming no failures occurred during the first phase, in the second (commitment) phase participants may "overwrite" the original state with the state made durable during the first phase.

Associated with every transaction is a coordinator, which is responsible for governing the outcome of the transaction via the commit protocol. It communicates with enlisted participants to inform them of the desired termination requirements, that is, whether they should accept (commit) or reject (rollback) the work done within the scope of the given transaction.

In order to guarantee consensus, two-phase commit is a blocking protocol. After returning the first-phase response, each participant that returned a commit response must remain blocked until it has received the coordinator's phase 2 message. Until this message is received, any resources used by the participant are unavailable for use by other transactions, since such use may result in non-ACID behavior. If the coordinator fails before delivery of the second-phase message, these resources remain blocked until it recovers.
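The two phases can be sketched as follows. This is an illustrative in-memory model of the protocol logic only: there is no real stable storage, messaging, or failure recovery, and all names are made up for the example.

```python
# Sketch of two-phase commit: phase 1 collects votes after participants make
# their tentative state durable; phase 2 delivers the coordinator's decision.
class Participant:
    def __init__(self, name, can_prepare=True):
        self.name = name
        self.can_prepare = can_prepare
        self.durable = None          # tentative state "forced to disk" in phase 1
        self.state = None            # committed state

    def prepare(self, work):
        if not self.can_prepare:
            return "rollback"        # vote to abort
        self.durable = work          # make the change durable, then vote to commit;
        return "commit"              # the participant is now blocked awaiting phase 2

    def commit(self):
        self.state = self.durable    # "overwrite" the original state

    def rollback(self):
        self.durable = None          # discard the tentative state


class Coordinator:
    def __init__(self):
        self.participants = []

    def enlist(self, participant):
        self.participants.append(participant)

    def complete(self, work):
        # Phase 1: ask every participant to prepare; any "rollback" vote aborts.
        votes = [p.prepare(work) for p in self.participants]
        if all(v == "commit" for v in votes):
            for p in self.participants:
                p.commit()           # Phase 2: deliver the commit decision
            return "committed"
        for p in self.participants:
            p.rollback()             # Phase 2: deliver the abort decision
        return "rolled back"
```

The unanimity check in `complete` is the consensus property: one dissenting vote is enough to roll back every participant, which is what makes the multi-seat reservation behave as an indivisible block.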
9.2.2 The Synchronization Protocol
Traditional transaction-processing systems and specifications employ an additional protocol, often referred to as the synchronization protocol. Participants in this protocol are informed that a transaction is about to commit, so they can, for example, flush cached state, which may be being used to improve the performance of an application, to a durable representation prior to the transaction's committing. They are then informed when the transaction has completed, and in what state it completed.

Before the coordinator starts the two-phase commit protocol, all registered synchronizations are informed. Any failure at this point will cause the transaction to roll back. The coordinator then conducts the normal two-phase commit protocol, and once the transaction has terminated, all registered synchronizations are informed. Failures at this stage are ignored; the transaction has terminated.

The synchronization protocol does not have the same failure requirements as the two-phase commit protocol. For example, synchronization participants do not need to make sure they can recover in the event of failures: any failure before the two-phase commit protocol completes means the transaction will roll back, and failures after it has completed cannot affect the data the synchronization participants were managing.

9.3 Web Services' Impact on Transactions
The concept of the atomic transaction has played a cornerstone role in creating today's enterprise application environments. Every exchange of money for goods is a transaction, as are most other activities within commerce, the military, and science. Transaction-processing technology ensures that any activity's operations on data are recorded consistently on computer systems, so that the systems remain as reliable indicators of the "real world" as their paper-based antecedents did—or at least as closely as possible, given the vagaries of the electronic medium.

However, B2B interactions may be complex, involving many parties, spanning many different organizations, and potentially lasting for hours or days (e.g., the process of ordering and delivering parts for a computer, which may involve different suppliers, and may be considered to have completed only once the parts are delivered to their final destination). Unfortunately, for a number of reasons B2B participants cannot afford to lock their resources exclusively and indefinitely on behalf of an individual, thus ruling out the use of atomic transactions.

Web Services provide a service-oriented, loosely coupled, and potentially asynchronous means of propagating information between parties, whereas the underlying services use traditional transaction-processing infrastructures. The fact that transactions in back-end systems are constructed with ACID properties can lead to problems when composing business activities from these services/resources, since it presents opportunities to lock resources and prevent transactions from making progress.
Figure 9.3 A logical long-running "transaction," without failure.
Structuring certain activities as long-running atomic transactions can reduce the amount of concurrency within an application or (in the event of failures) require work to be performed again [3]. For example, there are certain classes of applications where it is known that resources acquired within an atomic transaction can be released "early," rather than having to wait until the atomic transaction terminates; in the event of the atomic transaction rolling back, however, certain compensation activities may be necessary to restore the system to a consistent state. Such compensation activities will typically be application-specific, may not be necessary at all, or may be more efficiently dealt with by the application.

For example, long-running activities can be structured as many independent, short-duration atomic transactions, to form a "logical" long-running transaction [4][10][9]. This structuring allows an activity to acquire and use resources only for the required duration of this long-running activity. This is illustrated in figure 9.3, where an application activity (shown by the dotted ellipse) has been split into different, coordinated, short-duration atomic transactions. Assume that the application activity is concerned with booking a taxi (t1), reserving a table at a restaurant (t2), reserving a seat at the theater (t3), then booking a room at a hotel (t4), and so on. If all of these operations were performed as a single atomic transaction, resources acquired during t1 would not be released until the atomic transaction had terminated. If subsequent activities t2, t3, and so on do not require those resources, then they will be needlessly unavailable to other clients. However, if failures and concurrent access occur during the lifetime of these individual transactional activities, the behavior of the entire "logical long-running transaction" may not possess ACID properties.
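This splitting of one logical activity into short transactions, each paired with a compensation, can be sketched as follows. The structure is illustrative; for simplicity the sketch compensates every completed step when a later one fails, whereas a real application may undo only some steps and then continue along another path, as the text goes on to describe.

```python
# Sketch: a long-running activity split into short transactions, each with a
# recorded compensation that can undo its effects after it has committed.
def run_activity(steps):
    """steps: list of (name, action, compensation) triples; returns an event log."""
    log, compensations = [], []
    for name, action, compensation in steps:
        try:
            action()                 # short transaction commits; resources released early
            log.append(f"{name}: committed")
            compensations.append((name, compensation))
        except Exception:
            log.append(f"{name}: aborted")
            # A later failure triggers compensations for the committed steps,
            # here in reverse order of commitment.
            for done_name, comp in reversed(compensations):
                comp()
                log.append(f"tc({done_name}): compensated")
            break
    return log


# The scenario of figure 9.3: taxi, restaurant, and theater succeed; the hotel fails.
def reserve():
    pass

def no_rooms():
    raise RuntimeError("no rooms available")

log = run_activity([
    ("t1 taxi", reserve, reserve),
    ("t2 restaurant", reserve, reserve),
    ("t3 theater", reserve, reserve),
    ("t4 hotel", no_rooms, reserve),
])
```

Because each step commits independently, no step holds its resources while the others run; the price, as noted above, is that the overall activity no longer has ACID properties and must rely on the compensations for recovery.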
Therefore, some form of (application-specific) compensation may be required to attempt to return the state of the system to consistency [8]. For example, let us assume that t4 aborts. Further assume that the application can continue to make forward progress, but in order to do so, it must now undo some state changes made prior to the start of t4 (by t1, t2, or t3). Therefore, new activities are started: tc1, which is a compensation activity that will attempt to undo state changes performed by, say, t2 and t3; and t5′, which will continue the application once tc1 has completed. t5′ and t6′ are new activities that continue after compensation (e.g., since it was not possible to reserve the theater seat, restaurant table, and hotel room, it is decided to book tickets at the cinema). WS-BPEL uses a similar approach, but obviously other forms of composition are possible. (See figure 9.4.)

Figure 9.4 A logical long-running "transaction," with failure.

So far there have been three efforts to incorporate transactions into Web Services to address the peculiarities of long-duration business-to-business interactions. In the following sections we shall examine them.

9.4 The OASIS Business Transactions Protocol
In 2001, a consortium of companies including Hewlett-Packard, Oracle, and BEA began work on the Organization for the Advancement of Structured Information Standards (OASIS) Business Transactions Protocol (BTP) [2]. The effort attempted to provide a solution to the problems of integrating Web Services and transactions by defining a new (non-ACID) transactional model specifically for Web Services.

9.4.1 Consensus of Opinion
In general, a business transaction requires the capability for certain participants to be structured into a consensus group, such that all of the members in such a grouping have the same result. Importantly, different participants within the same business transaction may belong to different consensus groups (a group of participants who see the same final outcome). The business logic then controls how each group completes. In this way, a business transaction may cause a subset of the groups it naturally creates to perform the work it asks, while asking the other groups to undo the work. For example, consider a scenario where a user is booking a vacation, has provisionally reserved a plane ticket and taxi to the airport, and is now looking for travel insurance. The first consensus group holds Flights and Taxi, since neither of these can occur independently. The user may then decide to visit multiple insurance sites (called A and B in this example), and as he goes, may reserve the quotes he likes. So, for example, A may quote $50, which is just within
budget, but the user may want to try B in case he can find a cheaper price without losing the initial quote. If the quote from B is less than that from A, the user may cancel A while confirming both the flight and the insurance from B. Each insurance site may therefore occur within its own consensus group. This is not something that is possible when using ACID transactions. Although BTP uses a two-phase protocol, it does not imply ACID transactions. How implementations of prepare, confirm, and cancel are provided is a back-end implementation decision. Issues to do with consistency and isolation of data are also back-end choices, and are neither imposed nor assumed by BTP. A BTP implementation is primarily concerned with two-phase coordination of participants. 9.4.2
9.4.2 Open-Top Coordination
In a traditional transaction system the user has very few verbs with which to control the transaction. Typically these are "begin," "commit," and "rollback." Furthermore, when an application asks for a transaction to commit, the coordinator will execute the two-phase commit protocol before returning the result to the application. This is what BTP terms a closed-top commit protocol. However, the actual two-phase protocol does not impose any restrictions on the time between executing the first and second phases. Clearly, the longer the period between the first and second phases, the greater the chance for failures to occur and the longer resources remain locked. Therefore, BTP took the approach of allowing the time between the two phases to be set by the application, expanding the range of verbs available to include explicit control over both phases (i.e., "prepare," "confirm," and "cancel"), what BTP terms an open-top commit protocol. The application has complete control over when transactions prepare, and uses whatever business logic is required to determine which transactions to confirm or cancel. This ability to explicitly control the termination protocol via business logic is a powerful tool, since it supports a greater variety of strategies for implementing a transactional system than the traditional closed-top approach.
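The difference between the two styles can be sketched as follows (a toy coordinator in Python; the class and method names are illustrative and not part of BTP):

```python
# Closed-top: commit() runs both phases back to back.
# Open-top: the application drives prepare/confirm/cancel itself, so
# arbitrary business logic can run between the two phases.

class Participant:
    def __init__(self):
        self.state = "active"
    def prepare(self):
        self.state = "prepared"
        return True
    def confirm(self):
        self.state = "confirmed"
    def cancel(self):
        self.state = "cancelled"

class OpenTopCoordinator:
    def __init__(self, participants):
        self.participants = participants
    def prepare(self):                      # phase 1, invoked explicitly
        return all(p.prepare() for p in self.participants)
    def confirm(self):                      # phase 2, whenever logic decides
        for p in self.participants:
            p.confirm()
    def cancel(self):
        for p in self.participants:
            p.cancel()

class ClosedTopCoordinator(OpenTopCoordinator):
    def commit(self):                       # both phases in a single call
        (self.confirm if self.prepare() else self.cancel)()

tx = OpenTopCoordinator([Participant(), Participant()])
tx.prepare()
# ... arbitrary business logic may run here, hours or days later ...
tx.confirm()

tx2 = ClosedTopCoordinator([Participant()])
tx2.commit()                                # no gap between the phases
```

With the open-top coordinator, the gap between prepare() and confirm() is entirely under the application's control, which is exactly the property BTP exploits.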
9.4.3 Atoms and Cohesions
To address the specific requirements of business transactions, BTP introduced two types of extended transactions, both using the open-top completion protocol.

• Atom An atom is the typical way in which "transactional" work performed on Web Services is scoped. The outcome of an atom is guaranteed to be consistent, such that all enlisted participants see the same outcome, which will be either to accept (confirm) the work or to reject (cancel) it. However, an atom does not imply full ACID semantics; it determines only the atomicity constraint. Thus, within the scope of BTP it is not possible to have a pure ACID transaction; the additional properties (the consistency, isolation, and durability that remain when using an atom) may be provided outside of the protocol, but the specification authors did not believe they should be catered for within BTP.
Transaction Support for Web Services
237
• Cohesion This type of transaction was introduced in order to relax atomicity and allow the selection of work to be confirmed or canceled on the basis of higher-level business rules. A cohesion may give different outcomes to its participants, such that some of them confirm and the remainder cancel. The two-phase protocol for cohesions is parameterized to allow a user to specify precisely which participants to prepare and which to cancel. The strategy underpinning cohesions is that they better model long-running business activities, where services enroll in atoms that represent specific units of work. As the business activity progresses, it may encounter conditions that allow it to cancel or prepare these units, with the caveat that it may be many hours or days before the cohesion arrives at its confirm-set: the set of participants that it requires to confirm in order to successfully terminate the business activity. Once the confirm-set has been determined, the cohesion collapses down to being an atom: all members of the confirm-set see the same outcome.
Web Services are assumed to do work within the scope of atoms, which are created by the initiator of the business transaction. Multiple atoms are composed into a business transaction (e.g., arranging a vacation) by a cohesion composer, such that different atoms may possess different outcomes, as directed by the business logic (e.g., cancel one insurance quote and confirm another). Businesses take part in atomic or cohesive transactions via participants, and both cohesions and atoms use coordination to ensure that participants see the desired outcome. It is interesting to note that BTP does not support the notion of a pre-commit/post-commit protocol such as the synchronization protocol in ACID transaction systems mentioned earlier. Although one could, for example, implement such an approach using specialized (prioritized) atoms, it would not be compliant with the specification and would affect portability and interoperability. This apparent oversight is a result of the fact that BTP was designed in relative isolation from existing transaction-processing systems. Should BTP evolve, it is likely that changes incorporating a synchronization-like protocol will occur, based on user feedback and requirements.

9.4.4 XML Message Sets and Carrier Bindings
Though BTP does not define a complete network stack, its development was not undertaken in isolation from other work in distributed computing. Since Web Services are fast becoming widespread, the BTP committee was influenced in its decision-making to a great extent by the emerging Web Services architecture and protocols. To that end, the committee has defined a binding to SOAP 1.1 over HTTP 1.1 as part of the BTP 1.0 specification. This binding addresses the issue of how to transport BTP messages between the client application, the coordinator, and participants in a transactional application. However, BTP is not Web Services-specific and can be used in other environments where SOAP and HTTP are not applicable, but where long-running transactions are a requirement (for example, mobile environments). The BTP specification splits the messages into two types: those that are for consumption solely by BTP infrastructure, and those that are meant for application consumption which may also propagate BTP messages. In situations where BTP messages are exchanged without the
encumbrance of application messages, the strategy is straightforward. A message (or several messages, if one of the message-compounding mechanisms is used) is simply propagated within the body of the SOAP envelope. For example, a typical begin message takes roughly the following form (the listing is abbreviated, and element names approximate the BTP 1.0 schema):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <btp:begin transaction-type="atom"
               xmlns:btp="urn:oasis:names:tc:BTP:1.0:core"/>
  </soap:Body>
</soap:Envelope>
For application messages that also carry BTP content, the situation is different: the BTP messages are typically located within the header of the SOAP envelope, as in the following example, where a BTP context is propagated with an application-specific method call (the surrounding markup is reconstructed against the BTP 1.0 schema, and the application elements are purely illustrative):

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <btp:context xmlns:btp="urn:oasis:names:tc:BTP:1.0:core">
      <btp:superior-address>
        <btp:binding>soap-http-1</btp:binding>
        <btp:binding-address>http://mybusiness.com/btpservice</btp:binding-address>
      </btp:superior-address>
      <btp:superior-identifier>12fa6de4ea3ec</btp:superior-identifier>
      <btp:superior-type>atom</btp:superior-type>
    </btp:context>
  </soap:Header>
  <soap:Body>
    <app:reserveSeats xmlns:app="http://mybusiness.com/app">
      <app:seat>99</app:seat>
      <app:seat>101</app:seat>
    </app:reserveSeats>
  </soap:Body>
</soap:Envelope>
9.4.5 Participants
We've already mentioned that each BTP participant supports a two-phase termination protocol via prepare, confirm, and cancel operations. What the participant does when asked to prepare is implementation-dependent (e.g., reserve the theater ticket); it then returns an indication of whether or not it succeeded. However, in contrast to an atomic transaction, the participant does not have to guarantee that it can remain in this prepare state; it may indicate that it can do so for only a specified period of time, and what action it will take (confirm or undo) if it has not been told how to finish before this period elapses. In addition, no indication of how prepare is implemented is implied in the protocol, such that resource reservation (locking), as happens in an ACID transaction system, need not occur.
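A participant's time-limited promise to stay prepared might be modeled like this (a hypothetical sketch; BTP expresses the caveat as a qualifier on the wire, not as a Python dictionary):

```python
import time

class TicketParticipant:
    """A participant whose prepared state is held only for a limited time."""
    def __init__(self, hold_seconds, default_action="cancel"):
        self.hold_seconds = hold_seconds
        self.default_action = default_action   # what happens on timeout
        self.state = "active"
        self.prepared_at = None

    def prepare(self):
        self.state = "prepared"
        self.prepared_at = time.monotonic()
        # The vote plus its caveat: "prepared, but only for hold_seconds".
        return {"vote": "prepared",
                "holds_for": self.hold_seconds,
                "then": self.default_action}

    def tick(self, now=None):
        """Apply the default action once the hold period has elapsed."""
        now = time.monotonic() if now is None else now
        if self.state == "prepared" and now - self.prepared_at > self.hold_seconds:
            self.state = {"cancel": "cancelled",
                          "confirm": "confirmed"}[self.default_action]

seat = TicketParticipant(hold_seconds=60)
caveat = seat.prepare()
seat.tick(now=seat.prepared_at + 61)   # the decision arrived too late
```

Here the coordinator learns up front that a late confirm will find the seat already released.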
9.4.6 Roles in BTP
Although for simplicity we have talked about services, coordinators, and participants, within BTP all endpoints are either superiors or inferiors or both. An entity within the coordinating
Figure 9.5 Superior and inferior relationships.
entity's system plays the role of superior (e.g., the atom coordinator) and an entity within the service plays the role of an inferior (e.g., the participant). Each inferior has only one superior. However, a single superior may have multiple inferiors within one or multiple parties. A tree of such relationships may be wide, deep, or both, as shown in figure 9.5. A superior receives reports from its inferiors as to whether they are "prepared." It gathers these reports in order to determine which inferiors should be canceled and which confirmed. The superior does this either by itself or with the cooperation of the application element responsible for its creation and control, depending upon whether the transaction is an atom or a cohesion (as we shall see later). The initiator of a transaction communicates with an atom/cohesion coordinator (factory) and asks it to start a new atom or cohesion. Once it is created, information about the atom or cohesion (the context) can be propagated to Web Services in order for them to associate their work with it. Although work is typically conducted within the scope of an atom, it is entirely possible for services to register participants directly with cohesions. The terminator of the atom or cohesion will typically be the same entity as the initiator, but need not be (e.g., a long-running stock purchase transaction may be started by the company that requires the stock, and finished by the company that delivers it). Although an atom can be instructed to confirm all participants immediately, it is more typically instructed to prepare them first, and later to either confirm or cancel them.
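The superior/inferior tree and the gathering of "prepared" reports can be sketched as follows (illustrative names; a real implementation would exchange BTP messages rather than method calls):

```python
class Node:
    """A BTP endpoint: an inferior of its single superior, and a superior
    to any inferiors of its own."""
    def __init__(self, name, superior=None, can_prepare=True):
        self.name = name
        self.can_prepare = can_prepare
        self.inferiors = []
        if superior is not None:
            superior.inferiors.append(self)   # each inferior has ONE superior

    def gather_prepared(self):
        # A node reports "prepared" upward only if it and every one of its
        # inferiors (transitively) are prepared.
        return self.can_prepare and all(i.gather_prepared()
                                        for i in self.inferiors)

coordinator = Node("atom coordinator")
service = Node("service", superior=coordinator)   # superior AND inferior
p1 = Node("participant 1", superior=service)
p2 = Node("participant 2", superior=service, can_prepare=False)

assert not coordinator.gather_prepared()   # one leaf cannot prepare
p2.can_prepare = True
assert coordinator.gather_prepared()       # now the whole tree is prepared
```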
9.4.7 Optimizations
Since BTP is intended for long-running transactions, you might assume that performance has not been a prime factor in its development. However, this is not the case, and in fact BTP contains a number of optimizations not usually found in traditional transaction-processing systems.
One-Shot Typically a participant is enlisted with a BTP transaction when a service invocation occurs (e.g., "reserve seat" in the case of our airline reservation scenario). When the request completes, the response is sent back to the initiator of the request. During transaction termination, the coordinator will interact with the participant to ensure completion. In some circumstances, it may be possible to compound many of these messages into a "one-shot" message. For example, the service invocation may cause a state change to occur that means the participant can prepare immediately after the invocation completes. Rather than have to wait for an explicit coordinator message, BTP allows the enroll request and statement of preparation to be compounded within the service response. The receiver is then responsible for ensuring that this additional information is forwarded to the responsible actors.

Resignation by a Participant In a two-phase commit protocol, in addition to indicating success or failure during the preparation phase, a participant can also return a "read-only" response; this indicates that it does not control any work that has been modified during the course of the transaction, and therefore does not need to be informed of the transaction outcome. This can allow the two-phase protocol to complete quickly, because a second round of messages is not required. The equivalent of this in BTP is for a participant to resign from the transaction in which it was enrolled. Resignation can occur at any time up to the point where the participant has prepared, and is used by the participant to indicate that it no longer has an interest in the outcome of the transaction.

Spontaneous Prepare In some situations, rather than preparing when instructed to do so by the coordinator, a participant may be able to prepare spontaneously.
For example, an invocation may move a service into an idempotent state such that further invocations have no effect on it; in this case, an associated participant may prepare the service immediately, rather than wait for the instruction to do so. In BTP, a participant is allowed to attempt to prepare at any point and inform the coordinator of the result.

Autonomous Decision by a Participant In a traditional two-phase protocol a participant enrolls with a transaction and waits for the termination protocol before it either confirms or cancels. To achieve consensus, this is necessarily a blocking protocol, which means that if a coordinator fails before delivering the final-phase messages, prepared participants must remain blocked, holding on to (possibly valuable) resources. Modern transaction-processing systems have augmented two-phase commit with heuristics, which allow such participants to make unilateral decisions about whether they will commit or roll back. Obviously, if a participant makes a choice that turns out to be different from that taken by other participants, nonatomic behavior occurs. BTP has its equivalent of heuristics, allowing participants to make unilateral decisions as well. However, in contrast to other transaction implementations, the protocol allows a participant to
give the coordinator prior knowledge of what that decision will be and when it will be taken. A participant may prepare and present the coordinator with some caveats as to how long it will remain in this state and into what state it will then migrate (e.g., "will remain prepared for ten days and then will cancel the seat reservation"). This information may then be used by the coordinator to optimize message exchange.

9.4.8 Qualifiers
An interesting approach taken by BTP to loosely coupled domains and long-running interactions was to introduce the notion of qualifiers to the protocol. A qualifier can be thought of as a caveat to the aspect of the protocol with which it is associated. Essentially, a qualifier is a way of providing additional extended information within the protocol. Although the BTP specification provides some standard qualifier types (such as time-outs for how long a participant is willing to remain in a prepared state), it is possible to extend them and provide new implementations that are better suited to the application or participant. Obviously, any use of or reliance on nonstandard qualifiers will reduce application portability.
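One way to picture qualifiers is as typed, extensible annotations carried alongside a protocol message. The sketch below is purely illustrative (the class names are not the specification's XML elements):

```python
from dataclasses import dataclass

@dataclass
class Qualifier:
    standard: bool          # standard BTP qualifier, or application-defined?

@dataclass
class TimeLimit(Qualifier):
    seconds: int            # how long the sender will stay prepared

@dataclass
class MinimumQuote(Qualifier):
    currency: str           # a custom, application-specific caveat
    amount: int

# A "prepared" vote annotated with one standard and one custom qualifier.
prepared_message = {
    "vote": "prepared",
    "qualifiers": [
        TimeLimit(standard=True, seconds=864000),               # ten days
        MinimumQuote(standard=False, currency="USD", amount=50),
    ],
}

# Any reliance on nonstandard qualifiers reduces portability.
portable = all(q.standard for q in prepared_message["qualifiers"])
```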
9.4.9 Example of Using BTP
Consider the example we discussed earlier, illustrated in figure 9.3: a user wants to buy an airline ticket, book a taxi, and get some insurance to cover the trip. How would a user deploy BTP in order to coordinate this application in a reliable manner? All of this needs to happen within a single business transaction, but the problem is that we wish to obtain the cheapest insurance quote as we go along, without losing prior quotes until we know that they are no longer the cheapest; at that point we want to be able to release those quotes while retaining the cheapest. In a traditional transaction system, all of the work performed within a transaction must be either accepted (committed) or declined (rolled back); the required loosening of atomicity is not supported. In BTP, however, we can use atoms and cohesions. A cohesion is first created to manage the overall business interactions. Then we can book the airline ticket and reserve the taxi (remember, none of this work actually happens until we confirm the entire business transaction). The business logic (application, client, etc.) creates an atom for the airline and taxi phase of the work (call it ReserveAtom) and enrolls it with the cohesion, as shown in figure 9.6. Once the client has obtained the context from the factory, it can invoke the airline and taxi reservation services within the scope of the atom, such that their work is ultimately controlled by its outcome; because this is an atom, we know that either both the airline ticket and the taxi will be booked or neither will be. When a suitable flight and taxi can be obtained, ReserveAtom is prepared to reserve the bookings for some service-specific time.
Then two new atoms (AtomQuote1 and AtomQuote2) are created and enrolled with the cohesion (our overall business transaction), before being used to obtain two different quotes from the respective insurance services; at this point, the airline ticket and taxi remain in a reserved state that we (via the cohesion) can still control.
Figure 9.6 Setting up and using the ReserveAtom context.
When the quote from the first insurance site is obtained, it is obviously not known whether it is the best quote, so the business logic can prepare AtomQuote1 to maintain the quote while it communicates with the second insurance site. If that site does not offer a better quote, the application can cancel AtomQuote2, and it now has its final confirmation set of atoms (ReserveAtom and AtomQuote1), which it can confirm. (See figure 9.7.)
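The whole interaction can be condensed into a toy driver (hypothetical classes; a real BTP client would exchange XML messages with the factory and coordinator rather than call methods):

```python
# Driving the vacation scenario: a cohesion encloses three atoms; the
# business logic prepares ReserveAtom and the quote atoms, then confirms
# the cheapest quote and cancels the other.

class Atom:
    def __init__(self, name):
        self.name, self.state = name, "active"
    def prepare(self): self.state = "prepared"
    def confirm(self): self.state = "confirmed"
    def cancel(self):  self.state = "cancelled"

class Cohesion:
    def __init__(self):
        self.atoms = []
    def enroll(self, atom):
        self.atoms.append(atom)
    def confirm_set(self, keep):
        # Once the confirm-set is chosen, the cohesion collapses to an atom
        # over `keep`; everything else is cancelled.
        for a in self.atoms:
            (a.confirm if a in keep else a.cancel)()

cohesion = Cohesion()
reserve = Atom("ReserveAtom")                 # flight + taxi bookings
q1, q2 = Atom("AtomQuote1"), Atom("AtomQuote2")
for a in (reserve, q1, q2):
    cohesion.enroll(a)

reserve.prepare()                             # hold the bookings
q1.prepare()                                  # hold quote A while asking B
quote1, quote2 = 50, 65                       # B turned out not cheaper
cohesion.confirm_set({reserve, q1} if quote1 <= quote2 else {reserve, q2})
```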
9.5 Web Services Coordination and Transactions
In August 2002, IBM, Microsoft, and BEA released the Web Services Coordination (WS-C) and Web Services Transactions (WS-Tx) specifications. These specifications were updated in 2003, and WS-Tx was separated into WS-Atomic Transaction (WS-AT) and WS-Business Activity (WS-BA) [5][6]; however, for simplicity we will continue to refer to them as WS-Tx. The fundamental idea underpinning WS-C is that there is a generic need for a coordination infrastructure in a Web Services environment. The WS-C specification defines a framework that allows different coordination protocols to be plugged in to coordinate work among clients, services, and participants. WS-Tx provides specific plug-ins for transactions. Whatever coordination protocol is used, and in whatever domain it is deployed, the same generic requirements are present:

• Instantiation (or activation) of a new coordinator for the specific coordination protocol, for a particular application instance
Figure 9.7 Booking the vacation.
• Registration of participants with the coordinator
• Propagation of context

The main components involved in using and defining WS-C are the following:
1. The activation service On behalf of a specific coordination protocol, this service creates a new coordinator and its associated context. It also implicitly creates an associated registration service. The type of protocol support required is identified at creation time by a Uniform Resource Identifier (URI) provided by the initiator of the transaction.
2. The registration service Again acting on behalf of a specific coordination protocol (and a specific coordinator instance), an instance of this service is used by participants to enroll with the coordinator.
3. The context This contains the information necessary for WS-C to perform coordination, as well as information specific to the protocol implementation. The schema states that a context consists of a URI that uniquely identifies the type of coordination required (xs:anyURI), an endpoint where participants to be coordinated can be registered (wsu:PortReferenceType), and an extensibility element (xs:any) designed to carry an arbitrary, protocol-specific context payload.
9.5.1 WS-Atomic Transaction

To begin an atomic transaction, the client application first asks the activation service to create a coordination context of the atomic transaction coordination type. In the example exchange, the client sends a CreateCoordinationContext message to the activation service at http://example.org/ws-transaction/activation, identifying itself as http://example.org/ws-transaction/client-app and requesting the coordination type http://schemas.xmlsoap.org/ws/2003/09/wsat. The activation service responds with a context of roughly the following form (the surrounding markup is reconstructed; element names approximate the September 2003 WS-Coordination schema):

<wscoor:CoordinationContext
    xmlns:wscoor="http://schemas.xmlsoap.org/ws/2003/09/wscoor"
    xmlns:wsu="http://schemas.xmlsoap.org/ws/2002/07/utility">
  <wsu:Identifier>http://example.org/tx-id/aabb-1122-ddee-3344-ff00</wsu:Identifier>
  <wsu:Expires>2003-06-30T00:00:00-08:00</wsu:Expires>
  <wscoor:CoordinationType>http://schemas.xmlsoap.org/ws/2003/09/wsat</wscoor:CoordinationType>
  <wscoor:RegistrationService>
    <wsu:Address>http://example.org/ws-transaction/registration</wsu:Address>
  </wscoor:RegistrationService>
</wscoor:CoordinationContext>
After obtaining a transaction context from the coordinator, the client application proceeds to interact with Web Services to accomplish its business-level work. With each invocation on a business service, the client propagates the context, such that each invocation is implicitly scoped by the transaction. This is identical to the way in which traditional transaction systems operate. Transaction termination uses the two-phase commit protocol (Durable2PC). Figure 9.8 shows the state transitions of a WS-Atomic Transaction and the message exchanges between coordinator and participant; the coordinator-generated messages are shown in solid lines, whereas the participant messages are shown in dashed lines. In addition to the Durable2PC protocol, WS-Atomic Transaction provides for a Volatile2PC protocol, which is the WS-Atomic Transaction equivalent of the synchronization protocol discussed at the start of this chapter. When an atomic transaction is terminating, the associated coordinator first executes the prepare phase of the Volatile2PC protocol if any participants have registered for it. Any failures at this stage will cause the transaction to roll back. After the
Figure 9.8 Two-phase commit state transitions.
Durable2PC protocol has completed, the commit or rollback phase of the Volatile2PC protocol is executed, informing any participants about the transaction outcome.
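The ordering of the two protocols can be sketched as follows (a toy model; the real WS-AT state machine in figure 9.8 has more states and failure paths):

```python
# Volatile2PC participants (e.g. caches flushing to durable stores) are
# prepared BEFORE the durable two-phase commit runs; the outcome is then
# delivered to everyone.

log = []

class P:
    def __init__(self, name, protocol):
        self.name, self.protocol = name, protocol
    def prepare(self):
        log.append(("prepare", self.name))
        return True
    def commit(self):
        log.append(("commit", self.name))

def terminate(participants):
    volatile = [p for p in participants if p.protocol == "Volatile2PC"]
    durable = [p for p in participants if p.protocol == "Durable2PC"]
    if not all(p.prepare() for p in volatile):   # volatile prepare first
        return "aborted"
    if not all(p.prepare() for p in durable):    # then the durable 2PC
        return "aborted"
    for p in durable + volatile:                 # outcome to all participants
        p.commit()
    return "committed"

cache = P("cache", "Volatile2PC")
db = P("database", "Durable2PC")
outcome = terminate([cache, db])
```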
9.5.2 WS-Business Activity
Most business-to-business applications require transactional support in order to guarantee a consistent outcome and correct execution. These applications often involve long-running computations, loosely coupled systems, and components that do not share data, location, or administration, and it is difficult to incorporate atomic transactions within such architectures. For example, an online bookshop may reserve books for an individual for a specific period of time, but if the individual does not purchase the books within that period, they will be "put back onto the shelf" for others to buy. Furthermore, because it is not possible for anyone to have an infinite supply of stock, some online shops may appear to users to reserve items for them, but in fact may allow others to preempt that reservation (i.e., the same book may be "reserved" for multiple users concurrently); users may subsequently find that the item is no longer available, or may have to have it reordered specially for them. A business activity is designed specifically for these kinds of long-duration interactions, where exclusively locking resources is impossible or impractical. In this model, services are requested to do work, and where those services have the ability to undo any work, they inform the WS-Business Activity that, should it later decide to cancel the work, it can instruct the service to execute its undo behavior. Therefore, although the full ACID semantics is not maintained by a WS-Business Activity, consistency can still be maintained through compensation. Such compensation may use backward error recovery, but will typically employ forward recovery. Central to WS-Business Activity are the notions of scopes and activity-to-task relationships. A business activity may be partitioned into scopes, where a scope is a business task or unit of work. Such scopes can be nested to arbitrary levels, forming parent-and-child relationships.
A parent scope has the ability to select which child tasks are to be included in the overall outcome protocol for a specific business activity, and thus nonatomic outcomes are clearly possible. A business activity defines a group that allows the relaxation of atomicity based on business-level decisions. If a child task experiences an error, it can be caught by the parent, which may be able to compensate and continue processing. When a child task completes, it can either leave the business activity or signal to the parent that the work it has done can be compensated later. In the latter case, the compensation task may be called by the parent should it ultimately need to undo the work performed by the child. A task within a business activity can specify its outcome to the parent directly, without waiting for a request. This feature is useful when tasks fail: the notification can be used by the business activity's exception handler to modify goals and drive processing forward, without having to wait until the end of the transaction to discover the failure. In order for the WS-Business Activity model to function, the following assumptions are made:
• All state transitions are reliably recorded, including application state and coordination metadata (the record of sent and received messages).
• All request messages are acknowledged, so that problems are detected as early as possible; this avoids executing unnecessary tasks, and catches a problem while rectifying it is simpler and less expensive.
Like atomic transactions, the Business Activity model has multiple protocols: BusinessAgreementWithParticipantCompletion and BusinessAgreementWithCoordinatorCompletion. Under the BusinessAgreementWithParticipantCompletion protocol, a child activity is initially created in the active state; if it finishes the work it was created to do, and no more participation is required within the scope of the WS-Business Activity (such as when the activity operates on immutable data), then the child can unilaterally send an exited message to the parent. However, if the child task finishes and wishes to continue in the WS-Business Activity, it must be able to compensate for the work it has performed (e.g., unreserve the seat on the flight). In this case it sends a completed message to the parent and waits to receive the final outcome of the WS-Business Activity from the parent. This outcome will either be a close message, meaning the WS-Business Activity has completed successfully, or a compensate message, indicating that the parent activity requires that the child task reverse its work. A business activity also can terminate in one of two ways: atomic or mixed. In the atomic mode, all participants will see the same outcome (complete or undo), whereas in the mixed mode, some participants may be completed, whereas others may be forced to undo their work. This latter approach is similar to the functionality offered by the cohesion transaction type in OASIS BTP that was discussed earlier. The BusinessAgreementWithCoordinatorCompletion protocol is identical to the BusinessAgreementWithParticipantCompletion protocol, with the exception that the child cannot autonomously decide to end its participation in the business activity. Rather, the child task relies upon the parent to inform it when the child has received all requests for it to perform work, which the parent does by sending the complete message to the child. 
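The participant-completion life cycle can be sketched as a small state machine (illustrative; the real protocol exchanges SOAP messages and has additional fault transitions):

```python
# BusinessAgreementWithParticipantCompletion in miniature: a child task
# either exits (nothing to compensate), or completes and later receives
# close (activity succeeded) or compensate (undo the work).

class ChildTask:
    def __init__(self):
        self.state = "active"
        self.compensated = False

    def exit(self):            # child: no further interest in the outcome
        assert self.state == "active"
        self.state = "exited"

    def completed(self):       # child: work done, but it can be undone
        assert self.state == "active"
        self.state = "completed"

    def close(self):           # parent: the activity succeeded
        assert self.state == "completed"
        self.state = "closed"

    def compensate(self):      # parent: reverse your work
        assert self.state == "completed"
        self.compensated = True
        self.state = "compensated"

booking = ChildTask()
booking.completed()            # e.g., seat reserved, undo remains possible
booking.compensate()           # parent decides to cancel the trip
```

Under CoordinatorCompletion the only difference in this sketch would be that completed() is triggered by a complete message from the parent rather than by the child itself.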
The WS-BPEL specification, which defines a workflow environment for Web Services, requires compensation handlers that work on compensation transactions, in order to undo work undertaken in long-duration interactions. Rather than define its own compensation transaction model, the WS-BPEL specification defines the requirements, which are similar to those satisfied by the Business Activity model. Thus, it is likely (though not mandated) that WS-BPEL implementations will leverage WS-Business Activity implementations to provide compensation transactions in an interoperable manner.

9.5.3 Example of Using Business Activities
For simplicity, we shall concentrate on the purchasing of the cheapest flight in the travel agent scenario, as shown in figure 9.9 (i.e., the travel agent must now obtain three quotes before choosing the cheapest for the customer).
Figure 9.9 Using Business Activities to choose the cheapest insurance quote.
In this case a WS-Business Activity is used, and quotes are committed immediately, as per the WS-Business Activity model. In the nonfailure case, things are straightforward, and each child WS-Business Activity reports to the coordinator, via a completed message, that it has performed the work. However, this means that the work can be compensated for later. Once the travel agent has received all of the quotes, it can choose the cheapest flight and send the participant acting on behalf of that service a close message; all other airline reservation sites receive a compensate message via their corresponding participants. In the failure case, we shall assume that flight B could not get a quote, so its corresponding WS-Business Activity fails. It reports to the coordinator through a fault message that it has failed. Upon receiving this message, the logic driving the WS-Business Activity may use forward error recovery to try to obtain a quote from an alternative site, as shown in figure 9.10. If the forward error recovery works and the alternative quote is obtained, then the WS-Business Activity proceeds as before. However, if the forward error recovery fails, the WS-Business Activity finds itself in a situation where it cannot make forward progress. Therefore, there is no choice but to cancel and compensate for all of the previously successfully completed activities. The WS-Business Activity coordinator does this by sending compensate messages to all of these activities. Once the compensation has successfully taken place, the system should be in a state which is equivalent to the state before the purchase operations were carried out.
9.6 The Web Services Composite Application Framework
In July 2003, Arjuna Technologies, Fujitsu, IONA Technologies, Oracle, and Sun released the Web Services Composite Application Framework [11]. It is divided into three parts:
Figure 9.10 Handling errors and making forward progress.
• Web Service Context (WS-Context), a lightweight framework for simple context management.
• Web Service Coordination Framework (WS-CF), a sharable mechanism to manage context augmentation and life cycle, and to guarantee message delivery. The overall concept is very similar to the WS-C specification discussed earlier, and is based on work conducted within the CORBA environment [9].
• Web Services Transaction Management (WS-TXM), comprising three distinct protocols for interoperability across multiple transaction managers and support for multiple transaction models (two-phase commit, long-running actions, and business process flows).

The overall aim of the combination of the parts of WS-CAF is to support various transaction-processing models and architectures. The parts define incremental layers of functionality that can be implemented and used by these and other specifications, separately or together.
9.6.1 The Transaction Models
As with the IBM and Microsoft specifications, WS-TXM defines a set of pluggable transaction protocols that can be used with the coordination framework: ACID transaction, which is designed for interoperability across existing transaction infrastructures and is sufficiently similar to traditional two-phase commit (including a synchronization protocol) that we will not spend any further time on it, and long-running action and business process transaction, which are both designed for long-duration business interactions.
Long-Running Activities The long-running action model is designed specifically for business interactions that occur over a long duration. Within this model, all work performed within the scope of an application is required to be compensatable. Therefore, an application's work is either performed successfully or undone. How individual Web Services perform their work, and how they ensure it can be undone if compensation is required, are implementation choices and are not exposed to the long-running action model. The long-running action model simply defines the triggers for compensation actions and the conditions under which those triggers are executed. In the long-running action model, each application is bound to the scope of a compensation interaction. For example, when a user reserves a seat on a flight, the airline reservation center may take an optimistic approach and actually book the seat and debit the user's account, relying on the fact that most of its customers who reserve seats later book them; the compensation action for this activity would obviously be to unbook the seat and credit the user's account. Work performed within the scope of a nested long-running action must remain compensatable until an enclosing service informs the individual service(s) that it is no longer required. A compensator is the long-running action participant that operates on behalf of a service to undo the work it performs within the scope of a long-running action. How compensation is carried out will obviously be dependent upon the service; compensation work may be carried out by other long-running actions which themselves have compensators. As in any business interaction, application services may or may not be compensatable. Even the ability to compensate may be a transient capability of a service.
The long-running action model allows applications to combine services that can be compensated with those that cannot be compensated (in the WS-Business Activity model this would be equivalent to having services that always responded with an exit message to the coordinator). Obviously, by mixing the two service types the user may end up with a business activity that will ultimately not be undone by the long-running action model, but that may require outside (application-specific) compensation. For example, consider the travel example illustrated in figure 9.11, where nonfailure activities are connected by solid lines and compensation activities are connected by dashed lines. In this case the user first attempts to book a first-class seat on an airline; the compensator for this (which is executed in the event of a crash or failure to complete the booking, for example) starts another long-running action that tries to cancel the booking. If the cancellation long-running action fails, then its compensator e-mails the system administrator for the airline reservation site; if the cancellation succeeds, however, it tries to book an economy seat on the same flight.

Figure 9.11 Compensator long-running actions.

When a service performs work that later may have to be compensated within the scope of a long-running action, it enlists a compensator participant with the long-running action coordinator. The coordinator will send the compensator one of the following messages when the activity terminates:

• Success The activity has completed successfully. If the activity is nested, then compensators may propagate themselves (or new compensators) to the enclosing long-running action. Otherwise, the compensators are informed that the activity has terminated and they can perform any necessary cleanups.

• Fail The activity has completed unsuccessfully. All compensators that are registered with the long-running action will be invoked to perform compensation, in the reverse order. The coordinator forgets about all compensators that indicated they operated correctly. Otherwise, compensation may be attempted again (possibly after a period of time) or, alternatively, a compensation violation has occurred and must be logged.
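The Success/Fail triggers above can be sketched as a toy coordinator. This is an illustration of the model only, not the WS-TXM API; all class and method names are invented for this sketch.

```python
# Illustrative sketch of an LRA coordinator (invented names, not the WS-TXM
# API): services enlist compensators; on Fail, compensation runs in reverse
# enlistment order, and any compensation violation is logged.

class LRACoordinator:
    def __init__(self):
        self.compensators = []   # callables that undo a service's work
        self.violations = []     # logged compensation violations

    def enlist(self, compensator):
        self.compensators.append(compensator)

    def close(self):
        """Success: the work stands; compensators are simply discarded."""
        self.compensators.clear()
        return "Success"

    def cancel(self):
        """Fail: invoke compensators in reverse order, logging failures."""
        for compensate in reversed(self.compensators):
            try:
                compensate()
            except Exception as exc:
                self.violations.append(str(exc))
        self.compensators.clear()
        return "Fail"

# The flight example from the text: canceling undoes the work in reverse
# order (credit the account, then unbook the seat).
undo_log = []
lra = LRACoordinator()
lra.enlist(lambda: undo_log.append("unbook seat"))
lra.enlist(lambda: undo_log.append("credit account"))
outcome = lra.cancel()
```

The reverse ordering matters because later work (the debit) may depend on earlier work (the booking), so it must be undone first.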
An application can be structured (sequentially and concurrently) so that long-running actions are used to assemble units of compensatable work, and then are held in the active state while the application performs other work in the scope of different (concurrent or sequential) long-running actions. Only when the right subset of work is arrived at by the application will that subset be confirmed; all other long-running actions will be told to cancel (complete in a failure state). Figure 9.12 illustrates how our travel agent scenario may be structured using this technique. LRA1 is used to obtain the taxi to the airport. The user then wishes to get the cheapest flight from three different airlines. Therefore, the agency structures each seat reservation as a separate long-running action. In this example, the airline represented by LRA2 gives a cost of $150 for the flight; while LRA2 is still active, the application starts LRA3, a new independent top-level long-running action, to ask the next airline for a cost: LRA3 gives a value of $160, and so it is canceled. Finally, the travel agency starts LRA4 to check the other airline, which gives a price of $120 for the seat. Thus, LRA2 is canceled and LRA4 is confirmed, with the result that the seat is
Figure 9.12 Using long-running actions to select units of work.
bought. The travel agency then uses the same technique to select the cheapest travel insurance from among two options (using LRA5 and LRA6). This capability, in which some services may be compensated for in the scope of the same LRA while others are accepted, is similar to the BTP cohesions transaction model discussed earlier. The ability to have nonatomic outcomes is therefore another common factor that runs through all of the Web Services transactions specifications and use cases.

Business Process Model

The Business Process model is specifically aimed at tying heterogeneous transaction domains into a single business-to-business transaction. So, for example, with the Business Process model it is possible to have a long-running business transaction span messaging, workflow, and traditional ACID transactions. In this model all parties involved in a business process reside within business domains. Business process transactions are responsible for managing interactions between these domains. A business process (business-to-business interaction) is split into business tasks, and each task executes within a specific business domain. A business domain may itself be further subdivided into other business domains (business processes) in a recursive manner. Each domain may represent a different transaction model if such a federation of models is more appropriate to the activity. Each business task may provide implementation-specific countereffects in the event that the enclosing task must cancel. In addition, the controlling application may periodically request that all business domains checkpoint their state such that they can either be consistently rolled back to that checkpoint by the application or restarted from the checkpoint in the event of a failure. For example, if we return to our earlier travel scenario, then the user may interact synchronously with the travel agent to build up the required details of the vacation.
Alternatively, the user may submit an order (possibly with a list of alternative requirements, such as destinations,
dates, etc.) to the agent, who will call back when it has been filled. Likewise, the travel agent then submits orders to each supplier, requiring them to call back when each component is available (or is known to be unavailable). The Business Process transaction model supports this synchronous and asynchronous interaction pattern.

Business domains are instructed to perform work within the scope of a global business process. The business process has an overall manager that may be informed by individual tasks when they have completed their work (either successfully or unsuccessfully), or it may periodically communicate with each task to determine its current status. In addition, each task may make periodic checkpoints of its progress such that if a failure occurs, it may be restarted from that checkpoint rather than having to start from the beginning. A business process can either terminate in a confirmed (successful) manner, in which case all of the work requested will have been performed, or it will terminate in a canceled (unsuccessful) manner, in which case all of the work will be undone. If the work cannot be undone, then this fact must be logged. One key difference between the Business Process transaction model and traditional transaction systems is that it assumes success: the Business Process model is optimistic and assumes the failure case is in the minority and can be handled or resolved offline if necessary, though not always automatically, often requiring human interaction. Logging is essential in the Business Process model for replay and compensation. However, recovery may ultimately be the responsibility of a manual operator if automatic recovery/compensation is not possible.

9.7 Comparison
In table 9.1, we give a feature comparison of the various specifications we have examined in this chapter.

Table 9.1 Comparison of Transaction Specifications

Capability                            BTP    WS-CAF    WS-C/WS-Tx
Extensible coordination service       No     Yes       Yes
Backed by a standards organization    Yes    Yes       No
Atomic outcome                        Yes    Yes       Yes
Mixed outcome                         Yes    Yes       Yes
Web Services-specific                 No     Yes       Yes
ACID transaction support              No     Yes       Yes
Compensation transaction support      Yes    Yes       Yes
Interposition                         Yes    Yes       Yes
Synchronization protocol              No     Yes       Yes
Spontaneous prepare                   Yes    Yes       Yes
9.8 Conclusions
Web Services transactions are not meant to replace current transactional infrastructures and investments; rather, they are designed to work with them. It is extremely unlikely that transactional Web Services will be developed from the bottom up, reimplementing all functionality and especially transactions. Web Services are about interoperability as much as they are about the Web (or the Internet). Most of the applications that run in this environment are going to use Web Services to glue together islands of strongly coupled domains, such as J2EE or CORBA. Although other protocols may well evolve over the coming years, it's likely that some combination of WS-C/Tx and WS-CAF will become the standard, since these specifications are precisely about fostering interoperability with existing infrastructures and have the backing of the industry.

Although BTP predates these other specifications, it has not had the same level of industry backing, and as a result is unlikely to become the Web Services standard for transactions. However, BTP has definitely had an impact on the other specifications (e.g., the mixed mode for WS-BusinessActivity or WS-LRA). Furthermore, since all of the other specifications support multiple transaction models, it is possible that BTP may be supported within either or both of the competing specifications as simply another transaction protocol. It is important to remember that because BTP is not tied to the Web Services stack as much as the other specifications, BTP offers a route to long-running, compensation-based transactions for environments other than Web Services.

Much has been made of the fact that ACID transactions aren't suitable for loosely coupled environments such as the Web. However, very little attention has been paid to the fact that these loosely coupled environments tend to have large, strongly coupled corporate infrastructures behind them.
Asking the question "What can replace ACID transactions?" is wrong; the question to ask is "How can we leverage what already exists while at the same time supporting loose coupling?" As we have seen throughout this chapter, that question is a complex one to answer.

Note

This chapter was originally submitted in 2004–2005 and captures the research work and views of the authors at that time.
References

[1] Philip A. Bernstein and Eric Newcomer. Principles of Transaction Processing. Morgan Kaufmann, 1997.
[2] OASIS BTP Technical Committee. Revised BTP Specification. http://www.oasis-open.org/committees/business-transactions. April 2002.
[3] D. J. Taylor. How big can an atomic action be? In Proceedings of the Fifth Symposium on Reliability in Distributed Software and Database Systems, pp. 121–124. IEEE Computer Society Press, 1986.
[4] H. Garcia-Molina and K. Salem. Sagas. In Proceedings of the ACM SIGMOD International Conference on the Management of Data. ACM Press, 1987.
[5] The Web Services Atomic Transaction Specification. http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnglobspec/html/wsat.asp. 2004.
[6] The Web Services Business Activity Specification. http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnglobspec/html/wsba.asp. 2004.
[7] Jim Gray and Andreas Reuter. Transaction Processing: Concepts and Techniques. Morgan Kaufmann, 1993.
[8] J. J. Halliday, S. K. Shrivastava, and S. M. Wheater. Implementing support for work activity coordination within a distributed workflow system. In Proceedings of the Third International Conference on Enterprise Distributed Object Computing (EDOC '99), pp. 116–123. Springer-Verlag, 1999.
[9] OMG. Additional Structuring Mechanisms for the OTS Specification. Document orbos/2000-04-02. September 2000.
[10] Nortel, supported by the University of Newcastle upon Tyne. Workflow Management Facility. OMG document bom/98-03-01. Submission to the OMG Business Object Domain Task Force (BODTF), 1998.
[11] The Web Services Composite Application Framework Technical Committee. http://www.oasis-open.org/committees/documents.php?wg_abbrev=ws-caf. 2003.
[12] The Workflow Management Coalition. http://www.wfmc.org.
[13] The Web Services Business Process Execution Language Committee. http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wsbpel. 2003.
[14] The Web Services Choreography Description Language. http://www.w3.org/TR/2004/WD-ws-cdl-10-20040427. April 2004.
10 Transactional Web Services
Stefan Tai, Thomas Mikalsen, Isabelle Rouvellou, Jonas Grundler, and Olaf Zimmermann
10.1 Introduction
Service-Oriented Computing (SOC) is a distributed computing model for intra- and interorganizational application integration. The model is centered on open standards and the pervasiveness of Internet technologies. SOC suggests a component- and metadata-driven approach to dynamically discover and select services, to execute services in heterogeneous implementation environments, and to interact with services in a client-server or peer-to-peer fashion. The Web Services platform architecture defines a set of specifications that provide an open, XML-based platform for the description, discovery, and interoperability of distributed, heterogeneous applications as services [39],[1]. The platform comprises the basic specifications SOAP, UDDI, and WSDL, as well as specifications for business process management and various interoperability protocols supporting, for example, transactions, reliable messaging, and security. Figure 10.1 illustrates the Web Services platform architecture. The various Web Services specifications are designed to complement each other, serving as building blocks that can be combined to provide interoperability at different software layers, from low-level transport protocols to high-level application interactions. The combined usage of some specifications is well understood, such as WSDL for service description, SOAP bindings in WSDL for interaction, and UDDI registries holding WSDL descriptions for service discovery [19],[49]. However, this is not the case for all compositions of specifications. Different specifications may also suggest the use of different middleware systems supporting the specifications. For example, a workflow engine may be used to execute Web Services business processes, and a transaction monitor may be used for Web Services transaction interoperability. The (flexible, dynamic) composition of specifications may then raise the need for corresponding (flexible, dynamic) middleware integration. 
In this chapter, we address the problem of transactional coordination in Service-Oriented Computing. We argue for the use of declarative policy assertions to advertise and match support for different transaction models, and to define transactional semantics of Web Services compositions. We focus on concrete, protocol-specific policies that apply to relevant Web Services specifications.
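The policy-matching idea just outlined can be illustrated with a toy sketch. This is only an illustration: WS-Policy defines a richer normal form and intersection algorithm, and the protocol names below are merely labels, not assertion syntax from any specification.

```python
# Toy policy matching (illustration only; real WS-Policy intersection
# operates on normalized policy alternatives, not flat lists): a requester
# and a provider each advertise the transaction protocols they support,
# and a match is any protocol both sides support.

def match_policies(requester_assertions, provider_assertions):
    """Return the assertions acceptable to both parties, in requester order."""
    return [a for a in requester_assertions if a in provider_assertions]

requester = ["WS-AtomicTransaction", "WS-BusinessActivity"]
provider = ["WS-BusinessActivity", "WS-ReliableMessaging"]
compatible = match_policies(requester, provider)
```

An empty result means the two parties share no supported transaction model and cannot interact transactionally.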
260
Stefan Tai and Colleagues
Figure 10.1 Web Services specifications. (The stack shown comprises XML foundations (XML Namespaces, XML Information Set); messaging (SOAP, WS-Addressing); metadata (WSDL, WS-Policy, WS-PolicyAttachment, WS-MetadataExchange); business processes (WS-BPEL); transactions (WS-Coordination, WS-AtomicTransaction, WS-BusinessActivity); reliable messaging (WS-ReliableMessaging); and security (WS-Security, WS-Trust, WS-Federation).)
We explore the combined use of the Web Services specifications for service composition and service coordination: the Business Process Execution Language (BPEL) for Web Services, and the specifications that use the Web Services Coordination (WS-C) framework. These include Web Services Atomic Transaction (WS-AT) and Web Services Business Activity (WS-BA). We present a policy-based approach that extends BPEL with coordination semantics and uses policies to drive and configure corresponding middleware systems to support a reliable SOC environment. The chapter is structured as follows. We first introduce the basic transactional concepts on which our approach relies, and give an overview of the current Web Services specifications in support of reliable SOC. Next, we introduce our policy-based approach to Web Services transactional coordination and describe a middleware system in support of this approach. Finally, before concluding, we go over related work and discuss a number of open issues in the area of transactional coordination for Service-Oriented Computing.
10.2 Concepts

10.2.1 Transaction Styles
Transactions are a common way to address reliability and fault tolerance in distributed computing, and a variety of transaction models have been proposed. Most of these models are variants of one of the two main transaction processing styles: direct transaction processing (DTP) and queued transaction processing (QTP) [5].

Direct Transaction Processing

The direct transaction processing (DTP) style is illustrated in figure 10.2. A single DTP interaction comprises one client (process A) and one or more service providers (process B). The client sees the invocation of B as part of an atomic unit that is protected by transaction Ta. The provider sees its processing as an atomic unit protected by transaction Tb. Ta and Tb are coordinated as part of a single global transaction Tg. The invocation of the service provider is synchronous, with the client blocking while waiting for the server's response (transactional RPC). The invocation also is reference-based
Figure 10.2 Direct transaction processing (DTP).
Figure 10.3 Queued transaction processing (QTP).
(thus, direct), and requires the provider to be available at the time the invocation takes place. State changes are executed in isolation (that is, they are not visible outside the transaction), and can be automatically rolled back by the middleware should the transaction abort. The coordination of client and provider(s) is achieved using a distributed two-phase commit (2PC) protocol.

Queued Transaction Processing

The queued transaction processing (QTP) style is illustrated in figure 10.3. In QTP, the client's view spans two transactions, Ta1 and Ta2, which cannot be treated as an atomic unit. This is in contrast with DTP, where the client's action was protected by a single transaction. The separation into two transactions has implications for the recovery of A should there be a failure after executing Ta1 but prior to completing Ta2. The invocation of the server is asynchronous through middleware (queue) mediation, using reliable messaging protocols to ensure guaranteed delivery of request and reply messages. Therefore, the server does not have to be available at the time the client is sending a message, and the client is not blocked while waiting for the server's response. Unlike DTP, no global
transaction exists in QTP to coordinate the transactions Ta1, Tb, and Ta2, and state changes performed in each of these transactions, when completed, are visible. It is the client application's responsibility to correlate request and reply messages and to initiate compensation actions to logically undo any unwanted state changes.

The Java 2 Enterprise Edition (J2EE) [29] Enterprise Java Beans (EJB) [30] programming model can be used to implement QTP and DTP clients and servers. The EJB model includes Session Beans, Entity Beans, and Message-Driven Beans. Session Beans and Entity Beans can be used to implement DTP providers, whereas Message-Driven Beans (MDBs) can be used to implement QTP providers. EJBs can also be clients to other service providers, and therefore can be used to implement DTP and QTP clients.

10.2.2 Programming Models
DTP and QTP interactions can be programmed in various ways, using transaction-processing middleware. The middleware typically provides APIs for use by applications to create transactions and to register transactional resources (such as a database). Another approach is to have transactions managed by a middleware container inside of which the applications are deployed; in this case, a declarative programming model using transaction attributes is provided.

Transactional APIs

Transactional APIs exist for the client (the initiator) of a transaction and for the transactional service providers. For DTP interactions, the client API provides operations to create transaction contexts, to associate contexts with invocations, and to complete transactions. A provider interface includes operations to register (persistent) resources and to participate in completion protocols such as the two-phase commit. Examples of DTP APIs are the CORBA OTS interfaces [26], the Java Transaction API [32], and the APIs of the X/Open distributed transaction-processing reference model [46]. For QTP interactions, a messaging interface is provided by the messaging middleware. A message queue manager interface allows setting up queues, and a message queue interface allows putting messages to and getting messages from the queue. The client and the provider use the same interfaces. An example set of APIs is the Java Message Service (JMS) queuing interfaces [31]. In addition to message put and get requests, JMS allows grouping of a set of message deliveries into an atomic unit of work. In both DTP and QTP, the applications (client program and providers) use the APIs explicitly within their program logic. Transactions are demarcated within the program, and outcome processing is encoded as part of the application.
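A minimal sketch of such a DTP-style transactional API follows. The names are invented, loosely in the spirit of JTA/OTS but not their actual interfaces: the client demarcates the transaction, enlists resources, and completion runs a two-phase commit.

```python
# Sketch of explicit transaction demarcation with an invented API: enlisted
# resources are asked to prepare (phase 1); only if all vote yes are they
# committed (phase 2), otherwise all are rolled back.

class Resource:
    def __init__(self, name, vote=True):
        self.name = name
        self.vote = vote          # the resource's prepare vote
        self.state = "active"

    def prepare(self):
        return self.vote

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled back"

class Transaction:
    def __init__(self):
        self.resources = []

    def enlist(self, resource):
        self.resources.append(resource)

    def complete(self):
        if all(r.prepare() for r in self.resources):   # phase 1: prepare
            for r in self.resources:                   # phase 2: commit
                r.commit()
            return "committed"
        for r in self.resources:                       # phase 2: rollback
            r.rollback()
        return "rolled back"

# Demarcation as a client would do it: begin, do work, enlist, complete.
tx = Transaction()
db = Resource("database")
queue_mgr = Resource("queue manager")
tx.enlist(db)
tx.enlist(queue_mgr)
result = tx.complete()
```

A single "no" vote in phase 1 drives every enlisted resource to rollback, which is the atomicity guarantee the surrounding text describes.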
While the invocation of a service provider by a client may be remote, the use of a transaction management API (for example, to create a transaction context) is mostly a local call to the middleware. Exceptions to this rule, however, are possible. In the emerging Web services industry, transaction management interfaces are available as Web services for local and remote invocation, as we discuss later.
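The QTP style can likewise be sketched with an invented queue API, JMS-like only in spirit: puts and gets grouped in a unit of work take effect atomically at commit.

```python
# Sketch of queued transaction processing (invented API): a put becomes
# visible only when the unit of work commits, and a get is undone if the
# unit rolls back, mirroring the Ta1/Tb/Ta2 structure described earlier.

class Queue:
    def __init__(self):
        self.messages = []

class UnitOfWork:
    def __init__(self):
        self._puts = []      # buffered (queue, message) pairs
        self._gets = []      # consumed (queue, message) pairs

    def put(self, queue, message):
        self._puts.append((queue, message))   # not yet visible to anyone

    def get(self, queue):
        message = queue.messages.pop(0)
        self._gets.append((queue, message))   # restored on rollback
        return message

    def commit(self):
        for queue, message in self._puts:
            queue.messages.append(message)
        self._puts.clear()
        self._gets.clear()

    def rollback(self):
        for queue, message in reversed(self._gets):
            queue.messages.insert(0, message)
        self._puts.clear()
        self._gets.clear()

# Ta1: the client sends a request; it is visible only after commit.
requests = Queue()
ta1 = UnitOfWork()
ta1.put(requests, "book seat")
ta1.commit()
```

If the provider's transaction Tb rolls back after a get, the message reappears on the queue and can be redelivered, which is how queued processing tolerates provider failures.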
Declarative Transactions

A different approach to programming DTP and QTP interactions is to use declarative assertions of transaction behavior, which are interpreted by a middleware container inside of which applications are deployed. With declarative transactions, the transactional semantics of an application can be specified using attributes/policies, rather than by explicit calls to APIs. The basic idea is to associate transactional attributes with different programming language elements (e.g., a method), and have the middleware manage transactions according to these attributes. This disentangles the application's business logic from its transaction management logic. A prominent example of declarative transaction programming is the J2EE Enterprise Java Beans (EJB) container-managed transaction model [30]. The transactional behavior of an application is declared using a predefined set of transaction attributes; transactions are created, joined, ignored, and completed by the EJB container according to these attributes. EJB transaction attributes are primarily a means for configuring the transactional behavior of an application in a container, and are not meant to be communicated to the client of the EJB. In general, however, DTP and QTP interactions can be described from both the client's and the provider's viewpoints, and a container can be designed to manage transactions for both outgoing and incoming requests. Some systems, including EJB, support both declarative transactions and explicit transactional APIs. Further, the combined use of transactional APIs and declarative transactions within a single transactional scope is also an option. This is useful in cases where the declarative approach cannot fully describe the desired transactional behavior of the application.

10.3 Specifications
Web services are applications that are described, published, and accessed over the Web using open XML standards. They promote a Service-Oriented Computing model where an application describes both its functionality (in a platform-independent fashion) and its mappings to available communication protocols and deployed service implementations. This description can be discovered by client applications using service registries in order for the client to use the service by means of XML messaging. The Web Services family of specifications (WS-*) defines a platform architecture for the integration of diverse applications. The WS-* specifications aim to accommodate varying application and interoperability requirements through separate specifications that can be composed to address specific needs. In the next section, we provide a brief summary of the Web Services specifications relevant to our discussion. We refer the reader to the published specifications and diverse Web Services literature for further details [39] [1] [7] [21].
10.3.1 Basics
The Simple Object Access Protocol (SOAP) specification [36] defines a messaging protocol for Web Services. SOAP is an XML message format (consisting of an envelope, message headers, and a message body) and a standard encoding mechanism that allows messages to be sent over a variety of transports, including HTTP and SMTP.

The interface of a Web Service is described using the Web Services Description Language (WSDL) [38]. WSDL separates the abstract functionality of a service from its mappings to available deployed implementations. The abstract description consists of the operations supported by a service and the definition of their input and output messages. A port type groups a set of operations. The concrete aspects of a WSDL definition include bindings that map operations and messages of a port type to specific protocol and data-encoding formats (such as SOAP), ports that provide the location of physical endpoints implementing a specific port type using a specific binding, and service definitions as collections of ports.

The Web Services Addressing (WS-Addressing) specification [37] provides transport-neutral mechanisms (elements) for identifying Web Services endpoints and capturing common addressing information (such as source and destination endpoints). Complementary to these basic specifications, the Web Services Interoperability Initiative (WS-I) defines profiles such as the WS-I Basic Profile 1.1 [43], sample scenarios, and other assets designed to promote cross-platform adoption and interoperability of the basic Web Services specifications.

10.3.2 Reliable Messaging
The basic Web Service communication capabilities defined above provide only limited reliability guarantees. For example, SOAP over HTTP is “reliable” as long as the connection stays alive; it delivers messages at most once, in order, and with a definite acknowledgment for each message delivery or delivery failure. HTTP is unreliable in the sense that when a connection is lost, the message sender will get a connection failure event, but be in doubt about the status of the message. WS-Reliable Messaging (WS-RM) [14] defines a protocol that allows messages to be delivered reliably in the presence of system and network failures (or the sender to be notified in case of nondelivery). The WS-RM specification is designed to be composed with other Web Services specifications, such as SOAP and WS-Addressing, to provide additional message delivery guarantees, such as exactly-once delivery. It does so by defining protocol header blocks as well as WSDL contracts for consumer and provider endpoints that jointly implement common integration patterns such as Message Channel and Guaranteed Delivery [6]. With WS-ReliableMessaging support, Web Service consumers can act as QTP clients, and Web Service providers as QTP providers.
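The delivery guarantees just described can be illustrated with a small simulation. This is not the WS-ReliableMessaging wire protocol, only the underlying idea: retransmission gives at-least-once delivery, and duplicate detection at the receiver turns that into exactly-once processing.

```python
# Simulation of reliable delivery (not the actual WS-ReliableMessaging
# protocol): the sender retries until it observes an acknowledgment, and
# the receiver ignores message IDs it has already processed.

class ReliableReceiver:
    def __init__(self):
        self._seen = set()
        self.processed = []

    def deliver(self, msg_id, body):
        if msg_id not in self._seen:   # duplicate detection
            self._seen.add(msg_id)
            self.processed.append(body)
        return "ack"                   # acknowledge duplicates too

def send_with_retry(receiver, msg_id, body, drop_first_ack=False):
    """Retry until acknowledged; returns the number of send attempts."""
    attempts = 0
    while True:
        attempts += 1
        ack = receiver.deliver(msg_id, body)
        if drop_first_ack and attempts == 1:
            continue                   # ack lost in transit: sender retries
        if ack == "ack":
            return attempts

receiver = ReliableReceiver()
attempts = send_with_retry(receiver, msg_id=1, body="order", drop_first_ack=True)
```

Even though the message is delivered twice (because the first acknowledgment is lost), the receiver processes it exactly once.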
The Reliable Asynchronous Messaging Profile 1.0 (RAMP) [11] is an extension profile of a composition of the WS-I Basic Profile 1.1, the WS-I Simple Soap Binding Profile 1.0 [45], and the WS-I Basic Security Profile 1.0 [44]. In particular, it adds interoperability requirements for WS-RM (the receiver of a message must understand WS-RM header blocks if they are used), WS-Addressing (WS-Addressing elements must be present in every exchanged message), and WS-SecureConversation [25].

10.3.3 Coordination
The Web Services Coordination (WS-Coordination) specification [12] defines an extensible framework which can be used to implement different coordination models that require a shared context. This includes traditional atomic transactions and long-running business transactions; interoperability protocols for these models based on WS-Coordination are defined in the Web Services Atomic Transaction and Web Services Business Activity specifications [12]. WS-Coordination enables the creation of coordination contexts for propagation among coordination participants, and the registration of participants for particular coordination protocols of a given coordination type. The specification defines three main elements that are commonly required for different kinds of coordination:

• A coordination context The context that is shared and propagated among the participants in the coordinated activity
• An activation service The Web Service used to create a coordination context
• A registration service The service used by participants to register for inclusion in specific coordination protocols
WS-Coordination coordination types extend the coordination context, adapt the registration service (and optionally, the activation service), and define a set of specific coordination protocols and corresponding protocol Web Services. The protocol services, registration service, and activation service together constitute a coordinator (coordination middleware). Figure 10.4 illustrates the principal WS-Coordination architecture. A coordination participant, in the role of a requester or a responder, is an application that uses a coordinator. The application interacts (locally) with the coordinator to create a coordination context (omitted from the figure). The context is propagated to any other (remote) participant(s) via an application message. The context includes the WS-Addressing endpoint reference of the registration service of the requester’s coordinator, so that the responder’s coordinator (“subcoordinator”) can register for participation in a specific coordination protocol. The coordination protocol messages are then exchanged between the coordinators. A coordination participant requires a coordination middleware for protocol registration and specific protocol interactions. Protocols include agreement coordination protocols, such as a durable two-phase commit protocol (in case of atomic transactions) or a participant-driven completion protocol for business transactions.
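The message flow described above can be walked through with invented class names (this sketches the architecture, not the specification's XML messages): the requester's coordinator creates a context, the context travels with the application message, and the responder's subcoordinator registers through the registration service the context references.

```python
# Walk-through of the WS-Coordination flow (invented names): activation
# mints a context carrying the coordination type and a reference to the
# registration service; a propagated context lets a remote participant's
# subcoordinator register for a specific protocol.

import itertools

class Coordinator:
    _ids = itertools.count(1)

    def __init__(self, name):
        self.name = name
        self.registrations = []    # (participant, protocol) pairs

    def create_context(self, coordination_type):
        """Activation service: create a shareable coordination context."""
        return {
            "id": next(Coordinator._ids),
            "type": coordination_type,
            # Stand-in for the WS-Addressing endpoint reference of this
            # coordinator's registration service.
            "registration_service": self.register,
        }

    def register(self, participant, protocol):
        """Registration service: enlist a participant for a protocol."""
        self.registrations.append((participant, protocol))
        return "registered"

# Requester side: create a context and propagate it in the app message.
root = Coordinator("requester")
ctx = root.create_context("atomic-transaction")

# Responder side: its subcoordinator registers via the propagated context.
sub = Coordinator("responder")
status = ctx["registration_service"](sub.name, "Durable2PC")
```

After registration, the two coordinators exchange the protocol messages directly, as figure 10.4 indicates.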
Figure 10.4 Web Services coordination.
10.3.4 Atomic Transactions
The WS-Atomic Transaction specification [12] extends the WS-Coordination framework with a coordination type for performing distributed atomic transactions. The WS-Atomic Transaction coordination type1 includes three protocols: Completion, Volatile 2PC, and Durable 2PC. We discuss each protocol briefly below. The Volatile 2PC (V2PC) and Durable 2PC (D2PC) protocols are used to coordinate the transactional participants of the distributed transaction. A transactional participant registers for either the V2PC or D2PC protocol, using the registration service specified in the coordination context. The V2PC protocol is meant for participants that manage volatile resources (such as a cache) and participants that wish to be synchronized with the transaction but don't manage any persistent state. The D2PC protocol is meant for participants that manage durable resources (such as a database or queue manager), where fault tolerance and consistency are critical. All participants in the transaction, V2PC and D2PC, are driven to the same conclusion: they either all commit or they all roll back. The difference is in the order in which they are prepared and the behavior in the presence of failures. V2PC participants are prepared prior to all D2PC participants, allowing the overall set of transaction participants (both V2PC and D2PC) to grow until the first D2PC participant is prepared. In the event of a failure, there is no guarantee that a V2PC participant will be able to determine the final outcome of the transaction; D2PC participants can always (eventually) determine the outcome of the transaction. The Completion protocol is used in cases where transaction completion must be initiated explicitly using Web Services messaging. In such cases, the Completion participant registers for the Completion protocol, using the registration service specified in the coordination context. The participant then uses the Completion protocol either to commit or to roll back the transaction.
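The prepare ordering just described can be sketched as follows. All names are illustrative; a real implementation must also handle recovery and durable logging, which this sketch omits.

```python
# Sketch of WS-AtomicTransaction completion ordering (illustrative only):
# Volatile 2PC participants are prepared before any Durable 2PC
# participant, and a single "no" vote drives every participant to
# rollback, so all participants reach the same outcome.

class Participant:
    def __init__(self, name, durable, vote=True):
        self.name = name
        self.durable = durable    # True: D2PC, False: V2PC
        self.vote = vote
        self.outcome = None

def complete(participants):
    volatile = [p for p in participants if not p.durable]
    durable = [p for p in participants if p.durable]
    prepare_order = []
    for p in volatile + durable:       # volatile participants prepare first
        prepare_order.append(p.name)
        if not p.vote:
            for q in participants:     # all roll back together
                q.outcome = "aborted"
            return prepare_order, "aborted"
    for p in participants:             # all commit together
        p.outcome = "committed"
    return prepare_order, "committed"

cache = Participant("cache", durable=False)       # would register for V2PC
database = Participant("database", durable=True)  # would register for D2PC
order, outcome = complete([database, cache])      # order ignores input order
```

Preparing volatile participants first lets a cache flush its data to a durable participant before that participant is itself prepared.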
Once the fate of the transaction is known, the coordinator uses the Completion protocol to notify the participants of its outcome. The Completion protocol is typically used in combination with the WS-Coordination Activation service. Joint use of the Completion protocol and the Activation service allows a WS-Atomic Transaction to be created and coordinated by a third-party Web Service.
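For illustration, a coordination context carrying the WS-Atomic Transaction coordination type might look like the following sketch. The element structure follows WS-Coordination, but the namespace URIs, identifier, and registration address are assumptions, not taken from the text above.

```xml
<!-- Hypothetical coordination context; prefixes, URIs, and address are assumed. -->
<wscoor:CoordinationContext
    xmlns:wscoor="http://schemas.xmlsoap.org/ws/2004/10/wscoor"
    xmlns:wsa="http://www.w3.org/2005/08/addressing">
  <wscoor:Identifier>urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8</wscoor:Identifier>
  <!-- Identifies the WS-Atomic Transaction coordination type -->
  <wscoor:CoordinationType>http://schemas.xmlsoap.org/ws/2004/10/wsat</wscoor:CoordinationType>
  <!-- Participants register here for Completion, Volatile 2PC, or Durable 2PC -->
  <wscoor:RegistrationService>
    <wsa:Address>https://coordinator.example.org/registration</wsa:Address>
  </wscoor:RegistrationService>
</wscoor:CoordinationContext>
```

A participant that manages a database would register for the Durable 2PC protocol through the registration service above, whereas a cache manager would register for Volatile 2PC.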
10.3.5 Business Activities
The WS-Business Activity specification [12] extends the WS-Coordination framework with coordination types for loosely coupled, long-running business transactions. The specification defines two alternative coordination types: atomic outcome and mixed outcome. With the atomic outcome coordination type, all participants that have completed work on behalf of a business activity are driven to the same conclusion: they either all accept the work they have performed or they all undo it. With the mixed outcome coordination type, individual participants that have performed work on behalf of a business activity can be driven to different conclusions; that is, some may be asked to accept the work they have performed, and others may be asked to undo it.

Two coordination protocols can be used with either coordination type: BusinessAgreementWithParticipantCompletion and BusinessAgreementWithCoordinatorCompletion. A participant registered for the ParticipantCompletion protocol is responsible for notifying the coordinator when it has completed all of its work on behalf of a given business activity; the coordinator must wait for all such participants to notify it before attempting to close the business activity. A participant registered for the CoordinatorCompletion protocol relies on the coordinator to determine when work on behalf of a business activity is complete.
10.3.6 Policy Framework and Metadata Exchange
The Web Services Policy Framework (WS-Policy) [9] is a general-purpose, extensible model for describing a broad range of requirements, preferences, and capabilities in an XML Web Services-based system. WS-Policy defines a grammar and syntax for expressing functional or nonfunctional properties of a Web Service in a declarative manner. A policy is an XML expression that logically combines one or more assertions, which specify concrete or abstract service characteristics such as a required security authentication scheme or a desired quality of service.

Policies can be flexibly attached to various Web Services definitions, including WSDL-type definitions, as described in the Web Services Policy Attachment specification [10]. The WS-Metadata Exchange specification [13] defines message protocols to retrieve the policies, WSDL, or XML schema of a Web Services endpoint and/or a given target namespace.
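As a sketch of the WS-Policy grammar, the following policy offers a choice between two alternatives. The wsp:Policy, wsp:ExactlyOne, and wsp:All operators come from WS-Policy; the assertion elements and the example namespace are invented for illustration.

```xml
<!-- Illustrative WS-Policy expression; the ex: assertions are hypothetical. -->
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:ex="http://example.org/assertions">
  <wsp:ExactlyOne>
    <!-- Alternative 1: messages must be digitally signed -->
    <wsp:All>
      <ex:RequireSignature/>
    </wsp:All>
    <!-- Alternative 2: transport-level security suffices -->
    <wsp:All>
      <ex:RequireTransportSecurity/>
    </wsp:All>
  </wsp:ExactlyOne>
</wsp:Policy>
```

A requester satisfies such a policy by supporting all assertions of at least one alternative inside wsp:ExactlyOne.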
10.3.7 BPEL
The Business Process Execution Language (BPEL) [24] is a choreography language for defining the flows of a business process that composes various Web Services. Compositions are created by defining control semantics around a set of interactions with the services being composed. The BPEL composition model is recursive: a BPEL process, like any Web Service, supports a set of WSDL interfaces that enable it to be exposed and invoked as a regular Web Service. The interpretation and execution of BPEL processes require some middleware support.

A BPEL process contains a set of typed connectors known as partner links, each specifying the port type required from the party being connected along that link and the port type provided by the process to that party in return. The composition model explicitly stays away from binding these to actual service endpoints, leaving the door open for flexible binding schemes and selection algorithms. Endpoints can be bound at deployment time or at runtime.

The activity is the unit of composition. Primitive activities provide such actions as Web Services messaging (e.g., Invoke, Receive, Pick) and throwing faults. Structured activities impose predefined control semantics on the activities nested within them, such as sequential or parallel execution. Additional control may also be defined using explicit conditional control links. BPEL scopes contain a set of (nested) activities and provide the units of data, fault, and compensation handling.
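The concepts above can be sketched in a minimal process outline. The partner link, port type, operation, and variable names are invented for illustration, and the namespace URI assumes the BPEL4WS 1.1 schema.

```xml
<!-- Minimal BPEL sketch; all names are hypothetical. -->
<process name="OrderProcess"
    xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/">
  <!-- Typed connectors to the parties this process interacts with -->
  <partnerLinks>
    <partnerLink name="customer"  partnerLinkType="lns:CustomerLink"/>
    <partnerLink name="warehouse" partnerLinkType="lns:WarehouseLink"/>
  </partnerLinks>
  <!-- A structured activity imposing sequential control semantics -->
  <sequence>
    <!-- Primitive messaging activity: receive the initiating request -->
    <receive partnerLink="customer" portType="lns:OrderPT"
             operation="submitOrder" variable="order" createInstance="yes"/>
    <!-- A scope: unit of data, fault, and compensation handling -->
    <scope name="ProcessOrder">
      <invoke partnerLink="warehouse" portType="lns:WarehousePT"
              operation="reserveParts" inputVariable="order"/>
    </scope>
    <reply partnerLink="customer" portType="lns:OrderPT"
           operation="submitOrder" variable="confirmation"/>
  </sequence>
</process>
```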
10.4 Transactional Workflows
Service coordination and service composition (and, correspondingly, coordination middleware and composition middleware) are complementary aspects and systems of transactional workflows. WS-Coordination (and coordination-type definitions based on WS-Coordination) and BPEL are the respective Web Services specifications for service coordination and composition. Whereas the WS-Coordination protocols for service coordination are required properties of the external interactions between services, the BPEL schema for a service composition is an aspect that is mostly internal to the implementation of the service that composes other Web Services.

Figure 10.5 illustrates BPEL composition (a) and WS-Coordination (b). We combine the functional composition of a set of Web Services with the nonfunctional (reliability, transactions, security, and other) coordination properties required for process partner interactions. This is a common requirement in the development of (production) workflows [20] that define and implement both the business logic and the quality of service necessary to integrate distributed heterogeneous applications.

10.4.1 Example
Consider a federated order-processing and vendor-managed inventory system such as the one introduced in [4]. The system is used by car dealers to order parts from an automobile manufacturer; the manufacturer in turn obtains parts from a supplier operating multiple warehouses. All application communications in the system are built using Web Services protocols. Here, we focus on the warehouse application that communicates with the supplier and other, subordinate warehouse services.
Stefan Tai and Colleagues
Figure 10.5 BPEL Web Services Composition (a) and Web Services Coordination (b). [Panel (a): a client invokes a BPEL Web Service through its WSDL interface over SOAP, and the BPEL process in turn invokes other Web Services. Panel (b): a client has a coordinator create a coordination context (Create Ctx), which is propagated in SOAP messages (Ctx) to the Web Services, each interacting with its coordinator.]
Transactional Web Services
Figure 10.6 Warehouse application example: coordination requirements for business processes. [The supplier application interacts with the warehouse application (a BPEL Web Service) using reliable messaging (WS-RM); the warehouse application updates the Warehouse 1 and Warehouse 2 databases within an atomic transaction (WS-AT).]
In this example, the warehouse application receives orders for parts from the supplier application. In order to tolerate potential message loss and/or temporary unavailability of the warehouse application, the supplier requires the use of a reliable messaging protocol. The protocol ensures delivery of messages sent; messaging middleware is used to (re)send a message from the supplier to the warehouse until a response is received. This describes a QTP interaction, for which a declarative policy assertion for a reliable messaging protocol can be used.

All incoming orders at the warehouse application are divided among a number of subordinate services representing physical warehouses and databases. For example, to ensure inventory coverage, 70 percent of an order may go to warehouse 1 and the remaining 30 percent may go to warehouse 2. An atomic transaction protocol is needed to ensure transactional semantics when updating the different databases. The warehouse application is the transaction client, and the subordinate warehouse database services are transaction participants. This describes a DTP interaction, for which a declarative policy assertion for an atomic transaction protocol can be used.

In summary, the warehouse application can be viewed as a (business) process that is made available as a service and that uses other services as part of the process. The application comprises a sequence of activities, including order receipt and order processing, some of which require the use of an interoperable protocol for reliable messaging or atomic transaction processing. The application is illustrated in figure 10.6. In the following section, we explore how such processes can be implemented by using and composing the Web Services specifications.
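The atomic update of the two warehouse databases can be sketched as a BPEL fragment. All partner link, port type, and variable names are invented for illustration; the mechanism that gives the scope its transactional semantics is the subject of the following sections.

```xml
<!-- Hypothetical sketch: split one order across two warehouse databases. -->
<scope name="UpdateInventory">
  <!-- Both updates must succeed or fail together (atomic transaction). -->
  <flow>
    <invoke partnerLink="warehouse1Db" portType="lns:InventoryPT"
            operation="update" inputVariable="order70"/>  <!-- 70 percent -->
    <invoke partnerLink="warehouse2Db" portType="lns:InventoryPT"
            operation="update" inputVariable="order30"/>  <!-- 30 percent -->
  </flow>
</scope>
```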
10.4.2 Policy-based Transactional Workflows
To declare client and provider transaction support in a Service-Oriented Architecture, we propose the use of client transaction policies and provider transaction policies [34]. A client transaction policy defines the client's capabilities, requirements, and preferences to invoke a service provider in a transaction according to a transaction model, to participate in reliable messaging, and/or to drive compensation. A service provider transaction policy defines the provider's capabilities and requirements to be invoked in a transaction according to a transaction model, to engage in reliable messaging, and/or to support compensation. Client and provider policies can be matched for compliance before executing transactions; a matched client-provider policy describes a (short-lived or long-lived) contract between two partners to engage in transactional interactions.

Transaction policies can be expressed at different levels of abstraction. Abstract policies allow describing and matching transaction capabilities and requirements independent of the (lower-level) interoperability formats and protocols that can be used to implement the transaction. The Transaction Attitudes framework and middleware form an example system supporting such abstract policies [22]. Concrete policies, on the other hand, refer to the specific formats and protocols used to implement the transaction, such as the Web Services transaction protocols (WS-Atomic Transaction and WS-Business Activity). Concrete policies are needed to ensure service compatibility and proper execution of service interactions according to published (standard) specifications.

In the remainder of this chapter, we focus on concrete policies (expressed using WS-Policy) to support transaction processing compliant with the Web Services BPEL and transactions specifications. The combined use of BPEL and WS-Coordination can take different forms.
First, a BPEL process, exposed as a regular Web Service, can be coordinated externally (for example, as a participant in an atomic transaction). Second, the BPEL process itself can coordinate a set of services.

In the first case, the BPEL process appears as a regular Web Service to the coordination requester, and it participates in the externally initiated coordination by accepting an incoming shared coordination context. The incoming context may be further propagated to the services that the BPEL process invokes, in which case the invoked services will register with and be enlisted as participants in the external transaction. This is illustrated in figure 10.7(a). If registration and enlistment with a different (local, interposed) transaction coordinator is desired, however, the BPEL process may choose not to propagate the incoming context but to create a new context, as shown in figure 10.7(b).

Whenever the BPEL process creates a coordination context (using a coordination middleware) for propagation to services that it invokes, the process describes a composition of coordinated services. That is, in addition to business process control semantics, coordination control semantics are introduced. BPEL coordination control semantics can vary both within a single BPEL process and between different processes, since different transaction models (coordination types and protocols) are available and different transactional patterns can be implemented. For example, selected process scopes may be declared to be coordinated according to a specific coordination type, or all interactions with a single service partner may be declared to be coordinated as a transaction.

Figure 10.7 Combining BPEL and WS-Coordination (a) without and (b) with interposed coordination. [In (a), the client's coordination context (Ctx1) is propagated by the BPEL Web Service to the Web Services it invokes, which register with the external coordinator. In (b), the BPEL Web Service creates a new context (Create Ctx2) with a local, interposed coordinator and propagates that context (Ctx2) instead.]

A high degree of flexibility is desired when extending BPEL with coordination semantics, in order to align with the dynamic nature of Service-Oriented Architectures: desired or required transaction models and mechanisms for interaction with service partners may be determined only late at runtime, or may even change during process execution. Accordingly, the composition of coordinated services requires paying careful attention to the (integration of the) middleware systems: the BPEL process engine and the transaction coordination middleware. The flexibility desired for selecting and varying coordination models requires the BPEL middleware to support dynamic transaction configuration, using the transaction middleware.

10.5 Coordination Policies and Policy Attachments
In the following, we present a policy-based approach to Web Services transactions. We introduce coordination policies and coordination policy attachments to Web Services definitions that allow programming and configuring transactional processes and participants.

10.5.1 Coordination Policies
We define coordination policies as declarative assertions of coordination behavior that define a capability or a requirement to participate in a distributed transaction. A coordination policy is represented as an XML element that references the XML namespace URI of a published WS-Coordination coordination type. Coordination types are defined in the WS-Atomic Transaction and WS-Business Activity specifications.

The XML syntax defined in the WS-Policy framework can be used to express coordination policies. The policies can then be attached to diverse Web Services definitions, including WSDL port types (for transaction participants) and BPEL process definitions (for transaction clients). XML extensibility and referencing mechanisms, as defined in the WS-Policy Attachment specification, are used for this purpose. Concrete examples of coordination policies and policy attachments are given below.

The <CoordinatedService> Element

We define the <CoordinatedService> element to express coordination policies in terms of WS-Coordination coordination types. The <CoordinatedService> element identifies a WS-Policy assertion, and thus a statement that is part of the WS-Policy policy definition container, the <wsp:Policy> element. A coordination type is associated with each assertion.
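The outline of the assertion can be sketched as follows. The attributes correspond to the description that follows; the WS-Policy namespace URI and the extensibility notation are assumptions.

```xml
<!-- Sketch of the CoordinatedService assertion outline; the URI is assumed. -->
<CoordinatedService
    CoordinationType="xs:anyURI"
    wsp:Usage="..."
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    ... />
```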
The following describes the attributes and tags listed in the schema outlined above:

/CoordinatedService
This identifies a coordinated service assertion.

/CoordinatedService/@CoordinationType
This mandatory attribute is the URI of the coordination type associated with the assertion. The range of coordination types is extensible (as defined in the WS-Coordination specification), with each coordination type defined in a published or proprietary specification. The only requirement is that coordination types be unique (i.e., represented as URIs), and that all parties processing the policy assertion understand the semantics of that coordination type.

/CoordinatedService/@wsp:Usage
This mandatory attribute contains the usage of the assertion; the type and semantics of this attribute are defined in the WS-Policy specification.

/CoordinatedService/@{any}
Optional extensibility attributes used to express additional information about the subject application's asserted coordination capabilities/requirements.

Examples

The following examples illustrate different coordination policies. These policies use the <CoordinatedService> element defined above and the standard constructs of the WS-Policy framework. Below, a coordination policy named "ATRequired" is defined, which can be used to declare required support for the WS-Atomic Transaction coordination type. This coordination policy can be used for DTP-style transactions.
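A sketch of such a policy is shown below; the WS-Policy, WS-Security utility, and WS-Atomic Transaction namespace URIs are assumptions based on the 2004-era specifications.

```xml
<!-- "ATRequired" coordination policy; namespace URIs are assumed. -->
<wsp:Policy wsu:Id="ATRequired"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <CoordinatedService
      CoordinationType="http://schemas.xmlsoap.org/ws/2004/10/wsat"
      wsp:Usage="wsp:Required"/>
</wsp:Policy>
```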
Another example of a coordination policy is a policy that defines optional support for the WS-BusinessActivity MixedOutcome coordination type.
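Under the same assumptions, such a policy (named "BAOptional", as it is referred to later in this chapter) could take the following form; the MixedOutcome coordination type URI is assumed.

```xml
<!-- "BAOptional" coordination policy; namespace URIs are assumed. -->
<wsp:Policy wsu:Id="BAOptional"
    xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <CoordinatedService
      CoordinationType="http://schemas.xmlsoap.org/ws/2004/10/wsba/MixedOutcome"
      wsp:Usage="wsp:Optional"/>
</wsp:Policy>
```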
For QTP-style transactions, no shared transaction context and no distributed participant registration are needed. QTP participants are loosely coupled, and their coordination is achieved by the applications, which rely on reliable messaging middleware. More specifically, QTP requires the transaction participants to support a reliable messaging protocol to exchange request and reply messages with exactly-once delivery semantics. Therefore, standard WS-ReliableMessaging policy assertions can be used. The WS-ReliableMessaging specification defines such policy assertions, an example of which is given below.
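A sketch of such an assertion, based on the RMAssertion element of the WS-ReliableMessaging policy schema, is shown below; the namespace URI and the timeout values are assumptions.

```xml
<!-- WS-ReliableMessaging policy assertion sketch; URI and values are assumed. -->
<wsp:Policy xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy"
    xmlns:wsrm="http://schemas.xmlsoap.org/ws/2005/02/rm/policy">
  <wsrm:RMAssertion>
    <!-- Close the message sequence if no traffic is seen for 10 minutes -->
    <wsrm:InactivityTimeout Milliseconds="600000"/>
    <!-- Resend unacknowledged messages every 3 seconds, with backoff -->
    <wsrm:BaseRetransmissionInterval Milliseconds="3000"/>
    <wsrm:ExponentialBackoff/>
    <wsrm:AcknowledgementInterval Milliseconds="200"/>
  </wsrm:RMAssertion>
</wsp:Policy>
```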
10.5.2 Coordination Policy Attachments
Coordination policies can be meaningfully attached to various elements of a Web Service definition, following the flexible model of WS-Policy Attachments. In this section, we describe a recommended use of attaching coordination policies to WSDL definitions and BPEL processes.

WSDL

In order to define a provider transaction policy, the <CoordinatedService> assertion (and/or a reliable messaging protocol assertion) is attached to a Web Service port. This describes the coordination (transaction) capabilities of a deployed Web Service that can be invoked.

Consider the federated order-processing example introduced earlier. A <CoordinatedService> assertion can be attached to a WSDL port binding definition of the database services to declare their transactional semantics. The <wsp:PolicyReference> element of the WS-Policy specification and the "ATRequired" policy defined above can be used for this purpose.
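A sketch of the attachment is shown below. The binding and port type names are invented; the wsp:PolicyReference element points at the "ATRequired" policy by its wsu:Id.

```xml
<!-- Attaching the "ATRequired" policy to a binding; names are hypothetical. -->
<wsdl:binding name="WarehouseDatabaseBinding" type="tns:WarehouseDatabasePT">
  <wsp:PolicyReference URI="#ATRequired"/>
  <soap:binding style="document"
      transport="http://schemas.xmlsoap.org/soap/http"/>
  ...
</wsdl:binding>
```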
The policy attachment states that the service supports the WS-Atomic Transaction coordination type and its protocols, such as the durable two-phase commit protocol. A client invocation on this port is required to carry a coordination context, so that the invoked operation can be executed according to the coordination type specified. Policy and coordination middleware is required to execute the transaction protocols, to verify policy compliance, to reject invocations, and to raise exceptions.

BPEL Scopes

In order to program transaction clients, coordination policies are attached to BPEL process definitions. We propose attaching coordination policies to BPEL scopes and to BPEL partner links. Coordination policies can also be attached to other activities, including invoke activities, as we discuss later in this chapter.

A BPEL scope is the demarcation of a group of activities of the process. Scopes are the units of fault handling and compensation in BPEL. A BPEL scope with an attached coordination policy represents a transactional unit of work. Using the "ATRequired" WS-Atomic Transaction coordination policy introduced earlier, for example, an atomic scope can be modeled. Using the "BAOptional" WS-BusinessActivity coordination policy, a compensation-based business activity scope can be modeled. A (simplified) example of an atomic funds transfer transaction is given below.
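A minimal sketch of such a scope is shown below. The scope name, partner links, and operations are invented, and the policy-attachment syntax shown inside the scope is illustrative (standard BPEL does not itself define policy attachment).

```xml
<!-- Atomic funds-transfer scope with an attached coordination policy. -->
<!-- The attachment syntax is illustrative, not standard BPEL. -->
<scope name="FundsTransfer">
  <wsp:PolicyReference URI="#ATRequired"/>
  <sequence>
    <invoke partnerLink="sourceAccount" portType="lns:AccountPT"
            operation="debit"  inputVariable="transfer"/>
    <invoke partnerLink="targetAccount" portType="lns:AccountPT"
            operation="credit" inputVariable="transfer"/>
  </sequence>
</scope>
```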