Lecture Notes in Computer Science 2548
Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Juan Hernández Ana Moreira (Eds.)
Object-Oriented Technology: ECOOP 2002 Workshop Reader
ECOOP 2002 Workshops and Posters
Málaga, Spain, June 10-14, 2002
Proceedings
Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors
Juan Hernández
Universidad de Extremadura, Departamento de Informática
Quercus Software Engineering Group
Av. Universidad, s/n, 10.071 Cáceres, Spain
E-mail: [email protected]

Ana Moreira
Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Departamento de Informática, 2829-516 Caparica, Portugal
E-mail: [email protected]
Cataloging-in-Publication Data applied for. A catalog record for this book is available from the Library of Congress. Bibliographic information published by Die Deutsche Bibliothek: Die Deutsche Bibliothek lists this publication in the Deutsche Nationalbibliografie; detailed bibliographic data is available on the Internet.
CR Subject Classification (1998): D.1-3, H.2, F.3, C.2, K.4, J.1 ISSN 0302-9743 ISBN 3-540-00233-2 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by Boller Mediendesign Printed on acid-free paper SPIN: 10871681 06/3142 543210
Preface

The European Conference on Object-Oriented Programming (ECOOP) conference series, in cooperation with Springer-Verlag, is glad to offer the object-oriented research community the sixth edition of the ECOOP Workshop Reader, a compendium of workshop reports and poster summaries from the 16th European Conference on Object-Oriented Programming (ECOOP 2002). ECOOP 2002 was held in Málaga, Spain, from June 10 to June 14, 2002. As usual, the workshops took place during the first two days of the conference and gave authors and participants an opportunity to present and discuss ideas that are topical and innovative in object-oriented technology, in an atmosphere that fostered interaction, exchange, and problem solving. ECOOP 2002 hosted 19 high-quality workshops covering a large spectrum of research topics related to object-oriented technology. This year, these workshops were selected from 25 proposals by the workshop selection committee, primarily in accordance with their scientific merit and their probability of generating lively discussion. This volume contains the reports of those 19 workshops. Each chapter covers one workshop, summarizing the current research being carried out on the workshop topic, the major issues discussed, the main conclusions, and possible directions for further research. Each chapter finishes with a list of updated references where the reader can find complementary information about the workshop theme. The last chapter contains the summaries of the posters displayed at ECOOP 2002.

This book was only possible thanks to the effort of a large group of people contributing in many different ways. We would like to thank the members of the selection committee, each workshop organizer, and each workshop and poster participant. The additional work for the workshop organizers in terms of recording and summarizing the discussions will certainly be appreciated by the readers. Finally, we wish to convey our warm appreciation to our colleagues on the ECOOP 2002 organization team for their unique blend of efficiency and comradeship. Preparing for and holding the conference was a lot of fun. Organizing the workshops and this book proved to be very stimulating and instructive; we wish our readers an equally fruitful experience. We are pretty sure that the ECOOP 2002 Workshop Reader will provide you, the reader, with an excellent snapshot of the major trends in object-oriented technology.

October 2002
Juan Hernández and Ana Moreira
Organization

ECOOP 2002 was organized by the Department of Lenguajes y Ciencias de la Computación of the University of Málaga, and the Department of Informática of the University of Extremadura, under the auspices of AITO (Association Internationale pour les Technologies Objets). The proceedings of the main conference were published as LNCS 2374.
Workshop Chairs: Juan Hernández (University of Extremadura) and Ana Moreira (Universidade Nova de Lisboa, Portugal)
Poster Chair: Juan M. Murillo (University of Extremadura)
Workshop Selection Committee
Mehmet Aksit (University of Twente, The Netherlands)
João Araújo (Universidade Nova de Lisboa, Portugal)
Elisa Bertino (Università degli Studi di Milano, Italy)
Robert France (Colorado State University, USA)
Juan Hernández (Universidad de Extremadura, Spain)
Ana Moreira (Universidade Nova de Lisboa, Portugal)
Ambrosio Toval (Universidad de Murcia, Spain)
Contents
Resource Management for Safe Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Grzegorz Czajkowski, Jan Vitek
Generative Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Krzysztof Czarnecki, Kasper Østerbye, Markus Völter
Tools and Environments for Learning Object-Oriented Concepts . . . . . . . . . 30
Isabel Michiels, Jürgen Börstler, Kim B. Bruce
12th Workshop for PhD Students in Object-Oriented Systems . . . . . . . . . . . 44
Miguel A. Pérez, Pedro J. Clemente
Web-Oriented Software Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Oscar Pastor, Daniel Schwabe, Gustavo Rossi, Luis Olsina
Component-Oriented Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Jan Bosch, Clemens Szyperski, Wolfgang Weck
Concrete Communication Abstractions of the Next 701 Distributed Object Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Antoine Beugnard, Salah Sadou, Laurence Duchien, Eric Jul
Unanticipated Software Evolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Günter Kniesel, Joost Noppen, Tom Mens, Jim Buckley
Composition Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Markus Lumpe, Jean-Guy Schneider, Bastiaan Schönhage, Thomas Genssler
The Inheritance Workshop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Gabriela Arévalo, Andrew Black, Yania Crespo, Michel Dao, Erik Ernst, Peter Grogono, Marianne Huchard, Markku Sakkinen
Model-Based Software Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
Andreas Speck, Elke Pulvermüller, Ragnhild Van Der Straeten, Ralf H. Reussner, Matthias Clauß
Quantitative Approaches in Object-Oriented Software Engineering . . . . . . . 147
Mario Piattini, Fernando Brito e Abreu, Geert Poels, Houari A. Sahraoui
Multiparadigm Programming with Object-Oriented Languages . . . . . . . . . . 154
Kei Davis, Yannis Smaragdakis, Jörg Striegnitz
Knowledge-Based Object-Oriented Software Engineering . . . . . . . . . . . . . . . . 160
Maja D’Hondt, Kim Mens, Ellen Van Paesschen
Object-Orientation and Operating Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
Andreas Gal, Olaf Spinczyk, Dario Alvarez
Integration and Transformation of UML Models . . . . . . . . . . . . . . . . . . . . . . . 184
João Araújo, Jonathan Whittle, Ambrosio Toval, Robert France
Mobile Object Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
Ciarán Bryce
Feyerabend: Redefining Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Wolfgang De Meuter, Pascal Costanza, Martine Devos, Dave Thomas
Formal Techniques for Java-like Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
Sophia Drossopoulou, Susan Eisenbach, Gary T. Leavens, Arnd Poetzsch-Heffter, Erik Poll
Poster Session . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Juan Manuel Murillo, Fernando Sánchez
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Resource Management for Safe Languages

Grzegorz Czajkowski¹ and Jan Vitek²
¹ Sun Microsystems Laboratories, [email protected]
² Purdue University, [email protected]
Abstract. Safe programming languages offer safety and security features that make them attractive for developing extensible environments on a wide variety of platforms, ranging from large servers all the way down to hand-held devices. Extensible environments facilitate the dynamic hosting of a variety of potentially untrusted code. This requires mechanisms to guarantee isolation among hosted applications and to control their usage of resources. While most safe languages provide certain isolation properties, resource management is typically difficult with the current standard APIs and existing virtual machines. This one-day workshop brought together practitioners and researchers working on various approaches to these problems to share ideas and experience.
1 Workshop Overview
The workshop consisted of four 90-minute sessions. In the first one, Doug Lea from the State University of New York at Oswego delivered an invited talk on the Application Isolation API proposed as an extension to the Java programming language [1]. Presentations of accepted position papers were given in the next two sessions. Each author had 7 minutes to present the main idea of his/her work. After all of the authors in a given session had finished, the presentations were discussed; this included time for questions about specific presentations as well as general remarks and brainstorming. The last session was a panel discussion, during which the workshop attendees compiled a list of open or “really difficult” issues in the discussed domain. A total of nine presentations were accepted for the workshop, and each of them was presented. About 25 people attended the workshop, the majority from Europe, with a few attendees from the US. Most of the participants were from academia.
2 Position Paper Summaries
All the papers accepted and presented are available from http://www.ovmj.org/workshops/resman. The section below contains summaries of the papers, provided by the authors.
2.1 Creating a Resource-aware JDK
Authors: Walter Binder (CoCo Software Engineering GmbH, Vienna, Austria) and Vladimir Calderon (University of Geneva, Switzerland)
Contact e-mail: [email protected]
Accounting for and limiting the resource consumption of applications is a prerequisite to preventing denial-of-service attacks in mobile code environments. Moreover, it enables the monitoring and profiling of applications, which may be charged for their utilization of computing resources. Java has become the de facto standard for the implementation of mobile code environments. However, current Java runtime systems do not offer any mechanisms for resource accounting and resource control. Prevailing approaches to providing resource control in Java-based platforms rely on a modified Java Virtual Machine (JVM), on native code libraries, or on program transformations. The Java resource accounting facility J-RAF is based entirely on program transformations. In this approach the bytecode of applications is rewritten in order to expose its CPU and memory consumption (CPU and memory reification). Programs rewritten by J-RAF keep track of the number of executed bytecode instructions (CPU accounting) and update a memory account when objects are allocated or reclaimed by the garbage collector. Resource control via program transformations offers an important advantage over the other approaches: it is independent of any particular JVM and underlying operating system. It works with standard Java runtime systems and may be integrated into existing mobile code environments. Furthermore, this approach enables resource control within embedded systems based on modern Java processors, which provide a JVM implemented in hardware that cannot be easily modified.
Typically, rewriting the bytecode of an application is not sufficient to account for and control its resource consumption, because Java applications use the comprehensive APIs of the Java Development Kit (JDK). Therefore, resource-aware versions of the JDK classes are necessary in order to monitor the total resource consumption of an application. Ideally, the same bytecode rewriting algorithm should be used to rewrite application classes as well as JDK classes. However, the JDK classes are tightly interwoven with native code of the Java runtime system, which causes subtle complications for the rewriting of JDK classes. The authors outline the difficulties of modifying the JDK classes: the native code of the Java runtime system relies on several low-level assumptions regarding the dependencies of Java methods in certain JDK classes. Thus, program transformations that are correct for pure Java may break native code in the runtime system. Unfortunately, these dependencies are not well documented, which complicates the task of defining transformation rules that work with the Java class library. Moreover, the transformed JDK classes may seem to work as desired even with large-scale benchmarks, while the transformation may have compromised the security model of Java. Such security malfunctions are hard to detect, as they cannot be perceived when running well-behaving applications.
We have experienced that a minor restructuring of the method call sequence completely breaks several security checks, which are based on stack introspection and assume a fixed call sequence. Consequently, modifications and updates of the JDK are highly error-prone. Having explained the difficulties of rewriting the JDK classes, the authors present and compare different bytecode transformation schemes that allow the creation of resource-aware JDK classes. The distinct rewriting strategies are evaluated using standard benchmark suites for Java.
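To make the rewriting scheme concrete, the sketch below shows, at source level, the effect a J-RAF-style CPU reification might have on a small method. The CpuAccount class and its methods are invented for this illustration and are not the actual J-RAF API; the point is only that each basic block charges its statically known instruction count to a per-thread account.

    // CpuAccount is an invented stand-in for J-RAF's account type.
    final class CpuAccount {
        private static final ThreadLocal<CpuAccount> CURRENT =
            ThreadLocal.withInitial(CpuAccount::new);
        private long consumed;

        static CpuAccount getCurrentAccount() { return CURRENT.get(); }
        void consume(int bytecodes) { consumed += bytecodes; }
        long consumed() { return consumed; }
    }

    class Rewritten {
        // What a rewritten method might look like if decompiled to source.
        int sum(int[] values) {
            CpuAccount acc = CpuAccount.getCurrentAccount(); // per-thread account
            acc.consume(4);                    // charge the entry block
            int total = 0;
            for (int i = 0; i < values.length; i++) {
                acc.consume(7);                // charge one loop iteration
                total += values[i];
            }
            acc.consume(2);                    // charge the exit block
            return total;
        }
    }

Because the counting logic travels inside the rewritten bytecode itself, this works on any standard JVM, which is exactly the portability argument made above.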
2.2 Scoped Memory
Authors: Greg Bollella (Sun Microsystems, Inc.) and Kirk Reinholtz (NASA Jet Propulsion Laboratory)
Contact e-mail: [email protected]
The full text of this short position paper is available on the workshop’s Web site.
2.3 Resource Accounting in a J2ME Environment
Authors: Walter Binder and Balazs Lichtl (both from CoCo Software Engineering GmbH, Vienna, Austria)
Contact e-mail: [email protected]
Java has become a common implementation language for applications that have to execute in different hardware environments without changes. With the specification of the Java 2 Micro Edition (J2ME), Sun has created a standard for embedded applications developed in Java. Resource accounting and control are becoming increasingly important in various settings. One particularly important case for resource control is the protection of platforms executing dynamically uploaded applications (mobile code) from faulty or malicious code, which may overuse the computing resources of the system. Another interesting setting is charging applications for their resource consumption. Unfortunately, current standard Java runtime systems do not support resource control. Hence, until resource control becomes part of a future release of the Java Development Kit, resource control in Java environments has to be based either on modified JVMs or on program transformation techniques. So far, modified JVMs have not been deployed widely, because they typically suffer from limited portability and from low performance, since they usually do not support a state-of-the-art JIT compiler as provided by standard Java runtime systems. Moreover, there are embedded systems based on recent Java processors, which provide JVMs implemented in hardware that cannot be easily modified to enable resource control. Thus, program transformation techniques are an attractive alternative to modifying the JVM. Ideally, program transformations for resource control should be compatible with existing Java runtime systems, should cause only moderate overhead, and should
allow accurate accounting. While the accounting accuracy on a Java 2 Standard Edition JVM is limited, because of the significant amount of native code that is not accounted for and because of optimizations performed by the JIT compiler, the accuracy on a Java processor can be much better, as the execution time of individual bytecode instructions can be measured and only very simple and well-documented optimizations are performed, such as the combination of certain JVM instructions. However, regarding the accounting overhead, sophisticated optimizations can be beneficial, and consequently the relative overhead on an embedded Java processor may be significantly higher than on a JVM with a modern JIT compiler. The authors report their initial results from applying program transformations for resource accounting, described in an accompanying work, to embedded applications running on a Java processor. The authors created a benchmark suite for the J2ME environment and measured the overhead of CPU accounting with different rewriting schemes and optimizations.
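A sketch of what weight-based accounting for a Java processor could look like follows; the weight table and all names are assumptions made for this illustration, not taken from the paper. Since individual instruction times are measurable on such processors, the rewriter can precompute a per-block cost instead of counting instructions at run time.

    import java.util.Arrays;

    // Per-opcode cycle weights, calibrated for a particular Java processor.
    final class WeightedAccounting {
        private static final int[] CYCLES = new int[256];
        static { Arrays.fill(CYCLES, 1); }       // placeholder calibration

        // Computed once per basic block at rewrite time; the rewriter then
        // charges the resulting constant at block entry.
        static int costOf(byte[] blockOpcodes) {
            int cost = 0;
            for (byte op : blockOpcodes) cost += CYCLES[op & 0xFF];
            return cost;
        }
    }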
2.4 JRAF - The Java Resource Accounting Facility
Authors: Vladimir Calderon (Computer Science Department, University of Geneva, Switzerland) and Walter Binder (CoCo Software Engineering GmbH, Vienna, Austria)
Contact e-mail: [email protected]
Program transformations are a portable approach to extending existing Java environments with resource management functionality [4, 3]. Using such techniques, the bytecode of Java applications is modified to expose its resource consumption (resource reification through bytecode rewriting). In this paper we outline the concept and structure of JRAF, the Java Resource Accounting Facility (http://abone.unige.ch/), which comprises a series of tools for resource reification. These tools offer CPU and memory accounting with pluggable accounting strategies, bytecode optimizations, calibration mechanisms to fine-tune the accounting schemes and the accuracy of accounting for particular execution environments, as well as higher-level components that make use of the collected accounting information. To manage such a large set of tools, JRAF provides a powerful and flexible configuration mechanism and controls the application and proper composition of the separate resource management tools. It has a layered architecture offering an accounting and a resource control interface.
JRAF Components. Accounting Interface: The accounting interface of JRAF manages all resource reification tools, as well as the structure of the corresponding accounts. Currently, JRAF supports interfaces for CPU and memory accounting, but it can
be easily extended for accounting of additional resources. The accounting interface of JRAF consists of two parts: interfaces for the resource reification tools, and interfaces defining the accounts. Analyzer Tool: This is a very important JRAF component; in fact, the analyzer is an essential tool used by the other accounting tools. Its main purpose is to ease the comprehension of the deep characteristics of the input classes to be reified, which is crucial for the rewriting process. CPU Reification Tool: To show a more concrete implementation of the accounting interface, we explain the interface to the CPU tool in more detail. It supports specific features: an interface for optimization algorithms that help to reduce the overhead of CPU accounting (see [2] for examples of such optimizations), an interface for associating different accounting weights with JVM instructions, and common initializations for CPU tools. Memory Reification Tool: Another very important resource is memory. A memory reification tool had already been implemented [5]; we then needed to make this tool compliant with JRAF. The tool is now fully integrated into JRAF, similarly to the CPU tool, and can be used within a single JRAF reification process along with the other tools. Resource Control Interface: The resource control interface comprises all tools using the accounts, such as resource monitors, schedulers, logging components, etc. JRAF aims at joining the accounting with these components: it allows resource control tools to be plugged into an application through the accounting interface.
JRAF in Action. We have successfully applied JRAF to the Java 2D Demo (available at http://java.sun.com/products/java-media/2D/samples/index.html) to demonstrate the application of JRAF to a general multithreaded application. In this example, JRAF reifies the CPU consumption of the individual threads and displays this information in an extra window. The reader may want to apply JRAF to his own Java applications using the JRAF demo available at http://abone.unige.ch/, which currently is configured to reify arbitrary applications using one of our CPU tools and to attach a simple thread monitor.
Conclusion. JRAF has a layered architecture, is extensible, and provides an XML-based configuration mechanism. JRAF manages, coordinates, and combines the application of various bytecode rewriting tools for resource accounting. It has become a general tool for adding different resource management strategies to arbitrary Java applications. Currently, JRAF includes three different CPU reification tools with various optimizations, one memory reification tool, as well as a graphical monitor and scheduler.
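To illustrate the layered structure, a minimal sketch of what such accounting and control interfaces could look like follows; every name here is an assumption made for the example and does not reproduce the actual JRAF interfaces.

    // Accounting layer: reification tools produce rewritten classes whose
    // execution feeds accounts.
    interface ReificationTool {
        byte[] reify(byte[] classFile);       // bytecode in, reified bytecode out
    }

    interface Account {
        long consumed();                      // units consumed so far
        void addListener(AccountListener l);  // hook for the control layer
    }

    // Resource-control layer: monitors, schedulers, loggers plug in here.
    interface AccountListener {
        void limitReached(Account account);
    }

    final class LoggingMonitor implements AccountListener {
        @Override public void limitReached(Account account) {
            System.err.println("budget exceeded after "
                + account.consumed() + " units");
        }
    }

The separation mirrors the description above: tools on the accounting side never need to know which monitors or schedulers consume their data.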
2.5 Resource Consumption Interfaces for Java Application Programming - A Proposal
Authors: Grzegorz Czajkowski, Stephen Hahn, Glenn Skinner, and Pete Soper (all from Sun Microsystems, Inc.)
Contact e-mail: [email protected]
Software systems in many circumstances need an awareness of their resource impact on the underlying executing platform, so they can satisfy externally imposed performance requirements. Programs constituting such systems need the ability to control their consumption of resources provided by the platform. This document summarizes the current status of a proposal being prepared to define an extensible, flexible, and widely applicable resource consumption API for the Java platform. Control over resource consumption is developed using a set of abstractions that allow the statement of reservations and constraints for individual resources utilized by the executing application. The Java programming language and its associated runtime environment have grown beyond their initial goal of writing portable applications. The advent of the Web, application servers, and the enterprise and micro editions of the Java platform has created pressure to make more system programming features available to programmers, as they develop progressively more sophisticated applications in an increasingly wide range of environments. This document addresses one such need: the ability to monitor or control the consumption of resources by an application. The proposed resource consumption interface (RC API) controls resources available to collections of isolates and, as such, depends on the availability of the isolate abstraction. (The isolate abstraction is provided by Java Specification Request (JSR) 121 [1].) However, the RC API is designed so that new methods need not be added to the Isolate API. The general goals for the resource consumption API are as follows:
1. Extensibility: the resource consumption interface should support the addition of new controlled resources.
2. Cross-platform applicability: the interface, as well as its underlying abstractions, should be applicable to all kinds of Java platforms.
3. Cross-scenario applicability: the interface should support the different forms of application management supported by the Isolate API, as well as being meaningful in a single-application context.
4. Flexibility: the interface should be able to describe and control a broad range of resource types.
5. Completeness of abstraction: the interface should hide from applications whether a given resource is managed by the Java Virtual Machine (JVM), by a core library, or by an underlying operating system facility.
6. Lack of specialization: the interface should not require an implementation to depend on specialized support from an operating system or from hardware, although implementations may take advantage of such support if available.
The proposal examines the resource consumption interfaces from the perspectives of three classes of developers, each of whom is a participant in resource
management: the application developer, the middleware developer, and the runtime environment developer.
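A rough illustration of how reservations and constraints might be stated under an API with these goals follows; all types and methods below are invented for this sketch and are not the proposed RC API.

    // Hypothetical API: one domain per controlled resource
    // (heap, CPU, sockets, ...).
    interface ResourceDomain {
        void reserve(long amount);                        // guaranteed minimum
        void constrain(long limit, Runnable onExceeded);  // enforced maximum
        long usage();                                     // current consumption
    }

    final class MiddlewareSetup {
        // Middleware configuring the heap domain of a hosted isolate.
        static void configure(ResourceDomain heap) {
            heap.reserve(8 * 1024 * 1024);                // reserve 8 MB
            heap.constrain(64 * 1024 * 1024,
                () -> System.err.println("heap limit exceeded"));
        }
    }

Note how the sketch stays neutral about who actually manages the resource (JVM, core library, or OS), in keeping with goal 5 above.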
2.6 Towards Resource Consumption Accounting and Control in Java: A Practical Experience
Authors: Frédéric Guidec and Nicolas Le Sommer (both from the VALORIA Laboratory, University of South Brittany, France)
Contact e-mail: [email protected]
Not all software components are equivalent as far as resource access and consumption are concerned. Some components can do very well with sparse or even missing resources, while others require guaranteed access to the resources they need. In order to deal with non-functional requirements pertaining to resource utilization, we propose a contractual approach to resource management and access control. This idea is being investigated in the context of project RASC (Resource-Aware Software Components, http://www.univ-ubs.fr/valoria/Orcade/RASC). In this project our objective is to provide software components with means to specify their requirements regarding hardware and/or software resources, and to design methods and models for utilizing this kind of information at any stage of a component’s life cycle. The remainder of this paper gives an overview of two software products whose development is in progress in the context of project RASC.
Raje: A Resource-aware Java Environment. Raje can be perceived as an extension of the traditional JRE (Java Runtime Environment). This extension provides means to monitor resource access and consumption at the middleware level. It makes it possible to monitor the usage of “global” resources (CPU, system memory and swap, network interfaces, etc.) as well as that of the “conceptual” resources used by Java programs (TCP and UDP sockets, CPU time and memory consumed by each Java thread, etc.). In Raje all resources are modeled as first-class Java objects. Information about any kind of resource can thus be gathered by calling appropriate methods on the Java object modeling this resource. Since resource objects can be created and destroyed dynamically by Java programs, Raje implements a resource register, whose role is to identify and keep track of resource objects at runtime. By consulting the resource register, a program can locate all the objects that model resources in its name space. Resource monitoring can be performed in either a synchronous or an asynchronous way. Resource monitoring is said to be achieved synchronously when any attempt to access a given resource can be intercepted and checked immediately. Monitoring a resource asynchronously consists of consulting the state of this resource explicitly every now and then, in such a way that the time of the observation does not necessarily coincide with the time of an attempt to use the resource. The two monitoring models are complementary, and Raje provides facilities for performing both kinds of monitoring. For
the programmer, deciding which model should be applied mostly comes down to making a tradeoff between the precision and the cost of monitoring. Most of the code included in Raje is pure Java and, as such, is readily portable. However, part of this code consists of C functions that permit the extraction of information from the underlying OS and interaction with inner parts of the JVM (Java Virtual Machine). To date Raje is implemented under Linux, and the JVM it relies on is a variant of TransVirtual Technology’s Kaffe 1.0.6.
Jamus: Java Accommodation of Mobile Untrusted Software. Jamus is an experimental platform we are developing on top of Raje in order to experiment with the idea of resource contracting. Jamus supports the deployment of “untrusted” software components, provided that these components can specify their requirements regarding resource utilization in both qualitative (e.g., access rights to parts of the file system) and quantitative terms (e.g., read and write quotas). Emphasis is put on providing a safe and guaranteed runtime environment for such components. Resource control in Jamus is based on a contractual approach. Whenever a software component applies to be deployed on the platform, it must specify explicitly what resources it will need at runtime, and under what conditions. The main originality of this approach lies in the fact that a specific contract must be subscribed between the Jamus platform and each software component it accommodates. By specifying its requirements regarding resource access and consumption, the candidate component requests a specific service from the Jamus platform. At the same time it promises to use no other resources than those mentioned explicitly. In return, whenever the platform accepts a candidate component, it promises to provide this component with all the resources it requires. At the same time it reserves the right to sanction any component that tries to access other resources than those it required. Based on this notion of resource contracting, Jamus can provide some level of quality of service regarding resource availability. It also provides components with a relatively safe runtime environment, since no component can access or monopolize resources to the detriment of other components.
Further Reading. The development of both Raje and Jamus is still in progress. Interested readers can refer to [11, 12] for a more detailed description of this work. Up-to-date information about this development (and about other topics addressed in project RASC) can also be found at http://www.univ-ubs.fr/valoria/Orcade/RASC.
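To make the contractual approach concrete, a component-side contract might be sketched as follows; the requirement classes are invented in the spirit of Jamus rather than taken from its API.

    import java.util.List;

    interface ResourceRequirement { }

    // Qualitative requirement: access rights to a part of the file system.
    final class FileSystemAccess implements ResourceRequirement {
        final String path; final boolean write;
        FileSystemAccess(String path, boolean write) {
            this.path = path; this.write = write;
        }
    }

    // Quantitative requirement: network send/receive quotas in bytes.
    final class NetworkQuota implements ResourceRequirement {
        final long sendBytes, receiveBytes;
        NetworkQuota(long send, long receive) {
            sendBytes = send; receiveBytes = receive;
        }
    }

    final class Contract {
        // The component promises to use only these resources; the platform,
        // if it admits the component, promises to provide them.
        static final List<ResourceRequirement> REQUIREMENTS = List.of(
            new FileSystemAccess("/tmp", true),
            new NetworkQuota(1_000_000L, 10_000_000L));
    }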
2.7 Safe Mobile Code - CPPCC: Certified Proved-Property-Carrying Code
Authors: Zoltán Horváth and Tamás Kozsik (both from the Department of General Computer Science, Eötvös Loránd University, Budapest, Hungary)
Contact e-mail: [email protected]
Some systems (e.g., Java virtual machines) make it possible for an application to download a component over the network, link it dynamically to the application, and execute it. In such cases a safety-critical application may want guarantees that the downloaded component does no harm: it does not use too many resources, does not read or modify data without authorization, etc. The widely used way to avoid malicious code is that the application accepts code only from trusted partners who guarantee the expected behaviour. The identity of the partner can be proved by, e.g., public-key cryptography systems or certificates. But still, there is a danger that the trusted partner makes a mistake and sends a wrong or outdated component, or one with a programming error in it. The class file verifier mechanism of Java, for example, can prove static and structural constraints and type correctness when linking the downloaded component to the application. These checks ensure that the code is well formed, and the applied data-flow analysis guarantees that there will be no operand stack overflow or underflow, that local variable uses and stores are valid, and that the arguments of the instructions are of valid types. Proof-carrying code (PCC) is a technique for providing further assurances. With PCC, the code consumer specifies safety requirements, which tell under what conditions parts of memory can be read or written, or how much of a resource the code is allowed to use. The code producer must provide a set of program properties and encoded proofs, packed together with the transmitted code, to certify that the code really satisfies the requirements. Complex methods may be used to construct the proofs, but their verification should be a simple computation. Compared to the above approaches, it would be very fruitful to include further semantic information with the code. Based on such information the code consumer could decide more precisely to what extent the obtained code is appropriate for solving a certain specified problem. For example, the consumer could expect a piece of code that can sort a list of integer numbers. Type requirements for such code would be: take a list of integers and produce a list of integers. These requirements can be type-checked. Structural constraints can prescribe that the sorting operation does not perform illegal memory access – the aforementioned Java class file verifier can make such a guarantee. On the other hand, the following additional semantic requirement would also be necessary: the result of the operation is such that every element of the list is either the first element or an element that is greater than or equal to the previous element. A safety-critical application must be sure at least that the downloaded code, if it terminates, satisfies this requirement. (A time-critical application would also set up a requirement on termination.) As the requirements made on the downloaded components get more and more complex, the proofs of correctness get longer and harder to verify. Hence efficiency comes to the fore. The goal is to reduce the memory and time consumption of the technology for communicating safe code. In the solution we suggest for this problem there are three main parties involved. Here is a summary of their tasks.
1. The code producer packages the machine code with the original source code, type information, some important properties of the code, and the proof of these properties, constructed with the help of a theorem prover.
2. The certifier checks the proofs with respect to the source code and the properties, then checks whether the machine code is really the result of a proper compilation of the source code, and finally checks whether the type information is correct. If every check succeeds, the machine code, the type information, the properties, and a certificate are packaged and sent back to the code producer, which can place the package in a library for later use.
3. If an application needs a component available at a mobile code producer, it can request and receive a certified proved-properties-carrying code, which can be dynamically linked into the application – of course, after the necessary safety checks. First the certificate is verified, then the type information and the carried properties are compared to the type and semantic requirements specified for the dynamically loaded component.
A prototype system to illustrate the ideas above is being assembled. Its already developed components are based on the “dynamic” construct of the Concurrent Clean language and a proof tool specifically designed for Concurrent Clean.
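Returning to the sorting example discussed above, the semantic requirement can be written down as a simple predicate; the checker below only illustrates the property itself, not how CPPCC encodes or verifies proofs of it.

    final class SortedProperty {
        // Holds iff every element is either the first one or is greater
        // than or equal to the previous element -- the requirement quoted
        // above for the sorting component.
        static boolean isSorted(int[] result) {
            for (int i = 1; i < result.length; i++) {
                if (result[i] < result[i - 1]) return false;
            }
            return true;
        }
    }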
2.8 Resource Control in Component-Based Systems
Authors: Jarle Hulaas (Computer Science Department, University of Geneva, Switzerland) and Walter Binder (CoCo Software Engineering GmbH, Vienna, Austria)
Contact e-mail: [email protected]
In the approach followed in this position paper, we address various applications of resource control (RC), like security, quality of service, and billing, with an emphasis on the prevention of denial-of-service (DoS) attacks. We assume a multi-threaded component model with resource limits enforced by the (Java-based) kernel at the level of individual components, in order to confine abnormal behaviour. Using a general communication facility (e.g., method invocation, message passing, tuple space, etc.), a component C (client, caller) may request a service from another component S (service, callee). The problem addressed here is: which component shall be charged for the resources consumed by S while executing a request on behalf of C, even when S and C do not trust each other, and how can the communication model be kept simple while still allowing resource allocation to be managed efficiently at the application level? As already noted by other researchers (e.g., [9]), the range of possible interaction patterns between C and S is wide: anonymous, asynchronous, callback-based, or one-to-many service invocations should all be supported. We show that the most secure and comprehensive solution is to resort to an abstraction called resource container for explicitly transmitting resources between donators and consumers.
Components may then freely decide when to switch from one available resource container to another. Resource exhaustion may stem either from malicious or accidental resource over-use or from intentional resource revocation. Such an event, when occurring in S, must be signalled to C even when executing asynchronous invocations. S must also be able to ensure its own consistency and to terminate properly the request being serviced. To this end, a callback-based notification mechanism is needed; the API described in Section 2.5 is a good fit. We additionally propose a means for identifying the specific invocation where the problem occurred, to facilitate the work of the callback; it is indeed a very delicate task for a notification routine to execute properly in the presence of violated resource constraints. Finally, we note that non-intrusive monitoring of resource consumption is a valuable facility that is not supported by other approaches we are aware of.
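A minimal sketch of the resource-container idea follows, with invented names: the service S explicitly switches to the client C's container for the duration of a request, so that C is charged for work done on its behalf.

    interface ResourceContainer {
        void bindCurrentThread();   // subsequent consumption is charged here
        long remaining();           // budget left in this container
    }

    final class Service {
        private final ResourceContainer own;
        Service(ResourceContainer own) { this.own = own; }

        void serve(Runnable request, ResourceContainer caller) {
            caller.bindCurrentThread();       // charge the client C...
            try {
                request.run();                // ...for work done on its behalf
            } finally {
                own.bindCurrentThread();      // revert to the service's account
            }
        }
    }

The explicit switch is what makes the scheme work across the wide range of interaction patterns mentioned above: whatever the communication style, the container can travel with the request.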
2.9 Distributed and Multi-type Resource Management
Authors: Luc Moreau (Department of Electronics and Computer Science, University of Southampton, UK) and Christian Queinnec (LIP6, Paris, France)
Contact e-mail: [email protected]
12
Grzegorz Czajkowski and Jan Vitek
Distributed Resource Management As far as distribution is concerned, we distinguish the transfer of data between hosts from the transfer of groups between hosts. The former can easily be expressed by send and receive primitives ‘`a la’ π-calculus. The latter is reminiscent of remote procedure calls and migration of mobile agents. We introduce the primitive migrate(h, f ), which requires two arguments: h a host name and f a procedure without argument (a thunk). The effect of the migrate primitive is displayed in Figure 1. The energy (less the cost of migration) contained in the group that sponsors the migrate primitive is transferred to a newly created remote group. We require migrate to be executed in a group that sponsors only one thread. When a remote group detects the termination of the computation it sponsors, its energy (less the cost of the return) is transferred back to its parent group. Exhaustion in the remote group triggers an exhaustion notification in the parent group; in turn, the latter notification triggers an exhaustion notification in its parent group, which may be able to transfer energy through the use of the awake primitive, according to the handler programmed by the user.
Host1 group1
Fig. 1. Migration, Return from Migration: (a) Termination — (b) Exhaustion. (The figure shows group hierarchies on Host1 and Host2: migrate(Host2, f) moves the sponsoring group1’s energy e, less the migration cost Km, into a newly created group2 on Host2 running f(); on termination the remaining energy, less the return cost Kr, flows back to group1; on exhaustion group2 drops to 0 and the parent may refill it via awake(group2).)
Management of Multiple Resources. While our previous work mainly focused on processor time, our set of primitives is able to address other kinds of energy. From an energy system implementor’s viewpoint, there are only three primitive operations that deal with energy tanks. These operations specify how energy is:
1. merged, when a subgroup terminates and gives its energy back to its parent;
2. consumed, while a group performs some work (a descriptor tells how much is consumed); and
3. split, between the creating and created groups (a descriptor details the split).
This model allows users to create their own types of energy and have the same machinery take care of them.
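The three primitive operations can be captured in a small interface; this Java rendering is an assumption made for illustration, since Quantum itself is not specified in Java.

    interface WorkDescriptor { }   // tells how much energy some work consumes
    interface SplitDescriptor { }  // details how energy is divided at creation

    // A user-defined energy type only has to say how it is merged, consumed,
    // and split; the same group machinery then manages it like any other.
    interface Energy<E extends Energy<E>> {
        E merge(E returnedFromChild);     // 1. absorb a terminated subgroup's energy
        E consume(WorkDescriptor work);   // 2. pay for work; returns what is left
        E[] split(SplitDescriptor how);   // 3. divide between creating/created groups
    }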
3 Final Session
In the final session a list of open or not yet satisfactorily solved problems related to resource management for safe languages was created. We have not found answers to them, but the list itself is quite interesting. It is reproduced here in more or less verbatim form:
– Languages vs. operating systems?
– The entity: thread, isolate, process?
– Flexible consumer groupings/hierarchies
– Economics/trading (trade disk space for network bandwidth, etc.)
– Minimal resources, how to find granularity
– Granularity/units
– Observing vs. controlling
– Primitive/basic/minimal complete set of resources
– Higher-level abstractions vs. low-level APIs
– Different views of a resource
– Synchrony vs. asynchrony
– Shared OS resource overhead
– Revocation/dynamic behavior
– Sharing and termination
– Security/compatibility with access control
– Resource-aware applications/cost of being resource-aware
– Enforcing cooperation
– Reservations/price prediction
– Accuracy vs. efficiency
– Platform independence
– Who’s to blame/charge
– Real-time issues
– Reclamation: when? correct? delay
– Resource policies
– Scalability
– Transactional behavior
– Extensibility vs. portability vs. efficiency
– Simplicity
– Useful and complete minimum set of resources
– Taxonomy: renewable vs. revocable
4 Workshop Conclusions
The participants concluded that it was a very useful meeting, although there are still more questions than answers. Perhaps a follow-up workshop will be organized in conjunction with ECOOP’03, and hopefully some of the issues will have been addressed by then. Certainly, this area does not suffer from a lack of interest!
References
[1] Java Community Process. JSR-121: Application Isolation API Specification. http://jcp.org/jsr/detail/121.jsp.
[2] Walter Binder, Jarle Hulaas, and Alex Villazón. Resource control in J-SEAL2. Technical Report Cahier du CUI No. 124, University of Geneva, October 2000. ftp://cui.unige.ch/pub/tios/papers/TR-124-2000.pdf.
[3] Walter Binder, Jarle Hulaas, Alex Villazón, and Rory Vidal. Portable resource control in Java: The J-SEAL2 approach. In ACM Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA 2001), Tampa Bay, Florida, USA, October 2001.
[4] Grzegorz Czajkowski and Thorsten von Eicken. JRes: A resource accounting interface for Java. In Proceedings of the 13th Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA ’98), volume 33, 10 of ACM SIGPLAN Notices, pages 21–35, New York, USA, October 18–22, 1998. ACM Press.
[5] Alex Villazón and Walter Binder. Portable resource reification in Java-based mobile agent systems. In Fifth IEEE International Conference on Mobile Agents (MA 2001), Atlanta, Georgia, USA, December 2001.
[6] William A. Kornfeld and Carl E. Hewitt. The scientific community metaphor. IEEE Transactions on Systems, Man, and Cybernetics, pages 24–33, January 1981.
[7] Luc Moreau and Christian Queinnec. Design and semantics of Quantum: A language to control resource consumption in distributed computing. In USENIX Conference on Domain-Specific Languages (DSL ’97), pages 183–197, Santa Barbara, California, October 1997.
[8] Luc Moreau and Christian Queinnec. Distributed computations driven by resource consumption. In IEEE International Conference on Computer Languages (ICCL ’98), pages 68–77, Chicago, Illinois, May 1998.
[9] G. Banga, P. Druschel, and J. Mogul. Resource containers: A new facility for resource management in server systems. In Proceedings of the 3rd USENIX Symposium on Operating Systems Design and Implementation, February 1999.
[10] Walter Binder. J-SEAL2 – A secure high-performance mobile agent system. In IAT ’99 Workshop on Agents in Electronic Commerce, Hong Kong, December 1999.
[11] Nicolas Le Sommer and Frédéric Guidec. A contract-based approach of resource-constrained software deployment. In G. Goos, J. Hartmanis, and J. van Leeuwen, editors, Proceedings of the First International IFIP/ACM Working Conference on Component Deployment (CD 2002, Berlin, Germany), number 2370 in Lecture Notes in Computer Science, pages 15–30. Springer, June 2002.
[12] Nicolas Le Sommer and Frédéric Guidec. JAMUS: Java Accommodation of Mobile Untrusted Software. In 4th EurOpen/USENIX Conference (NordU 2002, Helsinki, Finland), February 2002. http://www.univ-ubs.fr/valoria/Orcade/RASC/Publications/NordU2002.pdf.
Generative Programming

Krzysztof Czarnecki¹, Kasper Østerbye², and Markus Völter³
¹ DaimlerChrysler Research and Technology, Germany, [email protected]
² IT University, Denmark, [email protected]
³ Independent consultant, Germany, [email protected]
Abstract. This report describes the results of a one-day workshop on Generative Programming (GP) at ECOOP’02. The goal of the workshop was to discuss the state of the art of generative techniques, share experience, consolidate successful techniques, and identify open issues for future work. This report gives a summary of the workshop contributions, debates, and the identified future directions.
1 Introduction and Overview
The Workshop on Generative Programming was held on June 10, 2002, at the ECOOP’02 conference in Málaga, and it was the second ECOOP workshop on this topic. The workshop aimed to bring together practitioners, researchers, academics, and students to discuss the state of the art of generative techniques, share experience, consolidate successful techniques, and identify open issues for future work. The workshop was attended by a total of 22 participants, including the organizers, 2 invited speakers, the presenters of 10 accepted paper contributions, and other ECOOP attendees. The call for participation, the contributions, and the workshop results can be accessed at www.generative-programming.org/ecoop2002-workshop.html. The remaining part of this report is organized as follows. Section 2 gives an introduction to GP. The goals, topics, and format of the workshop are outlined in Section 3. Sections 4 and 5 give a brief summary of each contribution, including the debates. Section 6 gives an account of the identified future directions. The report ends with the list of participants and contributions.
2 Generative Programming and Technology Projections
Generative programming builds on system-family engineering (also referred to as product-line engineering) [5, 7] and puts its focus on maximizing the automation of application development [3, 2, 1, 4, 6]: given a system specification, a concrete system is generated based on a set of reusable components.
Fig. 1. Generative domain model and technology projections. (The figure maps each part of the generative domain model to the technologies that implement it:
– Problem Space: domain-specific concepts and features. Implemented with DSL technologies: programming language, extendible languages, new textual languages, graphical languages, interactive wizards and GUIs, or any mixture of the above.
– Configuration Knowledge: illegal feature combinations, default settings, default dependencies, construction rules, optimizations. Implemented with generator technologies: simple traversal, templates and frames, transformation systems, programming languages with metaprogramming support, extendible programming systems, reflection.
– Solution Space: elementary components, maximum combinability, minimum redundancy. Implemented with component technologies: generic components, component models, AOP approaches.)
The key to automating the assembly of systems is a generative domain model that consists of a problem space, a solution space, and the configuration knowledge mapping between them (see Figure 1). The solution space comprises the implementation components and the common system-family architecture, which defines all legal combinations of the implementation components. The problem space consists of the application-oriented concepts and features that application engineers use to express their needs. This space is implemented as a domain-specific language (DSL). The configuration knowledge specifies illegal feature combinations, default settings, default dependencies, construction rules, and optimizations. The configuration knowledge is implemented in the form of generators. The generated products may also contain non-software artifacts, such as test plans, manuals, tutorials, maintenance guidelines, etc. Each of the elements of a generative domain model can be implemented using different technologies, which gives rise to different technology projections:
– DSLs can be implemented using programming-language-specific features (such as template metaprogramming in C++ [2]), extensible languages (which allow for domain-specific syntax extensions, such as OpenC++, OpenJava [inv1], keyword-based programming [pos3], Refill [pos4], and Jasper [pos5]), graphical languages (e.g., UML, Simulink/Stateflow [pos6]), or interactive GUIs and wizards (e.g., [pos9]).
– Generators can be implemented using template-based approaches (e.g., TL [3] and Jostraca [pos8]), built-in metaprogramming capabilities of a language (e.g., template metaprogramming in C++), transformation systems (e.g., QCoder [pos9]), partial evaluation (e.g., [pos12]), or extensible programming systems (e.g., OpenC++, OpenJava, keyword-based programming, Refill, and Jasper).
– Components can be implemented using, e.g., generic components (such as in the STL), component models (e.g., JavaBeans, EJB, ActiveX, or CORBA), or aspect-oriented programming approaches (e.g., AspectS [pos11]).
3 Workshop Topics and Format
Potential participants were asked to submit a two-page (or longer) position paper detailing their experience with GP, their perspective on the relation of GP to other emerging approaches, and their planned contribution to the workshop. Possible topics included:
– synergy between object-oriented technology, components, and generative techniques;
– styles of generators (application generators, generators based on XML technologies, template languages (e.g., JSP), template metaprogramming, transformational systems, intentional languages, aspects, subjects, etc.), particularly their uses and limitations;
– design of APIs that support generative techniques;
– generation of code artifacts, such as application logic, UIs, database schemas, and middleware integration;
– generation of non-code artifacts such as test cases, documentation, tutorials, and help systems;
– capturing configuration knowledge, for example, in DSLs and extensible languages;
– influence of generative techniques on software architecture (e.g., building and customizing frameworks and applying patterns);
– testing generic and generative models; and
– industrial applications of generative technology.
The format of the workshop was designed to foster discussion and interaction rather than presentations. The workshop schedule consisted of two short presentation sessions in the morning, and one pro-and-contra session and one discussion session in the afternoon. The workshop started with a few words of explanation about the format and a short introductory talk on GP, with the contents outlined in Section 2. The short presentation sessions consisted of 10-minute talks by the authors of accepted paper contributions, followed by 5 minutes for answering questions. Similar to last year, the short presentation sessions were followed by a pro-and-contra session, which consisted of a 10-minute pro-and-contra discussion for each of the six papers selected for the session. The idea was to have two volunteers for each paper, one defending the ideas presented in the paper and one trying to come up with counterarguments. Within the 10 minutes, the pro and contra volunteers were asked to make 2-minute initial statements and then to exchange their arguments. They were also allowed to ask the audience for additional arguments, if needed. The pro-and-contra session inspired lively debates and turned out to be a very efficient format for discussing the advantages and disadvantages of the different approaches and the relationships among them. The
workshop ended with an open discussion, in which we identified and summarized open issues and topics for future work.
4 A Summary of the Contributions and the Debates
The paper contributions to this workshop can be classified into three categories:
– language extension technologies: OpenJava and OpenC++ [inv1], keyword-based programming [pos3], Refill [pos4], and Jasper [pos5];
– other GP technologies: Jostraca (a template-based approach) [pos8], QCoder (wizards and code transformations) [pos9], partial evaluation [pos12], and aspects [pos11];
– applications of GP: embedded software [pos6], graphical user interfaces [pos7], and distributed client-server systems [pos10].
The following subsections give a short summary of the contributions and the workshop debates.
4.1 Language Extension Technologies
OpenJava, OpenC++, and Javassist. Shigeru Chiba was the first invited speaker and gave an overview of the tools OpenJava, OpenC++, and Javassist (see [inv1]). OpenJava and OpenC++ are both source-to-source translation toolkits. Their key feature is that compile-time reflection allows people to work with source-level vocabulary such as classes, methods, fields, and types. For typical programmers, source-level vocabulary is more familiar than parse-tree-level constructs (such as declarators, initializers, etc.). As a consequence, OpenJava and OpenC++ are easy to use for application-domain specialists, but also more limited than parse-tree-level tools. As another consequence, both OpenJava and OpenC++ have limited possibilities for syntax extension, again in an attempt to stay within the notion of standard source-level vocabulary. For method bodies, however, a direct parse-tree representation is used, since creating (meta-)objects for each token would be too complicated. Note, though, that this is not an AST (with a lot of BNF non-terminals) but a concrete syntax tree. The API of both OpenJava and OpenC++ is basically a Visitor over the parse/source tree: an event is raised when certain constructs (such as a method call or an instance creation) are encountered while traversing the tree. The events can be limited to instances of specific classes, and event listeners can be “attached” to specific classes. Javassist is a new tool that allows users to work with source-level vocabulary while internally modifying the bytecode; here there are of course no syntax extensions whatsoever.
Summary of the debate. Experience has shown that the idea of basing the translation on source-level abstractions does indeed lower the initial learning curve for users of the system. Currently the APIs of OpenJava and Javassist are somewhat different, due to historical reasons, but they could be further homogenized. It was discussed to what extent there could be a concept such as “compile-time reflection”; OpenJava and OpenC++ use the word reflection to denote that the translation is at the source level rather than the parse-tree level.
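To give a flavor of this source-level translation style, here is a rough sketch of a translator class reacting to instance-creation events; the classes and signatures below only approximate the OpenJava style described above and are not its actual API.

    // Illustrative only -- not the real OpenJava classes or signatures.
    abstract class MetaClass {
        abstract String expandAllocation(String allocationExpr);
    }

    final class TraceClass extends MetaClass {
        // Invoked when the translator encounters "new Foo(...)" for a class
        // whose metaclass is TraceClass; returns replacement source text.
        @Override String expandAllocation(String allocationExpr) {
            return "Tracer.log(" + allocationExpr + ")";  // wrap each creation
        }
    }

The translator writer thinks in terms of "instance creation" and "method call", not grammar productions, which is exactly the lowered learning curve discussed in the debate.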
Keyword-Based Programming

In [pos3], Thomas Cleenewerck et al. present the notion of keyword-based programming. The goal is to devise a general mechanism for embedding domain-specific language extensions into existing languages, or for building entirely new ones. This requires a composable language. Keyword-based programming allows named language constructs to be specified in terms of their translation into existing constructs, which can again contain keyword-based definitions. Keywords can be combined into languages through an XML specification. A simple example is the keyword time, which is used to time the execution of statements, as in the following fragment:

    ...
    time {
        for(int I=0; I<10; I++) { ... }
    }
    ...

The definition of the time keyword specifies its translation into plain Java, quoting each generated line with >>, where $js stands for the timed statement:

    >> long aXX = System.currentTimeMillis();
    >> $js;
    >> System.out.println("action has taken " + (System.currentTimeMillis() - aXX));
Summary of the debate. Several issues came up in the debate. One was the relation between keywords and syntactical domains (or, in other words, constructors and types/sorts) and whether there is a one-to-one mapping. The answer was that it is necessary to try to find a minimal set of keywords in order to avoid language explosion. The issue of conflicting keyword-recognition patterns was raised, but dismissed as not having been a problem in practice so far. It was stated that the keyword system was itself implemented to a large part using keyword-based programming.
Refill

Kasper Østerbye presented in [pos4] a framework for language extension based on the idea that it should be possible to extend not only syntax, but also static semantics, name-binding rules, and compile-time semantics. The focus is on providing a simple model for doing this, which, like the two previous presentations, does not force the language extender into parse-theoretic issues, but enables the extender to stay at the host-language level. Currently the system focuses on building an extensible version of Java. The idea is that using extended syntax should be as easy as using third-party classes, as in the following example:

    import mycontrolstructures.ForEach;
    import java.util.Vector;

    class Test {
        Vector employees = new Vector();

        public void printAllEmployees() {
            foreach Person p in employees
                System.out.println(p.name + " " + p.address);
        }
    }
The import statement has the additional semantics that if the imported classes represent language extensions, the language is extended within this file. The language extensions are defined as slightly extended Java classes, which define compile-time actions such as name binding, type checking, etc. This approach, just as the one described previously, addresses the issue of modular language design. The author also expressed the opinion that the problem of incompatible language extensions has yet to prove itself a practical issue, although determining whether two language extensions conflict is of course theoretically undecidable, even at the syntactic level, not to mention the semantic levels.

Summary of the debate. The issue came up of how one could then extend the foreach construct in the example. Currently the approach is only suited to adding new language constructs, not to modifying existing ones. Language constructs can, however, be placed in hierarchies and thus share common semantics, but concrete syntax cannot be inherited.

Jasper

Dmitry Nizhegorodov presented in [pos5] another extensible Java compiler, named Jasper. Jasper has been used extensively within Oracle Corporation, both for EJB deployment and for miscellaneous Java extensions, including multi-methods and generic functions. It is based on a compile-time meta-object protocol and allows both syntax extensions and more traditional macro expansions. The tool is targeted at transformations, using template-based code generation. The system is layered, with the bottom layer being an extensible parser and transformer; both ensure, through the use of Java's type system, that the abstract syntax tree is well formed at all times. On top of this, a macro layer has been built, which translates macro definitions into parser extensions. Here is an example:
    MACRO Statement UNLESS (Expression test, Statement body) {
        return NEW Statement (Expression test, Statement body)
            if (test == false) body;
    }
This translates into the following two definitions:

    class UNLESS_Statement extends StatementParserExtension {...}

    class UNLESS_StatementExpander extends StatementExpander {
        Expression test;
        Statement body;

        public Statement expand() {
            return new IfStatement(0,
                new EqualExpression(0, test, new BooleanExpression(0, false)),
                body, null);
        }
    }
The first is a class used for extending the parser; the second is the class used for defining the transformation. During parsing, the test and body fields of the statement-expander class are set and are hence available when the transformation tool calls the expand method. Syntactic validity is ensured by the type system: the declared return type of expand is Statement, and the actual returned object is an if-statement, which is a subtype of Statement.

Summary of the debate. The tool is completely implemented and mature, and it has been used to solve real-world problems at Oracle. As such, it was hard to argue with. Jasper only addresses transformations, not static semantic checks.
4.2 Other GP Technologies
Jostraca

Richard Rodger presented Jostraca [pos8]. Jostraca is a tool that focuses on the specification of code generation, for which purpose it uses a template-based approach inspired by Java Server Pages (JSP) and XML. The main idea is that JSP- and XML-based techniques are tools programmers already use today, and hence there is something to leverage. A substantial part of the paper and presentation is devoted to reflection on the fact that generative programming is not mainstream; to change this, it is important to investigate ways in which average programmers can start to get some leverage today, and Jostraca is one such approach. A simple example is that of implementing getters and setters for fields in Java. In JSP syntax, this could be implemented as:
    <% String[] fields = {"Name", "Address"}; %>
    public class Person {
    <% for (int i = 0; i < fields.length; i++) { %>
        private String m<%= fields[i] %> = "";
        public String get<%= fields[i] %>() { return m<%= fields[i] %>; }
        public void set<%= fields[i] %>(String p) { m<%= fields[i] %> = p; }
    <% } %>
    }
The idea here is that the fields are specified at the top of the program, and the for-loop iterates over those fields, producing the code in the body of the for-loop for each field. Unfortunately, the syntax of such templates is somewhat clumsy. The paper discusses the readability issue and introduces an extra layer of syntactic sugar. Also, this new template can be run through the Java compiler to verify that it is at least syntactically correct, thus addressing an important issue, namely that of debugging templates.

Summary of the debate. The approach is best suited for specifying code generation with many repetitions of simple program fragments.

Wizards and Code Transformations

The paper [pos9] by Marcelo d'Amorim et al. describes QCoder, a tool for the generation and maintenance of Java programs. The tool can be used to generate class structures including their implementation, and also to safely modify the new code by applying refactorings. QCoder implements both code generation and refactoring using code transformations. Transformations are written in a language that extends Java with metaprogramming constructs, such as metavariables, semantics-based pattern matching, and meta-expressions. The language is specifically designed for writing code transformations on Java programs. In contrast to a purely template-based approach such as Jostraca, the transformational approach of QCoder supports not only the instantiation of target templates, but also the matching of source templates against existing code to drive the instantiation of the target templates. Code transformations in QCoder are performed by user-defined wizards, which can easily be defined by creating new wizards or assembling existing ones using a tool customizer. The composition of wizards involves composing graphical elements (implemented as JavaBeans) and code transformations. An integration of QCoder with JBuilder and VisualAge for Java is available. The paper discusses practical applications of the technology, e.g., introducing specific architectural patterns into an EJB application.

Summary of the debate. The first issue discussed by the audience was the problem of after-the-fact modifications of code generated by wizards, which is known from popular IDEs: by re-running a wizard with different option settings, such modifications get lost. Although QCoder does not fully eliminate this problem, its advantage is that automatic refactorings can be applied to existing code, meaning that certain modifications corresponding to different option settings can still be achieved automatically. Another issue
discussed by the audience was the efficiency of code generation. In the case of QCoder, rather small transformations are performed in an interactive manner, so performance has not been a problem so far.

Aspects in Squeak

Robert Hirschfeld [pos11] presented AspectS, an implementation of aspects for the Squeak language. According to the author, the design should also work for most other Smalltalk implementations. AspectS is built on top of John Brant's implementation of method wrappers and extends the meta-object protocol to accommodate the aspect modularity mechanism. The implementation covers the aspect mechanisms known from AspectJ, including the cflow construct. Because of the Smalltalk architecture, aspects can be woven into an application and later removed again, both at runtime. The paper discusses the implementation of AspectS in some detail and gives examples of its use.

Summary of the debate. The discussion mainly revolved around clarifications. Performance is acceptable, as the method-wrapper approach incurs very little overhead. The syntax for aspects is embedded in the usual Smalltalk syntax; it uses many blocks and Smalltalk's keyword-based method names to somewhat improve readability.

Partial Evaluation

As exemplified by AspectS, dynamic reflection provides a flexible way to compose components. Unfortunately, reflective calls are considerably slower than their base-level counterparts. In [pos12], Susumu Yamazaki and Etsuya Shibayama present an approach to optimizing reflective calls in Java using an automatic specialization technique. In contrast to previously proposed techniques, this approach is capable of specializing reflective calls to polymorphic methods, and it also works in the context of dynamic loading. The approach utilizes two techniques to reach this goal:
1. a specialization technique combined with binding-time analysis, and
2. the generative extension object technique (GEO).

The specialization technique can generate specialized versions of a reflective method for the subclasses of a given class. The reflective calls contained in such a specialized method are statically resolved based on the location of the specialized method in the class hierarchy. The paper shows that, in the presence of dynamic loading, specialized methods sometimes need to be added to already loaded classes (e.g., in the case of double dispatching). This, however, is not directly possible without extending the JVM in a nonstandard way. As a solution, the paper proposes to use the extension object pattern, which can be statically added to a class requiring specialization. The extension object pattern provides a hook that allows extending the original object by object composition at runtime and thus avoids the JVM extension problem. To assess the performance benefit of this technique, Yamazaki and Shibayama compared an unspecialized version of the reflective visitor pattern
with a specialized version with and without GEO (the specialization was performed by hand). The specialized version without GEO was about 100 times faster than the unspecialized version, and the one with GEO was about 60 times faster. At the time of writing, a partial evaluator based on the described approach was under development.
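To illustrate the kind of overhead such a specializer removes, consider the following Java sketch (ours, not from the paper; the Node class is invented for illustration). The reflective call is what a reflective visitor performs at each dispatch; the direct call is what a specialized method effectively replaces it with, since the lookup has been resolved statically:

    import java.lang.reflect.Method;

    class Node {
        public String accept(String visitor) { return visitor + " visited"; }
    }

    public class ReflectiveVsDirect {
        public static void main(String[] args) throws Exception {
            Node n = new Node();

            // Reflective call: method lookup, argument packing, and access
            // checks happen on every invocation.
            Method m = n.getClass().getMethod("accept", String.class);
            String r1 = (String) m.invoke(n, "v");

            // Specialized (direct) call: statically resolved, no lookup cost.
            String r2 = n.accept("v");

            System.out.println(r1 + " / " + r2);
        }
    }

The reported speedups of 60 to 100 times reflect how much of the reflective path can be eliminated once the receiver's position in the class hierarchy is known.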
4.3 Applications of Generative Programming
Embedded Software

In [pos6], Gerd Frick and Klaus Müller-Glaser make the point that embedded real-time systems form a real-world application domain where state-of-the-art software development processes are already generative. Visual formalisms (i.e., with graphical notations) have been used in control systems engineering for many years. For example, Statecharts are used to model discrete, typically reactive and open-loop control systems, and signal-flow notations are used to model continuous, closed-loop control systems. Both kinds of notations can be used together to create hybrid models. There is a large number of development environments supporting control-system design using these formalisms, e.g., Statemate, Matlab/Simulink/Stateflow, and ASCET-SD. Such environments usually support system modeling, simulation, and code generation. Generation of code for real-time embedded targets such as microcontrollers is becoming state of the art in industry; the alternative term often used there is “autocoding.” Generated code is starting to be used in production systems, e.g., in automobiles. This became possible due to the availability of code generators that output highly optimized, production-quality code. The main advantage of this approach is the higher abstraction level: the executable system models correspond to programs written in DSLs. Open issues are the integration of different modeling approaches as well as considerations of the physical model (e.g., the distributed ECU architecture in an automobile). A first step is the use of XFL, an XML-based concept involving a metalanguage for representing languages and transformations, as the common mediator between existing languages and tools. XFL is currently under development at the FZI in Karlsruhe, Germany.

Graphical User Interfaces

In [pos7], Sofie Goderis and Wolfgang De Meuter propose to generate GUIs from high-level specifications by means of declarative metaprogramming. The goal of the proposal is to truly decouple GUIs from applications and to generate GUIs for different devices from a single specification. According to the authors, current GUI technologies do not sufficiently decouple GUI code from application code. For example, the Model-View-Controller pattern still requires change-notification calls to be distributed throughout the application code. Furthermore, most user-interface builders generate a GUI plus function stubs to be filled in by application programmers, which requires manually changing the generated code or using subclassing. The authors would prefer a declarative way of coupling the user interface with the application code.
Currently, porting an application to a new device requires manually rewriting its GUI implementation. This could be avoided by generating the GUI for different devices (e.g., computer, mobile phone, PDA) from one specification. The idea is to write the specification in terms of higher-level concepts, such as text, figure, listing, question, action, dependency, and choice, rather than the low-level concepts found in current GUI definitions (e.g., button, field, frame). The paper proposes to use Declarative Meta Programming (DMP) to generate GUIs by representing the high-level GUI specification and the necessary hooks in the application code as logic facts, and by using logic rules to express how to generate the code that combines both parts. DMP can be done in SOUL, a Prolog-like programming language that is tightly integrated with Smalltalk: SOUL programs can reason about Smalltalk code, and Smalltalk can call SOUL programs. The paper gives a simple example illustrating how SOUL could be used to solve the GUI generation problem. The realization and validation of the proposal is still at a very early stage.

Summary of the debate. The main issue discussed by the audience was the readability of logic programs. The problem of making the GUI specification more intuitive could be solved by providing a graphical tool on top of the logic facts. However, the use of logic facts also has important advantages, such as the possibility to represent domain knowledge explicitly and to use inference algorithms to check for conflicts. Another concern raised by the audience was the need to accommodate the potentially huge differences between the GUI technologies of different devices. This problem could be addressed by using a layered model of device-independent and device-dependent rules, an approach that can also help to achieve better reuse in the GUI domain. The issue of connecting a GUI to an application was also discussed; the Smalltalk meta-object protocol (which is supported by SOUL) was pointed out as a very flexible mechanism for connecting a GUI to the application classes. Finally, the audience emphasized the need to compare the effectiveness of the DMP generation approach with that of other approaches, e.g., the template-based approach.

Distributed Client-Server Systems

In [pos10], Barrett Bryant et al. present UniFrame, a framework for integrating distributed heterogeneous components, including functional and Quality of Service (QoS) constraints. UniFrame assumes that developers will provide on a network a variety of components for specific problem domains, together with specifications of these components describing both their functionality and QoS. The components needed for a specific application are retrieved by so-called “headhunters,” which identify sets of components that satisfy both the desired functionality and QoS from those available on the network. After the component sets are identified and fetched, the distributed application is assembled by choosing one component from each set according to the generation rules embedded in a generative domain-specific model. This assembly
may require the creation of a glue/wrapper interface between the various components, as well as instrumentation to validate dynamic QoS parameters (e.g., response time). Once the system is assembled, it must be tested using a set of test cases supplied by the developers to verify that it meets the desired QoS criteria; if not, a different assembly of components is tried. So-called Two-Level Grammars (TLG), originally introduced to specify Algol 68, are used to specify the generative rules and provide the formal framework for the generation of the glue/wrapper code. The two levels of a TLG are type definitions in the form of one context-free grammar and function definitions in the form of another. TLG may be used to provide for attribute evaluation and transformation, syntax and semantics processing of languages, parsing, and code generation. The paper gives an example of automatically assembling a bank account management system from the client and server components for a bank domain that are available on the network.

Summary of the debate. One of the concerns raised was that glue code is typically very specific to a given context and therefore hard to reuse; in particular, it may be hard to predict what glue code the different components on a network may need. However, the main purpose of glue code in UniFrame is to adapt technologies, not component functionality, and a glue-code generation capability for a set of standard technologies can be provided.
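To illustrate what “adapting technologies, not component functionality” means, here is an invented Java sketch of generated glue (ours, not UniFrame's actual output; all names are hypothetical). The wrapper adapts a transport-specific calling convention to a domain interface and adds the kind of response-time instrumentation mentioned above:

    // Hypothetical domain interface expected by the client side.
    interface BankAccount {
        double balance(String accountId);
    }

    // Hypothetical transport-specific component (e.g., hiding a remote stub).
    class RemoteAccountStub {
        Object call(String method, Object[] args) {
            // marshal the request, send it, unmarshal the reply (omitted)
            return Double.valueOf(0.0);
        }
    }

    // Generated glue: adapts the calling convention and validates dynamic QoS.
    class BankAccountGlue implements BankAccount {
        private final RemoteAccountStub stub = new RemoteAccountStub();

        public double balance(String accountId) {
            long start = System.currentTimeMillis();              // response-time probe
            double result = (Double) stub.call("balance", new Object[] { accountId });
            long elapsed = System.currentTimeMillis() - start;
            assert elapsed < 200 : "QoS violation: response time exceeded";
            return result;
        }
    }

Because the glue only bridges technologies and measures QoS, the same generation rules can be reused for any pair of standard technologies, independently of the components' business functionality.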
5 Collaboration in the GP Community
The second invited talk was by Joost Visser ([inv2]), who presented his initiative to provide the GP community with an efficient electronic collaboration platform. Given the growing interest in GP and the number of workshops, conferences, tools, and companies related to GP technology, there is a definite need for a single, efficient, and persistent platform for collaboration in the GP community. The proposed solution is the GPWiki site and a repository for collaborative software development. GPWiki (www.program-transformation.org/gp/) is a collaborative web-authoring platform for the GP community based on WikiWiki. WikiWiki is Hawaiian for “fast,” and it stands for a light-weight, easy-to-use system for collaborative web content management. WikiWiki allows you to edit pages via your browser, and any word with two non-consecutive capitals is a link. There are many implementations of WikiWiki; the one behind GPWiki is TWiki. GPWiki serves the GP community as a medium to summarize discussions (workshop debates, mailing lists); present approaches, methods, and subfields; advertise tools, systems, languages, and libraries; develop surveys and taxonomies; conduct comparative studies; grow bibliographies, glossaries, and indexes; maintain an event calendar; and vent opinions, ponder hunches, and outline perspectives. The proposal for a collaborative software development platform for the GP community is based on the idea that reuse is a catalyst for collaboration. This has been the experience from initiating and running such a platform
for program transformation tools at www.program-transformation.org/package-base/, which contains a collection of currently 57 packages for language prototyping, compiler construction, program transformation, software renovation, documentation generation, etc. Joost identified two main ingredients of a collaborative software development platform for GP: a set of common exchange formats and an online component repository. A common exchange format would allow for the interoperability of various components, generators, front-ends, analyzers, back-ends, etc.; the format used by the program transformation component repository is ATerms. The online component repository needs to provide a common interface for building, testing, documenting, bundling, and distributing packages; the tool used by the program transformation component repository is autobundle. The workshop participants were encouraged to contribute to GPWiki and the online repository.
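As a side note, the WikiWiki linking rule mentioned above (any word with two non-consecutive capitals becomes a link) can be captured with a small regular expression; this Java sketch is ours, not part of any Wiki implementation:

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class WikiWordScanner {
        // A WikiWord: an upper-case letter, some lower-case letters,
        // then at least one more capitalized lower-case run. Consecutive
        // capitals (as in "TWiki") deliberately do not match.
        private static final Pattern WIKI_WORD =
            Pattern.compile("\\b[A-Z][a-z]+(?:[A-Z][a-z]+)+\\b");

        public static void main(String[] args) {
            String text = "GPWiki is built on TWiki; CamelCase words like FrontPage become links.";
            Matcher m = WIKI_WORD.matcher(text);
            while (m.find()) {
                System.out.println(m.group());  // prints CamelCase, then FrontPage
            }
        }
    }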
6 Concluding Discussion
At the beginning of the discussion session, we came up with a prioritized list of discussion topics. The first issue we discussed was correctness support for generative programming. Erik pointed out that we need typing mechanisms for language extensions and for manipulating languages. However, with many generative approaches, particularly the dynamic ones, the distinction between runtime and compile time becomes blurred. Thus, what is the role of strong typing? When do errors happen? Dmitry responded that in GP we also have things like macro-definition time, macro-expansion time, etc., but that the goal should be to catch errors as early as possible. As another issue related to correctness, Kasper emphasized that the base language should be designed in a composable way in the first place; we need to think about what the useful, composable (orthogonal, compatible) features of programming languages are. Language extensions should be checked statically and should have their own semantics and type system. However, the typing issue is tough, because different extensions used in parallel might influence each other's semantics, errors, and syntax. Jörg pointed out that we need a theory of extensible languages to reason about language extensions and compositions. An important observation made by Chiba was that the discussed correctness issues are not necessarily intrinsic to GP; they also show up when composing conventional libraries. Moreover, correctness support is not limited to type checking: language extensions should also cover editing, debugging, error checking, and code generation. Finally, Erik pointed out that domain-specific language extensions can be used as a “wrapper” around complicated frameworks, allowing framework usage errors to be detected at compile time instead of at runtime. That is a big advantage of GP compared to conventional libraries and frameworks.

The second issue, discussed briefly, was the need for a comparison of different generative techniques. This would help researchers to better understand alternative approaches, and developers would certainly welcome more
guidance in selecting the appropriate technique for their problems. Such a comparison would rather be an ongoing effort, one that could be facilitated through GPWiki. In addition to a taxonomy and individual descriptions of approaches, the participants would welcome good demonstrations of the expressive power (or “expressive ease,” as Kasper put it) of individual approaches, common case studies, industrial success stories, and guidelines for selecting among approaches. Other issues identified at the beginning of the session, but not discussed for lack of time, included the problem of customizing generated artifacts, reusability of language extensions, testing and debugging of generators, applications of GP, relations of GP to other disciplines, and feature interaction. Finally, at least half of the audience expressed interest in participating in a follow-up, specialized workshop on extensible languages.
Participants

João Araújo (New University of Lisbon, Portugal), Uwe Bardey (University of Bonn, Germany), Paulo Borba (Qualiti Software Processes and Federal University of Pernambuco, Brazil), Barrett Bryant (University of Alabama at Birmingham, USA), Shigeru Chiba (Tokyo Institute of Technology, Japan), Thomas Cleenewerck (Catholic University of Leuven, Belgium), Krzysztof Czarnecki (DaimlerChrysler Research and Technology, Germany), Erik Ernst (University of Aarhus, Denmark), Gerd Frick (FZI Forschungszentrum Informatik, Karlsruhe, Germany), Sofie Goderis (Free University of Brussels, Belgium), Robert Hirschfeld (DoCoMo Euro-Labs, Germany), Boris Magnusson (University of Lund, Sweden), Dmitry Nizhegorodov (Oracle, USA), Joost Noppen (University of Twente, The Netherlands), Kasper Østerbye (IT University, Denmark), Richard Rodger (InterComponentWare AG, Germany), Sybille Schupp (Rensselaer Polytechnic Institute, USA), Jörg Striegnitz (Research Center Jülich, Germany), Joost Visser (CWI, The Netherlands), Markus Völter (independent consultant, Germany), Jonathan Whittle (NASA Ames Research Center, USA), Susumu Yamazaki (Tokyo Institute of Technology, Japan)
List of Contributions

The invited presentations and the workshop presentations and papers are available at www.generative-programming.org/ecoop2002-workshop.html.

1. Shigeru Chiba (Tokyo Institute of Technology, Japan), Overview of OpenJava and OpenC++ (invited talk)
2. Joost Visser (CWI, Amsterdam, The Netherlands), Collaboration in the Generative Programming Universe (invited talk)
3. Thomas Cleenewerck, K. Hendrickx, E. Duval, and H. Olivié (Catholic University of Leuven, Belgium), Capturing and using emergent knowledge by keyword based programming
4. Kasper Østerbye (IT University, Denmark), Refill - a generative Java dialect
5. Dmitry Nizhegorodov (Oracle, USA), Code-Generation Aspects of Jasper, a Reflective Meta-Programming and Source Transformations Processor
6. Gerd Frick and Klaus D. Müller-Glaser (FZI Forschungszentrum Informatik, Karlsruhe, Germany), Generative development of embedded real-time systems
7. Sofie Goderis and Wolfgang De Meuter (Free University of Brussels, Belgium), Generating User Interfaces by means of Declarative Meta Programming
8. Richard J. Rodger (InterComponentWare AG, Germany), Jostraca: a template engine for generative programming
9. Marcelo d'Amorim, Clovis Nogueira, Gustavo Santos, Adeline Souza, and Paulo Borba (Qualiti Software Processes and Federal University of Pernambuco, Brazil), Integrating Code Generation and Refactoring
10. Barrett R. Bryant, Fei Cao, Wei Zhao, Rajeev R. Raje, Mikhail Auguston, Andrew M. Olson, and Carol C. Burt (University of Alabama at Birmingham, Indiana University Purdue University Indianapolis, New Mexico State University, USA), Generative Programming Using Two-Level Grammar in UniFrame
11. Robert Hirschfeld (DoCoMo Euro-Labs, Germany), AspectS – Aspects in Squeak
12. Susumu Yamazaki and Etsuya Shibayama (Tokyo Institute of Technology, Japan), Runtime Code Generation for Bytecode Specialization of Reflective Java Programs
References

[1] D. Batory and S. O'Malley. The Design and Implementation of Hierarchical Software Systems with Reusable Components. ACM Transactions on Software Engineering and Methodology, vol. 1, no. 4, October 1992, pp. 355-398.
[2] K. Czarnecki and U. Eisenecker. Generative Programming: Methods, Tools, and Applications. Addison-Wesley, Boston, MA, 2000.
[3] J. C. Cleaveland. Program Generators with XML and Java. Prentice-Hall, XML Book Series, 2001.
[4] J. C. Cleaveland. Building Application Generators. IEEE Software, vol. 5, no. 4, July 1988, pp. 25-33.
[5] P. Clements and L. Northrop. Software Product Lines: Practices and Patterns. Addison-Wesley, 2001.
[6] J. Neighbors. Software construction using components. Ph.D. thesis (Technical Report TR-160), University of California, Irvine, 1980.
[7] D. M. Weiss and C. T. R. Lai. Software Product-Line Engineering: A Family-Based Software Development Process. Addison-Wesley, Reading, MA, 1999.
Tools and Environments for Learning Object-Oriented Concepts

Isabel Michiels (PROG, Vrije Universiteit Brussel, Belgium), Jürgen Börstler (Umeå University, Sweden), and Kim B. Bruce (Williams College, Massachusetts, USA)
Abstract. The objective of this workshop was to discuss current techniques, tools and environments for learning object-oriented concepts and to share ideas and experiences about the usage of computer support to teach the basic concepts of object technology. Workshop participants presented current and ongoing research. This was the sixth in a series of workshops on learning object-oriented concepts.
1 Introduction
The primary goal of learning and teaching object-oriented concepts is to enable people to participate successfully in an object-oriented development project. Successfully using object-oriented technology requires a thorough understanding of basic OO concepts. However, learning these techniques, as well as lecturing about these concepts, has proven to be very difficult in the past. Misconceptions can occur during the learning cycle, and the needed guidance cannot always be provided directly. The goal of this workshop was to share ideas about innovative teaching approaches and tools that improve the teaching and learning of the basic concepts of object technology, rather than the teaching of a specific programming language. Teaching tools could be either tools used within environments or entire environments for learning OO, as well as any kind of support for developing OO learning applications themselves. In order to develop useful results regarding the issue of understanding object-oriented concepts, the workshop focused on the following topics:
– approaches and tools for teaching design early;
– intelligent environments for learning and teaching;
– frameworks/toolkits/libraries for learning support;
– microworlds;
– different pedagogies;
– top-down vs. bottom-up approaches;
– design early vs. design late;
– topic presentation issues;
– frameworks/toolkits for the development of teaching/learning applications.
This was the sixth in a series of workshops on issues in object-oriented teaching and learning. Previous workshops were held at OOPSLA '97 [1, 2], ECOOP '98 [3], OOPSLA '99 [4], ECOOP '00 [5], and OOPSLA '01 [13], and focused on project courses, classroom examples and metaphors, and tools and environments.
2 Workshop Organization
This workshop was designed to gather educators, researchers, and practitioners from academia and industry who are working on solutions for teaching basic object-oriented concepts. To bring together a manageable group of people in an atmosphere that fosters lively discussion, the number of participants was limited; participation was by invitation only. Eighteen participants were selected on the basis of position papers submitted in advance of the workshop. The workshop was organized into two presentation sessions (all morning), two working-group sessions (afternoon), and a wrap-up session in which all working groups presented their results. The presentations were split into a long-presentation session, where the more elaborate papers were presented with demonstrations, and a short-presentation session, where the other participants could present their positions on the workshop topic. Table 1 summarizes the details of the workshop program. To gather input for the working-group sessions, participants were asked before the workshop to think about some controversial topics in teaching object-oriented concepts.
3 Summary of Presentations
This section summarizes the main points of all workshop presentations. More information on the presented papers can be obtained from the workshop's home page [14].

Glenn D. Blank (Lehigh University, USA) gave an overview of the use of the multimedia framework CIMEL to supplement computer science courses. To show how CIMEL supports learning-by-doing by means of interactive quizzes and constructive exercises, Glenn gave a short demonstration of a module on Abstract Data Types (ADTs). The module contains advanced multiple-choice questions, where help and feedback are provided by multimedia personae; furthermore, it supports the successive construction of a concrete ADT by pointing and clicking. A case study with 72 students showed that the multimedia contributes significantly to objective learning and helps students design ADTs to solve a problem. Currently, modules on inheritance and ADTs are available, and materials for a CS0 course are under development. Glenn noted as a drawback that designing new multimedia lectures is expensive; topics must therefore be chosen carefully to make their use cost-effective.

Carsten Schulte (University of Paderborn, Germany) reported on results from the LIFE3 project, which investigates teaching concepts for object-oriented programming in secondary schools.
Table 1. Workshop program

Time       Topic
9.25 am    WELCOME NOTE
9.30 am    Teaching Abstract Data Type Semantics with Multimedia (presented by Glenn D. Blank)
10.00 am   Thinking in Object Structures: Teaching Modelling in Secondary Schools (presented by Carsten Schulte)
10.30 am   Contract-Guided System Development (by Vasco Vasconcelos)
11.00 am   COFFEE BREAK
11.30 am   Supporting Objects as an Anthropomorphic View at Computation, or Why Smalltalk for Teaching Objects? (presented by Stéphane Ducasse)
11.45 am   Extreme Programming Practice in the First Course (presented by Joseph Bergin)
12.00 pm   Teaching Encapsulation and Modularity in Object-Oriented Languages with Access Graphs (presented by G. Ardourel)
12.15 pm   A New Pedagogy for Programming (presented by Jan Dockx)
12.30 pm   Teaching Constructors: A Difficult Multiple Choice (presented by Noa Ragonis)
12.45 pm   PIIPOO: An Adaptive Language to Learn OO Programming (presented by R. Pena)
1.00 pm    LUNCH BREAK
2.30 pm    A Measure of Design Readiness: Using Patterns to Facilitate Teaching Introductory Object-Oriented Design (presented by Tracy L. Lewis)
2.45 pm    Teaching Object-Oriented Design with UML – A Blended Learning Approach (by Ines Grützner)
3.00 pm    Split into working groups
3.05 pm    First working group session
4.00 pm    COFFEE BREAK
4.30 pm    Second working group session
5.30 pm    Wrap-up session
He proposed an apprentice-based learning approach that combines top-down and bottom-up teaching techniques with active learning. UML is used as a visual programming/modelling language. The approach is supported by CRC-card modelling and FUJABA, an environment that supports static and dynamic modelling using UML [KNNZ 00]. Consistent models can be executed directly from within FUJABA by means of complete Java code generation. This allows teachers and learners to concentrate on object-oriented concepts and the modelling of objects and their interactions. Executable models are used early on to support active learning. Carsten mentioned the absence of source code as the major advantage of the approach: students learn to think in object structures and are able to communicate design/modelling ideas in terms of objects and their interaction. They talk objects and concepts, not code. Preliminary results from an empirical study are very promising.

Vasco Vasconcelos (University of Lisbon, Portugal) reported on a three-course sequence focusing on design-by-contract and quality (here: correctness). Each course is accompanied by a group project. Pre- and postconditions are embedded
into Java code (@pre/@post), and tools are used to generate Java code monitoring the assertions. In their first course, students are introduced to the basic concepts of imperative languages, plus objects and classes. Their second course focuses on (formal) correctness proofs. Object-oriented analysis and design, inheritance, and polymorphism are delayed until the last course in the sequence. The main drawback of this approach is that students have difficulties doing analysis and design for problems involving more than a couple of entities. However, students are now able to reason about algorithms within the context of more complex systems than before. Vasco noted that students run into limitations of the specification language quite early. He also highlighted the importance of tools that monitor assertions while programs are running; the tools currently available are mostly immature and not aimed at undergraduate students.

Stéphane Ducasse emphasized in his talk that a language for teaching object-oriented programming should support the anthropomorphic metaphor promoted by this paradigm. He demonstrated that the cultural aspects of the Smalltalk language, i.e., its vocabulary and syntax, fully support the metaphor of objects sending and receiving messages; the syntax of Smalltalk allows one to read code as a natural language. In addition, he stated that the programming environment should also support the metaphor. He showed that Smalltalk environments offer an important property, named liveness or object proximity, that promotes the anthropomorphic perception of objects. Using excerpts from a forthcoming book, he showed how Squeak with the Morphic framework reinforces this ability to perceive objects as living entities [15, 16].

Joseph Bergin (Pace University, USA) discussed the applicability of eXtreme Programming (XP) practices in introductory computer science courses. His interest was not in XP per se, but in good practices and pedagogy to improve teaching. His experience shows that most XP practices can be applied in some form in introductory courses; however, this requires some changes to course management. Teachers must encourage students to work together (pair programming) and provide assignments that can be developed incrementally, embracing correction and re-grading (small releases, continuous integration, and refactoring). Furthermore, the teacher must be available all day for questions, for example by means of an interactive web site (on-site customer). Planning is supported by time recording in small notebooks à la PSP (planning game). Test-first programming is supported by the tool JUnit [12], which is introduced at the beginning of the course. Other XP practices are more straightforward.

Gilles Ardourel started his talk by pointing out several issues one must face when teaching object-oriented languages: they evolve fast, provide only informal documentation, and can quickly become obsolete. He believes that even when teaching a specific language, you can prepare your students for change and give them a broader view of languages by using language-independent notations to describe language mechanisms.
The author encountered these issues when teaching access-control mechanisms in statically typed class-based languages. These mechanisms manage implementation hiding and define interfaces adapted to different client profiles. At his university, access graphs are used as a language-independent notation for teaching encapsulation and modularity in object-oriented languages. In an access graph, nodes represent classes or sets of classes, and arrows (labelled by sets of properties) represent accesses from one node to another. Code examples were explained with graphs of the accesses allowed in a class hierarchy; graphs of the accesses allowed by an access-control mechanism were then used to describe the mechanisms themselves. This notation has proven to provide a clear and unambiguous view of access control and helped students understand the mechanisms. Students discussed the definitions of the mechanisms and their implications on code, asked for comparisons with other languages, and provided more feedback. They learned access control easily from a few examples about implementation hiding, modularity, or substitutability. Gilles concluded by stating that he advocates the language-independent description of OOPL mechanisms (thus introducing language-design topics in a programming course) and the use of visualization techniques for improving OO teaching.

Jan Dockx explained a new pedagogy for programming used at the Catholic University of Leuven, Belgium. The course is mainly based on good software engineering practices as embodied in the design-by-contract paradigm of B. Meyer, extended with behavioral subtyping. Java is used as the example language, but the presenter emphasized that object-oriented concepts are far more important and are independent of any given language. In the past, they had a more traditional approach, with first a course on the science of programming, followed by an algorithms and data structures course.

Noa Ragonis' talk started by addressing the difficulties that arise when teaching constructors in object-oriented programming (OOP). She presented different structures of constructor declarations, their semantic context, their influence on programming, and aspects of students' comprehension. They found that declaring a constructor that initializes all attributes from parameters is preferable, even though it seems difficult to learn; other, simpler styles caused serious misconceptions among the students. She pointed out that instantiation is a central concept because it parallels the existence of an object in the real world, a metaphor that is often used in teaching. Constructors and instantiation are complex concepts, which are difficult to learn and teach, but they cannot be avoided: one cannot talk about objects without talking about their instantiation. A further complication arises in the instantiation of a class that includes attributes of another class. From their experience and research in teaching OOP to novices, the authors found that using the professional style of declaring a constructor to initialize attributes from parameters allows them to emphasize the following good OOP principles:
– The constructor is very important, so it is better to expose it than to conceal it.
– Initializing an object with values for its attributes is more in accordance with the real world.
– Creating simple objects before creating composite-class objects is also more in accordance with the real world.
– Initializing attribute values in the constructor method avoids access to default values.
– In OOP, parameters have to be learned very early anyway (for mutators), so the parameter mechanism is not an extra demand.
– Assigning values to parameters in each instantiation emphasizes the creation of different objects with the same or different values.
– Learning this constructor pattern in simple classes makes it easier to understand composite classes, and to understand that their attributes are the objects themselves and not the attributes of those objects.

Rosalia Pena talked about the use of a pedagogical language, called PIIPOO, that is used throughout the curriculum for teaching any programming paradigm, i.e., a language that evolves when another paradigm is taught. The aim is to minimize the students' preoccupation with the semantic and syntactic details of a new language while they are acquiring the tools of the new paradigm. This evolution of the pedagogical language reinforces the diachronic conception of programming constructs, providing continuity. PIIPOO keeps some programming constructs from Pascal, is OO-compatible, adapts or removes other constructs, and incorporates new ones required for the OO paradigm. Moreover, the language leads the novice to use the OO environment properly, avoiding misunderstandings. PIIPOO undertakes as few syntactic and semantic changes as possible to deal with the new way of tackling problem solving, allowing a better concentration on concepts first. The presenter concluded that once the students' minds are set on the OO world, it is easier for them to undertake the study of a commercial language, to understand its peculiarities, and to achieve a better understanding of language characteristics.

Tracy Lewis talked about a research program tackling the problem of teaching introductory object-oriented design. A design-readiness aptitude test has been developed to measure the cognitive state in which one is able to understand design abstractly. The idea (still work in progress) is to develop an instrument for gradually discussing design decisions using programming and design patterns, based on the measured level of design readiness. The pedagogy is rooted in learning-by-doing and based on minimalist instruction, constructivism, and scaffolded examples.

Ines Grützner (Fraunhofer IESE, Germany) proposed a blended-learning approach for on-the-job training, intermixing traditional classroom education with e-learning approaches. Using traditional classroom education alone causes problems because of often tight project schedules, short development cycles, and a
heterogeneous audience. Pure e-learning approaches, on the other hand, lack social communication and expert guidance; furthermore, developing e-learning courses is often quite expensive. In the proposed blended-learning approach, online courses are used at the beginning of a training period in order to bring all trainees to a common level of knowledge and skills. Traditional classroom education can then be used for teaching advanced concepts, as well as for group work and practical exercises. A transfer program developed according to this blended-learning approach consists of the following steps:
– kick-off meeting of all participants, their teachers, and tutors;
– online learning phase to provide knowledge and skills in applying UML;
– traditional course on object-oriented design with UML;
– final project work.
The approach has been used in training developers and managers in the use of the Unified Modeling Language (UML). Results show that the approach solves typical problems of both classroom and online education. By using online courses in the pre-training phases, it can be ensured that all participants have achieved a minimum experience level before the classroom training starts; as a consequence, the duration of classroom training can be shortened. Social communication is supported, since trainees already know each other, as well as their trainers, from the pre-training phases. She concluded that both approaches have their strengths and weaknesses, but that the synergy effects of their combination clearly outweigh the isolated benefits of either approach.
4 Discussion
Before the workshop, participants were asked to think about challenging, controversial topics they wanted to discuss. This resulted in an interesting discussion by email just before the workshop. Some of these points are presented in this section, as well as some introductory controversial topics presented by the organizers of the workshop. At the workshop, we organized a vote to select three subjects to discuss in three working groups.
4.1 Controversial Topics
To start a lively discussion, Kim and Jürgen had prepared a few controversial topics, which are briefly presented below.

Inheritance considered harmful

Kim started by pointing out the keys to a basic understanding of objects: state, methods, and dynamic dispatch. This means that class definition and use, as well as method invocation, need to be explained early on. The explanation of dynamic dispatch (inclusion polymorphism), however, should not rely on subclassing. Subclassing involves many new concepts, like protection mechanisms, constructors,
and the ‘super’ construct, etc., and is much too complicated for beginning students to grasp. Therefore, subclassing should not even be mentioned early on. In Java, for example, interfaces support dynamic dispatch as well and are a much cleaner concept than general subclasses. In fact, subclassing boils down to code reuse, which is normally taught at a later stage in the curriculum, and it is pretty complicated. We should therefore avoid introducing inheritance at the very beginning.

No magic please

Jürgen discussed principles for successful early examples. The traditional “HelloWorld” example has been criticized a lot lately [19]. But “HelloWorld” and its variations are not only bad examples of object-oriented programs, they are also bad examples by the “no magic” measure. With “magic” we refer to examples or topics that are made more complex than necessary, for example by involving several new and possibly interrelated concepts. Such examples involve language idiosyncrasies, like public static void main (String[] args), and the usage of overly complex library classes (e.g., for input handling) early on. No concepts should be introduced using flawed examples, i.e., exemplifications relying on exceptions to generally established rules. Java strings, for example, are not real objects, since String objects cannot be modified. The main method is not a real method, since there are no messages sent to it and its parameters seem to be supplied by superior forces. This list can be made much longer, and educators have to think twice before presenting seemingly simple examples.

Just before the workshop we launched an e-mail discussion to get some input for the planned working-group sessions. The following two sections are organized around two statements that caused quite heated discussions.

The object-oriented paradigm should be taught from the very beginning

Stéphane argued against this statement, since students might not get the full picture of what programming is about, and there are situations where other paradigms are better suited. With Java, however, you can do “everything,” so why bother about other paradigms? He proposed to start with a functional language like Scheme. Joe noted that not every teacher has the freedom to design a curriculum from scratch; at his university this is certainly the case, so he can only use Java and does the best he can. When the discussion went into the details of pedagogical/academic examples, Jan replied that students should be confronted with the “real stuff” instead of “playing with turtles” (using Squeak). He claimed instead that programming in the small is no longer relevant and that more focus should be put on design issues instead of pure programming concepts. He questioned whether it is really necessary to have a more intuitive first programming course.

It doesn't matter which language you use to teach object-orientation

Stéphane disagreed completely. He strongly believes that for teaching OO, the primary aspects a language should support are the anthropomorphism of the OO paradigm and simplicity. Joe agreed on this, but he argued that you
shouldn't teach procedural programming as a prelude to OO. Procedural programming is not a good first paradigm, mainly because it is tied too closely to a machine model that is not essential and builds bad habits of thought; furthermore, it is harder to unlearn something than to learn a new concept from scratch. Rosalia argued that many students actually have a procedural background, whether you like it or not. She feels that procedural and OO are just different ways to solve problems, and both must be faced at some point, so between procedural-then-OO and OO-then-procedural she would choose the first option. Joe disagreed, saying that a procedural mindset actually hinders students from thinking in an OO way: the more successfully you teach the procedural approach, the harder it will be for students to learn OO later.

As a summary of the presentations, discussions, and e-mail comments, we came up with the following list of working-group topics:
– Is programming in the small still relevant?
– Diachronical walk through the paradigms.
– Problems of large-system development.
– Definition of a student-oriented curriculum and of a student-friendly presentation of topics.
– What does objects-first mean?
– When and how should we introduce the main method for novices?
– Programming is a skill (learned through apprenticeship with a master), not a science (which can be studied). Should this change?
– The role of visual presentation (UML).
– Extreme programming in the first course?
– Teaching assertions in the first course?
These questions were discussed under the following headers: Can XP practices be taught in the first course?, Should assertions be taught in the first course?, and What does objects-first really mean?
4.2 Can XP Practices Be Taught in the First Course?
The aim of this discussion group was to examine to what extent the 11 key ideas of XP could be used beneficially in early computer science courses. Out of these key ideas, the group picked out several and discussed their relevance for education:

– testing: Testing is considered to be very important, and it can be seen as a form of specification, albeit a less formal one. We also believe that by writing tests themselves, students learn a lot about the code they are writing the tests for, so writing tests makes students think about their code. Using tools like JUnit or SUnit for student assignments could be beneficial, although someone remarked that JUnit tests are very hard to read (a minimal example follows at the end of this subsection).
– the planning game: Students need to see how to plan their time for finalizing a student project, because they are not yet mature enough to do this themselves. This is an important issue to learn, because it fits well with the idea of small releases in companies.
– pair programming: Pair programming can be a big help for students and also helps develop social skills. However, a serious lack of social skills to begin with can be a real obstacle to the success of pair programming.
– small releases: This goes together with the planning game. It is important as a skill, but also as a teaching strategy. We need to help students find a plan for their projects that enables small releases, depending on the students' level of maturity (first-year versus final-year students, for example).
– refactoring: We agreed that there is a contradiction here: XP endorses the saying “if it isn't broken, don't fix it,” but if you never refactor, your code becomes dirty. Students should at least be taught to clean up their code. But when do you clean up your code: at the end of a small release, or at the beginning of the next?

We certainly believe that these XP practices can make a significant contribution to learning object-oriented concepts. There is an upcoming half-day workshop on this topic at OOPSLA 2002 [18].
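As promised in the testing item above, here is a minimal sketch (ours, not from the discussion) of a student-level test in the JUnit 3 style that was current at the time; the Counter class is invented for illustration:

    import junit.framework.TestCase;

    // Hypothetical class under test.
    class Counter {
        private int value;
        void increment() { value++; }
        int value() { return value; }
    }

    public class CounterTest extends TestCase {
        public void testIncrementAddsOne() {
            Counter c = new Counter();
            c.increment();
            // The assertion doubles as a small, informal specification.
            assertEquals(1, c.value());
        }
    }

Writing even such a tiny test forces the student to decide what increment() is supposed to do before (or while) writing it, which is exactly the pedagogical benefit claimed above.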
4.3 Should Assertions Be Taught in the First Course?
The second working group discussed the objects-first approach to CS 1 as well as the use of assertions in CS 1.
Objects-First. It was agreed that good scaffolding is essential to an objects-first approach because students need good first examples. Ideally one would like to arrange examples so that all parameters, instance variables, etc., are themselves objects. It is useful to avoid primitive types initially as much as possible. It was suggested that if you want to do objects-first, Smalltalk is good, because everything is an object. In discussing the use of students acting out roles as objects, it was pointed out that a difficulty with such an approach to objects-first is that students don’t always follow scripts! It may require the presence of a referee to get them to behave. The group also discussed particular tools as an aid to the first few weeks of an introductory course. BlueJ may help avoid the magic of static void main, etc., when starting, but it was felt that the BlueJ developers may need to provide more examples. Karel may also be a good environment for starting; girls seem to enjoy it as much as boys.
Assertions in the First Course. The use of assertions in the first course was controversial. To be successful, it was felt that students need to read lots of assertions before they are ready to write their own. It was also felt that in many cases quantifiers are needed in order to really express assertions. Yet these cannot be part of the language (e.g., Eiffel, Java), and thus must simply be treated as special comments, with little or no language support. Another criticism was that methods in the first course are often too simple to have meaningful assertions, though there was general agreement that assertions can be useful in preparation
for writing loops, and they often help in understanding boundary conditions. In brief, an invariant essentially specifies the loop. However, the concern of many participants was: why introduce a topic in CS 1 that is a stretch when there are already too many topics!
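As a small illustration (ours, not an example from the workshop), Java 1.4’s assert statement can state a loop invariant, although the invariant’s implicit quantifier (“the sum of all elements so far”) already needs a helper method, which is exactly the kind of missing language support the participants complained about:

    public class InvariantDemo {
        // Helper standing in for the quantifier "sum of a[0..k-1]".
        static int sumUpTo(int[] a, int k) {
            int s = 0;
            for (int j = 0; j < k; j++) s += a[j];
            return s;
        }

        // The invariant "total == sum of a[0..i-1]" specifies the loop.
        public static int sum(int[] a) {
            int total = 0;
            int i = 0;
            while (i < a.length) {
                assert total == sumUpTo(a, i); // invariant holds on entry
                total += a[i];
                i++;
            }
            assert total == sumUpTo(a, a.length); // invariant plus exit condition give the result
            return total;
        }
    }

(Run with java -ea to enable assertion checking.)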
4.4 What Does Objects-First Really Mean?
This group started with the question: What is a student-oriented curriculum? A possible answer is that a student-oriented curriculum/course is one where the students are prepared to fit the course (instead of the other way around). To develop a student-friendly presentation you need to know your audience: the more uniform your audience and the better you know your students, the easier it is to find the “right” level of presentation. The group went on to discuss teaching approaches and materials. Participants pointed out that there is an apparent lack of interesting, well-designed, and well-coded examples. Students get to see very few exemplary non-trivial examples. On the other hand, it is very difficult to use large examples, since students do not like to work with existing programs. Most of them have difficulties giving up control and relying on existing code, especially when the code was developed by other students. Scaffolding (as proposed in Tracy’s approach) is an essential technique for coping with these problems. The group also liked the idea of using metaphors for teaching purposes and spent some time trying to find/define a few. However, it turned out to be more difficult than expected to find convincing metaphors.
5 Conclusions
The objective of this workshop was to discuss current tools and environments for learning object-oriented concepts and to share ideas and experiences about the use of computer support to teach the basic concepts of object technology. Ongoing work was presented by a very diverse group of people with very different backgrounds, which resulted in a broad range of topics: tool support, environments, courses, teaching approaches, languages for teaching, etc. Summarizing what was said in the debate groups, we conclude that:
– we want to focus first on teaching the concepts, teaching novices how to think in an object-oriented way,
– the controversial topic “inheritance considered harmful” didn’t appear to be so controversial after all, since everyone agreed that inheritance shouldn’t be the first topic taught in a computer science course,
– using XP practices in a computer science course might have significant benefits, especially testing and small releases. We agreed that students should be taught to write tests, and that by writing tests themselves, they learn a lot about the software they are writing the tests for as well,
– building on the objects-first approach, we agreed that good first examples are essential and that, ideally, the parameters and instance variables we use should all be objects. This led us to propose using a pure object-oriented language like Smalltalk for teaching purposes, since it fully mirrors the object paradigm of objects sending messages to other objects,
– using assertions was controversial: although we agreed that assertions could be useful in preparation for topics like writing loops, people agreed that it takes a lot of time before students are able to use them properly, and that it is therefore best to concentrate first on the list of more important object-oriented concepts.
6 List of Participants
The workshop had 20 participants from 12 countries. Eighteen participants came from academia and only two from industry. All participants are listed in Table 2 together with their affiliations.
References
[1] Bacvanski, V., Börstler, J.: Doing Your First OO Project–OO Education Issues in Industry and Academia. OOPSLA’97, Addendum to the Proceedings (1997) 93–96
[2] Börstler, J. (ed.): OOPSLA’97 Workshop Report: Doing Your First OO Project. Technical Report UMINF-97.26, Department of Computing Science, Umeå University, Sweden (1997) http://www.cs.umu.se/~jubo/Meetings/OOPSLA97/
[3] Börstler, J. (chpt. ed.): Learning and Teaching Objects Successfully. In: Demeyer, S., Bosch, J. (eds.): Object-Oriented Technology, ECOOP’98 Workshop Reader. Lecture Notes in Computer Science, Vol. 1543. Springer-Verlag, Berlin Heidelberg New York (1998) 333–362 http://www.cs.umu.se/~jubo/Meetings/ECOOP98/
[4] Börstler, J., Fernández, A. (eds.): OOPSLA’99 Workshop Report: Quest for Effective Classroom Examples. Technical Report UMINF-00.03, Department of Computing Science, Umeå University, Sweden (2000) http://www.cs.umu.se/~jubo/Meetings/OOPSLA99/CFP.html
[5] Michiels, I., Börstler, J.: Tools and Environments for Understanding Object-Oriented Concepts. In: Object-Oriented Technology, ECOOP 2000 Workshop Reader. Lecture Notes in Computer Science, Vol. 1964. Springer (2000) 65–77 http://prog.vub.ac.be/~imichiel/ecoop2000/workshop/
[6] Burns, A., Davies, G.: Concurrent Programming. Addison-Wesley (1993)
[7] Goldberg, A.: What Should We Teach? OOPSLA’95, Addendum to the Proceedings. OOPS Messenger 6 (4) (1995) 30–37
[8] Manns, M. L., Sharp, H., McLaughlin, P., Prieto, M.: Capturing Successful Practices in OT Education and Training. Journal of Object-Oriented Programming 11 (1) (1998)
[9] Stein, L. A.: Interactive Programming in Java. Morgan Kaufmann (2000)
[10] Pedagogical Patterns pages. http://www-lifia.info.unlp.edu.ar/ppp/ http://csis.pace.edu/~bergin/PedPat1.3.html
Table 2. Workshop participants

Isabel Michiels – Vrije Universiteit Brussel, Belgium
Jürgen Börstler – Umeå University, Sweden
Kim Bruce – Williams College, USA
Rosalia Peña – University of Alcalá de Henares, Madrid, Spain
Khalid Azim Mughal – University of Bergen, Norway
Laszlo Kozma – Eötvös Loránd University, Hungary
Jan Dockx – Katholieke Universiteit Leuven, Belgium
Vasco Vasconcelos – University of Lisbon, Portugal
Ines Grützner – Fraunhofer Institute for Experimental Software Engineering, Kaiserslautern, Germany
Gilles Ardourel – LIRMM, Montpellier, France
Carsten Schulte – University of Paderborn, Germany
Tracy Lewis – Virginia Tech, Virginia, USA
Joe Bergin – Pace University, USA
Noa Ragonis – Weizmann Institute of Science, Rehovot, Israel
Glenn Blank – Lehigh University, Bethlehem, PA, USA
Martine Devos – Avaya Research, USA
Stéphane Ducasse – Software Composition Group, University of Berne, Switzerland
Kristen Nygaard – Department of Informatics, University of Oslo, Norway
Boris Mejias – Vrije Universiteit Brussel, Belgium
Andres Fortier – Universidad Nacional de La Plata, Argentina
[11] European Master in Object-Oriented Software Engineering. http://www.emn.fr/MSc/
[12] JUnit home page. http://www.junit.org
[13] OOPSLA’01 workshop. http://www.cs.umu.se/%7Ejubo/Meetings/OOPSLA01/
[14] ECOOP 2002 workshop homepage. http://prog.vub.ac.be/ecoop2002/ws03/
[15] Squeak homepage. http://www.squeak.org
[16] http://www.iam.unibe.ch/~ducasse/WebPages/NoviceProgramming.html
[17] Köhler, H. J., Nickel, J., Niere, J., Zündorf, A.: Integrating UML Diagrams for Production Control Systems. Proceedings of the 22nd International Conference on Software Engineering (ICSE), Limerick, Ireland, June 2000, 241–251
[18] OOPSLA’02 Workshop on Extreme Programming Practices in the First CS1 Courses. http://csis.pace.edu/~bergin/XPWorkshop/ http://www.oopsla.org
[19] Westfall, R.: Hello, World Considered Harmful. Communications of the ACM 44 (10), Oct 2001, 129–130
12th Workshop for PhD Students in Object Oriented Systems
Miguel A. Pérez and Pedro J. Clemente
Quercus Software Engineering Group
Department of Computer Science
University of Extremadura, Spain
{toledano,jclemente}@unex.es
Abstract. The “Workshop for PhD Students in Object-Oriented Systems” (PhDOOS) has become an established annual meeting of PhD students in object-orientation. The main objective of the workshop is to offer an opportunity for PhD students to meet and share their research experiences, to discover commonalities in research and studentship, and to foster a collaborative environment for joint problem-solving. PhD students from both industry and academia are encouraged to present their research in order to ensure a broad, unconfined discussion.
1. Introduction
The “International Network for PhD Students in Object Oriented Systems” [3] was founded in 1991 at the first workshop for PhD students, held in Geneva, Switzerland, in conjunction with the 5th European Conference on Object-Oriented Programming. Since its foundation, the International Network has organized a workshop for PhD students in association with the European Conference on Object-Oriented Programming (ECOOP) [10] each year. The main objective of the workshop is to offer an opportunity for PhD students to meet and share their research experiences, to discover commonalities in research and studentship, and to foster a collaborative environment for joint problem-solving. The “12th PhDOOS workshop” [7] received financial support from the “Association Internationale pour les Technologies Objets” (AITO) [8], a non-profit association registered in Kaiserslautern, Germany, whose purpose is to promote the advancement of research in object-oriented technology. This support enabled the participation of some students who could not have afforded to participate otherwise. Funding was available for PhD students doing research in the area of object-orientation. We would have liked to support all participants, but since funds were limited, we established some funding criteria, with the goal of making them as fair as possible for everybody. This workshop is slightly unusual, because the participants are PhD students and the topics are derived from the areas of interest of the participants. However, the workshop organizers proposed a broad range of subjects related to object-orientation, including but not restricted to: languages, semantic description of
systems, conceptual modelling, applications, distribution, databases, patterns, frameworks, metrics, reuse, testing, graphics, and user interfaces. Due to the heterogeneous nature of the topics of the papers received, the workshop conclusions focus on interesting research hot-spots and on solving common problems. The workshop is divided into plenary sessions with a number of pre-screened presentations and small subgroups composed of PhD students working on similar topics. For each participant, this is an opportunity to present his/her research to a knowledgeable audience working in a similar context, and to share his/her ideas on hot topics or new trends. In this way, the participants may receive insightful comments about their research, learn about related work, and initiate future research collaborations. The participants are divided into two categories. First, a student can submit a (three- to eight-page) extended abstract on a specific topic and give a thirty-minute presentation, including questions and discussion, at the workshop. Second, a PhD student may submit a one-page abstract of his/her PhD project and give a fifteen-minute talk, including questions and discussion. The program committee is essentially composed of young researchers with a strong background in some object-orientation area. The review process is not designed to select a few of the best papers, but to ensure that every participant is able to present relevant, well-prepared material. This year nineteen papers, six short papers and thirteen extended papers, were received; however, only sixteen of them were presented at the workshop. The participants came from eleven countries, evidence that the PhDOOS workshop is an excellent meeting place for young researchers in object-oriented systems. The plenary session features two speakers who are invited to talk about interesting research hot-spots, personal experiences, or research methodology. This is a unique opportunity to hear or ask things not discussed elsewhere, and to have an “unplugged” discussion with a well-known personality from our field. This year, the speakers were Professor Mohamed Fayad (University of Nebraska-Lincoln, USA) [2] and Professor Eric Jul (University of Copenhagen, Denmark) [1]. This paper is organized as follows. The next section focuses on the participants of the workshop. In section three, the presented papers are summarized and the main topics explained. Section four describes the invited talks. Section five covers the final discussion. Finally, we conclude with the workshop conclusions and bibliographic references.
2. PhDOOS Workshop Participants
Traditionally, the workshop has been large. In recent editions [6], [5], [4] there have been around twenty-five participants, and this was the case this year again. We can divide the participants into three groups:
– Organizers.
– Invited speakers.
– Participants with papers.
2.1. Organizers
The organizers of the PhDOOS workshop are volunteers drawn from the participants of previous editions, advised by the organizers of earlier editions. The organizers of the 12th PhDOOS workshop were Hermes Senger, Pierre Crescenzo, Ezz Hattab, Cyril Ray, and Miguel A. Pérez Toledano. Hermes Senger (who defended his PhD thesis in June 2002) is currently a teaching assistant at SENAC College of Computer Science and Technology (São Paulo, Brazil). His current research interests include object distribution and middleware architecture. Pierre Crescenzo is a research and teaching assistant in Computer Science at the University of Nice-Sophia Antipolis (France), and defended his PhD thesis in December 2001. His current research interests are focused on the OFL model and object-oriented meta-programming. Ezz Hattab is a PhD candidate in Computer Science at the National Technical University of Athens (Greece); his research interest is focused on agent-oriented methodologies. Cyril Ray is currently a PhD student in Computer Science at the French Naval Academy Research Lab (France); the objective of his thesis is to study the distributed system concept in order to apply it to geographic information systems. Miguel Ángel Pérez Toledano is working at the University of Extremadura (Spain) as a lecturer; his research is focused on the selection and retrieval of components from repositories using semantic descriptions. In addition, both organizers of last year’s workshop, Michael Haupt (Technical University Darmstadt, Germany) and Rainer Ruggaber (University of Karlsruhe, Germany), actively helped in the early stages of organizing this workshop.
2.2. Invited Speakers
This year the invited speakers of PhDOOS were Professors Mohamed Fayad and Eric Jul. Professor Mohamed Fayad presented the talk “Accomplishing Innovative Object-Oriented Research: Be The King Or Be The Whipping Boy”. He is a J.D. Edwards Professor, Computer Science and Engineering, at the University of Nebraska-Lincoln (USA). He was an associate professor at the Computer Science and Computer Engineering Faculty at the University of Nevada from 1995 to 1999. He has more than 15 years of industrial experience and has been actively involved in over 60 object-oriented projects in several companies using Shlaer-Mellor, Colbert, OMT, the Use Case Approach, UML, Design Patterns, Frameworks,
Software Process Improvement, Systems and Software Engineering, Internet and Web Applications using Java, OO Distributed Computing using CORBA, and others. Professor Eric Jul presented the talk “PhD Advices”; he is a Professor in the Department of Computer Science at the University of Copenhagen (Denmark), an external lecturer at the IT University of Copenhagen, and an adjunct Professor at the Department of Computer Science of Aalborg University (Denmark). He is the principal architect of the Emerald runtime system (kernel). Currently, he is involved in building a solid laboratory for research in distributed systems, nicknamed DISTLAB [9]. His main interest areas are distributed operating systems, object-oriented languages, garbage collection, and object-oriented design.
2.3. Participants with Papers
The attendants with accepted papers who took part in the workshop were the following:
– Francisco Jose da Silva e Silva, Federal University of Maranhão (Brazil).
– Peter R. Pietzuch, Computer Laboratory, University of Cambridge (England).
– Eduardo Santana de Almeida, Federal University of São Carlos (Brazil).
– Brian Shand, Computer Laboratory, University of Cambridge (England).
– Octavio Martín-Díaz, Department of Languages and Computer Science Systems, University of Sevilla (Spain).
– Ángel Luis Rubio, Department of Mathematics and Computing, University of La Rioja (Spain).
– Manuel Román, Department of Computer Science, University of Illinois at Urbana-Champaign (USA).
– Tracy L. Lewis, Department of Computer Science, Virginia Tech (USA).
– Eric Tanter, Center for Web Research, Computer Science Department, University of Chile (Chile).
– José Luis Isla Montes, Department of Languages and Computer Science Systems, University of Cádiz (Spain).
– Pedro J. Clemente, Quercus Software Engineering Group, University of Extremadura (Spain).
– Sergio Luján-Mora, Department of Languages and Computer Science Systems, University of Alicante (Spain).
– Olivier Nano, Department of Computer Science, University of Nice (France).
– Sergio Castelo Branco Soares, Informatics Center, Federal University of Pernambuco (Brazil).
– Hans Reiser, Department of Distributed Systems and Operating Systems, University of Erlangen-Nürnberg (Germany).
– Attila Ulbert, Department of General Computer Science, Eötvös Loránd University, Budapest (Hungary).
– Alfonso Sánchez Sandoval, Technical University of Lisbon (Portugal).
Moreover, Wei Zhao (Department of Computer and Information Sciences, University of Alabama at Birmingham, USA) and K. Chandra Sekharaiah (Department of Computer Science, Indian Institute of Technology Madras, India) apologized for their absence.
3. Papers Presented at PhDOOS 2002
This workshop is a meeting point for PhD students who are developing their theses in object-oriented systems. Due to the large number of topics and the heterogeneity of the received papers, the typical position papers of other workshops are not suitable for this one. PhDOOS is rather a thermometer for detecting hot spots in current lines of research and new lines of work. Nineteen papers (six short papers and thirteen extended ones) were received for the 12th PhDOOS workshop, and sixteen of them were presented over the two days of the workshop. The papers can be classified into the following topics: conceptual modelling, component-based development, adaptation of systems, patterns, frameworks, testing, and architectural design. As we can see, the variety of the papers makes it difficult to draw common conclusions suitable for all of them, so the innovative ideas presented in the papers are reviewed briefly below, grouped by topic of interest. All the papers discussed in this section are available for public view on the workshop’s web site [7].
3.1. Conceptual Modelling
Conceptual modelling is one of the most productive lines of work within object-oriented systems. The use of UML and Catalysis, the emergence of Aspect-Oriented Programming, and the use of software components have opened the way for new approaches to system modelling. During this workshop, three works related to this topic were received. The paper “Progressive implementation with aspect-oriented programming” was presented by Sergio Castelo Branco Soares (Federal University of Pernambuco, Brazil). In this paper, he presents an overview of specific guidelines for implementing distribution aspects with AspectJ and RMI, which were derived from an experience with restructuring a real information system. He is also interested in implementing persistence, distribution, and concurrency control aspects. The second paper, “An extension to UML based on AOP for CBSE modelling”, was written by Pedro J. Clemente (University of Extremadura, Spain). It presents a new modelling mechanism for component-based systems, which uses Aspect-Oriented Programming to describe and implement the dependencies between components. This model allows the separate definition of dependencies between components during the design phase using AOP techniques. In “Multidimensional Modeling using UML and XML”, Sergio Luján-Mora (University of Alicante, Spain) presented an extension of the Unified Modeling Language (UML), using stereotypes, to represent the main structural and dynamic
MultiDimensional (MD) properties at the conceptual level. Moreover, the eXtensible Markup Language (XML) is used to store MD models, and eXtensible Stylesheet Language Transformations (XSLT) are applied to automatically generate HTML pages from the XML documents.
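For readers unfamiliar with this last step, the following fragment is a generic sketch (not code from the paper; the file names are invented) showing how such a transformation can be driven from Java through the standard JAXP/TrAX API:

    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class GenerateHtml {
        public static void main(String[] args) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            // md-to-html.xsl maps the elements of the XML model to HTML markup.
            Transformer t = factory.newTransformer(new StreamSource("md-to-html.xsl"));
            // md-model.xml holds the multidimensional model stored as XML.
            t.transform(new StreamSource("md-model.xml"), new StreamResult("report.html"));
        }
    }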
3.2. Component-Based Development
In order to meet the demand for more reliable and efficient software at a low cost, new techniques are being studied. One that stands out is Component-Based Development (CBD). Software components can be understood as previously tested code blocks with well-defined interfaces; they facilitate the construction of software, mostly through reuse, thus avoiding code redundancy in applications. The paper “Distributed Component-Based Software Development Strategy”, presented by Eduardo Santana de Almeida (Federal University of São Carlos, Brazil), seeks a strategy for distributed component-based software development, which covers both developing application-domain components and developing applications that reuse these components. Other papers could have been included under this topic, as several related works were received, but they have been placed in other groups with common interests.
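The idea of a component as a pre-tested code block behind a well-defined interface can be pictured in a few lines of Java (an illustrative sketch only; all names are invented):

    // The published interface is the only thing client applications depend on.
    public interface SpellChecker {
        boolean isCorrect(String word);
    }

    // One pre-tested implementation; it can be replaced by another component
    // honoring the same interface without touching any client code.
    class DictionarySpellChecker implements SpellChecker {
        private final java.util.Set words = new java.util.HashSet();
        public DictionarySpellChecker(java.util.Collection dictionary) {
            words.addAll(dictionary);
        }
        public boolean isCorrect(String word) {
            return words.contains(word.toLowerCase());
        }
    }

Reuse then amounts to wiring such interfaces together instead of rewriting the logic behind them.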
3.3. Adaptation and Evolution of Systems
Adaptation techniques allow software to modify its own functions and configuration in response to changes in its execution environment. This has a number of potential benefits, such as the ability to respond rapidly to security threats and the opportunity to optimize performance as changes in the execution environment take place. In “Dynamic Adaptation of Distributed Systems”, Francisco Jose da Silva e Silva (Federal University of Maranhão, Brazil) addresses the problem of how to change computing systems at runtime, in order to adapt to variations in the execution environment such as resource availability and user mobility. He presented a generic framework to support the construction of adaptive distributed applications. The second paper in this topic offers a different point of view on techniques for controlling system evolution. Eric Tanter (University of Chile) presented the work “Run-Time Metaobject Protocols: The Quest for their Holy Application”. Run-time MetaObject Protocols (MOPs) are reflective systems that allow objects to be controlled at run time by one or many metaobjects. These metaobjects can then alter the semantics of the execution for the base objects they control. The last paper on adaptation and evolution of systems was “A rewriting model for services integration” by Olivier Nano (University of Nice, France). This thesis looks at how to make the integration of services independent of the platform, in order to allow the configuration of distributed applications.
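To give a flavour of what a run-time metaobject can do, the following sketch uses java.lang.reflect.Proxy as a stand-in (it is our illustration, not the MOP described in Tanter’s paper): a metaobject intercepts every invocation on a base object and may alter its execution, here simply by tracing it:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.ArrayList;
    import java.util.List;

    public class MetaobjectDemo {
        public static void main(String[] args) {
            final List base = new ArrayList(); // the base object being controlled
            InvocationHandler metaobject = new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] a) throws Throwable {
                    System.out.println("intercepted: " + m.getName());
                    return m.invoke(base, a); // altered execution semantics would go here
                }
            };
            List controlled = (List) Proxy.newProxyInstance(
                List.class.getClassLoader(), new Class[] { List.class }, metaobject);
            controlled.add("hello"); // every call is routed through the metaobject
        }
    }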
3.4. Patterns
Design patterns are a topic of great interest among researchers and object-oriented software designers. Their success resides mainly in the creation of a common vocabulary of “good solutions” to frequent design problems in specific contexts, facilitating the reuse of expert knowledge as “components” of design and improving the documentation, understanding, and communication of the final design. In “A new approach for modelling design patterns with UML”, José Luis Isla Montes (University of Cádiz, Spain) describes a different model for design pattern specification and its integration with UML, in the context of a design tool that facilitates pattern definition, application, visualization, and validation in certain contexts. The other paper received on patterns, from Tracy Lewis (Virginia Tech, USA), is completely different from the previous works. Her thesis project, “A Measure of Design Readiness: Using Patterns to Facilitate Teaching Introductory Object-Oriented Design”, is focused on the field of education. It proposes a measure of “design readiness”, an assessment of the cognitive state in which one is able to understand design abstractly. Programming and design patterns are then used to assist in teaching critical design concepts. This research is an attempt to address the question: can we improve a student’s chance of success in learning design concepts by adjusting instruction to his/her level of design readiness?
3.5. Frameworks
Frameworks represent another step towards reuse in object-oriented systems, trying to adapt previous solutions to the development of new software systems. In PhDOOS 2002, frameworks were the most popular topic among PhD students; four papers were presented in this area:
– “A Framework for Object-Based Event Composition in Distributed Systems” was presented by Peter R. Pietzuch (University of Cambridge, England). For event-based publish/subscribe communication, a detection framework is proposed which allows commonly used composite event detections to be placed as near to the event sources as possible and reused among subscribers in distributed systems.
– “A Distributed Object-Oriented Application Framework for Ubiquitous Computing Environments” was presented by Manuel Román (University of Illinois at Urbana-Champaign, USA). He showed a novel application framework, inspired by the Model-View-Controller, that provides functionality to design and implement applications that are aware of and can benefit from the resources contained in the users’ environment, can be partitioned dynamically among different devices, and can alter their internal composition at runtime to adapt to changes in their environment.
– “Metamodeling and Formalisms for Representation of Behavior” from Ángel Luis Rubio (University of La Rioja, Spain). This framework allows studying
in detail the languages and techniques used to develop complex software systems; this study is a preliminary step in analyzing issues regarding these languages such as comparison, adaptation, and transformation, among others. The thesis presents a solution to this problem by means of the introduction of a generic architecture, called the Noesis architecture.
– “Custom invocation semantics and corpuses in data multicasting: an invocation of custom invocation semantics in object-oriented middleware” from Attila Ulbert (Eötvös Loránd University, Hungary). In this paper an extensible architecture is presented to solve the problem of multicast group communication. The ORB(M) framework allows developers to implement their own semantic elements and deploy various new methods bound to the process of remote invocation. The invocation semantics are composed of semantic elements supplied by the developer or by the framework itself. Thus, the framework gives developers the freedom to replace and arbitrarily customise the invocation semantics used by a given method.
3.6. Testing
In object-oriented systems, objects are created dynamically and must be able to persist after the application that created them has finished. However, the resources available for dynamically created objects are limited, so these resources must be freed when no application requires an object any longer (garbage collection). This is needed in order to reclaim the storage held by unreachable objects, and garbage collection must do its job without creating dangling pointers. Under this subject we can include the thesis project “Distributed Garbage Collection for Wide Area Replicated Memory” from Alfonso Sánchez Sandoval (Technical University of Lisbon, Portugal). The objective of this thesis is to develop a distributed garbage collection (DGC) algorithm for managing replicated objects in wide-area environments. DGC is considered a suitable approach for keeping the Wide Area Replicated Memory free of leaks without creating dangling pointers.
3.7. Architectural Design
Software architecture involves the description of the elements from which systems are built: interactions, patterns, software elements, constraints, and so on. The organization of design elements (assignment, composition, evolution, selection, ...) constitutes the software architecture level of design. Two papers were presented on this topic during the workshop:
– “Malicious Fault Tolerance: From Theoretical Algorithms to an Efficient Application Development Process” was presented by Hans Reiser (University of Erlangen-Nürnberg, Germany). The goal of this thesis outline is to propose a novel architecture and software development process that adequately integrates the handling of malicious faults. Particular emphasis is placed on the design of the basic modular architecture for distributed consensus, which is an important mechanism for obtaining fault tolerance.
– “Towards a Quality-aware Processing of Architectural Alternatives” by Octavio Martín-Díaz (University of Sevilla, Spain). The paper introduces a new quality-aware process model to manage architectural alternatives. The main tasks are the generation and selection of best offers. The first task begins from an initial offer, some conditions, a repository, and a catalogue. An appropriate transformation is then sought and applied, obtaining a new offer nearer to the goal and better fulfilling the conditions.
4. Invited Talks
As mentioned in previous sections, Professors Mohamed Fayad and Eric Jul were invited to the 12th PhDOOS workshop. In his talk, “Accomplishing Innovative Object-Oriented Research: Be the King or Be the Whipping Boy”, Prof. Fayad examined object-oriented software engineering (OOSE) research with respect to the following central themes:
– What are the new OOSE trends?
– What is his opinion about the extended areas of research in OOSE?
– How do scientists identify lasting areas of research in OOSE?
– What are the approaches to accomplishing innovative research in OOSE, as a result of which you can be a king or be a whipping boy?
– What are the elements of innovative research in OOSE?
– How do you know whether or not you are on the right track in your research?
– Where and how should you publish the results of your OOSE research?
The keynote presented different models for OOSE research and the pros and cons of each model, and discussed actual experiences and lessons learned from different OOSE research projects. Professor Eric Jul presented the talk “PhD Advices”. He explained, with great clarity, a series of recommendations about how to identify the purpose of a thesis and how to achieve the proposed objectives. Moreover, in his graphic explanation, he gave advice about the relevance of the developed work and how to stay focused on the thesis. He also discussed how to develop a PhD thesis, how to make steady progress, and how to write up the resulting work.
5. Final Session
As noted in section 3, the heterogeneity of the received papers made it impossible to establish discussion groups similar to those of other workshops. Instead, the discussions took place after the paper presentations. Each participant had five to ten minutes (depending on the type of paper) to comment on the relevant, conflicting, and interesting points of the work. The discussion session of the final day was therefore centred on subjects of common interest, collecting contributions from the participants without going into the specific details of each work.
During this session, the following subjects were discussed:
– Where are the software component repositories? On this subject there was unanimity: there are hardly any public software repositories, and the existing ones lack an acceptable component catalogue.
– How are repositories used? The majority of public repositories describe their software elements in natural language, but do not include more detailed information.
– Are UML and Catalysis enough to develop and maintain component-based systems? On this point, opinions differed. With regard to the development of software using components, these tools are adequate, but they lack detailed semantic descriptions of individual components. Moreover, the majority thought that UML and Catalysis model at a high level, which can make the maintenance and evolution of software systems difficult.
– Is object-oriented technology (patterns, frameworks, aspects, ...) used in development companies? On this point, there was almost complete agreement that, excluding very big companies that invest in research and cooperate with universities, the majority of companies do not frequently use leading object-oriented technology.
Finally, based on the lines of work presented and the opinions of both the participants and the invited speakers, it was clear that future lines of work should focus on producing quality software at low cost. Hence, the analysis and design of patterns, the development of frameworks, and component-based development are considered priority lines of future work.
6. Workshop Conclusions
As in previous editions, the PhDOOS workshop proved once again to be an excellent meeting point for young researchers working in object-oriented systems. The comments offered by the referees on the papers, the exchange of experiences among PhD students, the relationships established for future collaborations, and the relevant advice from the invited speakers made this workshop a very interesting meeting.
References
[1] Prof. Eric Jul’s Home Page. http://www.diku.dk/users/eric/
[2] Prof. Fayad’s Home Page. http://www.cse.unl.edu/~fayad/
[3] International Network for PhD Students in Object Oriented Systems (PhDOOS). http://www.ecoop.org/phdoos/, 1991.
[4] 9th Workshop for PhD Students in Object Oriented Systems. http://www.comp.lancs.ac.uk/computing/users/marash/PhDOOS99/, 1999.
[5] 10th Workshop for PhD Students in Object Oriented Systems. http://people.inf.elte.hu/phdws/, 2000.
[6] 11th Workshop for PhD Students in Object Oriented Systems. http://www.st.informatik.tu-darmstadt.de/phdws/, 2001.
[7] 12th Workshop for PhD Students in Object Oriented Systems. http://www.softlab.ece.ntua.gr/facilities/public/AD/phdoos02/, 2002.
[8] AITO. Association Internationale pour les Technologies Objets. http://www.aito.org/
[9] DISTLAB. Distributed Systems Lab. http://www.diku.dk/researchgroup/distlab
[10] ECOOP. European Conference on Object-Oriented Programming. http://www.ecoop.org/
Web-Oriented Software Technology
Oscar Pastor (1), Daniel Schwabe (2), Gustavo Rossi (3), and Luis Olsina (4)
(1) Department of Information Systems and Computation, Valencia University of Technology, P.O. Box 22012, E-46022 Valencia, Spain
(2) Computer Science Department, PUC-Rio, Marques de Sao Vicente 225, 22453-900 Rio de Janeiro, Brasil
(3) LIFIA, Facultad de Informática, UNLP, La Plata, Argentina
(4) Department of Informatics, Engineering School, UNLPam, Calle 110 esq. 9, 6360 General Pico, La Pampa, Argentina
Abstract. Web Engineering has become an important research area, due to the unceasing growth of web sites and applications. New challenges must be faced to provide correct solutions to the problem of defining a precise process to go from requirements to a final software product of quality, all this in the context of a sound web-oriented software technology. This situation is influencing all the conventional areas of software engineering, which must be properly adapted to the particularities of Web Applications. For instance, UML-based Web Engineering, the need for new mechanisms and techniques for eliciting and representing user requirements for the Web, methods for web-based development and web-based conceptual modeling, and methods to evaluate the quality of the web application production process and the quality of web products, among others, are issues that require solutions from both the development and the quality assurance points of view. Addressing some of these issues, this work presents the conclusions of the discussions held at the Second Edition of the International Workshop on Web-Oriented Software Technology (IWWOST), held in Málaga, Spain, June 2002, as part of the European Conference on Object-Oriented Programming (ECOOP). Following the tradition of previous editions, the workshop was a rich forum of discussion, whose main results are presented structured in four main categories: UML-based web engineering, requirements engineering in the context of web applications, conceptual modeling for the web, and quality assessment for web production processes and products. Each issue is developed starting from the motivation of the problem, followed by the introduction of diverse concrete, current, and potential solutions to the corresponding problems, together with the relevant references.
This work is partially supported by the CYTED Program, in the VII.18 research project, WEST.
1 Introduction
In the web technology era that we have entered, several web application development methods are being presented, many using some kind of object-oriented technology. More than ever it is essential to look for common basic abstractions related to web technology, in order to avoid the “yet another method” syndrome. Web-Oriented Software Technology, Web Engineering, and other related terms have been coined in the last few years to face a well-known problem for the Software Engineering community: accepting as a new fact that the Web has become the environment of choice for application development. A key factor in dealing with this is how to obtain a quality, cost-effective software product as the result of a well-defined software production process. Strategies for obtaining such a product and process of quality have been extensively proposed in recent years for conventional applications, where by conventional we mean basically non-web applications, with static (data) and dynamic (behavior) aspects specified and implemented according to some given model and technology. But if we deal with web sites and applications, specific features appear that have not previously been taken into account. From the modeling point of view, beyond conventional data and behavior specification, we have, among others, navigation, extended presentation features, and personalization, which require the introduction of new models and methods. On the other hand, from the design and implementation point of view, new technological platforms have been introduced to properly support the software development process for this new kind of application. As a logical consequence, assuring quality, and knowing how to measure it, requires the introduction of new proposals that appropriately consider which characteristics and attributes of quality are to be specified and analyzed. Furthermore, building a Web Application is not merely an engineering issue. It is also a communication issue, where marketing techniques are required to provide added value to overcome market competition, and where suggestive interaction techniques play a central role in the achievement of quality. For the first time, engineering, communication, and aesthetics must be properly combined in the process of building the software product that constitutes a Web Application. Characterizing precisely what is really original in the context of the so-called Web Engineering is still an open discussion. Nevertheless, it seems to be widely accepted that this new set of features, absent in conventional software production processes, models, and methods, justifies considering Web Engineering as an emergent, new topic. Furthermore, it requires the creation of a technology especially intended to provide solutions to the problem of developing quality Web Applications within cost constraints.
The resulting evolution of perspective has been reflected in a set of methods that cover different aspects of this emerging discipline (see [22], [1], [12]), aiming at providing Web-based system development with a systematic approach and with quality control and assurance. In spite of this, once more the rapid development and high degree of pragmatism that characterize Web evolution have promoted tight coupling of the models, methods, and techniques defined in the different existing proposals. This feature is a clear drawback in an environment characterized by the burst of new architectures, access devices, protocols, and languages, or even the appearance of novel organizational demands (such as mobile software, that is, software not bound to one particular machine). Also, there exists no common ontology enabling communication among researchers and practitioners, which is clearly reflected in the fact that terms as widely used as “service”, “navigation”, or even “metric” or “pattern” may have different (sometimes even contradictory) meanings in different contexts, including different hypermedia conceptual modeling proposals. Consequently, the next required step must be to identify, at least for discussion purposes, the issues that should be selected as most relevant in the context of Web-Oriented Software Technology. Following the discussions held around the contributions presented at the Second Edition of the International Workshop on Web-Oriented Software Technology, hosted by ECOOP’02, we can group the main issues under the following headings:
– UML-based Web Engineering, as an attempt to adapt the UML notation to the particularities required to model, design, or implement a Web Application [13].
– Establishing the relation between Requirements Specification and Navigation Analysis and Design, under the hypothesis that changing the domain implies changing the navigation [2], [18], [6].
– Conceptual Modeling of Web Applications [5], [21], [4], [23], with the aim of precisely defining the navigation primitives required as building blocks for specifying Web Applications.
– Methods for analyzing and evaluating the quality and/or the functional size of web applications [16], [10], [20].
Next, each issue is introduced in further detail in its corresponding section. Finally, the main conclusions and the references close the work.
2 UML-based Web Engineering
The intensive use of UML as a modeling language has also reached the Web Engineering community. Some approaches [13], [7] propose using the notation and diagrammatic techniques provided by the UML, under the hypothesis that they are powerful enough to cover the requirements that arise when modeling Web Applications; and indeed, for many aspects the UML notation can be used
without extensions. If necessary, a Web designer can also make use of other UML modeling techniques by adding other views as needed. In particular, stereotypes, tagged values, and constraints are the built-in extension mechanisms provided by UML for addressing special aspects of a domain. It is usual, for instance, to introduce UML profiles, including defined stereotypes, for the modeling of navigation and presentation aspects of Web Applications. This strategy can be applied in different contexts of a software production process: Use Cases and Activity Diagrams can be used for Requirements Specification; Class Diagrams for Conceptual Design; Statecharts and Interaction Diagrams for modeling Web scenarios; Activity Diagrams for Task Modeling; and Deployment Diagrams to document the distribution of Web Application components. Using UML as a standard notation seems to be a sensible use of the proposal. There are, however, some important limitations, such as the lack of support for a fine-grained component concept and the lack of complete integration of the different modeling techniques. Furthermore, the unrestrained use of the extension mechanisms works against standardization, since at the limit everything can be seen as UML-compliant if the corresponding stereotype is introduced. Finally, the semantics of the resulting diagram is fully dependent on the particular interpretation of the extensions that were introduced as modeling constructs. In terms of software development methods, other well-known problems inherited from the original UML proposal carry over to its use in Web environments:
– In its attempt to put together different proposals, UML can be seen as a proliferation of unfamiliar symbols.
– As a consequence, UML can in fact be considered a complex and sometimes cryptic programming language, which must be mastered in order to build diagrams of a possible future system. In terms of code generation, this is a big constraint, because the semantics of the input model cannot be established in general, due to the use of the free-extension mechanisms.
– As commented before, UML lacks direction towards a clear, seamless development process to guide the usually non-trivial task of obtaining the final software product from the diagrams.
– There is an important problem of traceability: how does one maintain consistency between the program and the diagrams? Seamlessness (providing a single, continuous process) and reversibility are basic topics sidestepped by UML.
– Finally, another important drawback is the absence of a precise object-oriented model in its foundation.
In summary, how to profit from UML in the context of Web Engineering is an open issue, whose pros and cons have been outlined above. Although it is accepted that the use of UML as notation is sensible and positive, it is not so clear how to embed it properly within a software production process. As we will see later, a successful procedure for using UML in Web Application development would have to follow these steps:
1. precisely define the set of conceptual modeling primitives,
2. determine their intended semantics,
3. finally, use the standard and special-purpose representation facilities provided by the UML graphical notation.
An important last topic of interest is the use of UML as a metamodeling notation to provide guidelines for creating the conceptual design of any web-based application. In [23] such a metamodeling approach is introduced to create the conceptual design of a web-based educational application. The idea is to facilitate the conceptual design of web-based educational applications by meta-modelling their underlying content and navigational structure; UML serves as the syntax, notation, and semantics for this metamodel. Taking into account the problems associated with the use of UML, the important fact is that this idea can be applied to any particular domain, opening an important research area where metamodeling techniques and web engineering fit well. A different approach in this respect is discussed in [5], where Semantic Web notations such as RDF, RDF Schema, and DAML+OIL are used to characterize Web applications.
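As a toy illustration of what pinning down primitives and their semantics means before choosing any notation (our sketch, not the metamodel of [23]), the recurring node-and-link view of navigational structure could be captured as follows:

    import java.util.ArrayList;
    import java.util.List;

    // Primitive: a uniquely identifiable piece of navigable content.
    class Node {
        final String id;
        final List outgoing = new ArrayList(); // Links leaving this node
        Node(String id) { this.id = id; }
    }

    // Primitive: a directed navigation step between two nodes.
    class Link {
        final Node source, target;
        Link(Node source, Node target) {
            this.source = source;
            this.target = target;
            source.outgoing.add(this);
        }
    }

    // Intended semantics: navigation is legal only along declared links.
    class NavigationSpace {
        boolean canNavigate(Node from, Node to) {
            for (int i = 0; i < from.outgoing.size(); i++)
                if (((Link) from.outgoing.get(i)).target == to) return true;
            return false;
        }
    }

Only after such primitives and their semantics are fixed does a graphical rendering, with or without UML stereotypes, become unambiguous.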
3 Requirements Engineering for Web Applications
Quality is often related to conformance to user requirements. The precise elicitation and representation of these user requirements is an important issue in conventional software engineering, as is the introduction of methods that define clearly how to go from requirements to a conceptual schema and then to a final software product with a sound methodological basis. In the context of a correct Web-Oriented Software Technology, this is a very important issue, since we must deal with navigation requirements in addition to the conventional functional and non-functional requirements. The distinction between navigational and functional requirements is a very interesting, open discussion; it is even controversial whether they must be considered separately. Some proposals distinguish them explicitly [2], under the assumption that navigation is independent from domain concepts. The concept of a Navigation Semantic Unit is introduced as a set of information and navigation structures that collaborate in the fulfillment of a subset of related user requirements. Without going to such extremes, other proposals use some diagrammatic modelling technique (normally UML-compliant, with the needed extensions, as commented in the previous section) with the ultimate goal of supporting the design process of Web applications at incremental levels of detail, starting with requirements gathering. As an example, [18] presents a diagrammatic technique called User Interaction Diagrams (UIDs) to represent the information exchange between the user and the application. These UIDs form the basis for the conceptual modeling and the navigation design. To support the design process, heuristic guidelines are provided that allow the derivation of the conceptual model and the navigation
model of OOHDM [24] from the UIDs. In the context of the previous discussion, it is interesting to notice that there is a mapping from UIDs to Navigational Context Schemas (the main structuring primitive for the navigational space within OOHDM), together with a mapping of interactions that involve no navigation. There is no general agreement about which kind of diagram is most appropriate for gathering user requirements for web applications: Use Cases, Activity Diagrams, or other particular extensions of UML diagrams are used, depending on the proposal. The specification of functionality is often an added problem. Normally, functionality is defined using other diagrams (mainly UML Interaction Diagrams), but there appears to be a non-empty intersection between the specification of the application functionality and the specification of the application navigation that needs to be explored. Finally, to close this section, let us consider the challenges associated with emergent, promising application areas such as e-commerce and m-commerce, which require ubiquitous access to web applications, providing time-aware, location-aware, device-aware, and personalized services. This is also being taken into account when representing requirements, in order to convert them into a set of customization rules, as done in [6]. Once more, from a web engineering point of view, appropriate modeling techniques are needed that are able to address the ubiquitous nature of such applications. Kappel [6] proposes a generic customization model allowing web application services to be adapted to their context, as required by ubiquity. Accordingly, the customization model comprises a context model and a rule model, providing the basic building blocks to specify particular customizations. This is done in the context of standardization efforts that are being undertaken to collect requirements and provide representation techniques with respect to certain ubiquity issues (particularly focusing on device independence and personalization).
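To make the rule-model idea tangible, a customization rule can be imagined as a condition-action pair evaluated over the context model; the sketch below is our own illustration (all names are hypothetical), not the model proposed in [6]:

    // Hypothetical context model: a snapshot of device, location, etc.
    class Context {
        String device;   // e.g. "wap-phone" or "desktop-browser"
        String location; // e.g. "on-site" or "remote"
    }

    class Page {
        void usePresentation(String style) { /* switch the layout */ }
    }

    // A customization rule adapts the service when its condition holds.
    interface CustomizationRule {
        boolean condition(Context c);
        void action(Page p);
    }

    class SmallScreenRule implements CustomizationRule {
        public boolean condition(Context c) {
            return "wap-phone".equals(c.device);
        }
        public void action(Page p) {
            p.usePresentation("text-only"); // device-aware presentation
        }
    }

A customization engine would simply iterate over such rules for the current context and apply the actions of those whose conditions hold.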
4 Conceptual Modeling of Web Applications
One of the main current issues in Web Engineering environments is to provide software development methods that properly capture the essence of web applications, employing software engineering models, preferably ones that are well known to the Web (and Software) Engineering community. Conceptual Modeling is a basic research area in this context. The key point at this level is to identify and make precise the abstract conceptual primitives required to specify Web Applications based on the problem space (conceptual point of view). When facing the problem of modeling a Web Application, data structure and system functionality must be specified, as has historically been done. But some new features specific to Web Applications arise, especially navigation and presentation specification. As stated in [19], Web Applications, as hypermedia applications, should provide easy navigational access to their related resources, preventing users from getting lost and providing consistent navigation primitives. In order to achieve this, different methods are being proposed, all of them having in
common the support of a set of primitives intended to allow the specification of navigation. If some concrete, common characteristics are examined, we can conclude that navigation is typically seen as the problem of specifying a navigational space composed of some kind of navigational nodes and the corresponding navigational links. The particular conceptual primitives provided, and the way in which the navigational model is embedded in the whole software production process, vary from method to method. Even so, this view of the navigational space of a Web Application is present in some way in the majority of the proposals. The previous statement can be checked against some known web-oriented development methods. For instance, OOHDM [24], [5] defines the following navigation primitives:
– navigation objects, which are views of conceptual objects;
– navigation contexts, which are sets of navigation objects defined according to rules determined by the application designer. Navigation contexts may be further specified as groups of contexts, since it is sometimes possible to parameterize their defining property.
The OOHDM Navigation Model consists of two schemas: the Navigational Class schema (which defines all navigable objects as views over the application domain) and the Navigational Context schema (which defines the main structuring primitive for the navigational space: navigational contexts, and the links that connect them). According to WebML [3], Web Applications have two main orthogonal dimensions: the data model and the hypertext model. The data model enables describing the schema of data resources according to the Entity-Relationship Model. The hypertext model enables describing how data resources are assembled into information units and pages, and how such units and pages are interconnected to constitute a hypertext. The WebML hypertext model basically includes a Composition Model and a Navigation Model. The Composition Model concerns the definition of pages and their internal organization in terms of elementary interconnected units. The Navigation Model in this case describes links between pages and context units, provided to facilitate information location and browsing. In the case of the OOH-Method [8], the navigation design modeling phase is accomplished by using a set of Navigation Access Diagrams (NADs) that are partially based on the class diagram representing the domain structure of the system. The main primitives provided are navigational classes (enriched domain classes whose attribute and method visibility has been restricted according to the user access permissions and navigation requirements) and navigational links (which define the navigation paths the user can follow through the system, and which are labelled with additional information used to construct the user navigation model). Hierarchical structures can be defined on navigational classes as collections. A different extension of the OO-Method approach is OOWS [15]. It introduces a navigational model over the preexisting object, dynamic, and functional
model, to deal with the full expressiveness required to generate a Web Application. This navigation model uses as building blocks a navigational map for each type of user. A navigational map represents the global view of an application for a given audience. It is represented as a directed graph where nodes are navigational nodes and arcs are navigational links.
UWE (UML-based Web Engineering) [13] defines an iterative UML-based process for Web Applications that includes a navigation task involving two models: a navigation space model and a navigation structure model. In this case, the underlying idea is that navigation modeling of Web Applications comprises the construction of two different models: one to specify which objects can be visited by navigation through the application (analysis level), and another to define how these objects are reached (design level). The main modeling elements used in the navigation space model are the stereotyped class "navigation class" and the stereotyped association "direct navigability". Once more, we find an alternative representation of the idea of specifying a navigation space through the use of one primitive for nodes and another primitive for the links between them.
A logical consequence of this diversity of methods and primitives is the need to compare different models. The standard approach to achieve this is to use metamodeling techniques to identify and represent in a precise way the conceptual primitives provided by a method. The next step is to compare each method's description within the common representation provided by the metamodel. This is expected to be an interesting area of research for the near future.
We have repeatedly stated that the basic primitives for navigation specification are the nodes and links of some navigational space. It seems sensible to say that a node is some uniquely identifiable piece of information [21]. But the particular properties of links are also a subject of research. Some interesting approaches try to exploit the possibility of specifying different types of links during the process of Web Application Conceptual Modeling, to enhance the usability of the resulting Web Sites. The underlying question is: why not model link types from a conceptual modeling point of view? The final goal is to exploit this enriched link expressiveness to obtain more powerful modeling methods. Casteleyn and De Troyer [21] propose a useful taxonomy from a web design point of view, where four categories of links are defined, with a description of how each of them can be exploited to enhance the usability of a Web Site.
There are also some attempts to address the specification of navigation through the use of formal methods. In particular, Hadez [4] is a formal specification language for the design of data-intensive Web Applications. It divides the specification of a Web Application into three main parts: its conceptual schema, which describes the domain-specific data and its relationships; its structural schema, which describes how this data is combined and gathered into more complex entities called composites; and its perspective schema, which uses abstract design perspectives to indicate how these composites are mapped to hyperpages, and how the user interacts with them (navigation specification). A VDM style is used to specify these schemas, providing a formal framework in which properties of a specification can be formulated and checked. As happened in the
past, formal specifications are perceived as being too difficult for a typical Web designer, which may be considered a significant drawback of this kind of approach. In spite of this, successful practical results are reported, as can be seen in [4], where this approach is used to formally specify the Web Museum of the National Gallery of Art in Washington.
Another important open issue is how to link the conventional data and behavior specification of Information Systems with the new expressiveness attached to navigation and presentation; here as well, different strategies are followed. We can classify these strategies into two groups:
1. those coming from the conventional Software Development community, where data and behavior have historically been specified in depth through the use of widespread software engineering models and methods (such as Structured Analysis and Design in the eighties, and object-oriented modeling and development in the nineties). The specification of data and structure can be considered powerful, but the absence of a navigational view is the main drawback when facing the development of Web Applications. The answer to this problem, for these methods, has been the extension of their preexisting modeling approach, introducing a Navigation or a Presentation Model that must fit within the whole Software Production Process. An example of this category is OOWS [14], where the conventional OO-Method modeling approach is extended by adding a Navigational Model. This model is created on top of a static UML Class Diagram, and system functionality and navigation specification are clearly separated.
2. those coming from the hypermedia world, where the situation is the opposite. Navigation features have historically been taken into account in these methods, and some kind of navigational modeling is always present. Specification of data structure and system functionality has been added when Web Applications were addressed, because navigation alone was not enough. The answer given by these methods is based on the provision of some kind of class modeling (if the object-oriented model is selected) and some kind of system functionality specification, which can be based on UML Interaction Diagrams (as we commented in the section related to UML), or any other functional specification strategy.
In either case, the conclusion is that to provide a quality Software Development Process for Web Applications, both system statics/dynamics and navigation/presentation features must be properly specified, regardless of the origin of the method.
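To make the node-and-link view of the navigational space discussed above more tangible, the following Java sketch shows how such primitives might look as plain data structures. It is an illustration only, not a primitive set taken from any of the cited methods; all class and method names (NavigationalNode, NavigationalSpace, outgoing, and so on) are our own invention.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A navigational node: some uniquely identifiable piece of information,
// typically a view over one or more conceptual (domain) objects.
class NavigationalNode {
    final String id;
    final Object domainView;

    NavigationalNode(String id, Object domainView) {
        this.id = id;
        this.domainView = domainView;
    }
}

// A typed link between two nodes; the type slot echoes the link
// taxonomies discussed above.
class NavigationalLink {
    final NavigationalNode source, target;
    final String type;

    NavigationalLink(NavigationalNode source, NavigationalNode target, String type) {
        this.source = source;
        this.target = target;
        this.type = type;
    }
}

// The navigational space as a directed graph: nodes are navigational
// nodes, arcs are navigational links.
class NavigationalSpace {
    private final Map<String, NavigationalNode> nodes = new HashMap<>();
    private final List<NavigationalLink> links = new ArrayList<>();

    void addNode(NavigationalNode n) {
        nodes.put(n.id, n);
    }

    void link(String fromId, String toId, String type) {
        links.add(new NavigationalLink(nodes.get(fromId), nodes.get(toId), type));
    }

    // The navigation paths a user can follow from a given node.
    List<NavigationalLink> outgoing(String nodeId) {
        List<NavigationalLink> result = new ArrayList<>();
        for (NavigationalLink l : links)
            if (l.source.id.equals(nodeId))
                result.add(l);
        return result;
    }
}

Each of the surveyed methods refines such a skeleton differently: OOHDM groups nodes into contexts, WebML ties units and pages to an underlying Entity-Relationship schema, and OOWS builds one such graph per type of user.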
Finally, we introduce some particular considerations related to presentation features. Although it can be said that presentation has always been present and important in the development of software products, it is widely accepted that in the context of Web Applications the quality of the presentation has become a first-level requirement. Sometimes it is even more valued than the correct functionality of the Web Application, as commercial Web Sites show, where, at a minimum, both functionality and presentation are basic requirements.
The direct consequence of this is that the specification of presentation features should be introduced in the conceptual modeling process. There is no agreement on how to do it, although different proposals are being presented, linking the HCI community to the Web Engineering world. The use of patterns seems to be a promising strategy; it consists of defining interface patterns that can be attached in some way to the functional or navigational models of a Web Application. An interesting proposal of this kind is [11], where a user interface development environment is introduced within an object-oriented software development process. Its main objective is to incorporate usability criteria in the modeling process in order to achieve highly usable user interfaces. An iterative design process permits the incorporation of consecutive improvements proposed by users. In these proposals, open questions remain: how to link this usability specification with a navigational model, and how to provide and assure traceability among the different models involved. Also, in UWE [13] a particular form of class diagram is used to define a presentation model, describing where and how navigation objects and access primitives will be presented to the user, using a stereotyped "Presentation Class" based on a class in a conventional UML Class Diagram. An interesting practical use of interface patterns, properly embedded in a complete Web Application development process, is presented in [17], where a set of concrete interface patterns are defined and embedded into the navigational and functional process of the OO-Method proposal.
Of course, other problems could also be analyzed within this section, for instance those related to the definition of disciplined approaches to managing the evolution of Web Information Systems. This matters because such systems are complex, have long lifetimes, and their representation and evolution history become quite intricate. Again, this is a problem analogous to the one the software engineering community faces for conventional software systems, but enriched with all the new primitives specific to Web Applications. Much work is still to be done in this area, but initial ideas are presented in [25], where a model for the development and evolution of Web Information Systems configurations is described. The model couples the development and evolution process into one framework, which seems promising for addressing configuration management problems.
5 Quality for Web Sites and Applications
Assuring quality in processes and products has become a cornerstone for any current organization, and Web Engineering is no exception. Assuring the quality of Web Applications is an essential task that requires a systematic approach able to deal with complexity, and that allows eliciting, assessing, and maintaining Web Applications with the desired quality and within budgetary constraints. To make things more complicated, it is widely accepted that current Web Applications are very complex (different issues supporting this argument have been presented in the previous sections) and are also highly sophisticated
software products, where the application quality perceived by users is one of the central factors determining success or failure. The current situation is that too often the quality of Web Sites and applications is assessed in an ad-hoc way, based primarily on common sense, intuition, and the expertise of the developers. Studies about the quality of products and processes for the Web are very recent, and there are no widely accepted evaluation methods and techniques for different assessment purposes. Although some techniques currently exist for evaluating certain quality characteristics such as usability, accessibility, etc., and testing techniques exist for measuring stress, performance, and faults, in practice they provide only partial solutions because they focus separately either on nonfunctional requirements or on functional requirements.
The following main topics can be identified at this level:
1. Need for methods for evaluating the quality of Web Applications, with the definition of the corresponding metrics;
2. Definition of repositories of metrics and cataloguing tools to ease querying, retrieving, and reusing web metrics;
3. Web functional size measurement methods, to be able to estimate the functional size of any given Web Application, possibly even from its conceptual schema;
4. Verification of the internal consistency of Web Applications, by assuring the correct use of widely used methodological tools such as design patterns (by calculating, for instance, metrics about the coherent use of design patterns throughout the application schema).
Interesting attempts to provide solutions to these topics can already be found. For instance, WebQEM [10], a Web Quality Evaluation Method, is useful for systematically assessing the characteristics, subcharacteristics, and attributes that influence product quality. The implementation of the evaluation yields global, partial, and elementary indicators that can help different stakeholders in understanding and improving the corresponding product. The steps followed are conceptually simple but useful:
1. Quality Requirements Definition and Specification
2. Elementary Evaluation (both Design and Implementation stages)
3. Global Evaluation (both Design and Implementation stages)
4. Conclusion of the Evaluation (regarding recommendations)
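As an illustration of steps 2 and 3, the following sketch aggregates elementary indicators into a global one by a weighted average. This is only a plausible reading of the idea, not WebQEM's actual aggregation model (which is richer); the class and method names are invented for the example.

import java.util.LinkedHashMap;
import java.util.Map;

// Aggregates elementary indicators (one per measured attribute, each
// scaled to the range 0..100) into a single global indicator.
class QualityEvaluation {
    private final Map<String, double[]> elementary = new LinkedHashMap<>();

    void addElementary(String attribute, double score, double weight) {
        elementary.put(attribute, new double[] { score, weight });
    }

    // A plain weighted average; published evaluation methods use richer
    // aggregation operators, but the shape of the computation is similar.
    double globalIndicator() {
        double weightedSum = 0, totalWeight = 0;
        for (double[] sw : elementary.values()) {
            weightedSum += sw[0] * sw[1];
            totalWeight += sw[1];
        }
        return totalWeight == 0 ? 0 : weightedSum / totalWeight;
    }
}

A stakeholder could then compare the global score of a site before and after a redesign, or inspect the stored elementary scores to locate weak attributes.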
Logically, the definition and specification of quality requirements are essential activities in the evaluation process.
An extension of WebQEM including the use of Function Points for estimating the functional size of Web Applications is proposed in WebFP QEM (Web Function Points and Quality Evaluation Method) [20]. It is a methodology that makes an engineering contribution by proposing a systematic and flexible approach aimed at analyzing the informational and navigational architecture and at evaluating the quality of complex Web Applications in the operative phase. It allows capturing business and application goals for different audiences, specifying
informational and navigational models (the functional requirements), specifying quality models and assessment criteria (the nonfunctional requirements), and analyzing outcomes in order to give recommendations for improvements, in addition to drawing up cost prediction reports. The claim is that an operative site should be assessed not only from the quality perspective of the nonfunctional requirements, but also from the functional viewpoint, such as its informational and navigational architecture. Another basic contribution is the possibility of extending well-known and widely used techniques for estimating functional size (such as, for instance, Function Points-based methods) so that they can properly be used in Web Engineering contexts.
If it is accepted that, from the quality perspective, a clear definition and management of functional and non-functional requirements are needed to specify, measure, control, and potentially improve the produced Web Applications, then the support of automated processes and tools is also necessary in order to assist evaluators and other stakeholders in quality assurance activities. In this context, an emerging area of work is the design of catalogs of metrics. This constitutes an answer to the second topic introduced above. A catalog of metrics basically gives tools, evaluators, and other stakeholders a service and consultation mechanism, based on a sound specification of the entity type (resource, process, product, etc.), the attribute definition and motivation, the metric formula, and criteria and application procedures, among other template items. A concrete contribution in this area is presented in [9].
Besides enhancing the quality of the development process, in current Web-Oriented Software Technology the use of design patterns is a way of improving the quality of the final applications. The consistent use of patterns -as argued for other software engineering domains- makes it easy for users to form reliable expectations about the overall application structure, access and exploration of information, and activation of operations. In fact, consistent patterns allow users to apply past experience to solve current problems or, even better, allow them to predict how unfamiliar sections of the application will be organized. This is why, in terms of Web Application quality measurement, it is also necessary to define some pattern-based metrics to verify whether design patterns were applied consistently. A promising contribution in this area is presented in [16], where the Web Quality Analyzer (WQA) is introduced. The WQA is an XSL-based framework which is able to automatically analyze the XML specification of Web Applications designed with WebML, with the aim of identifying the occurrence of design patterns and calculating metrics revealing whether they are used consistently throughout the application. Metrics are defined by XSL files, which are applied over the XML specification of the application conceptual schema to compute the metric values. What is measured is whether a given pattern is being used correctly. It is still an open question how to assure that the same pattern is used to solve the same problem in a given context, which has serious methodological implications for how to go from the problem space to the solution space in some automated way.
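The WQA itself expresses such metrics as XSL transformations over the application's XML schema; purely to illustrate the underlying idea, the sketch below computes a crude pattern-coverage figure with Java and XPath instead. The element names (page, index-plus-detail) are hypothetical and do not come from WebML or the WQA.

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Reads an XML application schema and reports in how many pages a given
// pattern element occurs; uniform coverage hints at consistent use.
public class PatternCoverage {
    public static void main(String[] args) throws Exception {
        Document schema = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(args[0]);
        XPath xpath = XPathFactory.newInstance().newXPath();
        NodeList pages = (NodeList) xpath.evaluate("//page", schema,
                XPathConstants.NODESET);
        NodeList withPattern = (NodeList) xpath.evaluate(
                "//page[.//index-plus-detail]", schema, XPathConstants.NODESET);
        double coverage = pages.getLength() == 0 ? 0.0
                : (double) withPattern.getLength() / pages.getLength();
        System.out.printf("pattern appears in %.0f%% of pages%n", coverage * 100);
    }
}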
In summary, quality -in all its different facets- is a very important issue in Web Engineering contexts, and a lot of research work (both theoretical and experimental) is required to provide solutions to the main relevant problems, some of which have been introduced in this section.
6 Conclusions
In the web application era that we have entered, many topics demand the attention of researchers in order to provide sound and correct solutions to specific problems directly related to the so-called Web Engineering discipline. This report summarizes the main conclusions of the successful second edition of the International Workshop on Web-Oriented Software Technology (IWWOST02), held in Málaga, Spain, in conjunction with the 16th European Conference on Object-Oriented Programming, with 30 participants. It is a continuation of the successful first edition, held in Valencia, Spain, in June 2000, which gathered 25 researchers representing some of the most relevant universities and research centres with experience in the Web Engineering area.
IWWOST aims to be an open forum for all people involved in some way in initiatives related to hypermedia and conventional object-oriented software production methods for web-based applications. The final objective is to bring them together, with the intention of defining a precise process for building Web Applications with the required quality. This should be done starting from user requirements elicitation and representation for such a web environment, up to the final software product built upon the modern OO (or other) web technologies provided by industry. With this in mind, this report includes concrete results and references for the topics considered most relevant, structured around four main issues: UML-based Web Engineering, Requirements Specification in Web Engineering environments, Conceptual Modeling of Web Applications, and Quality in Web Products and Processes. Each issue has been analyzed in some detail, giving as much useful information as possible, in order to provide the right paths for obtaining further details.
IWWOST will continue to maintain its original spirit, and we hope to provide in its next edition more results and better solutions for the selected topics, as well as for other new problems identified as being of interest for Web Engineering.
References
[1] Ginige A. and Murugesan S. Web Engineering: an Introduction. IEEE Multimedia Special Issue on Web Engineering, 4:14–18, 2001.
[2] Cachero C., Koch N., Gómez J., and Pastor O. Conceptual Navigation Analysis: a Device and Platform Independent Navigation Specification. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 21–32, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[3] Ceri S., Fraternali P., and Bongio A. Web Modeling Language (WebML): A Modeling Language for Designing Web Sites. In Proceedings of the 9th International World Wide Web Conference (WWW2000), and Computer Networks, 33(1-6), pages 137–157, 2000.
[4] German D.M. Using Hadez to Formally Specify the Web Museum of the National Gallery of Art. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 62–78, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[5] Lima F. and Schwabe D. Exploring Semantic Web Modeling Approaches for Web Application Design. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 120–133, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[6] Kappel G., Retschitzegger W., Kimmerstorfer E., Pröll B., Schwinger W., and Hofer T. Towards a Generic Customisation Model for Ubiquitous Web Applications. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 79–104, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[7] Conallen J. Modeling Web Application Architectures with UML. CACM, 42(10), 1999.
[8] Gómez J., Cachero C., and Pastor O. Conceptual Modeling of Device-Independent Web Applications. IEEE Multimedia Special Issue on Web Engineering (Part II), 8(2):26–39, April-June 2001. IEEE Computer Society.
[9] Olsina L., Lafuente G., and Pastor O. Designing a Catalogue for Metrics. In Proceedings of the Second Int. Conference on Web Engineering (ICWE2002) in the framework of 31 JAIIO, pages 108–122, Santa Fe, Argentina, September 2002. ISSN 1666-6526.
[10] Olsina L. and Rossi G. A Quantitative Method for Quality Evaluation of Web Sites and Applications. IEEE Multimedia, 9(4), 2002.
[11] Lozano M., González P., Montero F., Molina J. P., and Ramos I. Integrating Usability within the User Interface Development Process of Web Applications. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[12] Koch N. Software Engineering for Adaptive Hypermedia Systems: Reference Model, Modeling Techniques and Development Process, 2001.
[13] Koch N. and Kraus A. The Expressive Power of UML-based Web Engineering. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 105–120, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[14] Pastor O., Abrahao S., and Fons J. Building E-Commerce Applications from Object-Oriented Conceptual Models. ACM SIGecom Exchanges, 2(2):24–32, June 2001. ACM Press.
[15] Pastor O., Abrahao S., and Fons J. Object-Oriented Approach to Automate Web Applications Development. In 2nd International Conference on Electronic Commerce and Web Technologies (EC-Web'01), pages 16–28, Munich, Germany, September 2001. Springer-Verlag LNCS 2115.
[16] Fraternali P., Matera M., and Maurino A. WQA: an XSL Framework for Analyzing the Quality of Web Applications. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 46–61, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[17] Molina P., Pastor O., Martí S., Fons J., and Insfrán E. Specifying Conceptual Interface Patterns in an Object-Oriented Method with Automatic Code Generation. In 2nd International Workshop on User Interfaces to Data Intensive Systems (UIDIS2001), pages 72–79, Zurich, Switzerland, 2001.
[18] Vilain P. and Schwabe D. Improving the Web Application Design Process with UIDs. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 176–192, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[19] Rossi G., Schwabe D., and Lyardet F. Web Application Models are more than Conceptual Models. In Proceedings of the ER Conference, pages 239–252, Paris, France, November 1999. Springer-Verlag.
[20] Abrahao S., Olsina L., and Pastor O. A Methodology for Evaluating Quality and Functional Size of Operative Web Applications. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 1–20, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[21] Casteleyn S. and De Troyer O. Exploiting Link Types during the Web Site Design Process to Enhance Usability of Web Sites. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 33–45, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[22] Murugesan S., Deshpande Y., Hansen S., and Ginige A. Web Engineering: A New Discipline for Development of Web-Based Systems. In Proceedings of the First ICSE Workshop on Web Engineering, ICSE, May 1999.
[23] Retalis S., Papasalouros A., and Skordalakis M. Towards a Generic Conceptual Design Metamodel for Web-based Educational Applications. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 163–175, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
[24] Schwabe D. and Rossi G. An Object-Oriented Approach to Web Application Design. Theory and Practice of Object Systems (TAPOS), pages 207–225, October 1998.
[25] Psaromiligkos Y. and Retalis S. Configuration Management for Web-based Instructional Systems. In CYTED, editor, Proceedings of the Second Int. Workshop on Web-Oriented Software Technology (IWWOST'02), pages 148–162, Malaga, Spain, June 2002. ISBN: 84-931538-9-3.
Component-Oriented Programming
Jan Bosch1, Clemens Szyperski2, and Wolfgang Weck3
1 University of Groningen, Netherlands, [email protected] - http://www.cs.rug.nl/~bosch/
2 Microsoft, USA, [email protected] - http://research.microsoft.com/~cszypers/
3 Oberon microsystems, Switzerland, [email protected]
Abstract. This report covers the seventh Workshop on Component-Oriented Programming (WCOP). WCOP has been affiliated with ECOOP since its inception in 1996. The report summarizes the contributions made by authors of accepted position papers as well as those made by all attendees of the workshop sessions.
1. Introduction
WCOP 2002, held in conjunction with ECOOP 2002 in Malaga, Spain, was the seventh workshop in the successful series of workshops on component-oriented programming. The previous workshops were held in conjunction with earlier ECOOP conferences in Linz, Austria; Jyväskylä, Finland; Brussels, Belgium; Lisbon, Portugal; Sophia Antipolis, France; and Budapest, Hungary. WCOP'96 had focused on the principal idea of software components and worked towards definitions of terms. In particular, a high-level definition of what a software component is was formulated. WCOP'97 concentrated on compositional aspects, architecture and gluing, substitutability, interface evolution, and non-functional requirements. WCOP'98 had a closer look at issues arising in industrial practice and developed a major focus on the issues of adaptation. WCOP'99 moved on to address issues of structured software architecture and component frameworks, especially in the context of large systems. WCOP 2000 focused on component composition, validation and refinement, and the use of component technology in the software industry. WCOP 2001 addressed issues associated with containers, dynamic reconfiguration, conformance, and quality attributes. WCOP 2002 had been announced as follows:
WCOP 2002 seeks position papers on the important field of component-oriented programming (COP). WCOP 2002 is the seventh event in a series of highly successful workshops, which took place in conjunction with every ECOOP since 1996. COP has been described as the natural extension of object-oriented programming to the realm of independently extensible systems. Several
important approaches have emerged over recent years, including component technology standards, such as CORBA/CCM, COM/COM+, JavaBeans/EJB, and most recently .NET, but also the increasing appreciation of software architecture for component-based systems, and the consequent effects on organizational processes and structures as well as the software development business as a whole. After WCOP'96 focused on the fundamental terminology of COP, the subsequent workshops expanded into the many related facets of component software. WCOP 2002 has an explicit focus on dynamic reconfiguration of component systems, that is, the overlap between COP and dynamic architectures. Also, submissions reporting on experience with component-oriented software systems in practice are strongly encouraged, where the emphasis is on interesting lessons learned, whether the actual project was a success or a failure.
COP aims at producing software components for a component market and for late composition. Composers are third parties, possibly the end users, who are not able or willing to change components. This requires standards to allow independently created components to interoperate, and specifications that put the composer into the position to decide what can be composed under which conditions. On these grounds, WCOP'96 led to the following definition:
A component is a unit of composition with contractually specified interfaces and explicit context dependencies only. Components can be deployed independently and are subject to composition by third parties.
Often discussed in the context of COP are quality attributes (a.k.a. system qualities). A key problem that results from the dual nature of components between technology and markets is the set of non-technical aspects of components, including marketing, distribution, selection, licensing, and so on. While it is already hard to establish functional properties under free composition of components, non-functional and non-technical aspects tend to emerge from composition and are thus even harder to control. In the context of specific architectures, what can be said about the quality attributes of systems composed according to the architecture's constraints?
As in previous years, we could identify a trend away from the specifics of individual components and towards the issues associated with composition and integration of components in systems. Although we had one session on foundations of components, the discussion was primarily concerned with the runtime characteristics of component-based systems. Thirteen papers were accepted for presentation at the workshop and publication in the workshop proceedings. About 40 participants from around the world participated in the workshop. The workshop was organized into four morning sessions with presentations, one afternoon breakout session with five focus groups, and one final afternoon session gathering reports from the breakout session and discussing future direction.
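The WCOP'96 definition quoted above (contractually specified interfaces, explicit context dependencies only) can be illustrated with a minimal Java sketch. The example and all its names (SpellChecker, Dictionary, and so on) are invented for illustration and do not come from any workshop contribution.

// Provided interface: the contract the component offers to its clients.
interface SpellChecker {
    boolean isCorrect(String word);
}

// Required interfaces: the component's explicit context dependencies.
interface Dictionary {
    boolean contains(String word);
}

interface Log {
    void info(String message);
}

// The component depends on nothing beyond its declared context, so a
// third party can compose it with any conforming Dictionary and Log.
class SimpleSpellChecker implements SpellChecker {
    private final Dictionary dictionary;
    private final Log log;

    SimpleSpellChecker(Dictionary dictionary, Log log) {
        this.dictionary = dictionary;
        this.log = log;
    }

    public boolean isCorrect(String word) {
        boolean found = dictionary.contains(word.toLowerCase());
        log.info("checked '" + word + "': " + found);
        return found;
    }
}

Because SimpleSpellChecker declares its complete context in its constructor, it can be deployed independently and composed by a third party without access to its source.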
2. Presentations
This section briefly summarizes the contributions of the thirteen presenters, as grouped into four sessions, i.e. Foundations, Components and Generators, Monitoring, EJB and COTS, and, finally, Dynamic Reconfiguration.
2.1 Foundations
The first session consisted of three papers. The first paper by Andreas Gal addressed the performance of component-based systems. Due to independent deployment, traditional compiler optimization techniques cannot cross component borders. The proposed solution is dynamic compilation that, at run-time, performs inter-component optimization. The solution has been implemented in Lava (combining Lagoona and Java) and exploits profile-guided optimization and dynamic compilation.
The second paper was presented by Yahya Mirza and discussed a compositional collections component framework. The basic claim of the paper is that primitive composition patterns need to be identified and supported by the kernel or virtual machine.
The third paper in the foundations session was presented by Peng Liang and addressed rate-monotonic analysis for scheduling field device components. Field devices have strict timing requirements, and a prerequisite for using software components in this domain is the ability to accurately predict timing properties of the composed system.
2.2 Components and Generators
The second session also contained three papers. The first paper, presented by Pedro Jose Clemente Martin, discusses the use of UML for combining component-based and aspect-oriented software development. The approach separates component dependencies into intrinsic and non-intrinsic. In the case of intrinsic dependencies the component only depends on the framework or context. A non-intrinsic dependency is really an aspectual (cross-cutting) dependency. The approach uses tag values in UML to express aspectual dependencies.
The second paper, concerning (web) services, was presented by Oliver Nano. His paper discusses the integration and composition of web services in component platforms such as EJB and CCM. The proposed approach is to define an algebraic meta model and to define operations on meta-objects that achieve service integration. Services are specified operationally over abstract operations.
Massimo Tivoli discusses the role of software architecture in the assembly of components. His paper starts from the premise that current COTS component composition technologies cannot solve the predictable assembly problem. For many systems, important properties include safety and liveness, and the software architecture, providing a connection skeleton with connections captured by connectors, allows for predictability. The presented approach, given a set of components C and desired properties P, automatically composes C such that P is satisfied.
2.3 Monitoring, EJB, and COTS
The third session was concerned with monitoring and dynamic adaptability of component-based systems and consisted of three papers. The first paper, presented by Adrian Mos, introduced the COMPAS framework. COMPAS provides tool support for monitoring component-based systems and for modeling and prediction of performance. The approach starts from the observation that poor performance is usually caused by poor design and not by poor code. The COMPAS framework provides non-intrusive monitoring, stochastic model generation, UML-MDA model representation, workload-based performance prediction, and tight integration between monitoring and modeling.
The second paper addressed container services for high-confidence software and was presented by William Thomas. High-confidence software is defined as being able to withstand attacks and hazards without causing accidents and unacceptable losses. High-confidence software is difficult to develop and validate and, in addition, the validation is often brittle. Although both design-time and run-time techniques exist, there is a clear shift towards run-time techniques. The author presented an approach in which containers, potentially multiple ones, are used to capture and verify properties. Mediator containers can be used to enforce properties during invocations, whereas monitor containers can enforce global properties.
Finally, the third paper by Zahi Jarir discussed the dynamic adaptability of services in the Enterprise Java Beans component model. The main motivation is that, especially in the context of web services, dynamic adaptation of behaviour is required, although the current EJB model does not support this. The proposed approach is to allow EJB applications to be aware of and adapt to variations in execution context through the use of an infrastructure for adaptable middleware and rule-based adaptation policies. This approach has been implemented in JonAS, an adaptable EJB infrastructure implementation.
2.4 Dynamic Reconfiguration
The last session consisted of four papers. The first paper was presented by Chris Salzmann and discussed the notion of architectural invariants. The approach is to separate the logical and technical architecture. The logical architecture focuses on component types and functional dependencies, whereas the technical architecture is concerned with the actual components that implement the logical architecture. This is supported in the Service Architecture Definition Language (SADL), an XML-based definition language including support for services, components, bindings, and sandboxes.
The second paper discussed the DiPS/CuPS component framework that allows for 'hot-swappable' system software, in particular for flexible protocol stacks. The DiPS part of the approach contains a model of plug-compatible components, each responsible for exactly one protocol function, organized using the pipe-and-filter architectural style. Plug compatibility is achieved by two design decisions. First, component functionality is kept fully separate and second, the
framework is used to establish connections. The CuPS part supports, as a coordination layer, non-anticipated adaptations of a running protocol stack.
Eric Bruneton presented an approach to recursive and dynamic software composition with sharing. The aims of the author are to have composite components where individual components can still be shared, while allowing for dynamic reconfiguration and the satisfaction of non-functional properties. The Fractal component model is presented as a solution. It provides a flexible API to introspect and reconfigure Fractal components, as well as one or more controller classes for each interface of the API. Fractal components are created by composing controller classes.
The final paper presented X-Adapt, an architecture for dynamic systems, and was presented by Finbar McGurren and Damien Conroy. X-Adapt is a reflection-based architecture; it can place proxies between server objects to intercept messages. These proxies interact with a configuration manager, which uses configuration rules and maintains a configuration map. The configuration manager draws on a set of monitors to detect changes and to initiate and control service-level adaptation.
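The interception idea behind X-Adapt can be sketched with Java's dynamic proxies. This is our own minimal reading of the description above, not code from the paper; ConfigurationManager and its currentTarget method are invented names standing in for the rule-driven configuration manager.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// The configuration manager consults its rules and configuration map and
// returns the object that should serve the next invocation.
interface ConfigurationManager {
    Object currentTarget(Object current, Method invoked);
}

// A proxy placed between client and server object; every call is routed
// through the configuration manager, which may redirect it to another
// implementation (service-level adaptation).
class AdaptingProxy implements InvocationHandler {
    private final ConfigurationManager manager;
    private volatile Object target;

    AdaptingProxy(Object initialTarget, ConfigurationManager manager) {
        this.target = initialTarget;
        this.manager = manager;
    }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        target = manager.currentTarget(target, method);  // rules may rebind
        return method.invoke(target, args);
    }

    @SuppressWarnings("unchecked")
    static <T> T wrap(T target, Class<T> iface, ConfigurationManager manager) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new AdaptingProxy(target, manager));
    }
}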
3. Break-Out Session
During the afternoon, the attendees were organized into break-out groups. The break-out groups addressed a number of topics, i.e. mechanisms and decision making for reconfiguration, and run-time optimization. At the end of the day, the workshop was concluded with a reporting session where the results of the break-out sessions were presented. Below, the results of the sessions are briefly discussed.
3.1 Mechanisms for Reconfiguration
The material in this subsection is largely based on a contribution by Stefan van Baelen, who volunteered to gather the results from the break-out group that he participated in.
The break-out group first considered the reasons for reconfiguration. The primary argument is that systems are constantly evolving in response to new requirements. To address this, there is a need for mechanisms that permit reconfiguration of systems at minimal cost. These mechanisms should allow us to change the component configuration (e.g. the dependencies that a component requires). One can identify two types of system reconfiguration, i.e. dynamic (at run-time) and static (at compile-time). We have used mechanisms for static reconfiguration for a long time, but why are we currently interested in dynamic reconfiguration at run-time? One can identify several reasons, e.g. to update software while keeping the current state, to evolve systems that cannot be stopped and rebooted (24x7x365) and, finally, that many systems are 'in the field' and cannot easily be physically accessed for upgrading.
One can identify several types of reconfiguration, e.g. instance replacement, service-level adaptation, change of component configuration, interface adaptation, and change of message protocol. Reconfiguration can be initiated by several sources, e.g. the sender, the receiver, a third party in the system (e.g. a monitor component), or an external trigger (e.g. human interaction). The next topic is which elements are aware of a reconfiguration: only the sender, only the receiver, both the sender and receiver or, finally, only the connector (sender and receiver are not aware of the change). Further, there are several aspects that influence the complexity of reconfiguration. These include the component communication mechanism (method calls, events/message passing, or black-board communication/shared memory), stateless versus stateful components, and 'big bang' reconfiguration versus parallel existence of old and new versions. Especially the combination of stateful components and parallel existence raises the difficult problem of sharing component state between the old and the new component.
As a conclusion, the break-out group presented a number of possible mechanisms for reconfiguration. First, aspect-oriented programming has the necessary tools to allow reconfiguration of the components that form the final system. The modeling of these systems and the representation of each component and their dependencies make it possible to obtain a framework representing each aspect of the system. Component functionality can then be adapted to new requirements using AOP techniques. Second, reification to the meta-level and reflection in the sender and/or receiver. In this case, calls to a component are reified to its meta-component. The meta-component can perform all necessary actions to adapt the message. Code for reconfiguration is represented as state of this meta-component (codified as data). In this way, non-procedural artifacts such as business rules, restrictions, etc., as well as procedural artifacts, can be reconfigured. Subsequently, the adapted call can be invoked on the target component using reflection. Third, proxy objects can be placed between the sender and the receiver. In this case, all calls to a component are actually made on its proxy component; the proxy can perform the actions necessary to adapt the message and finally invoke the adapted call on the target component. Finally, one may employ a component system with message delegation based on component identifiers. In this case, a component sends a message to other components by passing an identifier to the component system. The component system uses a directory service to map the identifier to the receiver component and subsequently delivers the message to the target component. As the component system performs the mapping dynamically, it can 'rebind' a component name to a new target component.
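A minimal sketch of this last mechanism, identifier-based delegation with a rebindable directory, is given below. The names (ComponentSystem, bind, send) are our own; the break-out group did not prescribe an API.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Components address each other by identifier only; the component system
// maps identifiers to receivers through a directory, so a name can be
// rebound to a new component without the sender noticing.
class ComponentSystem {
    interface Component {
        void receive(Object message);
    }

    private final Map<String, Component> directory = new ConcurrentHashMap<>();

    // Binding an already-bound identifier rebinds it: subsequent
    // messages are delivered to the new target component.
    void bind(String id, Component c) {
        directory.put(id, c);
    }

    void send(String receiverId, Object message) {
        Component receiver = directory.get(receiverId);
        if (receiver == null)
            throw new IllegalStateException("no component bound to " + receiverId);
        receiver.receive(message);
    }
}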
3.2 Model Extraction and Model Driven Architecture
The second break-out group focused on the relationship between models and architectures. Starting from the observation that there is still a lack of consensus on what an architecture is, the group listed a few understood characteristics:
– Business rules are towards the top of the hierarchy;
– Components and connectors permeate the hierarchy;
– Architecture reaches down to, but excludes, the platform/technology level.
The group described its own interests in this space:
– Engineering: developing graphical representations of architecture, covering both static and dynamic views, and using such representations as input to formal language systems, with the goal of generating systems. The current implementations are function-based systems; component-based systems (with aspects) are work in progress.
– Reverse engineering: automatically capturing specifications of black-box COTS components at the level of component interfaces and component connectors. To date a graphical approach is used, but input of specifications is required. A possible probabilistic approach has been identified.
In the area of reverse engineering, the group focused on compile-time versus run-time opportunities. At run-time, a performance model can be determined by extracting properties at intercepting component proxies. These models can be represented at PIM and PSM levels. At compile time, an activity graph (encoded in XMI) can be generated. Sequence diagrams (also encoded in XMI) can be generated by probes at run-time.
The group sketched the relationships between UML, XMI, source code, and executables:
1. UML and XMI should be equivalent;
2. UML is the basis for source, and XMI is derived from source;
3. Executables are derived from source, and XMI is reified from executables.
Finally, the group identified a number of issues. In particular, given component invocations and timestamps (and other technology-related information), how can the transaction flow in the system (and thus the 'start' of connectors) be determined? How can the result be represented? Is MDA suitable here? Can UML be extended to cope with components and aspects?
3.3 Decision Making for Dynamic Reconfiguration
The third break-out group also started by defining dynamic reconfiguration. This group defined it in terms of a system consisting of components and connectors, where the set of components may change at run-time. Also, a number of stimuli initiating reconfiguration were identified, e.g. a failing component, system improvement (functionality or quality attributes), specialization, and response to changing resources.
The decision-making process is basically concerned with selecting the optimal configuration among a set of possible configurations. To address this, the group first defined four cases, organized along two dimensions, i.e. adaptable versus self-adaptive and open versus closed. Open reconfiguration allows for new component
types to be added to the system, whereas closed reconfiguration does not. Further, self-adaptive systems initiate and perform reconfiguration without external involvement, whereas adaptable systems are reconfigured by a party external to the system.
The main conclusion of the break-out group is that finding the optimal configuration for a system depends on the ability of the system to represent relevant information about itself, e.g. its quality attributes, to evaluate this information, and to predict the effect of reconfiguration on the relevant properties of the system. To achieve this, abstractions are needed.
3.4 Run-Time Optimization
The fourth break-out group addressed the issue of run-time optimization in component-based systems. The relevance of this topic is clear when one realizes that, before deployment, optimization can only take place within a component. Inter-component optimization can only take place once the system exists. Especially in the face of dynamic reconfiguration, run-time optimization is unavoidable. Optimization is typically about minimizing the use of resources, such as CPU cycles, memory, power consumption, and inter-machine communication bandwidth.
One concern with run-time optimization is that, in order to be effective, one has to measure the property to optimize, preferably both before and after the optimization. Nevertheless, several techniques are available to perform the actual optimization. At the code level, dynamic recompilation, code optimization, and optimized method invocation can be used. At the component level, one may replace an entire component version, migrate a component between address spaces or machines, and optimize the component configuration. However, run-time optimization delivers optimized resource usage at a cost, arising from the need to monitor, to make decisions, and to perform the optimization operation itself. This cost may express itself as an increased footprint, increased development effort, and increased difficulty in predicting the behaviour of the system.
Although the break-out group recognized the benefit and importance of run-time optimization, it also identified a number of problems that need to be addressed. These include security and resource concerns associated with inline optimization, and the optimization of embedded/RT systems, especially those with small footprints.
4. Final Words
As organizers, we look back on yet another highly successful workshop on component-oriented programming. We are especially pleased with the constantly evolving range of topics addressed in the workshops, the enthusiasm of the attendees, the quality of the contributions, and the continuing large attendance of more than 30 and often as many as 40 persons. We would like to thank all participants of and contributors to the seventh international workshop on component-oriented programming. In particular, we would like to thank the presenters of the break-out group results.
5. Accepted Papers
The full papers and additional information and material can be found on the workshop's Web site (http://research.microsoft.com/~cszypers/events/WCOP2002/). This site also has the details for the Microsoft Research technical report that gathers the papers and this report.
1. P. Inverardi and M. Tivoli. "The role of architecture in components assembly."
2. P. J. Clemente, F. Sánchez, and M. A. Pérez. "Modelling with UML Component-based and Aspect Oriented Programming Systems."
3. E. Bruneton, T. Coupaye, and J. B. Stefani. "Recursive and Dynamic Software Composition with Sharing."
4. Z. Jarir, P.-C. David, and T. Ledoux. "Dynamic Adaptability of Services in Enterprise JavaBeans Architecture."
5. N. Janssens, S. Michiels, T. Mahieu, and P. Verbaeten. "Towards Hot-Swappable System Software: The DiPS/CuPS Component Framework."
6. A. Mos and J. Murphy. "Understanding Performance Issues in Component-Oriented Distributed Applications: The COMPAS Framework."
7. G. J. Vecellio, M. M. Thomas, and R. M. Sanders. "Container Services for High Confidence Software."
8. A. Gal, P. H. Fröhlich, and M. Franz. "An Efficient Execution Model for Dynamically Reconfigurable Component Software."
9. O. Nano, M. Blay, A.-M. Pinna, and M. Riveill. "An abstract model for integrating and composing services in component platforms."
10. Ch. Salzmann. "Invariants of component reconfiguration."
11. P. Liang, G. Arévalo, S. Ducasse, M. Lanza, N. Schaerli, R. Wuyts, and O. Nierstrasz. "Applying RMA for scheduling field device components."
12. Y. H. Mirza. "A Compositional Component Collections Framework."
13. F. McGurren and D. Conroy. "X-Adapt: An Architecture for Dynamic Systems."
Concrete Communication Abstractions of the Next 701 Distributed Object Systems
Antoine Beugnard1, Salah Sadou2, Laurence Duchien3, and Eric Jul4
1 ENST-Bretagne, Brest, [email protected] - http://www-info.enst-bretagne.fr/~beugnard/
2 Université de Bretagne Sud, Vannes, [email protected] - http://www.univ-ubs.fr/valoria/Salah.Sadou
3 LIFL, Lille, [email protected]
4 University of Copenhagen, Copenhagen, [email protected] - http://www.diku.dk/~eric/
Abstract. As applications become increasingly distributed and networks provide more and more connection facilities, applications require more and more interconnections, and communication thus takes a central place in modern systems. Since major application parts, such as database systems or graphical user interfaces, have already been singled out, the question is whether we can say the same for the communication part of applications.
1 Introduction
As applications become increasingly distributed and networks provide more and more connection facilities, applications require more and more interconnections, and communication thus takes a central place in modern systems. To tackle the communication issue, a lot of techniques and concepts have been developed in different research fields, and some industrial solutions have been proposed. Over the last 15 years, the basic building blocks for distributed object systems have emerged: distributed objects, communicating with Remote Message Send (RMS), also known as Remote Method Invocation (RMI) or Location-Independent Invocation (LII). However, it has also become clear that while such abstractions are by themselves sufficient to expose the hard problems of distributed computing, they do not solve them. Hence, since major application parts, such as database systems or graphical user interfaces, have already been singled out, the question is whether we can say the same for the communication part of applications.
At last year's ECOOP workshop on The Next 700 Distributed Object Systems, we identified some of these problems (Security, Partial Failure, Guaranteeing Quality of Service, Run-time evolution, Meta-Object protocols, and Ordering of events) that are important concerns of any communication abstraction. The
goal of this workshop was to work on the definition of new and good communication abstractions and on the distribution-specific features mentioned above.
The goal being defined, we received seven position papers. All papers were reviewed by two members of the program committee, and all were considered to bring an interesting point of view and to deserve a chance to be discussed. In order to make the workshop profitable to both paper authors and attendees, we organized the workshop as follows:
– The morning was dedicated to short presentations of selected papers (10 to 15 minutes with a strict deadline). After each presentation, a volunteer participant briefly summarized the main points of the presentation (goal, main idea, clear or needing-to-be-reformulated points). A few questions from attendees followed. The paper author then commented and answered the questions in order to clarify the audience's understanding.
– The afternoon was dedicated to the composition of working groups that shared their conclusions at the end of the workshop.
2 Position Papers Abstracts and Discussions
We identified three themes in the position papers. First, R. Filman's paper [12] discusses injectors (filters, wrappers) and annotations (added information) that are part of a mechanism for getting a handle on distribution. Second, a group of three papers presents communication abstractions based on peer-to-peer and publish/subscribe protocols. The paper presented by Ulrik P. Schultz [14] defines peer data structures as a means to represent and synchronize data in applications for pervasive computing systems. The paper "Visibility as Central Abstraction in Event-based Systems", written by Ludger Fiege et al. [11], introduces scopes as a basic abstraction in event-based systems and, on top of the visibility concept, devises abstractions for heterogeneous systems. The third paper, presented by C. Damm [8], proposes a distributed extension of a collaborative OO modelling tool. This distributed extension is based on publish/subscribe mechanisms. Finally, the last theme concerns abstract communications in particular contexts. Selma Matougui [13] presents a state of the art of coordination languages and defines communication components as elements of coordination languages. Monique Calisti [6] presents abstract components for agent-based communication in distributed agent-based systems. The last paper, presented by N. Baker [5], proposes communication abstractions based on a multi-level system description managed by a workflow application that instantiates relationships between components.
2.1 Injectors and Annotations by Robert E. Filman
We describe a technology that achieves system-wide properties (ilities) in large systems by controlling and modifying inter-component communications.
Traditional component-based applications intermix the code for component functionality with ility support. This yields non-reusable components and produces inflexible systems. Our technology separates ilities from functional code and provides a mechanism for weaving them together with functional components. This allows a much richer variety of component reuse and system evolution. Key elements of this technology include intercepting inter-component communications with discrete, dynamically configurable "injectors", annotating communications and processes with additional meta-information, and a high-level, declarative specification language for describing the mapping between desired system properties and services that achieve these properties. Our research is an example of Aspect-Oriented Programming (AOP) and brings to AOP a methodology for developing and composing aspects.
Questions/Answers
– How do I inject injectors?
Injectors are associated with each method of each instance object. There exists a table (the default injectors table) which maps each interface I and method M to a sequence of injector factories. When creating a new instance object, we go through the methods of that interface and obtain this sequence. We call, in turn, each factory in this sequence (providing the proxy object) to generate an injector object to go into this proxy. Both this table and the injectors on any object are dynamic—a program with access to these interfaces can change the factories for any interface/method or the injectors for any instance/method.
– Security may require injecting behavior directly into components. How does OIF handle this?
OIF is a "black-box" model—it doesn't have access to the internals of objects. This has the disadvantage of not being able to modify behavior at the micro level. It has the advantage of working with code for which we lack source, and of being a lot easier to implement.
– Are injectors limited to point-to-point communication?
Don't know. Give me your non-point-to-point communication model and I'll tell you how to apply these ideas.
– How does one preserve consistency across a distributed system?
One thing I've thought about is a "version control" injector which would vet communications with respect to the injectors and application objects in use, updating the injector sequence and transforming messages in response to version inconsistencies.
– Are annotations used so that injectors can talk to each other locally or remotely?
Both. The annotation sequence is passed among both the local and remote injectors, and they can freely modify it. (A "version 2" intention is to require declarations on the part of injectors as to which annotations they modify.)
– What's the AOP weaving mechanism?
The table of injector factories controls the default (initial) injectors on a given instance. OIF includes a specification language, Pragma, for defining
In this language, one can group injectors into collections that satisfy particular policies and describe relationships about the relative ordering of injectors.
– How do you decide the order of injectors? In Pragma, one can group injectors into collections that satisfy particular properties and describe relationships about their relative ordering. The actual injectors executed are a function of the original (default) sequence of injector factories for that interface/method and of whatever dynamic changes have happened to this sequence during the execution of the program.
– Isn't the thread context a global variable? Per thread, so to speak. This in some sense corresponds to the Java concurrency model. Just like EJB, this has the disadvantage that one can't willy-nilly create new threads and have them interact appropriately. What's appropriate is, in fact, a difficult question. Additionally, changes to the thread context are not properly scoped.
– How are injectors dynamic? Very. There are mechanisms for dynamically changing an instance/method's injector sequence and the injector factory table.
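The factory-table mechanism described in these answers is easier to picture in code. The following is a minimal, self-contained sketch of the idea, not OIF's actual API: the names InjectorTable and Injector, and the per-call construction of the chain, are our own simplifications (real OIF installs injectors per instance at creation time).

```java
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// An injector sees each inter-component call and passes it on (possibly
// modified) to the rest of the chain, and finally to the component itself.
interface Injector {
    Object invoke(Object target, Method m, Object[] args, List<Injector> rest)
            throws Throwable;
}

class InjectorTable {
    // Default-injectors table: "Interface.method" -> injector factories.
    // Mutable at runtime, matching the "dynamic" answers above.
    final Map<String, List<Supplier<Injector>>> factories = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> T wrap(Class<T> iface, T target) {
        return (T) Proxy.newProxyInstance(iface.getClassLoader(),
            new Class<?>[] {iface},
            (proxy, method, args) -> {
                // Consult the current factories and build the injector chain.
                List<Supplier<Injector>> fs = factories.getOrDefault(
                        iface.getName() + "." + method.getName(), List.of());
                List<Injector> chain = new ArrayList<>();
                for (Supplier<Injector> f : fs) chain.add(f.get());
                return proceed(target, method, args, chain);
            });
    }

    static Object proceed(Object target, Method m, Object[] args,
                          List<Injector> rest) throws Throwable {
        if (rest.isEmpty()) return m.invoke(target, args); // reached the component
        return rest.get(0).invoke(target, m, args, rest.subList(1, rest.size()));
    }
}
```

A logging or access-control ility is then a small Injector implementation registered by appending its factory to the table entry of an interface/method, and it can be removed again at runtime, just as the answers above describe.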
2.2 Peer Data Structures by Ulrik P. Schultz
Contemporary research on services for pervasive and ubiquitous computing often assumes a central server, even though from an architectural point of view the central server is ill-suited to these new computing paradigms. Examples of such centralized services include context and location awareness, a platform for groupware applications, a location-based information system, as well as the simpler yet still difficult problem of managing appointments in a group-based calendar. The origin of this dichotomy is the simplicity of programming clients that operate through a central server, as opposed to decentralized peers that share relevant information. Implementing a pervasive computing system without a central server typically involves defining a communication protocol for distributing and synchronizing data. Rather than implementing such protocols from scratch for every application, my goal is a library of standard abstractions for sharing and synchronizing data, similar to the collection library available in any decent object-oriented language. Distributed Asynchronous Collections are a prime source of inspiration for this work, but they are based on the publish/subscribe paradigm, which is not necessarily ideal for distributing all kinds of data. Rather than relying on continued dissemination of data, I want each local copy of the data structure to be as complete as possible at all times. (Of course, the publish/subscribe paradigm is powerful enough to provide this behavior as well, but I am searching for other, potentially simpler solutions.) The position paper describes work in progress on the implementation of communication abstractions for a pervasive computing virtual machine: I describe the current state of our virtual machine implementation (including its basic communication mechanisms), outline how to implement peer data structures as an abstraction for distributed communication with simple communication and synchronization behavior,
and finally give examples of how these data structures can concretely be used as a substitute for a centralized server architecture.

Questions/Answers

– The ordering of changes seems important, but it's an asynchronous system, so how do you handle this issue? The merge operator must be independent of the order in which events arrive (if it's not, then the data structure looks different at each node, which is fine in some cases).
– Isn't too much work spent gossiping? The number of peers must be appropriate (see the work of Eugster et al. on "Lightweight Probabilistic Broadcast" [9]).
– What is the overhead of maintaining the same state? No more than what is spent gossiping.
– What if I need an accuracy guarantee? Synchronize with a server; but it's unclear how synchronization is done when peer relations are created dynamically.
– Where are the neighbor relations kept? They are part of the data structure.
– Doesn't copying data to all peers limit where peer data structures can be applied? Yes.
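The order-independence required of the merge operator in the first answer is the key property. As a hedged illustration (our own toy, not Schultz's library), here is a replicated grow-only set whose merge is set union (commutative, associative, and idempotent), so two peers that eventually see the same gossip messages converge regardless of arrival order or duplication:

```java
import java.util.HashSet;
import java.util.Set;

// A peer-local replica of a shared set. Local updates are applied
// immediately and later gossiped to neighbors; applying a gossip
// message is a set union, so replicas converge no matter in which
// order (or how many times) messages arrive.
class PeerSet<E> {
    private final Set<E> elements = new HashSet<>();

    synchronized void add(E e)           { elements.add(e); }                // local update
    synchronized Set<E> snapshot()       { return new HashSet<>(elements); } // payload to gossip
    synchronized void merge(Set<E> seen) { elements.addAll(seen); }          // apply gossip
}
```

Removal, or data where only the last state matters, needs extra metadata such as version numbers, which is exactly where the synchronization questions raised above become hard.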
2.3 Visibility as Central Abstraction in Event-Based Systems by Ludger Fiege, Mira Mezini, Gero Mühl, and Alejandro P. Buchmann
Event notification services facilitate loose coupling and are increasingly used to compose today's distributed, evolving software systems. They are accessed via library calls or programming language extensions; both essentially offer publish and subscribe operations, which announce a notification and express a consumer's interest in receiving a specific kind of notification, respectively. However, this is too simplistic an abstraction to cope with the flexibility and degrees of freedom an event notification service offers. Publish and subscribe support producers and consumers, but the loose coupling in these systems pulls the dependencies between the cooperating components out of the components themselves, beyond the control of publish and subscribe calls. Besides publishers and subscribers, the role of an administrator has to be distinguished, whose responsibility is to orchestrate and control the cooperation of a set of event-based components. In a sense, the development of event services follows the early stages of programming language evolution, in that the need for mechanisms to structure event-based applications is disregarded. We propose to use visibility as the central abstraction and introduce the well-known notion of scoping to event-based systems.
Controlling the visibility of events is a simple yet powerful mechanism that makes it possible to identify application structure and offers a module construct used to localize event coordination. It extends the publish and subscribe primitives and their flat design space with an application structure, on top of which we devise support for bundling and composing new components, refining delivery semantics within these bundles, and transforming events in heterogeneous systems. The contribution of the scoping abstraction in event-based systems is twofold: first, it is a design tool that helps to analyze application structure and customize event-based cooperation; second, it is an implementation framework that offers control over the loose coupling in event-based systems.
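One way to picture scoping as a module construct is a tree of scopes in which a notification is delivered to local subscribers and then forwarded along the scope hierarchy. This is only a sketch of one possible visibility rule; the paper's model is considerably richer (bundling, delivery-semantics refinement, and event transformation at scope boundaries), and all names here are ours:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// A scope bundles event-based components and bounds the visibility of
// their notifications: an event published in a scope is visible to its
// local subscribers and, under this simple rule, to enclosing scopes.
class Scope {
    private final Scope parent;  // null for the root scope
    private final List<Consumer<Object>> subscribers = new ArrayList<>();

    Scope(Scope parent) { this.parent = parent; }

    void subscribe(Consumer<Object> consumer) { subscribers.add(consumer); }

    void publish(Object event) {
        for (Consumer<Object> c : subscribers) c.accept(event); // visible locally
        if (parent != null) parent.publish(event);              // and upwards only
    }
}
```

Under this rule, components in sibling scopes never see each other's events unless the scopes are connected explicitly, which matches the structuring role assigned to the administrator above.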
2.4 Building Distributed Collaboration Tools Using Type-Based Publish/Subscribe by Christian Heide Damm and Klaus Marius Hansen
The authors present Distributed Knight, an extension to the Knight tool for collaborative, co-located, gesture-based object-oriented modelling. Distributed Knight supports distributed collaboration and is built using a type-based publish/subscribe mechanism. We discuss how this architecture generalises to the distribution of data instantiated from general MOF-compliant metamodels, and we argue that type-based publish/subscribe mechanisms are useful for building distributed collaboration tools.

Questions/Answers

– Why elaborate on pruning events? Two reasons: (i) to avoid flooding the net (mouse events); (ii) some data become out of date (changes of state where only the last state is important). Sometimes the intermediate states are also desired, in which case you shouldn't prune events. But the architecture is not application-specific, and such cases (all of them?) should be implemented using Awareness Events.
– What about security? We haven't considered it, but it is no different from other publish/subscribe abstractions.
– When should one use TPS and when RMI? Use RMI for point-to-point communication and TPS otherwise. For example, when joining a session, the newcomer publishes an event saying that he is interested in joining the session, and all the session participants respond by publishing events saying that they are participating in the session and that the newcomer can contact them via RMI in order to get updated with all the data. Objection: what if, in the future, one wants to make the initial data transfer available to other publish/subscribe clients? Good point; that is not possible, but the other clients will be able to see that the newcomer has actually joined the session. Implement RMI on top of TPS?
– What changed in the event hierarchy as a result of the transition to a distributed version?
In the stand-alone version, we didn't use TPS at all; it is only used for distribution.
– Can you have different views of the same data? It depends on what the "session data" is. If the session data contains information about the view (e.g., the viewport or the zoom factor), then it is not possible. In Knight, this is not the case. The session data must be the same at all clients. If you make a change to one element and this change automatically affects other elements, then you really change those elements too, and the SessionDataEvent will therefore contain the changes of those elements as well.
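For readers unfamiliar with type-based publish/subscribe, the defining trait is that the event's class, rather than a topic string, is the subscription criterion, which keeps subscriptions statically typed. A minimal in-process sketch (illustrative only; Distributed Knight sits on a distributed TPS implementation, and the event class named in the usage note below is just the one mentioned in the answer above):

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Type-based publish/subscribe: subscribers register for an event
// class; publishing delivers to every subscriber whose registered
// type the event is an instance of (so supertypes match too).
class TypedEventBus {
    private final Map<Class<?>, List<Consumer<Object>>> subs = new ConcurrentHashMap<>();

    <E> void subscribe(Class<E> type, Consumer<? super E> consumer) {
        subs.computeIfAbsent(type, t -> new CopyOnWriteArrayList<>())
            .add(e -> consumer.accept(type.cast(e)));
    }

    void publish(Object event) {
        subs.forEach((type, consumers) -> {
            if (type.isInstance(event)) consumers.forEach(c -> c.accept(event));
        });
    }
}
```

A client interested in model changes would then write something like bus.subscribe(SessionDataEvent.class, e -> apply(e)), with no topic names to keep consistent across the code base.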
2.5 Coordination Languages as Communication Components by Selma Matougui
The concept of coordination is by no means limited to computer science; within computer science, it is defined as the process of building programs by gluing together active pieces. Coordination raises many problems. The authors selected some of them and suggested how a coordination process can manage them:
– Management of shared resources: it is necessary to control the allocation of, and/or the access to, the resource.
– Transfer of information: the physical transfer must be ensured, synchronization controlled, and possible duplicate transfers prevented.
– Activity synchronization: a coordination process should serialize concurrent execution requests and select the request to execute.
– Group decisions: because no single member of a group has enough authority to decide, a coordination process should implement the decision scheme and announce the result.
To manage all these problems there is a real need for an explicit representation of the coordination between active entities; this is provided by coordination models and languages. Two families of coordination models and languages exist: data-driven and control-driven approaches. In data-driven models, the activity is centered around a shared body of data. These models have no language of their own but offer a set of primitives that are incorporated into a computation for its coordination; they use generative communication. In control-driven models, by contrast, the activity is centered around processing, and the notion of data hardly exists; processes communicate by message passing. But these languages are domain-specific, and coordination code still remains mixed with computation code, which limits the possibility of reuse. A new approach has appeared: the component approach, which puts coordination abstractions into explicit reusable entities. In this paper we propose to use mediums to express coordination. Mediums are communication components that precisely specify their offered and required services; they use either generative communication or message-passing communication, and they offer a refinement process.
Questions/Answers
– How can the separation be done? The frontier between computation and coordination is not always obvious. Properties such as reusability can be used to find a good one.
– An example of separation? As an example, we have studied and proposed coordination abstractions such as "electing" or "reserving". These abstractions can be considered "high-level primitives" that the computation part of components can use in order to coordinate.
– There is another approach, called filters, which uses the same principle of separation. Can you compare your approach with it? We have heard about aspect-oriented programming but need to study it further.
– Can components be composed with mediums? Yes, mediums can be composed with components. They can be used as connectors.
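To give the medium idea some shape (coordination packaged as a component with explicitly offered and required services), here is a hedged sketch of the "reserving" abstraction mentioned in the answers above. All interface and class names are invented for the illustration, and the centralized implementation stands in for what a refinement step could replace with a distributed one:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Services the medium offers to its components (provided interface).
interface ReservationMedium<R> {
    boolean reserve(String client, R resource); // at most one holder per resource
    void release(String client, R resource);
}

// Services the medium requires from its components (required interface),
// so coordination decisions can be pushed back into the computation part.
interface ReservationClient<R> {
    void granted(R resource);
    void revoked(R resource);
}

// One refinement of the medium: a local, map-based realization. A later
// refinement could substitute a distributed, message-passing version
// without touching the components' computation code.
class LocalReservationMedium<R> implements ReservationMedium<R> {
    private final Map<R, String> holders = new ConcurrentHashMap<>();

    public boolean reserve(String client, R resource) {
        return holders.putIfAbsent(resource, client) == null;
    }

    public void release(String client, R resource) {
        holders.remove(resource, client); // no-op unless 'client' holds it
    }
}
```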
2.6 Abstracting Communication in Distributed Agent-Based Systems by Monique Calisti
The paper discusses abstraction components for agent-based communication. We report on our experience in deploying such components for developing agent-based solutions, and we draw some conclusions on the main benefits and challenges of communication abstractions for multi-agent systems. We finally argue that some of these abstractions have the potential to be re-used or integrated on top of traditional (non-agent) distributed systems.

Questions/Answers

– The reader was expecting (from his interpretation of the title) something about how agents could use abstractions for dynamic reasoning about the state of the world and about ongoing communications. "Abstracting communication" refers to the multi-layered structure, or communication stack, that is the most commonly adopted approach to agent communication in the agent world. "Abstractions" are, in this sense, the various layers into which the message exchange can be decomposed. Some of them are reusable (i.e., interaction protocols, ACLs, content languages, and eventually ontologies) and some are domain- and/or application-dependent.
– Ludger notes that, in the communication stack presented in the paper, it would perhaps be more intuitive (from an OSI perspective) to have ontologies at the top of the stack, since they are the most "application"-dependent components. Monique observes that ontologies are indeed the most application-oriented components; however, interaction protocols are at the highest level of abstraction in terms of communication, which is why they have been put on top of the communication stack.
– What is the difference between an active server and an object? Monique first asks Salah to explain what he assumes an "active server" to be. This clarifies that the agent paradigm is a very different way of conceiving a system or solution: in terms of "how" software entities can decide what to do, rather than defining them in terms of what they should do. If you invoke a method that does not exist, an active server either does not react at all or sends an error message. An agent could instead try to do something consistent with its goals/mental state and its vision of the world (a capability for dealing with unexpected events).
– What about standard components for agent communication languages? At the ACL level there are two main approaches: (i) approaches based on speech act theory (the two best-known ACLs are KQML and FIPA-ACL); (ii) ad hoc approaches in which data exchange is structured in the form of messages, without an intentional level behind it.
– What about the need to define an appropriate boundary between coordination and computation? It is not possible to draw "THE" line. How much is delegated to the abstracted communication stack (including interaction and coordination protocols), and how much complexity the software components take on in dealing with it, depends strongly on the type of application and on the boundary conditions. Coordination can be influenced by computation (i.e., the decision-making process) and vice versa. The conclusion, therefore, is that abstractions are very important for coordinated communications, but there is always a trade-off against the computation mechanisms that agents make use of.
– Is there experimental feedback about the use of XML? The experience gained when implementing an XML-based solution (XML used for the ontology representation) versus a non-XML one shows that:
• The way you design your ontology is very important, since parsing XML-based messages can impact the performance of your system.
• XML-encoded messages can be heavy in terms of representation, transport, and parsing.
• XML is good in terms of re-usability: the same representation (i.e., ontology) can be used in different frameworks by just sharing the DTD file.
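To make the stack layers tangible: in a speech-act ACL, each message names its performative, protocol, content language, and ontology explicitly, which is what lets those layers be reused independently. A hedged sketch of such a message structure (the field names follow FIPA-ACL conventions; the Java record form and the example values are ours):

```java
// A FIPA-ACL-style message. The performative carries the intentional
// level; protocol, language, and ontology identify the reusable layers
// discussed above; only content is fully application-dependent.
record AclMessage(
        String performative,   // communicative act, e.g. "request", "inform"
        String sender,
        String receiver,
        String protocol,       // interaction protocol, e.g. "fipa-request"
        String language,       // content language, e.g. an XML dialect
        String ontology,       // shared vocabulary both agents commit to
        String content) {      // domain-specific payload

    public static void main(String[] args) {
        AclMessage msg = new AclMessage(
                "request", "buyer@platform", "seller@platform",
                "fipa-request", "xml", "trading-ontology",
                "<order item=\"switch\" quantity=\"2\"/>");
        System.out.println(msg); // records print all their fields
    }
}
```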
2.7 Communication Abstraction Based on Multi-level System Description by N. Baker, Z. Kovacs, G. Mathers, and R. McClatchey
Component-based software development facilitates the construction of (often reusable) software artifacts by assembling prefabricated, configurable building blocks. A component-based approach to software development also offers an achievable degree of reusability and flexibility over traditional development methods. Component autonomy is achieved by establishing component coupling through the underlying communication infrastructure. The communication architecture is responsible for reifying the component relationships within the system.
CERN has developed a workflow management application that instantiates relationships between components from descriptions held in a model. The first implementation of this application, called CRISTAL, uses descriptions of physical components (e.g., products, agents) and of their workflow activities, and it has proven that with this so-called description-driven approach it is possible to evolve new "versions" of these descriptions and apply them at runtime. The description-driven approach allows associations between component types to evolve and component relationships to be modified at run-time, and it thus provides the essential elements for supporting the registration of components' interests in an event-based system. Using descriptions to define components makes it possible to modify component relationships at runtime and to provide traceability of the evolution of those relationships. This paper reports on work that extends the description-driven approach of CRISTAL to handle traceable (re-)configuration of component relationships.

Questions/Answers

– How do you choose between what is system and what is described? This is decided by what you consider core to your component. For example, core to a workflow management application is a workflow engine; you are then able to create descriptions to populate that workflow engine.
– What can't be described? Aspects that exist in the vertical abstraction do not have technologies that allow modification. We take advantage of populating at the model level to be able to modify behavior.
– Is it only data that is described, or can behavior be described? We are able to describe processes carried out by a component; such a process may be applicable (related) to another component.
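A minimal reading of the description-driven idea, sketched under our own hypothetical naming (CRISTAL's actual meta-model is far richer): a component consults a description object instead of hard-coding its workflow, so installing a new description version changes behavior at runtime while older versions remain stored for traceability.

```java
import java.util.ArrayList;
import java.util.List;

// A description is itself data: versioned, storable, replaceable at runtime.
record ActivityDescription(int version, List<String> steps) {}

// A described component interprets its *current* description rather than
// hard-coding behavior, so evolving the description evolves the component.
class DescribedProduct {
    private final List<ActivityDescription> history = new ArrayList<>(); // full trace

    void install(ActivityDescription d) { history.add(d); } // new version at runtime

    ActivityDescription current() { return history.get(history.size() - 1); }

    void runWorkflow() {
        for (String step : current().steps()) {
            System.out.println("executing " + step); // dispatch to real activities here
        }
    }
}
```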
3 Discussions
The second part of the workshop was dedicated to discussions and working groups. After a brainstorming session in which attendees suggested several subjects for discussion, we selected three of them. The conclusions of each working group are summarized below.
3.1 Peer to Peer
Peer-to-peer (P2P) systems are systems in which objects, called peers, have no fixed client or server role. The communication between peers is spontaneous and unconstrained. Two types of P2P system can be distinguished: non-Internet communication between independent devices, and Internet-based sharing of information.
Applications for which P2P is an advantage are those that need to circumvent a central authority and those that need fault tolerance. For instance, wireless network hardware such as Bluetooth, and software such as Gnutella [2], Jxta [4], Freenet [1], and Groove [3], are peer-to-peer systems. Some issues were identified by the group; here is a non-exhaustive list:
– Special hardware, protocols, etc. are needed.
– What is the relation between publish/subscribe and P2P?
– Ilities: what about scalability, heterogeneity, security?
– Performance, optimization, debugging.
Peer-to-peer systems promote technological and social changes (new devices, new ways of working), with influence going in both directions. P2P, as a new communication pattern, demands new solutions, such as fault-tolerant TCP/IP and the absence of a central authority. This kind of communication, which may be anonymous or pseudonymous, leads to awareness, heterogeneous devices, and more freedom ("cutting out the middle man").
3.2 Publish/Subscribe
Publish/subscribe communication is based on the existence of agents that produce messages/events and of other agents, called consumers, which subscribe to the system in order to receive one or more categories of messages/events. The same agent may be producer and consumer at the same time. One of the principles of this type of communication is that the producers and the consumers of messages/events are anonymous to each other, so that the number of producers and consumers may change dynamically. In this working group, three points were discussed, related to the motivation, the current state, and the remaining problems. Here are the conclusions for each point:
Motivation: Publish/subscribe communication is inherent in some domains, such as systems that receive stimuli from many sensors (process automation, telecom), and it will be useful in Enterprise Application Integration (EAI) for decoupling application components and for software composition.
Current state: Popular products implement basic publish/subscribe communication services based on standards (TINA, CORBA, J2EE, .NET), event channels (CORBA), topics (JMS), etc. There are some research prototypes such as Scopes [10] or Siena [7] (a content-based notification service).
Problems: Control of the communication flow inside the products is limited, e.g. the visibility of events/messages and their delivery to subscribers. Systems need language support, optimizations to reduce network traffic (grouping of events, collocation), and some ilities such as security. Finally, guidelines for building publish/subscribe systems are needed because of their inverted control flow; only few patterns for publish/subscribe computation are known.
3.3 Towards a Unified Theory
The goal of this working group was ambitious. But the first workshop on communication abstractions had to be! Discussions were quite abstract, trying to build a common vocabulary between people who do not share exactly the same ontologies. In order to start communicating about communication, it was unavoidable to start by sharing conceptual knowledge. Hence, we tried to agree on definitions of abstraction and communication:
Abstraction: A model that cuts off uninteresting elements so that we can understand relevant things about something. It has properties we can reason about, and it can be at any level.
Communication: An exchange of information, sharing a context or knowledge.
Then we wondered what kinds of properties are of interest; a starting place was to catalog interesting properties of distributed communication systems:
– Object identity (localization, addressing, etc.)
– Temporal properties (synchronicity, QoS, etc.)
– Causality, ordering
– Failure (awareness, detection, notification, management, correction, etc.)
Finally, we proposed to start a catalog of Communication Abstractions that would specify "What do they exchange?" and "What properties do they have?".
4 Workshop Conclusions
An indirect goal of this workshop was to gather people from various communities who work on the same problem without any opportunity to meet and share their points of view. Communication is essential to multi-agent systems, software architecture, and distributed systems, but with different goals. We found it valuable to bring these people together, and we hope the attendees found it equally fruitful. The suggestions and needs expressed by the attendees invite us to organize a follow-up workshop next year.
References
[1] Freenet. http://freenetproject.org/.
[2] Gnutella. http://www.gnutella.com/.
[3] Groove. http://www.groove.net/.
[4] Jxta. http://www.jxta.org/.
[5] N. Baker, Z. Kovacs, G. Mathers, and R. McClatchey. Communication abstraction based on multi-level system description. http://perso-info.enst-bretagne.fr/~beugnard/ecoop/MultiLevelCommAbst.pdf, June 2002.
[6] Monique Calisti. Abstracting communication in distributed agent-based systems. http://perso-info.enst-bretagne.fr/~beugnard/ecoop/DABCommAbst.pdf, June 2002.
[7] Antonio Carzaniga, David S. Rosenblum, and Alexander L. Wolf. Design and evaluation of a wide-area event notification service. ACM Transactions on Computer Systems, 19(3):332–383, August 2001.
[8] Christian Heide Damm and Klaus Marius Hansen. Building distributed collaboration tools using type-based publish/subscribe. http://perso-info.enst-bretagne.fr/~beugnard/ecoop/PublishSubscribe.pdf, June 2002.
[9] P. Eugster, S. Handurukande, R. Guerraoui, A. Kermarrec, and P. Kouznetsov. Lightweight probabilistic broadcast, 2001.
[10] L. Fiege, M. Mezini, G. Mühl, and A. Buchmann. Engineering event-based systems with scopes. In European Conference on Object-Oriented Programming (ECOOP), June 2002.
[11] Ludger Fiege, Mira Mezini, Gero Mühl, and Alejandro P. Buchmann. Visibility as central abstraction in event-based systems. http://perso-info.enst-bretagne.fr/~beugnard/ecoop/VisibilityAsCentralAbst.pdf, June 2002.
[12] Robert E. Filman. Injectors and annotations. http://perso-info.enst-bretagne.fr/~beugnard/ecoop/InjectorsAnnotations.pdf, June 2002.
[13] Selma Matougui. Coordination languages as communication components. http://perso-info.enst-bretagne.fr/~beugnard/ecoop/CoordLang.pdf, June 2002.
[14] Ulrik P. Schultz. Peer data structures. http://perso-info.enst-bretagne.fr/~beugnard/ecoop/PeerDataStructures.pdf, June 2002.
Unanticipated Software Evolution

Günter Kniesel¹, Joost Noppen², Tom Mens³, and Jim Buckley⁴

¹ Dept. of Computer Science III, University of Bonn, Germany, [email protected] - http://www.cs.uni-bonn.de/~gk/
² Software Engineering Lab, University of Twente, Enschede, The Netherlands, [email protected] - http://wwwhome.cs.utwente.nl/~noppen/
³ Programming Technology Lab, Vrije Universiteit Brussel, Belgium, [email protected] - http://prog.vub.ac.be/~tommens/
⁴ University of Limerick, Castletroy, Limerick, Ireland, [email protected] - http://www.csis.ul.ie/
This workshop was dedicated to research towards better support for unanticipated software evolution (USE) in development tools, programming languages, component models and related runtime infrastructures. The report gives an overview of the submitted papers and summarizes the essence of discussions during plenary sessions and in working groups.
1. Introduction

Many studies of complex software systems have shown that more than 80% of the total cost of software development is devoted to software maintenance. This is mainly due to the need for software systems to evolve in the face of changing requirements. Despite the importance of software evolution, techniques and technologies that offer support for software evolution are far from ideal. In particular, unanticipated requirement changes are not well supported, although they account for most of the technical complications and related costs of evolving software. By definition, unanticipated software evolution (USE) is not something for which one can prepare during the design of a software system. Therefore, support for such evolution in development tools, programming languages, component models, and related runtime infrastructures becomes a key issue. Without it, unanticipated changes often force software engineers to perform extensive invasive modification of existing designs and code. Correspondingly, the USE workshop addressed the issues inherent in the unanticipated, incremental evolution of object-oriented and component-based systems. The main goal of the workshop was to increase mutual awareness of the different research groups active in this field. By fostering an exchange of ideas, we wanted to promote new approaches and technologies for dealing with unanticipated software evolution.

1.1 Workshop Overview

The workshop lasted one day. In keeping with the spirit and format of a workshop, USE 2002 had a highly discursive nature, with presentations in the morning and different theme-based discussion tracks ("breakout groups") in the afternoon.
For the morning session the organizers selected presentations that covered a wide range of approaches [6, 14, 4, 21, 20, 27]. Although more than half of the slot devoted to each presentation was reserved for discussion, this time was seldom long enough. The heterogeneity of the approaches and of the participants' backgrounds triggered lively debates. Occasionally these led to general agreement, but more often they demonstrated the need for a stronger exchange of ideas between the different communities working on different facets of unanticipated software evolution. The paper presentations were followed by an overview of all received submissions, intended as a starting point for identifying topics for breakout groups. Starting from the question "What is Unanticipated Software Evolution?", Günter Kniesel offered a tentative answer and a classification of the various dimensions of the problem and of the proposed approaches.

1.2 What Is USE and What Is USE Research?

Agreeing on a common definition of unanticipated software evolution (USE) turned out to be one of the most controversial issues. The opinion that every change is anticipated at some point in time and the opposite view that every change is inherently unanticipated were both present. This is characteristic of the heterogeneity of the different views on this topic. Among other things, the discussion revolved around the questions of what changed, when it changed, whether the change depended on any hooks built into the previous version of the application, and from whose point of view the change was (un)anticipated. There was no conclusive answer to this question. Maybe the reason is that this is the wrong question, since different definitions are sensible from different points of view. So one could step back from asking what unanticipated software evolution is. Instead, one could ask why we are concerned about unanticipated evolution and what the aim of research on unanticipated evolution is. This is less controversial: the obvious goal is to reduce the costs of evolving software without sacrificing correctness and quality. Correspondingly, malign changes are those that exhibit undesired side-effects or involve high costs, either because they are non-incremental or because they depend on the prior encoding of hooks. In contrast, benign changes exhibit no undesired side-effects, are incremental, and do not depend on the prior encoding of hooks. Then one can simply say that the aim of research on unanticipated software evolution is to enlarge the category of benign changes. Put differently, the aim of USE research is to enlarge the category of changes that exhibit no undesired side-effects and for which a non-invasive¹ implementation is possible without the encoding of proper hooks in prior versions of the changed software.
¹ A non-invasive change is one that adds something to a software system without changing any of its pre-existing parts. The terms non-invasive, incremental, and additive are synonyms.
Although this definition was not produced during the workshop, we want to put it forward as a working definition that can be further elaborated in the future. It comes close to some of the opinions expressed at the workshop, so we hope it can eventually evolve into a generally accepted understanding of what USE research is about.
2. Workshop Papers

All submitted papers were reviewed by the USE program committee². Of the 24 accepted submissions, two came from industrial research and development departments; all the other contributions came from academic research.

Java HotSwap API The two industrial papers address the new dynamic class replacement mechanism ("HotSwap API") included in JDK 1.4 and supported by Sun Microsystems' Java HotSpot Virtual Machine:
– Pazandak [21] presents ProbeMeister, a product that uses the HotSwap API to attach to multiple remotely running applications and effortlessly insert software probes that gather information about their execution. This information can be used to effect changes within the running applications, to improve their operation, or to recover from failures.
– Dmitriev [4] describes as yet unreleased improvements of the HotSwap API for fine-grained, fast, and scalable bytecode instrumentation in running server programs. As an interesting case study, he shows that a JVM supporting an evolved API for dynamic bytecode instrumentation allows developers to profile dynamically selected parts of a target application much more efficiently than with other techniques.

Instance Adaptation Instance adaptation is the process of changing the layout of existing objects after their class definition has changed. This is a well-known problem in object-oriented databases, and it manifests itself also in systems that allow classes to be updated at run-time.
– Duggan and Wu [6] present "adaptable objects", an approach enabling the interoperation of objects whose interface and layout may be updated at runtime. It relies on the lazy creation of version adapter proxies that mediate where there is a version mismatch between an object and code attempting to access that object.
– Hirschfeld, Wagner, and Gybels [11] suggest that there is much to learn from the proven techniques for unplanned dynamic evolution in Smalltalk and provide an overview of Smalltalk's mechanisms for renaming, removal, and layout change of a class.
– Rashid [22] presents a comparative evaluation of the schema relationships and instance adaptation of four object database systems, each representing a particular category of evolution system. The discussion aims to demonstrate the benefits of an aspect-oriented approach in the face of unanticipated changes.
² See the USE 2002 web site at http://joint.org/use/2002/ for program committee members and additional reviewers.
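The HotSwap-style class replacement that both industrial papers above build on can be illustrated with java.lang.instrument, the JDK 5 descendant of the HotSwap mechanism. This is a hedged sketch only; the agent jar name, the Worker class, and the path to the patched bytecode are all hypothetical:

```java
import java.lang.instrument.ClassDefinition;
import java.lang.instrument.Instrumentation;
import java.nio.file.Files;
import java.nio.file.Path;

// Started with -javaagent:swap.jar (whose manifest declares
// Can-Redefine-Classes: true); the JVM hands the agent an
// Instrumentation object with HotSwap-style class redefinition.
public class SwapAgent {
    public static void premain(String agentArgs, Instrumentation inst) throws Exception {
        byte[] patched = Files.readAllBytes(Path.of("patched/Worker.class"));
        // Replace the running Worker class's method bodies in place.
        // Existing instances keep their state; frames already executing
        // old method bodies complete with the old code, as with HotSwap.
        inst.redefineClasses(new ClassDefinition(Class.forName("Worker"), patched));
    }
}
```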
Dynamic Linking The two contributions about dynamic linking differ from all the others in that they do not concentrate on enabling unanticipated changes but instead demonstrate that unanticipated changes can sometimes be undesired or have undesired side-effects (a self-contained illustration follows at the end of this section):
– Drossopoulou and Eisenbach [5] point out that, in some situations, dynamic linking in Java affects program execution and the integrity of the virtual machine. In order to let programmers understand these effects precisely, the paper demonstrates the process in terms of a sequence of source-language examples in which the effects of dynamic linking are explicit.
– Eisenbach, Sadler, and Jurisic [7] present a series of examples in Java and C# that pinpoint problems in the way these two languages deal with dynamic linking. The examples show similarities and differences between the two languages in the treatment of compile-time constants, interface changes, and the resolution of field (and method) accesses. The paper shows that neither system can guarantee the integrity of distributed applications, so developers need to be aware of the employed dynamic linking process if they want to cope with "DLL hell".

Static Linking Zenger [27] presents the programming language Keris, an extension of Java with extensible modules as the basic units of software. The linking of modules is inferred, and systems can be extended by replacing selected submodules with compatible versions without re-linking the full system. Extensibility is unanticipated, type-safe, and non-invasive; i.e., the extension of a module preserves the original version and does not require access to source code.

Evolving Components and Services
– Evans et al. [8] describe the evolution of a distributed run-time system that is itself designed to support the evolution of the topology and implementation of an executing, distributed system. The paper presents the different versions of the architecture, discusses how each architecture addresses the problems of topological and functional evolution, explains the reasons for the evolution of the architecture, and discusses lessons learned in this process.
– Sora et al. [26] present a component model that supports the unanticipated customization of systems in a top-down, stepwise-refinement manner, while also composing the structure of components. This strategy makes very fine-tuned compositions possible even when the composition decision is made automatically, as is necessary for self-customizable systems. The presented component composition model is able to cope with evolving requirements and the discovery of new component types.
– Oriol [20] argues that application evolution at runtime is prevented by connections between different software entities and proposes a disconnected communication architecture based on three main concepts: extreme associative naming, late binding, and asynchrony of communications. Everything is a service. Service registration and invocation occur through a semantic description, and the choice of the service that best matches the description of the requested service is performed at the moment of invocation.
– Gorinsek [9] presents novel work in the field of component-based evolution of embedded systems. It addresses the necessary properties of a system supporting the dynamic updating of components on an embedded platform and proposes a new approach to designing such a system.
– Buckley [2] describes work towards the development of an environment in which software components can autonomously adapt their interfaces to each other at runtime. Such adaptation would allow components to adjust to unanticipated software evolution by enabling their interfaces to change in response to amendments in the service components they use and in the client components that they service.
– McGurren and Conroy [14] define the notion of a dynamic system, present a taxonomy of dynamic systems, and introduce three types of adaptation that may be used in the implementation of dynamic systems: instance replacement, service-level adaptation, and interface adaptation. The X-Adapt architecture is then proposed as a design that supports these three types of adaptation while combining them with integrity management.

Change Impact Analysis
– Neumann, Strembeck, and Zdun [18] present an approach that fosters software change by managing runtime-traceable dependencies of requirement specifications and test cases to the corresponding architectural elements and source-code fragments. In case of (unexpected) change requests it is easy to find the affected system parts, thus facilitating timely change propagation and regression testing.
– Mens, Mens, and Wermelinger [15] propose intentional software views as an intuitive and lightweight technique to model crosscutting concerns. This technique provides the formal basis for tools that help group together source-code entities that address the same concern. The technique also facilitates unanticipated software evolution by providing the ability to automatically verify the consistency of views and to detect the invalidation of important intentional relationships among views when the software evolves.
– Mens, Demeyer, and Janssens [17] address the need to formalise behaviour-preserving software evolution as a basis for dealing with refactoring. The paper introduces a graph representation of those aspects of the source code that should be preserved by a refactoring, and graph rewriting rules as a formal specification of the refactorings themselves. The authors show that it is feasible to reason about the effect of refactorings on object-oriented programs independently of the programming language being used, which is crucial for the next generation of refactoring tools.

Dynamic Object Models: Roles, Views, Delegation
– Markovic and Sochor [13] present a formal object model unifying wrapping, replacement, and roled-object techniques. In this model, objects with roles can dynamically change the interfaces they support. The model also comprises other techniques used to handle evolving objects.
– Sadou [25] presents an approach to the unanticipated evolution of distributed objects at run time. New client programs may add behavior to existing server objects, whereas old clients may continue to use the unadapted version of the server. The approach relies on a combination of the adapter pattern and delegation. A prototype version of the Adapter system is implemented as a set of Java libraries and does not require the addition of new language constructs.
– Anderson and Drossopoulou [1] present Delta, a first imperative calculus for object-based languages with delegation. Such languages (e.g., SELF) support exploratory programming through the composition of objects, the sharing of attributes, and the modification of objects' behaviour at run-time. Furthermore, delegation allows objects to delegate the execution of methods to other objects. These features allow the creation of very flexible programs that can accommodate changes in requirements at a very late stage.

Aspect-Oriented Approaches David and Ledoux [3] present an approach to dynamic adaptation to changing execution conditions based on distinguishing functional from non-functional concerns. These two kinds of concerns are composed together, at run-time, by a weaver that is aware of the execution conditions, so that it can adapt its weaving to their evolution.

JVM Extensions for USE Redmond and Cahill [23, 24] present the Iguana/J architecture, which supports unanticipated, non-invasive, dynamic modification in an interpreted language without extending the language, without a preprocessor, and without requiring the source code of the application.

Anticipating Requirements Noppen et al. [19] discuss the optimisation of software development policies for evolutionary system requirements based on a Software Evolution Analysis Model (SEAM). This is a probabilistic model of evolution requirements, which helps in anticipating future requirements.

Classification Gustavson et al. [10] present a classification of dynamic software evolution built on the distinction of two major facets, the technical facet and the motivation facet.

The above summaries cannot give more than a raw idea of the topics addressed in each paper. Furthermore, the classification above could be replaced by many others, depending on one's personal point of view. Therefore, interested readers are invited to consult the full papers. They can be downloaded from the USE 2002 web site, either selectively or as an archive containing the entire online proceedings [12].
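The dynamic-linking pitfall examined by both linking papers above can be reproduced in a few lines. The sketch below (our own example, in the spirit of those papers) shows Java's compile-time constant inlining: after Library is recompiled with new values and only its class file is redeployed, an unrecompiled client still sees the old constant, while the ordinary field is resolved correctly through dynamic linking.

```java
// Library.java -- version 1, compiled and shipped first.
class Library {
    static final int LIMIT = 10; // compile-time constant: inlined into clients
    static int current = 10;     // ordinary static field: resolved at run time
}

// Client.java -- compiled against Library version 1, never recompiled.
class Client {
    public static void main(String[] args) {
        // Suppose Library.java is later changed so both values are 20,
        // recompiled, and only Library.class is replaced on the class path.
        System.out.println(Library.LIMIT);   // still prints 10: javac baked
                                             // the constant into Client.class
        System.out.println(Library.current); // prints 20: looked up via
                                             // dynamic linking at run time
    }
}
```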
3. Working Group: Scenarios

The "Scenarios" working group consisted of Misha Dmitriev, Sophia Drossopoulou, Dominic Duggan, Susan Eisenbach, Robert Hirschfeld, Jens Gustavson, Günter Kniesel, Finbar McGurren, Yahya Mirza, Manuel Oriol, and Bernard Pagurek. Its goal was to collect real-life scenarios of unanticipated software evolution from the experience of the participants.
The intent of such a collection was to serve as a basis for evaluating the expressiveness of a possible taxonomy³ and as a set of "functional benchmarks" on which to compare different approaches to USE. Another possible use could be to identify the specific requirements of certain application areas. The group discussed the scenarios summarised in the following. For scenarios from already completed projects, the participants also reported on the approach towards USE taken in their project, the individual techniques that had been applied, and the reasons why these were sufficient for that scenario. For scenarios from new or planned projects, the discussion focused more on identifying the particular problems of USE that arise in the scenario and on the applicable techniques.

Network management An often encountered problem in network management is how to perform a consistent change of protocols on a server and its clients. The challenge is to perform the change as an atomic operation while the network is still up and running. Bernard Pagurek reported on a solution to this problem based on "software hot swapping". This technique was applicable because the protocols that had to be updated were stateless, so there were no complications related to state transfer between old and new object versions. A similar example is the move from IPv4 to IPv6.

Mobile Devices Presently, updating the operating software on mobile devices (e.g., handhelds) requires users to turn the device in to a dealer, after first backing up any personal data, configurations, preferences, etc. The opposite is highly desirable: remote software updates (at the user's site), without needing a reboot, without loss of data, and without loss of functionality. This is still a topic of ongoing research.

Independent Evolution of Platform and Services Another challenging scenario is the independent evolution of a distributed platform and of the services running on that platform. Updates to the platform must still support services written for previous versions of the platform. New and updated services must be able to run on platforms that were not prepared for that service update. All this must happen dynamically and be transparent to the user.

Change of Law Legislation is in permanent flux. Unanticipated changes in legislation can have a significant impact on existing products and services. For example, a new law requiring that all hearing-impaired people be supported by communication devices might require dynamic upgrades to the user interfaces of already delivered devices. The new user interface version would need to contain functionality that was not anticipated in its original design.

Computer Games Modern computer games provide a multitude of characters and scenarios (worlds) in which these characters interact. If new characters and scenarios can be loaded dynamically into a game, a "protocol discovery protocol" is needed that allows the behaviour of new types of beings to be determined. Existing character types must learn dynamically how to deal with the newly introduced ones.
³ Development of a taxonomy was the topic of another working group.
For instance, an existing "dog" character will have to "learn" that it should chase a new "cat" character.

Profiling Another example was the selective, dynamic profiling of applications, including library code, as reported in [4].

Bank Account Evolution A bank might conduct surveys of its customers' habits and decide to offer selected customers new types of accounts that are more attractive for that particular group. Similar unanticipated changes could be possible for credit-card users. The changes of account type would need to happen dynamically, too.

This list still needs to be completed and related systematically to the different existing USE approaches and to the USE taxonomy proposed by the next working group.
4. Working Group: Taxonomy

The "Taxonomy" working group consisted of Jim Buckley, Tom Mens, Awais Rashid, Salah Sadou, Stefan Van Baelen, and Matthias Zenger. Its aim was the development of a taxonomy based on characterizing the mechanisms of software evolution and the factors that impact upon these mechanisms. The goal of the taxonomy is to position concrete tools and techniques within the domain of software evolution, so that it becomes easier to compare and combine them, or to evaluate their potential use in a particular maintenance or change context. The group proposes a categorisation along the following four dimensions:
– properties of the change mechanism,
– properties of the change itself,
– system properties, and
– the change process.
4.1 Properties of the Change Mechanism

Time of change Depending on the programming language or the development environment being used, it is possible to distinguish between phases such as design time, compile time, load time, link time, and run time. Relative to these phases, one can determine four times of change: T1 = the time (interval) when a change is requested; T2 = the time (interval) when a change is prepared; T3 = the time (interval) when the change becomes available for execution; T4 = the time (interval) when the change is executed for the first time.

Parallelism Software evolution may be carried out sequentially or in parallel. With sequential software evolution, only one version of the software is available at any given time. With parallel evolution, many versions of the same system can co-exist for some time.
Within parallel evolution, one can further distinguish between convergent and divergent change. With convergent changes, two parallel versions can be merged together into a new unified version. With divergent change, different versions of the system co-exist indefinitely as part of the maintenance process.

Incrementality Closely related to versioning is the incrementality of the change. For incremental changes, an altered component can be incorporated into a system while the old (non-extended) version is preserved. This is the basis for versioning mechanisms, and it is required for systems in which old and new versions coexist simultaneously.

Degree of automation In software re-engineering, numerous attempts have been made to automate, or partially automate, software maintenance tasks. Typically, the tasks suited to automation are structural transformations of the software system.

4.2 Properties of the Change

Semantic or purely structural A distinction can be made between purely structural and semantic changes. While semantic changes have an impact on the behaviour of the software, structural changes aim to preserve the semantics. In contrast to many structural changes, semantic changes are very difficult to automate.

Addition, subtraction, modification Another distinction is between the addition, subtraction, and modification of an element of the software. These activities can be structural, in that files or modules can be rationalized, added to, or altered structurally. Likewise, the activities can be semantic, when functionality is added, removed, or altered.

Granularity of change Another distinguishing feature of a change is its granularity. This can range from very coarse granularity (such as the system, subsystem, and module levels) to extremely fine granularity (such as the variable, method, and statement levels).

Impact of change Related to granularity is the impact of the change. Sometimes seemingly local changes have a global impact because the change propagates through the rest of the code.

Change effort Another related issue is the change effort. In many cases, changes with a high change impact also require a significant effort to make. However, in some situations this can be overcome by automated tools. For example, source-code renaming is typically a global change with a high change impact, but the corresponding change effort is low because renaming can proceed in an automated way.

4.3 System Properties

Activeness A system can be passive (changes are driven externally) or active (the system can drive the changes itself). Typically, for a system to be active, it must contain monitors that record external and internal state. It must also contain some logic that allows self-change based on the information received from those monitors.
A system is passive if change must be driven by an external agent, typically through some sort of user interface.

Openness A software system can be open in the sense that it is specifically built to allow for dynamic, unanticipated evolution: unanticipated adaptations can be specified and incorporated at runtime. For example, a system might rely on plug-ins at runtime; while the plug-in modules may be unknown in advance, the ability to add them to the system at runtime is explicitly provided. Closed systems are those that have their complete functionality and adaptation logic fixed at build time. A system cannot, however, be open to every possible change. It is likely that systems will only be open to certain (unanticipated) changes.

4.4 Change Process

A final classification of software evolution can be based on the change process, which is part of the software development process and is typically imposed by the project manager.

Planning We can distinguish between planned and unplanned evolution, based on whether the changes to be applied are managed in a more or less coherent manner.

Control During or after a change, we can distinguish between controlled and uncontrolled evolution. Controlled evolution typically happens in a planned context, where the constraints in the change process are explicit and enforced. For example, versioning rules may be used to impose constraints on the evolution.

After the workshop, the "Taxonomy" group continued its discussion via the USE mailing list ([email protected]), with occasional contributions from other workshop participants. The results of this ongoing effort are being compiled into a technical report [16] that will provide a much more complete account of all the issues involved than is possible within the limited space available here. The summary in this section reflects a snapshot of the discussion at the beginning of September 2002.
5. Working Group: Language-Level Support for USE

This group consisted of Christopher Anderson, Misha Dmitriev, Huw Evans, Joris Gorinsek, Lubomir Markovic, Kim Mens, Ioana Sora, and Barry Redmond. The discussion addressed existing challenges and emerging techniques for the direct support of unanticipated evolution in languages and run-time infrastructures. It focused on the following issues:

Language annotations versus tool support Language annotations could be used to explicitly specify properties of a program that must be known at evolution time.
However, changing languages is difficult, and "non-standard" languages seldom find widespread acceptance. Therefore, tool support that helps programmers reason about evolution and about the related properties of a program might be a better alternative.

Object replacement Many problems are involved in designing a powerful and generally applicable mechanism for the dynamic replacement of objects or components: when should the change happen, how and by whom is it triggered, how is it performed, and at which granularity? When moving from the replacement of objects to the replacement of entire component instances, a transaction concept of sorts is needed to ensure the atomicity of the change. All this is even harder in a resource-restricted system.

Dynamic update The issue of transactions is also related to the issue of ensuring the consistency of a system after a dynamic update, possibly based on some constraint mechanism. If dynamic update is based on object replacement rather than on the addition of new objects and delegation, then state transfer between the object versions can become difficult. On the one hand, encapsulation might prevent the transfer of all relevant information; on the other hand, complex interactions can exist between multiple versions.

Dependency management Dependencies between objects need to be tracked. They can occur at many levels: invocation dependencies, structural dependencies, architectural dependencies, etc. Dependencies may need to work in both directions: if a change is made to A, then B must be checked, and vice versa. There are more open questions here than ideas about how to address them.

Performance Evolution can be an expensive operation, and building evolution support into a system can be at odds with performance. One possible way out is the approach pioneered by the dynamic language Self and successfully adopted for a commercial environment by the Java HotSpot virtual machine: perform extensive optimizations for performance but make the optimizations undoable for evolution.
6. Working Group: USE in the Software Life Cycle

Support for USE in languages, architectures, and infrastructures addresses the late phases of the software life-cycle (implementation and deployment). Tool support for static evolution additionally addresses the design phase. There is hardly any work on what support for unanticipated evolution might mean in the early phases (an exception is [19]). Therefore, this group explicitly addressed USE during the entire life-cycle, including requirement elicitation and analysis. The group consisted of Pascal Costanza, Thomas Ledoux, Joost Noppen, and Uwe Zdun. In the first place, support for USE is needed because changes to requirements and their consequences for software systems are, by definition, not known in advance. So talking about USE during requirement elicitation might sound like a contradiction, and it was indeed perceived as such by other workshop participants. However, there is no contradiction, as shown in the following.
6.1 No Silver Bullet Yet

For practitioners and researchers of USE techniques it is a sad but nevertheless true fact that no single approach to USE is capable of dealing gracefully with all possible types of evolution. A particular evolution can still be outside the scope of the language in which a system is programmed, of its run-time infrastructure, and of the adaptation patterns implemented in the system. Including as many evolution techniques as possible is no viable solution either, since this would increase the complexity of the system and lead to problems in performance, usability, size, cost-effectiveness, etc. In the end, there will always be changes that require extensive invasive modifications of an existing application.

6.2 Selection Models for Software Evolution Techniques

Since every technique for coping with evolution addresses a certain set of evolution problems, it should only be applied in the context in which it functions best. However, choosing the right technique at analysis and design time is non-trivial. It requires analysts and designers to be aware of existing USE techniques and to be able to assess their strengths and weaknesses with respect to certain evolution scenarios. Experience shows that awareness and skilful use of novel techniques spread slowly in the software engineering community. In general, the pace of innovation is faster than the pace of dissemination. Even experts in a certain domain sometimes have difficulties in keeping up with new developments.

So there is ample room for tool support in this area. Its intention is to help software engineers make an informed decision about proper evolution approaches without having to spend a significant share of their work researching and compiling the newest developments in the field of software evolution. In order to achieve such tool support, a better understanding of the software evolution process is needed. This understanding should be expressed by a model that can support the decision on which evolution techniques to consider, without committing to the occurrence of particular evolution scenarios. For the selection of the proper design and evolution mechanisms there is no clear-cut solution, but many different research directions can provide relevant feedback on this topic, for instance probabilistic models, market analysis, and domain models.

6.3 Summary

The essence of the discussion in this group can be summarized as follows:

– No single approach to USE is capable of dealing gracefully with all possible types of evolution. There will always be changes that require extensive modifications of an existing system.
– Being aware of this fact, one can still take advantage of the fact that some evolution scenarios are more likely to occur than others and that some approaches to USE are better suited for certain types of evolution than others.
– Techniques for improved upfront analysis and design thus do not compete with techniques for USE but rather complement them. One obvious way to combine both is to make analysts and designers aware of existing USE techniques and of their strengths and weaknesses with respect to certain evolution scenarios. Another one, advocated here, is to devise models of evolution that already include such knowledge and help programmers make an informed decision as to which evolution support techniques are most promising for a given application.

Unanticipated evolution can invalidate an existing product, even if several preparation techniques have been included. But by making a sound choice of extension mechanisms and adaptation techniques, as well as a good initial architecture design, it should be possible to make software systems more resilient to evolution.
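To make the idea of such a selection model slightly more tangible, here is a toy probabilistic sketch in Python. It is not a model from the workshop; the technique names, scenario names, and all numbers are invented. It merely shows how expected coverage of probability-weighted evolution scenarios could rank candidate USE techniques.

```python
# Estimated probabilities of evolution scenarios (invented numbers).
scenarios = {"new_ui_skin": 0.5, "schema_change": 0.3, "protocol_change": 0.2}

# How well each technique handles each scenario, on a 0..1 scale (invented).
suitability = {
    "wrappers":       {"new_ui_skin": 0.9, "schema_change": 0.2, "protocol_change": 0.4},
    "dynamic_update": {"new_ui_skin": 0.3, "schema_change": 0.6, "protocol_change": 0.8},
}

def expected_coverage(technique):
    """Probability-weighted suitability over all anticipated scenario types."""
    return sum(p * suitability[technique].get(s, 0.0)
               for s, p in scenarios.items())

for t in sorted(suitability, key=expected_coverage, reverse=True):
    print(t, round(expected_coverage(t), 2))
# wrappers 0.59, dynamic_update 0.49
```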
7. Conclusions

From individual discussions with participants, the organizers gained the impression that feelings about the workshop were split. On the one hand, many people appreciated the chance to meet and exchange ideas in a forum in which unanticipated software evolution was the primary topic. On the other hand, many participants were unhappy about not having enough time for discussion in breakout groups and for presentation of all submitted papers. Also, the 10 to 12 minutes of discussion per presented paper were occasionally perceived as too short. Obviously, the organizers had underestimated the high interest in unanticipated software evolution and also the particularly high need for discussion in a newly forming research community. Therefore the next USE workshop, organized in conjunction with ETAPS 2003, will last two days (http://joint.org/use/2003/).

The workshop showed that there is considerable work on USE in many academic and industrial research centers. The majority focuses on unanticipated run-time evolution, but approaches that address all other phases of the software life cycle are present too. The surprising diversity of proposed techniques shows that there is a strong need for forming a community of USE researchers, where groups with very different backgrounds can meet and learn from each other. Establishing a common understanding of basic notions, a catalogue of benchmark scenarios, techniques, and their classification are just the very first steps towards synergetic joint work.
8. Acknowledgements

In the first place we want to thank all the participants for making this a successful workshop. Big thanks are due also to all the members of the program committee for the time and expertise they put into their work. Many participants have expressed their gratitude for the very detailed and constructive reviewer comments they received. Last but not least, the organizers of ECOOP 2002 provided an excellent infrastructure for the USE workshop.
References

[1] Christopher Anderson and Sophia Drossopoulou. Delta - an imperative object-based calculus with delegation. In [12], 2002.
[2] Jim Buckley. Adaptive component interfaces. In [12], 2002.
[3] Pierre-Charles David and Thomas Ledoux. Dynamic adaptation of non-functional concerns. In [12], 2002.
[4] Mikhail Dmitriev. HotSwap technology application for advanced profiling. In [12], 2002.
[5] Sophia Drossopoulou and Susan Eisenbach. Manifestations of Java dynamic linking - an approximate understanding at source language level. In [12], 2002.
[6] Dominic Duggan and Zhaobin Wu. Adaptable objects for dynamic updating of software libraries. In [12], 2002.
[7] Susan Eisenbach, Chris Sadler, and Vladimir Jurisic. Feeling the way through DLL Hell. In [12], 2002.
[8] Huw Evans, Malcolm Atkinson, Margaret Brown, Julie Cargill, Murray Crease, Phil Draper, Steve Gray, and Richard Thomas. The pervasiveness of evolution in GRUMPS software. In [12], 2002.
[9] Joris Gorinsek. Empres: Component-based evolution for embedded systems. In [12], 2002.
[10] Jens Gustavson and Uwe Assmann. A classification of runtime software changes. In [12], 2002.
[11] Robert Hirschfeld, Matthias Wagner, and Kris Gybels. Assisting system evolution: A Smalltalk retrospective. In [12], 2002.
[12] Günter Kniesel, Pascal Costanza, and Mikhail Dmitriev (eds). Online proceedings of USE 2002 – First International Workshop on Unanticipated Software Evolution, June 2002. http://joint.org/use/2002/sub/.
[13] Lubomir Markovic and Jiri Sochor. Object model unifying wrapping, replacement and roled-objects techniques. In [12], 2002.
[14] Finnbar McGurren and Damien Conroy. X-Adapt: An architecture for dynamic systems. In [12], 2002.
[15] Kim Mens, Tom Mens, and Michel Wermelinger. Supporting unanticipated software evolution through intentional software views. In [12], 2002.
[16] Tom Mens, Jim Buckley, and other authors pending. Towards a software evolution taxonomy. Technical report, Programming Technology Lab, Vrije Universiteit Brussel, 2002 (in preparation).
[17] Tom Mens, Serge Demeyer, and Dirk Janssens. Formalising behaviour preserving software evolution. In [12], 2002.
[18] Gustaf Neumann, Mark Strembeck, and Uwe Zdun. Using runtime introspectible metadata to integrate requirement traces and design traces in software components. In [12], 2002.
[19] Joost Noppen, Bedir Tekinerdogan, Mehmet Aksit, Maurice Glandrup, and Victor Nicola. Optimising software development policies for evolutionary system requirements. In [12], 2002.
[20] Manuel Oriol. Evolution of code through asynchronous services. In [12], 2002.
[21] Paul Pazandak. ProbeMeister: Distributed runtime software instrumentation. In [12], 2002.
[22] Awais Rashid. Aspect-oriented schema evolution in object databases: A comparative case study. In [12], 2002.
[23] Barry Redmond and Vinny Cahill. Supporting unanticipated dynamic adaptation of application behaviour. In Object-Oriented Programming – Proceedings of ECOOP 2002, volume 2374 of LNCS. Springer, 2002.
[24] Barry Redmond and Vinny Cahill. Supporting unanticipated dynamic adaptation of application behaviour. In [12], 2002.
[25] Salah Sadou and Hafedh Mili. Unanticipated evolution for distributed applications. In [12], 2002.
[26] Ioana Sora, Nico Janssens, Pierre Verbaeten, and Yolande Berbers. A component composition model to support unanticipated customization of systems. In [12], 2002.
[27] Matthias Zenger. Evolving software with extensible modules. In [12], 2002.
Composition Languages

Markus Lumpe1, Jean-Guy Schneider2, Bastiaan Schönhage3, and Thomas Genssler4

1 Iowa State University, USA ([email protected])
2 Swinburne University of Technology, Australia ([email protected])
3 Object Technology International, The Netherlands (Bastiaan [email protected])
4 Forschungszentrum Informatik Karlsruhe, Germany ([email protected])
Abstract. This report gives an overview of the Second International Workshop on Composition Languages (WCL 2002). It explains the motivation for a second workshop on this topic and summarizes the presentations and discussions.
1 Introduction
Workshop Goals

A component-based software engineering approach mainly consists of two development steps: (i) the specification and implementation of components and (ii) the composition of components into composites or applications. Currently, there is considerable experience in component technology, and many resources are spent on the first step, which has resulted in the definition of component models and components such as CORBA, COM, JavaBeans, and more recently Enterprise JavaBeans (EJB), the CORBA Component Model (CCM), and .NET. However, much less effort is spent on investigating appropriate techniques that allow application developers to express applications flexibly as compositions of components. Existing composition environments mainly focus on special application domains and offer at best rudimentary support for the integration of components that were built in a system other than the actual deployment environment.

The goal of this workshop was to bring together researchers and practitioners in the area of component-based software development in order to address problems concerning the design and implementation of composition languages and to develop a common understanding of the corresponding concepts. We also wanted to determine the strengths and weaknesses of composition languages and compare them with similar approaches in related fields. In this workshop, we intended to continue the fruitful discussions started at the first Workshop on Composition Languages (WCL 2001), which was held in conjunction with ESEC/FSE 2001 in Vienna.
This briefly summarizes the context in which the goals of this workshop were originally defined. More specifically, we wanted to emphasize important issues of (i) the design and implementation of higher-level languages for component-based software development, (ii) approaches that combine architectural description, component configuration, and component composition, (iii) paradigms for the specification of reusable software assets, (iv) expressing applications as compositions of software components, and (v) the derivation of working systems using composition languages and components.

Submission and Participation

In the call-for-submissions, authors were encouraged to address any aspect of the design and implementation of composition languages in their position statements. Furthermore, we were particularly interested in submissions addressing formal aspects of the issues mentioned in the previous section as well as case studies of using composition languages for real-world applications. The following list of suggested topics was included in the original call-for-submissions:

• Higher-level abstractions for composition languages,
• Programming paradigms for software composition,
• Support for the specification of software architectures,
• Implementation techniques for composition languages,
• Scalability and extensibility of the language abstractions,
• Analysis of runtime efficiency of compositional abstractions,
• Formal semantics of composition languages,
• Type systems for composition languages,
• Domain-specific versus general-purpose composition languages,
• Design and implementation strategies for cross-platform development,
• Interoperability support,
• Compositional reasoning,
• Case studies of composition language design,
• Case studies of system development using composition languages,
• Tool support for composition languages,
• Taxonomy of composition languages.
All workshop submissions were evaluated in a formal peer-reviewing process. Review was undertaken on full papers by at least two members of the workshop organizing committee and selection was based on their quality and originality. Six papers were accepted for presentation at the workshop and publication in the subsequent proceedings.
2 Presentations
The morning of the workshop was dedicated to two presentation sessions with three presentations each. The first session covered case studies of composition
language design, whereas the second session was dedicated to compositional reasoning. This section briefly summarizes the main issues of the presentations given.

The first presentation of the workshop was given by Giacomo Piccinelli (a joint submission with Scott Lane Williams). He presented some of the work done in the context of the Dynamic Service Composition (DySCo) project, in particular the usage of workflow systems for the dynamic composition of Web services. In his presentation he identified two main problem domains that have been targeted in the DySCo project: internal business processes and B2B processes. The key idea to formally represent the dynamic composition of Web services is the use of a process algebra. The proposed process algebra is based on CCS and serves as a compact formal model that is implemented by means of workflow systems. Using a process algebra, the problem of the composition of Web services can be translated into a problem of interacting processes, which in turn enables reasoning about composition. He concluded his presentation with an analysis of the expressive power of non-CCS-based approaches and raised the question of whether the DySCo approach can be applied to other problem domains as well.

The second talk in this session was given by Michael Winter (representing the PECOS group), who presented CoCo, a composition language developed for a special class of embedded systems called field devices. He introduced the context of the PECOS project in which the work was performed, presented some of the problems related to software development for embedded devices, and outlined the resulting component model of the CoCo language. Within the project, CoCo is used for the specification of (i) components, (ii) compositions of components, and (iii) architectural styles and system families. Furthermore, the CoCo language is used for the generation of code (mainly C++ and Java) and scheduling information, as well as for compositional reasoning. CoCo and its underlying composition model were further illustrated using one of the PECOS case studies. He concluded his presentation by raising some open questions for discussion, in particular related to the specification and composition of non-functional properties.

The last presentation of the first session was given by Markus Lumpe, who presented his work on using meta-data to compose components in the context of the .NET framework. He focused on the usage of so-called custom attributes for specifying both functional and non-functional properties of components. Possibilities to provide additional information for each element of a component-based application in a language-neutral way were introduced. He illustrated some of the underlying concepts on a small example, discussed the considerable run-time overhead that can arise if custom attributes are not used carefully, and pointed out problems that cannot be solved using custom attributes alone. The presentation was concluded by giving directions for future research, such as extensions to the current attribute mechanism for dynamic file import as well as for compiler and property editor configuration.
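The custom-attribute idea, declarative metadata that a composition tool inspects instead of reading code, can be illustrated outside .NET as well. The following Python sketch uses a decorator as a rough analogue; the attribute keys and the FileLogger component are hypothetical and not taken from the presentation.

```python
def component_attribute(**metadata):
    """Attach composition metadata to a class, roughly analogous to a
    .NET custom attribute (this decorator itself is hypothetical)."""
    def attach(target):
        merged = dict(getattr(target, "__composition_metadata__", {}))
        merged.update(metadata)
        target.__composition_metadata__ = merged
        return target
    return attach

@component_attribute(provides="Logging",   # functional property
                     thread_safe=True)     # non-functional property
class FileLogger:
    def log(self, message):
        print(message)

# A composition environment can reason over the metadata alone:
print(FileLogger.__composition_metadata__)
# {'provides': 'Logging', 'thread_safe': True}
```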
Common to all presentations given in the case-studies session was the observation that specifying non-functional properties of components is an important issue and that appropriate abstractions need to be provided in a composition language. The same applies to support for reasoning about properties of composed systems given the properties of the individual building blocks. This implies that the semantics of the underlying composition model has to be defined using appropriate formalisms.

The first presentation of the second session was given by Johann Oberleitner (a joint submission with Thomas Gschwind) and was dedicated to the discussion of so-called architectural composition languages (ACLs). He motivated their work by stating that neither architectural description languages nor composition languages fulfill the needs of flexible component-based application development. By combining these two types of languages (resulting in an architectural composition language), however, he claimed that these needs can be satisfied in a more appropriate way. In particular, an ACL will allow for an explicit description of software architecture while still permitting the construction of executable applications and new components. Further requirements for such an architectural composition language were discussed, and a prototype implementation was presented. He concluded his presentation with a summary of related work as well as plans for further developing the prototype implementation.

The second presentation in the reasoning session was given by Luca Pazzi, who presented composition in the context of state-charts. The main motivation of his work was that current practice does not clearly distinguish between implicit and explicit composition mechanisms (or attitudes). He illustrated his point of view by claiming that implicit composition is used to model associative knowledge without additional entities, whereas explicit composition requires additional entities to accomplish the same. To illustrate the different notions of composition, he presented a railway gate control system that was modeled using state-charts. He showed that formerly independent state-charts can be composed implicitly to model the desired behavior. However, implicit composition is not powerful enough to enable an external characterization of the system. Here, another approach is needed. He proposed the use of so-called "whole state-charts", which provide additional information in order to explicitly characterize the behavior of the composite system. He concluded his presentation with a brief illustration of state propagation and stated the proposition that "explicit is better!"

The final talk of the workshop was given by Sven Lämmermann (a joint submission with Enn Tyugu), who presented a composition language with precise logical semantics. The main thesis of this presentation was that logic can support compositional programming. In fact, the motivation for their work was the application of logic-based program synthesis in the context of object-oriented compositional programming. The presented approach is based on a surprisingly simple language, which allows an equational specification of composition. The proposed language derives its expressive power from the translation of the language elements into a set of logical formulae, which in turn constitute specific axioms of a theory in which the goal of an actual composition can be verified. In
fact, the presented system is intended to support the notion of "correct compositions". Sven Lämmermann stated that if the resulting program matches all given requirements, their approach also guarantees the correctness of the composition.

The main focus of the second session was on the correctness of composition. Even though the problem was addressed in very different settings, the goal was the same: reliable composition mechanisms. All proposed solutions were based on prototype implementations. Unfortunately, these implementations have only been used by a small number of people, so reliable information about whether these solutions can be used in practice was not available. However, the notion of correctness, and especially the notion of "correctness by construction", caught much attention in the following discussion sessions.
3 Discussion Sessions
The afternoon of the workshop was dedicated to discussing various issues that were brought forward during the morning presentations. At the beginning of the afternoon, an initial list of questions and discussion topics for the two discussion sessions was elaborated. This list contained the following items:

1. What are we composing? What is composition? Can we give a precise definition of the used terminology?
2. What is the difference between a service and a component? What is a Web service?
3. How can we express correctness of composition?
4. What is the conceptual usage of components? Do we need different naming schemes?
5. What kind of meta-information do we really need for composition?
6. What kind of support is needed to enable the composition of non-functional properties?
7. What kind of composition languages is industry calling for?
8. Ontology.

These questions and topics were subsequently subdivided into three categories:

• What do we want to compose (topics 1 and 2)?
• How do we compose (topics 3 to 6)?
• Why do we want to compose (topics 7 and 8)?

The remaining part of the afternoon was then mainly dedicated to discussing the first two categories (i.e., what do we want to compose and how do we compose) and the corresponding questions. Due to time constraints, we were not able to address the third category (i.e., why do we want to compose) at all.

The first part of the discussion was about the entities to be composed, in particular about the notions of service and component. Various suggestions were made as to what services and components are and how they should be interpreted. The discussion also revealed that there was quite some confusion amongst the
participants about when composition occurs: do we compose at compile-time (or even before then) or at run-time?

This part of the discussion was initiated by the question whether a service and a component are the same or whether they are different. One participant came up with the notion of a service being a generalization of a component, i.e., Service = Component + Context + Interpretation. This definition was contrasted by another participant who claimed that a component is a generalization of a service, i.e., Component = Service + Context. Other participants suggested that a service is a stateless abstraction which can be used for state transitions "outside" the service (i.e., by other services and components). We concluded the discussion with the agreement that a service in the context of composition is best characterized as an abstraction very similar to a (stateless) function in a functional programming language. Furthermore, we defined the term (software) component for the remainder of the workshop as an abstraction that

• is a self-contained unit of composition,
• is built to be composed (i.e., is an element of a component framework),
• offers a collection of services and requires a collection of (other) services in order to be functional, and
• is an (abstract) entity that needs adaptation in order to be (re-)used.

A code sketch following the next list illustrates these working definitions.

We then shifted our attention to the question of what a composition language is. Everybody agreed that composition is in essence the explicit description of connections between components by means of connectors. It was argued that a composition language should allow us to define the semantics of connectors and provide means to reason about composed systems using information provided by components, their relationships, as well as additional constraints. This, however, does not allow for a precise definition of the notion of a composition language. To solve this problem, we listed features and properties of such a language and subsequently removed the non-essential ones. According to the final list, a composition language

• must incorporate components and connectors as first-class entities,
• should be a declarative language, although under special circumstances imperative composition must still be possible,
• acts as an integration language (i.e., offers facilities to incorporate abstractions from the "outside world"),
• allows for reasoning about a composed system and its constituent parts,
• supports both the notion of "correctness-by-construction" and "correctness-by-composition" (in the latter case, the weaker notion of partial correctness is considered sufficient), and
• must be usable by application developers.
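As a purely illustrative rendering of these working definitions in Python (the component names and services are invented): a component offers and requires services, and a connector explicitly wires a required service to a provided one.

```python
class Component:
    """A self-contained unit of composition with provided/required services."""
    def __init__(self, name, provides, requires):
        self.name = name
        self.provides = set(provides)   # services offered to others
        self.requires = set(requires)   # services needed to be functional

def connect(client, supplier):
    """A connector: satisfies some of the client's required services."""
    satisfied = client.requires & supplier.provides
    if not satisfied:
        raise ValueError(f"{supplier.name} provides nothing {client.name} needs")
    client.requires -= satisfied
    return satisfied

gui = Component("GUI", provides={"render"}, requires={"persistence", "auth"})
db  = Component("DB",  provides={"persistence"}, requires=set())
connect(gui, db)
print(gui.requires)   # {'auth'}: still unsatisfied, so the composite is
                      # not yet functional in the sense defined above
```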
The discussion about this characterization concluded with the remark that composition languages are domain-specific languages and that a universal composition language probably does not exist (this still needs to be proven, though!).

The focus of the discussion then shifted towards the notion of correctness, i.e., when do we consider a composition of components to be correct. Two different notions of correctness have to be considered in this context:

• does a component contribute the "right" services towards a system to be developed, and
• does the composed system fulfill the expected requirements.

It was argued that both situations have to be considered separately, but that some form of meta-data is needed to decide whether compositions are correct. In the case of composed systems, the notion of partial correctness is important, as the lack of appropriate meta-data may not allow us to fully verify the correctness of an entire system, but only of parts thereof.

As mentioned above, we were unfortunately not able to address any issues related to the third topic category (i.e., why do we want to compose). Furthermore, additional questions were raised during the general discussion but could not be discussed in detail. The following list contains some of these questions.

• Does a composition language need to be computationally complete (i.e., is a composition language nothing else than a "glorified" programming language)?
• Should a composed system be fully specified in a composition language, or is it desirable that parts thereof are specified in a different environment?
• What is the main purpose of a composition language: to build applications out of predefined components, or to define (larger-scale) components as compositions of (smaller-scale) components?
• Which notion of correctness is most appropriate in the context of software composition?

Summing up the afternoon sessions, it can be noted that a very lively discussion was going on and that various (sometimes contradictory) points of view were brought forward. Once again it has become apparent that the terms and concepts related to component-based software technology in general, and composition languages in particular, have not reached a state of maturity and that further discussions are needed for clarification. We believe, however, that this workshop was a further step in this direction.
4 Conclusions
In conclusion, we are able to state that the second Workshop on Composition Languages (WCL 2002) was quite a successful event that was appreciated by
all participants. As this report reveals, not all of the initial goals were met, but most importantly we were able to continue the fruitful discussions of the first Workshop on Composition Languages (WCL 2001) and to reach consensus on some important issues. To address the issues we were not able to deal with during the workshop, it is necessary to further enhance and encourage research collaborations between various disciplines (such as component models and frameworks, software architectures, concepts of programming languages, etc.) and to bridge the gaps between these disciplines. Therefore, it is very likely that further events especially dedicated to the topic of composition languages will be organized in the near future.

Acknowledgments

We would like to thank all workshop attendees for their insights and useful comments as well as the local ECOOP organizers for their support.
Organizing Committee

• Markus Lumpe, Department of Computer Science, Iowa State University, 113 Atanasoff Hall, Ames, IA 50011-1041, USA. Tel: +1 515 294 2410, Fax: +1 515 294 0256, Email: [email protected]
• Jean-Guy Schneider, School of Information Technology, Swinburne University of Technology, P.O. Box 218, Hawthorn, VIC 3122, Australia. Tel: +61 3 9214 8189, Fax: +61 3 9819 0823, Email: [email protected]
• Bastiaan Schönhage, Object Technology International, Burgemeester Haspelslaan 131, 1181 NC Amsterdam, The Netherlands. Tel: +31 20 545 6140, Fax: +31 20 643 9174, Email: Bastiaan [email protected]
• Thomas Genssler, Forschungszentrum Informatik, University Karlsruhe, Haid-und-Neu-Strasse 10-14, D-76131 Karlsruhe, Germany. Tel: +49 721 965 4620, Fax: +49 721 965 4621, Email: [email protected]
Accepted Papers

• Giacomo Piccinelli (Hewlett-Packard Laboratories Bristol, United Kingdom) and Scott Lane Williams (Hewlett-Packard Software & Solutions, Cupertino, USA): Workflow: A Language for Composing Web Services.
• Michael Winter, Thomas Genssler, Alexander Christoph (Forschungszentrum Informatik Karlsruhe, Germany), Oscar Nierstrasz, Stéphane Ducasse, Roel Wuyts, Gabriela Arévalo (University of Bern, Switzerland), Peter Müller, Chris Stich (ABB Research Center, Germany), and Bastiaan Schönhage (Object Technology International Amsterdam, The Netherlands): Components for Embedded Software – The PECOS Approach.
• Markus Lumpe (Iowa State University, Ames, USA): On the Representation and Use of Metadata.
• Johann Oberleitner and Thomas Gschwind (Technical University Vienna, Austria): Requirements for an Architectural Composition Language.
• Luca Pazzi (University of Modena and Reggio Emilia, Italy): Part-Whole Statecharts, or How I Learned to Turn the Explicit Attitude of Composition Languages on My Side.
• Sven Lämmermann (KTH/IMIT, Sweden) and Enn Tyugu (Institute of Cybernetics, Tallinn, Estonia): A Composition Language with Precise Logical Semantics.
List of Participants

Marie Beurton-Aimar, University Bordeaux, France ([email protected])
Rainer van den Born, Object Technology International Amsterdam, The Netherlands (rainer van den [email protected])
Ludger Fiege, Technical University Darmstadt, Germany ([email protected])
Thomas Genssler, Forschungszentrum Informatik Karlsruhe, Germany ([email protected])
Sven Lämmermann, KTH Stockholm, Sweden ([email protected])
Angeles Lorenzo Sanches, Universidad Politécnica de Valencia, Spain ([email protected])
Markus Lumpe, Iowa State University, USA ([email protected])
Johann Oberleitner, Technical University Vienna, Austria ([email protected])
Luca Pazzi, University of Modena, Italy ([email protected])
Thomas Peeneweich, Katholieke Universiteit Leuven, Belgium ([email protected])
Jenifer Perez Benedi, Universidad Politécnica de Valencia, Spain ([email protected])
Giacomo Piccinelli, Hewlett-Packard Laboratories Bristol, United Kingdom (giacomo [email protected])
Jean-Guy Schneider, Swinburne University of Technology, Australia ([email protected])
Bastiaan Schönhage, Object Technology International Amsterdam, The Netherlands (Bastiaan [email protected])
Judith Stafford, Carnegie Mellon University, USA ([email protected])
Michael Winter, Forschungszentrum Informatik Karlsruhe, Germany ([email protected])
Jia Yu, Chinese Academy of Science Beijing, China (jia [email protected])

Related Links

• The workshop web-site: http://www.cs.iastate.edu/~lumpe/WCL2002/
• WCL 2001 workshop web-site: http://www.cs.iastate.edu/~lumpe/WCL2001/
• The Piccola language page: http://www.iam.unibe.ch/~scg/Research/Piccola/
• The PErvasive COmponent Systems home page: http://www.pecos-project.org/
The Inheritance Workshop

Gabriela Arévalo1, Andrew Black2, Yania Crespo3, Michel Dao4, Erik Ernst5, Peter Grogono6, Marianne Huchard7, and Markku Sakkinen8

1 Software Composition Group, U. Bern, Neubrückstrasse 10, 3012 Bern, Switzerland ([email protected])
2 OGI School of Sci. & Eng., Oregon Health & Sci. Univ., Beaverton, Oregon, USA ([email protected])
3 Dpto. Informática, U. de Valladolid, Campus M. Delibes, Valladolid 47011, Spain ([email protected])
4 France Télécom R&D DAC/OAT, 92794 Issy Moulineaux Cedex 9 ([email protected])
5 Department of Computer Science, University of Aarhus, Denmark ([email protected])
6 Dept. of Computer Science, Concordia University, Montreal, Canada H3G 1M8 ([email protected])
7 LIRMM (CNRS-University Montpellier 2), 161, rue Ada, 34392 Montpellier Cedex 5 ([email protected])
8 Information Technology Research Institute, University of Jyväskylä, Finland ([email protected])
1 Introduction
The Inheritance Workshop at ECOOP 2002, which took place on Tuesday, 11 June, was the first ECOOP workshop focusing on inheritance after the successful workshops in 1991 [41] and 1992 [48]. The workshop was intended as a forum for designers and implementers of object-oriented languages, and for software developers with an interest in inheritance. It was organized by Andrew P. Black, Erik Ernst, Peter Grogono, and Markku Sakkinen.

Because of the size and diversity of the field, it is hard to come up with a litmus test for "object orientation", but one of the most widely accepted ingredients is inheritance. Indeed, in his 1987 characterization of the language design space [58], Wegner made inheritance one of the two defining characteristics of object-orientation. Nevertheless, inheritance remains an active research area, because of problems like fragile base classes, the so-called inheritance anomaly, and the lack of encapsulation between a class and its subclasses. We believe this abundant activity demonstrates that inheritance is both hard to avoid and hard to get right. The goal of this workshop was to advance the state of the art in the design of inheritance mechanisms and the judicious use of inheritance.

The number of submissions confirmed the interest in this topic. We accepted 15 short position papers, written by a total of 28 authors from 11 different countries. We had particularly solicited reports from practitioners, but received contributions only from researchers. However, they represent so many different
approaches and viewpoints that the workshop became a valuable forum for cross-fertilization of ideas. The papers can be roughly classified as follows: language design and language constructs [24, 27, 40, 49, 50, 56]; analysis and manipulation of inheritance hierarchies [1, 19, 22, 29]; generalization in UML models [42]; language usage [5]; role models [51]; metaprogramming [16]; partial evaluation [7].

The submitted papers were reviewed by the workshop organizers, although not formally refereed, and the accepted papers published in the workshop proceedings [6] were revised by the authors in the light of these reviews, with a length limit of 7 pages. As real workshop papers, they are mostly less complete and finished than conference papers would be, but we believe that they compensate for this lack of polish by providing access to fresh ideas and ongoing work. We found that every paper had some interesting ideas, and we thank all authors for their contributions.

In addition to these papers, we were happy to have Gilad Bracha (Sun Java Software) as an invited speaker. His talk was entitled "Mixins in Strongtalk" [2]. It was not possible to publish the paper in the proceedings, but copies were available at the workshop.

The website of the workshop is still accessible: http://www.cs.auc.dk/~eernst/inhws/. Both the papers from the proceedings and the invited paper are available there, or directly at: http://www.cs.jyu.fi/~sakkinen/inhws/papers/.

According to the list that was collected at the workshop, there were 27 persons present, 15 of whom were authors of workshop papers. The attendees came from 10 different countries, the largest attendance (5) coming from France. The authors of 4 accepted papers were not able to attend the workshop. Only 9 papers were selected for oral presentation at the workshop, in order to have more time for discussion. After each paper, another workshop participant presented a short comment prepared in advance. These presentations took up the first half of the day.

The afternoon sessions started with the invited talk. After that, we spent about two and a half hours in group discussions in three breakout groups. We came together again for a final one-hour plenary session in which the groups tried to summarize their findings. As so often happens, the day appeared to be too short for all the topics that we would have liked to discuss. There was a common feeling that an inheritance-related workshop would be welcome also at some future ECOOP, perhaps as soon as 2003 if there are active organizers. We felt that even a somewhat more restrictive topic could attract sufficient participation. There is a mailing list that can be used for such suggestions: http://majordomo.cc.jyu.fi/mailman/listinfo/inheritance-ecoop.

The rest of this report is divided into two parts, namely Sect. 2, which describes the outcome of the discussions in the hierarchy manipulation subworkshop, and Sect. 3, which describes the outcome of the discussions in the mixins
subworkshop. The third group discussed dynamism, but did not produce written results for this report.
2 Hierarchy Manipulation
An object-oriented program is typically organized as a hierarchy of classes. Structurally, the hierarchy may be a tree, a forest, or a directed acyclic graph. Semantically, the hierarchy may be concerned with:

– Specialization: the class hierarchy is guided by a classification of concepts of the application domain (close to an ontology);
– Subtyping: in a type hierarchy, a type T1 is a subtype of T2 if an object of T1 is always substitutable for an object of T2 without type errors or violations of other semantic constraints (based on assertions, exceptions, etc.);
– Economy of development: inheritance is used to reduce code or structure duplication.

These categories may overlap or be in conflict with one another (a code sketch below illustrates such a conflict). Our discussion includes all of these kinds of hierarchy. At any stage in the software process, the developers may discover that the class hierarchy is inappropriate and should be changed. We refer to such changes as hierarchy manipulation, and they are the subject of this report. We describe some possible reasons for manipulating hierarchies, some contexts in which the need to manipulate arises, the relation between hierarchy manipulation and refactoring, and finally some specific problems in hierarchy manipulation.
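The conflict between economy of development and subtyping can be shown with a small, standard Python example (the classes are illustrative and not taken from any workshop paper): reusing Rectangle's code in Square is economical, but the subclass is no longer substitutable.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h
    def set_width(self, w):
        self.w = w
    def area(self):
        return self.w * self.h

class Square(Rectangle):                    # economy: reuses Rectangle's code
    def __init__(self, side):
        super().__init__(side, side)
    def set_width(self, w):
        self.w = self.h = w                 # keeps the square invariant...

def stretch(r):
    r.set_width(r.w * 2)                    # ...but breaks this client's
    return r.area()                         # expectation that only w changes

print(stretch(Rectangle(2, 3)))  # 12, as a Rectangle client expects
print(stretch(Square(3)))        # 36, not the 18 substitutability would demand
```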
2.1 Why Do We Manipulate Hierarchies?
We do not manipulate hierarchies just for fun, but with a given objective. The objective may be to improve the way information is structured by providing better factorization or better decomposition [29, 22], or it may be to conform to some programming language constraint, such as the transformation from multiple to single inheritance [19, 46]. There may be other objectives.

The reasons for manipulating hierarchies (as a specialization of manipulating software) can be placed into five main categories [17]. The first four normally occur after development, but the fifth occurs during development:

– evolution: supporting changes in requirements
– reuse: adapting for reuse purposes
– maintenance: making corrections
– qualification: looking for good characteristics
– incremental refactoring: modification of the hierarchy during development
New requirements or modifications of existing requirements may be functional or non-functional. Non-functional requirements such as "improving efficiency" can lead to hierarchy manipulation [55, page 99], which can be categorized as refactoring (see Section 2.3). Satisfying new functional requirements may often force deeper changes, but previous refactoring may better prepare the hierarchy for such changes.

The need to manipulate class hierarchies arises in several contexts:

1. Analysis reveals that the hierarchy is deficient in some respect. For example, classes might be redundant, or classes that should be present are not present.
2. A design review shows that classes are too tightly coupled, not cohesive, or have too few or too many methods.
3. The hierarchy is hard to understand and use, due to a non-rational construction — for example, it might be the result of several different development styles.
4. An expert may find that the constructed hierarchy does not match a natural specialization of the application domain.
5. Refactoring often involves changes to the class hierarchy.
6. When a hierarchy has to be extended or reused, it may be necessary to add generalization classes in order to correctly insert new concepts. In the worst case, the hierarchy may have to be entirely reconstructed to benefit from a systematic construction (a process similar to reverse engineering).
7. A hierarchy developed during design may have to be manipulated to match restrictions in the implementation language. For example, a multiple inheritance hierarchy must be mapped to classes with single inheritance and interfaces for a Java implementation [19, 46].
Context        1 2 3 4 5 6 7
evolution      • •
reuse          • •
maintenance    • •
qualification  • • •
incremental    • •

Table 1. The relationship between categories and contexts of hierarchy manipulation
Table 1 shows the relationship between the categories and contexts that we have identified. Clearly, there is considerable overlap between these contexts; in fact, typical situations will involve a blend of several of them. The need to modify the hierarchy may occur more than once during development. Hierarchy analysis could be manual or automatic, but we are particularly interested in automatic analysis using, for example, concept lattices (see Section 2.2 below) or metrics (e.g., [15]). Crespo’s classification [17] of the “Method”
of software (hierarchy) manipulation asks: how does the manipulation start, by inference or by demand? Inference means what we are calling here "automatic analysis", and demand means "manual analysis". There are other automatic analysis techniques (or inference methods), such as program slicing [54], algorithms based on heuristics [14, 39], and algorithms detecting violations of predefined rules (e.g., the Law of Demeter [35]).

Among metrics, coupling and cohesion have been intensively studied, but they do not cover all specific aspects of the quality of class hierarchies. Well-known metrics directly connected to inheritance hierarchy measurement include NMO/NMI (Number of Overridden/Inherited Methods), SIX (Specialization Index) [36], PII (Pure Inheritance Index) [38], and MIF/AIF (Method/Attribute Inheritance Factor) [11]. But these metrics do not address issues such as property redundancy measurement or quality of method specialization, as investigated in [20].
2.2 Formal Concept Analysis
Formal concept analysis (FCA) [4, 3, 28] has several applications in the domain of object-oriented software analysis and development:

– ownership-based [30, 23, 59, 32]: concept analysis is based on the relation that associates a class with a property (attribute/method) it owns (mainly declares or inherits)
– behaviour-based [1, 43, 44]: the relation now links a pair (class, selector) to a composite property like "call mode (via self vs. via super)", concrete vs. abstract implementation, etc.
– usage-based [52]: a variable is associated with a property (attribute/method) if the variable accesses the property
– orthogonal-variability based [44]: analyzing frameworks to improve design, obtaining orthogonal dimensions of variability (hot spots)
– object-reference based [31]: improving class associations by analyzing object references
– combined ownership-based and usage-based [43]
– combined ownership-based and object-reference based [31]
– other applications [1, 57], not necessarily related to hierarchy manipulation
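As a small illustration of the ownership-based variant, the following Python sketch enumerates the formal concepts of a tiny, invented class/property context. Each concept pairs a maximal set of classes (the extent) with the properties they all own (the intent); intents shared by several classes are candidates for factorization into a common superclass.

```python
from itertools import chain, combinations

# Invented ownership relation: class -> properties it declares or inherits.
owns = {
    "Window":       {"draw", "move"},
    "TextWindow":   {"draw", "move", "edit"},
    "BorderWindow": {"draw", "move", "border"},
}
all_properties = set(chain.from_iterable(owns.values()))

def intent(classes):
    """Properties common to all the given classes."""
    return set.intersection(*(owns[c] for c in classes)) if classes else set(all_properties)

def extent(props):
    """Classes owning all the given properties."""
    return {c for c, ps in owns.items() if props <= ps}

# A formal concept is a closed (extent, intent) pair under the Galois connection.
concepts = {(frozenset(extent(intent(set(cs)))), frozenset(intent(set(cs))))
            for r in range(len(owns) + 1)
            for cs in combinations(owns, r)}

for ext, itt in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(ext), "->", sorted(itt))
# The concept with extent {Window, TextWindow, BorderWindow} and intent
# {draw, move} suggests factoring draw/move into a shared superclass.
```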
2.3 Hierarchy Manipulation and Refactoring
The word "refactoring" was first used by Opdyke [39], who defined refactoring as a kind of semantics-preserving program transformation that raises program editing to a higher level and is not dependent on the semantics of a program. An alternative definition by Koni-N'Sapu [34] says: "Refactoring consists of changing a software system in such a way that it does not alter the external behaviour of the program. It is a disciplined way to clean up code."

Hierarchy manipulation is related but not tied to refactoring. Important evolution and reengineering operations cannot be categorized as refactoring because there is no preservation of behaviour. Whereas refactoring preserves semantics,
we do not see this as a necessary property of hierarchy manipulation. For example, if the hierarchy is modified to meet new requirements, the semantics of the program will change. Moreover, refactoring may impact aspects of object-oriented systems that do not relate to class hierarchies.

Fowler et al. provide a catalog of refactoring transformations [26], but it is not exhaustive. As a first step, however, the catalog could be used to identify hierarchy manipulation operations and to find out whether they can be inferred and/or automated with FCA. "Inference" here is the key point, because FCA can indicate when and how some transformation must be done. But automation is more than that, because it covers code (or model) analysis and manipulation, parsing techniques, and so on (cf. Section 2.7 below). Bearing in mind that "if you have a hammer, every problem looks like a nail", we should be careful to avoid missing other analysis techniques.

Crespo [17] proposed a classification for refactoring operations that can be generalized to software manipulation and can be extended and refined with other categories, such as optimization techniques [22]. Crespo's classification considers the reason for manipulation, as well as the direction, results, consequences, method, human intervention, and target of the manipulation, and can be refined and extended either with other categories as defined in [29], or with the classification of other work on hierarchy manipulation such as [22]. Environments that assist refactoring, such as the Refactoring Browser [45], should also support hierarchy manipulation.
2.4 Problems in Hierarchy Manipulation
In the following we discuss the problems that we identified in hierarchy manipulation. Each problem is discussed in the framework proposed by the workshop organizers: problem statement, who is affected, forms of solution, and possible approaches.
2.5 Problem: Modelling and Automating Manipulation
Suppose that we wish to improve a hierarchy by analysis based on concept lattices, followed by refactoring. This requires solving two problems:

1. How do we formulate a model in terms of concept lattices? The problem is to find the right predicate for the right purpose: a predicate is not intrinsically good or bad; it may or may not be relevant for a given refactoring purpose. What criteria can we use to ensure that the chosen predicate is appropriate?
2. Transforming the current hierarchy to the desired hierarchy by hand is tedious and error-prone. How can we automate the required refactoring?

Who Is Affected? Designers working with an iterative process model need criteria and techniques for hierarchy analysis. Implementors performing hierarchy manipulation need software tools to help them.
Forms of Solution. The central problem here is that inference techniques could lead to very complex transformations. We can distinguish atomic and compound refactoring operations, but even compound refactoring operations can be less complex than the required transformation. Perhaps a good combination of refactoring operations would suffice; the problem, however, is to detect the required combination automatically. We can speak about "refactoring plans" (cf. "population migration plans" in database terminology). It may be possible to formalize refactoring as graph rewriting, because combinations of refactorings could be expressed very well in terms of graph rewriting. Building refactoring plans to act on a given advice (or indication) from inference techniques can be seen as a future research direction.

Possible forms of solution include:

– A set of rules or guidelines for assessing the usefulness of the predicate used for concept analysis. Alternatively, Galois lattices (and sub-hierarchies) yield inheritance hierarchies that are proven to satisfy the maximal factorization criterion (among others) for properties among classes.
– An algorithm for refactoring. The algorithm might have two components: the first part would compare the current and desired hierarchies and build a plan of changes; the second part would apply the changes. The solution must also include an implementation of the algorithm, of course.
– Incremental refactoring would manipulate the hierarchy each time it is modified by the designer [23].

Approaches

– There are many different possible refactoring operations. A first step would be to identify refactoring operations that can be automated by FCA. For example: attribute/method redundancy can be removed by ownership-based FCA; sophisticated ownership analysis can correctly insert abstract methods; "concept pattern 2-case1" [1] of behaviour-based FCA indicates places of possible common code in sibling classes; etc. One approach would be to use a catalog such as Fowler's [26] and to analyze, for each refactoring operation, which operation can be discovered and/or automated by which kind of FCA — this would probably involve inventing new forms of FCA.
– Think up several predicates and try them out on a variety of hierarchies. If possible, the predicates should be based on well-defined benchmarks and metrics.
– Look for a series of small steps that, taken together, map the current hierarchy to the desired hierarchy. Choose a suitable model or representation of the source code for the implementation of the algorithm (this could be plain text, a linked data structure, or some combination of these).
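A toy Python sketch of the "refactoring plan" idea (the hierarchy model and operation names are invented): atomic operations act on a simple model of the hierarchy, and a plan is just an ordered list of such operations, here of the kind FCA might suggest after detecting duplicated methods.

```python
# A deliberately simple hierarchy model: class name -> parent and methods.
hierarchy = {
    "Shape":  {"parent": None,    "methods": set()},
    "Circle": {"parent": "Shape", "methods": {"area", "draw"}},
    "Square": {"parent": "Shape", "methods": {"area", "draw"}},
}

def pull_up_method(h, method, subclasses, superclass):
    """Atomic refactoring: move a method duplicated in subclasses upwards."""
    assert all(method in h[c]["methods"] for c in subclasses)
    for c in subclasses:
        h[c]["methods"].discard(method)
    h[superclass]["methods"].add(method)

# The plan: a sequence of atomic operations to be applied in order.
plan = [("area", ["Circle", "Square"], "Shape"),
        ("draw", ["Circle", "Square"], "Shape")]
for method, subs, sup in plan:
    pull_up_method(hierarchy, method, subs, sup)

print(sorted(hierarchy["Shape"]["methods"]))  # ['area', 'draw']
```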
2.6 Problem: Validating Transformations
Suppose that we have taken a current hierarchy Hc, applied a transformation to it, and obtained the desired hierarchy Hd. How do we validate Hd? This problem has three components:

1. Is the objective of the manipulation fulfilled?
2. Does the structure of the new hierarchy accurately reflect the desired structure of the application?
3. Does the new hierarchy provide the same functionality and performance as the old one?

Who Is Affected? If development is understood as a seamless transition from analysis to encoding, the initial users (clients/experts of the application domain) should recognize and approve the validity of software artifacts that directly encode concepts of the application domain; the natural specialization of the application domain (its ontology) should be more or less reflected in the software artifacts. Without validation, the implementors will have to test the new hierarchy extremely thoroughly to ensure that it behaves in exactly the same way as the old hierarchy and meets all of the system requirements. If the required transformation can be obtained by means of refactoring, there is no problem, because refactoring operations preserve behaviour and we can pass the problem on to the refactoring definition and implementation. But when we start to work with combinations of refactoring operations, we must not only be sure that the combinations preserve behaviour, but also that we choose the appropriate combination.

Forms of Solution

– A tool that evaluates a hierarchy according to stated criteria.
– A tool that formally analyzes and/or runs tests on two hierarchies in order to compare their behaviour and performance.

Approaches

– There is a subjective aspect to the second component of the problem being described (the structure of the hierarchies): perhaps human judgment would be required to assess the appropriateness of the new hierarchy. However, there are two ways in which the assessment might be partly automated: design metrics and use them to compare the two hierarchies, or use AI techniques, such as a rule-based expert system, to assess the hierarchies.
– It should be possible to establish functional equivalence by formal techniques: for example, by showing that all calls in the new hierarchy have the same effect as equivalent calls in the old hierarchy. However, it is hard to assess performance by formal techniques.
– A more promising approach would be to construct a test suite automatically. Benchmarking, as used in the parallel and high-performance computing community, might be a suitable approach.
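A minimal Python sketch of the test-based route (the helper and its arguments are hypothetical): the same recorded call trace is replayed against the old and the new hierarchy, and the results are compared.

```python
def behaviourally_equal(make_old, make_new, trace):
    """Replay (method_name, args) pairs against both versions and compare.
    Passing such a trace is evidence of equivalence, not a proof."""
    old, new = make_old(), make_new()
    return all(getattr(old, m)(*args) == getattr(new, m)(*args)
               for m, args in trace)

# Usage sketch with invented classes:
# behaviourally_equal(OldAccount, NewAccount,
#                     [("deposit", (10,)), ("balance", ())])
```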
2.7 Problem: Separation of Concerns
How can we separate language-dependent and language-independent issues in hierarchy manipulation?

Who Is Affected? Without this separation, we would have to build a complete set of tools for each programming or modelling language. Separating out the language-independent issues would enable us to build tools that could do part of the work of hierarchy manipulation for any programming language, or even for multi-language systems.

Forms of Solution. A complete solution would consist of a list of language-dependent issues in hierarchy manipulation and a list of language-independent issues.

Approaches. Build metamodels for languages. Group languages with similar metamodels into families. Hierarchy manipulations expressed at the metamodel level would apply to all languages in the corresponding family and would, to that extent, be language independent. Manipulations that could be applied to all metamodels would be fully language independent. In addition to defining the metamodel, we have to define its instantiation: to apply a transformation to, for example, a C++ hierarchy, we first have to interpret C++ artifacts as instances of the metamodel (this can be difficult, and it may be necessary to omit aspects such as access control); second, after application of the transformation, we have to re-generate correct C++ code.

Huchard et al. defined in their research [33]:

– a general meta-model and an ownership-based FCA construction tool using this meta-model;
– a tool for extracting from Java classes information about their interfaces that matches the meta-model;
– a tool that uses the result of the FCA construction algorithm to generate Java code for an interface hierarchy (that compiles and can be linked to classes).
Working at the analysis/design level might help tackling the language dependency problem. UML is an object-oriented meta-model, so a possible solution might be to use UML as much as possible. Other possibilities includes enriching UML and using other analysis design formalisms, e.g., to express specialization between properties—attributes or methods [21]. Some language dependent aspects might even be transformed into this language independent level. Producing a list of OO languages artifacts and their specific implementation in different languages along with the possible transformations of one into another might be of great help. This of course may rely on one or several metamodels. 2.8
2.8 Hierarchy Manipulation — Conclusion
The discussion demonstrated that hierarchy manipulation is a rich area in which much research remains to be done. The members of this group feel that a Hierarchy Manipulation Study Group should be established and intend to take steps to form such a group.
3 Mixins
The traditional notion of inheritance binds each subclass very tightly to its superclass(es). The concept of mixins can be used to make this connection more flexible. The concept was first introduced as mixin classes, a programming convention in languages such as Flavors [13] and CLOS [8]. A mixin class is an ordinary class that is by convention used in a special manner, namely as one of several superclasses. The idea is that the mixin class adds certain facilities to some of its fellow superclasses, possibly using other facilities of those fellow superclasses. Hence, a mixin class may use features not available in the class itself, because these features are expected to be provided by other classes. It is possible to write a mixin class in Flavors and in CLOS because the LISP family of languages is not statically type checked; but it is also possible to produce run-time type errors ('message not understood'), if the mixin class uses a feature that should be—but is not—provided by any of its fellow superclasses. To make the mixin concept more robust it was necessary to develop it as a separate concept, a step taken by Bracha and Cook in 1990 [10]. The mixin as a concept and a language construct has been further developed and refined many times since then, e.g., in [25, 9, 37].

Generally, a mixin is a building block for classes. A mixin M can be applied to a class C, thereby producing a subclass C′ of C. With a suitable interpretation of classes and ⊕, this could be formalized as C′ = C ⊕ M. Flatt et al. [25] formalize mixins as functions from classes to classes, but there is no deep conflict in these points of view because the function would simply be λC. C ⊕ M. A mixin such as M can be reused with several classes. For example, M may also be applied to D, producing a subclass D′. Using traditional inheritance,
we would need two identical copies of the text corresponding to M in order to create C′ from C as well as D′ from D. This textual redundancy demonstrates the inferior support for reuse with traditional inheritance, and it introduces a potential for inconsistencies. Moreover, C′ and D′ will be unrelated with traditional inheritance and name-based type equivalence, whereas they would have a common element M when using mixins. It may be possible to write polymorphic code that is capable of working on instances of either C′ or D′ using features from M; with traditional inheritance it would again be necessary to create two textually identical copies, one working on C′ and another working on D′. Since this is concerned with client code, the duplication of code could penetrate deeply into the rest of any system using C′ and D′.

To summarize: mixins can be used to open the doors to a number of new abstraction and reuse opportunities. However, the introduction of mixins does not only solve problems; it also raises new ones. We identified three core problems at the workshop, which are described below.
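To give the formula C′ = C ⊕ M a concrete reading, here is a rough approximation in Java, which has no true mixins; an interface with default methods stands in for M, and all names are invented:

    // Mixin-like building block: the interface adds a facility (resetTo)
    // while *requiring* features (getValue/setValue) from the class it is
    // mixed into — features the mixin itself does not provide.
    interface UndoMixin {
        int getValue();             // required from the host class
        void setValue(int v);       // required from the host class
        default void resetTo(int saved) { setValue(saved); }  // added facility
    }

    // "Applying" the mixin to two unrelated classes reuses its single text.
    class Counter implements UndoMixin {
        private int value;
        public int getValue() { return value; }
        public void setValue(int v) { value = v; }
    }

    class Thermostat implements UndoMixin {
        private int target;
        public int getValue() { return target; }
        public void setValue(int v) { target = v; }
    }

Both Counter and Thermostat play the roles of C′ and D′: they reuse one copy of the mixin text and share the common element UndoMixin, so polymorphic client code can work on either.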
3.1 Problem: Mixing Things from Different Sources
When mixins are used it will often be the case that mixin composition (⊕) is used to combine entities written in different contexts. Indeed, it seems to be one of the important benefits of mixins that they could be used to combine a class C from one vendor, Va, with a mixin M from another vendor, Vb. After all, it may well be that C is better for the given purpose than any class delivered by Vb, but M is better than any mixin delivered by Va. However, it is not enough that C has exactly the right semantics for the desired superclass, and M provides exactly the right semantic adjustment for the desired mixin. The two must also agree on a number of more mundane properties associated with the expression of the class C and the mixin M. In other words, classes and mixins are not abstract semantic entities; they depend on such seemingly accidental details as the choice of names, access or visibility specifications, const, final, and other modifiers, and more.

Who Is Affected? This problem affects programmers working on complex, real-life projects.

Possible Solutions

Encapsulation. It may be possible to use encapsulation to make both classes and mixins more abstract. In particular, it may be possible to hide the difference between stored and computed results, at least in some cases. This would, e.g., make it possible for an instance variable of type T in M to (dynamically!) override a method in C returning a value of type T, as is possible in ordinary inheritance in Eiffel. Overriding an instance variable v in C by two methods in M, having signatures similar to a 'getter' and a 'setter' method for v, might also be feasible in some languages. Since there is no general approach that allows us
to use a method (or two) where an object is expected, or vice versa, it might be necessary to depart more radically from mainstream semantics in order to make stored and computed state freely interchangeable. In the same vein, it might be useful to let a method in M override two methods in C, or vice versa. This introduces the question of naming, which is discussed below in the last problem.

Disambiguation by origin. If the problem is a name clash in superclasses, i.e., among mixins used to build the superclasses, then it may be possible to solve the problem by explicitly selecting a feature from a particular mixin. This could be similar to the SomeClass::SomeFeature syntax in C++. Note that the name clash would have to be resolved at mixin application, unless the language allows some knowledge about the actual superclass to be made available at the mixin declaration. Since a naive semantics for this mechanism would imply that late binding of method implementations is disabled, there is a need to define more sophisticated semantics for such an explicit selection by origin, such as the 'titles' suggested for C++ in [47]. This is all the more important because the superclass from which the feature must be selected is not statically known inside the mixin definition.

Disambiguation by type. With the same problem, i.e., a name clash in superclasses, it may be possible to use disambiguation by type as a solution. This means that exactly one of the available definitions is chosen, because it matches a given type better than all the others. This probably implies that the usage context (what we called M earlier) must contain a specification of the type of the feature, such that the comparison between this requested type and all the available types (in what we called C) can be based on a visible criterion. In many languages it would actually be possible to infer the type of a named entity from the expression(s) in which it is used, but this seems to be a rather error-prone basis to build on, because the programmer might never realize that there was a name clash, and because seemingly benign changes to the program may change the semantics drastically.
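Java's handling of clashing default methods gives a rough feel for disambiguation by origin, although this is ordinary interface inheritance rather than mixin application; the names are invented:

    // Two "origins" provide a clashing feature foo; the combining class
    // must resolve the clash explicitly by naming the origin.
    interface Green { default String foo() { return "green foo"; } }
    interface Blue  { default String foo() { return "blue foo"; } }

    class Mixed implements Green, Blue {
        // Explicit selection by origin, akin to SomeClass::SomeFeature in C++:
        public String foo() { return Green.super.foo(); }
    }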
3.2 Problem: How to Specify the Requirements of a Mixin
When composing a class and a mixin it is important that the class satisfies the requirements of the mixin—otherwise they should not be composed. Such requirements may take many forms. There are the automatically checkable requirements, such as ‘any class with which this mixin is composed must define an instance variable named x of type int’, or ‘it must define a method foo conforming to [a specific signature]’. The reason we might want to make such simple requirements explicit is that we may not know exactly what class C and mixin M are being composed at a given mixin application site. Being explicit about requirements will make it possible to ensure that these simple requirements are satisfied—like an ordinary type system keeping track of the consistency of types of values without actually keeping track
of the values themselves. This amounts to giving classes and mixins types with respect to mixin application, and checking the types at mixin application. Note that such type checking may require explicit type declarations, and possibly a more verbose mixin composition language.

There are also precisely specifiable requirements based on correctness criteria that cannot be automatically checked, e.g.: 'this mixin method may call the method lock once and then select or update some number of times, and then unlock once, and that must be an appropriate usage of these methods from the class with which this mixin is composed'. Whether such a method protocol is actually respected by a piece of code is of course undecidable, though it can be checked at run-time. It is even further away from decidability—and it cannot be checked at run-time—whether it is application-correct to treat the superclass methods lock, select, update, and unlock as described. Nevertheless, programmers may be allowed to specify such requirements explicitly, and it might then be possible to check the consistency of these annotations, e.g., that there exists a method protocol that satisfies all the requirements.

Finally, the requirements of a mixin on its superclass may have to be described in natural language, and it is then up to programmers to check that mixin applications do not violate these requirements. There may be tool support for presenting such requirements to programmers when they write the mixin application expression.

Who Is Affected? This problem affects anybody who wants to reuse a given mixin with a given class: a reuser of code needs concise and explicit specifications of constraints on the usage, because (s)he cannot be expected to know how the reused code works in great detail.

Possible Solutions

Specify the requirements. An explicit requirements specification implies more work at mixin definition time, but it also serves as documentation of the exact intentions in this area. If it turns out that the requirements are not satisfied in some case where they 'should' be satisfied, the programmer will have to think about the requirements specification once more. After changing the specification, (s)he should reconsider whether the implementation of the mixin actually fits the new requirements, or—in the case of automatically checkable requirements—(s)he should let the language processing system re-check the requirements.

Infer the requirements. As opposed to explicitly specified requirements, inferred ones are very easy on programmers at definition time. Programmers can just write the code with some functionality, and the painstaking derivation of requirements, the tedious typing of them, and the reading-unfriendly verbosity of the resulting code are all avoided. Language processing tools may give the programmer the opportunity to inspect the requirements and see if they conform to his wishes, but they do not force the programmer to do so. On the other hand,
purely inferred requirements could never include such things as constraints on method protocols and other, more complex issues.

Intermediate solutions. It would be possible to give an explicit requirements specification that is to be treated as an upper bound on the actual demands of any future version of the mixin. Similarly, a class might be annotated with a specification that is to be considered a lower bound on what any future version of the class will provide. This kind of approximate requirements specification will provide some support for safer code evolution. Moreover, it might be possible to combine such incomplete specifications with inferred specifications, giving rise to warnings from compilers and other tools when there is a conflict.
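A small sketch of the 'specify the requirements' option discussed above: the checkable part of a mixin's demands is written down as a type, so that composition is verified by the ordinary type system, while the method protocol remains a comment that no compiler checks. The lock/select/update/unlock names come from the text; everything else is invented:

    // The automatically checkable part: required features as an interface.
    interface LockingHost {
        void lock();
        void select();
        void update();
        void unlock();
    }

    // A mixin-like wrapper whose requirement is expressed as a type bound;
    // composing it with a class lacking these features is a compile error.
    class TransactionMixin<C extends LockingHost> {
        private final C host;
        TransactionMixin(C host) { this.host = host; }

        // The protocol 'lock once, then select/update some number of times,
        // then unlock once' is only documented here — it is not checkable
        // by the type system, as discussed above.
        void runUpdate() {
            host.lock();
            host.select();
            host.update();
            host.unlock();
        }
    }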
3.3 Problem: Dependence on Names
One particularly thorny issue is the choice of names for features. Each name is chosen by a programmer at some point in the development of a given piece of software, when the future usage contexts are unknown. In particular, code that is intended to be highly reusable might be used in many unforeseen contexts, and ironically it is in exactly this kind of code that the right choice of name is most important.

Since a mixin generally performs white-box reuse of the class with which it is composed, the mixin depends on a wider set of names and properties in the superclass than client code does. In Java terminology, the mixin would have access to the protected interface, rather than being restricted to the public interface. Compared to traditional inheritance, a given mixin is much more vulnerable to name mismatches than an ordinary subclass. The traditional subclass will always be written using exactly the name space that is actually available in its superclass. The mixin may turn out to be very useful with superclasses with different name spaces, except that it can only be applied to superclasses whose features happen to have exactly those names that the mixin expects. Note, however, that a subclass and a mixin are equally vulnerable to name mismatches arising from evolution of the (actual) superclasses. Change a name in a class, and typically both subclasses and applied mixins will break. This illustrates that the dependence on names is a problem with a wide scope.

Who Is Affected? This problem also affects anybody who wants to reuse a given mixin with a given class: reuse may be possible or impossible depending on the chosen names for features in the class and in the mixin, rather than on the inherent semantics of the class and the mixin.

Possible Solutions

Explicit renaming. It is possible to use a mechanism such as Eiffel feature renaming to adapt a given mixin M to a given class C: as a subclass C′ is being created by applying M to C, each feature of the mixin that needs to have a
different name according to the requirements of the class is first renamed. In some cases, features of C could be renamed instead.

Coloring. Coloring is a way of resolving name conflicts. If there are two methods foo that conflict, and we need to access them both, then we color one as "the green foo" and the other as "the blue foo", and now we can talk about them both. Scope rules may be manipulated to direct all usages of names in a given area of source code to prefer the "blue" names, etc. This might also be combined with renaming, so the green foo might be renamed and exported as grass foo while the blue foo might be renamed as sky foo.

Call-by-declaration. In [24], the concept of 'call-by-declaration' is introduced. It is named according to the traditional phrases used to describe parameter transfer mechanisms for procedures and methods, because the mechanism is similar to such parameter transfers in several ways. However, it is a mechanism that introduces explicit parameterization of a mixin with the declarations upon which it depends. It is then possible to bind these formal declarations to actual declarations in the actual superclass at mixin application time. Call-by-declaration provides support for feature renaming at mixin application, without affecting the declarations of the class or of the mixin.

Explicit parameterization. It is possible to use a broader notion of explicit parameterization than the one inherent in the call-by-declaration approach. It might for instance be possible to parameterize the mixin with the methods it should provide: if a given method foo is declared in the mixin definition but not chosen at parameterization (configuration) time, the method foo would simply not be included. As a consequence, requirements on the superclass derived from the implementation of foo would vanish. However, the mixin would still have to be consistent, so if some other method bar in the mixin calls foo, then bar must also be excluded, or some other implementation of bar that does not use foo must be provided as a parameter.
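For illustration, the effect of explicit renaming can be mimicked crudely with an adapter in a language without Eiffel-style renaming; all names below are invented:

    // The reuse context expects a feature named 'print'...
    interface Printer { void print(); }

    // ...but the class to be reused calls the same feature 'output'.
    class LegacyReport {
        void output() { System.out.println("report"); }
    }

    // A thin adapter "renames" output to print without touching either side.
    class RenamedReport extends LegacyReport implements Printer {
        public void print() { output(); }
    }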
3.4 Mixins — Conclusion
The discussions about mixins illustrated that there are several deep problems yet to be solved, and also that the participants in this subworkshop are working actively on the problems, along with other researchers.
References

[1] G. Arévalo and T. Mens. Analysing object-oriented application frameworks using concept analysis. In Black et al. [6], pages 3–9.
[2] Lars Bak, Gilad Bracha, Steffen Grarup, Robert Griesemer, David Griswold, and Urs Hölzle. Mixins in Strongtalk. Technical report, Sun Microsystems, Inc., 2002.
[3] M. Barbut and B. Monjardet. Ordre et classification. Hachette, 1970.
[4] G. Birkhoff. Lattice Theory. AMS Colloquium Publications. American Mathematical Society, third edition, 1967.
[5] Andrew P. Black. A use for inheritance. In Black et al. [6], pages 10–15.
[6] Andrew P. Black, Erik Ernst, Peter Grogono, and Markku Sakkinen, editors. Proceedings of the Inheritance Workshop at ECOOP 2002. Number 12 in Publications of Information Technology Research Institute. University of Jyväskylä, Málaga, Spain, June 2002.
[7] Gustavo Bobeff and Jacques Noyé. On the interaction of partial evaluation and inheritance. In Black et al. [6], pages 16–22.
[8] B. Bobrow, D. DeMichiel, R. Gabriel, S. Keene, G. Kiczales, and D. Moon. Common Lisp Object System Specification. Document 88-002R. X3J13, June 1988.
[9] Viviana Bono, Amit Patel, and Vitaly Shmatikov. A core calculus of classes and mixins. In Rachid Guerraoui, editor, ECOOP '99 — Object-Oriented Programming, 13th European Conference, Lisbon, Portugal, volume 1628 of Lecture Notes in Computer Science, pages 43–66. Springer-Verlag, Berlin-Heidelberg, June 1999.
[10] Gilad Bracha and William Cook. Mixin-based inheritance. In Proceedings OOPSLA/ECOOP '90, ACM SIGPLAN Notices, volume 25, 10, pages 303–311, October 1990.
[11] F. Brito e Abreu and W. Melo. Evaluating the impact of object-oriented design on software quality. In Proc. METRICS 96. IEEE Computer Society, 1996.
[12] F. Brito e Abreu, Mario Piattini, Geert Poels, and Houari A. Sahraoui, editors. Proceedings of the 6th ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2002). Springer, 2002.
[13] Howard I. Cannon. Flavors: A non-hierarchical approach to object-oriented programming. Symbolics Inc., 1982.
[14] E. Casais. Managing Evolution in Object Oriented Environments: An Algorithmic Approach. PhD thesis, Centre Universitaire d'Informatique, University of Geneva, May 1991.
[15] H.S. Chae. Restructuring of classes and inheritance hierarchy in object-oriented systems. Master's thesis, Software Engineering Laboratory, Computer Science Department, Korea Advanced Institute of Science and Technology (KAIST), 1996.
[16] Pierre Crescenzo and Philippe Lahire. Customisation of inheritance. In Black et al. [6], pages 23–29.
[17] Y. Crespo. Increasing software reuse potential by refactoring. PhD thesis, Departamento de Informática, Universidad de Valladolid, 2000. In Spanish.
[18] Y. Crespo, V. Cardenoso, and J.M. Marques. A model language for refactoring definition and analysis. In Actas PROLE'01, Almagro, España, November 2001. In Spanish.
[19] Yania Crespo, José Manuel Marqués, and Juan José Rodríguez. On the translation of multiple inheritance hierarchies into single inheritance hierarchies. In Black et al. [6], pages 30–37.
[20] M. Dao, M. Huchard, H. Leblanc, T. Libourel, and C. Roume. A new approach to factorization - introducing metrics. In Proc. Metrics 2002: 8th International Software Metrics Symposium, 2002.
[21] M. Dao, M. Huchard, T. Libourel, and C. Roume. Spécification de la prise en compte plus détaillée des éléments du modèle objet UML. Technical report, Projet MACAO, Réseau RNTL, 2001. Sous-projet 4, activité 2.
[22] Michel Dao, Marianne Huchard, Thérèse Libourel, and Cyril Roume. Evaluating and optimizing factorization in inheritance hierarchies. In Black et al. [6], pages 38–43.
[23] H. Dicky, C. Dony, M. Huchard, and T. Libourel. On automatic class insertion with overloading. Special issue of SIGPLAN Notices - Proceedings of ACM OOPSLA'96, 31(10):251–267, 1996.
[24] Erik Ernst. Call by declaration. In Black et al. [6], pages 44–50.
[25] Matthew Flatt, Shriram Krishnamurthi, and Matthias Felleisen. Classes and mixins. In Conference Record of POPL '98: The 25th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 171–183, San Diego, California, 19–21 January 1998.
[26] M. Fowler, K. Beck, J. Brant, W. Opdyke, and D. Roberts. Refactoring: Improving the Design of Existing Code. Addison-Wesley, 1999. Object Technologies Series.
[27] Peter H. Fröhlich. Inheritance decomposed. In Black et al. [6], pages 51–57.
[28] B. Ganter and R. Wille. Formal Concept Analysis: Mathematical Foundations. Springer, 1999.
[29] R. Godin, M. Huchard, C. Roume, and P. Valtchev. Inheritance and automation: Where are we now? In Black et al. [6], pages 58–64.
[30] R. Godin and H. Mili. Building and maintaining analysis-level class hierarchies using Galois lattices. In Conference Proceedings Object-Oriented Programming Systems, Languages, and Applications (OOPSLA'93), pages 394–410, 1993. Published as special issue of SIGPLAN Notices, 28(10).
[31] C. Hernández, F. Prieto, M.A. Laguna, and Y. Crespo. Formal concept analysis support for conceptual abstraction in database reengineering. In Proceedings of the Database Management and Reengineering Workshop at ICSM 2002, 2002.
[32] M. Huchard, H. Dicky, and H. Leblanc. Galois lattice as a framework to specify algorithms building class hierarchies. Theoretical Informatics and Applications, 34:521–548, January 2000.
[33] M. Huchard and H. Leblanc. Computing interfaces in Java. In Proc. IEEE International Conference on Automated Software Engineering (ASE'2000), pages 317–320, 11–15 September, Grenoble, France, 2000.
[34] G. G. Koni-N'Sapu. A scenario based approach for refactoring duplicated code in object oriented systems. Diploma thesis, Faculty of Sciences, University of Bern, 2001.
[35] Karl Lieberherr and Ian Holland. Assuring good style for object-oriented programs. IEEE Software, pages 38–48, September 1989.
[36] M. Lorentz and J. Kidd. Object-Oriented Software Metrics: A Practical Guide. Prentice Hall, 1994.
[37] Carine Lucas and Patrick Steyaert. Modular inheritance of objects through mixin-methods. In Peter Schulthess, editor, Advances in Modular Languages, pages 273–282. Universitätsverlag Ulm GmbH, 1994. Proceedings of the Joint Modular Languages Conference, University of Ulm, Germany, 28–30 September 1994.
[38] B. K. Miller, P. Hsia, and C. Kung. Object-oriented architecture measures. In 32nd Annual Hawaii International Conference on Systems Sciences. IEEE Computer Society, 1999.
[39] W.F. Opdyke. Refactoring Object-Oriented Frameworks. PhD thesis, Department of Computer Science, University of Illinois at Urbana-Champaign, 1992. Also Technical Report UIUCDCS-R-92-1759.
[40] Klaus Ostermann and Mira Mezini. Blurring the borders between object composition, inheritance, and delegation. In Black et al. [6], pages 65–68.
[41] Jens Palsberg and Michael I. Schwartzbach, editors. Types, Inheritance and Assignments: A collection of position papers of the ECOOP'91 workshop W5, DAIMI
PB-357, Geneva, Switzerland, July 1991. Computer Science Department, Aarhus University.
[42] Claudia Pons. Generalization relation in UML model elements. In Black et al. [6], pages 69–75.
[43] F. Prieto, Y. Crespo, J.M. Marques, and M.L. Laguna. Mecanos and formal concept analysis as support for framework construction. In Actas de las V Jornadas de Ingeniería de Software y Bases de Datos (JISBD'2000), pages 163–175, November 2000. In Spanish.
[44] F. Prieto, Y. Crespo, J.M. Marques, and M.L. Laguna. Formal concept analysis as support for domain framework evolution. In Taller de Evolución de Software, VI Jornadas de Ingeniería de Software y Bases de Datos (JISBD'2001), November 2001. In Spanish.
[45] Don Roberts, John Brant, and Ralph Johnson. A refactoring tool for Smalltalk. Theory and Practice of Object Systems (TAPOS), 3(4):253–263, 1997.
[46] Cyril Roume. Going from multiple to single inheritance with metrics. In [12], pages 30–37, 2002.
[47] Markku Sakkinen. A critique of the inheritance principles of C++. Computing Systems, 5(1):69–110, Winter 1992.
[48] Markku Sakkinen, editor. Multiple Inheritance and Multiple Subtyping, Position papers of the ECOOP 1992 Workshop W1, Working Paper WP-23, Utrecht, the Netherlands, June 1992. Department of Computer Science and Information Systems, University of Jyväskylä.
[49] Markku Sakkinen. Exheritance - class generalisation revived. In Black et al. [6], pages 76–81.
[50] Nathanael Schaerli, Stéphane Ducasse, and Oscar Nierstrasz. Classes = traits + states + glue. In Black et al. [6], pages 82–88.
[51] K. Chandra Sekharaiah and D. Janaki Ram. Object schizophrenia problem in modeling is-role-of inheritance. In Black et al. [6], pages 88–94.
[52] G. Snelting and F. Tip. Reengineering class hierarchies using concept analysis. In Proceedings of the ACM SIGSOFT Sixth International Symposium on the Foundations of Software Engineering, ACM SIGSOFT Software Engineering Notes, 23(6):99–110, November 1998.
[53] S. Tichelaar, S. Ducasse, S. Demeyer, and O. Nierstrasz. A meta-model for language-independent refactoring. In Proceedings ISPSE 2000, pages 154–164. IEEE, 2000.
[54] F. Tip, J-D. Choi, J. Field, and G. Ramalingam. Slicing class hierarchies in C++. In Proceedings of the Conference ACM SIGPLAN OOPSLA'96, pages 179–197. Special issue of ACM SIGPLAN Notices 31(10), ACM Press, October 1996.
[55] F. Tip and P.F. Sweeney. Class hierarchy specialization. In Proceedings of the Conference ACM SIGPLAN OOPSLA'97, pages 271–285, 1997.
[56] Mads Torgersen. Inheritance is specialisation. In Black et al. [6], pages 95–101.
[57] A. van Deursen and T. Kuipers. Identifying objects using cluster and concept analysis. In 21st International Conference on Software Engineering, ICSE-99, pages 246–255. ACM, 1999.
[58] Peter Wegner. Dimensions of object-based language design. In Proceedings Second ACM Conference on Object-Oriented Programming Systems, Languages and Applications, pages 168–182, Orlando, Florida, 1987. ACM Press.
[59] A. Yahia, L. Lakhal, and J.P. Bordat. Designing class hierarchies of object database schemas. In Actes 13èmes journées Bases de Données Avancées, pages 371–390, Grenoble, France, September 1997.
Model-Based Software Reuse

Andreas Speck (1), Elke Pulvermüller (2), Ragnhild Van Der Straeten (3), Ralf H. Reussner (4), and Matthias Clauß (1)

(1) Intershop Research, Germany. {[email protected]|[email protected]}
(2) Universität Karlsruhe, Germany. [email protected]
(3) Vrije Universiteit Brussel, Belgium. [email protected]
(4) CRC for Enterprise Distributed Systems Technology (DSTC) Pty Ltd, Monash University, Melbourne, Australia. [email protected]
Abstract. This report summarises the presentations and discussions of the First Workshop on Model-Based Software Reuse, held in conjunction with the 16th European Conference on Object-Oriented Programming (ECOOP), Málaga, Spain, June 10, 2002. This workshop was motivated by the observation that suitable models are very useful for understanding the mechanisms of reuse. Models may help to define the interoperability between components, to detect feature interaction, and to increase traceability. They have the potential to define the essential aspects of the compositionality of the assets (i.e., components, aspects, views, etc.). Common problems discussed were how to reason about the properties of composed assets (with a focus on the detection of invalid asset compositions) and how to model assets to enable such reasoning. Eleven papers were accepted and presented. Discussion groups in the areas "Software Architectures as Composed Components", "Automated Analysis and Verification" and "Modelling and Formalising" were formed.
1 Introduction
The demand for software reuse is rather old, and different approaches to support it have been developed. Among the technical approaches for supporting reuse are: modules, object-orientation, patterns, frameworks and component-based software architectures. Furthermore, aspect-orientation deals with cross-cutting concerns, and generative software engineering supports automatic system generation out of basic building blocks. One problem, common to all of the aforementioned approaches, is the compositionality of assets (such as components, features or aspects). Parts of this problem have already surfaced when dealing with component interoperability, aspect weaving, feature interaction and (on a more abstract level) traceability between different views or models. As an example, conventional object-oriented modelling
languages (like the UML) are used for communicating user requirements and system designs, but fail to support the composition of separate parts or views.

Research on compositionality includes models and analysis algorithms for reusable assets allowing one to (a) check their interoperability, (b) support the configuration of assets, and (c) predict properties of the assembled system (especially compliance with user requirements). Abstracting away from the kind of asset to be composed (such as components, aspects, modules, objects or views), we coined the term "model-based software reuse" to describe common research on the compositionality of assets in general. These questions are raised and tackled in different areas, such as component-oriented software engineering (e.g., [2, 3]), aspect-oriented programming [6], frameworks [8], and traceability (e.g., between different views of UML [14]). The workshop concentrated on a model-based approach to software reuse. The goal of the workshop was to gain a common understanding of which properties such asset models must share to ensure compositionality, and which properties are specific to components, aspects or views.

Organisation of the Workshop and the Report

During the morning sessions of the workshop the eleven accepted papers were presented, ordered in three sessions: "Reasoning and Verification", "Stability" and "Supporting Reuse / Modelling". These papers are summarised in section 2. In the afternoon sessions three discussion groups were formed around the topics "Software Architectures as Composed Components", "Automated Analysis and Verification" and "Modelling and Formalising". These discussions are summarised in section 3. Section 4 concludes this report and presents issues for future research in the area of model-based software reuse.
2 Presentations

2.1 Reasoning and Verification
Rebeca P. Díaz Redondo, José J. Pazos Arias, Ana Fernández Vilas and Belén Barragáns Martínez present the ARIFS tool for retrieving verification parts from formal requirements documents. Their research is motivated by increasing the efficiency (and, hence, the feasibility) of formal verification by reusing appropriate parts of existing proofs. Consequently, the ARIFS tool focuses on the retrieval of appropriate proofs from a repository. In contrast to other approaches, the retrieval process is able to deal with inaccurate specifications and does not require a proof (and, hence, a theorem prover) for retrieving the verification parts. This is achieved by building a lattice of reusable verification parts.

Yu Jia's and Yuqing Gu's paper describes their "Feature Description Logic", which is used for defining and reasoning about the semantics of software components. Their logic is mapped to state machines which can be checked by model checking. They define and check predicates for testing consistency, substitutability and satisfiability. These results can be used for component classification, detecting
feature interaction problems (a.k.a. semantic mismatch) and component feature prediction.

The paper of Andreas Speck, Silva Robak, Elke Pulvermüller and Matthias Clauß presents an approach for modelling features as component versions. The term version is used to capture one feature or a specific set of features. Versions may contain other versions of a lower (more detailed) granularity level. Versions with their features are modelled in propositional logic. They define a mapping from features to implementations. The versions may be modelled in UML, which makes it easier for developers to understand the version design. However, the underlying logic model allows checking the completeness and consistency of a version.
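A toy rendering of the idea of checking a version with propositional logic; this brute-force enumeration is my illustration, not the authors' tool, and the feature names and constraints are invented:

    // Features as booleans; constraints as propositional formulas. A version
    // is consistent if some feature assignment satisfies all constraints.
    public class VersionConsistency {
        public static void main(String[] args) {
            boolean satisfiable = false;
            for (int bits = 0; bits < 8; bits++) {
                boolean a = (bits & 1) != 0;   // feature A
                boolean b = (bits & 2) != 0;   // feature B
                boolean c = (bits & 4) != 0;   // feature C
                boolean requires = !a || c;    // A implies C
                boolean excludes = !(a && b);  // A excludes B
                boolean selected = a;          // the version selects A
                if (requires && excludes && selected) { satisfiable = true; break; }
            }
            System.out.println(satisfiable ? "version consistent"
                                           : "version inconsistent");
        }
    }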
2.2 Stability
In this section, papers authored by Mohamed Fayad and his group are summarised.

In the paper by Ahmed Mahdy, Mohammed E. Fayad, Haitham Hamza and Peeyush Tugnawat, the modelling approach of "Enduring Business Themes" (EBT) is demonstrated by an example involving three case studies. By modelling the common parts of different specific applications (with so-called "Software Stability Models" (SSM)), reuse between these applications is fostered for a given domain.

The paper of Mohamed E. Fayad, Shasha Wu and Majid Nabavi demonstrates the benefits of these SSM with another case study. When modelled in a conventional object-oriented way, their example from the logistics domain has to be re-engineered for each change request. Using SSM, their model remains well organised over several design iterations. Due to the promise of object-orientation to support maintainability, the authors consider the SSM an important refinement of design processes for object-oriented systems.

The paper of Haitham Hamza and Mohamed E. Fayad gives eight essential properties to evaluate pattern reusability and introduces the concept of "Stable Analysis Patterns". Patterns are grouped in three classes according to their origin: experience-based, designed-by-analogy, or based on Fayad's software stability concept. These classes are evaluated against the eight requirements for pattern reusability. Remarkably, only patterns in the third class (the one of the authors) fulfil all required properties. This group of patterns is called "stable analysis patterns" due to their origin in the software stability concept.
2.3 Supporting Reuse and Modelling
The paper of Miro Casanova and Ragnhild Van Der Straeten is concerned with modelling component libraries to support component reuse and evolution. Their approach is to use the technique of software libraries to classify the components. The structure of their libraries is modelled by ontologies specified in a Description Logic. This helps automate the process of classifying components into a component library and retrieving them from it.
The paper of Uwe Rasthofer presents a component meta-model. The rationale behind his work is to lift the component concept from a mere programming level to a concept used within the whole software engineering process. It is shown how the meta-model is used to model the CORBA and JavaBeans component models. Also, his meta-model is able to model the meta-model itself, hence ending the meta-modelling hierarchy.

The presentation of Alexander Fried and Herbert Prähofer is concerned with the usage of components at the design level for engineering system families. In particular, they sketch a library of design assets and its tool support. An integration of these tools managing the asset library into existing CASE tool chains is proposed. Design assets are classified into (a) components, (b) configurable components, (c) component templates and collaboration protocols, and (d) design patterns. Essential annotations for each class are presented. Furthermore, steps for using these assets (hence, design steps) are classified. Tool support for these design steps is considered very relevant for practical applicability. The paper concentrates on support for asset retrieval by case-based reasoning (CBR) and graph matching.

The presentation of Luca Pazzi deals with mapping UML use cases to state diagrams. The approach is demonstrated by an example. First, use cases are specified with Message Sequence Charts (MSCs). Then, common building blocks of different use cases (and their corresponding MSCs) are identified. These building blocks are then grouped and linked in a topology graph. Starting from the individual building blocks and their connections in the topology graph, the state machine is formed.

The presentation of Babak A. Farshchian, Sune Jakobsson and Erik Berg deals with increasing reuse in the domain of telecommunications. It is shown how the MDA ("Model-Driven Architecture") approach can be applied in this domain by using Parlay. MDA is an OMG standard for enterprise application integration. Parlay is a middleware standard in the telecommunication domain for ensuring independence of application software from the underlying network technology. The paper reports on a project using MDA in the telecommunication domain by deploying Parlay.
3 Discussions

3.1 Software Architectures as Composed Components
Group members: Haitham Hamza, Marie Beurton-Aimar, Alexander Fried, Sune Jakobsson, Yu Jia and Andreas Speck.

The starting point of this group was the question: "What is the definition of a model for reuse?" Reuse may be defined from different viewpoints. We could consider objects, classes and components as one kind of classification of assets to be reused. The reuse of complete applications comprises the reuse of organisation data. This leads to the problem of domain dependency. However, in our opinion reuse should be mostly context-free (a model for a problem in a specific domain should also be valid in other domains).
Some (orthogonal) approaches to support software composition are:
– Software Stability Model (SSM, cf. section 2.2): a method which guides the user according to given rules in order to avoid errors. Additionally, patterns are provided by the SSM method to support the decisions of the users.
– Common ontologies for the definition of terms in models: ontologies bear the chance to capture the semantic meaning of components, which is crucial for the usage of the components. A model for building an ontology is a potential solution for context-free modelling, since ontologies may help to bridge the different domains. The open issue is how to build a (domain-dependent) ontology, which was considered a generic problem.

Means to deal with reused components are views and domains. Views help to find new applications for components (or rather, the models of the components). Role modelling may be applied, since many objects can be accessed by all actors. Patterns are reusable models which may help to define specific views. The accumulation of viewpoints will complete the model. Domains define the area of application of specific components. In order to use components in different domains it is necessary to find a way to convert between domains. Features that match in the problem domain have to match again in the solution domain. Therefore it is considered necessary to find a mathematical approach, as a kind of formalisation, for defining components in a semantically correct way [10]. Here again, patterns may support the transformation between domains.

Potential means to solve the problems in composition may be:
– Artificial intelligence and roles may be used to support the understanding of the semantics of the reused elements.
– A hierarchical model of combining larger components from small pieces supports the abstraction from the details without loss of important information.
– Coding patterns automatically implies accepting constraints which serve as guidelines.

Conclusion

In general we consider the reuse of components a semantic problem. A general-purpose semantic capturing language may help to specify the semantics. A partial solution which is currently applied is annotations (e.g., in Web Services or the new UML standard). The problem which will still remain is that there are different names in different domains. There is no general dictionary as in other engineering disciplines (mechanical engineering, electrical engineering). In web services, for example, such a common understanding of the meaning of names is already practised today. In particular, ontologies help to tie together the different names of the same feature.
3.2 Automation and Verification
Group members: H.-Gerhard Gross, Rebeca Díaz Redondo, Luca Pazzi and Elke Pulvermüller.
Verification and automation take place in many locations in a three-dimensional model of component-based development (Fig. 1). Transformations take place on each dimension and, of course, between the dimensions. The goal is to have all specification artefacts on all dimensions consistent at all times during all development activities.
[Figure: a three-dimensional model of component-based development. One axis is "Granularity of Models", from higher-level models (component engineering) through platform-dependent models down to code (lower level); one axis is "Genericity (Product Lines)", from more generic (component engineering) to less generic (application engineering); the third axis is "Aspects (different views on the same thing)": behaviour, structure, collaboration, aspect orientation.]

Fig. 1. Model of Component-based Reuse
Granularity

Modelling is concerned with creating lower-level abstractions from higher-level abstractions (models) through adding information. Higher-level models are easier to understand for humans, and lower-level models are easier to process automatically (e.g., compiling source code). Each step that reduces abstraction to more concrete representations involves decisions that are taken according to the requirements of a development project. This is similar to inheritance, where new classes extend existing classes with additional functionality.

Verification must make sure that the new lower-level model is a correct representation of the higher-level model plus the additional information that distinguishes the higher-level from the lower-level representation. This means that the new model must not violate assumptions of the old model (inconsistencies); in other words, the assumptions and rules of the old model have to remain consistently valid in the new model. This is similar to avoiding inheritance in order to change an existing class (polymorphism), which may be considered bad practice, although this is arguable.
Automation

Automation may take place through automatic model checkers and through automatic code generators in generative programming (e.g., the German nationally funded MDTS project, www.fokus.fhg.de/mdts, and the generative programming community at ECOOP).

Aspects

Aspects represent different views on the same modelling artefacts. In UML, for example, these map to different diagrams for one component. Verification must make sure that these different models of the same thing are consistent. This means that different views must not describe different components; they must describe the same component differently. This may be carried out automatically through model checkers and consistency rules, although we are not aware of any tools that actually perform these checks (e.g., Rational Rose UML consistency checks). A toy sketch of such a check is given at the end of this subsection.

Genericity

Genericity is concerned with instantiating a generic component framework into a concrete application by assembling appropriate components according to the requirements of the planned application. This is called application engineering and is typically based on decision modelling (see the product line community). Verification must make sure that the decision models are consistent, so that they do not contradict each other. We could not establish whether anybody has attempted to automate this already.

Reuse

Reuse takes place on all three dimensions. Currently only lower-level (model/code) components are reused. However, it would be much more useful if higher-level (abstract) components could also be reused and then put through all verification and model generation steps in order to derive lower-level models (model-driven architecture). This would considerably support the modelling of domain-specific artefacts, which are much more adequately dealt with at a higher level of abstraction.
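Below is the toy illustration of a cross-view consistency check promised above. The view contents and element names are invented; real checkers operate on full UML models:

    import java.util.Set;

    // Verify that every element named in the behavioural view also exists
    // in the structural view — different views must describe the same thing.
    public class ViewConsistency {
        public static void main(String[] args) {
            Set<String> structureView = Set.of("Account", "Customer");
            Set<String> behaviourView = Set.of("Account", "Customer", "Session");

            for (String name : behaviourView) {
                if (!structureView.contains(name)) {
                    System.out.println("Inconsistent views: '" + name
                            + "' has behaviour but no structural definition");
                }
            }
        }
    }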
3.3 Modelling and Formalising Assets for Reuse
Group members: Kim Mens, Miro Casanova, Ragnhild Van Der Straeten, Dirk Deridder.

Before discussing modelling and formalising for reuse, we tried to gain a common and precise view on what we mean by reuse. We agreed that minimisation of development and maintenance effort is not the only goal; the maximisation of reliability, stability, and many other -ilities is also related to reuse. To achieve these goals, several artifacts of the software process can be reused, such as source code, design models, analysis models, and domain knowledge. Consequently, we had to clarify what kind of artifacts we are interested in for this discussion on model-based reuse.

On the one hand, reusing code is interesting because in the end that is the only thing the end user will interact with. On the other hand, it is often claimed that reuse should happen as early as possible in the software life cycle. Therefore,
we decided to discuss our topic (i.e., how can we model and formalise for reuse) for each of the different phases in the software development life cycle (or only some of them, as time did not permit us to discuss all): testing, implementation, design, architecture, requirements, etc.

In order to be able to compare the results for each of the discussed life-cycle phases, we decided to use the following fixed template during our discussion of each of the phases. For each software life-cycle phase we discussed:
– Context: In which contexts is reuse of these kinds of artifacts NOT useful?
– Models: What kinds of models are needed/useful to help us in reusing these artifacts?
– How: How will we reuse these models? Tools / formalisms / techniques / methods?
– Problems: Which specific modelling problems are encountered when reusing models at this level?
– Formalisation: What kinds of formalisms at this phase can facilitate reuse?

Using this template, we discussed the implementation, design and analysis phases. The results of each discussion are given in more detail below.

Implementation
– Context: Reuse of source code is not useful when we are dealing with domains where automatic code generation is feasible (e.g., database applications, GUI applications, etc.). When we expect that the code will have to be ported to another platform in the future, reusing code may not be possible. Another issue is the "not invented here" syndrome: even if the code is there and can be reused, software developers do not always want to reuse the code because it is not "their own". A final inhibitor for reuse at the implementation level is a lack of knowledge for reusing (e.g., missing documentation on what can and should be reused and how).
– Models: In fact, the only "models" we have at this level are the code itself (whether it is source code or executable code) and the documentation.
– How: Reuse at the implementation level typically occurs by using the programming language features available for that purpose, such as inheritance, polymorphism, interfaces and so on. Or we can just use fragments of source code or components that are available in some library.
– Problems: Sometimes there is just too much code, so that it is not easy to find the things we can or want to reuse; sometimes the code is just too obscure; sometimes the code is so specifically targeted to a particular purpose that it is difficult or irrelevant to reuse it in another context; sometimes the source code is just not available; etc. So there are indeed many potential problems related to reuse at the implementation level.
– Formalisms: There exist only a few formalisms for dealing with reuse at the code level. One is of course the use of verifiable "preconditions, postconditions and invariants" [7] (a small sketch follows this list) and another is to use the technique of "reuse contracts" [13, 12].
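A minimal design-by-contract sketch using Java's runtime assertions (enabled with java -ea). It illustrates verifiable preconditions, postconditions and invariants in the sense of [7], not the reuse-contract formalism of [13, 12]; the Account class and its contract are invented:

    // Pre/postconditions and a class invariant checked at run time.
    class Account {
        private int balance;            // invariant: balance >= 0

        void withdraw(int amount) {
            assert amount > 0 && amount <= balance : "precondition violated";
            int old = balance;
            balance -= amount;
            assert balance == old - amount && balance >= 0
                    : "postcondition/invariant violated";
        }
    }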
Design
– Context: Reuse of design is often not feasible across different domains. If the design is very much implementation-oriented, it is not always possible to reuse those designs. When the architectural style of the application is changed, it is also not possible to reuse the designs, because they are based on a specific architectural style.
– Models: The models at this level are all the kinds of traditional design diagrams, for example, UML or OMT or Fusion diagrams. Other models, especially suitable for reuse, are design patterns, as specified by [4].
– How: Reuse of design models happens by copy/paste of the model from one application design to another. Reuse can also be established by using modelling language features such as packages, refinement, etc.
– Problems: If the design models are ambiguous, we do not know whether the design is reusable in a certain context. Often design models are just not available, incomplete or even inconsistent.
– Formalisation: Some formalisms exist for enhancing reuse at the design level. The Object Constraint Language (OCL), which comes with the UML specification, allows one to write down class invariants on UML class diagrams, guards on state diagrams, etc. The Specification and Description Language (SDL) is frequently used in the telecommunication domain; it enables a precise description of the design. Other possible formalisms are decidable fragments of first-order logic such as the Description Logics family; these have powerful reasoning capacities that allow one to reason about, e.g., the consistency of designs.

Analysis
– Context: Reuse of analysis is not useful if we are confronted with different domains, as is the case in horizontal markets.
– Models: At the analysis level, the models are all kinds of traditional analysis diagrams such as UML use cases, CRC, etc. Other analysis models are ontologies and analysis patterns, as e.g. specified by [5].
– How: In this case, reuse is also done by copy/paste of models. But some modelling language features can also be used for the purpose of reuse. In the context of use cases, these are inclusion, extension and generalisation.
– Problems: The analysis requirements can be written down in natural language or in any other informal medium. In most cases these requirements are also unstructured. These are the most important problems we identified in reusing analysis.
– Formalisation: At the analysis level this is a difficult issue: users and managers want informal models, while we want formal ones. Formal methods that qualify for formalising analysis models are Z, VDM, etc.

Other Phases

In the previous sections we discussed how we can model and formalise for reuse in the implementation, design and analysis phases. Due to
timing constraints, we were not able to have a closer look at other phases like testing, architecture and requirements.

Conclusion

From the discussion we draw the following conclusions:
– Reuse can happen at any level in the software development life cycle.
– Before starting to model an application, think about what kind of reuse you want. What is important here is the context and domain in which you are working.
– Modelling language features can often help in the reuse effort.
– Formalisation can be useful for the verification and the consistency of the models. However, there is a trade-off between usability and formalisation. For example, one has to keep in mind that the end user should be able to read and understand the requirements of the application and should not be confronted with logical statements which he or she does not understand at all.

Open Issues

Some open issues also came up in this discussion. What is the link between reuse models across different life-cycle phases? And what is the link between these models within the same life-cycle phase? For us, these seem to be questions that would be worthwhile investigating further.
4 Conclusions
One of the key aspects of software reuse is the prevention of errors caused by unanticipated negative interactions. Such problems may occur when not all (important) details of the reused pieces of software are known at reuse time. Models may support awareness of the important internals and the details of the interfaces of reused code.

In the workshop, researchers and practitioners presented successful approaches to modelling software properties. The workshop participants came from the research areas of component-based software engineering, aspect-oriented programming, meta-modelling and software architecture, as well as from the application domains of enterprise systems and telecommunications. In the presentations and discussions, approaches to generalise these experiences were brought up. The relevance of traceability, and of modelling for reuse right through the whole software engineering process, was emphasised. As a result, the need for modelling languages supporting the traceability between different models (and views) more explicitly becomes clearer, as well as the relevance of software processes taking reuse as a long-term goal.

This workshop may be seen in the context of other workshops concerned with modelling and feature interaction, such as the Workshop on Feature Interaction in Composed Systems, affiliated with the European Conference on Object-Oriented Programming, Budapest, Hungary, 2001 [11], the GCSE'01 Feature Modeling Workshop,
Erfurt, 2001 [1], or the Workshop on Software Composition (SC 2002), affiliated with ETAPS 2002, April 2002, Grenoble, France [9]. Further events for presenting solutions and discussing problems of modelling and validating software are planned.
Online Proceedings

The papers and presentations of the workshop can be found online at the URL: http://research.intershop.com/workshops/ECOOP2002 or http://i44w3.info.uni-karlsruhe.de/~pulvermu/workshops/ECOOP2002/
References

[1] M. Clauß, K. Czarnecki, B. Franczyk, and A. Speck. GCSE'01 Feature Modeling Workshop, part of the Third International Symposium on Generative and Component-Based Software Engineering (GCSE 2001). Erfurt, Germany, September 2001. http://www-st.inf.tu-dresden.de/gcse-fm01/.
[2] I. Crnkovic, H. Schmidt, J. Stafford, and K. Wallnau. 4th ICSE workshop on component-based software engineering: Component certification and system prediction. In Proceedings of the 23rd International Conference on Software Engineering (ICSE-01), pages 771–772, Los Alamitos, California, May 12–19, 2001. IEEE Computer Society.
[3] Ivica Crnkovic, Stig Larsson, and Judith Stafford, editors. Workshop on Component-Based Software Engineering (in association with the 9th IEEE Conference and Workshops on Engineering of Computer-Based Systems), Lund, Sweden, April 2002.
[4] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
[5] Martin Fowler. Analysis Patterns: Reusable Object Models. Addison-Wesley, 1997.
[6] G. Kiczales. Aspect-oriented programming. ACM Computing Surveys, 28(4):154–154, December 1996.
[7] H. Klaeren, E. Pulvermüller, A. Rashid, and A. Speck. Aspect composition applying the design by contract principle. In Proceedings of GCSE'00, Second International Symposium on Generative and Component-Based Software Engineering, LNCS 2177, Erfurt, Germany, September 2000. Springer.
[8] Wolfgang Pree. Komponentenbasierte Softwareentwicklung mit Frameworks. dpunkt.verlag, Heidelberg, Germany, 1997.
[9] E. Pulvermüller, I. Borne, N. Bouraqadi, P. Cointe, and U. Assmann. Workshop on Software Composition (SC 2002), affiliated with ETAPS 2002. Grenoble, France, April 2002. http://www.easycomp.org/sc2002/.
[10] E. Pulvermüller, A. Speck, and J.O. Coplien. A version model for aspect dependency management. In Proceedings of the International Symposium on Generative and Component-based Software Engineering (GCSE 2001), LNCS 1241, pages 70–79. Springer, 2001.
[11] E. Pulvermüller, A. Speck, M. D'Hondt, W.D. De Meuter, and J.O. Coplien. Workshop on Feature Interaction in Composed Systems, ECOOP 2001. Budapest, Hungary, June 2001. http://i44w3.info.uni-karlsruhe.de/~pulvermu/workshops/ecoop2001/.
[12] Ralf H. Reussner and Heinz W. Schmidt. Using parameterised contracts to predict properties of component based software architectures. In Ivica Crnkovic, Stig Larsson, and Judith Stafford, editors, Workshop on Component-Based Software Engineering (in association with the 9th IEEE Conference and Workshops on Engineering of Computer-Based Systems), Lund, Sweden, April 2002.
[13] P. Steyaert, C. Lucas, K. Mens, and T. D'Hondt. Reuse contracts: Managing the evolution of reusable assets. In Proceedings of the OOPSLA 1996 Conference on Object-Oriented Programming, Systems, Languages and Applications, number 31(10) in ACM SIGPLAN Notices, pages 268–285. ACM Press, 1996.
[14] Antje von Knethen. Change-oriented requirements traceability support for evolution of embedded systems. Fraunhofer IRB Verlag, 2002.
Quantitative Approaches in Object-Oriented Software Engineering

Mario Piattini¹, Fernando Brito e Abreu², Geert Poels³, and Houari A. Sahraoui⁴

¹ Universidad de Castilla-La Mancha, Ronda de Calatrava, 5, 13071 Ciudad Real, Spain. [email protected]
² Universidade Nova de Lisboa & INESC, Rua Alves Redol, 9, Apartado 13069, 1000 Lisboa, Portugal. [email protected] - [email protected]
³ Katholieke Universiteit Leuven & VLEKHO Business School, Koningsstraat 336, 1030 Brussels, Belgium. [email protected] - [email protected]
⁴ Université de Montréal, CP 6128, succ. Centre-Ville, Montréal (Québec) H3C 3J7, Canada. [email protected]
Abstract. This report summarizes the contributions and debates of the 6th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2002), which was held in Málaga on 11 June 2002. The objective of the QAOOSE workshop series is to present, discuss and encourage the use of quantitative methods in object-oriented software engineering research and practice. This year's workshop included the presentation of eleven position papers in the areas of "software metrics definition", "software size, complexity and quality assessment", and "software quality prediction models". The discussion sessions focused on current problems and future research directions in QAOOSE.
1. Introduction

The 6th International ECOOP Workshop on Quantitative Approaches in Object-Oriented Software Engineering (QAOOSE 2002) was a direct continuation of five successful workshops, held at previous editions of ECOOP in Budapest (2001), Cannes (2000), Lisbon (1999), Brussels (1998) and Aarhus (1995). Quantitative approaches in the OO field constitute a broad but active research area that aims at the development and/or evaluation of methods, techniques, tools and practical guidelines to improve the quality of software products and the efficiency and effectiveness of software processes. The relevant research topics are diverse, but always include a strong focus on applying a scientific methodology based on data collection (either by objective measurements or subjective assessments) and analysis (by statistical or artificial intelligence techniques).
As in previous years, the workshop attracted participants from academia and industry who are involved or interested in the application of quantitative methods in object-oriented software engineering research and practice. Thirteen papers were accepted for inclusion in the proceedings¹ and eleven were presented at the workshop. There were nineteen participants, mainly from Europe (Belgium, France, Spain, Italy, Portugal, Switzerland, U.K., Austria), but also from the USA, Canada and Japan, most of them with an academic background. The workshop was structured as follows: the position papers were organized into four presentation sessions, each presentation consisting of 20 minutes of talk and 10 minutes of questions and discussion; a final discussion took place at the end of the workshop. This report is organized as follows. In Sections 2, 3, 4 and 5 we present a summary of this year's workshop contributions, with comments and some conclusions elaborated during the discussion of the papers. In Section 6 we present the workshop conclusions and the open issues that were identified.
2. Session I

2.1. Fast&&Serious: A UML Based Metric for Effort Estimation

M. Carbone and G. Santucci (Università degli Studi di Roma La Sapienza, Italy).

In this paper the authors present a new method to estimate the size of a software project developed following the object-oriented paradigm. The method is designed to work with a set of UML diagrams describing the most important system features, and calculates the complexity of a system in terms of source lines of code. The counting process is done by automatically extracting information about the UML diagrams from Rational Rose petal files. The discussion of this work focused on the need to justify the weights used in the model, as the proposed estimation model relies heavily on intuition. It was also argued that the authors should include the network of associations and other relationships among classes in their estimation model, since those dependencies are at the essence of software complexity. Intensive validation experiments are necessary to refine the proposed method.
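To make the flavour of such a counting model concrete, here is a minimal sketch; it is not the authors' actual Fast&&Serious method, and the per-class, per-attribute and per-method weights are invented for illustration. Calibrating such weights empirically is precisely the open issue raised in the discussion.

// Hypothetical sketch (not the authors' tool): a weighted-sum size estimate
// computed from counts that could be extracted from a UML class diagram.
#include <iostream>
#include <vector>

struct ClassInfo {
    int attributes;  // number of attributes in the class
    int methods;     // number of operations in the class
};

// Estimate source lines of code from per-class counts using fixed weights.
int estimateSloc(const std::vector<ClassInfo>& classes) {
    const int baseLinesPerClass = 20;  // illustrative weight
    const int linesPerAttribute = 2;   // illustrative weight
    const int linesPerMethod    = 12;  // illustrative weight
    int total = 0;
    for (const ClassInfo& c : classes)
        total += baseLinesPerClass
               + linesPerAttribute * c.attributes
               + linesPerMethod * c.methods;
    return total;
}

int main() {
    std::vector<ClassInfo> model = {{4, 6}, {2, 3}, {10, 12}};
    std::cout << "Estimated SLOC: " << estimateSloc(model) << "\n";
}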
¹ All contributions accepted to the workshop were published in the workshop's proceedings. To obtain a copy of the proceedings (ISBN 84-699-8696-1), contact the executive editor (Mario Piattini, [email protected]). Electronic versions of the position papers can also be downloaded from the workshop's web pages (http://alarcos.inf-cr.uclm.es/qaoose2002/index.htm).
2.2. µcROSE: Functional Size Measurement for Rational Rose RealTime

H. Diab, F. Koukane, M. Frappier, R. St-Denis (Université de Sherbrooke, Canada).

Estimation was also the subject of the second paper, which described µcROSE, a tool that automatically measures functional software size, as defined by the COSMIC-FFP measure, for Rational Rose RealTime models. COSMIC-FFP is an adaptation of the function point measure for real-time systems. It can be used for effort estimation, productivity evaluation and benchmarking. µcROSE is integrated with the Rational Rose RealTime toolset. The discussion about this paper focused on the mapping between Rational Rose RealTime (RRRT) models and the COSMIC-FFP method; it emerged that this mapping is the most difficult aspect of the proposal and that it still deserves more research attention. The authors were asked whether they had considered using the proposed method to decide between different design alternatives, using the information on the corresponding effort.

2.3. Combining and Adapting Software Quality Predictive Models

S. Bouktif, B. Kegl, H. Sahraoui (Université de Montréal, Canada).

This paper proposes an evolutionary approach for combining and adapting existing software quality predictive models (experts) to a particular context. The resulting model can be interpreted as a meta-expert that selects the best expert for each given task. This notion corresponds well to the real world, in which individual predictive models, coming from heterogeneous sources, are not universal and depend largely on the underlying data. The authors' preliminary results show that a combination of models can perform significantly better than the individual experts. This kind of model could also help to improve the stability of Java interfaces, as the weakest attributes can be identified and the most suitable refactorings proposed. The authors clarified that they can use any kind of ordinal scale, with any number of labels, for classification. It was noted that comparing genetic algorithms with other methods for predicting quality characteristics would be a very interesting line of work.
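The meta-expert notion can be made concrete with a small sketch. The following is hypothetical and deliberately simplistic, not the authors' evolutionary algorithm: each expert carries a prediction function and an estimate of its own accuracy for a given input, and the meta-expert delegates to the expert with the highest estimated accuracy.

// Hypothetical sketch of the meta-expert idea: route each input to the
// expert that is expected to perform best on inputs of that kind.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Expert {
    std::string name;
    std::function<bool(double)> predict;     // e.g. "is this class fault-prone?"
    std::function<double(double)> accuracy;  // estimated accuracy for an input
};

bool metaPredict(const std::vector<Expert>& experts, double input) {
    const Expert* best = &experts.front();
    for (const Expert& e : experts)
        if (e.accuracy(input) > best->accuracy(input)) best = &e;
    std::cout << "delegating to " << best->name << "\n";
    return best->predict(input);
}

int main() {
    std::vector<Expert> experts = {
        {"small-classes model",
         [](double size) { return size > 15; },
         [](double size) { return size < 50 ? 0.8 : 0.5; }},
        {"large-classes model",
         [](double size) { return size > 40; },
         [](double size) { return size < 50 ? 0.6 : 0.9; }},
    };
    std::cout << metaPredict(experts, 120.0) << "\n";  // uses the second expert
}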
3. Session II

3.1. Implementing Automatic Quality Verification of Requirements with XML and XSLT

A. Durán, A. Ruiz, M. Toro (Universidad de Sevilla, Spain).

In their presentation the authors propose a tool that uses XML and XSLT to implement requirements verification heuristics for discovering hidden conflicts
and defects in natural language requirements. For the verification of use cases the authors define several metrics and propose value ranges outside of which a misuse of use case elements is indicated. It was highlighted that the tool does not support a real validation, because it does not automate the verification of the semantic properties of the requirements. Support for requirements negotiation was suggested as a very interesting extension of the tool.

3.2. From Objects to Components: A Quantitative Approach

M. Goulao, F. Brito e Abreu (INESC-ID, Portugal).

This paper describes an experiment that compares component-based development with traditional object-oriented development. An application for the customisation and visualization of 3D functions was transformed into components using two different technologies: Borland Delphi and Visual C++. One of the conclusions of the experiment was that Object Pascal offers a less abrupt paradigm shift from OOP to COP than Visual C++. The experiment also reflects the learning difficulties raised by the adoption of COP. As the authors themselves remark, more ambitious experiments have to be carried out before generalizations can be made, using different subjects and systems of different sizes and levels. It is also difficult to know exactly what should be compared: component-based development should not be compared just to object-oriented programming languages, but also to other modularisation techniques used in OOP.

3.3. Quality Attributes for COTS Components

M. F. Bertoa, A. Vallecillo (Universidad de Málaga, Spain).

The authors underline the lack of appropriate quality models that allow an effective assessment of COTS components. They propose a quality model based on the ISO 9126 standard, with six quality characteristics (functionality, reliability, usability, efficiency, maintainability and portability) and several subcharacteristics. In this paper they also propose different metrics for these subcharacteristics. Both the model and the metrics should be validated and refined, even if component stakeholders will probably never reach a complete agreement on component quality attributes.
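As a toy illustration of the kind of quality model proposed in 3.3, a COTS component could be scored by weighting assessments of the six ISO 9126 characteristics. The scores, weights and linear aggregation below are invented for the example and are not taken from the paper; defining objective metrics for the subcharacteristics is exactly the hard part the authors address.

// Toy ISO 9126-style scoring of a COTS component (illustrative only).
#include <iostream>
#include <map>
#include <string>

int main() {
    // Assessed scores in [0,1] per characteristic (hypothetical values).
    std::map<std::string, double> scores = {
        {"functionality", 0.9}, {"reliability", 0.7}, {"usability", 0.6},
        {"efficiency", 0.8}, {"maintainability", 0.5}, {"portability", 0.7}};
    // Stakeholder-specific weights (hypothetical; they sum to 1).
    std::map<std::string, double> weights = {
        {"functionality", 0.3}, {"reliability", 0.2}, {"usability", 0.1},
        {"efficiency", 0.1}, {"maintainability", 0.2}, {"portability", 0.1}};
    double overall = 0.0;
    for (const auto& [name, score] : scores) overall += weights[name] * score;
    std::cout << "Overall quality score: " << overall << "\n";
}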
4. Session III

4.1. Developing Software Metrics Applicable to UML Models

H. Kim, C. Boldyreff (City University & University of Durham, UK).

In this paper, the authors propose some new software metrics that can be applied to UML modeling elements such as classes and messages. They claim that
these metrics can be used to predict various characteristics at the early stages of the software life cycle. Moreover, the authors developed a CASE tool on top of Rational Rose that supports metric collection, and they provide some examples using it. The proposed metrics are meant to evaluate size and complexity, but a more specific goal should perhaps be defined. The authors started with the Chidamber and Kemerer metrics and adapted them to UML, but they did not take into account more recent proposals for measuring UML models. Another issue raised regarding the metrics is their collection for large systems.

4.2. Beyond Language Independent Object-Oriented Metrics: Model Independent Metrics

M. Lanza, S. Ducasse (Université de Berne, Switzerland).

The authors propose a metamodel to define domain-independent metrics. This metamodel is part of the Moose Reengineering Environment. This work is an extension of previous work on the language-independent FAMIX metamodel. The authors define new metrics because they want a metrics engine that works independently of the metamodel, avoiding having to extend the metrics engine each time a new metamodel is defined. Almost 50% of the metrics were defined for reengineering purposes; the rest are still unexploited. The authors were also asked about their use of 2D visualization instead of 3D. They answered that the sheer size of the case studies did not allow them to go into complicated methods. They preferred to use fast-and-dirty approaches and throw away those that did not work, but 3D is certainly a topic that could and should be explored further.

4.3. Embedding Quantification Techniques to Development Methods

M. Saeki (Tokyo Institute of Technology, Japan).

In this paper, the author proposes a general framework for extending an existing method with activities for attaching semantic information to the artifacts and for measuring their quality using this information. This is done by using a metamodeling technique: semantic information on the artifacts and the definition of the measures are formally defined on a metamodel. The author clarified that the presented metrics are only examples to illustrate the approach and that for real methods, sophisticated weighting methods are needed. The approach leaves a lot of freedom to the method designer, and more guidance is needed to help him or her define the method and the metrics. A possible extension to collect metrics on the process, and not only on the artifacts, would also be interesting.
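The metamodel-driven view shared by 4.2 and 4.3 can be sketched as follows. This is an illustrative toy, not the Moose engine or Saeki's framework: metrics are written against a minimal element interface, so the same metrics engine works for any concrete metamodel that implements that interface.

// Sketch: a metrics engine decoupled from the concrete metamodel.
#include <iostream>
#include <string>
#include <vector>

// Minimal metamodel interface: every model element has a kind and children.
struct Element {
    virtual ~Element() = default;
    virtual std::string kind() const = 0;
    virtual std::vector<const Element*> children() const = 0;
};

// A generic metric, written once against the interface.
int countKind(const Element& root, const std::string& kind) {
    int n = (root.kind() == kind) ? 1 : 0;
    for (const Element* c : root.children()) n += countKind(*c, kind);
    return n;
}

// One concrete (toy) metamodel: classes containing methods.
struct Method : Element {
    std::string kind() const override { return "method"; }
    std::vector<const Element*> children() const override { return {}; }
};
struct Class : Element {
    std::vector<Method> methods;
    std::string kind() const override { return "class"; }
    std::vector<const Element*> children() const override {
        std::vector<const Element*> cs;
        for (const Method& m : methods) cs.push_back(&m);
        return cs;
    }
};

int main() {
    Class c;
    c.methods.resize(3);
    std::cout << "NOM = " << countKind(c, "method") << "\n";  // prints 3
}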
5. Session IV

5.1. Using OCL to Formalize Object-Oriented Design Metrics Definitions

A. L. Baroni, S. Braz, F. Brito e Abreu (Univ. Nova de Lisboa, Portugal; Ecole des Mines de Nantes, France; Univ. da Beira Interior, Portugal).

This paper describes an effort to formalize different sets of object-oriented metrics definitions using the Object Constraint Language (OCL) and the UML meta-model. The approach followed by the authors allows unambiguous metrics definition and provides a good basis for tool support. Other formalization techniques could be used for formalizing metrics definitions, but the authors think that most software engineers are used to working with UML and OCL, so using these techniques in metrics definition is an advantage.

5.2. Measuring the Whole Software Process: A Simple Metric Data Exchange Format and Protocol

M. Auer (Technical University of Vienna, Austria).

The author proposes a metric data exchange format and a data exchange protocol to communicate metric data. He also proposes a metric hub, implemented as a Web service interface to a metric repository, and metric connectors which translate a proprietary metric data format into the metric exchange format.

Two additional position papers were accepted, but not presented at the workshop:

– Going from Multiple to Single Inheritance with Metrics. C. Roume (LIRMM, France).
– Defining and Validating Metrics for UML Statechart Diagrams. M. Genero, D. Miranda, M. Piattini (University of Castilla-La Mancha, Spain).
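Returning to the proposal of 5.2: as a purely hypothetical illustration of what a metric data exchange record might look like (the paper's actual format and protocol are not reproduced here), a metric connector could normalize tool-specific data into simple records before forwarding them to the metric hub.

// Hypothetical metric exchange record and a trivial serializer.
#include <iostream>
#include <string>

struct MetricRecord {
    std::string entity;  // e.g. a class or package name
    std::string metric;  // e.g. "NOM", "LOC"
    double value;
    std::string source;  // producing tool
};

std::string serialize(const MetricRecord& r) {
    return r.entity + ";" + r.metric + ";" + std::to_string(r.value) +
           ";" + r.source;
}

int main() {
    MetricRecord r{"Order", "NOM", 12, "toy-extractor"};
    std::cout << serialize(r) << "\n";
}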
6. Workshop Conclusions

Throughout the workshop several issues arose that have recurred in every edition, such as: the importance of defining clear goals for the different metrics suites, the requirement to plan and carry out experiments rigorously, the need for internal and external replication of experiments, the difficulty of calibrating the weights used in the models, the lack of data, which makes it hard to generalize and to reuse existing models, the application of metrics by industry, and the importance of having tools that automate metric collection and suggest alternatives in OO design. In this edition the workshop participants stressed the importance of the help that quantitative approaches can provide in understanding the impact of new
technologies (especially components) on the software development process and on software quality. Quality models and metrics for multidimensional aspects of components are starting to be defined, and they could be the basis for better component selection and evaluation processes and tools. However, we must take into account that not all characteristics of components can be assessed in an objective way or by third parties; some of them are also not provided by the component vendors. Regarding metrics for components, the problem is that we must work at the right abstraction level, considering a component at the design level rather than at the code level. In this respect, some of the problems and approaches encountered in metrics for patterns could be useful when defining and validating metrics for components. Technical aspects are not the only ones in component-based development; legal aspects, for example (which are out of the scope of metrics), are very important. The ever-increasing importance of components in software development suggests that perhaps we should revise the name of the workshop for the next edition and think about quantitative approaches to component-based software engineering.

The other big issue of this workshop was the usage of metamodels for metric definition and tool construction. Metamodels, just like formal notations, can be used to describe metrics more precisely. Metamodels can also offer a level of independence from OO languages, although a metamodel is not really language independent, since only the syntax is unified and not the semantics. Some problems in using UML to specify metamodels were also pointed out by the participants in the workshop. It was also remarked that most of the proposed metamodels address only static properties, so it will be necessary to work more deeply on metamodels for dynamic metrics, which are traditionally disregarded compared to static ones.

Other themes discussed during the workshop were the nonexistence of universal models valid for different organizations and the need to select and define particular models for an organization, which is a very difficult task, as well as the powerful combination of metrics and visualization for management purposes.
Multiparadigm Programming with Object-Oriented Languages

Kei Davis¹, Yannis Smaragdakis², and Jörg Striegnitz³

¹ Los Alamos National Laboratory, [email protected] - http://www.c3.lanl.gov/~kei/
² College of Computing, Georgia Institute of Technology, [email protected] - http://www.cc.gatech.edu/~yannis/
³ John von Neumann Institute for Computing, Research Center Juelich, [email protected] - http://www.fz-juelich.de/zam/ZAMPeople/striegnitz.html
Abstract. While OO has become ubiquitously employed for design, implementation, and even conceptualization, many practitioners recognize the concomitant need for other programming paradigms according to problem domain. Nevertheless, the choice of a programming paradigm is strongly influenced by the supporting programming language facilities. In turn, choice of programming language is usually a practical matter; one cannot generally afford to use a language not in the mainstream. We seek answers to the question of how to address the need for other programming paradigms in the general context of OO languages. Can OO programming languages effectively support other programming paradigms? The tentative answer seems to be affirmative, at least for some paradigms; for example, significant progress has been made for the case of (higher order, polymorphic) functional programming in C++. We show that there are many such efforts extant, including theoretical treatments, language implementations, and practical (application) implementations.
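As a flavour of the higher-order, polymorphic functional style in C++ that the abstract refers to, here is a small illustrative sketch. It uses modern C++ lambdas for brevity; work of the period achieved the same effect with templates and hand-written function objects (e.g. in libraries such as FC++), and the sketch is not taken from any particular workshop paper.

// Higher-order, polymorphic functions in C++: map is polymorphic in both
// the element type and the function applied; compose builds new functions.
#include <iostream>
#include <vector>

template <typename T, typename F>
auto map(const std::vector<T>& xs, F f) {
    std::vector<decltype(f(xs.front()))> ys;
    for (const T& x : xs) ys.push_back(f(x));
    return ys;
}

template <typename F, typename G>
auto compose(F f, G g) {
    return [=](auto x) { return f(g(x)); };
}

int main() {
    auto inc = [](int x) { return x + 1; };
    auto dbl = [](int x) { return x * 2; };
    for (int y : map(std::vector<int>{1, 2, 3}, compose(inc, dbl)))
        std::cout << y << " ";  // prints 3 5 7
    std::cout << "\n";
}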
1. Introduction

Although the idea of combining programming paradigms and languages goes back to the late sixties, when much research effort was expended on the development and investigation of extensible programming languages, this approach has lost neither its importance nor its elegance. Programming languages and paradigms are thought models. Their distinguishing concepts have a great influence on how programmers approach the different stages of the software development process, and in practice even the algorithm design process. With respect to software quality it is therefore desirable to let the problem domain determine the choice of programming paradigm. Especially for larger software projects, this implies the need for languages and tools that support the simultaneous use of different programming paradigms. Today the object-oriented programming paradigm is dominant and ubiquitously employed for design and implementation, and often in conceptualization;
a huge collection of tools has been developed and successfully applied over the last two decades. MPOOL attempts to bring together people who are trying to build a bridge from OO-centered tools to a toolset that permits free choice of paradigms. Last year's MPOOL [3, 5] gave evidence that there exists a larger community working in this emerging area. The extensive diversity of topics of this year's workshop shows that there is an ongoing advance in programming languages, tools, concepts, and methodologies to support multiparadigm programming. One of the main goals of the workshop is to promote and expose work that combines programming paradigms in the framework of OO languages. Building a consensus about the standard background work, the interesting problems, and the future directions is the way to form a community.

The object-oriented paradigm is well-suited to implementation of, and extension to include, other programming paradigms. Our previous year's MPOOL workshop at ECOOP'01 bore out our hypothesis that there are many such efforts extant, including theoretical treatments, language implementations, practical (application) implementations, and existing [2] and upcoming textbooks on multi-paradigm programming [6]. A workshop still seems the best forum for such an exploratory activity; since OO has a large and disparate following, and any practice worthy of being labeled 'multi-paradigm' is nearly certain of incorporating OO, holding such a workshop as an adjunct to a major OO conference seemed appropriate. ECOOP has been chosen over its American counterpart OOPSLA because of its wider scope. This ongoing workshop seeks to bring together practitioners and researchers in this emerging field to 'compare notes' on their work: describe existing, developing, or proposed techniques, idioms, methodologies, language extensions, or software for expressing non-OO paradigms in OO languages, or theoretical work supporting or defining the same. Work-in-progress reports are welcomed.

The organizers may not have yet reached a consensus fixpoint for the intended scope of 'multi-paradigm programming.' From the outset we agreed that this meant programming with multiple paradigms within a single programming language and not using separate languages to express parts of a larger whole (though we later relaxed this restriction to allow language cross-binding). To narrow the potential field we chose to use OO languages as the base languages for expressing multiple paradigms; this immediately led to the question of the eligibility of language extensions, pre-processing, meta-programming, and other supra- and extra-lingual facilities. We particularly sought theoretical considerations of multi-paradigm programming. Following is the current span we wish to (non-exclusively) include:

– Non-OO programming with OO languages;
– Merging functional/logic/OO/other programs (language cross-binding);
– Non-OO programming at the meta level (e.g. template metaprogramming);
– Module systems vs. object systems;
– OO design patterns and their relation to functional patterns;
– Type system relationships across languages;
– Theoretical foundations of multiparadigm programming with OO languages.

This year's workshop featured a program committee consisting of established researchers in the area. Other than the workshop organizers, the program committee consisted of (alphabetically) Gerald Baumgartner (Ohio State University), Jaakko Järvi (Indiana University) and Peter Van Roy (Université Catholique de Louvain). Response to the call for contributions was again good. The program committee had to reject some submissions in the interests of quality, central relevance, and allowance of adequate time for open discussion.

The workshop program comprised four sessions, all interleaved with informal working coffee breaks and lunch. Each session was originally planned to consist of two half-hour talks followed by a half-hour discussion. The plan changed slightly in practice: a presentation by one of the organizers (Jörg Striegnitz) replaced a scheduled talk by Hubert Dubois, who could not attend due to urgent job commitments. Discussion was interleaved with talks, instead of following both talks of a session. Thus, effectively each talk consisted of 25 minutes of presentation interleaved with intense discussion. There was a consensus on a number of general points regarding multi-paradigm programming, in particular: it is viable as an ongoing focus by practitioners and it is a practical means of solving real-world programming problems.

Submissions were of sufficiently high quality that independent publication in full was warranted; they appear as a collection [4]. Technical material, topics of general interest, calls for papers and other announcements, etc., are archived at the permanent WWW site [1]. Participation and numbers of participants were judged to warrant the continuation of similar meetings at venues to be determined. There continues to be conviction that, with the increasingly sophisticated nature of computing, multi-paradigm programming has come of age as a legitimate topic in its own right.

Papers were grouped into sessions by content; the four sections following provide a very brief summary of each session. Those interested in full technical detail are commended to the proceedings [4].
2. Multiparadigm Programming and Computer Algebra Systems

2.1. Multiparadigm Programming (Introduction)

Jörg Striegnitz, one of the organizers, presented his view of multiparadigm programming, in analogy to human languages, with the discussion ranging from anthropology to formal semantics. This talk gave rise to interesting interactions that helped "break the ice" and begin discussions that would carry on to the rest of the workshop.
2.2. Modelica — A Declarative Object-Oriented Multiparadigm Language (Peter Fritzson, Peter Bunus)

The second presentation of the session was that of Modelica, a multiparadigm language for continuous and discrete-event specification of complex systems. Modelica is enjoying recognition in the systems modeling community, and the presentation concentrated on how the Modelica constructs should be viewed from a programming languages perspective. The Modelica programming model was argued to be fundamentally multiparadigm, supporting functional as well as object-oriented techniques. The audience questions helped analyze Modelica from a language design standpoint and discuss the ways a programmer might use the system.
3. New Languages to Support Multiparadigm Programming 3.1. The Return of Jensen’s Device (Timothy A. Budd) Timothy Budd presented a proposal for a multiparadigm language in the tradition of Leda, based on Java. The title of the talk refers to the emphasis on call-by-name semantics in the language design. Since several of the workshop attendees (e.g., the organizers, as well as Gerald Baumgartner of the program committee) have worked in similar directions before, the presentation resulted in a very lively discussion on language design and implementation issues. Such issues, for instance, concerned syntactic extensions for adding pattern matching to Java, adding more complete logic programming support to OO languages, etc. The presentation also tied nicely to two papers presented in last year’s MPOOL workshop (the Brew paper by Baumgartner, Jansche, and Peisert and the Extensible Compilers paper of Zenger and Odersky).
3.2. Towards Linguistic Symbiosis of an Object-Oriented and a Logic Programming Language (Johan Brichau, Kris Gybels, Roel Wuyts)

Integrating logic programming into an object-oriented programming language was the topic of the next presentation. This topic was related to a paper from last year by Wuyts and Ducasse. This time the emphasis was not so much on using logic programming for reflection but on the general syntactic and semantic choices associated with integrating logic and object-oriented programming. Such choices, for instance, include how to represent the multiple results of a logic query from the perspective of an object-oriented language (especially if the result space is large), as well as how to treat object-oriented code in a logic expression. This discussion tied very well to the discussion following the previous presentation in the session.
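One of the design questions discussed, how an object-oriented program should consume the multiple solutions of a logic query, can be sketched as follows. This is a hypothetical illustration, not the authors' design: the logic engine is faked with canned answers, and the solutions are exposed to the OO side as an ordinary collection of variable bindings.

// Sketch: the OO side iterates over the solutions of a logic query.
#include <iostream>
#include <map>
#include <string>
#include <vector>

using Binding = std::map<std::string, std::string>;  // variable -> value

// Stand-in for a logic engine: returns every binding satisfying the query.
// A real symbiosis would compute these by resolution over a rule base.
std::vector<Binding> solve(const std::string& query) {
    if (query == "parent(X, john)")
        return {{{"X", "mary"}}, {{"X", "peter"}}};
    return {};
}

int main() {
    for (const Binding& b : solve("parent(X, john)"))
        std::cout << "X = " << b.at("X") << "\n";
}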
4. Multiparadigm Programming with C++

4.1. Replacing Refinement or Refining Replacement? (Sibylle Schupp)

The first presentation of this session concentrated on emulating different semantics for inheritance in object-oriented languages. Much has been written about the different semantic options when a child class inherits methods from a parent class. This presentation concentrated on simulating general "refinement" semantics for inheritance by disciplined use of a C++ language framework. An interesting discussion followed on the pragmatics of using the framework in C++ and potential variations in the framework implementation.

4.2. Java-Style Variable Binding in C++ (Thomas Becker)

Thomas Becker's presentation began with a riddle: the attendees had to guess the output of a certain C++ code fragment. After demonstrating that even C++ experts occasionally have trouble interpreting C++ code that mixes call-by-value and call-by-reference, the presentation argued that one can emulate Java calling semantics in C++. The simulation is valuable in a constrained environment where a single class needs to be passed around often, is commonly exported to non-expert programmers, and should have reference-like semantics (either C++ true by-reference semantics, or Java reference-value semantics) for efficiency and consistency under aliasing. The speaker motivated the problem very well by presenting the example of a financial application and Portfolio objects that needed to be passed by reference. The presentation prompted a lively discussion on language calling semantics.
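The general technique behind such a simulation can be sketched with a shared-state handle: copying the handle aliases the underlying object, which is roughly Java's reference semantics. This is only an illustration in modern C++, not Becker's actual class; the Portfolio name is borrowed from the talk's motivating example.

// Sketch: a handle whose copies share one underlying object.
#include <iostream>
#include <memory>

class Portfolio {
    std::shared_ptr<double> value_;  // shared state: copies alias it
public:
    Portfolio() : value_(std::make_shared<double>(0.0)) {}
    void deposit(double amount) { *value_ += amount; }
    double value() const { return *value_; }
};

int main() {
    Portfolio a;
    Portfolio b = a;  // copies the handle, not the underlying state
    b.deposit(100.0);
    std::cout << a.value() << "\n";  // prints 100: a and b alias one object
}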
5. Applications

5.1. Dynamic Inheritance for a Prototype-Based Language (Koji Kagawa)

Koji Kagawa's presentation argued that ECMAScript (aka JavaScript) can be viewed as a multiparadigm programming language, supporting both functional and object-oriented programming. The speaker also proposed language extensions for better supporting lazy evaluation and logic programming. This presentation was an interesting way of examining a language unknown to most attendees from a multiparadigm standpoint. There was significant interest both in the ways to use JavaScript as a functional language and in the ideas on what could be added for further support.

5.2. A Case in Multiparadigm Programming: User Interfaces by Means of Declarative Meta-programming (S. Goderis, W. De Meuter, J. Brichau)

The final presentation (by Sofie Goderis) concerned the use of a declarative programming language that is integrated with an object-oriented language. The
declarative specification in the SOUL meta-programming language defines a user interface (i.e., graphical elements and their organization) while the object-oriented code is used to write an application controlled by the user interface.
6. Workshop Conclusions

The theme of this year's workshop was community through education. We were fortunate to have a number of PhD students in attendance, and the paper presentations inspired prolonged discussion. Each paper was allocated at least 45 minutes, of which the presenter was expected to take up at most 25 minutes. Even with this generous time allotment for discussion, many conversations had to be cut short. With the benefit of several experienced researchers among the workshop participants, the conversation topics ranged all the way from language design and semantics to philosophy and linguistics. The attendance of senior participants (e.g., Timothy Budd of Oregon State University, author of multiple books, including "Multiparadigm Programming with LEDA") was hopefully instructive and inspirational to the younger attendees. We hope that MPOOL can help plant the seeds of a community with a long-term interest in promoting multiparadigm programming. The interaction of workshop attendees was an important step in this direction.
References

[1] http://www.multiparadigm.org.
[2] Timothy A. Budd. Multiparadigm Programming in Leda. Addison-Wesley, 1995.
[3] Kei Davis, Yannis Smaragdakis, and Jörg Striegnitz, editors. Multiparadigm Programming with Object-Oriented Languages, volume 7 of NIC Series. John von Neumann Institute for Computing (NIC), 2001.
[4] Kei Davis, Yannis Smaragdakis, and Jörg Striegnitz, editors. Multiparadigm Programming with Object-Oriented Languages, volume 13 of NIC Series. John von Neumann Institute for Computing (NIC), 2002.
[5] Ákos Fröhner, editor. Object-Oriented Technology (ECOOP 2001 Workshop Reader). LNCS 2323. Springer-Verlag, Kaiserslautern, Germany, June 2001.
[6] Peter Van Roy and Seif Haridi. Concepts, Techniques, and Models of Computer Programming. See http://www.info.ucl.ac.be/~pvr/.
Knowledge-Based Object-Oriented Software Engineering

Maja D'Hondt¹, Kim Mens², and Ellen Van Paesschen³

¹ System and Software Engineering Lab, Vrije Universiteit Brussel, [email protected] - http://ssel.vub.ac.be/members/maja/
² Department of Computing Science and Engineering, Université catholique de Louvain, [email protected] - http://www.info.ucl.ac.be/people/cvmens.html
³ Programming Technology Lab, Vrije Universiteit Brussel, [email protected] - http://prog.vub.ac.be/
Abstract. The complexity of software domains, such as the financial industry, television and radio broadcasting, hospital management and the rental business, is steadily increasing, and knowledge management of businesses is becoming more important with the demand for capturing business processes. On the other hand, the volatility of software development expertise needs to be reduced. These are symptoms of a very significant tendency towards making knowledge of different kinds explicit: knowledge about the domain or the business, knowledge about developing software, and even meta-knowledge about these kinds of knowledge. Examples of approaches that are directly related to this tendency, or that could contribute to it, are knowledge engineering, ontologies, conceptual modeling, domain analysis and domain engineering, business rules, workflow management, and research presented at the conferences and journals on Software Engineering and Knowledge Engineering and on Automated Software Engineering, formerly known as Knowledge-Based Software Engineering. Whereas this community has for years contributed to research in knowledge engineering applied to software engineering, and vice versa, this workshop intended to focus on approaches for using explicit knowledge in various ways and in any of the tasks involved in object-oriented programming and software engineering. Another goal of this workshop was to bridge the gap between the aforementioned community and the ECOOP community. On the one hand, this workshop was a platform for researchers interested in the symbiosis of knowledge-based or related methods and technologies with object-oriented programming or software development. On the other hand, it welcomed practitioners confronted with the problems of developing knowledge-intensive software, and their approaches to tackling them. The workshop's URL is http://ssel.vub.ac.be/kboose/.
1 Introduction
Knowledge in software applications is becoming more significant because the domains of many software applications are inherently knowledge-intensive and this knowledge is often not explicitly dealt with in software development. This impedes maintenance and reuse. Moreover, it is generally known that developing software requires expertise and experience, which are currently also implicit and could be made more tangible and reusable using knowledge-based or related techniques. These are general examples of how using explicit knowledge in a multitude of ways and in all phases of software development can be advantageous.

Since object-orientation is derived from frames in frame-based knowledge representations in Artificial Intelligence, object-oriented software development in itself has certain advantages for making knowledge explicit. A conceptual class diagram, for example, models domain knowledge. Also, object-oriented programs can be designed in such a way that certain knowledge is represented explicitly and more or less separated from other implementation issues. However, knowledge might require a more declarative representation, such as constraints or rules, which requires augmenting object-oriented software development with these representations (a small sketch of this idea follows at the end of this section). Examples are UML's Object Constraint Language, or recent trends in using business rules and ontologies.

This workshop is a contribution to ECOOP because knowledge-based or related approaches towards object-oriented programming and software development are under-represented in this conference series. Object-orientation facilitates a more conceptual way of capturing information or knowledge, but current object-oriented software development methodologies do not take advantage of this property.
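A minimal sketch of this idea, invented here for illustration rather than taken from any position paper: a domain constraint is represented as a named, first-class object, in the spirit of an OCL invariant or a business rule, instead of being buried inside method bodies.

// Sketch: an explicit, named constraint over a domain object.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct Account {
    double balance = 0.0;
};

struct Constraint {
    std::string name;
    std::function<bool(const Account&)> holds;
};

int main() {
    std::vector<Constraint> invariants = {
        {"balance must be non-negative",
         [](const Account& a) { return a.balance >= 0.0; }},
    };
    Account acc{-50.0};
    for (const Constraint& c : invariants)
        if (!c.holds(acc)) std::cout << "violated: " << c.name << "\n";
}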
2 Workshop Goals and Topics
With this workshop we tried to provide a platform for researchers and practitioners interested in knowledge-based or related approaches for object-oriented programming or software development, to pursue and encourage research and applications in this area, and to bridge the gap between the ECOOP community and the long-standing community active in the conferences on Software Engineering and Knowledge Engineering (SEKE) and Automated Software Engineering (ASE), formerly known as Knowledge-Based Software Engineering. More specifically, this workshop wanted to address, among others, the following:

– identify and characterise
  • object-oriented engineering tasks that can benefit from explicitly used knowledge
  • (knowledge-based) approaches that can support object-oriented engineering tasks
  • kinds of knowledge that are useful to make explicit
  • how explicit knowledge can be used
– look for a common life cycle which describes both conventional software construction and knowledge-based software construction
– the symbiosis between knowledge-based or related approaches and object-oriented programming
– industrial applications of explicit knowledge in object-oriented software

Topics of interest include, but are by no means limited to:

– software engineering tasks where explicit knowledge can be useful
  • requirements engineering
  • analysis
  • design
  • program understanding
  • reengineering
  • reverse engineering
  • software evolution
  • software reuse
  • ...
– approaches for making knowledge explicit
  • knowledge engineering
  • ontologies
  • conceptual modeling
  • domain analysis and domain engineering
  • business rules
  • workflow management
  • ...
– how explicit knowledge can be used
  • modeling
  • enforcing
  • checking and verifying
  • ...
3 Workshop Agenda
The major part of the workshop consisted of group work, since we wanted to avoid a conference-like workshop. Hence, each participant was granted a 1-slide or 10-minute introduction to his or her work in relation to the workshop topics. These introductions were followed by a general discussion about knowledge-based object-oriented software engineering, in which the topics proposed by the organisers and the participants in pre-workshop e-mail discussions were considered. These topics converged almost naturally on two main topics. In the afternoon the workshop participants split up into two groups, one for each of the main topics. One hour before the end of the workshop, a representative of each group presented the results of the group discussions in a plenary session. Finally there was a workshop wrap-up session. While this section describes the workshop agenda, it also provides an accurate overview of the structure of this report: summaries of the position papers (Sec. 4), discussion topics (Sec. 5), and an account of the discussions themselves (Sec. 6). A list of the participants is provided at the end.
4 Summaries of the Position Papers

4.1 Facilitating Software Maintenance and Reuse Activities with a Concept-Oriented Approach, Dirk Deridder
A major activity in software development is to obtain knowledge and insights about the application domain. Even though this is supported by a wide range of tools and techniques, a lot of this knowledge remains implicit and most often resides in the heads of the different people concerned. Examples of such implicit knowledge are, amongst others, the links between the different artefacts, and the knowledge that is regarded as common sense. In our approach we will store this knowledge in an ontology, which is consequently used as a driving medium to support reuse and maintenance activities. For this purpose the different concepts that are represented in the artefacts will be defined and stored in this ontology. Subsequently these concepts will be 'glued' to the different (elements of) artefacts through extensional and intensional concept definitions. This will enable a bi-directional navigation between the concepts represented in the conceptual artefacts and their concrete realizations in the code, which is needed for the intended maintenance and reuse support. An extended version of this position paper was accepted for presentation and publication at the 5th Joint Conference on Knowledge-Based Software Engineering (JCKBSE 2002), in Maribor, Slovenia [3].
4.2 Domain Knowledge as an Aspect in Object-Oriented Software Applications, Maja D'Hondt and María Agustina Cibrán
The complexity of software domains is steadily increasing and knowledge management of businesses is becoming more important. The real-world domains of many software applications, such as e-commerce, the financial industry, television and radio broadcasting, hospital management and the rental business, are inherently knowledge-intensive. Current software engineering practices result in software applications that contain implicit domain knowledge tangled with the implementation strategy. An implementation strategy might result in a distributed or real-time application, or in an application with a visual user interface or a database, or a combination of the above. Domain knowledge consists of a conceptual model containing concepts and relations between the concepts. It also contains constraints on the concepts and the relations, and rules that state how to infer or "calculate" new concepts and relations [15]. There is a strong analogy between the rules and constraints on the one hand, and Business Rules on the other. Business Rules are defined on a Business Model, analogous to the conceptual model of the domain knowledge. A first problem is that real-world domains are subject to change and businesses have to cope with these changes in order to stay competitive. Therefore, it should be possible to identify and locate the software's domain knowledge easily and adapt it accordingly while at the same time avoiding propagation of
the adaptations to the implementation strategy. Similarly, due to rapidly evolving technologies, we should be able to update or replace the implementation strategy in a controlled and well-localized way. A second problem is that the development of software where domain knowledge and implementation strategy are tangled is a very complex task: the software developer, who is typically a technology expert but not a domain expert, has to concentrate on two aspects of the software at the same time and manually compose them. This violates the principle of separation of concerns [4] [14] [5], which states that the implementation strategy should be separated from other concerns or aspects such as domain knowledge. In short, the tangling of domain knowledge and implementation strategy makes understanding, maintaining, adapting, reusing and evolving the software difficult, time-consuming, error-prone, and therefore expensive.
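A minimal sketch of the separation argued for here (illustrative only, not the authors' actual approach): the business rule lives in one explicit, replaceable place, while the implementation strategy merely applies whatever rule is currently configured, so the rule can change without touching the rest of the code.

// Sketch: a business rule decoupled from the code that applies it.
#include <functional>
#include <iostream>

struct Customer { int yearsLoyal; };

// Domain knowledge: a discount rule, isolated and easy to replace.
using DiscountRule = std::function<double(const Customer&)>;

DiscountRule loyaltyRule = [](const Customer& c) {
    return c.yearsLoyal >= 5 ? 0.10 : 0.0;  // 10% after five years
};

// Implementation strategy: checkout code that applies the configured rule.
double price(double base, const Customer& c, const DiscountRule& rule) {
    return base * (1.0 - rule(c));
}

int main() {
    std::cout << price(200.0, Customer{7}, loyaltyRule) << "\n";  // 180
}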
4.3 Supporting Object-Oriented Software Development with Intentional Source-Code Views, Kim Mens, Tom Mens, and Michel Wermelinger
Developing, maintaining and understanding large software systems is hard. One underlying cause is that existing modularization mechanisms are inadequate to handle cross-cutting concerns. Intentional source-code views are an intuitive and lightweight means of modelling such concerns. They increase our ability to understand, modularize and browse the source code of a software system by grouping together source-code entities that address the same concern. Alternative descriptions of the same intentional view can be provided and checked for consistency. Relations among intentional views can be declared, verified and enforced. Our model of intentional views is implemented using a logic meta-programming approach. This allows us to specify a kind of knowledge developers have about object-oriented source code that is usually not captured by traditional program documentation mechanisms. It also facilitates software evolution by providing the ability to verify automatically the consistency of views and to detect invalidation of important intentional relationships among views when the source code is modified. An extended version of this position paper was accepted for presentation and publication at the Software Engineering and Knowledge Engineering conference (SEKE 2002) in Ischia, Italy [8].
4.4 Management and Verification of the Consistency among UML Models, Atsushi Ohnishi
UML models should be consistent with each other. For example, if a message exists between class A and class B in a sequence chart, then a corresponding association between class A and class B must exist in a class diagram. We provide 38 rules for the consistency between UML models as a knowledge base. With this knowledge base we can detect inconsistencies between models. We have developed a prototype system and applied it to a couple of specifications written in UML [13].
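A toy version of the example rule above can make the mechanism concrete (the actual rule base of 38 rules is not reproduced): collect the message pairs from a sequence chart and report those with no corresponding association in the class diagram.

// Sketch: checking one message/association consistency rule.
#include <iostream>
#include <set>
#include <string>
#include <utility>
#include <vector>

using ClassPair = std::pair<std::string, std::string>;

int main() {
    // Messages extracted from a sequence chart (sender, receiver).
    std::vector<ClassPair> messages = {{"Order", "Customer"},
                                       {"Order", "Invoice"}};
    // Associations present in the class diagram.
    std::set<ClassPair> associations = {{"Order", "Customer"}};

    for (const ClassPair& m : messages)
        if (!associations.count(m))
            std::cout << "inconsistency: message " << m.first << " -> "
                      << m.second << " has no association\n";
}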
4.5 Making Domain Knowledge Explicit Using SHIQ in Object-Oriented Software Development, Ragnhild Van Der Straeten
To be able to develop a software application, the software developer must have a thorough knowledge of the application domain. A domain is some area of interest and can be hierarchically structured [15]. Domain knowledge specifies domain-specific knowledge and information types that we want to talk about in an application [15]. Nowadays, the de facto modeling language used in object-oriented development is the Unified Modeling Language (UML) [12]. A lot of domain knowledge is implicitly and explicitly present in the models used throughout the Software Development Life Cycle (SDLC) and in the different UML diagrams, or is only present in the heads of the software developers, or is even lost during the SDLC. To make this domain knowledge explicit, we will use one language. Our goal is to translate the different diagrams into, and write the additional domain knowledge down in, this language. We propose to use the Description Logic SHIQ and extensions of it to make this knowledge explicit and to support the software modeler in using this knowledge. These logics have quite powerful reasoning mechanisms which can be used to check the consistency of the corresponding class and state diagrams. In this paper we give the translation of state diagrams and of constraints written in the Object Constraint Language (OCL) [6], and two examples of implicit knowledge which is made explicit. In the first example, the consistency is checked between a class diagram and a state diagram. In the second example, a change is made to a first design model; this change makes some domain knowledge implicit. By expressing this knowledge in the logic, the reasoning mechanism of this logic can notify the designer if one of the "implicit" rules is violated. Our final goal is to build an intelligent modeling tool which makes it possible to make implicit knowledge explicit and to use this knowledge to support the designer in designing software applications. The advantages of such a tool are that reuse and adaptability of software are improved and the understandability of the software designs increases.
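As an illustrative example of what such a translation can look like (invented here, not the paper's exact encoding), a class diagram stating that every Order is placed by exactly one Customer could be rendered as the SHIQ axiom

\[ \mathit{Order} \;\sqsubseteq\; (\geq 1\,\mathit{placedBy}.\mathit{Customer}) \;\sqcap\; (\leq 1\,\mathit{placedBy}.\mathit{Customer}) \]

after which a description logic reasoner can check that this axiom is jointly satisfiable with the axioms obtained from the corresponding state diagram.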
4.6 Declarative Metaprogramming to Support Reuse in Vertical Markets, Ellen Van Paesschen
The implicit nature of the natural relationship between domain models (and the corresponding delta-analyses) and framework instances in vertical markets causes a problematic, hand-crafted way of developing and reusing frameworks. Constructing a bidirectional, concrete, active link between domain models and framework code, based on a new instance of declarative metaprogramming (DMP), can significantly improve this situation. The new DMP instance supports a symbiosis between a prototype-based, frame-based knowledge representation language and an object-oriented programming language. This implies that framework code in the object-oriented language co-evolves with the corresponding knowledge representation. Representing
domain-dependent concepts in the same knowledge representation allows us to transform delta-analyses into framework reuse plans, and to translate changes in the framework code into domain knowledge adaptations, at a prototype-based and frame-based level. At the level of the domain-dependent concepts, the separation of five kinds of knowledge will be represented by KRS', a dialect of the frame-based, prototype-based knowledge representation language KRS. To represent frameworks, the new instance of DMP provides a symbiosis between KRS' and the object-oriented programming language Smalltalk. By adding an intelligent component to this instance at the level of KRS', domain-dependent concepts and framework implementations can be coupled, making it possible to automatically translate framework adaptations to the domain-dependent level, and to automatically plan reuse of the framework based explicitly on the deltas at the KRS' level.
4.7 An Agent-Based Approach to Knowledge Management for Software Maintenance, Aurora Vizcaíno, Jesús Favela, and Mario Piattini
Nowadays, organisations consider knowledge, or intellectual capital, to be as important as tangible capital, since it enables them to grow, survive and become more competitive. For this reason, organisations are currently considering innovative techniques and methods to manage their knowledge systematically. Organisations handle different types of knowledge that are often inter-related and must be managed in a consistent way. For instance, software engineering involves the integration of various knowledge sources that are in constant change. Therefore, tools and techniques are necessary to capture and process knowledge in order to facilitate subsequent development efforts, especially in the domain of software engineering. The changeable character of the software maintenance process requires that the information generated is controlled, stored, and shared. We propose a multiagent system to manage the knowledge generated during maintenance. The roles of these agents can be summarised as follows:

– Comparing new information with information that has already been stored, in order to detect inconsistencies between old and new information.
– Informing other agents about changes produced.
– Predicting new clients' demands: similar software projects often give rise to similar maintenance demands.
– Predicting possible mistakes by using historic knowledge.
– Advising solutions to problems: storing solutions that have worked correctly in previous maintenance situations helps to avoid the costly duplication of effort that arises when, due to limited transfer of knowledge, companies are forced to reinvent practices.
– Helping to make decisions, for instance to evaluate whether it is convenient to outsource certain maintenance activities.
– Advising a certain employee to do a specific job: the system has information about each employee's skills, their performance metrics, and the projects they have worked on.

A multiagent system in charge of managing maintenance knowledge might improve the maintenance process, since agents would help developers find information and solutions to problems and make decisions, thus increasing the organisation's competitiveness.
5 Discussion Topics
Based on preliminary e-mail discussions, submitted workshop papers, and short presentations and discussions during the morning session of the workshop, the following relevant discussion topics were identified.

5.1 Mapping between Knowledge Representation and Object-Oriented Programs
This topic was explicitly addressed by several participants. Different kinds of mappings can be imagined:

– two separate representations with an explicit link between them;
– a code-generation approach, where there is only the represented knowledge, from which code is generated automatically;
– a symbiotic or reflective mapping between the knowledge representation language and the object-oriented language;
– an approach where the knowledge representation language and the object-oriented language are actually one and the same language. (Note that this is more than mere symbiosis: the languages have become, or were designed to be, one and the same.)
5.2 How a Knowledge-Based Approach Can Support Maintenance of Object-Oriented Programs
Many participants addressed the topic of using a knowledge-based approach to achieve, facilitate or advance software maintenance in general, and software evolution, reuse and understanding in particular.
5.3 Approaches and Techniques for Dealing with Knowledge-Based Software Engineering
A whole range of different approaches towards KBOOSE were proposed or mentioned by the various participants:

– a rule-based backward chaining approach (e.g. Prolog)
– a rule-based forward chaining approach
– a mixture of forward and backward chaining (found in many knowledge systems)
– a frame-based approach (e.g. KRS [7])
– constraint solving
– description logics (limited but decidable subsets of first-order logic with reasoning support)
– model checking
– an agent-based approach
– ontologies

Note that this topic is orthogonal to the previous two.
5.4 Consistency Checking
This was proposed by a number of participants as a means to support maintenance of object-oriented programs. Therefore, this topic can be seen as a subtopic of topic 5.2. Consistency can be checked between models (e.g. between different kinds of UML design diagrams) or between the represented knowledge and the implementation. If we consider source code as a model too, these two kinds of consistency checking essentially boil down to the same thing.
5.5 Which Software Engineering Tasks Can Benefit from a Knowledge-Based Software Engineering Approach?
This topic is closely related to topic 5.2 as well. In fact, most participants mentioned that they used knowledge-based object-oriented software engineering to support software maintenance, reuse, evolution and understanding. Of course, knowledge-based software engineering may be useful for other tasks as well.
5.6 How Can the Represented Knowledge Be Used?
Again, this topic is closely related to topic 5.2. Indeed, checking model consistency is an interesting way of using the represented knowledge to achieve well-designed or correct software.
5.7 Summary of the Discussion Topics
Looking back at all these discussion topics: given that topics 5.5 and 5.6 are closely related to topic 5.2, and that topic 5.4 can be seen as a subtopic of 5.2, we decided to focus on the first three topics only during the group discussions. Furthermore, because of the orthogonality of the third topic with respect to the first two, we decided to discuss only topics 5.1 and 5.2. However, the two discussion groups were supposed to take a look at topic 5.3 too, from the perspective of their own discussion topic. More specifically, for topic 5.1 it was important to know how the choice of a particular approach or technique might affect or influence the chosen mapping. For topic 5.2 it was important to discuss which particular techniques could or should be used or avoided in order to support software maintenance.
6 Group Discussions

6.1 Mapping between Knowledge Representation and Object-Oriented Programs
This group consisted of Maja, Ellen, María Agustina, Theo, Andreas and Marie.

Interests of the Participants in This Topic. These come both from a technology-oriented and from a problem-oriented point of view. Based on all the individual interests in this particular discussion topic, we came up with the following issues:

Domain and configuration knowledge. In software applications many kinds of implicit knowledge can be detected. Maja and María Agustina both want to represent domain knowledge, and in particular business rules, explicitly and decoupled from the core application. Ellen wants to represent knowledge in the context of families of applications (e.g. frameworks or product lines) in vertical markets: among others, knowledge about the domain, about variabilities in the domain (also referred to as deltas), and about the link between the variabilities and the implementation, for traceability. An appropriate representation for these kinds of knowledge can be found in the older hybrid knowledge representation systems, which combine frame-based and rule-based representations. The core application is implemented using an object-oriented programming language. Hence the need arises for a symbiosis between frame-based and rule-based representation on the one hand, and object-oriented programming on the other.

Task knowledge. Andreas is concerned with the task knowledge, in other words the workflow or process, in software applications. Marie is working in the domain of medical diagnosis and treatment. They both observed that the task of a software application crosscuts the entire implementation and thus becomes hard to maintain and reuse. Andreas remarks that it boils down to explicitly representing component interactions, and that temporal logic would possibly be a suitable medium. Issues that arise here are the ongoing discussion of black-box versus white-box components, the level of granularity of the interactions, and how to reverse engineer existing components to obtain this representation of the interactions.

Symbiosis between Declarative and Object-Oriented Languages. The needs and interests of the participants pointed to a symbiosis between a declarative language and an object-oriented language. A language symbiosis means that the mapping of the declarative language onto the object-oriented language is implemented and integrated in the latter. We must note, however, that declarative is a paradigm rather than a language, and in order to discuss a symbiosis between two languages, a concrete declarative language must be considered. Possible candidates are:

– a truly declarative rule-based language, which means that the rule chaining strategy is transparent, i.e. forward or backward chaining, or both
– a constraint language with a constraint solver or truth maintenance system
– a description logic with reasoning for ensuring consistency or classification

The main problem with a symbiosis between a declarative and an object-oriented language is consolidating the behavioural aspect: message passing on the one hand and, for instance, rule chaining on the other are hard to map onto each other. This is easily illustrated when one considers a message send which is in fact mapped to a rule, where there is a mapping between the parameters (including the receiver) of the message and the variables of the rule. When the rule returns more than one result, the question is how the object-oriented layer will deal with this, since it expects only one result from a message; the sketch at the end of this subsection illustrates the mismatch. Theo mentioned by means of a concrete example that the "holy grail" of a perfect integration of a knowledge-based language with an object-oriented language may be very hard to find. If one considers SQL, everyone will probably agree that it is a declarative language which is very well suited for reasoning about knowledge stored in databases, yet even SQL is still not well integrated with imperative or object-oriented languages. A possible strategy for facilitating the symbiosis might be to co-design from scratch a new object-oriented and declarative language that perfectly integrates the two paradigms. But then the problem probably arises that the language will not be generally accepted, because it would be yet another "academic" language.

Existing Systems with Language Symbiosis were also considered in this discussion. We can distinguish between systems that provide a symbiosis between two languages of the same paradigm, such as Agora [10], and systems or languages that unite multiple paradigms. Another distinction is the way the multiple paradigms are combined. On one side of the spectrum there are the multiparadigm languages such as C++ [2] and LISP. Then there are the systems that implement one paradigm in a language based on another paradigm, which in most cases makes the underlying language accessible in the language it implements, and may even allow metaprogramming about the underlying language in the language it implements. Examples are SOUL (Smalltalk Open Unification Language), a logic programming environment implemented in Smalltalk [9], as well as SISC (Second Interpreter of Scheme Code) [11] and JScheme [1], both Scheme interpreters written in Java.
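To make the multiplicity mismatch concrete, here is a minimal sketch in C++. It is not taken from any system discussed at the workshop; the fact base, the parent relation and all names are invented for illustration. A logic-style query naturally yields a set of variable bindings, while the object-oriented layer that maps a message send onto the rule must somehow squeeze that set into a single return value:

    #include <iostream>
    #include <map>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Toy fact base standing in for the declarative layer:
    // parent(Child, Parent) facts, queried like a logic rule.
    class FactBase {
        std::multimap<std::string, std::string> parentOf;
    public:
        void assertParent(const std::string& child, const std::string& parent) {
            parentOf.insert({child, parent});
        }
        // A query may yield any number of bindings for the rule's variables.
        std::vector<std::string> query(const std::string& child) const {
            std::vector<std::string> bindings;
            auto range = parentOf.equal_range(child);
            for (auto it = range.first; it != range.second; ++it)
                bindings.push_back(it->second);
            return bindings;
        }
    };

    // The object-oriented layer maps the message parentOf(child) onto the
    // rule, but a message send expects exactly one answer, so the wrapper
    // must arbitrarily pick a binding (here: the first one) or fail.
    std::string parentOfMessage(const FactBase& kb, const std::string& child) {
        auto bindings = kb.query(child);
        if (bindings.empty()) throw std::runtime_error("no solution");
        return bindings.front();   // information loss: remaining solutions dropped
    }

    int main() {
        FactBase kb;
        kb.assertParent("anna", "bert");
        kb.assertParent("anna", "carla");
        std::cout << parentOfMessage(kb, "anna") << "\n";   // prints only "bert"
    }

The wrapper either drops solutions, as above, or has to change the message's signature to return a collection: precisely the kind of design decision a language symbiosis must settle once and for all.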
6.2 How a Knowledge-Based Approach Can Support Maintenance of Object-Oriented Programs
This group – consisting of Kim, Miro, Atsushi, Ragnhild, Dirk, Aurora and Elke – addressed discussion topic 5.2 (including topic 5.4 as a subtopic), which was rephrased as follows: how can a knowledge-based approach support software maintenance (including evolution, reuse and understanding), and what particular
techniques and approaches might be helpful in doing so (for example, consistency checking among models)?

To structure the discussion, every member of the group was asked to explain and discuss, in the context of knowledge-based software engineering:

1. what they understood by consistency checking, and optionally how they used it and for which purpose;
2. what representation they used to represent their knowledge, and how the represented knowledge was actually connected to the underlying object-oriented programs.

After this, the different answers were merged, which led to the following insights:

1. Consistency checking boils down to checking that there are no contradictions between models, where the implementation (i.e. the source code) can also be seen as a particular kind of model. Consistency checking is often syntactic but can be semantic as well (an example of the latter was Ragnhild's approach, where the different models to be checked are mapped to the same description logic and the reasoning tools of that logic are then used to check for contradictions between the produced descriptions). All participants agreed that maintenance and reuse are the main goals of consistency checking. This does not imply, however, that consistency checking is necessary for maintenance and reuse.

2. Regarding the representation used for the knowledge, we discovered two different kinds of approaches. The difference seemed to be caused by the different maintenance goals that were pursued:

– One goal was to collect information throughout the software development life cycle, mainly with the intention of consolidating the knowledge that is in different people's heads, thus contributing to a better understanding of the domain. Both Dirk's and Aurora's approaches were examples of this. Dirk represented the knowledge using an ontology expressed in a frame-based language; Aurora used an agent-based approach.

– A second goal was to check conformance between specific models with the aim of improving the quality and correctness of the software. Kim's, Ragnhild's and Atsushi's approaches were examples of this. In all these cases the knowledge was represented in a logic language (either a Prolog-like language or a more restricted description logic), and a kind of logic reasoning was used to check for consistency. In the ideal case, where the models to be checked are well-defined, perhaps with some extra user-defined annotations attached to them, the corresponding logic expressions can be generated and checked automatically. In the less ideal case, model checking first requires us to describe the logic expressions manually, semi-automatically or based on heuristics.
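As a deliberately simplified illustration of the syntactic flavour of such consistency checks, the C++ sketch below reduces two models to sets of facts and reports contradictions between them. The approaches discussed in the group used logic languages and reasoning engines rather than hand-written comparisons, and all class names here are invented:

    #include <iostream>
    #include <map>
    #include <string>

    // Each model is reduced to a set of facts: class name -> declared superclass.
    // One map holds facts extracted from the design model, the other holds
    // facts extracted from the source code.
    using Model = std::map<std::string, std::string>;

    // A contradiction exists when both models mention the same class but
    // disagree on its superclass.
    void checkConsistency(const Model& design, const Model& code) {
        for (const auto& [cls, super] : design) {
            auto it = code.find(cls);
            if (it != code.end() && it->second != super)
                std::cout << "inconsistent: " << cls << " extends " << super
                          << " in the design but " << it->second << " in the code\n";
        }
    }

    int main() {
        Model design{{"Invoice", "Document"}, {"Order", "Document"}};
        Model code{{"Invoice", "Document"}, {"Order", "BusinessObject"}};
        checkConsistency(design, code);   // reports the Order contradiction
    }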
7 Conclusions
Although the main goals were addressed rather well and we are generally pleased with the results, one has to bear in mind that this was but the first workshop on Knowledge-Based Object-Oriented Software Engineering. Hence, we went through what one could call the exploratory phase: all topics and lines of thought that came up were considered and elaborated upon, but we did not investigate every nook and cranny of this field. For a possible next workshop related to this one, it would be interesting to make a general classification of which particular software engineering tasks may benefit from which particular knowledge-based technique or approach. The results of such a classification could then very well serve as a starting point for more specific topics in a series of workshops on Knowledge-Based Object-Oriented Software Engineering.
References

[1] Ken Anderson, Tim Hickey, and Peter Norvig. The JScheme web programming project. http://jscheme.sourceforge.net/jscheme/mainwebpage.html.
[2] James O. Coplien. Multi-Paradigm Design for C++. Addison-Wesley, 1998.
[3] Dirk Deridder. A concept-oriented approach to support software maintenance and reuse activities. In 5th Joint Conference on Knowledge-Based Software Engineering (JCKBSE). IOS Press - Series "Frontiers in Artificial Intelligence and Applications", 2002.
[4] E. W. Dijkstra. A Discipline of Programming. Prentice-Hall, 1976.
[5] W. L. Hürsch and C. V. Lopes. Separation of concerns. Technical report, Northeastern University, 1995.
[6] A. Kleppe and J. Warmer. The Object Constraint Language: Precise Modeling with UML. Addison-Wesley, 1999.
[7] Kris Van Marcke. The Knowledge Representation System KRS and its Implementation. PhD thesis, Vrije Universiteit Brussel, 1988.
[8] Kim Mens, Tom Mens, and Michel Wermelinger. Supporting object-oriented software development with intentional source-code views. In Proceedings of the 15th Conference on Software Engineering and Knowledge Engineering (SEKE). Knowledge Systems Institute, 2002.
[9] Kim Mens, Isabel Michiels, and Roel Wuyts. Supporting software development through declaratively codified programming patterns. In Proceedings of the 14th Conference on Software Engineering and Knowledge Engineering (SEKE). Knowledge Systems Institute, 2001.
[10] Wolfgang De Meuter. The story of the simplest MOP in the world, or, the scheme of object-orientation. In Prototype-Based Programming (eds.: James Noble, Antero Taivalsaari, and Ivan Moore), 1998.
[11] Scott G. Miller. Second Interpreter of Scheme Code. http://sisc.sourceforge.net/.
[12] The Object Management Group. The OMG Unified Modeling Language Specification. http://www.omg.org.
[13] Atsushi Ohnishi. A supporting system for verification among models of the UML. Systems and Computers in Japan, 33(4):1-13, 2002.
[14] D. L. Parnas. On the criteria to be used in decomposing systems into modules. Communications of the ACM, 15(12):1053-1058, 1972.
[15] A. Th. Schreiber, J. M. Akkermans, A. A. Anjewierden, R. de Hoog, N. R. Shadbolt, W. Van de Velde, and B. J. Wielinga. Knowledge Engineering and Management: The CommonKADS Methodology. MIT Press, 2000.
Participant Names, E-Mail Addresses, and Affiliations

– Marie Beurton-Aimar, [email protected], Université Bordeaux 2, France
– Miro Casanova, [email protected], Vrije Universiteit Brussel, Belgium
– María Agustina Cibrán, [email protected], Vrije Universiteit Brussel, Belgium
– Krzysztof Czarnecki, [email protected], DaimlerChrysler, Germany
– Dirk Deridder, [email protected], Vrije Universiteit Brussel, Belgium
– Maja D'Hondt, [email protected], Vrije Universiteit Brussel, Belgium
– Theo D'Hondt, [email protected], Vrije Universiteit Brussel, Belgium
– Kim Mens, [email protected], Université catholique de Louvain, Belgium
– Atsushi Ohnishi, [email protected], Ritsumeikan University, Japan
– Elke Pulvermueller, [email protected], Universitaet Karlsruhe, Germany
– Andreas Speck, [email protected], Intershop Research, Germany
– Ragnhild Van Der Straeten, [email protected], Vrije Universiteit Brussel, Belgium
– Ellen Van Paesschen, [email protected], Vrije Universiteit Brussel, Belgium
– Aurora Vizcaíno, [email protected], Universidad de Castilla-La Mancha, Spain
Object-Orientation and Operating Systems

Andreas Gal (1), Olaf Spinczyk (2), and Dario Alvarez (3)

(1) University of California, Irvine, USA, [email protected] - http://www.ics.uci.edu/∼gal/
(2) University of Erlangen-Nürnberg, Germany, [email protected] - http://www4.cs.fau.de/∼os/
(3) University of Oviedo, Spain, [email protected] - http://di002.edv.uniovi.es/∼darioa/
Abstract. The ECOOP workshop series on Object-Orientation and Operating Systems aims to bring together researchers and developers working on object-oriented operating systems and to provide a platform for discussing problems arising from the application of object-orientation to operating systems, as well as solutions to them. The 5th workshop in this series focused on novel approaches in the overlap of the object-oriented programming and operating-systems development domains. Workshop presentations covered topics from the areas of languages and implementation techniques for operating systems, Java-based operating systems, and ubiquitous computing. As in previous years, the workshop was also a platform to discuss the state of the art in object-oriented operating systems development and the impact of the presented academic research on the commercial operating systems market. This workshop report summarizes the presentations given at the workshop and the resulting discussions.
1. Introduction

The first ECOOP Workshop on Object-Orientation and Operating Systems (OOOS) was held at ECOOP'97 in Jyväskylä, Finland, followed by ECOOP-OOOSWS'99 in Lisbon, Portugal, ECOOP-OOOSWS'2000 in Cannes, France, and ECOOP-OOOSWS'2001 in Budapest, Hungary. The ECOOP'2002 OOOS Workshop is the fifth workshop in this series and was held in Málaga, Spain, on June 11th, 2002. The workshop was organized by Dario Alvarez, Olaf Spinczyk, Andreas Gal, and Paniti Netinant. As in previous years, prospective OOOSWS participants submitted a position paper (up to 6 pages). The papers were reviewed by a program committee. The members of the program committee were:
– Olaf Spinczyk (chair), University of Magdeburg, Germany
– Dario Alvarez, University of Oviedo, Spain
– Atef Bader, Lucent Technology, USA
– Francisco Ballesteros, Universidad Rey Juan Carlos, Spain
– Alan Dearle, University of St. Andrews, Scotland
– Tzilla Elrad, Illinois Institute of Technology, USA
– Michael Franz, University of California, Irvine, USA
– Andreas Gal, University of California, Irvine, USA
– Jürgen Kleinöder, University of Erlangen, Germany
– Paniti Netinant, Bangkok University, Thailand
– Wolfgang Schröder-Preikschat, University of Erlangen-Nürnberg, Germany
– Peter Schulthess, University of Ulm, Germany
– Xian-He Sun, Illinois Institute of Technology, USA
For ECOOP-OOOSWS'2002, eight submissions were selected for presentation at the workshop by the organizers, based on the recommendations of the program committee. The selected presentations were grouped into three sessions. The workshop started with a welcome session containing the invited talk and concluded with a discussion session. The overall workshop schedule was as follows:

– Session 1: Welcome
  • Operating-System Engineering (invited talk), Wolfgang Schröder-Preikschat
– Session 2: Systems and Languages (Chair: Olaf Spinczyk)
  • An Aspect-Oriented Implementation of Interrupt Synchronization in the PURE Operating System Family, D. Mahrenholz, O. Spinczyk, A. Gal, and W. Schröder-Preikschat
  • Dynamic Inheritance: A Powerful Mechanism for Operating System Design, B. Sonntag, D. Colnet, and O. Zendra
  • First Impressions about Executing Real Applications in Barbados, B. Perez-Schofield, E. Roselló, T. Cooper, and M. Cota
– Session 3: Systems and Ubiquitous Computing (Chair: Dario Alvarez)
  • Code Generation at the Proxy: An Infrastructure-Based Approach to Ubiquitous Mobile Code, D. Chandra, C. Fensch, W. Hong, L. Wang, E. Yardimci, and M. Franz
  • Gaia: An OO Middleware Infrastructure for Ubiquitous Computing Environments, M. Roman, C. Hess, and R. Campbell
  • A File System for Ubiquitous Computing, C. Hess and R. Campbell
– Session 4: Java-Based Systems (Chair: Andreas Gal)
  • The Role of IPC in the Component-Based Operating System JX, C. Wawersich, M. Felser, M. Golm, and J. Kleinöder
  • Bootstrapping and Startup of an Object-Oriented Operating System, R. Goeckelmann, M. Schoettner, M. Wende, T. Bindhammer, and P. Schulthess
– Session 5: Discussion (Chair: Andreas Gal)

To enhance the experience for the participants, a distinguished member of the OOOS community was asked to give an invited talk. For ECOOP-OOOSWS'2002 this talk was given by Wolfgang Schröder-Preikschat, full professor at the University of Erlangen-Nürnberg, Germany, and a long-standing member of the OOOS community for almost two decades. A summary of his talk is part of this workshop report. To facilitate lively discussions, participation in the workshop was limited and by invitation only. ECOOP-OOOSWS'2002 was attended by 16 participants from 5 different countries. The participants were:
– Dario Alvarez, University of Oviedo, Spain
– Eric Bruneton, France Telecom, France
– Deepak Chandra, University of California, Irvine, USA
– Marian Diaz, University of Oviedo, Spain
– Meik Felser, University of Erlangen-Nürnberg, Germany
– Andreas Gal, University of California, Irvine, USA
– Baltasar Garcia, University of Vigo, Spain
– Ralph Göckelmann, University of Ulm, Germany
– Sung Young Lee, Kyung Hee University, Korea
– Daniel Mahrenholz, University of Magdeburg, Germany
– Hans Reiser, University of Erlangen, Germany
– Manuel Roman, University of Illinois at Urbana-Champaign, USA
– Benoît Sonntag, Loria Lab, France
– Olaf Spinczyk, University of Magdeburg, Germany
– Christian Wawersich, University of Erlangen-Nürnberg, Germany
– Wolfgang Schröder-Preikschat, University of Erlangen-Nürnberg, Germany
The remainder of this report is organized as follows. Section 2 summarizes the invited talk by Wolfgang Schröder-Preikschat. Sections 3-5 outline the presentations and discussions in each of the three technical sessions of the workshop. Section 6 sums up the key issues of the discussion session, and the report ends with final conclusions by the organizers and future plans.
2. Invited Talk

In his invited talk, Wolfgang Schröder-Preikschat discussed problems one should be aware of when trying to develop operating systems from scratch. He focused especially on the synergies that arise when operating systems meet software engineering (component reusability and product-line development). For Wolfgang, operating systems are a never-ending story, particularly when considering design and implementation issues. At first glance, this attitude appears somewhat astonishing given the half-century of development effort that lies behind this field. Some even consider systems software
research to be irrelevant by now. Still, even the Linux success story cannot obscure the shortcomings in structure and architecture as far as reusability, adaptability, and extensibility of its components are concerned. The same holds, more or less, for the other major players in this arena. Linux technology, as was and still is Unix in general, dates back to the late sixties. The big difference between Linux and Multics, for example, is not in system functionality and/or complexity, but rather in the way it is developed. He explained that, in the operating-systems community, almost all research activities concentrate on small and highly specific topics in the area of contemporary timesharing and distributed systems. Parallel systems, just like embedded systems, have not received sufficient attention so far. This focus on interactive systems does not go well together with the market realities. Wolfgang showed numbers indicating that in the year 2000 only 1.8% of the manufactured processors went into interactive systems like laptops, desktops, and servers. The remaining 98.2% went into the embedded-systems market. He commented that a modern car can be seen as a "massive" distributed system with dozens, sometimes hundreds, of networked embedded systems. For the time being, neither Linux nor Windows technology is able to take on responsibility for these small embedded systems. They require the OS to enter a total symbiosis with the application on the one side and the hardware on the other to meet embedded-systems demands. When looking in some more detail at the functions an embeddable operating system has to provide, one will identify many commonalities with contemporary Unix-like operating systems. In many cases some sort of process model is to be supported, interrupt handling and device-driver services are required, synchronization becomes a demanding issue, memory and resource management needs to be provided, and network communication cannot be sacrificed - but the entire system has to fit into a thimble of RAM and/or ROM for cost reasons. This non-functional requirement has ruined the chances of Linux and Windows in all their variants so far. Wolfgang underlined his remarks with examples and code-size benchmarks which showed that even a simple "Hello World" application results in 30 kB to 200 kB of code on general-purpose operating systems like Windows, Solaris and Linux. At the end of his talk Wolfgang introduced his concept of incremental operating-systems design to the audience. He explained that operating-systems design has to realize the "WYDIWYG" principle: "What you demand is what you get". In other words: less demanding users are not to be forced to pay for resources consumed by unneeded features. His approach is a family-oriented operating-system design combined with an object-oriented implementation. The family-oriented design principle was originally coined by D. L. Parnas and A. N. Habermann in the seventies. After the invited talk, the reasons for the stagnation in operating-system design were discussed. Many other areas of applied software development have caught up with modern design and implementation techniques, but not the commercial operating-systems development market. Wolfgang stated that this might also be caused by the fact that good operating-systems design is sometimes
much more art than engineering. Everybody who has ever attempted to implement an operating system will probably agree with this statement.
3. Systems and Languages

The first technical session of the workshop dealt with programming languages and techniques for operating systems implementation. It was chaired by Olaf Spinczyk.

The first talk was given by Daniel Mahrenholz, University of Magdeburg, Germany. Daniel is a member of the research group at the University of Magdeburg that was formerly headed by Wolfgang Schröder-Preikschat. He talked about the application of Aspect-Oriented Programming in the PURE family of operating systems, which Wolfgang had already introduced briefly. Daniel explained that some concerns in the design of operating systems are hard to modularize in the implementation and thus difficult to maintain. One of these "crosscutting concerns" is the interrupt synchronization strategy; changing that strategy is typically expensive and risky. He stated that Aspect-Oriented Programming (AOP) is a promising approach to overcome these problems, but most aspect-oriented programming languages are not adequate for the operating systems domain, so experiences with AOP and operating systems are rare. He reflected on his experiences with an aspect-oriented implementation of interrupt synchronization in the PURE operating system family using AspectC++, a new aspect-oriented language extension for C++ designed by Olaf Spinczyk and Andreas Gal (a sketch of the idea appears at the end of this section). He went on to provide a critical evaluation of his and his co-authors' new approach, comparing it to the previous non-aspect-oriented implementation, and used benchmarks to show that AOP does not impose an unacceptable overhead. After the presentation, Daniel was asked many technical questions about the concrete realization of his approach. He was also asked why it was not possible to use the already established AOP language AspectJ. Daniel explained that while AspectC++ is very close to the AspectJ language itself, the AspectJ implementation was not designed to be used in systems with tight resource constraints. The choice of Java as the functional language is also a disadvantage, as Java requires much more runtime resources than well-written C++ code.

The next talk was given by Benoît Sonntag from the Loria Lab, France. He reported on the design of the innovative general-purpose operating system Isaac. He stated that the main purpose of Isaac is to meet the dynamism and flexibility demands of future operating systems. He showed that these goals progressively led them towards the object-oriented design principle, and he explained the advantages of interactions between user programs and the operating system at the object level over traditional models. In the second part of his talk Benoît explained that prototype-based languages have proven to be the most elegant way to materialize their vision of a dynamic and flexible operating system. He argued that dynamic inheritance can be a very helpful design and implementation tool in operating systems and used device drivers as an example (see the second sketch at the end of this section). Traditional operating systems do not support well the flexible combining, configuring and layering of sub-drivers to form concrete device drivers. In Isaac each sub-driver is a class, and it is possible to change dynamically at runtime the base class any particular class is inheriting from. This allows for flexible reconfiguration of device drivers as required by the hardware discovered at boot time. After Benoît's talk everybody in the audience agreed that this is a very interesting concept that should be pursued further, especially because there are very few research groups left today who are willing to implement a general-purpose operating system as a whole from scratch. The audience was also impressed by the fact that Benoît gave his presentation using Isaac running on his notebook.

The final presentation in this session was given by Baltasar Garcia. He reported on the Barbados prototype, a persistent programming system following the container-based model, in contrast to orthogonal persistence. Besides that, Barbados is also a complete programming environment providing the programmer with a C++ compiler, editor and debugger, all integrated into the same environment. Baltasar went on to reflect on the design decisions which led them to implement a proprietary C++ compiler with persistence support instead of the typical interpreter approach found in many other persistence systems. He stated that the main reason for this choice was efficiency, and he showed benchmarks made using a real-world application executed in the Barbados environment. He concluded that the Barbados system achieves execution speed similar to that of traditional C++ compilers with optimizations disabled, while at the same time providing for persistent execution, which makes it possible to remove a lot of data storage and retrieval code from the executed application. After his presentation Baltasar was asked to comment on the question why so many operating systems and middleware implementors end up implementing their own languages or language extensions to implement their actual system. No conclusive answer to this question was found, and it was decided to continue this debate in the discussion session.
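The flavour of the PURE approach can be sketched as follows. The fragment below is written in AspectC++-style syntax and is purely illustrative - Buffer, disable_interrupts() and enable_interrupts() are invented names, not the actual PURE code - but it shows how the synchronization strategy becomes one exchangeable module instead of being scattered over every method:

    // Illustrative AspectC++-style aspect: the interrupt-synchronization
    // strategy is factored out of the functional code into one place.
    aspect InterruptSync {
      // Hypothetical pointcut: all member functions of Buffer.
      advice execution("% Buffer::%(...)") : around() {
        disable_interrupts();   // hypothetical hardware guard
        tjp->proceed();         // execute the original join point
        enable_interrupts();
      }
    };

Swapping the synchronization strategy then means replacing this one aspect rather than editing every affected method.

Benoît's dynamic inheritance can be approximated in plain C++ by delegation; the following minimal sketch - with invented driver classes, not Isaac code - rebinds a driver's "base" at runtime, roughly as Isaac rebinds the parent class of a sub-driver at boot time:

    #include <iostream>
    #include <memory>

    // Delegation-based approximation of dynamic inheritance: the "superclass"
    // of a driver is an object reference that can be rebound at runtime.
    struct Driver {
        std::shared_ptr<Driver> parent;   // the dynamic "base class"
        virtual ~Driver() = default;
        virtual void read() {
            if (parent) parent->read();   // delegate up the inheritance chain
            else std::cout << "no device\n";
        }
    };

    struct IdeDisk : Driver { void read() override { std::cout << "IDE read\n"; } };
    struct ScsiDisk : Driver { void read() override { std::cout << "SCSI read\n"; } };

    int main() {
        Driver disk;
        disk.parent = std::make_shared<IdeDisk>();    // hardware found at boot
        disk.read();                                  // prints "IDE read"
        disk.parent = std::make_shared<ScsiDisk>();   // reconfigure the driver
        disk.read();                                  // prints "SCSI read"
    }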
4. Systems and Ubiquitous Computing

The Systems and Ubiquitous Computing session was chaired by Dario Alvarez and started with a presentation by Deepak Chandra about an infrastructure-based approach to ubiquitous mobile code using code generation at a proxy. Deepak explained that current approaches to mobile code are ill-suited for resource-constrained devices such as mobile phones and personal digital assistants - yet it is exactly these devices that are the most attractive targets of mobile-code technology. His and his co-authors' approach to reconciling this apparent contradiction is to off-load much of the required mobile-code support machinery into the (wireless) infrastructure rather than the mobile device itself. To do so, they implemented a system involving a powerful proxy which is in continuous contact with the mobile device. This interaction can be utilized to perform run-time improvements on processes running on the mobile device. He added that they believe this approach will lead to various benefits such as execution speed, optimizations for power efficiency, and the ability to execute beyond hardware limitations such as strict memory constraints. He emphasized that their research work is still at an early stage and that no conclusive results have been reached yet. Nevertheless, the audience was very pleased by this interesting approach. Deepak was asked after his talk about possible real-world applications for this infrastructure. He mentioned a number of smaller application scenarios, but admitted that at this stage of the research project they are still looking for a well-suited application for demo and benchmarking purposes.

The following two talks in this session were jointly given by Manuel Roman. The first part of his presentation covered Gaia, an object-oriented middleware infrastructure for ubiquitous computing environments, while the second part presented a file system for such ubiquitous computing environments. Manuel started his talk with pictures of the ubiquitous computing lab of his research group at the University of Illinois at Urbana-Champaign, showing the application scenario ubiquitous computing envisions. Examples of such computing environments are active meeting rooms or active homes equipped with a large variety of heterogeneous devices (e.g. CPUs, displays, and sensors). These scenarios promise an improved, invisible, and ubiquitous computing experience in which devices automatically adapt to user requirements, and applications and data are permanently bound to users and become the crux of computation. Users roam around with their personal data, applications and devices, and are able to interact with their applications using the resources present in their current environment. He went on to explain the fundamentals of the Gaia middleware infrastructure. Gaia is a set of CORBA services forming the Gaia Kernel; on top of this framework resides an Application Framework. He introduced the concept of Active Spaces to the audience, which makes it possible to treat spaces and the resources they contain as a single programmable entity.

In the second part of his talk Manuel focused on file systems for ubiquitous computing. He explained that, compared to traditional distributed systems, the distinguishing factor in the case of ubiquitous computing is context. He presented a context-aware file system for ubiquitous computing applications. Context is used by the file system to organize data and to trigger data type conversions, simplifying the tasks of application developers and users of the system. The file system constructs a virtual directory hierarchy based on which context values are associated with particular files and directories (a sketch of this idea appears at the end of this section). The directory hierarchy is implemented using an internal mounting mechanism, where personal mount points can be injected into the current environment to make personal storage available to applications and other users. After the presentation Manuel was asked a number of questions about the dynamic file type concept he introduced. He explained in more depth how data has to be transformed on the fly into the file format appropriate for the requesting client. Manuel was also asked why they chose C++ as the implementation language for Gaia; the audience objected that Java would have been a better choice to cover such heterogeneous execution environments with different processor models. Manuel explained that performance considerations led them to implement Gaia in C++ and that, in combination with CORBA, the Gaia services are simple to port to new platforms.
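The virtual-directory idea can be illustrated with a small C++ sketch. It is not Gaia code - the attribute names, the path syntax and the File structure are all invented - but it shows how a path can be materialized from whatever context values a file is tagged with:

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Sketch: every file carries context attributes; the file system derives
    // a virtual directory path from those attributes instead of a fixed tree.
    struct File {
        std::string name;
        std::map<std::string, std::string> context;   // e.g. location, activity
    };

    std::string virtualPath(const File& f, const std::vector<std::string>& order) {
        std::string path;
        for (const auto& key : order) {
            auto it = f.context.find(key);
            if (it != f.context.end())
                path += "/" + key + ":" + it->second;   // e.g. /location:room2401
        }
        return path + "/" + f.name;
    }

    int main() {
        File slides{"talk.ppt", {{"location", "room2401"}, {"activity", "meeting"}}};
        std::cout << virtualPath(slides, {"location", "activity"}) << "\n";
        // prints /location:room2401/activity:meeting/talk.ppt
    }

Reordering the key list yields a different virtual hierarchy over the same files, which is exactly what makes a context-aware organization of data possible.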
5. Java-Based Systems

The last technical session covered Java-based operating systems and was chaired by Andreas Gal. The first presentation was given by Christian Wawersich, University of Erlangen-Nürnberg, Germany. He talked about the role of inter-process communication (IPC) in the component-based operating system JX, which is developed at the University of Erlangen-Nürnberg. In the first part of his presentation Christian explained that traditional operating systems rely on hardware-supported memory protection between separate processes, and that communication between such protection domains is obviously slower than a simple method invocation within a protection domain. In the second part of the presentation Christian introduced the language-based protection mechanism applied in the JX operating system. The Java-based JX system uses the type-safety property of Java bytecode to enforce the protection between different domains. As hardware-based memory protection is dispensable in such a system, inter-process communication can basically be performed at a cost comparable to regular method calls. Christian concluded his talk with benchmarks of IPC in JX and compared the performance with the Linux operating system, showing that JX outperforms Linux in many areas as far as IPC is concerned. The subsequent discussion focused on the question whether the type-safety-based protection is really equivalent to the hardware-based approach. Wolfgang Schröder-Preikschat objected that the hardware-based approach offers more functionality for the additional cost, as non-typesafe or non-Java programs can be executed as well. Christian was also asked why there is still a significant overhead for IPC compared to plain method invocations in JX. He explained that a context switch still has to be performed and parameters have to be copied, because in JX each protection domain runs separate threads to decouple the protection domains.

The second talk was given by Ralph Goeckelmann, University of Ulm, Germany, on bootstrapping and startup of an object-oriented operating system. In his talk he reported about Plurix, a Distributed Shared Memory (DSM) operating system written in Java, and focused on the startup process. He explained that the startup process is especially critical for Plurix due to its shared-memory principle: code executed on any node exists only once in the shared memory, and this also applies to the kernel code. Ralph introduced a novel scheme based on the idea of memory pools to boot an operating system out of a DSM. He also talked about false-sharing problems and how they can be reduced by special memory pools, and about the limitations in the interaction between the local memory pools used for the boot process and the DSM. He pointed out that one of the key concepts in the Plurix approach is the introduction of node-private data and its language embedding with special Java language constructs. Ralph concluded his talk with benchmark numbers about the startup process of Plurix. The talk was followed by an intensive discussion about some of the design decisions in Plurix. As on previous occasions when Plurix was presented at this workshop, the audience was split into pro-DSM and contra-DSM factions. On the one hand, everybody agreed that the startup-related problems in Plurix have been solved very well by the presented approach. On the other hand, some felt that most of these problems had been caused by design decisions taken earlier in Plurix. Especially the implementation of a page-based DSM in an object-oriented operating system was criticized, and it was suggested to use object granularity for the distribution to reduce false sharing. Ralph promised to evaluate this idea.
6. Discussion Session

The last session of the workshop was dedicated to discussing the presentations given during the workshop. The discussion session was chaired by Andreas Gal. He started the session with a list of probing questions he had compiled during the workshop. Amongst other things, Andreas asked the audience why most operating systems are still written in non-typesafe languages like C, even though the advantages of typesafe languages like Java are obvious in the area of memory protection and IPC. He also asked whether anybody knew a specific reason why concepts like context-aware file systems, implementation techniques like AOP, or dynamic inheritance never found their way into commercial operating systems. The resulting discussion quickly focused on the question why the techniques developed by the OOOS community were never really applied in commercial operating systems. Schröder-Preikschat explained that there are only very few academic research groups left who work in the area of general-purpose OS development. The whole general-purpose OS domain is covered by existing implementations. Even if these are often full of flaws, new developments do not pay off, and operating systems are too complex to be developed from scratch by small research units. As a result, academic OS research is often limited to extending and modifying small aspects of existing operating systems. The much more demanding task of tackling the problems buried in the design and architecture of contemporary operating systems, however, cannot be solved this way.

A second question discussed during this session stems from the observation that often the first step in designing and implementing a novel operating system is to design a new language (or at least a language extension) to implement the OS in, and to write a compiler for it. Examples of this are the OS/language pairings UNIX/C, Oberon/Oberon, PURE/AspectC++ and Isaac/Lisaac, but also language extensions like Barbados. It was agreed that two things can be concluded from this observation: first, there is obviously a tight connection between programming languages and operating systems development; and second, the implementation of operating systems requires certain language features which are still not well covered in general-purpose programming languages.
7. Workshop Conclusions and Future Plans

One of the key insights of this workshop is that operating systems and programming languages are tightly coupled. The organizers of this workshop therefore feel that it is appropriate to conduct an OS workshop at a language conference like ECOOP. This is underlined by the continued high attendance of the OOOSWS workshop series. We think that the OOOSWS at ECOOP is an important forum, especially for young researchers, to present and discuss their work. Many OS conferences pay too little attention to advanced techniques like Object-Orientation (OO), Component-Oriented Programming (COP) and Aspect-Oriented Programming (AOP). At the same time, OS topics have too little relevance for pure OO or programming language conferences. Thus, efforts are being made by the organizers of the workshop to continue this workshop series at the next ECOOP, in 2003 in Darmstadt, Germany. As far as future plans are concerned, the organizers plan for the next OOOSWS to attract more submissions from commercial OS developers. We believe that this will help in understanding and bridging the gap between the academic and commercial OS communities.
The accepted papers and the workshop program are archived at http://ooosws.cs.uni-magdeburg.de/.
Integration and Transformation of UML Models

João Araújo (1), Jonathan Whittle (2), Ambrosio Toval (3), and Robert France (4)

(1) Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Caparica, Portugal, [email protected] - http://ctp.di.fct.unl.pt/∼ja/
(2) NASA Ames Research Center, Moffett Field, CA 94035, [email protected] - http://ase.arc.nasa.gov/whittle/
(3) Universidad de Murcia, Murcia, Spain, [email protected] - http://www.um.es/giisw/
(4) Colorado State University, Fort Collins, CO 80523, [email protected] - http://www.cs.colostate.edu/∼france/
Abstract. This workshop aimed to understand more fully the relationships between different UML models. Each model in UML represents a particular viewpoint of the system under development. However, it is not clear from the UML specification how these models should be related. The relationships between UML models can be understood in terms of transformation - in which models are translated into other models (e.g., sequence diagrams transformed into statecharts) - or integration - in which models are somehow linked to contribute additional meaning (e.g., a statechart is attached to a class to describe the behavior of that class). Our goal is an increased understanding of the possible transformation and integration approaches and of how they can best be used in particular contexts.
1. Introduction

Although UML has become a standard modelling language, many questions can be raised concerning both transformation and integration of its models. Transformations are one of the key underlying technologies in Computer Science and yet, to date, the potential of transformations and integrations has not been realized within the context of UML. UML is essentially a collection of different modelling notations with under-defined relationships between them. Well-defined transformation and integration approaches within UML would facilitate advances in UML design methodologies, validation and verification, code generation and model maintenance. The aim of this workshop was to identify the most promising roles for model transformation and integration in UML and to identify which transformations and integration approaches are crucial for industry and why. This workshop continued the work started with the Workshop on Transformations of UML Models (WTUML), held as a satellite event of the ETAPS conference (Genoa, Italy, April 2001), where transformations of UML models were discussed from the perspectives of software development, formalization and
tools. From these discussions, it was realised that there is still a lot of work to do concerning the semantics of UML models before we can achieve not only effective model transformation but also better integration among these models. The ECOOP conference included several papers on many aspects of UML. While the conference supplied an opportunity for presenting concrete results, the workshop promoted discussion among researchers and practitioners in an informal way, with the aim of sharing their experiences and ideas on integration and transformation of UML models. The discussions generated from selected presentations and discussion topics improved the current understanding of the role of integration and transformation of UML models for software development, development of tools, etc.

Thirteen papers were accepted to the workshop and the number of participants was between 25 and 30, which was enough to foment discussion and delineate subsequent results. As was observed for the previous related workshop, WTUML, which also had 30 participants, there is great interest in the topics covered. With WITUML we broadened this interest, as integration was added to the scope, continuing to attract a wide range of participants from research to project or tool development. The participants included both practitioners, who could help identify current modelling gaps that could be filled by transformations and integrations, and researchers, who could suggest transformations, integrations and frameworks for implementing them. Moreover, they came from different countries (e.g., France, Germany, United States, Brazil, Spain, Finland, Portugal), which brought different viewpoints to the workshop's subject.

The workshop was organized in four sessions. The first one consisted of the opening remarks and the invited talk by Philippe Desfray from SOFTEAM, entitled "Using UML 2.0 profiles for supporting the MDA approach". From the accepted papers, some were selected to be presented. These were divided into three groups representing the other three sessions: two on transformation frameworks [4, 8, 9, 10, 13] and one on transformations in the development lifecycle [2, 5, 11]. Additionally, Ambrosio Toval was invited to give a talk in one of the transformation frameworks sessions. This report is organized as follows. Section 2 discusses types of transformation. Section 3 deals with transformation frameworks. Section 4 covers transformations in the development lifecycle. Finally, in Section 5 we draw some conclusions.
2. Discussion on Types of Transformations

Model transformations are, basically, a set of rules that specify how model constructs are translated into other model constructs. Model transformation has been increasingly discussed, especially in the Model-Driven Architecture (MDA) community. Our invited speaker, Philippe Desfray (SOFTEAM, France), gave good insights into this subject, prompting important discussions. He focused on the use of UML profiles throughout the entire software lifecycle. The expected UML
profile improvements at version 2.0 were presented in order to discuss their support for model transformation. Model transformation was described as a key technique for code generation, refinements between different kinds of models, automating design patterns, etc. The basic approach described in the talk (and implemented in the Objecteering CASE tool) is to define a set of UML profiles that define the transformation from platform-independent models (PIMs) to platform-specific models (PSMs). Each profile restricts the user to a subset (or extension) of the UML that is relevant to that stage of the development lifecycle. For example, a PIM may allow multiple inheritance whereas a UML Java profile would disallow it. The key idea behind Objecteering is that users can develop models using multiple profiles and translate between these profiles using Objecteering's scripting language. In this way, Objecteering is an implementation of MDA, as it supports a well-defined process of transforming from PIMs to PSMs.

The discussions following the talk led to the identification of transformations on different levels. Essentially, we can have high-level transformations (from requirements to design) and low-level transformations (from design to implementation). Philippe also summarized the following types of model transformations:

– Model-to-code transformations: the most common kind of transformation, where, for example, from a UML model (e.g., a class diagram) we can generate code in Java or C++. The model construct, which is platform specific, is translated into a programming language syntactic construct.
– Transient model-to-model transformations: here human intervention should not be needed. The transformation is called transient because the transformed model is not stored; it occurs transparently, before code generation. An example is an association transformed into accessor operations (a concrete sketch appears at the end of this section).
– Internal model transformations: transformations applied to a model with the aim of making that same model more expressive or better defined. Refactoring is an example of an internal transformation.
– Transformations between different models: here we can have transformations at the same level of abstraction, e.g., from sequence diagrams to statecharts or activity diagrams. Transformations from analysis to design models are also possible, e.g., classes and associations into tables.

When transformations are used, traceability becomes an important issue and should not be neglected. Traceability must be guaranteed automatically between models, inside a model transformation, and between model and code. Moreover, well and formally defined transformation rules between models are crucial. OCL is a conventional choice, but other formal languages should not be ruled out (e.g., Object-Z, B, Maude), as they have constructs that are more expressive than those of OCL. The most challenging transformations seem to be at the requirements level or from requirements to design. The reason for this is the level of informality and abstraction of the models built during analysis. It would be useful
to generate interaction and activity diagrams from use case scenarios without excessive formalization or human intervention.
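Philippe's accessor example can be made concrete. The fragment below is invented for illustration (it is not Objecteering output): a PIM association between Order and Item, which has no code of its own, is rewritten by the transient transformation into a field plus generated accessor operations on the platform-specific side:

    #include <vector>

    class Item;

    // PIM: Order "1" ----items----> "*" Item   (an association, no code yet)
    // PSM: the transient transformation realizes the association end as a
    // field and a pair of generated accessor operations.
    class Order {
        std::vector<Item*> items_;                                   // realized end
    public:
        const std::vector<Item*>& items() const { return items_; }  // generated
        void addItem(Item* i) { items_.push_back(i); }               // generated
    };

Because the transformation is transient, a modeler never sees this intermediate form: it exists only between the model and the final code generation step.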
3. Transformation Frameworks

The topic of the sessions on transformation frameworks was to discuss the alternative frameworks for formalizing and/or implementing UML transformations. The main competing frameworks are graph rewriting, term rewriting, XML-based approaches, and the use of UML itself, e.g., through profiles, metamodelling or OCL. Papers in this session represented term rewriting, OCL and XML.

Ambrosio Toval of the University of Murcia, Spain, was invited to give a presentation on his group's work on the use of term rewriting to implement UML transformations. Maude is an implementation of membership equational logic: users can specify conditional equations that are applied to a given term automatically by Maude's rewrite engine. Hence, Maude implements an executable specification language based on rewriting. Toval's group has been using Maude to implement UML transformations, which requires a formalization of the UML semantics within Maude. Currently, the group has formalized much of the static semantics and some of the dynamic semantics of UML and has experimented with applying transformations on models - e.g., transformations to derive multiplicities on associations where the multiplicities are not given explicitly but are implied by other model elements (a simplified sketch of such a rule appears at the end of this section).

Damien Pollet presented his work on using OCL to specify UML transformations [8]. The basic idea is to use OCL to describe pre- and post-conditions on transformations, expressed as OCL constraints at the metamodel level. This approach was used to specify model refactorings. An extension of the work is to include action features in OCL. The motivation behind this is that OCL is suitable for navigating a model or expressing constraints between model elements, but it cannot currently be used to effectively modify a model. One of the submissions to the OMG UML/OCL 2 RFPs proposes an extension to the expression language to allow action features, which would allow OCL to be used to modify models more effectively.

Annika Wagner presented an interesting approach to describing transformations using XMI.difference [13]. XMI.difference is a way of specifying minor, local changes made to a concrete model in an imperative way. Wagner's idea is to use XMI.difference to model transformation rules. A number of problems with this approach were raised, both by Wagner and in the discussions afterwards. In particular, XMI.difference must be extended somehow to apply to general, not concrete, models. Additionally, concerns were expressed about the scalability of the approach and the problem of dealing with global differences. However, it may be that a solution can be found that incorporates XMI.difference and is thus more accessible to UML modelers than graph or term rewriting, for example.

Ivan Porres discussed his model transformation framework based on the Python scripting language [9]. In many ways, his ideas are similar to those in Objecteering, in that a scripting language is used to specify translations.
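The shape of such a rewrite rule - match model elements that satisfy a condition, rewrite them in place, repeat until nothing changes - can be mimicked outside any rewriting engine. The C++ fragment below is only an analogy with an invented mini-metamodel, and it implements a much simpler rule (complete omitted multiplicities with a default) than the derivations Toval's group expresses in Maude:

    #include <string>
    #include <vector>

    // Invented mini-metamodel: an association end with an optional multiplicity.
    struct AssociationEnd {
        std::string role;
        std::string multiplicity;   // empty = not given explicitly in the model
    };

    // Rule: "an end whose multiplicity is omitted is completed with 1".
    // A rewriting engine would apply such rules until a fixpoint is reached;
    // here a single pass suffices because the rule never re-enables itself.
    void deriveMultiplicities(std::vector<AssociationEnd>& model) {
        for (auto& end : model)
            if (end.multiplicity.empty())
                end.multiplicity = "1";   // conclusion of the rule
    }

What Maude adds over such hand-written loops is that the rules are themselves terms in a formal logic, so they can be analyzed, combined and executed by a generic engine rather than coded case by case.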
4. Transformations in the Development Lifecycle

Throughout the software development lifecycle, several transformations can be applied in each phase or activity, from use case identification and specification to component design, code and testing. The list below gives some examples of these transformations:

– transformations to other notations, for example, formal languages, other modelling languages, code;
– formal and informal transformations;
– refinements, enhancements, refactoring;
– transformations for validation and verification.

In the context of the workshop, some approaches were proposed covering these kinds of transformations at different development stages. At the requirements level, Roussev [11] proposes a process to generate class diagrams from the use case model of a system through a sequence of model transformations. The approach formalizes the notion of use cases by defining a conservation law called the value added invariant (the values exchanged between actors in a use case are constant), and then describing a use case as a set of state machines that are used to generate the classes and relationships.

Concerning the integration of UML models with formal languages, several approaches were discussed. Enciso, Guzmán and Rossi [5] focused on the transformation of UML statecharts into temporal logic expressions. For this they used Lnint-e, a temporal logic of instants extended to include interval expressions. The authors argued that this particular temporal logic is expressive enough to formally specify behavioural models. The approach by Ledang and Souquières [7] consists of translating UML models to the B notation in a systematic way; basically, different UML models are combined into a single specification. The authors chose B because it provides powerful support tools, recognized in industrial projects for their ability to generate proof obligations and prove them automatically and interactively.

At the testing level, two interesting transformation approaches were discussed. First, Benattou, Bruel and Hameurlain presented an approach to generate test data from formal constraints specified in OCL [2]. They showed how the partition analysis of individual methods of classes can be automated, and how a valid method sequence of a given class, covering all the relevant tests, can be built. The underlying mathematics proved useful for generating test data. The other approach was proposed by Jiménez, Polo and Piattini [6]. Their starting point is the algebraic modelling of the static structure of an object-oriented system and the application of a set of transformation functions to obtain the required test cases.

As has been noticed, different formal techniques were used to formalize the transformations. This implies that, although OCL is intimately related to UML, it is not considered the lingua franca for formal transformations of UML models. Other formal languages, such as B and temporal logic, are used as well, as they seem to be more suitable for some kinds of transformations.
5. Workshop Conclusions

In general, the workshop was a great success and was one of the best attended workshops at the ECOOP conference. The original ETAPS workshop had started because of the research interests of the workshop organizers, who had also noted the lack of a forum for discussing such issues. With the advent of the OMG's MDA, the notion of transformation has become a "hot topic". Most of the presentations and discussion in this workshop focussed on frameworks for transformations. There was less discussion on what actual transformations should look like: clearly, there is an infinite number of potentially useful transformations, but it is important to ascertain which are the most important for industry and why. This question is, of course, not unrelated to that of transformation frameworks. For example, it is likely that some complex transformations are universal enough that they should be integrated into CASE tools, whereas other transformations are user-specific, in which case the user needs a way of easily defining the transformation. This leads right back to the question of whether graph rewriting, OCL, XML, etc. is the best approach for doing this.
Acknowledgements

We want to thank especially our invited speaker Philippe Desfray for his decisive contribution to the discussions. We also want to thank all the attendees for their insights and useful comments during the workshop, and the workshop chairs for their valuable support.
Position Papers

All position papers can be seen on the website of the workshop: http://www-ctp.di.fct.unl.pt/˜ja/wituml02.htm

1. From UML to Ptolemy II simulation: a model transformation, V. Arnould, A. Kramer, F. Madiot.
2. Generating test data from OCL specification, M. Benattou, J.-M. Bruel, N. Hameurlain.
3. Adapting design types to communication media and middleware environments, J. Cañete, F. Galán, M. Toro.
4. Mapping models between different modeling languages, E. Domínguez, A. Rubio, M. Zapata.
5. Using temporal logic to represent dynamic behavior of statecharts, M. Enciso, I. Guzmán, C. Rossi.
6. Generating and executing test data for OO systems, M. Jiménez, M. Polo, M. Piattini.
7. Integration of UML models using B notations, H. Ledang, J. Souquières.
8. OCL as a core UML transformation language, D. Pollet, D. Vojtisek, J.-M. Jézéquel.
9. A framework for model transformations, I. Porres.
10. Mapping object-oriented applications to relational databases using MOF and XMI, E. Rodrigues, R. Melo, F. Porto, J. Neto.
11. The value added invariant: a Newtonian approach for generating class diagrams from a use case model, B. Roussev.
12. Adding use cases to the building of information systems with Oracle Designer, P. Saliou, V. Ribaud.
13. A pragmatic approach to rule-based transformations within UML using XMI.difference, A. Wagner.
List of Participants

– João Araújo, Universidade Nova de Lisboa, Portugal, [email protected]
– Vincent Arnould, Thales Research and Technology, France, [email protected]
– Uwe Bardey, University of Bonn, Germany, [email protected]
– Antoine Beugnard, ENST - Bretagne, France, antoine.beugnard@enst-bretag
– Jean Bezivin, University of Nantes, France, [email protected]
– Paulo Borba, Qualiti - CESAR and CIN - UFPE, Brazil, [email protected]
– Jean-Michel Bruel, LIUPPA, France, [email protected]
– Philippe Desfray (invited speaker), SOFTEAM, France, [email protected]
– Manuel Enciso, Universidad de Málaga, Spain, [email protected]
– Gerd Frick, FZI Forschungszentrum Informatik, Germany, [email protected]
– Ángel Luis Rubio García, Universidad de La Rioja, Spain, [email protected]
– Hung Ledang, LORIA - Université Nancy 2, France, [email protected]
– Frédéric Madiot, Sodifrance, France, [email protected]
– Selma Matougui, ENST - Bretagne, France, selma.matougui@enst-bretagne
– Ana Moreira, Universidade Nova de Lisboa, Portugal, [email protected]
– Francisco Jose Galán Morillo, University of Seville, Spain, [email protected]
– Damien Pollet, IRISA, France, [email protected]
– Macario Polo Usaola, Universidad de Castilla La Mancha, Spain, [email protected]
– Ivan Porres, Åbo Akademi University, Finland, iporres@abo.fi
– Vincent Ribaud, Université de Brest, France, [email protected]
– Elisete Rodrigues, PUC-Rio, Brazil, [email protected]
– Carlos Rossi, Universidad de Málaga, Spain, [email protected]
– Boris Roussev, Susquehanna University, United States, [email protected]
– Philippe Saliou, Université de Brest, France, [email protected]
– Widayashanti Sardjono, University of Sheffield, United Kingdom, [email protected]
– Ambrosio Toval, University of Murcia, Spain, [email protected]
– Annika Wagner, University of Paderborn, Germany, [email protected].
– Jonathan Whittle, NASA Ames Research Center, United States, [email protected]
Related Links
– The workshop website: http://ctp.di.fct.unl.pt/~ja/wituml02.htm
– The previous workshop website (WTUML): http://ase.arc.nasa.gov/wtuml01/
– The OMG website on UML: http://www.omg.org/uml/
– The OMG website on MDA: http://www.omg.org/mda/
– The SOFTEAM website: http://www.softeam.fr/
Mobile Object Systems
Ciarán Bryce
Centre Universitaire d'Informatique, University of Geneva, Switzerland
[email protected]
1 Background
The ECOOP workshop on Mobile Object Systems was first organized in 1995 and has been held every year since. The first two episodes in the series – entitled “Objects and Agents” (1995) and “Agents on the Move” (1996) – were exploratory in nature, reflecting a growing awareness of and interest in the possibilities of mobile code and mobile objects for Internet programming. Towards the end of the 1990s, interest in the domain began to mature and several mobile object systems appeared in the research community. As a consequence, further editions of the Mobile Object Systems workshop concentrated on specific aspects of mobile objects. For instance, the title of the 1997 workshop was “Operating System Support”, the theme of the 1998 workshop was “Security”, and the theme of the 1999 installment was “Programming Language Support”. With the workshop entering its second half-decade, the themes of the workshop became broader in scope. At the same time, we decided to place more emphasis on discussions in the workshops and on invited presentations. The theme of the workshop in 2000 was “Operating System Support, Security and Programming Languages”, and the theme of the 2001 edition was “Application Support and Dependability”. For the 2002 edition, the theme chosen was “Experience”, even though we accepted papers from all areas of the mobile object domain. There were 11 papers accepted for this year's workshop, of which 9 were presented, as well as 4 invited talks. The invited talks were given by Doug Lea of SUNY Oswego, Jan Vitek of Purdue University, Luc Moreau of Southampton University and Chrislain Razafimahefa of Geneva University. The program committee was composed of experts from all areas of the mobile object systems domain, many of whom had already participated in previous editions of the workshop in some form. The committee was: Walter Binder (CoCo Software, Vienna, Austria), Eric Jul (Copenhagen University, Denmark), Doug Lea (Oswego, USA), Giovanna di Marzo Serugendo (Geneva University, Switzerland), Luc Moreau (Southampton, UK), Peter Sewell (Cambridge University, UK), Jan Vitek (Purdue University, USA), and Jarle Hulaas (Geneva University, Switzerland). I am very grateful to these researchers for reviewing papers, and for contributing to the workshop over the years. The workshop was held on Monday, June 10th - just prior to the ECOOP conference - at the Computer Science institute of Málaga University. There were
nearly 40 attendees at the workshop. We decided on a format of 15-minute talks, each followed by 15 minutes of discussion. The workshop was divided into four sessions. Three of the sessions were allocated to submitted papers and one session to invited talks. There was a satisfying amount of discussion during all sessions.
2 The Talks
The opening talk of the workshop was given by Kazuhiko Kato of the University of Tsukuba in Japan. The talk was entitled “Software Circulation using Sand-boxed File Space - Previous Experience and a New Approach”. This talk presented the POT mechanism for exchanging file spaces between machines. A POT is a collection of files; each file may be contained in the POT or named by reference (in which case the name is linked to a file in the file system of the current execution site). POTs are mobile. The execution of a (file in a) POT is controlled by a security policy, which only allows files named by the POT to be accessed. The POT mechanism extends - and is implemented over - the Planet mobile memory segment mechanism. A presentation of the implementation of POT and of a web crawler application was also made. Kazuhiko Kato concluded that POTs - built over Planet - did not require strong mobility, and that the application implementation only required 2 to 4 weeks of coding. The second paper was presented by Giovanna di Marzo Serugendo of Geneva University; the talk was entitled “Towards a Secure and Efficient Model for Grid Computing Using Mobile Code”. An architecture was proposed that exploits mobile agents to build a grid infrastructure. The operator is a trusted third party who coordinates the distribution of applications and input data and the collection and integration of results, and who manages billing and accounting for resource consumption. Mobile deployment agents, dispatched from the grid operator, are responsible for distributing application workload to active resource donators. The business model utilises a form of micro-payment, with a sort of e-currency termed execution tickets. The implementation plans to use the J-SEAL2 agent platform. Walter Binder of CoCo Software Engineering in Austria then presented a talk entitled “MAE: a Mobile Agent Platform for Building Wireless M-Commerce Applications”. He presented an agent framework for building mobile-commerce applications (applications that a user executes using PDAs). The agent configuration was presented, as well as the Waba virtual machine over which the platform runs. Walter Binder argued that mobile objects are a good paradigm for programming PDA applications, since application functionality can be executed on servers if the PDA is overloaded or disconnected from the network. He concluded that virtual machines on mobile devices are poorly designed for mobile objects. In his talk entitled “Towards Transparent Adaptation of Migration Policies”, Eric Tanter of the University of Chile in Santiago discussed the important issue of specifying what objects must move with a program when it is migrated, and
what objects can remain on the original host and be referenced remotely. The solution (integrated into the Reflex architecture) relies on reflection. A policy that is interpreted by the virtual machine specifies an object's migration semantics. This policy object is written independently of the object itself. Noriki Amano of the Japanese Graduate School of Information Science presented a talk entitled “SAMcode: A Software Model for Flexible and Safe Adaption of Mobile Code Programs”. The SAMcode model enables mobile programs to adapt dynamically to their underlying runtime environment by choosing among a set of procedures and methods. The approach enables an application to choose, at each moment, the method or set of methods that will be executed next. The next talk was given by Zoltan Horvath of Budapest University, and was entitled “Safe Mobile Code: Certified Proved Property Carrying Code”. He presented an approach to host security (i.e. protecting host platforms from malicious code) that improves on proof carrying code. The motivation is to find a compromise between the two extremes of run-time checking (which is safe, but computationally expensive) and identity verification of signed code (which is fast, but not secure). The improvement over PCC in the proposed architecture is achieved through the introduction of a trusted certifying authority. This certifier intermediates between the code producer and consumer by off-loading the resource-consuming part of code checking from the consumer. Once code is proof certified, a certificate is attached to it, and both code and certificate can later be downloaded at run-time by the code consumer. Tim Coninx of Leuven Catholic University presented a talk entitled “Using Reprogrammable Coordination Media as Mobile Agent Execution Environments”. He proposed a tuple space model where the tuple space operations may pass through a customizable chain of filters before accessing the tuple space (pre-processing) and before returning results (post-processing). The model offers additional flexibility, as the tuple space operations may be extended with additional functionality in a modular way. The next talk was given by Luk Stoops of Vrije University and was entitled “Fine-grained Interleaved Code Loading for Mobile Systems”. The talk presented a technique that interlaces code loading, code compilation and code evaluation in order to improve an application's overall execution time in the presence of code transfer through networks. The idea is to profile an application and, based on this information, reorganise the code so that methods encountered first during profiling are transferred and loaded first on the target host. Their experiments show that this technique is mostly beneficial for faster GUI construction during an application's start-up phase. The final talk of the day that presented a submitted paper was given by Yannis Smaragdakis, and was entitled “Automatic Application Partitioning - the J-Orchestra approach”. Automatic partitioning is the process of splitting up an application into different parts in order to use these parts in a distributed setting. The position paper introduces J-Orchestra, a tool for automatic partitioning, and also presents arguments for partitioning, such as adapting an application to a distributed environment.
The talk was followed by a demonstration of the system. A paper on J-Orchestra also appeared at the main ECOOP conference.
2.1 Invited Talks
The morning session closed with four invited talks. The first talk was from Doug Lea of SUNY Oswego on the new Java isolates specification. The second talk was given by Jan Vitek of Purdue University, and described an implementation of isolates made by his group. Luc Moreau of Southampton University gave the third talk, which described a model for resource control in mobile agent systems. Chrislain Razafimahefa of Geneva University gave the final invited talk on the subject of protection domains in Java virtual machines. Doug Lea explained the motivation for isolates in his talk and outlined some of their use cases. As described in JSR-121, isolates are isolated Java program units that can co-exist within a Java virtual machine and that can be migrated between VMs. They should be included in JDK 1.5. Isolates do not share static variables, and communicate by sending byte streams over links. The motivation for isolates is inter-application security and the requirement to support mobility (since Java threads cannot be safely moved). Doug Lea stressed that isolates should be used for applications running on the same machine or on a set of machines in the same administrative domain (or cluster). They are not intended to replace RMI. Nevertheless, isolates clearly represent a mobile object abstraction at the virtual machine level. The theme was taken up by Jan Vitek of Purdue University, who presented an implementation of an earlier version of the JSR-121 specification called Incommunicado. The talk dealt with a fast inter-application communication mechanism that does not violate any functional or non-functional properties of its environment, and that also supports enforcement of application-specific security policies. Jan Vitek explained the design and implementation of this communication substrate for applications executing within a single Java virtual machine. Communication is 10 to 70 times faster than RMI between VMs running on the same machine. Luc Moreau of Southampton then presented a formal model for distributed resource management. The talk presented an extension of earlier work done with Christian Queinnec. This work introduced program primitives that capture the provision and consumption of energy among programs organised into hierarchical groups. Groups contain energy, and the starting or termination of a program in a group leads to the transfer of energy between the program and its parent. The extension to this work presented in the talk introduces distribution to the calculus. The calculus can model the remote execution of programs as well as the transfer of energy between these programs. In the final talk of the invited speaker session, Chrislain Razafimahefa of Geneva University outlined an approach to the structuring of a virtual machine for mobile object systems. The main problem mentioned was that current virtual machines are too monolithic. This makes them difficult to port onto small devices
and even more difficult to update. The solution presented by Chrislain Razafimahefa was based on protection domains. Currently the VM contains all code that needs to be protected from user programs. In the proposed approach, each user program and each VM service (e.g., compiler, verifier) are run in protection domains over the kernel, which becomes minimal. An approach to implementing protection domains using page-based OS protection primitives was presented.
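Returning to the isolates described in Doug Lea's and Jan Vitek's talks, the following is a minimal sketch of how the abstraction was intended to be used, assuming the draft javax.isolate API that circulated with JSR-121; since the specification was still in progress at the time, the exact names and signatures may differ.

    // A sketch assuming the draft JSR-121 API (javax.isolate);
    // the class name "worker.Main" is illustrative.
    import javax.isolate.Isolate;

    public class IsolateDemo {
        public static void main(String[] args) throws Exception {
            // Start another Java program in the same JVM as an isolate:
            // it shares no static state with the creating program.
            Isolate worker = new Isolate("worker.Main", new String[] { "job1" });
            worker.start();
            // Any communication happens over links carrying byte
            // streams, not over shared objects (omitted here).
        }
    }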
3 Conclusions
Compared to preceding years, Grid and PDA computing seemed to be the new themes. Both constitute a class of system where computation is deployed on another device in order to avail of extra computing power. This delegation of computation is done using mobile object technology. In the case of PDAs, mobile objects are deployed from the PDA to execute on servers. This has the advantage that the PDA need not remain connected to the network, which is generally expensive for it. In the case of the Grid, agents encapsulate computation that is executed on other servers simply to augment the computing power available to the whole computation. Resource control was a common theme throughout the workshop. This is the problem of ensuring that each mobile object is made accountable for the resources it consumes on a host. Part of the reason for the interest in this topic was that a workshop on resource control was held the following day, and both workshops had more or less the same attendees. Another reason is the growing interest in the topic among members of the mobile object domain. Resource control is now seen as an issue whenever machines are used to host agent computations, as in the case of the Grid for example. Another interesting aspect of the workshop was that most talks were linked to application developments elsewhere. No talk used the traditional academic example of an agent visiting a set of airline servers looking for the cheapest fare. The main security concern mentioned was resource control – and its importance for addressing denial of service attacks. In addition, many talks described mobile object systems that were already implemented. The general feeling at the end of the day was that the workshop had been productive and enjoyable. After the workshop, we rejoined the other ECOOP attendees for the “Get Together” party, where we discussed the sights of Málaga, the local cuisine, the Soccer World Cup and the upcoming conference. Plans are already afoot for next year's Mobile Object Systems workshop, which will be the 9th in the series! We will accept papers on all topics, but would like the workshop to address some fundamental questions. Papers and presentations from this year's workshop and from all previous editions are available at the workshop's web site: http://cui.unige.ch/~ecoopws
Feyerabend: Redefining Computing
Wolfgang De Meuter1, Pascal Costanza2, Martine Devos3, and Dave Thomas4
1 PROG, Vrije Universiteit Brussel, Belgium
2 Institute of Computer Science III, University of Bonn, Germany
3 Williams College, Massachusetts, USA
4 Bedarra Corporation, USA
Abstract. The Feyerabend Project is Richard P. Gabriel’s attempt to repair the arena of software development and practice. This Feyerabend workshop is an ECOOP’02 incarnation of a series of Feyerabend workshops held in conjunction with major software engineering conferences.
1 Introduction
Fifty years into the First Computing Era, some of us in the computing arena have come to realize that we have made a false start that cannot be fixed, and that for us to finally be able to produce lasting, correct, beautiful, usable, scalable, enjoyable software that stands the tests of time and moral human endeavor, we need to start over. Perhaps we will be able to salvage some of what we have learned from the First Era, but we expect almost everything except the most mathematical and philosophical fundamentals to be brushed aside. In 1975, Berkeley philosopher Paul Feyerabend wrote a book called “Against Method”, in which he said: “...one of the most striking features of recent discussions in the history and philosophy of science is the realization that events and developments ... occurred only because some thinkers either decided not to be bound by certain ‘obvious’ methodological rules, or because they unwittingly broke them. This liberal practice, I repeat, is not just a fact of the history of science. It is both reasonable and absolutely necessary for the growth of knowledge. More specifically, one can show the following: given any rule, however ‘fundamental’ or ‘necessary’ for science, there are always circumstances when it is advisable not only to ignore the rule, but to adopt its opposite.” In “The Invisible Computer”, Donald Norman wrote, “...the current paradigm is so thoroughly established that the only way to change is to start over again.” The Feyerabend Project is an attempt (triggered by Richard P. Gabriel) to repair the arena of software development and practice. This workshop is one in a series leading up to an event to reinvent computing. For that event, a most diverse group of 75 people will be put together. The result of the two-week event will be the first steps toward a roadmap for massive rebuilding of computing - both as a theoretical endeavor and as a practice - and toward a plan to accomplish it. The goal of this workshop is to bring together everyone who is interested in the redefinition of computing and/or in the use of alternative metaphors/
languages/ideas for entering the Next Era of computer science. The major problem with the current flow of ideas is that current approaches merely enable the construction of software that is too brittle and too rigid to survive and operate in our real world, which is dynamic and constantly subject to change. Our goal is to collect diverse contributions for a reconsideration of existing technology. This is much in the same spirit as the “computational rainbow” exercise that was done in one of the previous workshops (see http://www.dreamsongs.com): the goal of that exercise was to emphasize the existing diversity in computing, which might have become hidden or ignored “due to middleware”.
2 Workshop Organization
Just like the previous Feyerabend workshop, the concrete organization of the workshop depended on the number of participants and on their interests. We started with a brief presentation by every participant and asked everyone to make a ‘naughty statement’, in order to break the ice. Thereafter, we decided to have a short discussion on what to do. Joe Bergin (one of the few people present who had attended nearly every Feyerabend workshop organized so far) gave a brief introduction to the “thinking exercises” done at previous workshops (see http://www.dreamsongs.com). He concluded by saying that the basic goal of the exercises is to change the way people think. After a while we decided that most people were not interested in repeating one of the exercises done at previous workshops. Some new exercises were proposed, but none of them attracted enough interest from the other participants to be completed:
– Let us compose a 5-year curriculum in CS that is not influenced by industry.
– What if von Neumann had been a village idiot instead of a super brain?
– The Feyerabend project succeeded. Now what?
– How do we recognize that the Feyerabend project has succeeded?
Nevertheless, after further discussion, two groups of people with shared interests emerged:
– The first group was interested in what is worth saving from current-day CS: “Let us define a historical consciousness”.
– The second group was interested in the bridges between CS and other disciplines.
We decided to split up and get back together after the afternoon coffee break. Although both groups had a lot of interesting and vivid discussions, there are no real conclusions to log. Having conclusions wasn't a goal, after all. After coming together again, a very lively discussion on (the lack of) women in computer science took off. Issues that were discussed included:
– What does a women-directed CS department look like?
– What if women were in control of computing?
– How can we increase the number of women in CS? (answer: lock laboratories at night!)
– Teams with at least one woman perform better than other teams.
– There are many women in the patterns community.
– There are no women in the open source community (geeks with pizza boxes!).
– The number of women in the bio-inspired-CS community is higher than the number of women in the average CS community.
– Someone said that the “geeky” stuff is too competitive for women (smallest program, biggest hard disk, most beautiful program, ...).
– Perhaps the problem is inherent to women/CS. No: countries like Argentina, Italy and Portugal prove otherwise.
The final discussion of the workshop emerged after someone proposed the “exercise” of naming the two books that had most influenced their (CS-) life. Among others:
– The Booch design book
– “Software Engineering” by Sommerville
– The GoF book
– The Abelson & Sussman course
– The Bible of Quantum Computation
– The Algorithmic Beauty of Plants
– Denotational Semantics by Strachey & Scott
– 201 Principles of Software Engineering
– The X problems for Fortran Solutions (forgot the number)
– Sparse Distributed Memory
– The Dragon Book
– Garlan & Shaw
– Smalltalk books
– BCPL: The compiler and its language
– Oberon by N. Wirth
– Gödel, Escher, Bach
– Cryptonomicon
– The Sussman book on Classical Mechanics
– CLU by Liskov

3 Summary of the Position Papers
This section summarizes the main points of all workshop position papers. These papers can be downloaded from the workshop’s home page at http://groups.yahoo.com/group/feyerabend-project/.
What we do Matters by J. Bergin
Joe Bergin advocates seriously taking social consequences into account when creating new technology. He refers to web cookies and data mining as examples where questions of ethics are obviously neglected. He also points to attacks on privacy and security, the poor quality of software and attacks on internet servers as evidence that the changes in social relationships and power relationships between people and organisations are not addressed, although in principle we know how to solve these issues.

Studying the Language of Programming by R. Biddle and J. Noble
Robert Biddle and James Noble propose a new way of doing programming language research. In the tradition of post-modern thought, they argue for descriptive rather than prescriptive research, which means the following. Programming language research is predominantly performed by creating new languages, developing new type systems and inventing new theories about programming; in this sense, it is traditionally a prescriptive activity in that researchers try to tell programmers how they should carry out their work. Descriptive research would rather look at actual programmers' practice and try to assess what works and what fails. Biddle and Noble propose the following approaches for doing research on programming languages: reviews, case studies, in-situ studies, corpus analysis, semiotics, and patterns. The common baseline of these suggestions is that they concentrate on linguistic means.

Biological Foundations of Computing by P. Grogono
Peter Grogono argues for looking at biological systems for inspiration in computer science. He compares computation and biology, and stresses the similarities of these fields. He then discusses several lessons learned in the field of Evolutionary Programming. He promotes the approach of providing computational agents with a general purpose programming language and letting them program themselves. He lists some “biological” features a programming language should have for such a purpose, like interchangeability of code and data, low impact of small changes to code, intensionality, the ability to suggest only partial solutions to problems, building large systems from small amounts of code, and unification of genotypes and phenotypes. He strongly encourages disregarding efficiency considerations, because history proves that hardware has always managed to catch up with software requirements.

Less Is More by M. Torgersen
Mads Torgersen suggests continuing fundamental research on object-oriented programming languages by trying to further unify and generalize abstractions in the tradition of the functional community. He proposes to take away restrictions from classes in order to add expressive power, or to remove unnecessary sequentiality instead of adding concurrency constructs. He cites the programming language Rune as an example that successfully unifies genericity, modularisation and covariance by turning types into first-class entities of the language. He also proposes an interesting challenge problem as a testbed for new programming language constructs.
When Turing Meets Deutsch: A Confrontation between Classical Computing and Quantum Computing by E. D'Hondt, M. D'Hondt and T. D'Hondt
The past ten years have seen quantum computation evolve from science fiction into something tangible: theory is being confirmed by experiment and extremely modest trials are actually conclusive. These limitations – far from dampening our fantasy – inspire us to imagine full-fledged quantum computers, and of course the software and associated programming paradigms for these machines. But if quantum computation is to be taken seriously, shouldn't we start thinking about bringing theoretical physics and computer science much closer together than they are today? What about the obsession with formalism of the former and the complacency of the latter? We would like to propose a gedanken experiment: confronting a theoretical physicist and a computer scientist, and exposing them to the qubit through queries from a historically conscious veteran.

Domain-Specifically Tailorable Languages and Software Architectures by U. Zdun
Uwe Zdun summarizes a set of forces in the context of tailoring software systems and argues that there is a high demand for domain-specific tailoring of software systems. He lists typical reasons for this demand, such as ease of work for domain experts, work efficiency, rapid customization requirements, and economic reasons. He also discusses examples of different interaction and programming techniques that resolve different parts of the problem set, like scripting languages, visual programming, domain-specific languages, GUI builders, Programming by Example, software tailoring, and Generative Programming. Several open issues are derived from this discussion.

P. Dickman
In his (untitled) position paper, Peter Dickman explains that he is interested in the workshop because of his conviction that people are currently thinking wrongly about distributed systems. He first argues that brute-force evolutionary and neural techniques are not the right way, because they are uninformative and wasteful of resources. He continues with an argument in favor of “weak” ontogeny (i.e. the sequence of events involved in the development of a single organism) instead of focussing on phylogeny (i.e. the sequence of events involved in the development of a species), as is done in branches of evolutionary computing. The argument is to focus on understanding and capturing complexity rather than trying to reduce it. He explains that currently we focus too much on homogeneity during system design, and that the necessary heterogeneity could perhaps be achieved by seeking whole-organism metaphors from biology and requiring that macroscopic structures be seeded into the lowest levels of our system design (much like the structure of an elephant is encoded in its DNA).
References
[1] Biddle, R., Noble, J.: Studying the Language of Programming; Victoria University of Wellington, New Zealand.
[2] Dickman, P.: Position Paper; University of Glasgow, Scotland.
[3] Bergin, J.: What we do matters; Pace University, USA.
[4] Grogono, P.: Biological Foundations of Computing; Concordia University, Montreal, Canada.
[5] D'Hondt, E., D'Hondt, M., D'Hondt, T.: When Turing Meets Deutsch: A Confrontation between Classical Computing and Quantum Computing; Vrije Universiteit Brussel, Belgium.
[6] Zdun, U.: Domain-Specifically Tailorable Languages and Software Architectures; University of Essen, Germany.
[7] Torgersen, M.: Less is More; University of Aarhus, Denmark.
Formal Techniques for Java-like Programs
Sophia Drossopoulou1, Susan Eisenbach1, Gary T. Leavens2, Arnd Poetzsch-Heffter3, and Erik Poll4
1 Department of Computing, Imperial College of London, Great Britain, {scd,se}@doc.ic.ac.uk
2 Department of Computer Science, Iowa State University, USA, [email protected]
3 Fachbereich Informatik, FernUniversitaet Hagen, Germany, [email protected]
4 Dept. of Computer Science, University of Nijmegen, The Netherlands, [email protected]
Abstract. This report gives an overview of the fourth ECOOP Workshop on Formal Techniques for Java-like Programs. It explains the motivation for such a workshop and summarizes the presentations and discussions.
Introduction
This workshop was the fourth in the series of workshops on “Formal Techniques for Java Programs (FTfJP)” held at ECOOP. It was a follow-up to the FTfJP workshops held at the previous ECOOP conferences in 2001 [4], 2000 [1], and 1999 [3], and the “Formal Underpinnings of the Java Paradigm” workshop held at OOPSLA'98. The name of the workshop has been slightly changed – from “Formal Techniques for Java Programs” to “Formal Techniques for Java-like Programs” – to explicitly include not just work on Java, but also work on related languages like C#. The workshop was organized by
– Sophia Drossopoulou (Imperial College, Great Britain),
– Susan Eisenbach (Imperial College, Great Britain),
– Gary T. Leavens (Iowa State University, USA),
– Arnd Poetzsch-Heffter (University of Kaiserslautern, Germany), and
– Erik Poll (University of Nijmegen, the Netherlands).
Besides the organizers, the program committee of the workshop also included
– Gilad Bracha (Sun Microsystems, USA),
– Doug Lea (State University of New York at Oswego, USA),
– Rustan Leino (Microsoft Research, USA),
– Peter Müller (Deutsche Bank, Germany), and
– Don Syme (Microsoft Research, UK).
The proceedings of the workshop have appeared as a technical report [2] and are available on the web at http://www.cs.kun.nl/~erikpoll/ftfjp. There was lively interest in the workshop. We were very pleased with the high quality of the submissions. Out of 18 submissions, 10 papers were selected by the Programme Committee for longer presentations. In addition, shorter presentations were given for two position papers. 29 people from 9 countries attended the workshop.

Motivation. Formal techniques can help to analyze programs, to precisely describe program behavior, and to verify program properties. Applying such techniques to object-oriented technology is especially interesting because:
– the OO paradigm forms the basis for the software component industry, with its need for certification techniques,
– it is widely used for distributed and network programming,
– the potential for reuse in OO programming carries over to reusing specifications and proofs.
Such formal techniques are sound only if based on a formalization of the language itself. Java is a good platform to bridge the gap between formal techniques and practical program development. It plays an important role in these areas and is becoming a de facto standard because of its reasonably clear semantics and its standardized library. However, Java contains novel language features which are not yet fully understood. More importantly, Java supports a novel paradigm for program deployment, and improves interactivity, portability and manageability. This paradigm opens new possibilities for abuse and causes concern about security. Thus, work on formal techniques and tools for Java programming and formal underpinnings of Java complement each other. This workshop aims to bring together people working in these areas, in particular on the following topics:
– specification techniques and interface specification languages,
– specification of software components and library packages,
– automated checking and verification of program properties,
– verification technology and logics,
– Java language semantics,
– dynamic linking and loading, security.
With the advent of languages closely related to Java, notably C#, we have changed the title to “Formal Techniques for Java-like Programs” to also include work on these languages in the scope of the workshop.

Structure of Workshop and Report. The one-day workshop consisted of a technical part during the day and a workshop dinner in the evening. The presentations at the workshop were structured as follows:
– 9.30 - 11.00: session 1
• The Java Memory Model and Simulator, by Jeremy Manson and William Pugh
• Type-Preserving Compilation of Featherweight IL, by Dachuan Yu, Valery Trifonov, and Zhong Shao
– 11.30 - 13.00: session 2
• Transposing F to C#, by Andrew Kennedy and Don Syme
• Simple Verification Technique for Complex Java Bytecode Subroutines, by Alessandro Coglio
• Intraprocedural Analysis for JVML Verification, by William Retert and John Boyland
• From Process Algebra to Java Code, by Andrew Phillips, Susan Eisenbach, and Daniel Lister
– 14.30 - 16.00: session 3
• Checking Ownership and Confinement Properties, by Alex Potanin and James Noble
• Analyzing the Java Package/Access Concepts in Isabelle/HOL, by Norbert Schirmer
• Non-null types in an object-oriented language, by Manuel Fahndrich and Rustan Leino
• Accessibility & Helper Types, by Jan Dockx
– 16.30 - 18.00: session 4
• Model Checking Java Using Pushdown Systems, by Jan Obdrzalek
• Specifying and Checking Java using CSP, by Michael Möller
• Predicate Transformation as Proof Strategy, by Nicole Rauch and Arnd Poetzsch-Heffter
• Towards an Algebraic Formalization of Sequential Java-like Programs with Implicit States, by Kazem Lellahi and Alexandre Zamulin
Session 1
The workshop started with a “semi”-invited talk on the Java memory model and simulator. Jeremy Manson and William Pugh had submitted a paper on the simulator for the new Java memory model, and they were invited to give a longer presentation in order to provide more background on the memory model. The Java Memory Model has been the source of great controversy, as the original description in the Java Language Specification turned out to be fatally flawed. A new formal specification is being developed through the Java community process. In the first part of the presentation, Bill Pugh explained and motivated the fundamental design decisions underlying this new memory model
and gave examples illustrating the weird behaviour that may arise in incorrectly synchronised code. In the second part of the presentation, Jeremy Manson explained the simulator for the new Java memory model. For a representative subset of the language, this simulator will report all the possible results that a program may have according to the new definition of the memory model. The idea is that this simulator allows experimentation with programs, to get a feeling for the memory model and its properties. Given the complexity of the formal definition of the memory model, the simulator provides a useful tool in trying to understand it. Moreover, the simulator should be a useful tool for implementors of a Java virtual machine to see if its behaviour is within the behaviour allowed by the memory model. Dachuan Yu gave a talk about joint work with Valery Trifonov and Zhong Shao. They have developed a translation of Featherweight Intermediate Language (Featherweight IL) to LFLINT, a low-level language very close to machine level, and proved a type-preservation theorem for this translation. Featherweight IL is a significant subset of MS IL, the bytecode language used in the .NET framework, and it includes most of the new features of MS IL. Like Java bytecode, MS IL is still a relatively high-level language that requires further compilation and optimization to run efficiently on actual hardware. The interest of this work is that by being able to propagate type information down to the lowest level, we no longer have to rely on the correctness of the compiler for the type safety of the code it produces, as the output can still be typechecked. Because security ultimately relies on this type safety, this is an important advantage. It is the intention that this translation will be used for a real MS IL compiler in the near future.
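Returning to the memory-model talk: to give a flavour of the “weird behaviour” of incorrectly synchronised code that Pugh illustrated, consider the following classic reordering litmus test; this is a textbook example of ours, not one taken from the presentation.

    // A classic, unsynchronised "litmus test": the memory model permits
    // the surprising outcome r1 == 1 and r2 == 1, because each thread's
    // read and write may be reordered with respect to each other.
    class Reordering {
        static int x = 0, y = 0;
        static int r1, r2;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread() {
                public void run() { r1 = x; y = 1; }
            };
            Thread t2 = new Thread() {
                public void run() { r2 = y; x = 1; }
            };
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println("r1=" + r1 + " r2=" + r2);
        }
    }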
Session 2
Just after the long-running debate about retrofitting Java with generics has been settled, it turns out that the same issue is now being discussed for C#. (A pity C# did not take to heart all the lessons that might have been learnt from Java, in particular that it is easier to include generic types from the start, rather than having to retrofit them later, with all the complications of maintaining backward compatibility ...). Andrew Kennedy gave a talk about joint work with Don Syme on a type-preserving mapping from the polymorphic lambda calculus (also known as System F) to an extension of C# supporting parametric polymorphism. This mapping provides some insight into the expressive power of object-oriented languages with polymorphic virtual methods compared to other type systems with type parameters. (A sketch of the flavour of this encoding appears at the end of this section.) Java bytecode verification is an important topic of research on Java, since the security of Java relies on it, and it has been the subject of quite a few talks at FTfJP over the years. There were two talks on the subject of bytecode verification this year, both tackling the trickiest part of Java bytecode verification, namely dealing with the notorious subroutines.
Alessandro Coglio, one of the regulars at FTfJP, with his third paper on the subject of bytecode verification, presented a relatively simple solution to the issue of bytecode verification for subroutines. Interestingly, one of the referees of the paper had to admit that this technique was actually used in a (very efficient) commercial bytecode verifier they had produced. This clearly demonstrates the practicality of the approach, and thus the interest in having a first formal account of the approach and its properties in the public domain. John Boyland proposed an alternative approach to verifying subroutines, developed in joint work with William Retert. They had observed that some of the problems encountered in the bytecode verification of subroutines are similar to problems already solved for interprocedural analysis in high-level languages, and this observation led them to investigate whether similar techniques could be used as a solution. John Boyland described a general framework for interprocedural abstract interpretation and an initial application of this framework to a subset of the Java Virtual Machine Language including subroutines. Susan Eisenbach gave a talk about joint work with Andrew Phillips and Daniel Lister on an attempt to bridge the gap between the formal models that are being proposed for mobile distributed computation and actual implementations. She presented the δπ calculus, an abstract formal model of mobile distributed computation suited for reasoning about security and correctness properties, and an implementation of the primitives of this calculus as a Java API.
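As promised above, here is a minimal sketch of the flavour of the Kennedy-Syme encoding, transliterated into Java generics for consistency with the rest of this report (the paper itself targets an extension of C#); the class and method names are ours and purely illustrative.

    // A System F universal type such as  forall a. a -> a  can be
    // modelled as an abstract class with a polymorphic virtual method;
    // a System F term of that type becomes a subclass overriding it.
    abstract class PolyId {
        public abstract <A> A apply(A x);   // the type  forall a. a -> a
    }

    class Id extends PolyId {
        public <A> A apply(A x) { return x; }   // the term  /\a. \x:a. x
    }

    class Demo {
        public static void main(String[] args) {
            PolyId id = new Id();
            String s = id.<String>apply("hello");  // instantiate a := String
            Integer n = id.<Integer>apply(42);     // instantiate a := Integer
            System.out.println(s + " " + n);
        }
    }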
Session 3
Over the years, many proposals to manage aliasing in Java and other object-oriented languages have been put forward. It is clear that aliasing is one of the major stumbling blocks when it comes to program verification, and many formal proposals have been made to help with this issue, such as object ownership, confinement, and uniqueness. Unfortunately, it is not clear how practical these proposals really are. James Noble gave a talk about joint work with Alex Potanin on a tool for analysing the heaps produced by Java programs. The statistical analysis performed by the tool provides a picture of how common certain patterns of aliasing and sharing (or the absence of aliasing) are. The aim is a better understanding of the kinds of aliasing that typical programs produce, which would be a big help in designing formalisms to tackle the problems that arise with aliasing. Norbert Schirmer talked about his work extending the formalisation of Java developed in the Bali project at the Technical University of Munich to include the Java access modifiers and the associated access control. The talk gave an interesting account of the inconsistencies and ambiguities that were discovered in doing this. It was interesting to hear about some good work on this topic, as it is an important topic that has not received a lot of attention before. Manuel Fahndrich gave a talk about joint work with Rustan Leino about a type system for an object-oriented language with “non-null types”. Such a type
system allows certain null-related errors, notably null-pointer dereferencing, to be detected and avoided by type checking at compile time. The idea was to investigate the possibility of retrofitting a language like C# or Java with non-null types. The main complication turned out to be the treatment of constructors, where one would somehow want to prevent anyone from getting hold of a newly constructed object before the initialisation of all its fields has been completed. Interestingly, this issue was also mentioned as a problem by Bill Pugh in his talk on the new Java memory model, where he showed that several threads having access to a String object under construction could observe a – supposedly immutable – String object changing its value. In addition to the 10 regular presentations, two short position papers were presented at the workshop. The first of these was by Jan Dockx. Although the notion of class invariant is one of the cornerstones of the specification of object-oriented programs, in practice there is often the need to allow certain ‘helper’ methods (typically private methods) to violate invariants. Jan Dockx's talk addressed the issue of how to allow certain methods to break invariants of an object, depending on the different levels of visibility (public, protected, package, and private), inspired by the discussions on this topic for the specification language JML.
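To illustrate the kind of exemption under discussion, here is a small sketch in JML-style notation; the class is ours and purely illustrative, and it shows only the simplest form of the idea (JML's helper modifier), not Dockx's visibility-sensitive proposal.

    // A JML-style sketch: the invariant need not hold at the entry and
    // exit of a method marked as a helper, only at the boundaries of
    // the ordinary (here: public) methods that call it.
    class Counter {
        private int count;
        //@ invariant count >= 0;

        private /*@ helper @*/ void shift(int delta) {
            // may temporarily drive count below zero
            count += delta;
        }

        public void add(int n) {
            shift(-1);        // the invariant may not hold here...
            shift(n + 1);     // ...but is restored before we return
        }
    }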
Session 4
There has been a lot of work on model checking infinite-state systems in recent years. Jan Obdrzalek talked about applying one technique that has been developed for this – namely pushdown systems – to checking control flow properties of sequential Java programs, with the emphasis on checking properties of the exceptional control flow caused by the throwing and catching of exceptions. Michael Möller gave a talk about work on the jassda runtime assertion checker for Java developed at the University of Oldenburg. His idea was to extend the expressivity provided by traditional pre- and postconditions for individual methods by using a CSP dialect to specify a set of allowed traces of method invocations for a Java program. That way, properties about the order in which certain events happen in Java programs can be conveniently specified and subsequently checked using the jassda tool. Nicole Rauch gave a talk about joint work with Arnd Poetzsch-Heffter on the ongoing development of the Jive system for the interactive verification of Java programs. She argued against the common assumption that the generation of weakest preconditions is the best technique to tackle the verification of programs, and she proposed a more flexible approach using a Hoare logic with a strategy mechanism, where the generation of weakest preconditions would be just one of the strategies. Finally, Alexander Zamulin presented the second short position paper. In his talk about joint work with Kazem Lellahi, Alexander sketched an algebraic formal model for a sequential object-oriented programming language. The approach
was based on a combination of traditional many-sorted algebras with abstract state machines (ASMs).
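For readers unfamiliar with the weakest-precondition generation that Rauch's talk argued about, the standard textbook equations (not specific to Jive) read as follows:

    wp(x := e, P)                =  P[e/x]                  (substitute e for x in P)
    wp(S1 ; S2, P)               =  wp(S1, wp(S2, P))       (work backwards)
    wp(if b then S1 else S2, P)  =  (b => wp(S1, P)) and (not b => wp(S2, P))

Applying these rules mechanically through a whole method yields one large formula; a strategy mechanism, as proposed in the talk, leaves room for other proof styles as well.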
Conclusions
Looking back on the workshop, the programme committee is very pleased with the quality of the submitted papers and with the attendance at the workshop. Prior to the workshop there had been some discussion about the future of FTfJP, but once the submissions came in everyone was quickly convinced that there is enough life left in the research area to keep an interesting workshop alive. This is also supported by the fact that there were both familiar and new faces at the workshop, and that both familiar and new topics were being addressed. The fact that there were two papers specifically on C# shows that the decision to include C# in the scope of the workshop was a good one. Another special issue of the journal “Concurrency and Computation: Practice and Experience” (CCPE) dedicated to FTfJP'2002, with invited papers from the workshop, is planned.

Acknowledgements
Of the local organisers we would in particular like to thank Antonio Vallecillo for his help in organising a very enjoyable workshop dinner.
References
[1] S. Drossopoulou, S. Eisenbach, B. Jacobs, G. T. Leavens, P. Müller, and A. Poetzsch-Heffter. Formal techniques for Java programs. In Jacques Malenfant, Sabine Moisan, and Ana Moreira, editors, Object-Oriented Technology. ECOOP 2000 Workshop Reader, volume 1964 of Lecture Notes in Computer Science, pages 41–54. Springer-Verlag, 2000.
[2] S. Drossopoulou, S. Eisenbach, G. T. Leavens, A. Poetzsch-Heffter, and E. Poll. Formal techniques for Java-like programs. Technical Report NIII-R0204, University of Nijmegen, 2002. Available from http://www.cs.kun.nl/~erikpoll/ftfjp.
[3] B. Jacobs, G. T. Leavens, P. Müller, and A. Poetzsch-Heffter. Formal techniques for Java programs. In A. Moreira and D. Demeyer, editors, Object-Oriented Technology. ECOOP'99 Workshop Reader, volume 1743 of Lecture Notes in Computer Science, pages 97–115. Springer-Verlag, 1999.
[4] G. T. Leavens, S. Drossopoulou, S. Eisenbach, A. Poetzsch-Heffter, and E. Poll. Formal techniques for Java programs. In A. Frohner, editor, Object-Oriented Technology. ECOOP 2001 Workshop Reader, volume 2323 of Lecture Notes in Computer Science, pages 30–40. Springer-Verlag, 2000.
List of Participants
– Suad Alagic, [email protected]
– Davide Ancona, [email protected]
– Christopher Anderson, [email protected].
– John Boyland, [email protected]
– Gilad Bracha, [email protected]
– Patrice Chalin, [email protected]
– Alessandro Coglio, [email protected]
– Jan Dockx, [email protected]
– Sophia Drossopoulou, [email protected]
– Susan Eisenbach, [email protected]
– Manuel Fahndrich, [email protected]
– Neal Glew, [email protected]
– Andrew Kennedy, [email protected]
– Doug Lea, [email protected]
– Gary T. Leavens, [email protected]
– Jeremy Manson, [email protected]
– Michael Moeller, [email protected]
– David Naumann, [email protected]
– James Noble, [email protected]
– Jan Obdrzalek, [email protected]
– Arnd Poetzsch-Heffter, [email protected]
– Erik Poll, [email protected]
– Bill Pugh, [email protected]
– Nicole Rauch, [email protected]
– William Retert, [email protected]
– Norbert Schirmer, [email protected]
– Mirko Viroli, [email protected]
– Dachuan Yu, [email protected]
– Alexandre Zamulin, [email protected]
Poster Session
Juan Manuel Murillo and Fernando Sánchez
University of Extremadura, Spain, {juanmamu,fernando}@unex.es
Abstract. This report summarizes the ECOOP 2002 Poster Session. Posters provide an easy and informal means for presenting ongoing research to the ECOOP audience. At ECOOP 2002, eight high-quality posters were on display during the conference period. The main topics addressed by the posters were dynamic inheritance in Java, self-optimization of software applications, finding and correcting performance issues during as well as after development, separation of computation and coordination, providing tools to extend the functionality of Java, checking of UML models, and assembling software systems from distributed heterogeneous components while addressing functional and non-functional requirements such as Quality of Service.
1. Introduction
Posters provide an easy and informal means for presenting ongoing research to the ECOOP audience. The next sections correspond to the abstracts of the posters accepted at ECOOP 2002. All the abstracts include references with additional information about the posters. The list of references and the complete affiliations of the authors can be found at the end of the report.
2. Component Redundancy for Adaptive Software Applications (A. Diaconescu, J. Murphy)
The growing complexity of computer systems has led to more complicated and expensive system design, testing and management processes, and to decreased system flexibility. In recent years, research initiatives have started to address these issues as a great challenge of modern software engineering and, in general, of the whole IT industry. Self-evolving and self-adaptive system technologies, autonomic computing and redundancy as a basis for system robustness are some of these initiatives. With respect to this, we are trying to address the inter-related issues of self-optimisation and self-'healing' of software applications. We focus on the component-based software development (CBSD) approach as a new solution that promises to increase the reliability, maintainability and overall quality of large-scale, complex software applications. Our goal is to propose a new component technology that adds on the existing component technologies (e.g. EJB, .Net, or CCM) to support new functionalities
such as adaptability, self-optimisation, or self-'healing'. We concentrate our solution around the concept of (software) component redundancy. By this concept we understand that a number of component implementation versions providing the same services are available at run-time. In case one component version instance fails, or performs poorly in certain conditions, it can be replaced with another component version that offers equivalent functionality. The component technology must accommodate the presence of the redundant component implementation versions and be able to use them to achieve optimal performance and robustness. As a new requirement on component implementations, each such component has to support and provide a formal description of itself. This description includes component functionality information and performance-related parameters corresponding to various environmental conditions (e.g. available resources and number of concurrent client requests). We identified the following entities needed for the utilisation and management of redundant components:
– Application Monitor - monitors and evaluates the performance of active component variants. Identifies 'problem' component(s) in application transaction chains.
– Environment Monitor - keeps track of the currently available resources (e.g. memory, storage, communications bandwidth, or processing) and the number of client requests.
– Component Evaluator - determines the optimal component version, based on component descriptions, current environmental conditions and decision policies.
– Component Swapping Mechanism - swaps instances of different component versions, while preserving state and reference consistency.
These entities operate at run-time in an automated, feedback-loop manner. The application performance is monitored and evaluated, the optimal component version is identified and activated for each component, and the resulting application is monitored and evaluated in turn. The aforementioned entities are part of the component technology we propose. Therefore, they do not have to be (re-)implemented for each component-based software application that uses this component technology. Application servers already provide some common services such as security or transactions. The component-redundancy-based optimisation and adaptability would be another such service. We intend to concentrate on specifying the formal component description, the Component Evaluator, and the Component Swapping Mechanism, and to identify existing application and environment monitors. Our prime goal will be to demonstrate our approach for an existing component technology (e.g. EJB) by integrating the aforementioned entities with an open-source application server and simulating variations in environmental conditions. Additional information about this poster can be found in [1].
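As a rough illustration of how the feedback loop described above could be wired together, here is a minimal Java sketch; all interface and method names are ours, purely illustrative, and not taken from the poster.

    // Illustrative interfaces for the four entities (names are ours).
    interface ApplicationMonitor { double responseTime(String componentId); }
    interface EnvironmentMonitor { int concurrentRequests(); }
    interface ComponentEvaluator {
        // picks a version from the redundant set, given current conditions
        String optimalVersion(String componentId, double respTime, int load);
    }
    interface ComponentSwapper {
        // replaces the active instance, preserving state and references
        void swap(String componentId, String newVersionId);
    }

    // One iteration of the automated feedback loop.
    class RedundancyManager {
        void adapt(String componentId, ApplicationMonitor am,
                   EnvironmentMonitor em, ComponentEvaluator ev,
                   ComponentSwapper sw) {
            String best = ev.optimalVersion(componentId,
                    am.responseTime(componentId), em.concurrentRequests());
            sw.swap(componentId, best);
        }
    }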
3. Understanding Performance Issues in Component-Oriented Distributed Applications: The COMPAS Framework (A. Mos, J. Murphy)
We propose the Component Performance Assurance Solutions (COMPAS) framework, which can help developers of distributed component-oriented applications find and correct performance issues during as well as after development. The motivation for our work on this framework was derived from the following considerations:
– Performance can be critical for large-scale component-oriented applications.
– A poor architecture, a bad choice of Commercial-Off-The-Shelf (COTS) components, or a combination of both can prevent achieving the application performance goals.
– Performance problems are more often caused by bad design than by bad implementations.
– Current performance modelling and prediction techniques mainly require developers to build performance models based on assumptions about the run-time characteristics of the software application, which is often difficult to do, especially for complex middleware-based applications.
– Often, performance is “a function of the frequency and nature of intercomponent communication, in addition to the performance characteristics of the components themselves” [P.C. Clements].
To address these issues, the presented framework is structured into three main functional parts, or modules, that are interrelated:
– Monitoring: obtains real-time performance information from a running application without interfering with the application code or the application run-time infrastructure (i.e. the application server implementation). We are currently concerned with monitoring of EJB systems only.
– Modelling: creates UML models of the target application using information from the monitoring module. The models are augmented with performance indicators and can be presented at different abstraction levels to improve the understanding of the application from a performance perspective.
– Performance Prediction: the generated models of the application are simulated with different workloads (e.g. corresponding to different business scenarios); simulation results can be used to identify design problems or poorly performing COTS components.
There is a logical feedback loop connecting the monitoring and modelling modules. It refines the monitoring process by focusing the instrumentation on those parts of the system where the performance problems originate. The intent of the framework presented is not to suggest a development process that prevents the occurrence of performance issues in the design, but rather to enable early discovery of such issues and to suggest corrections.
Models are represented in UML, with which many enterprise-scale application developers are familiar. The use of Model Driven Architecture and Enterprise Distributed Object Computing (EDOC) concepts facilitates navigation between the different abstraction layers. The top-level models are represented using a technology-independent profile, the Enterprise Collaboration Architecture from EDOC, in order to benefit from a standardized representation of business modelling concepts. Lower-level models are represented using UML profiles, such as the UML Profile for EJB, which provide the means to illustrate technology-specific details. Regardless of the level of abstraction, each model is augmented with performance information presented using the UML Profile for Schedulability, Performance, and Time Specification. The Performance Prediction Module uses runnable versions of the generated models and simulates them with different workloads as inputs, displaying performance information in the same manner as in the modelling phase. Additional information about this poster can be found in [2].
4. Design Compiler and Optimiser (L. Gabor, J. Murphy)

Many software systems are not designed for performance. Very often there are flaws in the system that appear late in its life cycle. Even though profiling and monitoring tools can detect various problems, they do not provide a solution; it takes a very experienced designer or developer to deal with such problems efficiently. Proven ingredients of good design are design patterns, refactorings, and antipatterns. Our framework provides a way to automatically correct design problems, preserving the functionality of the system while improving its performance. The approach is to provide a system of patterns, antipatterns, and refactorings that supplies the instruments for automating design optimisation, similar in spirit to the way a compiler optimises its output. The tool uses these instruments mainly to change the design of a model. Another task is model comprehension: the tool uses patterns to understand how a system is modelled, divides the system into subsystems and components to make it easier to process, and gathers information about participants in existing patterns. An important feature of the process is its non-intrusive approach to redesign: the functionality of the system is maintained. Each pattern that solves a problem is defined as a sequence of refactorings and/or has associated aspect-oriented strategies. Further aspects are reuse of experience and interoperability. Reuse of experience manifests itself in finding similarities between different models, in order to better recognize patterns implemented in a new model; additional problems can be detected by finding similar scenarios between two models. Also, since the context a pattern addresses is usually not defined comprehensively or completely, the tool can learn new situations in which a pattern may be used. Interoperability is addressed by the use of the UML and MOF standards for representing knowledge.
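The kernel of the automation can be pictured as follows. This is a minimal sketch, assuming a trivial model representation; none of the types below come from the poster itself.

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Stand-in for a UML/MOF model; in practice this would be a full repository.
record Model(String description) {}

// A behaviour-preserving model transformation.
interface Refactoring extends UnaryOperator<Model> {}

// A corrective pattern is a named sequence of refactorings; applying the
// pattern applies its refactorings in order, changing the design while
// leaving the system's functionality intact.
record CorrectivePattern(String name, List<Refactoring> steps) {
    Model applyTo(Model m) {
        for (Refactoring step : steps) {
            m = step.apply(m);
        }
        return m;
    }
}
```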
In conclusion, the main targets are the system of patterns and the expert system that has the ability to compare models, find similarities, and localise the context of a pattern or of a performance problem. As stated in the POSA books, a system of patterns should comprise a sufficient base of patterns, describe all its patterns uniformly, expose the various relations between patterns, organise its constituent patterns, support the construction of software systems, and support its own evolution. Additional information about this poster can be found in [3] and [4].
5. Enforcing Business Policies Through Automated System Reconfiguration (L. Andrade, J.L. Fiadeiro, M. Wermelinger, G. Koutsoukos, J. Gouveia)

The engineering of business systems is under increasing pressure to come up with software solutions that allow companies to face very volatile and turbulent environments (as in the telecommunications domain). This means that the complexity of software has definitely shifted from construction to evolution, and that new methods and technologies are required. Most often, the changes that occur in a business are not at the level of the components that model business entities, but at the level of the business rules that regulate the interactions between those entities. Therefore, we believe that successful methodologies and technologies will have to provide abstractions that reflect the architecture of such systems by supporting a clear separation between computation, as performed by the core business components, and coordination, as prescribed by business rules. This separation should help in localising change in the system, both in terms of identifying what needs to be changed and in circumscribing the effects of those changes. In our opinion, the lack of abstractions supporting the modelling of interactions and architectures explains why component-based and object-oriented approaches have not been able to deliver their full promise regarding system evolution. Usually, interactions are coded in the way messages are passed, features are called, and objects are composed, leading to intricate spaghetti-like structures that are difficult to understand, let alone change. Moreover, new behaviour is often introduced through new subclasses which do not derive from the "logic" of the business domain, widening the gap between specification and design. The approach we have been developing builds on previous work on coordination models and languages, software architecture, and parallel program design languages. Instead of delegation we use explicit architectural connectors that encapsulate coordination aspects: this makes a clear separation between computations and interactions and externalises the architecture of the system. Instead of subclassing we advocate superposition as a structuring principle: interactions are superposed on components in a non-intrusive and incremental way, allowing evolution through reconfiguration, even at run-time.
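As a loose, plain-Java illustration of superposition, the fragment below layers a business rule on an existing component through an explicit connector, without touching the component itself. The Account class and the overdraft rule are invented examples; they stand in for the authors' coordination rules, which are not shown in this summary.

```java
// A core business component: pure computation, no business rules inside.
class Account {
    private double balance;
    Account(double initial) { this.balance = initial; }
    double balance() { return balance; }
    void withdraw(double amount) { balance -= amount; }
}

// An architectural connector superposed on the component: it encapsulates
// the coordination aspect (here, a no-overdraft business rule) and can be
// plugged or unplugged by re-routing calls, leaving Account untouched.
class NoOverdraftRule {
    private final Account account;
    NoOverdraftRule(Account account) { this.account = account; }

    void withdraw(double amount) {
        // Precondition of the coordination rule.
        if (account.balance() >= amount) {
            account.withdraw(amount);
        } else {
            throw new IllegalStateException("withdrawal refused by business rule");
        }
    }
}
```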
The main advantages of our approach are adequacy and flexibility. The former is achieved by having a strict separation of computation, coordination, and configuration, with one primitive for each concept, stating clearly the preconditions of each coordination and reconfiguration rule. As for flexibility, interactions among components can easily be altered at run-time through the (un)plugging of coordination rules, and it is possible to state exactly which coordination rules are in effect for which components, and which configuration policies apply to which parts of the system. Our approach covers the different phases of software development; more details are provided by the publications available at the ATX website and by tutorial 11 at this conference. Additional information about this poster can be found in [5].
6. Extensible Java Pre-processor EPP (Y. Ichisugi)

We are developing an extensible Java pre-processor, EPP. EPP is a framework for building Java source-code processing systems. The behavior of EPP can be extended by adding extension modules called EPP plug-ins. EPP has an extensible recursive-descent parser and an extensible type-checking mechanism. A great variety of new language features can easily be introduced by implementing macro-expansion functions for new language constructs, and independently developed plug-ins can be used in a program simultaneously. EPP can also be used as a platform for source-code processing systems such as metrics tools and refactoring tools. A wide variety of language extensions have been implemented, including a data-parallel language, thread migration, parameterized types, a metrics tool, and the difference-based module mechanism. (The paper describing difference-based modules was accepted at ECOOP 2002.) EPP plug-ins are written in an extended Java language, and the extensions for EPP plug-ins themselves are implemented using EPP. These language extensions support a kind of mixin-based programming, which enhances the extensibility and composability of applications. In addition, some useful features such as symbols and backquote macros, as in Lisp, have been introduced into the Java language in order to support easy implementation of macros (an illustrative expansion is sketched at the end of this summary). The most important application of EPP is the MixJuice language, an enhancement of Java with the difference-based module mechanism. The module mechanism enables separation of crosscutting concerns. We are currently rewriting EPP itself in MixJuice, in order to enhance extensibility, performance, and safety. EPP is distributed from its web page together with source code and sample plug-ins. Additional information about this poster can be found in [6].
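As promised above, here is the kind of rewriting a macro-expansion function might perform. The 'forall' construct and its expansion are invented for illustration; EPP's actual plug-in API is not shown in this summary.

```java
// Source written against a hypothetical plug-in:
//
//     forall (s in names) { System.out.println(s); }
//
// might be macro-expanded by the plug-in into plain Java such as:
import java.util.Iterator;
import java.util.List;

class ForallExpansion {
    static void printAll(List<String> names) {
        for (Iterator<String> it = names.iterator(); it.hasNext();) {
            String s = it.next();
            System.out.println(s);
        }
    }

    public static void main(String[] args) {
        printAll(List.of("epp", "plug-in", "macro"));
    }
}
```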
7. A Framework for Checking UML Models - The UBB OCL Evaluator (D. Chiorean, A. Cârcu, M. Pasca, C. Botiza, H. Chiorean, S. Moldovan)

The main objective of the checking process is obtaining correct UML models. This implies:
– reduced time and costs for system development;
– better system quality.

UML model correctness = completeness + correctness (against the UML Specification). A precondition is a correct and complete set of well-formedness rules (WFR).

State of the art: today, checking of UML models is done using:
– checking operations implemented in UML CASE tools;
– scripts;
– other add-in tools (like Rose Checker).

Drawbacks of the above techniques:
– Only a part of the WFR are implemented.
– The equivalence between the WFR and their implementation has to be proved.
– Each CASE tool has its own repository interface and script language.
– A part of the WFR cannot be implemented due to the lack of information.

The technique used by the OCL Evaluator:
– UML models are checked by means of the WFR.
– The UML models are expressed in XMI 1.1 format.
– The user has full access to the metamodel information by means of the UML additional operations (AO).
• Consequently, the user can extend the checks at different profile levels (including checks specific to the target language).
• All the rules are expressed at the metamodel (M2) level, so the constraints can be fully evaluated; apart from the UML model, no additional information is necessary. The rules are stored in a text file, conforming to the OCL standard.
• The OCL support is fully compliant with the OCL 1.4 Specification.
• The BCR specified by the user at the M1 level can also be evaluated, provided that all the required information is available.

Results obtained: using the NEPTUNE OCL Evaluator, most of the UML 1.4 WFR were evaluated. Apart from the errors discovered in similar projects (see http://www.db.informatik.uni-bremen.de/projects/USE), a number of conceptual errors have been identified in the AO and WFR specifications.
The main conclusions are:
– The WFR have to be clearly explained in natural language and, if needed, exemplified using pieces of models. Providing examples of correct models and of models breaking the WFR gives the user the possibility to better understand the meaning and importance of each rule.
– The UML WFR must not forbid constructions that are legal in object-oriented programming languages.
– The UML metamodel has a pure object-oriented architecture. Given that the AO and the WFR specify the behavior of the UML metamodel classes, their specification has to satisfy the rules specific to OOP and to the design- and programming-by-contract domains (the naming rules, the redefinition rules, the rules adopted for resolving ambiguities in multiple inheritance, etc.).
– The AO and WFR specifications admit many solutions; it is very important to find the simplest and clearest one.
– We are now trying to identify all the errors in the AO and the WFR and to propose solutions for them.

Topics related to this poster are described in [16], [17], [13], [12], [18] and [7]. Additional information about this poster can be found in [8].
8. UniFrame: Framework for Seamless Integration of Heterogeneous Distributed Software Components (B.R. Bryant, R.R. Raje, A.M. Olson, G.J. Brahnmath, Z. Huang, C. Sun, N.N. Siram, C.C. Burt, F. Cao, C. Yang, W. Zhao, M. Auguston)

A framework is proposed for assembling software systems from distributed heterogeneous components. For the successful deployment of such a software system, it is necessary that its realization meet not only the functional requirements but also non-functional requirements such as Quality of Service (QoS) criteria. The approach involves:
– The creation of a meta-model for components, called the Unified Meta Model (UMM), and an associated hierarchical format for indicating the contracts and constraints of the components.
– Resource-discovery components, called "headhunters", that locate components according to a software developer's request (a sketch of this idea follows the list).
– Automatic generation of glue and wrappers, based on a designer's specifications, for achieving interoperability.
– Guidelines for specifying and verifying the quality of components and their compositions.
– A formal mechanism for precisely describing the meta-model.
– A methodology for component-based software design.
– Validation of this framework by creating proof-of-concept prototypes.
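A rough sketch of the headhunter idea, under the assumption that components advertise their services together with QoS figures; all types and the matching policy below are invented for illustration and are not UniFrame's actual design.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical advertisement a component registers with a headhunter:
// the service it offers and a QoS figure (worst-case latency).
record ComponentAd(String service, double maxLatencyMs) {}

class Headhunter {
    private final List<ComponentAd> registry;
    Headhunter(List<ComponentAd> registry) { this.registry = registry; }

    // Locate a component that offers the requested service and whose
    // advertised QoS satisfies the developer's bound.
    Optional<ComponentAd> locate(String service, double latencyBoundMs) {
        return registry.stream()
                .filter(ad -> ad.service().equals(service))
                .filter(ad -> ad.maxLatencyMs() <= latencyBoundMs)
                .findFirst();
    }
}
```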
A formal specification based on Two-Level Grammar is used to represent these notions in a tightly integrated way so that QoS becomes a part of generative domain models. Additional information about this poster can be found in [9].
9. Darwin and Lava - Object-Based Dynamic Inheritance in Java (G. Kniesel)

The traditional notion of inheritance, which dates back to Simula 1967 and Smalltalk 78, has often been criticized as being too rigid to express dynamic evolution of structure and behaviour and too coarse-grained to express object-specific sharing of state and behaviour. A more expressive alternative, object-based dynamic inheritance, also known as delegation, was proposed in 1986 by Henry Lieberman. However, it has not yet made its way into mainstream object-oriented languages, on the grounds of its assumed inefficiency and incompatibility with static typing. The Darwin model and its proof-of-concept implementation, LAVA, an extension of Java, refute these assumptions [15]. They support statically type-safe delegation, and the new LAVA implementation shows that the code for dynamic inheritance generated by a compiler need not be less efficient than manual encoding of the same functionality. In LAVA, object-based inheritance can be specified simply by adding the keyword delegatee to a variable declaration, e.g. "delegatee FinancialComponent parent". Objects referenced by such variables are called parents. The effect of inheriting methods and state is achieved by automatically forwarding locally undefined method and variable accesses to parents. When an applicable method is found in a parent, it is executed after binding this to the initial message receiver. This binding of this is the defining characteristic of delegation: it lets methods of delegating objects override methods of their parents, just like subclass methods override superclass methods in class-based languages. In LAVA, determination of the "inherited" code and of part of the inherited interface can take place at object-instantiation time or can be fully dynamic. In the first case we talk about "fixed" delegation and use the usual Java keyword final to indicate that after initialisation parent objects cannot be exchanged. Fixed delegation is particularly interesting because it can be fully unanticipated: objects instantiated from any class can be used as parents. The implementation of parent classes does not need to be aware of delegation and does not need to include any hooks to enable it, so delegation can be used as a mechanism for adapting existing objects to unanticipated changes in their environment [14]. LAVA shows that fully dynamic ("mutable") delegation has to be anticipated in order to make it compatible with static typing. The main idea is that classes whose subclass instances may be used as dynamically exchangeable parents must be annotated with a keyword that indicates this intended use.
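Building on the FinancialComponent example quoted above, the fragment below sketches what fixed delegation could look like in LAVA's extended Java syntax. Only the delegatee and final keywords come from the description above; the classes and methods are invented for illustration, and the code is LAVA, not plain Java.

```java
class FinancialComponent {
    double balance = 100.0;
    double balance() { return balance; }
    boolean mayWithdraw(double amount) { return amount <= balance(); }
}

class AuditedAccount {
    // Fixed delegation: locally undefined method and variable accesses are
    // forwarded to 'parent'; 'final' means the parent cannot be exchanged
    // after initialisation.
    final delegatee FinancialComponent parent;

    AuditedAccount(FinancialComponent parent) { this.parent = parent; }

    // Overrides the parent's method: because 'this' stays bound to the
    // initial receiver during delegation, even parent methods that invoke
    // mayWithdraw will execute this definition.
    boolean mayWithdraw(double amount) {
        System.out.println("audit: withdrawal of " + amount + " requested");
        return amount <= balance();   // balance() is delegated to parent
    }
}
```

Note that, per the description above, FinancialComponent needs no hooks and no awareness of delegation: under fixed delegation, objects of any existing class can serve as parents.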
The poster motivates the need for object-based and dynamic inheritance, presents the contributions of LAVA and DARWIN in this field, describes the introduced language extensions and explains some of the main problems encountered and how they have been solved. Additional information about this poster can be found in [10]. The personal homepage of the author is [11].
References

[1] http://www.eeng.dcu.ie/~diacones/workCurrent.html
[2] http://www.eeng.dcu.ie/~mosa/projects/compas.html
[3] http://www.eeng.dcu.ie/~gaborl/unofficial/gabor-it&t.pdf
[4] http://www.eeng.dcu.ie/~gaborl/unofficial/gabor-ecoop-2002.examples.src.zip
[5] http://www.atxsoftware.com/agility.html
[6] http://staff.aist.go.jp/y-ichisugi/epp/
[7] http://dresden-ocl.sourceforge.net/index.html. Most of this research work has been done in the framework of the NEPTUNE IST 1999-20017 European research project.
[8] http://lci.cs.ubbcluj.ro
[9] http://www.cs.iupui.edu/uniFrame
[10] http://javalab.cs.uni-bonn.de/research/darwin/
[11] http://www.cs.uni-bonn.de/~gk/
[12] UML 1.4 Final Specification, January 2002. http://uml.sh.com
[13] Dan Chiorean. Using OCL beyond specification. In: Practical UML-Based Rigorous Development Methods - Countering or Integrating the eXtremists, 7:57-69, 2001.
[14] Günter Kniesel. Type-safe delegation for run-time component adaptation. In: Rachid Guerraoui (ed.), ECOOP'99 Conference Proceedings, Springer-Verlag, LNCS 1628, pages 351-366, 1999.
[15] Günter Kniesel. Darwin - Dynamic Object-Based Inheritance with Subtyping. PhD thesis, CS Dept. III, University of Bonn, Germany, July 2000.
[16] Michael Moors. Consistency checking. Rose Architect, Spring Issue, April 2000. http://www.therationaledge.com/rosearchitect/mag/index.html
[17] Mark Richters and Martin Gogolla. Validating UML models and OCL constraints. In: Proceedings 3rd International Conference on the Unified Modeling Language (UML), Springer-Verlag, 2000.
[18] J. Warmer and A. Kleppe. The Object Constraint Language. Addison-Wesley, 1999.
Annex 1: List of Authors

– (Andrade, L.) Luís Andrade. ATX Software SA, Linda-a-Velha, Portugal. Oblog Software SA, Linda-a-Velha, Portugal ([email protected]).
– (Auguston, M.) Mikhail Auguston. Department of Computer Science, New Mexico State University, USA ([email protected]).
– (Botiza, C.) Cristian Botiza. "Babes-Bolyai" University - Computer Science Research Laboratory, Romania.
– (Brahnmath, G.J.) Girish J. Brahnmath. Department of Computer and Information Science, Indiana University Purdue University Indianapolis, USA ([email protected]).
– (Bryant, B.R.) Barrett R. Bryant. Department of Computer and Information Sciences, University of Alabama at Birmingham, USA ([email protected]).
– (Burt, C.C.) Carol C. Burt. Department of Computer and Information Sciences, University of Alabama at Birmingham, USA ([email protected]).
– (Cao, F.) Fei Cao. Department of Computer and Information Sciences, University of Alabama at Birmingham, USA ([email protected]).
– (Cârcu, A.) Adrian Cârcu. "Babes-Bolyai" University - Computer Science Research Laboratory, Romania.
– (Chiorean, D.) Dan Chiorean. "Babes-Bolyai" University - Computer Science Research Laboratory, Romania ([email protected]), ([email protected]).
– (Chiorean, H.) Horia Chiorean. "Babes-Bolyai" University - Computer Science Research Laboratory, Romania.
– (Ciupa, I.) Ilinca Ciupa. "Babes-Bolyai" University - Computer Science Research Laboratory, Romania.
– (Diaconescu, A.) Ada Diaconescu. Performance Engineering Laboratory, Dublin City University, Dublin, Ireland ([email protected]).
– (Fiadeiro, J.L.) José Luiz Fiadeiro. ATX Software SA, Linda-a-Velha, Portugal. Dep. de Informática, Fac. de Ciências, Univ. de Lisboa, Portugal (jose@fiadeiro.org).
– (Gabor, L.) Lucian Gabor. Dublin City University, Dublin, Ireland ([email protected]).
– (Gouveia, J.) João Gouveia. Oblog Software SA, Linda-a-Velha, Portugal ([email protected]).
– (Huang, Z.) Zhisheng Huang. Department of Computer and Information Science, Indiana University Purdue University Indianapolis, USA ([email protected]).
– (Ichisugi, Y.) Yuuji Ichisugi. Information Technology Institute, National Institute of Advanced Industrial Science and Technology (AIST), Japan ([email protected]).
– (Kniesel, G.) Günter Kniesel. Computer Science Department III, University of Bonn, Germany ([email protected]).
– (Koutsoukos, G.) Georgios Koutsoukos. Oblog Software SA, Linda-a-Velha, Portugal ([email protected]).
– (Moldovan, S.) Sorin Moldovan. "Babes-Bolyai" University - Computer Science Research Laboratory, Romania.
– (Mos, A.) Adrian Mos. Performance Engineering Laboratory, Dublin City University, Dublin, Ireland ([email protected]).
– (Murphy, J.) John Murphy. Performance Engineering Laboratory, Dublin City University, Dublin, Ireland ([email protected]).
– (Olson, A.M.) Andrew M. Olson. Department of Computer and Information Science, Indiana University Purdue University Indianapolis, USA ([email protected]).
– (Pasca, M.) Mihai Pasca. "Babes-Bolyai" University - Computer Science Research Laboratory, Romania.
– (Raje, R.R.) Rajeev R. Raje. Department of Computer and Information Science, Indiana University Purdue University Indianapolis, USA ([email protected]).
– (Siram, N.N.) Nanditha N. Siram. Department of Computer and Information Science, Indiana University Purdue University Indianapolis, USA ([email protected]).
– (Sun, C.) Changlin Sun. Department of Computer and Information Science, Indiana University Purdue University Indianapolis, USA ([email protected]).
– (Wermelinger, M.) Michel Wermelinger. ATX Software SA, Linda-a-Velha, Portugal. Dep. de Informática, Univ. Nova de Lisboa, Caparica, Portugal ([email protected]).
– (Yang, C.) Chunmin Yang. Department of Computer and Information Sciences, University of Alabama at Birmingham, USA.
– (Zhao, W.) Wei Zhao. Department of Computer and Information Sciences, University of Alabama at Birmingham, USA ([email protected]).
Author Index

Alvarez, Dario, 174
Araújo, João, 184
Arévalo, Gabriela, 117
Beugnard, Antoine, 79
Black, Andrew, 117
Börstler, Jürgen, 30
Bosch, Jan, 70
Brito e Abreu, Fernando, 147
Bruce, Kim B., 30
Bryce, Ciarán, 192
Buckley, Jim, 92
Clauß, Matthias, 135
Clemente, Pedro J., 44
Constanza, Pascal, 197
Crespo, Yania, 117
Czajkowski, Grzegorz, 1
Czarnecki, Krzysztof, 15
D'Hondt, Maja, 160
Dao, Michel, 117
Davis, Kei, 154
Devos, Martine, 197
Drossopoulou, Sophia, 203
Duchien, Laurence, 79
Eisenbach, Susan, 203
Ernst, Erik, 117
France, Robert, 184
Gal, Andreas, 174
Genssler, Thomas, 107
Grogono, Peter, 117
Huchard, Marianne, 117
Jul, Eric, 79
Kniesel, Günter, 92
Leavens, Gary T., 203
Lumpe, Markus, 107
Mens, Kim, 160
Mens, Tom, 92
Meuter, Wolfgang De, 197
Michiels, Isabel, 30
Murillo, Juan Manuel, 211
Noppen, Joost, 92
Olsina, Luis, 55
Østerbye, Kasper, 15
Paesschen, Ellen Van, 160
Pastor, Oscar, 55
Pérez, Miguel A., 44
Piattini, Mario, 147
Poels, Geert, 147
Poetzsch-Heffter, Arnd, 203
Poll, Erik, 203
Pulvermüller, Elke, 135
Reussner, Ralf H., 135
Rossi, Gustavo, 55
Sadou, Salah, 79
Sahraoui, Houari A., 147
Sakkinen, Markku, 117
Sánchez, Fernando, 211
Schneider, Jean-Guy, 107
Schönhage, Bastiaan, 107
Schwabe, Daniel, 55
Smaragdakis, Yannis, 154
Speck, Andreas, 135
Spinczyk, Olaf, 174
Straeten, Ragnhild Van Der, 135
Striegnitz, Jörg, 154
Szyperski, Clemens, 70
Thomas, Dave, 197
Toval, Ambrosio, 184
Vitek, Jan, 1
Völter, Markus, 15
Weck, Wolfgang, 70
Whittle, Jonathan, 184