Object-Oriented Technology. ECOOP '98 Workshop Reader: ECOOP'98 Workshop, Demos, and Posters Brussels, Belgium, July 20-24, 1998 Proceedings (Lecture Notes in Computer Science, 1543) 3540654607, 9783540654605


Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis and J. van Leeuwen

1543

Springer: Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Singapore Tokyo

Serge Demeyer Jan Bosch (Eds.)

Object-Oriented Technology ECOOP ’98 Workshop Reader ECOOP ’98 Workshops, Demos, and Posters Brussels, Belgium, July 20-24, 1998 Proceedings


Series Editors
Gerhard Goos, Karlsruhe University, Germany
Juris Hartmanis, Cornell University, NY, USA
Jan van Leeuwen, Utrecht University, The Netherlands

Volume Editors
Serge Demeyer, University of Berne, Neubrückstr. 10, CH-3012 Berne, Switzerland. E-mail: [email protected]
Jan Bosch, University of Karlskrona/Ronneby, Softcenter, S-372 25 Ronneby, Sweden. E-mail: [email protected]

Cataloging-in-Publication data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Object-oriented technology : workshop reader, workshops, demos, and posters / ECOOP ’98, Brussels, Belgium, July 20 - 24, 1998 / Serge Demeyer ; Jan Bosch (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Singapore ; Tokyo : Springer, 1998 (Lecture notes in computer science ; Vol. 1543) ISBN 3-540-65460-7

CR Subject Classification (1998): D.1-3, H.2, E.3, C.2, K.4.3, K.6
ISSN 0302-9743
ISBN 3-540-65460-7 Springer-Verlag Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law.
© Springer-Verlag Berlin Heidelberg 1998
Printed in Germany
Typesetting: Camera-ready by author
SPIN 10693041 06/3142 – 5 4 3 2 1 0

Printed on acid-free paper

Preface

At the time of writing (mid-October 1998) we can look back at what has been a very successful ECOOP'98. Despite the time of the year – in the middle of what is traditionally regarded as a holiday period – ECOOP'98 was a record breaker in terms of the number of participants. Over 700 people found their way to the campus of the Brussels Free University to participate in a wide range of activities. This 3rd ECOOP workshop reader reports on many of these activities. It contains a careful selection of the input to, and a cautious summary of the outcome of, the numerous discussions that took place during the workshops, demonstrations and posters. As such, this book serves as an excellent snapshot of the state of the art in the field of object-oriented programming.

About the diversity of the submissions

A workshop reader is, by its very nature, quite diverse in the topics covered as well as in the form of its contributions. This reader is no exception to the rule: as editors, we have given the respective organizers much freedom in their choice of presentation, because we feel form follows content. This explains the diversity in the types of reports as well as in their layout.

Acknowledgments

An incredible number of people have been involved in creating this book, in particular all the authors and the individual editors of each chapter. As editors of the workshop reader itself, we merely combined their contributions, and we hereby express our gratitude to everyone who has been involved. It was hard work to get everything printed in the same calendar year as the ECOOP conference itself, but thanks to everybody's willing efforts we have met our deadlines. Enjoy reading!

University of Berne
University of Karlskrona/Ronneby
October 1998

Serge Demeyer Jan Bosch

Table of Contents

I. The 8th Workshop for PhD Students in Object-Oriented Systems
Erik Ernst, Frank Gerhardt, Luigi Benedicenti 1

Framework Design and Documentation (Ákos Frohner)
Reengineering with the CORBA Meta Object Facility (Frank Gerhardt)
Enforcing Effective Hard Real-Time Constraints in Object-Oriented Control Systems (Patrik Persson)
Online Monitoring in Distributed Object-Oriented Client/Server Environments (Günther Rackl)
A Test Bench for Software (Moritz Schnizler)
Intermodular Slicing of Object-Oriented Programs (Christoph Steindl)
Validation of Real-Time Object-Oriented Applications (Sebastien Gerard)
Parallel Programs Implementing Abstract Data Type Operations: A Case Study (Tamás Kozsik)
A Dynamic Logic Model for the Formal Foundation of Object-Oriented Analysis and Design (Claudia Pons)
A Refinement Approach to Object-Oriented Component Reuse (Winnie Qiu)
A Compositional Approach to Concurrent Object Systems (Xiaogang Zhang)
Component-Based Architectures to Generate Software Components from OO Conceptual Models (Jaime Gomez)
Oberon-D: Adding Database Functionality to an Object-Oriented Development Environment (Markus Knasmüller)
Run-time Reusability in Object-Oriented Schematic Capture (David Parsons)
SADES: a Semi-Autonomous Database Evolution System (Awais Rashid)


Framework Design for Optimization as Applied to Object-Oriented Middleware (Ashish Singhai)
Object-Oriented Control Systems on Standard Hardware (Andreas Speck)
Design of an Object-Oriented Scientific Simulation and Visualization System (Alexandru Telea)
Testing Components Using Protocols (Il-Hyung Cho)
Virtual Types, Propagating and Dynamic Inheritance, and Coarse-Grained Structural Equivalence (Erik Ernst)
On Polymorphic Type Systems for Imperative Programming Languages: An Approach Using Sets of Types and Subprograms (Bernd Holzmüller)
Formal Methods for Component-Based Systems (Rosziati Ibrahim)
Compilation of Source Code into Object-Oriented Patterns (David H. Lorenz)
Integration of Object-Based Knowledge Representation in a Reflexive Object-Oriented Language (Gabriel Pavillet)
Implementing Layered Object-Oriented Designs (Yannis Smaragdakis)
An Evaluation of the Benefits of Object-Oriented Methods in Software Development Processes (Pentti Virtanen)
Process Measuring, Modeling and Understanding (Luigi Benedicenti)
The Contextual Objects Modeling for a Reactive Information System (Birol Berkem)
Experiences in Designing a Spatio-temporal Information System for Marine Coastal Environments Using Object Technology (Anita Jacob)
Facilitating Design Reuse in Object-Oriented Systems Using Design Patterns (Hyoseob Kim)
A Reverse Engineering Methodology for Object-Oriented Systems (Theodoros Lantzos)




The Reliability of Object-Oriented Software Systems (Jan Sabak)
Extending Object-Oriented Development Methodologies to Support Distributed Object Computing (Umit Uzun)

II. Techniques, Tools and Formalisms for Capturing and Assessing the Architectural Quality in Object-Oriented Software
Isabelle Borne, Fernando Brito e Abreu, Wolfgang De Meuter, Galal Hassan Galal 44

A Note on Object-Oriented Software Architecting (Galal Hassan Galal)
COMPARE: A Comprehensive Framework for Architecture Evaluation (Lionel C. Briand, S. Jeromy Carrière, Rick Kazman, Jürgen Wüst)
Experience with the Architecture Quality Assessment of a Rule-Based Object-Oriented System (Jeff L. Burgett, Anthony Lange)
Evaluating the Modularity of Model-Driven Object-Oriented Software Architectures (Geert Poels)
Assessing the Evolvability of Software Architectures (Tom Mens, Kim Mens)
The Influence of Domain-Specific Abstraction on Evolvability of Software Architectures for Information Systems (Jan Verelst)
Object-Oriented Frameworks: Architecture Adaptability (Paolo Predonzani, Giancarlo Succi, Andrea Valerio, Tullio Vernazza)
A Transformational Approach to Structural Design Assessment and Change (Paulo S.C. Alencar, Donald D. Cowan, Jing Dong, Carlos J.P. Lucena)
Reengineering the Modularity of OO Systems (Fernando Brito e Abreu, Gonçalo Pereira, Pedro Sousa)
A Contextual Help System Based on Intelligent Diagnosis Processes Aiming to Design and Maintain Object-Oriented Software Packages (Annya Romanczuk-Réquilé, Cabral Lima, Celso Kaestner, Edson Scalabrin)
Analysis of Overriden Methods to Infer Hot Spots (Serge Demeyer)
Purpose: between types and code (Natalia Romero, María José Presso, Verónica Argañaraz, Gabriel Baum, Máximo Prieto)


Ensuring Object Survival in a Desert (Xavier Alvarez, Gaston Dombiak, Felipe Zak, Máximo Prieto)

III. Experiences in Object-Oriented Re-Engineering
Stéphane Ducasse, Joachim Weisbrod 72

Exploiting Design Heuristics for Automatic Problem Detection (Holger Bär, Oliver Ciupke)
Design Metrics in the Reengineering of Object-Oriented Systems (R. Harrison, S. Counsell, R. Nithi)
Visual Detection of Duplicated Code (Matthias Rieger, Stéphane Ducasse)
Dynamic Type Inference to Support Object-Oriented Reengineering in Smalltalk (Pascal Rapicault, Mireille Blay-Fornarino, Stéphane Ducasse, Anne-Marie Dery)
Understanding Object-Oriented Programs through Declarative Event Analysis (Tamar Richner, Stéphane Ducasse, Roel Wuyts)
Program Restructuring to Introduce Design Patterns (Mel Ó Cinnéide, Paddy Nixon)
Design Patterns as Operators Implemented with Refactorings (Benedikt Schulz, Thomas Genssler)
"Good Enough" Analysis for Refactoring (Don Roberts, John Brant)
An Exchange Model for Reengineering Tools (Sander Tichelaar, Serge Demeyer)
Capturing the Existing OO Design with the ROMEO Method (Theodoros Lantzos, Anthony Bryant, Helen M. Edwards)
Systems Reengineering Patterns (Perdita Stevens, Rob Pooley)
Using Object-Orientation to Improve the Software of the German Shoe Industry (Werner Vieth)
Report of Working Group on Reengineering Patterns (Perdita Stevens)
Report of Working Group on Reengineering Operations (Mel Ó Cinnéide)
Report of Working Group on Dynamic Analysis (Tamar Richner)
Report of Working Group on Metrics/Tools (Steve Counsel)




IV. Object-Oriented Software Architectures
Jan Bosch, Helene Bachatene, Görel Hedin, Kai Koskimies 99

Pattern-Oriented Framework Engineering Using FRED (Markku Hakala, Juha Hautamäki, Jyrki Tuomi, Antti Viljamaa, Jukka Viljamaa)
Exploiting Architecture in Experimental System Development (Klaus Marius Hansen)
Object-Orientation and Software Architecture (Philippe Lalanda, Sophie Cherki)
Semantic Structure: A Basis for Software Architecture (Robb D. Nebbe)
A Java Architecture for Dynamic Object and Framework Customizations (Linda M. Seiter)

V. Third International Workshop on Component-Oriented Programming (WCOP'98)
Jan Bosch, Clemens Szyperski, Wolfgang Weck 130

Type-Safe Delegation for Dynamic Component Adaptation (Günter Kniesel)
Consistent Extension of Components in Presence of Explicit Invariants (Anna Mikhajlova)
Component Composition with Sharing (Geoff Outhred, John Potter)
Late Component Adaptation (Ralph Keller, Urs Hölzle)
Adaptation of Connectors in Software Architectures (Ian Welch, Robert Stroud)
Connecting Incompatible Black-Box Components Using Customizable Adapters (Bülent Küçük, M. Nedim Alpdemir, Richard N. Zobel)
Dynamic Configuration of Distributed Software Components (Eila Niemelä, Juha Marjeta)
Components for Non-Functional Requirements (Bert Robben, Wouter Joosen, Frank Matthijs, Bart Vanhaute, Pierre Verbaeten)
The Operational Aspects of Component Architecture (Mark Lycett, Ray J. Paul)
Architectures for Interoperation between Component Frameworks (Günter Graw, Arnulf Mester)
A Model for Gluing Together (P.S.C. Alencar, D.D. Cowan, C.J.P. Lucena, L.C.M. Nova)


Component Testing: An Extended Abstract (Mark Grossman)
Applying a Domain-Specific Language Approach to Component-Oriented Programming (James Ingham, Malcolm Munro)
The Impact of Large-Scale Component and Framework Application Development on Business (David Helton)
Maintaining a COTS Component-Based Solution Using Traditional Static Analysis Techniques (R. Cherinka, C. Overstreet, J. Ricci, M. Schrank)

VI. Second ECOOP Workshop on Precise Behavioral Semantics (with an Emphasis on OO Business Specifications)
Bernhard Rumpe, Haim Kilov 167

VII. Tools and Environments for Business Rules
Kim Mens, Roel Wuyts, Dirk Bontridder, Alain Grijseels 189

Enriching Constraints and Business Rules in Object-Oriented Analysis Models with Trigger Specifications (Stefan Van Baelen)
Business Rules vs. Database Rules: A Position Statement (Brian Spencer)
Elements Advisor by Neuron Data (Bruno Jouhier, Carlos Serrano-Morale, Eric Kintzer)
Business Rules Layers Between Process and Workflow Modeling: An Object-Oriented Perspective (Gerhard F. Knolmayer)
Business Object Semantics Communication Model in Distributed Environment (Hei-Chia Wang, V. Karakostas)
How Business Rules Should Be Modeled and Implemented in OO (Leo Hermans, Wim van Stokkum)
A Reflective Environment for Configurable Business Rules and Tools (Michel Tilman)

VIII. Object-Oriented Business Process Modelling
Elizabeth A. Kendall (Ed.) 217

Business Process Modeling: Motivation, Requirements, Implementation (Ilia Bider, Maxim Khomyakov)
An Integrated Approach to Object-Oriented Modeling of Business Processes (Markus Podolsky)




Enterprise Modelling (Monique Snoeck, Rakesh Agarwal, Chiranjit Basu)
Requirements Capture Using Goals (Ian F. Alexander)

Contextual Objects or Goal Orientation for Business Process Modeling (Birol Berkem)
Mapping Business Processes to Software Design Artifacts (Pavel Hruby)
Mapping Business Processes to Objects, Components and Frameworks: A Moving Target (Eric Callebaut)
Partitioning Goals with Roles (Elizabeth A. Kendall)

IX. Object-Oriented Product Metrics for Software Quality Assessment
Houari A. Sahraoui 242

Do Metrics Support Framework Development? (Serge Demeyer, Stéphane Ducasse)
Assessment of Large Object-Oriented Software Systems: A Metrics-Based Process (Gerd Köhler, Heinrich Rust, Frank Simon)
Using Object-Oriented Metrics for Automatic Design Flaws Detection in Large Scale Systems (Radu Marinescu)
An OO Framework for Software Measurement and Evaluation (Reiner R. Dumke)
A Product Metrics Tool Integrated into a Software Development Environment (Claus Lewerentz, Frank Simon)
Collecting and Analyzing the MOOD Metrics (Fernando Brito e Abreu, Jean Sebastien Cuche)
An Analytical Evaluation of Static Coupling Measures for Domain Object Classes (Geert Poels)
Impact of Complexity Metrics on Reusability in OO Systems (Yida Mao, Houari A. Sahraoui, Hakim Lounis)
A Formal Analysis of Modularisation and Its Application to Object-Oriented Methods (Adam Batenin)
Software Products Evaluation (Teade Punter)


Is Extension Complexity a Fundamental Software Metric? (E. Kantorowitz)

X. ECOOP Workshop on Distributed Object Security
Christian D. Jensen, George Coulouris, Daniel Hagimont 273

Merging Capabilities with the Object Model of an Object-Oriented Abstract Machine (María Ángeles Díaz Fondón, Darío Álvarez Gutiérrez, Armando García-Mendoza Sánchez, Fernando Álvarez García, Lourdes Tajes Martínez, Juan Manuel Cueva Lovelle)
Mutual Suspicion in a Generic Object-Support System (Christian D. Jensen, Daniel Hagimont)
Towards an Access Control Policy Language for CORBA (Gerald Brose)
Security for Network Places (Tim Kindberg)
Reflective Authorization Systems (Massimo Ancona, Walter Cazzola, Eduardo B. Fernandez)
Dynamic Adaptation of the Security Properties of Applications and Components (Ian Welch, Robert Stroud)
Interoperating between Security Domains (Charles Schmidt, Vipin Swarup)
Delegation-Based Access Control for Intelligent Network Services (Tuomas Aura, Petteri Koponen, Juhana Räsänen)
Secure Communication in Non-uniform Trust Environments (George Coulouris, Jean Dollimore, Marcus Roberts)
Dynamic Access Control for Shared Objects in Groupware Applications (Andrew Rowley)
A Fault-Tolerant Secure CORBA Store Using Fragmentation-Redundancy-Scattering (Cristina Silva, Luís Rodrigues)

XI. 4th ECOOP Workshop on Mobility: Secure Internet Mobile Computations
Leila Ismail, Ciarán Bryce, Jan Vitek 288

Protection in Programming-Language Translations: Mobile Object Systems (Martín Abadi)
D'Agents: Future Security Directions (Robert S. Gray)




A Multi-Level Interface Structure for the Selective Publication of Services in an Open Environment (Jarle Hulaas, Alex Villazón, Jürgen Harms)
A Practical Demonstration of the Effect of Malicious Mobile Agents on CPU Load Balancing (Adam P. Greenaway, Gerard T. McKee)
Role-Based Protection and Delegation for Mobile Object Environments (Nataraj Nagaratnam, Doug Lea)
Coarse-grained Java Security Policies (T. Jensen, D. Le Métayer, T. Thorn)
Secure Recording of Itineraries through Cooperating Agents (Volker Roth)
A Model of Attacks of Malicious Hosts Against Mobile Agents (Fritz Hohl)
Agent Trustworthiness (Lora L. Kassab, Jeffrey Voas)
Protecting the Itinerary of Mobile Agents (Uwe G. Wilhelm, Sebastian Staamann, Levente Buttyán)
Position Paper: Security in Tacoma (Nils P. Sudmann)
Type-Safe Execution of Mobile Agents in Anonymous Networks (Matthew Hennessy, James Riely)
Mobile Computations and Trust (Vipin Swarup)
Case Studies in Security and Resource Management for Mobile Objects (Dejan Milojicic, Gul Agha, Philippe Bernadat, Deepika Chauhan, Shai Guday, Nadeem Jamali, Dan Lambright)

XII. 3rd Workshop on Mobility and Replication
Birger Andersen, Carlos Baquero, Niels C. Juul 307

UbiData: An Adaptable Framework for Information Dissemination to Mobile Users (Ana Paula Afonso, Francisco S. Regateiro, Mário J. Silva)
Twin-Transactions: Delayed Transaction Synchronisation Model (A. Rasheed, A. Zaslavsky)
Partitioning and Assignment of Distributed Object Applications Incorporating Object Replication and Caching (Doug Kimelman, V.T. Rajan, Tova Roth, Mark Wegman)


Open Implementation of a Mobile Communication System (Eddy Truyen, Bert Robben, Peter Kenens, Frank Matthijs, Sam Michiels, Wouter Joosen, Pierre Verbaeten)
Towards a Grand Unified Framework for Mobile Objects (Francisco J. Ballesteros, Fabio Kon, Sergio Arévalo, Roy H. Campbell)
Measuring the Quality of Service of Optimistic Replication (Geoffrey H. Kuenning, Rajive Bagrodia, Richard G. Guy, Gerald J. Popek, Peter Reiher, An-I Wang)
Evaluation Overview of the Replication Methods for High Availability Databases (Lars Frank)
Reflection Based Mobile Replication (Luis Alonso)
Support for Mobility and Replication in the AspectIX Architecture (Martin Geier, Martin Steckermeier, Ulrich Becker, Franz J. Hauck, Erich Meier, Uwe Rastofer)
How to Combine Strong Availability with Weak Replication of Objects? (Alice Bonhomme, Laurent Lefèvre)
Tradeoffs of Distributed Object Models (Franz J. Hauck, Francisco J. Ballesteros)

XIII. Learning and Teaching Objects Successfully
Jürgen Börstler 333

Teaching Concepts in the Object-Oriented Field (Erzsébet Angster)
A Newcomer's Thoughts about Responsibility Distribution (Beáta Kelemen)
An Effective Approach to Learning Object-Oriented Technology (Alejandro Fernández, Gustavo Rossi)
Teaching Objects: The Case for Modelling (Ana Maria D. Moreira)
Involving Learners in Object-Oriented Technology Teaching Process: Five Web-Based Steps for Success (Ahmed Seffah)
How to Teach Object-Oriented Programming to Well-Trained Cobol Programmers (Markus Knasmüller)




XIV. ECOOP'98 Workshop on Reflective Object-Oriented Programming and Systems
Robert Stroud, Stuart P. Mitchell 363

MOPping up Exceptions (Stuart P. Mitchell, A. Burns, A. J. Wellings)
A Metaobject Protocol for Correlate (Bert Robben, Wouter Joosen, Frank Matthijs, Bart Vanhaute, Pierre Verbaeten)
Adaptive Active Object (José L. Contreras, Jean-Louis Sourrouille)

[...] architectures especially designed for framework development and specialization. In addition to being a development tool, FRED introduces a uniform model of software architecture and software development that makes heavy use of generalization of design patterns. FRED is an ongoing project between the departments of Computer Science at the University of Tampere and the University of Helsinki, supported by TEKES (Technology Development Centre Finland) and several Finnish industrial partners.



2 The FRED Model

Both frameworks and applications are software architectures. FRED, as a development environment, is a tool for creating such architectures. In FRED, an architecture is always created based on another architecture or architectures. A typical example is an application that is derived from an application framework.

Designated by the object-oriented domain, each architecture eventually consists of classes and interfaces, which in turn contain fields (the Java synonym for attributes) and methods. Also, the term data type is used to refer to both classes and interfaces.

Data types alone are insufficient to represent architectural constructs when reusability is essentially required. They do not provide enough documentation for the architecture, nor control the specialization of the architecture. To meet these two requirements, a pattern is hereby defined as a description of an arbitrary relationship between a number of classes and interfaces. Patterns range from generic design patterns to domain- and even application-specific patterns. A pattern is an architectural description, but need not be general. In this context, general constructs such as those listed by Gamma et al. [6] are called design patterns. No distinction between patterns on the basis of their generality is made in FRED.

Patterns are used to couple together arbitrary data types that participate in a particular design decision or architectural feature. This kind of coupling of data types provides structural documentation for the architecture. Any data type may participate in more than one pattern, in which case it plays several roles in the architecture.

2.1 Structures

An architecture is a complex construction of patterns, data types and both their static and dynamic relations. Structural elements of an architecture, such as patterns, data types, methods and fields, are called structures. Also, the architecture itself is a structure.

Architectures, patterns and data types are composite structures, which contain other structures. An architecture contains patterns, patterns contain data types, and data types contain methods and fields (leaf structures). Thus a directed acyclic graph can be presented for an architecture. An example hierarchy and the corresponding notation in FRED are shown in figure 1.

[Figure content: a SomeFramework architecture containing the patterns SomePattern and AnotherPattern; the patterns contain the data types SomeClass and AnotherClass, whose methods (someOp, anotherOp, operation) and field (someField) are the leaf structures.]

Figure 1. An example architecture as a directed graph and using FRED tree-like notation.
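To make this composite hierarchy concrete, here is a minimal Java sketch of the containment rules just described; the class names and the code itself are illustrative assumptions, not FRED's actual implementation:

import java.util.ArrayList;
import java.util.List;

// Sketch of FRED's structures: architectures contain patterns, patterns
// contain data types, and data types contain methods and fields (leaves).
abstract class Structure {
    final String name;
    Structure(String name) { this.name = name; }
}

abstract class CompositeStructure extends Structure {
    final List<Structure> children = new ArrayList<>();
    CompositeStructure(String name) { super(name); }
    void add(Structure child) { children.add(child); }
}

class Architecture extends CompositeStructure { Architecture(String n) { super(n); } }
class Pattern extends CompositeStructure { Pattern(String n) { super(n); } }
class DataType extends CompositeStructure { DataType(String n) { super(n); } }
class Method extends Structure { Method(String n) { super(n); } }
class Field extends Structure { Field(String n) { super(n); } }

Because the same DataType object may be added as a child of several Pattern objects, the containment relation forms a directed acyclic graph rather than a tree, which is exactly the situation depicted in figure 1.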


2.2 Templates

All structures may be classified as implementations or templates. An implementation is a structure that is actually implemented in the architecture. In a conventional application, all structures are essentially implementations. A template defines a blueprint of an implementation. Providing a template in an architecture means defining a gap that must be filled in when specializing the architecture.

Templates are structures just like implementations. An architecture template contains patterns, a pattern template contains data types, and data type templates contain methods and fields. Architecture and pattern templates may contain both templates and implementations, but a data type template contains only templates. If a structure contains a template, it is itself a template.

In FRED, each structure is always based on templates, at least on the corresponding metastructure. There is a metastructure for each type of structure.

Templates are used in creating new structures. This is called instantiating the template. The instantiated template is called a model in relation to its instance. The instance can be an implementation or another template. For instantiation purposes, a template provides the following properties:
1. Free-form hyperlinked documentation that guides in creating an instance.
2. Parameterized default implementation that automatically adjusts to the instantiation environment.
3. Tools for instantiating the template.
4. Constraints that all of the template's instances must conform to.

2.3 Patterns Using Templates

In FRED, a pattern is described using templates. A pattern template couples together data type templates and data type implementations. The constraints of the contained templates define the required relationships between collaborating structures. The default implementation makes it easy to instantiate a pattern in a software architecture. In addition, specialized tools may be provided. Instantiating a (design) pattern means binding the domain-specific vocabulary and implementation. Frameworks usually provide only partial implementations for design patterns and leave specific parts to be supplemented by the specializer. In FRED this means providing templates that instantiate the original templates of the pattern. This instantiation chain may be arbitrarily long for any structure. Constraints of a template apply to all following instances in the chain. Thus constraints cumulate, and the set of possible implementations becomes smaller with every instantiation. This implies a kind of inheritance hierarchy for frameworks and design patterns. Layered frameworks are discussed, e.g., by Koskimies and Mössenböck [9].
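The cumulative effect of constraints along an instantiation chain can be illustrated with a small Java sketch; the Template class below and its string-valued "implementations" are simplifications invented for this illustration, not FRED's API:

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Sketch of an instantiation chain: an instance inherits every constraint
// of its model, so the set of acceptable implementations only shrinks.
class Template {
    final Template model;  // null for a metastructure at the chain's root
    final List<Predicate<String>> constraints = new ArrayList<>();

    Template(Template model) { this.model = model; }

    void addConstraint(Predicate<String> c) { constraints.add(c); }

    // A candidate must satisfy the constraints of this template and of
    // every model up the chain, mirroring how constraints cumulate.
    boolean accepts(String candidate) {
        for (Template t = this; t != null; t = t.model) {
            for (Predicate<String> c : t.constraints) {
                if (!c.test(candidate)) return false;
            }
        }
        return true;
    }

    Template instantiate() { return new Template(this); }
}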


3 A Brief Example

An architecture in FRED must be based on another architecture. The FRED environment provides a special architecture called PatternCatalog, which collects arbitrary design patterns by several authors. PatternCatalog can be expanded by individual developers. When the developer begins to implement an architecture based on PatternCatalog, a special OtherClasses pattern is automatically generated for the architecture. Data types that are not involved in any specific pattern will be placed in the OtherClasses pattern. Suppose that the developer creates a class named Transformer in that pattern, but soon realizes that the Singleton design pattern [6] may be used here. PatternCatalog contains a template for that pattern. The developer instantiates that pattern and names

[Figure content: the metastructures MetaArchitecture, MetaPattern and MetaType (with metaMethod and metaField); the PatternCatalog architecture containing SingletonPattern (class Singleton, private constructor, field private static Singleton uniqueInstance, method static Singleton getUniqueInstance()); and a ListBoxFramelet architecture whose TransformerPattern instantiates SingletonPattern (class Transformer, private constructor, field private static Transformer instance, method static Transformer getInstance(), with fromSrcToDest FieldAccessor bindings).]

Figure 2. TransformerPattern is based on SingletonPattern.

the instance as TransformerPattern. The existing Transformer class in the OtherClasses pattern can be associated with the Singleton template within the SingletonPattern pattern, as in figure 2. SingletonPattern can now be used for generating code for the required methods and fields.

If the developer leaves parts of the selected patterns unimplemented, they need to be implemented by the specializer of the architecture. Thus, patterns form an interface between a framework and an application. A detailed example of the specialization process can be found in [7].
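The generated code itself is not reproduced in the paper, but the fields and methods named in figure 2 (instance, getInstance) follow the standard Singleton shape of [6]; applied to Transformer, the generated class would plausibly read as follows, where the method bodies are assumptions:

// Singleton-shaped code suggested by the member names in figure 2.
public class Transformer {
    // the unique instance, created lazily on first access
    private static Transformer instance;

    // a private constructor prevents direct instantiation
    private Transformer() { }

    // the global access point required by the Singleton pattern
    public static Transformer getInstance() {
        if (instance == null) {
            instance = new Transformer();
        }
        return instance;
    }
}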

4 Conclusions and Related Work

Many design pattern tools (see, e.g., [1, 3, 15]) use macro expansion mechanisms to generate implementation code. This implies a design-implementation gap [11]: changing generated code breaks the connection between design patterns and the implementation. We think that a better way is to use an explicit representation of (design) patterns that stays in the background all the way from design to implementation.

Furthermore, mere code generation is not enough. It is essential to be able to combine multiple pattern instances and to annotate existing code with patterns. Our implementation of this role binding functionality is influenced by Kim and Benner's


Pattern-Oriented Environment (POE) [8]. Another similar tool is presented by Florijn, Meijers and van Winsen [5].

Besides supporting framework development, the FRED environment also aids framework users. Instantiating any structure in FRED involves customization of default implementations, within the bounds of the constraints associated with the structure, in a specialized visual or active text [12] editor, guided by the associated documentation. This makes, e.g., deriving an application from a framework a systematic process. Although FRED is especially suited for deriving small frameworks [13], larger architectures are supported by the use of patterns composed of other patterns.

References
1. Alencar P., Cowan D., Lichtner K., Lucena C., Nova L.: Tool Support for Design Patterns. Internet: ftp.csg.uwaterloo.ca/pub/ADV/theory/fmspps.gz
2. Arnold K., Gosling J.: The Java Programming Language, 2nd ed. Addison-Wesley, 1998.
3. Budinsky F., Finnie M., Vlissides J., Yu P.: Automatic Code Generation from Design Patterns. IBM Systems Journal 35, 2, 1996, 151-171.
4. Coplien J., Schmidt D. (eds.): Pattern Languages of Program Design. Addison-Wesley, 1995.
5. Florijn G., Meijers M., van Winsen P.: Tool Support for Object-Oriented Patterns. Proc. ECOOP '97 European Conference on Object-Oriented Programming, Jyväskylä, Finland, June 1997, LNCS 1241, Springer-Verlag, 1997, 472-495.
6. Gamma E., Helm R., Johnson R., Vlissides J.: Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
7. Hakala M., Hautamäki J., Tuomi J., Viljamaa A., Viljamaa J.: Pattern-Oriented Framework Engineering Using FRED. In: OOSA '98, Proceedings of the ECOOP '98 Workshop on Object-Oriented Software Architectures, Research Report 13/98, Department of Computer Science and Business Administration, University of Karlskrona/Ronneby.
8. Kim J., Benner K.: An Experience Using Design Patterns: Lessons Learned and Tool Support. Theory and Practice of Object Systems (TAPOS) 2, 1, 1996, 61-74.
9. Koskimies K., Mössenböck H.: Designing a Framework by Stepwise Generalization. In: Proc. of ESEC '95, LNCS 989, Springer-Verlag, 1995, 479-497.
10. Lewis T. (ed.): Object-Oriented Application Frameworks, Manning Publications Co., 1995.
11. Meijler T., Demeyer S., Engel R.: Making Design Patterns Explicit in FACE: A Framework Adaptive Composition Environment. In: Proc. 6th European Software Engineering Conference, Zurich, Switzerland, September 1997, LNCS 1301, Springer-Verlag, 1997, 94-110.
12. Mössenböck H., Koskimies K.: Active Text for Structuring and Understanding Source Code. Software Practice & Experience 26(7), July 1996, 833-850.
13. Pree W., Koskimies K.: Framelets - Small and Loosely Coupled Frameworks. Manuscript, submitted for publication, 1998.
14. Sun Microsystems Inc.: JavaBeans Documents. Internet: http://java.sun.com/beans/docs/.

15. Wild F.: Instantiating Code Patterns — Patterns Applied to Software Development. Dr. Dobb’s Journal 21, 6, 1996, 72-76.

Exploiting Architecture in Experimental System Development¹

Klaus Marius Hansen
Department of Computer Science, University of Aarhus, Åbogade 34, DK-8200 Aarhus N, Denmark
[email protected]

¹ The work described was made possible by the Danish National Centre for IT-Research (CIT; http://www.cit.dk), research grant COT 74.4.

Abstract. This position paper takes as its point of departure experience obtained during the development of an object-oriented prototype for a global customer service system. The nature of the system development process – many short iterations, shifting requirements, evolution over a long period of time, and many developers working in parallel – forced us to constantly focus on software architecture. Insofar as the project was a success, the problems were resolved in the project context. Nevertheless, the experiences point to the need for tools, techniques and methods for supporting architectural exploitation in experimental system development.

Introduction

Just as software architectures are gaining increasing awareness in industry and academia, experimental system development [1] is gaining interest and importance. Using a mixture of prototyping, object-orientation and active user involvement, these approaches try to overcome limitations of traditional, specification-oriented system development. Prototyping tries to overcome the problems of traditional specification by embodying analysis and design in executable form in order to explore and experiment with future possibilities. Object-orientation secures a real-world reference throughout the whole development process. Using concepts from the application domain ensures flexibility and "correctness" in the context of actual use [2]. Active user involvement, and participatory design in particular, is used for two reasons: (1) (end) users are domain experts and thus have invaluable input to understanding current and design practice, and (2) designers have a moral or ethical responsibility towards (end) users. The project – now known as the Dragon Project – that this position paper reflects upon involved using a rapid, experimental system development approach [3]. In this way the development approaches employed in the project were evolutionary, exploratory, experimental and very rapid. Given this setting – combining well-founded but diverse approaches – special demands are put on the actual implementation activities of the project: The danger



that code-and-hack prototypes will be the result of the development effort is great. We claim that reconciling the software engineering qualities of mature systems with experimental system development requires an explicit focus on software architecture ([4] gives further information about the software engineering experience gained in this project). The remainder of this position paper is devoted to elaborating on this claim.

Software Architecture

It is commonly agreed that the software architecture of a system is concerned with the overall composition and structure of computational elements and their relations. [5] calls the computational elements components and their relations connectors. [6, p.404] lists, among others, the following important issues at the software architecture level: changeability, interoperability, reliability and testability. If we extend this list with human-factor properties such as intelligibility of design and provision for the division of labour, the list sums up the architectural qualities that were recognised during the Dragon Project. The concrete architectural solutions and their evolution will make this explicit.

Architectures in the Dragon Project

Diagrams shown in the following are little more than "boxes and arrows" showing components and dependencies/visibility. This minimal notation will nevertheless suffice for our purpose: sketching solutions and evolutions.

Initial Architecture

The initial architecture was structured around a traditional model-view-controller architecture. This was done in order to overcome uncertainties in the early prototyping phases: nothing was known about the problem domain, and a quick kick-off was found appropriate. A very central element in this architecture was the object model, serving as a constant common frame of reference between members of the developer group and, to some extent, also between developers and members of the business. As it turned out, the initial architecture was in place within two weeks, such that implementation could start almost immediately. The architecture nevertheless succeeded in dividing labour among the developers working on the prototype: it provided for parallel development of the object model and the views, which meant that it was possible to a large extent to involve end users in the early development phases. CVS provided for the ability to work simultaneously on the functions. Although the architecture served as a basis for development for three months, its shortcomings became clear: it provided only a single (persistent store) storage abstraction, and the controller was centralised. These insights, combined with a broadened scope for the prototyping project involving e.g. investigations into legacy


systems and in-depth coverage of additional business areas, demanded an architectural restructuring.

[Figure content: views View1, View2, ..., Viewn attached to a single Control, which sits above the Object Model, the Functions, and the persistent store (PS).]

Fig. 1. Sketch of initial architecture.
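As an illustration that is not part of the original paper, the dependency structure of Fig. 1 can be sketched in Java roughly as follows (all identifiers other than those in the figure are assumptions):

import java.util.ArrayList;
import java.util.List;

// Sketch of the initial architecture: all views share one central
// controller, which mediates access to the common object model.
interface View { void refresh(); }

class ObjectModel { /* shared domain objects */ }

class Control {
    private final ObjectModel model;
    private final List<View> views = new ArrayList<>();

    Control(ObjectModel model) { this.model = model; }

    void register(View view) { views.add(view); }

    // Every user action funnels through this single controller; this
    // centralisation is the bottleneck the later restructurings removed.
    void handleUserAction() {
        // ... update the model, invoke functions, touch the persistent store ...
        for (View view : views) view.refresh();
    }
}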

Intermediate Architecture

As the persistent store used in the project provided for transparent, orthogonal persistence, it turned out to be a relatively straightforward process to reengineer the database component of the system. This meant that throughout the rest of the project a transparent, heterogeneous storage mechanism was available. Since the prototyping process had now run for over three months, a substantial amount of knowledge about the current and future work practice of the prospective end users had been acquired.

[Figure content: views connected to independent controllers Control1, Control2, ..., Controln over the Object Model, Database and Functions, with storage provided by PS, SPS and RDB.]

Fig. 2. Sketch of intermediate architecture.


Therefore a further refinement of the architecture with respect to the work practice was possible: the central controller was divided into several independent controllers. This turned out to be a problem, however, in the development environment used: the syntax-directed code editor only provided for syntactically valid transformations of implementation code via its abstract presentation and high-level operations. Semantic "adjustments" had to be done using the semantic browsing facilities of the editor, and the support for using the fragment system was also ad hoc. Furthermore, the transformation introduced a number of errors, such that the prototype needed renewed testing. Our experience was, though, that this restructuring was well worth the effort: the architecture had become more intelligible, testable, changeable and easier to "work within". This led to a demand for explicit restructuring phases in the development process – something that has proven beneficial between major reviews.

Current Architecture

Although the above-mentioned restructuring provided a major improvement, later development showed the need for yet another major restructuring.

[Figure content: per-area triads View'1/Control1/Functions1 through View'n/Controln/Functionsn over a shared Object Model and Database.]

Fig. 3. Sketch of current architecture.

The nature of changes to functionality had shown that the dependencies in the architecture were awkward: the knowledge of problem domain work processes and related functionality now showed that making the architecture very representative would provide the needed flexibility and independence between components. A shift in strategy towards component-object development also facilitated this restructuring. Although the architecture has now become somewhat more complex to handle, the representativeness of the component objects (view, control and functions) has provided for a successful reconciliation of the software architecture and the problem domain.
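As a rough sketch of this representative structure, one component object per business area could bundle its own view, control and functions; the Java below is an invented illustration, not code from the project:

// Each business area owns a view/control/functions triad; only the
// object model and the database are shared across components.
class SharedObjectModel { }
class Database { }

class Functions { Functions(SharedObjectModel m, Database db) { } }
class Controller { Controller(Functions f) { } }
class AreaView { AreaView(Controller c) { } }

class BusinessAreaComponent {
    final AreaView view;
    final Controller control;
    final Functions functions;

    BusinessAreaComponent(SharedObjectModel model, Database db) {
        this.functions = new Functions(model, db);
        this.control = new Controller(functions);
        this.view = new AreaView(control);
    }
}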


Architectures for experimental system development?

In the Dragon Project we faced the following general problems pertaining to software architecture:
- Evolution: the prototype should (possibly) evolve into the final product.
- Experimentation and exploration: the prototype had to be flexible.
- Parallel development: the prototype had to support the division of labour.
In order to reconcile these problems, a focus on tools, techniques and methods for supporting experimental system development is needed. [7] states that in order for an architectural analysis and design tool to be useful, it should be able to describe any architecture, determine conformance to interfaces, analyse architectures with respect to metrics, and aid in design both as a creative activity and as an analysis activity. Furthermore, it should hold designs, design chunks, design rationales, requirements and scenarios, and provide for the generation of code templates. However, in experimental system development, architecture analysis and design is an ongoing activity. Using specification-oriented approaches would lead the process to fall prey to the same problems that motivated e.g. prototyping in the first place. Thus tools for (re)engineering, analysing and designing software architectures as an iterative process are needed: is software architecture – the structuring of software systems – inherently specification-oriented? We also need to understand how to evolve sound, mature architectures experimentally. This could be done by incorporating software architecture concerns into the promising approach described in [1]. This raises an interesting question, though: in what senses are "user involvement" and "software architecture" compatible terms?

References
[1] Grønbæk, K., Kyng, M., Mogensen, P.: Toward a Cooperative Experimental System Development Approach. In: Computers and Design in Context (Eds. Kyng, M. & Mathiassen, L.), MIT Press, 1997.
[2] Madsen, O.L., Møller-Pedersen, B., Nygaard, K.: Object-Oriented Programming in the BETA Programming Language, ACM Press, Addison-Wesley, 1993.
[3] Christensen, M., Crabtree, A., Damm, C.H., Hansen, K.M., Madsen, O.L., Marqvardsen, P., Mogensen, P., Sandvad, E., Sloth, L., Thomsen, M.: The M.A.D. Experience: Multiperspective Application Development in evolutionary prototyping. To appear in Proceedings of the 12th European Conference on Object-Oriented Programming (ECOOP'98), Brussels, Belgium, July 1998.
[4] Christensen, M., Damm, C.H., Hansen, K.M., Sandvad, E., Thomsen, M.: Architectures of Prototypes and Architectural Prototyping. To be presented at the Nordic Workshop on Programming Environment Research (NWPER'98), Bergen, Norway, June 1998.
[5] Garlan, D., Shaw, M.: An Introduction to Software Architecture. In: Advances in Software Engineering and Knowledge Engineering, Volume I (Eds. Ambriola, V. & Tortora, G.), World Scientific Publishing Company, 1993.
[6] Buschmann, F., Meunier, R., Rohnert, H., Sommerlad, P., Stal, M.: Pattern-Oriented Software Architecture: A System of Patterns, Wiley, 1996.
[7] Kazman, R.: Tool Support for Architecture Analysis and Design. Joint Proceedings of the SIGSOFT '96 Workshops (ISAW-2), San Francisco, CA, October 1996, pp. 94-97.

Object-Orientation and Software Architecture

Philippe Lalanda and Sophie Cherki
Thomson-CSF Corporate Research Laboratory, Domaine de Corbeville, F-91404 Orsay, France
E-mail: {lalanda, [email protected]}

Abstract. In the context of very large and complex systems, object-oriented methods alone do not supply designers with sufficient assets for development and maintenance. We believe that software architecture, providing first solutions to important life-cycle concerns, should add the missing assets. The main purpose of this paper is to show that the two approaches complement each other and to provide first large-grained solutions for integration.

1 Introduction

The development of software systems has been considerably improved by the emergence of object-oriented technology. Object orientation, going from analysis to implementation, has brought better traceability during the development process, and has offered new opportunities in terms of flexibility and reusability. It appears, however, that it is not sufficient today to tackle the development of very large and complex software systems. We believe that, in such a context, object-oriented methods alone do not allow an appropriate level of reuse and do not guarantee easy evolution. In order to handle such software systems, people need first to design and communicate in large chunks in order to lay out the gross organization of the systems, that is, the architecture. Starting the design process at the architectural level permits designers to provide first solutions to important life-cycle concerns like suitability, scalability, reusability or portability. It sets adequate foundations for further developments at the component level that can be based on object techniques. Software architecture is thus a new level of design that needs its own methods and notations, and whose purpose is not to replace the object-oriented development process. On the contrary, the architecting phase has to be integrated into traditional object-oriented approaches in order to constitute a seamless development process. The purpose of this paper is to show that the two approaches complement each other and to provide first large-grained solutions for integration. It is organized as follows. First, the notion of software architecture is presented in section 2. Then, the integration of an architecting phase in traditional object-oriented development processes is discussed in section 3. Section 4 focuses on the issues of designing and representing software architectures and shows that object-oriented techniques can be used at this level.

2 Software architectures

Getting higher levels of abstraction has been a long-lasting goal of computer science in order to master the development of complex systems. In this context, software architecture is emerging as a significant and different design level [1]. Its main purpose is to organize coarse-grained objects or modules identified in the domain model, to explain their relationships and properties, and to bring solutions for their implementation.

Several definitions of software architecture have been proposed so far [1, 2]. A definition which seems to synthesize them is given in [6]: a software architecture is the structure of the components of a program/system, their interrelationships, and principles and guidelines governing their design and evolution over time.

The goal of a software architecture is to address all the expectations of the various stakeholders involved (e.g., schedule and budget estimation for the customer, performance and reliability for the user). With respect to this goal, a software architecture incorporates many different views of the system. A non-exhaustive list of the views which are more commonly developed can be found in [3]. Among them, one can find the structural view, the behavioral view, the environmental view and the growth view, respectively describing the structure of the components and the connectors of the system, the scheduling of system actions, the way of using middleware and hardware resources, and the way of dealing with properties like extensibility of the system.

3 The architecting phase

As indicated in Figure 1, the architecting phase comes very early in the development process. Its purpose is to define the gross organization of a system in order to provide first solutions partially meeting the system requirements and reaching some non-functional qualities like reusability, adaptability or portability. If not prepared at the architectural level, most requirements and non-functional qualities cannot be met at the code level. This is why architectural decisions are most of the time hard to take and require deep expertise both in software engineering and in the domain under consideration. More precisely, the architecting stage involves the following tasks:

- Performing a first domain analysis and understanding the requirements,
- Designing an architecture providing first solutions meeting the system requirements and reaching targeted qualities,
- Allocating requirements to components and connections,
- Representing the architecture,
- Analyzing and evaluating the architecture with regard to the requirements,
- Documenting and communicating the architecture.

The architecting phase influences the development plan. It is followed by a phase of implementation of the architecture, which is mainly concerned with the implementation of the components. Component development can be performed using object-oriented techniques. An integration phase is dedicated to component composition. The purpose here is to verify the conformance of the components with the architecture and to make sure that the implementation meets the requirements and provides the expected qualities. The integration phase is generally performed incrementally with partially implemented components. Such an approach leads to more robust and better suited components, and allows easier feedback on the architecture, which can be adapted if needed.

Fig. 1. Architecture in the development cycle

Most tasks of the architecting phase are insufficiently supported by methods or tools. In fact, although it was defined in the seventies, the software architecture field is still in its infancy. Architectures are still described with informal, personal notations and are often the result of more or less chaotic development. Techniques for validation of software architectures in early phases (that is, before the implementation of components) are only emerging.

Regarding architecture description, object-oriented modeling languages represent a very promising approach. These languages actually define several complementary notations to represent the various aspects of object-oriented software, including for example static and dynamic aspects. Such aspects are also present in software architectures (see section 2) and could be modeled with similar notations. However, although object-oriented modeling languages are currently evolving in order to better model software architectures, they are not ready yet. For example, UML (Unified Modeling Language) still presents important liabilities concerning, for example, the description of different levels of hierarchy in an architecture, software components, relationships with the computing environment, and architectural styles.

4 Architectural design

Designing or selecting a software architecture for a given application is a complex and still open issue. A recent trend in the software community consists in collecting design knowledge arising from experience in the form of patterns. A pattern is the comprehensive description of a well-proven solution for a particular recurring design problem in a given context. Patterns were first introduced in the object-oriented field [4] but are now used in many other domains. Buschmann and his colleagues [5] categorized patterns according to their level of abstraction:

- Architectural patterns are concerned with the gross organization of systems.
- Design patterns are concerned with subsystems or components.
- Language-specific patterns or idioms capture programming experience.

An architectural pattern describes a set of components, the way they cooperate and the associated constraints, the rationale, and the software qualities it provides. It encapsulates important decisions and defines a vocabulary to name architectural elements. Although there is no standard formalism, most authors agree that a good notation for a pattern should include the following aspects:

- An application context,
- A problem statement,
- A discussion of the conflicting forces being part of the problem,
- An architectural solution proposing a tradeoff resolving the forces,
- Hints for implementation,
- The advantages and liabilities of the solution.

We readily acknowledge that patterns do not generate complete architectures, which in addition are generally heterogeneous. However, they provide valuable guidance to designers in their building of an architecture that must obey a set of mandatory requirements. This is due to the fact that they express knowledge gathered by experienced practitioners. They describe structures and organizations with well-understood properties that can be repeated with confidence; analogies can be drawn and pitfalls can be avoided based on recounted experiences. They also improve communication between designers and developers by providing and naming shared backgrounds. This speeds up the design process and permits easier confrontation of alternative solutions.

5 Conclusion

Working at the architectural level provides many advantages, including the following ones:

- Presenting the system at a high level of abstraction and dealing with both constraints on system design and rationale for architectural choices leads to a better understanding of the system.
- Involving different views of the system, software architecture provides a common high-level communication vehicle between the various stakeholders.
- Software architecture embodies the earliest set of design decisions about a system. These decisions are the most difficult to get right and the hardest ones to change because they have the most far-reaching downstream effects. With respect to these issues, software architecture brings new possibilities for early analysis and validation that reduce risk and cost.
- Since software architecture comes very early in the life-cycle, it allows not only code reuse but also design reuse.
- Software architecture integrates the dimensions along which the system is expected to evolve, thus increasing predictability and overall quality.
- The ability of software architecture to use existing assets and to establish a more effective integration makes it possible to reduce time-to-market.

The architecting phase thus plays a major role in the design of large, complex software systems. By logically and physically structuring the system into software components and defining communication models, it provides solutions to important life-cycle concerns. Design methods and software architecture complement each other: architecture sets foundations for object-oriented developments at the component level. However, the software architecture field has received wide attention only recently, and many topics still need to be investigated, including the definition of expressive notations for representing architectural designs, and the development of design methods. In these domains, work from the object-oriented community may have a significant impact. First, modeling languages like UML constitute a very promising way to model architectures. Ongoing evolutions should provide a remedy to the current liabilities of the approach. In addition to this, we believe that the notion of patterns, originally defined in the object field, could constitute the foundation of architectural design.

References

1. Mary Shaw and David Garlan, Software Architecture: Perspectives on an Emerging Discipline, Prentice Hall, 1996.
2. Dewayne E. Perry and Alexander L. Wolf, Foundations for the Study of Software Architecture, ACM SIGSOFT Software Engineering Notes, vol. 17, no. 4, 1992.
3. Ahmed A. Abd-Allah, Composing Heterogeneous Software Architectures, PhD Thesis, University of Southern California, August 1996.
4. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1995.
5. Frank Buschmann, Regine Meunier, Hans Rohnert, Peter Sommerlad, and Michael Stal, Pattern-Oriented Software Architecture: A System of Patterns, Wiley & Sons, 1996.
6. David Garlan and Dewayne E. Perry, Introduction to the Special Issue on Software Architecture, IEEE Transactions on Software Engineering, vol. 21, no. 4, 1995.

Semantic Structure: A Basis for Software Architecture

Robb D. Nebbe

Software Composition Group
Institut für Informatik und angewandte Mathematik
Universität Bern, Neubrückstrasse 10, CH-3012 Bern, Switzerland
[email protected]

Introduction

There are many valid notions of software architecture, each describing a software system in terms of components and connectors. Notions of software architecture may differ in their level of abstraction as well as in their choice of components and connectors. Here, we will concentrate on base-level notions of software architecture. A base-level notion of architecture is characterized by a one-to-one relationship between components and connectors in the architecture and language features from the source code. This would include modules, classes, instances, methods, and dependencies as possible candidates.

The need for a base-level notion of software architecture is clear if we wish to talk about concepts such as architectural drift that are related to the difference between the actual architecture of a system and its "ideal architecture". This is important in reengineering, where decisions must be based on accurate information about the system as it is implemented rather than design information that might have been correct but is now out of sync with the source code.

Limiting the topic to base-level notions of software architecture makes the choice of components and connectors fundamental. What the architecture of a system will be and what it tells us about that system are both consequences of the choice of components and connectors. Understanding the ramifications of a particular choice of components and connectors is the key issue in defining the notion of a base-level architecture; not all possible choices are good choices.

The first section identifies semantic relevance as the principal criterion for separating notions of architecture from other, non-architectural notions of structure. The second section further refines our concept of architecture based on the distinction between semantic and algorithmic structure, which corresponds to the choice between dependencies or method invocations as connectors. The final section discusses domain and situation architectures, which are two complementary notions of architecture based on semantic structure. Domain architectures have classes as components while situation architectures have instances as components. Each is presented along with an explanation of what it tells us about a software system. This is followed by a discussion of the relationship between the two kinds of architectures and how they can be used in conjunction to better understand a software system's capabilities and its possibilities for evolution.

1 Architecture versus Structure

The concepts of components and connectors are sufficiently general that they can describe any notion of software structure. For example, if we choose the components to be tokens and the connector to represent "follows" then we have defined a notion of software structure that is obviously not "architectural" in nature. Another example of a structure which I do not consider to be architectural is the module structure, i.e. include files in C++ or packages in Ada. In general there is no guarantee that the module structure tells us very much about what a system does or how it can evolve.

What is missing is the principle of semantic relevance. Semantic relevance implies that the components and connectors are closely related to some semantic concept used to define the software system and thus to the semantics of the software system. If we want to understand what a software system does we are interested in its semantics. If the choice of components and connectors is not semantically relevant then our ability to understand the software system is severely undermined.

However, if we understand how modules relate to semantically relevant concepts then they do tell us something about a system. For example, if we understand what principles determine how classes are organized into modules then we can infer semantically relevant information from the module structure. Unfortunately, the relationship between modules and classes, to take an example, is not guaranteed; it depends on conventions that are not enforced by the language.

If we restrict our choice of components and connectors to those that are semantically relevant and present in the source code then components are either classes or instances and connectors are either dependencies or method invocations. However, while the principle of semantic relevance is necessary, it is not sufficient to ensure that a notion of software structure is architectural.

2 Semantic versus Algorithmic Structure

The second issue relates to the choice of connectors and determines whether we define a semantic or an algorithmic notion of structure. If we choose dependencies then we have defined a semantic notion of structure, while the choice of method invocations results in an algorithmic notion of structure. A semantic structure reflects what a software system does while an algorithmic structure reflects how it does it. The two are not unrelated. What a system needs to do largely influences the choice of how, while how a system does something determines what gets done. The difference mirrors the distinction between an axiomatic approach to semantics as compared to an operational approach.

An algorithmic notion of structure fails to be architectural in a number of ways, but its biggest drawback is that, if we consider static notions, it is not substantially different from what is already provided by the source code. A closely related notion is the dynamic behavior of instances, which is very interesting in its own right, but the resulting base-level notion of architecture can be extremely sensitive to small changes in the input and can vary greatly from one execution to the next, making it unstable.


Furthermore, adopting method invocations as connectors means that the resulting notion of structure is particularly sensitive to changes in both algorithms and data structures, even when they have no impact on the semantics of the system. It is like considering the architecture of a building to change every time the elevator goes from one floor to the next or somebody opens a window. This instability compromises the ability to infer information about the system based on this structure and, in my opinion, makes algorithmic structures unsuitable as a basis for software architecture.

Choosing dependencies as connectors results in a base-level notion of architecture that is stable. This is a consequence of the fact that choosing either classes or instances as components results in an interesting notion of architecture that is static (this is further elaborated in the following section). Finally, it is relatively straightforward to understand what each notion of architecture tells us about a software system.
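As a minimal illustration of the two kinds of connectors, consider the following Java sketch (the class names are invented for illustration, not taken from the paper). The dependency of Scheduler on Clock is a semantic connector, visible in the declared types, while the invocations inside waitUntil are algorithmic connectors that change whenever the implementation strategy changes:

class Clock {
    long now() { return System.currentTimeMillis(); }
}

class Scheduler {
    // Semantic connector: the declared dependency on Clock. It is visible in
    // the field and constructor types and survives any change of algorithm.
    private final Clock clock;

    Scheduler(Clock clock) { this.clock = clock; }

    // Algorithmic connectors: the invocations below. Replacing the busy-wait
    // with a different strategy changes which invocations occur and how often,
    // yet leaves the dependency on Clock untouched.
    void waitUntil(long deadline) {
        while (clock.now() < deadline) {
            Thread.yield();
        }
    }
}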

3 Domain and Situation Architectures

Once we have restricted our choice of components and connectors to those that are semantically relevant, and in particular the choice of connectors to dependencies, we have two possibilities left. If we choose classes as the components then we have a notion that we will call a domain architecture; choosing instances as components results in what we will call a situation architecture. Domain and situation architectures are two complementary notions of architecture. Each tells us something different about a software system, and the relationship between the two is particularly revealing.

3.1 Domain Architectures

A domain architecture represents the structure between the concepts that make up the problem domain. Accordingly, it provides information about whether and how different concepts are related. It is very close to the idea of a schema in a database. It can be defined as follows:

Domain Architecture: the set of classes (components) that are connected by the potential relationships (connectors) between these classes, as expressed through their sets of dependencies.

Relationships appear in the parameter types of a class's methods. A single relationship will often appear across more than one method. For example, a component typically appears in the constructor as well as in a method to access the component. There is also more than one kind of relationship; for example, the relationship of an array with the type used to index it is not the same kind of relationship as the one existing with the type of items it contains, even when the two types are the same. Finally, relationships are either permanent or temporary. Consider the fact that a list always has a length but not necessarily a first item.
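The following Java sketch (with invented domain classes, purely for illustration) shows where these relationships surface in a class's signatures: one relationship spread over several methods, and both a permanent and a temporary variant:

class Catalogue { }
class Member { }

class Library {
    private final Catalogue catalogue;  // permanent relationship
    private Member visitor;             // temporary relationship

    // The Library-Catalogue relationship appears in the constructor...
    Library(Catalogue catalogue) { this.catalogue = catalogue; }

    // ...and again in an accessor: one relationship, several methods.
    Catalogue catalogue() { return catalogue; }

    // The Library-Member relationship is sometimes present, sometimes
    // absent, like the first item of a list.
    void admit(Member m) { visitor = m; }
    void release()       { visitor = null; }
}

In the corresponding domain architecture, Library, Catalogue and Member are the components, and the potential relationships expressed by these field and parameter types are the connectors.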


At this point it is obvious that there are different kinds of relationships, that they do not correspond to individual methods, and that they may have different durations. However, it will require more work both to catalogue the different kinds of relationships and to develop approaches for identifying each relationship.

An important consequence of the definition of domain architecture is that information hiding effectively eliminates dependencies that relate only to implementation strategies. Accordingly, an architecture is often much smaller than the system as a whole, and since the source code is at different levels of abstraction there is a core architecture that typically provides, relative to the rest of the code, a fairly high-level view of how the system is structured.

3.2 Situation Architectures

A situation architecture represents the structure between entities from the problem domain. If we think of classes as modeling ideas then instances model incarnations of these ideas. A situation architecture is similar to an instance of a database, but there is an important difference. It can be defined as follows:

Situation Architecture: the set of instances (components) that are connected by the actual relationships (connectors) between these instances.

The term situation architecture reflects the fact that a situation architecture represents the structure of a particular concrete situation from the problem domain, as defined by a configuration of instances and their actual relationships. This is in contrast to a domain architecture, which represents potential relationships. The relationships are given by the parameters of the methods (as they relate to the state of an instance), in contrast with a domain architecture where it is the types of the parameters that are important. The kinds of relationship are the same as in a domain architecture, but because they represent actual rather than potential relationships they are either present or absent.

We could consider a snapshot of a system at any time during its execution as a situation architecture. This would seem to be unstable since, as the system executes, this structure will change. However, if we look at every snapshot we will see that there is a part of this structure that is always the same; it is constant throughout the execution of the software system. (To convince yourself of this, consider how garbage collection works; the root set is closely related to the existence of a situation architecture.) This is what we will call the situation architecture; it represents the initial state of the software system.

One case worth pointing out is that of singleton classes, i.e. classes with a single instance. Due to the nature of a singleton class (only one instance), they embody information about actual relationships within the software system and are particularly relevant within the situation architecture. They are typically key pieces in the situation architecture and are vital during architectural recovery.
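Continuing the invented Library sketch from above, the situation architecture would be the configuration of instances wired up at start-up, i.e. the part of every run-time snapshot that stays constant:

class Main {
    public static void main(String[] args) {
        Catalogue catalogue = new Catalogue();     // components: instances
        Library central = new Library(catalogue);  // connector: an actual,
                                                   // permanent relationship
        // Structure created and dropped later, such as admitted members,
        // belongs to individual run-time snapshots, not to the situation
        // architecture.
        central.admit(new Member());
        central.release();
    }
}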

3.3 The Relationship between Domain and Situation Architectures

The relationship between a domain and a situation architecture is one of instantiation. A situation architecture is an instance of a domain architecture. The situation architecture's components and connectors are instances of the domain architecture's components and connectors. The two are complementary, with each playing its own role in understanding the software system. Put simply, the domain model defines what can and cannot happen, while the situation model defines, in conjunction with any input, what does and does not happen.

A domain architecture documents the complete set of potential relationships. It captures the possible while excluding the impossible and is the arbitrator of what relationships can and cannot arise during the execution of a software system. A domain model is a domain-specific language for describing different situations that arise within the problem domain. (The difference between a model and an architecture is that a model includes the complete semantics rather than just the underlying structure; for example, a stack and a queue both have the same semantic structure but different semantics.)

A situation architecture represents an actual configuration of instances and their existing relationships. It determines which configurations of instances are reachable during execution and as a consequence constrains the possible run-time configurations of the software system, in much the same way that the domain architecture constrains the possible situation architectures. As an example, if a relationship does not appear in the domain architecture then we can be sure it will not appear in the situation architecture. This has consequences for the evolution of a system. Either the domain architecture supports a relationship, in which case we need only adapt the situation model, or the domain architecture itself must be revised in order to support the relationship.

Conclusion

A base-level architecture is necessary in order to meaningfully discuss the architecture of a software system as it was built. Using the notion of semantic structure as a basis for software architecture, two different kinds of architectures, domain architectures and situation architectures, were identified. Each provides relevant information about a software system's capabilities as well as its possibilities for evolution. Further research is needed to better understand the problems related to recovering and understanding both kinds of architectures from software systems.

Acknowledgments

This work has been funded by the Swiss Government under Project NFS-2000-46947.96 and BBW-96.0015 as well as by the European Union under the ESPRIT project 21975.

A Java Architecture for Dynamic Object and Framework Customizations

Linda M. Seiter

Computer Engineering Department, Santa Clara University
Santa Clara, CA 95053, USA
[email protected]

1 Workshop Contribution

A collection of design patterns was described by Gamma, Helm, Johnson, and Vlissides in 1994 [1]. Each pattern ensures that a certain aspect can vary over time, for example the operations that can be applied to an object or the algorithm of a method. The patterns are described by constructs such as the inheritance and reference relations, attempting to emulate more dynamic relationships. As a result, the design patterns demonstrate how awkward it is to program natural concepts of reuse and evolution when using a traditional object-oriented language. We investigate the generic evolution patterns common among many design patterns, and the role that language has in supporting evolution within various software architectures.

Class-based models and languages generally require dynamic behavior to be implemented through explicit delegation, using a technique similar to the state, strategy and visitor design patterns. The use of explicit delegation to achieve dynamic implementation is flawed in two respects. From a design point of view, the relation between an object's interface and its implementation is not adequately captured. From an implementation point of view, additional virtual invocation is required, and the host reference (this) is not properly maintained. Java member classes come close to providing the language construct necessary for a clean implementation of the state design pattern; however, they still require explicit delegation, and they do not support delegation to an external object. Thus, member classes do not support multiple object collaborations, such as those described by the visitor pattern.

Frameworks elevate encapsulation and reuse to the level of large-grained components, namely groups of collaborating classes. The abstract model defined in a framework is easily customized to an application-specific model through static subclassing and method overriding. However, it is often necessary for an application to dynamically customize a framework in multiple, potentially conflicting ways. This could require multiple and/or dynamic inheritance.

We propose the workshop discussion topic of architectural support of reuse and evolution. We investigate both small-scale reuse and evolution, as on an individual object level, as well as large-scale reuse and evolution as is described by design patterns, frameworks and other collaboration-based constructs. The existing Unified Modeling Language (UML) notation is not sufficient to clearly represent the many aspects of software evolution and reuse. Existing languages like C++ and Java must be extended to cleanly implement dynamic evolution and large-scale reuse. We have proposed an architecture for managing multiple customizations of a framework within an application [5]. Each customization defines its own view of the application class model, thus allowing an object to have multiple and dynamic implementations without requiring multiple or dynamic inheritance. The proposed architecture is based on a small extension of the existing Java syntax. We are investigating several implementation approaches, including special Java class loaders, a variant of the visitor pattern, and multi-dispatch variants.

2 Small scale: object level

Numerous design patterns have been defined to allow the binding between an object and its implementation to vary [1]. The state pattern in particular allows an object to alter its behavior as its internal state changes, and represents a class-based implementation of the dynamic model. While statecharts effectively capture the dynamic aspects of an object, their translation into a class-based implementation loses the relation between an object's static interface and its dynamic implementation.

Class-based languages such as C++ and Java do not allow a class implementation to dynamically vary. Virtual methods support a specific form of dynamic binding, allowing a method invocation to be bound based on the class of the host object, rather than the class of the invocation variable. However, the host object's class is fixed, thus its behavior is fixed. A design technique similar to the state design pattern must be used to dynamically vary object behavior. The pattern relies on explicit delegation to a separate implementation hierarchy.

There are several flaws in implementing the dynamic model of a class using the state pattern design. When an object receives a message, it must forward both the request and itself to its state reference, which provides the appropriate state-specific implementation. Thus, each method in the base class must explicitly delegate to the state hierarchy, requiring multiple virtual method invocations.

A more interesting issue involves the scope of the methods implemented in the state hierarchy. The methods defined in the state hierarchy have a very distinct purpose: they implement behavior for another object. Note however that a Java implementation does not reflect this purpose. Rather, the state methods simply appear to take an object reference as an argument. There does not exist a language construct to clearly document the semantics of the implementations defined within the state class. The UML class diagram also fails to capture the relation between an object's static interface and its dynamic implementors.

The recent introduction of Java inner member classes alleviates some of the scoping problems of the state pattern implementation. The base class could be redesigned to nest the state hierarchy as member classes. However, the inner class solution does not scale to collaboration-based designs, such as the visitor design pattern.
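A minimal Java sketch of the structure criticised above (all names invented for illustration) makes the two flaws visible: every base-class method forwards both the request and itself, and nothing in the language marks the state methods as implementing behavior for another object:

interface ConnectionState {
    // This method implements behavior *for the host*, yet the language shows
    // only an ordinary parameter; nothing documents that purpose.
    void send(Connection host, byte[] data);
}

class Closed implements ConnectionState {
    public void send(Connection host, byte[] data) {
        host.setState(new Open());  // internal state change...
        host.send(data);            // ...and yet another virtual invocation
    }
}

class Open implements ConnectionState {
    public void send(Connection host, byte[] data) {
        // actually transmit the data
    }
}

class Connection {
    private ConnectionState state = new Closed();
    void setState(ConnectionState s) { state = s; }

    // Each method must explicitly forward both the request and itself:
    void send(byte[] data) { state.send(this, data); }
}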

3 Large scale: collaboration-based design

A collaboration captures the structural and behavioral relations required to accomplish a specific application task. When implemented using a framework approach, a collaboration is described through a set of abstract classes along with their structural and behavioral relations. Each abstract class represents a role in the collaboration. The abstract class may contain concrete methods that define the actual collaboration (the object interaction), along with abstract methods that allow application-specific customization of behavior. The abstract model defined in a framework is easily customized to an application-specific model through static subclassing and method overriding. An alternative approach is to use parameterization or template classes rather than abstract classes, with customization achieved through template instantiation.

A collaboration may simply be viewed as a slice of an application's class model. Collaborations are thus easily modeled using static class diagrams for structural relations and collaboration diagrams for behavioral relations. Some issues arise however when the design must be mapped into an implementation. VanHilst and Notkin note that implementations based on the framework approach result in the excessive use of dynamic binding [9], and alternatively propose an approach based on templates and mixins. VanHilst and Notkin state however that their approach may result in complex parameterizations and scalability problems.

Smaragdakis and Batory solve this problem by elevating the concept of a mixin to multiple-class granularity, using C++ parameterized nested classes [7]. While addressing the scalability problem, the approach does not address the issue of dynamic and/or conflicting customizations as described in Holland's work on Contracts [3]. The contract mechanism allows multiple, potentially conflicting component customizations to exist in a single application. However, contracts do not allow conflicting customizations to be simultaneously active. Thus, it is not possible to allow different instances of a class to follow different collaboration schemes. The contract mechanism is also based on a special-purpose language.

In designing a solution for framework customization, we have several requirements that should be satisfied. A framework architecture must support the following:

- Framework/application independence. The framework and the application should be independent. This allows the framework to be reused with many different applications, and the application to reuse many different frameworks.
- Adaptor independence. The mechanism used to adapt an application to a framework should be independent of the application itself. This allows new adaptations to be defined without modifying the existing application class model. A framework may be customized by an application in multiple, independent ways. Customizations may be introduced dynamically as the application is running. Thus, both the framework class model and the application class model must be independent of the framework adaptors.
- Interface incompatibility. Adaptation of an application class to a framework role may not be achievable through the use of parameterization (templates). The signature of an application class method may not correspond to a framework interface, due to conflicts in name, argument type and cardinality. Additional control flow may also be required when adapting the application to a framework. Thus, a simple mapping among method names may not be sufficient.
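The interface-incompatibility point can be illustrated with a small Java sketch (framework and application names are invented): the framework role and the application class conflict in method name and argument types, so a separate adaptor, itself independent of both sides, bridges them:

interface Renderer {              // framework role
    void draw(int x, int y);
}

class CityMap {                   // application class, incompatible interface
    void paintAt(double latitude, double longitude) { /* ... */ }
}

class CityMapRenderer implements Renderer {  // adaptor: neither side changes
    private final CityMap map;

    CityMapRenderer(CityMap map) { this.map = map; }

    public void draw(int x, int y) {
        // additional control flow: convert coordinates before delegating
        map.paintAt(x / 1000.0, y / 1000.0);
    }
}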

4 Conclusion

We have extended the UML class diagram notation to directly model the relation between an instantiable class interface and its multiple implementations [6, 5]. We are extending the Java Virtual Machine to directly support dynamic object behavior and dynamic framework customizations, and are experimenting with several alternative implementation approaches [5]. The architecture we propose is an extension of the existing Java language, thus we follow an application framework style of implementation using abstract classes and interfaces to describe the collaboration roles. However, the architecture could alternatively be used with the template and/or mixin approach to collaboration-based design, given the proposals for adding such features to Java [4, 8].

References

1. Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software, Professional Computing Series, Addison-Wesley, Reading, MA, October 1994.
2. Rational Software Corporation, UML Semantics, http://www.rational.com/uml/html/semantics.
3. I. Holland, Specifying Reusable Components Using Contracts, in Proc. ECOOP '92, pp. 287-308.
4. M. Odersky and P. Wadler, Pizza into Java: Translating Theory into Practice, in ACM Symposium on Principles of Programming Languages, 1997.
5. Linda M. Seiter and Ari Gunawan, A Java Architecture for Dynamic Framework Customizations, Technical Report, Santa Clara University, Department of Computer Engineering, submitted to ICSE '99.
6. Linda M. Seiter, Jens Palsberg, and Karl J. Lieberherr, Evolution of Object Behavior Using Context Relations, IEEE Transactions on Software Engineering, vol. 24, no. 1, January 1998.
7. Y. Smaragdakis and D. Batory, Implementing Layered Designs with Mixin Layers, in Proc. ECOOP '98, pp. 550-570.
8. K. Thorup, Genericity in Java with Virtual Types, in Proc. ECOOP '97, pp. 444-471.
9. M. VanHilst and D. Notkin, Using C++ Templates to Implement Role-Based Designs, in JSSST International Symposium on Object Technologies for Advanced Software, Springer-Verlag, 1996, pp. 22-37.

WCOP '98
Summary of the Third International Workshop on Component-Oriented Programming

Jan Bosch¹, Clemens Szyperski², and Wolfgang Weck³

¹ University of Karlskrona/Ronneby, Dept. of Computer Science, Ronneby, Sweden. [email protected]
² Queensland University of Technology, School of Computing Science, Brisbane, Australia. [email protected]
³ Turku Centre for Computer Science and Åbo Akademi University, Turku, Finland. [email protected]

1 Introduction

WCOP'98¹, held together with ECOOP'98 in Brussels, Belgium, was the third workshop in the now established series of workshops on component-oriented programming. The previous two workshops were held with ECOOP'96 in Linz, Austria, and with ECOOP'97 in Jyväskylä, Finland. WCOP'96 had focussed on the principal idea of software components and worked towards definitions of terms. In particular, a high-level definition of what a software component is was formed. WCOP'97 concentrated on compositional aspects, architecture and gluing, substitutability, interface evolution, and non-functional requirements. WCOP'98 had a closer look at issues arising in industrial practice and developed a major focus on the issues of adaptation. Quality attributes (as non-functional requirements are now preferably called) and component frameworks featured as well, although much less than was hoped by the workshop organisers.

WCOP'98 had been announced as follows: After WCOP'96, focusing on the fundamental terminology of COP, and WCOP'97, expanding into the many related facets of component software, WCOP'98 shall concentrate on those software architecture aspects of component software that directly affect the actual design and implementation, i.e., programming of component-based solutions. In particular, a focus on component frameworks, as introduced below, is suggested.

COP aims at producing software components for a component market and for late composition. Composers are third parties, possibly the end user, who are not able or willing to change components. This requires standards to allow independently created components to interoperate, and specifications that put the composer into the position to decide what can be composed under which conditions. On these grounds, WCOP'96 led to the following definition: "A component is a unit of composition with contractually specified interfaces and explicit context dependencies only. Components can be deployed independently and are subject to composition by third parties."

A problem discussed at length at WCOP'97 was non-functional requirements. Another key problem that results from the dual nature of components between technology and markets are the non-technical aspects of components, including marketing, distribution, selection, licensing, and so on. While it is already hard to establish functional properties under free composition of components, non-functional and non-technical aspects seem quickly beyond controllability. One promising key approach to establishing composition-wide properties of functional and non-functional nature is the use of component frameworks. A component framework is a framework that is itself not modified by components, but that accepts component instances as "plug-ins". A component framework is thus a deliverable on its own that can enforce (sub)system-wide properties of a component system. As such, a component framework is sharply distinct from application frameworks, which are subject to partial whitebox reuse and which do not retain an identity of their own in deployed systems.

The call for contributions in the area of systems rather than individual components and their pairwise coupling was addressed in only a minority of the submissions. It can be speculated that this is symptomatic for the relative youth of the component software discipline. Fifteen papers from seven countries were submitted to the workshop and formally reviewed. Due to the good quality, all papers were accepted for presentation at the workshop and publication in the proceedings. About 40 participants from around the world participated in the workshop. Based on the accepted submissions, the workshop was organised into four sessions:

1. Adaptation and composition (three papers).
2. Adaptation and configuration (four papers).
3. Component frameworks and quality attributes (four papers).
4. Large-scale applications and experience (four papers).

The workshop was opened by an invited keynote presented by Pierre America and Henk Obbink entitled 'Component-based domain-specific family architectures', nicely setting the scene with a look at some of the domains and issues targeted by teams at Philips Research. All of the following sessions were organised into dense bursts of a few presentations followed by an extended period of discussion, with the session's presenters forming a panel. This format was experimentally chosen over one that uses break-out groups, to allow all participants to follow all activities. All sessions were moderated by one of the workshop organisers.

¹ The workshop reader contains short versions of the papers. Full-length papers have been published by the Turku Centre for Computer Science (TUCS) in the TUCS General Publications Series, Vol. 10, ISBN 952-12-0284-X, 1998. http://www.tucs.fi/publications/general/G10.html


2 Adaptation and composition

The first session focused on detailed technical problems of adaptation in a component setting. A first issue (Kniesel) was type-safe delegation, to enhance object-composition-based adaptation to the potential of inheritance-based composition of classes in terms of maintenance of a common identity of the adapted object. This was countered by a formal analysis of the semantic problems of inheritance (Mikhajlova), arguing that inheritance of implementation is more error-prone than forwarding-based composition, and that the same problems hold for delegation-based composition. Finally, composition based on trading was proposed (Outhred & Potter), where it was noted that trading would not be based on run-time matching of semantics but on something similar to COM category identifiers (CATIDs).

The discussion then wandered off into the pros and cons of formal methods. Starting with the question whether the presented type-safe delegation mechanism would not suffer from the inheritance anomaly, the answer was that while this approach was provably safe, it may well be overly constraining and thus rule out type-safe cases that are important in practice but not allowed in the proposed scheme. This led to the observation that this problem is general with algebraic approaches and that refinement-calculus-like approaches would be of advantage. While the participants agreed that in the context of component software formal reasoning (with an explicit avoidance of global analysis!) was useful and cost-effective, the suspicion remained that it is always those aspects that get formalised that are easy to formalise. Quality attributes were seen as an example.

Returning to the immediate matters at hand, the participants then focused on the programming language issues in the context of the proposed delegation and trading mechanisms. A single programming language approach was clearly seen as inadequate in a practical setting; the agreement was that while multiple programming languages ought to be allowed, a common underlying model would be required for both the trading and the delegation mechanism.

3 Adaptation and configuration

While extending the first session by continuing the theme of component adaptation, the second session somewhat shifted the emphasis from the adaptation of individual components and the resulting compositional consequences to the role of adaptation in the configuration of systems. This line is rather fine, though, and the workshop organisers could not outright reject the suspicion that this was merely done to avoid a single dominating morning session on adaptation. The current interest in a wide variety of adaptation issues and approaches is indeed widespread. The discussion half of the second session culminated in the existential question: do we really want to adapt? Or much rather: should architecture, standardisation, and normalisation not aim to eliminate all but a few cases that require adaptation? This question was left open in the end.

Keller & Hölzle opened the session discussing late component adaptation in Java through the binary adaptation of classes during program loading. Welch & Stroud discussed the adaptation of connectors in software architectures, whereas Kucuk et al. addressed customizable adapters for black-box components. Finally, Niemelä & Marjeta discussed the dynamic configuration of distributed components. Particular points addressed in the discussion were:

- The wish to separate quality-attribute behaviour and domain behaviour and to allow for late adaptation.
- The problem of object identity in conjunction with adaptation. Should an adapted object have a new identity or should it share the identity of the original object?
- Can meta-programming really be used for adaptation in multiple quality dimensions, or is this just a very sophisticated way to hack a system in order to integrate conflicting issues? The key problem here is that the apparent orthogonality of separated issues may not actually hold. The resulting effective non-composability of meta-level fragments can lead to very subtle errors. At the same time, it is clear that the undisciplined approach to composition and handling of quality attributes is not promising either.

Components can be adapted in their binary form and very late in the process of application creation, e.g., during the loading of component binaries (a small Java sketch of this load-time hook follows at the end of this section). One of the issues raised during the discussion was whether the correctness of these adaptations could be guaranteed or formally proven. In addition, assuming that an adapted component maintains the same identity, how should the case be dealt with where multiple extensions are applied to the same component (although to different instances)? Otherwise, for one component identity, multiple behaviours may exist in the system.

Finally, several authors mentioned that rather than working with small or large components, they made use of medium-grained components. However, no author gave a definition of medium-grained components, nor of the difference to small and large components, in other than very vague terms. Nevertheless, a shared view existed among the workshop participants that reusable components should be larger than individual classes but not have the size of, e.g., an entire object-oriented framework.
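As a minimal Java sketch of the load-time adaptation idea discussed above (the class and directory names are invented, and the transformation is left as an identity placeholder; the actual bytecode rewriting performed by the cited approaches is not reproduced here):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class AdaptingClassLoader extends ClassLoader {
    private final Path componentDir;

    AdaptingClassLoader(Path componentDir) { this.componentDir = componentDir; }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            byte[] raw = Files.readAllBytes(
                componentDir.resolve(name.replace('.', '/') + ".class"));
            byte[] adapted = transform(raw);  // binary adaptation hook
            return defineClass(name, adapted, 0, adapted.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }

    // A real adapter would rewrite the bytecode here (add interfaces, wrap
    // methods, and so on); the identity transform only marks the hook.
    private byte[] transform(byte[] classFile) {
        return classFile;
    }
}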

4 Component Frameworks and Quality Attributes

Moving from smaller to larger scale, the third session focused on system-wide architectural issues. The first contribution (Robben et al.) proposed to separate application components from non-functional components (addressing distribution etc.) using meta components, picking up on a thread started in the second session. In particular, it was proposed to use a meta hierarchy to layer separate aspects, e.g., application + reliability + security + distribution. The second contribution (Lycett & Paul) proposed to make communication, cooperation, and coordination as composable as the components themselves. The third (Graw & Mester) looked into the issues of federated component frameworks (which delighted the organisers, since at least one presenter directly addressed the proposed workshop theme), the main issue being the interoperability and cooperation between component frameworks. The fourth contribution (Alencar et al.) proposed a methodology to systematically determine components and glue.

The separation/composability proposals relaunched the discussion on whether different quality attributes really can be handled in an orthogonal way. Participants spontaneously found examples where that would not be the case, leading to the observation that dependencies between qualities would need to be pinpointed. A general consensus among the participants, and in particular backed by those with significant industrial experience, was that well-performing systems today are always the result of competent overall design. It was emphasised that quality attributes today are designed in, and that their separation may help to explore design alternatives but, at least today, does not seem to really solve the problems.

The layered approach to the separation of quality attributes was also questioned. If the qualities were orthogonal, single-layer composition would suffice. However, since they admittedly are not, the order of layering affects semantics, and making it explicit may help to organise things. The state of the art does not permit systematic analysis of more than one quality at a time, while experienced designers know how to handle many simultaneously, but usually are unable to fully explain their heuristics.

5 Large-scale application and experience

The fourth session attempted to zoom out fully and look at current industrial activities and large-scale issues. An interesting contribution on component testing (Grossman) proposed 100% code coverage testing for all components, based on test harnesses that systematically fail such intricate things as heap space allocation requests. The approach does not reach 100% path coverage, of course. Based on the difficulties with domain-neutral component approaches, the second contribution (Ingham & Munro) proposed domain-specific languages. The third contribution (Helton) looked at the business impacts of basing large-scale development on components and frameworks. Finally, the fourth contribution to the session (Cherinka et al.) proposed to use static analysis methods to eliminate dead code in component-based solutions, which can become a real problem in very large systems that evolve over time.

The component testing approach raised concerns regarding scalability and coverage. Also, would 'protocol-based' testing be needed to properly deal with internal states? The proposed approach does allow for 'spying' of component-internal states through special interfaces between the component and the test harness, addressing some of these issues, but the concern remained whether one interface could be used to take a component into a state that would not be reachable through another interface also supported by the component. Another concern was whether the testing approach would be strong enough if it concentrated on individual components one at a time. For example, how would callbacks be tested? Nevertheless, it was reported that the approach actually works and that those components that passed all tests have been released to developers without any errors being reported so far.

6 Brief summary

The workshop was organised in four sessions without a final session to explicitly gather conclusions and trends. Nevertheless, there was a strong indication that the field is currently focusing on adaptation and quality attributes. Adaptation needs can be seen as a certain sign of discipline immaturity and thus as a problem in the relatively small, although there will always be remaining cases that require adaptation. Quality attributes, on the other hand, cannot be captured usefully by concentrating on the small; they are really systemic properties that require focusing on the large. This tension between the problems in the small and the problems in the large is really characteristic of component technology.

Type-Safe Delegation for Dynamic Component Adaptation

Günter Kniesel

University of Bonn
[email protected], http://javalab.cs.uni-bonn.de/research/darwin/

The aim of component technology is the replacement of large monolithic applications with sets of smaller components whose particular functionality and interoperation can be adapted to users' needs. However, the adaptation mechanisms of component software are still limited. Current proposals concentrate on adaptations that can be achieved either at compile time or at link time ([1], [2]). There is no support for dynamic component adaptation, i.e. unanticipated, incremental modifications of a component system at run-time. This is especially regrettable since systems that must always be operational would profit most from the ability to be structured into small interchangeable components that could evolve independently and whose functionality could be adapted dynamically.

Existing component adaptation techniques are based on the replacement of one component by a modified version. This approach is inapplicable to dynamic adaptation: at run-time components cannot simply be replaced or modified because their "old" version might still be required by some other parts of the system. Thus we are faced with the problem of changing their behaviour solely by adding more components.

This problem has two aspects. On one hand, the new components must be used instead of the old ones by those parts of the system that should perceive the new behaviour. This requires the component infrastructure to allow "re-wiring", i.e. dynamic modification of the information and event flow between components. On the other hand, the new and the old component must work together "as one". One reason might be that both have to manage common data in a consistent fashion. Another reason arises from the initial motivation of component-oriented programming, incrementality: the new component should not duplicate functionality of the old one. Thus there must be some way for the new component to "inherit" all unmodified behaviour but substitute its own behaviour where appropriate. In traditional, statically typed, class-based object models, where component interaction at run-time is solely based on message sending, this is impossible to achieve without compromising reuse ([1]).

An interesting alternative is the concept known as delegation ([5]). An object, called the child, may have references to other objects, called its parents. Messages for which the message receiver has no matching method are automatically forwarded to its parents after binding their implicit self parameter to the message receiver. Thus, all subsequent messages to self will be addressed to the message receiver, allowing it to substitute its own behaviour for parts of the inherited one. Many authors have acknowledged the modelling power and elegance of delegation but at the same time criticised the lack of a static type system, which made delegation incompatible with traditional object models. It is the main achievement of DARWIN ([3]) to have shown that type-safe dynamic delegation with subtyping is possible and can be integrated into a class-based environment.

Compared to composition based only on message sending, delegation in the DARWIN model is easy and results in more reusable designs because

- it requires minimal coding effort (addition of a keyword to a variable);
- it introduces no dependencies between "parent" and "child" classes, allowing parent classes to be reused in unanticipated ways without fear of semantic conflicts and child classes to adapt themselves automatically to extensions of parent types (no "syntactic fragile parent class problem").

In the context of component-oriented programming, type-safe delegation enables extension and modification (overriding) of a parent component's behavior. Each extension is encapsulated in a separate component instance that can be addressed and reused independently. Delegating child components can be transparently used in any place where parent components are expected. Unlike previous approaches, which irrecoverably destroy the old version of a component, delegation enables two types of component modifications. Additive modifications are the product of a series of modifications, each applied to the result of a previous one. They are enabled by the recursive nature of delegation: each new "extension component" can delegate to the previous extension. Additive modifications meet the requirement that the result of compositions / adaptations should itself be composable / adaptable. Disjunctive modifications are applied independently to the same original component. They can be implemented as different "extension components" that delegate to the same parent component. Disjunctive extensions are most useful in modeling components that need to present different interfaces to different clients.

A sketch of DARWIN and a detailed description of the way in which it supports dynamic component adaptation and independent extensibility of components is contained in [4].
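To see what is at stake, compare plain-Java forwarding with the delegation semantics described above (a sketch with invented class names; DARWIN's actual keyword syntax is not reproduced here):

class Parent {
    void greet()  { System.out.println("hello from " + name()); }
    String name() { return "parent"; }
}

class Child {
    private final Parent parent = new Parent();  // in DARWIN: a delegation link

    void greet()  { parent.greet(); }  // forwarding: self is re-bound to the
                                       // parent, so this prints "hello from
                                       // parent"; under delegation, self would
                                       // stay bound to the child and "hello
                                       // from child" would be printed instead
    String name() { return "child"; }
}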

References
1. Harrison, William and Ossher, Harold and Tarr, Peter. Using Delegation for Software and Subject Composition. Research Report RC 20946 (922722), IBM Research Division, T.J. Watson Research Center, Aug 1997.
2. Keller, Ralph and Hölzle, Urs. Supporting the Integration and Evolution of Components Through Binary Component Adaptation. Technical Report TRCS97-15, University of California at Santa Barbara, September 1997.
3. Kniesel, Günter. Darwin - Dynamic Object-Based Inheritance with Subtyping. Ph.D. thesis (forthcoming), University of Bonn, 1998.
4. Kniesel, Günter. Type-Safe Delegation for Dynamic Component Adaptation. In Weck, Wolfgang and Bosch, Jan and Szyperski, Clemens, editors, Proceedings of the Third International Workshop on Component-Oriented Programming (WCOP ’98). Turku Centre for Computer Science, Turku, Finland, 1998.
5. Lieberman, Henry. Using Prototypical Objects to Implement Shared Behavior in Object Oriented Systems. Proceedings OOPSLA ’86, ACM SIGPLAN Notices, 21(11):214–223, 1986.

Consistent Extension of Components in Presence of Explicit Invariants
Anna Mikhajlova
Turku Centre for Computer Science, Åbo Akademi University, Lemminkäisenkatu 14A, Turku 20520, Finland

In an open component-based system, the ultimate goal of creating an extension is to improve and enhance functionality of an existing component by tuning it for specific needs, making it more concrete, implementing a faster algorithm, and so on. Effectively, the client of a component benefits from using an extension only if the extension does not invalidate the client. Imposing semantic constraints on extensions ensures their consistency from the client perspective.

We view a component as an abstract data type having an encapsulated local state, carried in component attributes, and a set of globally visible methods, which are used to access the attributes and modify them. In addition, every component usually has a constructor, initializing the attributes. Each component implements a certain interface, which is a set of method signatures, including the name and the types of value and result parameters. An extending component implements an interface which includes all method signatures of the original component and, in addition, may have new method signatures. This conformance of interfaces forms a basis for subtyping polymorphism and subsumption of components.

We consider a component composition scenario in which a component is delivered to a client, who might also be an extension developer, as a formal specification with the implementation hidden behind this specification. In general, several components can implement the same specification, and one component can implement several different specifications. The formal specification of a component is, essentially, a contract binding the developer of the implementation and the clients, including extension developers. We assume that the specification language includes, apart from standard executable statements, assertions, assumptions, and nondeterministic specification statements, which abstractly yet precisely describe the intended behaviour. Assumptions [p] and assertions {p} of a state predicate p are the main constituents of a contract between the developer of the implementation and its clients. The assumptions state expectations of one party that must be met by the other party, whereas the assertions state promises of one party that the other party may rely on. Naturally, the assumptions of one party are the assertions of the other and vice versa. When a party fails to keep its promise (the asserted predicate does not hold in a state), this party aborts. When the assumptions of a party are not met (the assumed predicate does not hold in a state), it is released from the contract and the other party aborts.

Invariants binding the values of component attributes play an important role in maintaining consistency of component extensions. An implicit, or the strongest, invariant characterizes exactly all reachable states of the component,


whereas an explicit invariant restricts the values the component might have. The implicit invariant is established by the component constructor, preserved by all its methods, and can be calculated from the component specification. As suggested by its name, the explicit invariant, on the other hand, is stated explicitly in the component specification, being part of the contract the component promises to satisfy. The component developer is supposed to meet the contract by verifying that the constructor establishes the explicit invariant and all methods preserve it.

In most existing component frameworks the implicit invariant is not safe to assume, and clients relying on it may get invalidated. This is especially the case when one component implements several specifications with different interfaces. One client, using this component as the implementation of a certain specification, may take it to a state which is perceived as unreachable from the perspective of another client having a different specification of this component’s behaviour. Moreover, the implicit invariant is, in general, stronger than necessary, and preserving it in client extensions might be too restrictive. When one component implements several specifications, ensuring that it preserves the strongest invariants of all these specifications can be unimplementable.

We concentrate on the issue of extension consistency for component-based systems employing forwarding as the reuse mechanism, in the style of Microsoft COM. In this composition scenario, an extension aggregates an original component and forwards external method calls to this aggregated component. The original component is effectively represented by two components, the specification component and the implementation component; the extension developer sees the original component only through its specification, with the implementation being hidden. All participants of this composition have explicitly stated invariants. Our analysis indicates that, in order to guarantee consistency in presence of explicit invariants, the specification component, the implementation component, and the extension component must satisfy the following requirements. The component constructor must establish the explicit invariant of this component and the component methods must preserve this invariant. Establishing an invariant means asserting that it holds in the end, and preserving an invariant means asserting it in the end under the assumption that it holds in the beginning. Each component must establish its explicit invariant before invoking its methods via self.
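These requirements can be pictured with a minimal Java sketch (hypothetical classes, with runtime assertions standing in for formal verification; run with -ea): the extension aggregates the original component, forwards to it, and establishes and preserves an explicit invariant that is stronger than the original one.

    // Original component: explicit invariant 0 <= value <= 100.
    class Counter {
        private int value;
        private boolean inv() { return 0 <= value && value <= 100; }
        Counter() { value = 0; assert inv(); }                  // establish
        void inc() { if (value < 100) value++; assert inv(); }  // preserve
        int get() { return value; }
    }

    // Extension in the forwarding (aggregation) style: its explicit invariant
    // strengthens the original's (evenness, in addition to the 0..100 bound
    // maintained by the aggregated component).
    class EvenCounter {
        private final Counter inner = new Counter();  // aggregated original
        private boolean inv() { return inner.get() % 2 == 0; }
        EvenCounter() { assert inv(); }                          // constructor establishes
        void inc() { inner.inc(); inner.inc(); assert inv(); }   // forwards, preserves
        int get() { return inner.get(); }
    }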

When extension is achieved through forwarding, the extending component should match the contract of the original component and simultaneously satisfy this contract. To match the contract of the original component, the extension should behave at least as clients expect the original component to behave, by looking at its specification. To satisfy the contract of the original component, the extension should assert at least the conditions that are assumed in the specification component. These ideas bring us to formulating the following requirements. The explicit invariant of the extension must be stronger than or equal to the explicit invariant of the specification component.


The constructor of the extension must refine the constructor of the specification component and every method of the extension must refine the corresponding method of the specification component. Refinement means preservation of observable behaviour, while decreasing nondeterminism.

The implementation of a component has freedom to change the attributes of the specification completely, being hidden behind this specification. However, in presence of explicit invariants, the implementation attributes must be such that it is possible to formulate an invariant which is stronger than the specification invariant with respect to an abstraction relation coercing the concrete attributes to the abstract ones. Just as was the case with the specification and the extension, semantic conformance in the form of refinement must be established between the specification of a component and its implementation. The explicit invariant of the implementation must be stronger than or equal to the explicit invariant of the specification with respect to an abstraction relation. The constructor of the implementation must data refine the constructor of the specification and every method of the implementation must data refine the corresponding method of the specification. Data refinement means refinement with respect to an abstraction relation.

Our analysis of component extension in presence of explicit invariants indicates that ensuring consistency of component extensions is easier and, as a consequence, less error-prone with forwarding than with inheritance. Inheriting attributes of the original component opens a possibility for method inheritance, i.e. super-calling methods of the original component from the extension methods. Moreover, self-referential method invocations, also known as call-backs, become possible in this case. As such, a component and its extension become mutual clients, and are required to satisfy each other’s contracts when invoking methods via self and super. However, reestablishing invariants before all self- and super-calls still does not guarantee consistency, because an invariant of the extension can be broken in the original component before a self-call redirected to the extension, due to dynamic binding. Since the original component is, in general, unaware of extensions and their invariants, there is no possibility of reestablishing such invariants in the original component before self-calls. Obviously this constitutes a serious problem and we intend to study in a formal setting the restrictions that must be imposed on components and their extensions to avoid such and similar problems.

Component Composition with Sharing
Geoff Outhred and John Potter
Microsoft Research Institute, Macquarie University, Sydney 2109
gouthred, [email protected]

Currently the strongest support in industry for component-based frameworks appears with Microsoft’s component architecture COM [1]. However, with COM it is difficult to program for evolution or adaptation. It is common for class identifiers to be bound into component structures, naming services are limited, and there is no support for trading of services in distributed environments. CORBA [2] has basic support for these services in the form of a trader specification and type repositories, but there is little industry adoption of these services. By integrating such services into the programming model for a component architecture, we believe that we can provide a more practical and useful industrial-strength approach to the construction and deployment of component-based distributed applications.

This abstract outlines our approach to the construction of component-based applications. As with most component architectures, we promote reuse of existing components via aggregation. Applications are described within our component composition language Ernie. We support explicit creation and connection as found in existing languages such as Darwin [3], but the key aspects that differentiate Ernie are the use of constraints for component selection and support for sharing of component instances.

Successful integration of black-box components requires not only syntactic but also semantic compatibility. Since the source code is not available, further weight is placed on the ability to capture semantic behaviour in external descriptions, for which there are many approaches. Our design focuses not on a specific specification language, but on the provision of hooks to support integration of a mixture of approaches. We call these hooks constraints. Constraints allow component parts to be selected by naming behavioural properties, with no attempt to interpret their formal descriptions. Constraints determine not only component types, but may also determine the state and environmental parameters that describe suitable component instances. An example is the selection of an appropriate mailserver component based on supported mail protocols and the identity of the current user.

Within our composition model we support sharing of component instances between aggregations. Sharing allows separation of construction and use, and allows component instances to participate in more than one aggregation. Ernie includes constructs for determining the scope of sharing. Components may be private or shared; shared components are made available through named scopes. When an aggregate requires a particular component, depending on its description within Ernie, it may either bind to an existing component matching


the required constraints within accessible scopes, or cause a new component to be instantiated. This integration of binding and component instantiation allows Ernie to achieve a high degree of independence between the parts of an application. For example, an application fragment can bind to existing services within its execution environment or operate in a stand-alone manner, creating all required services.

The Ernie programming model is designed to integrate the functionality of a type repository and a trader service in order to seamlessly support binding of services, either provided by other parts of the application or by the execution environment. By providing this functionality we aim to blur the lines between the use of proprietary and third-party components. Instead of producing monolithic applications that cannot adapt to existing infrastructure or evolving environments, we allow the programmer to specify where and how an application may introduce and override existing functionality.

Currently, a prototype development environment is under construction. Modifications to the COM aggregation model were required to allow aggregation both after construction (to permit sharing and dynamic aggregation) and across process and apartment boundaries (to allow construction of distributed aggregations).

The key aim of our work, then, is to provide a component-based model for the construction and connection of components that will allow sharing of components within and between applications, and that can easily be extended to distributed environments with the support of compliant trader services.
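The bind-or-instantiate behaviour can be suggested with a small Java sketch (hypothetical API; Ernie’s actual syntax and model are not shown here): when an aggregate requires a component, accessible shared scopes are searched for an instance satisfying the constraints, and a new instance is created only if none matches.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.function.Supplier;

    class Scope {
        private final List<Object> shared = new ArrayList<>();

        void publish(Object component) { shared.add(component); }  // shared instance

        <T> T acquire(Class<T> type, Predicate<T> constraints, Supplier<T> factory) {
            for (Object c : shared)
                if (type.isInstance(c) && constraints.test(type.cast(c)))
                    return type.cast(c);  // bind to an existing shared component
            return factory.get();         // otherwise instantiate a private one
        }
    }

For example, with a hypothetical MailServer component, scope.acquire(MailServer.class, s -> s.supports("IMAP"), MailServer::new) would reuse a running mail server supporting the required protocol, or create a fresh one otherwise.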

References
1. The Component Object Model Specification, Microsoft, Seattle, WA, 1995.
2. Object Management Group. The Common Object Request Broker: Architecture and Specification, Revision 2, 1995.
3. Jeff Magee, Naranker Dulay, Susan Eisenbach and Jeff Kramer. Specifying Distributed Software Architectures. Proceedings of the Fifth European Software Engineering Conference, Sitges, Spain, September 1995.

Late Component Adaptation
Ralph Keller and Urs Hölzle
Department of Computer Science, University of California, Santa Barbara, CA 93106
{ralph,urs}@cs.ucsb.edu

Extended Abstract

Binary component adaptation (BCA) [KH98] is a mechanism to modify existing components (such as Java class files) to the specific needs of a programmer. Binary component adaptation allows components to be adapted and evolved in binary form. BCA rewrites component binaries while they are loaded, requires no source code access, and guarantees release-to-release compatibility. Rewriting class binaries is possible if classes contain enough symbolic information (as do Java class files). Component adaptation takes place after the component has been delivered to the programmer, and the internal structure of a component is directly modified in place to make changes. Rather than creating new classes such as wrapper classes, the definition of the original class is modified.

The general structure of a BCA system integrated into a Java Virtual Machine (JVM) is quite simple: The class loader reads the original binary representation of a class (class file) which was previously compiled from source code by a Java compiler. The class file contains enough high-level information about the underlying program to allow inspection and modification of its structure. The file includes code (bytecodes) and a symbol table (constant pool), as well as other ancillary information required to support key features such as safe execution (verification of untrusted binaries), dynamic loading, linking, and reflection. Unlike other object file formats, type information is present for the complete set of object types that are included in a class file. These properties include the name and signature of methods, the name and type of fields, and their corresponding access rights. All references to classes, interfaces, methods, and fields are symbolic and are resolved at load time or during execution of the program. Most of this symbolic information is present because it is required for the safe execution of programs. For example, the JVM requires type information on all methods so that it can verify that all potential callers of a method indeed pass arguments of the correct type. Similarly, method names are required to enable dynamic linking.

The loader parses the byte stream of the class file and constructs an internal data structure to represent the class. In a standard implementation of the JVM, this internal representation would be passed on to the verifier. With BCA, the loader hands the data structure to the modifier, which applies any necessary transformations to the class. The modifications are specified in a delta file that is read in by the modifier at start-up of the VM. (We call it a delta file since the file contains a list of differences, or deltas, between the standard class file and the desired application-specific variant.) The user defines the changes in the form of an adaptation specification, which is compiled to a binary format (the delta file) in order to process it more efficiently at load time.


After modification, the changed class representation is passed on to the verifier, which checks that the code does not violate any JVM rules and therefore can safely be executed. After successful verification, the class representation is then handed over to the execution part of the JVM (e.g., an interpreter and/or compiler). BCA does not require any changes to either the verifier or the core JVM implementation.
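This pipeline can be summarized in a short Java sketch (illustrative only; Delta and the file handling are hypothetical stand-ins, not the actual BCA implementation): the loader reads the original class file, applies the matching deltas, and defines the rewritten bytes as usual, so the standard verifier still checks the result.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    interface Delta {
        boolean matches(String className);
        byte[] applyTo(byte[] classFile);  // e.g. rename a method, add an interface
    }

    class AdaptingClassLoader extends ClassLoader {
        private final List<Delta> deltas;  // parsed from the delta file at VM start-up

        AdaptingClassLoader(List<Delta> deltas) { this.deltas = deltas; }

        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            try {
                byte[] bytes = Files.readAllBytes(
                        Paths.get(name.replace('.', '/') + ".class"));  // original form
                for (Delta d : deltas)
                    if (d.matches(name))
                        bytes = d.applyTo(bytes);             // modifier rewrites in place
                return defineClass(name, bytes, 0, bytes.length);  // verified by the JVM
            } catch (IOException e) {
                throw new ClassNotFoundException(name, e);
            }
        }
    }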

Conclusions

BCA differs from most other techniques in that it rewrites component binaries before (or while) they are loaded. Since component adaptation takes place after the component has been delivered to the programmer, BCA shifts many small but important decisions (e.g., method names or explicit subtype relationships) from component production time to component integration time, thus enabling programmers to adapt even third-party binary components to their needs. By directly rewriting binaries, BCA combines the flexibility of source-level changes without incurring their disadvantages:
• It allows adaptation of any component without requiring source code access.
• It provides release-to-release binary compatibility, guaranteeing that the modifications can successfully be applied to future releases of the base component.
• It can perform virtually all legal modifications (at least for Java), such as adding or renaming methods or fields, extending interfaces, and changing inheritance or subtyping hierarchies. Several of these changes (e.g., extending an existing interface) would be impossible or impractical without BCA since they would break binary compatibility.
• BCA handles open, distributed systems well because the programmer can specify adaptations for an open set of classes (e.g., all subclasses of a certain class) even though the exact number and identity of the classes in this set is not known until load time.
• Since binary adaptations do not require the re-typechecking of any code at adaptation time, BCA is efficient enough to be performed at load time.

References
[KH98] Ralph Keller and Urs Hölzle. Binary Component Adaptation. Proceedings of ECOOP ’98, Brussels, Belgium. Springer-Verlag, July 1998.

Adaptation of Connectors in Software Architectures
Ian Welch and Robert Stroud
University of Newcastle upon Tyne, Newcastle upon Tyne NE1 7RU, UK
I.S.Welch, [email protected]
WWW home page: http://www.cs.ncl.ac.uk/people

1 Introduction

We want to be able to adapt the behaviour of existing software components in order to add fault tolerance or enforcement of security properties. We believe that metaobject protocols [1] can be used to perform this transparent and reusable adaptation without recourse to source code. Unfortunately, there is currently no general formal model developed for metaobject protocols, which makes it difficult to reason about their use. However, we believe that recent work in software architectures, in particular the WRIGHT [2] architectural specification language, allows us to model metaobject protocols as parameterised connectors.

2 Metaobject Protocols

Stroud and Wu [3] describe metaobject protocols as interfaces to a system that give users the ability to modify the system’s behaviour and implementation incrementally. We have implemented metaobject protocols as wrappers in other work [4] in order to add behaviours dynamically or statically without needing access to the source code of the component being adapted.
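As a rough illustration of a metaobject implemented as a wrapper (a Java sketch with hypothetical interfaces, not the implementation from [4]), a non-functional behaviour such as encryption can be added around an existing component without touching its source:

    import java.util.function.UnaryOperator;

    interface Channel { void send(String data); }  // existing component interface

    // Metaobject as a wrapper: adds a non-functional behaviour transparently;
    // clients keep using the Channel interface.
    class EncryptingMetaobject implements Channel {
        private final Channel base;                   // wrapped black-box component
        private final UnaryOperator<String> encrypt;

        EncryptingMetaobject(Channel base, UnaryOperator<String> encrypt) {
            this.base = base;
            this.encrypt = encrypt;
        }

        public void send(String data) { base.send(encrypt.apply(data)); }
    }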

3 WRIGHT

WRIGHT is an architectural description language that allows a formal description of the abstract behaviour of architectural components and connectors [2]. Components are modelled in terms of their interface, as defined by ports, and behaviour modelled using CSP [5]. Connectors are modelled in terms of the roles played by the components at either end and the glue, or protocol, that governs the flow of messages across the connector, again modelled using CSP. Connectors are particularly interesting to us as we view metaobject protocols as a type of connector, although one that is orthogonal in behaviour to other connectors.

4 Metaobject Protocol as a Connector

Our view of a metaobject protocol is that it is actually a parameterised connector, where the original connector between two components in the system is the parameter. This would allow its protocol to be specified and verified independently of the connector it is applied to. A WRIGHT expression for a secure communication metaobject protocol, where two metaobjects cooperate in encrypting data flowing over a connector, is shown below:

    Connector Secure Communication MOP(C : Connector)
      Role A = Data
      Role B = Data
      Glue = Configuration Secure Communication
               Component EncryptDecrypt Metaobject A ...
               Component EncryptDecrypt Metaobject B ...
               Connector C
             End Secure Communication
    End Secure Communication MOP

5 Discussion

Being able to model metaobject protocols as parameterised connectors provides many benefits. It increases understanding of metaobject protocols and how they relate to system architectures. It provides a tool for composing systems that allows the reuse of non-functional behaviours. Finally, WRIGHT’s formal basis allows analysis and reasoning about the composed connectors. Formal analysis will allow modelling of metaobject protocols separately from the connectors that they are composed with. It can then be shown that after composition the adapted connector preserves both the non-functional and functional behaviours of the metaobject connector and the unadapted connector, respectively.

Our future aim is to experiment with implementing this approach using WRIGHT. We believe it is a promising approach to formalising metaobject protocols, and to designing systems that make use of metaobject protocols.

References
1. Kiczales, G., J. des Rivières, and D. G. Bobrow. The Art of the Metaobject Protocol. The MIT Press, 1991.
2. Allen, J. R. 1997. A Formal Approach to Software Architecture. PhD Thesis. School of Computer Science, Carnegie Mellon University.
3. Stroud, R. J. and Z. Wu 1996. Using MetaObject Protocols to Satisfy Non-Functional Requirements. Chapter 3 from "Advances in Object-Oriented Metalevel Architectures and Reflection", ed. Chris Zimmermann. Published by CRC Press.
4. Welch, I. S. and Stroud, R. J. 1998. Using MetaObject Protocols to Adapt Third-Party Components. Work in Progress paper to be presented at Middleware ’98.
5. C.A.R. Hoare 1985. Communicating Sequential Processes. Prentice Hall.

Connecting Incompatible Black-Box Components Using Customizable Adapters
Bülent Küçük, M. Nedim Alpdemir, and Richard N. Zobel
Department of Computer Science, University of Manchester, Oxford Road, Manchester M13 9PL, U.K.
kucukb, alpdemim, [email protected]

EXTENDED ABSTRACT

The highly promising idea of building complex software systems from ready-made components [5], [4] is challenged by the fact that in the field of software, where information exchange between components can take exceptionally complex forms, it is not feasible to produce binary off-the-shelf components to suit the requirements of every possible application in an optimum manner. Similarly, it is impossible to develop protocols to describe every type of component-set cooperation. Consequently, extra coding is usually necessary to compensate for the inevitable interface mismatches. A straightforward solution is to access an incompatible component through another component, called an adapter, which converts its interface into a form desirable by a particular client [2].

Traditionally a binary component is coupled with an interface definition file and a set of examples illustrating the usage of the component through the interface. The adaptation of the component is usually done in an ad hoc manner through copying and pasting of these examples. Research efforts devoted to adapter construction have adopted a formal approach [6], [3], or used a specific object model [1]. We suggest an approach which (1) does not require a specific environment or formalism, (2) enables adapter development with a considerably small amount of effort, (3) eliminates ad hoc reuse by introducing a structure, and (4) is powerful enough to facilitate a wide range of adaptation types.

The main objective of our approach, as illustrated in Fig. 1, is to let component developers contribute to adapter development as much as possible. Basically, component developers are asked to offer customizable (e.g. abstract or parameterized) classes that can be used directly in the implementation of specific adapters for their components, performing the complicated types of adaptation that are hard to achieve manually or automatically. Such a white-box (i.e. source-level customizable) adapter turns the rigid interface of a black-box component into an extremely flexible one, and it makes coding much easier for the component assembler.

Fig. 1. Adaptation by contribution from the component producers. (Diagram elements: Component A, A’s Client Model, Ideal A, Component B, Adapter.)

A challenge is that a component’s developers cannot know the exact specifications of the adapters required to connect it to other components. However, if the component is in a client position, the developers know what functionality it needs and can predict, to a certain extent, what aspects of this functionality


are suitable for implementation by an adapter; similarly, if the component is in a server position, the developers will know what it provides and what other functionality could be needed by its clients that may be conveniently performed by an adapter. The technique is open to eclectic contributions in the sense that straightforward forms of adaptation, such as parameter conversion and interface matching, can be performed by automatic adapter generation techniques and integrated into the adapter construction process.
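The shape of such a developer-supplied, customizable adapter can be sketched in Java as follows (hypothetical interfaces; the abstract itself is language-neutral): the component developer fixes the complicated adaptation logic in an abstract class and leaves hooks for the assembler to customize.

    interface MailClientPort { void submit(String message); }           // client side
    interface LegacyMailServer { int send(byte[] payload, int flags); } // server side

    // White-box adapter shipped by the component developer: the hard part of
    // the adaptation is implemented once, with customization hooks left open.
    abstract class MailServerAdapter implements MailClientPort {
        protected final LegacyMailServer server;

        protected MailServerAdapter(LegacyMailServer server) { this.server = server; }

        public void submit(String message) {
            int status = server.send(encode(message), defaultFlags());
            if (status != 0) onFailure(status);
        }

        protected abstract byte[] encode(String message);  // assembler customizes
        protected int defaultFlags() { return 0; }          // overridable default
        protected void onFailure(int status) {              // overridable policy
            throw new IllegalStateException("send failed: " + status);
        }
    }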

References
1. J. Bosch. Adapting object-oriented components. In W. Weck and J. Bosch, editors, Proc. 2nd International Workshop on Component-Oriented Programming, pages 13-21. Turku Centre for Computer Science, September 1997.
2. E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, 1995.
3. D. Konstantas. Interoperation of object-oriented applications. In O. Nierstrasz and D. Tsichritzis, editors, Object-Oriented Software Composition, chapter 3, pages 69-95. Prentice-Hall, 1995.
4. O. Nierstrasz and L. Dami. Component-oriented software technology. In O. Nierstrasz and D. Tsichritzis, editors, Object-Oriented Software Composition, chapter 1, pages 3-28. Prentice-Hall, 1995.
5. C. Szyperski. Component Software: Beyond Object-Oriented Programming. Addison-Wesley, 1998.
6. D.M. Yellin and R.E. Strom. Protocol specifications and component adaptors. ACM Transactions on Programming Languages and Systems, 19(2):292-333, 1997.

Dynamic Configuration of Distributed Software Components
Eila Niemelä and Juha Marjeta
VTT Electronics, P.O. Box 1100, FIN-90571 Oulu, Finland ([email protected])
Sonera Corporation, Laserkatu 6, FIN-53850 Lappeenranta, Finland ([email protected])

Extended Abstract. Dynamic configuration, implying the ability to add, delete, and replace software components, needs to be used in conjunction with configuration data and adaptable interfaces to develop flexible distributed systems. Software adaptability and reusability often generate contradictory requirements, which have to be balanced against those that emerge from the application domain and product features.

Modern real-time systems evolve during their life cycle due to the introduction of new customer features, hardware extensions, etc. Software adaptability is a means for expanding and scaling systems during their evolution. There are also more and more cases in which systems cannot be shut down while performing the required modifications. A software architecture designed according to the requirements of the application domain is a good basis for a configurable system, but the configuration support also has to be adjusted according to various customer requirements and different kinds of commercial and in-house implementation technologies. Maintainability, reuse, and flexibility have to be determined when allocating requirements to the available implementation technologies in the software design phase. The evolution of a system should be considered in advance by solving the following issues:

1. The goal is to provide a flexible and scalable software architecture and software components that are suited to this and large enough for efficient reuse. Product features are allocated into architectural components which can be added to the system afterwards.
2. A generic interface technology is needed to create loosely connected software modules, and software adapters have to be used to adapt, for example, COTS and off-the-shelf (OTS) components to object-oriented software components. Component connections have to fulfil the timing requirements of the application domain and support different execution environments.
3. The component-based software platform should provide built-in knowledge and mechanisms for run-time configuration when software updates are made without execution breaks.

Modern distributed systems are based on a three-tier topology and a number of embedded real-time controllers and workstations. The top level consists of transaction management with graphical user interfaces and a database. The


next levels down use embedded controllers to respond to soft or hard real-time requirements. An embedded controller can have the two real-time requirements, hard and soft, and a mixed communication policy at the same real-time level.

In flexible distributed systems the viewpoints of software developers, system integrators, and end-users have to be considered simultaneously with the evolutionary aspects of the product. The PAC (Presentation, Abstraction, Control) design pattern was employed as a basis for decomposing software into distributed architectural components, as a layered architecture facilitates the definition of application domain-oriented software, product features, and adaptive software packages which can be developed, changed, managed, and used in different ways. Domain-specific features are designed and implemented by object-oriented fine-grain components which, at the top level, offer the core services of the system, e.g. implementation-independent connections, primary elements of graphical user interfaces, and control and scheduling algorithms. The bottom level of the PAC architecture consists of customisable software container components which use primary components of the core services. Middle-level PAC agents are used as architectural components, with interfaces designed in accordance with the feature-based domain analysis. Product analysis defines interface types, i.e. how the components are connected, and adaptability requirements define the connections used by the configuration manager.

The use of middle-level PAC agents as architectural components with run-time configuration ability presupposes that components are designed with loose connections. By allocating the user interface to PAC components the whole architectural component can be configured dynamically at the same time. We used a flat user interface which acts as a presentation component for a PAC agent. The user interface is optional, as not all components need a presentation part, e.g. a co-operator between a control component and a data management component.

A configurable module interface (COMI) is a set of object classes which behaves like a two-level adapter, consisting of a generic application layer and an implementation-specific layer that adapts the selected message transfer layer (MTL) to the COMI interface. The interface can also be used as a wrapper for COTS and OTS components if the components are otherwise suitable for use as medium-grained PAC agents. Parts of the COMI interface, the MTL, and the connection manager create communication services, which support distribution and transparent interconnections. Distributed PAC (DPAC) agents are provided by means of a separate connection component between the control and presentation components. The user interface consists of a UI component and a connection component which joins the presentation and the control components of a DPAC agent. A DPAC agent is executed as two concurrent tasks, which could be allocated to any node in the distributed system.
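The two-level structure of a COMI can be sketched roughly in Java (hypothetical names; the actual COMI classes are not given here): a generic application-facing layer stays independent of the transport, while an implementation-specific layer adapts a selected MTL.

    interface MessageTransferLayer { void transfer(byte[] frame); }  // selected MTL

    // Generic application layer: what components program against.
    abstract class ComiPort {
        public final void send(String message) { lower(encode(message)); }
        protected byte[] encode(String message) { return message.getBytes(); }
        protected abstract void lower(byte[] frame);  // implementation-specific part
    }

    // Implementation-specific layer: binds the generic port to one MTL, so the
    // transport can be exchanged without touching application components.
    class SocketComiPort extends ComiPort {
        private final MessageTransferLayer mtl;
        SocketComiPort(MessageTransferLayer mtl) { this.mtl = mtl; }
        protected void lower(byte[] frame) { mtl.transfer(frame); }
    }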

Components for Non-Functional Requirements
Bert Robben*, Wouter Joosen, Frank Matthijs, Bart Vanhaute, Pierre Verbaeten
K.U.Leuven, Dept. of Computer Science, Celestijnenlaan 200A, B-3001 Leuven, Belgium
[email protected]
* Research assistant of the Fund for Scientific Research - Vlaanderen (F.W.O.)

Abstract. Building distributed applications is very hard as we not only have to take care of the application semantics, but of non-functional requirements such as distributed execution, security, and reliability as well. A component-oriented approach can be a powerful technique to master this complexity, and to manage the development of such applications. In such an approach, each non-functional requirement is realised by a single component. In this extended abstract we describe how the metalevel architecture of Correlate can be used to support such an approach.

1 Software Architecture

Building distributed applications is very hard as we not only have to take care of the application semantics, but of non-functional requirements such as distributed execution, security, and reliability as well. These aspects are often handled by very complex subsystems that need to be developed by domain experts. A promising approach to master this complexity is to apply component-oriented techniques and describe each non-functional requirement as a single component. We might have, for instance, a component that ensures secure communication, one for reliability, and another for physical distribution. A solid software architecture is required that defines the standard interface for these components. This standard interface enables the construction of components by different organizations. The task of the application programmer becomes much easier and is reduced to describing the application’s semantics inside the software architecture. Non-functional requirements can be easily realized by just plugging in the appropriate components.

2 The Correlate Metalevel Architecture

This approach can be supported by Correlate [1]. Correlate is a concurrent object-oriented language with a metalevel architecture [2]. In Correlate, the metalevel is a higher sphere of control that controls object interaction, object creation, and object destruction. An application programmer can define a new metalevel and thus control the way messages are sent and objects are instantiated. In addition,


the metalevel has access to the state of the baselevel objects. The metaobject protocol (MOP) of Correlate defines this interface between the baselevel and its metalevel. The MOP implements a reification protocol that transforms (reifies) each baselevel interaction into an object of a specific class defined in the MOP. An important consequence is that a metalevel becomes application independent: it does not matter what the actual classes of the baselevel objects are, the metalevel only depends on the fixed interface defined by the MOP.

In addition, the MOP of Correlate is strictly implicit. This means that a baselevel can never directly interact with its metalevel. This kind of interaction is completely transparent and realized by the execution environment. As a result, a baselevel never depends on its metalevel. Consequently, a subsystem at the metalevel can be developed as a component that can be deployed independently. In this view, the language together with the MOP can be seen as the contractually specified interface that enables independent development of application and non-functional components.

We have built a working prototype for Correlate that supports this MOP and have developed a set of non-functional components as a proof of concept. More concretely, we have implemented components for reliability (based on checkpointing and replication algorithms), security (simple message encryption), and distributed execution. These simple examples show the genericity of our MOP and validate our approach.
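The flavour of such a reification protocol can be suggested with a small Java sketch (hypothetical classes; Correlate’s actual MOP is implicit and richer than this): each baselevel invocation is reified as an object and handed to a metalevel component that depends only on this fixed interface, not on the baselevel classes.

    import java.lang.reflect.Method;

    // Reified baselevel interaction: receiver, method and arguments as an object.
    class Invocation {
        final Object receiver; final Method method; final Object[] args;
        Invocation(Object receiver, Method method, Object[] args) {
            this.receiver = receiver; this.method = method; this.args = args;
        }
        Object proceed() throws Exception { return method.invoke(receiver, args); }
    }

    interface MetaLevel { Object handle(Invocation invocation) throws Exception; }

    // A non-functional component at the metalevel: application independent,
    // since it only sees the fixed Invocation interface.
    class CheckpointingMeta implements MetaLevel {
        public Object handle(Invocation invocation) throws Exception {
            saveCheckpoint(invocation.receiver);  // record state before delivery
            return invocation.proceed();          // then deliver the message
        }
        private void saveCheckpoint(Object o) { /* serialize state; elided */ }
    }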

3 Configuration aspects

In practice, however, non-functional components are more complex and require tuning to get optimal results. In this case, a component should implement the mechanism but offer the application programmer the opportunity to tune the policy. For instance, a distribution component might implement location-transparent object invocation and a remote creation mechanism, but might at the same time allow the application programmer to specify where new objects are to be allocated. In our prototype, we are experimenting with a set of mechanisms that supports such tuning. A drawback of this approach is that configuration is no longer simple plug-and-play but requires some knowledge, at least about the semantics of the components. An evaluation of the extent of this drawback and further experiments with respect to the expressiveness of our mechanisms are subject to future work.

References
1. Bert Robben, Wouter Joosen, Frank Matthijs, Bart Vanhaute and Pierre Verbaeten. "Building a Metalevel Architecture for Distributed Applications". Technical Report CW265, Dept. of Computer Science, K.U.Leuven, Belgium, May 1998.
2. Shigeru Chiba and Takashi Masuda. "Designing an Extensible Distributed Language with a Metalevel Architecture". In Proceedings ECOOP ’93, pages 483-502, Kaiserslautern, July 1993. Springer-Verlag.

The Operational Aspects of Component Architecture
Mark Lycett and Ray J. Paul
Department of Information Systems and Computing, Brunel University, Uxbridge, Middlesex UB8 3PH, United Kingdom
[email protected], www.brunel.ac.uk/research/clist

Abstract. This position paper adopts the line that operational policies embodied in component frameworks and architectures should be treated as composable entities. Further, it is proposed that means must be established to deal with the duplication or conflict of policies where several frameworks participate in an architecture.

1 Background

Component-based development may be argued to represent a logical evolution of object-oriented development that offers the promise of increased evolutionary flexibility and further promotes notions of design reuse in addition to code reuse. The former is witnessed in the concept of independent extensibility combined with composition. The latter is witnessed in the inherent reuse of components, alongside a growing interest in architectural concepts such as component frameworks. These provide a basis for both component independence and interoperation via a set of ‘policy decisions’ that curb variation by limiting the degrees of freedom given to mechanisms of communication, co-operation and co-ordination. The prominent means of static composition is based on the binding of interfaces; a component makes explicit the interfaces that it both provides and requires, and it is the job of ‘glue’ to resolve the provided/required relationships by indicating, for each facility, where the corresponding definition can be found [1]. Dynamic composition relates to different forms of run-time behaviour. These may include deciding where requests should be placed, co-ordinating concurrent (or simultaneous) access to shared resources, establishing valid execution orders for requests, maintaining consistency of persistent state, and gathering and integrating results from various resources. Essentially, these relate to the operational requirements of a system that need to be addressed in addition to the functionality provided by components.

2 Position

Certain classes of operational requirements, such as performance or fault containment, may be inherent in the design of a component. Others, such as those listed above, arise out of the interaction of components and need to be addressed at a different level. Frameworks provide one such level. These group related sets of components and provide a set of ‘operational policy decisions’ to which components must adhere if they wish to communicate, co-operate and co-ordinate with each other. This ‘fixed’ approach


is increasingly under fire from architectural quarters, who note that it is appropriate that the mechanisms that ‘glue’ components together should be as open and composable as the components themselves [1]. The proposal made in such quarters is that the connectors that mediate interaction should be promoted to the same order as components themselves (see, for example [2]). A connector may be thought of as a protocol specification that defines properties that include the policies regarding the types of interfaces that it can mediate for, alongside certain assurances regarding the operational aspect of interaction [1]. The dual role of a connector is that of specifying and enforcing policies over a collection of components, both covering the individual protocols that govern interaction and higher-level policies that control how components are deployed in an architectural setting [2]. Making connectors explicit has some value from the perspective of both markets and system evolution. Clear separation of operational requirements gives a component more context independence, possibly allowing applicability (and thus reuse) across a wider range of contexts. It also allows for connectors to be marketed as components. Clear separation of connectors, while allowing for high-level mediation of changing collections of components, also means that the relations between components may be treated in a dynamic fashion [1]. Interface-based and connector-based approaches are not mutually exclusive; component interfaces establish the means of communication, connectors allow specific manipulation of aspects of co-operation and co-ordination. The other issue that is raised relates to the general means by which components and connectors may ‘go about their business’ in heterogeneous environments. For example, connectors may assume responsibility for policy enforcement, but if policies exist both at the framework and architectural level there may be duplication or conflict of interest. Some means by which such problems may be resolved are as follows [3]: – Negotiation. Where a connector controls a transaction between other components, requesting a service, obtaining agreement that the service will be performed, delivering the result and obtaining agreement that the result conforms to the request. – Mediation. Where a connector provides a translation service as a means of reconciling or arbitrating differences between components. – Trading. Where a connector links components requesting a service with components providing that service in a given domain. – Federation. Where a connector provides a means of negotiation that binds a collection of components or frameworks that wish to come together in a federation whilst retaining their autonomy.
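As a rough sketch of a connector treated as a first-class, composable entity (Java, hypothetical interfaces; none of the cited work prescribes this shape), a connector can both mediate between mismatched interfaces and enforce an interaction policy:

    interface Requester { String request(String query); }          // required interface
    interface Provider  { String serve(String normalizedQuery); }  // provided interface

    // First-class connector: mediates (translates requests) and enforces a
    // simple policy (requests are only valid within an open session).
    class MediatingConnector implements Requester {
        private final Provider provider;
        private boolean sessionOpen;

        MediatingConnector(Provider provider) { this.provider = provider; }

        void openSession() { sessionOpen = true; }
        void closeSession() { sessionOpen = false; }

        public String request(String query) {
            if (!sessionOpen)
                throw new IllegalStateException("policy violation: no open session");
            return provider.serve(query.trim().toLowerCase());  // mediation
        }
    }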

References 1. M. Shaw and D. Garlan. Software Architectures: Perspectives on an Emerging Discipline. Prentice-Hall, Englewood Cliffs, NJ, 1996. 2. G. A. Agha. Compositional development from reusable components requires connectors for managing protocols and resources. In OMG-DARPA Workshop on Compositional Software Architectures, Monterey, California, 1998. http://www.objs.com/workshops/ws9801/cfp.htm. 3. J. R. Putman. Interoperability and transparency perspectives for component based software: Position paper. In OMG-DARPA Workshop on Compositional Software Architectures, Monterey, California, 1998. http://www.objs.com/workshops/ws9801/cfp.htm.

Architectures for Interoperation between Component Frameworks (Extended Abstract)
Gunter Graw and Arnulf Mester
Universität Dortmund, CS Dept., D-44221 Dortmund, Germany, [email protected]
Dr. Materna GmbH, Open Enterprise Systems Division, D-44131 Dortmund

Abstract. We contrast the architectural alternatives currently known for the interoperation between component frameworks: the two-tier approach and a trader federation approach. Furthermore, we sketch our example of a system for technical IT management built from several component frameworks from different management domains.

1 Introduction and Problem Statement

Components offer the possibility of increased reuse. Component frameworks¹ provide reference architectures for dedicated domains. Taking the step beyond isolated solutions, cooperation and interoperation of component frameworks is possible and is strongly encouraged to allow multi-domain software architectures. Although different solutions to the composition mechanisms of software components exist, the understanding of the interoperation of component frameworks is still immature: neither are developers able to choose from a range of different architectures for this purpose, nor do they possess detailed information on the applicability of each architecture. We look at the architectural alternatives currently known for the interoperation between component frameworks: the two-tier and the trader federation approaches.

Our motivation for searching for alternative architectures stems from the interoperation requirements of a project from technical IT management: the trend in network and system management is determined by a shift towards a component-oriented construction of management applications. Management of a complex IT-infrastructure needs different component frameworks, each suited for a specific class of management tasks or managed elements. Typical current problems are delivering a contracted service level by coordinated element management. Each element class, as well as the service management domain, will have its own component framework. As IT-infrastructures evolve during operation, and management decisions also have to be based on current operational states, a trader-based approach is of help in mediating cooperations between different component frameworks. Additionally, this scenario also depicts independent evolvements of the components and their frameworks, as different vendors are participating. Under the premises of established conventions for service description encodings, a trader-based approach offers the high autonomy demanded in this scenario.

¹ "A component framework is a software entity that supports components conforming to certain standards and allows instances of these components to be plugged into the component framework." [3]

2 Two-tier and trader-federation architectures

Szyperski and Vernik [2] recognized that component frameworks can be organized in multiple layers, or better, tiers. They state that two tiers will suffice in most cases. The basic-level tier contains the component frameworks for the different domains, which are connected to the framework in the higher-level tier, which serves for interoperation and integration purposes. Thus the framework of the higher tier has to provide slots into which basic-tier components are plugged. The operation of plugging is performed by making connections between component instances of a basic-tier framework and the higher-tier framework. The plugging of component instances of different frameworks is performed at system design time. The interaction of higher- and basic-tier components is realized in a fixed or hardwired way.

We proposed [1] a different approach based on the idea of federation architectures (e.g. federated databases, and trading and brokering in distributed systems). In contrast to hierarchically organized systems, in which one component manages interoperation or interactions of basic-level components, in federative structures of autonomous components each partner is equipped with higher autonomy (which may be crucial for guaranteeing local framework properties and thus global system properties, too). In particular, these autonomies cover participation, evolution, lifecycle control, and execution. We refine the component framework definition from above for trader federations as follows: a federated component framework is a component architecture where components are dynamically plug- and unpluggable, where intra- and inter-framework usage relations can also be established by searching for static component attributes and/or current values of dynamic component attributes, where usages of single operations and usage sequences (as seen in a UML interaction diagram) can be constructed dynamically, and where different kinds of autonomies are granted. An essential part of such an architecture is one trader in each component framework. The traders constitute a federation whose interoperation behaviour is ruled by federation contracts. For a detailed comparison the reader is directed to [1].
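The core mechanism, a trader per framework whose unmatched lookups are forwarded to federated peers, can be sketched in Java (hypothetical API; federation contracts and autonomy rules are elided):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Predicate;

    class ServiceOffer {
        final String type;
        final Map<String, String> attributes;  // static and dynamic attributes
        final Object provider;
        ServiceOffer(String type, Map<String, String> attributes, Object provider) {
            this.type = type; this.attributes = attributes; this.provider = provider;
        }
    }

    class Trader {
        private final List<ServiceOffer> offers = new ArrayList<>();
        private final List<Trader> federation = new ArrayList<>();

        void export(ServiceOffer offer) { offers.add(offer); }
        void federateWith(Trader peer) { federation.add(peer); }  // per contract

        Optional<ServiceOffer> lookup(String type, Predicate<Map<String, String>> constraint) {
            Optional<ServiceOffer> local = lookupLocal(type, constraint);
            if (local.isPresent()) return local;
            for (Trader peer : federation) {          // forward to federated traders
                Optional<ServiceOffer> remote = peer.lookupLocal(type, constraint);
                if (remote.isPresent()) return remote;
            }
            return Optional.empty();
        }

        private Optional<ServiceOffer> lookupLocal(String type, Predicate<Map<String, String>> c) {
            for (ServiceOffer o : offers)
                if (o.type.equals(type) && c.test(o.attributes)) return Optional.of(o);
            return Optional.empty();
        }
    }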

References
1. Graw, G., Mester, A.: Federated Component Frameworks (extended version). In: Proc. of the 3rd Int. Workshop on Component-Oriented Programming (WCOP ’98), TUCS report, 1998.
2. Szyperski, C. and Vernik, R.: Establishing System-Wide Properties of Component-Based Systems - A Case for Tiered Component Frameworks. In: Workshop on Compositional Software Architectures, January 1998.
3. Szyperski, C.: Component Software, Addison-Wesley, 1997.

A Model for Gluing Together
P.S.C. Alencar, D.D. Cowan, C.J.P. Lucena, and L.C.M. Nova
University of Waterloo, Department of Computer Science, Waterloo, Ont., Canada N2L 3G1
alencar, dcowan, lucena, [email protected]

1 Introduction

One of the goals of component-based software engineering (CBSE) development is the construction of software from large-grained reusable components rather than “re-writing an entire system from scratch” for each new application. However, accomplishing effective reuse within the component-based approach depends not only on the selection of appropriate reusable components, but also on the ways these components are adapted and combined [4]. The element of programming concerned with “putting things together” is generally called “glue” [3]. The role of the glue is essential, as there is often a need to connect components that have not been designed to be composed. Therefore, glue deals with adaptation and combination issues, including dealing with any “mismatches” between components, such as different interfaces or interaction styles.

We propose a model for gluing object-oriented software components together within our viewpoint-based development approach for CBSE. This glue model is presented as a design relationship between components that is characterized by a set of semantic properties. In this way, the glue relationship can be isolated and defined separately from the components, and different ways of adapting/combining components can be defined by choosing a specific subset of the glue properties.

1.1 A Viewpoint-Based Development Approach for CBSE

Our general objective is the creation of a complete development approach for component-oriented systems based on viewpoints [1]. In this respect, we plan on reusing pre-built components to create black-box frameworks in a more methodical fashion. We propose an approach for the specification, design, and implementation of component-based software consisting of five steps:
1. Determine the perspectives or viewpoints of an application (viewpoint analysis).
2. Determine the kernel or framework of an application using unification of viewpoints.
3. Glue the components and frameworks together to create a complete application - the glue semantics is characterized through the views-a relationship. Pre-built components are connected to frameworks by using this glue model.


4. Map the glue into design patterns - the views-a semantics is used to guide the selection of appropriate design patterns. Pattern-based implementations of the applications are produced.
5. Transform (implement) the resulting object-oriented design into “real” components.

2 Modeling the Glue

Each different type of object-oriented relationship connects objects in a specific way. The semantic properties of these relationships define static and dynamic constraints that characterize the interaction between two components. These constraints determine how an action triggered in a component affects the subsequent related component. The semantic properties of the views-a relationship support a methodical separation of components representing different concerns in a software specification. As a result, the views-a relationship is used to glue viewpoints and black-box components. In this relationship, one of the viewpoint objects “views” the state of another object in the reused component. This second object is completely independent and unaware of the existence of the viewing object. The views-a relationship also guarantees consistency between the viewer and viewed objects. A formal definition of the constraints and properties of the views-a relationship is provided in [2].
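A rough intuition for the views-a relationship can be given in Java (hypothetical classes; the formal definition in [2] is richer): a viewpoint object tracks the state of a viewed object that carries no reference to, or knowledge of, any concrete viewer, only a generic notification hook.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.IntConsumer;

    // Reused black-box component: independent of concrete viewers.
    class Account {
        private int balance;
        private final List<IntConsumer> listeners = new ArrayList<>();
        void onChange(IntConsumer listener) { listeners.add(listener); }
        void deposit(int amount) {
            balance += amount;
            listeners.forEach(l -> l.accept(balance));  // keep viewers consistent
        }
    }

    // Viewpoint object: "views" the account's state for one concern.
    class BalanceView {
        private int lastSeen;
        BalanceView(Account viewed) { viewed.onChange(b -> lastSeen = b); }
        int balance() { return lastSeen; }
    }

At the implementation level this corresponds to step 4 above: the views-a glue is realized by a design pattern (here, a variant of Observer) satisfying its semantic properties.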

3 Conclusion

The viewpoint-based approach for CBSE provides a practical background for the characterization of the semantics for gluing object-oriented components together. The glue model is presented as a design relationship between components both at the design and implementation levels. At the design level the composition is represented by the views-a relationship. At the implementation level the composition is represented by design patterns satisfying the views-a semantic properties.

References
1. P.S.C. Alencar, D.D. Cowan, and C.J.P. Lucena. A logical theory of interfaces and objects. Revised for IEEE Transactions on Software Engineering, 1998.
2. P.S.C. Alencar, D.D. Cowan, and L.C.M. Nova. A Formal Theory for the Views Relationship. In Proceedings of the 3rd Northern Formal Methods Workshop, Ilkley, UK, September 1998.
3. G.T. Leavens, O. Nierstrasz, and M. Sitaraman. 1997 workshop on foundations of component-based systems. ACM SIGSOFT Software Engineering Notes, pages 38-41, January 1998.
4. Mary Shaw. Architectural Issues in Software Reuse: It’s Not Just the Functionality, It’s the Packaging. In Proceedings of Symposium on Software Reusability, Seattle, USA, April 1995.

Component Testing: An Extended Abstract

Mark Grossman
Microsoft Corporation, One Microsoft Way, Redmond, WA 98052, USA
[email protected]

The software industry has made and continues to make attempts to address the problems of large-scale software development. Recent slips of major products indicate that the problems with today’s development approaches are getting worse. We plan on eliminating these problems by creating software with components that are well factored and finished. Since the cost of revisiting a component after shipment is high, we plan on pushing the quality of a component to a high level. A summary of our testing techniques follows.

Since we are trying to create useful and reusable components, we don’t know all the environments the components will be in or all the ways in which they will be used. This reduces the utility of the typical scenario or user-script testing common in most companies. It encourages us to test interfaces and object behavior exhaustively, and to examine the state transitions of the object as well as the user-observable behavior. Our testing approach includes several techniques:

1. Interface-specific tests that can be re-used to test each implementation of a given interface.
2. Progressive failure generation for system APIs and interface methods called by the object being tested.
3. Automatic permutation of interesting parameter values for an interface so that all parameter combinations are exercised.
4. Verification of abstract and concrete object state behavior.
5. Tests designed and verified to exhibit 100% code coverage.
6. Class-specific tests to verify behavior specific to a given class (such as internal state or performance characteristics).

Our test harness is designed to enable each of these techniques to be applied to any given component with a minimum of investment in new test code. The test harness is a process, testcomp.exe, which parses the command line and instantiates the test engine. The test engine implements ITestClass and runs the test suite. For class-specific testing any test engine can be substituted, but normally the generic class test engine will be used. The generic test engine is an object that runs the normal suite of tests against any target test object. The generic class test engine uses registry information to query the test object and determine the interfaces to be tested. The test engine will instantiate, as necessary, interface-specific test objects and state-mapping objects and connect them up with the test subject. Then the test engine will call the various interface tests, including parameter permutation, with state checking and progressive failure support.

Interface-specific tests are implemented by interface test objects that know how to exercise a given interface. The test engine invokes these tests via the interface test object’s IBlackBox interface methods. The test objects may also provide an ITestInfo


interface to support parameter permutation testing. Class-specific interface test objects can be provided to implement interface tests in a class-specific manner.

The test engine may also use special interface test objects that implement IWhiteBox and IOpaqueState to obtain an abstract view of the test object’s internal concrete state during testing. The test engine obtains these interface-specific objects for a particular test object via the test object’s state-mapping factory object. There are several acceptable levels of state support; the developer must determine which level is appropriate. At a minimum, the canonical state number from the state transition diagram for each supported interface must be exposed via the IWhiteBox interface. Support for additional abstract information for each interface can be provided via additional properties on IWhiteBox. These properties should be specified in the interface contract. For example, IStream might expose its state transition diagram’s state number as property zero; in this case the state number would always be zero because IStream is a stateless interface. But in addition, IWhiteBox might expose as property one the stream’s seek pointer, and as property two the size of the stream.

In addition, changes in the opaque internal state of a test object that are not reflected in the state of an interface may be exposed through an interface testing object that implements the IOpaqueState interface. This interface gives the test engine a generic mechanism for detecting changes in the test object’s internal state. In other words, the test engine can use IOpaqueState methods to detect whether a particular interface test has caused the appropriate change in the state of the test object, without the test engine having to know any internal details of the specific test object. Thus far we have been able to reach 100% code coverage of our completed components, and we are working towards 100% arc/path coverage.
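As a rough illustration of this division of labor, consider the following Java sketch (ours; the interface and method shapes are invented stand-ins for the COM interfaces named above, not Microsoft's actual definitions):

```java
import java.util.List;

// Illustrative Java stand-ins for the COM-style test interfaces above.
interface IBlackBox {
    void runTests(Object subject);    // exercise one interface of the test object
}

interface IWhiteBox {
    int property(int index);          // property 0: canonical state number
}

final class GenericTestEngine {
    private final List<IBlackBox> interfaceTests;
    private final IWhiteBox stateView;

    GenericTestEngine(List<IBlackBox> interfaceTests, IWhiteBox stateView) {
        this.interfaceTests = interfaceTests;
        this.stateView = stateView;
    }

    // Run every interface-specific test, sampling the canonical state
    // number before and after so state transitions can be verified.
    void run(Object subject) {
        for (IBlackBox test : interfaceTests) {
            int before = stateView.property(0);
            test.runTests(subject);
            int after = stateView.property(0);
            System.out.println("state " + before + " -> " + after);
        }
    }
}
```

The point of the split is that the engine stays generic: only the interface test objects and the state-mapping objects know anything about the particular component under test.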

Applying a Domain Specific Language Approach to Component Oriented Programming

James Ingham and Malcolm Munro
Centre for Software Maintenance, University of Durham, United Kingdom
[email protected]

Abstract. A number of methods have been suggested to deal with component specification (e.g. Buichi and Sekerinski [1]), re-use (e.g. Lalanda [2]) and fault-management (e.g. Baggiolini and Harms [3]). At Durham we propose the use of a Domain Oriented method in order to specify the semantic and syntactic properties of components, to provide a framework in which to re-use and re-configure the components, and to provide additional optimisation and fault-tolerant behaviour. We are currently developing a prototype Domain Specific Language (DSL) which describes a “model” domain of cost-accounting. In order to implement this prototype we are using Java, CORBA, JESS (the Java Expert System Shell) [4] and a distributed component model. Different categories of component types (e.g. persistent components) are being identified and guidelines for their use documented. By developing many little languages, as per Deursen and Klint [5], it is claimed that the maintenance effort will be reduced. After the implementation and evaluation of this “toy domain”, we propose to apply these techniques to an industrial software system by working closely with a large telecommunications company. This paper identifies a number of issues which the authors feel are important for Component Oriented Programming to succeed. Then we define DSLs and outline how and why we are using them, first in general terms and then in terms of the issues outlined earlier. In order to promote component re-use we are advocating automating some methods of error detection, which will be encoded into the DSL. This will enable a current configuration of components to detect certain error conditions and, with the help of extra domain knowledge and the underlying system architecture, attempt to remedy the situation. This is followed by a brief overview of the supporting architecture which has been developed to allow the mapping of DSL constructs to component code and to automatically insert test code where applicable. This architecture is currently being implemented in Java and CORBA at the University of Durham. We have also included an outline of the “toy domain” DSL language. Although this architecture addresses many important aspects of re-use, it is acknowledged that it is still based on the assumption of “as is” re-use or human intervention at times of component development. However, it is argued that for this approach these are not unreasonable assumptions.
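The abstract outlines rather than reproduces the DSL itself, so the following hypothetical Java fragment (the statement syntax, names and checking rule are all our inventions) only sketches the general shape of the approach: a small domain statement is mapped onto component invocations, with test code inserted automatically so that error conditions can be detected at the DSL level:

```java
// Hypothetical fragment: one cost-accounting DSL statement, e.g.
// "post 120.50 to ACCOUNT-42", mapped onto component invocations.
interface LedgerComponent {           // stand-in for a distributed component
    void post(String account, double amount);
    double balance(String account);
}

final class DslInterpreter {
    private final LedgerComponent ledger;

    DslInterpreter(LedgerComponent ledger) {
        this.ledger = ledger;
    }

    void execute(String statement) {
        String[] t = statement.trim().split("\\s+");
        if (t.length == 4 && t[0].equals("post") && t[2].equals("to")) {
            double amount = Double.parseDouble(t[1]);
            double before = ledger.balance(t[3]);
            ledger.post(t[3], amount);
            // Automatically inserted test code: detect an error condition
            // that domain knowledge could then attempt to remedy.
            if (Math.abs(ledger.balance(t[3]) - (before + amount)) > 1e-9) {
                throw new IllegalStateException("posting violated ledger consistency");
            }
        } else {
            throw new IllegalArgumentException("unknown DSL statement: " + statement);
        }
    }
}
```

Because the check is generated alongside the mapping, every configuration of components executed through the DSL carries its own error detection, which is the property the paper argues reduces maintenance effort.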


References

1. M. Buichi and E. Sekerinski. Formal methods for component software: The refinement calculus perspective. Presented at WCOP '97, Finland, 1997.
2. P. Lalanda. A control model for the dynamic selection and configuration of software components. Presented at WCOP '97, Finland, 1997.
3. V. Baggiolini and J. Harms. Toward automatic, run-time fault management for component-based applications. Presented at WCOP '97, Finland, 1997.
4. E. Friedman-Hill. Java Expert System Shell (JESS), vol. 4.0Beta. http://herzberg.ca.sandia.gov/jess. Sandia National Laboratories, 1997.
5. A. v. Deursen and P. Klint. Little Languages: Little Maintenance? CWI, Amsterdam, December 16, 1996.

The Impact of Large-Scale Component and Framework Application Development on Business

David Helton
Department of Information Systems and Quantitative Sciences, Texas Tech University, Lubbock, Texas, USA
[email protected]

Abstract. Recent advances in computer hardware have generated corresponding demands for complex software that is increasingly difficult to build, maintain, and modify. This problem has become so serious that industry analysts have called it a “software crisis.” Building applications from reusable components, component-based development (CBD), emerged as a possible solution to the software crisis. Most CBD development, to date, has focused upon user-interface type components that are merely attachments to larger applications. However, the component-based approach is essentially one that focuses upon the development of large software systems. Large-scale CBD generally requires components of large size. In this paper, we address the effect of large-scale component and framework development upon business. Theoretically, composing collections of components into frameworks should be similar to assembly of a few atomic components. However, scaling up increases complexity significantly.

Recent advances in computer hardware have generated corresponding demands for complex software that is increasingly difficult to build, maintain, and modify. This problem has become so serious that for several decades industry analysts have called it a “software crisis.” Despite all of the technological advances of the 1990s, corporations are facing the twenty-first century encumbered with aging legacy applications. The year 2000 problem accentuates the deficiencies of these old transaction processing applications. Companies have resorted to a wide variety of fixes to patch up old code. Some have attempted to reengineer poorly documented programs. Other organizations have wrapped graphical user interfaces (GUIs) around existing code. Irrespective of the plethora of visual, object-oriented and fourth generation languages (4GLs) on the market, and despite the availability of numerous CASE and reengineering products, corporations have been reluctant to redo large-scale legacy applications. The introduction of object-oriented (OO) development seemed to offer a solution for building large systems quickly and inexpensively. However, there has been a scarcity of major business applications successfully implemented with OO tools.

Building applications from reusable components, component-based development (CBD), emerged as an alternate solution to the software crisis. A component is a separately created piece of software that may be combined or “composed” with other similar units to build an application. A component hides or encapsulates its implementation details. Thus, a component’s implementation may contain an object-oriented paradigm, procedure code, or even assembly language. A separate interface provides a group of service specifications for the component. A distinguishing characteristic of a component is that it may be deployed independently.


Ideally, CBD would be as easy as plugging together stereophonic components from different manufacturers. Industry interface standards would guarantee the interoperability of the system’s heterogeneous components. There are three de facto competing sets of standards for components. These conflicting standards are detrimental to the creation of such CBD tools.

Most CBD development, to date, has focused upon user-interface type components that are merely attachments to larger applications. Though the component-based approach is essentially one that focuses upon the development of large software systems, CBD has been used little in mission-critical corporate applications. Large-scale CBD generally requires components of large size. Fine-grained components simply cannot depict the high level of data abstraction needed to perform an entire business function.

An application framework is a collaborating group of components with an extensible interface for a particular domain. In other words, this type of framework is an incomplete application from a vendor, in which the customer extends the code to fit the particular needs of the business. A component framework is an entity that enforces rules and standards for components plugged into it. A component framework may operate independently or with other components and component frameworks. Thus, we may model a component framework as a component. Component frameworks offer promise for streamlining the development of large systems. However, as developers aggregate more and more components, increasing levels of abstraction make it difficult to maintain functionality.

Though there is substantial CBD at the desktop level, developers have encountered many obstacles in using components to build mission-critical systems. Despite the hesitancy of corporations to attempt major CBD, some of the forerunners have reported results that suggest that large-scale CBD has considerable potential. Several of the few developers that have used CBD for large mission-critical applications have had to patch together crudely a variety of heterogeneous elements. As an interesting solution to many of these problems, some vendors of monolithic applications are preparing alternate versions of each package by breaking it down into large components. Assembly of the large components is quicker than building entire applications from scratch, yet it offers more flexibility than buying enterprise off-the-shelf software.

In this paper, we address the effect of large-scale component and framework development upon business. Theoretically, composing collections of components into frameworks should be similar to assembly of a few atomic components. However, scaling up increases complexity significantly.
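The remark above that a component framework may itself be modeled as a component is easy to make concrete. In the following minimal Java sketch (our names, not drawn from any particular standard), the framework enforces a trivial rule on plugged-in components while exposing the component interface itself, so frameworks compose hierarchically:

```java
import java.util.ArrayList;
import java.util.List;

interface Component {
    void execute();
}

// A component framework modeled as a component: it enforces a (trivial)
// rule on what may be plugged in, and is itself pluggable elsewhere.
final class ComponentFramework implements Component {
    private final List<Component> plugged = new ArrayList<>();

    void plugIn(Component c) {
        if (c == null) {
            throw new IllegalArgumentException("framework rule: component required");
        }
        plugged.add(c);
    }

    @Override
    public void execute() {
        for (Component c : plugged) {
            c.execute();              // delegate to the plugged-in components
        }
    }
}
```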

Maintaining a COTS Component-Based Solution Using Traditional Static Analysis Techniques

R. Cherinka, C. Overstreet, J. Ricci, and M. Schrank
The MITRE Corporation, 1 Enterprise Parkway, Hampton, VA 23666, USA
[email protected]

Abstract. Even as a process that integrates commercial off-the-shelf (COTS) products into new homogeneous systems replaces “traditional” software development approaches, software maintenance problems persist. This work reports on the use of static analysis techniques on several medium-sized COTS solutions that have become difficult to maintain. We found that by exploiting semantic information, traditional techniques can be augmented to handle some of the unique maintenance issues of component-based software.

1 Introduction

In an effort to decrease software costs and to shorten time-to-market, industry and government alike are moving away from more “traditional” development approaches and towards integration of commercial off-the-shelf (COTS) components. An interesting aspect of COTS-based development is that automated solutions are comprised of a variety of “non-traditional” constructs (e.g. forms, code snippets, reports, modules, databases, and the like) that have been glued together to form an application. It is well understood that throughout an application’s lifecycle the cost of software maintenance is typically much higher than the original cost of development. The fact that COTS component-based solutions represent a new methodology and set of challenges does not alter this. This research asserts that developing appropriate analytic aids to assist with the understanding, development and maintenance of component-based applications is critical to the long-term goal of decreasing software development time and costs without unduly complicating future software maintenance.

2 Maintaining a Component-Based System

Traditional static analysis encompasses a mature set of techniques (such as dead code identification, program slicing and partial evaluation) for helping maintainers


understand and optimize programs. They can be applied to debugging, testing, integration and maintenance activities. Through experience in developing and maintaining a COTS component-based solution for the Department of Defense [1,2,3], we have identified maintenance issues associated with a component-based solution, and have experimented with the use of traditional static analysis techniques to aid our maintenance efforts. Our results show that the traditional static analysis techniques that we used were useful, but not necessarily sufficient to handle some of the unique characteristics of component-based solutions. Through further experimentation, we show that traditional techniques may be augmented to address some of these constructs, thereby increasing the accuracy of static analysis results and ultimately making the task of maintaining these applications manageable. Based on this study, we can make several observations:

- The traditional maintenance problems are still present in this new methodology.
- A majority of the code that comprises a COTS component-based solution is not generated, or often understood, by the developer/maintainer.
- COTS component-based solutions appear to increase the presence of dead code, which increases software maintenance costs (see the sketch following these observations).
- Semantic information can be used to supplement traditional static analysis approaches in order to increase the precision and accuracy of the results from analyzing COTS component-based software.
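The dead-code observation lends itself to a concrete sketch. The following Java fragment (ours, not the authors' tooling) shows the kind of reachability analysis over a call graph that underlies dead-code identification; procedures unreachable from the entry points are reported as dead:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of dead-code identification by reachability over a call graph:
// any procedure not reachable from an entry point is reported as dead.
final class DeadCodeFinder {
    static Set<String> dead(Map<String, List<String>> callGraph, Set<String> entries) {
        Set<String> reachable = new HashSet<>();
        Deque<String> work = new ArrayDeque<>(entries);
        while (!work.isEmpty()) {
            String proc = work.pop();
            if (reachable.add(proc)) {
                work.addAll(callGraph.getOrDefault(proc, List.of()));
            }
        }
        Set<String> result = new HashSet<>(callGraph.keySet());
        result.removeAll(reachable);
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g = Map.of(
            "Main", List.of("Form_Load"),
            "Form_Load", List.of(),
            "OldReportHandler", List.of());   // never called anywhere
        System.out.println(dead(g, Set.of("Main")));  // prints [OldReportHandler]
    }
}
```

In COTS glue code the hard part, as the paper notes, is building an accurate call graph at all — much of the "calling" is done through generated or hidden constructs, which is where the semantic augmentation comes in.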

3 Conclusions

Our research shows that the use of traditional and not-so-traditional static code analysis techniques, at least for those used within this effort, can aid in the understanding of unfamiliar code and in monitoring potential side-effects that can be caused by modifications to source code. It has application for debugging, testing, integration, maintenance, complexity estimation, resource allocation, and training. We feel that the initial work reported here is promising and indicates that further research should be performed in this area, especially since COTS component-based solutions are becoming a widely used development technique in software engineering.

References

1. Aitkin, P. Visual Basic 5 Programming Explorer. Coriolis Group Books, 1997.
2. Cherinka, R., J. Ricci and G. Cox. Air Force JINTACCS Configuration Management Environment (AFJCME) Strawman Architecture and Automation Requirements. Technical Report, The MITRE Corporation, August 1997.
3. Salste, T. Project Analyzer. Aivosto Oy Software Company, 1998. http://www.aivosto.com/vb.html

Second ECOOP Workshop on Precise Behavioral Semantics (with an Emphasis on OO Business Specifications)

Haim Kilov (1), Bernhard Rumpe (2)

(1) Merrill Lynch, Operations, Services and Technology, World Financial Center, South Tower, New York, NY 10080-6105, [email protected]
(2) Institut für Informatik, Technische Universität München, 80333 Munich, Germany, [email protected]

1 Motivation for the Workshop

Business specifications are essential to describe and understand businesses (and, in particular, business rules) independently of any computing systems used for their possible automation. They have to express this understanding in a clear, precise, and explicit way, in order to act as a common ground between business domain experts and software developers. They also provide the basis for reuse of concepts and constructs (“patterns”) common to all – from finance to telecommunications – or a large number of businesses, and in doing so save intellectual effort, time and money. Moreover, these patterns substantially ease the elicitation and validation of business specifications during walkthroughs with business customers, and support separation of concerns using viewpoints. Precise specifications of business semantics in business terms provide a common ground for subject matter experts, analysts and developers. All users of these specifications ought to be able to understand them. Therefore languages used to express such specifications should have precise semantics: as noted by Wittgenstein, “the silent adjustments to understand colloquial language are enormously complicated” [4]. (Not only English may be colloquial; graphical representations also may have this property. Probably, the most serious problem in this context is the usage of defaults and “meaningful” names. These are highly context-dependent and usually mean (subtly or not) different things for different people, including writers and readers. As a result, a possible warm and fuzzy feeling instead of a precise specification may lead to disastrous results.) If business specifications do not exist, or if they are incomplete, vague or inconsistent, then the developers will (have to) invent business rules. This often leads to systems that do something quite different from what they were supposed to do.

Business specifications are refined into business designs (“who does what when”), from which the creation of various information system (software) specifications and implementations – based on a choice of strategy and a (precisely and explicitly specified!) environment, including technological architecture – is possible. In this context, precision should be introduced very early in the lifecycle, and not just in coding, as often happens. In doing so, “mathematics is not only useful for those who understand Latin, but also for many other Citizens, Merchants, Skippers, Chief mates, and all those who are interested” (Nicolaus Mulerius (1564-1630), one of the first three Professors of Groningen University).

Precise specification of semantics – as opposed to just signatures – is essential not only for business specifications, but also for business designs and system specifications. In particular, it is needed for appropriate handling of viewpoints, which are essential when large and even moderately sized systems, both business and computer, are considered. Viewpoints exist both horizontally – within the same frame of reference, such as within a business specification – and vertically – within different frames of reference. In order to handle the complexity of a (new or existing) large system, it must be considered, on the one hand, as a composition of separate viewpoints, and on the other hand, as an integrated whole, probably at different abstraction levels. This is far from trivial.

Quite often, different names (and sometimes buzzwords) are used to denote the same concept or construct used for all kinds of behavioral specifications – from business to systems. “The same” here means “having the same semantics”, and thus a good candidate for standardization and industry-wide usage. Various international standardization activities (such as the ISO Reference Model of Open Distributed Processing and OMG activities, specifically the more recent ones around the semantics of UML, business objects, and other OMG submissions, as well as the OMG semantics working group) are at different stages of addressing these issues. OMG is now interested in semantics for communities of business specifications, as well as in semantic requirements for good business and system specifications. Again, mathematics provides an excellent basis for unification of apparently different concepts (with category theory being a good example); and the same happens in science (“laws of nature”), linguistics, and business (for example, the Uniform Commercial Code in the USA).

It is therefore the aim of the workshop to bring together theoreticians and practitioners to report their experience with making semantics precise (perhaps even formal) and explicit in OO business specifications, business designs, and software and system specifications. This is the 8th workshop on these issues; we have already had 7 successful workshops, one at ECOOP and six at OOPSLA conferences. During workshop discussions, reuse of excellent traditional “20-year-old” programming and specification ideas (such as in [1,2]) would be warmly welcomed, as would be reuse of approaches which led to clarity, abstraction and precision of such century-old business specifications as [3]. Experience in the usage of various object-oriented modeling approaches for these purposes would be of special interest, as would be experience in explicit preservation of semantics (traceability) during the refinement of a business specification into business design, and then into a system specification. The scope of the workshop included, but was not limited to:

- Appropriate levels and units of modularity
- Which elementary constructs are appropriate for business and system specifications? Simplicity, elegance and expressive power of such constructs and of specifications
- Using patterns in business specifications
- Making Use Cases useful
- Discovering concepts out of examples (generalization techniques)
- Providing examples from specifications
- What to show to and hide from the users
- How to make diagram notations more precise
- Equivalence of different graphical notations: “truth is invariant under change of notation” (Joseph Goguen)
- Semantics above IDL
- Rigorous mappings between frames of reference (e.g. business and system specifications)
- Role of ontology and epistemology in explicit articulation of business specifications
- Formalization of popular modeling approaches, including UML
- On complexity of describing semantics

The remainder of this paper is organized as follows. We overview the workshop’s presentations (this overview is certainly biased; but its draft was provided to all and updated by some participants) and present the workshop’s conclusions. Finally, the bulk of the paper consists of those abstracts that the workshop’s authors submitted (after the workshop) for inclusion. (Some authors did not submit any abstracts.) The Proceedings of our workshop were published [6].

2 Overview

Starting almost without delay, the up to 24 participants took part in an interesting workshop. With some visitors even dropping in from the main conference, the workshop was a lively event, at least as successful as its predecessors at previous years’ OOPSLA and ECOOP conferences. Seventeen presentations gave an interesting overview of current and completed work in the area covered by this workshop. The workshop was organized in the following three sections:


UML Formalization and Use

Kevin Lano described translation of UML models to structured temporal theories, with the goal of enabling various developments that a UML user will typically do. The user, however, does not need to use (or even see) the underlying framework. This led to a discussion about appropriate ways of showing the formalism to the users (e.g., by translating the formulas into rigorous English). Roel Wieringa emphasized the need to make explicit the methodological assumptions usually left implicit in UML formalization. (It was noted that UML formalizations tend to evolve into Visual C++.) The need to distinguish between the essential (“logical”, determined by the external environment) and the implementation (determined by an implementation platform) was clearly emphasized. This essential decomposition results in circumscription of design freedom, making requirements explicit, being invariant under change of implementation, and providing explicit design rationale. Claudia Pons presented a single conceptual framework (based on dynamic logic) for the OO metamodel and OO model. UML – as a language – was considered as just a way to represent the axioms, so that the approach can be used for any two-level notation. Tony Simons presented a classification and description of 37 things that don’t work in OO modeling with UML, and asked whether an uncritical adoption of UML is a “good thing”. The presentation was based on experience with real development projects. Problems of inconsistency, ambiguity, incompleteness and especially cognitive misdirection (drawing diagrams, rather than modeling objects) were illustrated. UML diagrams mixed competing design forces (e.g. both data and client-server dependency, both analysis and design perspectives), which confused developers.

Business and other Specifications

Offer Drori showed how requirements for an information system were described using hypertext in industrial projects. He emphasized the need to bridge the gaps between the end user and the planner, and between the planner and programmer. He also stressed the need to understand (and make explicit) the “good attributes” of the existing system that are often ignored (remain implicit) in information management. Bruce Siegel compared a business specification approach for two projects – a rules-based and a Web-based one. Mixed granularity of requirements (very detailed datatype-like vs. very top-level understanding) was mentioned. The ANSI/IEEE SRS standard was used; use cases (which should have used pre- and postconditions) helped to define system boundaries; and loosely structured text was used to document screen functionality. State-based systems should specify pre- and postconditions for external system interactions. Haim Kilov described the business of specifying and developing an information system, from business specification, through business design and system specification, and to system implementation. The realization relationship between these stages was precisely defined, leading to clear traceability both “up” and “down”, with an explicit emphasis on the (business, system, and technological) environment and strategy. Different realization variants and


the need to make explicit choices between them were also noted. Ira Sack presented a comprehensive specification of agent and multi-agent knowledge including an epistemic ladder of abstraction. A hierarchy of knowledge types was developed using information modeling [14]; it clearly showed the power of precision. With information modeling, management people without specialized knowledge were able to understand epistemology in a business context. Fatma Mili described elicitation, representation and enforcement of automobile-related business rules. The need to find and gracefully update business rules (in collections of thousands of them) – without rewriting the whole system – was specifically noted. Reuse was made of invariants (what), protocols (how), and methods (when). Constraints are context-dependent, and a constraint should be attached to the smallest context possible. Angelo Thalassinidis described a library of business specifications (using information modeling [14]) for the pharmaceutical industry; it was shown to business people who understood and appreciated the specifications. The goal was to enhance communication and to cope with change. It was proposed to use information modeling as a common approach to describe both the industry and the strategy at different layers (e.g., for industry – from the world through industries and corporations to divisions). “We have the HOW – information modeling; we need research as to WHAT to model.” Laurence Philips described precise semantics for complex transactions that enable negotiation, delivery and settlements. He emphasized the need to be explicit about what is important and what can be ignored. Semantics was approximated by translation from “feeders” into “interlingua” (that nobody externally uses) – this was the hardest step – aided by “opportunistic” conflict resolution. Building code from “interlingua” is easy assuming that infrastructure is in place. Birol Berkem described traceability from business processes to use cases. Since objects are not process-oriented, reusable “object collaboration units” are needed. A behavioral work unit consists of a dominant object and its contextual objects. Traceability between and within process steps and towards software development was noted.

Formalization

Bernhard Rumpe presented a note on semantics, specifically mentioning the need to integrate different notations. Semantics is the meaning of a notation; “if you have a description of the syntax of C++ you still don’t know what C++ does”. Therefore “semantics is obtained by mapping a notation I don’t know to a notation I do know” (and a notation is needed for the mapping itself). Luca Pazzi described statecharts for precise behavioral semantics and noted that behavior critically shapes the structure of the domain. Events compose to higher-level events. They denote state changes, are not directed, and should not be anthropomorphical. Veronica Arganaraz described simulation of behavior and object substitutability. Objects were described using the Abadi-Cardelli imp-calculus. A commuting diagram for a simulation relation was demonstrated. Zoltan Horvath presented a formal semantics for internal object concurrency. Proofs are done during, and not after, the design steps.


To define a class, the environment of the system, and the user, are explicitly included, and so is the specification invariant.

Small discussions after each presentation and a larger discussion at the end made it possible not only to clarify some points, but also to identify interesting directions for future research and – most importantly – to draw conclusions from this workshop.

3 Results of this Workshop

In order not to start from scratch, we started with the conclusions of our previous workshop at ECOOP’97 [5]. The participants came up with the following results. Most of them were accepted unanimously, some only by majority (the latter are marked by *):

- Distinguish between specification and implementation
- Make requirements explicit
- Specifications are invariant under change of implementation
- Provide design rationale
- Distinguish between specification environment and implementation environment



- Specifications are used by specification readers and developers for different purposes
- How can the users assert/deny the correctness of specifications – what representation to use?
- Common ontology is essential:*
  - Ontology is the science of shared common understanding
  - Reality changes; and ontological frameworks need to change together with reality (Example – with the invention of surfboards, the legal question arose: is a surfboard a boat?)



- Cognitive problems in using (writing, reading) a notation may break a system*.



- A business specification may be partially realized by a computer system and partially by humans
- State-based systems should specify at least pre- and post-conditions for external system interactions; however, pre- and post-conditions may not be sufficient
- Different fragments (“projections”, aspects) of system architecture should be visible to different kinds of (business) users

- A notation, to be usable, should be:
  - Understandable (no cognitive problems)
  - Unambiguous (* here no general agreement could be achieved; the authors believe that this mainly comes from a misunderstanding: “precise” is not the same as “detailed”)
  - Simple (not “too much stuff”)
- Different notations or representations are appropriate for different purposes (users)
- Reusable constructs exist everywhere:
  - in software specifications and implementations
  - in business specifications
  - in business design
- Abstractions are also applicable throughout the whole development
- Composition exists not only for things but also for operations, events, etc., and often means a higher level of abstraction (Composition is not just “UML-composition” but a concept in much wider use)

Points that were discussed, but on which no commonly agreed conclusions were drawn:

- Common semantics: what does it mean and how to build it? Comparison to mathematics and other engineering disciplines.
- Is software engineering an engineering discipline?
- Are there some semantic basics, and of what nature are they?

Presentation abstracts

4 Traceability Management From ‘Business Processes’ to ‘Use Cases’ (Birol Berkem)

The goal of this work is to evaluate the applicability of the UML’s activity diagram concepts to business process modeling needs, and to make a proposal for extensions to these concepts in order to define formal traceability rules from business processes to use cases. Robustness, testability, and executability of the business specifications appear as direct results of these extensions. Object-Oriented Business Process Modeling may be considered the backbone tool for Business Process Management, since it plays an important role in the design of the business process steps (business activities) around the right business objects, and holds essential business rules for the system development. Managing precise traceability from the business specifications layer to the system specifications layer is also useful to derive process-oriented use cases. The usage of some basic elements of the UML’s activity diagram doesn’t allow this traceability between process steps. After presenting the characteristics of the


UML’s activity diagram in order to compare them with the Business Process Modeling needs, we noted some shortcomings in the activity diagram concerning:

- the management of the ‘progress’ for a given business process: the internal representation of an activity (action state) is not provided to respond to ‘why and how the action is performed’, in order to model the goal and the emerging context (i.e., what is the behavioral change that happens in the owning object of an action state to perform the action, and what are the responsibilities requested from other objects participating in the action, allowing that way the measurement of the performance of a business process via its different stages);
- the management of the object flows between activities: there is no information about the destination of the outputs produced by an activity inside its corresponding action state. Indeed, any organizational unit that executes a process step should have the knowledge of the destination of its outputs depending on the status of completion of the activity. This requires the definition of relevant output objects inside each activity (action state), each one expressing the required responsibility.

In order to formalize objects’ responsibilities within business processes, we proposed to zoom in on each action state to discover participating objects, then determine all the necessary information that output objects should carry via their links to the target process steps. This should also reduce the information fetch time for the actors that use these target process steps. Considering that business processes must be directed by goals, we were led to apply to the UML’s Activity Diagram the concept of ‘Contextual Objects’, where objects are driven by goals. This allows:

- a robustness in the implementation of executable specifications via the chronological forms (to execute, executing, executed) of the object behaviors, and then a precise response regarding the progression of a process;
- a formal way to define the behavioral state transition diagram (an extension of the UML’s state transition diagram) which represents the internal behavior of the ‘action states’ and their transitions;
- an implicit definition of the right business objects that can be captured along the process modeling;
- finally, a formal way to find out process-oriented ‘use cases’ and their ‘uses / includes’ relationships using work units or action state internals of a process step.

5 Definition of Requirements for an OODPM-Based Information System Using Hypertext (Offer Drori)

Information systems are developed along a time axis known as the system life cycle. This cycle comprises several stages, of which the principal ones are: initiation, analysis of the existing situation, applicability study, definition of the new system, design, development, assimilation, and maintenance. The system definition stage is


effectively the stage in which the systems analyst summarizes the user’s needs, and it constitutes the basis for system design and development. Since this is a key stage, every effort must be made to ensure that all relevant issues are actually included, and that the requirements definition extracts the full range of user needs for the planned information system. The present article aims to describe a method for defining the requirements of an OODPM-based information system using hypertext, based on a HyperCASE computerized tool. All elements of an information system can be controlled and supervised by its specification. The specification of the various elements is essential, and creating the link between the specification of the elements and the actual elements is vital. An integrative approach embracing all information system elements will enable confronting a large portion of the problems associated with information systems development.

OODPM – Object Oriented Design using Prototype Methodology – is a system planning and design method that integrates the two approaches contained in its title. OODPM focuses primarily on system planning, but also addresses the business specification stage. According to this approach, the user needs that are to be implemented in the future system must be studied, but time must also be dedicated to studying the current situation in order to complete the requirements definition. Experience has shown that users tend to focus on those needs that have not been met by the current system, and tend to ignore the parts of the system that have met their needs. Without a stage to examine the current situation, only a partial definition of the requirements is likely to be achieved. In sum, with OODPM, the system analysis and planning process begins with a study of the current situation, but with a view to identifying the needs, rather than the study for its own sake. This means that a defined period of system planning time, proportional to the overall process, is assigned to the study of the current situation. This process ends with the business specifications or, as it is usually called, the business specification for the new system.

System planning with OODPM is done by defining the activities required for the system. A system activity is defined as a collection of data and processes that deal with a defined subject and are closely linked. Data affiliated with a particular process should be collated in a natural manner; however, should there be tens of data items pertaining to a specific process, a secondary definition of sub-processes is desirable. One can also add to the activity definition the requirement that the user, on a single screen, process the data in a single, continuous action in a single sequence. A collection of user activities in a particular area, with the relationships between them, defines an information system. (Some of these user activities may be manual.)

The software crisis, which resulted in a crisis of confidence between information systems developers and users, can be resolved. There are many ways to go about this. This article focused on two such ways: one, the adoption of OODPM – a systematic, structured methodology for planning and developing information systems; two, the use of a hypertext-based tool to create a superior requirements specification taking into account the future system users. The experience gained with this tool in both academic and “real life” systems development environments points to


positive results for this approach. Recently we have also seen CASE tools that partially include hypertext-based facilities for managing systems, but they still need a more comprehensive and profound treatment.

6 A Formal Semantics of Internal Object Concurrency (Ákos Fóthi, Zoltán Horváth, Tamás Kozsik, Judit Nyéky-Gaizler, Tibor Venczel)

We use the basic concepts of a relational model of parallel programming. Based on the concepts of a problem, an abstract program and a solution, we give a precise semantics of concurrent execution of abstract data type operations. Our approach is functional; problems are given their own semantical meaning. We use the behavior relation of a parallel program, which is easy to compare to the relation which is the interpretation of a problem. In this paper we show a simple method to solve the complex problem of correct – i.e., surely adequate both to the specification and to the chosen representation – parallel implementation of abstract data types. Finally, the practical advantage of its use is shown on a concrete example.

Our model is an extension of a powerful and well-developed relational model of programming which formalizes the notions of state space, problem, sequential program, solution, weakest precondition, specification, programming theorem, type, program transformation, etc. We formalize the main concepts of UNITY in an alternative way. We use a relatively simple mathematical machinery. Here we do not consider open specifications. The specification is given for a joint system of the program and the environment. The programs are running on different subspaces of the state space of the whole system, forming a common closed system based on principles similar to the generalized action-oriented object model. A generalization of the relational (i.e. not algebraic) model of class specification and implementation for the case of parallel programs is shown. Instead of using auxiliary variables for specifying objects in parallel environments, we generalize the concepts of type specification and type implementation. A difference is made between the specification and implementation of a data type, just as between problem and program. So we define the class specification as consisting of the type value set, the specification invariant, the specification of the operations, and specification properties for the environment of the data type. Next we introduce the implementation of a class, which is built up of the representation, the type invariant and the implementation of the operations. We give a relational semantics to an implementation being adequate to a class specification. The semantics is used for a precise refinement calculus in the problem refinement process and for verification of the correctness of the abstract parallel program. The verification of the correctness of the refinement steps and the correctness of the program with respect to the last specification may be performed by an appropriate temporal-logic-based verification tool in the future. The introduced model makes it possible to define operations that can run in parallel, i.e. internal concurrency of


objects is allowed. Nonterminating operations are allowed as well. The class set'n'cost is used to demonstrate the applicability of this methodology.

7 The graph-based Logic of Visual Modeling and Taming Heterogeneity of Semantic Models (Zinovy Diskin)

The goal of the paper is to explicate some formal logic underlying various notational systems used in visual modeling (VM). It is shown that this logic is a logic of predicates and operations over arrow diagrams, that is, a special graph-based logic of sketches: the latter are directed multi-graphs in which some diagrams are marked with labels taken from a predefined signature. The idea and the term are borrowed from categorical logic, a branch of mathematical category theory, where sketches are used for specifying mathematical structures. Thus, VM-diagrams are treated as visual presentations of underlying (formal graph-based) sketch specifications. In this way the diversity of VM notations can be presented as a diversity of visualizations over the same specificational logic. This gives rise to a consistent and mathematically justified unification of the extremely heterogeneous VM-world. The approach can be realized only within some formal semantic framework for VM. And indeed, in the paper it is outlined how basic constructs of conceptual modeling (like IsA, IsPartOf, various aggregation and qualification relationships) can be formally explicated in the framework of variable set semantics for sketches (see [7] for details). In a wider context, the goal of the paper is to manifest the arrow style of thinking as valuable both in the practice of conceptual modeling and design and in stating their logically consistent foundations as well. VM-diagrams are to be thought of as high-level semantic specifications rather than graphical interfaces to relational and other low-level schemas. The machinery of diagram predicates and operations proposed in the paper is intended to support this thesis on the technical logical level.
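One compact way to restate the informal definition above (our notation, not necessarily Diskin's) is

\[
S = (G,\; M), \qquad M \subseteq \{\, (P, d) \mid P \in \Sigma,\; d : \mathrm{Sh}(P) \to G \,\},
\]

where $G$ is a directed multigraph, $\Sigma$ is the predefined signature of diagram predicates, each predicate $P \in \Sigma$ comes with a shape graph $\mathrm{Sh}(P)$, and a marking $(P, d)$ declares that the diagram $d$ (a graph morphism from the shape into $G$) is required to satisfy $P$.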

8 An Information Management Project: What to do when your Business Specification is ready (Haim Kilov, Allan Ash)

We present a business specification of the business of developing an information system. Since different activities during an information management project emphasize different concerns, it makes sense to separate these concerns and, in particular, to use different names to denote them – business specification, business design, system specification, and system implementation. Each activity builds a description of the system from its viewpoint, which then is “realized” – moved to a more concrete (more implementation-bound) viewpoint – by the next activity. The frames of reference of these activities have a lot of common semantics, which is essential for being able to bridge the gap from businesses to systems.


The Realization relationship relates the “source” activity and the “target” activity which realizes it. A target is not uniquely determined by its source. There may be more than one business design that will realize a given business specification, for example. Generally, any number of realization variants may be generated which will result in multiple versions of the target activity, although only the “best” one will be chosen to be realized by the next activity. A more detailed specification of this relationship is provided in the figure below.

[Figure: the Realization relationship. A Source activity is related, via refinement and feedback, to the Target activity that realizes it; Realization proceeds through Realization Variants, chosen subject to the Environment (business, system, technological), Constraints, Opportunities, and Strategy.]

9 Formalising the UML in Structured Temporal Theories (Kevin Lano, Jean Bicarregui)

We have developed a possible semantics for a large part of the Unified Modelling Language (UML), using structured theories in a simple temporal logic. This semantic representation is suitable for modular reasoning about UML models. We show how it can be used to clarify certain ambiguous cases of UML semantics, and how to justify enhancement or refinement transformations on UML models. The semantic model of UML used here is based on the set-theoretic Z-based model of Syntropy [9]. A mathematical semantic representation of UML models can be given in terms of theories in a suitable logic, as in the semantics presented for Syntropy in [8] and for VDM++ in [11]. In order to reason about real-time specifications, the more general version, Real-time Action Logic (RAL) [11], is used. A typical transformation which can be justified using our semantics is source splitting of statechart transitions. We intend that such transformations would be incorporated into a CASE tool, so that they could be used in practical development without the need for a developer to understand or use the formal semantics.
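To give a flavor of what such a temporal theory contains (our illustration in generic temporal-logic notation, not the authors' actual RAL axioms), a statechart transition $t$ from state $s_1$ to state $s_2$ on event $e$ might induce an axiom of the form

\[
\Box\,\big( \mathit{state} = s_1 \;\wedge\; e \;\Rightarrow\; \bigcirc(\mathit{state} = s_2) \big),
\]

read: whenever the object is in $s_1$ and $e$ occurs, it is in $s_2$ at the next step. Source splitting then replaces a transition leaving a composite state by equivalent transitions leaving each of its substates, and the soundness of the transformation can be checked against axioms of this form.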

10 Alchemy Is No Substitute For Engineering In Requirement Specification (Geoff Mullery)

Attempts to be "formal" in specifying computer systems are repeatedly reported as having failed and/or been too expensive or time-consuming. This has led some people to assert that the use of science in this context has failed and that we should instead concentrate on the use of what are perceived as non-science-based methods (for example, approaches driven by sociological investigation). In reality, what has been applied from science is a limited subset based on mathematics (characterised by Formal Methods) or on pseudo-mathematical principles (characterised by Structured Methods). There is more to science than this subset, and failure to apply all aspects of science is arguably a prime reason for failure in previous approaches.

Science in general can be characterised in terms of Models and Disciplines. Models lead to the ability to characterise a proposed system in terms of deductions and theorems based on the model definition or its application to the description of a specific problem. Disciplines produce, make use of and evaluate models in varying ways, depending on the primary thrust of their sphere of interest. Models may be Abstract (not representing the observable universe, though possibly derived from it) or Real World (based on the observable universe and representing a subset of it). A Well Formed model has an internally consistent definition according to mathematically accepted criteria – derived deductions and theorems are likely to be correspondingly consistent. A model which is not well formed is likely to demonstrate inconsistencies in deductions and theorems derived from its use.


In computing, specifications are models and represent only a subset of a proposed system and its environment, so a specification must be incomplete (otherwise it would be a clone of the proposed system + environment). Also, even for well formed models, it is impossible to be 100% certain that all deductions/theorems are correct, so a model is suspect even over its domain of applicability.

The Pure Science discipline defines models, but rarely worries about their application to the Real World. The Applied Science discipline defines models based on the Real World or maps part of the Real World onto a model defined by Pure Science. It is frequently the case that Applied Scientists are more interested in producing the models than they are in ensuring that they are well formed. The Engineering discipline tests and uses models in the Real World, discovering areas of applicability and margins for safety of application. The Control Engineering discipline facilitates model application, evaluation and improvement by looking for divergence between model predictions (via deductions and theorems) and behaviour observed when used in a Real World mapping.

It is in Engineering and Control Engineering that the computer industry has made little practical use of science. Evaluation of methods and their underlying models has been based on anecdotal evidence, marketing skills, commercial pressures and inappropriate, frequently biased experiments. Method and model advocacy has been more akin to alchemy than to science. The alternative of ignoring science and using methods with a series of ad hoc pseudo-models is merely one which accentuates alchemy – with the result that we are in danger of getting only more of the same.

Nevertheless, it is also clear that, since all models represent only a subset of the universe of interest, there is no guaranteed profit in assuming that all that is needed are well formed models. What is needed is an attempt to integrate the use of well formed and ad hoc models, with co-operation in the process of translating between the models – and in both directions, not just from ad hoc to formal. And finally, in setting up such a co-operative venture, the critical thing – to ensure that the venture really works and can be made gradually to work even better – is to apply the disciplines of Engineering and the more specialised Control Engineering as characterised here. That is the minimum requirement for the achievement of improved systems development, and that is the basis of the Scientific Method.

11 Part-Whole Statecharts for Precise Behavioral Semantics (Luca Pazzi)

The work presented by Luca Pazzi suggested that a close and critical relationship exists between the behavioral and structural knowledge of complex engineering domains. It may be observed that most of the formalisms for representing aggregate entities present a tendency towards either an implicit or an explicit way of representing structural information. By the implicit approach, a complex entity is modeled through a web of references by which the component entities refer to one another. This is typical, for example, of the object-oriented approach, which models an associative relationship between two objects, for example car A towing trailer B, by an

Second ECOOP Workshop on Precise Behavioral Semantics

181

object reference from A to B. This way poorly reusable abstractions results (for example car A becomes, structurally, a tower). The counterpart is represented by the explicit approach, where the emphasis is on the explicit identification of a whole entity in the design, be it an aggregate or a regular entity. The claim is that such identification may be driven by analysing the associative knowledge, i.e. usually behavioral relationships, observed in the domain. Behavior contributes thus in determining additional structure in the domain and such identification impacts critically on the overall quality of the modeling. Behavioral specifications play thus a mayor role in committing a modelling formalism towards the explicit approach.
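The contrast between the two approaches can be sketched in a few lines of Java. This is only an illustration of the idea; the class names are ours, not Pazzi's notation:

    /** Implicit structure: the whole exists only as a web of references;
        Car silently becomes "a tower", a poorly reusable abstraction. */
    class Trailer { }
    class Car {
        Trailer towed;   // car A refers directly to trailer B
    }

    /** Explicit structure: the whole "car towing trailer" is an entity
        of its own, where the behavioral knowledge of the aggregate
        (e.g. its statechart) can live. */
    class Towing {
        final Car car;
        final Trailer trailer;
        Towing(Car car, Trailer trailer) { this.car = car; this.trailer = trailer; }
    }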

12 Integrating the Object-Oriented Model with the Object-Oriented Metamodel into a Single Formalism (Claudia Pons, Gabriel Baum, Miguel Felder)

Object-oriented software development must be based on theoretical foundations, including a conceptual model for the information acquired during analysis and design activities. The more formal the conceptual model is, the more precise and unambiguous engineers can be in their description of analysis and design information. We have defined an object-oriented conceptual model [13] representing the information acquired during object-oriented analysis and design. This conceptual model uses an explicit representation of data and metadata within a single framework based on Dynamic Logic, allowing software engineers to describe interconnections between the two different levels of data. We address the problem of gaining acceptance for the use of an unfamiliar formalism by giving an automatic transformation method, which defines a set of rules to systematically create a single integrated dynamic logic model from the several separate elements that constitute a description of an object-oriented system expressed in the Unified Modeling Language.

The intended semantics for this conceptual model is a set of states with a set of transition relations on states. The domain for states is an algebra whose elements are both data and metadata. The set of transition relations is partitioned into two disjoint sets: a set of transitions representing modifications of the specification of the system (i.e. evolution of metadata), and a set of transitions representing modifications of the system at run time (i.e. evolution of data). The principal benefits of the proposed formalization can be summarized as follows:
- The different views on a system are integrated into a single formal model. This allows one to define rules of compatibility between the separate views, at the syntactic and semantic levels.
- Formal refinement steps can be defined on the model.
- The approach introduces precision of specification into software development practice while still ensuring acceptance and usability by current developers.
- The model is suitable for describing system evolution; it is possible to specify how a modification made to the model impacts the modeled system. By animating the transition system defined by the formal specification it is possible to simulate the behavior of the specified system, and also to analyze the behavior of the system after evolution of its specification (structural evolution, behavioral evolution, or both).
- The model is suitable for the formal description of reuse contracts, reuse operators, design patterns and quality assessment mechanisms.
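The two-level partition of the transition relation can be hinted at with a small Java sketch. All names are ours and purely illustrative; this is not the authors' Dynamic Logic formalization, only the shape of its state space:

    import java.util.HashMap;
    import java.util.Map;

    /** A system state whose domain contains both data and metadata. */
    class SystemState {
        final Map<String, Object> data = new HashMap<>();     // run-time objects
        final Map<String, String> metadata = new HashMap<>(); // specification elements
    }

    interface Transition { void apply(SystemState s); }

    /** Evolution of metadata: the specification of the system is modified. */
    class SpecEvolution implements Transition {
        public void apply(SystemState s) {
            s.metadata.put("Account", "class Account { balance : Integer }");
        }
    }

    /** Evolution of data: the running system changes state. */
    class RunTimeStep implements Transition {
        public void apply(SystemState s) {
            s.data.put("account-1", Integer.valueOf(100));
        }
    }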

13 Simulation of Behaviour and Object Substitutability (Maria José Presso, Natalia Romero, Verónica Argañaraz, Gabriel Baum, and Máximo Prieto)

Many times during the software development cycle it is important to determine whether an object can be replaced by another. In an exploratory phase, for example when prototyping a system or module, simple objects with the minimum required behaviour are defined. In a later stage they are replaced by more refined objects that complete the functionality with additional behaviour. The new objects must emulate the portion of functionality already implemented, while providing the implementation for the rest of it. During the evolution of a system, to improve the performance of a module for example, there is also the need to exchange some objects for others with a more efficient implementation. In this case we need to ensure that the replacing object has exactly the same behaviour as the replaced one, leaving the functionality of the whole system unchanged.

What we need is a substitutability relation on objects that can be used to determine if an object can be replaced by another. We may say that an object can substitute another if it exhibits "at least the same behaviour". Or we could strengthen the condition, and ask it to exhibit "exactly the same behaviour". In both interpretations of substitutability the behaviour of the first object must be emulated by the new object, but in the first one the new object is allowed to have extra functionality.

The purpose of this work is to define the notion of two objects having the same behaviour. We have discussed two different relations that characterise the idea. In order to have a useful and clear notion of these relations we must define them rigorously. Such a rigorous formulation allows us to state precisely when two objects have the same behaviour, so that one can replace the other while ensuring the preservation of the semantics of the whole system. We take the simulation and bisimulation techniques, widely used in the semantics of concurrent systems, as formal characterisations of the relation of having the same behaviour. As a formal framework to represent objects we use the imperative object calculus (imp-calculus) of Abadi and Cardelli, which is a simple, object-based calculus with an imperative semantics. The imperative semantics makes it possible to model appropriately some key issues of object-oriented programming languages, such as state, side effects and identity of objects. The simulation and bisimulation for objects in the calculus are defined using a labelled transition system based on the messages that objects understand, and take into account the possible side effects of message passing present in the imperative calculus semantics. We propose that the defined simulation relation can be used to formally characterise the idea of one object having "at least the same behaviour" as another. Similarly, bisimulation is defined to capture the idea of an object having "exactly the same behaviour" as another.
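For a finite labelled transition system, the greatest bisimulation can be computed by a naive fixpoint iteration. The Java sketch below is our own construction over plain string-labelled transitions, not the paper's imp-calculus formalization; dropping one direction of the transfer condition would yield plain simulation:

    import java.util.*;

    /** A finite labelled transition system: state -> label -> successor states. */
    class LTS {
        final Map<String, Map<String, Set<String>>> trans = new HashMap<>();

        void add(String from, String label, String to) {
            trans.computeIfAbsent(from, k -> new HashMap<>())
                 .computeIfAbsent(label, k -> new HashSet<>())
                 .add(to);
        }

        Map<String, Set<String>> out(String s) {
            return trans.getOrDefault(s, Collections.emptyMap());
        }

        /** Naive greatest-fixpoint computation of bisimilarity over the given states. */
        boolean bisimilar(String s, String t, Set<String> states) {
            Set<List<String>> rel = new HashSet<>();
            for (String a : states)
                for (String b : states)
                    rel.add(Arrays.asList(a, b));      // start from the full relation
            boolean changed = true;
            while (changed) {                          // shrink until rel is a bisimulation
                changed = false;
                Iterator<List<String>> it = rel.iterator();
                while (it.hasNext()) {
                    List<String> p = it.next();
                    if (!transfer(p.get(0), p.get(1), rel)
                            || !transfer(p.get(1), p.get(0), rel)) {
                        it.remove();
                        changed = true;
                    }
                }
            }
            return rel.contains(Arrays.asList(s, t));
        }

        /** Transfer condition: every labelled move of x is matched by y within rel.
            Checking only this one direction gives the simulation relation. */
        private boolean transfer(String x, String y, Set<List<String>> rel) {
            for (Map.Entry<String, Set<String>> move : out(x).entrySet())
                for (String x2 : move.getValue()) {
                    boolean matched = false;
                    for (String y2 : out(y).getOrDefault(move.getKey(), Collections.emptySet()))
                        if (rel.contains(Arrays.asList(x2, y2))) { matched = true; break; }
                    if (!matched) return false;
                }
            return true;
        }
    }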

14 A Note on Semantics with an Emphasis on UML (Bernhard Rumpe)

"In software engineering people often believe a state is a node in a graph and don't even care about what a state means in reality." (David Parnas, 1998)

This note clarifies the concepts of syntax and semantics and their relationships. Today, a lot of confusion arises from the fact that the word "semantics" is used with different meanings. We discuss a general approach to defining semantics that is feasible for both textual and diagrammatic notations, and we illustrate this approach with an example formalization. The formalization of hierarchical Mealy automata and their semantics definition using input/output behaviors allows us to define a specification semantics as well as an implementation semantics. Finally, a classification of different approaches that fit in this framework is given. This classification may also serve as a guideline when defining a semantics for a new language.
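The core of an input/output semantics can be conveyed by a minimal, flat Mealy automaton. The paper's automata are hierarchical; this Java sketch, with names of our own choosing, only shows the underlying idea that the meaning of an automaton is the I/O behaviour it produces:

    import java.util.HashMap;
    import java.util.Map;

    /** A flat Mealy automaton: delta maps (state, input) to (next state, output). */
    class Mealy {
        private final Map<String, Map<Character, String[]>> delta = new HashMap<>();
        private String state;

        Mealy(String initial) { state = initial; }

        void add(String from, char input, String to, String output) {
            delta.computeIfAbsent(from, k -> new HashMap<>())
                 .put(input, new String[] { to, output });
        }

        /** The I/O behaviour: the output word produced while consuming an input word. */
        String run(String input) {
            StringBuilder out = new StringBuilder();
            for (char c : input.toCharArray()) {
                String[] step = delta.get(state).get(c);  // assumes a total transition table
                state = step[0];
                out.append(step[1]);
            }
            return out.toString();
        }
    }

A single automaton realizes one such input-to-output mapping; in the terms of the paper, a specification semantics would denote a set of admissible I/O behaviors, an implementation semantics a single one.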

15 Towards a Comprehensive Specification of Agent and Multiagent Knowledge Types in a Globalized Business Environment (Ira Sack, Angelo E. Thalassinidis)

We produce a detailed-level specification, based on information modeling as defined in [14] and refined by a frame-based semantics presented in [15], to precisely specify various agent and multi-agent knowledge types in a globalized business environment. Our approach results in a definition of agent and multi-agent knowledge types based on the collective works of the authors cited in the references. The highest level of specification consists of information molecules (displayed as Kilov diagrams) that show concept linkage between epistemic notions such as agent knowledge types and possible worlds. It is our belief that information modeling is preferable to object-oriented modeling when specifying very high-level abstractions (super-abstractions?) such as possible worlds, knowledge types, and information partitions, and the linkages between them (known in information modeling as associations). It is also a reasonable and appropriate means to present and "socialize" notions which are increasingly becoming focal points for new approaches to information and business system design (e.g., market-oriented systems, intelligent agents, negotiation systems).

Whereas Wand and Wang have addressed the issue of data quality of an information system from an ontological perspective premised on a one-to-one correspondence between a set of information states and a set of states representing real-world perceptions, they did not specifically address the issue of uncertainty. Using information modeling and Aumann structures we have extended the work presented in [16] to include and model uncertainty in agent perception within a multiple-agent perspective. We have also created an information molecule that includes five multi-agent knowledge subtypes: distributed knowledge, some agent knows, mutual knowledge, every agent knows that every agent knows, and common knowledge. These five subtypes are assembled to form an Epistemic Ladder of Abstraction (ELA): an information molecule subject to a set of logical constraints that order the various knowledge types in terms of their levels of abstraction, with common knowledge sitting at the top.

16 37 Things that Don't Work in Object-Oriented Modelling with UML (Anthony J H Simons, Ian Graham)

The authors offer a catalogue of problems experienced by developers using various object modelling techniques brought into prominence by the current widespread adoption of the UML standard notations. The problems encountered have different causes, including: ambiguous semantics in the modelling notations, cognitive misdirection during the development process, inadequate capture of salient system properties, features missing in supporting CASE tools, and developer inexperience. Some of the problems can be addressed by increased guidance on the consistent interpretation of diagrams. Others require a revision of UML and its supporting tools.

The 37 reported problems were classified as: 6 inconsistencies (parts of UML models that are in self-contradiction), 9 ambiguities (UML models that are underspecified, allowing developers to interpret them in multiple ways), 10 inadequacies (concepts which UML cannot express adequately) and 12 misdirections (cases where designs were badly conceptualised, or drawn out in the wrong directions). This last figure is significant and alarming. It is not simply that the UML notation has semantic faults (which can be fixed in later versions), but rather that the increased prominence given to particular analysis models in UML has in turn placed a premium on carrying out certain kinds of intellectual activity which eventually prove unproductive. Our analysts enthusiastically embraced the new use-case and sequence diagram approaches to conceptualising systems, generating control structures which our designers could not (and refused to) implement, since they did not map onto anything that a conventional software engineer would recognise. Similar disagreements arose over the design interpretation of analysis class diagrams, due to the tensions between the data dependency and client-supplier views, and over the place and meaning of state and activity diagrams.

Most problems can be traced back to the awkward transition between analysis and design, where UML's universal philosophy (the same notation for everything) comes unstuck. Modelling techniques that were appropriate for informal elicitation are being used to document hard designs; the same UML models are subject to different interpretations in analysis and design; developers are encouraged to follow analytical procedures which do not translate straightforwardly into clean designs. UML is itself neutral with respect to good or bad designs; but the consequences of allowing UML to drive the development process are inadequate object conceptualisation, poor control structures and poorly coupled system designs.

17 Association Semantics in Medical Terminology Services (Harold Solbrig)

This paper describes some of the interesting issues encountered during the development of a standard interface specification for accessing the content of medical terminologies. It begins by briefly describing some of the common medical coding schemes and outlines some of the motivations for developing a common interface to access their content. It then proceeds to describe one such specification, the Lexicon Query Services (LQS) interface specification, which was recently adopted by the Object Management Group.

Medical terminology systems frequently define the concepts behind the terminology in terms of their associations with each other. Concepts are frequently defined as a type of another concept, or broader than or narrower than another concept in scope. Various forms of subtype and subclass associations occur, as well as associations like contains, is composed of, etc. As the meaning behind a given medical term is dependent on the communication of these associations, it was determined that a formalization of the association semantics would be necessary if this specification was to be generally useful. This paper describes some of the issues that were encountered when the authors attempted to apply a form of association semantics used in object-oriented modeling to the semantics of medical terminology associations.

18 Building the Industry Library – Pharmaceutical (Angelo E. Thalassinidis, Ira Sack)

Neither the OO nor the business research community has yet operationalized, or even formalized, high-level business concepts such as strategy, competition, market forces, product value, regulations, and other "soft" business notions. This may be attributed to two main reasons: a) OO researchers are using an incremental approach to building libraries that is fundamentally bottom-up, believing it is only a matter of time until they can address high-level business concepts; and b) researchers from the business strategy side have never attempted to formalize these concepts, due to the difficulties engendered by their differing backgrounds (psychology, economics, etc.) or the different audiences they must address (CEOs will not read a detailed library).

This paper constitutes the first of a series presenting an ongoing effort to build "Business Industry Libraries" (BILs, for short) using the modeling constructs introduced in [14]. A BIL models the specific characteristics of an industry. The BIL will be accompanied by a "Business Strategy Library" that the authors have started working on in [18,19,20], an "Organizational Theory Library" whose foundation is presented in [17], and other constructs. The BIL will assist in illuminating difficult business considerations, employing as much precision as the vagaries, instabilities, etc. of the domain allow. The type of information that the business library should maintain is still being researched and will be presented in an upcoming paper.

This paper accomplishes the analysis of the pharmaceutical industry, one of the most complicated industries. The pharmaceutical industry is generally viewed by analysts as the composition of Human-use, Animal-use, Cosmetic-use, and Food-use products. The Human-use products are drugs developed to address human health needs; the Animal-use products are drugs developed to address either animal health needs or human needs from animal products; Cosmetic-use products are drugs developed to address human esthetic needs; and Food-use products are drugs developed to address human, animal, or even plant needs. The pharmaceutical industry is also international by nature. In many countries the pharmaceutical industry has a relationship with its customers that is unique in manufacturing: the industry provides drugs, the physician decides when to use them, the patient is the consumer, and the bill is predominantly paid by a private or national insurance subject to a copayment.

19 Formalizing the UML in a Systems Engineering Approach (Roel Wieringa)

This discussion note argues for embedding any formalization of semiformal notations in a methodology. I present a methodological framework for software specification based on systems engineering and show how the UML fits into this framework. The framework distinguishes the dimensions of time (development strategy and history), logic (justification of the result), aspect of the delivered system, and aggregation level of the system. Aspect is further decomposed into functions, behavior and communication. Development methods offer techniques to specify these three aspects of a software system. The UML offers use case diagrams to specify external functions, class diagrams to specify a decomposition, statecharts to specify behavior, and sequence and collaboration diagrams to specify communication.

Next, an essential modeling approach to formalizing the UML within this framework is argued for. This means that we should define an implementation-independent decomposition that remains invariant under changes of implementation. Finally, a transition system semantics for the UML is discussed that fits within this modeling approach. The semantics models an object system as going through a sequence of steps, where each step is a finite set of actions performed by different objects. Objects communicate by means of signals. This semantics has been formalized as a transition system semantics. No formal details are given here, but references point to places where they can be found.
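The shape of such a semantics (an execution as a sequence of steps, each step a finite set of actions by different objects, with signals as the communication mechanism) can be hinted at in a few Java data types. These are our illustrative names, not Wieringa's formal definitions:

    import java.util.*;

    /** An action performed by one object, possibly emitting signals to others. */
    class Action {
        final String object, name;
        final List<Signal> emitted = new ArrayList<>();
        Action(String object, String name) { this.object = object; this.name = name; }
    }

    /** Objects communicate by means of signals. */
    class Signal {
        final String from, to, name;
        Signal(String from, String to, String name) {
            this.from = from; this.to = to; this.name = name;
        }
    }

    /** One step: a finite set of actions performed by different objects. */
    class Step {
        private final Map<String, Action> actionsByObject = new HashMap<>();
        void add(Action a) {
            if (actionsByObject.put(a.object, a) != null)
                throw new IllegalArgumentException("one action per object per step");
        }
    }

    /** An execution of the object system: a sequence of steps. */
    class Execution {
        final List<Step> steps = new ArrayList<>();
    }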

References

1. E. W. Dijkstra. On the teaching of programming, i.e. on the teaching of thinking. In: Language Hierarchies and Interfaces (Lecture Notes in Computer Science, Vol. 46), Springer-Verlag, 1976, pp. 1-10.
2. System Description Methodologies (ed. by D. Teichroew and G. David). Proceedings of the IFIP TC2 Conference on System Description Methodologies. North-Holland, 1985.
3. Charles F. Dunbar. Chapters on the Theory and History of Banking. Second edition, enlarged and edited by O. M. W. Sprague. G. P. Putnam's Sons, New York and London, 1901.
4. Ludwig Wittgenstein. Tractatus Logico-Philosophicus. 2nd corrected reprint. New York: Harcourt, Brace and Company; London: Kegan Paul, Trench, Trubner & Co. Ltd., 1933.
5. Haim Kilov, Bernhard Rumpe. Summary of ECOOP'97 Workshop on Precise Semantics of Object-Oriented Modeling Techniques. In: Object-Oriented Technology: ECOOP'97 Workshop Reader (ed. by Jan Bosch and Stuart Mitchell). Springer Lecture Notes in Computer Science, Vol. 1357, 1998.
6. Haim Kilov, Bernhard Rumpe. Second ECOOP Workshop on Precise Behavioral Semantics (with an Emphasis on OO Business Specification). Workshop proceedings, Technical Report TUM-I9813, Technische Universität München, 1998.
7. Z. Diskin, B. Kadish. Variable set semantics for generalized sketches: Why ER is more object-oriented than OO. To appear in Data and Knowledge Engineering.
8. J. C. Bicarregui, K. C. Lano, T. S. E. Maibaum. Objects, Associations and Subsystems: a hierarchical approach to encapsulation. ECOOP'97, Springer Lecture Notes in Computer Science, 1997.
9. S. Cook, J. Daniels. Designing Object Systems: Object-Oriented Modelling with Syntropy. Prentice Hall, 1994.
10. K. Lano, J. Bicarregui. UML Refinement and Abstraction Transformations. ROOM 2 Workshop, Bradford University, 1998.
11. K. Lano. Logical Specification of Reactive and Real-Time Systems. To appear in Journal of Logic and Computation, 1998.
12. Rational Software et al. UML Notation Guide, Version 1.1, http://www.rational.com/uml, 1997.
13. C. Pons, G. Baum, M. Felder. A dynamic logic framework for the formal foundation of object-oriented analysis and design. Technical Report, lifia-info.unlp.edu.ar/~cpons.
14. Haim Kilov, James Ross. Information Modeling: An Object-Oriented Approach. Prentice-Hall Object-Oriented Series, 1994.
15. Martin J. Osborne, Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
16. Yair Wand, Richard Wang. Anchoring Data Quality Dimensions in Ontological Foundations. TDQM Research Program, MIT, 1994.
17. J. Morabito, I. Sack, A. Bhate. Forthcoming book on Organizational Modeling. Prentice-Hall.
18. A. E. Thalassinidis, I. Sack. An Ontologic Foundation of Strategic Signals. OOPSLA'97 Workshop on Object-Oriented Behavioral Semantics, pp. 185-192.


19. A. E. Thalassinidis, I. Sack. On the Specifications of Electronic Commerce Economics. ECOOP'97 Workshop on Precise Semantics for Object-Oriented Modeling Techniques, pp. 157-163.
20. A. E. Thalassinidis. An Ontologic and Epistemic, Meta-Game-Theoretic Approach to Attrition Signaling in a Globalized, Electronic Business Environment. Ph.D. Thesis, Stevens Institute of Technology, May 1998.

Workshop Report

ECOOP'98 Workshop 7: Tools and Environments for Business Rules

Kim Mens¹, Roel Wuyts¹, Dirk Bontridder², and Alain Grijseels²

¹ Vrije Universiteit Brussel, Programming Technology Lab,
Pleinlaan 2, B-1050 Brussels, Belgium
[email protected], [email protected]
http://progwww.vub.ac.be

² Wang Global Belgium, System Integration & Services Division, Application Engineering,
Madouplein 1 box 8, B-1210 Brussels, Belgium
[email protected], [email protected]

Abstract. This workshop focussed on the requirements for tools and environments that support business rules in an object-oriented setting and attempted to provide an overview of possible techniques and tools for the handling, definition and checking of these rules and the constraints expressed by them during analysis, design and development of object-oriented software.

1 Workshop Goal

Business rules are nothing new. They are used by every organisation to state its practices and policies. Every business application contains many business rules. One of the current problems with business rules is that code, analysis and design models specify business rules only implicitly, and that current software engineering tools provide inadequate support to deal explicitly and automatically with business rules when building object-oriented business applications.

With this workshop we intended to investigate which tool and environmental support for handling business rules during software development and evolution is needed and/or desired, and which is already available. To come to a categorisation of available support tools and techniques, position papers were solicited from both academia and industry. In our call for contributions, we asked the participants to focus on the following topics:

1. What tools and environments currently exist to handle business rules?
2. Which extra support is needed or desired?
3. How can business rules be made explicit during the different phases of the software life cycle?
4. How can business rules, and the constraints they express, be specified in a rigorous way?
5. How can we deal with enforcement of business rules, compliance checking, reuse and evolution of business rules?


6. How can code be generated automatically from business rules, or how can business rules be extracted from existing software systems?

We agree with Margaret Thorpe that tool support for business rules is important: "Giving the business community the tools to quickly define, modify and check business rules for consistency, as well as get those changes quickly implemented in the production environment, has made them able to respond much more quickly to changes in the business environment." [6]

2 About the Organisers

We were interested in organising this workshop for several reasons. Kim Mens' current research interests [4] go out to intentional annotations and their semantics. He thinks that business rules could provide some insights in this matter: what extra "intentional" information do business rules provide, and how? Roel Wuyts' research focusses on the use of declarative systems to reason about the structure of object-oriented systems [10]. He is particularly interested in business rules that can be expressed declaratively and then used to extract or view particular information from an object-oriented system. Dirk Bontridder and Alain Grijseels wanted to validate their current experiences with business rules in object-oriented framework-based software development projects against the insights of other people working on or with business rules.

3 About the Workshop

About twenty persons actually participated in the workshop, with an equal number of participants from industry and from the academic world. Eight of them were invited to present a position statement during the first part of the workshop. We categorised the different presentations based on the topics on which they focussed. Bruno Jouhier [2], Paul Mallens and Leo Hermans [1] reported on existing tools and environments for dealing with business rules, and on the qualities and shortcomings of these tools and the applied techniques. Michel Tilman [7], Hei Chia Wang [9] and Stefan Van Baelen [8] discussed some techniques for dealing with business rules, based on their experience or inspired by their research interests. Gerhard Knolmayer [3] and Brian Spencer [5] related business rules to database rules. These presentations certainly gave an insight into some of the topics enumerated in Section 1. For more details we refer to the position statements, which are included at the end of this chapter.

Because we wanted to come to understand what characteristics and properties tools and environments for handling business rules should have, we took the following approach in the remainder of the workshop. The goal was to obtain a list of requirements for tools and environments. To this end, we assigned the participants to different working groups, composed of both industrial participants and researchers, to construct such lists from different perspectives. The perspectives adopted by the different working groups were:

Function of the person. Different kinds of persons may have a different perspective on the kinds of requirements that are essential for tools and environments for business rules. We asked the members of this working group to assume the role of a particular kind of person (e.g. project manager, problem domain expert, application developer, end user, business manager, ...), and to reason about requirements from that perspective.

Nature of the application. We assumed that the particular nature of an application might have an impact on the requirements for tools and environments for business rules. Therefore, we asked the different members of this working group to reason about such requirements from the perspective of particular kinds of applications (e.g. administrative, financial, telecom, ...).

Software life cycle. This working group focussed on finding the requirements for tools and environments to support business rules throughout the entire software life cycle.

In a concluding session, the results of the different working groups were merged and discussed with the other working groups.

4 Requirements for Tools and Environments for Business Rules

4.1 Initial List of Requirements

This initial list of requirements for tools and environments was intended to serve as the basic input for discussion in the different working groups. It was extracted from the position papers submitted by the participants. First of all, we wanted to know whether this list was complete or not; if not, participants were encouraged to add to it. Secondly, we were interested in the participants' motivations why, or why not, the listed requirements were deemed necessary in tools and environments supporting business rules. The initial list of tentative requirements is given below:

Centralised(?) repository: There should be a centralised(?) repository of business rules.
Adaptable: Allow for easy changing, refining and removing of existing rules.
Conflict detection: Support for detecting conflicting rules is needed.
Debugging facilities: Provide support for debugging systems containing lots of business rules.
Declarative language: Use a declarative language to express business rules.
Dedicated browsers: Use dedicated browsers for "querying" business rules.
Efficiency: Achieve an "acceptable" efficiency in tools and environments for business rules.


Explicit business rules: Make the business rules explicit in software.
First-class business rules: Business rules should be first class.
Formal and rigorous foundation: Need for a formal and rigorous foundation of business rules.
Identification and extraction: Support for identification and extraction of business rules from the real world.
Maintain integrity and consistency: Support for maintaining software integrity and consistency.
Representation: Use metaphors of business rule representation that are more interesting than "if-then-else" constructs.
Open: Easily allow new rules as well as rules about new items.
Reasoning engine or mechanism: Use a reasoning engine to allow inferencing about rules rather than having "stand-alone" rules.
Scope/views of rules: Support different scopes or views of business rules (e.g. application dependent as well as independent).
Integration with existing tools: Provide a symbiosis between business rule tools and environments and the integrated development environments for managing the rest of the software.

Every working group took these initial requirements and separately discussed them according to their perspective. This resulted in an annotated requirement list, containing extra comments or considerations made by some groups according to their viewpoint. The working groups also added new requirements they deemed important from their perspective. The next two subsections present the annotated requirement list everybody agreed on and a list of added requirements.

4.2 Annotated Requirement List

Centralised(?) repository: Every group agreed on this, without much discussion. The "Function of the Person" working group (FoP) stated that the repository should not necessarily be physically centralised, but it should certainly be virtually centralised. The other two groups explicitly mentioned that the repository should contain all business rules about software components, at any phase of the software life cycle.

Adaptable: Everybody agreed on this obvious requirement.

Conflict detection: The "Software Life Cycle" working group (SLC) argued that conflict detection is important, but that further investigation should make clear what kinds of conflicts are interesting or important to detect. The FoP group mentioned that they currently see two different levels at which rules can conflict: the business level and the technological level.

Debugging facilities: Everybody agreed that there certainly is a need for debugging support, for example for prototyping business rules, or for tracing facilities.

Declarative language: Was considered important by all working groups.

Dedicated browsers: Dedicated browsers for several types of users should be available. The FoP group related this to the scope/view requirement: browsers should support different scopes or views of business rules as well.


Efficiency: The FoP group mentioned that efficiency should at least be "reasonable", but that, more importantly, things should remain efficient when scaling the system. The SLC group recognised that there is a subtle trade-off between efficiency and flexibility (e.g. adaptability). Building an application by generating code for the business rules could make it more efficient, but limits its flexibility. On the other hand, an application that accesses and uses the rule base at run time is very flexible (open, adaptable, ...) but is probably much less efficient.

Explicit business rules: Everyone agreed.

First-class business rules: Everyone agreed.

Formal and rigorous foundation: Two important remarks were made here. First, the FoP group mentioned that rules could perhaps be informal in an initial phase, during requirements elicitation, but that in the end they should be declared in an explicit, formal and declarative way. The "Nature of the Application" working group (NoA) noted that it is important to have a formal notation language, but that preferably it should be a standard one. A number of potential candidates are: KIF, OCL, UML, ...

Identification and extraction: According to the NoA group, there is a need for knowledge elicitation tools to extract business rules from human sources, example cases and electronic sources (reverse engineering). The FoP group agreed that fully automated extraction of business rules is a beautiful goal, but seems unrealistic.

Maintain integrity and consistency: To achieve integrity and consistency, only the proper tools should have access to the rule base. It is not allowed to access the rule base from the outside. A question is what kinds of tools and techniques are currently available to support consistency maintenance.

Representation: Appropriate representations should be supplied when accessing the rule base (e.g., different representations for different users). These representations should not necessarily be completely formal, but should best suit specific users (FoP). Possible alternative representations could be tables, trees or graphs (NoA). One way of allowing different representation schemes could be to use a single language for internal representation of, and reasoning about, business rules, but many different external languages (SLC).

Open: Everyone agreed.

Reasoning engine or mechanism: It should be investigated which kind of reasoning mechanism is most appropriate to deal with business rules (inferencing, constraint solving, event handling, forward or backward chaining, etc.).

Scope/views of rules: The FoP group stated that mechanisms are needed to classify the business rules according to different views. This will facilitate browsing the rule base and finding particular rules in it. The NoA group elaborated further on this by proposing a notion of contexts or scopes that should allow the classification of business rules into conceptual groups.

Integration with existing tools: All tools should be integrated, and consistent at all times. For example, when a view is changed in some tool, it should be changed automatically in the other tools. It is also important to integrate the tools and environments for business rules with existing object-oriented methods, techniques, and notations.

4.3 Additional Requirements

Some of the working groups proposed additional requirements to be added to the initial list. For example, the FoP group was able to formulate some extra requirements by looking at the requirements from a managerial perspective. The list with all extra requirements formulated by the different working groups is presented below:

Life cycle support: The SLC group claimed that support for business rules is needed throughout the entire software life cycle. The other working groups agreed on this.

Management decision support: The FoP group mentioned the need for support for workload assignment, progress management, and decision management.

Traceability: There is a need for traceability between a business rule and the source from which it was extracted, at different phases of the software life cycle. Furthermore, traceability is not only important within a single phase of the software life cycle, but also throughout the different phases. Traceability is important because it facilitates reasoning about the business application.

Code generation: The application developer could be supported by generating code, templates or components from the business rules. But although generating code from business rules seems an interesting issue, some questions immediately spring to mind: when should code be generated (only at the end?), and what should be generated?

Team development: Tools should provide team development support, such as system configuration management, multi-user support, etc. This additional requirement was mentioned by several working groups.

Evolution: Support for tracking the evolution of business rules, in and throughout the different phases of the life cycle. (SLC group)

Completeness: Being able to check completeness, i.e. whether the business is described completely by the business rules, seems like an interesting requirement, but might not always be feasible (e.g. how to check completeness?), wanted (e.g. not important in an initial phase) or relevant.

Regression testing: How to build tools for regression testing in the context of business rules?

Impact analysis: Techniques are needed for analysing the impact of changing a business rule on the rest of the system, again at all phases of the software life cycle. (SLC group) Note that this requirement is somewhat related to the requirements of evolution and conflict checking.

4.4 Further Remarks

A conclusion of the NoA working group was that the requirements for business rule tools and environments seem rather independent of the application domain. However, there was some disagreement with this conclusion from the other working groups. For example, they mentioned real-time applications, which seem likely to give rise to more specific requirements.

The FoP group not only formulated extra requirements, by reasoning for example from a managerial perspective, but also remarked that new jobs, such as an auditing or rule manager, may need to be defined when adopting a business rules approach. The SLC group effectively identified several additional requirements by focussing on the use of tools for business rules throughout the software life cycle: support for traceability, evolution, conflict checking, impact analysis, etc., not only at a single phase of the software life cycle, but also between the different phases.

There was some unresolved discussion about the internal language that should be used for representing business rules. One viewpoint was that a standard language or notation, such as UML, should be used, in which it is possible to declare as many kinds of business rules as possible. The opponents of this approach preferred the complete openness of a meta-approach.

5 Conclusion

During the workshop, there seemed to be a lot of agreement regarding the constructed list of requirements for business rule tools and environments. This is a hopeful sign, indicating that there is a clear feeling of what characteristics such tools and environments should have, despite the fact that there still is no precise and generally accepted definition of what a business rule is.

6 Acknowledgements

Thanks to all workshop participants for making this a great and successful workshop.

References

1. Hermans, L., van Stokkum, W.: How business rules should be modeled and implemented in OO. Position paper at the ECOOP'98 Workshop on Tools and Environments for Business Rules. Published in this workshop reader (same chapter).
2. Jouhier, B., Serrano-Morale, C., Kintzer, E.: Elements Advisor by Neuron Data. Position paper at the ECOOP'98 Workshop on Tools and Environments for Business Rules. Published in this workshop reader (same chapter).
3. Knolmayer, G. F.: Business Rules Layers Between Process and Workflow Modeling: An Object-Oriented Perspective. Position paper at the ECOOP'98 Workshop on Tools and Environments for Business Rules. Published in this workshop reader (same chapter).
4. Mens, K.: Towards an Explicit Intentional Semantics for Evolving Software. Research abstract submitted to the ASE'98 Doctoral Symposium. To be published in the Proceedings of Automated Software Engineering 1998 (ASE'98).


5. Spencer, B.: Business Rules vs. Database Rules - A Position Statement. Position paper at the ECOOP'98 Workshop on Tools and Environments for Business Rules. Published in this workshop reader (same chapter).
6. Gottesdiener, E.: Business Rules show Power, Promise. Cover story on software engineering, Application Development Trends (ADT Magazine), March 1997.
7. Tilman, M.: A Reflective Environment for Configurable Business Rules and Tools. Position paper at the ECOOP'98 Workshop on Tools and Environments for Business Rules. Published in this workshop reader (same chapter).
8. Van Baelen, S.: Enriching Constraints and Business Rules in Object Oriented Analysis Models with Trigger Specifications. Position paper at the ECOOP'98 Workshop on Tools and Environments for Business Rules. Published in this workshop reader (same chapter).
9. Wang, H.-C., Karakostas, V.: Business-Object Semantics Communication Model in Distributed Environment. Position paper at the ECOOP'98 Workshop on Tools and Environments for Business Rules. Published in this workshop reader (same chapter).
10. Wuyts, R.: Declarative Reasoning about the Structure of Object-Oriented Systems. Proceedings of Technology of Object-Oriented Languages and Systems (TOOLS'98), 1998, pp. 112-124.

Enriching Constraints and Business Rules in Object Oriented Analysis Models with Trigger Specifications

Stefan Van Baelen

K.U.Leuven, Department of Computer Science,
Celestijnenlaan 200A, B-3001 Leuven, Belgium
[email protected]
WWW home page: http://www.cs.kuleuven.ac.be/~som

1 Motivation

Current object-oriented analysis methods focus especially on class-centered information and inter-object behavior, expressed in static structural and object interaction diagrams. The specification of constraints and business rules¹ is not a major concern to them. Although UML (the Unified Modeling Language) provides support for constraint specifications through OCL (the Object Constraint Language), its integration with other model concepts is rather minimal. Constraints are treated as formal comment specifications rather than as distinct and important model elements. For instance, the interaction between constraints and object behavior is often neglected. It is not clear how a message that violates a certain constraint can be refused without crashing the whole system. UML states that the condition of a constraint must always be true, otherwise the system is invalid, with consequences outside the scope of UML. As such, constraints are not really imposed on a model and its behavior, but serve only as a validation of a model state at a certain moment in time.

To highlight the importance of constraints in analysis models, they should be treated as first-class model elements with a clear semantic impact on existing static and dynamic model elements. As such, validation and verification of constraints becomes possible. Due to their broadness in nature, constraints can have a local or a global impact on a model. A single constraint can delimit certain functionality of a large number of involved classes, or can have a focussed impact on a specific attribute or association. Several kinds of reaction patterns for constraint violation should be supported. It is not acceptable for a system to end up in an inconsistent model state after a constraint violation. Optimal constraint support includes concepts for specifying event refusal, event transformation and event triggering mechanisms in order to deal with possible constraint violations. Realization of constraints during design can be obtained through the introduction of constraint checker objects or the transformation of declarative constraint specifications into operational behavior restrictions and checks.

¹ By constraint/business rule, we mean rules defined on the problem domain, restricting certain events or services or enforcing certain business policies. Such a rule describes normal or wanted situations in the problem domain, excluding exceptional, not allowed or alarm situations. Derivation rules are outside the scope of this position paper.

2 Specifying Constraints as First-Class Model Elements

The semantic impact of constraints on a model is such that each instantiation of the static structural model over time must comply with the specified constraints. This means that a system will never arrive in an unsafe state, unless the specified model is faulty and inconsistent. As a consequence, the dynamic model is restricted so that behavioral elements (such as events, messages and methods) cannot occur unless they preserve all constraints. A method that causes a constraint violation must raise an exception or be refused, instead of being executed. As such, the analyst can rely on the constraints and on the correctness of the system at any moment in time. Notice that this relies on the detection of possible constraint violations in order to be able to refuse method execution, or on the presence of a rollback mechanism to undo the previous effects of a method violating certain constraints.

3 Specifying Triggers as a Reaction Mechanism for Constraint Violations

In past experiments, we noticed that a method refusal mechanism was, in most cases, not adequate to avoid constraint violations. In fact, an object that is planning to call a method often knows that this method can cause a constraint violation, and therefore tries to anticipate this by investigating the system state and avoiding the violation. On the one hand, this introduces a lot of unwanted overhead for a method call setup, and duplication of constraint checking code. On the other hand, such an approach is impossible to manage and maintain in the longer term. To overcome these problems, we extended the constraint definition with optional trigger specifications, defining the reaction of the system to a constraint violation. Each constraint can be extended with a number of event specifications that will only fire when the constraint is actually violated. These events can be defined on a general level, on a specific caller object, on a method causing a violation, or on the actual parameters of a method. As such, the specification of the anticipation of a constraint violation is not part of the caller or the callee, but an integral part of the constraint itself.

The semantic meaning of a trigger in reaction to a constraint violation can be given in two distinct ways. On the one hand, an addition trigger can be performed after the method that violates the constraint. In this case the trigger must try to resolve the constraint violation by reforming the system state into a valid one. This can, for instance, be done by extending certain deadlines or creating certain credit notes.


On the other hand, a replacement trigger refuses the whole method causing the constraint violation. Instead, it transforms the method call into another call that conforms to the applicable constraints. This can, for instance, be applied when a certain service is replaced by a newer one, changing the old service call into a call on the new service. Another example is when an unauthorized person requests a certain service: the constraint that enforces the authorization can transform the service call into a registration of the authorization violation in the database. Although this is a different kind of reaction pattern, both reactions can be specified using a method replacement trigger for a constraint violation.
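A minimal sketch of the two trigger kinds follows, in Java. The names and the repair policy are invented for illustration; in the paper, triggers are attached declaratively to the constraint, not hand-coded as methods:

    /** Illustrative domain object. */
    class Account {
        double balance, credit;
        void registerViolation(String msg) { System.out.println("logged: " + msg); }
    }

    /** A constraint object carrying its own trigger reactions. */
    class NonNegativeBalance {
        boolean holds(Account a) { return a.balance >= 0; }

        /** Addition trigger: performed after the violating method, it reforms
            the state into a valid one, here by creating a credit note. */
        void additionTrigger(Account a) {
            a.credit += -a.balance;
            a.balance = 0;
        }

        /** Replacement trigger: the violating call is refused altogether and
            transformed into another call that conforms to the constraint. */
        void replacementTrigger(Account a, double requested) {
            a.registerViolation("withdrawal of " + requested + " refused");
        }
    }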

Business Rules vs. Database Rules - A Position Statement

Brian Spencer

Faculty of IT, University of Brighton, United Kingdom
[email protected]

1 Position Statement

There is a widely held view that production rules (i.e. rules following the E-C-A paradigm) controlled by a database management system represent an appropriate technology for the implementation of business rules. This paper identifies some serious limitations in the scope of this active database paradigm and proposes an alternative technology based on the concept of the provision of rule services within a distributed component architecture. The paper argues that a distributed object architecture enables the power of the E-C-A paradigm to reach beyond the confines of the single database and provide a generalized architecture for flexible, controlled implementation of business rules.

2 Business Rule/Database Rule Mapping

Much research has been carried out over the last fifteen years on the development of active database systems. This research has greatly enhanced our understanding of rule definition and semantics, of rule languages, of events and transactions, and so on. It has also suggested a variety of applications for the technology, including constraint enforcement, view maintenance, derived data, workflow management and controlled replication. Active database technology has also found its way into the mainstream relational database systems of the major vendors. Parallel to these active database developments, a body of work has developed around the concept of "business rules". As the benefits of direct business rule implementation became apparent to researchers and software developers, active databases became the technology of choice for this implementation.

3 Business Rule/Database Rule Mismatch

Despite the fruitful alliance described above, there are serious limitations to the application of active database technology to business rule implementation. Active database management systems deal with database events, database conditions and database actions. Typically, such systems are unable to capture the parameters of business events, to test conditions of non-database objects or to instigate actions beyond the database boundary. To achieve the full power of business rule automation it will be necessary for the business rule system to encompass elements from the entirety of the software system, rather than being restricted to the shared-resource domain of the database management system.

4 An Open Architecture for Business Rule Automation

To address the problems outlined above and to provide an architecture for business rule automation that is sufficiently generalizable, the following requirements are proposed (a sketch of the resulting rule objects follows the list):

1. Rules are repository-based objects invoked under the control of one or more rules engines.
2. Events are explicit objects within the system, capable of exposing their parameters and notifying the rules engine(s).
3. Conditions are arbitrary predicates capable of accessing the state, via the interface, of any relevant objects, including objects beyond the database boundary.
4. Actions are procedures, expressed in a computationally complete language, capable of invoking methods on any relevant objects.
5. Nested transaction models should be supported to allow partial recovery and alternative routes through long transactions.
6. The rule system should be open, making use of distributed component standards such as those provided by CORBA, Java Beans, etc.
7. In order to provide flexibility, rules should bind dynamically to appropriate events, condition objects and action objects.
8. Existing languages should be used whenever possible.
9. The rule system should allow the system designer to decide whether a rule is defined explicitly within the rule base or implicitly in application code.
10. Rules should be subject to mechanisms of authorization, persistence, security and recovery.
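The following Java sketch illustrates requirements 1, 2 and 7. All names are ours, chosen for illustration; they are not part of any of the component standards mentioned above:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // E-C-A elements as explicit, first-class objects (requirement 2).
    interface Event { String name(); Map<String, Object> parameters(); }
    interface Condition { boolean holds(Event e); }   // may consult non-database objects
    interface Action { void execute(Event e); }       // may invoke methods on any object

    /** A rule binds dynamically to an event name, a condition object and an
        action object (requirement 7). */
    class Rule {
        final String eventName;
        final Condition condition;
        final Action action;
        Rule(String eventName, Condition condition, Action action) {
            this.eventName = eventName;
            this.condition = condition;
            this.action = action;
        }
    }

    /** A repository-based rules engine: notified events are dispatched to the
        matching rules (requirement 1). */
    class RulesEngine {
        private final List<Rule> repository = new ArrayList<>();
        void register(Rule r) { repository.add(r); }
        void dispatch(Event e) {
            for (Rule r : repository)
                if (r.eventName.equals(e.name()) && r.condition.holds(e))
                    r.action.execute(e);
        }
    }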

5 Summary

As the "new world order" of distributed objects permeates the world of information systems and object wrappers envelop our non-object systems, our business rule systems have an opportunity to expand beyond their existing home in database systems into the virgin territory which the object revolution has cleared. The development of these new, powerful rule systems will enable the debate to shift from the requirements issue to the question of how we can maximize the potential of explicit business rules to facilitate business change. The author believes that business rules will be carried through to explicit implementation on the basis of a risk and cost analysis, and that explicit business rules will be a key technology in business environments in which frequent policy change is the norm.

Elements Advisor by Neuron Data

Bruno Jouhier (Chief Architect), Carlos Serrano-Morale (Vice President of Software Architecture), and Eric Kintzer (Chief Technology Officer)

Neuron Data

1 Business Rules Automation

In today's changing world, the success of a business depends heavily on its ability to quickly adapt itself to its market and its competition. As more and more business processes are being automated, there is a growing need for Information Technologies that can cope with change. Most of today's Information Systems have been developed with procedural programming languages. Over time, the languages have improved: Object Orientation has made programming safer and has promoted the reuse of components; SQL and scripting have made programming and interaction with data more accessible. These innovations have supported the development of more powerful and more sophisticated Information Systems, but they have only provided very partial answers to the issues raised by rapid changes to business practices. The business policies are spread through the system and often expressed in different procedural languages at different levels (SQL, 3GL, VB). Changing a business policy then requires several steps:

- Find where the policy is expressed (it may be replicated).
- Analyze the impact of the change that needs to be done.
- Modify the procedure that implements it (you need to know the language).
- Rebuild the system (you need to know the tools).
- Test and validate.

This process is tedious and rather inefficient, which explains why companies have difficulties adapting their Information Systems to follow the pace of their business. During the last few years, technology analysts have been advocating a new approach based on Business Rules Automation. In this vision, the business policies are expressed in the form of rules and managed separately from the rest of the IT infrastructure. This brings several major advantages:

- Finding how a policy is implemented becomes easy, because the rules are managed separately.
- Changing rules is easier than changing procedures, because policies are naturally expressed in the form of declarative rules, and because rules are more independent from each other than procedures.
- The system can be quickly updated to take the new rules into account.


- The rules can be tested and validated with a single set of tools. This is much easier than testing and validating logic that is spread through the system and possibly expressed in several languages.

2 Elements Advisor

Neuron Data shares this vision and brings it to reality with Elements Advisor, its new generation of Business Rules engine. Since its creation in 1985, Neuron Data has been a leader in the rules industry, delivering powerful rule engines that have been integrated at the heart of "mission critical" applications in many domains (scoring, configuration, diagnostic). With Elements Advisor, Neuron Data brings to the market a new generation of rules technology specifically designed to capture, manage and execute Business Rules. Elements Advisor is a complete product line that supports the whole cycle of Business Rules applications, from development to deployment and maintenance:

- Advisor Builder is a sophisticated development tool with visual editors, powerful debugging facilities and wizards to assist you in the integration of your rule-based applications with databases, Java objects, CORBA objects, ActiveX objects, etc.
- Advisor Engine is a versatile high-performance rule engine that can either be deployed on an application server or executed directly on a client platform.

Elements Advisor benefits from all the experience that Neuron Data has gathered around rule-based applications for over 10 years, but it also incorporates a number of innovations that make it unique among today's business rules products. Neuron Data designed a brand new product with special emphasis on the following subjects:

Ease of use. The rules are expressed in a natural language. They are easy to write and easy to understand. A very intuitive visual tool assists the development process.

Integration. Advisor can work with Java objects, CORBA objects, ActiveX/COM objects, or on objects that have been retrieved from SQL databases. The development tool includes some intuitive wizards that assist you in connecting the rules to your world. Also, the rule engine can be controlled from Java, to be run as part of a desktop application, as an applet, or on an application server of various flavors:
- Web servers
- CORBA servers
- Publish/subscribe messaging servers
- ActiveX containers such as Microsoft Transaction Server
- Custom Java application servers

Performance. The rules are indexed in a very efficient way and the engine can find very quickly which rules apply to which objects, even if it is monitoring a large number of complex rules. In most practical cases, the rules approach compares favorably to conventional means of expressing business rules.


Altogether, the expressive power, ease of change and performance of Advisor make for a very compelling application architecture. This is illustrated in the figure below.

An Advisor-based application architecture compares favorably with traditional architectures in several ways:
1. The dynamic rules which would otherwise be hard-coded into procedural code are placed in a rule base which can be modified by business analysts, thus allowing the behavior of the application to change without recourse to information technology professionals.
2. Rules that exist in the heads of end users such as salespeople, customer service agents, underwriters, lending agents and administrative clerks can also be described as rules in the Advisor rule base. Thus, when a company wishes to offer its services directly to end customers through the Internet, there is no human "company agent" acting between the customer and the company. Instead, the Advisor rule base acts as that company agent, and the company's customers benefit as if there were a human interceding on their behalf.
3. Advisor supports the introduction of new "knobs" to control an application's behavior. For example, in conventional systems it would require an MIS professional to implement new procedural code to introduce a notion of favored customers (Platinum vs. Gold vs. Silver). In effect, this is a new knob whose settings determine when a customer becomes platinum. Contrast this labor intensive approach with an Advisor-based architecture where the business analysts can simply define rules which determine a customer's tier without having to change anything in the overall application code. Thus, a new knob can be introduced and, furthermore, its settings can be redefined at will as the company's sales team dreams up new ways to determine customer tier levels.
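As an illustration only (hypothetical names; Advisor itself expresses such rules in natural language rather than in code), the customer-tier knob of point 3 could be sketched in Java as:

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.Predicate;

    // Customer-tier "knob": the tier-determining rules are data edited by
    // analysts, not procedural code maintained by MIS staff.
    class Customer {
        double yearlySpend;
        Customer(double yearlySpend) { this.yearlySpend = yearlySpend; }
    }

    class TierRules {
        // Ordered map: the first matching predicate wins.
        private final Map<String, Predicate<Customer>> tiers = new LinkedHashMap<>();
        void define(String tier, Predicate<Customer> when) { tiers.put(tier, when); }
        String tierOf(Customer c) {
            for (Map.Entry<String, Predicate<Customer>> e : tiers.entrySet())
                if (e.getValue().test(c)) return e.getKey();
            return "Standard";
        }
    }

    // Redefining the knob's settings means re-running define(), not recompiling:
    //   rules.define("Platinum", c -> c.yearlySpend > 50000);
    //   rules.define("Gold",     c -> c.yearlySpend > 10000);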

Business Rules Layers Between Process and Workflow Modeling: An Object-Oriented Perspective

Gerhard F. Knolmayer

University of Bern, Institute of Information Systems, Engehaldenstr. 8, CH-3012 Bern
[email protected]

1 Introduction

Business rules can be defined as statements about how the business is done, i.e., about guidelines and restrictions with respect to states and processes in an organization [1996]. Originally, the term was used with reference to integrity conditions in Entity-Relationship models or in NIAM. More powerful business rules can be described according to the Event-Condition-Action (ECA) paradigm developed for active Database Management Systems. From this point of view, business rules trigger an action of the IS, send an alerter to a human actor, or define the feasible space for human actions. Therefore, the rules are not necessarily prescriptive but may also be semi-structured or "soft". Recently, the important role of business rules in understanding the real system and, thus, in system analysis has been stressed [1995, 1996, 1994].
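A minimal Java sketch of the ECA reading of a business rule (hypothetical names; active DBMSs state such rules declaratively rather than in code):

    import java.util.function.Consumer;
    import java.util.function.Predicate;

    // Event-Condition-Action: ON event IF condition THEN action.
    class EcaRule<E> {
        final String eventType;        // event type the rule reacts to
        final Predicate<E> condition;  // checked when the event occurs
        final Consumer<E> action;      // IS action, or an alerter sent to a human actor
        EcaRule(String eventType, Predicate<E> condition, Consumer<E> action) {
            this.eventType = eventType; this.condition = condition; this.action = action;
        }
        void dispatch(String event, E payload) {
            if (event.equals(eventType) && condition.test(payload))
                action.accept(payload);
        }
    }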

2 Process and workflow modeling

Many methods for describing business processes have been developed, ranging from the mathematically well-defined, clear, but in user communication widely unaccepted Petri nets (PN) to the less rigorous but in practice successfully applied "Event-oriented Process Chains" (EPC) [1992, 1998]. PN use states and transitions; EPC employ events, functions and logical connectors (AND, OR, XOR) as base constructs. In practice, the rough, global models of business processes are often not transformed in a systematic way into IS and their control mechanisms provided by workflow management systems (WFMS). A systematic refinement of models is a main paradigm in system development. However, refinement concepts have thus far found only limited interest in process modeling methods.

3 Business rules as components of process and workflow descriptions

The main components of processes and workflows can be described by business rules. These components may be refined in several steps, leading from a process


to a workflow description. The components of different granularity can be stored in a rule repository [1997] which supports the "single point of definition" concept [1997].

4 Business rules layer

One can think of business rules as a standardized representation of business processes. The process models eventually obtained by employing different process modeling tools in decentralized or virtual enterprises or along a supply chain may be transformed into a rule-based description of the business processes [1997]. This business-rule model may be stepwise refined until the system is represented by elementary rules (Fig. 1). This rule layer should be sufficiently detailed to allow an automatic generation of the specifications for the workflow model.

5 An object-oriented view

In the data model of the object class one can define the events and provide the basis for checking conditions. In the method part of the object class one has to care for the detection of certain types of events, the checking of conditions, and the execution of actions. The rules or their components may be re-used at several levels. However, the encapsulation goal cannot be fulfilled because business rules very often reference two and sometimes even more than two context objects. Thus, a rule-object model showing these dependencies is important. It has been proposed to treat business rules as first class objects [1993, 1998] and to relate them to context objects.
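A minimal Java sketch (hypothetical names) of a rule as a first class object holding explicit references to its context objects:

    import java.util.List;

    // A business rule as a first class object: it cannot be encapsulated in a
    // single class because it references two (or more) context objects.
    class RuleObject {
        final String name;
        final List<Object> contextObjects;   // e.g., an Order and a Customer
        final Runnable action;
        RuleObject(String name, List<Object> contextObjects, Runnable action) {
            this.name = name; this.contextObjects = contextObjects; this.action = action;
        }
        boolean references(Object o) { return contextObjects.contains(o); }
    }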


References

[1993] Anwar, E., Maugis, L., Chakravarthy, S.: A new perspective on rule support for object-oriented databases. SIGMOD Record 22 (1993) 2, pp. 99-108.
[1998] Dayal, U., Buchmann, A.P., McCarthy, D.R.: Rules Are Objects Too: A Knowledge Model For an Active, Object-Oriented Database System. In: Dittrich, K.R. (Ed.): Advances in Object-Oriented Database Systems. Berlin: Springer 1998, pp. 126-143.
[1995] Graham, I.: Migrating to Object Technology. Wokingham: Addison-Wesley 1995.
[1996] Herbst, H.: Business Rules in Systems Analysis: A Meta-Model and Repository System. Information Systems 21 (1996) 2, pp. 147-166.
[1996] Herbst, H.: Business Rule-Oriented Conceptual Modelling. Heidelberg: Physica 1997.
[1997] Herbst, H., Myrach, T.: A Repository System for Business Rules. In: Meersman, R., Mark, L. (Eds.): Database Application Semantics. London: Chapman and Hall 1997, pp. 119-138.
[1994] Kilov, H., Ross, J.: Information Modeling: An Object-Oriented Approach. Englewood Cliffs: Prentice Hall 1994.
[1997] Knolmayer, G., Endl, R., Pfahrer, M., Schlesinger, M.: Geschäftsregeln als Instrument der Modellierung von Geschäftsprozessen und Workflows. SWORDIES Report 97-8, University of Bern, 1997.
[1997] Mallens, P.: Business Rules Automation. Naarden: USoft 1997.
[1992] Reisig, W.: A Primer in Petri Net Design. Berlin: Springer 1992.
[1998] Scheer, A.-W.: ARIS - Business Process Modeling. Berlin: Springer 1998.

Business-Object Semantics Communication Model in Distributed Environment

Hei-Chia Wang and V. Karakostas

Department of Computation, UMIST, Manchester M60 1QD, UK
{hcwang, v.karakostas}@co.umist.ac.uk

1 Introduction

Object communication usually uses message passing to transfer the request and the reply. However, in business applications many of the object requests are in fact events which trigger other business objects, and they usually form sequences. Therefore, in business applications, many types of object communication are easier to represent with an event model than with message expressions. Following Steven Cook's argument [1], message sending is an over-specification for the purpose of specifying stimulus-response behaviors. Therefore, communication with events is a better choice when broadcasts happen very frequently [2].

On the other hand, in dynamic invocation, the major source of the requested target object interface is an interface repository that contains the object interface syntax information only [3]. Retrieving the information requires going through a naming or trading service to find the object reference. Actually, such information is not enough for a dynamic call when the object has no knowledge about the target objects, since the major location for the logical search is the trader service, and in CORBA's Trader there is no standard location or format for these data [3]. Therefore, more invocation information is needed when the object uses dynamic calls. A business object represents a real world entity, for example an employee, and is combined with business rules to form a single component in business applications. However, the mapping between the real world business entity and its business object interface implementation is not one to one. This means a real world entity can differ from its distributed object interface [4]. Therefore, business objects can be more difficult to integrate than other types of objects because of the various data architectures required for different business objects [5].

In view of this, a semantic model based on the event logic model is proposed in this paper. This business object semantic model is mainly used for dynamic communication between business objects in a distributed environment. The semantic model covers the business rules of business objects and event logic formats, and can be added on Trader or Naming services. This semantic model is used in a broker to perform business object communication and business object


search. The semantic model contained in the broker is used by the consumer business object to find the related supplier business object through the information described in the semantic model.

2 Event Logic Concept

We use a logic equation to represent an event and its related events and business rules. Each event has its own name and parameters. Every event has its creating object and receiving objects. This information can be used to register, modify and remove information in the repository. Unlike the OMG's interface repository, this repository includes business rules and allows the business application to find the related object and build the relation.

3 Rules and event binding

One of the major contributions of our model is the binding of business rules and events in order to generate the information which leads to the wanted object being found. Business rules can be divided into two categories, namely common business rules and private business rules. Although many business rules are common across similar businesses, there are many business rules that depend on a company's individual requirements, and different rules can apply at different times. These "domain specific" business rules are our focus. These kinds of business rules become part of the event equation. The common business rules, by contrast, are located in the business objects and are fixed. The advantage of dividing common from private business rules is that the business object can be reused very easily, and the non-common rule information just needs to be included in the event description. Rule and event binding uses the rule equation and embeds the rules into the event description. The advantage is that the searching object can describe the business rule and find the related objects through the event description. Moreover, each object can describe its own business rule in its body. It is not necessary that all the business rule interfaces be used, as some rules can be left for future development.
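A minimal Java sketch (hypothetical names; the paper's actual notation is a logic equation, not code) of an event description carrying embedded private rules for the broker to match on:

    import java.util.List;
    import java.util.Map;

    // Event description registered with the broker: name, parameters,
    // creating/receiving objects, plus embedded "domain specific" rules.
    class EventDescription {
        final String name;                      // e.g., "orderPlaced"
        final Map<String, Object> parameters;
        final String creatingObject;
        final List<String> receivingObjects;
        final List<String> privateRules;        // non-common rules travel with the event
        EventDescription(String name, Map<String, Object> parameters,
                         String creatingObject, List<String> receivingObjects,
                         List<String> privateRules) {
            this.name = name; this.parameters = parameters;
            this.creatingObject = creatingObject;
            this.receivingObjects = receivingObjects;
            this.privateRules = privateRules;
        }
        // The broker searches its tables by event name and business rule.
        boolean matches(String eventName, String rule) {
            return name.equals(eventName) && privateRules.contains(rule);
        }
    }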

4 Conclusion

The model in this paper uses business rules and binds them with an event-describing language. Each object registers its business rules, interfaces and possible generated events in search tables. By searching the tables, the related objects and events can be found and linked together by event name and business rule. After the system is dynamically formed, the system managers can check the system flow and decide whether it matches their requirements. Our work therefore improves the concept of dynamic invocation. In our


work, the relation between objects is built automatically. The system flow can be reorganised dynamically, and the designer can change the flow through the specification. The specification of interaction behaviour is based on the relations between objects, events, and business rules. Events with parameters are used as the medium to transfer requests. Our model uses an event-describing language and embeds business rules in the transferred message. Each business object keeps its own business rule in the event's rule parameter to achieve the user's requirement. When a new object is "hooked" into the system, if the existing objects accept the new object's rules, the new object can be merged into the system immediately. The combination of business objects and the binding of business rules with events, the main contributions of our work, provide a powerful technology for searching for server objects in a distributed environment.

References

1. Steve Cook, John Daniels: Object communication. Journal of Object-Oriented Programming, pp. 14-23, SIGS, Sep. 1994.
2. Ian Mitchell: Driven By Events: Using events rather than messages can help decouple a model. ObjectMagazine.com, April 1998. http://www.objectmagazine.com/features/9804/mitchell.html
3. Jon Siegel: CORBA Fundamentals and Programming. OMG Press, 1996.
4. Mark Roy, Alan Ewald: Distributed Object Architecture: Defining and building. Distributed Object Computing, pp. 53-55, Feb. 1997.
5. Open Application Group: OAGIS - Open Applications Group Integration Specification: General view, 1998. http://www.openapplications.org/specs/s1ch01.htm

How Business Rules Should Be Modeled and Implemented in OO

Leo Hermans and Wim van Stokkum

Everest, The Netherlands

1 Introduction

Overwhelmed by the need to specify constraints, most of the OO community does not yet take into account the most important kind of business rules: interrelated derivation rules representing supporting knowledge (e.g. policy or regulation) for the operation of some business process, which need reasoning to be applied (the rule-based paradigm). OOA&D needs a paradigm shift to enable more realistic modeling of business, including these knowledge aspects. This can be achieved by integration of advanced OO methods, like Catalysis and SDF-OO, with CommonKADS, the de facto standard for knowledge modeling. The only development tool that today supports complete integration of the rule-based paradigm with the OO paradigm is Platinum's Aion. So this tool is ready for a structure preserving implementation of the next generation of OO models that will take knowledge aspects into account.

2 Constraints, business rules and knowledge

2.1 Constraint versus inference rules

The concept "business rule" appears more and more frequently in OO literature today. When taking a tour of current literature, we find that a precise meaning of "business rule" is lacking. OO literature as well as data modeling literature mostly elaborate business rules as just constraints: conditions on structural and behavioral aspects of business models as well as design models. Even parts of static model structures are sometimes qualified as rules. If even an association is considered to be a rule, what isn't a rule? Modeling and structure preserving implementation of derivation rules, however, is hardly ever elaborated any further in the OO community. No solution is offered yet for modeling sets of interrelated business rules representing supporting knowledge (e.g. policy or regulation) for the operation of some business process, which need reasoning to be applied. The OO community only considers rules that have no interrelation with each other and therefore can easily be represented as invariants on model elements. The usual solution offered for implementation of derivation rules today is translation to procedural code. This procedural code is hard to maintain, however, because control logic and derivation logic are mixed. The only way to


separate control logic from derivation rules is the introduction of a reasoning engine that chains rules together and thus interrelates them by performing inference search across them. The combination of declarative derivation rules with an inference engine that applies them when needed is often qualified as the rule-based paradigm. The derivation rules involved are usually called "inference rules" or "knowledge rules".

2.2 Paradigm shift from rule-constrained to rule-based

Ian Graham is the only methodologist within the OO community who not only recognizes the need for inferencing, but also stresses the prime importance of modeling inference rules as supporting knowledge for agents performing business processes. Like Ian Graham, we are convinced that OO business modeling needs a paradigm shift in order to significantly reduce the cognitive dissonance between models of the business and system designs. The current paradigm in the data modeling and OO community can be characterized as rule-constrained:
- processes "pre-exist" the rules: constraints are imposed on an existing business process (e.g. training registration);
- rules limit the outcome of the process.
The new paradigm can be characterized as rule-based:
- processes exist to apply the rules (e.g. paying taxes);
- rules define the process;
- rules are interrelated (manuals of rules);
- rules require "reasoning" to be applied.
In between are processes like "credit approval" and "insurance underwriting". Activities would be done in these cases regardless of the rules, but the rules overwhelm the process so as to make it rule based.

Because the rule-based paradigm incorporates the rule-constrained paradigm, while the reverse is not the case, the rule-based paradigm can be considered to be of more strategic importance for business process modeling than the rule-constrained paradigm.

3 OO Modeling and implementation of inferencing rules

3.1 Integration of OOA&D and knowledge modelling

To be able to apply the rule-based paradigm, we need a methodology to model business rules as supporting knowledge for business tasks and agents. CommonKADS has been the de facto process oriented standard for this kind of knowledge modeling for more than 10 years already. Integration of CommonKADS with OOA&D would certainly enhance the ability of OOA&D to model reality more closely. This is also recognized by Peter Fingar: "Most OO methods


are immature. Although current efforts towards standardization of methods and techniques will prove valuable, techniques such as requirements gathering use cases are likely to give way to more powerful approaches such as those used for years in knowledge engineering (i.e. CommonKADS/KADS-II), and ontologies will come to replace data dictionaries." ... "Powerful modeling techniques derived from years of AI experience and research will allow us to more closely model reality, to more fully model the real business." Only a few OO methodologies are capable of modeling knowledge in an object oriented manner: SOMA (Ian Graham) and SDF-OO, a methodology for agent oriented OO knowledge modeling developed by CIBIT in cooperation with Everest. Several knowledge engineering parties in the Benelux started the Cupido project, guided by CIBIT, which aims at further integration of OO modeling (in particular Catalysis, the methodology for component based development, and SDF-OO) with CommonKADS. Platinum is one of the active members. Catalysis has been chosen because it comes very close to CommonKADS already (process decomposition and role based frameworks), while it also offers state of the art component modeling; modeling of intelligent components thus comes into reach. The tooling Cupido has in mind for this OO knowledge modeling and implementation is a combination of Platinum's Paradigm Plus OO CASE tool and Aion, sharing their models in a state of the art repository.

3.2 Implementation of inferencing rules with Aion

Platinum's AionDS has been an implementation tool enabling structure preserving implementation of CommonKADS models for many years already. The newest release (version 8, called Aion) offers complete integration of the rule-based paradigm with the OO paradigm. Inference rules can be used to represent constraints as well as derivations. Aion integrates rules and inferencing in a thoroughly consistent manner with the principles of OO. Rules are expressed as discrete IF-THEN language constructs in the implementation of methods, so rule-sets can be represented as methods containing only rules. Moreover, rule-sets are inherited and can be specialized. A rule tab showing only rule-set methods gives an overview of the rules. The first results from the Cupido project give rise to the expectation that it will be quite straightforward to map OO-CommonKADS models to Aion implementations in a structure preserving manner. Rules can refer to attributes as well as methods of classes and instances. Instance rules as well as pattern-matching rules are supported. The inference engine supports forward and backward chaining; the same rules can be used by backward as well as forward chaining. Rules can be activated by calling the rule-set methods. Inferencing is invoked by statements in the implementation of some control method of a business object or an agent object.
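The flavor of rule-sets-as-methods can be suggested with a Java sketch (Aion has its own language; the names and the toy forward chainer here are hypothetical, not Aion's actual syntax):

    // A rule-set represented as a method containing only IF-THEN rules,
    // plus a toy forward chainer that re-fires the set until a fixpoint.
    class CreditApplicant {
        double income; boolean employed; boolean approved; boolean reviewed;

        // "Rule-set method": each statement is one declarative IF-THEN rule.
        boolean creditRules() {
            boolean fired = false;
            if (employed && income > 30000 && !approved) { approved = true; fired = true; }
            if (!employed && !reviewed)                  { reviewed = true; fired = true; }
            return fired;
        }
    }

    class ToyEngine {
        // Naive forward chaining: keep applying the rule-set while rules fire,
        // since newly derived facts may enable further rules.
        static void forwardChain(CreditApplicant a) {
            while (a.creditRules()) { }
        }
    }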

A Reflective Environment for Configurable Business Rules and Tools

Michel Tilman

System Architect, Unisys
[email protected]

1 Context

This paper describes some experiences in implementing business rules within the context of an object-oriented framework. We use the framework to build applications in administrative environments. These applications often share a common business model, and typically require a mix of database, document management and workflow functionality. The framework [TD98] uses a repository to store meta-information about end-user applications. This includes the object model, object behavior, constraints, specifications of application environments (object types, attributes and associations that can be accessed), query screens, layout definitions of overview lists and forms, authorization rules, workflow process templates and event-condition-action rules. Fully operational end-user tools consult this meta-information at run-time, and adapt themselves dynamically to the application specifications in the database. Thus we effectively separate specifications of a particular organization's business model from the generic functionality offered by the end-user tools. Rather than coding or generating code, we develop end-user applications by building increasingly complete specifications of the business model and the various business rules. These specifications are immediately available for execution.

1.1 End-user configurability

One of the main objectives of the framework is a high degree of end-user configurability. Often end-user tailorability is just skin-deep; in our case it involves all aspects of end-user application development. Thus knowledgeable users adapt the business rules, whereas regular end-users adapt the form and overview list layouts and query screens to their own needs. The business rules ensure consistency in all cases, because their specifications are de-coupled from the application functionality. Giving the users access to the development tools is not sufficient. Users are becoming increasingly aware that change is a constant factor and that applications are never truly finished. We take a similar view with regard to the development tools. For this reason we aim to develop most of the development tools in the system itself. Since this requires one or more bootstrapping steps,


we originally started with hard-wired tools, such as an object model editor and a tool to define event-condition-action rules. In a later phase we replaced these editors by applications configured in the system, and discarded the original tools. This way we also re-use all the existing functionality of the end-user tools, such as reporting, printing, importing and exporting, and support by business rules. Another advantage arises from the fact that we no longer need to change the framework in order to customize many aspects of the development tools, such as views, additional management tools or consistency checkers.

1.2 The need for a reflective architecture

The key to this approach is a highly reflective architecture. We not only model the meta-information explicitly, i.e. the structure and relationships of object types, association types and business rules, we also define the meta-model in terms of itself. We store this meta-meta-information in the repository too.

Some examples. While we provide 'off-the-shelf' constraints, such as totality and uniqueness constraints, the constraint mechanism enables the user to design and re-use new constraints. For instance, we can easily define generic exclusivity constraints that are parameterized by the associations (actually the roles) that should be exclusive. In part this is possible because the constraint definitions have access to their own structure. Our authorization mechanism consists of a rule base and a simple inference engine. Although the semantics of the rules allow very dynamic and concise definitions, the authorization mechanism is less well suited for the regular end-user. The reflective architecture enables us to develop simpler tools in the system itself for more casual usage. The former types of business rules are rather passive, i.e. they guard over the consistency of data and business policies. Event-condition-action rules, on the other hand, are much more active: they initialize objects, enforce explicit cascaded deletes and support the user in workflow processes. But we can put them to other uses as well. For instance, since these rules (in fact all types of business rules) are de-coupled from the repository interface (object store) and since the meta-model is expressed in itself, we have the means to enhance the structure of the meta-model and implement additional semantics by means of (amongst others) event-condition-action rules. Thus we keep the kernel meta-model and object store small, and delegate implementation of additional features to configuration in the system itself. For instance, the default behavior in case of totality constraint violations is to raise an exception. In several cases, however, we would like to enforce cascaded deletes automatically. Thus we extended the meta-model with an 'auto-cascaded-delete' flag. We implemented the semantics of the flag by means of event-condition-action rules.
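A minimal Java sketch (hypothetical names; the framework defines such rules in its repository, not in code) of the flag's semantics as an event-condition-action rule:

    import java.util.List;

    // ECA rule implementing the 'auto-cascaded-delete' flag:
    // ON delete(parent) IF the association is flagged THEN delete dependents too.
    class CascadeDeleteRule {
        boolean autoCascadedDelete;                 // the meta-model flag

        void onDelete(Entity parent, List<Entity> dependents) {
            if (!autoCascadedDelete)
                throw new IllegalStateException("totality constraint violated");
            for (Entity d : dependents)
                d.delete();                         // cascaded delete instead of an exception
            parent.delete();
        }
    }

    interface Entity { void delete(); }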

216

M. Tilman

1.3 Using appropriate tools and languages

Rather than chasing a 'one-size-fits-all' goal, we prefer to use appropriate tools and languages for the right job. For one, as we explained before, most of the tools can be reconfigured and enhanced to a large degree. We use Smalltalk as our scripting language in constraints and event-condition-action rules because it is a simple language and it suits our purposes well. For instance, we avoid an impedance mismatch when accessing the framework class library. The scripting language gives the user implicit access to the query language (by means of Smalltalk select-style blocks) and to the authorization mechanism. In our experience, a typical, say, logic-based language is not sufficiently 'expressive' to cover all our needs. Using a 'general purpose' language makes it less suitable for more extensive or formal analysis, however. The query language provides high-level access to the repository, hiding details of the underlying database and type of database. All the database operations are defined in terms of the object model rather than relational tables and columns. We also define authorization rules in terms of query expressions. To support the design of workflow processes we provide a graphical editor. Internally, this editor manipulates high-level event-condition-action rules, rather than generating, say, pieces of code. It is also worth stressing that the various business rules are always triggered, whether we access the repository through the interactive end-user and development tools, or through the scripting language.
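For readers unfamiliar with Smalltalk, a select-style block filters a collection with a predicate; a rough Java analogue (hypothetical names, not the framework's actual query API) is:

    import java.util.ArrayList;
    import java.util.List;

    // Rough Java analogue of a Smalltalk select: block, e.g.
    //   invoices select: [:each | each amount > 1000]
    class QueryExample {
        static class Invoice {
            double amount;
            Invoice(double amount) { this.amount = amount; }
        }
        static List<Invoice> largeInvoices(List<Invoice> invoices) {
            List<Invoice> result = new ArrayList<>();
            for (Invoice each : invoices)
                if (each.amount > 1000)   // the "select" predicate
                    result.add(each);
            return result;
        }
    }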

1.4 Performance issues

When confronted with dynamic approaches such as ours, people often fear that performance may be severely compromised. We feel this need not be the case. For one, although it is a dynamic and very reflective architecture, it is still targeted to a particular use, and thus can be optimized with that goal in mind. For instance, as in any application, paying attention to typical patterns of usage and optimizing for these patterns often yields the most significant benefits. In contrast to more hard-wired applications, however, we need only do these optimizations once, at the framework level. If done appropriately, most end-user applications and development tools ultimately benefit. Even more importantly, the very reflective nature of the architecture lends itself well to optimization, as the tools can reflect on the business rules to a large degree. The authorization rules, for instance, when executed single-mindedly, tend to be rather computation intensive in some of our applications. By analyzing the rules, the tools can automatically deduce whether any rules need to be checked at all.

References

[TD98] Michel Tilman and Martine Devos: Application Frameworks, chapter: A Repository-based Framework for Evolutionary Software Development. Wiley Computer Publishing, 1998.

Business Process Modeling – Motivation, Requirements, Implementation

Ilia Bider and Maxim Khomyakov

IbisSoft, Box 19567, SE-10432 Stockholm, Sweden
Magnificent Seven, 1-st Baltiyskiy per. 6/21-3, Moskva, Russian Federation
[email protected], [email protected]

Abstract. The paper discusses business process modeling from the viewpoint of application system development. It highlights the need for a business process model (BPM) when developing a new generation of applications. Requirements for a BPM suitable for application development are specified, and a version of a BPM that satisfies the requirements is presented.

Motivation

Any business application, or in fact any computer system, can be evaluated from two major aspects: functionality and quality. Functionality determines the usefulness of the system to the end-user, while quality determines the tolerance to changes in the business environment and computer technology. Nowadays, we can observe a shift in system requirements that concerns both functionality and quality. The shift in functionality may be described as a transition from systems that are essentially human-assisting to those that are human-assisted. A human-assisting system helps a human being only to perform certain functions, e.g. write a letter, print an invoice, etc. The connection between those functions and the objective of the whole process is beyond the system's understanding; this is the prerogative of the human participant. In a human-assisted system, the roles are reversed: the system knows the process and does all the bookkeeping. When the system can't perform an action on its own, it asks the human participant for help. The shift in quality reflects the fact that we live in a world that changes faster and faster. The changes concern both business reality and technology. As the changes happen during the system's lifetime, the system should possess a degree of reality tolerance, i.e. adaptability to both kinds of changes. The key to obtaining the properties of human-assisted behavior and reality tolerance lies in business process modeling. The model should help the system to control the business processes, and it should be separated from the software structure so that changes in the structure and the model can be made independently of each other.

Requirements

When building a model to represent business processes, the following essential features should be taken into consideration:
- A business process may stretch over a long period of time; some processes may take years to complete.
- A business process often requires human participation.


- Several people may participate in the same process. Some are engaged simultaneously; others will be working on the process at different times.
- The same person normally participates in many business processes simultaneously.

To be of help in controlling business processes, the business process model (BPM) should be able to represent the following concepts:
- The objective of the process (aim, goal).
- Its current state (how far is it to the goal?).
- Its history (what was done in the frame of the process in the past? Who did what?).
- The business rules which determine the evolution of the process in the future (from the current state to the goal).

Implementation

Our version of the BPM [1-4] is based on the idea of representing a business process as an object whose state, at any given time, reflects the state of the process. The state is defined as a complex structure that includes attributes, sub-objects and active units which we call planned activities. The objective of the process is specified by a set of equations that show when the state of the process-object can be considered final. The process history is defined as the time-ordered sequence of all previous states of the process-object. Besides, each time the state of the process changes, a special event object is born that registers additional information about the transition from one state to another, like the date and time when the transition occurred in the real world, the date and time when it was registered in the system, the person whose actions caused the changes in the object (if applicable), his comments on the event, etc. The process is driven by the execution of activities. An activity may be completed automatically, or with human assistance. Business rules express the laws that define when the process is in a correct state. The state is considered to be correct when it includes all activities required for the process to move in the right direction (towards the goal). If this is not the case, the rules ensure that those activities are added to the process plan. When planning and executing activities, not only the current state is taken into consideration, but the process history as well. This ensures that the process goes through all steps required for reaching the goal, without any illegal shortcuts that could be introduced by human participants, intentionally or by mistake.
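A minimal Java sketch of the process-as-object idea (hypothetical names; the authors' formalism uses state equations and event objects rather than code):

    import java.util.ArrayList;
    import java.util.List;

    // Business process as an object: current state, history of previous
    // states, planned activities, and rules that re-plan activities.
    class ProcessObject {
        List<String> stateAttributes = new ArrayList<>();
        List<List<String>> history = new ArrayList<>();     // time-ordered previous states
        List<String> plannedActivities = new ArrayList<>();

        boolean goalReached() {                             // stand-in for the state equations
            return stateAttributes.contains("invoicePaid");
        }

        // A business rule: if the state lacks a required activity, plan it.
        void applyRules() {
            if (!goalReached() && !plannedActivities.contains("sendInvoice"))
                plannedActivities.add("sendInvoice");
        }

        void transition(String newFact) {
            history.add(new ArrayList<>(stateAttributes));  // record the previous state
            stateAttributes.add(newFact);                   // an event object could log who/when
            applyRules();
        }
    }

References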

1. Bider, I.: ObjectDriver - a Method for Analysis, Design and Implementation of Interactive Applications. In: Data Base Management (22-10-25). Auerbach (1997).
2. Bider, I.: Developing Tool Support for Process Oriented Management. In: Data Base Management (26-01-30). Auerbach (1997).
3. Bider, I., Khomyakov, M.: Object-Oriented Model for Representing Software Production Processes. In: Bosch, J., Mitchell, S. (eds.): ECOOP'97 Workshop Reader, LNCS 1357, Springer (1997).
4. Bider, I., Khomyakov, M., Pushchinsky, E.: Logic of Change: Semantics of Object Systems with Active Relations. In: Broy, M., Coleman, D., Maibaum, T., Rumpe, B. (eds.): PSMT - Workshop on Precise Semantics for Software Modeling Techniques. Technische Universität München, TUM-I9803 (1998) 11-30.

An Integrated Approach to Object-Oriented Modeling of Business Processes

Markus Podolsky

Technische Universität München, Institut für Informatik, Orleansstr. 34, D-81667 München, Germany
[email protected]

Abstract. Object-oriented approaches often exhibit limitations in the area of business process modeling. In this paper we present a way that should help to bridge the gap between object-orientation and business process modeling. Our approach is based on a combination of object-orientation and workflows and offers five graphical modeling techniques which have clear operational semantics. This enables features like simulating the modeled system or checking whether runs of the system satisfy certain properties, and facilitates an integrated development process.

1 Introduction and Motivation

Many object-oriented methods, like for example OMT [6] or UML [1], emphasize the structural aspects of the software system to build. The techniques they offer for describing system behavior are not expressive enough to cover all relevant aspects of business processes and object interaction. Therefore, object-oriented approaches exhibit limitations in the context of business process modeling. How could a suitable approach for object-oriented modeling of business applications look? First of all, the approach should emphasize the business processes and the activities that take place therein, because these aspects build the backbone of business applications. Besides that, we need expressive techniques to model the complete behavior of a system and not only exemplary scenarios. It should be possible to simulate the modeled system or to check whether it satisfies certain properties. That means the modeling techniques should have clear semantics. In this paper we present a modeling technique that should help to bridge the gap between object-orientation and business process modeling. Our approach is based on a combination of object-oriented models and ideas from workflow modeling, cf. [3]. It offers five graphical modeling techniques with clear operational semantics given by a mapping of the description to a certain kind of high-level Petri nets [5]. This enables important features like simulating the system models or generating code from them. Moreover, we are also able to automatically check whether runs of the modeled system satisfy certain properties, which can be formulated using a temporal logic with object-oriented extensions [2]. These features allow an integrated support of the development process.

2 Modeling Techniques

Our approach is based on five graphical modeling techniques. Their main ideas will be sketched briefly in this section.

The workflow diagram is used to cover the relevant aspects of business processes. It originates from Petri nets and is used to define a high-level description of the system to build. The structure of business processes is modeled using activities and containers, which can be compared to transitions and typed places of high-level Petri nets. The workflow diagram shows the flow of data objects that are stored in the containers and the causal dependencies between the activities. Activities take references to data objects from their input containers, perform some actions on them, and put output objects into their output containers. To each activity a corresponding method of an instance is assigned. Whenever an activity should be performed, the method of the corresponding instance will be executed with the input objects as arguments. Inscriptions of the workflow diagram allow one to formulate pre- and post-conditions for input and output objects of activities and to describe branches of the workflow. The refinement of activities using sub-workflows with the same input and output connections and corresponding inscriptions allows one to hierarchically structure the process models.

The class diagram is similar to that of other object-oriented methods. It is used to define the structure of classes and the relationships between them. Each class description consists of a class name, state variables, reference variables and methods, and relationships between classes like association, aggregation, and inheritance. Besides that, the class diagram offers constructs to describe synchronization properties. Methods can be classified as reading and writing methods, which means that for each instance writing methods have to be performed mutually exclusively, as does the execution of writing and reading methods. Guard-conditions for methods provide an additional mechanism to express synchronization properties.

The instance diagram shows the instances that exist at the system start and of which class they are. It shows which other instances they know, according to the corresponding aggregation or association relationship defined in the class diagram. When describing the workflow diagram we have already mentioned that to each activity an instance is assigned that performs the tasks of the activity. Of course, each of these instances has to be defined in the instance diagram. Additionally, this diagram allows one to define the initial marking of containers, that is, to define which data objects they contain at the system start.

Object behavior charts (OBC) provide means to define the interaction of objects as well as their interior behavior [4]. They can be seen as a visual programming language and offer graphical high-level constructs that can be used for a detailed description of object methods. Their main advantage lies in the fact that they allow one to explicitly describe the interaction of objects and to specify which actions take place when a method is executed. The programming constructs that object behavior charts offer are object creation, method call, iteration, conditional or nondeterministic branch, state change, assignment, object passing, and return. An explicit construct for passing objects to containers allows a


seamless embedding of OBC descriptions into workflow models. Object behavior charts provide a composition mechanism using the call relationship between methods. That means they allow one to specify complex interactions by integrating the object behavior charts of the called methods.

State transition diagrams are used to model the lifecycle of a class by means of a kind of finite state automaton. For each state variable defined in the class diagram, an automaton can be defined that shows how the values of the state variable change due to method calls. The state of an object is given by the cartesian product over the values of its state variables. Additionally, state transitions can be labeled with guard conditions in order to express dependencies between the state variables of an object. In general, state transition diagrams are useful for describing the lifecycle of data objects because of their passive character. For modeling the behavior of activity objects, which are the central points of interaction, we use object behavior charts.
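The activity-and-container scheme of the workflow diagram can be suggested with a small Java sketch (hypothetical names; the approach itself is graphical and maps to high-level Petri nets):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Containers as typed places, activities as transitions: an activity takes
    // data objects from its input container, calls the assigned instance method,
    // and puts the result into its output container.
    class Container<T> {
        private final Deque<T> objects = new ArrayDeque<>();
        void put(T o) { objects.add(o); }
        T take() { return objects.poll(); }
        boolean isEmpty() { return objects.isEmpty(); }
    }

    class ApproveOrderActivity {
        void fire(Container<String> in, Container<String> out) {
            if (in.isEmpty()) return;          // precondition: an input object exists
            String order = in.take();
            out.put(order + " [approved]");    // stands in for the assigned method
        }
    }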

3 Conclusions

In this paper we present a semantically well-founded approach to object-oriented modeling of business processes. We combine object-oriented technologies and ideas from workflow modeling to cover all relevant aspects of the system from different views. The workflow model shows the activities of the business processes and their causal ordering. Each activity of the workflow diagram is assigned to a method of a concrete instance. So, when an activity should be executed, the method of the corresponding instance is called. Object behavior charts allow one to specify the actions of object methods and lead to a well integrated, object-oriented model of the system to build.

References

1. Booch, G., Rumbaugh, J., Jacobson, I.: The Unified Modeling Language. Rational Software Corp. 1997
2. Frey, M., Oberhuber, M., Podolsky, M.: Framework for Testing based Development of Parallel and Distributed Programs. In: Proc. of the 3rd Workshop on Parallel and Distributed Software Engineering, IEEE Computer Society 1998
3. Hollingsworth, D.: The Workflow Reference Model. Workflow Management Coalition, Document Number TC00-1003, Brussels 1994
4. Podolsky, M.: Visual Programming using Object Behavior Charts. To appear in: Proceedings of the 2nd Workshop on End-User Development, University of Hull Press
5. Reisig, W.: Petrinetze: Eine Einführung. Springer-Verlag, Berlin 1986
6. Rumbaugh, J.: Object-oriented Modeling and Design. Englewood Cliffs: Prentice-Hall 1991

Enterprise Modelling

Monique Snoeck (K.U.Leuven - DTEW, Naamsestraat 69, 3000 Leuven, [email protected])
Rakesh Agarwal (Infosys Technologies Limited, Near Planetarium, Bhubaneswar, India, [email protected])
Chiranjit Basu (Cincom Systems Inc, [email protected])

1 Introduction

General models of software development focus on analysing data flows and transformations. This kind of modelling only accounts for organisational data and that portion of the process which interacts with the data. The correct integration of information systems in the business administration requires, however, a more integrated approach to system specification. The re-framed Zachman framework for information systems architecture [7, 5] proposes a layered approach to the specification of information systems that puts information systems in a much larger context. In the context of this paper we will discuss the second and third layers of this framework, namely the business layer and the service layer. Some specifications may originate from a required business functionality, addressing the essential components of a particular business value chain. Other specifications may address more an administrative type of functionality (including input facilities, generation of output reports, and electronic data interchange formatting, as examples), referred to as information services in general. The specifications addressing the first type of functionality constitute an enterprise model (also called business domain model) that contains the relevant domain knowledge for a business administration (including business objects, business events as well as business constraints). Around this enterprise model a service model is developed as a set of input and output services offering the desired information functionality to the users of the information system. Output services allow the users to extract information from the enterprise model and present it in a required format on paper, on a workstation or in an electronic format. Input services provide the facilities for adding or updating information that is relevant for the business administration. As a result, enterprise modelling addresses those issues that are still relevant even if there is no information system. On the other hand, the service or functionality model is concerned with the information system's functionality. However, the service model can be put in the broader context of workflow and business processes. Most current software development methods make no distinction between such business and information functionality. They typically group in a business object not only the core business attributes and business routines, but also input and output procedures. Some methods offer somewhat analogous concepts. Already in [1] the necessity for specifying "real world models" was pioneered. OOSE [2], with some inherited concepts in UML, also allows one to distinguish entity objects, as opposed to


interface and control objects. In these methods, however, the choice of techniques is critical in addressing the appropriate functionality level: the use of flows or streamed communication mechanisms (such as message passing) may contain implicit implementation choices, which should be addressed in the implementation, and not in the specification. The separation between enterprise modelling and service modelling is an essential element in the integration of information systems in the business administration in general. The next paragraphs exemplify this by demonstrating the value of an enterprise model for the evaluation and implementation of an ERP package, by showing how an enterprise model already contains the basic building blocks for task and workflow modelling, and by illustrating how the object-oriented paradigm lets us go beyond the matrix organisation of enterprises.

2 ERP Implementation

Many manufacturing companies are taking advantage of recent advances in software and hardware technology to install integrated information systems called Enterprise Resource Planning (ERP) packages. These systems can provide seamless and real-time data to all who need it. However, the tasks of selecting and implementing such systems can be daunting. Essential steps in the implementation of an ERP package are the development of an enterprise model, business process re-engineering and the identification of desired information system services [3]. The development of an enterprise model allows one to gain better insight into the business functioning. The enterprise model centralises all the business rules that remain valid even if there is no supporting information system. The service model relies directly on these business rules and consists of a collection of services that allow one to maintain these rules and question the enterprise data. As a result, any business process re-engineering can act at two levels. A change in the work organisation that leaves the fundamental business rules unchanged operates at the level of the service model only. A more radical business re-engineering will also change the basic business rules contained in the enterprise model. Such a change will nearly necessarily induce a change in the work organisation. In the context of the evaluation and acquisition of an ERP package, business modelling allows one to match one's own business rules against the business rules supported by the ERP package. The more of the own business rules are supported by an ERP package, the fewer changes in business functioning will be required when implementing that package. In addition, the separation of business rules from functionality requirements allows for better insight into the cost of requested changes: changes to the enterprise model equal changes in the basic business rules. Hence, these kinds of changes will in general be more costly than changes to services. Indeed, a change in the enterprise model will generally require changes to all services that are based on the modified portion of the enterprise model. A full comparison of the own enterprise model and the enterprise model supported by the ERP package allows one to better evaluate which part of the business will have to be adapted to the use of this package. As the modification of business rules implies a more fundamental change in business


functioning than a modification in tasks and workflow, the availability of a business model is an interesting tool in any ERP evaluation process.

3 Task and Workflow Modelling

3.1 Object Interaction in Enterprise Modelling

In the enterprise modelling approach of MERODE [5], object interaction is not modelled by means of message passing, but by means of business events that involve enterprise objects. The argument is that the concept of message passing is too implementation biased. Imagine for example a library where members can borrow copies. In this type of domain, MEMBER and COPY are examples of business object types. In a classical object-oriented approach, the fact that a member borrows a copy can be modelled by means of a message from the member to the copy or by a message from the copy to the member. However, at the level of business domain modelling, it would be irrelevant to debate the direction of the message. Indeed, in reality, neither of the objects is sending a message to the other. What really happens is that on occurrence of a borrow event, two objects are involved. The implementation bias of message passing becomes even more apparent when the process of borrowing a copy is modelled by means of sequence charts or interaction diagrams [4, 2]. As one can see from this little example, at the business domain modelling level, object interaction can be specified in a natural way by identifying business events and indicating which objects are involved in these events. For the library example this means that we define a business event called "borrow" and that the two domain objects MEMBER and COPY are involved in this event. In order to specify the effect of the business event borrow, both involved objects are equipped with a method 'borrow', and it is agreed that these methods will be executed simultaneously upon occurrence of a borrow business event (a minimal sketch follows the property list below). As a result, in MERODE, an enterprise model is not only composed of enterprise objects, but also of business events. Business event types are characterised by the following properties:

- A business event is atomic, that is, it occurs or is recognised at one point in time; its possible duration can be abstracted.

- A business event matches something that happens in the real world within the universe of discourse; it is not just an information system event.

- An event is not decomposable; it cannot be split into sub-events. The grouping of event types into transactions or tasks is dealt with when modelling input services.
When introducing additional detail in the analysis model, business events can be associated with operations on the involved enterprise objects.
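The borrow example might be sketched as follows (hypothetical names; MERODE is a specification approach, and this Java fragment only illustrates the joint-participation idea):

    // A business event involves several enterprise objects: on occurrence of
    // "borrow", the corresponding method is executed on every involved object.
    class Member { void borrow() { /* update the member's loan count, etc. */ } }
    class Copy   { void borrow() { /* mark this copy as lent out */ } }

    class BorrowEvent {
        final Member member; final Copy copy;
        BorrowEvent(Member member, Copy copy) { this.member = member; this.copy = copy; }
        void broadcast() {
            // no message direction: both participants react to the same event
            member.borrow();
            copy.borrow();
        }
    }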


3.2 Modelling Business Processes

As explained previously, the functionality of an information system is conceived as a set of input and output services around the enterprise model. The separation between an enterprise model and a service model leads to a natural partition of systems into three weakly-coupled subsystems, each corresponding to a single aspect of the problem and evolving independently of all other aspects. The three independent aspects are:
- the business logic (enterprise model),
- the work organisation (the input subsystem),
- the information needs (the output subsystem).
Business events have a prevalent role in the modelling of input services. Indeed, input services gather information about events or groups of events occurring in the real world, and they broadcast business events to business domain objects. For example, upon arrival of a patient in the hospital, an input service will allow one to gather all required information about this patient and notify the PATIENT business domain object of the occurrence of a create-patient event. In many cases, an input service broadcasts the same event to several enterprise objects of the same or of different types. If needed, the input services can inspect the state vector of enterprise objects to obtain all information required for producing event messages. Input services will often call groups of business events. There are two reasons to group business events: to ensure consistency and to account for task organisation. As business events have to be atomic, they do not necessarily keep the object base in a consistent state. For this purpose, we need consistent event types. Assume for example a Mail Course Company: students are only of interest to the company as long as they are enrolled for a particular course. As a result, students must mandatorily be enrolled for a course. The object model for the Mail Course Company contains an object type STUDENT and an object type ENROLMENT. Atomic event types are (a.o.) enrol, cr_student, end_enrolment, and end_student. The mandatory nature of the association between STUDENT and ENROLMENT (each student must be enrolled at any time) implies that when a student is created, an enrolment must be created as well. Similarly, when the last enrolment is ended, either the student must be enrolled for a new course or the student must be ended as well. Ensuring this type of consistency gives rise to the definition of consistent event types, which are groupings of atomic event types. Before and after the occurrence of a consistent event, the database must be in a consistent state, that is, a state where all constraints are satisfied. Another reason to group events is the definition of units of work or tasks. For example, let us assume an enterprise model for Hotel Administration with object types CUSTOMER and STAY. It is most likely that new customers will be registered when they arrive at the hotel for their first stay. Therefore the grouping of the event types create_customer and create_stay is an example of a useful work unit or task. The input subsystem can thus be conceived as a set of successive layers. At the lowest level are the atomic event types that have been defined in the enterprise model. The second level groups these atomic event types into consistent event types where necessary. Finally, at the highest level are the tasks, which can in turn be grouped into more complex tasks. The lowest level tasks are composed of one, possibly more

226

M. Snoeck, R. Agarwal, and C. Basu

consistent events. In order to ensure consistency, only consistent events should be available to tasks. When the work organisation changes, events will be grouped differently. This only affects the input subsystem and not the output subsystem, nor the enterprise model.
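To make this layering concrete, the following sketch shows how a consistent event type might group the atomic event types cr_student and enrol so that the constraint "each student must be enrolled at any time" holds before and after its occurrence. It is a minimal illustration in Java with hypothetical class and method names; the paper defines these notions at the modelling level only.

    // Hypothetical sketch: a consistent event type grouping two atomic
    // event types (cr_student and enrol) into one unit, so that the
    // object base satisfies all constraints before and after the event.
    class Student {
        final String name;
        Student(String name) { this.name = name; }
    }

    class Enrolment {
        final Student student;
        final String course;
        Enrolment(Student student, String course) {
            this.student = student;
            this.course = course;
        }
    }

    class CreateStudentConsistently {
        // Occurrence of the consistent event performs both atomic events;
        // a student is never observable without an enrolment.
        Enrolment occur(String name, String course) {
            Student s = new Student(name);   // atomic event: cr_student
            return new Enrolment(s, course); // atomic event: enrol
        }
    }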

4 Beyond the Matrix

Technology has always played a critical role in both facilitating and demanding change in organization structure. Functional representation of the organization has been the dominant principle, especially since the advent of the industrial age. In later years, when the vertical stovepipes of functional specialization proved limited in their capacity to respond to changed technological and economic environments, a new perspective was brought to the defining principle of an organization. A process view was considered more relevant, as it avoided the ingrained alleyways of vertical specialization and the vested interests that infested those alleyways.

Defining organizations along only one dimension, either function or process, is however not enough. If the challenge is to manage the changeability, scalability and competencies of an organization in an environment defined by simultaneous multidimensional change, then the organizing principle must be closer to the characteristics of organic phenomena than to the austere clarity of an engineering component. It must be closer to the self-organizing phenomena of chaos theory than to the static building-block metaphors of the factory.

The Object-Oriented Organization is a natural extension of object-oriented information systems. The OO organization model will have a natural and direct correspondence to an OO information model, thereby allowing, for the first time, the much sought-after enterprise model to achieve the state of self-consistence required for its viability and stability. Without such correspondence, an enterprise model is ripped apart by the static nature of existing enterprise models and the dynamic capabilities of multidimensional OO-based information models pulling in opposing directions.

The Object-Oriented Organization model bases itself on self-organizing and self-consistent competency constructs that can go through phase and state changes without losing their inherent definition. It is a competency definition that can interact in both the functional plane and the process plane depending upon contextual demands, and can move from plane to plane without having to compromise its integrity to the vested interests of either plane, as the competency definition would be at a level that encompasses those interests. The political power of hierarchical command and the technical power of process expertise would both be modeled in this higher-level organizational object. It would thus provide a 'wormhole' pathway for the task being processed to move from one plane to another, and to do so in a time continuum, without creating a fundamental breakdown in the functional and/or process organizations. The matrix is resolved at a higher dimension, that of the Organizational Object.

5 Conclusion

Enterprise modelling is an important activity if we wish to correctly integrate our information systems into the business administration, whether they are developed from scratch or composed by means of an ERP package. It allows us to gain better insight into the functioning of the business and into the implications of a change in business rules for the work organisation. Finally, the object-oriented approach to enterprise modelling makes it possible to go beyond the uni-dimensional functional or process organisation. It was further argued that joint participation in common business events is a communication mechanism better suited to enterprise modelling purposes than message passing. We have shown how business events can be used as basic building blocks for the development of an input subsystem: tasks can be defined as aggregates of business events. In addition, this ensures a flexible system architecture: when the work organisation changes, it suffices to define new groups of events corresponding to the newly defined tasks.


Requirements Capture Using Goals

Ian F Alexander ([email protected])
Scenario Plus, 17A Rothschild Road, London W4 5HS, England

Introduction

Business processes may be carried out by people or by systems. In either case, the most intuitive model of a process is a sequence of tasks [4], performed in order to achieve a goal. But life and business are full of choices: events may dictate a change at any time. A better model of a business process is therefore not a sequence but a task model which contains branch points [2]. An exceptional case may lead to a secondary sequence of tasks which branches off from the primary sequence. There may be several alternatives available, each leading to a different set of subtasks; or there may be some subtasks which can, or must, be performed in parallel with each other. Finally, some tasks may recur, either periodically or at irregular intervals. The resulting structure is a typed hierarchy, representing in compact form a large number of possible real-world scenarios. Different scenarios can be generated by choosing different combinations of parallel subtasks (if they are not all mandatory), or by executing them in different orders.

The names of tasks and subtasks in such a hierarchy can be chosen to represent user (or system) Goals. The top-level goal is to solve the problem described by the users or, in the case of a system model, to achieve the primary purpose for which the system is being developed. Goals are less likely to change than activities and processes, as they reflect aims and objectives, which are usually more stable [5, 6]. To formalise this concept, every path through a Goal Hierarchy is a Scenario, one particular sequence of events that might occur in the real world (such as interactions with a system), as defined by [3]. A scenario is similar to the Scripts [7] used in artificial intelligence for planning.

Our objective is to describe a way to capture and organize requirements. Much of this requires communication and cooperation with users, and this is much easier to achieve with appropriate conceptual tools. When user requirements are organised into goals that seem natural to users, or when these are built cooperatively with users, the requirements are easier for users to understand and to validate. A hierarchy of goals forms an ideal heading structure for arranging requirements into naturally ordered groups.

Patterns of Goal Execution

In a nested set of goals, the overall aim is to achieve the top-level goal [2, 3, 5]. This top-level goal can be accomplished by executing some combination of its subgoals. The simplest pattern of goal execution is that all the subgoals must be completed in sequence. Other patterns include executing any one of the subgoals (the 'alternatives' pattern), any selection from the subgoals in any order (the 'parallels' pattern), and all the subgoals at once or in any order (the 'strong parallels' pattern).

Especially important for business process modeling is the effective description of exceptions [4]. Whereas in software an exception implies an error, in business an exception goal describes the successful handling of a situation that may be critical to the business. Exception goals can be decomposed to any level of detail. For example, the contingency precondition 'Aircraft engine fails' gives rise to a top-level exception goal 'Complete flight safely'. Possible exception subgoals might include 'Restart engine', 'Fly without failed engine', and 'Make emergency landing'. These are instances of two common goal patterns, namely 'Resume normal operations after handling contingency' and 'Stop operations safely after contingency'. Such patterns of goal execution can be stored in a library [2] and instantiated when needed. The library pattern 'Operations with Shutdown' (Figure 1) combines both of these exception-handling patterns with a normal operations cycle. On screen, the tool uses colours to distinguish the different goal types and a tree diagram to show the hierarchy. The goal types are indicated here as labels with the type's name in parentheses, and the hierarchy is shown by indentation. Goals without labels are of type Sequence. The link type is indicated by the number of the goal to which the link points, e.g. '[27]'. In Figure 1, goal 24 offers the user two alternatives: i) to resume the normal Operations Cycle, by allowing the periodic goal 16 to be repeated; or ii) to shut down the cycle of operations by invoking the Shutdown goal 27. To use the goal pattern, the requirements engineer instantiates it in the desired place in a goal hierarchy and changes the wording of the individual goals to describe the actual situation. Requirements and constraints for each goal can then be attached.

15 Operations with Shutdown
  16 Operations Cycle (Periodic)
    17 Normal First Step
    18 Normal Next Step
    19 Operating OK? (Alternatives)
      20 Yes, Continue Cycle
      21 No, Contingency (Exception)
        22 Record the Event
        23 Take Remedial Action
        24 Remedied Successfully? (Alternatives)
          25 Resume Operations
          26 Shutdown Operations (Link [27])
  27 Shutdown Procedure

Figure 1: A Goal Pattern using Alternatives and Exceptions

Representation

The Goal Hierarchy consists of any number of Goals. Each Goal has zero or more child Goals. The top-level goal names the overall problem to be solved. Each Goal has a Name (a verb phrase), a Short Text and a Description. Each Goal (at any level) has one of the following Types:

- Sequence - an ordered list of subgoals, all of which are mandatory
- Alternatives - a set of exclusive subgoals, i.e. one must be chosen
- Parallels - a set of non-exclusive subgoals. At least one must be chosen; if more than one is chosen, they may be executed in any order, including simultaneously
- Strong Parallels - a set of mandatory subgoals, which may however be executed in any order, including simultaneously
- Link - a reference to another goal, typically used to indicate a contingency procedure (such as for shutting down a process), or to indicate a non-periodic repetition of a goal (such as the reworking of an incorrect process)
- Periodic - a sequence of subgoals repeated indefinitely at a definite frequency
- Exception - a sequence of subgoals which are executed in response to an identified contingency, typically an event but in general any Precondition

The first five are mutually exclusive, although a set of Parallels can degenerate into Alternatives if only one subgoal may be chosen. Periodicity could be treated as an orthogonal attribute of goals, so that a goal could be Periodic + Alternatives, for instance. In practice this (less common) effect can easily be achieved by composing a Periodic goal which has just one child, an Alternatives goal. Similar considerations apply to Exceptions, though in this case there is often a sequence of recovery activities. There is a strong advantage in keeping Exception goals simple, unambiguous, and easy to follow, given that they occur in contingency conditions.

Each goal has an optional Precondition, which is a short text, but which can be configured in a standardised way by a tool. For example, in a Sequence, each goal naturally has the default Precondition ' completed'. Similarly, in a set of Parallels, each goal has the default Precondition ' started'.

Each goal has a Priority, which may be primary, secondary, tertiary, or none. Prioritisation focuses attention on key areas, while allowing the modeler to describe goals which are currently perceived as less important or out of scope. Analysts can choose to show only those goals that are in scope. This can be achieved visually with a tool by filtering on the level of Priority. The goal hierarchy approach thus makes clear the effects of scoping decisions, and allows trade-offs to be evaluated.

Each goal is associated with a list of Actors, which may be classes, such as quality assurance or engineering, or systems. External systems which can cause contingencies can also be listed as Actors. For example, 'Mains Power Supply' can be treated as an Actor in some Exception scenarios involving computer systems (Precondition 'Power fails').
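To make this representation concrete, the following sketch encodes a goal hierarchy as a data structure; the types and fields mirror the description above, but the encoding itself (enum values, field types) is our own assumption and is not part of Scenario Plus.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical encoding of the goal representation described above.
    enum GoalType { SEQUENCE, ALTERNATIVES, PARALLELS, STRONG_PARALLELS, LINK, PERIODIC, EXCEPTION }

    enum GoalPriority { PRIMARY, SECONDARY, TERTIARY, NONE }

    class Goal {
        String name;              // a verb phrase, e.g. "Complete flight safely"
        String shortText;
        String description;
        GoalType type;            // goals without labels default to SEQUENCE
        String precondition;      // optional short text, e.g. "Power fails"
        GoalPriority priority = GoalPriority.NONE;
        List<String> actors = new ArrayList<>();  // e.g. "Mains Power Supply"
        List<Goal> children = new ArrayList<>();  // zero or more child goals
        Goal linkTarget;          // used only when type == LINK

        Goal(String name, GoalType type) {
            this.name = name;
            this.type = type;
        }
    }

A Scenario is then one fully determined path through such a tree, obtained by resolving every Alternatives and Parallels choice.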

Tool Support

A tool can be helpful in making a goal hierarchy, and the scenarios derived from it, accessible to users. Tools can also provide abilities not available with manual methods, for example animation and filtering. A tool with these capabilities, Scenario Plus [1], based on the DOORS requirements engine, is freely available. Figure 2 shows a fragment of a model of a Marketing process during animation. The user has decided to Go ahead (goal 60) with a product launch, and the tool is about to explore the pre-launch sequence (goal 12). Animation proceeds interactively; the user chooses the desired path whenever an Alternatives goal is encountered. Animation involves users in the model. Each animation generates a Scenario, a fully determined path, and these form the basis for acceptance test scripts.

Figure 2: Animating a Marketing Goal Hierarchy

Conclusions

This article has summarized a simple approach to describing business processes as a hierarchy of goals. Goal hierarchies are readily understood by both users and engineers, forming a bridge between their views. There is a close relationship between business goals, actors, and scenarios and object-oriented system development concepts such as object collaboration, roles, and use cases. However, the goal hierarchy is valid and useful for business process description regardless of whether the subsequent development approach is functional or object-oriented.

References
1. Alexander, Ian, Scenario Plus User Guide, http://www.scenarioplus.com, 1997.
2. Alexander, Ian, "A Co-Operative Task Modelling Approach to Business Process Understanding", http://www.ibissoft.se/oocontr/alexander.htm, ECOOP 1998.
3. Cockburn, Alistair, "Structuring Use Cases with Goals", http://members.aol.com/acockburn/papers/usecases.htm, 1997.
4. Graham, Ian, "Task scripts, use cases and scenarios in object oriented analysis", Object Oriented Systems 3, pp. 123-142, 1996.
5. Kendall, E., "Goals and Roles: The Essentials of Object Oriented Business Process Modeling", http://www.ibissoft.se/oocontr/kendall.htm, ECOOP 1998a.
6. Kendall, E., Kalikivayi, S., "Capturing and Structuring Goals: Analysis Patterns", European Pattern Languages of Programming, Germany, July 1998b.
7. Schank, R.C. and Abelson, R.P., Scripts, Plans, Goals and Understanding, Lawrence Erlbaum Associates, Boston, USA, 1977.

'Contextual Objects' or Goal Orientation for Business Process Modeling

Birol Berkem
Independent Consultant / CNAM
36, Av. du Hazay, 95800 Paris-Cergy, FRANCE
tel: +33.1.34.32.10.84, e-mail: [email protected]

Abstract. In this paper, we propose extensions to object-oriented notions to represent business processes. The idea is not to consider object types independently, but to take into account their collaboration based on the context emerging from a goal requirement. To this end, we introduce 'Contextual Objects', which incite objects to collaborate in order to realize a business goal.

Traditional object orientation does not meet business process modeling requirements, since objects do not incorporate the goals for which they collaborate. Indeed, business processes like Order Management, Sales, etc. and their steps (activities) must be driven by a goal in order to take dynamic decisions at any step of the process and to evaluate the progression of activities. Thus a business process may be regarded as a goal-oriented graph, with each step representing a goal-oriented node. Listing attributes and operations within classes does not help to model business process steps, since the goal and the emerging context of objects are not expressed in today's object-oriented diagrams (even in UML's activity, sequence, state-transition or collaboration diagrams). 'Contextual Objects' represent goals and contexts through an object's contextual behaviors, which implicitly express the attributes, relationships and methods that depend on a goal and on a context.

Modeling goals requires that each activity inside a process step is conducted by a driver object [1]. Contextual objects collaborate to realize the goal whenever a driver object enters one of its lifecycle stages. For example, an 'order' incites other objects, such as product, customer, delivery, invoice, etc., to react, depending on the context. UML 1.1 has a work unit [2] whose state is reflected by the state of its dominant entity (driver object) [3]. But nothing is said about the internal dynamics of a work unit, whose detailed description becomes necessary to define the responsibilities of its participating objects. In this way, contextual objects should contribute to modelling activities inside business processes (see Figure 1).

Secondly, we propose the Behavioral State Transition Diagram to model the goal and responsibilities of objects inside each step of a business process. This diagram (Figure 2) represents an activity's internal behavior as a reusable component. As a bridge toward use cases, a Work Unit Class should be considered as a Use Case Class, and Nested Work Units represent target (destination) use case classes or Packages within 'uses / includes' relationships. In that way, we can obtain business-process-driven use cases and their relationships.

In summary, 'contextual objects' provide the following:
- a robust implementation of executable specifications,
- a formal definition of the behavioral state transition diagram, which represents the internal behavior of the 'action states' and their transitions,
- a formal way to find process-oriented use cases and their relationships.

Figure 1. Business Process Steps modeled using behavioral work units (Order (Recording), Order (Delivering), Order (Billing)). Object collaboration is a behavioral stereotype: classes within 'executing' are the input or supplier parts of the interfaces; output objects, origins of flows (dashed lines), play the role of output or client parts.

Figure 2. A Behavioral State Transition Diagram for the delivery step of the Order process (Order (Delivering), Delivery (executing), Product (preparing)). Rounded rectangles indicate the boundary of each individual or nested activity. Dashed rounded rectangles depict the boundary of the objects' collaboration inside each action state.
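As one possible object-level reading of the driver-object idea, offered as an illustration rather than as the paper's own code, the following sketch shows an order, as driver object of the delivering step, inciting its contextual collaborators to react when it enters that lifecycle stage; all names are hypothetical.

    import java.util.List;

    // Hypothetical sketch of a behavioral work unit driven by a driver
    // object: when the Order enters its "delivering" lifecycle stage,
    // it incites the collaborators of that context to perform their part.
    interface Collaborator {
        void react(Order driver);  // contextual behavior, goal-dependent
    }

    class Product implements Collaborator {
        public void react(Order driver) { /* verify the product */ }
    }

    class Delivery implements Collaborator {
        public void react(Order driver) { /* execute the delivery */ }
    }

    class Order {
        private final List<Collaborator> deliveringContext;
        Order(List<Collaborator> deliveringContext) {
            this.deliveringContext = deliveringContext;
        }

        // Entering the "delivering" stage drives the whole work unit.
        void startDelivering() {
            for (Collaborator c : deliveringContext) {
                c.react(this);
            }
        }
    }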

References
1. Berkem, B., "BPR and Business Objects using the Contextual Objects Modeling", 8th International Software Engineering & Its Applications Conference, Univ. Leonard de Vinci, Paris, November 1995.
2. UML Summary, version 1.0, Business Modeling Stereotypes, January 1997.
3. UML 1.1, Extensions for Business Modeling: Stereotypes and Notation, www.rational.com.

Mapping Business Processes to Software Design Artifacts

Pavel Hruby
Navision Software a/s
Frydenlunds Allé 6, 2950 Vedbæk, Denmark
Tel.: +45 45 65 50 00, Fax: +45 45 65 50 01
E-mail: [email protected], Web site: www.navision.com (click services)

Abstract. This paper explains the structure of a project repository which enables you to trace business processes and business rules to the architecture and design of the software system. The structure identifies types and instances of business processes, which are mapped to software design artifacts by means of refinements, realizations and collaborations at different levels of abstraction.

Even when using a visual modeling language such as UML, a useful specification of a business system is based on precisely defined design artifacts rather than on diagrams. The design artifact determines the information about the business system; the diagram is a representation of the design artifact. Some design artifacts are represented graphically in UML, some are represented by text or tables, and some can be represented in a number of different ways. For example, the class lifecycle can be represented by a statechart diagram, an activity diagram, a state transition table or in Backus-Naur form. Object interactions can be represented by sequence diagrams or by collaboration diagrams. The class responsibility is represented by text.

Business processes, shown in UML as use cases, are considered as collaborations between organizations, business objects, actors, workers or other instances in a business system. A business process (use case) is a type of collaboration, specifying the collaboration responsibility, goal, precondition, postcondition and the operations involved in the collaboration. A business process instance (use case instance) is an instance of a collaboration, specifying concrete sequences of actions and events.

Fig. 1 shows the relationships between the design artifacts specifying business processes and the logical design of the software system. Artifacts are structured according to the level of abstraction: the organizational level, the system level and the architectural level. At each level of abstraction and in each view, the system can be described by four artifacts: the classifier model (specifying static relationships between classifiers), the classifier interaction model (specifying dynamic interactions between classifiers), the classifier (specifying classifier responsibilities, roles and static properties of classifier interfaces) and the classifier lifecycle (specifying dynamic properties of classifier interfaces).

The classifier model is represented by a static structure diagram (if the classifiers are objects, classes or interfaces), a use case diagram (if the classifiers are use cases and actors), a deployment diagram (if the classifiers are nodes) or a component diagram in its type form (if the classifiers are components). The classifier interaction model is represented by a sequence or collaboration diagram. The classifier is represented by text. The classifier lifecycle is represented by a statechart, an activity diagram, a state transition table or Backus-Naur form.
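As a rough illustration of this recurring four-artifact pattern, the repository structure could be encoded as follows; the type and field names are our own shorthand for the artifacts of Fig. 1, not something the paper prescribes.

    // Hypothetical encoding of the repository structure: each level of
    // abstraction holds the same four artifacts in each view.
    enum Level { ORGANIZATIONAL, SYSTEM, ARCHITECTURAL }

    class Artifact {
        final String name;            // e.g. "System Use Case Model"
        final String representation;  // e.g. "sequence diagram", "text"
        Artifact(String name, String representation) {
            this.name = name;
            this.representation = representation;
        }
    }

    class ViewAtLevel {
        final Level level;
        final Artifact classifierModel;        // static relationships
        final Artifact classifierInteractions; // dynamic interactions
        final Artifact classifier;             // responsibilities, interfaces
        final Artifact classifierLifecycle;    // dynamic interface properties
        ViewAtLevel(Level level, Artifact model, Artifact interactions,
                    Artifact classifier, Artifact lifecycle) {
            this.level = level;
            this.classifierModel = model;
            this.classifierInteractions = interactions;
            this.classifier = classifier;
            this.classifierLifecycle = lifecycle;
        }
    }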

Fig. 1. Mapping business processes to objects at the organizational, system and architectural levels. (The diagram relates, at each of the three levels, the artifacts of the Business Objects View (Logical View) and of the Business Process View (Use Case View in UML), that is, the organization/system/subsystem models, interaction models, use cases, use case models, use case interaction models and lifecycles, through «instance», «collaborations», «refine» and «realize» dependencies.)

The organizational level of abstraction specifies the responsibility of an organization (such as a company) and the business context of the organization. The artifact organization specifies the responsibility and relevant static properties of the organization. The artifact organization model specifies the relationships of the organization to other organizations. The artifact organization use case specifies a business process with organizational scope in terms of the process goal, precondition, postcondition, the business rules that the process must meet, and other relevant static properties of the process. This business process is a collaboration of the organization with other organizations. All collaborations of the organization with other organizations are described in the artifact organization use case model; see the dependency «collaborations» in Fig. 1. The instances of organization business processes are specified in the artifact organization interaction model in terms of the interactions of the organization with other organizations. The organization business processes can be refined into more concrete system business processes; see the dependency «refine» in Fig. 1. The allowable order of the system business processes is specified in the artifact organization use case lifecycle. The organization use case interaction model specifies typical sequences of business process instances; see the dependency «instance» in Fig. 1. This artifact can be represented in UML by a sequence or collaboration diagram in which the classifier roles are use case roles; an example of such a diagram is given in reference [2]. The realization of an organizational business process is specified by the interactions between the software system and its users (team roles); see the dependency «realize» in Fig. 1.

The system level specifies the context of the software system and its relationships to its actors. The artifact system specifies the system interface and the system operations with responsibilities, preconditions, postconditions, parameters and return values. The artifact actor specifies the actor responsibilities and interfaces, if they are relevant. The system lifecycle specifies the allowable order of system operations and events. The system model specifies relationships between the software system and actors (other systems or users), and the system interaction model specifies interactions between the software system and actors. These interactions are instances of system business processes; see the dependency «instance» in Fig. 1. The artifact system use case specifies the static properties of a business process with system scope. This business process is a collaboration of the system with other systems and users. All collaborations of the system with its actors are described in the artifact system use case model; see the dependency «collaborations» in Fig. 1. The dynamic properties of the business process interface, such as the allowable order of system operations within the scope of the business process, are specified in the system use case lifecycle. The system use case interaction model specifies typical sequences of business process instances. The system business processes can be refined into subsystem business processes; see the dependency «refine» in Fig. 1. The realization of a system business process is specified by the subsystems at the architectural level, their responsibilities and interactions; see the dependency «realize» in Fig. 1.

Artifacts at the architectural level are structured in the very same way. The architectural level specifies the software system in terms of subsystems and components, their responsibilities, relationships, interactions and lifecycles. The same structure can also specify the software system at the class level and at the procedural level of abstraction. See reference [1] for examples of UML diagrams representing the artifacts discussed in this paper.

References
1. Hruby, P., "Structuring Design Deliverables with UML", UML'98, Mulhouse, France, 1998, http://www.navision.com/default.asp?url=services/methodology/default.asp
2. Hruby, P., "Structuring Specification of Business Systems with UML", OOPSLA'98 Workshop on Behavioral Semantics of OO Business and System Specifications, Vancouver, Canada, 1998, http://www.navision.com/default.asp?url=services/methodology/default.asp

Mapping Business Processes to Objects, Components and Frameworks: A Moving Target!

Eric Callebaut
[email protected], METHOD Consulting
Kraainesstraat 109, 9420 Erpe, BELGIUM

Abstract. Recently, some material has been published on how to map or even integrate business processes with objects. However, object technology is moving fast, and so is our target: the OO community is gradually moving towards components (including business components) and frameworks (including business frameworks). This document outlines an approach for: i) mapping business processes to business components, and ii) refining business processes for the development of business components and frameworks. Variations of this approach have been implemented by the author in leading OO projects, such as IBM's San Francisco Project and JP Morgan/Euroclear's Next Project.

1. Mapping Business Processes to Business Components

The OO paradigm is gradually shifting towards component-based development (CBD). This is not really a replacement of OO, but rather a natural course of events; the main objective is to enable higher levels of reuse. Business process modeling focuses on the chain of activities triggered by business events. Business component modeling focuses on identifying the main building blocks and their collaborations. A business component provides a well-defined set of business services ('functionality') required to fulfil these business processes. A business component is typically built from a set of objects, or other components, which are invisible to users of the component [1]. So a component is a more 'large-grained' concept than an object. In order to hide the internal complexity of the business component, it is assigned an 'interface class' that acts as a representative, facade, or mediator for the business component. This interface class holds all the services that are made available to users of the business component. The terms and conditions under which these services can be used should be well defined as 'service contracts'.

The key question is: how can business process models be mapped onto business component models? This can be done by reusing a technique that is quite popular in the OO community: interaction diagrams (sequence diagrams or collaboration diagrams in UML). Let us illustrate this with an example. The sequence diagram in Fig. 1 maps the elementary business processes (or business activities) to the components and their services. In the example, Business Activity 1 is implemented by the business service A1 provided by ComponentA; to execute Business Activity 2, ComponentA requests the service Bn provided by ComponentB.

Fig. 1. Global sequence diagram: Business Process/Components. (The diagram lists the business process description on the left, with 1 Business Activity 1 invoking ServiceA1 on ComponentA, and 2 Business Activity 2 leading ComponentA to request ServiceBn from ComponentB.)
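The 'interface class' idea can be sketched as a facade over the component's internal objects. The class and service names below (TradingComponent, initiateTradeDeal, etc.) are illustrative assumptions only; the paper does not prescribe an implementation.

    // Hypothetical sketch: a business component hidden behind an
    // 'interface class' (facade); the internal objects are invisible
    // to users of the component.
    class DealBook {
        void record(String dealId) { /* internal object */ }
    }

    class CounterpartyCheck {
        boolean approves(String party) { return true; /* stub */ }
    }

    class TradingComponent {  // interface class: the only entry point
        private final DealBook book = new DealBook();
        private final CounterpartyCheck check = new CounterpartyCheck();

        // A business service; its 'service contract' (the terms and
        // conditions of use) would be documented alongside it.
        public boolean initiateTradeDeal(String dealId, String counterparty) {
            if (!check.approves(counterparty)) {
                return false;
            }
            book.record(dealId);  // internal collaboration, hidden from callers
            return true;
        }
    }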

This approach has some important advantages:
- Separation of concerns: both models (the business process model and the business component model) have their own purposes and audiences, and are created separately.
- Validation: mapping both models allows for validation of the completeness and accuracy of business processes and business component services. In practice, this is often done via walkthrough sessions between the business experts/users and the business architect: as the business expert runs through the business process description, the business architect explains which business components/services are involved.
- Traceability: tracing between business processes and business components/services is provided.
- Black-box approach: the internal details of business activities and components can be delayed to a later stage (each component service can be detailed via a child sequence diagram).

2. Refining Business Processes for Components and Frameworks

As business components and business frameworks are receiving more interest, a key question is how to refine the business process models so that they provide proper input for the definition of business components and business frameworks. For business components and frameworks to be reusable, they need to be sufficiently generic (applicable in multiple business domains and contexts) and adaptable to changes in the business environment. These business variations and changes may occur for several reasons, e.g.:
- Internationalisation: business areas which require variations on the basis of country; these may be legal, cultural, etc.
- Legal: business areas which reflect legal requirements that are prone to change.
- Business policies: business areas which have different implementations and can change. These variations may result from changing market conditions, differences in company size and complexity, business transformations, etc.

Given these variations and changes, we can refine our business processes and business activities based on the following typology [2]. By applying this typology, business processes can be refined and provide better input to the definition of reusable business components; a code sketch of the last category follows the list.
- Primary business activities: these correspond to the main, concrete business tasks (initiate trade deal, register purchase invoice, confirm sales order execution, etc.).
- Common business activities: these correspond mainly to an operation or set of operations that is common to all or most of the activities within different processes. Common activities affect several business processes and need to be handled in the same or a similar way; two examples are calendar handling and currency handling. Common business activities are major candidates for common services provided by a common component layer.
- Abstract business activities: these correspond mainly to similarities, generalisations or patterns within and across different business processes. Examples are similarities between orders (sales, purchase, etc.), between business partners (supplier, customer, etc.), and between finance ledgers (GL, A/R, etc.). Abstract business activities provide the main input to the definition of business patterns during component modelling.
- Extension business activities: these correspond to volatile business activities due to variations or changes in legal rules (e.g. VAT calculations), business policies (pricing policy, inventory policy), or internationalisation. Extension activities provide the main input to the definition of variation points in the services provided by business components. These variation points are places that are likely to be changed by different users of the business component.
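One plausible way to realize such variation points, offered here as an illustration rather than as the paper's prescription, is to expose the volatile rule as a replaceable strategy; all names and the VAT rate below are assumptions.

    // Hypothetical sketch of a variation point: VAT calculation is an
    // extension business activity, so the component exposes it as a
    // pluggable rule instead of hard-coding one legal regime.
    interface VatRule {  // the variation point
        double vatFor(double netAmount);
    }

    class BelgianVat implements VatRule {  // one country-specific variant
        public double vatFor(double netAmount) { return netAmount * 0.21; }
    }

    class InvoicingService {
        private final VatRule vatRule;  // chosen per country or deployment

        InvoicingService(VatRule vatRule) { this.vatRule = vatRule; }

        double grossAmount(double netAmount) {
            return netAmount + vatRule.vatFor(netAmount);
        }
    }

Deployments for different countries would then differ only in which VatRule implementation is injected, leaving the rest of the component untouched.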

References
1. Allen, P., Frost, S., Component-Based Development for Enterprise Systems: Applying the SELECT Perspective, Cambridge University Press / SIGS Books, 1998.
2. Callebaut, E., IBM San Francisco Framework Requirements, IBM San Francisco Project, IBM Böblingen, Germany, 1996.

Partitioning Goals with Roles

Elizabeth A. Kendall
[email protected], [email protected]
Intelligent Business Systems Research, BT Laboratories
MLB1/PP12, Martlesham Heath, Ipswich, IP5 3RE, ENGLAND

Once goals have been captured and structured, they should be partitioned and assigned to roles that appear in role models. As roles can be played by objects, systems, or people, this results in a unified approach to object oriented business process modeling.

Roles and role models [1, 3, 5, 6, 7] are relatively new abstractions in object-oriented software engineering. Roles have also been widely used in business process modeling [4]. Work at BT aims to clarify, integrate, and extend the role and role model concepts and notation from object-oriented software engineering and business process modeling. Role model patterns are also being identified. Role models respond to the following needs:
- Role models emphasize how entities interact. Classes stipulate object capabilities, while a role focuses on the position and responsibilities of an object within an overall structure or system.
- Role models can be abstracted, specialized, instantiated, and aggregated into compound models [1, 5]. They promote activities and interactions into first-class objects.
- Role models provide a specification without any implication about implementation. They are unified models that can encompass people, processes, objects, or systems.
- Role models can be dynamic [3]. This may involve sequencing, evolution, and role transfer (where a role is passed from one entity to another).

There are a few approaches to role modeling; the summary presented here is closely based on [1] and [6]. A role model is very similar to a collaboration diagram in UML, which effectively captures the interactions between objects involved in a scenario or use case. However, a collaboration diagram is based on instances in a particular application, so its potential for reuse and abstraction is limited. Further, it is just one perspective of an overall UML model; usually it is subordinate to the class diagrams. Class diagrams adequately address information modeling, but not interaction modeling [1], because classes decompose objects based on their structural and behavioral similarities, not on the basis of their shared or collaborative activities and interactions. This is where role models come in. A role model describes a system or subsystem in terms of the patterns of interactions between its roles, and a role may be played by one or more objects or some other entity.

Once you have a base role model, you can build on it to form new models. One role model may be an aggregate of others. Also, a new role model may be derived from one or more base models; in this case, the derived role must be able to play the base roles. Combined roles must be addressed in a system specification, and synergy can make a combined role more than just the sum of its parts. An example of this is the Bureaucracy pattern [7]. This pattern features a long chain of responsibility, a multilevel hierarchical organization, and centralized control. It can be constructed by bringing together the Composite, Mediator, Observer, and Chain of Responsibility patterns [2], which involve eighteen roles in total. However, there are only six roles in the Bureaucracy pattern, because the resulting compound pattern is more than just the sum of the individual patterns.

Chain of Responsibility, Mediator, Observer, and Bureaucracy are all role models that are relevant to business process modeling. They can be used to model business systems comprised of people, organizations, systems, and objects. Other relevant role models can be found in [1, 3, 7]. Patterns are needed for identifying roles and role models that are relevant to business process modeling, and a role model catalog is under development at BT as a first step. Whereas static role models have been presented here, it is anticipated that role dynamics will be a fruitful area for modeling changing business processes. Here, roles can be transferred from one entity to another, or follow a certain sequence. This may be valuable for modeling mobility.
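One common object-level reading of this, our illustration rather than the paper's, is to model roles as interfaces that classes can take on, so that a single object may play roles from several role models.

    // Hypothetical sketch: roles as interfaces. A class says what an
    // object is; its roles say how it participates in interactions.
    interface Approver {
        boolean approve(String request);
    }

    interface Notifier {
        void notifyParties(String event);
    }

    // One class whose instances play both roles in a business process.
    class DepartmentHead implements Approver, Notifier {
        public boolean approve(String request) { return true; /* stub */ }
        public void notifyParties(String event) { /* inform the staff */ }
    }

    class PurchaseProcess {
        // The process is specified against roles, not concrete classes,
        // so people, systems or objects may fill them interchangeably.
        void run(Approver approver, Notifier notifier) {
            if (approver.approve("purchase order")) {
                notifier.notifyParties("order approved");
            }
        }
    }

Specifying PurchaseProcess against the role interfaces alone mirrors the claim above that role models specify interaction without implying an implementation.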

References
1. Andersen, E. (Egil), Conceptual Modeling of Objects: A Role Modeling Approach, PhD Thesis, University of Oslo, 1997.
2. Gamma, E., Helm, R., Johnson, R., Vlissides, J., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, 1994.
3. Kristensen, B. B., Osterbye, K., "Object-Oriented Modeling with Roles", OOIS'95, Proceedings of the 2nd International Conference on Object-Oriented Information Systems, Dublin, Ireland, 1995.
4. Ould, M., Business Processes: Modelling and Analysis for Reengineering and Improvement, John Wiley & Sons, West Sussex, England, 1995.
5. Reenskaug, T., Wold, P., Lehne, O. A., Working with Objects: The OOram Software Engineering Method, Manning Publications, Greenwich, 1996.
6. Reenskaug, T., "The Four Faces of UML", www.ifi.uio.no/~trygve/documents/, May 18, 1998.
7. Riehle, D., "Composite Design Patterns", OOPSLA '97, Proceedings of the 1997 Conference on Object-Oriented Programming Systems, Languages and Applications, ACM Press, pp. 218-228, 1997.

Object Oriented Product Metrics for Quality Assessment (Workshop 9)

Report by Houari A. Sahraoui
CRIM, 550 Sherbrooke ouest, 1er ét., Montréal (QC), Canada H3A 1B9
[email protected]

1. Introduction

Software measures have been extensively used to help software managers, customers, and users assess the quality of a software product based on its internal attributes, such as complexity and size. Many large software companies have intensively adopted software measures to better understand the relationships between software quality and software product internal attributes and thus improve their software development processes. For instance, software product measures have successfully been used to assess software maintainability and error-proneness. Large software organizations, such as NASA and HP, have been able to predict costs and delivery time via software product measures. Many characterization baselines have been built based on technically sound software measures. In this workshop, we were mainly concerned with investigating software product measures for object-oriented software systems which can be used to assess the quality of large OO systems. The OO paradigm provides powerful design mechanisms which have not been fully or adequately quantified by existing software product measures.

2. Suggested paper topics

Papers that investigate analytically or empirically the relationship between OO design mechanisms and different aspects of software quality were especially welcome. In particular, the suggested topics included:
1. metrics versus quality attributes (reliability, portability, maintainability, etc.);
2. automatic collection (collection tools, collection OO CASE tools);
3. validation of OO metrics (empirical and formal);
4. relationships between OO product and process metrics;
5. standards for the collection, comparison and validation of metrics.

3. Organisation

3.1 Workshop organizers

Walcelio L. Melo, Oracle Brazil and Univ. Catolica de Brasilia, Brazil
Sandro Morasca, Politecnico di Milano, Italy
Houari A. Sahraoui, CRIM, Canada

3.2 Participants

Adam Batenin, University of Bath, United Kingdom
Fernando Brito e Abreu, INESC, Portugal
Bernarde Coulange, VERILOG, France
Christian Daems, Free University of Brussels, Belgium
Serge Demeyer, University of Berne, Switzerland
Reiner Dumke, University of Magdeburg, Germany
Eliezer Kantorowitz, Technion Haifa, Israel
Hakim Lounis, CRIM, Canada
Carine Lucas, Free University of Brussels, Belgium
Geert Poels, Katholieke Universiteit Leuven, Belgium
Teade Punter, Eindhoven University of Technology, Netherlands
Radu Marinescu, FZI, University of Karlsruhe, Germany
Houari A. Sahraoui, CRIM, Canada
Frank Simon, Technical University of Cottbus, Germany

3.3 Program

Eight papers were accepted to the workshop. The workshop was organized in three separate sessions. Each session consisted of a group review of two or three papers followed by a concluding discussion. The papers discussed during the workshop are listed below.¹

Session 1 (OO metrics versus quality attributes)
- Metrics, Do They Really Help? (by Serge Demeyer and Stephane Ducasse)
- An Assessment of Large Object Oriented Software Systems (by Gerd Köhler, Heinrich Rust and Frank Simon)
- Using Object-Oriented Metrics for Automatic Design Flaws Detection in Large Scale Systems (by Radu Marinescu)

¹ Extended summaries of these papers are presented at the end of this chapter.

Session 2 (collection tools, collection OO CASE tools)
- An OO Framework for Software Measurement and Evaluation (by Reiner Dumke)
- A Product Metrics Tool Integrated into a Software Development Environment (by Claus Lewerentz and Frank Simon)
- Collecting and Analyzing the MOOD2 Metrics (by Fernando Brito e Abreu and Jean Sebastian Cuche)

Session 3 (Validation of OO metrics)
- An Analytical Evaluation of Static Coupling Measures for Domain Object Classes (by Geert Poels)
- Impact of complexity metrics on reusability (by Yida Mao, Houari A. Sahraoui and Hakim Lounis)

4. Conclusions

The wide variety of papers presented and the high level of expertise at the workshop led to the following conclusions (with more or less general agreement):
1. Much work shows that metrics can be used to successfully measure the quality of a system. This can be done through the detection of problematic constructs in the code (and the design).
2. Empirical validation studies on OO metrics should be conducted intensively in order to derive conclusions on their application as quality indicators/estimators.
3. There seems to be precious little investigation into the generalization of case study results for their systematic application in the future.
4. There is a widening recognition of the need for industrial case studies to validate metrics.
5. It is far from clear that industry perceives that metrics can be useful in improving the quality of their OO products. We have to find ways to overcome the skepticism of our industrial partners.

To be complete, we present in the following paragraphs the conclusions of the participants.

Adam Batenin: The workshop was a success in a number of ways. Firstly, it demonstrated the diversity of research interests in the field; this is surely a sign that we are exploring many new avenues and progressing product measurement. Secondly, it highlighted the shortcomings in our work, pointing out the areas that will require greater focus in the future, such as getting businesses interested in the systematic application of metrics and in the formal analysis of measures. Measurement science can benefit from giving more attention to its foundation. Intuitive understanding of concepts such as modularity needs to be backed up by formal analysis. Modularity concepts are not specific to designs or code but also apply to specifications, program documentation, etc.; the principles underlying good modularity are shared by all these products. We can benefit from this by accepting that there are common elements and by separating them from the context-specific information associated with the problem.

F. Brito e Abreu: The editor of the French Nantes-based magazine "L'Objet" challenged the participants to submit papers to a forthcoming issue dedicated to software metrics for OO systems. All participants felt that this workshop, given its success, deserved to be continued at the next ECOOP, to take place in Lisbon, Portugal. There was general agreement that empirical validation studies on OO metrics should be conducted intensively in order to derive conclusions on their application as quality indicators/estimators. Repetition of those experiments by different researchers and comparison of results are important issues; published work in this area is very scarce.

B. Coulange: This workshop was very interesting because several very different points of view were presented. About metrics, one asked, "Do they really help?", while two other speakers presented helpful results from using metrics on large projects. What is the meaning of each metric? Which metrics should be used? Which values for these metrics? These subjects are still open, and this workshop proposed some answers. Other presentations about CASE tools gave a good idea of what developers expect when using metrics and what the role of a tool can be.

S. Demeyer: The first session provided some interesting discussion concerning the question of whether metrics can be used to measure the quality of a system. On the one hand, participants like Radu Marinescu and Frank Simon reported on the usage of metrics to detect problematic constructs in source code. On the other hand, Serge Demeyer claimed that problematic constructs detected by metrics do not really hamper the evolution of a system. This debate boils down to the old question of whether internal attributes can be used to assess external quality factors.

Hakim Lounis: The workshop was organized in three sessions. The first session was about OO metrics versus quality attributes; speakers presented their experiments, mainly on large OO software systems, and a discussion took place on the real usefulness of OO internal attributes for assessing quality features of the systems under study. Speakers of the second session presented their work on measurement and evaluation frameworks and tools; a great deal of the discussion in this session turned around metric collection tools. Finally, the third session was concerned with the validation of OO metrics; two approaches were presented: an analytical evaluation and a machine learning approach that has the advantage of producing explicit rules capturing the correlation between internal attributes and quality factors. A very important aspect was pointed out by some of the participants: it concerns the generalization of case study results for their systematic application in the future. I consider that researcher meetings similar to the one held at ECOOP'98 are of great benefit for the promotion and affirmation of such results.


Radu Marinescu: Three major points became very clear to me as a result of this workshop. First, it is very hard to "port" experimental results from one experimental context to another; we have to find ways to make the conclusions of experimental studies more trustworthy and usable for other cases. Second, while it is vital for metrics research to validate metrics on more industrial case studies, such case studies are very hard to obtain; we have to find ways to overcome the fear of our industrial partners. Last, but not least, we observed that size metrics are in most cases irrelevant, and we should therefore seriously think about defining new metrics that might reflect deeper characteristics of object-oriented systems.

G. Poels: I especially liked the focus on theoretical and applied research. All workshop participants had an academic background, and much emphasis was laid on experimental design, validation of research results and measurement-theoretic concerns regarding measure definitions. Quite innovative research was presented on the use of software measurement for software re-engineering purposes (the FAMOOS project). Also, a number of comprehensive measurement frameworks were presented to the participants. Further, the empirical research described by some presenters sheds some light on the relationship between OO software measures and external quality attributes like reusability. Generally, I very much appreciated the rigorous, technically detailed, and scientific approach of the research presented in the position papers, which is certainly a factor that differentiates this workshop from similar events. The proceedings contain a wealth of information, and I hope that future workshops will continue emphasizing research results.

T. Punter: Metrics are important for assessing software product quality. The purpose of research on OO metrics seems to be to find OO representatives, like Number of Children and Depth of Inheritance Tree, which can replace the conventional metrics for measuring the size, structure or complexity of a product. Attendees of the workshop agreed that software product quality depends on the application area of the product: the target values of the metrics depend on the situation in which the product operates. Therefore system behavior, which is measured by external metrics, should be part of the subject of evaluation. So, despite the focus on internal metrics, external metrics are recognized as important product metrics. Besides system behavior, internal metrics and their target values are influenced by the application area of the product too. In our experience as a third-party evaluator of C/C++ code in applications for retail petroleum systems, we found that internal metrics and their target values differ with the application area too. We think that no universal set of metrics for different products exists. For each assessment an appropriate set should be selected and criteria should be set. This influences the way product metrics are validated. Normally, and also during the workshop, validation focuses on the relationships between the metrics and the properties they should predict. However, taking the application dependency of software products seriously means that the results cannot be generalized. For each assessment, a dedicated set of metrics and its associated target values should be selected and justified. Concepts and ideas for conducting this are being developed at Eindhoven University of Technology.

F. Simon: The use of object-oriented product metrics to improve software quality is still very difficult: although there exist some great tools for measuring software, and although several dozen different metrics exist, the validation of the usefulness of measurement is still at the beginning. In my opinion the workshop showed some weak points:
- theory for establishing a framework to validate product metrics is missing;
- new metrics are needed to represent interesting software properties with a strong correlation to external software attributes;
- there is only poor knowledge about the subjectivity of quality models and their dependent variables;
- there are only a few case studies of how to successfully introduce a measurement program within the software lifecycle.
But in my opinion, the measurement community is on the right way toward these goals.

5. Abstracts of presented papers

Workshop 9, paper 1: Do Metrics Support Framework Development?

Serge Demeyer, Stéphane Ducasse
Software Composition Group, University of Berne
{demeyer,ducasse}@iam.unibe.ch, http://www.iam.unibe.ch/~scg/

Introduction It is commonly accepted that iteration is necessary to achieve a truly reusable framework design [4], [6], [8]. However, project managers are often reluctant to apply this common knowledge, because of the difficulty controlling an iterative development process. Metrics are often cited as instruments for controlling software development projects [5], [7]. In the context of iterative framework design, one would like a metric programme that (a) helps in assigning priorities to the parts of the system that need to be redesigned first, and (b) tells whether the system design is


stabilising; in short, a metric programme should detect problems and measure progress. This paper discusses whether metrics can be used to detect problems and measure progress. We summarise the results of a case-study performed within the context of the FAMOOS project (see http://www.iam.unibe.ch/~famoos/), a project whose goal is to come up with a set of reengineering techniques and tools to support the development of object-oriented frameworks. Since we must deal with large-scale systems (over 1 million lines of code) implemented in a variety of languages (C++, Ada, Smalltalk and Java), metrics seem especially appealing.

Case-Study
To report on our experiments with metrics, we selected a case-study outside the FAMOOS project: the VisualWorks/Smalltalk User-Interface Framework. Besides being an industrial framework that provides full access to different releases of its source code, the framework offers some extra features which make it an excellent case for studying iterative framework development. First, it is available to anyone who is willing to purchase VisualWorks/Smalltalk, which ensures that the results in this paper are reproducible. Second, the changes between the releases are documented, which makes it possible to validate experimental findings. Finally, the first three releases (1.0, 2.0 & 2.5) of the VisualWorks framework depict a fairly typical example of a framework life-cycle [3], meaning it is representative. The metrics evaluated during the case study were selected from two sources, namely [7] and [1]. They measure method size (in terms of message sends, statements and lines of code); class size (in terms of methods, message sends, statements, lines of code, instance variables and class variables); and inheritance layout (in terms of hierarchy nesting level, immediate children of a class, methods overridden, methods extended and methods inherited). We are aware that other metrics have been proposed in the literature and are not included in the evaluation. Especially the lack of coupling and cohesion metrics might seem quite surprising. This incompleteness is due to several reasons: first, because some coupling and cohesion metrics lack precise definitions; second, because coupling and cohesion metrics are subject to controversy in the literature; and third, because most such metrics cannot be computed accurately because of the lack of typing in Smalltalk.

Experiment and Results
Note that within the scope of this paper we can only provide a summary of the experiment and the results; we refer the interested reader to [2] for a full report including the actual data. In the near future, we will run the same experiment on different case studies to verify these results. We will also evaluate other metrics, especially coupling and cohesion metrics.
Problem Detection. To evaluate all the above metrics for problem detection, we applied each metric to one release and examined whether the parts that are rated


'too complex' improved their measurement in the subsequent release. We ran every test with several threshold values to cancel out the effects of the threshold values. We observed that between 1/2 and 2/3 of the framework parts that were rated 'too complex' did not improve their measurement in the subsequent release. On the contrary, quite a lot of them even worsened their measurement. Improving these parts was definitely not necessary for the natural evolution of the framework. Consequently, we conclude that the evaluated metrics (i.e., size and inheritance metrics) are unreliable for detecting problems during an iterative framework design.
Progress Measurement. To evaluate all the above metrics for progress measurement, we measured the differences between two subsequent releases. For those parts of the framework that changed their measurement, we performed a qualitative analysis in order to verify whether we could identify relevant changes. The qualitative analysis was based on manual inspection of documentation and source code. Afterwards, we examined the change in measurements to see whether the design was indeed stabilising. We observed that the metrics are very accurate in analysing the differences between two releases and as such can be used to measure progress. In particular, interpretation of changes in inheritance values reveals a lot about the stability of the inheritance tree.
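The evaluation protocol for problem detection can be stated compactly. The following Java sketch (hypothetical types and data; the paper's actual tooling is not shown in this abstract) flags the parts rated 'too complex' in one release and reports the fraction that failed to improve in the next:

import java.util.*;

// Sketch of the problem-detection check described above: if a metric were a
// reliable problem detector, a part rated 'too complex' in release N should
// improve its measurement in release N+1.
public class ProblemDetectionCheck {

    /** Fraction of parts flagged by `threshold` in `releaseN` that did NOT
     *  improve their measurement in `releaseNplus1`. Keys identify parts
     *  (e.g., class names); values are metric values. */
    static double fractionNotImproved(Map<String, Integer> releaseN,
                                      Map<String, Integer> releaseNplus1,
                                      int threshold) {
        int flagged = 0, notImproved = 0;
        for (Map.Entry<String, Integer> e : releaseN.entrySet()) {
            if (e.getValue() <= threshold) continue;      // not rated 'too complex'
            Integer next = releaseNplus1.get(e.getKey());
            if (next == null) continue;                   // part removed: ignore
            flagged++;
            if (next >= e.getValue()) notImproved++;      // stayed equal or worsened
        }
        return flagged == 0 ? 0.0 : (double) notImproved / flagged;
    }

    public static void main(String[] args) {
        Map<String, Integer> r1 = Map.of("Window", 120, "Button", 15, "Layout", 90);
        Map<String, Integer> r2 = Map.of("Window", 130, "Button", 14, "Layout", 60);
        // With threshold 50, Window and Layout are flagged; only Layout improves.
        System.out.println(fractionNotImproved(r1, r2, 50)); // 0.5
    }
}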

Acknowledgements
This work has been funded by the Swiss Government under Project no. NFS-2000-46947.96 and BBW-96.0015, as well as by the European Union under the ESPRIT programme, Project no. 21975.

References
1. Shyam R. Chidamber and Chris F. Kemerer, A Metrics Suite for Object Oriented Design, IEEE Transactions on Software Engineering, vol. 20, no. 6, June 1994, pp. 476-493.
2. Serge Demeyer and Stéphane Ducasse, Metrics, Do They Really Help?, Technical Report. See http://www.iam.unibe.ch/~scg/.
3. Erich Gamma, Richard Helm, Ralph Johnson and John Vlissides, Design Patterns, Addison-Wesley, Reading, MA, 1995.
4. Adele Goldberg and Kenneth S. Rubin, Succeeding With Objects: Decision Frameworks for Project Management, Addison-Wesley, Reading, MA, 1995.
5. Brian Henderson-Sellers, Object-Oriented Metrics: Measures of Complexity, Prentice-Hall, 1996.
6. Ivar Jacobson, Martin Griss and Patrik Jonsson, Software Reuse, Addison-Wesley/ACM Press, 1997.
7. Mark Lorenz and Jeff Kidd, Object-Oriented Software Metrics: A Practical Approach, Prentice-Hall, 1994.
8. Trygve Reenskaug, Working with Objects: The OOram Software Engineering Method, Manning Publications, 1996.


Workshop 9, paper 2: Assessment of Large Object Oriented Software Systems - a metrics based process -
Gerd Köhler, Heinrich Rust and Frank Simon
Computer Science Department, Technical University of Cottbus
P.O. Box 101344, D-03013 Cottbus, Germany
(hgk, rust, simon)@informatik.tu-cottbus.de

Motivation
This extended abstract presents an assessment process for large software systems. Our goal is to define a self-improving process for the assessment of large object oriented software systems. Our motivation for the development of such a process is our perception of the following problem in software development practice: there are large software projects which tend to outgrow the capacities of the developers to keep an overview. After some time, it is necessary to rework parts of these "legacy systems". But how should capacities for software reengineering be applied? Finite resources have to be used efficiently. Our process should help project teams to identify the most critical aspects regarding the internal quality of large software systems. The need for such a process increases if a piece of software has a long lifecycle, because then risks like application area shift, quick additions caused by customer complaints, development team splits, documentation/implementation synchronisation problems etc. might occur. Programs with problems like these have rising costs for error corrections and extensions. Many aspects of the program have to be considered before recommendations for the reengineering can be given. Our work focuses on one part of the total assessment process which has to be performed by an external review organisation, that is, the source code assessment.

A process for audits of large software systems
The purpose of our process is to help teams to identify the spots where reengineering capacity is best applied to improve a system. The following ideas shaped our process:
- Human insight is necessary to check if a method, class or subsystem has to be reworked.
- In very large systems, not all modules can be inspected manually.
- Automatic software measurements can help to preselect suspicious modules for manual inspection (a sketch of this preselection follows below).
We assume good correlation between the manual identification of anomalous structures and the identification of anomalies based on automatic measurements. For our purpose we developed an adjustable metrics tool called Crocodile, which is fully integrated into an existing CASE tool.

Note: For a full version and references see our paper in W. Melo, S. Morasca, H.A. Sahraoui (eds.): "Proceedings of the OO Product Metrics Workshop", CRIM, Montréal, 1998, pp. 16-22.
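As an illustration of the preselection idea, the following Java sketch (hypothetical representation; the actual module selection combines frequency charts, outlier charts and a cumulated diagram, as described below) flags modules whose metric value lies unusually far above the mean:

import java.util.*;

// A minimal sketch of the preselection step: flag modules whose metric value
// is an outlier relative to the distribution over all modules, so that scarce
// review effort is spent on suspicious code first.
public class ModulePreselection {

    static List<String> preselect(Map<String, Double> metricByModule, double k) {
        double mean = metricByModule.values().stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        double var = metricByModule.values().stream()
                .mapToDouble(v -> (v - mean) * (v - mean)).average().orElse(0.0);
        double stdDev = Math.sqrt(var);
        List<String> suspicious = new ArrayList<>();
        for (Map.Entry<String, Double> e : metricByModule.entrySet()) {
            if (e.getValue() > mean + k * stdDev) {   // outlier on the high side
                suspicious.add(e.getKey());
            }
        }
        return suspicious;
    }

    public static void main(String[] args) {
        Map<String, Double> loc = Map.of("A", 200.0, "B", 220.0, "C", 1900.0, "D", 240.0);
        System.out.println(preselect(loc, 1.5)); // [C]
    }
}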


Thus, in very large systems, the efficient application of human resources should be guided by an automated analysis of the system.
The process is divided into the subprocesses assessment preparation, for considering all special requirements like schedules, maximal effort, specialised quality goals and measurement data calculation; assessment execution, for the source code review of the most critical software parts, which are identified by Crocodile; and assessment reflection, for considering process improvements for further assessments, including the validation of the automatic raw data collection and its usage for module selection.

Experiences
This assessment process was used for three object oriented programs provided by an industrial project partner (Java and C++ projects with several hundred classes). Every assessment process was planned with an effort of about 40 person-hours, distributed over the phases assessment preparation and assessment execution. The initial quality model that was used for the assessment was built from our own analysis and programming experiences. A report about the results was sent to the customer. The results of the largest project in detail: the module selection was done by analyzing frequency charts for every used metric, outlier charts and a cumulated diagram. The automatic software measurements were used to preselect suspicious modules for the inspection, in this case three classes with altogether 95 methods. Our checklist contains 42 questions to identify anomalies for 9 indicators. The items identifying method anomalies were applied to every method. 206 items failed and showed anomalies for all indicators for all three classes. Independently of the customer feedback we spent further work on the reflection phase: we reviewed three further, randomly chosen classes. Doing so we detected only 11 failed items. They indicated anomalies for two indicators of one class and for only one indicator of the other two classes. Thus, the process for this project seemed to be quite effective, and because of the few resources that we spent on it, it also seemed to be quite efficient.

[Figure: A process model for quality assessment of large software systems. Assessment preparation comprises (1.1) resource plan, (1.2) quality-model definition, (1.3) measurement environment and (1.4) automatic raw data collection; assessment execution comprises (2.1) reverse engineering, (2.2) module selection, (2.3) source code review and (2.4) anomaly identification; assessment reflection comprises (3.1) randomized measurement test, (3.2) check effectiveness, (3.3) check efficiency and (3.4) process improvements. The phases exchange product data, measurement parameters, reports and customer feedback through datastores, and reflection feeds back into preparation.]


Workshop 9, paper 3: Using Object-Oriented Metrics for Automatic Design Flaws Detection in Large Scale Systems
Dipl.Ing. Radu Marinescu
"Politehnica" University of Timisoara, Faculty of Automation and Computer Engineering
[email protected]

Introduction
In the last decade the object-oriented paradigm has decisively influenced the world of software engineering. The general tendency in recent years is to redesign older industrial object-oriented systems, so that they may take full advantage of today's knowledge in object-orientation and thus improve the quality of their design. In the first stage of a redesign process it is necessary to detect which design flaws the application contains and where these flaws are located. The detection of design problems in large or very large systems cannot be accomplished manually and must therefore be supported by automated methods. This paper presents the possibility of using object-oriented software metrics for the automatic detection of a set of design problems. We illustrate the efficiency of this approach by discussing the conclusions of an experimental study that uses a set of three metrics for problem detection and applies them to three projects. These three metrics touch on three main aspects of object-oriented design that have an important impact on the quality of systems: maintenance effort, class hierarchy layout and cohesion. The importance of this paper is increased by the fact that it contributes to the study of object-oriented metrics in a point where this is especially needed: the addition of new practical experience and the practical knowledge that derives from it.

WMC – Weighted Method Count [Chid94]
The metric proved to be a good indicator of the maintenance effort by correctly indicating the classes that are more error prone. In terms of problem detection, we may assert that classes with very high WMC values are critical with respect to maintenance effort. The WMC value for a class may be reduced in two ways: by splitting the class or by splitting one or more of its very complex methods. A second conclusion is based on the observation that in all case studies the classes with the highest WMC values were the central classes in the project. This relation can be exploited at the beginning of a redesign operation on a foreign project in order to detect the central classes.
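As a minimal illustration of the splitting remedy, the sketch below (hypothetical data) computes WMC as the sum of per-method complexities, the usual reading of [Chid94]:

import java.util.List;

// Sketch of WMC in the sense of Chidamber and Kemerer: the sum of the
// complexities of a class's methods (often each method's cyclomatic
// complexity, or simply 1 per method in the unweighted variant).
public class Wmc {
    static int wmc(List<Integer> methodComplexities) {
        return methodComplexities.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        // A class with a few very complex methods...
        System.out.println(wmc(List.of(25, 30, 4, 3)));      // 62
        // ...after splitting the two complex methods into simpler ones,
        // each resulting class carries a smaller share of the effort.
        System.out.println(wmc(List.of(10, 8, 7, 4, 3)));    // 32
    }
}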

NOC – Number of Children [Chid94]
This metric may be used in order to detect misuses of subclassing, and in many cases this means that the class hierarchy has to be restructured at that point during the redesign operation. We have detected two particular situations of high NOC values that could be redesigned:


- Insufficient exploitation of common characteristics
- Root of the class hierarchy

TCC – Tight Class Cohesion [Biem95]
Classes that have a TCC value lower than 0.5 (sometimes 0.3) are candidate classes for a redesign process. The redesign consists of a possible splitting of the class into two or more smaller and more cohesive classes. From another perspective, a low TCC may indicate classes that encapsulate more than one functionality. In other words, the design flaw that can be detected using TCC is the lack of cohesion, and one concrete way to reduce or eliminate it might be the splitting of the class.
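The following Java sketch illustrates the usual reading of TCC from [Biem95] (hypothetical input: for each visible method, the set of instance variables it uses; two methods are directly connected when these sets intersect):

import java.util.*;

// Sketch of TCC: the fraction of pairs of visible methods that are
// "directly connected", i.e. share at least one instance variable.
public class Tcc {
    static double tcc(Map<String, Set<String>> varsUsedByMethod) {
        List<Set<String>> uses = new ArrayList<>(varsUsedByMethod.values());
        int n = uses.size(), pairs = n * (n - 1) / 2, connected = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (!Collections.disjoint(uses.get(i), uses.get(j))) connected++;
        return pairs == 0 ? 1.0 : (double) connected / pairs;
    }

    public static void main(String[] args) {
        // Two unrelated functionalities in one class: low cohesion.
        Map<String, Set<String>> m = Map.of(
                "draw",   Set.of("shape", "color"),
                "resize", Set.of("shape"),
                "log",    Set.of("file"),
                "flush",  Set.of("file"));
        System.out.println(tcc(m)); // 2 connected pairs out of 6 = 0.33..., a split candidate
    }
}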

Conclusions
This study has indicated that metrics can be efficiently used for the automatic detection of design flaws and, more generally, that they can be useful tools in the early stages of re-engineering operations on large systems. On the other hand, we should not forget that metrics will never be able to offer us 100% precise results; metrics will always be useful hints, but never firm certainties. A human mind will always be necessary to take the final decision in re-engineering matters. The conclusions of this study also need to be validated by more experimental studies made on other large scale systems.

References
[Biem95] J.M. Bieman, B.K. Kang. Cohesion and Reuse in an Object-Oriented System. Proc. ACM Symposium on Software Reusability, April 1995.
[Chid94] S.R. Chidamber, C.F. Kemerer. A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994.

Workshop 9, paper 4: An OO Framework for Software Measurement and Evaluation
R. R. Dumke
University of Magdeburg, Faculty of Informatics
Postfach 4120, D-39016 Magdeburg, Germany
Tel: +49-391-67-18664, Fax: +49-391-67-12810
email: [email protected]

Software measurement includes the phases of the modeling of the problem domain, the measurement itself, the presentation and analysis of the measurement values, and the evaluation of the modelled software components (process components, product components and resources) with their relations. This enables the improvement and/or controlling of the measured software components. Software measurement


has existed for more than twenty years, but we can still identify a lot of unsolved problems in this area. Some of these problems are:
- the incompleteness of the chosen models,
- the restriction of the thresholds to special evaluations in a special software environment,
- the weaknesses of measurement automation with metrics tools,
- the lack of metrics/measures validation,
- last but not least: the missing set of world-wide accepted measures and metrics including their units.
The problems of software measurement are also the main obstacles to the installation of metrics programs in an industrial environment. Hence, a measurement plan/framework is necessary which is based on general experience of software measurement investigations. The application of a software measurement approach must be embedded in a business strategy, as a CAME strategy based on a CAME measurement framework using CAME tools. The CAME strategy stands for:
- community: the necessity of a group or a team that is motivated and qualified to initiate software metrics application,
- acceptance: the agreement of the (top) management to install a metrics program in the (IT) business area,
- motivation: the production of measurement and evaluation results in a first metrics application which demonstrate the convincing benefits of the metrics,
- engagement: the investment of substantial effort to implement the software measurement as a persistent metrics system.
We define our (CAME) software measurement framework with the following four phases:
- measurement views: the choice of the kind of measurement and the related metrics/measures,
- the adjustment of the metrics for the application field,
- the migration of the metrics along the whole life cycle and along the system structure (as the behaviour of the metrics),
- the efficiency: the construction of a tool-based measurement.
The measurement choice step includes the choice of the software metrics and measures from the general metrics hierarchy, which can be transformed into a class hierarchy. The choice of metrics includes the definition of an object-oriented software metric as a class/object with attributes (the metrics value characteristics) and services (the metrics application algorithms). The steps of the measurement adjustment are:
- the determination of the scale type and (if possible) the unit,
- the determination of the favourable values (thresholds) for the evaluation of the measured component, including their calibration,
- the tuning of the thresholds during software development or maintenance,
- the calibration of the scale, depending on the improvement of the knowledge in the problem domain.
The migration step addresses the definition of the behaviour of a metric class, such as the metrics tracing along the life cycle and the metrics refinement along the


software application. These aspects capture the dynamic characteristics that are necessary for the persistent installation of metrics applications, and they require a metrics database or some other kind of repository for the metrics values. The measurement efficiency step includes the instrumentation or automation of the measurement process by tools. The tools supporting our framework are the CAME (Computer Assisted software Measurement and Evaluation) tools. The application of the CAME tools on top of a metrics database is the first phase of measurement efficiency, and the metrics class library constitutes the final OO framework installation. Some first applications applying CAME tools are described in detail in [1], [2] and [3]. In this short paper it was only possible to indicate the principles and aspects of the framework phases: measurement choice, measurement adjustment, measurement migration and measurement efficiency. This approach clarifies the next steps beyond the initiatives of ISO 9000 certification and CMM evaluation on the one hand, and the definition and analysis of special metrics for small aspects on the other. Further research effort is directed at applying the OO measurement framework to the UML-based development method using a Java-based metrics class library.
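The idea of an object-oriented metric modelled as a class, with the value characteristics as attributes and the application algorithm as a service, can be sketched as follows (a hypothetical design, not the author's actual metrics class library):

// A minimal sketch of a metric modelled as a class: attributes capture the
// value characteristics (scale type, unit, thresholds), a service supplies
// the application algorithm, and calibration supports the adjustment step.
abstract class Metric {
    final String name;
    final String scaleType;   // e.g. "ordinal", "ratio"
    final String unit;        // may be unknown, as the text notes
    double lowerThreshold, upperThreshold;

    Metric(String name, String scaleType, String unit,
           double lower, double upper) {
        this.name = name; this.scaleType = scaleType; this.unit = unit;
        this.lowerThreshold = lower; this.upperThreshold = upper;
    }

    /** The metrics application algorithm, supplied by each concrete metric. */
    abstract double measure(Object component);

    /** Adjustment: tune the thresholds as domain knowledge improves. */
    void calibrate(double lower, double upper) {
        this.lowerThreshold = lower; this.upperThreshold = upper;
    }

    boolean isCritical(Object component) {
        double v = measure(component);
        return v < lowerThreshold || v > upperThreshold;
    }
}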

References
[1] R. R. Dumke and E. Foltin, Metrics-Based Evaluation of Object-Oriented Software Development Methods, Proc. of the European CSMR, Florence, Italy, March 8-11, 1998, pp. 193-196.
[2] H. Grigoleit, Evaluation-based Visualization of Large Scale C++ Software Products (in German), Diploma Thesis, University of Magdeburg, 1998.
[3] SMLAB: The Virtual Software Measurement Laboratory at the University of Magdeburg, Germany, http://ivs.cs.uni-magdeburg.de/sw-eng/us/

Workshop 9, paper 5: A Product Metrics Tool Integrated into a Software Development Environment - extended abstract -
Claus Lewerentz and Frank Simon
Software and Systems Engineering Group
Computer Science Department, Technical University of Cottbus
P.O. Box 101344, D-03013 Cottbus, Germany
(lewerentz, simon)@informatik.tu-cottbus.de

Note: For a full version and references see our paper in W. Melo, S. Morasca, H.A. Sahraoui (eds.): "Proceedings of the OO Product Metrics Workshop", CRIM, Montréal, 1998, pp. 36-40.


Introduction
The goal of the project Crocodile is to provide concepts and tools for an effective usage of quantitative product measurement to support and facilitate design and code reviews. Our application field is the realm of object oriented programs and, particularly, reusable frameworks. The main concepts are:
- measurement tool integration into existing software development environments, using existing tool sets and integration mechanisms,
- mechanisms to define flexible product quality models based on a factor-criteria-metrics approach,
- the use of meaningful measurement contexts isolating or combining product components to be reviewed,
- effective filtering and presentation of measurement data.
Our current implementation platform for a tool providing these concepts is TakeFive's SNiFF+, an industrial strength integrated C++/Java programming environment.

Measurement environment
The Crocodile measurement tool is designed to be fully integrated into existing SDEs that provide object-oriented design and coding tools (e.g. structure editors, parsers, source code browsers) and version management. The main components of the Crocodile tool are abstract interfaces to the SDE services. These services are used to display critical program parts through the SDE user interface and to extract the data that are necessary to measure. According to the goal of using measurement-based analysis as early as possible in the development process, we concentrate on structural data available at the architecture level. Because Crocodile does not parse the source code itself but extracts the data from the SDE internal database, it is language independent. Our current implementation platform supports, among other languages, C++ and Java.

Flexible and adaptable quality models
Crocodile uses the factor-criteria-metrics approach to connect the measures with high-level goals. The measures are defined using an interpretative metrics and query definition language on top of an SQL database system whose table structure implements the architectural data of the OO system (see the schema figure below). Besides simple selection and joining of basic data, arithmetic operators are used to scale and to combine simple measures into more complex measures. To be as flexible as possible, Crocodile does not come with fixed built-in quality models, so the full model has to be defined. Starting from the root, which could be a general quality goal like reusability, descriptions of directed paths from this goal down to the concrete measures have to be entered. It is possible to connect one measure to different design principles and criteria. The quality model itself is used by Crocodile to provide an interpretation of the measurement results.


[Figure: Entity-relationship schema of the measurement database. The entities Packages, Classes, Methods and Attributes (with attributes such as name, visibility, length and abstract?) are connected by the relations consist of, implemented in, have, use and inherit from, together with their cardinalities.]
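A sketch of how such a user-defined factor-criteria-metrics model could be represented (hypothetical types and measure names; Crocodile's own models are defined through its query language, as described above):

import java.util.*;

// Sketch of a factor-criteria-metrics tree: a quality factor is refined into
// criteria, each criterion into concrete measures; one measure may support
// several criteria, as the text allows.
class FcmNode {
    final String name;
    final List<FcmNode> children = new ArrayList<>();
    FcmNode(String name) { this.name = name; }
    FcmNode refine(FcmNode child) { children.add(child); return this; }
}

public class QualityModel {
    public static void main(String[] args) {
        FcmNode coupling = new FcmNode("measure: afferent couplings per class");
        FcmNode size = new FcmNode("measure: methods per class");
        FcmNode root = new FcmNode("factor: reusability")
                .refine(new FcmNode("criterion: low coupling").refine(coupling))
                .refine(new FcmNode("criterion: small interfaces")
                        .refine(size).refine(coupling)); // measure reused twice
        print(root, "");
    }

    static void print(FcmNode n, String indent) {
        System.out.println(indent + n.name);
        for (FcmNode c : n.children) print(c, indent + "  ");
    }
}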

Measurement contexts and filtering
To be able to measure large object oriented systems, Crocodile provides some helpful settings of measurement contexts:
- Class focus: it contains all classes to be measured. To measure particular parts of a composed program, only those classes are included in the focus.
- Inheritance context: the functionality of classes (containing methods and attributes) from the inheritance context is copied into subclasses of the focus.
- Use context: when measuring a focus, only references to used attributes and methods from classes within the focus are considered. Selecting an additional use context gives the possibility to selectively include use-relations to classes outside the class focus.
To allow for an interpretation of the measurement results as either good or critical with respect to a particular (sub-)goal, we provide the definition of thresholds for every measure. These quality levels provide a means to filter the huge amount of measurement values down to those indicating critical situations. Crocodile supports the following threshold definitions (sketched below): values are critical if they
- are inside an absolute interval / outside an absolute interval,
- belong to the group with the x highest respectively lowest values,
- belong to the group with the y percent highest respectively lowest values.
Many different kinds of diagrams, like frequency charts or outlier charts, help to visualize the measurement results to get an easy overview of the software.
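The three threshold styles can be sketched as follows (Java, hypothetical metric data):

import java.util.*;
import java.util.stream.*;

// Sketch of the three threshold styles described above: an absolute
// interval, the x highest values, and the y percent highest values.
public class Thresholds {
    static List<String> outsideInterval(Map<String, Double> m, double lo, double hi) {
        return m.entrySet().stream()
                .filter(e -> e.getValue() < lo || e.getValue() > hi)
                .map(Map.Entry::getKey).collect(Collectors.toList());
    }

    static List<String> xHighest(Map<String, Double> m, int x) {
        return m.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .limit(x).map(Map.Entry::getKey).collect(Collectors.toList());
    }

    static List<String> yPercentHighest(Map<String, Double> m, double yPercent) {
        int count = (int) Math.ceil(m.size() * yPercent / 100.0);
        return xHighest(m, count);
    }

    public static void main(String[] args) {
        Map<String, Double> wmc = Map.of("A", 12.0, "B", 95.0, "C", 40.0, "D", 77.0);
        System.out.println(outsideInterval(wmc, 10, 60)); // [B, D] (order may vary)
        System.out.println(xHighest(wmc, 1));             // [B]
        System.out.println(yPercentHighest(wmc, 50));     // [B, D]
    }
}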

Experiences
Crocodile provides quite simple but powerful means to create a specialized measurement process. The quality models can be easily adapted to the user's specific goals and can be used to support different activities in engineering and reengineering of object oriented applications. Due to Crocodile's integration into a software development environment like SNiFF+, the measurement activities are smoothly integrated into existing software development processes.


Workshop 9, paper 6: Collecting and Analyzing the MOOD2 Metrics
Fernando Brito e Abreu (1), Jean Sebastien Cuche (2)
(1) ISEG/UTL, (2) École des Mines de Nantes
INESC, R. Alves Redol, nº9, Lisboa, Portugal
{fba, sebastien.cuche}@inesc.pt

Abstract. An experiment of systematic metrics collection is briefly described. The MOOD2 metrics set, an extension of the MOOD set, was used for the first time. To collect the metrics, a new generation of the MOODKIT tool, with a WWW interface and built upon the GOODLY design language, was built.

Introduction
The MOOD metrics set (Metrics for Object Oriented Design) was first introduced in [Abreu94] and its use and validation were presented in [Abreu95, Abreu96a, Abreu96b]. During the corresponding experiments, it became evident that some important aspects of OO design were not being measured by the initial set. The need also arose to express aspects of OO design at different levels of granularity. Those shortcomings were overcome in the MOOD2 metrics set [Abreu98a], which can be collected with MOODKIT G2, a tool built upon the GOODLY language. The MOOD2 metrics set includes several granularity levels: Attribute, Operation, Class, Module, Inter- and Intra-Specification. A detailed definition of each of the metrics can be found in [Abreu98a]. Metrics at the lower granularity levels are used to compute the ones at the higher levels. In this experiment we focused our attention on specification level metrics to assess overall OO design quality.

GOODLY and the MOODKIT G2 tool
GOODLY (a Generic Object Oriented Design Language? Yes!) allows one to express design aspects such as modularization, class state and behavior, feature visibility, inheritance relations, message exchanges and information hiding [Abreu97]. The highest level organization unit in GOODLY is called a specification (a set of modules). Instead of the single-tiered architecture of MOODKIT G1, G2 has a two-tiered one. The first tier consists of formalism converters that generate GOODLY specifications either by direct engineering from OOA&D specifications contained in a CASE tool repository (such as OMT or UML models produced with Paradigm Plus) or by reverse engineering of source code written in OO languages such as C++, Eiffel, Smalltalk, OOPascal or Java. The second tier consists of an analyzer of GOODLY code, a repository and a web-based tool interface. The analyzer does lexical-syntactic and referential integrity verification (traceability analysis), generates HT-GOODLY, extracts the MOOD2 metrics and also produces other coupling information to allow modularity analysis [Abreu98b]. HT-GOODLY is a hypertext


version of GOODLY that allows improved understandability by context swapping through navigation.

The experimental results
The analyzed sample consists of around fifty GOODLY specifications with varied profiles, generated by reverse engineering of systems written in Smalltalk, Eiffel and C++. To avoid sample noise, we applied outlier removal techniques before analysis. Each specification was classified according to criteria such as application domain, origin, original language, version and production date. A set of hypotheses was tested against the sample, from which the following conclusions arose. The metrics correlate very weakly with each other, and thus represent different design aspects. The metrics correlate very weakly with system size, whether we use LOC, number of classes or other size measures; therefore they are size-independent. When comparing different versions of the same systems we could find signs of design quality evolution throughout time; therefore the metrics are sensitive enough to assess incremental quality changes. While we could not find evidence of application domain impact on resulting design quality, the same does not apply to software origin. The metrics allow one to assess the amount of reuse in great detail. We were able to observe a large variance in effective reuse. The detailed results of this experiment will soon be published in a journal and meanwhile can be obtained by contacting the authors.

References
[Abreu94] Abreu, F.B. & Carapuça, R., «Object-Oriented Software Engineering: Measuring and Controlling the Development Process», Proc. 4th Int. Conference on Software Quality, ASQC, McLean, VA, USA, October 1994.
[Abreu95] Abreu, F.B. & Goulão, M. & Esteves, R., «Toward the Design Quality Evaluation of Object-Oriented Software Systems», Proc. 5th Int. Conference on Software Quality, ASQC, Austin, TX, USA, October 1995.
[Abreu96a] Abreu, F.B. & Melo, W., «Evaluating the Impact of Object-Oriented Design on Software Quality», Proc. 3rd Int. Software Metrics Symposium, IEEE, Berlin, March 1996.
[Abreu96b] Abreu, F.B. & Esteves, R. & Goulão, M., «The Design of Eiffel Programs: Quantitative Evaluation Using the MOOD Metrics», Proc. TOOLS USA'96, Santa Barbara, California, USA, August 1996.
[Abreu97]* Abreu, F.B. & Ochoa, L. & Goulão, M., «The GOODLY Design Language for MOOD Metrics Collection», INESC internal report, March 1997.
[Abreu98a]* Abreu, F.B., «The MOOD2 Metrics Set», INESC internal report, April 1998.
[Abreu98b] Abreu, F.B. & Pereira, G. & Sousa, P., «Reengineering the Modularity of Object Oriented Systems», ECOOP'98 Workshop 2, Brussels, Belgium, July 1998.
* - available at http://albertina.inesc.pt/ftp/pub/esw/mood


Workshop 9, paper 7: An Analytical Evaluation of Static Coupling Measures for Domain Object Classes (Extended Abstract)
Geert Poels
Department of Applied Economic Sciences, Katholieke Universiteit Leuven
Naamsestraat 69, B-3000 Leuven, Belgium
[email protected]

Why Measuring Coupling?
In the context of object-oriented software systems, coupling refers to the degree of interdependence between object classes. According to Chidamber and Kemerer, coupling between object classes must be controlled [3, p. 486]:
1. Excessive coupling between object classes is detrimental to modular design and prevents reuse. The more independent a class is, the easier it is to reuse it in another application.
2. In order to improve modularity and encapsulation, inter-object class couples should be kept to a minimum. The larger the number of couples, the higher the sensitivity to changes in other parts of the design, and therefore the more difficult maintenance becomes.
3. A measure of coupling is useful to determine how complex the testing of the various parts of a design is likely to be. The higher the inter-object class coupling, the more rigorous the testing needs to be.

The Context
The method we use to specify an OO domain model is MERODE [8]. This method combines formal specification techniques with the OO paradigm in order to specify and analyse the functional and technical requirements for an information system. A characteristic of MERODE is that it is model-driven. A clear distinction is made between functional requirements related to the application domain (i.e., business functionality), functional requirements related to user-requested information system functionality, and technical requirements. These three types of requirements are respectively specified in the domain model, the function model and the implementation model. The method is model-driven in the sense that, starting from the domain model, the other types of models are generated by incrementally specifying the other types of requirements. Specific to MERODE is that no message passing is allowed between the instances of the domain object classes. However, as MERODE supports the modelling of generalisation/specialisation and existence dependency relationships between

Note: The existence dependency relation is a valuable alternative to the aggregation relation, as its semantics is very precise and its use clear-cut, in contrast with the concept of aggregation [8].


domain object classes, there exist two types of static coupling between domain object classes. The first type is inheritance coupling. As a rule, a child object class inherits the features of its parent object class [7]. A change to the features of the parent object class is propagated into the child object class, as the newly defined features of the child may reference the inherited features. The second type of coupling is abstraction coupling. The relationship between an object class X and an object class Y, where X is existence dependent on Y, is established by declaring in X an attribute v of type Y.

The Static Coupling Measures
The following measures quantify the extent of static class coupling. Let X be a domain object class defined within the domain model DM.
- Inbound inheritance coupling: IIC(X) = 1 if X inherits from an object class in DM, 0 otherwise.
- Outbound inheritance coupling: OIC(X) = the count of object classes in DM that inherit from X.
- Inbound abstraction coupling: IAC(X) = the count of attributes of X that have a class in DM as type.
- Outbound abstraction coupling: OAC(X) = the count of attributes of type X that have been declared in classes of DM.
The next two measures quantify the extent of static coupling in a domain model DM:

Inheritance coupling: IC(DM) = Σ_{X ∈ DM} IIC(X) = Σ_{X ∈ DM} OIC(X) = the count of inheritance relationships in DM

Abstraction coupling: AC(DM) = Σ_{X ∈ DM} IAC(X) = Σ_{X ∈ DM} OAC(X) = the count of abstraction relationships in DM

Note: If there are n existence dependency relationships between X and Y, then X will have n attributes of type Y.
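On a toy domain model, the model-level measures can be computed as in the following sketch (a hypothetical representation with single-inheritance parents and existence-dependency attributes; not the notation of the paper):

import java.util.*;

// Sketch of IC(DM) and AC(DM): each class records its parent (MERODE allows
// single inheritance) and the classes typing its existence-dependency
// attributes; only relationships inside the model are counted.
public class StaticCoupling {
    record DomainClass(String name, String parent, List<String> typedAttrs) {}

    static int ic(List<DomainClass> dm) {           // inheritance relationships
        Set<String> names = new HashSet<>();
        for (DomainClass c : dm) names.add(c.name());
        int ic = 0;
        for (DomainClass c : dm)
            if (c.parent() != null && names.contains(c.parent())) ic++; // IIC(c) = 1
        return ic;
    }

    static int ac(List<DomainClass> dm) {           // abstraction relationships
        Set<String> names = new HashSet<>();
        for (DomainClass c : dm) names.add(c.name());
        int ac = 0;
        for (DomainClass c : dm)
            for (String t : c.typedAttrs())
                if (names.contains(t)) ac++;        // one per IAC occurrence
        return ac;
    }

    public static void main(String[] args) {
        List<DomainClass> dm = List.of(
                new DomainClass("Customer", null, List.of()),
                new DomainClass("Order", null, List.of("Customer")),
                new DomainClass("RushOrder", "Order", List.of("Customer")));
        System.out.println(ic(dm)); // 1
        System.out.println(ac(dm)); // 2
    }
}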


The above measures take into account the strength of the coupling between domain object classes. The measure IIC reflects for instance the MERODE constraint of single inheritance. The abstraction coupling measures count all occurrences of coupling relationships between two classes. However, the measures do not count indirect coupling relationships. Moreover, they are coarse-grained measures that are derived in an early stage from high-level conceptual models.

Analytical Evaluation
We checked whether the above static coupling measures for domain object classes satisfy the coupling properties published by Briand et al. [1]. Measure properties must be regarded as desirable properties for software measures. They formalise the concept being measured as intuitively understood by the developers of the property set. However, most sets of measure properties consist of necessary, but not sufficient, properties [4]. As a consequence, measure properties are useful to invalidate proposed software measures, but they cannot be used to formally validate them. In other words, we wished to check whether our static coupling measures do not contradict a minimal set of subjective, but experienced, viewpoints on the concept of coupling. However, showing the invariance of the coupling properties for our measures is not the same as formally proving that they are valid. The coupling properties of Briand et al. were proposed in [1] and further refined in [2], [5], [6]. We refer to these papers for the formal definition of the coupling properties. Our static coupling measures are invariant to the coupling properties under certain assumptions regarding the representation of a domain model as a generic software system.
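As an illustration of the style of check involved, the following sketch verifies two properties of the kind commonly found in such property sets (nonnegativity and a null value property); the phrasing is generic and is not a quotation of the formal definitions in [1]:

% Sketch: checking IC against two generic coupling properties.
\begin{itemize}
  \item Nonnegativity: $IC(DM) = \sum_{X \in DM} IIC(X) \ge 0$, since
        $IIC(X) \in \{0, 1\}$ for every $X$.
  \item Null value: if $DM$ contains no inheritance relationships, then
        $IIC(X) = 0$ for all $X \in DM$, and hence $IC(DM) = 0$.
\end{itemize}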

Conclusions
In this paper a set of coarse-grained static coupling measures was proposed for domain object classes. The measures are constructed such that they measure the strength of the couplings between object classes, and not merely the number of classes that are coupled to the class of interest (as done by CBO [3]). The measures are analytically evaluated using a well-known set of coupling properties. The conformance to these measure properties does not formally validate our proposed measures, but shows that they do not contradict popular and substantiated beliefs regarding the concept of coupling.

References
[1] L.C. Briand, S. Morasca and V.R. Basili, 'Property-Based Software Engineering Measurement', IEEE Transactions on Software Engineering, Vol. 22, No. 1, January 1996, pp. 68-86.
[2] L.C. Briand, S. Morasca and V.R. Basili, 'Response to: Comments on «Property-Based Software Engineering Measurement»: Refining the Additivity Properties', IEEE Transactions on Software Engineering, Vol. 23, No. 3, March 1997, pp. 196-197.


[3] S.R. Chidamber and C.F. Kemerer, 'A Metrics Suite for Object Oriented Design', IEEE Transactions on Software Engineering, Vol. 20, No. 6, June 1994, pp. 476-493.
[4] B.A. Kitchenham and J.G. Stell, 'The danger of using axioms in software metrics', IEE Proceedings on Software Engineering, Vol. 144, No. 5-6, October-December 1997, pp. 279-285.
[5] S. Morasca and L.C. Briand, 'Towards a Theoretical Framework for Measuring Software Attributes', Proceedings of the IEEE 4th International Software Metrics Symposium (METRICS'97), Albuquerque, NM, USA, November 1997.
[6] G. Poels and G. Dedene, 'Comments on «Property-Based Software Engineering Measurement»: Refining the Additivity Properties', IEEE Transactions on Software Engineering, Vol. 23, No. 3, March 1997, pp. 190-195.
[7] M. Snoeck and G. Dedene, 'Generalisation/Specialisation and Role in Object Oriented Conceptual Modeling', Data and Knowledge Engineering, Vol. 19, No. 2, June 1996, pp. 171-195.
[8] M. Snoeck and G. Dedene, 'Existence Dependency: The key to semantic integrity between structural and behavioural aspects of object types', IEEE Transactions on Software Engineering, Vol. 24, No. 4, April 1998, pp. 233-251.

Workshop 9, paper 8: Impact of Complexity on Reusability in OO Systems
Yida Mao, Houari A. Sahraoui and Hakim Lounis
CRIM
550 Sherbrooke Street West, #100, Montréal, Canada H3A 1B9
{ymao, hsahraou, hlounis}@crim.ca

Introduction
It is widely recognized today that reuse reduces the costs of software development [1]. This reduction is the result of two factors: (1) developing new components is expensive, and reuse avoids part of that development; and (2) reusable components are supposed to have been tested and thus are not expensive to maintain. Our concern is to automate the detection of potentially reusable components in existing systems. Our position is that some internal attributes, like complexity, can be good indicators of the possibility of reuse of a component. We present an experiment for verifying a hypothesis on the relationship between volume and complexity on the one hand, and the reusability of existing components on the other. We derived a set of related metrics to measure components' volume and complexity. This verification is done through a machine-learning approach (the C4.5 algorithm with windowing and cross-validation). Two kinds of results are produced: (1) a predictive model is built using a set of volume and complexity metrics, and (2) for this predictive model, we measure its completeness, correctness, and global accuracy.


A Reusability Hypothesis and its Derived Metrics
Different aspects can be considered to measure empirically the reusability of a component, depending on the adopted point of view. One aspect is the amount of work needed to reuse a component from one version of a system in another version of the same system. Another aspect is the amount of work needed to reuse a component from one system in another system of the same domain. This latter aspect was adopted as the empirical reusability measure for our experiment. To define the possible values for this measure, we worked with a team in CRIM specializing in developing intelligent multi-agent systems. The obtained value classes are:
1. Totally reusable: the component is generic to a certain domain (in our case "intelligent multi-agent systems").
2. Reusable with minimum rework: less than 25% of the code needs to be altered to reuse the component in a new system of the same domain.
3. Reusable with a high amount of rework: more than 25% of the code needs to be changed before reusing the component in a new system of the same domain.
4. Not reusable at all: the component is too specific to the system to be reused.
Measures of complexity and volume have been shown to be able to predict the maintenance effort and the cost of rework in reusing software components [3], [2]. We define our hypothesis as follows:
Hypothesis: A component's volume and complexity somehow affect its reusability.
The following are some of the thirteen metrics that were selected to measure the volume and complexity of a candidate class according to the hypothesis: WMC (Weighted Methods Per Class), RFC (Response For Class), NAD (Number of Abstract Data Types).

Hypothesis Verification
We used the data from an open multi-agent system development environment called LALO. This system has been developed and maintained since 1993 at CRIM. It contains 87 C++ modules/classes and approximately 47K source lines of C++ code (SLOC). The actual data for the suite of measures we proposed in our hypothesis were collected directly from the source code. We used an OO metrics tool, QMOOD++ [5], to extract Chidamber and Kemerer's and Bansiya's OO design metrics. To verify the hypothesis stated in section 2, we built characterization models that can be used to easily assess class reusability based on their type and their level of complexity. The model building technique that we used is a machine learning algorithm called C4.5 [4]. C4.5 induces classification models, also called decision trees, from data. C4.5 derives a rule set from a decision tree by writing a rule for each path in the decision tree from the root to a leaf. In that rule the left-hand side is

Note: Details on the work of this team can be found at http://www.crim.ca/sbc/english/lalo/


easily built from the labels of the nodes and the labels of the edges. The resulting rule set can be simplified. To evaluate the class reusability characterization model based on our measures, we need criteria for evaluating the overall model accuracy. Evaluating model accuracy tells us how good the model is expected to be as a predictor. If the characterization model based on our suite of measures provides a good accuracy, it means that our measures are useful in identifying reusable classes. Three criteria for evaluating the accuracy of predictions are the measures of correctness, completeness, and global accuracy. As we can see in table 1, C4.5 presents good results in our experiment. The results are pretty high for the verification of the hypothesis of complexity. On the other hand, C4.5 induced a rules-based predictive model for the hypothesis. The size of the predictive model is 10 rules. An example of these generated rules is given by the following rule: Rule 5: NOT > 2, NAD ...

... expressed by some stakeholder, and then they may be understood by some other stakeholder. As soon as requirements are expressed, they assume a syntactic and lexical form (in some arbitrary language and notation) that encodes their intended semantics. It is worthwhile to notice that this encoding can well be imprecise: actually, the most common language for expressing requirements, natural language (NL), is typically quite fuzzy when it comes to precise semantics; the same holds for other informal or semi-formal notations that are in widespread use. Ideally, the semantic content of a requirement should be independent from a particular language, and in most cases can be communicated by different means (e.g., an entity-relationship diagram can be "read aloud"). On the other hand, the lexical content (i.e., the choice of names for things) depends on the real-world entities found in the domain, and can usually be moved among different languages and notations (e.g., "user" can be used both as a noun in a NL requirement, or as a label in a data flow diagram). Correct and complete communication among the stakeholders occurs when the semantic content of a requirement, tied to the problem domain via its lexical content, can be transferred without degradation from "the head" of a stakeholder to "the head" of a different one. This observation takes us to the main requirement that a CRW supporting system should satisfy: it must be able to successfully translate requirements forth and back among different models, languages and notations. Naturally, a full supporting environment has an opportunity to perform several other useful functions, e.g., the formal validation of models, the extraction of product and process metric data, the handling of conflicts. However, in this paper we concentrate on the issue of providing multiple views on the requirements.

2

CIRCE: supporting CRW processes

In order to experiment with an effective CRW setting, we developed a prototype environment called CIRCE. At the architectural level, CIRCE can be described in terms of four classes of components, communicating through a shared blackboard: a centralized semantic repository stores the (possibly inconsistent) knowledge extracted from the requirements expressed by the users of the system (i.e., by the stakeholders); a number of modelers use the information stored in the repository to build models from the requirements; higher-level modelers can also exploit models built by lower-level modelers; translators provide an effective representation for the models, turning them from abstract entities into text, tables or graphs; last, these representations are output in an appropriate interaction environment that takes care of showing them to the user and, in some cases, allows the user to edit them. In such cases, the whole series of transformations can be taken in the opposite direction, so that user editing is reflected by an updated semantic repository.
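The four classes of components can be rendered as interfaces around the shared repository, as in the following Java sketch (a hypothetical rendering for illustration; it is not CIRCE's actual implementation):

import java.util.*;

// A minimal sketch of the four component classes described above,
// communicating through the shared semantic repository (blackboard).
interface SemanticRepository {
    void store(String fact);              // possibly inconsistent knowledge
    List<String> query(String pattern);
}

interface Modeler {                       // builds models from the repository
    Object buildModel(SemanticRepository repo);
}

interface Translator {                    // abstract model -> text/table/graph
    String render(Object model);
    Object parse(String representation);  // back-translation on user edits
}

interface InteractionEnvironment {        // e.g. a Web browser front end
    void show(String representation);
    Optional<String> edits();             // edited representation, if any
}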


Our current prototype includes a translator called Cico that handles NL input and output [1], and several modelers for data flow diagrams, entity-relationship diagrams, aggregation and derivation diagrams, communication diagrams and a limited dynamic behaviour representation. A user of the system can freely move among all these different views on the requirements he or she (or even a different user altogether) has expressed. CIRCE also includes a number of modelers devoted to the extraction of metric data and to the validation of the requirements, both at the representation level (e.g., linguistic correctness of the NL requirements, ambiguities, redundancies) and at the model level (e.g., well-formedness of data flow diagrams). As interaction environment we opted for a standard Web browser complemented by some custom Java and JavaScript code. This choice has several advantages; among them, the immediate availability of the interaction environment on practically every computing platform, and the opportunity to make good use of the layout control offered by HTML. Another advantage that has proven to be of paramount importance in practical use is the seamless integration of the various representations with widespread personal and group communication methods: stakeholders can exchange their views via email or discussion groups, and can easily publish snapshots or even "live" diagrams on the Web. This integration greatly simplifies direct interaction among the stakeholders, and this in turn favours conflict resolution and composition.

3

Conclusions

Cooperative requirement engineering is a promising methodology, but it cannot deliver its promises without a flexible and practical supporting environment. Previous work in this area has concentrated on providing support to the requirement engineer, but not to other classes of stakeholders. In view of the substantial number of stakeholders that should participate in the requirement definition in a true CRW process, these approaches turn out to be too weak. We believe that with environments providing easy integration of different views, like our prototype described in Section 2, and by effective use of existing inter-personal communication facilities, cooperative requirement writing can prove not only more complete, but also simpler and less expensive than the traditional, labour-intensive mediated approach.

References
1. V. Ambriola and V. Gervasi. Processing natural language requirements. In Proceedings of ASE 1997, pages 36-45. IEEE Press, 1997.
2. P. G. Brown. QFD: Echoing the voice of the customer. AT&T Technical Journal, 70(3):18-32, Mar.-Apr. 1991.
3. A. Cucchiarelli et al. Supporting user-analyst interaction in functional requirements elicitation. In Proceedings of the First Asia-Pacific Software Engineering Conference, pages 114-123, Los Alamitos, California, Dec. 1994. IEEE CS Press.
4. J. M. Drake, W. W. Xie, W. T. Tsai, and I. A. Zualkernan. Approach and case study of requirement analysis where end users take an active role. In E. Straub, editor, Proceedings of the 15th International Conference on Software Engineering, pages 177-186, Baltimore, MD, May 1993. IEEE Computer Society Press.
5. K. El Emam, S. Quintin, and N. H. Madhavji. User participation in the requirements engineering process: an empirical study.

A test such as s > threshold is a computation whose result is used by the control. If we want to prove that when the speed s is greater than threshold a given signal is emitted, the computation induced by this test must be done outside the control module, so that only a boolean occurs as an argument of the if. Furthermore, if the speed must be in a certain range, instead of being less than threshold, the change will not concern the control module (which might have been validated) but only the module that evaluates the condition. Moreover, if the evaluation of the control criterion is complex, one will be able to use standard libraries, and will not have to take into account what is provided by the language used for the control module when writing the code associated with the computation. This example illustrates two major inconveniences that arise when control and computations are intertwined: the scope of the validation tools is limited by the occurrence of data in the control, and the reuse of computation modules is limited by the need to change the way they are controlled. We propose an approach and tools for the modular development of control and computation modules. The main difficulty is to make explicit the retro-action (feedback) loops that are often hidden inside the computations.


One advantage of this approach is that it enables one to choose the best paradigm for each component. We will use the synchronous reactive approach [2] for control, and a synchronous (in the signal-processing sense: data production and consumption rates are in fixed ratio) data flow approach for data processing. Such a modular development brings the need for a communication interface between modules, and an execution machine that enables them to run together [1]. We have based the associated tools on the Ptolemy [3] platform (http://ptolemy.eecs.berkeley.edu), developed at the University of California at Berkeley. This platform uses objects to enable the integration of several computation paradigms called "domains". A major benefit coming from the use of Ptolemy is that it is possible to use several domains within the same application, and that a domain used for simulation, for instance "Synchronous Data Flow" (SDF), may have a corresponding dual domain used for code generation, for instance "Code Generation C" (CGC). It is therefore possible to simulate a system, then to generate the corresponding code just by choosing the dual domain.

2

Control and Computation

The distinction between control and computation is not as clear as it is shown in the above example, and it cannot be made by considering only syntactical aspects. During the conception of the application, decisions must be taken on what will be considered as control and what will be considered as computations. When verification tools are used on a module, it means that this module has already been identified as control, and it is then easy to take away all computations. The main difficulty is to identify control that is embedded in computations, for instance adaptive filters that change their coefficients according to their input. A possible criterion for the identification of control is that it changes the way data is handled, so some programming language control statements may not be considered as control in some contexts; think of a for loop used to compute the scalar product of two vectors. The approach we advocate is therefore to introduce, in the development of an application, an explicit step where control modules are identified. All computations are then removed from these modules and put into specific computation modules or implemented using standard modules from libraries. Thus, control modules will only receive and produce boolean values, and it will be possible to use formal verification tools on them. This would be very difficult or even impossible to do if the module had to deal with data types even as simple as integers.
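The criterion can be made concrete with a small sketch (hypothetical Java example): the for loop below is pure data handling even though it uses a control statement, and the control part only ever sees the boolean that the computation produces:

// Sketch of the criterion above: the loop is a computation, not control,
// even though it uses a control statement; the control module only reacts
// to the boolean that the computation module emits.
public class ControlVsComputation {
    // Computation module: data in, boolean out.
    static boolean aboveThreshold(double[] a, double[] b, double threshold) {
        double dot = 0.0;
        for (int i = 0; i < a.length; i++) dot += a[i] * b[i]; // pure data handling
        return dot > threshold;
    }

    // Control module: reacts to booleans only, so it stays easy to verify.
    static String control(boolean alarmCondition) {
        return alarmCondition ? "EMIT alarm" : "IDLE";
    }

    public static void main(String[] args) {
        boolean condition = aboveThreshold(new double[]{1, 2}, new double[]{3, 4}, 10);
        System.out.println(control(condition)); // EMIT alarm (dot = 11 > 10)
    }
}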


3


Example: A cruise controller

Although this example has been completely treated, we will only discuss its results for the sake of brevity. We consider a cruise control system for a car; whatever the sophistication of the system, it is mandatory to prove that an action on the brakes disables the cruise control. A sensor gives the position of the brake pedal as, for instance, an integer between 0 and 255. If the control module receives this raw information, it must compare the position of the pedal to its default position. The number of possible states of the control module is therefore multiplied by the number of values an integer can take (or at least by 256 if the verification tool is clever). With several valued inputs, the number of possible states can become so large that no tool could explore them in realistic time and memory. Moreover, to handle such values, the verification tool must have a formal specification of their data type; it must know what pedal > 0 means. By performing the comparison in a computation module and feeding the control module with the boolean result of the comparison, we could formally check that the dangerous state (active control and brakes) could not be reached for any combination and history of the inputs. We are therefore sure that the design of this control module is correct with respect to this point.
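A minimal sketch of this separation (a hypothetical Java rendering; in the actual system the control module would be an Esterel or Lustre automaton, which is what makes it amenable to formal verification):

// Sketch: the pedal comparison lives in a computation module, so the control
// automaton sees a single boolean and its state space stays small enough
// for exhaustive checking.
public class CruiseControl {
    // Computation module: raw sensor value (0..255) -> boolean.
    static boolean brakePressed(int pedalPosition, int restPosition) {
        return pedalPosition > restPosition;
    }

    // Control module: a two-state automaton over booleans only.
    static boolean nextCruiseActive(boolean cruiseActive, boolean setRequested,
                                    boolean brakePressed) {
        if (brakePressed) return false;          // braking always disables cruise
        return cruiseActive || setRequested;
    }

    public static void main(String[] args) {
        boolean active = nextCruiseActive(false, true, brakePressed(0, 0));
        System.out.println(active);                                       // true
        System.out.println(nextCruiseActive(active, false, brakePressed(42, 0))); // false
    }
}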

4 Integration of Control and Computation Modules

Separate development of control and computations leads to another difficulty: they must finally be integrated to build the application. A first, simple method is to write the control modules in Esterel or Lustre and translate them into SDF or CGC. We have developed a translator, ocpl (available by anonymous ftp at ftp://ftp.supelec.fr/pub/cs/distrib/), from the OC state machine produced by the Esterel or Lustre compiler to the PL language that describes stars (the basic entities of Ptolemy). The semantics of the communications between control and data processing is therefore built into the translator. Moreover, with this approach, all data processing modules run continuously, and the control modules select the right outputs according to the current mode. This is not very efficient, because some time is wasted computing outputs that are not used. Another approach is to use the “Synchronous Reactive” domain to specify the control, and the SDF domain to specify the computations. SR [4] makes it possible to build synchronous reactive systems by assembling components. At the start of each instant, all signals are unknown except the inputs of the system. The components are activated according to a statically determined schedule, and each time they are activated, they produce as much output as they can from their known inputs. The final values of the signals are the least fixed point of the system.
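The sketch below is our own rendering, in C++, of one SR instant; the real SR domain uses a statically determined schedule, whereas this illustration simply iterates the (assumed monotone) components to quiescence, which converges to the same least fixed point:

    #include <functional>
    #include <vector>

    // Three-valued signal status used during one synchronous instant.
    enum class Sig { Unknown, Absent, Present };

    // A component reads/writes the signal vector and returns true if it
    // changed anything. Names are illustrative.
    using Component = std::function<bool(std::vector<Sig>&)>;

    // One instant: start with all non-input signals Unknown, then
    // activate components until the least fixed point is reached.
    void instant(std::vector<Sig>& signals,
                 const std::vector<Component>& comps) {
        bool changed = true;
        while (changed) {
            changed = false;
            for (const auto& c : comps)
                changed |= c(signals);  // monotone: Unknown -> decided only
        }
    }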



SR is a good candidate for writing control modules in Ptolemy, and we have developed the dual SRCGC domain (for C code generation) in collaboration with Thomson-CSF Optronique. This way, a complete application can be built, with control developed in SR or SRCGC, and data processing in SDF or CGC. Unfortunately, we can only generate “strict” SR stars from OC (strict stars can react only when all their inputs are known), so we have developed a new sscpl code generation tool based on the SSC format of Esterel. This format contains information on the dependencies between signals, and it is therefore possible to compute some outputs without knowing all inputs. With this approach, the semantics of the communications between control and data processing is implemented by special interfaces called “wormholes” in Ptolemy. This allows the control to change the schedule of computations, and therefore to trigger only the necessary computations. However, a few technical issues still prevent this scheme from working with the code generation domains; they should be solved soon.

5 Conclusion

The object-oriented approach used in Ptolemy allowed us to integrate control modules written in Esterel or Lustre with data processing modules written in SDF, both for simulation (use of the SDF domain) and code generation (use of the CGC domain). Work remains to be done for the full exploitation of this approach, both at the methodology level and at the tool level. The immediate benefits are more structured code and the ability to check properties of the control. In the long term, we should benefit from well-designed libraries of data processing and control components.

References

[1] C. André, F. Boulanger, M.A. Péraldi, J.P. Rigault, G. Vidal-Naquet: Objects and Synchronous Programming. European Journal of Automation, Vol. 31, No. 3, 1997, pp. 418-432.
[2] A. Benveniste, G. Berry: The Synchronous Approach to Reactive and Real-Time Systems. Proceedings of the IEEE, Vol. 4, April 1994.
[3] J.T. Buck, S. Ha, E.A. Lee, D.G. Messerschmitt: Ptolemy, a Framework for Simulating and Prototyping Heterogeneous Systems. International Journal of Computer Simulation, 19(2):87-152, November 1992.
[4] S.A. Edwards: The Specification and Execution of Heterogeneous Synchronous Reactive Systems. Ph.D. thesis, University of California, Berkeley, May 1997.

TDE: A Time Driven Engine for Predictable Execution of Realtime Systems

F. De Paoli, F. Tisato, C. Bellettini
Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano
Via Comelico, 39 - 20135 Milano, Italy
{depaoli, tisato}@dsi.unimi.it

Abstract. One of the most important qualities of run-time supports for real-time systems is predictability. Knowledge of the behavior and execution time of a system is necessary to define feasible and dependable real-time scheduling. Real-time operating systems are often based on concurrency models that are intrinsically unsuitable for real-time execution, since timing remains external to the execution model; such an approach forces time constraints to be translated before real-time execution can be achieved. TDE bases the execution of tasks directly on their time constraints. This approach allows the system designer to concentrate on timing issues, and the run-time support to constantly control the system behavior according to timed plans. TDE has been integrated with a commercial real-time operating system. This paper presents the concepts behind TDE and describes the architecture of the implementation.

1. Introduction

Run-time supports for real-time systems can be classified as event-triggered or time-triggered architectures. The former is based on event or interrupt detection and the consequent activation of an action. The latter operates in accordance with clock times, as shown by an integrated independent clock or as determined from the readings of a system of clocks; as the logical clock reading reaches certain predefined values, an appropriate action is selected for execution from a lookup table [1]. Both approaches have advantages and drawbacks. Event-triggered architectures are best suited to reactive systems, as they are triggered by external interrupts. Time-driven architectures simplify reasoning about the behavior of a system, since time issues are more explicit. Event-triggered architectures are more effective for systems that have to (re)act “as quick as possible”, while time-driven ones are better for systems that have to (re)act “at the right time” [2], [3]. In the latter case, a time-driven approach leads to more robust systems, since it delivers predictable systems, which are easier to test and verify.

This work was partially supported by the European Community – Esprit Projects OMI/CORE and OMI/MODES.



This paper presents an architecture for synchronous, time-driven execution models, and its implementation and integration in the commercial operating system EOS. TDE is part of the HyperReal project, whose goal is to develop a complete development environment and run-time support for hard real-time systems [4]. A preliminary description of the TDE approach was presented at the workshop on Object Technology and Real-Time of ECOOP'96 [5]. The integration with EOS is one of the results of the Esprit project OMI/MODES.

Most real-time operating systems, including EOS [6], are event-driven systems that adopt task-priority and time-slice scheduling policies. More recently, time-driven systems have been introduced; examples are MARS [7], which exhibits a totally deterministic periodic behavior, and ESTEREL [8], which introduces a synchronous approach to the design of real-time distributed systems. There are systems, like the one we present in this paper, that also support event-driven computation. Examples of systems accommodating either sporadic behavior (i.e., activated by external events) or periodic behavior (i.e., activated on a timing basis) are Spring [9] and MARUTI [10].

The TDE approach aims at defining abstractions to support the definition of a system as a set of objects that behave as autonomous, reactive components (agents), and controllers, whose task is to drive the agents' execution. Controllers encapsulate all the issues related to synchronization and scheduling, thus making them "programming-in-the-large" issues. In other terms, agents, which are "programming-in-the-small" components, are designed without any embedded assumption on timing and synchronization. The controller drives the agents' execution to meet the system's requirements. This separation of concerns has several advantages, ranging from readability and tailorability enhancement to formal verifiability. In particular, this approach leads to modular design of systems. Modularity allows designers to build tailored systems out of existing components, and supports the integration of these systems with existing environments.

The paper is organized as follows: Section 2 illustrates the TDE concepts and architecture; Section 3 discusses the implementation of TDE under the EOS operating system; finally, Section 4 draws some conclusions.

2. The architecture of TDE

The goal of the Time Driven Engine (TDE) is to define an infrastructure to support the execution of systems with the characteristics described above. In particular, TDE supports the definition and implementation of the reactive components (agents) and provides the run-time support to drive their execution. The run-time support is implemented by a dispatcher that receives timed sequences of actions from a planner and lets agents run accordingly. The planner and the dispatcher form the controller of a system. Fig 1 illustrates the TDE architecture. In the following, each component is examined in detail.

Agents are designed as objects with a private part and an interface. Agents are components that react to commands issued by the dispatcher. The implementation splits a command into two parts: the selection of the operation to be executed, and the execution of that operation by the agent. To let the dispatcher select the operation, an agent exports a set of control variables that can be set to signal events to the agent. When an agent has a control variable set, it can execute. The effect of the execution is described by the behavior of the agent. The behavior is modeled as a finite-state automaton that defines how the agent reacts to a certain event. This means that an agent behaves according to its internal status and external events.

Fig 1. The architecture of TDE. [Figure: the controller comprises the planner and the dispatcher, which share the current plan and the working plan; the dispatcher issues commands to Agents 1-3; events flow from the environment to the controller.]

The private part encapsulates data and operations. Operations are atomic: they do not include any suspensive primitive, and they cannot be pre-empted, which ensures deterministic execution. Agents are the basic building blocks of systems that are driven by the controller. They are not aware of when operations are executed, nor are they capable of taking any scheduling-related decision. An agent may notify the environment about its internal status by means of signals (events), and may exchange data with other components without explicit synchronization.

The control has been split into two parts to separate planning from dispatching. This leads to modular controllers that separate the machine-dependent aspects from those dealing with the semantics of the application. A planner is in charge of setting up a timed plan that describes the activities to be performed by agents. The dispatcher is in charge of executing that plan by translating the actions of the plan into commands that drive the agents.

A plan is a set of actions that represent the commands that can be scheduled and dispatched to agents. An action specifies an operation and the timing constraints associated with it. Constraints are expressed by the worst-case execution time (wcet) and by a validity interval ([after, before]), i.e., the lower bound after which the action can be executed and the upper bound before which the action must be completed. A plan is associated with a reference clock that defines the timeline. The rate of the reference clock is set by the application itself as a rational number x/y, stating that there are x ticks of the reference clock every y ticks of the system clock. The value x=y=1 represents a reference clock with the same speed as the system clock. The rate can be modified to change the execution speed of the associated plan.

The internal architecture of a planner is not part of TDE. Every designer is free to choose the solution that best fits the application domain requirements; all that is necessary is to conform to the TDE interfaces, i.e., to read from the environment object and to write the current plan. The planner is also in charge of setting the rate of the reference clock associated with the current plan and used by the dispatcher. Planning activities can be carried out on the basis of default assumptions or by inspecting the environment, which means monitoring events coming from outside the application or from agents. The former case leads to predictable scheduling, while the latter leads to dynamic scheduling. TDE can accommodate both situations, but assumes the former as the basic model. For details on planner design and time issues, the reader can refer to [5].

The dispatcher is the kernel of any TDE implementation. It is a platform-dependent component that simply reads the current plan and executes the specified actions according to the underlying hardware/software system. It checks timing constraints to verify that the actual behavior matches the behavior specified by the planner through the current plan. The next section describes the implementation of TDE under EOS, which includes the implementation of the dispatcher.
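As an illustration of the concepts of this section, here is a possible data layout for actions and plans (the field names are ours, derived from the constraints described above, not taken from the TDE sources):

    #include <cstdint>
    #include <vector>

    using Tick = std::uint64_t;  // time expressed in reference-clock ticks

    struct Action {
        int  agentId;      // which agent receives the command
        int  operationId;  // which exported operation to trigger
        Tick after;        // lower bound: not dispatched before this tick
        Tick before;       // upper bound: must be completed by this tick
        Tick wcet;         // worst-case execution time of the operation
    };

    struct Plan {
        // Reference-clock rate x/y: x reference ticks per y system ticks;
        // x = y = 1 means the plan runs at system-clock speed.
        unsigned rateX = 1, rateY = 1;
        std::vector<Action> actions;  // timed sequence read by the dispatcher
    };

Nothing in this layout is platform-dependent; only the dispatcher that interprets it against the underlying hardware/software system is.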

3. The implementation of TDE

TDE has been implemented as a collection of C functions that can be compiled and executed as a Unix process under SunOS and Linux, and as a collection of tasks under EOS; finally, the TDE kernel was embedded into the EOS operating system to drive the execution of EOS tasks as TDE agents. In this paper we refer to the EOS implementation of TDE. The TDE programming interface for the design and implementation of agents and planners remains unchanged across implementations; TDE components are linked against different libraries to obtain different implementations.

To design planners, TDE provides a set of predefined classes to handle plans and actions. Through them it is possible to create actions, set up the current plan, and control the reference clock. Moreover, an environment object has been defined to let the planner detect events and receive data inputs. For a detailed description of the programming interface, the reader can refer to [11] and [12].

EOS is a scalable, multi-tasking operating system designed specifically for real-time embedded applications and for addressing the key manufacturing requirement of cost efficiency. The goal of EOS is to support embedded application developers in achieving their objectives of low-cost, high-quality, timely products, whilst providing the assurance that a configurable software platform can easily be adapted to their chosen hardware architecture. The state-of-the-art EOS architecture overcomes the problems of the traditional monolithic approach because each class of services, and each service within a class, is scalable, from a full set of functionality down to the barest minimum. In this way, a complete system can be built ranging from simple, explicit task scheduling to a full operating system with time-slice and priority scheduling.


The TDE architecture can be implemented in different ways to accommodate different scenarios. In the EOS implementation of TDE, agents are executed by tasks under the supervision of the controller. The controller can be implemented by a single task or by two independent tasks: in the former case, planner and dispatcher are executed by the same task; in the latter, planner and dispatcher are executed by dedicated tasks sharing the current plan. If the controller is implemented by two tasks, the dispatcher becomes the kernel of the system, since it is in charge of controlling the execution of every other task, including the planner task. As a consequence, the choice between the single-task and multi-task implementations affects the behavior of the system, and therefore the kind of scheduling policy. There are three main scenarios: predictable scheduling, dynamic scheduling, and a combination of the two.

Predictable scheduling means that the planning activity can be carried out off-line. A planner defines the timed plan, then the dispatcher executes that plan. The planner does not exist at run time, so the controller reduces to the dispatcher. In this case the multi-task implementation is preferred, since the programmer only has to implement the planner task, the dispatcher task being a standard one.

Dynamic scheduling requires event detection and reaction. This means that planning is a run-time, real-time activity. In this case, the single-task implementation of the controller has the advantage of simplicity: since the planner has to detect event occurrences and react by defining the next sequence of actions to be executed, it becomes the system driver, and the dispatcher becomes a slave that executes the plan under the control of the planner. The multi-task implementation is less natural, since the dispatcher needs to be instructed to execute the planner task when an event has to be served. This means that the planner has to include planning actions in the current plan; event detection therefore becomes synchronous and predictable with respect to the system behavior. However, this is a common situation for many real-time systems, since both predictable and dynamic scheduling are often required.

The sharing of the current plan between planner and dispatcher introduces a critical section that should be executed in mutual exclusion by the two tasks, which raises the classic problem of “priority inversion”. To solve the problem, a second plan has been introduced that can be freely manipulated by the planner while the dispatcher executes the current plan. The TDE architecture sketched in Fig 1 illustrates a multi-task implementation of the controller and highlights the current plan and this second plan, called the working plan.
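One possible realization of the current/working plan pair is sketched below (our scheme, not necessarily TDE's exact mechanism): the planner edits a private working plan and publishes it with a single atomic pointer swap, so the two tasks never share a mutable plan and the critical section disappears:

    #include <atomic>
    #include <memory>

    struct Plan { /* timed actions and clock rate, as sketched earlier */ };

    class PlanBuffer {
        std::shared_ptr<const Plan> current_ = std::make_shared<Plan>();
    public:
        // Dispatcher side: take an immutable snapshot of the current plan.
        std::shared_ptr<const Plan> acquire() const {
            return std::atomic_load(&current_);
        }
        // Planner side: publish a fully built working plan in one swap.
        void publish(std::shared_ptr<const Plan> working) {
            std::atomic_store(&current_, std::move(working));
        }
    };

Since the dispatcher never blocks waiting for the planner, no priority inversion can arise from this exchange.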

4. Conclusions

Most of the existing real-time kernels derive from time-sharing systems. Computation times, time constraints and precedence relations are not considered in the scheduling algorithm. As a consequence, no guarantee on timed behavior can be achieved; the only achievable behavior is a “quick” reaction to external events and a “small” average response time. These features are suitable for soft real-time systems, but they are too weak for time-critical applications. A time-driven kernel has been demonstrated to be more robust, since time issues are taken into account even at run time. Moreover, a time-driven approach often simplifies the design and verification process, since the domain expert can express the application constraints in a natural way and the programmer just has to translate them into the programming paradigm.

TDE is a time-driven system providing support for synchronous, deterministic computation. TDE has been demonstrated to be a suitable platform for predictable, periodic systems. Its advantages are related to design simplicity (time constraints are visible at every level of the development process), reusability (agents are threaded components that exhibit context-independent behaviors), tailorability (the controller can be designed in different ways to meet several application needs) and, finally, robustness (any TDE application can be tested in host environments and then ported to targets, since every component remains unchanged and is executed in the same way).

References

1. Nissanke, N.: “Realtime Systems”, Prentice Hall Series in Computer Science, ISBN 0-13-651274-7, Prentice Hall, 1997.
2. De Paoli, F., Tisato, F.: “On the complementary nature of event-driven and time-driven models”, Control Engineering Practice - An IFAC Journal, Elsevier Science, June 1996.
3. Stankovic, J.: “Misconceptions about Real-Time Computing”, Computer, Vol. 21, October 1988, pp. 10-19.
4. De Paoli, F., Tisato, F., Bellettini, C.: “HyperReal: A Modular Control Architecture for HRT Systems”, Euromicro Journal of System Architecture, 1996.
5. Tisato, F., Bellettini, C., De Paoli, F.: “Agents, Plans and Virtual Clocks: Basic Classes for Real-Time Systems”, in Special Issues in Object Oriented Programming, M. Muhlhauser (ed.), ISBN 3-920993-67-5, dpunkt-Verlag, 1997.
6. EOS web site, http://www.etnoteam.com/EOS.
7. Damm, A., Reisinger, J., Schwabl, W., Kopetz, H.: “The Real-Time Operating System of MARS”, Operating System Review, July 1989, pp. 141-157.
8. André, C., Peraldi, M., Boufaied, H.: “Distributed synchronous processes for control systems”, in Proc. of the 12th IFAC Workshop on Distributed Computer Control Systems, Toledo, Spain, September 28-30, 1994.
9. Stankovic, J.: “The Spring Kernel: a New Paradigm for Real-Time Systems”, IEEE Software, May 1991, pp. 62-72.
10. Gudmundsson, O., Mose, D., Ko, K., Agrawala, A., Tripathi, S.: “MARUTI, an Environment for Hard Real-Time Applications”, in Mission Critical Operating Systems, A. Agrawala, K. Gordon, and P. Hwang (eds.), IOS Press, 1992.
11. De Paoli, F.: “Specification of Basic Architectural Abstractions of TDE”, Esprit Project 20592 OMI/MODES TR 5.9.1, version 2, October 1997.
12. De Paoli, F.: “Engineered Implementation of TDE”, Esprit Project 20592 OMI/MODES TR 5.9.4, October 1997.

Virtual World Objects for Real Time Cooperative Design

Christian Toinard (1) and Nicolas Chevassus (2)

(1) CEDRIC, Centre d'Etudes et de Recherche en Informatique, CNAM, 292 rue Saint-Martin, 75141 Paris Cedex 03, France. [email protected]
(2) AEROSPATIALE - Centre Commun de Recherches, 12 rue Pasteur, BP 76, 92152 Suresnes Cedex, France. [email protected]

Abstract. This proposition presents an architecture for the rapid prototyping of manufacturing products. This architecture satisfies the following requirements: cooperative design of a shared virtual world, concurrent and real-time interactions of different users, dynamic distribution of the virtual scene among the different users, distributed world consistency, persistence, and fault recovery. Currently, few solutions answer these requirements entirely. The virtual scene is distributed dynamically over private spaces according to the cooperative interactions. A user is aware in real time of operations carried out by other participants. A distributed concurrency control guarantees the consistency of the distributed scene. Persistence is provided in a distributed way. A user can go on working despite the faults of other machines.

1 Presentation and Justification

1.1 Principles

The system manages a copy of the scene for each user. A copy is composed of two parts: a private space and the replicas of distant private spaces. A private space enables a participant to work, at the same time, on different subtrees of the scene. Concurrent operations are processed within different private spaces. A subtree can be transmitted to another private space. A private space is consistent because it is modified only by local interactions. A replica can be inconsistent, but the inconsistency can be recovered through a subtree transmission. A private space is persistent through savings in the home directory of the user. Only a subtree transmission requires a central server, so the private spaces and the replicas are able to work without the central server. Moreover, a private space can be used in isolation: it does not require the presence of any distant machine.

A public space is involved in the transmission of a subtree. The public space runs on the central server. The server provides name services in order to localize a subtree. A user can update the public space with saving operations when he decides to make his private space modifications more widely available. Thus, the location of a newly created node becomes available to distant users. Otherwise, a new node can be observed by a distant user but cannot be transmitted to a distant private space. The public space realizes a redundant storage in the event of the breakdown of a private space. The public space does not ensure global persistence; in fact, persistence is carried out in a distributed way by the various private spaces.

Generally, existing solutions do not distribute the scene tree according to the cooperative interactions, and a unique space of work is maintained. In [8][3][2][1], a filtering of received events is carried out when a user only observes a part of the scene. These solutions rest on a partitioning of the virtual world into cells in order to limit the size of the presented graphic scene. The world is divided according to a geographic or static partitioning. [3] defines a dynamic partitioning which is not perfect with respect to the motion of objects. [4] and [1] do not guarantee the consistency of the copies with respect to concurrent actions. [2] considers a technique of concurrency control in order to manage the motion of objects.

Our system specifically addresses the cooperative context. The animation of moving objects in a simulation context (video games or military simulations of battlefields) is not our main purpose. A distributed cooperation on a global scene is provided within the disjoint private spaces. A subtree of a private space can be split again into several subtrees within different private spaces in response to other users. Thus, the distribution is carried out dynamically according to the user requests. Different cooperative schemes are allowed using a distributed concurrency control for the distributed scene tree. Thus, concurrency is achieved while preserving consistency. Our new solution suits a cooperative activity.

1.2 Real Time Animation

A user operation (creation, destruction, or update of a node) is transmitted in real time to the replicas. The events are multicast through a best-effort transmission using UDP. The solution does not require any reliable multicast; thus, the transmission time is as short as possible. Our solution uses neither a reliable transmission nor a reordering of events. For that reason, a replica can be inconsistent. The inconsistency is recovered, when required, using the subtree transmission. For example, when a user wants to modify a subtree that belongs to a distant private space, a subtree transmission is carried out and all the possible inconsistencies are recovered for the requester. At the same time, the subtree transmission provides both error recovery and consistency.

Generally, the solutions in the literature aim at a real time which is as exact as possible [5][3][8][4][1][2]. Some solutions [4][6][2] try to recover the transmission errors. For the update of moving objects, solutions [4][6] try to transmit a recent state of the object at the time of a transmission error. Our solution uses the transfer of the private space to recover the errors when this is necessary for the consistency of the work. This simplicity has a drawback, since the quality of animation is not perfect.


1.3 Distributed Designation

The system provides a distributed designation of the scene objects and nodes. At the root of the tree, a unique name is attributed locally when a user creates a new object. The unique name contains the address of the creation machine plus a local date: e.g., the name (@IP1, date1) defines an object subtree. The subsequent nodes are designated according to their position in the subtree of the corresponding object: e.g., (@IP1, date1, 2) is the second node at the first level of the object (@IP1, date1). Only the user that is the owner of a node (the node belongs to his private space) can create a child node: e.g., the first child name (@IP1, date1, 2.1) is created by the private space that includes the father node (@IP1, date1, 2). That way, unique names are defined in a distributed way by the different private spaces.

Some solutions, like [2], define a distributed designation of cells. In that case, a unique server manages the cell. The cell is not able to move to another server, because the name is associated with a classical URL (Uniform Resource Locator). [4] shows how to localize the nearest copy of a given object. In our system, at a given time a unique copy (a private space) replies to a name. This name can move from one private space to another. A global server allows the moving name to be localized. This name server is used when a subtree transmission is requested. Its breakdown does not prevent the system from running; it only prohibits the subtree transfers. The names are generated locally, so the name server is not involved in the creation of a new object, or of a child node when the father already belongs to the private space.

1.4 Concurrency control

The name server, managing the shared space, permanently maintains the ownership of each subtree. A copy that requests the transfer of a subtree sends an ownership request to the name server. The latter replies with the identity of the current owner and memorizes the identity of the requester. Then, the subtree transmission is achieved between the new owner and the requester. This distributed concurrency control allows different ways of working. First, a participant can work on an ownership basis: he preserves the ownership until he accepts modifications from other users. Second, he can work on a community basis: at any time, he lets another user get the ownership. Other kinds of cooperation are allowed. A sketch of the naming and ownership bookkeeping is given below.

[2] proposed a mechanism to acquire a lock in order to solve conflicts on the motions of objects. In particular, a lock is released automatically on a time-out so that the motions are not blocked. In our case, a distributed concurrency control is processed within the context of a distributed scene tree. It is used to carry out a consistent cooperation. The concurrency control does not relate specifically to motion-update operations. Several ways of working are allowed (ownership, community, etc.).
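A minimal sketch (our own types and helpers, not the system's) of the names of Sect. 1.3 and of the name server's ownership bookkeeping of Sect. 1.4, assuming the requester becomes the owner once the transfer completes:

    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    struct NodeName {
        std::uint32_t hostIp;        // address of the creating machine
        std::uint64_t creationDate;  // local date, unique on that host
        std::vector<int> path;       // e.g. {2, 1}: first child of node 2

        std::string key() const {    // flat form usable as a map key
            std::string k = std::to_string(hostIp) + ","
                          + std::to_string(creationDate);
            for (int p : path) k += "." + std::to_string(p);
            return k;
        }
    };

    // Child names are forged locally by the owning private space, so
    // object and node creation never involve the central name server.
    NodeName child(const NodeName& parent, int position) {
        NodeName c = parent;
        c.path.push_back(position);
        return c;
    }

    // Name server: tracks which private space owns each subtree root and
    // records the requester; the subtree itself then travels directly
    // from owner to requester.
    class NameServer {
        std::map<std::string, int> owner_;  // subtree key -> space id
    public:
        void registerSubtree(const NodeName& n, int space) {
            owner_[n.key()] = space;
        }
        // Assumes the subtree was registered beforehand.
        int requestOwnership(const NodeName& n, int requester) {
            int previous = owner_.at(n.key());
            owner_[n.key()] = requester;  // requester becomes new owner
            return previous;              // reply: current owner identity
        }
    };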


1.5 Persistence

The solutions for persistence [7][2] generally use a centralized server which ensures the persistence of the scene. In our proposal, the server does not ensure the persistence of a private space. A local saving carries out the persistence of a consistent operation. A user can work locally, using his private space, without requiring the presence of the server. The server makes a scene available at a certain stage of the cooperative work. It also realizes a redundant storage. Thus, persistence is carried out in a distributed way and a certain level of fault tolerance is provided.

2 Conclusion and Future Work

This paper presents the principles of a distributed system for supporting real-time cooperative design within a virtual environment. Real-time awareness between the different users is provided. The graphic scene is distributed in order to provide concurrency and cooperation between the users. A dynamic distribution is provided. A distributed concurrency control guarantees the consistency of the distributed scene. A practical solution is proposed to manage the faults of the different machines. Currently, we are working on the implementation and the integration of this system in the framework of a cooperative design application.

References

1. Barrus, J. W., Waters, C., Anderson, D.B.: Locales: supporting large multiuser virtual environments. IEEE Computer Graphics and Applications, November 1996.
2. Broll, W.: Distributed virtual reality for everyone - a framework for networked VR on the Internet. In Proc. of the IEEE 1997 Annual International Symposium on Virtual Reality, Albuquerque, USA, May 1997.
3. Defense Modeling & Simulation Office: DIS++HLA Frequently Asked Questions. http://www.dmso.mil, 22 April 1996.
4. Hagsand, O.: Interactive multiuser VEs in the DIVE system. IEEE Multimedia 3(1), Spring 1996.
5. IEEE Computer Society: IEEE Standard for Distributed Interactive Simulation - application protocols. IEEE Std 1278.1, 1995.
6. Kessler, G.D., Hodges, L.F.: A network communication protocol for distributed virtual environment systems. Virtual Reality Annual Symposium, Santa Clara, CA, 1996.
7. Leigh, J., Johnson, A.E., Vasilakis, C.A., Defanti, T.A.: Multi-perspective collaborative design in persistent networked virtual environments. Proc. IEEE Virtual Reality Annual International Symposium, Santa Clara, CA, March 1996.
8. Macedonia, M. R., et al.: Exploiting reality with multicast groups. IEEE CG&A 15(5), 1995.
9. Stytz, M.R., Adams, T., Garcia, B., Sheasby, S.M., Zurita, B.: Rapid prototyping for distributed virtual environments. IEEE Software, 1997.

Providing Real-Time Object Oriented Industrial Messaging Services

R. Boissier (3), M. Epivent (1), E. Gressier-Soudan (1), F. Horn (2), A. Laurent (1), D. Razafindramary (3)

(1) CEDRIC-CNAM, 292 rue St Martin, 75141 Paris Cedex 03, France. {gressier, andrel, epiven_m}@cnam.fr
(2) CNET-France Telecom, Issy-les-Moulineaux, France. [email protected]
(3) GRPI, IUT St. Denis, Université PARIS-NORD, 93206 St Denis Cedex 1, France. {boissier, razafin}@iutsd.univ-paris13.fr

Abstract: This paper describes a way to provide object-oriented real-time industrial messaging services on top of a real-time ORB.

1. Introduction

This position paper describes an ongoing project. Its goal is to provide object-oriented real-time industrial messaging services on top of a real-time Object Request Broker (ORB). The project has been split into two phases. The first phase investigated object-oriented solutions for industrial messaging services without support for temporal Quality of Service (QoS). The second considers the integration of QoS properties into the previous solution. We are now able to offer an object-oriented messaging service, and we are working on the design of real-time extensions.

Manufacturing Automation Protocol (MAP) is an ISO communication stack targeted at low-level shopfloor applications. It allows cooperation of supervising devices and industrial equipment (robots, machine-tools and other automated devices) handled by programmable logic and numerical controllers. The main interest of MAP is the Manufacturing Message Specification protocol (MMS) [7][8], which defines an abstract representation of a device called the Virtual Manufacturing Device (VMD). The most relevant MMS abstractions are VMDs, Domains, Program Invocations, and Variables. MMS communications are message oriented. The most often used communication scheme is a synchronous request/reply interaction, a confirmed service, which provides a client/server model between devices and the user's application. The server role is mostly associated with the device, and the client role is dedicated to the user's programs. MMS also uses a few one-way interactions, unconfirmed services: they allow devices, which are not solicited, to asynchronously send state changes and status reports to clients.

ISO-MMS does not address temporal QoS (i.e., means to negotiate the required delays, jitters or response times and to configure the corresponding communication and execution infrastructures). In fact, QoS is supposed to be supported by specific ad hoc engineering solutions which are outside the scope of the standard. Object-oriented design is also expected by MMS application developers.


2. A non Real Time "objectified" MMS

MMS considered on its own provides a set of generic business abstractions for industrial messaging environments. These can serve as a good basis for a full-fledged object-oriented specification and design of an industrial messaging service built on top of an ORB. A distributed object-oriented version of MMS has been designed assuming the underlying ORB is CORBA 2.0 [6]. It has been derived from ISO-MMS mainly by adapting the following features.

VMDs are promoted to primary server objects that can be manipulated directly by the applications. The related services are uniformly handled as generic interfaces. Encapsulation is enforced and, as a consequence, VMD internal resources (such as variables, domains, program invocations) can only be handled through specific methods supported by VMD interfaces. These abstractions still exist, but only as VMD parameters (they are implemented by objects but do not have any remote interface).

ASN.1 message specifications have been replaced by IDL interface definitions. MMS request PDUs are transformed into method parameters. MMS positive response PDUs are transformed into method results. Negative response PDUs are transformed into CORBA exceptions.

Naming is based on a distributed system approach. VMDs have been given names and can be accessed in the system through object references, independently from their location. In ISO-MMS, VMDs are named through association service access points, which are fully location dependent.

The ISO message-oriented communication scheme has been replaced by standard CORBA interactions between clients and servers, mapped onto standard CORBA operations. Confirmed services are mapped onto standard CORBA synchronous method invocations. Unconfirmed services could be mapped onto one-way method invocations, but no reception guarantee would have been offered; they have therefore been mapped onto void synchronous method invocations, and a non-blocking behavior is obtained using multithreading.

An important consequence of using CORBA for running the objects is the availability of additional generic services. Object persistence and event management could use CORBA common services. This is a straightforward way of implementing MMS Data Stores and MMS Events. A sketch of what the resulting interface might look like is given below.
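The IDL itself is not reproduced here; the following C++ sketch, with invented operation names, only illustrates the mapping rules above, roughly in the shape an IDL compiler would emit for a VMD interface:

    #include <cstdint>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // Invented names throughout; this mirrors the mapping rules, not
    // the project's actual IDL.
    struct MmsError : std::runtime_error {        // negative response PDU
        using std::runtime_error::runtime_error;  // -> CORBA-style exception
    };

    class Vmd {  // VMD promoted to a primary server object
    public:
        // Confirmed service: request PDU fields become parameters,
        // the positive response PDU becomes the return value.
        virtual std::vector<std::uint8_t> read(const std::string& variable) = 0;
        virtual void write(const std::string& variable,
                           const std::vector<std::uint8_t>& value) = 0;

        // Domains and program invocations are reachable only through
        // the VMD interface (encapsulation enforced).
        virtual void downloadDomain(const std::string& domain,
                                    const std::vector<std::uint8_t>& content) = 0;
        virtual void startProgramInvocation(const std::string& pi) = 0;

        // Unconfirmed service mapped to a void synchronous invocation.
        virtual void informationReport(const std::string& report) = 0;

        virtual ~Vmd() = default;
    };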

3. Providing Real Time extensions to an Object Oriented industrial messaging service

Many process control applications targeted by the proposed messaging services are subject to temporal and reliability constraints. Our approach is brand new and differs from [11], which defines a real-time MMS slightly different from the design choices of ISO-MMS. In our case, we fully respect the ISO-MMS interaction model and extend it taking into account temporal QoS specifications. To validate our proposal, we are interested in the remote control of a Numerical Controller (NC). In this context, we have presently identified a few temporal QoS parameters: simple delays (expressing the time elapsed between the emission of a request and its reception in the target object), compound delays (expressing the time elapsed between the emission of a request and the reception of the response), and object response times. More precisely, remote control is delay sensitive. While working a piece of metal at a velocity of 5 m/min (about 83 mm/s), a delay of 10 ms when reading an axis position causes an error of nearly 1 mm. A safety alarm must be sent to an NC within 20 ms at most. Sending an on-line command needs an acknowledgement less than 60 ms after the send. For modern NC control, application-level throughput (mainly domain download) does not need stringent performance: 200 kb/s with fixed-size messages. This list is not exhaustive.

These constraints can only be met with the help of the underlying communication and execution infrastructures, for instance through the use of specific network protocols or specific scheduling mechanisms. These can be rigidly hardwired in specific, predefined systems, but should be obtainable in a flexible and dynamic way in open platforms, which should therefore provide powerful configuration tools.

Our approach to accounting for hard real-time constraints in industrial messaging services is based on the ReTINA project [1]. The goal of this project is to specify and implement a flexible ORB architecture that complies with ODP (Open Distributed Processing) standards [1] and whose machinery is exposed to systems or application programmers. The prime characteristic of this architecture is the ability to plug in arbitrary forms of binding policies between objects, beyond the implicit model for client-server interactions assumed by CORBA. The term binding should be understood as covering both the process of interconnecting different objects according to a specific communication semantics and the implementation mechanisms involved (protocol stack, etc.). In ReTINA, the end result of a binding process is a binding object. Examples of binding objects include client-server binding objects with a specific invocation semantics (persistent servers, replicated servers) and QoS-constrained binding objects whose life cycle and resource multiplexing policy is controlled by the application.

In this architecture, CORBA just appears as a particular personality (i.e., a set of APIs and language mappings). In this way, the ReTINA architecture makes it possible to build non-CORBA personalities, such as RMI-like platforms or the specific real-time distributed object platforms (complying with some or all CORBA standards) required in the context of process control and manufacturing applications. It can in particular be a basis for the proposed industrial messaging services by providing an extended CORBA platform that supports the ISO-MMS adaptations, i.e., binding objects corresponding to end-to-end connections that satisfy hard real-time requirements.

The basic programming model supported by ReTINA conforms to the ODP computational model [1]. Objects may interact with their environment (i.e., other objects) at typed interfaces. Interactions between objects are mediated by bindings: a binding is a (distributed) object encapsulating communication resources and supporting a given communication semantics between bound interfaces. A binding object is an end-to-end object that encapsulates all the QoS properties of the communication infrastructure (i.e., the networking and the local execution infrastructures). Different binding objects correspond to different sets of QoS properties. When temporal guarantees are involved, a binding object generally encapsulates a specific networking infrastructure (an ATM or FDDI network, for example), a specific transport protocol, specific stubs and skeletons (using specific encoding/decoding algorithms and buffer management policies), and a specific object adapter (implementing specific thread dispatching/scheduling policies). The concept of binding object is completely defined in [1].
As any other object, a binding object is created by invoking a specific factory object (a binding factory). In the ReTINA programming model, the binding factories available on a given platform are known by application programmers and can thus be explicitly chosen according to the application needs (there is an implicit default binding factory). As a consequence, the binding protocol and API can be tailored at the application level to fit specific needs. The sketch below illustrates this programming model.
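The paper does not show ReTINA's API, so the sketch below uses hypothetical types throughout; it only illustrates the programming model just described, where the application explicitly selects a binding factory matching its QoS needs:

    #include <chrono>
    #include <memory>

    // Hypothetical shapes only; none of these names come from ReTINA.
    struct QoS {
        std::chrono::milliseconds maxDelay;     // emission -> reception
        std::chrono::milliseconds maxRoundTrip; // request -> response
    };

    struct Binding {                  // end-to-end object encapsulating
        virtual ~Binding() = default; // protocol stack, stubs, adapter
    };

    struct BindingFactory {
        virtual std::unique_ptr<Binding> bind(const char* serverRef,
                                              const QoS& qos) = 0;
        virtual ~BindingFactory() = default;
    };

    // Application code: choose an ATM-based real-time factory explicitly
    // for the safety alarm instead of the implicit default binding.
    std::unique_ptr<Binding> bindAlarmChannel(BindingFactory& rtAtmFactory) {
        return rtAtmFactory.bind("NC-controller",
                                 QoS{std::chrono::milliseconds(20),
                                     std::chrono::milliseconds(60)});
    }

The 20 ms and 60 ms figures echo the NC constraints identified at the beginning of this section.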

4. Current prototyping and Conclusion

A first prototype [6], written in C++ and implementing the most important features of our objectified non-real-time MMS service, runs over Chorus Systems' ORB, COOL [2], under Unix and under the Chorus microkernel. This industrial messaging service has recently been rewritten in Java over ORBacus [9] and over a flexible lightweight ORB called Jonathan [4]. Our approach will be validated by an application remotely controlling a Numerical Controller in a true manufacturing environment [5]. The next step will tackle the use of a real-time object request broker based on QoS management. Jonathan is going to evolve to take QoS parameters into account. The full prototype will run over Jonathan on two platforms: Linux and Chorus. For the Chorus platform, we will use the Java Virtual Machine personality currently called JAZZ [3]. ATM is the chosen network. With Chorus, we will use the QoS extensions of the interprocess communication service defined in [10]. Our approach will thus be able to provide a distributed object-oriented real-time platform.

References

1. G. Blair, J-B. Stefani. "Open Distributed Processing and Multimedia". Addison-Wesley. 1997.
2. Chorus Systems. "Chorus/COOL-ORB Programmer's Guide". CS/TR-96-2.1. 1996.
3. Chorus Systems. "CHORUS/JAZZ release, Technical Overview". CS/TR-97-142.1. May 1997.
4. B. Dumant, F. Horn, F. Tran, J. Stefani. "Jonathan: an Open Distributed Processing Environment in Java". Middleware'98: IFIP International Conference on Distributed Systems Platforms and Open Distributed Processing. Lake District, England. September 15-18, 1998.
5. E. Gressier-Soudan, M. Epivent, A. Laurent, R. Boissier, D. Razafindramary, M. Raddadi. "Component Oriented Control Architecture, the COCA project". Workshop on European and Scientific Industrial Collaboration on promoting Advanced Technologies in Manufacturing. Gerone, Spain. June 1998.
6. G. Guyonnet, E. Gressier-Soudan, F. Weis. "COOL-MMS: a CORBA approach to ISO-MMS". ECOOP'97 Workshop: CORBA: Implementation, Use and Evaluation. Jyvaskyla, Finland. June 1997.
7. ISO 9506-1. "Industrial Automation Systems - Manufacturing Message Specification - Part 1: Service Definition". 1990.
8. ISO 9506-2. "Industrial Automation Systems - Manufacturing Message Specification - Part 2: Protocol Specification". 1991.
9. M. Laukien, U. Seimet, M. Newhook, M. Spruiell. "ORBacus for C++ and Java". Object-Oriented Concepts Inc. 1998.
10. C. Lizzi, E. Gressier-Soudan. "A Real-Time IPC Service over ATM networks for the Chorus Distributed System". Euromicro'98. Vasteras, Sweden. August 1998.
11. M.G. Rodd, G.F. Zhao. "RTMMS - An OSI-based Real-Time Messaging System". Journal of Real-Time Systems, Vol. 2, 1990.

A Train Control Modeling with the Real-Time Object Paradigm

Sébastien Gérard (*)(+), Agnès Lanusse (+) and François Terrier (+)

(*) PSA - Peugeot Citroën / Direction des Technologies de l'Information et de l'Informatique
(+) LETI (CEA - Technologies Avancées), DEIN - CEA/Saclay, F-91191 Gif-sur-Yvette Cedex, France. Phone: +33 1 69 08 62 59; Fax: +33 1 69 08 83 95

E-mail: [email protected], [email protected], [email protected]

Abstract. The train case study is tackled with the ACCORD method developed at CEA-LETI. This approach aims to provide a framework for real-time development as close as possible to classic object-oriented methods. Thanks to high-level abstractions (namely the real-time active object concept), real-time modeling can be achieved without mixing up implementation issues with domain-specific ones. This approach maximizes reusability, and designers may fully benefit from object-oriented experience acquired in other domains. In the first part of this paper, we briefly describe the ACCORD CASE tool, especially the method steps and the underlying models; we also describe the framework itself and the automatic implementation aspects. Second, we go back through each stage of the method, illustrating them with the train control example [2]. We focus on the real-time aspects and try to answer the two main questions: how, and where, may real-time constraints be specified?

Keywords: Real-time, UML, Concurrent programming, Active Object

1 Introduction

We show on the train control example [2] that real-time developments can be fully object-oriented and handled quite easily with classical object-oriented approaches, in exactly the same way, with the same concepts and notations, and for most development steps, as any usual software. This can be achieved by providing high-level abstractions for communication and concurrency management. Thanks to the real-time object paradigm, these matters can be handled transparently by the underlying object system; a real-time application can then be simply described in terms of communicating objects with (possibly) time constraints attached to requests. The development process can then stay close to classic object-oriented ones, and so most classic tools can be used.


2 The modeling method

ACCORD provides an object-oriented toolkit for real-time design and development as close as possible to classic object-oriented environments [7, 8]. The idea is to make a real-time object-oriented application look as far as possible like a classical object-oriented application, thanks to high-level abstractions (namely the real-time object concept) that handle in a transparent way parallelism, real-time constraint control, scheduling and task management. The main motivation behind it is to provide a way for real-time developers to use almost classic object-oriented design and development techniques instead of proposing yet another specialized method. UML notations and diagrams are used all along these steps to express the application model [2]. In a specific real-time design stage, ACCORD extensions to UML are provided and design rules are added. Some UML diagrams are specialized in order to provide a better visibility of the real-time characteristics of the application [4].

- The structural model describes the classes involved in the application and their dependencies. It is described with UML Class diagrams.
- The interaction model describes possible interactions between objects. This model is described with UML Use Cases and Sequence diagrams.
- The behavior model describes for each class or operation its possible behaviors, characterized by states and possible transitions in each state. This model is described with UML Statechart diagrams.

Four stages are distinguished during the development of an application:

1) The analysis stage is fully standard. A Use Case analysis is conducted in order to identify interactions between the system and its environment, represented by actors. Sequence diagrams are built to help the developer identify the main classes of the application and their relationships, captured in Class diagrams. One specificity of ACCORD is its ability to specify behavioral information very early in the process: temporal information can be captured in the interaction model by Sequence diagrams, while object behavior can be specified at the class and operation levels in the Behavior model by Statechart diagrams.

2) The object design stage, here again, follows a pure object-oriented design style. The idea is to define through iterative steps the full logical design of the application. At this stage, communications between classes are not necessarily specialized into signals or operations, and communication modes (synchronous, asynchronous, ...) are not yet decided. This model is actually a common platform for various possible real-time design models. The idea behind it is to postpone as far as possible design decisions that might reduce reusability.


3) The real-time design stage is devoted to the real-time specialisation of the object design. During this stage, the specialisation of communications is done (signals/operations, synchronous/asynchronous), possible concurrent objects are identified, time constraints are attached to requests, and real-time behaviours are detailed (triggering conditions on statecharts, periodic operations, ...).

4) The implementation stage is greatly facilitated by the use of the real-time active object paradigm [7] and the ACCORD libraries that support it, defined as an object-oriented Virtual Machine. Most implementation issues can be automated thanks to the high-level code generation facilities offered by the Objecteering tool and its Hypergenericity component [1], and to the ACCORD execution environment, which provides specific components for task creation and management, communication mechanisms, synchronisation, resource sharing protection, scheduling and so on [6], [3].

3 Real-Time concepts used within the design stage

UML notations and semantics [5] define a particularly rich, but precise, terminology of concepts very important for describing real-time application behavior. In particular we find: requests (defined by the source object, the target object and the specification of a processing); messages (the instances of requests); and events (used only to specify trigger conditions on statechart transitions, and corresponding to three situations: the receipt of a message; a time date becoming the current time; a logical condition becoming true). Moreover, UML distinguishes signals and operation calls, but only by the fact that signals cannot have output parameters. To clarify this, ACCORD adopts the following conventions:

- Operation calls are point-to-point communications where the emitter knows the target object, which must have in its class specification an operation matching the called operation exactly.
- Signals are always broadcast to all the application objects; only objects specified as sensible to a particular type of signal can catch the signal and react to it (trigger some processing); the emitter of a signal does not need to know the objects sensible to it; communication through signals follows a virtual asynchronous communication scheme; objects communicating through signals have no structural dependencies.

At the implementation stage, signals often map to interrupts or operating system signals. However, this must not drive the choice between signal and operation. The only reason to choose a signal instead of an operation call is to make an object totally independent of another. Asynchronous communication between objects is not, by itself, a reason to use signals instead of operation calls. A sketch of the two conventions is given below.
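The following sketch renders the two conventions in C++ (our reading; the types are illustrative and not ACCORD's API): an operation call requires a typed reference to the target, whereas a signal is broadcast on a bus to whichever objects declared themselves sensible to it:

    #include <functional>
    #include <map>
    #include <string>
    #include <vector>

    // Operation call: point-to-point, the emitter holds a typed
    // reference and the callee declares the operation in its class.
    struct Display {
        void initDisplay() { /* ... */ }
    };
    void emitter(Display& board) { board.initDisplay(); }

    // Signal: broadcast; the emitter names only the signal type, and
    // every object registered as sensible to it reacts. There is no
    // structural dependency between emitter and receivers.
    class SignalBus {
        std::map<std::string, std::vector<std::function<void()>>> sensible_;
    public:
        void sensibleTo(const std::string& sig, std::function<void()> reaction) {
            sensible_[sig].push_back(std::move(reaction));
        }
        void broadcast(const std::string& sig) {  // virtual asynchronous
            for (auto& r : sensible_[sig]) r();   // communication scheme
        }
    };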


4 How to design real-time systems?

Real-time characteristics are obtained from the design object model mainly through the specialization of communications and classes. ACCORD thus considers in sequence the communications, the identification of the possible sources of parallelism that will determine real-time objects, the refinement of their behaviors, the refinement of temporal issues (deadlines on requests, periodicity of operations, ready times, watchdogs, ...), and the refinement of operation descriptions through State Machines.

The goal of communication specialization is twofold: identification of signals and operations, represented up to now as messages in the previous steps; and identification of communication modes (asynchronous/synchronous). As a consequence of the specialization of communications in the Sequence diagrams, several updates will occur both in Class diagrams and in the Behavior diagrams attached to classes.

The goal of the concurrency specification is to identify the possible sources of parallelism in the application and to provide object-oriented means to support and handle this parallelism. For that purpose we rely on the active object paradigm, which is specialized in ACCORD to handle real-time constraints. We proceed as follows.

- Real-time object identification: Once the communication analysis is done, a certain number of asynchronous requests have been specified. This demonstrates an implicit potential parallelism in the application. The most natural way of handling such parallelism between classes, and later on objects, is to introduce the concept of a concurrent object, that is, an object able to handle its own computational resources. Such an object can thus process requests concurrently with other objects. These objects can be considered as servers offering various services provided by operations. The concurrency analysis results in the identification of active objects, also called in ACCORD real-time objects, stereotyped with « RealTimeObjects ». In our example [2], we want several operations to be executable in parallel, namely display controls, user controls from the controlBoard, and controls from the trainControlSystem. We also want all operations within displayBoard to be executable in parallel. Moreover, we would like to implement a distributed control over the system. This means that we will handle several instances of the trainControlSystem class, one for each rail locomotive. This design choice of a distributed control was dictated by the desire to increase multi-tasking in the design.
- Resource sharing identification: Some objects that are typically data objects may be associated through several links with other objects (active or passive). In order to facilitate the sharing of such objects by several active objects, a class stereotype has been introduced: « ProtectedPassiveObjects ». Classes so stereotyped automatically ensure data access protection thanks to a concurrency control mechanism.

Once each class has been properly defined, an iteration is performed on the Behavior models in order to complete them.



Class statechart diagrams used are restrictions of UML statecharts. The action part of a transition label is systematically restricted to an operation name (no decomposition is authorized at this level). Operations themselves will in turn be described through Operation statechart diagrams, where transitions represent individual actions whose type is one of those specified by UML (SendAction, CallAction, InvokeAction, ...). A Class statechart diagram describes the possible states and transitions associated with a class. In a given state, a class will possibly execute an operation depending on occurring events or satisfied conditions. The RTC (Run To Completion) semantics of UML statecharts is observed: the action performed on a transition is the activation of the method associated with the operation specified in the statechart, and its execution is run to completion. During the real-time design stage, triggering conditions are systematically specified. In particular, each incoming signal is associated with at least one operation name that determines the action to be performed on arrival of this signal when an object, instance of this class, is in that particular state (Figure 1).

Figure 1: Class statechart diagram of displayBoard class, triggering view. [Only the caption and transition labels survive from the original figure: between the states ON and OFF, with create and delete transitions, the diagram shows onOff / initDisplay{dl=50}; onOff / resetDisplay; eltNewState / displayRailState{dl=300}; eltNewState / displayRailPointState{dl=300}; trainDetected / displayTrainPosition{dl=300}; trainBreakdown / displayTrainBreakdown{dl=300}.]

To handle exception situations precisely, that is, when signals or operation calls are not expected in particular states of an object, one may want to specify explicitly what to do with such unexpected requests (events or operation calls). We have introduced three possible actions: ignore, reject and defer. A deferred request waits until the object is in the right state; ignore explicitly declares that the message is discarded if received while the object is in the specified state; and reject discards the message and produces an exception (an error signal).
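These three policies lend themselves to a very small runtime mechanism. The sketch below is our own Java-flavoured illustration, not ACCORD code; all names are invented.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The three policies named in the text for requests arriving in a state
// where they are not expected.
enum UnexpectedPolicy { IGNORE, REJECT, DEFER }

class StatefulObject {
    private String state = "OFF";
    private final Deque<String> deferred = new ArrayDeque<>();

    void receive(String request, UnexpectedPolicy policy) {
        if (acceptedIn(state, request)) {
            process(request);
            return;
        }
        switch (policy) {
            case IGNORE:
                break;                                      // silently discard
            case REJECT:
                throw new IllegalStateException(request);   // error signal
            case DEFER:
                deferred.add(request);                      // retry later
                break;
        }
    }

    void enter(String newState) {
        state = newState;
        // Deferred requests are retried once the object changes state; those
        // still not accepted are simply re-deferred.
        for (int i = deferred.size(); i > 0; i--) {
            receive(deferred.poll(), UnexpectedPolicy.DEFER);
        }
    }

    // Illustrative acceptance rule: everything is accepted in state ON.
    private boolean acceptedIn(String s, String r) { return s.equals("ON"); }
    private void process(String r) { System.out.println("processing " + r); }
}
```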



Throughout the development process, time constraints have been expressed in the Sequence diagrams, but it is during the real-time design stage that they are systematically completed and checked. Real-time constraints may take different forms. They are generally attached to requests (operation calls or signals) and may specify a deadline or a ready time for the invoked operation to complete its execution (Figure 1). These constraints are taken into account at runtime by the scheduler component of the ACCORD execution environment. Other temporal constraints concern the definition of periodic operations. In ACCORD they are specified in the Class statechart diagrams as cyclic transitions. A cyclic transition is a transition owning a condition constraint that always evaluates to true (Figure 2). This specification is taken into account during code generation in order to instantiate specific mechanisms devoted to the periodic processing of operations.

Figure 2: Class statechart diagram of controlBoard class, triggering view. [Only the caption and transition labels survive from the original figure: states OFF and ON, an initialize transition, a reset transition, and the cyclic transition true / scrutinize{dl=200, T=200}.]
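One plausible runtime counterpart of such a cyclic transition, sketched in Java with invented names (the mechanisms ACCORD actually generates are not shown in the paper), is a timer that re-arms the operation every period T; checking the deadline dl is left to the scheduler and omitted here.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// The guard of a cyclic transition is always true, so the operation is
// simply re-armed with period T once started.
class PeriodicOperation {
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    void startPeriodic(Runnable operation, long periodMillis) {
        timer.scheduleAtFixedRate(operation, 0, periodMillis, TimeUnit.MILLISECONDS);
    }
}
// Hypothetical usage, echoing Figure 2 (board is an invented object):
//   new PeriodicOperation().startPeriodic(board::scrutinize, 200);
```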

5 Conclusion

Through the ACCORD model, we succeeded in expressing the full set of real-time specifications of the case study: deadlines on event reactions, periodic processing, etc. Moreover, the model makes it easy to introduce parallelism within the model itself and to specify consistency constraints through high-level abstraction concepts. The platform integrates automatic code generation mechanisms and compiling and linking facilities that make realization easier. Concerning animation or simulation of the application model, the tool does not offer any facilities yet; it is only possible to obtain an execution trace. Given the growing importance of validation problems, it is on this point that efforts are now focused. A PhD student, Sébastien Gérard, is working in collaboration with PSA to supply a method and validation techniques for such real-time system developments.

References
[1] P. Desfray, Modélisation par objets : La fin de la programmation, Masson, MIPS, France, 1996.
[2] S. Gérard et al., Modélisation à Objets Temps-Réel d'un Système de Contrôle de Train avec la Méthode ACCORD, Proceedings of Real-Time Systems'98, Paris, France, January 1998.
[3] S. Gérard et al., Developing applications with the Real-Time Object paradigm: a way to implement real-time systems on Windows NT, Real-Time Magazine, special issue on Windows NT, 3Q 1998.
[4] A. Lanusse et al., Real-Time Modeling with UML: The ACCORD Approach, UML'98, Mulhouse, June 1998.
[5] UML Proposal to the Object Management Group, Version 1.1, September 1997, http://www.omg.org/library/schedule/AD_RFP1.htm.
[6] L. Rioux et al., Scheduling Mechanisms for Efficient Implementation of Real-Time Objects, in ECOOP'97 Workshop Reader, S. Mitchell and J. Bosch (Eds.), Springer-Verlag, December 1997.
[7] F. Terrier et al., A real time object model, TOOLS Europe'96, Paris, February 1996.
[8] F. Terrier et al., Des objets concurrents pour le multitâche, Revue L'Objet, Editions Hermès, Vol. 3, no. 2, June 1997.

Demonstrations Jan Dockx, Eric Steegmans Katholieke Universiteit Leuven, Department of Computer Science Celestijnenlaan 200A, 3001 Heverlee, België [email protected]

Abstract. Nine demonstrators discuss the demonstrations they gave at ECOOP '98, including links to the products. The demonstrations chair adds some reflections.

Reflections on a demonstration chair Jan Dockx Katholieke Universiteit Leuven, Department of Computer Science Celestijnenlaan 200A, 3001 Heverlee, België [email protected]

This is the first time ever that the "Workshop Reader" of the European Conference on Object Oriented Programming includes a chapter on the demonstrations. I believe this is an interesting evolution, because it gives people the chance (and the references) to get acquainted with the products shown in the luxury of their own working place. The program of the demonstrations can be found at . There were 9 demonstrations, each of which was presented twice. Demonstrations are of a strange breed. They often represent the first materialization of an idea people have been working on for quite a while. The demonstrators are of course proud of their work. They have shown that the idea they vigorously defended for so long can really function, and maybe this is the first step to a real finished product that will be adopted by their peers. However, it seems that the public of a scientific conference is not that interested in these results. Maybe this is because working software is too down-to-earth for eager scientists. The ideas are interesting. Pondering different approaches with fellow scientists we meet all too seldom is far more fascinating. Once the initial idea is molded into a plan that might achieve it, it seems that the work of the scientist is done. The realization of ideas does not appeal to scientists. Maybe that is why some sessions had so small an audience. The first session even had to be cancelled because there was no audience. But that probably should be attributed to the interesting invited talk that started the conference. On the other hand, I witnessed the most captive audience in a one-on-one session.



The scientist depicted above is wrong. Especially in the world of software development, the domain of human endeavor that is most plagued by realization failures, crafting a useable result is a most important outcome. If not for the product itself, then for the parameters that created it. In no other field of engineering is it so difficult to "get the damn thing working (and keep it running)". If you did not attend the demonstration sessions, you missed something. You missed an outlook on the near future and insight into the foremost engineering goal: creating. The demonstrators themselves surely are the kind of people that realize the importance of practical results. That is probably also the reason why most of the sessions demonstrated some kind of CASE tool. A tool to help in the requirements phase, a tool to semi-automate code design, a tool to more easily transform business semantics to code, a tool to analyze the behavior of running software... We are inbred. All in all, the demonstrations were a success. It would be interesting, however, to have more demonstrations of successful applications of current (not future), state-of-the-art (not experimental) object-oriented technology in the large. This is something the audience of a conference like ECOOP lacks. From personal experience, I can testify that most of us suffer from the small-example problem. You, the reader who works in industry, in banking or agriculture, who forges real-life applications using object-oriented technology, who has your own sets of do's and don'ts: consider this an invitation (see ). I want to acknowledge some people in this space. Most important, I extend large amounts of gratitude to all the demonstrators, and especially to the people of the IBM Thomas J. Watson Research Center and John Brant and Don Roberts of the University of Illinois. IBM had 3 entries, and all 3 were of the highest quality, while John Brant almost made me go back to Smalltalk. The second presentation of these entries filled the auditorium. Word of mouth probably did its work. Thank you also to the people that helped with the technical setup. Their apt response prevented some disasters.

Visualizing Object–Oriented Programs with Jinsight Wim De Pauw, John Vlissides Demonstrated by Wim De Pauw IBM Thomas J. Watson Research Center P. O. Box 704, Yorktown Heights, NY 10598, USA [email protected], [email protected]

Abstract. Jinsight is a tool that lets you visualize and explore many aspects of your Java program's run–time behavior. It is helpful for performance analysis, debugging, and any task in which you need to better understand what your code is really doing. Jinsight was developed at IBM's Research Division, and is available free as a technology preview.

Background
Jinsight is a visualization tool designed specifically for object-oriented and multithreaded programs. It reveals many characteristics of running code beyond those shown by most performance analysis tools, including CPU bottlenecks, object creation, garbage collection, thread interaction, deadlocks, and repeated patterns of messages. It can also help you find the causes of memory leaks. Jinsight's unique visual and analytic capabilities for managing large amounts of execution information allow you to see and understand the behavior of real-world programs. Jinsight has two parts: a special instrumented version of the Java virtual machine (VM), and a visualizer. To use Jinsight, first run your program under the instrumented VM. As your program runs, the VM produces a Jinsight trace file containing information about the execution sequence and objects of your program. When you have finished tracing, you load the trace file into the Jinsight visualizer. Then you can select one or more views, depending on the type of information you want to gather.
What was demonstrated
Each Jinsight view is designed to bring out information useful for specific tasks:
• The Histogram View (Figure 1, left) lets you see performance bottlenecks from the viewpoint of individual classes, instances, and methods. It also shows object references, instantiation, and garbage collection.
• The Execution View (Figure 1, right) lets you see the program execution sequence, either as an overview or at any level of detail. It helps you understand concurrent behavior, letting you see thread interactions, deadlocks, and the timing of garbage collection.



Fig. 1. Histogram (left) and Execution Views

Fig. 2. Reference Pattern (left) and Execution Pattern Views

• The Reference Pattern View (Figure 2, left) summarizes the interconnections among the objects in your program. It also has features to help you find the causes of memory leaks.
• The Invocation Browser (not shown) and the Execution Pattern View (Figure 2, right) let you explore repetitive execution sequences, helping you analyze performance in long execution traces.
Where to find the product
Jinsight is available free from IBM. For more information about Jinsight, check the IBM alphaWorks site at or contact the development team at [email protected].
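Although Jinsight's actual trace format is not described in this entry, all of the views above derive from a stream of execution events. The record below is a purely conceptual sketch of the kind of information such a trace must carry; the field names are ours, not Jinsight's.

```java
// Conceptual only: one event in an execution trace. Call/return events with
// thread identity feed the Execution and Execution Pattern views; allocation
// and gc events with object identity feed the Histogram and Reference
// Pattern views.
class TraceEvent {
    long   timestamp;    // when the event occurred
    int    threadId;     // which thread produced it
    String kind;         // e.g. "call", "return", "alloc", "gc"
    String className;    // class involved
    String methodName;   // method, when kind is call/return
    int    objectId;     // instance identity
}
```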

SoftDB — A Simple Software Database Markus Knasmüller BMD Steyr Sierninger Straße 190. A–4400 Steyr, Österreich1 [email protected]

Abstract. Using the persistent development environment Oberon–D, we implemented the software database SoftDB, which models the properties of a program, i.e., its modules, procedures, types and variables as well as the relationships between them. The module information can be displayed using a simple command. Furthermore, it is possible to access the software database via OQL.

Background
While object-orientation has become a standard technique in modern software engineering, most object-oriented systems lack persistence of objects. This is rather surprising because many objects (e.g., objects in a graphical editor) have a persistent character. Nevertheless, most systems require the programmer to implement load and store operations for the objects. In the Oberon-D project [Kna97] we demonstrate the seamless integration of database functionality into an object-oriented development environment, in which the survival of objects comes for free. Persistence is obtained by a persistent heap on the disk: persistent objects live on this heap, while transient objects live in transient memory. The idea of this work is to offer the impression of an indefinitely large dynamic store on which all objects live. The programmer does not have to distinguish between "internal" and "external" objects. All objects can be referenced and sent messages as if they were in main memory. The underlying language does not have to be extended. Other database features, such as schema evolution and recovery, are embedded in this persistent environment. Furthermore, an Oberon binding for ODL/OQL [Cat96] is implemented as part of this work.
What was demonstrated
Using this new tool, we implemented the software database SoftDB, which models the properties of a program, i.e., its modules, procedures, types and variables, as well as the relationships between them.

1 Markus Knasmüller is on leave from Johannes Kepler University Linz, Institute for Practical Computer Science (Systemsoftware).




Calling a simple command adds the information about a module to the database. The module information can be displayed using another simple command, which opens a dialog showing the information of the module.

On the left side, some general information about the module is displayed. Two list boxes show the list of imported modules as well as the list of defined procedures, and an edit window for queries is offered. Using the mouse it is possible to select one of the modules or procedures, and by pressing a button the information about the selected module or procedure can be displayed. The dialog for a procedure shows the list of used procedures and of defined and used variables. It is possible to select a procedure or a variable and to display the information of the selected item. In addition, the module information of the module in which the selected procedure is defined can be accessed. Information about variables can be displayed as well. Furthermore, it is possible to access the software database via OQL. This can be done using either embedded OQL or the query window. As a starting point, the persistent root (i.e., the collection of modules) can be chosen.
Where to find the product
More information about the projects Oberon-D and SoftDB as well as the full source code can be found at .
References
[Cat96] R.G.G. Cattell (ed.), The Object Database Standard: ODMG-93, Addison-Wesley, 1996
[Kna97] M. Knasmüller, Adding Persistence to the Oberon System, Proc. of the Joint Modular Languages Conference, Hagenberg, Lecture Notes in Computer Science, Springer 1997
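Returning to the OQL access mentioned above, the following fragment gives the flavour of such a query, starting from the persistent root. It is illustrative only: the schema names (Modules, imports, name) are assumptions, not SoftDB's actual identifiers, and SoftDB's real embedded-OQL binding is Oberon-based rather than Java-based.

```java
// Hypothetical host-language helper that builds an OQL query asking which
// modules import a given module. The OQL itself follows the ODMG style.
class SoftDbQueryExample {
    static String whoImports(String module) {
        return "select m.name "
             + "from m in Modules "   // Modules: the persistent root
             + "where exists i in m.imports : i.name = \"" + module + "\"";
    }
}
```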

OO–in–the–Large: Software Development with Subject–Oriented Programming Harold Ossher and Peri Tarr IBM T.J. Watson Research Center P.O. Box 704, Yorktown Heights, NY 10598 USA [email protected], [email protected]

Abstract. Subject–oriented programming (SOP) is an extension of OO programming that permits non–invasive extension, customization and integration of OO components. Support for SOP in C++ and Java was demonstrated.

Background
SOP is a practical approach to OO programming-in-the-large. It addresses well-known limitations of OO technology without forcing developers to adopt new languages. The limitations arise when using OO technology to develop large systems, suites of integrated applications, and evolving systems; they include weaknesses in:
• Non-invasive system extension and evolution: creating extensions to, and configurations of, software without modifying the original source, and keeping the deltas for multiple platforms, versions, and features separate.
• Large-scale reuse and integration: with its focus on small-scale objects, OO development is insufficient to achieve large-scale reuse or integration of design patterns and off-the-shelf system components without significant prior planning.
• System decomposition: by-object/class system decomposition is useful for modeling data-centric aspects of a system. Other decompositions (e.g., by feature, function, and requirement) are better for modeling other aspects, however. Without them, maintainability, traceability, comprehensibility, and reusability suffer.
• Multi-team and decentralized development: OO development leads to contention over shared, centralized classes. It forces all developers to agree on a single domain model, rather than using models more appropriate to their tasks.
Standard OO techniques (subclasses, design patterns, etc.) help but are inadequate. They require major preplanning, which is prohibited by development and runtime cost, so many flexibility needs cannot be satisfied non-invasively. SOP allows OO systems to be built by composing subjects. Subjects are collections of classes defining coherent units of functionality. Classes in subjects may be incomplete; they only include the details needed to accomplish the task. Subject composition integrates classes in separate subjects, reconciling differences in their respective views and combining their functionality. Composition provides novel opportunities for developing, customizing, adapting, extending and modularizing OO programs.
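To illustrate the subject idea, here is a hypothetical Java sketch of our own (not the demonstrated code): two subjects each declare a partial view of the same conceptual class, and a composition rule, expressed outside the language, merges them.

```java
// Subject "billing": only the details billing needs.
class Customer_Billing {
    String name;
    int    balance;
    void charge(int amount) { balance += amount; }
}

// Subject "mailing": a different, equally partial view of the same concept.
class Customer_Mailing {
    String name;
    String address;
    void relocate(String newAddress) { address = newAddress; }
}

// A composition rule (written in the tools' rule notation, not in Java)
// would merge the two by matching class and member names, e.g.
//   compose Customer := merge(Billing.Customer, Mailing.Customer);
// yielding one Customer with name, balance, address, charge and relocate.
```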



Subject-oriented programming-in-the-large involves deciding how to subdivide systems into subjects and writing the composition rules needed to integrate them. It complements OO programming. Code is written in any OO language, with no changes to the language. SOP allows decomposition of code into modules using any criteria, and integration of subjects to form complete systems. It also allows components to be extended, adapted and integrated.
What was demonstrated
We demonstrated tool support for SOP in C++ (an extension of IBM VisualAge® for C++ 4.0 [VAC++]) and Java (ongoing work). Key features we highlighted include:
Subject definition: subjects are written in standard C++ and Java. Two additional, separate declarations are required: a subject specification, indicating which classes and interfaces belong to which subjects, and the composition rules used to guide the composition.
Composition: in the C++ system, composition of source code occurs. VAC++ is an open, extensible compiler, so composition occurs during compilation. The Java system composes class files; thus, it is compiler-independent. Further, reusable component vendors need not make proprietary source code available to clients.
Programming environment support: the C++ and Java systems explore different aspects of what we believe is necessary support for subject-oriented programming-in-the-large. The C++ system, as an extension to VAC++, includes many important programming environment features, such as tools for creating, debugging, and visualizing classes and subjects. The Java system incorporates support for WYSIWYG composition: a user interface for interactively tailoring compositions. This tool allows trial-and-error development of composition rules. The user starts by choosing an overall default composition rule. The tool presents the inputs and the resulting composed subject. Users can interact with the composed subject, tailoring the composition using a variety of commands provided through menus and buttons. All changes are recorded as composition rules. The rules can be viewed and turned on or off individually. Traditional undo/redo is also supported. If input subjects change, the rules can be reapplied to yield a result that, in many cases, will be either correct or close to what is desired. Any rules that are no longer valid will turn themselves off. The user can interact further, improving the result in the light of the new inputs.
Where to find more information
The subject-oriented programming web site is .
References
W. Harrison and H. Ossher, Subject-oriented programming (a critique of pure objects), in Proceedings OOPSLA '93, pages 411-428, Washington, D.C., September 1993.
H. Ossher, M. Kaplan, A. Katz, W. Harrison, and V. Kruskal, Specifying subject-oriented composition, TAPOS, 2(3):179-202, 1996. Special issue: Subjectivity in Object-Oriented Systems.

Dynamic Application Partitioning in VisualAge Generator Version 3.0 Doug Kimelman, V.T. Rajan, Tova Roth, Mark Wegman, Beth Lindsey, Hayden Lindsey, Sandy Thomas Demonstrated by Doug Kimelman IBM Thomas J. Watson Research Center P.O. Box 704, Yorktown Heights, NY 10598, USA {dnk,vtrajan,tova,wegman}@watson.ibm.com IBM Software Solutions Division Raleigh, NC, USA {blindsey,hlindsey,tsandy}@us.ibm.com

Abstract. This demonstration highlights the technical issues underlying Dynamic Application Partitioning (DAP) in VisualAge Generator Version 3.0. DAP addresses a fundamental problem in client-server and n-tier systems: partitioning distributed object applications, i.e., determining the machine (from high-end servers to tier-0 devices) on which each object should be placed and executed for the best overall performance of the application. The DAP tool is based on communication dynamics (a history of all relevant object interactions in representative runs of the application) modeled as a graph. It employs multi-way graph cutting algorithms to automatically determine near-optimal object placement. It also incorporates visual feedback (graphic animation of object clustering) to guide programmers in manually refining the partitioning, as well as in refining the design of the application to achieve even greater performance improvements. This is the only commercial system of which we are aware that supports automated partitioning of application logic components, as well as GUI and data access components, in distributed object applications. Further, it is the only system of which we are aware, in either the product or research community, to employ object dynamics for automated partitioning, and to include graphic animations as a guide to design refinement.
Background
Please see [1] for further discussion of the importance of automatic partitioning to the performance of distributed object applications.
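As a rough illustration of the communication-dynamics idea, the toy sketch below records message counts as weighted edges and derives a placement from them. It is our own simplification: the actual DAP tool applies published multi-way graph-cutting algorithms, not this greedy pass.

```java
import java.util.HashMap;
import java.util.Map;

// Toy communication graph: edge weights count messages between a component
// and a candidate partition (machine). Placement then favours the partition
// that minimises cross-network traffic for that component.
class CommunicationGraph {
    private final Map<String, Map<String, Integer>> traffic = new HashMap<>();

    void recordMessages(String component, String partition, int count) {
        traffic.computeIfAbsent(component, k -> new HashMap<>())
               .merge(partition, count, Integer::sum);
    }

    String bestPlacement(String component) {
        Map<String, Integer> row = traffic.getOrDefault(component, Map.of());
        return row.entrySet().stream()
                  .max(Map.Entry.comparingByValue())
                  .map(Map.Entry::getKey)
                  .orElse("unplaced");
    }
}
```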




What was demonstrated1

Visual Presentation and Animation of Communication Dynamics
The components of an application, and their interactions, are presented visually as the application runs. The more components communicate with each other, the more they float towards one another in the view. Strong clustering of a set of components indicates that they communicate extensively, and suggests that they should all be placed on the same machine. If this is not possible, their communication will have to take place over the network, with a corresponding adverse effect on the overall performance of the distributed application. When a component is placed onto a machine, the position of the component in the view is subsequently constrained so that the component remains very near to the machine. The color of an edge between two components represents "tension": the amount by which the communication between two nodes would tend to draw them closer together, into a cluster, but cannot due to other counteracting forces.
Application Partitioning
"Automatic partitioning", which may be invoked by the user at any time, employs a multi-way graph cutting algorithm, described in the literature, operating on the current state of the object communication graph. For example, after running the application until the clustering in the view stabilizes, a user might invoke automatic partitioning to place business logic components.
Visual Feedback for Refinement
Areas of residual tension in the view indicate opportunities to improve performance by refining the design of the application, not just the partitioning. One possible refinement is subdivision of a component into smaller components that can be individually placed into separate partitions.
Where to find the product

References
[1] D. Kimelman, V.T. Rajan, T. Roth, and M. Wegman, "Partitioning and Assignment of Distributed Object Applications Incorporating Object Replication and Caching", to appear in the ECOOP '98 Workshop Reader.
[2] D. Kimelman, T. Roth, H. Lindsey, and S. Thomas, "A Tool for Partitioning Distributed Object Applications Based on Communication Dynamics and Visual Feedback", COOTS '97 Advanced Topics Workshop. Also available at

1 A more detailed description of this tool appeared in [2].

The Refactoring Browser John Brant & Don Roberts Demonstrated by John Brant University of Illinois 1304 W. Springfield Ave. Urbana, IL 61801, USA {brant,droberts}@cs.uiuc.edu

Abstract. The refactoring browser is a complete re–implementation of the standard Smalltalk system browser, which incorporates automatic refactorings into the tool. In addition to the browser, the accompanying Smalllint tool can quickly detect hard–to–find errors and style violations.

Background
Refactoring is an important part of the evolution of reusable software and frameworks. Its uses range from the seemingly trivial, such as renaming program elements, to the profound, such as retrofitting design patterns into an existing system. Despite its importance, lack of tool support forces programmers to refactor programs by hand, which is tedious and error-prone. The Refactoring Browser is a tool that carries out many refactorings automatically. By integrating these operations directly into the development environment, developers can restructure their programs safely and quickly. Determining where your program needs to be refactored is also a difficult problem. The Smalllint tool allows programmers to quickly search for hard-to-discover bugs. The rules incorporated in this tool were initially taken from the Classic Smalltalk Bugs list and have been extended as we found other cases that recurred in production code.
What was demonstrated
At the demonstration, we presented all of the refactorings that the refactoring browser can perform automatically. To demonstrate its safety, we renamed the class Object and the + operation on a running image. Additionally, we showed the Smalllint tool in operation by finding violations in the standard VisualWorks image.
Where to find the product
The Refactoring Browser is freely available from , both for VisualWorks and VisualAge Smalltalk.
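What distinguishes such automated refactorings from textual search-and-replace is that each one checks preconditions and then rewrites every reference consistently. The fragment below is our own minimal sketch of that idea, in Java rather than the tool's Smalltalk.

```java
import java.util.List;

// Hypothetical precondition check for a rename refactoring: the refactoring
// is only applied when the new name cannot collide with an existing class,
// after which every reference would be rewritten in one consistent step.
class RenameClassRefactoring {
    boolean precondition(String newName, List<String> existingClassNames) {
        return !existingClassNames.contains(newName);   // no name clash
    }
    // apply() would update the class definition and all referencing sources
    // together, which is what makes renaming safe on a running image.
}
```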

Business Objects with History and Planning Ilia Bider, Maxim Khomyakov IbisSoft, Box 19567, SE–10432 Stockholm, Sweden Magnificent Seven, 1–st Baltiyskiy per. 6/21–3, Moskva, Russian Federation [email protected], [email protected]

Abstract. The demonstration presented an object-oriented approach to building business applications. It is based on the idea of modeling a business process as an object that has a history and a plan of actions as its integral part. The approach is supported by home-made tools: Hi-base, a historical database able to store objects' full histories, and Navi, an object-oriented navigation system that allows the end-user to browse along the links that connect different business objects, in the current state or in the past.

Background
In our approach, business processes are represented as objects which we call organizing objects (or orgobjects for short). The state of an orgobject reflects the current state of the business process and indicates what further steps should be completed to transfer the object to its final state. The dynamic properties of orgobjects are represented with the help of history, events and activities. History is the time-ordered sequence of all the previous states of objects. The history of an orgobject shows the evolution of the process in time. Events are a special kind of (system) object that records additional information about transitions from one state of the object to another, such as the date and time when the transition occurred in the real world, the date and time when it was registered in the system, the person whose actions caused changes in the object (if applicable), his or her comments on the event, etc. A new event object is born each time some other object in the system changes. Activities represent actions that take place in the world of the application domain, such as getting a sales order, shipping goods, or administering a drug to a patient. In our model, an activity moves an orgobject to the next stipulated state. An activity may be planned first and executed later. This is realized by the notion of a planned activity. A planned activity is an object that contains information such as the type of activity (shipping, compiling, etc.), planned date and time, deadline, a reference to the person responsible for performing the activity, etc. Planned activities are included in the orgobjects that they will affect when executed. All planned activities included in a given orgobject compose the immediate plan of the process evolution. When executed, a planned activity changes the orgobject to which it belongs. This change may include adding new planned activities and deleting old ones. When executed, the planned activity becomes an event registered in the system.
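A minimal sketch of the model just described, with invented Java names (the actual tools, Hi-base and Navi, are not shown in this abstract):

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

class Event {
    Date   occurredAt;    // when the transition happened in the real world
    Date   registeredAt;  // when it was recorded in the system
    String byWhom, comment;
}

class PlannedActivity {
    String type;          // e.g. "shipping"
    Date   plannedFor, deadline;
    String responsible;
}

class OrgObject {
    String state;                                              // current process state
    final List<String>          history = new ArrayList<>();   // all previous states
    final List<Event>           events  = new ArrayList<>();
    final List<PlannedActivity> plan    = new ArrayList<>();   // immediate plan

    void transition(String newState, Event cause) {
        history.add(state);   // the full history is kept, never overwritten
        events.add(cause);
        state = newState;
    }
}
```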




What was demonstrated
A simple, specially created demo system called ProcessDriver was shown during the demonstration. Besides the ORGOBJECTS, EVENTS, and ACTIVITIES mentioned above, the system operates on the following classes of objects: ORGANIZATION, PERSON, EMPLOYEE, STAFF. Objects from ORGANIZATION represent companies or organizations that can participate in business processes. Objects from PERSON represent human beings that can be employed by an organization. Objects from EMPLOYEE serve as links between organizations and human beings. Objects from STAFF represent people working with the ProcessDriver. The demonstration was aimed at visualizing the two main features of our approach, history and planning. Several new organizations, persons and processes were created. The new persons were assigned to work with the new organizations. The new processes were aimed at selling some products to the newly created organizations. The possibility of viewing all the events that concern a particular object, a particular organization, a particular member of staff, etc. was shown, along with ways of viewing the state of an object before and after any particular event, or at a freely chosen time in the past. The means of interactively planning and executing activities within different processes were also demonstrated. The activities are accessible both from the project plan and from the personal calendar of each member of staff. They were executed one at a time and as a group chosen from the list.
Comment: little interest was shown in our demonstration. This reflects the fact that most academic interest is directed at solving technical problems around programming, rather than at the problems of business application development.
Where to find the product
More on applications built with the demonstrated approach can be found at: . There is no demo version to download for the moment.
References
Bider, I.: ObjectDriver - a Method for Analysis, Design and Implementation of Interactive Applications. In Data Base Management (22-10-25). Auerbach (1997)
Bider, I.: Developing Tool Support for Process Oriented Management. In Data Base Management (26-01-30). Auerbach (1997)
Bider, I., Khomyakov, M., Pushchinsky, E.: Logic of Change: Semantics of Object Systems with Active Relations. In: Broy, M., Coleman, D., Maibaum, T., Rumpe, B. (eds.): PSMT - Workshop on Precise Semantics for Software Modeling Techniques. Technische Universität München, TUM-I9803 (1998) 11-30

Poor Man's Genericity for Java
Boris Bokowski, Markus Dahm
Demonstrated by Boris Bokowski
Institut für Informatik, Freie Universität Berlin, Takustrasse 9, 14195 Berlin, Deutschland
[email protected], [email protected]

Abstract. Poor Man’s Genericity for Java is a proposal for adding parameterized types to Java. It is based on simple byte–code transformations both at compile–time and at run–time, and can be integrated into existing Java compilers by making only minimal changes.
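To convey what is being added, here is a small example of an F-bounded parameterized class. It deliberately uses modern Java generics syntax purely as an illustration; the proposal's actual surface syntax and its byte-code encoding are defined in [1], not reproduced here.

```java
import java.util.ArrayList;
import java.util.List;

// F-bounded parametric polymorphism: the type parameter E is constrained by
// a type that mentions E itself, which is exactly what the bound on
// Comparable<E> expresses.
class SortedBag<E extends Comparable<E>> {
    private final List<E> items = new ArrayList<>();

    void insert(E e) {
        int i = 0;
        // The bound guarantees that compareTo(E) is available on elements.
        while (i < items.size() && items.get(i).compareTo(e) < 0) i++;
        items.add(i, e);   // keep elements ordered
    }
}
```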

Background
Recently, a number of proposals for adding parametric polymorphism (generic classes) to Java have been published. With our proposal, we try to minimize the changes that need to be made to existing Java compilers. In particular, we found that changing only one method in Sun's Java compiler already results in a reasonable implementation of parameterized types. For details of our proposal, the reader is referred to a research paper describing Poor Man's Genericity for Java [1].
What was demonstrated
We have demonstrated a fully working compiler that supports F-bounded parametric polymorphism. It has been implemented on top of Sun's Java compiler, with only minimal changes to the existing code. During the presentation, example files were compiled, and some of the intermediate steps during compilation were visualized.
Where to find the product

References [1] B. Bokowski and M. Dahm, Poor man’s genericity for Java, Proceedings of JIT’98, Frankfurt, Germany, November 1998, Springer


An Object DBMS for Multimedia Presentations Including Video Data
Rafael Lozano1, Michel Adiba, Herve Martin, Francoise Mocellin
Demonstrated by Rafael Lozano
Laboratoire Logiciels Systèmes Réseaux — IMAG, B.P. 72, 38402 St. Martin d'Hères, France
{Rafael.Lozano, Michel.Adiba, Herve.Martin, Francoise.Mocellin}@imag.fr

Abstract. This demonstration shows how an OO DBMS can be extended to take multimedia data into account. An integrated environment for multimedia presentation authoring and playback has been developed on top of the O2 OO DBMS. It consists of a library of classes which can be used and shared by several multimedia database applications. The system has been developed using C++, and multithreading is used for performing parallel tasks. In addition, a graphical interface has been developed which allows one to easily and interactively build multimedia presentations. Also, a video data model has been specified and implemented. Finally, an OQL-like query language is used to query video and presentation objects and to automatically generate dynamic multimedia presentations.

Background
We aim to show that merging conventional and multimedia data increases the capabilities of multimedia information systems (MIS). We propose a model to capture information about each medium. We focus on the management of video data because video delivery systems are a very important class of multimedia applications and video structure can be complex to model. We also propose an object-oriented approach to store, query and play multimedia presentations. Typical multimedia applications such as medical image databases or geographical information systems involve a large number of multimedia objects. Other applications, such as tourist databases or self-training systems, also require storing and managing such kinds of objects, and provide specific interfaces to interact with such information. We propose a model and a system for defining, typing and building multimedia presentations, taking into account spatio-temporal constraints. These presentations are considered as objects, which can be stored in the database, updated and queried. The model is implemented as an extension of an Object-Oriented DBMS.

1 Supported by SFERE-CONACYT Mexico




Demonstration items
Our demonstration is composed of two parts. In the first part, we focus on the creation and execution of intentional multimedia presentations. In the second part, video data management is demonstrated. We show that multimedia presentations can be created by analyzing query results. The basic idea is to use the database constructors used to build query results (e.g., set, unique set, list) as temporal relationships (sequential and parallel) among elements. Intentional presentations have several advantages, such as dealing with the presentation of an unknown number of objects or ensuring the consistency between presented data and database data (by avoiding dangling references and by taking into account any change in the database). We are also interested in the manipulation and reuse of video data. For that reason, we created and implemented a video data model which enables us to store, manipulate, query and present video data. We have extended an object-oriented DBMS with video management capabilities, addressing the following aspects:
• We developed a library of classes which capture the hierarchical video structure (shots, scenes, sequences).
• For video indexing, we propose a generic schema which can be refined in order to create a specific indexing structure.
• From raw videos stored in the database, we edit a virtual video as a view on existing data. We offer facilities for creating new videos from those already stored in the database, without replication.
• Because video data must be queried like conventional data, we show how the query language can be extended to query video objects.
Where to find the product
Our demonstration is unfortunately not permanently available because it requires the O2 OO DBMS to be running. Nevertheless, if somebody is interested, he/she can contact us by e-mail and we can arrange a demo.
References
M. Adiba, R. Lozano, H. Martin, F. Mocellin, Management of multimedia data using an Object-Oriented Database System, DEXA Workshop QPMIDS (Query Processing in Multimedia Information Systems), in conjunction with the 8th International DEXA'97 Conference, Toulouse, September 1997.
J-C. Freire, R. Lozano, H. Martin, F. Mocellin, A STORM Environment for Building Multimedia Presentations, 12th International Conference on Information Networking (ICOIN-12), Koganei, Tokyo, Japan, January 1998.
R. Lozano, H. Martin, Querying Virtual Videos Using Path and Temporal Expressions, Symposium on Applied Computing (SAC'98), Atlanta, Georgia, USA, February 1998.
H. Martin, Specification of Intentional Multimedia Presentations using an Object-Oriented Database, Advanced Database Research and Development Series, Volume 8, World Scientific, 1998.

OPCAT — Object–Process Case Tool: an Integrated System Engineering Environment (ISEE) Dov Dori, Arnon Sturm Demonstrated by Arnon Sturm The William Davidson Faculty of Industrial Engineering and Management Technion, Israel Institute of Technology, Haifa 32000, Israel [email protected], [email protected]

Abstract. This demonstration concerns system development methodologies and their supporting CASE products. The Object-Process Methodology (OPM) integrates system structure and behavior within one model and manages complexity through a scaling mechanism that controls the visibility of things in the system. The demonstration presented OPM principles and their application through OPCAT (Object-Process CASE Tool), the product that supports OPM.

Background
Object-Process Methodology (OPM) is a system development approach that integrates the structure and behavior of a system within a single unifying model. The conventional wisdom has been that there is an inherent dichotomy between object- and process-oriented approaches, and that it is not possible to combine these two essential aspects of any system into one coherent, integral frame of reference. This misconception has accompanied systems analysis to the extent that even the accepted UML standard (Booch and Rumbaugh, 1995; Booch and Rumbaugh, 1996) maintains the separation between structure and behavior, and spreads analysis activities across no fewer than eight types of models that use different diagram types.
What was demonstrated
In the first part of the demonstration, we presented an overview of OPM. Contrary to the accepted view that structure and behavior cannot be merged, at the heart of the Object-Process Methodology is the fusion of the structural and procedural aspects of a system into a single, unifying model. OPM distinguishes between objects and processes as two types of things that have equal status and importance in the specification of a system. The OPM model shows how objects interact with each other via processes, such that both the structural and the procedural system aspects are adequately represented. The underlying observation of the Object-Process paradigm is that every thing in the universe of interest is either a process or an object. This opens the door to modeling a system using a single model that faithfully defines and describes both its structure and behavior. These two major aspects of any system are represented without suppressing




one another. Structural relations, primarily aggregation, generalization and characterization, and procedural relations, which model the behavior of the system over time, are seamlessly intertwined to provide a comprehensive understanding of the system. The Object-Process Diagram (OPD), the graphic expression of a system or part of it analyzed by OPM, is a concise and effective visual language. It incorporates elements from both process-oriented approaches (notably DFD and its derivatives) and object-oriented ones. The Object-Process Language (OPL) provides a textual, natural-language-like equivalent specification of the system specified through the OPD set. OPL is designed to be read as natural English, albeit with a stringent and limited syntax, such that no prior knowledge of analysis methodologies is required. OPL serves both as a feedback mechanism for the prospective customer and as the engine for activities that follow the design stage, notably code generation and database schema generation. In the second part of the demonstration we presented a case study of a corporate foreign travel management system that demonstrates OPM features and OPCAT's current functionality. We exemplified how OPDs are constructed and what symbols they consist of. Through OPCAT's GUI, we demonstrated OPM's expressive power, including its zooming in/out and unfolding/folding scaling mechanisms. Finally, we demonstrated the translation of OPDs to OPL using the OPL syntax, and the equivalence between the alternative graphical and textual representations.
Where to find the product
URL:
References
Dori, D., Object-Process Analysis: Maintaining the Balance Between System Structure and Behavior. Journal of Logic and Computation, 5(2), pp. 227-249, 1995.
Dov Dori, Unifying System Structure and Behavior through Object-Process Analysis. Journal of Object-Oriented Analysis, July-August, pp. 66-73, 1996.
Dov Dori and Moshe Goodman, On Bridging the Analysis-Design and Structure-Behavior Grand Canyons with Object Paradigms. Report on Object Analysis and Design, 2(5), pp. 25-35, January-February 1996.
Dov Dori and Moshe Goodman, From Object-Process Analysis to Object-Process Design, Annals of Software Engineering, Vol. 2, pp. 20-25, 1996.
Doron Meyersdorf and Dov Dori, The R&D Universe and Its Feedback Cycles: an Object-Process Analysis. R&D Management, 27(4), pp. 333-344, October 1997.
Mor Peleg and Dov Dori, Extending the Object-Process Methodology to Handle Real-Time Systems. Journal of Object-Oriented Programming (accepted July 1998).

The AspectIX ORB Architecture
F. Hauck, U. Becker, M. Geier, E. Meier, U. Rastofer, M. Steckermeier
University of Erlangen-Nürnberg, Germany
{hauck, ubecker, geier, meier, rastofer, mstecker}@informatik.uni-erlangen.de, http://www4.informatik.uni-erlangen.de/Projects/AspectIX/

The CORBA architecture defines the semantics of the interaction of distributed objects [1]. These semantics are hardly extensible. CORBA services can extend the basic functionality of an ORB, but they are based on those fixed semantics. AspectIX is an open and more flexible architecture than CORBA, but an AspectIX implementation can also host CORBA-compliant applications. AspectIX adopts a fragmented object model similar to the Globe system [2], which means that each client owns a local part of the distributed object and that these local parts (called fragments) can interact with one another. A local fragment can be intelligent and carry a part of the distributed object's functionality, or it can act as a dumb stub, as in the CORBA-compliant AspectIX profile. With fragments, the internal communication semantics of a distributed object is entirely hidden from the client. For example, the object can decide on using replication and caching of data, and on different communication semantics (e.g., fast real-time communication). Often it is also desirable to let the client influence some of these properties. Controlling nonfunctional and functional properties in an orthogonal way is the goal of aspect-oriented programming. Therefore, a set of closely related properties is called an aspect [3]. AspectIX provides generic configuration interfaces for each distributed object that allow clients to activate and control the aspects supported by the object. The object may use a different local fragment if it is more suited to fulfill the configuration. The replacement of fragments is transparent to the client. Within the AspectIX project we investigate various application classes (profiles) and their requirements in the form of aspect definitions (see also [4]): CORBA, wide-area systems (replication, consistency), mobile agents (mobility), and process control systems (real-time constraints, fault tolerance).
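The following Java interfaces are our own sketch of the fragmented-object idea, not the actual AspectIX API: the client sees only a local fragment plus a generic configuration interface through which aspects are activated.

```java
// The client's local part of a distributed object. Whether invoke() runs
// locally (a smart fragment) or forwards over the wire (a dumb stub) is
// hidden behind this interface.
interface Fragment {
    Object invoke(String operation, Object[] args);
}

// Generic configuration interface: clients activate and control aspects
// such as replication or real-time communication; the object may then
// transparently re-bind to a different, better-suited local fragment.
interface AspectConfigurable {
    void activateAspect(String aspectName);   // e.g. "replication"
    void setAspectProperty(String aspect, String key, String value);
}
```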

References
1. Object Management Group: The Common Object Request Broker Architecture, Version 2.2. (1998).
2. M. van Steen, P. Homburg, A. Tanenbaum: The architectural design of Globe: a wide-area distributed system, Technical Report IR-422, Vrije Univ. Amsterdam (1997).
3. F. Hauck, et al.: "AspectIX: A middleware for aspect-oriented programming." In ECOOP'98 Workshop Reader, LNCS, Springer (1998).
4. M. Geier, M. Steckermeier, et al.: "Support for mobility and replication in the AspectIX architecture." In ECOOP'98 Workshop Reader, LNCS, Springer (1998).

Formalization of Component Object Model COM - The COMEL Language
Rosziati Ibrahim, Clemens Szyperski
School of Computing Science, Queensland University of Technology, Brisbane, Australia
[email protected], [email protected]

Microsoft's OLE provides an application integration framework for Microsoft Windows. OLE rests on the Component Object Model (COM), which specifies a programming-language-independent binary standard for object invocations, plus a number of interfaces for foundational services. COM is all about interoperability of independently deployed components and is used to develop component-based software. COM is language independent: component software can be developed by independent vendors, and extensions to component software can also be developed and integrated by the client. Currently, COM offers a set of informal rules that form a standard binary level for building interoperable software. However, COM does not have a formal specification. COM's rules are complex and subtle, making it worthwhile to formalize them. We propose an approach to formalizing COM by introducing a model language. Since COM itself is language independent, the language introduced here merely takes an exemplary role. This example language for COM is called COMEL (Component Object Model Extended Language, pronounced "cho-mell"). We formalized some of the most important of COM's rules. Our approach is to introduce the abstract syntax of the COMEL language, addressing COM's informal rules, and then to form the type system, the operational semantics and the subject reduction theorem of the COMEL language. The COMEL language demonstrates the underlying concepts of components and the role of component types, object composition, object instantiation, interface lookup, and method call in COM. The COMEL language has a formally defined type system and operational semantics with established type soundness.
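The run-time interface lookup that this formalization addresses can be conveyed by a loose Java model. This is neither COMEL syntax nor real COM (where lookup is IUnknown::QueryInterface returning an error code); it is only a sketch of the concept.

```java
import java.util.HashMap;
import java.util.Map;

// A component instance exposes several interfaces; clients navigate
// between them at run time instead of downcasting a single class.
class ComObject {
    private final Map<String, Object> interfaces = new HashMap<>();

    void expose(String interfaceId, Object impl) {
        interfaces.put(interfaceId, impl);
    }

    // Analogue of QueryInterface: null models "interface not supported"
    // (real COM reports this through an HRESULT instead).
    Object queryInterface(String interfaceId) {
        return interfaces.get(interfaceId);
    }
}
```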


Oberon-D = Object-Oriented System + Object-Oriented Database Markus Knasmüller1 BMD Systemhaus Ges.m.b.H. Steyr Sierninger Str. 190, 4400 Steyr, Austria [email protected]

Object-orientation was invented twice: once by the programming languages people and once by the database people. The two camps are usually separated, in spite of the many commonalities in their goals and approaches. Programming languages deal with transient objects in main memory. They are concerned with data abstraction, inheritance and polymorphic operations, while persistent storage of objects is often ignored or poorly supported. Databases, on the other hand, deal with persistent objects on a disk. They are mainly concerned with modeling complex relations between objects as well as with efficient access methods and query languages. The separation is also evident in the use of different languages: Smalltalk, C++ or Java on the one hand, and mostly OQL on the other hand. Although it is usually possible to access a database from a program, the notations and access mechanisms differ from the techniques used for transient objects. This project aims at unifying the two worlds. A database is viewed as a set of objects that happen to be persistent but are otherwise accessed and manipulated just like any other object in main memory. The idea is to view the database as a virtual extension of the computer’s memory. All objects - transient or persistent - are referenced via pointers. The run time system makes sure that persistent objects are loaded from disk when they are needed and stored to disk when they are not used any more. For the programmer, there is no difference between transient and persistent objects. They are declared, generated and used in exactly the same way. In a radical view, there seems to be no database at all because it is maintained automatically behind the scenes. Oberon-D introduces database functionality in the Oberon System, which was originally developed by Niklaus Wirth (the father of Pascal) and Jürg Gutknecht at ETH Zurich. The Oberon System as well as the Oberon-2 programming language were selected because of their suitability for the project and because of their modern features: the Oberon System is an open environment with object-orientation, dynamic extensibility, garbage collection and metaprogramming facilities. More information about the project is available at http://www.ssw.uni-linz.ac.at/Projects/OberonD.html.
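The programming model can be conveyed by a short sketch. It is written in Java-flavoured pseudocode purely for illustration (Oberon-D itself extends Oberon-2, and the root-registration details differ); the point is the complete absence of explicit load and store calls.

```java
// Transient and persistent objects are declared and used identically;
// reachability from a persistent root decides which is which.
class Node {
    Node next;         // an ordinary pointer, possibly onto the persistent heap
    String payload;
}

class RootExample {
    static Node persistentRoot;   // assume the runtime treats this as a DB root

    static void use() {
        // No load/store calls: the runtime faults objects in from disk on
        // access and writes them back when they are no longer used.
        Node n = new Node();
        n.payload = "survives if reachable from persistentRoot";
        n.next = persistentRoot;
        persistentRoot = n;
    }
}
```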

1 Markus Knasmüller is on leave from Johannes Kepler University Linz, Department of Practical Computer Science (Systemsoftware).

OctoGuide - A Graphical Aid for Navigating Among Octopus/UML Artifacts Domiczi Endre, Nokia Research Center, Finland [email protected]

Novice users of object-oriented methods might encounter difficulties in locating the artifacts (diagrams, text tables, etc.) defined by the respective method and placing them into context. Our own work experience is similar. CASE tools can be helpful in this area, but they can be difficult to learn and expensive, too. During the development of Octopus/UML, a new, enhanced version of the Octopus method, we kept the above issues in mind, as well as the needs of teaching. This led to OctoGuide, an aid that is:
• capable of visualizing links and connections among Octopus/UML models and artifacts;
• easy to learn;
• suitable for self-study as well as for giving tutorials and presentations;
• easy to maintain and to adapt to our needs;
• lightweight;
• inexpensive.
OctoGuide has as its root Microsoft PowerPoint, the application used in Octopus education previously, but utilizes OLE (Object Linking and Embedding) and hyperlink technology. In its present form OctoGuide helps users become familiar with the artifact set of Octopus/UML visually and discover relationships. It can be used to gain hands-on experience with Octopus/UML by elaborating on small-scale case studies. In the future it may also be used in the classroom to introduce case studies in a more flexible way and to give guidance in solving exercises. With little effort the same case studies, exercises and solutions can be converted to HTML and made readily available via browsers, which has actually already begun.


Run Time Reusability in Object-Oriented Schematic Capture

David Parsons, Tom Kazmierski Southampton Institute, U.K. {dave.parsons, tjk }@solent.ac.uk

This poster presents some important aspects of the architecture and functionality of an object-oriented schematic capture system for electronic circuit design and simulation. In particular it introduces the terms 'virtual polymorphism' and 'visual polymorphism' to describe techniques that provide run-time extensibility and flexible code generation within a visual environment. Schematic capture systems convert a graphical representation of an electronic circuit into some other form for simulation or synthesis. This particular system generates code in VHSIC Hardware Description Language - Analogue and Mixed Signal (VHDL-AMS). This language, standardised by the IEEE in 1997, allows for hardware descriptions that include both digital and analogue components. An important aspect of VHDL-AMS is that it allows new types of component to be described using behavioural definitions, rather than simply building larger aggregations out of sub-components that already exist in libraries. To enable a user to define new component types via the graphical interface, the system is built using an approach called 'virtual polymorphism'. This term describes a situation where application-level objects of different types (in this case electronic components) appear to have polymorphic behaviours even where they are represented by objects of the same class. This is achieved via a reflective architecture that allows component objects to be configured at run time by meta-data, enabling them to invoke various dynamically bound aggregations to provide their behaviour. For example, different component objects use objects of other classes to draw themselves using standard symbol sets and to generate code. By basing the system on this architecture, rather than on a traditional classification hierarchy of component classes, run-time extensibility is provided by routines that dynamically add to the meta-data. The second important aspect described by the poster is termed 'visual polymorphism'. This concept is based on a particular characteristic of VHDL-AMS code generation for mixed-mode (digital and analogue) circuits, where we find that a single type of component may be represented by one of a number of different code models depending on the nature of its connectivity to other elements of a circuit. Visual polymorphism describes how a single visual image of a component encapsulates the automatic selection of the appropriate code model. A single gate component, for example, is able to select the appropriate models from possibilities that include digital, analogue or mixed-mode input. It does this by giving digital component objects the ability to interrogate their external connections to find out whether they are joined to terminal nodes (analogue objects) or signal nodes (digital objects).

562

D. Parsons and T. Kazmierski

objects). From this information, each component is able to invoke a model with an appropriate type signature via its code generating objects. This use of both virtual and visual polymorphism demonstrates that we can apply polymorphism as a conceptual approach, allowing objects to behave differently in different contexts, without necessarily using traditional implementation mechanisms. Systems can thus be made more flexible and easily extensible.
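The poster's two terms can be made concrete with a small sketch. The following Java fragment is our own illustration, not code from the system described: every class, method, and field name in it is invented. It shows a single Component class whose drawing and code-generation behaviour is supplied entirely by run-time meta-data (virtual polymorphism), and a generateCode method that selects among several code models by interrogating the component's connections (visual polymorphism).

    import java.util.List;
    import java.util.Map;

    // Behaviours that meta-data can plug into a component at run time.
    interface SymbolRenderer { void draw(Component c); }       // standard symbol drawing
    interface CodeModel     { String generate(Component c); }  // VHDL-AMS text emitter

    // Meta-data describing one component type, e.g. loaded from a library file.
    class ComponentType {
        final String name;
        final SymbolRenderer renderer;
        final Map<String, CodeModel> models;  // keyed by connectivity kind
        ComponentType(String name, SymbolRenderer renderer,
                      Map<String, CodeModel> models) {
            this.name = name; this.renderer = renderer; this.models = models;
        }
    }

    // A connection point: terminal nodes are analogue, signal nodes digital.
    class Node {
        private final boolean terminal;
        Node(boolean terminal) { this.terminal = terminal; }
        boolean isTerminal() { return terminal; }
    }

    // Every schematic element is an instance of this one class; the apparent
    // polymorphism comes from the ComponentType it is configured with.
    class Component {
        final ComponentType type;
        final List<Node> connections;
        Component(ComponentType type, List<Node> connections) {
            this.type = type; this.connections = connections;
        }
        void draw() { type.renderer.draw(this); }

        // Visual polymorphism: one symbol, but the code model is chosen by
        // asking the external connections what kinds of node they are.
        String generateCode() {
            boolean analogue = connections.stream().anyMatch(Node::isTerminal);
            boolean digital  = connections.stream().anyMatch(n -> !n.isTerminal());
            String kind = (analogue && digital) ? "mixed"
                        : analogue ? "analogue" : "digital";
            return type.models.get(kind).generate(this);
        }
    }

In such a design, registering one more ComponentType object at run time is enough to introduce a new kind of component; no new class needs to be compiled, which is the sense in which routines that add to the meta-data provide run-time extensibility.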

Replication as an Aspect

Johan Fabry
Vrije Universiteit Brussel, Pleinlaan 2, 1050 Brussels, Belgium
[email protected]

1 Context

An important problem in distributed systems is the sharing of data between different computers. A possible solution is replication: a number of servers contain copies of the data, and clients can access this data through the network. Aspect-Oriented Programming (AOP) allows the programmer to separate the base program from this replication aspect, by specifying it separately in a special-purpose aspect language. This makes the code easier to write, understand, and maintain, leading to greater productivity. A special tool called an aspect weaver combines the different files into the final code. We have implemented two aspect languages and an aspect weaver to effectively allow this separation of concerns for replication.

2 The Aspect Languages and the Aspect Weaver

We have chosen to create an AOP extension of Java, using three separate languages: Jav, Dupe and Fix. Jav is the language in which the base algorithm is written. Jav is Java 1.1, but without interface specification. When replicating a Jav object, we only replicate its instance variables (also called fields), which we treat as primitive types. It is not always necessary to replicate all fields of an object. Using Dupe, the replication aspect language, programmers specify which fields need to be replicated and which do not.

Another concern is the handling of exceptions that might be thrown because of the inherent uncertainty of network behavior. Catching these exceptions should not be done in the base algorithm, because it should not be aware of the replication aspect. The programmer must therefore be able to specify exception handlers in a separate aspect language. This separate error-handling aspect language is called Fix. In Fix a programmer can specify exception handlers for three classes of errors: errors which occur while first trying to contact the server, while trying to write to a certain field, or while trying to read from a certain field.

The aspect weaver combines the Jav, Dupe and Fix files into Java source code in which existing Jav constructors also create a proxy object for the server, all accesses to replicated fields occur on this proxy, and exceptions thrown during the above actions are handled using the given exception handlers.
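The abstract shows no Dupe or Fix source, so the sketch below only illustrates, in plain Java, the shape of the woven output just described; it is our own hypothetical reconstruction, and every name in it (Account, balance, ReplicaProxy, Handlers, ReplicaProxyFactory) is invented. The constructor additionally contacts the server, accesses to the replicated field go through the proxy, and each of the three Fix error classes has its own handler hook.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    // Server-side proxy through which all replicated-field accesses pass.
    interface ReplicaProxy {
        Object read(String field) throws IOException;
        void write(String field, Object value) throws IOException;
    }

    // Handlers corresponding to the three Fix error classes.
    interface Handlers {
        void onConnect(Exception e);             // failure contacting the server
        void onRead(String field, Exception e);  // failure reading a field
        void onWrite(String field, Exception e); // failure writing a field
    }

    // Stand-in for the real network layer: an in-memory "server".
    class ReplicaProxyFactory {
        static ReplicaProxy connect(String className) throws IOException {
            final Map<String, Object> store = new HashMap<>();
            return new ReplicaProxy() {
                public Object read(String field) { return store.get(field); }
                public void write(String field, Object value) { store.put(field, value); }
            };
        }
    }

    // What the weaver might produce for a Jav class whose field 'balance'
    // was marked as replicated in a Dupe file.
    class Account {
        private final Handlers fix;   // exception handlers from the Fix file
        private ReplicaProxy proxy;   // woven in: created by the constructor
        private int balance;          // local copy of the replicated field

        Account(Handlers fix) {
            this.fix = fix;
            try {
                proxy = ReplicaProxyFactory.connect("Account");  // woven in
            } catch (IOException e) {
                fix.onConnect(e);
            }
        }

        int getBalance() {
            try {
                Object v = proxy.read("balance");                // woven in
                if (v != null) balance = (Integer) v;
            } catch (IOException e) {
                fix.onRead("balance", e);
            }
            return balance;
        }

        void setBalance(int value) {
            balance = value;
            try {
                proxy.write("balance", value);                   // woven in
            } catch (IOException e) {
                fix.onWrite("balance", e);
            }
        }
    }

The point of the sketch is only the separation: nothing in the original Jav source would mention proxies or network exceptions; both are introduced by the weaver from the Dupe and Fix specifications.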

Author Index

Abadi, Martín 291
Abreu, Fernando Brito e 44, 62, 259
Adiba, Michel 553
Afonso, Ana Paula 309
Agarwal, Rakesh 222
Agha, Gul 306
Aksit, Mehmet 410, 435, 474, 496
Alencar, P.S.C. 157
Alencar, Paulo S.C. 60
Alexander, Ian F. 228
Alonso, Luis 323
Alpdemir, M. Nedim 147
Alvarez, Xavier 70
Álvarez-García, Fernando 382
Ambriola, Vincenzo 477
Ancona, Massimo 281
Andersen, Birger 307
Angster, Erzsébet 335
Ankeny, L. A. 458
Arévalo, Sergio 317
Argañaraz, Verónica 68
Aura, Tuomas 284
Bachatene, Helene 99
Baelen, Stefan Van 197
Bagrodia, Rajive 319
Ballesteros, Francisco J. 317, 329, 388
Baniassad, Elisa L.A. 433
Baquero, Carlos 307
Bär, Holger 73
Barbier, Franck 480
Bardou, Daniel 418
Barroca, Leonor 502
Basu, Chiranjit 222
Batenin, Adam 267
Baum, Gabriel 68
Becker, Ulrich 325, 420, 426, 557
Bellettini, C. 519
Benedicenti, Luigi 1, 37
Berg, Klaas van den 483
Berger, L. 422
Berkem, Birol 38, 232
Bernadat, Philippe 306
Bider, Ilia 217, 550
Blair, Gordon S. 390, 436
Blair, Lynne 436
Blank, Gregory 437
Blay-Fornarino, Mireille 76
Boissier, R. 529
Bokowski, Boris 380, 552
Bonhomme, Alice 327
Bontridder, Dirk 189
Borne, Isabelle 44
Börstler, Jürgen 333
Bosch, Jan 99, 130
Boulanger, Frédéric 515
Brant, John 81, 549
Briand, Lionel C. 48
Brislawn, Kristi 446
Brose, Gerald 279
Brown, David L. 446
Bryant, Anthony 84
Bryce, Ciarán 288
Burgett, Jeff L. 50
Burns, A. 365
Buttyán, Levente 301
Cahill, Vinny 438
Callebaut, Eric 237
Campbell, Roy H. 317, 388
Caron, Delphine 448
Carrière, S. Jeromy 48
Carvalho, Dulcineia 388
Cazzola, Walter 281, 386
Champagnoux, P. 384
Chauhan, Deepika 306
Cherinka, R. 165
Cherki, Sophie 115
Chevassus, Nicolas 525
Chiba, Shigeru 372
Cho, Il-Hyung 29
Cinnéide, Mel Ó 79, 93
Ciupke, Oliver 73
Clancy, S. P. 458
Contreras, José L. 369
Costa, Fábio M. 390
Coulouris, George 273, 285
Coulson, Geoff 390
Counsel, Steve 96
Counsell, S. 74
Cowan, D.D. 157
Cowan, Donald D. 60
Cozzini, Stefano 450
Cuche, Jean Sebastien 259
Cummings, J. C. 458
Dahm, Markus 552
Davis, Kei 444, 446, 452, 453
Decyk, Viktor 462
Delatour, Jérôme 511
Demeyer, Serge 66, 82, 247
Dery, A.M. 422
Dery, Anne-Marie 76
Díaz-Fondón, Marián 382
Dockx, Jan 539
Dollimore, Jean 285
Dombiak, Gaston 70
Domiczi, Endre 560
Dong, Jing 60
Dori, Dov 555
Dowling, Jim 438
Drum, Philipp 472
Ducasse, Stéphane 72, 75, 76, 78, 247
Duchien, L. 384
Dumke, Reiner R. 253
Durr, Eugene 502
Edwards, Helen M. 84
Elswijk, Mark van 486
Enselme, D. 384
Epivent, M. 529
Ernst, Erik 1, 30
Fabry, Johan 424, 563
Fernández, Alejandro 344
Fernandez, Eduardo B. 281
Florin, G. 384
Fondón, María Ángeles Díaz 277
Fornarino, M. 422
Fradet, Pascal 394
Frank, Lars 321
Frohner, Ákos 5
Gajewski, R. R. 456
Galal, Galal Hassan 44, 46
García, Fernando Álvarez 277
Geier, Martin 325, 426, 557
Genssler, Thomas 80
Gerard, Sebastien 14
Gérard, Sébastien 533
Gerhardt, Frank 1, 6
Gervasi, Vincenzo 477
Gomez, Jaime 21
Graham, R. L. 458
Graw, Günter 155
Gray, Robert S. 292
Greefhorst, Danny 486
Greenaway, Adam P. 294
Gressier-Soudan, E. 529
Grijseels, Alain 189
Grossman, Mark 159
Guday, Shai 306
Gutiérrez, Darío Álvarez 277, 382
Guy, Richard G. 319
Hagimont, Daniel 273, 278
Hakala, Markku 105
Hall, J. H. 458
Haney, S. W. 458
Hansen, Klaus Marius 110
Harms, Jürgen 293
Harrison, R. 74
Hauck, Franz J. 325, 329, 420, 426, 557
Hautamäki, Juha 105
Hedin, Görel 99
Heiken, J. H. 458
Helton, David 163
Hennessy, Matthew 304
Henshaw, William D. 446
Hermans, Leo 211
Hernández, Juan 443
Hohl, Fritz 299
Holian, K. S. 458
Holmes, David 439
Hölzle, Urs 143
Holzmüller, Bernd 31
Horn, F. 529
Hruby, Pavel 234
Hulaas, Jarle 293
Humphrey, W. F. 458
Ibrahim, Rosziati 32, 558
Ingham, James 161
Ismail, Leila 288
Itou, Shigeo 460
Jacob, Anita 39
Jamali, Nadeem 306
Jensen, Christian D. 273, 278
Jensen, T. 296
Joosen, Wouter 151, 315, 367, 428, 503
Jørgensen, Bo N. 503
Jouhier, Bruno 202
Juul, Niels C. 307
Kaestner, Celso 64
Kantorowitz, E. 270
Karakostas, V. 208
Karmesin, S. R. 458
Kassab, Lora L. 300
Kazman, Rick 48
Kazmierski, Tom 561
Kelemen, Beáta 340
Keller, Ralph 143
Kendall, Elizabeth A. 217, 240, 440
Kenens, Peter 315, 428
Khomyakov, Maxim 217, 550
Kiczales, Gregor 398
Kilov, Haim 167
Kim, Hyoseob 39
Kimelman, Doug 313, 441, 547
Kindberg, Tim 280
Kintzer, Eric 202
Kleinöder, J. 420
Knasmüller, Markus 22, 359, 543, 559
Kniesel, Günter 136
Knolmayer, Gerhard F. 205
Köhler, Gerd 250
Kon, Fabio 317, 388
Kop, Christian 489
Koponen, Petteri 284
Koskimies, Kai 99
Kozsik, Tamás 15
Küçük, Bülent 147
Kuenning, Geoffrey H. 319
Lalanda, Philippe 115
Lambright, Dan 306
Lange, Anthony 50
Lantzos, Theodoros 40, 84
Lanusse, Agnès 533
Laurent, A. 529
Lea, Doug 295
Lee, S. R. 458
Lefèvre, Laurent 327
Lesueur, B. 492
Lewerentz, Claus 255
Lima, Cabral 64
Lindsey, Beth 547
Lindsey, Hayden 547
Lopes, Cristina Videira 394, 398
Lorenz, David H. 32, 431
Lounis, Hakim 264
Lovelle, Juan Manuel Cueva 277, 382
Lozano, Rafael 553
Lucena, C.J.P. 157
Lucena, Carlos J.P. 60
Luksch, Peter 472
Lumsdaine, Andrew 466, 468
Lunau, Charlotte Pii 392, 442
Lycett, Mark 153
Maat, Matthijs 486
Mackie, R. I. 456
Maijers, Rob 486
Mao, Yida 264
Marinescu, Radu 252
Marjeta, Juha 149
Marshall, J. C. 458
Martin, Herve 553
Martínez, Lourdes Tajes 277, 382
Matsuoka, Satoshi 460
Matthijs, Frank 151, 315, 367, 428
Mayr, Heinrich C. 489
McKee, Gerard T. 294
McNamara, G. R. 458
Meier, Erich 325, 426, 557
Mens, Kim 54, 189
Mens, Tom 54
Mester, Arnulf 155
Métayer, D. Le 296
Meuter, Wolfgang De 44
Michiels, Sam 315, 428
Mikhajlova, Anna 138
Milojicic, Dejan 306
Mitchell, Stuart P. 363, 365
Mocellin, Francoise 553
Moore, Robert 388
Moreira, Ana Maria D. 350
Munro, Malcolm 161
Murillo, Juan Manuel 443
Murphy, Gail C. 433
Nagaratnam, Nataraj 295
Narisawa, Fumio 507
Naya, Hidemitsu 507
Nebbe, Robb D. 120, 402
Niemelä, Eila 149
Nithi, R. 74
Nixon, Paddy 79
Noble, James 439
Norris, B. 464
Norton, Charles D. 462
Nova, L.C.M. 157
Ossher, Harold 545
Ossher, Harold L. 406
Outhred, Geoff 141
Overstreet, C. 165
Painter, J. W. 458
Paludetto, Mario 511
Paoli, Flavio De 519
Parsons, David 23, 561
Paul, Ray J. 153
Pauw, Wim De 541
Pavillet, Gabriel 33
Pawlak, R. 384
Pedraza, Enrique 443
Pereira, Gonçalo 62
Persson, Patrik 7
Podolsky, Markus 219
Poels, Geert 52, 261
Pons, Claudia 16
Pooley, Rob 85
Popek, Gerald J. 319
Potter, John 141, 439
Predonzani, Paolo 58
Presso, María José 68
Prieto, Máximo 68, 70
Punter, Teade 269
Qiu, Winnie 17
Quinlan, Daniel 452, 453
Quinlan, Daniel J. 446
Rackl, Günther 8
Radenski, A. 464
Rajan, V. T. 547
Rajan, V.T. 313
Rapicault, Pascal 76
Räsänen, Juhana 284
Rasheed, A. 311
Rashid, Awais 24
Rastofer, Uwe 325, 426, 557
Razafindramary, D. 529
Redmond, Barry 438
Regateiro, Francisco S. 309
Reiher, Peter 319
Revault, N. 492
Reynders, J. V. 458
Ricci, J. 165
Richner, Tamar 78, 95
Rieger, Matthias 75
Riely, James 304
Robben, Bert 151, 315, 367, 428
Roberts, Don 81, 549
Roberts, Marcus 285
Rodrigues, Luís 287
Romanczuk-Réquilé, Annya 64
Romero, Natalia 68
Rossi, Gustavo 344
Roth, Tova 313, 547
Roth, Volker 297
Rowley, Andrew 286
Rumpe, Bernhard 167
Rust, Heinrich 250
Sabak, Jan 41
Saeki, Motoshi 493
Sahraoui, Houari A. 242, 264
Sánchez, Armando García-Mendoza 277
Sánchez, Fernando 443
Scalabrin, Edson 64
Schäfer, Tilman 438
Schmidt, Charles 283
Schnizler, Moritz 9
Schrank, M. 165
Schulz, Benedikt 80
Seffah, Ahmed 355
Seinturier, L. 384
Seiter, Linda M. 125
Serrano-Morale, Carlos 202
Siek, Jeremy G. 466, 468
Silva, Cristina 287
Silva, Mário J. 309
Simon, Frank 250, 255
Singhai, Ashish 25, 388
Slottow, Joan 462
Smaragdakis, Yannis 34
Snoeck, Monique 222
Sourrouille, Jean-Louis 369
Sousa, Pedro 62
Speck, Andreas 26
Spencer, Brian 200
Staamann, Sebastian 301
Steckermeier, Martin 325, 426, 557
Steindl, Christoph 10
Stevens, Perdita 85, 89
Steyaert, Patrick 557
Stokkum, Wim van 211
Stroud, Robert 145, 282, 363, 374
Sturler, E. de 470
Sturm, Arnon 555
Succi, Giancarlo 58
Südholt, Mario 394
Sudmann, Nils P. 302
Sunyé, G. 492
Swarup, Vipin 283, 305
Szyperski, Clemens 130, 558
Tarr, Peri 545
Tarr, Peri L. 406
Tatsubori, Michiaki 372
Tekinerdogan, Bedir 410, 435, 474, 496
Telea, Alexandru 26
Terrier, François 502, 533
Thomas, Sandy 547
Thomson, Norman 472
Thorn, T. 296
Tichelaar, Sander 82
Tilman, Michel 214
Tisato, F. 519
Toinard, Christian 525
Troya, José M. 378
Truyen, Eddy 315, 428
Tuomi, Jyrki 105
Tyugu, Enn 499
Uzun, Umit 42
Valerio, Andrea 58
Vallecillo, Antonio 378
Vanhaute, Bart 151, 367, 428
Vann, A. 464
Vayngrib, Gene 437
Verbaeten, Pierre 151, 315, 367, 428
Verelst, Jan 56
Vernazza, Tullio 58
Vidal-Naquet, Guy 515
Vieth, Werner 86
Viljamaa, Antti 105
Viljamaa, Jukka 105
Villazón, Alex 293
Virtanen, Pentti 35
Vitek, Jan 288
Vlissides, John 541
Voas, Jeffrey 300
Volder, Kris De 414
Walker, Robert J. 433
Wang, An-I 319
Wang, Hei-Chia 208
Weck, Wolfgang 130
Wegman, Mark 313, 547
Weidmann, Matthias 472
Weisbrod, Joachim 72
Welch, Ian 145, 282, 374
Wellings, A. J. 365
Wilhelm, Uwe G. 301
Williams, T. W. 458
Wohlrab, Lutz 376
Wüst, Jürgen 48
Wuyts, Roel 78, 189
Yokoyama, Takanori 507
Zak, Felipe 70
Zander, M. E. 458
Zaslavsky, A. 311
Zhang, Xiaogang 18
Ziane, M. 492
Zobel, Richard N. 147

Author Information

Luigi Benedicenti, [email protected], DIST - Universita di Genova, Via Opera Pia 13, 16128 Genova, Italy
Birol Berkem, [email protected], 36, Av. du Hazay, 95800 Paris Cergy, France
Il-Hyung Cho, [email protected], Clemson University, 504A Daniel Dr., Clemson, SC 29631, U.S.A.
Erik Ernst, [email protected], DEVISE - Center for Experimental Computer Science, Computer Science Department, University of Aarhus, Denmark
Ákos Frohner, [email protected], Eötvös Loránd University, Department of Informatics, Múzeum krt. 4/c., Budapest, 1088, Hungary
Sébastien Gérard, Leti-Dein/SLA/GLSP, Saclay, 91191 Gif sur Yvette, France
Frank Gerhardt, [email protected], Daimler-Benz AG, Dept. IO/TM, Inf. Management and Organisation, Software Eng. Methods/Process, HPC E702, D-70322 Stuttgart, Germany
Jaime Gomez, [email protected], Departamento de Lenguages y Sistemas Informaticos, Universidad de Alicante, C/ San Vicente S/N, San Vicente del Raspeig, 369 Alicante, Spain
Bernd Holzmüller, [email protected], Stuttgart University, Institut für Informatik, Breitwiesenstrasse 20-22, D-70565 Stuttgart, Germany
Rosziati Ibrahim, [email protected], School of Computing Science, Faculty of Information Technology, Queensland University of Technology (QUT), GPO Box 2434, Brisbane QLD 4001, Australia
Anita Jacob, [email protected], Nansen Environmental and Remote Sensing Center, Edvard Griegs Vei 3A, N-5037 Solheimsvik, Norway
Hyoseob Kim, [email protected], University of Durham, Department of Computer Science, South Road, Durham, DH1 3LE, England, U.K.
Markus Knasmüller, [email protected], BMD Steyr, Sierninger Str. 190, A-4400 Steyr, Austria
Tamás Kozsik, [email protected], Eötvös Loránd University (Eötvös Loránd Tudományegyetem), Department of General Computer Science (Általános Számítástudományi Tsz.), Múzeum krt. 6-8., 1088 Budapest, Hungary
Theodoros Lantzos, [email protected], The Grange, Beckett Park, Leeds LS6 3QS, UK
David H. Lorenz, [email protected], The Faculty of Computer Science, Technion - Israel Institute of Technology, Technion City, Haifa 32000, Israel; currently at Northeastern University, [email protected]
David Parsons, [email protected], Southampton Institute, Systems Engineering Faculty, East Park Terrace, Southampton, SO14 0YN, UK
Gabriel Pavillet, [email protected], LIRMM, Laboratory for Informatics, Robotics and Microelectronics, 161, rue Ada, 34392 Montpellier Cedex 5, France
Patrik Persson, [email protected], Dept. of Computer Science, Lund Institute of Technology, Box 118, S-221 00 Lund, Sweden
Claudia Pons, [email protected], Lia, Universidad de La Plata, Calle 50 esq. 115, 1er. Piso, 1900 La Plata, Buenos Aires, Argentina
Winnie Qiu, [email protected], Department of Software Engineering, School of Computer Science and Engineering, University of New South Wales, Sydney 2052, Australia
Günther Rackl, [email protected], LRR-TUM, Institut für Informatik, Technische Universität München, 80290 München, Germany
Awais Rashid, [email protected], Cooperative Systems Engineering Group, Computing Department, Lancaster University, Lancaster LA1 4YR, UK
Jan Sabak, [email protected], Warsaw University of Technology, ul. Sedomierska 10 m 23, 05-300 Minsk Maz., Poland
Moritz Schnizler, [email protected], Aachen University of Technology, Department of Computer Science III, Software Construction Group, Ahornstr. 55, 52074 Aachen, Germany
Ashish Singhai, [email protected], University of Illinois, 1304 W. Springfield Ave., Rm. 3234, Urbana, IL 61801, USA
Yannis Smaragdakis, [email protected], University of Texas at Austin, Computer Sciences Department, TAY 2.124, Austin, TX 78712, USA
Andreas Speck, Wilhelm-Schickard-Institute for Computer Science, Sand 13, D-72076 Tübingen, Germany
Christoph Steindl, [email protected], Johannes Kepler Universität, Institut für Praktische Informatik, Gruppe Systemsoftware, Altenbergerstrasse 69, A-4040 Linz, Austria
Alexandru Telea, [email protected], Technische Universiteit Eindhoven, Dept. of Mathematics and Computing Science, Den Dolech 2, Postbus 513, 5600 MB Eindhoven, The Netherlands
Umit Uzun, [email protected], University of Warwick, Department of Computer Science, CV4 7AL, England
Pentti Virtanen, [email protected], Turku University, Taivalmäki 9, 02200 Espoo, Finland
Xiaogang Zhang, [email protected], MRI, School of MPCE, Macquarie University, Sydney, NSW 2109, Australia
573