The Practice of Enterprise Modeling: 13th IFIP Working Conference, PoEM 2020, Riga, Latvia, November 25–27, 2020, Proceedings [1st ed.] 9783030634780, 9783030634797

This book constitutes the proceedings of the 13th IFIP Working Conference on the Practice of Enterprise Modeling (PoEM 2020), held virtually during November 25–27, 2020, and organized by Riga Technical University, Latvia.


Language: English. Pages: XI, 416. Year: 2020.


Table of contents :
Front Matter ....Pages i-xi
Front Matter ....Pages 1-1
The Uncertain Enterprise: Achieving Adaptation Through Digital Twins and Machine Learning Extended Abstract (Tony Clark)....Pages 3-7
Industrial Digital Environments in Action: The OMiLAB Innovation Corner (Robert Woitsch)....Pages 8-22
Front Matter ....Pages 23-23
Digital Twins of an Organization for Enterprise Modeling (Uwe V. Riss, Heiko Maus, Sabrina Javaid, Christian Jilek)....Pages 25-40
Modeling Products and Services with Enterprise Models (Kurt Sandkuhl, Janis Stirna, Felix Holz)....Pages 41-57
Structuring Participatory Enterprise Modelling Sessions (Michael Fellmann, Kurt Sandkuhl, Anne Gutschmidt, Michael Poppe)....Pages 58-72
Modeling Trust in Enterprise Architecture: A Pattern Language for ArchiMate (Glenda Amaral, Tiago Prince Sales, Giancarlo Guizzardi, João Paulo A. Almeida, Daniele Porello)....Pages 73-89
Towards Enterprise-Grade Tool Support for DEMO (Mark A. T. Mulder, Henderik A. Proper)....Pages 90-105
Front Matter ....Pages 107-107
M2FOL: A Formal Modeling Language for Metamodels (Victoria Döller)....Pages 109-123
ContracT – from Legal Contracts to Formal Specifications: Preliminary Results (Michele Soavi, Nicola Zeni, John Mylopoulos, Luisa Mich)....Pages 124-137
Towards Extending the Validation Possibilities of ADOxx with Alloy (Sybren de Kinderen, Qin Ma, Monika Kaczmarek-Heß)....Pages 138-152
Front Matter ....Pages 153-153
OrgML - A Domain Specific Language for Organisational Decision-Making (Souvik Barat, Balbir Barn, Tony Clark, Vinay Kulkarni)....Pages 155-170
Improvements on Capability Modeling by Implementing Expert Knowledge About Organizational Change (Georgios Koutsopoulos, Martin Henkel, Janis Stirna)....Pages 171-185
On Domain Modelling and Requisite Variety (Henderik A. Proper, Giancarlo Guizzardi)....Pages 186-196
Virtual Factory: Competence-Based Adaptive Modelling and Simulation Approach for Manufacturing Enterprise (Emre Yildiz, Charles Møller, Arne Bilberg)....Pages 197-207
Front Matter ....Pages 209-209
Relational Contexts and Conceptual Model Clustering (Giancarlo Guizzardi, Tiago Prince Sales, João Paulo A. Almeida, Geert Poels)....Pages 211-227
A Reference Ontology of Money and Virtual Currencies (Glenda Amaral, Tiago Prince Sales, Giancarlo Guizzardi, Daniele Porello)....Pages 228-243
Ontology-Based Visualization for Business Model Design (Marco Peter, Devid Montecchiari, Knut Hinkelmann, Stella Gatziu Grivas)....Pages 244-258
Front Matter ....Pages 259-259
Decentralized Control: A Novel Form of Interorganizational Workflow Interoperability (Christian Sturm, Jonas Szalanczi, Stefan Jablonski, Stefan Schönig)....Pages 261-276
Generation of Concern-Based Business Process Views (Sara Esperto, Pedro Sousa, Sérgio Guerreiro)....Pages 277-292
Designing an Ecosystem Value Model Based on a Process Model – An Empirical Approach (Isaac da Silva Torres, Jaap Gordijn, Marcelo Fantinato, Joao Francisco da Fountoura Vieira)....Pages 293-303
Front Matter ....Pages 305-305
Integrating Risk Representation at Strategic Level for IT Service Governance: A Comprehensive Framework (Aghakhani Ghazaleh, Yves Wautelet, Manuel Kolp, Samedi Heng)....Pages 307-322
Conceptual Characterization of Cybersecurity Ontologies (Beatriz F. Martins, Lenin Serrano, José F. Reyes, José Ignacio Panach, Oscar Pastor, Benny Rochwerger)....Pages 323-338
A Physics-Based Enterprise Modeling Approach for Risks and Opportunities Management (Nafe Moradkhani, Louis Faugère, Julien Jeany, Matthieu Lauras, Benoit Montreuil, Frederick Benaben)....Pages 339-348
Front Matter ....Pages 349-349
A Data-Driven Framework for Automated Requirements Elicitation from Heterogeneous Digital Sources (Aron Henriksson, Jelena Zdravkovic)....Pages 351-365
Applying Acceptance Requirements to Requirements Modeling Tools via Gamification: A Case Study on Privacy and Security (Luca Piras, Federico Calabrese, Paolo Giorgini)....Pages 366-376
Extended Enterprise Collaboration for System-of-Systems Requirements Engineering: Challenges in the Era of COVID-19 (Afef Awadid, Anouk Dubois)....Pages 377-386
Front Matter ....Pages 387-387
Supporting Process Mining with Recovered Residual Data (Ludwig Englbrecht, Stefan Schönig, Günther Pernul)....Pages 389-404
Space-Time Cube Operations in Process Mining (Dina Bayomie, Lukas Pfahlsberger, Kate Revoredo, Jan Mendling)....Pages 405-414
Back Matter ....Pages 415-416

LNBIP 400

Jānis Grabis, Dominik Bork (Eds.)

The Practice of Enterprise Modeling 13th IFIP Working Conference, PoEM 2020 Riga, Latvia, November 25–27, 2020 Proceedings


Lecture Notes in Business Information Processing, Volume 400

Series Editors:
Wil van der Aalst, RWTH Aachen University, Aachen, Germany
John Mylopoulos, University of Trento, Trento, Italy
Michael Rosemann, Queensland University of Technology, Brisbane, QLD, Australia
Michael J. Shaw, University of Illinois, Urbana-Champaign, IL, USA
Clemens Szyperski, Microsoft Research, Redmond, WA, USA

More information about this series at http://www.springer.com/series/7911


Editors:
Jānis Grabis, Riga Technical University, Riga, Latvia
Dominik Bork, TU Wien, Vienna, Austria

ISSN 1865-1348, ISSN 1865-1356 (electronic)
Lecture Notes in Business Information Processing
ISBN 978-3-030-63478-0, ISBN 978-3-030-63479-7 (eBook)
https://doi.org/10.1007/978-3-030-63479-7

© IFIP International Federation for Information Processing 2020

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland.

Preface

The 13th IFIP WG 8.1 Working Conference on the Practice of Enterprise Modeling (PoEM 2020) aimed at improving the understanding of the practice of enterprise modeling by offering a forum for sharing experiences and knowledge between the academic community and practitioners from industry and the public sector. The special focus in 2020 was on the role of enterprise modeling in the digital age.

PoEM 2020 took place during November 25–27, 2020. It was organized by Riga Technical University, Latvia, and was meant to be held at the newly reinvigorated Ķīpsala campus within walking distance of the old city. In response to the ongoing COVID-19 crisis, the PoEM 2020 Organizing Committee, together with the PoEM Steering Committee, decided to organize the conference completely virtually for the first time. A nice side effect of this decision was the possibility to make PoEM 2020 free for all attendees.

Following its tradition, PoEM 2020 was open for submissions in three categories that also form part of these proceedings: research papers describing original research contributions in enterprise modeling; practitioner/experience papers presenting problems, challenges, or experience related to any aspect of enterprise modeling encountered in practice; and short papers presenting work in progress and emerging enterprise modeling challenges.

In total, we received 58 submissions for the main conference, including research papers, experience papers, and short papers. Based on three reviews by members of the Program Committee, we selected 19 full papers (32.76% acceptance rate) and 7 short papers (44.83% acceptance rate for full and short papers combined). Accepted papers are grouped by the following topics: Business Process Modeling, Foundations and Applications of Enterprise Modeling, Enterprise Modeling and Enterprise Architecture, Enterprise Ontologies, Formal Aspects of Enterprise Modeling, Requirements Modeling, Risk and Security Modeling, and Process Mining.
Besides the main conference papers that form part of these proceedings, PoEM 2020 comprised two workshops: the First Workshop on Blockchain and Enterprise Systems (BES 2020), organized by Andrea Morichetta and Petra Maria Asprion; and the 5th Workshop on Managed Complexity (ManComp), organized by Mārīte Kirikova, Peter Forbrig, and Charles Møller. Moreover, a PoEM forum was organized by Estefanía Serral Asensio and Janis Stirna.

We express our gratitude to the conference Steering Committee, who initially agreed to have this edition hosted in Riga and later on supported the transformation toward a virtual conference: Prof. Anne Persson, Prof. Janis Stirna, and Prof. Kurt Sandkuhl. An international, widely recognized forum of experts contributed to PoEM 2020, including the notable keynote speakers: Prof. Tony Clark from Aston University, UK, and Dr. Robert Woitsch from BOC Asset Management GmbH, Austria. We thank them, as well as all the authors who submitted their work, and the Program Committee members, who ensured a high-quality selection of papers while providing insightful advice for improving the contributions. Moreover, we thank the organizers of the workshops and the forum for making PoEM 2020 such a diverse and active event. We thank IFIP WG 8.1 for allowing this conference series to evolve under its auspices. We also thank the Springer team led by Alfred Hofmann and Ralf Gerstner for the technical support regarding the publication of this volume. Last but not least, we would like to thank the organization team led by Zane Solovjakova, Krišjānis Pinka, Kristaps P. Rubulis, and Evita Roponena for their hard work in ensuring the success of this event.

November 2020

Jānis Grabis Dominik Bork

Organization

General Chair

Renāte Strazdiņa, Microsoft Baltics, Latvia

Program Committee Chairs

Jānis Grabis, Riga Technical University, Latvia
Dominik Bork, TU Wien, Austria

Steering Committee

Anne Persson, University of Skövde, Sweden
Janis Stirna, Stockholm University, Sweden
Kurt Sandkuhl, University of Rostock, Germany

Program Committee

Raian Ali, Hamad Bin Khalifa University, Qatar
Joao Paulo Almeida, Federal University of Espirito Santo, Brazil
Steven Alter, University of San Francisco, USA
David Aveiro, University of Madeira, Portugal
Judith Barrios Albornoz, University of Los Andes, Colombia
Dominik Bork, TU Wien, Austria
Robert Andrei Buchmann, Babeş-Bolyai University of Cluj Napoca, Romania
Rimantas Butleris, Kaunas University of Technology, Lithuania
Tony Clark, Aston University, UK
Kārlis Čerāns, University of Latvia, Latvia
Sybren De Kinderen, University of Duisburg-Essen, Germany
Paul Drews, Leuphana University of Lüneburg, Germany
Michael Fellmann, University of Rostock, Germany
Hans-Georg Fill, University of Fribourg, Switzerland
Ulrich Frank, Universität Duisburg-Essen, Germany
Frederik Gailly, Ghent University, Belgium
Ana-Maria Ghiran, Babes-Bolyai University of Cluj-Napoca, Romania
Jānis Grabis, Riga Technical University, Latvia
Giancarlo Guizzardi, Free University of Bozen-Bolzano, Italy
Jens Gulden, Utrecht University, The Netherlands
Knut Hinkelmann, FHNW University of Applied Sciences and Arts Northwestern Switzerland, Switzerland
Stijn Hoppenbrouwers, HAN University of Applied Sciences, The Netherlands
Ivan Jureta, University of Namur, Belgium


Monika Kaczmarek, University of Duisburg-Essen, Germany
Mārīte Kirikova, Riga Technical University, Latvia
Agnes Koschmider, Kiel University, Germany
Robert Lagerström, KTH Royal Institute of Technology, Sweden
Elyes Lamine, Université de Toulouse, ISIS, Mines d'Albi, France
Birger Lantow, University of Rostock, Germany
Ulrike Lechner, Universität der Bundeswehr München, Germany
Raimundas Matulevicius, University of Tartu, Estonia
Graham Mcleod, Inspired.org, South Africa
Oscar Pastor Lopez, Universitat Politécnica de València, Spain
Anne Persson, University of Skövde, Sweden
Herve Pingaud, Institut National Universitaire Champollion, France
Rūta Pirta-Dreimane, Riga Technical University, Latvia
Geert Poels, Ghent University, Belgium
Andrea Polini, University of Camerino, Italy
Henderik A. Proper, Luxembourg Institute of Science and Technology, Luxembourg
Jolita Ralyté, University of Geneva, Switzerland
Ben Roelens, Open University of the Netherlands, The Netherlands
David Romero, Tecnológico de Monterrey, Mexico
Kurt Sandkuhl, University of Rostock, Germany
Khurram Shahzad, University of the Punjab, Pakistan
Nikolay Shilov, SPIIRAS, Russia
Monique Snoeck, Katholieke Universiteit Leuven, Belgium
Janis Stirna, Stockholm University, Sweden
Darijus Strasunskas, HEMIT, Norway
Stefan Strecker, University of Hagen, Germany
Yves Wautelet, Katholieke Universiteit Leuven, Belgium
Robert Woitsch, BOC Asset Management, Austria
Jelena Zdravkovic, Stockholm University, Sweden

Additional Reviewers

Frédérick Bénaben, Markus Fischer, Henrihs Gorskis, Simon Hacks, Bohdan Haidabrus, Fredrik Heiding, Felix Härer, Florian Johannsen, Lauma Jokste, Kestutis Kapočius, Fredrik Milani, Inese Polaka, Kristina Rosenthal, Dirk van der Linden, Wojciech Widel

Contents

Invited Papers

The Uncertain Enterprise: Achieving Adaptation Through Digital Twins and Machine Learning (Extended Abstract)
Tony Clark .... 3

Industrial Digital Environments in Action: The OMiLAB Innovation Corner
Robert Woitsch .... 8

Enterprise Modeling and Enterprise Architecture

Digital Twins of an Organization for Enterprise Modeling
Uwe V. Riss, Heiko Maus, Sabrina Javaid, and Christian Jilek .... 25

Modeling Products and Services with Enterprise Models
Kurt Sandkuhl, Janis Stirna, and Felix Holz .... 41

Structuring Participatory Enterprise Modelling Sessions
Michael Fellmann, Kurt Sandkuhl, Anne Gutschmidt, and Michael Poppe .... 58

Modeling Trust in Enterprise Architecture: A Pattern Language for ArchiMate
Glenda Amaral, Tiago Prince Sales, Giancarlo Guizzardi, João Paulo A. Almeida, and Daniele Porello .... 73

Towards Enterprise-Grade Tool Support for DEMO
Mark A. T. Mulder and Henderik A. Proper .... 90

Formal Aspects of Enterprise Modelling

M2FOL: A Formal Modeling Language for Metamodels
Victoria Döller .... 109

ContracT – from Legal Contracts to Formal Specifications: Preliminary Results
Michele Soavi, Nicola Zeni, John Mylopoulos, and Luisa Mich .... 124

Towards Extending the Validation Possibilities of ADOxx with Alloy
Sybren de Kinderen, Qin Ma, and Monika Kaczmarek-Heß .... 138

Foundations and Applications of Enterprise Modeling

OrgML - A Domain Specific Language for Organisational Decision-Making
Souvik Barat, Balbir Barn, Tony Clark, and Vinay Kulkarni .... 155

Improvements on Capability Modeling by Implementing Expert Knowledge About Organizational Change
Georgios Koutsopoulos, Martin Henkel, and Janis Stirna .... 171

On Domain Modelling and Requisite Variety: Current State of an Ongoing Journey
Henderik A. Proper and Giancarlo Guizzardi .... 186

Virtual Factory: Competence-Based Adaptive Modelling and Simulation Approach for Manufacturing Enterprise
Emre Yildiz, Charles Møller, and Arne Bilberg .... 197

Enterprise Ontologies

Relational Contexts and Conceptual Model Clustering
Giancarlo Guizzardi, Tiago Prince Sales, João Paulo A. Almeida, and Geert Poels .... 211

A Reference Ontology of Money and Virtual Currencies
Glenda Amaral, Tiago Prince Sales, Giancarlo Guizzardi, and Daniele Porello .... 228

Ontology-Based Visualization for Business Model Design
Marco Peter, Devid Montecchiari, Knut Hinkelmann, and Stella Gatziu Grivas .... 244

Business Process Modeling

Decentralized Control: A Novel Form of Interorganizational Workflow Interoperability
Christian Sturm, Jonas Szalanczi, Stefan Jablonski, and Stefan Schönig .... 261

Generation of Concern-Based Business Process Views
Sara Esperto, Pedro Sousa, and Sérgio Guerreiro .... 277

Designing an Ecosystem Value Model Based on a Process Model – An Empirical Approach
Isaac da Silva Torres, Jaap Gordijn, Marcelo Fantinato, and Joao Francisco da Fountoura Vieira .... 293

Risk and Security Modeling

Integrating Risk Representation at Strategic Level for IT Service Governance: A Comprehensive Framework
Aghakhani Ghazaleh, Yves Wautelet, Manuel Kolp, and Samedi Heng .... 307

Conceptual Characterization of Cybersecurity Ontologies
Beatriz F. Martins, Lenin Serrano, José F. Reyes, José Ignacio Panach, Oscar Pastor, and Benny Rochwerger .... 323

A Physics-Based Enterprise Modeling Approach for Risks and Opportunities Management
Nafe Moradkhani, Louis Faugère, Julien Jeany, Matthieu Lauras, Benoit Montreuil, and Frederick Benaben .... 339

Requirements Modeling

A Data-Driven Framework for Automated Requirements Elicitation from Heterogeneous Digital Sources
Aron Henriksson and Jelena Zdravkovic .... 351

Applying Acceptance Requirements to Requirements Modeling Tools via Gamification: A Case Study on Privacy and Security
Luca Piras, Federico Calabrese, and Paolo Giorgini .... 366

Extended Enterprise Collaboration for System-of-Systems Requirements Engineering: Challenges in the Era of COVID-19
Afef Awadid and Anouk Dubois .... 377

Process Mining

Supporting Process Mining with Recovered Residual Data
Ludwig Englbrecht, Stefan Schönig, and Günther Pernul .... 389

Space-Time Cube Operations in Process Mining
Dina Bayomie, Lukas Pfahlsberger, Kate Revoredo, and Jan Mendling .... 405

Author Index .... 415

Invited Papers

The Uncertain Enterprise: Achieving Adaptation Through Digital Twins and Machine Learning (Extended Abstract)

Tony Clark
Aston University, Birmingham, UK
[email protected]

1 Introduction

Systems such as production plants, logistics networks, IT service companies, and international financial companies are complex systems operating in highly dynamic environments that need to respond quickly to a variety of change drivers. The characteristic features of such systems include scale, complex interactions, knowledge of behaviour limited to localised contexts, and inherent uncertainty. Knowing how to analyse, design, implement, control, and adapt such systems is a difficult problem that lacks suitable mainstream engineering methodologies and technologies. Grand challenges such as Smart Cities, large-scale integration of information systems (such as national medical records), and Industry 4.0 can only be achieved through the deployment and integration of information systems with existing systems.

The increasing connectedness of businesses and their reliance on software is leading to large-scale, networked, semi-autonomous, interdependent systems of systems. As a result, it is increasingly difficult to consider software systems in isolation; instead, they form a dynamically connected ecosystem characterised by a variety of interactions between them. Any new system is thus deployed into a connected world and must be resilient in order to continue to deliver the stated goals by learning to adapt suitably to situations that may not be known a priori. Moreover, even system goals may change over time.

Traditional Software Engineering techniques tend to view the required system as having a fixed behaviour and being deployed into a well-understood operating environment. A typical development process expresses what the system must achieve in the form of a specification, how the system achieves the specified behaviour in the form of a design, and how the design is realised in terms of a system implementation making appropriate use of underlying technology platforms.
This works well when the characteristics of the system allow the design to be complete and when the environment into which the system is deployed is completely understood.

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved.
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 3–7, 2020. https://doi.org/10.1007/978-3-030-63479-7_1


Enterprise systems must dynamically adapt. In the first instance, such a system must adapt in order to achieve its goals within an environment that can only be partially understood because of its complexity. Once the deployed system reaches a steady state, changes may occur in its environment or to its goals that require further dynamic adaptation. Approaches to adaptation include the following:

Product Line Engineering: This approach aims to identify and include variability points in the design of a system [2]. The key SPLE techniques are not appropriate for addressing the Uncertain Enterprise, since the variations must be known in advance.

Control Theory: Various forms of control theory have been developed in order to adapt physical systems. Generally, these techniques measure the difference between observed behaviour and desired behaviour and translate it into a collection of control parameters that nudge the system in the right direction. A typical example of this approach is Model Reference Adaptive Control (MRAC) [1]. Traditionally these approaches use a collection of numerical equations to control real-time aspects of machinery; however, the architecture of the approach is appropriate for information systems, provided there is a suitable technology that can calculate the controls based on information system behaviour.

Rule Based: Rules of the form "if condition then adapt-action" can be used to encode knowledge about the adaptation actions required when certain system conditions arise [3, 4]. Like SPLE, this approach requires the conditions and actions for adaptation to be known beforehand, which is challenging in many cases of complex systems.

Architecture Based: An architecture-based approach organises a system as a collection of components that are co-ordinated via a manager. The manager can then change the way the components are co-ordinated depending on adaptation conditions. An example of this is the Monitoring, Analysis, Planning and Execution (MAPE) loop [5]. A key feature of this approach is to organise a complex system as a collection of decentralised components that can be co-ordinated in order to adapt.

Model Driven Development: Model Driven Software Engineering aims to generate systems from an abstract representation. Models can be used at run time to allow a system to reason about itself and to adapt to changes in its goals or environment [7].

Most of these approaches rely on understanding the range of variability that is required and using an appropriate technology to encode it. Software has two significant dimensions of ambiguity that require adaptation:

1. The behaviour of the system that must adapt may only be partially understood. Generally, the required behaviour of a system is known, and the components of the system are known together with their localised data and behaviour; however, the algorithm required to co-ordinate the components in order to achieve the desired outcomes is vague, leading to the observation of emergent behaviour. The components must adapt in order to achieve the overall system behaviour.

2. When the environment changes or the goals of a system change, it is often ambiguous how to modify the system components in order to continue operating. The system must be able to adapt in such a situation.

Dealing with both kinds of ambiguity leads to a requirement for an approach that supports adaptation in the original system design, so that the system can be deployed and immediately adapt to its operating environment, and then subsequently adapt to changes that occur. In all cases the behaviour of the system is driven by both its goals and the need to interact with its environment.

Given this requirement on information system adaptation, the key feature from existing approaches seems to be organising information systems in terms of decentralised control (Architecture Based), together with an ability to compare the current execution history against the desired outcome in order to generate control parameter values (Control Theory). The use of decentralised components (actors or agents), organised in an MRAC-style architecture that uses some form of machine learning to dynamically calculate the control parameter values, produces the idea of a Digital Twin that runs alongside a complex system and dynamically adapts it to changes in its goals and its environment. Various forms of machine learning can be used depending on the circumstances, but a fully dynamic digital twin might benefit from the use of Reinforcement Learning [6], which is model-free and does not rely on previous execution histories of the system.

The importance and timeliness of applying Digital Twins to software and systems development is highlighted by the number of recent industry thought-leadership editorials that describe the huge breadth and potential of this approach, including those of Deloitte, Simio, Forbes, and Gartner:

1. https://deloitte.com/us/en/insights/focus/tech-trends/2020/digital-twin-applications-bridging-the-physical-and-digital.html
2. https://simio.com/blog/2019/11/14/top-trends-in-simulation-and-digital-twins-technology-for-2020/
3. https://forbes.com/sites/bernardmarr/2019/04/23/7-amazing-examples-of-digital-twin-technology-in-practice/#4cd0672d6443
4. https://www.gartner.com/en/documents/3957042/market-trends-software-providers-ramp-up-to-serve-the-em

Digital Twins can be applied in many different scenarios. Twins of physical systems can be used to provide a cost-effective way of exploring the design space for new products or optimisations. Twins of information systems can be used to achieve adaptation in complex ecosystems. Twins of populations (such as those modelled in the current COVID-19 pandemic) can be used to perform scenario playing where behaviour is inherently emergent. It is desirable to envisage a situation where digital twins are used in a variety of modes for complex system analysis and development:

Analysis: A key requirement is to ascertain that a complex system is achieving its goals. A digital twin can provide a cost-effective solution through execution in a simulation environment, producing traces that can be examined for occurrences of the desired (and undesired) behavioural patterns. A digital twin can also support what-if and if-what scenario playing to explore the system state space.

Adaptation: An existing system may expose a control interface that can be used for dynamic adaptation. A digital twin can be used to address the problem of constructing the desired control inputs by running alongside the real system and producing control commands based on a comparison of the observed and desired behaviour. This leads to the idea of a digital twin being used for continuous improvement of complex system behaviour through a variety of classical control theory and AI-based techniques.

Maintenance: This is the single most expensive activity in a system lifecycle and can be responsible for over 60% of the overall costs, largely due to the present inability to explore the solution space effectively and efficiently. A digital twin can overcome this hurdle through what-if and if-what scenario playing to help arrive at a feasible transformation path from the "as is" state to the desired "to be" state in silico. Once the transformation path is vindicated, the necessary changes can be introduced into the real system in the right order, thus providing assurances of correctness.

Design: A new complex system can start life as a digital twin that is used as a blueprint. The twin provides a specification of the behaviour for the real system and can be integrated with existing systems in the target ecosystem by observing their outputs. The design can then use adaptation to tailor its behaviour with respect to real ecosystem data.
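The MRAC idea referred to above, comparing observed behaviour against a reference model and nudging control parameters in proportion to the error, can be illustrated with a minimal, self-contained sketch. The plant model, throughput figures, and gain below are hypothetical stand-ins chosen for illustration; they are not part of the approaches cited in the text:

```python
import random

def reference_model(t):
    """Desired behaviour: a target throughput the system should achieve."""
    return 100.0

def plant(control, t):
    """Hypothetical stand-in for the real system: observed throughput
    responds linearly to a single control lever, plus environmental noise."""
    return 60.0 + 35.0 * control + random.uniform(-2.0, 2.0)

def mrac_loop(steps=200, gain=0.01):
    """Repeatedly nudge the control parameter towards the reference behaviour."""
    control = 0.0
    for t in range(steps):
        error = reference_model(t) - plant(control, t)
        control += gain * error  # adaptation law: a simple proportional nudge
    return control

random.seed(0)
control = mrac_loop()  # settles near (100 - 60) / 35, i.e. about 1.14
```

In the digital-twin vision sketched here, the hand-written plant function would be replaced by observations of the real system's execution history, and the fixed adaptation law by a learned policy.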
This leads to a vision for future enterprise information systems based on Digital Twins that are used to address the various forms of uncertainty encountered in modern enterprise systems, including the behaviour of the system, the environment into which it is deployed, and the goals against which it operates. Such digital twins are based on decentralised agents whose individual behaviours are controlled via levers and which expose execution histories to a reinforcement learning algorithm producing controls that satisfy dynamically changing goals and environments. In order to achieve this vision we require technologies that support the design, verification, and run-time environments for such systems.
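The model-free character of reinforcement learning noted above can be illustrated with a one-step (bandit-style) simplification: an agent estimates the value of a few discrete lever settings purely from observed rewards, with no model of the environment and no prior execution histories. The lever settings and reward signal here are hypothetical illustrations, not drawn from the abstract:

```python
import random

LEVERS = [0.0, 0.5, 1.0]  # hypothetical discrete settings of one control lever
FAVOURED = 1.0            # the setting the (unknown) environment currently rewards

def observe_reward(lever):
    """Stand-in for observed goal satisfaction; the agent never sees this model."""
    return -abs(lever - FAVOURED) + random.uniform(-0.1, 0.1)

def epsilon_greedy(episodes=500, epsilon=0.1, alpha=0.1):
    """Model-free value estimation: explore occasionally, otherwise exploit the
    lever currently believed best, updating value estimates incrementally."""
    value = {lever: 0.0 for lever in LEVERS}
    for _ in range(episodes):
        if random.random() < epsilon:
            lever = random.choice(LEVERS)      # explore a random lever
        else:
            lever = max(value, key=value.get)  # exploit the current best estimate
        value[lever] += alpha * (observe_reward(lever) - value[lever])
    return max(value, key=value.get)

random.seed(1)
chosen = epsilon_greedy()
```

If the environment's favoured setting changes at run time, continued exploration lets the value estimates drift towards the new optimum, which is the kind of tracking of dynamically changing goals and environments that the vision above calls for.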

References

1. Barkana, I.: Simple adaptive control - a stable direct model reference adaptive control methodology - brief survey. Int. J. Adapt. Control Signal Process. 28(7–8), 567–603 (2014)
2. Chacón-Luna, A.E., Gutiérrez, A.M., Galindo, J., Benavides, D.: Empirical software product line engineering: a systematic literature review. Inf. Softw. Technol. 128, 106389 (2020)
3. Jokste, L.: Comparative evaluation of the rule based approach to representation of adaptation logics. In: Proceedings of the 12th International Scientific and Practical Conference, vol. II, pp. 65–69 (2019)
4. Jokste, L., Grabis, J.: Rule based adaptation: literature review. In: Proceedings of the 11th International Scientific and Practical Conference, vol. II, pp. 42–46 (2017)
5. Mendonça, N.C., Garlan, D., Schmerl, B., Cámara, J.: Generality vs. reusability in architecture-based self-adaptation: the case for self-adaptive microservices. In: Proceedings of the 12th European Conference on Software Architecture: Companion Proceedings, pp. 1–6 (2018)
6. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (2018)
7. Vogel, T., Giese, H.: Model-driven engineering of adaptation engines for self-adaptive software: executable runtime megamodels (66). Universitätsverlag Potsdam (2013)

Industrial Digital Environments in Action: The OMiLAB Innovation Corner

Robert Woitsch
BOC, Vienna, Austria
[email protected]

Abstract. Digital transformation is a global mega trend, triggered by the evolution of digital technology, that gives every organisation the potential to either optimize its current business via digital innovation or to transform its business via digital disruption. The challenge for every organisation is therefore to select and personalise the appropriate digital innovation. Among the plethora of methods and assessment frameworks, we introduce the OMiLAB Innovation Corner, which assists in (1) creating new business, (2) designing the organisational model and (3) engineering proof-of-concept prototypes as a “communication medium”. The unique value proposition of the OMiLAB Innovation Corner is its model-based foundation that supports decision makers in key phases of the innovation. First, the creation of new business models is assisted by providing digital design thinking tools. Second, the design of the digital organisation is supported by providing extended modelling capabilities. Third, proof-of-concept engineering is enabled by providing robots and sensors. We share our practical experiences by introducing (a) how new business models are created in the H2020 project Change2Twin to help manufacturing SMEs in their digital transformation, (b) how conceptual models are designed in the H2020 project BIMERR to create digital twins of renovation processes and (c) how proof-of-concept engineering is performed in the FFG project complAI to analyse different robotic behaviour.

Keywords: OMiLAB · Digital transformation · Digital innovation in industry

1 Introduction

Digital transformation has the potential to create additional value of about 100 trillion $ (in European usage, “Billion”) in the next decade. Industry aims to capitalize on this potential by creating new business and improving existing business through digital technology. The digitalization strategy of LEGO™ nicely demonstrates how new business can be created. In addition to the traditional production of toy bricks, the 3D web-design environment that was used to create new toy brick designs was provided openly. The resulting open environment evolved into a digital ecosystem in which video games, applications and movies were created by different stakeholders. Films with LEGO™ figures as main characters, made possible by a co-creation of the

© IFIP International Federation for Information Processing 2020
Published by Springer Nature Switzerland AG 2020. All Rights Reserved
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 8–22, 2020. https://doi.org/10.1007/978-3-030-63479-7_2


LEGO™ development tools together with the animation expertise of the film industry, created a revenue of 468 million $. The digitalization strategy of Mack International demonstrates how existing business can be improved. The traditional roller coaster producer and operator was confronted with the fact that similar customer experiences can be achieved with Virtual Reality glasses. Therefore, the virtual roller coaster was realized by combining Virtual Reality with the physical roller coaster. This required new skills, as the head movement of each customer must be perfectly synchronized with the Virtual Reality images the customer sees. Skills in real-time 3D image rendering using high-performance computers were needed not only to generate a unique customer experience, but also to become the world's first hybrid roller-coaster producer.

These two examples illustrate the need for (a) a digital mindset and digital skills that go beyond the simple usage of a particular digital technology, (b) a digital environment that evolves towards an ecosystem, (c) data access management with a strong data infrastructure and appropriate analytics and communication capabilities, as well as (d) technological readiness in cloud computing, cybersecurity and interoperability. The challenge is to keep the momentum of the digital transformation within the organisation and within the whole ecosystem. The start is the process of inventing [1] – “to produce … for the first time through the use of the imagination or of ingenious thinking and experiment” – in the form of creative actions. The transformation then continues with innovation [2], “a new idea, method, or device”, understood as the process of emerging something. The transformation ends with providing digital offerings.
This transformation process is unique for each digital strategy of an organisation and is influenced by [3]: (a) the challenge in question, (b) the organisational culture, skills and capacity, (c) the market situation and innovation pressure, and (d) the strategy, principles and approaches of the organisation, the stakeholders and partners. The challenge is therefore to find appropriate methods, instruments and tools that support organisations during the transformation phases.

The OMiLAB Innovation Corner provides concepts, methods and tools that support organizations in their transformation process by providing a conceptual, technical and physical infrastructure. First, the creative innovation space fosters the co-creative generation of new business models by involving the experience of experts from different fields. Second, organizational models are designed as conceptual models to describe how the digital technology is introduced into the organization. Analysis, simulation and mining tools support decision makers in assessing the impact of different utilizations of digital technologies. Third, the engineering space supports the construction of proof-of-concept prototypes as a “communication medium”. The interaction with physical objects – like robots or sensors – introduces new challenges: (a) the digitization of physical objects into virtualised objects using IoT sensors as well as (b) the interaction with the physical object from the virtualised object using IoT adaptors. The evaluation space enables the demonstration of the proof-of-concept prototypes and integrates the expertise of stakeholders from different fields, e.g. the legal department, business management, information technology or engineering.

We use the OMiLAB Innovation Corner to focus on different parts of the digital transformation depending on the needs of the projects. In the H2020 EU project Change2Twin


for example, we used the OMiLAB Innovation Corner to support the creation of novel business models for manufacturing SMEs that aim to use digital twin technology. In the H2020 EU project BIMERR, we used the OMiLAB Innovation Corner for designing organizational models that can be used as a digital twin of the renovation process of buildings. In the Austrian FFG project complAI, we used the OMiLAB Innovation Corner to study different interaction alternatives between information technology and robots. Finally, we provide an outlook on digital trends in industry and reflect on how science could contribute.

2 The OMiLAB Innovation Corner

The OMiLAB Innovation Corner provides a setting of conceptual, technical and physical infrastructure in which different actions can be performed.

2.1 Settings of the OMiLAB Innovation Corner

The Conceptual Framework
The conceptual framework distinguishes between three abstraction layers: (a) the business layer, concerned with creating new business models, (b) the proof-of-concept layer, concerned with engineering prototypes, and (c) the conceptual modelling layer, concerned with creating organisational models in order to link the business layer with the proof-of-concept layer (Fig. 1).

Fig. 1. The three abstraction layers of the OMiLAB Innovation Corner: the conceptual description on the left and the realisation of the industrial OMiLAB Innovation Corner at BOC in Vienna on the right.

The OMiLAB Innovation Corner is based on the following principles:

1. Business Layer: Focus on Business Model Creation
A business model describes the “rationale of how an organisation creates, delivers, and captures value” [4]. The aim is therefore to either improve existing or to generate


new business models in order to generate added value. This layer therefore provides a high-level overview of the domain, the application scenario as well as the overall ecosystem in which the organisation operates. It follows the “outcome-based approach” principle, where digital innovation is always justified by the outcome.

2. Conceptual Model Layer: Focus on the Organisational Model
Conceptual models are successfully applied in enterprise modelling [5] and information systems [6] and are hence capable of describing how the digital solution is applied within an organisation. The digital innovation is therefore described in a technology-independent way using a knowledge-based approach. The knowledge can be interpreted by computer algorithms or by human experts, depending on its model representation. Hence, we follow the principle to “invest in use cases, not technology”, as the organisational models can be realised with different technologies.

3. Proof-of-Concept Layer: Focus on Robot Interaction
Rapid prototyping is “… the idea of quickly assembling a physical part, piece or model of a product” [7]. We apply rapid prototyping both for the development of a software application and for the development of a physical device. The engineering of a rapid prototype is performed by configuring and integrating pre-packaged features that are provided as services. Instead of fully implementing the prototypes, we apply the “fail fast, fail cheap” principle by rapidly composing features in the form of services that emulate the main behaviour of the intended solution.

The Technical Environment
The technical environment supports the idea that “prototypes are communication media”. Haptic elements enable a hands-on experience for the involved experts with different backgrounds. These haptic elements are additionally represented in the form of conceptual models in order to enable documentation, analysis, simulation and interaction on different channels, so that experts who are not physically present during the workshop can also be involved.

For the models we use out-of-the-box modelling tools. OMiLAB provides the tool Bee-Up [8], which realises the most common modelling languages: BPMN, EPC, ER, UML and Petri nets. Business process models are commonly represented in BPMN [9]. Data models have their origin in the Enhanced Entity Relationship (EER) model [10]. Technical specifications and software interaction are described with the Unified Modelling Language (UML) [11]. Formal state diagrams with a well-developed mathematical theory are commonly described with Petri nets (PN) [12]. There are 55 additional modelling tools based on the meta-modelling platform ADOxx; hence, the modelling tools can be combined for the particular needs of the organisation.

ADOxx.org is an open community of about 4,300 developers who implement their own modelling tools on top of the meta-modelling platform ADOxx. It offers tutorials, documentation, FAQs, and development tools for realising one's own modelling tool (like the 55 modelling tools from OMiLAB), 14 building blocks and code snippets that provide additional features such as a monitoring dashboard or simulation services, and 17 development spaces of projects.

OLIVE is a microservice framework that provides several modelling features that can be used when engineering a proof-of-concept prototype. It provides a connection to


ADOxx; hence any modelling tool that is built on ADOxx can use the OLIVE microservices or one of the connectors that access third-party applications. This technical integration of an OLIVE microservice and an ADOxx-based modelling tool also requires a conceptual integration, which depends on the modelling language and the service in use. The Model Wiki, for example, is generic and can be used by any modelling language, whereas the monitoring dashboard requires the semantics of goals, KPIs and measures.

The Physical Infrastructure
The physical environment is provided in the form of spaces, chairs, and large and small tables that can be flexibly arranged to divide participants into groups. All aforementioned layers are supported with the corresponding physical infrastructure. Business model creation is supported with SAP Scenes paper figures, cameras and the corresponding Scene2Model toolkit that generates models automatically. Proof-of-concept engineering is supported with tools, laptops and working places to solder, screw and program the robots and IoT sensors. The consolidated evaluation and demonstration of the solution is supported by plexiglass tables that represent each layer and laptops with the corresponding modelling tools.

The OMiLAB Innovation Corner provides physical devices like training robots and IoT sensors that provide the basis for prototype engineering. The focus is thereby on the generation of a “prototype as a communication and assessment medium” to encourage the co-operative evaluation of simplified proof-of-concept realisations. It provides a simplified setup using IoT adapters that enable the emulation of different software behaviour. The robot behaviour can be implemented on the IoT adapter, with the benefit that smarter algorithms are available, but with the drawback that the reactions are slow.
Alternatively, the robot behaviour can be implemented on the platform of the robot, which has the benefit that the robot acts fast and safely, but in order to change the behaviour, the algorithm needs to be re-implemented on the platform level. Different robot behaviours can be demonstrated in different configurations.

2.2 Actions Within the OMiLAB Innovation Corner

The OMiLAB Innovation Corner provides an environment for design thinking and conceptual modelling and therefore supports a wide range of methods and tools that can be applied to work on different organizational challenges. We consider “Design Thinking as a systematic, human-centred approach to solving complex problems … [where] … user needs and requirements as well as user-oriented invention are central ….” [13]. We follow a co-creation approach where human experts with different backgrounds – like business, legal, engineering or software engineering – co-creatively work on ideas for digital solutions.

We stress the observation that “making ideas tangible facilitates communication”, hence “prototypes are communication media” [14], and extend this with our experience that business process modelling can complement and assist the communication of such prototypes. The domain-specific graphical documentation of the context, purpose, aim, requirements or challenges has proven to support the collaboration between stakeholders with different backgrounds. In addition to the documentation of the workshop results or


prototypes, the models can be analysed, simulated and distributed in order to share them and collect feedback from stakeholders. Business process modelling can complement the design thinking workshops by providing additional explanation.

In the business model creation phase, we design a new business model by applying the SAP Scenes approach and complement it with business process modelling. In order to elicit creative ideas and innovations, the workshop moderator may use instruments like User Motivation Analysis, Persona, Value Proposition Chain, Research Mind Map, Stakeholder Map, User Journey, Hypothesis Generation, Collective Notebook or similar instruments to gain new ideas from the workshop participants.

In the conceptual modelling phase, we model the Company Map to indicate where and how the innovation is foreseen in the organisation. Innovations are described as new capabilities that either improve existing or enable new business processes. Different alternatives are modelled in order to simulate them under different assumptions. Simulation results can be merged with test data or historical data in order to assess the different alternatives. Data mining, data analysis or the collection of expert opinions can complement this phase to support decision making.

In the engineering phase, we combine prototyping of software and digital devices with Wizard of Oz experiments, in which the user interaction is actually performed by a human who acts like a full-fledged computer programme to emulate the – not yet implemented – system behaviour. We extend this approach with robots and sensors in the form of simplified demonstrations; for example, a robot arm only picks up cards with images of objects instead of picking up the real objects. The demonstration scenario, in the form of a proof-of-concept prototype, therefore combines human interactions, software prototypes, robots and sensors to emulate the possible behaviour of future systems.
Although the aforementioned phases of the OMiLAB Innovation Corner can be mapped to the three design thinking phases “Ideate”, “Prototype” and “Test” [15], we explicitly allow projects to focus on only one or two phases, to visit the phases in no particular order, or to work through the phases in sequence.

3 The H2020 EU-Project Change2Twin: Focus on Business Model Creation

The main goal of the H2020 EU project Change2Twin is to ensure that 100% of manufacturing companies in Europe have access to 100% of the technologies needed to deploy a digital twin. This is addressed by a series of initiatives in which so-called Digital Innovation Hubs help manufacturers in assessing, selecting and applying digital twin technology that is provided on marketplaces. We use the OMiLAB Innovation Corner to analyse the current business model and work out future solutions using digital twin technology. So-called exploitation items – either market-ready offerings on a marketplace or contributions with a lower technology readiness level like prototypes, approaches or ideas – are proposed to the manufacturer in a creative workshop. The manufacturer's needs are matched using expert knowledge and smart algorithms to offer appropriate available items. Different alternatives are worked out, and with the help of analysis and simulation algorithms the most appropriate alternative is elaborated.


3.1 Co-creation Approach

The different interests of technology providers, end users and research partners are externalized using the design thinking method SAP Scenes [16]. Stakeholders with different backgrounds and expertise contribute in a co-creative manner to generate a scene of the new business model. We represent both the consumer viewpoint and the vendor viewpoint in the form of separate design thinking workshops. The result is an agreed set of so-called scenes that describe the main actors, actions, elements or locations. The method allows the paper figures to be individually extended. The physical workshop is extended by capturing the scene with a camera and automatically generating a graphical model out of it. This model can be used for documentation, for distribution to experts who are not physically present at the workshop, as well as a starting point to detail the scene and evolve a business model and the corresponding business processes (Fig. 2).

Fig. 2. A scene resulting from a design thinking workshop [17] explaining a technological ecosystem (left), and the transformation of the original workshop into a scene model using the OMiLAB Scene2Model toolkit (right).
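The capture step — figures recognised on the workshop table are turned into objects of a scene model — can be sketched as follows. This is a hypothetical illustration assuming tag-based recognition; the catalogue, tag ids and labels are invented, not part of the Scene2Model toolkit.

```python
# Assumed mapping from recognised tag ids to SAP Scenes figures (illustrative).
TAG_CATALOGUE = {
    "tag-017": {"type": "Actor", "label": "Customer"},
    "tag-042": {"type": "Element", "label": "Marketplace"},
    "tag-101": {"type": "Location", "label": "Shop Floor"},
}

def tags_to_scene_model(detected_tags):
    """Turn (tag_id, x, y) detections from the camera into scene-model objects."""
    model = []
    for tag_id, x, y in detected_tags:
        figure = TAG_CATALOGUE.get(tag_id)
        if figure is None:
            continue  # unknown tag: ignore rather than fail mid-workshop
        model.append({"type": figure["type"], "label": figure["label"],
                      "position": (x, y)})
    return model

scene = tags_to_scene_model([("tag-017", 120, 80), ("tag-042", 300, 80)])
```

The resulting object list would then be rendered and detailed further in the modelling tool.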

3.2 Usage of the OMiLAB Innovation Corner

Usage of the OMiLAB Innovation Corner Default Setting
The Scene2Model infrastructure is used to support the design thinking workshop, consisting of: paper figures with QR codes, a camera reading the QR codes, a Raspberry Pi with corresponding tag-recognition software, and the Scene2Model toolkit that retrieves the QR codes and maps them to modelling objects within the scene model. ADOxx with the BPMN modelling library was used to model the business processes. The marketplace model [18] specifies the exploitation items that describe the different digital twin offerings that can be used by the manufacturer. A key challenge is to find, select and propose relevant digital twin technology for the manufacturer during the workshop. The introduction of appropriate samples, ideas or best practices has a massive influence on the


creativity and the resulting outcome of the workshop. This is currently performed by the moderator, who knows the available solutions and proposes the most appropriate one.

Extending the OMiLAB Innovation Corner
In order to support the moderator of the workshop, we implemented an extension to the Scene2Model toolkit that searches for appropriate ideas and solutions, so that the experience of several moderators can be used by automatically matching the demand and the available innovation items using intelligent algorithms. We rely on the experience we gained during the H2020 EU project CloudSocket [19], where we applied a matching mechanism [20] that reflects the requirements of a business process on the one side and the service description on the other side in order to find the appropriate service for a business process. In order to match the business requirements against available ideas, we created a so-called innovation shop based on the experience of the H2020 EU project CaxMan, where innovation results are listed in the form of items. The notion of an “innovation item” allows us to act flexibly and also include early ideas, software prototypes or consulting instruments that are not yet market-ready. The difference between an innovation shop and a marketplace is that the marketplace provides only technology readiness level 9 items, whereas the innovation shop has no restriction on technological readiness. The “Innovation Shop” or “Marketplace” modelling is used to describe the corresponding offerings. The Annotation-Matcher loads the ontologies that were used when annotating both the requirements and the innovation items and matches the innovations to business needs. Hence, during the creative workshop of business model creation, the design thinking workshop is enriched with appropriate offerings coming from innovation shops or marketplaces. We implemented the extension using the OLIVE microservice framework and extensions in the ADOxx-based Scene2Model toolkit.
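The matching idea can be illustrated with a small sketch. This is not the CloudSocket mechanism [20] itself; it merely shows the principle of scoring innovation items against requirement annotations by the overlap of their ontology concepts. All item names and concepts are invented for illustration.

```python
def match_score(requirement_concepts, item_concepts):
    """Jaccard overlap between two sets of ontology annotations."""
    common = requirement_concepts & item_concepts
    union = requirement_concepts | item_concepts
    return len(common) / len(union) if union else 0.0

def rank_items(requirement_concepts, shop):
    """Rank innovation-shop items against the annotated business requirement."""
    scored = [(match_score(requirement_concepts, concepts), name)
              for name, concepts in shop.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

# illustrative innovation shop: item name -> annotated ontology concepts
shop = {
    "predictive-maintenance-twin": {"digital-twin", "sensor-data", "manufacturing"},
    "vr-training-prototype": {"virtual-reality", "training"},
    "process-mining-service": {"process-mining", "manufacturing"},
}
ranked = rank_items({"digital-twin", "manufacturing", "sme"}, shop)
```

A real matcher would reason over ontology hierarchies rather than flat sets, but the ranking principle is the same.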

4 The H2020 EU-Project BIMERR: Focus on the Organisational Model

The aim of BIMERR is to design and develop a Renovation 4.0 toolkit comprising tools to support renovation stakeholders throughout the renovation process of existing buildings, from project conception to delivery. Our focus is the development of a business process and workflow modelling toolkit that has the capability to design, execute, monitor and, through mining, evaluate the renovation processes.

The renovation processes are designed in BPMN, whereas we realise different levels of detail. On the template level, the different renovation processes are modelled in a generic way in order to be applicable to any renovation initiative. On the instance level, the renovation process is modelled in accordance with the concrete project plan of the particular construction site. On the workflow level, the renovation process is modelled in accordance with the applications, mobile apps, smart glasses, data repositories and legacy applications that are used by the workers on the particular construction site.

4.1 Knowledge-Based Approach

The aim is to create a digital twin of the renovation process on different levels in order to serve different purposes. On the template level, the digital twin is very generic and is


therefore only used to assess the price and the duration of a renovation project. This estimation is on a very high level and must therefore be interpreted by experts who can correct the results according to their knowledge in order to make good offers. On the instance level, the digital twin is concrete in terms of the weekly deadlines at the construction site, the used material, the manpower and the machinery. Hence, the status of the renovation process is derived from the data received from the various project management and monitoring tools, the expert opinion when visiting the site, and additional information that is provided by the residents who use the offered mobile apps during the renovation period, as well as by the smart glasses that are offered to subcontractors (Fig. 3 and Fig. 4).

Fig. 3. The template for renovating a facade from the outside. The different possibilities, such as whether a re-arrangement of telephone cables is necessary or which type of facade is added, are expressed as decisions in this template.

Fig. 4. The abstract workflow of the outside facade improvement process, using swim lanes to express the different IT systems: construction onsite environment, management environment, reporting and sub-contractor environment.

This template is then instantiated for a concrete project, and each task is split according to the sub-contractor and the time slot in which the task has to be finished. Both models can be viewed in full size and downloaded at the development space of BIMERR [21] on ADOxx.org. The digital twin of the renovation process is used to monitor the status of the construction site, to execute the workflow and interact with the stakeholders, and to simulate and estimate the future work progress based on expert opinions. An evaluation of the renovation process by performing process mining and reflecting the results through a human evaluation via collaboration1 enables iterative learning cycles.

1 Deliverable D6.3, D6.4, D6.6, D6.8 and D6.10 of BIMERR.


4.2 Usage of the OMiLAB Innovation Corner

Usage of the OMiLAB Innovation Corner Default Setting
The OMiLAB Innovation Corner provides the ADOxx meta-modelling platform and the OLIVE microservice framework.

• For the design of the construction process, we used ADOxx with an imported BPMN library that is available at ADOxx.org.
• For the simulation, we used the open-source Petri-net-based simulation service from ADOxx.org and extended it so that it can read different parameters for each simulation run. We call this a token-based configuration of the simulation algorithm.
• For the monitoring, we used the open-source dashboard service from ADOxx.org and configured it so that the dashboard can be used in the context of processes.
• For the mining, we used the third-party application CELONIS and integrated it by exchanging log data and process information.
• For the process improvement, we implemented a new service called Model-Wiki that shares models via an XWiki and enables co-operative evaluation of the models via comments from experts.

The realisation is provided in the development space of BIMERR on ADOxx.org.

Extending the OMiLAB Innovation Corner
One knowledge-intensive task that is performed with the help of the digital twin is the prediction of whether there will be a delay or additional costs. We demonstrate this with the rental of the scaffold, as a delay has a direct impact on the rental costs of equipment that has to stay onsite. A dashboard monitors the Key Performance Indicator (KPI) “Scaffold Cost”, which consists of (a) the construction, (b) the de-construction and (c) the rental costs for the duration. For the prediction of the rental costs, we introduce a process simulation that forecasts the most likely duration by continuously calculating three scenarios: (1) the optimistic, (2) the moderate and (3) the pessimistic scenario (Fig. 5).

Fig. 5. A representation of the KPIs, including (a) the actual scaffold costs as well as the (b) optimistic, (c) moderate and (d) pessimistic scenarios of the simulated duration.

In order to refine the simulation and to introduce expert knowledge in combination with historical data, we introduce the so-called “knowledge-based” simulation. A


precondition is that each simulation run has its own probabilities for decisions, waiting and execution times. This enables the introduction of a context that defines the probabilities concerning the likelihood of weather extremes, quality issues caused by a sub-contractor, or other unexpected issues. We externalise expert knowledge through the concept of a “weighted net sum” to define the simulation input. In our sample case, the “rental time of the scaffold” is the parameter that we simulate. To calculate the respective weighted net sum, the following inputs are used: (1) historical data that provides the base distribution for the expected time; (2) an estimation of the weather conditions causing a delay in the facade renovation; (3) a quantification of the potential risk that the customer does not pay and the construction site therefore needs to pause – while the rental costs of the scaffold continue; (4) the potential risk that the sub-contractor does not perform according to the contract, so that a substitute needs to be put in place – while the rental costs of the scaffold continue; (5) the likelihood of unexpected risks that may cause a delay. Each of the inputs is the result of calculating the probability and the weighted net sum over the various inputs. The expert input consists of (a) the mathematical distribution and (b) the different weights for the net sum.
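One possible reading of this scheme as code is a small Monte-Carlo sketch: each risk input carries an expert weight, a probability and a delay, and repeated runs yield the optimistic, moderate and pessimistic duration scenarios. This is a hedged illustration under invented numbers, not the BIMERR simulation service.

```python
import random

random.seed(1)  # deterministic for this sketch

def weighted_net_sum(inputs):
    """Combine the expert-weighted delay inputs into one expected-delay figure.
    Each input is (weight, probability, delay_in_days) -- an assumed encoding."""
    return sum(w * p * d for w, p, d in inputs)

def simulate_rental_days(base_days, inputs, runs=10_000):
    """Monte-Carlo runs: each run samples whether each weighted risk occurs."""
    results = []
    for _ in range(runs):
        delay = sum(w * d for w, p, d in inputs if random.random() < p)
        results.append(base_days + delay)
    results.sort()
    return {  # the three scenarios monitored in the dashboard
        "optimistic": results[int(0.1 * runs)],
        "moderate": results[runs // 2],
        "pessimistic": results[int(0.9 * runs)],
    }

inputs = [  # (weight, probability, delay in days) -- purely illustrative numbers
    (1.0, 0.30, 5),   # (2) weather delay
    (1.0, 0.05, 20),  # (3) customer does not pay, site pauses
    (1.0, 0.10, 10),  # (4) sub-contractor substitution
    (0.5, 0.10, 8),   # (5) unexpected risks
]
expected_delay = weighted_net_sum(inputs)
scenarios = simulate_rental_days(base_days=30, inputs=inputs)
```

The scenario durations would then be multiplied by the daily rental rate to feed the “Scaffold Cost” KPI; the real service additionally draws the base duration from the historical distribution (input 1).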

5 The FFG-Project complAI: Focus on Robot Interaction

The complAI project develops an assistance system that is to guide organisations in using AI. Relevant ethical, legal, safety and security issues will be assessed in a model-based risk management software tool. For this investigation, we chose the compliant execution of processes on an industrial robotic platform. Workflow engines are used for the execution of processes on industrial robotic platforms. The challenge is to enable the assessment of such workflows by domain experts, i.e. lawyers, who are not familiar with the technical details of workflows or artificial intelligence (AI). BOC uses different modelling languages to separate the concerns and enable also non-technical experts to assess whether a certain process can be executed on an industrial robotic platform or not. The OMiLAB Innovation Corner is used to engineer a proof-of-concept prototype that demonstrates the different behaviours of a robot arm.

5.1 Robot Interaction

The digital supermarket has been chosen as a sandbox scenario, as the three challenges of complAI – (a) “pick-and-place”, (b) “mobile robots” and (c) “human and robotic collaboration” – can be demonstrated. Although the use case of the digital supermarket is abstract and considered a sandbox scenario, the worked-out mechanisms can be applied to concrete industrial use cases. In order to enable an open discussion of legal and ethical issues, we selected a hypothetical and abstract scenario, being more flexible when discussing sensitive issues.

For the “pick-and-place” scenario we consider a robotic arm that configures a basket of fruits for the customer. The idea is that the customer can choose the fruits in a mobile app, enter the shop and pick up the prepared basket of fruits. The challenge is to describe not only the process of picking fruits and placing them in a basket, but also the

Industrial Digital Environments in Action


decisions which fruit to pick and how to deal with the situation that some fruit has run out of stock. The simple sequence of picking up three fruits was modelled with a Petri net (Fig. 6).

Fig. 6. Shows a Petri net describing a simple pick-and-place procedure for a robot arm. Each transition invokes an action from the robot that is shown as a live stream picture on the right.
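The firing semantics behind such a net can be sketched in a few lines. The net below is a simplified stand-in for the one in Fig. 6, with invented place and transition names; in the prototype, firing a transition would trigger the corresponding robot-arm action:

```python
# Minimal Petri net sketch of the pick-and-place sequence (illustrative
# names, not the exact net of Fig. 6).
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name!r} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1       # consume tokens
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1  # produce tokens
        print(f"fired {name}")         # here the robot action would be invoked

net = PetriNet({"start": 1})
net.add_transition("pick_apple",  ["start"], ["apple_placed"])
net.add_transition("pick_orange", ["apple_placed"], ["orange_placed"])
net.add_transition("pick_banana", ["orange_placed"], ["done"])

for t in ["pick_apple", "pick_orange", "pick_banana"]:
    net.fire(t)
```

The token game makes the strict sequencing explicit: `pick_orange` is only enabled after `pick_apple` has placed its token.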

The target platform is the representation of the pick-and-place workflow in the form of a flexible BPMN workflow that runs within a workflow engine. In order to transform the simple pick-and-place sequence, we switched to a flow chart representation, introducing sub-processes for certain robot movements and displaying user interfaces in the form of selection boxes. This makes it possible to demonstrate explicitly how an intelligent interpretation is performed – in our case, the sensor information on whether the selected fruit is available and the interpretation that selects an alternative instead. This sensor-based inference is demonstrated in the form of a human interaction using the selection boxes. The flow chart representation therefore enabled a mock-up of the workflow and the intelligent interaction. The resulting BPMN workflow was implemented on a workflow engine combining the sub-workflows for movements, the orchestration of the movements and the indication of intelligent interaction (Fig. 7).

5.2 Usage of the OMiLAB Innovation Corner

Default Setting
The proof-of-concept engineering of intelligent robot interaction using workflows used the following default setting:
• The pre-packaged Dobot Magician [22] was used to demonstrate the robot arm.
• The corresponding IoT Adapter – a Raspberry Pi – with the corresponding software: a Tomcat web application and the Dobot Magician interfaces.
• The pre-installed modelling toolkit Bee-Up was used for modelling the Petri net, the flow chart and the BPMN processes that access the IoT Adapter.
• A third-party workflow engine was used.
The configurations can be downloaded from the complAI ADOxx.org developer space [23].


R. Woitsch

Fig. 7. Introduces the same simple pick-and-place procedure that picks up some cards with fruit symbols but introduces a user interaction and sub-processes in the form of a Flow Chart on the left and in the form of a BPMN workflow on the right.

Extending the OMiLAB Innovation Corner
In order to assess whether a process can be executed on the robotic platform or not, we extended the OMiLAB Innovation Corner in a two-step approach:
First step: We implemented an assessment service that applies a pre-defined questionnaire to the process in order to find out whether the answers result in (i) a “green light”, indicating that there is no risk when the workflow is operating, (ii) a “yellow light”, indicating that there are risks but that they can be sufficiently handled under certain conditions, or (iii) a “red light”, where additional corrections are required. The microservice framework OLIVE was used to realise this questionnaire service. Models from ADOxx-based tools can be visualised in the assessment service by displaying the model image, and colour-coded icons are laid over the model image. Each colour-coded item refers to a questionnaire that is displayed, and the answers are summarised according to its calculation algorithm. Details on how to use the assessment service, the questionnaire modelling within ADOxx, the JSON format of the questionnaire and the construction of questions and answers are provided in the complAI development space on ADOxx.org.
Second step: When a workflow model passes the assessment, the model is signed to ensure that only signed workflows are executed on the robotic platform. A microservice verifies the role or the person who signed the model. This is performed by creating a hash key of the model to ensure that no changes have been performed. The hash key is stored in a separate database which is used for verification. When a workflow is to be executed, it is first verified that the signed workflow has not been changed.
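The signing and verification step can be illustrated with a hash-based sketch. The SHA-256 choice, the in-memory store and all identifiers are assumptions for illustration; in the project the hash key is stored in a separate database:

```python
import hashlib

# Sketch of the second step: sign an assessed workflow model by storing
# a hash key together with the signer, then verify before execution.
signature_store = {}  # model id -> (hash key, signer); a database in practice

def sign_model(model_id, model_content, signer):
    """Record a hash key for an assessed model, so later changes are detectable."""
    digest = hashlib.sha256(model_content.encode("utf-8")).hexdigest()
    signature_store[model_id] = (digest, signer)

def verify_model(model_id, model_content):
    """Return True only if the model was signed and has not changed since."""
    entry = signature_store.get(model_id)
    if entry is None:
        return False
    digest, _signer = entry
    return digest == hashlib.sha256(model_content.encode("utf-8")).hexdigest()

sign_model("pick-and-place", "<bpmn>...</bpmn>", signer="assessor@example.org")
print(verify_model("pick-and-place", "<bpmn>...</bpmn>"))       # unchanged model
print(verify_model("pick-and-place", "<bpmn>tampered</bpmn>"))  # changed model
```

Any modification to the workflow after assessment changes the hash, so the engine can refuse to execute it.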

6 Reflection and Next Steps

We observed that (a) business model creation, (b) the assessment of innovations and the selection of the most promising innovation to be put into practice, (c) keeping the


momentum of innovation within a heterogeneous workforce – including digital natives and digital immigrants – as well as (d) the digitization of the real world into a digital world are key challenges when introducing digital innovations. With the OMiLAB Innovation Corner we apply an instrument, consisting of a conceptual framework, a technical environment and a physical infrastructure, in order to support organizations in solving the aforementioned challenges. Business model creation is supported with a co-creative approach using design thinking to harvest creative ideas from domain experts with different backgrounds and viewpoints. Assessment of innovations and decision making is supported by a knowledge-based approach using conceptual models, where the expertise of different stakeholders is incorporated into a hybrid knowledge base. Activities to keep the momentum of innovation are assisted by organizational models that are generated within the OMiLAB Innovation Corner. In contrast to other innovation workshop approaches, the models can be used within the organization during the entire lifecycle of the innovation process. The digitization of real-world objects into digitized objects is supported by a proof-of-concept environment using robots and IoT sensors to enable prototypes as “communication media”. Hence, those prototypes focus on making physical issues explicit, and challenges that only engineers are aware of can be elaborated also with other domain experts within a consolidated proof-of-concept prototype. The OMiLAB Innovation Corner has been successfully applied in a series of H2020 EU projects, national FFG projects and bilateral industrial co-operations. However, we see the need for science to support industry by investigating the following challenges:
1. What is a digital organisation? If everything can be virtual, what are the key components of an organisation, and how can those key elements be configured?
2. Is the digital transformation different from previous changes? Industry is under continuous transformation and hence capable of adapting to new circumstances, but is the digital transformation special compared to previous changes?
3. How to transform the current workforce into a digital workforce? Digital natives and digital immigrants are both needed for the benefit of a digital organisation. But how to manage such teams and how to identify the needed capabilities?
4. What is a global digital ecosystem? Organisations know how to act in global ecosystems, but what is different in a digital globalism?
We are convinced that the upcoming challenges of digital transformation can be supported with model-based approaches acting as documentation, as a mediator environment, and as a knowledge base for intelligent approaches.

References

1. Merriam-Webster. https://www.merriam-webster.com/dictionary/inventing. Accessed 16 Sep 2020
2. Merriam-Webster. https://www.merriam-webster.com/dictionary/innovation. Accessed 16 Sep 2020


3. cmp. World Economic Forum. http://reports.weforum.org/digital-transformation/wp-content/blogs.dir/94/mp/files/pages/files/dti-executive-summary-20180510.pdf. Accessed 16 Sep 2020
4. Osterwalder, A., Pigneur, Y., Clark, T.: Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers. John Wiley & Sons, Hoboken, NJ (2010). ISBN 9780470876411
5. Sandkuhl, K., et al.: From expert discipline to common practice: a vision and research agenda for extending the reach of enterprise modeling. Bus. Inf. Syst. Eng. 60(1), 69–80 (2018)
6. Frank, U., Strecker, S., Fettke, P., vom Brocke, J., Becker, J., Sinz, E.J.: The research field modeling business information systems. Bus. Inf. Syst. Eng. 6(1), 39–43 (2014)
7. Techopedia. https://www.techopedia.com/definition/9093/rapid-prototyping. Accessed 16 Sep 2020
8. OMiLAB Bee-Up. https://www.omilab.org/activities/bee-up.html. Accessed 16 Sep 2020
9. OMG: Business Process Modelling Notation (BPMN), Version 1.2. http://www.omg.org/docs/formal/09-01-03.pdf. Accessed 14 May 2009
10. Chen, P.P.: The entity-relationship model – towards a unified view of data. ACM Trans. Database Syst. 1, 9–36 (1976)
11. OMG UML. https://www.omg.org/spec/UML/About-UML/. Accessed 16 Sep 2020
12. Petri, C.A., Reisig, W.: Petri net. Scholarpedia 3(4), 6477 (2008). https://doi.org/10.4249/scholarpedia.6477
13. Hasso-Plattner-Institut. https://hpi-academy.de/en/design-thinking/what-is-design-thinking.html. Accessed 16 Sep 2020
14. Plattner, H., Meinel, C., Leifer, L. (eds.): Design Thinking: Understand – Improve – Apply. Springer, Berlin, Heidelberg (2011)
15. cmp. Malamed, C.: Learning Solutions. https://learningsolutionsmag.com/articles/a-designer-addresses-criticism-of-design-thinking. Accessed 16 Sep 2020
16. SAP. https://experience.sap.com/designservices/resource/scenes. Accessed 16 Sep 2020
17. BIMERR Consortium: D3.1 Stakeholder Requirements for the BIMERR System. www.bimerr.eu. Accessed 16 Sep 2020
18. ADOxx.org, Change2Twin Development Space. https://adoxx.org/live/web/change2twin/downloads. Accessed 16 Sep 2020
19. ADOxx.org, CloudSocket Development Space. https://www.adoxx.org/live/web/cloudsocket-developer-space/bpaas-design-environment-prototype-research. Accessed 16 Sep 2020
20. Hinkelmann, K.: Business process flexibility and decision-aware modeling – the knowledge work designer. In: Domain-Specific Conceptual Modeling, pp. 397–414. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-39417-6_18
21. ADOxx.org, BIMERR Development Space. https://adoxx.org/live/web/bimerr/downloads. Accessed 16 Sep 2020
22. Dobot. https://www.dobot.de/produkte/dobot-magician-4-achsen-roboter/. Accessed 16 Sep 2020
23. ADOxx.org, complAI Development Space. https://adoxx.org/live/web/complai/downloads. Accessed 16 Sep 2020

Enterprise Modeling and Enterprise Architecture

Digital Twins of an Organization for Enterprise Modeling

Uwe V. Riss1,2, Heiko Maus3, Sabrina Javaid1, and Christian Jilek3,4

1 Eastern Switzerland University of Applied Sciences, St. Gallen, Switzerland {uwe.riss,sabrina.javaid}@ost.ch
2 School of Humanities, University of Hertfordshire, Hatfield, Hertfordshire, UK
3 German Research Center for Artificial Intelligence, Kaiserslautern, Germany {heiko.maus,christian.jilek}@dfki.de
4 Computer Science Department, TU Kaiserslautern, Kaiserslautern, Germany

Abstract. Today’s dynamic business environment requires enterprises to be agile in order to remain competitive. Competition also impacts enterprise modeling (EM), which aims at the systematic development of the enterprise’s business processes, organizational structures, and information systems. Although EM is a mature discipline, organizations still do not exploit the full potential of EM. We suggest that the concept of a Digital Twin of an Organization (DTO) provides a means of digitalization to introduce more agility. A DTO draws upon a graph-based, machine-readable knowledge representation of enterprise models. In order to run through various scenarios in real time, the DTO approach makes use of Context Spaces that provide the required information in a semantically structured form, which improves the comprehensibility and applicability of the models used. The DTO combines EM with operational reality and, thus, increases the agility of the enterprise.

Keywords: Digital Twin of an Organization · Enterprise modeling · Process context · Context spaces

1 Introduction

To survive in the long term in a globalized and digital economy, it is essential for organizations to continuously adapt to their environment. A prerequisite for successful transformation is organizational agility. Being agile means reacting proactively and flexibly to new demands. Agile organizations have structures that enable them to adapt to changing market conditions within a short period of time. This can be achieved by redesigning organizational structures and processes, for example, to adapt the core competencies or to strengthen customer

The authors would like to thank Martyn Westsmith for the design study of the DTO. Part of this work was funded by the German Federal Ministry for Education and Research in the project SensAI (01IW20007).
© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved. J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 25–40, 2020. https://doi.org/10.1007/978-3-030-63479-7_3


and supplier relationships. Uncontrolled changes within the company result in a multitude of heterogeneous, incompatible, and costly impacts on business processes, organizational structures, and business information systems. A prerequisite for transformation is a common understanding of the elements of an organization and how they relate to each other. Enterprise modeling (EM) as a discipline provides the concepts to comprehensively describe the elements and relations of an organization, such as processes, applications, and structures. Enterprise models are used to understand and analyze an organization with the goal of improving its performance by representing the current and target architecture. This target architecture guides and assists the organization in the process of change. Over the past decades, various modeling approaches have been developed. This can be seen in the huge body of knowledge that addresses various aspects differing in goals, scope, and granularity. Overall, EM can be regarded as a mature discipline that provides the necessary concepts to represent an organization. However, there is a gap between the approaches provided by the EM community and how models are used in organizations. The large-scale adoption of the modeling approach still fails in practice, and many organizations do not exploit the potential of models. Organizational actors are often not willing to create and maintain enterprise models and do not fully utilize them. EM is considered a discipline that is mainly valued by enterprise architects and IT people. Since organizations do not regard it as mission critical, EM has not yet reached its maximum potential [37]. In today’s dynamic business world, there is a need for real-time information that provides organizations with direct added value. It is important for organizations to grasp their current situation and to react accordingly, for example, by making the right decisions at the right time.
Such an ability enables organizations to adapt quickly and take advantage of existing business opportunities [35]. Organizations need means that go beyond the established paradigms and exploit the full potential of enterprise modeling. They need more dynamic representations of organizational elements, which reflect changes in the organization immediately. We think that Gartner’s proposal of the concept of a Digital Twin of an Organization (DTO) addresses the above-mentioned challenges. The DTO concept enhances that of the established “digital twin”, which is mainly used in the context of the Internet of Things (IoT) [7,24]. The concept is now so successful that Gartner has claimed that digital twins will enter mainstream use and that “over two-thirds of companies that have implemented IoT will have deployed at least one digital twin in production” [3]. In any case, the concept of the DTO opens up new opportunities of digitalization for enterprise architecture. In this paper, we investigate the reasons for this optimism and what makes the DTO concept so promising. We give an impression of what a DTO could look like and how it could work. The paper is organized as follows: Section two introduces the digital twin approach. Section three conceptualizes the idea of the Digital Twin of an Organization using Context Spaces. Section four introduces a construction process use case to show how a DTO can support


an organization in practice. Section five discusses the role of and interplay with enterprise models, which helps us to draw some conclusions with respect to the implementation of the DTO. Section six summarizes the findings and points to future research.

2 The Digital Twin Approach

In 2002, Michael Grieves introduced the digital twin in manufacturing as a virtual representation or dynamic model of a physical system [15]. One of its first and most prominent fields of application was satellites; digital twins were used to control satellites once they were launched into space and no longer physically accessible [39]. Among other uses, applications of digital twins include the design of new products, simulations that give early warnings of malfunctions, or the provision of further service offerings. The architecture of digital twins has undergone a significant evolution. Today, we can assume a 5-layer architecture of digital twins starting from the physical objects. It comprises (1) a Data Layer collecting the data available from these objects, (2) a Data Processing Layer for sensemaking of the data, which is based on (3) the Models & Algorithms Layer, (4) an Analytic Layer for further analysis of the data and (5) a User Interface Layer, which finally provides a comprehensive user interface to the digital twin [5]. These layers are connected by a bidirectional data flow between the physical object, which produces observation data in real time, and its representation, which is used to send control data in the opposite direction. Digital twins are not only real-time representations but also enrich objects with additional information based on instant analysis. This holistic, comprehensible, up-to-date, and enriched view of the physical object makes the digital twin unique [31] and, in use, autonomous and flexible [36]. Based on the success of digital twins in manufacturing, Gartner [7,24] has extended the idea to processes and entire organizations, launching the concept of the DTO. Following the model of the digital twin from manufacturing, the DTO serves as a unique point of data reference for each process, providing a comprehensive real-time picture and a basis for process simulation [28].
Regarding the DTO’s architecture, we can refer to its siblings from manufacturing and employ the same 5-layer architecture. Herein, human (and system) activities organized in processes replace the dynamics of physical objects. In the same way as the sensors of physical objects produce data streams, these activities leave traces of data in information systems (IS) that fuel the DTO. Process mining plays a prominent role in animating DTOs [24]. Based on the resulting extended understanding of processes, several possible applications of DTOs have been presented: Kerremans suggested the use of DTOs for linking strategy to operations or focusing on business change and digital transformation [24], while others see DTOs as a new tool for knowledge management [23], to mention but a few.
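As a rough illustration, the 5-layer architecture can be read as a pipeline from raw observation data up to the user interface. The layer functions below are invented placeholders, not an actual digital-twin implementation:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DigitalTwin:
    """Schematic 5-layer digital twin: data flows upward through the layers."""
    layers: List[Callable] = field(default_factory=list)

    def observe(self, raw):
        """Push observation data from the physical object through all layers."""
        for layer in self.layers:
            raw = layer(raw)
        return raw

# Placeholder layers, one per level of the architecture described above.
data_layer       = lambda readings: [r for r in readings if r is not None]   # collect/clean
processing_layer = lambda readings: sum(readings) / len(readings)            # sensemaking
model_layer      = lambda mean: {"state": "overheating" if mean > 80 else "nominal"}
analytic_layer   = lambda s: {**s, "action": "throttle" if s["state"] == "overheating" else "none"}
ui_layer         = lambda report: f"status: {report['state']}, suggested control: {report['action']}"

twin = DigitalTwin([data_layer, processing_layer, model_layer, analytic_layer, ui_layer])
print(twin.observe([82.0, None, 85.0, 81.0]))
```

The bidirectional flow mentioned in the text would add a downward path from the analytic layer back to the physical object (the "throttle" control signal here).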

3 Conceptualizing the Digital Twin of an Organization

Digital twins in manufacturing (DTMs) resemble DTOs in many ways. For example, we can directly transfer the 5-layer architecture of the DTM to the DTO. However, there are also differences. While a DTM is fed by sensor data of the physical object, which has a clear boundary, processes are more diffuse, so that it is less obvious which data have to be included. The most obvious solution is the use of data from corporate IS, with process mining as an appropriate means of restoring process information [24]. However, there are some problems with this approach. First, the often complex contextual information of human activities is largely lost in IS, since an IS only serves the specific purpose of the respective application (e.g., financial accounting), which does not require most of the work context. Notably, the work context is often responsible for deviations from the planned procedure. Therefore, putting the isolated pieces from IS back together is one of the essential challenges for process mining. Second, the DTO is not only a recipient of data but is conceptualized as an active operational tool that makes it possible to build, use and save work contexts (and the often heterogeneous data generated in them). Third, while the spatial proximity of parts provides a suitable guideline for navigating through a DTM, the question of proximity and relation in a DTO is more complex. For example, there is a proximity in time of use but also a proximity in reference. All these aspects give the User Interface Layer of the DTO a prominent role.

Fig. 1. Vision of a holistic view on a DTO with the cSpaces sidebar providing contextual information from the corporate memory as well as assistance for the resources of a DTO.


A central question is which design approach is most suitable for the DTO’s User Interface Layer. Usual visualizations of processes resemble road maps that lead from one task to another. Therefore, it seems obvious to use a geographic visualization schema [11] as a starting point to create an intuitive image of the process. Figure 1 depicts our vision of an adequate visualization of a DTO; it shows a corresponding design study. The graphic design indicates how tasks are related in the process model and displays the most significant work items related to them. On the right-hand side, a context sidebar supplements the graphic illustration of the process and shows the work items from the graphic plus additional objects in an ordinary list view. The user interface provides a zoom-in navigation, which leads from the depicted process overview to a representation that shows more semantic relations between work items. Table 1 describes this transition in the DTO visualization.

Table 1. Overview of the levels of visualization in DTOs.

Level of visualization | Displayed entities                                               | Displayed relations
1st level              | Focus on process tasks and most relevant objects attached        | Relations between process tasks with attached objects
2nd level              | Focus on one process task only, related objects come to the fore | Semantic relations between objects only
3rd level              | Focus on related object groups with tasks attached               | Semantic relationships between objects plus respective tasks

On the 1st level, users get an overview of the process they are working on. Focusing on one task and zooming in, the work items of the focal task (these include persons and other objects) become visible while the non-focal tasks vanish into the background. Further zooming in makes the non-focal tasks disappear completely and brings more in-depth context information to the fore. Here we focus completely on the semantic relations between work items (2nd level). Finally, even the focal task disappears and only the work items and their relations remain visible; the name of the focal task may be shown in the headline. At this point tasks reappear, however, this time attached to the work items (3rd level). Which objects appear may depend on the strength of their relations or the recency of their use. The representation may also be used for navigation through the process in time, showing which work items were used in which order. To establish relations between the different work items used in a task, we suggest the use of Context Spaces (cSpaces) [21]. They allow identifying, deriving and explicating contextual information from users’ daily work, keep exactly the contextual information that would otherwise get lost, and provide context-specific assistance. We apply a twofold approach (see also Fig. 2): first, a corporate memory [1] infrastructure introduces a Knowledge Description


Fig. 2. Comparing DTO & Corporate Memory architectures. The layers of DTO and Corporate Memory are largely corresponding.

Layer with ontologies, covering personal and organizational work items [33], and knowledge graphs representing data, information and knowledge leveraged from the Resource Layer. This is realized by Adapters that semantify resources to make them machine-understandable (e.g., [38]) and make them available to knowledge services in the Knowledge Services Layer. These services can be used in the User Interface Layer (e.g., to analyze an incoming e-mail) and derive entities from the knowledge graphs [22], proactively providing context information to users [2,25]. This approach is based on the corporate memory infrastructure CoMem,1 which started as a research prototype [29] but has matured in several projects and is used as a pilot at a German energy provider.2 Second, to be embedded in the user’s working environment and daily work, we use a Semantic Desktop that provides a system of plug-ins extending office programs (such as e-mail and chat tools or web browsers) as well as the file system. Personal knowledge graphs represent the user’s mental model and interlink with the knowledge graphs of the organization [29]. To support the user, cSpaces provides an assistant sidebar (as depicted in Fig. 1 on the right-hand side), which shows relevant information retrieved from repositories or belonging to the current context (e.g., a selected e-mail in an office program) [20]. This integration allows cSpaces to derive, build and formalize contexts that the DTO can use for an authentic representation of process tasks supporting users’ daily work.

1 https://comem.ai
2 https://www.plattform-lernende-systeme.de/map-on-ai-map.html, search for company ‘envia’ [10].


Information used during task execution is crucial for process sensemaking; it appears in cSpaces but is usually inaccessible to business information systems. Thus, cSpaces provides detailed insight into execution and information use, necessary for realistic process models in EM. Likewise, the approach improves the effectiveness of Business Process Management [34]. This is crucial since we expect that the importance of precisely these unstructured processes will increase in the future due to advancing digitalization. The demands for flexibility to solve knowledge-intensive tasks or wicked problems [8] as well as the consideration of the working context are also part of the requirements for case management. Here, personalization, context and intelligent assistance for knowledge work are likewise seen as a research challenge [30]. In earlier work we proposed weakly-structured workflows [12] as a means to combine knowledge work and workflows within corporate memories, derived task know-how on the Semantic Desktop for process know-how re-use [19], and used data mining and clustering for identifying hidden process patterns as a means of process-related knowledge maturing [6]. Another question is how the representation in the DTO can give users a holistic view of the process. The DTO must show users the business process dynamics without overwhelming them with an excess of information. An obstacle in this respect is the opaqueness of the business process concept [27,43]. Following Davenport [9], a “process is thus a specific ordering of work activities across time and place, with a beginning, an end, and clearly identified inputs and outputs: a structure for action.” Hence, the key elements of a process, which a DTO should incorporate, are: (work) activities performed by actors, inputs, outputs, times and places. These features are grouped in individual instances started by a trigger and oriented toward certain customers as beneficiaries.
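Davenport's process elements can be captured in a minimal data model that a DTO could maintain per process instance. The class and field names below are illustrative assumptions, not the DTO's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Activity:
    """One work activity: who did what, with which inputs/outputs, when, where."""
    name: str
    actor: str
    inputs: List[str]
    outputs: List[str]
    time: datetime
    place: str

@dataclass
class ProcessInstance:
    """A single instance, started by a trigger, oriented toward a beneficiary."""
    trigger: str
    beneficiary: str
    activities: List[Activity] = field(default_factory=list)

    def state(self):
        """Current state of the instance: activities ordered latest first."""
        return sorted(self.activities, key=lambda a: a.time, reverse=True)

order = ProcessInstance(trigger="customer order", beneficiary="customer A")
order.activities.append(Activity(
    "prepare offer", actor="clerk", inputs=["price list"],
    outputs=["offer.pdf"], time=datetime(2020, 11, 25, 9, 0), place="office"))
print(order.state()[0].name)
```

Tracking who used which resource at which time, as required in the next paragraph, amounts to querying these activity records.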
The key feature that distinguishes the DTO from a business process model is the evolution of the respective entities in time and space. Once triggered, a business process instance is characterized by a state that consists of employees engaged in one or more activities associated with running processes. During execution, processes consume or produce resources, which can be material objects such as tools or information items such as data in IS or other repositories. Information items are semantically related to each other and only temporarily present to the users. This presence is to be reflected in the DTO, and it must be clear which person used which resource and information at which time. To make sense of this multitude of data, the DTO may use various kinds of enterprise models. The idea of a digital twin suggests keeping the model space as open as possible, since the models that are actually required can change. Thus, EM assumes a new role in making sense of data. Due to the variability of process executions, we expect that a small but significant amount of data will not fit the models. Here, the additional open semantic knowledge space of cSpaces is required. Last but not least, it should be mentioned that a DTO is not a replacement for traditional business information systems. These IS serve the purpose of supporting standardized sequences of work steps as efficiently as possible. To this


end, they often implicitly fix the business process to a considerable degree. This works well as long as the incorporated process models mainly reflect the way processes are executed, which is the case in most process instances. In a smaller number of cases (or in certain situations), it may be necessary to deviate from the standard process model. This means that the process leaves the zone of efficiency. Deviations are usually bought at the price of a significant delay and additional costs. We can refer here to an inverted Pareto principle saying that the 20% of deviations in process execution (the non-mainstream part) are likely to cause 80% of the costs. Nevertheless, the standard approach works well as long as we can limit the deviations to a small part of the process.
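The inverted Pareto claim is easy to make concrete with invented numbers: if 20% of the instances deviate and a deviation costs sixteen times a standard run, deviations indeed carry 80% of the total cost.

```python
# Back-of-the-envelope check of the inverted Pareto principle.
# All figures are purely illustrative.
standard_runs, deviating_runs = 80, 20   # 80% standard, 20% deviating instances
standard_cost, deviation_cost = 1.0, 16.0  # assumed relative cost per instance

total = standard_runs * standard_cost + deviating_runs * deviation_cost
share = deviating_runs * deviation_cost / total
print(f"deviations cause {share:.0%} of the costs")
```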

4 Use Case: Construction Process

In the following, we consider the example of a small to medium-sized construction project including third parties, such as craftspeople who serve as subcontractors. These projects are long-term processes with largely consistent phases such as inception, design and construction, where notably the latter is split into various stages. Despite the clearly structured main process, construction projects are always unique in their composition; there are various exceptional situations during implementation that must be solved on site and are often time-critical. The process phases are characterized by a variety of services, each of which generates data of the most diverse forms. These include, for example, data about clients and partners as well as project data from calculations, planning, purchases, equipment, materials, and the actual execution. These data (including documents) are stored in various function-related information systems, for example, financial data and invoices in ERP systems, customer information in CRM systems and building specifications in CAD systems. However, various process participants also organize their daily work with additional resources which are not always covered by the existing IS, such as documents, notes, webpages, or e-mails. It is therefore difficult to get a uniform impression of the whole construction process. It is the goal of the DTO to collect these different sources of information in one information object and to make them accessible at any time to the people involved in the construction process. This does not only refer to the employees of the construction company but also, with suitably configured views, to the other parties.
In order to fulfill this task, a DTO should support the work of the people in the project with components such as a corporate memory, knowledge graphs (individual, organizational, procedural) and knowledge-based services, or work-environment-integrated sensor technology and assistance (such as the Semantic Desktop and cSpaces). Hence, the corporate memory infrastructure taps into the legacy systems and represents the data with knowledge graphs, allowing a comprehensive view for the DTO. Due to its open Data Layer, the DTO can take up more information than the individual information systems, which only hold data related to their functions. Therefore, the DTO “knows” more about the construction process than all corporate systems together.
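The role of the corporate memory can be sketched as adapters that normalize records from function-specific systems into one knowledge graph. The record formats, predicates and project names below are invented for illustration:

```python
# Invented sample records from two function-specific systems.
erp_invoices = [{"id": "inv-17", "project": "renovation-42", "amount": 12000}]
crm_clients  = [{"name": "ACME Ltd.", "project": "renovation-42"}]

def erp_adapter(records):
    """Semantify ERP invoice records into (subject, predicate, object) triples."""
    for r in records:
        yield (r["id"], "belongsTo", r["project"])
        yield (r["id"], "hasAmount", r["amount"])

def crm_adapter(records):
    """Semantify CRM client records into triples of the same graph."""
    for r in records:
        yield (r["name"], "isClientOf", r["project"])

# One knowledge graph over all sources, giving the DTO the comprehensive view.
graph = set()
for adapter, source in [(erp_adapter, erp_invoices), (crm_adapter, crm_clients)]:
    graph.update(adapter(source))

# A cross-system question no single IS can answer on its own:
related = {s for (s, p, o) in graph if o == "renovation-42"}
print(sorted(related))
```

A real setup would use RDF and ontologies as described in Sect. 3; the dict-and-tuple form only shows the normalization idea.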

Digital Twins of an Organization for Enterprise Modeling

33

During construction, the process participants must deal with a lot of continuously changing data from different sources, including those of partners. This ranges from documents such as delivery notes to status messages received by phone or provided in unusual formats. cSpaces is an assistant that incorporates people's information objects while they work on their tasks. In their most basic form, cSpaces are collections that simply represent related information elements. They may evolve from a nucleus, e.g. a task, an event, a document a person has been working on, or more domain-specific items such as a support ticket arriving at the workspace of a clerk. When problems arise, external sources also provide valuable information for construction projects, such as addresses of local companies for unexpectedly arising orders such as repairs. The data can be available both electronically and in paper form. Smartphone apps may be used to take pictures of damage discovered by accident and forward them to other parties involved. Service providers may work in different inventory systems than the construction company or even use proprietary software that cannot be easily processed. Although each individual problem can usually be tackled, the sum of them can ultimately lead to a situation where the people involved, particularly on the construction site, lose the overview. The individual contributions captured in cSpaces increase the DTO's information density, so that it can provide an in-depth view of the work done and the information objects used from the “outside” information space, which are not covered by internal business IS. We recognize the characteristics of weakly structured processes here (see e.g., [19,34]). While the basic phases are largely clear, the individual execution of every phase in each construction project can vary considerably. The occurrence of errors or planning inconsistencies usually leads to significant delays and increasing costs.
To eliminate errors, it is often necessary to deviate significantly from the predetermined process plan. Errors require additional activities, the involvement of new partners, or even the involvement of the customers. In the DTO, these additional measures are recorded and their consequences for the overall process can be reviewed, for example, the impact on the schedule. The simulation property of the DTO is decisive when it comes to the prediction of consequences. Despite all deviations, the general objective will always be to return to the mainstream process and to continue other parallel activities as undisturbed as possible. In some cases, incidents can open new opportunities, for example, when a delayed timetable creates time margins elsewhere, or if customers accept deviations from the plan and take advantage of them. From a company's strategic perspective, it is important that the management always has an up-to-date overview of all ongoing construction processes via the DTO. For an accurate description of processes, it is important that process data are not extracted from information systems and reassembled to reconstruct the process execution, but derived from real-time data via the DTO (see the example in Fig. 1). The awareness of deviations increases the quality of the process description and allows efficient monitoring of all running construction processes as well as timely control of each individual case. Analysis of deviations is important to the enterprise, for example: At which points do deviations occur most frequently? How well is the company prepared for this? What are the costs involved? How did the customers react to this? To check the quality of a process and the information systems supporting it, it is necessary to understand why deviations occurred and how they were handled. From the perspective of EM, individual circumstances are less relevant than systemic causes of deviation, for example misunderstandings due to the poor performance of information systems in communicating with partners or customers.
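Once the DTO records each deviation with its process phase and consequences, the analysis questions above (where do deviations occur most frequently, at what cost?) reduce to simple aggregations. The following sketch is purely illustrative; the record fields, phases, causes, and amounts are invented, not taken from the case.

```python
# Hypothetical sketch: aggregating deviation records captured by the DTO
# to answer "where do deviations cluster?" and "what do they cost?".
from collections import Counter

deviations = [
    {"phase": "construction", "cause": "delivery delay", "extra_cost": 4000},
    {"phase": "construction", "cause": "planning inconsistency", "extra_cost": 9500},
    {"phase": "design", "cause": "client change request", "extra_cost": 1200},
]

def deviation_hotspots(deviations):
    """Process phases ordered by deviation frequency."""
    return Counter(d["phase"] for d in deviations).most_common()

def cost_by_phase(deviations):
    """Accumulated extra cost of deviations per process phase."""
    costs = {}
    for d in deviations:
        costs[d["phase"]] = costs.get(d["phase"], 0) + d["extra_cost"]
    return costs

hotspots = deviation_hotspots(deviations)   # construction leads with 2 deviations
costs = cost_by_phase(deviations)
```

The same aggregations, run continuously over live DTO data rather than a static log, are what distinguishes this monitoring from after-the-fact process mining.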

5 Digital Twins of the Organization and Enterprise Modeling

Enterprise models provide a multifaceted view of the organizational landscape, forming the basis for the corporate memory infrastructure. There is a broad spectrum of semantics that support enterprise modeling, ranging from modeling languages to formalized ontologies [41]. Exploiting the potential of a DTO requires an explicit specification of a conceptualization of the domain (ontologies; [17]) expressed in machine-understandable models. Ontologies provide a formal language to represent the structure of a domain and allow combining data from different sources. Several approaches have been suggested to describe enterprise ontologies, for example, TOVE (Toronto Virtual Enterprise) [14], the Enterprise Ontology [42], or the Context-Based Enterprise Ontology [26]. Besides enterprise ontologies, there are languages that focus rather on the modeling aspect of the enterprise. Common modeling languages for describing organizational contexts are the enterprise architecture modeling language ArchiMate and the Business Process Model and Notation (BPMN). ArchiMate provides a holistic view of the enterprise and allows modeling all layers from the strategy to the physical layer; BPMN focuses on the details of business processes. These languages are based on practical experience and have proven to be effective. However, ArchiMate and BPMN do not provide formalized ontologies, but a graphical notation including the conceptual description of the elements and their relationships. Therefore, a graph-based representation of the concepts and relations of these languages is required. Here, existing ontologies could be reused, such as ArchiMEO, a standardized enterprise ontology that includes the ArchiMate and BPMN conceptual models [18]. In general, we expect the following benefits from DTOs for EM:

1. Due to a continuous flow of data, the DTO shows the real-time performance of its actual counterpart.
2. Due to incorporating an EM, the DTO becomes itself a dynamic model.
3. Combining the previous two points, the DTO should serve as a conceptual and operational link between EM and the actual process, that is, it shows how the entities described in the models behave in reality.
4. Collecting data of past behavior, the DTO enables simulations and predictions of future process behavior under changed conditions.
5. Last but not least, connecting to users' actual work, the DTO can provide a comprehensive view of, and support sensemaking across, all involved information sources, legacy systems, and actually used information objects.

With a growing number of DTO-enabled processes, we expect the representation of the enterprise to become more and more complete and finally to provide the same holistic view that we now get from EM. A DTO could show not only the consequences of process deviations but also the resilience of the process against disturbances. Thereby, it may help us learn how to improve processes and show which information from which IS is required to achieve such improvement. This is not restricted to internal information but can include an increasing degree of information from customers, partners, and public sources. In contrast to an IS, a DTO digests new information sources more readily, a precondition for increasing process effectiveness, and does not primarily focus on process efficiency; the latter requires well-tuned interfaces provided by IS. The DTO is expected to provide information to enterprise architects, who plan the development of the IT landscape in the enterprise. This would allow them to identify those process aspects that significantly decrease efficiency and to think about alternatives for how IS may better support the processes. Thus, information from the DTO helps to judge whether and where IS require adaptation. With the rich context provided by cSpaces (such as semantically represented information objects, user interactions, concepts from knowledge graphs, their interrelations and relevancy), the DTO can provide novel measures and services. The identification of similarities between contexts, or anomalies within contexts, will give deeper insight into the process and its instances, instantaneously at any time. It can also give a more precise basis for prediction and simulation.
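One simple way to operationalize "similarities between contexts" is to compare the concept sets of two cSpaces with a set-overlap measure; a context that is dissimilar to all known contexts can then be flagged as an anomaly. The following sketch uses the Jaccard index under that assumption; the cSpace names and concept labels are invented for illustration.

```python
# Hypothetical sketch: comparing the concept sets of two cSpaces with
# the Jaccard index (0.0 = disjoint, 1.0 = identical concept sets).

def jaccard(a, b):
    """Jaccard similarity of two concept sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Invented concept sets of three work contexts.
cspace_roof_repair = {"roof", "leak", "RoofCo", "invoice", "schedule"}
cspace_roof_quote = {"roof", "RoofCo", "quote", "schedule"}
cspace_permit = {"permit", "authority", "form"}

sim_related = jaccard(cspace_roof_repair, cspace_roof_quote)   # high overlap
sim_unrelated = jaccard(cspace_roof_repair, cspace_permit)     # no overlap
```

In practice, richer context features (user interactions, relevancy weights) would replace the plain concept sets, but the principle of scoring pairwise context similarity stays the same.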
It is also worth noting that cSpaces are well suited to adjust model granularity: (1) they are typically organized as a hierarchy, such as a tree or a directed acyclic graph, allowing for abstraction or for gathering additional details when needed, and (2) their information feed of observed user activities may be increased or reduced (by adding or removing sensors) according to given requirements. A DTO may show not only how efficient the existing enterprise architecture is but also how resilient it is to internal or external disturbance: How well can the IS landscape deal with such disturbance? A DTO simulating certain scenarios gives an answer. Process mining, in contrast, does not help us to immediately reconfigure process instances; this requires an environment that is both process-aware and open to change. Moreover, the DTO's representation of a process instance allows us to introduce reconfiguration in individual process instances, based on the idea of value chain reconfiguration [32]. With a DTO, we would have reconfiguration already on the level of individual process instances. To explain this, imagine the case of the construction process. It is not unusual that mistakes occur which force the construction team to rethink their entire process. Of course, it would be possible to simply undo the mistake, but usually this is too expensive and would often disturb the timing of the whole process. What makes the situation different from changes in project management is that the case cannot be handled individually. The situation requires a temporary process reconfiguration with the aim of reintegration into the standard process. Thus, any reconfiguration must be considered against the background of the process as a whole. Reconfiguration can mean that new parties must be involved and that possible solutions to fix mistakes must be checked in terms of their consequences for the process. We can do this on the basis of a process representation allowing for simulation and prediction; a DTO is exactly that. To be agile, an organization must quickly perceive changes in its environment. A DTO is expected to do this because relevant changes will become apparent in the interaction with customers and partners during process execution, that is, in cSpaces, even if they are not reflected in the rather inflexible business information systems. In this way, changes also become perceptible to the DTO and can be analyzed by managers and enterprise architects. Agility as successful adaptation is more likely if changes are tested before implementation. Due to the growing speed of change, these tests must be performed rapidly and at low cost, in order to try as many scenarios as possible. This can be done by means of the DTO. Thus, the DTO becomes a proper instrument for increasing an enterprise's agility.
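The adjustable granularity of a cSpace hierarchy described in this section can be sketched as a tree whose view is cut at a chosen depth, so that details below the cut are abstracted into their parent context. The sketch below is a toy model under that assumption; the class name, context names, and depth-based cut are all invented for illustration.

```python
# Hypothetical sketch: cSpaces as a hierarchy, with model granularity
# adjusted by cutting the tree at a chosen depth.

class CSpace:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def view(self, max_depth):
        """Context names visible when abstracting below max_depth."""
        names = [self.name]
        if max_depth > 0:
            for child in self.children:
                names.extend(child.view(max_depth - 1))
        return names

# Invented hierarchy of work contexts in a construction project.
project = CSpace("construction project P-17", [
    CSpace("design phase", [CSpace("structural plans")]),
    CSpace("construction phase", [CSpace("roof repair"), CSpace("delivery notes")]),
])

coarse = project.view(1)   # phases only, details abstracted away
fine = project.view(2)     # including individual work contexts
```

A management overview would use the coarse view, while a process executor on site would work with the fine view; the second granularity lever mentioned in the text (adding or removing sensors) is not modeled here.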

6 Conclusions

Closing the gap between enterprise models and real-time business process execution is still a challenge [13]. Data and process mining may appear to be a way to close this gap. They provide information about past work activities that have become manifest in event logs of business information and other systems. This information can be used to enhance enterprise models. However, the time that passes between execution and model adaptation is long, and a considerable amount of context information gets lost. If we assume an accelerated demand for adaptation, as we observe in the digital economy, such a procedure appears difficult, even though process mining is already moving towards real-time processing [4]. A second drawback is the lack of a comprehensive picture of business processes as they actually take place. We may derive process deviations from event logs, but often it is less obvious what caused them. Data is widely spread over various systems and may not yield a consistent picture. From the combination of abstract enterprise models and work context data from cSpaces, we expect a continuous mutual update and a significant positive impact on both sides. The trend leading from digital twins as object representations toward more process-oriented representations has become apparent due to the focus on including more dynamics [40]. By design, digital twins aim at closing the gap between static model descriptions and operational dynamics. As Grieves and Vickers described it, they show the differences between actual and imagined behavior, indicating undesirable behavior or inadequate models [16]. Regarding the latter case, Fayoumi already referred to the role of future adaptive enterprise modeling platforms [13]. Moreover, we have to keep in mind that deviations often have a plausible cause: models do not meet the demands of the situation that people face in their work.

Digital Twins of an Organization for Enterprise Modeling

37

The DTO forms an intermediary between enterprise models and actual process execution. While the EM represents the actual backbone of the organization, the DTO connects this backbone with the flow of activities, information, and other resources in real time. Thus, the DTO serves (at least) two purposes: on the one hand, it can inform an adaptive enterprise modeling platform about relevant mismatches between models and actual process execution and trigger model updates; on the other hand, it can streamline processes, making it easier for process executors to return to standard process paths after detours. This kind of interactivity differentiates the DTO from process mining. Moreover, we expect that the use of data from external sources (e.g., customers' mobile apps) will more strongly influence process execution in the future. The data-providing applications may vary significantly and change frequently. DTOs may provide a suitable way to test the influence of including such data streams on a process before this inclusion is actually implemented. While we have already gained significant experience with the architecture of digital twins from their applications in manufacturing, it is still an open question what an effective, workable image of a process looks like. Process modeling methodologies give us a first insight, but they lack the details that belong to EM and real work contexts. The particular strength of the DTO consists in its holistic and comprehensible process representation that replaces the provision of various tables, graphs, and dashboards. In this way, it can prevent us from losing sight of the forest for the trees. In this area, we see a priority for future research in developing suitable DTO representations that help process executors in their daily work and provide a basis for adaptive EM.

References

1. Abecker, A., Bernardi, A., Hinkelmann, K., Kuhn, O., Sintek, M.: Toward a technology for organizational memories. IEEE Intell. Syst. 13(3), 40–48 (1998)
2. Abecker, A., Bernardi, A., Maus, H., Sintek, M., Wenzel, C.: Information supply for business processes: coupling workflow with document analysis and information retrieval. Knowl. Based Syst. 13(5), 271–284 (2000)
3. Augustine, P.: The industry use cases for the digital twin idea. In: Advances in Computers, pp. 79–105. Elsevier, Amsterdam (2020)
4. Batyuk, A., Voityshyn, V., Verhun, V.: Software architecture design of the real-time processes monitoring platform. In: 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP). IEEE, August 2018
5. Bazaz, S.M., Lohtander, M., Varis, J.: 5-dimensional definition for a manufacturing digital twin. Proc. Manufact. 38, 1705–1712 (2019)
6. Brander, S.: Refining process models through the analysis of informal work practice. In: Rinderle-Ma, S., Toumani, F., Wolf, K. (eds.) BPM 2011. LNCS, vol. 6896, pp. 116–131. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23059-2_12
7. Burke, B., et al.: Gartner top 10 strategic technology trends for 2020 (2019). https://iatranshumanisme.com/wp-content/uploads/2019/11/432920-top-10strategic-technology-trends-for-2020.pdf. Accessed 10 September 2020
8. Conklin, J.: Dialogue Mapping: Building Shared Understanding of Wicked Problems, chap. Wicked Problems and Social Complexity, pp. 1–25. Wiley, Hoboken (2005)
9. Davenport, T.H.: Process Innovation: Reengineering Work Through Information Technology. Harvard Business Press, Boston (1993)
10. Dengel, A., Maus, H.: Ein ‘Informationsbutler’ – mit Talent für smarte Daten (in German). DIGITUS 1, 22–27, February 2019. https://digitusmagazin.de/2019/02/ein-informationsbutler-mit-talent-fuer-smarte-daten/. Accessed 10 September 2020
11. Dodge, M., McDerby, M., Turner, M.: The power of geographical visualizations. In: Geographic Visualization, pp. 1–10. John Wiley & Sons Ltd, Hoboken (2008)
12. van Elst, L., Aschoff, F.R., Bernardi, A., Maus, H., Schwarz, S.: Weakly-structured workflows for knowledge-intensive tasks: an experimental evaluation. In: 12th IEEE International Workshops on Enabling Technologies (WET ICE 2003). IEEE (2003)
13. Fayoumi, A.: Toward an adaptive enterprise modelling platform. In: Buchmann, R.A., Karagiannis, D., Kirikova, M. (eds.) PoEM 2018. LNBIP, vol. 335, pp. 362–371. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-02302-7_23
14. Fox, M.S., Barbuceanu, M., Gruninger, M.: An organisation ontology for enterprise modelling: preliminary concepts for linking structure and behaviour. In: Proceedings 4th IEEE Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 1995). IEEE (1995)
15. Grieves, M.: Digital Twin: Manufacturing Excellence Through Virtual Factory Replication. White paper, Florida Institute of Technology (2014)
16. Grieves, M., Vickers, J.: Digital twin: mitigating unpredictable, undesirable emergent behavior in complex systems. In: Kahlen, F.-J., Flumerfelt, S., Alves, A. (eds.) Transdisciplinary Perspectives on Complex Systems, pp. 85–113. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-38756-7_4
17. Gruber, T.: Toward principles for the design of ontologies used for knowledge sharing. Int. J. Hum.-Comput. Stud. 43(5–6), 907–928 (1995)
18. Hinkelmann, K., Laurenzi, E., Martin, A., Montecchiari, D., Spahic, M., Thönssen, B.: ArchiMEO: a standardized enterprise ontology based on the ArchiMate conceptual model. In: Proceedings of the 8th International Conference on Model-Driven Engineering and Software Development. SCITEPRESS (2020)
19. Holz, H., Maus, H., Bernardi, A., Rostanin, O.: From lightweight, proactive information delivery to business process-oriented knowledge management. J. Univ. Knowl. Manag. 2, 101–127 (2005)
20. Jilek, C., et al.: Managed forgetting to support information management and knowledge work. KI German J. Artif. Intell. 33(1), 45–55 (2018). https://doi.org/10.1007/s13218-018-00568-9
21. Jilek, C., Schröder, M., Schwarz, S., Maus, H., Dengel, A.: Context spaces as the cornerstone of a near-transparent and self-reorganizing semantic desktop. In: Gangemi, A., Gentile, A.L., Nuzzolese, A.G., Rudolph, S., Maleshkova, M., Paulheim, H., Pan, J.Z., Alam, M. (eds.) ESWC 2018. LNCS, vol. 11155, pp. 89–94. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-98192-5_17
22. Jilek, C., Schröder, M., Novik, R., Schwarz, S., Maus, H., Dengel, A.: Inflection-tolerant ontology-based named entity recognition for real-time applications. In: Proc. of the 2nd Conf. on Language, Data and Knowledge (LDK 2019). OASIcs, vol. 70, pp. 11:1–11:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)
23. Kaivo-oja, J., Kuusi, O., Knudsen, M.S., Lauraeus, T.: Digital twins approach and future knowledge management challenges: where we shall need system integration, synergy analyses and synergy measurements? In: Uden, L., Ting, I.-H., Corchado, J.M. (eds.) KMO 2019. CCIS, vol. 1027, pp. 271–281. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-21451-7_23
24. Kerremans, M.: Market guide for process mining. Gartner Inc. (2018)
25. Lampasona, C., Rostanin, O., Maus, H.: Seamless integration of order processing in MS Outlook using SmartOffice: an empirical evaluation. In: ACM-IEEE International Symposium on Empirical Software Engineering and Measurement. ACM Press (2012)
26. Leppänen, M.: A context-based enterprise ontology. In: Abramowicz, W. (ed.) BIS 2007. LNCS, vol. 4439, pp. 273–286. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72035-5_21
27. Lindsay, A., Downs, D., Lunn, K.: Business processes – attempts to find a definition. Inf. Softw. Technol. 45(15), 1015–1019 (2003)
28. Marmolejo-Saucedo, J.A., Hurtado-Hernandez, M., Suarez-Valdes, R.: Digital twins in supply chain management: a brief literature review. In: Vasant, P., Zelinka, I., Weber, G.-W. (eds.) ICO 2019. AISC, vol. 1072, pp. 653–661. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-33585-4_63
29. Maus, H., Schwarz, S., Dengel, A.: Weaving personal knowledge spaces into office applications. In: Integration of Practice-Oriented Knowledge Technology: Trends and Prospectives, pp. 71–82. Springer, Berlin (2013)
30. Motahari-Nezhad, H.R., Swenson, K.D.: Adaptive case management: overview and research challenges. In: 2013 IEEE 15th Conference on Business Informatics (CBI), pp. 264–269, July 2013
31. Niemi, E.: Enterprise architecture benefits: perceptions from literature and practice. In: Proceedings 7th International Business Information Management Association (IBIMA) Conference on Internet & Information Systems in the Digital Age. IBIMA (2006)
32. Normann, R., Ramirez, R.: From value chain to value constellation: designing interactive strategy. Harvard Bus. Rev. 71(4), 65 (1993)
33. Riss, U.V., Grebner, O., Taylor, P.S., Du, Y.: Knowledge work support by semantic task management. Comput. Ind. 61(8), 798–805 (2010)
34. Riss, U.V., Rickayzen, A., Maus, H., van der Aalst, W.M.: Challenges for business process and task management. J. Univ. Knowl. Manag. 2, 77–100 (2005)
35. Rosemann, M.: Structuring in the digital age. In: Bergener, K., Räckers, M., Stein, A. (eds.) The Art of Structuring, pp. 469–480. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-06234-7_44
36. Rosen, R., von Wichert, G., Lo, G., Bettenhausen, K.D.: About the importance of autonomy and digital twins for the future of manufacturing. IFAC-PapersOnLine 48(3), 567–572 (2015)
37. Sandkuhl, K., et al.: From expert discipline to common practice: a vision and research agenda for extending the reach of enterprise modeling. Bus. Inf. Syst. Eng. 60(1), 69–80 (2018)
38. Schröder, M., Jilek, C., Hees, J., Dengel, A.: Towards semantically enhanced data understanding. CoRR abs/1806.04952 (2018)
39. Shafto, M., et al.: Modeling, simulation, information technology & processing roadmap (2012)
40. Tao, F., Cheng, J., Qi, Q., Zhang, M., Zhang, H., Sui, F.: Digital twin-driven product design, manufacturing and service with big data. Int. J. Adv. Manufact. Technol. 94(9–12), 3563–3576 (2017)
41. Thomas, O., Fellmann, M.: Semantic process modeling – design and implementation of an ontology-based representation of business processes. Bus. Inf. Syst. Eng. 1(6), 438–451 (2009)
42. Uschold, M., King, M., Moralee, S., Zorgios, Y.: The enterprise ontology. Knowl. Eng. Rev. 13(1), 31–89 (1998)
43. Vergidis, K., Turner, C., Tiwari, A.: Business process perspectives: theoretical developments vs. real-world practice. Int. J. Product. Econ. 114(1), 91–104 (2008)

Modeling Products and Services with Enterprise Models

Kurt Sandkuhl1, Janis Stirna2(B), and Felix Holz1

1 University of Rostock, Rostock, Germany
{kurt.sandkuhl,felix.holz2}@uni-rostock.de
2 Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden
[email protected]

Abstract. Products and services are essential to an enterprise's operations and hence they need to be designed, developed, and delivered in congruence with the enterprise's business strategy and design. This paper investigates how to identify relevant aspects of products and services to be added to enterprise models by extending existing Enterprise Modeling (EM) languages for product/service modeling. The proposal is to decide on the relevance for EM based on “touch points”. The proposal is applied to extending the 4EM method with a Product/Service Model and is based on literature analysis in the area of product and service modeling and on an industrial case study.

Keywords: Enterprise Modeling · Product modeling · Digital transformation

1 Introduction

The business purpose of an enterprise in general is to create value for its clients and revenues for its owners or stakeholders by providing products or services that meet the clients' demands and required environmental, regulatory, and market standards. Thus, products and services are a core element of an enterprise's operations and need to be designed, developed, and delivered in congruence with the enterprise's business strategy and design. Consequently, they should be represented in enterprise models if the modeling purpose relates to them. This is particularly important in application domains with changing business models due to digital transformation (DT) [1]. A typical case of DT is that traditional physical products, even with well-established market positions, change due to the creation of IoT solutions or are extended into offerings in combination with service solutions [2]. Examples are energy management systems for buildings, smart manufacturing systems, and security monitoring and management systems. In this context, understanding dependencies between products and services, processes required for product/service delivery, as well as responsibilities and roles for product/service components, is important for designing future organizational structures. This is a traditional application field for Enterprise Modeling (EM). However, most EM languages do not offer explicit concepts for product and service modelling. There are many approaches for modelling products in the field of mechanical engineering and production systems, but these approaches often address aspects of (physical) products (geometric shape, material specification, surface design, mechanical characteristics, etc.) that are outside the scope of EM projects. Integrating these approaches with EM methods has not been widely done and is considered challenging because of different methodological and tooling principles. In the context of digital transformation, many companies seek to offer advanced combinations of products and services, which requires explicit documentation and analysis of their and their customers' business, products, and services, as well as digital twins as intermediaries for achieving tight integration between business operations and efficient runtime management [3]. The focus of this paper is on the investigation of how to identify relevant aspects of products and services to be added to enterprise models, i.e. we take the perspective of extending existing EM languages for product/service modeling. Our proposal is to decide on the “relevance” for EM based on two aspects, which we call “touch points”:

• the relationships of product/service-related concepts to existing concepts in the EM language under consideration
• the requirements from industrial practice for expressing links between operations and products/services.

We use the extension of 4EM for product/service modelling as an example. The rest of the paper is structured as follows. Section 2 presents background information from EM and DT. Section 3 describes the research approach used. Section 4 analyzes existing work in product/service modeling. Section 5 investigates requirements from an industrial case study for product modeling. Section 6 presents the touch point approach and the extension of 4EM for product/service modeling. Section 7 provides concluding remarks and briefly explains the next steps of this research project.

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved. J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 41–57, 2020. https://doi.org/10.1007/978-3-030-63479-7_4

2 Background to EM and Digital Transformation

2.1 Enterprise Modeling and 4EM

EM is the process of creating an enterprise model that captures all the enterprise's aspects that are required for a given modeling purpose. A key aspect of EM is the integrated view on the various aspects of the enterprise. An enterprise model therefore consists of a set of interlinked sub-models, each of them focusing on a specific aspect like processes, goals, concepts, or business rules. Concerning applicability, EM is applicable to any organization, public or private, or a part thereof. The rest of this section gives an overview of the 4EM method [4]. 4EM is a representative of the Scandinavian strand of EM methods. At its core is participatory stakeholder involvement, and the modeling process is usually organized in the form of facilitated workshops. 4EM shares many underlying principles of the so-called multi-perspective approaches that recommend analyzing organizational problems from several perspectives, such as vision (goals, objectives), data (concepts, attributes), business processes (processes, tasks, activities), organizational structure (actors, roles, organizational units), etc. Examples of other approaches following the multi-perspective principle are Active Knowledge Modeling [5], ArchiMate [6], and MEMO [7]. The 4EM modeling language consists of six sub-model types, each of them focusing on a specific aspect or perspective of the enterprise: goals, business rules, concepts, business processes, actors and resources, as well as Information System technical components. 4EM distinguishes relationships with which the modeling components are related within a sub-model from relationships with components of other sub-models. The latter relationship type is called an inter-model relationship. Inter-model relationships are used to trace decisions, components, and other aspects throughout the enterprise model. For example, the motivation for why a certain business process exists in an enterprise is established with an inter-model relationship to a motivating goal. 4EM also supports integration with other modeling languages and methods by allowing the definition of new inter-model relationships between the 4EM components and components of the modeling language to be integrated. This is the approach for the inclusion of the product sub-model in 4EM discussed in Sect. 6.

2.2 Digital Transformation

In scientific literature, digital transformation is often discussed in the general context of digitalization and considered the most complex digitalization phase [8]. The focus here is on the disruptive social and economic consequences which, due to the potential of digital technologies to substantially change markets, lead to new technological application potentials and the resulting changes in economic structures, qualification requirements for employees, and working life in general. More concretely, [9] proposed to distinguish between transformation of the value proposition and transformation of value creation when analysing and planning digital transformation. These two “dimensions” can be divided into different steps of digitalization, each forming the prerequisite for the next step.
In previous work, we proposed the following steps [10]: In the operations dimension, the steps can be defined as (1) replacing paper documents with digital representations, (2) end-to-end automated processing of these digital representations within a process, and (3) integration of all relevant processes within the own enterprise and with partners. In the product dimension, the starting point is physical products without any built-in information or communication technology (ICT). The digitization steps are (1) enhancing the product/service by providing complementary services (maintenance information, service catalogs) without actually changing it, (2) extending the functionality and value proposition of products by integrating sensors and actuators, and (3) redefining the product or service, which leads to a completely new value proposition or offering. In general, a completed digital transformation requires completion of all three steps in both dimensions.
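The two-dimensional step model above can be captured in a small data structure, so that an organization's position can be recorded and its transformation status checked. The following sketch is only an illustration; the dictionary layout, the paraphrased step labels, and the completeness check are assumptions, not part of the method in [10].

```python
# Hypothetical sketch: recording an organization's position in the two
# digitalization dimensions and checking for a completed transformation.

DIMENSIONS = {
    "operations": [
        "digital representations of paper documents",
        "end-to-end automated processing",
        "integration of all relevant processes",
    ],
    "product": [
        "complementary digital services",
        "integrated sensors and actuators",
        "redefined value proposition",
    ],
}

def transformation_complete(reached):
    """True when all steps are completed in both dimensions."""
    return all(reached.get(dim, 0) >= len(steps)
               for dim, steps in DIMENSIONS.items())

# Example: operations fully digitalized, product dimension at step 2.
status = {"operations": 3, "product": 2}
done = transformation_complete(status)
```

Because each step is a prerequisite for the next, a single number per dimension (the highest step reached) suffices to describe the position.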

3 Research Approach

This study is part of a research program aiming to provide methodological and tool support for organizations in dynamic contexts, e.g. supporting digital transformation, capability management, and product and service design. The 4EM method has


K. Sandkuhl et al.

been used to support parts of methods for capability management [11]. This work reports on the extension of 4EM with specific support for product and service design. It follows the five stages of Design Science Research (DSR) [12], namely problem explication, requirements definition, design and development of the design artifact, demonstration, and evaluation. The need to extend 4EM has been identified in the projects in which it has been applied over the years; hence, in DSR terms, the problem did not need additional investigation. This study covers the DSR steps of defining requirements for the 4EM extension, constructing it, and demonstrating it. The requirements have been elicited from an analysis of existing literature on product and service modeling as well as from an industrial case study. The initial problem definition and requirements analysis led to the conclusion that the 4EM extension should address the key aspects of product and service modeling rather than the entire spectrum of this area. Hence, the research question for the case study was which central concepts of product modeling need to be integrated with EM from a use-case perspective. Based on this research question, we identified industrial cases of digital transformation that deal with product and service development, i.e. we performed qualitative case studies in order to obtain relevant and original data (see Sect. 5). A qualitative case study is an approach to research that facilitates the exploration of a phenomenon within its context using a variety of data sources. Such a multi-perspective view ensures a deep understanding of the subject. The case study used three different perspectives represented by the sources of data: we analyzed documents about the companies' business models, products, and manufacturing processes; we conducted workshops targeting digital transformation and DTs as part thereof; and we interviewed domain experts.
Yin [13] differentiates between explanatory, exploratory and descriptive case studies. The case study in Sect. 5 is considered descriptive, as it describes the phenomenon of initiating product and service development and the real-life context of digital transformation.

4 Approaches to Product Modeling

Based on a literature analysis, we identified approaches for modelling products and services from different fields, which are presented in Sects. 4.1 to 4.5. In Sect. 4.6, we extract the perspectives on product/service models visible in these approaches. The perspectives are input for the identification of touch points in Sect. 6.

4.1 Feature Model

A feature model describes a product from the customer's point of view by recording functionalities, characteristics, and quality features that are important for the customer, and their interdependencies. Feature models are frequently used in software development and in the corresponding description of product lines or system families. The semantics of feature models have both a descriptive and an informative function. According to Riebisch et al. [14], feature models are supposed to close the gap between the requirements and the solution. For example, they are used to define products and their variants or configurations, to describe different possibilities or characteristics of a product line, to design new products, and to add new features to an existing product line.
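To make the idea concrete, a feature hierarchy with mandatory/optional children and a "requires" cross-tree constraint can be encoded compactly. All feature names and the API below are illustrative, not taken from [14]:

```python
from dataclasses import dataclass, field

# Minimal feature-model sketch: a feature hierarchy plus one kind of
# cross-tree constraint ("requires"). Names are hypothetical.

@dataclass
class Feature:
    name: str
    mandatory: bool = True                              # "necessary" vs "optional"
    children: list["Feature"] = field(default_factory=list)
    requires: list[str] = field(default_factory=list)   # prerequisite feature names

def selected_valid(root: Feature, selection: set[str]) -> bool:
    """Check a configuration: mandatory children of selected features must be
    selected, and every 'requires' constraint must be satisfied."""
    def walk(f: Feature) -> bool:
        if f.name not in selection:
            return True                                  # unselected subtree imposes nothing
        if any(dep not in selection for dep in f.requires):
            return False
        return all(
            (not c.mandatory or c.name in selection) and walk(c)
            for c in f.children
        )
    return root.name in selection and walk(root)
```

For example, a pump product line with a mandatory motor and an optional display that requires a sensor rejects any configuration selecting the display without the sensor.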

Modeling Products and Services with Enterprise Models


A product is described by means of its properties, that is, features. Features can be divided into further features to create a hierarchy. Not every feature has the same meaning for the product, which results in the following categories of features:

– Functional features describe the behavior of the product or the way the user can interact with it. The functions of the product can be seen as both static and dynamic functional features.
– Interface features describe the conformity of the product with a standardized system.
– Parameter features are countable, listable properties of the product, which can be non-functional or result from the application context.
– Concept features represent an additional category to better structure a feature model. The root of a feature model always represents a concept feature; this abstract feature contains a class of solutions.

To adequately describe a certain characteristic of a product family, i.e. a concrete product, the individual relations between the features must be represented. The connections between features can be hierarchical or non-hierarchical. In addition to "and", "or", "optional", and "necessary" relationships, features can be related to each other, for example, by means of the following categories:

– Sub-feature relationship: responsible for constructing the hierarchy that represents feature associations and affiliations. This hierarchical relationship can also include the semantics of a prerequisite or exclusion relationship.
– Refine relationship: leads from a feature to a more detailed sub-feature and can express a complete or partial refinement.

4.2 Bill of Material

A Bill of Material (BoM) of a product, according to [15], specifies how a product is constructed from its sub-components. A BoM is characterized by its structured representation for the identification and selection of products.
Its composition sequences can be used as a rough construction plan to identify bottlenecks or to support design and service decisions. Especially with a high number of different variants of a product, it can make sense for a company to know which part is in which product, e.g. to check and replenish the stock of these parts or products (material planning) or to use the implicit characteristics to drive forward product and market analysis. The BoM model maps a product by placing its components in a hierarchical tree structure. A product consists of components that, in turn, can also consist of components. The root of the tree structure is the top product: the end product, which is not part of any other product and therefore has no parent node. The leaves of the tree structure are called primary products; they cannot be split into further components and are mostly purchased from third parties. Model components that have both child and parent nodes are called subassemblies and correspond in the real world to components to be assembled. There is only one relation type in the BoM model, specifying a consists-of/belongs-to meaning: a parent node consists of its child nodes; vice versa, the child nodes belong to the parent node or are part of it.
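A minimal sketch of such a BoM tree, with the quantity-per value of [15] on each edge, could look as follows; part names are hypothetical:

```python
# Sketch of a Bill of Material as a tree: each node is a top product,
# subassembly, or primary product; edges carry a quantity-per value.

class BomNode:
    def __init__(self, name: str):
        self.name = name
        self.parts: list[tuple["BomNode", int]] = []  # (component, quantity-per)

    def add(self, component: "BomNode", quantity: int) -> "BomNode":
        self.parts.append((component, quantity))
        return self

def total_quantities(root: BomNode) -> dict[str, int]:
    """Flatten the tree: total count of each part needed per end product."""
    totals: dict[str, int] = {}
    def walk(node: BomNode, multiplier: int) -> None:
        for component, qty in node.parts:
            totals[component.name] = totals.get(component.name, 0) + multiplier * qty
            walk(component, multiplier * qty)
    walk(root, 1)
    return totals
```

A pump with one housing and two impellers, each impeller having three blades, thus needs six blades in total, which is exactly the material-planning use described above.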


Two further values complete the BoM relationship: the number of components required for the higher-level product (quantity-per), and a sequence number that determines the order in which the components of a level are assembled into the higher-level product [15].

4.3 Core Product Model 2

The Core Product Model 2 (CPM2) according to [16] is an abstract model with generic semantics. The semantics are domain independent and can be extrapolated to a concrete implementation model. CPM2 supports Product Lifecycle Management (PLM) by providing product information for the different phases of product design; thus, some model components serve different purposes at different stages. In early concept phases of product design, when the shape of the product is not yet fully developed, the functional aspect of the model supports the identification of requirements and functions and the allocation of the expected functions of the product. According to [16], the following object classes can be identified within CPM2:

– The artifact is a primary concept of product design and the central concept of CPM2; it represents a distinct physical entity within a product, be it a component, an assembly, or a part of the product. Using artifacts and sub-artifacts, a hierarchy can be constructed.
– A feature in CPM2 is a part of the shape of the artifact which has a specific function. For example, an artifact can provide design, analysis, or manufacturing features, each determined by its respective function. Furthermore, features can be derived from other features. A feature can also be considered an aggregate of function and form.
– The function describes what the artifact should do. It can also be considered an intentional or expected behavior (intended behavior).
– The form of the artifact describes the proposed solution of a (design) problem.
It is specified by its function and represented by its geometry and material.
– The behavior describes, in contrast to the function, what the artifact (or the form) actually does. By identifying the inputs and outputs, the behavior of the form can be evaluated and compared with the expected behavior.
– The geometry describes prominent or visible aspects of the artifact by specifying and refining the shape. Geometry also covers the arrangement of material in the form.
– The material is the description of the internal composition of the artifact.
– The specification represents a collection of relevant information about the artifact. The relevant aspects depend on customer requirements and technical prerequisites. A specification is composed of requirements or prerequisites.

These objects can be related to each other by the following relation types:

– A constraint is a specific common property of a group of entities.
– Entity association describes a "membership" relationship between core entities of the product.
– Usage is understood as a mapping function from one entity to another entity.
– The path relationship (trace) shows dependencies between particular entities.
– Decomposition: all object classes have their own independent decomposition hierarchy. This means that instances of the same class can form a part-of relationship.
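The function/behavior distinction is the crux of CPM2. A compressed sketch of some of these classes makes it tangible; attribute names are paraphrased and this is an illustration, not the normative CPM2 schema:

```python
from dataclasses import dataclass, field
from typing import Optional

# Compressed sketch of CPM2 core classes: an artifact aggregates a
# function (intended behavior) and a form (geometry and material),
# while the observed behavior records what the form actually does.
# Sub-artifacts give the part-of decomposition hierarchy.

@dataclass
class Function:
    intended_behavior: str

@dataclass
class Form:
    geometry: str
    material: str

@dataclass
class Artifact:
    name: str
    function: Optional[Function] = None
    form: Optional[Form] = None
    observed_behavior: Optional[str] = None
    sub_artifacts: list["Artifact"] = field(default_factory=list)

    def behaves_as_intended(self) -> bool:
        """Compare what the form actually does with what it should do."""
        return (self.function is not None
                and self.observed_behavior == self.function.intended_behavior)
```

Evaluating `behaves_as_intended` on a hypothetical valve artifact mirrors the comparison of expected and actual behavior described above.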


4.4 Service Meta-Model

For more than 20 years, there has been a discussion about "servitization" in manufacturing industries, as many manufacturers increasingly combine their products with services and solutions [17]. Closely related to servitization, and equally relevant for modeling products and services, is the research on product-service systems (PSS), such as engineering methodologies [18] and modeling methods and languages for PSS [19]. This research confirms the importance of supporting the modelling of services. The term "service" can be defined in many fundamentally different ways; [20] discusses some of the problems and contradictions resulting from different definitions. In research and industry, service-dominant logic (SDL) receives much attention. According to SDL [21], a service is characterized by, among other aspects, value co-creation. This means that "value is cocreated by multiple actors, always including the beneficiary" [21] and "value cocreation is coordinated through actor-generated institutions and institutional arrangements" [21]. This confirms our view that a model of an enterprise's services should not be isolated but interlinked with other EM perspectives. The meta-model presented by [22] builds upon the service blueprinting approach [23]. It integrates its concepts into a formal meta-model and adds concepts for (IT) self-services. The following classes or concepts are present in a service model of [22]:

– "Actions" can be used to represent the sequences of actions (action flows) of different categories of actors in a service process.
– "Actor categories" offer the possibility to represent the communication flow and the four horizontal lines of a service blueprint separating the actor categories (line of interaction, line of visibility, line of internal interaction, and line of implementation).
Actor categories are customer, onstage personnel (i.e., service personnel with face-to-face contact with the customer), backstage personnel (i.e., service personnel or systems invisible to the customer), support personnel, and management (i.e., service managers responsible for planning, managing, and controlling).
– "Capabilities" reflect what is required to perform a service (input perspective).
– "Service" represents the actual service (output perspective).
– "Props and physical evidences": the tangibles with which the customer interacts during the service process.
– "Fail points" mark actions that can fail when performed.
– "Constraints" define conditions of services for customer groups. Before provision of the service, the connected constraint rules are checked.

4.5 STEP

The STEP approach implements the ISO 10303 standard [24]; it combines several concepts of product modeling and specifies a uniform format for the exchange of product information. The exchange format primarily supports computer-aided systems by standardizing product model data and consistent data management. STEP has a wide range of applications and is used for different types of products, among others electromechanical products, fiber compositions, ships, and architectural projects; product life cycle stages are supported as well. Products can be mapped, structured, and stored before, during, or after the production process. The native information modelling language for STEP is EXPRESS. EXPRESS provides generic entities with attributes, rules, and restrictions, which allows generally valid data structures to be formed. The generic entities are independent of the specific application; certain domains can use the application-specific resources integrated in STEP. Depending on the purpose of the model, new model components can be added without causing inconsistencies in data exchange. STEP allows a precise and useful representation of products that is tailored to the product at hand. However, the representation is mostly technical, making it rather unsuitable for interpersonal communication.

4.6 Perspectives in Product/Service Models

The analysis of the approaches presented in Sects. 4.1 to 4.5 revealed different ways to structure the information included in existing product modelling approaches, e.g. product lifecycle phases (from target setting to end-of-life recycling), stakeholder concerns (user, designer, producer, owner, etc.), or aspects of product realization (function, geometry, material, surface, etc.). We decided to apply the product realization phases as an aid to sharpen the view on the actual product information. Table 1 shows the product-related features identified this way.

Table 1. Product-related features extracted from approaches to product modeling

| | Requirements | Design | Construction | Production | Operation |
|---|---|---|---|---|---|
| Feature Model | Functional and quality features, characteristics | Interface and parameter features | | | |
| Bill of Material | Product and sub-components, variants | Construction plan | Composition sequence | | |
| Core Product Model 2 | Construct hierarchy, intended behavior, technical prerequisites | Shape, form, geometry, material | Manufacturing features | | |
| Service meta-model | Service catalog, customer groups, constraints | Action flow for service, fail points, capabilities | Refined action flows, communication flow, other actor categories, props | Actions at fail points, solutions | |
| STEP | Functional and quality features, assignment to components | 2D/3D geometry, shape, surface, material | Composition, sub-components, rules | Assembly sequence, production system spec., tool spec. | Maintenance spec., operations flow |


5 Industrial Case Study

Section 4 presented a view on product/service modeling dominated by the knowledge visible in scientific publications. This section adds a perspective from industrial practice; our aim is to indicate which perspectives from Sect. 4.6 are of visible importance in industrial cases. The use case company is a medium-sized manufacturer of different kinds of pumps and pumping technologies, e.g. swimming pool pumps, sewage pumps, industrial pumps for heavy environments, and ship pumps. The company is well established on the European and US markets and has a market share of more than 70% in some market segments. Although the overall business is stable and developing well, the management of the company decided to explore new service opportunities and business models applying digital technologies. The idea of the company's product management is to integrate sensors into pumps and transmit the information to the back office by using a data communication node. This general idea can be classified as enhancing pumps into smart connected products [25] or Internet-of-Things (IoT) devices. From a research perspective, the opportunity for data collection in a qualitative case study emerged when the company agreed to start a study on digital transformation options. The study so far included two meetings at the company's headquarters and several interviews by phone or Skype. The first meeting was directed at the top management of the company, with a focus on clarifying general steps of digital transformation, possible procedures, and aspects of the enterprise to be considered. The second meeting was a workshop directed towards identifying concrete digital transformation options and potential ways of implementation. For the research question given above, this workshop and the preparatory interviews were most relevant and are the focus of the following.
One purpose of the preparatory interviews for the workshop was to understand the current state of IoT and sensor integration in the company's products. The key expert here was the research and development manager. Before the interview, guidelines consisting of a list of questions and aspects to explore were prepared. Notes were taken during the interview, which was conducted by one researcher and took 30 min. In preparation for the digital transformation workshop, the participants were selected to include all relevant departments of the company (product development, production, marketing, sales & distribution, and services) and members of top and middle management. All eight participants were informed beforehand about the purpose of the workshop and the importance of their participation. The workshop included the collection and clustering of new product and service ideas from the participants, the joint definition of priorities, and the development of a business model prototype for the top three product/service ideas. The content of the workshop was documented through photos of the collected ideas and clusters, written documentation of the business model prototypes, and notes taken during the workshop to capture additional information regarding ideas and business models. In the context of this paper, the documented interview and workshop content were analyzed. The product manager stated as one of the motivations for the workshop:


“Our device linking a pump to the Internet is nearly ready. It captures data and puts them into our own cloud. So far, we only capture data about malfunction or energy consumption that is anyhow visible on the pump’s display. But we do not have a good idea, how to do business with this data. And we probably need more sensors.” Among the top innovation ideas were (a) smart pumps and (b) pumping as a service, both of which the workshop participants related to the need to combine products and services. When discussing the smart pump, the sales representative explained: “We understand that our bigger customers want to have control if our pumps do what they are supposed to do during operations. Some of them call it the digital twin. This would help us to sell pumps to them. We have to use or develop sensors that deliver this kind of information.” Pumping as a service basically aims at selling the functionality of the pump instead of the pump as a physical device, which would lead to a service agreement where the company is paid for pumped cubic meters or hours of pumping. One of the participants remarked on this idea: “For this, we need full control what is happening with the pump. So, we need something like continuous monitoring which raises alarms if something goes wrong.” When developing the business model prototype for pumping as a service, most of the discussion time was spent on organizational issues within the company: “where does all the information from our pumps arrive, how do we make sense out of it and how do we organize the reaction?” For the smart pumps, the discussion was more about “how do we integrate our pumps in the control system of our customer and what kind of sensors do we need?” Furthermore, the development department mentioned: “We would need to know what technical basis our customers use for their control systems and the interfaces we have to provide. But most of our customers have no answers to these questions.
Sometimes we get the impression that they simply don’t know.” In principle, all phases of product development and all product-related concepts identified in Sect. 4.6 have some relevance for the case study, as the identified new business models and the resulting products and services will have to pass through a complete product development lifecycle. However, some concepts were more central and intensively used in the case study:

– The characteristics of the products/services of relevance for business models (features, requirements, etc.)
– The parts, composition and variants of products/services (components, sub-products, etc.)
– The relation of actors and roles as well as organizational processes to products/services

6 Extension of 4EM for Product and Service Modeling

6.1 The Approach of “Touch Points”

4EM consists of six sub-models for modeling goals, concepts, business rules, business processes, actors and resources, and information system technical components and


requirements. They are designed to be sufficiently general to capture and represent most modeling problems related to organizational designs. The study reported in [26] concluded that, given their generality, their expressiveness for product and service knowledge is limited. Hence, as discussed in the previous sections, this was deemed insufficient, as more and more integrations of products and services with the rest of the organization's design need to be supported, especially in the context of digital transformation. The use case requirements also suggest that 4EM should be extended for this particular modeling purpose rather than modeling the enterprise design issues with 4EM and the product-related issues with one of the existing modeling languages (see Sect. 2). In this context, one of the most difficult decisions was the level of integration of the product/service sub-model into 4EM and, in particular, the extent of product or service information to be documented with the constructs of the sub-model. We investigated the following options:

– For a domain-specific modeling language, the requirements of the domain are decisive. Thus, targeting the 4EM language at the domain of manufacturing would probably have resulted in a different set of product modeling concepts than for the domain of banking, to take one example. However, 4EM is meant to be application-domain independent, which is why we did not specialize it for a particular application domain.
– In a design- or construction-oriented mindset, we would have used a number of real-world cases and their business requirements to derive requirements for product modeling and define the meta-model accordingly. However, the selection of cases, their domains, and their modeling purposes would have affected or even determined which requirements become visible.
– For the support of specific modeling purposes, such as modeling for digital transformation, the requirements of that purpose are decisive.
– Extensive incorporation and use of approaches from other multi-perspective EM languages would violate the minimality principle of the 4EM modeling language.

Finally, the decision was made to use what we term the “touch point” centric approach. At its core is the principle of only including concepts in the product sub-model that serve to represent aspects or characteristics of products where the other perspectives (modeled in the 4EM sub-models) “touch” products. The “touches” can be operationalized as follows:

– A relationship type of a sub-model that can be used to connect a concept of the sub-model with a product-related concept indicates a touch point.
– If a product-related concept complements the concepts of a sub-model, this indicates a touch point.
– If a semantic relationship between a product-related concept and a concept of the sub-model can be defined that represents an aspect relevant for the enterprise, this indicates a touch point.

Using the above operationalization, Table 2 shows the touch points for the different 4EM sub-models. The touch points will be modeled with 4EM inter-model links.

Table 2. The touch points between 4EM sub-models and the Product/Service Model

| 4EM sub-model | Relationship type to connect product-related concept | Product-related concepts complementing 4EM | Semantic relationship relevant for an enterprise |
|---|---|---|---|
| Goals model | “motivates” or “supports” (e.g. goal motivates feature, feature supports goal; product/component supports goal) | Features, functions and requirements | Features, functions and requirements support goals or solve problems of an enterprise |
| Business rules model | “triggers” delivery of product or execution of service | Quality aspects or regulations that are subject of rules | Rules that trigger delivery of services or products |
| Concepts model | “related_to” (e.g. a concept that relates to and specifies products, services, components or features) | Attributes of products, services or components | Specification of concepts related to products, services, components or features |
| Business process model | “related_to” (e.g. a process that defines activities regarding products, services, components or features) | The products, services, sub-components that were subject of activities in a process | Products, services, components or features are developed and delivered in processes |
| Actors and resources model | “responsible_for” (e.g. actor responsible for product or feature) | Actors’ responsibility for products, services, sub-components | Actors are responsible for products, services, components or features; resources required for delivery |
| Technical components and requirements model | | Product requirements when related to IT components in products | |

6.2 4EM Product/Service Sub-Model

The Product/Service Model (PSM) describes what is offered to the enterprise’s customers in terms of (physical or digital) products and services, as well as the dependencies that exist between them. The components of the PSM can be used to describe essential characteristics of products and services in terms of their decomposition structure (i.e., products, components) and the value proposition for the customer, expressed as features. Through links to and from the other sub-models, the PSM shows which processes and actors are involved in the


value creation and administrative activities for the different products and services. The components of the PSM are related to each other through unidirectional semantic links, of which the three main types are part_of, is_a and requires. The component types of the Product/Service Model are the following: product/service (with the possibility to specialize into product or service), component, and feature.

Product/Service: The model component Product/Service is used if either a combination of products and services is modeled or if it is not yet decided whether the phenomenon under consideration will become a product or a service. In the case of a combination of product and service, the name of the product/service should indicate this by including the product name (a noun) and a verb phrase for the service, for example “ventilation system and providing maintenance”. Services are intangible and performed by the enterprise only if the customer requests them; the customer usually participates in service delivery. Services can be divided into Components that can themselves be Services and (optionally) can include Products. Each Service (and every Component of a Service) should be associated with a Business Process that specifies how the service is provided. A Product is produced by an enterprise and offered to its customers in exchange for a (usually monetary) compensation. Products can be physical, digital, or virtual (see also the introduction to this section). Products can be divided into Components that can themselves be Products and (optionally) can include Services. Each Product (and every Component of a Product) should be associated with an Actor responsible for the product/component and may be associated with a Business Process that specifies how it is manufactured.
When modeling products and their decomposition, it may be relevant to distinguish between different variants of a product, which can be done in 4EM in two ways:

– If the variants originate from the use of different components in the product, they can be modeled with the common product name in the root and the different variants as sub-trees below this root, where the top element in each sub-tree carries the name of the variant.
– If the variants originate from a configuration of the product that is not reflected in a difference in components, the variant attribute of the product should be used to describe the variants.

A Component is a distinguishable part of a product (or service) which is supplied by a partner or produced in the enterprise. If produced in the enterprise, a component should be associated with an Actor responsible for the component and may also be associated with a Business Process that specifies how it is manufactured. If a component is supplied by a partner, it should be associated with an external actor or external process used for the procurement of the component. A Feature is a characteristic or function of a product (or service) which is of value to the customers (or target groups) using the product (or service). Thus, features are also distinguishable characteristics of the products and services. Features should be used to model

– what the value propositions to the customers are; these have to be taken into account when designing the product/service or marketing it


– what the characteristics of the product/service are that create value for customers and are used, or might be used, in creating product variants or bundling/packaging products/services with other products/services.

The relationship types between the components of the PSM are: part_of for specifying aggregation, is_a for specifying specialization, and requires to specify that the implementation of a feature requires a certain product, service or component. The meta-model of the PSM is shown in Fig. 1. For the sake of brevity, only one inter-model (IM) link is depicted: a goal from the Goals Model (GM) motivates PSM modeling constructs (GM concepts in grey).

Fig. 1. Meta-model of the PSM

Figure 2 shows an example of a product (PS21) and its components using the part_of relationship, as well as features that require the components. Dashed lines are used for inter-model relationships, e.g. Goal 2 motivates the need to produce PS21 and Feature 3 supports Goal 2.1.
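The relationships in this example can also be paraphrased as labeled edges. The identifiers PS21, Goal 2, Goal 2.1 and Feature 3 follow the figure description; the component names and the API are hypothetical:

```python
# Sketch of the Fig. 2 example as (source, link_type, target) triples.
# part_of and requires are PSM-internal links; motivates and supports
# are 4EM inter-model links to/from the Goals Model.

links: list[tuple[str, str, str]] = [
    ("Component 1", "part_of", "PS21"),        # hypothetical component
    ("Component 2", "part_of", "PS21"),        # hypothetical component
    ("Feature 3", "requires", "Component 2"),  # feature needs a component
    ("Goal 2", "motivates", "PS21"),           # inter-model link (GM -> PSM)
    ("Feature 3", "supports", "Goal 2.1"),     # inter-model link (PSM -> GM)
]

def targets(source: str, link_type: str) -> list[str]:
    """All model components reached from `source` via `link_type` links."""
    return [t for s, lt, t in links if s == source and lt == link_type]
```

For example, `targets("Goal 2", "motivates")` returns `["PS21"]`, tracing the product back to its motivating goal, which is exactly the traceability purpose of inter-model relationships described in Sect. 2.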

Modeling Products and Services with Enterprise Models

55

Fig. 2. Example of a PSM and inter-model relationships

7 Conclusions and Future Work

We have investigated the current contributions in the area of product modeling and, on the basis of an industrial case study, proposed an extension of the 4EM method for product and service modeling. The elaboration of the newly proposed sub-model, the PSM, was based on the principle of “touch points”. That is, touch points needed to be supported with 4EM inter-model links whenever (i) relationships are needed that connect other 4EM sub-models with components of the PSM, (ii) the PSM components complement the concepts of another 4EM sub-model, or (iii) the connection between a concept of the PSM and another 4EM sub-model represents something that is relevant to the organizational design. The proposal for the PSM has been tested in a number of thesis projects and is currently being implemented in the 4EM modeling tool using the ADOxx modeling platform [27].

Concerning future work, in [3] we proposed that digital twins need to support dynamic configuration and adaptation depending on the application context as part of capability management [11]. Hence, parts of organizational capability designs specifying aspects such as context, measurable properties, KPIs, data sources, and capability adjustment algorithms need to be linked to product and service models, because these aspects motivate features and components in the product or service design. Capability management also supports model-based generation of monitoring dashboards, which would need to be supported by certain features in the products or services being monitored, particularly when it comes to runtime data provision and the implementation of triggering mechanisms in physical products.


References

1. Matt, C., Hess, T., Benlian, A.: Digital transformation strategies. Bus. Inf. Syst. Eng. 57(5), 339–343 (2015)
2. Porter, M.E., Heppelmann, J.E.: How smart, connected products are transforming competition. Harvard Bus. Rev. 92(11), 64–88 (2014)
3. Sandkuhl, K., Stirna, J.: Supporting early phases of digital twin development with enterprise modeling and capability management: requirements from two industrial cases. In: Nurcan, S., et al. (eds.) Enterprise, Business-Process and Information Systems Modeling. LNBIP, vol. 387. Springer (2020). https://doi.org/10.1007/978-3-030-49418-6_19
4. Sandkuhl, K., Stirna, J., Persson, A., Wißotzki, M.: Enterprise Modeling – Tackling Business Challenges with the 4EM Method. Springer (2014)
5. Lillehagen, F., Krogstie, J.: Active Knowledge Modeling of Enterprises. Springer (2008). https://doi.org/10.1007/978-0-387-35621-1_10
6. The Open Group: ArchiMate 3.0 Specification. The Open Group (2016)
7. Frank, U.: Multi-perspective enterprise modeling: foundational concepts, prospects and future research challenges. Softw. Syst. Model. 13(3), 941–962
8. Rifkin, J.: The Third Industrial Revolution: How Lateral Power is Transforming Energy, the Economy, and the World (2013)
9. Berman, S.J., Bell, R.: Digital transformation: creating new business models where digital meets physical. IBM Institute for Business Value (2011)
10. Sandkuhl, K., Shilov, N., Smirnov, A.: Facilitating digital transformation by multi-aspect ontologies: approach and application steps. IJSM (2020)
11. Sandkuhl, K., Stirna, J. (eds.): Capability Management in Digital Enterprises. Springer International Publishing (2018)
12. Johannesson, P., Perjons, E.: An Introduction to Design Science. Springer, New York (2014)
13. Yin, R.K.: Case Study Research: Design and Methods. SAGE (2002)
14. Riebisch, M., Streitferdt, D., Pashov, I.: Modeling variability for object-oriented product lines. In: Buschmann, F., Buchmann, A.P., Cilia, M.A. (eds.) ECOOP 2003. LNCS, vol. 3013, pp. 165–178. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-25934-3_16
15. Hegge, H.M.H., Wortmann, J.C.: Generic bill-of-material: a new product model. Int. J. Prod. Econ. 23(1–3), 117–128 (1991)
16. Fenves, S.J., Foufou, S., Bock, C., Sriram, R.D.: CPM2: a core model for product data. ASME J. Comput. Inf. Sci. Eng. 8(1), 014501 (2008)
17. Baines, T., Lightfoot, H., Smart, P.: Servitization within manufacturing. J. Manuf. Technol. Manage. 22(7), 947–954
18. Thomas, O., Walter, P., Loos, P.: Design and usage of an engineering methodology for product-service systems. J. Des. Res. 7(2), 177–195 (2008)
19. Boucher, X., Medini, K., Fill, H.-G.: Product-service-system modeling method. In: Domain-Specific Conceptual Modeling, pp. 455–482. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-39417-6_21
20. Alter, S.: Answering key questions for service science. In: Proceedings of the 25th European Conference on Information Systems (ECIS), Guimarães, Portugal, pp. 1822–1836, June 5–10 (2017). ISBN 978-989-20-7655-3
21. Vargo, S.L., Lusch, R.F.: Institutions and axioms: an extension and update of service-dominant logic. J. Acad. Mark. Sci. 44(1), 5–23 (2015). https://doi.org/10.1007/s11747-015-0456-3
22. Baer, F., et al.: DESERV IT: a method for devolving service tasks in IT services. Bus. Inf. Syst. Eng. (2020). https://doi.org/10.1007/s12599-020-00655-y
23. Bitner, M.J., Ostrom, A.L., Morgan, F.N.: Service blueprinting: a practical technique for service innovation. Calif. Manag. Rev. 50(3), 66–94 (2008)
24. Pratt, M.J.: Introduction to ISO 10303—the STEP standard for product data exchange. J. Comput. Inf. Sci. Eng. 1(1), 102–103 (2001)
25. Porter, M.E., Heppelmann, J.E.: How smart, connected products are transforming competition. Harvard Bus. Rev. 92(11), 64–88 (2014)
26. Lantow, B., Dehne, M., Holz, F.: Evaluating notations for product-service modeling in 4EM: general concept modeling vs. specific language. In: Proceedings of PrOse@PoEM 2019, CEUR-WS.org, vol. 2499, pp. 26–36 (2019)
27. Fill, H., Karagiannis, D.: On the conceptualisation of modelling methods using the ADOxx metamodelling platform. Enterp. Model. Inf. Syst. Archit. 8(1), 4–25 (2013)

Structuring Participatory Enterprise Modelling Sessions

Michael Fellmann1(B), Kurt Sandkuhl1,2, Anne Gutschmidt1, and Michael Poppe1

1 Institute of Computer Science, University of Rostock, Rostock, Germany

{michael.fellmann,kurt.sandkuhl,anne.gutschmidt, michael.poppe}@uni-rostock.de 2 School of Engineering, Jönköping University, Jönköping, Sweden [email protected]

Abstract. The importance of involving enterprise stakeholders in organizational transformation and development processes has been acknowledged in many scholarly publications in the context of business information systems research. Method and tool support for this is particularly explored and provided by the field of participatory enterprise modelling (PEM). In PEM, modelling sessions involving all relevant stakeholders and guided by a modelling facilitator are a central element. However, the published work on PEM is not very extensive with respect to structuring such modelling sessions, in particular when combining analytical and design parts. It is hence hard for novice modelling facilitators to plan a workshop, to switch between different workshop phases, and to react to unforeseen events. Since existing literature covers only generic aspects of workshop moderation, we fill this gap by providing an initial model that can serve to inform, structure and guide PEM sessions. The model has been developed by analysing examples from real-world modelling sessions.

Keywords: Enterprise modelling · Participation · Innovation management · Digital transformation

1 Introduction

The importance of involving enterprise stakeholders in organizational transformation and development processes has been acknowledged in many scholarly publications in the context of business information systems research. Recent examples are digital transformation, where the employees' contribution is explicitly considered as a success factor; technology adoption and innovation with respect to the design of diffusion processes; and enterprise architecture management when using “influence-centric” strategies for establishing architecture principles. Method and tool support for how to involve stakeholders is offered by the field of enterprise modelling (EM), in particular in participatory EM (PEM). In PEM, modelling sessions involving all relevant stakeholders and guided by a modelling facilitator are a central element. Published work on PEM and participatory modelling sessions includes method support, tool recommendations, advice for role distributions, and best practices (cf. Sect. 3).

© IFIP International Federation for Information Processing 2020
Published by Springer Nature Switzerland AG 2020. All Rights Reserved
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 58–72, 2020. https://doi.org/10.1007/978-3-030-63479-7_5


However, the published work on PEM is not very extensive with respect to structuring activities or sequences of activities within the modelling sessions, in particular when combining analytical and design parts. What would be the best flow of activities for identifying digitization options in an enterprise and designing solutions for the most promising ones? When and how should one switch from elicitation activities (e.g., collecting input from participants) to structuring, design and reflection activities? In our view, existing literature primarily covers “generic” aspects of modelling sessions, like general preparation and planning, role distribution or questions to ask, rather than specific aspects of how to conduct such sessions (cf. Sect. 3).

The intention of our research is to contribute to a more diversified picture of participatory modelling sessions by describing and analysing the “inner structure” of activities and discussing the experiences collected. These experiences are based on different industrial case studies of PEM with different goals. The cases and their analysis form the first contribution of this paper. Based on this, we further derive typical behavioural and metacognitive abilities needed for the different activities in PEM as well as criteria for shifting from one activity to another. We aggregate and consolidate our findings into a generalized process model of PEM, which is our second contribution. This model incorporates procedural knowledge as well as abilities and decision criteria for shifting between activities. With this, we aim to contribute to the body of knowledge related to PEM as well as to inform practitioners who plan and execute PEM sessions.

The rest of the paper is structured as follows. Section 2 introduces the research methods applied, Sect. 3 summarizes the background for our work and discusses related work. Section 4 describes a structured literature review conducted in order to identify relevant research focusing on fine-grained participatory modelling sessions. Section 5 presents the industrial case studies of PEM. Section 6 analyses the cases, presents observations and derives recommendations for structuring activities in PEM. Section 7 summarizes our findings and discusses future work.

2 Research Approach

The work presented in this paper is part of a research program aiming at methodological and tool support for PEM. The research program includes experiments in controlled lab environments testing the effects of interventions in methods and tool support on model results and the behavior of participants. Moreover, field work is conducted with real-world enterprises applying PEM to solve specific tasks or problems defined by the enterprises. This study is related to the field work and is intended both to contribute to the existing body of knowledge regarding PEM and to guide practitioners engaged in planning PEM sessions. This paper starts from the following research question, which is based on the motivation presented in Sect. 1:

RQ: In the context of participatory enterprise modelling, how should modelling sessions be structured and conducted?

The research method used for working on this research question is a combination of literature study, descriptive case study and argumentative-deductive work. Based on the research question, we started by identifying research areas with relevant work for this question and analysed the literature in these areas. The purpose of the analysis was to find theories or experience reports on activities in participatory and collaborative modelling suitable for structuring analysis and design work and their combination. Since the literature study showed a lack of publications in this area (see Sect. 3.2), we decided to analyse our own material from qualitative case studies in order to contribute to the field (see Sect. 5). Yin (2013) differentiates various kinds of case studies [1]: explanatory, exploratory and descriptive. The case studies presented in Sect. 5 are descriptive, as they are used to describe the combination of analysis and design activities in PEM sessions in real-world environments. Based on the reconstruction of the case study material and an analysis of the resulting experiences in relation to the findings of the literature analysis, we inductively derive a generic process model for PEM. The model is enriched with the behavioral and metacognitive abilities required to conduct the suggested activities as well as a list of typical decisions that have to be taken by a moderator in PEM sessions.

3 Background and Related Work

3.1 Collaborative and Participatory Enterprise Modelling

In general, enterprise modelling (EM) addresses the systematic analysis and modelling of processes, organization and product structures, IT systems and any other perspective relevant for the modelling purpose with the help of enterprise modelling methods [2] used for capturing, communicating, and sharing enterprise knowledge [3]. Depending on the way and extent of involvement of enterprise stakeholders in modelling, some EM methods can be characterized as collaborative or participatory. Collaborative modelling emphasizes the aspect of joining several experts in a coordinated effort [4]. In contrast, EM involving users or enterprise stakeholders is called participative modelling [5]. In the scope of this paper, we focus on participatory enterprise modelling (PEM), which by nature includes collaboration activities between enterprise stakeholders and modelling experts during joint modelling sessions. The aim of a participative approach to EM is to work simultaneously with different stakeholders in a collaborative way in order to avoid conceptual deviations (misalignment) between the stakeholders and their different perspectives. The necessity for this has also been put forward, e.g., by vom Brocke & Thomas (2006) [6]. Many advantages are attributed to participatory enterprise modelling, among them improved quality of the models, better acceptance of the modelling results by the stakeholders [7], and improved knowledge sharing between stakeholders through co-creation of models [8]. Although these advantages were observed in many modelling projects [9] and investigated in a number of experiments, there is still a need for empirical work examining the phases and activities of PEM sessions in real-world cases. Much published work on participatory modelling is more exploratory and argumentative than explanatory and conclusive. In our research, we address this gap by developing a model for participatory modelling sessions that can be used to plan and structure such sessions.

3.2 Known Research Streams in Regard to PEM

Regarding the processes of PEM, many approaches and methodologies refer to the whole enterprise modelling project to be executed. Stirna et al. (2007) present guidelines for EM projects. For modelling, they suggest creating different models representing different perspectives simultaneously [9]. Similarly, Sandkuhl & Lillehagen (2008) describe the overall modelling process following the C3S3P methodology, but focus on the early phases only. In some publications, one may also find phases such as elicitation, modelling, verification and validation [10, 11]. Many authors agree that PEM is an iterative process, going in cycles through these phases [12–15]. In this regard, several authors emphasize the importance of reaching a joint understanding of previous results [9, 15]. With regard to individual activities in PEM, distinct phases are rarely analysed in more detail [14, 16]. In this regard, Rittgen (2007) studied the formalization process with regard to negotiation patterns derived from communication processes.

In regard to roles for enterprise modelling, one of the most established distinctions is between method experts and domain experts. The former are those who are trained in the modelling method and notation. The latter are experts representing the stakeholders of the company who contribute their knowledge and experience [9, 12, 15]. With regard to method experts, more detailed roles refer to the facilitator leading the modelling sessions and the tool operator formalizing the model [9, 15, 17]. Rittgen (2010) even tried to identify roles based on empirical data and found four different profiles: facilitator, modeler, editor, and consultant, the latter three possibly influenced by modelling literacy and different levels of motivation. Based on the role structure in a team, Rittgen (2010) found different cooperation styles within the teams. Also in this regard, involving stakeholders is widely considered as important [9, 13, 15, 17]. In participative enterprise modelling, persons are often involved who are experts in their domain, but not in modelling. One way to handle this challenge is to train the participants [9] or to use a modelling language which is easy to handle for domain experts [13].

Another area of research is tools for PEM. Concerning the use of tools, Stirna et al. (2007) give the very concrete advice to start with an analogue tool such as a plastic wall, where everyone may contribute equally, and to let a method expert formalize the model later with a computerized tool. Similar to a whiteboard, a tabletop may also enable participants to work simultaneously [18], although space limitations may restrict usage to smaller models. One may, however, extend the workspace by using additional devices such as tablets, as suggested by Nolte et al. (2016), which represent private spaces where sub-groups may work on their ideas.

To sum up, there exist case studies and approaches for how to carry out participative enterprise modelling projects. These, however, are described in a rough way, i.e. as more or less general guidelines. Only a few approaches in the area of process modelling consider the formalization of models in a more fine-grained manner, but they focus on modelling experts and neglect other stakeholders. Hence, there is a lack of work on how to structure and shape the course of participative modelling sessions.

4 Structured Literature Analysis

Since only a few relevant approaches were identified by the authors among relevant research areas in PEM, a structured literature analysis was performed to broaden the knowledge about relevant works. It aimed at identifying research work from participatory and collaborative enterprise modelling that explicitly addresses phases, activities or steps of modelling sessions. In order to identify relevant work, we decided to perform a systematic literature review (SLR) based on the procedure proposed by Kitchenham [19], which consists of six steps described in this section.

Step 1 is to define the research questions for the SLR. Starting from the main RQ for our work presented in Sect. 2, we identified the following literature-focused research questions (RQL) for the SLR:

RQL 1: What published work exists on tasks or activities to be performed in modelling sessions? The aim of this RQL is to find a set of basic activities or tasks that could be used to analyse and describe the elements of the PEM sessions in our industrial cases.

RQL 2: What approaches exist for deciding on the structure or sequence of activities when conducting PEM sessions? In addition to basic elements of PEM (see RQL 1), this RQL aims at finding recommended sequences or patterns of activities for defined purposes of PEM sessions, such as process or goal modelling.

RQL 3: What recommendations exist for the transition between tasks in PEM sessions? In order to answer the main RQ presented in Sect. 2, RQL 3 aims at identifying either a selection of basic activities (from RQL 1) or – preferably – a recommended sequence of activities in a PEM session, including how to recognize when to move on to design and how to organize this transition.

Step 2 is to specify the literature sources to be taken into account. We decided to examine the AIS electronic library (AISeL), IEEE Xplore and Scopus. Publications with significant impact on research should reach one of these major outlets.

Step 3 addresses the construction of the search query, which starts from a first query (the query for the initial “population” of papers) that is refined stepwise (the “intervention”), for example by adding synonyms to the initial search terms or by adding terms for a more precise specification of the search. The final search queries resulting from this process for the three RQL are shown in Table 1.

Table 1. Search queries and number of hits for the RQL; cells show relevant hits with total hits in parentheses.

| Search query                                                                                                      | AISeL | IEEE   | Scopus  |
|-------------------------------------------------------------------------------------------------------------------|-------|--------|---------|
| (participatory modelling OR participative modeling) AND (phase OR activit* OR step OR task)                         | 2 (4) | 0 (7)  | 18 (78) |
| (participatory enterprise modelling OR participative enterprise modeling) AND (phase OR activit* OR step OR task)   | 1 (1) | 1 (2)  | 5 (5)   |
| (collaborative modelling) AND (phase OR activit* OR step OR task)                                                   | (40)  | 2 (40) | 9 (38)  |
| (collaborative enterprise modelling) AND (phase OR activit* OR step OR task)                                        | 1 (1) | 0      | 1 (1)   |
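As an aside, the share of relevant hits implied by Table 1 can be computed directly from the “relevant (total)” pairs. In the sketch below, the counts are copied from the table as extracted (the plain collaborative-modelling row is omitted because its AISeL cell is ambiguous in the source), and the helper name `relevance_share` is ours.

```python
# "relevant (total)" hit counts per source, copied from Table 1; IEEE's
# bare "0" in the last row is treated as an empty result set (0, 0).
hits = {
    "participatory/participative modelling": {
        "AISeL": (2, 4), "IEEE": (0, 7), "Scopus": (18, 78)},
    "participatory/participative enterprise modelling": {
        "AISeL": (1, 1), "IEEE": (1, 2), "Scopus": (5, 5)},
    "collaborative enterprise modelling": {
        "AISeL": (1, 1), "IEEE": (0, 0), "Scopus": (1, 1)},
}

def relevance_share(relevant: int, total: int) -> float:
    """Fraction of returned hits judged relevant; 0.0 for empty result sets."""
    return relevant / total if total else 0.0

for query, per_source in hits.items():
    for source, (rel, tot) in per_source.items():
        print(f"{source:6s} {relevance_share(rel, tot):.2f}  {query}")
```

The low shares for the broad queries (e.g. 18 of 78 Scopus hits for the first query) illustrate why Step 4's manual relevance screening was needed.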
Step 4 is selecting the papers relevant for the RQL. In most cases it was sufficient to read the abstract; in unclear cases, we read the full text. The number of relevant papers is shown in Table 1, with the total number of hits given in parentheses. Most papers considered irrelevant addressed the modelling of collaborative (software) agents or user behaviour, software systems or components supporting CM or PEM, or the general applicability of CM in specific domains. Step 5 is extracting the relevant information to answer the RQLs. The results of this step are presented in the remainder of this section, which at the same time constitutes Step 6, documenting the results.

The search results show that there is a substantial amount of work on collaborative and participatory modelling in the IS community, in computer science, business administration, sociology, decision sciences and engineering. Most of the work can be sorted into four larger groups:

• Research addressing steps and activities of modelling sessions independently of specific modelling purposes. Examples are recommendations for preparing and conducting sessions from [8], phases, interaction topics and rules observed by [20], or different perspectives to be considered during enterprise modelling sessions [15].
• Work addressing specific modelling purposes, such as process or goal modelling, with recommendations of steps or phases to be considered. Examples are business process modelling [21], structured decision making [22], and the “commandments” for socio-environmental modelling [23].
• Aspects of group interaction or collaboration between participants in modelling sessions, such as speech acts and dialog games [24] or psychological ownership of models [25].
• Abilities, behavioural aspects and cognitive processes in collaborative/participative conceptual modelling. Examples are executive functions ([26]; see also below) and levels of participation or collaboration [27].

In the following, we summarise the results from the perspective of our RQLs. Regarding RQL 1, a line of work contributed by Hoppenbrouwers, Wilmont and Proper [26, 28] could be identified that investigates the use of executive functions for identifying tasks and activities. Executive function is an umbrella term for the complex cognitive processes that serve ongoing, goal-directed behaviours.
In educational sciences and neuropsychology, the underlying concepts and scales have been used for many years to assess and classify behavioural and metacognitive abilities, for example by [29]:

• Inhibit – stop engaging in a behaviour
• Shift – move freely from one activity/situation to another (switch or alternate)
• Emotional control – regulate emotional responses appropriately
• Initiate – begin an activity and independently generate content or results
• Working memory – hold information when completing a task
• Plan/organize – anticipate future events; set goals; develop steps; grasp main ideas; organize and understand the main points
• Organization of materials – put order in work or documentation storage
• Monitor – check work and assess one's own performance

In regard to RQL 1 and RQL 2, much work on activities and tasks to be performed in modelling sessions also originates from the team of Stirna, Persson and Sandkuhl with various co-authors, who give consistent recommendations for PEM sessions across their publications. However, the primary focus of this work is not on single activities but on the whole process of PEM and the required roles and competences, which makes the work equally relevant for RQL 2. Relevant results for this question include recommendations for conducting PEM, which consist of different general steps to take, such as planning and preparing a session or setting up the room. They are not specific to the content and internal structure of a session. The authors explicitly state that they do “not describe details of how a modelling session is conducted” and recommend literature for specific modelling purposes.

In regard to RQL 3, the search did not return explicit recommendations for the transition between different activities in PEM. Implicitly, the work by Stirna/Persson stating the need for “creativity, consolidation, consensus, critique and new focus” phases in PEM sessions addresses this RQL. However, they see this as an activity during preparation only. Furthermore, there were no relevant publications specifically on steps/activities/phases of collaborative enterprise modelling. As a result of the literature analysis, we conclude that fine-grained guidance on how to conduct a participative modelling workshop is missing.

5 Industrial Case Studies

The two case studies described in this section were selected from different research and development projects with industrial partners conducted at Rostock University during fall 2019 and spring 2020. For all case studies, the participating researchers collected documents, minutes of meetings and interviews with company representatives, field notes taken when working with the companies, models of processes, information structures and business models, and other relevant information. This material concerns the situation before conducting participatory modelling sessions, the preparation of the sessions, the activities during the sessions as such, and the results. It forms the basis for the case studies and is presented in a condensed way in this section. For all case studies, we use the same structure of presentation, starting with a description of the starting point, followed by the different phases of the transformation and the final situation.

5.1 Case A: Modelling Digital Transformation Goals at Automotive Supplier

Case study Company A is a subsidiary of a major automotive manufacturer responsible for producing tools for the metal parts of chassis production, such as roofs, doors, side panels, etc. These tools, called (press) forms, are developed individually for each car model variant in an iterative process of casting, milling and/or welding, and polishing. Company A does the largest share of its business with the automotive manufacturer. It also serves other automotive and truck suppliers. Due to its unique specialization in forms for a specific metal, Company A is well-positioned in the market. However, its management aims to increase efficiency and flexibility in the business model to be prepared for possible future market changes. The case study emerged when Company A decided to investigate radical digital innovation focusing on disruptive ways of working or technologies instead of gradual optimization or increases in efficiency. A workshop was planned to investigate the potential for radical innovation concerning drastic and seemingly unrealistic changes, such as reduction of the production time for forms to 10% of the current value, no setup time of the production system, or internal logistics requiring no staff.

Preparation and execution of the workshop included several steps: the selected participants represented all relevant departments of the company (design, production, logistics, procurement, human resources, economics, service and customer care), mostly represented by the head of the unit or senior experts. All ten participants (2 female, 8 male, of all age groups) were informed beforehand about the purpose of the workshop, the need to think “out-of-the-box” and the importance of their participation. The workshop consisted of three major phases:

Phase 1 included the collection of proposals from the participants for the radical transformation of products and of operations. The facilitator asked the participants to write down their ideas for radical DT of the company's products on paper cards. After 15 min, the participants were asked one by one to briefly present their ideas and put them up on a plastic wall. Facilitator and participants started to sort the ideas into groups on the plastic wall. The same procedure was repeated for ideas to radically transform operations. The facilitators had their own ideas available, derived from analysing DT in related industries. These ideas were meant to inspire the discussion in case there was a lack of new ideas, but this was not needed.

Phase 2 aimed at jointly clustering the collected options and defining priorities. The purpose was, essentially, to agree on a joint understanding and a clear separation of all clusters. The initial sorting of the participants' ideas turned out to be very fine-grained and sometimes too fuzzy. The facilitators walked the participants through all initial groups of ideas and initiated a discussion about the naming and boundaries of these groups. The clusters the participants agreed on were put on the plastic wall with paper cards of a different color. The definition of priorities was done using a voting approach. Each participant received a number of votes (sticky paper marks). All voted simultaneously by putting the marks on their prioritized clusters.
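The voting step amounts to a simple tally of marks per cluster. The sketch below illustrates this; the cluster names and ballots are invented for illustration and are not taken from the case.

```python
from collections import Counter

def prioritize(marks: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Tally sticky-mark votes per cluster; highest-voted clusters first."""
    return Counter(marks).most_common(top_n)

# Hypothetical marks: each participant distributes several marks over clusters.
marks = ["automation", "new materials", "automation", "internal logistics",
         "automation", "internal logistics", "new materials"]
top_clusters = prioritize(marks)
```

With ten participants and a handful of marks each, such a tally directly yields the top options that Phase 3 then evaluates in parallel groups.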
Phase 3; Based on the priorities, an initial evaluation of the top three options for radical transformation of products and the top three transformations in operations was done. For this purpose, the workshop switched from a joint session with all participants to parallel sessions in two groups. Each group started with one option and had the task to elaborate the essentials of the option using five questions (what would be the exact vision/goal, what activities are required, who has to do what, what resources and partners are needed, what is the business value?). The result was documented with a canvas (paper size A0). After 30 min, the next option followed and the groups often changed membership. After having completed all options all participants gathered and for each group one member presented the group’s results. The content of the workshop was documented in photo documentation of collected ideas and clusters, written documentation of the evaluation results, and notes. The workshop was conducted by two researchers: one facilitator and one note taker. 5.2 Case B: Modelling the Innovation Process at Manufacturing Company Case study Company B is a metal processing/manufacturing company focusing on the production of lifting and pushing gearboxes. This company is faced to changing customer requirements, increasing knowledge intensification, and technical developments, which is why it is forced to create new products and services or update existing ones. This implies to constantly rethink and, if necessary, adapt processes in order to satisfy customer needs. In this context, employees are an indispensable source of new ideas due to their deep knowledge of the products, processes, and customer needs. With a large

66

M. Fellmann et al.

number of ideas, systematic management becomes necessary, which can be supported by an IT-based idea and innovation management system (IMS). However, IMS and innovation processes are often developed and implemented in a top-down manner, without asking the employees much about their needs. Since an IMS should increase participation, Company B decided to develop the innovation process in a participatory way in a workshop, to prepare for its later implementation in an IMS.

The workshop was attended by 10 employees, 3 female and 7 male, across all age groups. They work in the departments production, assembly, design, sales, and IT. All participants were aware of the context, as they had previously participated in interviews regarding possible requirements for an IMS. The workshop followed a predefined structure. In a first task, possible process steps were to be elaborated. In task 2, data elements that may be required in different process steps were to be identified. Afterwards, in task 3, roles and responsibilities were to be defined for each process step, and in task 4, decision points were to be determined. Regarding tasks 1 and 2, the following three phases could be identified during the workshop.

Phase 1: In order to let the participants think for themselves and to ensure that really everyone shares their thoughts, the participants were asked to write their ideas and suggestions down on paper cards. To set a time limit, they were given 10 min for thinking and writing. After this time, all participants had made it clear that they were ready; otherwise they would have been given a few more minutes.

Phase 2: In the next step, one participant after the other presented and explained each of their ideas and pinned the paper cards on a wall in chronological order. The other participants listened and discussed some points if there was any uncertainty.
This phase ended when all participants had presented all of their thoughts and pinned the paper cards on the wall.

Phase 3: With all paper cards pinned on the wall, there was a final discussion with all participants in which the suggested process steps and data elements were aggregated and clustered. (This phase gathered and bundled ideas and suggestions; tasks 3 and 4 also involved gathering, but with a more decisive character.)

Tasks 3 and 4 could not be limited to suggestions. Since a company-wide process cannot look different for every employee, decisions had to be made, for example, on who should be involved in which process step or at which points decisions had to be made. Therefore, a moderated discussion was conducted in which the participants were asked, for each step of the process, who should be involved and whether a decision should be made at this point. As a final task, the participants were asked to name the success factors for the long-term use of an IMS that are relevant for them. Phases 1 to 3 were conducted in the same way as in tasks 1 and 2. Since it may not be possible to fulfil all the factors mentioned, it was relevant to find out which are the most important success factors for the participants. Therefore, a further phase was carried out in which the participants were asked to mark with dot stickers which factors are most important to them. Each participant received three stickers, which could be placed as desired. This made it possible to identify the most important factors by the highest number of stickers received.
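The dot-sticker prioritization described above is easy to operationalize. As an illustrative sketch (the factor names and vote data below are invented, not taken from the case), tallying could look like this:

```python
from collections import Counter

def tally_dot_votes(votes_per_participant):
    """Tally a dot-voting round: each participant places a fixed number
    of sticker votes on the items they consider most important."""
    counts = Counter()
    for stickers in votes_per_participant:
        counts.update(stickers)
    return counts.most_common()  # items ranked by stickers received

# Hypothetical example: three participants, three stickers each
votes = [
    ["fast feedback", "fast feedback", "management support"],
    ["management support", "usability", "fast feedback"],
    ["usability", "fast feedback", "management support"],
]
ranking = tally_dot_votes(votes)
print(ranking[0])  # the factor with the most stickers
```

Since stickers may be stacked on one item, a plain multiset count suffices; the highest count identifies the most important factor, as in the workshop.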


6 Case Study Analysis

6.1 Coding Scheme for Case Study Analysis

To answer our research question and to finally derive the generic process model for PEM, we analyse our case data in a three-step approach. In the first step, documented in this section, we develop a coding scheme used to interpret the case data. In the second step, we apply this coding scheme to enrich an abstract, tabular reconstruction of our case data (cf. Section 6.2). In a third step, we use this enriched reconstruction to answer the overall research question and derive the generic process model for PEM (cf. Section 6.3). Answering this question in turn implies answering three sub-questions. First, “what are the central activities in a workshop and how are they composed?” The answer should provide the process structure of the workshop. Second, “what are ending conditions for activities?” The answer should provide criteria useful to decide when the next activity should start. Third, “which skills are needed in the different phases of a workshop?” The answer should enrich the process structure with abilities. Taken together, the answers to all three sub-questions provide insights in regard to our general RQ “How should modelling sessions be structured and conducted?” (cf. Section 2).

In the following, we derive a coding scheme by following these three sub-questions. In doing so, we identify codes for (i) each phase of the workshop, (ii) the ending conditions that triggered the next activity during the workshop, and (iii) the skills that were required in each phase. In the following, we introduce our codes.

Phases. For identifying relevant codes, we use the moderation cycle from Seifert [30], originally released in the 1980s. It comprises the phases Begin, Collect, Select, Elaborate, Plan, and Finalize. In the beginning, participants are e.g. welcomed and the goals are explained. In the following collection phase, items are elicited.
They form the input for the subsequent elaboration phase, in which they are worked on, e.g. in small groups. In the subsequent planning phase, further actions are determined based on the results achieved so far. In the last phase, the workshop is closed, thereby critically reflecting on the achieved results and possible next steps. We add to this cycle a preparation phase (in line with the literature in Sect. 4). We furthermore rename “collect” to “elicit” in order to emphasize the active role of the moderator. We further replace “select” with “structure & prioritize” to reflect that elicited concepts or thoughts should be integrated or put into a common perspective, which of course may also involve selections. We also rename “plan” to “reflect”, since the development of more detailed plans can be decided in the workshop but need not necessarily be carried out during the workshop; “reflect” is more neutral and at the same time does not exclude making plans. We furthermore rename “elaborate” to “design” in order to accommodate the domain of enterprise modelling. We finally use the codes Prepare (PREP), Elicit (ELCI), Structure & Prioritize (SPRIO), Design (DSGN), Reflect (RFLC), and Finalize (FINA).

Ending conditions. Codes have been collected from the workshop moderators of our case studies. We discussed and consolidated the list of codes. The final list comprises the conditions of completion (COMP) and timeout (TMEO), if a phase is successfully completed due to an objective and measurable criterion or the time is over. Another condition can be saturation (SATU), if no new arguments are identified by the participants,


or exhaustion (EXHA) of arguments and thoughts. The latter can be the case when no criterion for completeness can be defined but it gets increasingly harder to proceed with the elicitation. Another ending condition is quality loss (QLOS), when ideas and arguments put forth by the participants are distracting or opposed to the workshop goals. Also, social issues (SOCI) can trigger the end, e.g. when conflicts suddenly pop up and dominate the discussion or participants show destructive behavior. Whereas completion, timeout, saturation, and exhaustion can be seen as normal endings of a phase, quality loss or social issues might cause an exceptional, i.e. unplanned, end of a phase. Although it might be possible for the workshop moderators to apply interventions that tackle most of these conditions and then to proceed with a phase, reaching these conditions can nevertheless indicate a good opportunity for starting a new phase. This is even more so if it turns out that the interventions have limited or no effect.

Skills. For coding the skills required in each phase, as perceived in the real-world case studies, we considered the list of metacognitive abilities introduced in Sect. 4. We however group similar abilities into more coarse-grained categories of skills that are relevant for conducting modelling sessions. Also, we use the term “skills” here to reflect that both knowledge and experience are relevant to complement cognitive abilities. In this way, we group working memory, plan/organize, and organization of materials into a group of content structuring skills (CSTR). They are needed to process, organize and structure content, such as summarizing or grouping arguments and thoughts or drawing conclusions. Furthermore, we group initiate, monitor and shift into moderation skills (MODS). They are relevant to guide the workshop participants and navigate between different activities and parts of the workshop.
Finally, we group inhibit and emotion control into the category of social competence skills (SOCS).
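The resulting code system can be summarized in a small data structure. The following Python sketch is our own encoding of the codes described above (the enum names and the `EXCEPTIONAL_ENDINGS` set are illustrative, not part of the paper):

```python
from enum import Enum

class Phase(Enum):
    PREP = "Prepare"
    ELCI = "Elicit"
    SPRIO = "Structure & Prioritize"
    DSGN = "Design"
    RFLC = "Reflect"
    FINA = "Finalize"

class EndingCondition(Enum):
    COMP = "Completion"     # objective, measurable criterion met
    TMEO = "Timeout"        # allotted time is over
    SATU = "Saturation"     # no new arguments identified
    EXHA = "Exhaustion"     # increasingly harder to elicit more
    QLOS = "Quality loss"   # contributions distracting or off-goal
    SOCI = "Social issues"  # conflicts or destructive behavior

class Skill(Enum):
    CSTR = "Content structuring skills"
    MODS = "Moderation skills"
    SOCS = "Social competence skills"

# Quality loss and social issues cause an exceptional (unplanned) phase end
EXCEPTIONAL_ENDINGS = {EndingCondition.QLOS, EndingCondition.SOCI}
```

Encoding the codes this way makes the normal/exceptional distinction among ending conditions explicit and machine-checkable when coding further cases.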

6.2 Reconstruction of the Case Studies

In the following, we reconstruct our case data in an abstract way in Table 2. In the rows of the table, we list the major activities during the workshops. In the columns of the table, we characterize these activities in the form of a short description and by assigning the codes from the code system introduced in the previous section. Based on this reconstruction of the case data, we derived a generic workshop process model (cf. next section) using the Business Process Model and Notation (BPMN) language.

6.3 Derivation of the Generic Workshop Process Model

The tasks represented in the model (cf. Figure 1) correspond to the phases of our coding scheme (cf. Section 6.1). The transitions between the tasks have been derived by inspecting the sequence of codes in Table 2. Moreover, tasks have been grouped under a headline according to the major phases of each workshop (preparation, execution, and finalization), and the skill profile of tasks has been indicated below these phases.


Table 2. Abstract reconstruction of the workshop activities

Workshop Activity (Short Description)                      | Phase | Ending Cond. | Req. Skills

CASE A – Digital Transformation Goals in Automotive Company
Introduction to the workshop, setting workshop goals       | PREP  | COMP | MODS
Presentation of the company departments                    | PREP  | COMP | MODS
Elicitation of radical transformation ideas on cards       | ELCI  | COMP | MODS
Presentation and clustering of the ideas on the wall       | SPRIO | COMP | CSTR, MODS
Elicitation of ideas for transform. in operation on cards  | ELCI  | COMP | MODS
Presentation and clustering of the ideas on the wall       | SPRIO | COMP | CSTR, MODS
Joint refinement of clusters until agreement reached       | RFLC  | SATU | MODS, SOCS
Definition of priorities of ideas within clusters          | SPRIO | COMP | MODS
Detail work on three ideas for radical transformation      | DSGN  | TMEO | MODS, SOCS
Detail work on three ideas for operation transform.        | DSGN  | TMEO | MODS, SOCS
Discussion of the results achieved                         | RFLC  | SATU | MODS, SOCS
Final discussion of workshop results                       | FINA  | COMP | MODS

CASE B – Innovation Process in Production Company
Introduction to the workshop, setting workshop goals       | PREP  | COMP | MODS
Elicitation of innovation process steps on cards           | ELCI  | EXHA | MODS
Clustering of the steps into phases on the wall            | SPRIO | COMP | CSTR, MODS
Adding a name for the identified phases                    | DSGN  | COMP | CSTR
Discussion of the results achieved                         | RFLC  | SATU | MODS, SOCS
Elicitation of data elements on cards                      | ELCI  | EXHA | MODS
Clustering of the data elements on the wall                | SPRIO | COMP | CSTR, MODS
Discussion of the results achieved                         | RFLC  | SATU | MODS, SOCS
Design of process logic (flow, gates) on the wall          | DSGN  | COMP | CSTR
Definition of responsibilities and addition to the model   | DSGN  | EXHA | CSTR, SOCS
Discussion of results achieved                             | RFLC  | SATU | CSTR, MODS
Elicitation of long-term success factors on cards          | ELCI  | EXHA | MODS
Clustering of the success factors on the wall              | SPRIO | COMP | CSTR, MODS
Labeling of the clusters as success factor categories      | DSGN  | COMP | CSTR
Discussion of the results achieved                         | RFLC  | SATU | MODS, SOCS
Final discussion of workshop results                       | FINA  | COMP | MODS

Codes: Completion (COMP), Content structuring skills (CSTR), Design (DSGN), Elicit (ELCI), Exhaustion (EXHA), Finalize (FINA), Moderation skills (MODS), Prepare (PREP), Quality loss (QLOS), Reflect (RFLC), Saturation (SATU), Structure & Prioritize (SPRIO), Social competence skills (SOCS), Social issues (SOCI), Timeout (TMEO).


Fig. 1. Process model of participative modelling workshops

Regarding the ending conditions of tasks, completion, exhaustion and saturation have been modelled with standard sequence flow notation. When the available time is exceeded, a timer symbol attached to the activity boundary can activate the sequence flow leading to the next task. Likewise, in case of quality loss or social issues, an exception symbol is used to handle the situation and to trigger the next phase before the regular end. Furthermore, the skill profile of tasks is indicated near the phase labels. Regarding sequence flow, the two central activities Structuring & Prioritizing and Design, which together form the Creation activity, can be skipped. This gives the flexibility that, after elicitation, only reflection takes place.

A drawback of the visualization as a generic process model is that the “specifics” of participatory modelling within the six tasks are not visible, because they form a refinement level of integrated practices and sub-tasks that implement participatory modelling. In particular, the tasks Elicit, Structure & Prioritize, Design and Reflect have to ensure that the modelling workshop is conducted and perceived as a joint, collaborative activity of users, enterprise stakeholders and modelling experts. Depending on the situation at hand, the facilitator might need to activate participants, start additional discussions, encourage certain stakeholders in order to establish equal opportunities to contribute, try to reach consensus among the participants, allow for different opinions and reflection, ensure commitment to jointly defined solutions, etc. (cf. Section 3.1). The modelling experts, or at least the facilitator, need well-developed behavioural and metacognitive abilities (cf. Section 4) to decide which of these sub-tasks is required in which situation.
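The derivation of sequence flow from the coded activities can be mimicked computationally. The sketch below is our own illustration (the phase sequence is transcribed from the Case A rows of Table 2): it counts phase-to-phase transitions while collapsing consecutive repeats of the same code into a single ongoing phase.

```python
from collections import Counter

# Phase codes of the Case A activities, in workshop order (from Table 2)
case_a = ["PREP", "PREP", "ELCI", "SPRIO", "ELCI", "SPRIO",
          "RFLC", "SPRIO", "DSGN", "DSGN", "RFLC", "FINA"]

def phase_transitions(sequence):
    """Count observed transitions between distinct consecutive phases;
    repeated codes are treated as one ongoing phase (no self-loops)."""
    pairs = Counter()
    for a, b in zip(sequence, sequence[1:]):
        if a != b:
            pairs[(a, b)] += 1
    return pairs

for (a, b), n in phase_transitions(case_a).items():
    print(f"{a} -> {b}: {n}")
```

Transitions observed more than once (e.g. Elicit followed by Structure & Prioritize) suggest recurring sequence flow for the generic model; one-off transitions indicate optional paths.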

7 Concluding Remarks and Future Work

Since knowledge about modelling phases and their interaction in modelling workshops is largely missing in the current literature, we address this knowledge gap. To do so, we analysed material from two different case studies in a systematic way. The coding scheme and its application are our first contribution.


The second contribution is our generic process model for PEM, answering the questions: (i) “What are the central activities in a workshop and how are they composed?”, (ii) “What are the ending conditions of activities?”, and (iii) “Which skills are needed in different workshop phases?”. To the best of our knowledge, no such model exists up to now. The model reflects the flexible and dynamic nature of workshops via an extensive use of control flow and event handling mechanisms. We hope that our contribution serves both to better understand modelling sessions from a theoretical point of view and to support (novice) practitioners and workshop moderators in planning and executing modelling workshops. Finally, as a limitation, our model is still preliminary. Our existing case data still holds further valuable details, leaving room for future work. Moreover, more case data is needed for a complete justification of the model. These cases should add more diversity to the pool of collected experiences, e.g. in regard to industries. Also, modelling workshops with different purposes could be considered. Another option that we actively consider for our future work is to conduct lab experiments.

References

1. Yin, R.K.: Case Study Research. Design and Methods. SAGE Inc, Thousand Oaks (2013)
2. Vernadat, F.: Enterprise Modeling and Integration. Boom Koninklijke Uitgevers (1996)
3. Stirna, J., Kirikova, M.: Integrating agile modeling with participative enterprise modeling. In: EMMSAD 2008 in conjunction with CAiSE 2008, France (2008)
4. Nakakawa, A., Bommel, P., Proper, H.: Definition and validation of requirements for collaborative decision-making in enterprise architecture creation. Int. J. Cooperative Inf. Syst. 20, 83–136 (2011)
5. Barjis, J.: CPI modeling: collaborative, participative, interactive modeling. In: Jain, S. (ed.) Winter Simulation Conference (WSC), Phoenix, AZ, USA, pp. 3094–3103. IEEE (2011)
6. Brocke, J.V., Thomas, O.: Reference modeling for organizational change: applying collaborative techniques for business engineering. AMCIS 2006, 670–678 (2006)
7. Sandkuhl, K., Stirna, J., Persson, A., Wißotzki, M.: Enterprise Modeling. TEES. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-43725-4
8. Stirna, J., Persson, A.: Enterprise Modeling. Springer International Publishing, Cham (2018)
9. Stirna, J., Persson, A., Sandkuhl, K.: Participative enterprise modeling: experiences and recommendations. In: Krogstie, J., Opdahl, A., Sindre, G. (eds.) CAiSE 2007. LNCS, vol. 4495, pp. 546–560. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72988-4_38
10. Hoppenbrouwers, S.J.B.A., Proper, H.A., van der Weide, T.P.: A fundamental view on the process of conceptual modeling. In: Delcambre, L., Kop, C., Mayr, H.C., Mylopoulos, J., Pastor, O. (eds.) ER 2005. LNCS, vol. 3716, pp. 128–143. Springer, Heidelberg (2005). https://doi.org/10.1007/11568322_9
11. Frederiks, P.J.M., van der Weide, T.P.: Information modeling: the process and the required competencies of its participants. In: Meziane, F., Métais, E. (eds.) NLDB 2004. LNCS, vol. 3136, pp. 123–134. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-27779-8_11
12. Rittgen, P.: Collaborative modeling. Int. J. Inf. Syst. Model. Des. 1, 1–19 (2010)
13. Becker, J., Algermissen, L., Pfeiffer, D., Räckers, M.: Local, participative process modelling - the PICTURE-approach. In: Proceedings of the 1st International Workshop on Management of Business Processes in Government, Brisbane, Australia, pp. 33–48 (2007)
14. Pinggera, J., Soffer, P., Fahland, D., Weidlich, M., Zugal, S., Weber, B., Reijers, H.A., Mendling, J.: Styles in business process modeling: an exploration and a model. Softw. Syst. Model. 14(3), 1055–1080 (2013). https://doi.org/10.1007/s10270-013-0349-1


15. Sandkuhl, K., Lillehagen, F.: The early phases of enterprise knowledge modelling: practices and experiences from scaffolding and scoping. In: Stirna, J., Persson, A. (eds.) PoEM 2008. LNBIP, vol. 15, pp. 1–14. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89218-2_1
16. Pinggera, J., Zugal, S., Weidlich, M., Fahland, D., Weber, B., Mendling, J., Reijers, H.A.: Tracing the process of process modeling with modeling phase diagrams. In: Daniel, F., Barkaoui, K., Dustdar, S. (eds.) BPM 2011. LNBIP, vol. 99, pp. 370–382. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-28108-2_36
17. Rittgen, P.: Negotiating models. In: Krogstie, J., Opdahl, A., Sindre, G. (eds.) CAiSE 2007. LNCS, vol. 4495, pp. 561–573. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-72988-4_39
18. Gutschmidt, A.: On the influence of tools on collaboration in participative enterprise modeling - an experimental comparison between whiteboard and multi-touch table. In: Andersson, B., et al. (eds.) LNISO, vol. 34, pp. 151–168. Springer, Cham (2019)
19. Kitchenham, B.A., Charters, S.: Guidelines for performing systematic literature reviews in software engineering. In: Software Engineering Group, School of Computer Science and Mathematics, Keele University, pp. 1–57 (2007)
20. Ssebuggwawo, D., Hoppenbrouwers, S., Proper, E.: Interactions, goals and rules in a collaborative modelling session. In: Persson, A., Stirna, J. (eds.) PoEM 2009. LNBIP, vol. 39, pp. 54–68. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-05352-8_6
21. Nolte, A., Brown, R., Anslow, C., Wiechers, M., Polyvyanyy, A., Herrmann, T.: Collaborative business process modeling in multi-surface environments. In: Anslow, C., Campos, P., Jorge, J. (eds.) Collaboration Meets Interactive Spaces. LNBIP, pp. 259–286. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-45853-3_12
22. Robinson, K.F., Fuller, A.K.: Participatory modeling and structured decision making. In: Gray, S., Paolisso, M., Jordan, R., Gray, S. (eds.) Environmental Modeling with Stakeholders. LNBIP, pp. 83–101. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-25053-3_5
23. Voinov, A., Seppelt, R., Reis, S., Nabel, J.E.M.S., Shokravi, S.: Values in socio-environmental modelling: persuasion for action or excuse for inaction. Environ. Model Softw. 53, 207–212 (2014)
24. Hoppenbrouwers, S., Rouwette, E.: A dialogue game for analysing group model building: framing collaborative modelling and its facilitation. Int. J. Organ. Des. Eng. 2(1), 19–40 (2012). https://doi.org/10.1504/IJODE.2012.045905
25. Gutschmidt, A., Sauer, V., Schönwälder, M., Szilagyi, T.: Researching participatory modeling sessions: an experimental study on the influence of evaluation potential and the opportunity to draw oneself. In: Pańkowska, M., Sandkuhl, K. (eds.) BIR 2019. LNBIP, vol. 365, pp. 44–58. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31143-8_4
26. Wilmont, I., Hoppenbrouwers, S., Barendsen, E.: An observation method for behavioral analysis of collaborative modeling skills. In: Metzger, A., Persson, A. (eds.) CAiSE 2017. LNBIP, vol. 286, pp. 59–71. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-60048-2_6
27. Basco-Carrera, L., Warren, A., van Beek, E., Jonoski, A., Giardino, A.: Collaborative modelling or participatory modelling? A framework for water resources management. Environ. Model Softw. 91, 95–110 (2017)
28. Wilmont, I., Hengeveld, S., Barendsen, E., Hoppenbrouwers, S.: Cognitive mechanisms of conceptual modelling. In: Ng, W., Storey, V.C., Trujillo, J.C. (eds.) ER 2013. LNCS, vol. 8217, pp. 74–87. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41924-9_7
29. Gioia, G.A., Isquith, P.K., Retzlaff, P.D., Espy, K.A.: Confirmatory factor analysis of the Behavior Rating Inventory of Executive Function (BRIEF) in a clinical sample. Child Neuropsychol. 8, 249–257 (2002)
30. Seifert, J.W.: Visualisation – Presentation – Facilitation. Translation of the 30th German edition, 1st edn. Books on Demand, Norderstedt (2015)

Modeling Trust in Enterprise Architecture: A Pattern Language for ArchiMate

Glenda Amaral¹, Tiago Prince Sales¹, Giancarlo Guizzardi¹, João Paulo A. Almeida², and Daniele Porello³

¹ Conceptual and Cognitive Modeling Research Group (CORE), Free University of Bozen-Bolzano, Bolzano, Italy
{gmouraamaral,tiago.princesales,giancarlo.guizzardi}@unibz.it
² Ontology and Conceptual Modeling Research Group (NEMO), Federal University of Espírito Santo, Vitória, Brazil
[email protected]
³ ISTC-CNR Laboratory for Applied Ontology, Trento, Italy
[email protected]

Abstract. Trust is widely acknowledged as the cornerstone of relationships in social life. But what makes an agent trust a person, a resource or an organization? Which characteristics should a trustee have in order to be considered trustworthy? The importance of understanding trust in organizations has motivated us to investigate the representation of trust concerns in enterprise models. Based on a well-founded reference ontology of trust, we propose a pattern language for trust modeling in ArchiMate. We present a first iteration of the design cycle, which includes the development of the pattern language and its demonstration by means of a realistic case study about trust in a COVID-19 data repository.

Keywords: Trust modeling · Enterprise architecture · ArchiMate

1 Introduction

© IFIP International Federation for Information Processing 2020
Published by Springer Nature Switzerland AG 2020. All Rights Reserved
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 73–89, 2020. https://doi.org/10.1007/978-3-030-63479-7_6

Trust is a vital ingredient in productive relationships. According to Castelfranchi and Falcone [5], “trust in its intrinsic nature is a dynamic phenomenon” that changes with time. In times of crisis, such as the financial crisis of 2008 and the current COVID-19 health crisis, it becomes even more evident how fragile trust is. Therefore, understanding the building blocks that compose the trust of agents in a given trustee (such as an organization) is of paramount importance, as they reveal the qualities and properties the trustee should have in order to be considered trustworthy and effectively promote well-placed trust. Moreover, the identification of the trust components is fundamental to the assessment of risks that can emerge from trust relations. From the perspective of an organization as trustee, the modeling of trust in the context of Enterprise Architecture (EA) makes it possible to bridge the gap between the


stakeholders’ trust concerns and the processes and other elements of the architecture that are needed to achieve the organization’s goal of being trustworthy. The idea of modeling social and organizational concepts in the context of Enterprise Architecture has already been proposed in the literature for value [20], risk [14,18], service contracts [8], and resources and capabilities [3]; however, the problem of linking the enterprise architecture to the stakeholders’ trust concerns is still an open issue. In this paper, we address this issue by proposing a trust modeling approach for ArchiMate, which is based on a proper ontological theory that provides adequate real-world and formal semantics for the concept of trust. In particular, we leverage the concepts and relations defined in the recently proposed Reference Ontology of Trust (ROT) [1], an ontologically well-grounded reference model that formally characterizes the concept of trust and explains how risk emerges from trust. ROT is specified in OntoUML [10] and is thus compliant with the meta-ontological commitments of the Unified Foundational Ontology (UFO) [10].

Based on ROT, we propose a Trust Pattern Language (TPL) for ArchiMate, the most used modeling language in the EA field. A pattern language [4] consists of a set of interrelated modeling patterns; its main advantage is that it offers a context in which related patterns can be combined, thus reducing the space of design choices and design constraints [7]. We designed TPL following the Design Science Research methodology [12]. In this paper, we present the first iteration of the design cycle (building and evaluating), which includes the development of the pattern language and its demonstration by means of a real case study of trust in a COVID-19 data repository.

The remainder of this paper is organized as follows.
Section 2 introduces the reader to the Reference Ontology of Trust (ROT) that provides the ontological foundations in which the Trust Pattern Language (TPL) is grounded. Section 3 presents the set of requirements identified for the language (Sect. 3.1), which are needed for a formal evaluation of the language. Afterward, the individual modeling patterns that compose TPL are presented (Sect. 3.2), as well as a method for combining them (Sect. 3.3). In Sect. 4, we demonstrate how TPL can be used by presenting a real case example of trust in a COVID-19 data repository. We conclude in Sect. 5 with some final considerations.

2 Research Baseline

2.1 The Reference Ontology of Trust

The Reference Ontology of Trust (ROT)¹ is a UFO-based ontology that formally characterizes the concept of trust, clarifies the relation between trust and risk, and represents how risk emerges from trust relations [1]. ROT makes the following ontological commitments about the nature of trust:

¹ The complete version of ROT in OntoUML and its implementation in OWL are available at http://purl.org/krdb-core/trust-ontology.


– Trust is relative to a goal. An agent, the trustor, trusts someone or something, the trustee, only relative to a goal, for the achievement of which she counts upon the trustee.
– Trust is a complex mental state of a trustor regarding a trustee and her behavior. It is composed of: (i) a trustor’s intention, whose propositional content is a goal of the trustor; (ii) the belief that the trustee has the capability to perform the desired action or exhibit the desired behavior; and (iii) the belief that the trustee’s vulnerabilities will not prevent her from performing the desired action or exhibiting the desired behavior. When the role of trustee is played by an agent, trust is also composed of the trustor’s belief that the trustee has the intention to exhibit the desired behavior.
– The trustor is necessarily an “intentional entity”. Briefly put, the trustor is a cognitive agent, an agent endowed with goals and beliefs [5].
– The trustee is not necessarily a cognitive system. The trustee is an entity capable of having a (hopefully positive) impact on a goal of the trustor by the outcome of its behavior [5]. A trustee may be a person, an animal, a car, a vaccine, etc.
– Trust is context dependent. The trustor may trust the trustee for a given goal in a given context, but not do so for the same goal in a different context. We assume trust relations to be highly dynamic [5].
– Trust implies risk. By trusting, the trustor accepts to become vulnerable to the trustee in terms of potential failure of the expected behavior and result, as the trustee may not exhibit the expected behavior or it may not have the desired result [13, p. 21].

Figure 1 depicts an excerpt of ROT, represented in OntoUML, an ontology-driven conceptual modeling language based on UFO [11].

Fig. 1. A fragment of ROT depicting the mental aspects of trust

In ROT, Trust is modelled as a complex mode (an externally dependent entity, which can only exist by inhering in other individuals [10]) composed of an Intention whose propositional content is a goal of the


Trustor, and a set of Beliefs that inhere in the Trustor and are externally dependent on the Dispositions [3,9] that inhere in the Trustee. These beliefs include: (i) the Belief that the Trustee has the Capability to exhibit the desired behavior (Capability Belief); and (ii) the Belief that the Trustee’s Vulnerabilities will not prevent her from exhibiting the desired behavior (Vulnerability Belief). The Trustee’s Vulnerabilities and Capabilities are dispositions that inhere in the Trustee and are manifested in particular situations, through the occurrence of events [9]. Social Trust is a specialization of Trust in which the Trustee is an Agent. Therefore, this form of trust is also composed of the Trustor’s belief that the Agent Trustee has the Intention to perform the desired action (Intention Belief). The influences relation represents that an instance of Trust can influence another (positively or negatively) [15].

ROT relies on the Common Ontology of Value and Risk (COVER) (Fig. 2), proposed by Sales et al. [19], to represent the relation between trust and risk. COVER proposes an ontological analysis of notions such as Risk, Risk Event and Vulnerability, among others. A central notion for characterizing risk in COVER is a chain of events that impacts an agent’s goals, which the authors name a Risk Experience. Risk Experiences focus on unwanted events that have the potential of causing losses and are composed of events of two types, namely threat and loss events. A Threat Event is one with the potential of causing a loss, which might be intentional or unintentional. A Threat Event might be the manifestation of: (i) a Vulnerability (a special type of disposition whose manifestation constitutes a loss or can potentially cause a loss from the perspective of a stakeholder); or (ii) a Threat Capability (a capability whose manifestation enables undesired events that threaten an agent’s ability to achieve a goal).
The second mandatory component of a Risk Experience is a Loss Event, which necessarily impacts intentions in a negative way [19].

Fig. 2. A fragment of COVER depicting risk experience [19]

2.2

ArchiMate

Modeling Trust in Enterprise Architecture

77

ArchiMate is a modeling standard that defines a layered structure by means of which the architecture of enterprises can be described [21]. The language is organized in six layers, namely Strategy, Business, Application, Technology, Physical, and Implementation & Migration [21]. In this paper, we focus on the elements of the Strategy and Business layers.

A model in ArchiMate is a collection of elements and relationships. Each element is classified according to its nature, referred to as its “aspect”: an Active Structure Element represents an entity that is capable of performing behavior; a Passive Structure Element represents a structural element that cannot perform behavior; a Behavior Element represents a unit of activity performed by one or more active structure elements; a Motivation Element provides the context of, or reason behind, the architecture; and a Composite Element is simply one that aggregates other elements.

The most relevant ArchiMate elements for the TPL are: (i) Stakeholder, Driver, Assessment and Goal (Motivation Elements); (ii) Resource (a Passive Structure Element); (iii) Business Actor (an Active Structure Element); (iv) Capability and Business Event (Behavior Elements); and (v) Grouping. As for relations, the most relevant ones are: (i) Composition and Realization (when applied to Structural elements); (ii) Influence (a sort of Dependency); (iii) Triggering (when applied to Behavior); and (iv) Association (which can be used flexibly in many contexts to relate elements when more specific relations are not available). A detailed definition of the concepts of the language can be found in the ArchiMate specification [21].
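The classification of elements by aspect can be made concrete in a small sketch. The grouping below is an illustrative reading of the ArchiMate standard for the elements the TPL uses; the variable and function names are ours, not part of the language or the TPL:

```python
# Illustrative sketch: the TPL-relevant ArchiMate elements, grouped by
# the aspect the standard assigns to each of them (names are hypothetical).
ASPECT_OF = {
    "Stakeholder": "Motivation",
    "Driver": "Motivation",
    "Assessment": "Motivation",
    "Goal": "Motivation",
    "Resource": "Passive Structure",
    "Business Actor": "Active Structure",
    "Capability": "Behavior",
    "Business Event": "Behavior",
    "Grouping": "Composite",
}

def motivation_elements():
    """Return the TPL-relevant elements that ArchiMate classifies as Motivation."""
    return sorted(k for k, v in ASPECT_OF.items() if v == "Motivation")
```

For instance, the four Motivation Elements listed above (Stakeholder, Driver, Assessment, Goal) are exactly those recovered by `motivation_elements()`.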

3

A Pattern Language for Trust Modeling

3.1

Language Requirements

According to Buschmann et al. [4], “a pattern describes a particular recurring design problem that arises in specific design contexts and presents a well-proven solution for the problem”. Deutsch [6] defines a pattern language as “a set of patterns and relationships among them that can be used to systematically solve coarse-grained problems”.

We have established two types of requirements in the design of the TPL: (i) analysis requirements, which refer to what the models produced with the language should help users to achieve, either by means of automated or manual analysis; and (ii) ontological requirements, which refer to the concepts and relations the language should have in order to accurately represent its domain of interest and thus support its intended uses. Below we present the analysis requirements for the TPL:

R1. Trustworthiness analysis: an enterprise should be able to gain insight into why it trusts certain key resources, actors or partners (or even whether it should do so in the first place). In particular, for a given trust relation, the enterprise should be able to identify the capabilities and vulnerabilities of a particular trustee that are the focus of its beliefs, so that it can detect potential threats to the achievement of its goals. From the opposite perspective, the enterprise should be able to identify what makes it trustworthy (or not) from the point of view of its customers and partners, possibly identifying what it


could change to increase trust levels, as well as the key capabilities it needs to guarantee in order to promote well-placed trust.

R2. Risk analysis: by modeling the elements that compose the complex mental state of trust of a trustor regarding a trustee, an enterprise should be able to identify risks that can emerge as a consequence of either the manifestation of a trustee’s vulnerability or the unsatisfactory manifestation of a trustee’s capability.

The ontological requirements consist of an isomorphic representation of the concepts and relations defined in the Reference Ontology of Trust, on which the TPL is based. In addition to the aforementioned requirements, we assume the following constraints for the TPL:

R3. It should rely exclusively on constructs available in ArchiMate 3.0.1 [21], in an effort to retain its user base and tool support, as well as to prevent adding complexity to the language.

R4. It should map trust-related concepts into ArchiMate constructs maintaining, as much as possible, their original meaning as described in the standard. Specialized semantics should be addressed via stereotypes, thus constituting a lightweight extension of the language.

3.2

Trust Modeling Patterns

Trust Assessment. This pattern allows modelers to represent a trust relation between a trustor and a trustee, in which the former trusts the latter with respect to an intention (whose propositional content is a goal, for the achievement of which the trustor counts upon the trustee). The trustor is always a cognitive agent, endowed with goals and beliefs. The trustee, in turn, is an entity able to cause an impact (hopefully positive) on a trustor’s goal through the outcome of its behavior. Note that the role of trustee can be played not just by agents, but also by objects, such as rules, procedures, conventions, infrastructures, tools, artifacts in general, as well as different types of social systems. For this reason, this pattern has two variants, depending on the type of the trustee.

The first variant, depicted in Fig. 3a, details the trust relation when the trustee is an object. It consists of a Structure Element, the trustee, connected to a «Trust» Assessment, which in turn is connected both to a Stakeholder, the trustor, and to the Goal she is counting on achieving. Attached to the «Trust» Assessment is the Trust Degree, an attribute expressed as an entry in a scale chosen by the modeler, either discrete or continuous. An example of this first variant is shown in Fig. 3b. In the second variant, the trustee is a cognitive agent and is thus modeled as a «Trustee» Stakeholder.

Capability Belief. This pattern allows modelers to express which capability of the trustee is the focus of a capability belief of the trustor. Capabilities are dispositions that inhere in agents and objects, which are manifested in particular situations, through the occurrence of events. They are usually understood as positive dispositions, in the sense that they enable the manifestation of


Fig. 3. The trust assessment pattern

events desired by an agent. The generic structure of the Capability Belief Pattern is depicted in Fig. 4a. It connects a Capability Belief Assessment of a «Trustor» Stakeholder to the corresponding Capability of a «Trustee». An application of this pattern is presented in Fig. 4b.

Fig. 4. The capability belief pattern

Vulnerability Belief. This pattern allows modelers to express which vulnerability of the trustee is the focus of a vulnerability belief of the trustor. Vulnerabilities are a special type of disposition whose manifestation constitutes a loss or can potentially cause a loss from the perspective of a stakeholder. The generic structure of the Vulnerability Belief Pattern is depicted in Fig. 5a. It connects a Vulnerability Belief Assessment of a «Trustor» Stakeholder to the corresponding Vulnerability of a «Trustee». Figure 5b presents an application example of this pattern.

Fig. 5. The vulnerability belief pattern

Intention Belief. This pattern allows modelers to express which intention of the trustee is the focus of an intention belief of the trustor. Its generic structure is depicted in Fig. 6a. It connects an Intention Belief Assessment of a


Fig. 6. The intention belief pattern

«Trustor» Stakeholder to the corresponding Goal of a «Trustee». Figure 6b presents an application example for this pattern.

Trust Composition. To account for what makes an agent trust a resource or another agent, we introduce the Trust Composition Pattern, which details the complex mental state of the trustor. Understanding the elements that compose trust is important because they reveal the qualities and properties the trustee should have in order to be considered trustworthy and effectively promote well-placed trust. This pattern refines the Trust Assessment Pattern by detailing the decomposition of the «Trust» Assessment into the beliefs of the trustor about the trustee. It has two variants, as the beliefs of the trustor vary according to the trustee type.

The first variant, depicted in Fig. 7a, details trust when the trustee is not a cognitive agent. In this case, we use the Capability Belief Pattern and the Vulnerability Belief Pattern to represent that the «Trust» Assessment is composed of Belief Assessments of the trustor regarding the Capabilities and Vulnerabilities of the trustee (the trustor believes that the trustee has the capability to exhibit a desired behavior and that its vulnerabilities will not prevent it from exhibiting this behavior). Figure 7b shows an application example of this pattern. In the second variant, the trustee is a cognitive agent endowed with goals and, therefore, her intentions are also part of the set of beliefs that compose trust. Besides believing that the trustee is capable of exhibiting a desired behavior and that her vulnerabilities will not stop her from doing so, the trustor believes that the trustee has the intention to exhibit the aforementioned behavior. Therefore, in this case, the Intention Belief Pattern is used in addition to the Capability Belief and Vulnerability Belief Patterns to represent the «Trust» Assessment.

Risk Experience. In order to account for how risk emerges from trust relations, we propose the Risk Experience Pattern, presented in Fig. 8. Once the components of trust are known (decomposed using the Trust Composition Pattern), it is possible to identify the risks related to the capabilities and vulnerabilities of the trustee, which are the focus of the trustor’s beliefs. Our modeling strategy is directly inspired by the risk modeling approach proposed by Sales et al. [18]. Given the objectives of our pattern, we focus here on the perspective of risk as a chain of events that impacts an agent’s goals, which the authors name Risk Experience. Risk Experiences focus on unwanted events that have the potential of causing losses and are composed of events of two types,


Fig. 7. The trust composition pattern

namely threat and loss events [18]. A Threat Event is one with the potential of causing a loss. As described in [18], it might be the manifestation of: (i) a Vulnerability; or (ii) a Threat Capability (as mentioned before, capabilities are usually perceived as beneficial, as they enable the manifestation of events desired by an agent; however, when the manifestation of a capability enables undesired events that threaten an agent’s ability to achieve a goal, it can be seen as a Threat Capability). The second mandatory component of a Risk Experience is a Loss Event, which necessarily impacts intentions in a negative way.

Following the strategy of Sales et al. [18], we mapped Risk Experience as a Grouping decorated with the «RiskExperience» stereotype. Such a grouping should aggregate the elements and relations in an experience. We then associated the «RiskExperience» Grouping with risks, which are mapped as «Risk» Drivers, as drivers represent “conditions that motivate an organization to define its goals and implement the changes necessary to achieve them” [21].

The first variant, depicted in Fig. 8a, allows modelers to represent the existence of risks related to Vulnerabilities of the trustee that are the focus of beliefs of the trustor. A «ThreatEvent» Event might be the manifestation of a Vulnerability and may lead to a «LossEvent» Event, which impacts the Trustor Intention in a negative way, as it hurts her intention of reaching a specific goal. A «HazardAssessment» Assessment stands for a situation that activates vulnerabilities and threat capabilities, which in turn are manifested as «ThreatEvent» Events. Since ArchiMate does not provide a native construct for modeling situations in general, we followed the approach used in [18] and represent hazardous situations as assessments about them. Figure 8b shows an application example of this pattern.

The second variant is similar to the previous one, as it also represents the existence of risks related to a disposition of the trustee, though in this case the disposition is a Threat Capability. As previously mentioned, when the manifestation of a capability enables undesired events that threaten an agent’s ability to achieve a goal, it can be seen as a Threat Capability. Analogous


to the former variant, a «ThreatEvent» Event might be the manifestation of a Threat Capability of the trustee if the trustee fails to perform this specific Capability that was supposed to bring about an outcome desired by the trustor. Finally, the «ThreatEvent» Event can trigger a «LossEvent» Event, which has a negative impact on the Trustor Intention.

Fig. 8. The risk experience pattern

Risk Assessment. This pattern, also extracted from [18], complements our approach to the modeling of risks that emerge from trust relations. It consists of a Risk Assessment made by a Stakeholder about a «Risk» Driver, which in turn is associated with a «RiskExperience» Grouping. In addition, the Risk Assessment is connected to a «ControlObjective» Goal, a sort of high-level goal that defines what the organization intends to do about an identified risk. Control Goals are connected to «ControlMeasure» Requirements, which represent desired properties of solutions – or means – to realize such goals. Using this pattern, depicted in Fig. 9, it is possible to model the realization of control measures by any set of core elements, such as business processes (e.g. a data quality management process), application services (e.g. a scanning service) or nodes (e.g. a document management system).

Fig. 9. The risk assessment pattern

Trust Influencing Trust. This pattern allows modelers to represent that trust can influence trust, either positively or negatively. For example, one’s trust in the local police officer may increase one’s trust in the “judiciary system”. It can further be used to characterize the existence of “trust by delegation”. The idea behind “trust by delegation” is that when, for example, Alice trusts Bob, and Bob trusts Charlie, then Alice can derive a measure of “trust by delegation” in Charlie. In this case the «Trust» Assessments “Alice trusts Bob” and “Bob trusts


Charlie” positively influence the «Trust» Assessment “Alice trusts Charlie”. As shown in Fig. 10, the pattern makes explicit the influence association between a «Trust» Assessment and the one under its influence.

Fig. 10. The trust influencing trust pattern

The mapping between the ontological trust-related concepts and their representation in ArchiMate is listed in Table 1.

Table 1. Representation of trust and risk-related concepts in ArchiMate.

Concept                | Representation in ArchiMate
Trust                  | «Trust» Assessment
Trustor                | «Trustor» Stakeholder
Trustee                | «Trustee» Stakeholder or «Trustee» Structure Element
Trust Degree           | Attribute of a «Trust» Assessment
Capability             | Capability
Vulnerability [18]     | «Vulnerability» Capability
Intention              | Goal
Belief                 | Assessment
Capability Belief      | Assessment connected to a Capability
Vulnerability Belief   | Assessment connected to a «Vulnerability» Capability
Intention Belief       | Assessment connected to a Goal
Risk [18]              | «Risk» Driver
Risk Assessment [18]   | Assessment connected to a «Risk» Driver
Risk Assessor [18]     | Stakeholder connected to a Risk Assessment
Risk Experience [18]   | «RiskExperience» Grouping
Threat Event [18]      | «ThreatEvent» Event
Loss Event [18]        | «LossEvent» Event
Hazard Assessment [18] | «HazardAssessment» Assessment
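Because the mapping is a fixed concept-to-construct lookup, it can be machine-checked against a model export. The sketch below is our own illustration of that idea, not tooling that accompanies the TPL; all function and variable names are hypothetical:

```python
# Illustrative sketch: the TPL concept-to-ArchiMate mapping as a lookup
# table, usable to validate stereotyped elements. Names are hypothetical.
TPL_MAPPING = {
    "Trust": ("Assessment", "Trust"),
    "Trustor": ("Stakeholder", "Trustor"),
    "Trust Degree": ("Assessment attribute", None),
    "Vulnerability": ("Capability", "Vulnerability"),
    "Risk": ("Driver", "Risk"),
    "Risk Experience": ("Grouping", "RiskExperience"),
    "Threat Event": ("Event", "ThreatEvent"),
    "Loss Event": ("Event", "LossEvent"),
    "Hazard Assessment": ("Assessment", "HazardAssessment"),
}

def check_element(concept: str, archimate_type: str, stereotype: str) -> bool:
    """Return True iff a stereotyped element uses the construct the table prescribes."""
    return TPL_MAPPING.get(concept) == (archimate_type, stereotype)
```

For example, a «Risk» Driver passes the check for the concept Risk, while modeling a Threat Event as a Driver would be flagged as a mapping violation.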

3.3

Combining the Patterns

To use the TPL, a modeler may start by applying the Trust Assessment Pattern to identify both the trustor and the trustee, as well as the goal of the trustor, for the achievement of which she is counting on the trustee. The modeler should then use the Trust Composition Pattern by iteratively applying the Capability Belief Pattern, the Vulnerability Belief Pattern, and the Intention Belief Pattern (the latter only if the trustee is an agent) in order to detail the components of trust: the capabilities, vulnerabilities, and intentions of the trustee, which are the focus of the trustor’s beliefs.

For each vulnerability and capability, the modeler should apply the Risk Experience Pattern to identify the risks that can emerge when either the vulnerabilities are manifested or the capabilities are not manifested as expected (in which case they play the role of threat capabilities). Finally, for each risk driver identified, the modeler may apply the Risk Assessment Pattern to evaluate the impact of risks and establish procedures for effective risk control, treatment, and mitigation. As previously mentioned, from this pattern it is possible to model the realization of control measures by describing how the many pieces of an enterprise’s application and technology infrastructure work together to properly manage risks that emerge from trust relations.

Additionally, the Trust Influencing Trust Pattern can be applied to make explicit how trust relations influence each other (for instance, Alice trusting an online store can influence her brother to trust the online store too), as well as to characterize the existence of trust by delegation (for example, if Alice trusts Bob, and Bob trusts an information source, then it may be the case that Alice trusts the information source “by delegation”). Detailed diagrams presenting the complete process of combining the patterns can be found at https://purl.org/krdbcore/trust-archimate.
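The pattern sequence above can be sketched as plain data structures. This is an illustrative reading of the workflow under our own naming assumptions (the classes, fields, and functions below are hypothetical, not part of the TPL):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrustAssessment:
    """One «Trust» Assessment, built by applying the TPL patterns in order.
    All class and attribute names are hypothetical illustrations."""
    trustor: str
    trustee: str
    goal: str
    trustee_is_agent: bool = False
    capability_beliefs: List[str] = field(default_factory=list)     # Capability Belief Pattern
    vulnerability_beliefs: List[str] = field(default_factory=list)  # Vulnerability Belief Pattern
    intention_beliefs: List[str] = field(default_factory=list)      # Intention Belief Pattern (agents only)

def risk_candidates(t: TrustAssessment) -> List[str]:
    """Risk Experience Pattern: a risk may emerge either when a vulnerability
    is manifested, or when a capability is not manifested as expected."""
    risks = [f"manifestation of vulnerability: {v}" for v in t.vulnerability_beliefs]
    risks += [f"unsatisfactory manifestation of capability: {c}" for c in t.capability_beliefs]
    return risks
```

A trust assessment with one capability belief and one vulnerability belief thus yields two candidate risk experiences, one per disposition, each of which could then be subjected to the Risk Assessment Pattern.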

4

Case Study

In this section, we present a realistic study in which we use the TPL to model a case of “misplaced trust” in a COVID-19 data repository, which resulted in the retraction of a publication from a highly influential and prestigious medical journal. In particular, we refer to the case of a recent study published in The Lancet journal [16], which relied on data gathered by a US healthcare analytics company called Surgisphere to report issues on the efficacy and safety of hydroxychloroquine (HCQ) for treating COVID-19. When the study was first published it prompted the World Health Organization (WHO) along with several countries to pause trials on this drug. However, this very study was retracted [17] a few days later (and the clinical trials resumed), as concerns were raised with respect to the veracity of the data, leading the authors to recognize that they could no longer vouch for the veracity of the database at the heart of the study. Examples of problems encountered include errors in the Australian data and the fact that independent reviewers could not verify the validity of the data, as Surgisphere would not give access to the full dataset, citing confidentiality and client agreements [17].


Given the limited space available, we only present relevant fragments of the resulting model. The complete case study is available at https://purl.org/krdbcore/trust-archimate. An investigation of the characteristics a COVID-19 data repository should have in order to be held in a position of trust by the communities it intends to serve is presented in an accompanying technical report [2], available at the above-mentioned URL.

We start with the application of the Trust Assessment Pattern to identify the trustees, the trustors, and their goals. In our case study, different trust relations can be observed: (i) the Publication Authors trust the COVID-19 data repository to evaluate the safety and effectiveness of hydroxychloroquine for the treatment of COVID-19; (ii) the Publication Authors trust the Surgisphere Staff to create and maintain the COVID-19 data repository; (iii) The Lancet trusts the Publication Authors in order to accept publishing the study; (iv) WHO trusts The Lancet to have reliable information to make decisions w.r.t. recommendations on the treatment of diseases; (v) WHO trusts (by delegation) the Publication Authors to have reliable information to make decisions w.r.t. recommendations on the treatment of diseases; and (vi) Countries trust WHO to provide reliable recommendations on the treatment of diseases. Figures 11a and 11b depict the modeling of trust relations (i) and (ii), respectively.

Fig. 11. Application of the trust assessment pattern

We proceed by iteratively applying the Capability Belief Pattern (Fig. 12a) and the Vulnerability Belief Pattern (Fig. 12b) to detail the Publication Authors’ beliefs with respect to the COVID-19 data repository (trust assessment depicted in Fig. 11a). Finally, in Fig. 13 we use the Trust Composition Pattern to detail the trust complex mental state of the Publication Authors in their trust relation with the COVID-19 data repository. Note that the capabilities and vulnerabilities which are the focus of the Publication Authors’ beliefs were identified based on the trust concerns for COVID-19 data presented in [2], such as transparency, privacy of data, respect for human rights and data quality.

Fig. 12. Capability and vulnerability beliefs


Fig. 13. Composition

Once the components of trust are known, it is possible to reason about possible manifestations of vulnerabilities and (threat) capabilities of the COVID-19 data repository, which can enable undesired events that threaten the Publication Authors’ ability to achieve their goal. Using the Risk Experience Pattern, we represent, in Fig. 14, the emergence of the risk of “repository loss of credibility” caused by the poor quality of the data (a vulnerability), which revealed errors in the data, thus preventing the authors from attesting the validity of the study. We then apply the Risk Assessment Pattern (Fig. 15) to represent the evaluation of the risk of “repository loss of credibility” by the Surgisphere Staff, as well as the establishment of procedures for effective risk control (improve data quality) and the definition of a control

Fig. 14. Risk experience

Fig. 15. Risk assessment


measure that describes how Surgisphere plans to realize these procedures (implement data quality management). Lastly, we use the Trust Influencing Trust Pattern to make explicit how some of these trust assessments influence each other. In Fig. 16a we may observe that “WHO trusting The Lancet” positively influences “WHO’s trust in the Publication Authors”. Similarly, the Publication Authors’ trust in the Surgisphere Staff’s expertise positively influences their trust in the COVID-19 data repository (Fig. 16b). Note that, as previously mentioned, this pattern can also be applied to characterize the existence of “trust by delegation”. For example, considering that (1) “WHO trusts The Lancet” and (2) “The Lancet trusts the Publication Authors”, there is a great chance that (3) “WHO trusts the Publication Authors” by delegation, in which case both (1) and (2) positively influence (3).

Fig. 16. Trust influencing trust
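The “trust by delegation” reading amounts to following influence edges between «Trust» Assessments. The sketch below is one possible, hypothetical formalization of that derivation step (not prescribed by the TPL or ROT): from pairs of direct trust relations it derives candidate delegated relations, each positively influenced by the two direct ones:

```python
def delegated_trust_pairs(trusts):
    """Given a set of direct trust pairs (trustor, trustee), derive candidate
    'trust by delegation' pairs: if A trusts B and B trusts C, then A may
    derive trust in C, positively influenced by both direct relations.
    Illustrative sketch; names are hypothetical."""
    derived = set()
    for a, b in trusts:
        for b2, c in trusts:
            if b == b2 and a != c and (a, c) not in trusts:
                derived.add((a, c))
    return derived
```

Applied to the case study, the direct pairs (WHO, The Lancet) and (The Lancet, Publication Authors) yield the delegated candidate (WHO, Publication Authors), matching relation (v) above.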

5

Conclusions

In this paper we presented the TPL, a pattern language for modeling trust in ArchiMate that is based on ROT, a recently proposed ontology that provides clear real-world semantics for the constituent elements of trust and describes the emergence of risk from trust relations. Although trust towards agents and resources is a known concern in the literature, little has been said about what constitutes the stakeholders’ trust in a given organization or resource, or about how these trust concerns permeate the enterprise architecture. The TPL was designed to address these issues. In particular, it allows modelers to represent: (i) the elements that constitute the trust of an agent with respect to a resource or another agent, including organizations; (ii) the capabilities and vulnerabilities of trustees that are the focus of the trustor’s beliefs in a trust assessment; (iii) the influence that trust assessments have on each other; (iv) the risks that can emerge from trust relations; and (v) risk assessments related to these risk drivers.

This work is part of a long-term research program that aims at using UFO as a semantic foundation for enterprise modeling (in particular, for ArchiMate). Next, we envision that this effort can be harmonized with previous work (on value [20], risk [18], service contracts [8], resources and capabilities [3]) to provide a comprehensive ontology-based enterprise modeling approach. We also plan to conduct empirical experiments to validate the TPL. In addition, we want to further evolve the trust ontology to allow the representation of “pieces of evidence” for trustworthiness, which come from elements such as a history of performance and trusted third-party certifications.


Acknowledgments. CAPES (PhD grant 88881.173022/2018-01) and NeXON project (UNIBZ). João Paulo A. Almeida is funded by the Brazilian National Council for Scientific and Technological Development CNPq (grant 312123/2017-5).

References

1. Amaral, G., Sales, T.P., Guizzardi, G., Porello, D.: Towards a reference ontology of trust. In: Panetto, H., Debruyne, C., Hepp, M., Lewis, D., Ardagna, C.A., Meersman, R. (eds.) OTM 2019. LNCS, vol. 11877, pp. 3–21. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33246-4_1
2. Amaral, G., Sales, T.P., Guizzardi, G., Porello, D.: Trust Concerns for Digital Data Repositories: the COVID-19 Data Domain. Technical Report, Free University of Bozen-Bolzano (2020)
3. Azevedo, C.L., Iacob, M.E., Almeida, J.P.A., van Sinderen, M., Pires, L.F., Guizzardi, G.: Modeling resources and capabilities in enterprise architecture: a well-founded ontology-based proposal for ArchiMate. Inf. Syst. 54, 235–262 (2015)
4. Buschmann, F., Henney, K., Schmidt, D.C.: Pattern-Oriented Software Architecture, on Patterns and Pattern Languages. John Wiley & Sons, Hoboken (2007)
5. Castelfranchi, C., Falcone, R.: Trust Theory: A Socio-Cognitive and Computational Model. John Wiley & Sons, Hoboken (2010)
6. Deutsch, P.: Models and patterns. In: Software Factories: Assembling Applications with Patterns, Frameworks, Models and Tools. John Wiley & Sons, Hoboken (2004)
7. Falbo, R., Barcellos, M., Ruy, F., Guizzardi, G., Guizzardi, R.: Ontology pattern languages. In: Ontology Engineering with Ontology Design Patterns: Foundations and Applications. IOS Press, Amsterdam (2016)
8. Griffo, C., Almeida, J.P.A., Guizzardi, G., Nardi, J.C.: From an ontology of service contracts to contract modeling in enterprise architecture. In: Proceedings of the 21st IEEE EDOC, pp. 40–49 (2017)
9. Guizzardi, G., Wagner, G., de Almeida Falbo, R., Guizzardi, R.S.S., Almeida, J.P.A.: Towards ontological foundations for the conceptual modeling of events. In: Ng, W., Storey, V.C., Trujillo, J.C. (eds.) ER 2013. LNCS, vol. 8217, pp. 327–341. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41924-9_27
10. Guizzardi, G.: Ontological foundations for structural conceptual models. Telematica Instituut Fundamental Research Series, No. 15, ISBN 90-75176-81-3 (2005)
11. Guizzardi, G., Wagner, G., Almeida, J.P.A., Guizzardi, R.S.S.: Towards ontological foundations for conceptual modeling: the Unified Foundational Ontology (UFO) story. Appl. Ontol. 10(3–4), 259–271 (2015)
12. Hevner, A., Chatterjee, S.: Design science research in information systems. In: Design Research in Information Systems, pp. 9–22. Springer (2010). https://doi.org/10.1007/978-1-4419-5653-8_2
13. Luhmann, N.: Trust and Power. John Wiley & Sons, Hoboken (2018)
14. Mayer, N., Feltus, C.: Evaluation of the risk and security overlay of ArchiMate to model information system security risks. In: 2017 IEEE 21st International Enterprise Distributed Object Computing Workshop, pp. 106–116. IEEE (2017)
15. McKnight, D.H., Chervany, N.L.: Trust and distrust definitions: one bite at a time. In: Falcone, R., Singh, M., Tan, Y.-H. (eds.) Trust in Cyber-societies. LNCS (LNAI), vol. 2246, pp. 27–54. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-45547-7_3


16. Mehra, M.R., Desai, S.S., Ruschitzka, F., Patel, A.N.: RETRACTED: Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. The Lancet, May 2020
17. Mehra, M.R., Ruschitzka, F., Patel, A.N.: Retraction—Hydroxychloroquine or chloroquine with or without a macrolide for treatment of COVID-19: a multinational registry analysis. The Lancet 395(10240), 1820 (2020)
18. Sales, T.P., Almeida, J.P.A., Santini, S., Baião, F., Guizzardi, G.: Ontological analysis and redesign of risk modeling in ArchiMate. In: 2018 IEEE 22nd International Enterprise Distributed Object Computing Conference, pp. 154–163. IEEE (2018)
19. Sales, T.P., Baião, F., Guizzardi, G., Almeida, J.P.A., Guarino, N., Mylopoulos, J.: The common ontology of value and risk. In: Trujillo, J.C., et al. (eds.) ER 2018. LNCS, vol. 11157, pp. 121–135. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00847-5_11
20. Sales, T.P., Roelens, B., Poels, G., Guizzardi, G., Guarino, N., Mylopoulos, J.: A pattern language for value modeling in ArchiMate. In: Giorgini, P., Weber, B. (eds.) CAiSE 2019. LNCS, vol. 11483, pp. 230–245. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-21290-2_15
21. The Open Group: ArchiMate 3.0.1 Specification. Standard C179 (2017)

Towards Enterprise-Grade Tool Support for DEMO

Mark A. T. Mulder 1,2 and Henderik A. Proper 3,4

1 TEEC2, Hoevelaken, The Netherlands
[email protected]
2 Radboud University, Nijmegen, The Netherlands
3 Luxembourg Institute of Science and Technology (LIST), Belval, Luxembourg
[email protected]
4 University of Luxembourg, Luxembourg City, Luxembourg

Abstract. The Design and Engineering Methodology for Organizations (DEMO) method is a core method within the discipline of Enterprise Engineering (EE). It enables the creation of so-called essential models of organizations, which are enterprise models focusing on the organizational essence of an organization, primarily in terms of the actor roles involved and the business transactions between these actor roles. The DEMO method has a firm theoretical foundation. At the same time, there is an increasing uptake of DEMO in practice, and with that uptake comes a growing need for enterprise-grade tool support. In this paper, we therefore report on a study concerning the selection, configuration, and extension of an enterprise-grade tool platform to support the use of DEMO in practice. The selection process resulted in the choice of Sparx Enterprise Architect for further experimentation in terms of configuration towards DEMO. Configuring this tool framework to support DEMO modelling also provided feedback on the consistency and completeness of the DEMO Specification Language (DEMOSL), the specification language that accompanies the DEMO method.

Keywords: Enterprise Engineering · DEMO · Modelling tools

1 Introduction

The Design and Engineering Methodology for Organizations (DEMO) [5] method is a core method (based on a theoretically founded methodology) within the discipline of Enterprise Engineering (EE) [6]. The DEMO method focuses on the creation of so-called essential models of organizations. The latter models capture the organizational essence of an organization, primarily in terms of the actor roles involved, as well as the business transactions [24] (and ultimately the speech acts [13]) between these actor roles. More specifically, an essential model comprises the integrated whole of four aspect models: the Construction Model (CM), the Action Model (AM), the Process Model (PM) and the Fact Model (FM). Each of these models is expressed in one or more diagrams and one or more cross-model tables.

© IFIP International Federation for Information Processing 2020
Published by Springer Nature Switzerland AG 2020. All Rights Reserved
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 90–105, 2020.
https://doi.org/10.1007/978-3-030-63479-7_7

Towards Enterprise-Grade Tool Support for DEMO


DEMO has strong methodological and theoretical roots [5, 6, 24]. At the same time, there is an increasing uptake of DEMO in practice. The latter is illustrated by the active usage (and certification) community1, reported cases concerning the use of DEMO [1, 4] and [7, chapter 19], as well as integrations with other mainstream enterprise modelling approaches such as ArchiMate [15, 25] and BPMN [2, 12, 19]. The increased uptake of DEMO also triggers the need for enterprise-grade tool support.

In this paper, we report on a study into the selection and configuration of a (generic) tooling platform to support the use of DEMO in practice. Configuring tool support for DEMO requires an elaborate formalization of DEMO's meta-model, which also enables the automatic verification of models. This exercise provided interesting insights into limitations of the DEMO Specification Language (DEMOSL), the specification language that accompanies the DEMO method.

The remainder of this paper is structured as follows. Section 2 provides more background on the DEMO method. In Sect. 3, we discuss the meta-model of DEMO as it should be supported by a modelling tool, as such providing a first set of requirements for tool support. Section 4 then continues by identifying additional requirements for enterprise-grade DEMO tooling. Based on these requirements, Sect. 5 briefly reports on the assessment of relevant tools and platforms, resulting in the use of Sparx Enterprise Architect for further experimentation. Section 6 reports on the configuration of Sparx Enterprise Architect to actually support DEMO models, also providing important feedback on DEMOSL. Finally, before concluding, Sect. 7 reports on some experiences with the use of the resulting tooling in practice.

2 DEMO

DEMO supports the modelling of the overall, as well as the more detailed, processes in an organization, and the underlying information processing. As mentioned before, the DEMO method has a strong methodological and theoretical foundation [5, 6, 24], while, at the same time, there is an increasing uptake of DEMO in practice [1, 4, 7]. Meanwhile, it has a proven track record in process (re)design and reorganizations, in software specifications based on the organization, in modelling business rules, and in demonstrating compliance with the General Data Protection Regulation (GDPR) and with International Organization for Standardization (ISO) and NEderlandse Norm (NEN) norms.

The DEMO method involves four key aspect models. Each aspect model involves its own kinds of diagrams and tables, providing viewpoints on the complete model.

Construction Model – The Construction Model (CM) involves the Organization Construction Diagram (OCD), showing the Transaction Kinds (TKs), Aggregate Transaction Kinds (ATKs), Elementary Actor Roles (EARs), and Composite Actor Roles (CARs) within a Scope of Interest (SoI). These diagrams show the dependencies between roles in execution and information. The high abstraction level makes this a compact diagram in relation to the implementation of the organization.

1 http://www.ee-institute.org/en, https://www.linkedin.com/company/enterprise-engineering-institute.


M. A. T. Mulder and H. A. Proper

The Transaction Product Table (TPT) shows the TK identification and description together with the product identification and description. This table is used to gain insight into the products that are being created in the organization. Finally, the Bank Contents Table (BCT) shows the contents of the ATKs. It contains the identification and name of each ATK, as well as the Entity Types (ETs) and the attributes of those ETs that are present. This is used to show the extent of (external) data.

Fig. 1. Poligyn OCD modelled in the tool

Fig. 2. Poligyn TPT modelled in the tool

Process Model – The PM involves a single diagram kind, the Process Structure Diagram (PSD), which shows the relations between the process steps of interrelated transactions. This is used to explain the order and dependencies between transactions. Business rules are partially covered as well.

Fig. 3. Poligyn PSD modelled in the tool

Fig. 4. Poligyn OFD modelled in the tool

Fact Model – The FM also involves a single diagram kind, the Object Fact Diagram (OFD), which shows ETs and Product Kinds (PKs), together with the Information Use Table (IUT). This model is often called the data model, although it shows much more information.

Action Model – The last aspect model is the AM, with its Action Rules Specifications (ARSs). For each non-trivial process step, at least one specification shows the input and the conditions to proceed in the transaction pattern or to advance to other transactions. This specification is used to model all details of the (to-be) organization.

Different aspect models contain overlapping elements; therefore, the DEMO essential model is the result of the combination of all aspect models.


3 The DEMO Meta-Model

Earlier work [20] towards DEMO tool support already resulted in improvements to DEMOSL 3.7 [8]. This has, amongst others, resulted in extra concepts in the ontological part of the meta-model, as well as updates to the verification rules, the data-meta-model and the exchange-meta-model. On the ontological level, common concepts have been added to the meta-model to support practical hierarchical concepts for actor roles, as well as the concept of scope of interest.

The data-meta-model extends the meta-model with attribute types that need to be registered, as well as property types that are required in an implementation environment. This, in particular, involves property types that may seem redundant in an ontological model but are necessary during the modelling process. The exchange-meta-model supports the storage and exchange of the allowed elements and connections. This model promotes the interchangeability of DEMO models between modelling tools and other verification, simulation or translation tooling, available or needed. The verification rules reduce the chance of creating an incorrect DEMO model.

Combining all partial ontological meta-models from DEMOSL and extending this meta-model with the extra information results in the data-meta-model. This data-meta-model is automation-oriented and includes all the properties we want to register for the concepts, but it does not involve the diagram-meta-model, which includes, e.g., information about the graphical layout of diagrams. The data-meta-model, as created inside the tool implementation, does contain all meta-structures for the elements and connections of the model. Each of these element and connection structures has some required and some optional properties. Instantiations of these element and connection structures make up the data model.
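As an illustration of such structures, the sketch below models an element kind with required and optional properties and validates an instantiation against it. The Python names and the example properties are our own assumptions, not DEMOSL definitions:

```python
from dataclasses import dataclass, field

@dataclass
class ElementKind:
    """Meta-structure for a model element kind, e.g. a Transaction Kind (TK)."""
    name: str
    required: set = field(default_factory=set)   # property names that must be present
    optional: set = field(default_factory=set)   # property names that may be present

    def validate(self, instance: dict) -> list:
        """Return a list of violations for one instantiated element."""
        errors = [f"missing required property '{p}'"
                  for p in self.required if p not in instance]
        allowed = self.required | self.optional
        errors += [f"unknown property '{p}'" for p in instance if p not in allowed]
        return errors

# A hypothetical Transaction Kind with an identifier and a product description.
tk = ElementKind("TransactionKind", required={"id", "product"},
                 optional={"description"})

print(tk.validate({"id": "TK01", "product": "membership started"}))  # []
print(tk.validate({"id": "TK02", "color": "red"}))  # missing/unknown properties
```

A real implementation would also cover connection structures and cross-element rules; the point here is only the split between required and optional properties per structure.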
For every diagram, there is a definition of the set of entity and property types, called the diagram meta-model. Different modelling techniques allow for the representation of different properties of the object that is being modelled; only a combination of those models comes close to a representation of the whole object. Therefore, for every component of the meta-models, we define:

1. Data-meta-model for the item and its property types.
2. Mathematical rules on the collections of items.
3. Exchange-meta-model of the objects, as XML Schema Definition (XSD) specifications:
   (a) XSD specification for the set
   (b) XSD specification for the item
   (c) XSD specification(s) for the types
   (d) XSD specification for the diagram element
4. Programming model to implement the rules and meta-model.
5. Visualization model in the selected tool platform.
6. Example model on the selected tool platform.


With this list of models and meta-models, the representation of the DEMO meta-model proved [20] sufficiently complete to validate a model and to exchange the information needed to recreate it. We remodelled the meta-model according to the findings in [21]. The ontological meta-model of DEMO shows the existing property types and concepts in black; the new property types and concepts are added in red. This meta-model is the base for the data-meta-model.

The ontological meta-model is the meta-model closest to one that lists all ontological principles of DEMO models. For example, a CAR cannot ontologically be the initiator of a transaction kind, because an underlying elementary actor role must be the initiator; therefore, this property type does not exist in the ontological meta-model.

In contrast to the ontological meta-model, the data-meta-model does contain all implementation attribute types and property types for the DEMO meta-model. The data-meta-model of DEMO (see Fig. 5) has property types between the concepts. We did not visualize the removed property types, but these can be found by comparing our model with DEMOSL 3.7 (e.g. we removed a domain property type, the precludes and precedes property types, and the TK initiated from the Transaction Process Step Kind (TPSK)). The example mentioned above, about the CAR not being an initiator, is not valid in the data-meta-model: whenever one designs a CAR that initiates a transaction, this property type instantiation must be present in the model and needs a representation in the meta-model. Therefore, the data-meta-model also has the property type 'AR is an initiator of TK' from the CAR to the TK. In an ontological model, one could reason that behind every CAR an EAR is present that covers the property type for the initiation. Furthermore, mathematical rules have been designed to make sure that only correct property type instantiations can be present in the final data-meta-model.
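The idea of such mathematical rules can be sketched as a whitelist of allowed property-type triples. Only the 'AR is an initiator of TK' property type from CAR to TK is taken from the text; the remaining triples and all Python names are illustrative assumptions:

```python
# Whitelist of allowed property-type triples (source concept, property, target concept).
ALLOWED = {
    ("EAR", "initiator of", "TK"),
    ("EAR", "executor of", "TK"),
    ("CAR", "initiator of", "TK"),  # allowed in the data-meta-model (not ontologically)
}

def check(model):
    """Return the property-type instantiations that violate the whitelist."""
    return [(s, p, t) for (s, p, t) in model if (s, p, t) not in ALLOWED]

model = [
    ("CAR", "initiator of", "TK"),  # valid in the data-meta-model
    ("CAR", "executor of", "TK"),   # invalid: a CAR cannot be an executor
]
print(check(model))  # [('CAR', 'executor of', 'TK')]
```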

4 Requirements for Tool Selection

For our research, we selected a tool (framework) to implement the DEMO meta-model based on the criteria listed below. We do not pretend that this set is complete or adequate for all users of a DEMO modelling tool; for the purposes of our research, however, we deemed these criteria appropriate. Additionally, an earlier version of these criteria, as reported in [22], has already been used by other researchers [12]. In the selection of the tool (framework), the following overall criteria were used:

1. The tool must be able to support the executability of the final model;
2. The tool must support the ways of modelling and thinking as described in the DEMO methodology;
3. The tool must support the interchange of models, in particular conforming to the VISI standard (the 'Voorwaarden scheppen voor de Invoering van Standaardisatie ICT in de bouw' (VISI) standard originates from the construction sector, where a considerable workflow automation group has developed a DEMO-based set of standards);
4. The tool must check the mathematical correctness of the model;
5. The tool must have an efficient storage of the model;


Fig. 5. DEMOSL data-meta-model

6. The tool must support the validation of a model with the stakeholders.

From this list, we derived the following requirements for the tool. Modelling an organization requires the use of all four DEMO aspect models. Therefore, we define the requirement:

Requirement 1. The tool must support the creation of all four aspect models of DEMO.

The specification of DEMO [8] states the meta-model and describes the visualization. Although DEMOSL is not complete, it certainly is the minimal description of, and guideline for, how to visualize DEMO diagrams, and the minimal set from which to create a meta-model describing the internal relationships of a DEMO model. To be compliant with these specifications, we need another requirement:

Requirement 2. The tool must be fully compliant with the DEMO meta-model, as specified in DEMOSL.

One of the research topics in the field of DEMO is the integration and combination with other modelling methods and notations [16, 17, 19, 25]. When we follow the recommendations of these publications, the best way to support the combination of modelling methods is to have a tool that supports various modelling methods and notations. This is such a practical benefit, apart from supporting further research, that we add it as a requirement:

Requirement 3. The tool must support multiple modelling methods and notations.

When going through DEMO models in published papers, we find incompleteness, structural incorrectness, and syntactic failures in various models [10, 11, 16]. To solve this problem, the model should be verified against the specification language. Conducting such a verification manually requires considerable effort, as it involves a large body of correctness rules; for example, the Xemod tool (see Sect. 5) already implements 23 rules for the CM [31]. When the task of verification takes more time than the modelling itself, we can accidentally introduce defects in the model. To compensate for the effort, the tool should automate the verification of models as much as possible. This leads us to the requirement:

Requirement 4. The tool must allow for model verification against the DEMO meta-model, DEMOSL.

The DEMO methodology itself does not prescribe a single modelling order that leads to the only correct model [5]. The Organizational Essence Revealing (OER) method only determines the components that need to be modelled and how these components must be connected. Therefore, the model cannot be built solely using requirement 4, and we need an extra requirement that allows for variation in modelling order:

Requirement 5. The tool must allow for model verification for any order in which a model is created.

Although four meta-models were specified in the original DEMOSL, we integrated these four meta-models into a single, integral meta-model. This single meta-model allows for model verification and for reuse of components of the model. We find the usage of a single repository useful, and therefore, we state the requirement:

Requirement 6.
The tool must keep produced models in an integral repository.

Modelling tools have been around for several decades, and still there is no modelling tool that is the de-facto standard for DEMO. The tools that are used for modelling DEMO seem to be missing features or are not capable of modelling all the aspect models. We looked at a selection of tools in use; although there might be more tools in use or capable of DEMO modelling, we limited our research to this set. We conjecture that existing tools are not optimal for DEMO in their standard, available versions. Therefore, we introduce our last requirement:

Requirement 7. The tool must allow for extending the tool capabilities when this is deemed necessary for full support of DEMO.

Finally, considering the need to support the use of DEMO in larger enterprises, a selected tool needs to be enterprise-grade, in the sense that it must be backed by a company or organization that can provide after-sales service and maintenance. This is not actually a requirement on the tool itself, but rather on its provider. As such, the tools considered in this study (see Sect. 5) were already pre-selected on this enterprise-grade requirement.


5 Considered Tools

In searching for modelling tools that support modelling in DEMO and that comply with the requirements listed above, we found ten candidates; the enterprise-grade requirement was used as a pre-selection criterion. The resulting ten candidates have been studied and checked against the requirements.

ARIS – by Software AG [28] is a business modelling tool. With the use of the ArchiMate modelling possibilities (Req. 3), the set of objects has been extended with a transaction and an actor role to allow for OCD modelling (Req. 1).

CaseWise Modeler – by CaseWise is a business modelling tool that collects and documents the way in which organizations work. The tool has an Application Programming Interface (API) and can, therefore, be extended; information about the extent of the API is unavailable [3]. The tool provides multiple diagrams, e.g. Entity Relation Diagram (ERD), Calendar, and Process simulation (Req. 3). Its diagram features can be used to create DEMO diagrams, except for the AM, which is too complex for the tool (Req. 1). The repository can contain standard and extended objects for one model (Req. 6). Regretfully, no validation business rules can be added to the tool.

Connexio Knowledge System – by Business Fundamentals [27] documents a representation of the current organization implementation using the OCD (Req. 1). It also stores ARSs and Work Instruction Specifications (WISs) to enable people to do and understand their jobs (knowledge management). It does not support all aspect models of DEMO, and it is proprietary and for internal use only.

DemoWorld – by ForMetis [9] is an online modelling tool for DEMO processes (Req. 1). It verifies some business rules of the OCD (Req. 5) and allows for OCD modelling and process animation. The tool cannot model all aspect models and has no further verification options. It is available for commercial use; students pay €50 per six months, and professors get six months of free usage.

EC-Mod – by Delta Change Consultants can visualize organizational flaws in the transaction kinds and actor roles, using a modified OCD (Req. 1). This OCD shows organizational flaws and can be verified (Req. 4). It was presented in 2016 but is for internal use only.

Enterprise Architect – by Sparx is an analysis and design tool [29] for the Unified Modelling Language (UML), SysML, Business Process Model and Notation (BPMN), and several other techniques (Req. 3). It covers the process from requirements gathering and analysis through model design, build, testing, and maintenance. Furthermore, it allows for the creation of meta-models to model one's own objects, connections and business rules (Req. 5, 7). The repository stores all objects to enable reuse on multiple diagrams (Req. 6).

ModelWorld – by EssMod [14] can support ArchiMate, DEMO, BPMN, UML and Mockups (Req. 3). The repository contains all models (Req. 6). It can visualize three aspect diagrams of DEMO (Req. 1), but these are not validated using business rules. Though the model can be simulated, no extensions can be written for this tool. Unfortunately, this tool is no longer available.

Open Modelling – [26] is a multi-model tool that supports Flowcharts, Integration Definition (IDEF) schemes, Application landscapes, DEMO, ArchiMate, Use Case diagrams, Component diagrams, State diagrams, ERD, Data Flow Diagrams (DFD), and Screen sequence diagrams.

uRequire Studio – by uSoft is a requirement management tool [30] that has the option to add an OCD (Req. 1). The requirements repository cannot store DEMO models; thus, diagrams are stored and only connected to the requirement definitions by object description. No other aspect diagrams of DEMO are available. The tool also supports BPMN diagrams (Req. 3) and is available for commercial use.

Visio – [18] by Microsoft is a drawing aid with templates for DEMO that resemble the graphical representation (Req. 1). Multiple diagram types can be made in a single file (Req. 3). It does not have a repository of objects. Diagram validation is possible by programming in Visual Basic for Applications (VBA) (Req. 7), but diagram objects cannot easily be related.

Xemod – by Mprise can integrally model three DEMO aspect diagrams within a single Scope of Interest per project file [31] (Req. 1), which might result in inconsistencies between multiple scopes of interest scattered over models. Business rules can be used to verify the consistency of the model between several elements (Req. 4, 5). Furthermore, the tool can show the OCD, PM, and FM as diagrams and lists (Req. 6). Unfortunately, this tool is no longer available on the market.

Table 1. Summary of the findings. (Columns: DEMO (1), DEMOSL (2), Multi-model (3), Verifiable (4), Business Rules (5), Repository (6), Extendable (7), Available, Simulation, Cloud/Local, Total score. Rows: ARIS, CaseWise Modeler, Connexio Knowledge System, DemoWorld, EC-Mod, Enterprise Architect, ModelWorld, uRequire Studio, Visio, Xemod.)
Table 1 provides a summary of the findings. We conclude from it that DEMOSL has not been completely implemented in any tool yet. Moreover, we conclude that no current tool can integrally model the whole method. To get beyond this impasse of tool builders waiting for usage and vice versa, we decided to extend a commonly used tool. The scores suggest that Sparx Enterprise Architect (SEA) is the best candidate


to fulfil the requirements when extended with DEMO support. SEA is broadly used, supports multiple models, can apply business rules, has a repository, is extendable, and is already used for architecture modelling. Therefore, this tool was chosen to model the meta-model and to implement DEMO as precisely as possible.

6 Implementation

The modelling tool SEA allows for extending the basic UML model with the extender's own concepts; these extensions are called profiles. During this research, we used tool versions 13.0.1310 up to 14.1.1429. SEA has a meta-model base consisting of UML data types, and by creating a new profile, these data types can be extended; many meta-models in SEA (e.g. ArchiMate, BPMN) use these UML data types as a base for their own model. The UML types used for our meta-model are therefore Class for entity types and Association for the relations between entity types.

To model the stereotype profile of the CM, we need to implement the DEMOSL concepts in the SEA modelling options. SEA uses a stereotype for each potentially visible object; therefore, the meta-model of the CM needs to be reduced to visible entity types. In the Construction Meta-model of DEMOSL 3.7, we can see that the meta-model contains six entity types. The entity type Independent

Fig. 6. OCD profile


P-Fact Kind (IFK), or EVENT TYPE, is the link between the CM and the FM. It is not displayed on the OCD and can be left out of the SEA meta-model for the CM, just like the FACT KIND. We already mentioned the missing Scope of Interest Boundary (SIB) in the CM meta-model; we use the CAR concept instead. Removing the IFK thus reduces the number of stereotypes concerning the CM to four, as shown in Fig. 6. We extended the meta-class Class for EAR, CAR, ATK, and TK and added properties, attributes, and shape scripts to the stereotypes. Each stereotype has a 'metatype' property showing the default name of the instantiated model element. The how and why of the other properties are available on request and will not be discussed in this section. We extended the meta-class Association to create connectors between the elements.

SEA allows each element and link to be represented by a graphical shape. The creation of a shape is supported by the shape script programming language, which has a syntax similar to C but a limited set of commands and functionality. Within a shape script, it is possible to read the element properties or the containing diagram properties, and these properties can be used in simple logic to determine the required shape to be drawn on the diagram. When choosing the visualization of a CAR, the programming features are too limited to automatically either draw it as an internal rectangle used as SoI, or draw it as a grey-filled external rectangle; the programming features do not allow for a context check of the visualization. Therefore, these visual properties of the elements within SEA must, at the moment, be set manually.

We implemented the verification of the model for multiple reasons. First of all, elements and connections can be added to the SEA repository by third parties using the API. Using this API entry path bypasses the checking of the business rules, because these checks are only triggered by User Interface (UI) events.
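To illustrate the kind of event-driven rule checking discussed here, the sketch below rejects invalid connections in a plain-Python stand-in for the repository. This is not the actual SEA add-in API; the hook name and data structure are our own assumptions, and the rules shown are simplified:

```python
class OCDModel:
    """Toy stand-in for a repository of OCD connections (not the SEA API)."""

    def __init__(self):
        self.connections = []  # (actor_role, kind, transaction_kind)

    def on_connect(self, actor, kind, tk):
        """Business-rule hook, analogous to a check on a UI connect event."""
        if kind == "executor" and any(
            a == actor and k == "executor" for (a, k, _) in self.connections
        ):
            return False  # an elementary actor role may execute only one TK
        if any(a == actor and t == tk for (a, _, t) in self.connections):
            return False  # only one connection per (actor role, TK) pair
        self.connections.append((actor, kind, tk))
        return True

m = OCDModel()
print(m.on_connect("A01", "executor", "TK01"))   # True
print(m.on_connect("A01", "executor", "TK02"))   # False: A01 already an executor
print(m.on_connect("A01", "initiator", "TK03"))  # True
```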
The presence of a verification engine allows modellers to create valid models. A model may be entered to the best of the modeller's ability, with all information available, and still be incomplete; the verification emphasizes the gaps and allows for corrective measures. Furthermore, the tool has its limitations in keeping the diagrams within the specifications of DEMOSL; therefore, the diagrams can be corrected after new elements have been entered into the model.

The SEA tool has a flexible model extension feature but does not allow for a complete set of configurable business rules. Although some restrictions for interdependency can be made using, e.g., the Quick Link feature, most restrictions and checks have to be made using the API. This API allows for checking business rules at UI events such as menu clicks, element creation, deletion, and drag and drop onto a diagram canvas. We have used this API to implement business rules on the relevant events. The business rules that have been implemented for the OCD are:

1. Elementary actor roles may only be the executor of a single elementary transaction kind.
2. Every pair of an actor role and a transaction kind can only have a single connection of the type initiator, executor, or bank access.
3. On an OCD diagram, only ATK, TK, EAR, and CAR elements can be added.

All equations have been programmed into the verification model. We need to verify the action model against the mathematical model and the exchange model. Therefore, we built five program parts:


1. A verification model, in code, in the memory model (closely resembling the exchange model). This verification model is restricted to the action rule name, the TK and TPSK to which the rule applies, the entities involved in the with clause and, finally, the relations between the entities involved. For testing purposes, we used a simplified example.
2. A verbalisation generator to create the ARS from the memory model. This generator takes the memory structure tree and follows the structure depth-first towards the end of the action rule, thereby writing the language. The tree navigation is in compliance with the action rule grammar. More structures are allowed in the memory model than in the grammar, due to the inherent strictness of the memory model.
3. A model converter to split off the connected CM, PM and FM concepts needed for the AM. Although it would be more efficient to fetch the information directly from the original model, the assumed requirement of independence of the grammar module makes it necessary to create a new lookup structure for the model elements.
4. An interpreter on top of the lexer-parser-listener of the created grammar that produces the exchange code. After lexing and parsing the input string, we can navigate through a tree which, again, follows the grammar structure. Now, instead of the language, we fill the XML nodes into the XSD structure. Afterwards, the nodes can be serialized to create an XML string.
5. A direct conversion from the memory model to the exchange model. The serialisation of the memory model can also be done via the XSD structure directly. This step creates an XML string from the memory model.

At the end of the path, we have two XML files that have to be identical for the AM part. The AM must represent the same information in all formats.
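Since both serialisation paths end in exchange XML, a round-trip comparison is possible. The sketch below mimics that check with a toy memory model; the element and attribute names are invented for illustration and do not reflect the actual DEMOSL exchange schema (XSD):

```python
import xml.etree.ElementTree as ET

# Toy memory model for one action rule (attribute names are assumptions).
memory_model = {"rule": "AR01", "tk": "TK01", "step": "promise"}

def to_xml(model: dict) -> str:
    """Serialise the memory model to an XML string."""
    root = ET.Element("actionRule", attrib=model)
    return ET.tostring(root, encoding="unicode")

# Path one: direct conversion from the memory model.
direct = to_xml(memory_model)

# Path two: parse the string back and re-serialise it.
reparsed = to_xml(dict(ET.fromstring(direct).attrib))

print(direct == reparsed)  # True: both paths must yield identical XML
```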

7 Cases

Over the past two years, we have created business models for five organizations, using our tool for the creation and presentation of these models. DEMO is a domain-independent methodology, which allows us to use the tool across the various domains. The cases, labelled A–E, were carried out at five companies in the Netherlands. Cases A, D, and E concern logistics wholesale companies; case B was carried out at a small property management company, whereas case C regards a small call centre.

Case A is a medical wholesale organization which needed only a construction model and a fact model of their organization, for the purpose of being aware of the organization and communication structure. We modelled around 49 TKs, 38 EARs, 12 CARs, 17 ETs, 4 ARSs, and 0 WISs. We modelled the organizational structure in DEMO and combined it with the landscape in ArchiMate. Though only OCDs were made, the combination between the OCD of DEMO and the ALD of ArchiMate proved very useful, thereby confirming the need for requirement 3. In this case, the connections between DEMO and other notations have been identified.


Case B is a property management company that needed its processes and data modelled to be able to choose the right automation for its business. We modelled around 222 TKs, 191 EARs, 36 CARs, 149 ETs, 0 ARSs, and 9 WISs. The complexity in this case was in the implementation. The OCD and OFD were the starting point for the implementation of both the landscape, which was modelled in ArchiMate, and the application mapping. The OFD has been used as the base for the configuration of the domain application. The combination of the OFD and the implemented entities is a concept that fulfils some demands and can be expanded. The security model that was needed for the application could be registered neither in DEMO nor in ArchiMate and needs further attention.

Case C is a small call centre that needed to choose an application matching its processes and data. We modelled the organization and its landscape using DEMO (OCD and OFD) in combination with ArchiMate. We modelled around 58 TKs, 21 EARs, 19 CARs, 32 ETs, 0 ARSs, and 55 WISs (see Fig. 7). By reverse engineering the software towards DEMO models, we found an 80% match in the process and data model; this was complete enough to buy the software licenses. Matching the implementation and the organization model is a part that is not completely covered by the version of the tool used: the meta-model lacks some connections between the various entity types.

Fig. 7. Partial CM/OCD case C

Case D is a logistics wholesale company that is seeking business optimization. We mainly used the OCD to gather all products and processes in the organization. We modelled around 200 TKs, 168 EARs, 81 CARs, 107 ETs, 0 ARSs, and 3 WISs. With the help of the integration of the Disco process mining tool, we gathered information per main TK and related that information to cash flow, processing time and material flows. The tool supported this modelling by combining process mining information, ArchiMate landscape information, and transaction and role information. Business optimization was found in the analysis of the connections between the different departments [23].


Case E is a logistics wholesale company that lacked insight into its invoicing system. We modelled around 23 TKs, 20 EARs, 10 CARs, 12 ETs, 0 ARSs, and 0 WISs. This analysis, using just the OCD, resulted in the insight that the communication was scattered around the organization and that responsibilities were not defined; the implementation of a single actor role was spread over several subjects. The presentation of this information needs some improvement in the tool.

8 Conclusions and Future Research

After two years of testing and optimizing the tool, we can conclude that, though not completely finished, the tool is capable of storing all DEMO models and of visualizing the business processes. The data-meta-model that we implemented in the tool appears to be sufficient to store the information of the mentioned cases. The base of the tool, SEA, is capable of supporting modelling in DEMO 3. Although SEA has not been built to display tables, we have been able to visualize tables within the add-on. The visualization engine of SEA is based on a 100 × 100 pixel image that is resized to the required dimensions; this allows for most graphical visualizations (squares) but falls short for independently resizable shapes (see Fig. 3). This same graphical concept is applied to connections between elements and is also quite restricting. Business rules have been implemented in the add-on, and the freedom within this add-on compensates for the lack of possibilities in the internal tool framework.

Besides these tool discoveries, modelling DEMOSL in SEA revealed some missing and some inconsistent definitions in DEMOSL. These will be discussed in another paper. More research is needed on compliance with DEMO 4 and DEMOSL 4; some rules of the new diagrams and tables seem to complicate the modelling in SEA. We are also starting research on a number of newly found tools to investigate whether these tools can also use the add-on that has been developed to extend the modelling capabilities towards DEMO.


M. A. T. Mulder and H. A. Proper


Formal Aspects of Enterprise Modelling

M2FOL: A Formal Modeling Language for Metamodels

Victoria Döller

Research Group Knowledge Engineering, Faculty of Computer Science, University of Vienna, Vienna, Austria
[email protected]

Abstract. Enterprise modeling deals with the increasing complexity of processes and systems by operationalizing model content and by linking complementary models and languages, thus amplifying the model value beyond mere comprehensible pictures. To enable this amplification and turn models into computer-processable structures, a comprehensive formalization is needed. In this paper we build on the widely accepted approach of logic as a basis for modeling languages and define them as languages in the sense of typed predicate logic, comprising a signature Σ and a set of constraints. We concretize how the basic concepts of a language – object and relation types, attributes, inheritance and constraints – can be expressed in logical terms. This naturally leads to the denotation of a model as a Σ-structure satisfying all constraints. We apply this definition on the meta-level as well and propose a formal modeling language to specify metamodels, called M2FOL. A thus formalized metamodel then rigorously defines the signature of a language, and we provide an algorithmic derivation of the formal modeling language from the metamodel. The effectiveness of our approach is demonstrated by formalizing the Petri Net modeling language, a method frequently used for analysis and simulation in enterprise modeling.

Keywords: Conceptual modeling · Metamodel · Modeling language · Formal language · Predicate logic

1 Introduction

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved. J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 109–123, 2020. https://doi.org/10.1007/978-3-030-63479-7_8

Enterprise modeling has proven instrumental in facing the challenges of increasing complexity and interdependences of processes and systems in the modern world. Research on enterprise modeling has enhanced modeling languages from mere instruments for pictures supporting human understanding to highly specialized tools with value-adding mechanisms like information querying, simulation, and transformation [2,14]. The nature of models has evolved from a visual representation of information to an exploitable knowledge structure [5]. Nevertheless, the European enterprise modeling community experiences that the potential of enterprise modeling is currently not fully utilized in practice and modeling is employed only by a limited group of experts; therefore, in [34,35] a research agenda is formulated to establish "modeling for the masses" (MftM) and broadcast its benefits also to non-experts. Although the initiators of the MftM movement mention that the formality of model representation possibly hampers understandability, we argue that the idea behind MftM nevertheless requires an investigation of the formal foundations of models and languages. This is for three reasons:

1) According to the stakeholder dimension of challenges in enterprise modeling research [34, p. 234], computers also have to be seen as stakeholders producing and consuming models. To make models computer-processable they have to be formalized, as computers do not understand semi-formal or unstructured models and language specifications [3].

2) The vision of models being not autotelic but a means to the operationalization of information [34, p. 229] calls for value-adding functionality beyond mere graphics, like reasoning, verification & validation, or simulation, which is ideally formulated computer-understandably and implementation-independently, i.e. formalized.

3) The vision of local modeling practices which are globally integrative [34, p. 229] calls for a common foundation of what models and modeling languages are, to enable the linking and merging of models in different domains with different semantics [18].

Formalization is also essential in the light of the emerging importance of domain-specific modeling languages (DSMLs) [13] as well as an increasing agility in the advancement and extension of established languages and methods [21]. The lack of a common way of formalizing DSMLs leads to divergent formal foundations, limiting the opportunities to compare or link models. Frequently, the big standards are extended for a specific domain, e.g. the extension of i* with security concepts constituting the modeling language Secure Tropos [28,33].
Therefore, a common way of specifying the base languages as well as the extensions or modules is required. A silo-like formalization of the big standards is not sufficient, as divergent base concepts of models and different underlying formal structures impede mutual interconnection and integration. Another important building block for advancing the science of conceptual modeling is an exact and commonly applied method for specifying modeling languages. A survey conducted by Bork et al. showed that the specification documents of standardized languages like UML and ArchiMate diverge in the concepts they consider as well as in the techniques they use to specify their visual metamodels [4]. Examples from recent scientific publications indicate that no common practice of metamodel specification is in use in research on domain-specific languages either. Several contributions specify metamodels with UML class diagrams, declaring object types as classes and relation types as classes or named association arrows, e.g. [32,36,37]. Others simply define the object and relation types with box-and-line models devoid of an underlying language and rely on the intuitive understanding of the reader, e.g. [26,30]. This shows that, although metamodels are models themselves and therefore a subject of interest for enterprise modeling research, no language for metamodels has been established yet.

Nevertheless, when a language has to be implemented or executed, a precise and unambiguous definition of the metamodel is crucial [3].

In this paper, we contribute to fostering the formal foundations of modeling languages by defining them as formal languages in the sense of logic. This means they comprise a signature Σ for the syntax and a set of constraints, for which we use first-order predicate logic. Our definition concretely states how the core concepts of modeling languages as described in [23] can be expressed in logical terms. Predicate logic provides the construct of a Σ-structure, i.e. an interpretation of the signature, which canonically corresponds to the model being an instantiation of a metamodel. With this definition of a language we are then able to specify M2FOL, a formal modeling language for metamodels. With M2FOL we are capable of modeling the syntax of a language to be specified, in particular the signature of the language according to the definition.

The rest of this paper is structured as follows: In Sect. 2 we give an overview of related work on the formalization of metamodels and modeling languages. In Sect. 3 we introduce our definition of formal modeling languages and models and concretize how the basic concepts of a language – object and relation types, attributes, inheritance and constraints – can be expressed in logical terms. We then use this definition in Sect. 4 to create M2FOL – a formal modeling language for metamodels – and outline its self-descriptive character. Given a metamodel specified with M2FOL, we show how to algorithmically deduce the signature of the corresponding modeling language. Finally, we give a conclusion and an outlook on issues we plan to approach in the future with this formalization.
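To make the correspondence between signature and model concrete, the following sketch renders a toy language as object types, typed relations and one constraint, and checks whether a candidate model is a valid interpretation. This is our own illustration in plain Python, not part of M2FOL's formal apparatus; all type, relation and instance names are invented.

```python
# Illustrative sketch only: a modeling language as a typed signature plus
# constraints, and a model as an interpretation (Sigma-structure) of it.
# All type, relation and instance names are invented for this example.

object_types = {"Actor", "Task"}                  # S_O of the signature
relation_types = {"performs": ("Actor", "Task")}  # S_R with source/target typing

def is_valid_model(objects, relations):
    """objects: dict instance -> type; relations: set of (rel, source, target)."""
    # Typing: every object must instantiate a declared object type.
    if not all(t in object_types for t in objects.values()):
        return False
    # Typing: every relation instance must respect its declared endpoints.
    for rel, src, tgt in relations:
        if rel not in relation_types:
            return False
        src_type, tgt_type = relation_types[rel]
        if objects.get(src) != src_type or objects.get(tgt) != tgt_type:
            return False
    # One constraint in the sense of the constraint set, a first-order
    # sentence over the signature: every Task is performed by some Actor.
    tasks = {name for name, t in objects.items() if t == "Task"}
    performed = {tgt for rel, _, tgt in relations if rel == "performs"}
    return tasks <= performed

model = {"alice": "Actor", "review": "Task"}
links = {("performs", "alice", "review")}
print(is_valid_model(model, links))  # True: the model satisfies the constraints
```

A model that violates a constraint (e.g. a Task performed by no Actor) is simply not a valid interpretation, which mirrors the definition of a model as a Σ-structure satisfying all constraints.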

2 Background and Related Work

According to the Characterizing Conceptual Model Research (CCMR) framework, we are interested in contributions located in the dimension Formalize, working on the levels of Conceptual Modeling Languages and Metamodeling Languages [9]. The various attempts addressing the formalization of a specific modeling language mostly aim at supporting a specific purpose or functionality and do not provide means to define arbitrary metamodels and languages. An example is OSM-logic, which uses typed predicate logic for object-oriented systems modeling with a focus on time-dependent system behaviour [7]. Another example is the SAVE method for simulating IoT systems using the δ-calculus [6]. These specific formalizations may offer ideas suitable to be generalized to a generic approach but will not be comprehensively discussed here. However, as soon as there is a common practice of formally defining the ubiquitous concepts of modeling languages, these specific approaches can be constructed as reusable extensions and modules and be of value in a broader field of application.

Research contributions on generic formal foundations of modeling languages can be categorized according to the underlying mathematical theory they use, mostly graph theory, set theory and logic.

In the domain-specific language KM3 presented by Jouault and Bezivin, models are defined as directed multi-graphs conforming to another model, the metamodel, itself a graph [20]. Using this formalism the authors define a self-descriptive meta-metamodel and deduce a domain-specific language to specify metamodels. This approach puts an emphasis on the graph-like box-and-line structure of models rather than on their linguistic aspects.

The FDMM formalism introduced by Fill et al. uses set theory to specify metamodels and models [12]. The authors explicitly aim at a formalization of metamodels realized with the metamodeling platform ADOxx [1] and do not claim to be applicable for platform-independent specifications. Neither set theory, the basis of FDMM, nor graph theory, the basis of KM3, provides a canonical concept of instantiation, an essential characteristic of modeling languages. Therefore, the technique and semantics of this instantiation relation between model and metamodel have to be constructed ad hoc and lack the beneficial knowledge stack of established mathematical theories.

Formal languages as defined in mathematical logic inherently comprise the concept of instantiation, as interpretation of the signature in logical terms, and they provide rich knowledge about their properties. Therefore, in current research the notion of modeling languages as formal languages in the sense of mathematical logic is receiving increasing attention [8,15,29,31,40]. In his work on the theory of conceptual models, Thalheim describes modeling languages as based on a signature Σ comprising a set of postulates, i.e. sentences expressed with elements of Σ [40]. Models are defined as language structures satisfying the postulates, which canonically corresponds to the concept of instantiation of a metamodel. This generic description does not restrict itself to a single type of logic. Consequently, it does not provide a specification of the mapping of the formal signature to modeling language concepts like object or relation types.
In their investigation of formal foundations of domain-specific languages, Jackson and Sztipanovits introduce typed predicate logic to handle object types in models [19]. In contrast to Thalheim, they do not adopt the concept of a language structure for model instances, but rather consider a model to be a set of valid statements about the model. This is also true for Telos [24], which builds on the premise that the concepts of entities and links are omitted and replaced by propositions constituting the knowledge base. The choice of typed first-order logic for the formalization of these propositions is natural and explained in great detail in [25]. Similar to Jackson and Sztipanovits, knowledge is represented solely as a set of sentences in the formal language. In our approach, on the other hand, we do not adopt the transformation of models into propositions but rather directly deal with the ubiquitous concepts of objects and relations and an instantiation hierarchy between models and metamodels. This leads to a different view on models: in the attempts above a model is constituted by statements, whereas in our approach a model is any language interpretation not explicitly excluded by the statements constraining valid models.

Guizzardi builds a theory of ontology-driven conceptual modeling based on modal logic and develops in several contributions a comprehensive formal system for conceptualizations of domains as the basis for truthful modeling languages, e.g. [16,17]. In this advanced theory, fruitful for the objective of an ontology-based, domain-faithful grounding for modeling languages, the languages as well as models and metamodels are a-posteriori concepts implicitly obtained from ontological considerations. In the paper at hand we do not restrict ourselves to ontology-driven conceptual modeling and approach models and languages as the a-priori concepts.

Summarizing, we conclude that formal languages in logic prove to be suitable for the intended formalism, as the structure of modeling languages, including its linguistic character, can be grounded in the concepts of formal languages. Therefore, in the work at hand we propose a formal definition of modeling languages in which we concretely point out the modeling concepts and their formal equivalents in logical terms, with the prospect of successive elaboration. This paper extends our prior work in [10].

3 Definition of Formal Modeling Languages

The intended definition shall serve as a cornerstone for a common way of specifying modeling languages, which thereby become comparable, reusable and modularizable. A formal definition of modeling languages in general enables an investigation of common features of the resulting subclass of formal languages as well as a sound mathematical foundation for their functionality. Therefore, we adopt the core concepts of modeling languages identified in [23] – object and relation types, attributes, inheritance and constraints – to be the basic constituents of formal modeling languages. We use typed (also called sorted) predicate logic in this approach. The mathematical basics can be found in textbooks on logic or mathematics for computer science, e.g. [11,27]. Some remarks on notation: to ease the differentiation between language and model level, we use capital letters for the symbols of the former and lowercase letters for the elements of the latter.

Definition 1. A (formal) modeling language L consists of a typed signature Σ = {S, F, R, C} and a set C of sentences in L for the constraints, where:
• S is a set of types, which can be further divided into three disjoint subsets SO, SR, and SD for object types, relation types and data types;
• the type set SO is strictly partially ordered with order relation

    ... @Week display using Line
    PublicationsCount = acceptedPaper @Week display using Line
    ...
  levers:
    Lever IncreaseTeachingPreparation: 'Increase Teaching propensity of academics from 50% to 80%'
      apply [propensityOfTeachingPreparation = 80; ignore StudentComplaint;];
  }
  OrgUnit Student { // Student Definition
    ... }
}

Fig. 8. An illustration of organisation specification

4.3 Simulation and What-If Analysis

The what-if analysis is performed using a complete OrgML specification of a department of ABC University with 30 academics and 1200 students (the complete specification and simulation results of the case study can be found in Chap. 7.3 of [7]). First, the specification without any lever is translated to an ESL specification using the OrgML-to-ESL translation rules, the translated ESL specification is simulated for 52 weeks (i.e. one year), and the specified measures are observed. An overview of the simulation dashboard is shown in Fig. 9. It shows the measure values of the department using a table (Fig. 9(a)), and the work schedule and work distribution of an academic using a table and a pie chart, respectively (Fig. 9(b)).


S. Barat et al.

Fig. 9. A brief overview of simulation dashboard and what-if analysis

Subsequently, various what-if scenarios are explored by applying levers to the initial configuration. The outcomes of the what-if analyses are summarized in the table shown in Fig. 9(c). The observations of these explorations (rows) help to understand the efficacy of the levers with respect to the specified goals in quantitative terms and to arrive at an informed decision.
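The lever mechanism can be mimicked outside the ESL workbench with a deliberately naive sketch: re-run a parameterized model under a baseline and a lever configuration and compare one measure. This is our own simplification in plain Python (no actors, no ESL), and all numbers are invented for illustration.

```python
# Deliberately naive what-if sketch: simulate one measure over 52 weeks
# under a baseline and a lever configuration, then compare.
# All numbers are invented for illustration; this is not ESL.

def simulate(propensity_teaching_prep, weeks=52, academics=30):
    """Return an (illustrative) publication count after the given weeks."""
    papers = 0.0
    for _ in range(weeks):
        # Hours not spent on teaching preparation go to research; assume,
        # purely for illustration, one paper per 300 research hours.
        research_hours = academics * 40 * (1 - propensity_teaching_prep)
        papers += research_hours / 300
    return round(papers)

baseline = simulate(propensity_teaching_prep=0.5)
lever = simulate(propensity_teaching_prep=0.8)  # the IncreaseTeachingPreparation lever
print(baseline, lever)  # the lever trades publications for teaching time
```

Comparing the two runs side by side is exactly the kind of quantitative evidence the dashboard table in Fig. 9(c) summarizes for each lever.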

5 Concluding Remarks

Our key contribution in this paper is a novel domain-specific language that is machine-interpretable and translates to a simulation workbench to enable evidence-based, informed organisational decision-making. The deeper analysis of the literature brought forth the core concepts, such as goal, measure and lever, and established the importance of socio-technical characteristics, such as modularity, compositionality, reactivity, autonomy, intentionality, uncertainty and temporal behaviour, for precisely representing and comprehending a complex organisation.

From a validation perspective, our focus is on the expressiveness of OrgML in the context of organisational decision-making, and the efficacy of its associated analysis capabilities. The key concepts of the language are validated through their derivation from the current research literature on organisational decision-making. The sufficiency of the expressive power of OrgML and its analysis capability are demonstrated through an illustrative case study. In our research, we adopted the design science methodology to develop and validate our research artifacts. We validated our contributions using a set of case studies; our research methodology and other case studies are elaborated elsewhere [7]. Our validation establishes the efficacy and utility of OrgML and the associated simulation capabilities. By implementing the language using established reference technology, we enable our research artifacts to become accessible to practitioners.

The key takeaways from our validation and usage in an industrial context [8] are twofold: (a) considering simulation as a decision-making aid raises its own validity concerns, particularly with respect to the epistemic value of simulations; (b) while the efficacy, utility and completeness of OrgML are established, the usability of OrgML needs to be improved. Exploring the epistemological concerns raised by decision-making aids using simulation is our next focus area. A further potential area is the development of visual language notations to improve the usability of our technology.

References

1. Agha, G.: Actors: A Model of Concurrent Computation in Distributed Systems. MIT Press, Cambridge, MA, USA (1986)
2. Aigner, W., Miksch, S., Müller, W., Schumann, H., Tominski, C.: Visualizing time-oriented data - a systematic view. Comput. Graph. 31(3), 401–409 (2007)
3. Allen, J.: Effective Akka. O'Reilly Media Inc, Newton (2013)
4. Amagoh, F.: Perspectives on organizational change: systems and complexity theories. Innov. J. Public Sector Innov. J. 13(3), 1–14 (2008)
5. Anderson, D., Sweeney, D., Williams, T., Camm, J., Cochran, J.: An Introduction to Management Science: Quantitative Approaches to Decision Making. Cengage Learning, Boston (2015)
6. Armstrong, J.: Erlang - a survey of the language and its industrial applications. In: Proceedings of the Symposium on Industrial Applications of Prolog (INAP), p. 8 (1996)
7. Barat, S.: Actor based behavioural simulation as an aid for organisational decision making. Ph.D. thesis, Middlesex University (2019). https://eprints.mdx.ac.uk/26456/
8. Barat, S., et al.: Actor based simulation for closed loop control of supply chain using reinforcement learning. In: International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), pp. 1802–1804 (2019)
9. Barat, S., Kulkarni, V., Clark, T., Barn, B.: Enterprise modeling as a decision making aid: a systematic mapping study. In: Horkoff, J., Jeusfeld, M.A., Persson, A. (eds.) PoEM 2016. LNBIP, vol. 267, pp. 289–298. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48393-1_20
10. Barros, T., Ameur-Boulifa, R., Cansado, A., Henrio, L., Madelaine, E.: Behavioural models for distributed fractal components. Ann. Telecommun. 64(1–2), 25–43 (2009). https://doi.org/10.1007/s12243-008-0069-7
11. Boardman, J., Sauser, B.: System of systems - the meaning of of. In: 2006 IEEE/SMC International Conference on System of Systems Engineering, p. 6. IEEE (2006)
12. Candes, E.J., Tao, T.: Decoding by linear programming. IEEE Trans. Inf. Theory 51(12), 4203–4215 (2005)
13. Clark, T., Kulkarni, V., Barat, S., Barn, B.: ESL: an actor-based platform for developing emergent behaviour organisation simulations. In: Demazeau, Y., Davidsson, P., Bajo, J., Vale, Z. (eds.) PAAMS 2017. LNCS (LNAI), vol. 10349, pp. 311–315. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59930-4_27
14. Daft, R.: Organization Theory and Design. Nelson Education, Toronto (2012)
15. Erdweg, S.: The state of the art in language workbenches. In: Erwig, M., Paige, R.F., Van Wyk, E. (eds.) SLE 2013. LNCS, vol. 8225, pp. 197–217. Springer, Cham (2013). https://doi.org/10.1007/978-3-319-02654-1_11
16. Fowler, M.: Domain-Specific Languages. Pearson Education, London (2010)


17. Goralwalla, I.A., Özsu, M.T., Szafron, D.: An object-oriented framework for temporal data models. In: Etzion, O., Jajodia, S., Sripada, S. (eds.) Temporal Databases: Research and Practice. LNCS, vol. 1399, pp. 1–35. Springer, Heidelberg (1998). https://doi.org/10.1007/BFb0053696
18. Holland, J.H.: Studying complex adaptive systems. J. Syst. Sci. Complexity 19(1), 1–8 (2006). https://doi.org/10.1007/s11424-006-0001-z
19. Iacob, M., Jonkers, D.H., Lankhorst, M., Proper, E., Quartel, D.D.: ArchiMate 2.0 Specification: The Open Group. Van Haren Publishing, Netherlands (2012)
20. Kulkarni, V., Barat, S., Roychoudhury, S.: Towards business application product lines. In: France, R.B., Kazmeier, J., Breu, R., Atkinson, C. (eds.) MODELS 2012. LNCS, vol. 7590, pp. 285–301. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33666-9_19
21. Levitt, B., March, J.G.: Organizational learning. Ann. Rev. Sociol. 14(1), 319–338 (1988)
22. Macal, C.M., North, M.J.: Tutorial on agent-based modelling and simulation. J. Simul. 4(3), 151–162 (2010)
23. Manzur, L., Ulloa, J.M., Sánchez, M., Villalobos, J.: xArchiMate: enterprise architecture simulation, experimentation and analysis. Simulation 91(3), 276–301 (2015)
24. McDermott, T., Rouse, W., Goodman, S., Loper, M.: Multi-level modeling of complex socio-technical systems. Proc. Comput. Sci. 16, 1132–1141 (2013)
25. Meissner, P., Sibony, O., Wulf, T.: Are you ready to decide? McKinsey Quarterly, 8 April 2015
26. Michelson, B.M.: Event-driven architecture overview. Patricia Seybold Group 2, 10–1571 (2006)
27. Paschke, A., Kozlenkov, A., Boley, H.: A homogeneous reaction rule language for complex event processing. arXiv preprint arXiv:1008.0823 (2010)
28. Simon, H.A.: The architecture of complexity. In: Klir, G.J. (ed.) Facets of Systems Science, pp. 457–476. Springer, Boston (1991). https://doi.org/10.1007/978-1-4899-0718-9_31
29. White, S.A.: BPMN Modeling and Reference Guide: Understanding and Using BPMN. Future Strategies Inc., Lighthouse Point (2008)
30. Yu, E., Strohmaier, M., Deng, X.: Exploring intentional modeling and analysis for enterprise architecture. In: Enterprise Distributed Object Computing Conference Workshops (2006). https://doi.org/10.1109/EDOCW.2006.36

Improvements on Capability Modeling by Implementing Expert Knowledge About Organizational Change

Georgios Koutsopoulos, Martin Henkel, and Janis Stirna

Department of Computer and Systems Sciences, Stockholm University, Stockholm, Sweden
{georgios,martinh,js}@dsv.su.se

Abstract. Modern digital organizations are constantly facing new opportunities and threats, originating from the highly dynamic environments they operate in. On account of this situation, they need to be in a state of constant change and evolution to achieve their goals or ensure survival, and this is achieved by adapting their capabilities. Enterprise Modeling and capability modeling have provided a plethora of approaches to facilitate the analysis and design of organizational capabilities. However, there is potential for improving the management of capability change. This Design Science research aims to provide methodological and tool support for organizations that are undergoing changes. A previously introduced meta-model serves as the basis for a method supporting capability change. The goal of this study is to explore expert knowledge about organizational change in order to evaluate the initial version of the meta-model and identify possible weaknesses. Ten semi-structured interviews have been conducted to explore the perspectives of experienced decision-makers on capability change. Three categories emerged from the analysis, reflecting how capability change is observed, decided and delivered, respectively. These have been used as input for revising the conceptual structure of the capability change meta-model.

Keywords: Capability · Enterprise Modeling · Change · Adaptation · Transformation

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved. J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 171–185, 2020. https://doi.org/10.1007/978-3-030-63479-7_12

1 Introduction

The digital transformation occurring in modern societies has resulted in highly dynamic environments that entail a wide spectrum of changes, opportunities and threats for any organization [1]. These environments are conceived as internal or external interacting forces that constantly motivate organizational changes, whose aim is to improve efficiency in fulfilling business goals [2] and ensure survival [3]. The environmental changes are not only faster than the organizational changes [4], but also hard to anticipate. As a result, the ability to trigger quick and appropriate organizational adaptations and transformations bears great importance. However, this situation poses significant challenges to organizations [3]. In this regard, organizations need a strategy guiding how to address these unpredictable changes in order to remain sustainable and competitive [5]. Strategy means deciding on the organizational goals and the coherent choices that concern resource allocation, involved activities and approaches for the realization of those goals [6]. Usually, even though IT is becoming integrated with all aspects of business [7], organizations need to rely on human individuals for strategic decision-making activities, based on the assumption of human rationality [3]. Consequently, decision-makers are a significant part of any ongoing change in an organization.

The operationalization of strategy needs support, and the concept of capability is gaining ground among decision-makers and scientists as a means to capture and analyze information relevant to organizations because of its practical relevance [5]. Change, strategy and capability are inextricably linked concepts [8]. Therefore, capability has become the core concept of approaches that aim to support organizations by motivating and providing input for the development of an Information System (IS). A plethora of capability modeling approaches exists, for example [9, 10]; however, there is room for improvement by providing a method specifically designed for depicting capability change, focusing on all the change-related elements and the transition itself, which has been neglected to date.

This study is part of an ongoing Design Science Research (DSR) project that aims to provide methodological and tool support for the management of changing organizational capabilities. Having identified the opportunity to improve capability modeling, we explored the existing capability modeling approaches looking for concepts relevant to change [11]. Afterwards, the requirements for a modeling method focused on change were elicited [12], while, in parallel, the dimensions of changing capabilities were introduced in a typology [13].
Finally, a capability change meta-model was presented with the purpose of serving as the basis for a modeling method [14]. The objective of this study is to explore expert knowledge on the phenomenon of capability change. Interviewing decision-making experts helps gain insight into how practitioners perceive capability change and provides an evaluation of the meta-model’s conceptual associations established in the earlier steps of our research. Additionally, it facilitates the identification of potential omissions in the current structure of the meta-model. These findings can provide input for the meta-model’s improvement. The paper is structured as follows. Section 2 provides a brief presentation of the concepts relevant to this study and of existing literature on modeling capability change. Section 3 describes the methods employed for data collection and analysis. Section 4 presents the derived results and Sect. 5 summarizes how the results contribute towards the improvement of the meta-model and presents their conceptualization. Section 6 discusses the results and the conceptualization process. Finally, Sect. 7 provides concluding remarks.

2 Background and Related Research

Organizations are conceived as social goal-directed systems that maintain boundaries reflecting their goals [3]. The phenomenon of changing organizations is important for business informatics, with a focus on the role of ISs in these changes [15]. It has been widely researched, utilizing several terms like change, transformation and adaptation, either interchangeably or reflecting differentiations in the scopes of change [16]. The same applies to the terms business, enterprise and organization, which are sometimes used interchangeably or are distinguished by defining an enterprise as a goal-sharing group of organizations, as in [15]. Organizational changes are driven by (i) rationally adapting to environmental conditions, (ii) planning and choosing strategically as a means to have the organization shaped by decision-makers, or (iii) a combination of these and organizational inertia. This results in three main perspectives in organizational change theories, considered deterministic, voluntaristic and reconciling respectively [3]. What is often neglected is the causality of change, an aspect that should be implemented in any method aiming to capture the complexity of ongoing changes [3]. Enterprise Modeling (EM) approaches aim to capture this complexity. EM concerns the process of creating an enterprise model capturing the aspects of the organization that are relevant for a given modeling objective, like concepts, goals, processes or business rules [17], and the integrated view of these aspects is of great importance. As a result, an enterprise model is often a set of sub-models, each focusing on a specific aspect of the organization. Such a model helps people in an organization develop a deeper understanding of how their work is integrated into the bigger picture and of the role of the supporting information systems [18]. Since there is no consensus in the literature, combining two earlier definitions from [9, 19], the concept of capability is defined in this project as a set of resources and behaviors, with a configuration that bears the ability and capacity to produce value by fulfilling a specific goal in a specific context.
The lifecycle of a capability [9] consists of its design phase, which concerns its development, and its run-time phase, which concerns the time when the capability is operationally active. The concept is associated with several core business concepts like resource, goal, actor, process and context [5, 9]. Capability is often considered the missing link in business/IT transformation because it (i) provides a common language to the business, (ii) enables accurate investment focus, (iii) serves as a baseline for strategic planning, change management and impact analysis, and (iv) leads directly to business specification and design [20]. Several modeling approaches have utilized the concept of capability, including standalone approaches like Capability-Driven Development (CDD) [9] and the Value Delivery Modeling Language (VDML) [21]; Enterprise Architecture frameworks, for example the NATO Architecture Framework (NAF) [22], the Department of Defense Architecture Framework (DoDAF) [23], the Ministry of Defence Architecture Framework (MODAF) [24], and ArchiMate [25]; extensions of existing modeling methods like i* [26] and Capability Maps [27]; or new notations, for example CODEK [10]. The aim of this paper is to explore how experienced managers perceive the concept of capability in relation to changes and adaptations in their organizations and to convert the findings into requirements for updating our previously developed meta-model for capability change [14].

3 Methodology

This study follows the DSR paradigm, and in particular the guidelines of [28], according to which any DSR project consists of five iterative steps. These are (i) problem explication, (ii) requirements definition, (iii) artifact design, (iv) artifact demonstration, and (v) artifact evaluation [28]. The aim of this study is to conduct an ex ante evaluation of an artifact [28], i.e. an evaluation that takes place without using or even fully developing the artifact. Specifically, an empirical ex ante evaluation has been conducted, collecting data from expert decision-makers to evaluate the capability change meta-model introduced in our earlier work. A modeling method consists of a modeling language and a set of guidelines for modeling. A modeling language, in turn, consists of syntax, notation and semantics [29]. A common approach to expressing the syntax is via a meta-model and a textual description of its elements. Thus, the meta-model, along with the textual description introduced in [14], constitutes an initial version of the method’s main components. The meta-model under discussion is shown in Fig. 1.

Fig. 1. The capability change meta-model.

The selected data collection method is asynchronous semi-structured interviews [30], which is an appropriate method for gaining in-depth understanding of complex phenomena. Purposive and convenience sampling [31] has been used for the selection of participants, which resulted in a group of 10 expert decision-makers. The purposive and convenience aspects concern the inclusion of participants who are not only available but also possess the required knowledge and experience for the given topic. The set of questions consisted of (i) participant data, (ii) general questions on capability change, (iii) specific questions on the three change functions elicited in our earlier work, i.e. observation, decision and delivery of change [11], and (iv) questions related to identifying the need to change, because this had been identified as a potential weakness of the meta-model [14]. The specific questions on the change functions concerned the responsibility, the actual process, the factors and criteria affecting each function, and related challenges.


For example, the general questions on capability change included questions like “What does the concept of capability mean to you? Is it a term used in your organization?”, “Is capability change a common phenomenon in your organization/unit?”, “Based on your experience, what is the most common type of capability change?” and “Have you used any methods or tools to support capability changes? If yes, which ones?” The function-specific set included questions like “Who is responsible for observing a capability’s performance and identifying a need to change?”, “How are the criteria determined for deciding on the best alternative for a capability change?” and “What are the challenges when delivering a capability change?” In parallel, the participants were asked not only to evaluate the association of the existing meta-model concepts with capability change using a list of Likert-scale questions but also to identify other possible concepts that they consider valuable for describing such phenomena. In addition, they were asked questions related to the responsibility, communication and identification of the need to change. Follow-up questions were posed whenever clarifications were deemed necessary. The collected data have been analyzed both quantitatively and qualitatively. A statistical quantitative analysis has been performed on the Likert-scale responses, and the rest of the interview data have been analyzed by means of deductive thematic analysis [32], driven by the change functions. Initially, descriptive coding was applied to the raw data, which is considered an appropriate step for initiating thematic analysis [33]. The process requires familiarity with the dataset, systematic coding, generation of initial codes and identification of themes, which comprise the final report. The derived results have been mapped and systematically converted into input for the meta-model, using the UML notation standards [34] for consistency with the existing meta-model.
Finally, the derived components have been integrated with the associated existing meta-model concepts.
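The quantitative part of the analysis described above can be sketched as follows. The label-to-value mapping (1–5) follows the paper; the response data in the example are illustrative placeholders, not the study’s actual interview results:

```python
from statistics import mean

# Map the Likert labels to the numeric values used in the study:
# "Highly irrelevant" = 1 ... "Highly relevant" = 5.
LIKERT = {
    "Highly irrelevant": 1,
    "Irrelevant": 2,
    "Neutral": 3,
    "Relevant": 4,
    "Highly relevant": 5,
}

def likert_summary(responses):
    """Return the mean score (rounded to one decimal, as reported in
    the paper) and the percentage distribution for one concept."""
    values = [LIKERT[r] for r in responses]
    dist = {
        label: 100 * responses.count(label) / len(responses)
        for label in LIKERT
    }
    return round(mean(values), 1), dist

# Hypothetical responses from 10 participants for one concept.
sample = ["Highly relevant"] * 5 + ["Relevant"] * 4 + ["Neutral"]
avg, dist = likert_summary(sample)
print(avg)               # mean on the 1-5 scale, here 4.4
print(dist["Relevant"])  # share of "Relevant" responses, here 40.0
```

Applying this per meta-model concept yields the per-concept means and the percentage distributions reported in Sect. 4.1.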

4 Findings

This section reports the findings derived from the analysis of the collected data. The analysis produced a set of results, in the form of conceptual associations, related both to capability change in general and to the meta-model in particular. The participants are based in three European countries: Sweden, Greece and the United Kingdom. Their work experience ranged from 14 to 42 years, with an average of 28.9 and a median of 29. Two participants were not familiar with the concept of capability and seven were not familiar with EM, but the definitions were introduced to ensure a common understanding during the interviews. The participants’ positions vary from owners and top managers to unit directors. They came into managerial positions with diverse educational backgrounds, including studies in engineering, finance and management, humanities, IT and natural sciences. The organizations they have been working in are large (more than 250 employees) for seven participants and medium (51–250 employees) for the other three, without any participants from small organizations. They hold managerial roles such as managing director and product specialist, in units responsible for risk analysis, strategic product management, sales growth excellence, customer service, etc. Two participants are working in
public organizations and the rest are employed in private ones. For these reasons, there was a wide spectrum of responses associated with the concept of capability among the participants, a fact that can be attributed to the wide range of backgrounds among the participating experts. The participants remain anonymous; hence, we refer to them with the randomly assigned codenames P1–P10. The interview questions were grouped in the categories mentioned in the Methodology section and the results are reported in the following sections.

4.1 Evaluation of Existing Meta-Model Elements

Regarding the evaluation of the existing meta-model elements, the participants responded to Likert-scale questions, labeling the concepts from “Highly irrelevant” to “Highly relevant” to capability change. The concepts of capability, change, context, state and change function have not been included. The former two have been excluded because the phenomenon of capability change is by definition highly relevant to capability and change, and the latter three because they exist in the meta-model as super-classes and overlap conceptually with their specializations, which are included. For the analysis, “Highly irrelevant” has been assigned the value of 1 and “Highly relevant” the value of 5. Overall, the results indicate that all concepts are relevant or highly relevant to capability change, since the means per concept range from 4.0 to 4.8. The results per concept, along with the percentage distribution of the responses that generated these values, are depicted in Fig. 2.
Fig. 2. The result of the evaluation per meta-model concept, including distribution and average.


The concept considered most relevant to capability change is the “Intention element”, with a score of 4.8/5, and the least relevant is “Capability configuration”, which scored 4.0/5. Before the discussion on the meta-model concepts’ associations, eight of the participants considered the meta-model’s concept set adequate, with P6 mentioning “Risk culture (e.g. attitudes behind the behaviors), policies, strategies, competence, steering etc.” as possible candidates for inclusion, while P3 felt that there were “a lot of” concepts missing, even though none were provided by the participant.

4.2 Overall on Capability Change

The discussion around capabilities started from the participants’ understanding of the concept. Diverse perspectives were expressed by the participants. For example, P4, when asked if and how the concept is used in his work environment, responded: “Yes we use it widely in our organization. Capability is used in three aspects: 1. Human capabilities & Competence build up; 2. Processes and tools; 3. Ways of Working”. Other perspectives include “Knowledge to adapt and improve an organization to become efficient and competitive…” (P1), “Ability, organization & resources with the right competence” (P10), “Capacity, skills, expertise” (P7), “…capability means to manage every situation” (P3), “Qualification” (P9) and “…a list of things/abilities that an enterprise needs, does or can do to be successful.” (P8). An interesting point raised during the interviews is whether a capability is considered a resource or not, and whether it consists of other resources or simply needs other resources allocated to it. The results are divided. Half of the interviewees consider a capability a resource while the other half disagree. However, six out of ten believe that a capability consists of resources.
Capability change, in terms of frequency, is seen as a common phenomenon by eight of the participants, with statements like “…it’s a continuous change” (P10) and “…things are moving too fast to stop.” (P2), while it is not common for P3 and P5. Concerning the types of capability change, eight participants consider modification of an existing capability the most common type, and the other two consider introduction of a new capability the most commonly encountered phenomenon. While discussing the positive and negative aspects of capabilities, six participants considered that a capability can be either positive or negative, three others consider it an exclusively positive concept, while an interesting opinion was expressed by P2, stating that “A capability is a positive concept. A negative capability is just a weakness in observation.”, which emphasizes the importance of the observation function in capability change. Finally, the discussion shifted towards methods and tools used for capability management in their organizations, with the participants providing various abstract responses like observation, training, support and explanations, but also specifics like Driving Strategic Impact (DSI) from Columbia University, Jira, Confluence, Props-C, Aris and Six Sigma.

4.3 Observing Capability Change

The specific part of the interviews devoted to the observation of capability change resulted in several findings. Initially, six out of ten participants did not consider a capability to be contextual in itself; it depends only on the context of the organization that owns it. This supports our initial meta-model design. Regarding the responsibility to observe a capability in order to identify a need for change, the responses were similar, mentioning several high-ranking managerial positions, for example, CEO, Head of Operations, business owner, etc. An exception came from P2, who stated that the function is performed by “A specialized team that we call our “planning sector”. They are responsible for monitoring developments in the economic, legal, political, social, environmental and technological conditions. These people are not only experienced but also intelligent. They need to make informed assessment of a potential and report it to the top management.”. Concerning the identification of the need for change, two points have been emphasized by the participants: (i) the factors that are monitored and assessed as a means to identify a need for change are expressed as a set of Key Performance Indicators (KPIs); however, (ii) the association between a capability and its relevant KPIs occurs only during the design phase of the capability lifecycle and not during run-time. Emergent evaluations occurring during run-time rely both on KPIs and on “common sense and experience” (P1). The participants also agreed that the only way the observations are communicated is within scheduled reports and meetings, a fact that indicates that a supporting IS could potentially facilitate these activities. P4 expressed his disappointment with the current practices, stating: “How it usually is: Complex, detailed, technical presentations with in depth knowledge of the data and data-points, without any context and what the effect on the organization is. How it should be: Analysis on effect and impact that the observed data highlights and indicated. Always fact-based.”.
Regarding the observed contextual factors associated with a capability, all the participants agreed that “Each capability has its own factors but there may be some overlaps.” (P7). P8 was specific about these overlaps: “Yes! Changing technology or changing tools working with a certain capability (i.e. managing orders) may result in incompatibilities between IT systems, if different (i.e. in contract execution management, invoicing management, supply management and delivery management)… …Internally based on bad performance experience, or bad fit with rest of business, or customer changing capabilities, or new trends and need of business transformation, or externally caused by Competition or PESTLE aspects (Political, Economic, Social, Technological, Legal, Environmental).”. P2 added “…the PESTLE factors, but we try to be open-minded.”. Asking how a factor is associated with a capability produced a variety of responses, which can be summarized in “experience and common sense” (P1), collaborators, and PESTLE, as for example in P6’s statement: “Both external (crises, changes at macro or micro level, updated regulatory or legal requirements, market and competitive environment, politics) and internal (e.g. incidents, KPI/KRI levels, identified risks, new strategies, updated business or operation plans etc.) events trigger the changes.”. All the above were missing from the current version of the meta-model; thus, an abstract “Monitored Factor” concept, usually expressed as KPIs, is required. Finally, the challenges related to observation can be summarized as “To get the appropriate focus from other departments.” (P1) and the “Lack of specialized staff to make observations” (P7) because of “…the intelligence, experience and common sense it takes to identify a potential. It is not something that can be easily taught.” (P2).


Inter-organizational focus, attention and motivation, especially across organizational boundaries, has been neglected in the current version of the meta-model.

4.4 Deciding Capability Change

Initially, the discussion about deciding a change concerned the responsibility for the decision. Similarly to the observation results, it concerns a variety of high-ranking managerial positions; however, an interesting point was raised that indicated an omission in the meta-model. P1, P3, P4 and P9 agree that the change itself has ownership, usually framed within a project that has an individual or unit that “drives the change” (P1) and is “profit and loss responsible” (P4). The owner also has the responsibility of communicating the change and motivating every involved party to actively participate in the planned activities. Another important weak point of the meta-model was identified while discussing the criteria for deciding on one capability alternative over another. Two codes that emerged several times in the data were the “budget” and “cost” of the change initiative, used as decision criteria. This means that a change requires resources for its realization. The current version of the meta-model has not taken this important aspect into consideration and only addresses the differences among capability configurations and their allocated resources; therefore, it needs to be implemented. The finding can be interpreted as follows: a change requires one or more resources allocated to it in order to achieve realization. Regarding the definition of criteria for a decision, the participants mentioned that the criteria are established and driven foremost by economic factors. It is noteworthy that even the two participants who are employed in public organizations funded by the government mentioned “economy” (P5) and “cost reduction” (P2) as the primary decision criterion.
Other factors mentioned for the establishment of criteria are “Logical assumptions and experience.” (P1) and “The many policies (internal and external e.g. regulatory) in different areas creates the framework for the changes.” (P6). The participants also agreed unanimously that measuring capability outcomes can result in the definition of decision criteria, as suggested in the current version of the meta-model. A change initiative usually starts with a “Top down approach, top managers or CEO talks about the importance of a change, defines the driver, the goal and the budget.” (P1), also including “…a Change Request that is submitted…” (P4). These change requests are used for “informing” (P9), sometimes formally (P4) or in some cases informally (P2). This communication is often complemented by analyses using “Gap analysis and swot” (P10). These findings support the inclusion of ownership and of the association between the concepts of resource and change in the meta-model. As far as the challenges of deciding on a capability change are concerned, the participants emphasized even more the need “To get focus and attention from all involved parties that may be work-loaded with other priorities.” (P1), which again emphasizes the need to implement inter-organizational motivation in the meta-model. Other challenges mentioned included “The transition from the old to new. The resistance to change, specifically if not entirely understood. Benefit is not obvious even if need to change is obvious.” (P8).


4.5 Delivering Capability Change

As far as the delivery of capability change is concerned, several points missing from the meta-model were emphasized during the discussion. One such point is that the speed of change, which was introduced in [13] as one of the dimensions of change and is included in the meta-model as the tempo attribute of the Change State class, is associated with the size of the organization, which is a missing concept. As P1 stated metaphorically, “…constantly strive to improve but the company is a big elephant that moves slowly towards changes.”. P8 also mentioned, while discussing the realization of capability changes, “…this is the intention but it can take a couple of years.” These perspectives indicate an association between the size of the organization and the speed of change, which suggests removing the Tempo attribute and implementing it as a class in the meta-model. Regarding the responsibilities and communication of delivering capability change, the responses were identical to those for the decision function, which means that there is no need to repeat the findings about change ownership and the different types of informing. The only exception came from P2, who stated that there is no need to inform about delivering capability change because “If more than one unit is involved, they are also involved in the decision, so they are already aware of the deployed change.”. As far as the actual delivery of the capability change is concerned, different perspectives were expressed, but in summary there is no predefined approach and it depends on the capability. For example, according to P4, “Two different approaches: 1. The agile approach with 3-week sprints to keep the focus on the change. 2. The change management approach with close connection to key stakeholders and impacted functions.”, while P2 stated: “Depends on the capability.
It can be a single changed process which is deployed internally, to a major change that requires… even construction crews!”. While discussing the delivery of an active change initiative, the opinions about its effect were divided. Half of the participants consider that a change always has an impact on the organization, for example by changing processes and services, but the other half are reluctant to accept this, mentioning factors that actually block the effects of change, for example, “there is resistance to change the KPIs since people get their bonuses based on the KPIs.” (P8) and “…in my experience often the culture in an organization often beats the strategy and capability changes necessary.” (P4). The impact of a delivered change was another discussed topic. Most of the participants responded in an abstract way, mentioning “monitoring” and “reporting” as the key activities; however, a few provided more elaborate answers, for example P4: “It should be analyzed with two type of data. The leading indicators that should reflect on the wanted position after the change. The lagging indicators that often are project based and strictly fact-based and data-driven.”, and P1, who added a temporal perspective: “Measuring the results using KPIs that are monitored for a period of 3-4 years”. P2 also mentioned that evaluating the impact of delivered change requires “using experience and common sense”. Finally, while discussing the challenges of the delivery phase, inter-organizational motivation and commitment were mentioned again by several participants, along with the “Customers’ and employees’ reactions…” (P7), and P4 asked to consider the “Culture of the organization. Power base of the stakeholders. Identify the hidden stakeholders and address the value of the change to them if you want to have any chance for a change to succeed.”


5 Summary and Conceptualization of the Results

This section presents a summary of the analysis, depicted as a map of the findings that do not overlap with, and can extend, the existing meta-model structure, together with the conceptualization of the findings as a meta-model fragment. The derived findings, which are also included in the map of Fig. 3, are summarized as follows:

Fig. 3. The result of mapping the findings of the analysis.

• Capability consists of at least one resource.
• One or more resources need to be allocated to change.
• Change has at least one owner.
• Change requires at least one source of motivation.
• Motivation comes from an owner.
• Organization has a size.
• Size affects the tempo of change.
• Context consists of monitored factors.
• PESTLE conditions generate monitored factors.
• Monitored factor may be expressed as one or more KPIs.
• Experience and common sense are required for establishing the relation between a capability and a monitored factor.

The conversion of the majority of the findings and their integration with existing meta-model concepts resulted in a UML meta-model fragment, as shown in Fig. 4. The light grey elements originate from the initial meta-model, while the teal elements are derived from the findings of this study.
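To make the listed associations concrete, they can be sketched as plain data classes. This is an illustrative rendering of the findings, not generated tool code; the class names mirror the concepts above, while the example instances (“Order management”, “Head of Operations”, etc.) are hypothetical:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class KPI:
    name: str

@dataclass
class MonitoredFactor:
    # A monitored factor may be expressed as one or more KPIs and
    # may be generated by a PESTLE condition.
    description: str
    kpis: List[KPI] = field(default_factory=list)
    pestle_condition: Optional[str] = None  # e.g. "Technological"

@dataclass
class Resource:
    name: str

@dataclass
class Capability:
    # A capability consists of at least one resource.
    name: str
    resources: List[Resource]

@dataclass
class Organization:
    # Organization has a size, which affects the tempo of change.
    name: str
    size: str  # e.g. "medium", "large"

@dataclass
class Change:
    # A change has at least one owner, requires motivation (coming
    # from an owner) and needs resources allocated to it.
    capability: Capability
    owners: List[str]
    motivation: str
    allocated_resources: List[Resource]

# Hypothetical example: a change to an "Order management" capability.
cap = Capability("Order management", [Resource("CRM system")])
chg = Change(cap, owners=["Head of Operations"],
             motivation="cost reduction",
             allocated_resources=[Resource("project budget")])
```

The multiplicities from the findings (e.g. “at least one resource”, “at least one owner”) are only documented in comments here; in the actual UML fragment they are expressed as association multiplicities.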


Fig. 4. A meta-model fragment derived from the findings of this study.

6 Discussion

The evaluation was realized as an empirical exploration of the experts’ knowledge on capability change by interviewing experienced decision-makers. The results (i) confirmed the association between the concepts existing in the meta-model and the phenomenon of capability change, (ii) indicated several omissions and possible improvements, for example, the association between change and ownership, and the inclusion of motivation and organization size as essential internal context factors, and (iii) provided insight into the processes followed when dealing with the identification of the need for change. The latter included an indication that a systematic approach involving KPIs is only employed during the design phase of capabilities, while during run-time the decision-making bodies rely on “experience and common sense”. The latter two findings comprise additional requirements for the artifact. The fact that the experts consider the meta-model’s concepts, without exception, as relevant or highly relevant to capability change means that, from a modeling perspective, proper associations were established in the initial design and no concepts needed to be removed or merged in the current conceptual structure. The absence of significant positive or negative correlations between the participants’ capability perspective and their educational background, work experience or managerial responsibilities suggests that these factors are not relevant to how the concept of capability change is shaped. As a result, these factors are not candidates for inclusion in the model. The ownership of change discussed during the interviews and the resources required for performing the change highlighted two important associations omitted in the first version of the meta-model: the associations between change and ownership and between change and resources.
These have been implemented in the conceptualized results, so that the meta-model can enable the method under development to also support change planning, in accordance with the identified needs. The change motivation aspect has been implemented in the meta-model as a requirement for the method to depict any actions performed during a change project to shift the attention of every involved party towards the given change. This concept, which has been implemented in the presented fragment, can also facilitate change planning, since it has been identified as relevant to all functions. The identified correlation between the size of the organization and the change tempo suggested not only the inclusion of the organization’s size in the meta-model, but also the conversion of the tempo attribute to a class. Following the UML standards [34], this is the only way to model this correlation without losing information valuable for the given modeling goal. This course of action also led to the conversion of the remaining attributes of the Change State class to classes. Finally, the inclusion of the Monitored Factor class and the associated KPI class improves the observation function’s information structure. However, the abstract concepts of “common sense” and “experience” are a challenge to implement without resorting to ambiguous assumptions and possible logical fallacies; therefore, their inclusion has been eschewed at this point, although it is a subject worth researching further. The present study contributes to the field of capability modeling by advancing the conceptualization of capability change, having identified strengths and weaknesses of the existing meta-model and provided additional requirements for its expansion. This was achieved by analyzing the information provided by experienced decision-makers to gain a deeper understanding of the phenomenon of capability change. From a modeling perspective, the transition of capabilities, which had been neglected to date, can be modeled more precisely using the conceptual structure derived from the results of this study. As a result, the initially mentioned goal, a modeling method that addresses the negative, missing and inter-organizational aspects of capabilities, is partially fulfilled, since the development of the main components for such a method is progressing.

7 Conclusions

This study, an ex ante evaluation of a meta-model comprising the main components of a capability change modeling method, was realized as an empirical exploration of expert knowledge on the phenomenon of capability change. The results confirmed the adequacy of the existing meta-model elements, indicated omissions and areas for improvement, and provided valuable insight regarding the identification of the need for change in an organization. The findings have been converted into a UML class diagram, a meta-model fragment that is planned to be integrated with the initial meta-model in future steps of our research, resulting in an improved version. This is a step towards finalizing the method and initiating the development of an IS aimed at supporting digital organizations undergoing capability change phenomena. As DSR suggests, further evaluation is required; this is the next step of this research project. In addition, the identified lack of a systematic approach for identifying factors relevant to capability change during run-time suggests an additional gap that needs to be addressed within this or any related research project.

Acknowledgements. We would like to express our gratitude to all the interviewees for their time, effort, and valuable insight.


G. Koutsopoulos et al.

References

1. van Gils, B., Proper, H.A.: Enterprise modelling in the age of digital transformation. In: Buchmann, R.A., Karagiannis, D., Kirikova, M. (eds.) PoEM 2018. LNBIP, vol. 335, pp. 257–273. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-02302-7_16
2. Burnes, B.: Managing Change. Pearson, Harlow (2014)
3. Zimmermann, N.: Dynamics of Drivers of Organizational Change. Gabler, Wiesbaden (2011). https://doi.org/10.1007/978-3-8349-6811-1
4. Burke, W.W.: Organization Change: Theory and Practice. Sage Publications, Thousand Oaks (2017)
5. Wißotzki, M.: Exploring the nature of capability research. In: El-Sheikh, E., Zimmermann, A., Jain, L.C. (eds.) Emerging Trends in the Evolution of Service-Oriented and Enterprise Architectures. ISRL, vol. 111, pp. 179–200. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40564-3_10
6. Nechkoska, R.P.: Tactical management contributions to managerial, informational, and complexity theory and practice. In: Tactical Management in Complexity. CMS, pp. 213–237. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-22804-0_5
7. Pearlson, K.E., Saunders, C.S., Galletta, D.F.: Managing and Using Information Systems: A Strategic Approach. Wiley, Hoboken (2020)
8. Hoverstadt, P., Loh, L.: Patterns of Strategy. Routledge, London (2017)
9. Sandkuhl, K., Stirna, J. (eds.): Capability Management in Digital Enterprises. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-90424-5
10. Loucopoulos, P., Kavakli, E.: Capability oriented enterprise knowledge modeling: the CODEK approach. In: Emerging Trends in the Evolution of Service-Oriented and Enterprise Architectures. ISRL, vol. 111, pp. 197–215. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-39417-6_9
11. Koutsopoulos, G., Henkel, M., Stirna, J.: Dynamic adaptation of capabilities: exploring meta-model diversity. In: Reinhartz-Berger, I., Zdravkovic, J., Gulden, J., Schmidt, R. (eds.) BPMDS/EMMSAD 2019. LNBIP, vol. 352, pp. 181–195. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-20618-5_13
12. Koutsopoulos, G., Henkel, M., Stirna, J.: Requirements for observing, deciding, and delivering capability change. In: Gordijn, J., Guédria, W., Proper, H.A. (eds.) PoEM 2019. LNBIP, vol. 369, pp. 20–35. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-35151-9_2
13. Koutsopoulos, G., Henkel, M., Stirna, J.: Modeling the dichotomies of organizational change: a state-based capability typology. In: Feltus, C., Johannesson, P., Proper, H.A. (eds.) Proceedings of the PoEM 2019 Forum, Luxembourg, pp. 26–39. CEUR-WS.org (2020)
14. Koutsopoulos, G., Henkel, M., Stirna, J.: Conceptualizing capability change. In: Nurcan, S., Reinhartz-Berger, I., Soffer, P., Zdravkovic, J. (eds.) BPMDS/EMMSAD 2020. LNBIP, vol. 387, pp. 269–283. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-49418-6_18
15. Proper, H.A., Winter, R., Aier, S., de Kinderen, S. (eds.): Architectural Coordination of Enterprise Transformation. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69584-6
16. Maes, G., Van Hootegem, G.: Toward a dynamic description of the attributes of organizational change. In: (Rami) Shani, A.B., Woodman, R.W., Pasmore, W.A. (eds.) Research in Organizational Change and Development, pp. 191–231. Emerald Group Publishing Limited (2011). https://doi.org/10.1108/S0897-3016(2011)0000019009
17. Sandkuhl, K., Stirna, J., Persson, A., Wißotzki, M.: Enterprise Modeling: Tackling Business Challenges with the 4EM Method. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-43725-4


18. Frank, U.: Enterprise modelling: the next steps. Enterp. Model. Inf. Syst. Archit. (EMISAJ) 9, 22–37 (2014)
19. Koutsopoulos, G.: Modeling organizational potentials using the dynamic nature of capabilities. In: Joint Proceedings of the BIR 2018 Short Papers, Workshops and Doctoral Consortium, pp. 387–398. CEUR-WS.org, Stockholm (2018)
20. Ulrich, W., Rosen, M.: The Business Capability Map: The "Rosetta Stone" of Business/IT Alignment. Cutter Consortium, Enterprise Architecture 14 (2011)
21. Object Management Group (OMG): Value Delivery Modeling Language (2015). https://www.omg.org/spec/VDML/1.0
22. NATO: NATO Architecture Framework v.4 (2018). https://www.nato.int/nato_static_fl2014/assets/pdf/pdf_2018_08/20180801_180801-ac322-d_2018_0002_naf_final.pdf
23. USA Department of Defense: Department of Defense Architecture Framework 2.02 (2009). https://dodcio.defense.gov/Library/DoD-Architecture-Framework/
24. UK Ministry of Defence: Ministry of Defence Architecture Framework V1.2.004 (2010). https://www.gov.uk/guidance/mod-architecture-framework
25. The Open Group: ArchiMate 3.0.1 Specification (2017). https://publications.opengroup.org/i162
26. Danesh, M.H., Yu, E.: Modeling enterprise capabilities with i*: reasoning on alternatives. In: Iliadis, L., Papazoglou, M., Pohl, K. (eds.) CAiSE 2014. LNBIP, vol. 178, pp. 112–123. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-07869-4_10
27. Beimborn, D., Martin, S.F., Homann, U.: Capability-oriented modeling of the firm. Presented at the IPSI Conference, Amalfi, Italy, January 2005
28. Johannesson, P., Perjons, E.: An Introduction to Design Science. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10632-8
29. Karagiannis, D., Bork, D., Utz, W.: Metamodels as a conceptual structure: some semantical and syntactical operations. In: Bergener, K., Räckers, M., Stein, A. (eds.) The Art of Structuring. LNBIP, pp. 75–86. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-06234-7_8
30. Gubrium, J., Holstein, J., Marvasti, A., McKinney, K.: The SAGE Handbook of Interview Research: The Complexity of the Craft. SAGE Publications, Thousand Oaks (2012). https://doi.org/10.4135/9781452218403
31. Denscombe, M.: The Good Research Guide: For Small-Scale Social Research Projects. McGraw-Hill/Open University Press, Maidenhead (2011)
32. Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101 (2006). https://doi.org/10.1191/1478088706qp063oa
33. Saldaña, J.: The Coding Manual for Qualitative Researchers. Sage, Los Angeles (2009)
34. Object Management Group (OMG): OMG® Unified Modeling Language® (2017). https://www.omg.org/spec/UML/2.5.1/PDF

On Domain Modelling and Requisite Variety
Current State of an Ongoing Journey

Henderik A. Proper1,2(B) and Giancarlo Guizzardi3

1 Luxembourg Institute of Science and Technology (LIST), Belval, Esch-sur-Alzette, Luxembourg
[email protected]
2 University of Luxembourg, Luxembourg City, Luxembourg
3 Free University of Bolzano, Bozen, Italy
[email protected]

Abstract. In the 1950s, W. Ross Ashby introduced the Law of Requisite Variety in the context of General Systems Theory. A key concept underlying this theory is the notion of variety, defined as the total number of distinct states of a system (in the most general sense). We argue that domain modelling (including enterprise modelling) needs to confront different forms of variety, resulting in a need to "reflect" and "manage" this variety. Inspired by Ashby's Law of Requisite Variety, the aim of this paper is to explore some of the forms of variety that confront domain modelling, as well as the potential consequences for models, modelling languages, and the act of modelling. To this end, we start with a review of our current understanding of domain modelling (including enterprise modelling) and the role of modelling languages. We then briefly discuss the notion of Requisite Variety as introduced by Ashby, which we then explore in the context of domain modelling.

Keywords: Domain modelling · Conceptual modelling · Requisite Variety

1 Introduction

In the context of software engineering, information systems engineering, business process management, and enterprise engineering & architecting in general, many different kinds of models are used. In this paper, we consider each of these kinds of models as valued members of a larger family of domain models. Domain models have come to play an important role during all stages of the life-cycle of enterprises and their information and software systems, which also fuels the need for more fundamental reflection on domain modelling itself. This includes the act of modelling, the essence of what a model is, and the role of (modelling) languages. Such fundamental topics have certainly been studied by different scholars (e.g. [1,14,24,32,37,38]), as well as by ourselves (see e.g. [7,8,12,15–18,22,29,31]). At the same time, many challenges remain (see e.g. [18,29]).

In this paper, we focus on the challenge of how domain modelling (including enterprise modelling) needs to deal with different forms of variety, resulting in a need to "reflect" and "manage" this variety. In line with this, we will explore some of the forms of variety that confront domain modelling, as well as the potential consequences for models, modelling languages, and the act of modelling. We will do so from the perspective of Requisite Variety. This concept was introduced by W. Ross Ashby [2], in the context of General Systems Theory, as part of the Law of Requisite Variety, where variety refers to the number of states of a system (system in the most general sense).

We see this short paper as part of an ongoing "journey" we undertake, with the aim of deepening our insights into the foundations of domain modelling, mixing our theoretical work and practical experiences in developing (foundational and core) ontologies and domain models, associated modelling languages, frameworks, and methods.

The remainder of this paper is structured as follows. Section 2 starts with a review of our current understanding of domain modelling, while Sect. 3 complements this with our view on modelling languages. We then continue, in Sect. 4, by reviewing the Law of Requisite Variety as introduced by Ashby [2], and the notion of Requisite Variety in particular. The notion of Requisite Variety is then, in Sect. 5, explored further in the context of domain modelling.

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved. J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 186–196, 2020. https://doi.org/10.1007/978-3-030-63479-7_13

2 Domain Models

Based on general foundational work by e.g. Apostel [1] and Stachowiak [37], more recent work on the same by different authors [19,32,33,38], as well as our own work [7,15,16,22,31], we consider a domain model to be: an artefact that is acknowledged by an observer to represent an abstraction of some domain for a particular purpose. Each of the stressed words in this definition requires further explanation and, as we will see in Sect. 5, results in different kinds of variety.

A model is seen as an artefact; i.e. it is something that exists outside of our minds. In our fields of application, this artefact typically takes the form of some "boxes-and-lines" diagram. More generally, however, it can, depending on the purpose at hand, take different forms, including text, mathematical specifications, physical objects, etc.

With domain, we refer to "anything" that one can speak/reflect about explicitly. It could be "something" that already exists in the "real world", something desired towards the future, or something imagined.

The observer observes the domain by way of their senses and/or by way of (self-)reflection. What results in the mind of the observer is what is termed a conceptualisation in [16], and a conception in [13]. When the modelled domain pertains to a part/perspective/aspect of an enterprise, one can indeed refer to the resulting domain model as an enterprise model. This may include enterprise(-wide) data models, enterprise architecture models, etc.
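The abstraction flavours listed later in this section (selection, classification, generalisation, aggregation) can be made concrete on a toy domain. The sketch below is our own illustration, with invented entities; it is not part of the authors' framework:

```python
# A toy "domain" of observed elements (all values invented for illustration).
domain = [
    {"name": "Alice", "kind": "employee", "dept": "sales", "salary": 50},
    {"name": "Bob", "kind": "employee", "dept": "sales", "salary": 60},
    {"name": "Printer-1", "kind": "device", "dept": "sales", "salary": 0},
]

# (1) Selection: keep only elements/aspects relevant to the purpose.
employees = [e for e in domain if e["kind"] == "employee"]

# (2) Classification (typing): abstract individuals into their types.
types = {e["kind"] for e in domain}

# (3) Generalisation: map specific types to more general ones.
generalise = {"employee": "agent", "device": "resource"}
general_types = {generalise[t] for t in types}

# (4) Aggregation: treat a group of elements as a single composite.
sales_dept = {
    "dept": "sales",
    "headcount": len(employees),
    "total_salary": sum(e["salary"] for e in employees),
}
```

Each step discards detail the purpose does not need, which is exactly what makes the resulting artefact a model rather than a copy of the domain.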


As it is ultimately the observer who needs to acknowledge the fact that the artefact is indeed a model of the domain, it actually makes sense to treat their conceptualisation/conception of the domain as the de-facto "proxy" for the domain. As such, we should also realise that the observer observes the model (as artefact) as well, which therefore also creates a conceptualisation (in their mind) of the model. The observer, therefore, needs to validate the alignment between their model-conceptualisation and their domain-conceptualisation, using the purpose as alignment criterion.

Models are produced for a purpose. In the context of enterprise modelling, [28] suggests (at least) seven high-level purposes for the creation of enterprise models: understand, assess, diagnose, design, realise, operate and regulate. In specific situations, these high-level purposes will need to be made more specific in terms of, e.g., the need for different stakeholders to understand, agree, or commit to the content of the model [30], or for a computer to be able to interpret the model in order to, e.g., automatically analyse it, use it as the base of a simulation/animation, or even execute it.

A model is the representation of an abstraction of the domain. This implies that, in line with the purpose of the model, some (if not most) "details" of the domain are consciously filtered out. For domain modelling, important abstraction flavours are [5]: (1) selection, where we decide to only consider certain elements and/or aspects of the domain; (2) classification (including typing); (3) generalisation; and (4) aggregation.

As a result, an observer actually needs to harbour (at least) four conceptualisations: (1) a "full" conceptualisation of the domain (as they "see" it), (2) a conceptualisation of the purpose for the model, (3) an abstracted conceptualisation of the domain, and (4) a conceptualisation of the artefact that is (to be) the model representing the latter abstraction. This is illustrated in Fig. 1, where we see how the conceptualisation of the purpose modifies the abstraction, and the alignment between the conceptualisations of the model and the abstraction. The purpose may actually already influence the original observation of the domain. When the model-conceptualisation corresponds to the abstraction-conceptualisation, the observer would agree that the artefact is a model of the domain for the given purpose. As a consequence, different models may indeed result in the same model-conceptualisation, in which case (for the observer) they are equivalent models of the same domain (for the same purpose). If the observer is "the modeller", i.e. the person creating the model, they also need to "shape" the model in such a way that it best matches their desired model-conceptualisation.

In line with the above discussion, a domain model should be (the representation of) the abstraction of (the conceptualisation of) a domain. At the same time, for different computational purposes, such as the ability to use the model as a base for simulation, computer-based reasoning, execution, or database design, it may be necessary to make "compromises" to the conceptual model. These compromises result in a model that does not correspond to (an abstraction of) the


Fig. 1. Conceptualisations involved in domain modelling

original domain. They are essentially models of a "close enough" approximation of (the conceptualisation of) the original domain. This is where we suggest making a distinction between conceptual domain models and computational design models, in the sense that a conceptual domain model is: "A model of a domain, where the purpose of the model is dominated by the ambition to remain as-true-as-possible to the original domain conceptualisation", while a computational design model includes compromises needed to support computational design considerations, e.g. to support simulation, animation, or even execution of the model.

Note the use of the word ambition in the definition of a conceptual domain model. We are not suggesting there to be a crisp border between conceptual models and computational design models. However, the word ambition also suggests that a modeller/observer, as their insight into a domain increases, should be driven to reflect on the conceptual purity of their conceptualisation and of the resulting model. Computational design models certainly have an important role to play. However, it is important to be aware of the compromises one has made to the original domain conceptualisation. As such, it is also possible that one conceptual model has several associated computational design models, each meeting different purposes.

3 The Role of Modelling Languages

In its most general form, a language identifies the conventions to which the expressions in the language should conform. In a domain modelling context, these conventions are often equated to a definition in terms of a concrete visual syntax and a grammar in terms of a set of rules, while the semantics are defined in terms of some mathematical structure (formal semantics [21]) or an ontological theory (ontological or real-world semantics [12]). However, style guides, reference models, patterns, etc., can also be seen as part of the set of conventions that define a (modelling) language.

Sometimes, a modelling language comes in the form of a number of connected sub-languages. Typical examples are ARIS [34], UML [26] and ArchiMate [3,23], each featuring more specific languages focused on one aspect or layer, as well as (some more, some less) the integration/coherence between these aspects or layers.

We argue that, if a model is represented in some (pre-)defined language, then the (relevant part of the) definition of that language should actually be seen as part of the model. This also allows us to illustrate the role of the modelling language in the sense of providing a trade-off. If, given some purpose, there is a need to represent an abstraction A, and one has a choice between using a language L1 with an elaborate set of conventions (the light grey part), or a language L2 with only a more limited set of conventions (the dark grey part), then this will lead to a difference in the size of the (situation-specific parts of the) model one would still need to specify. This is illustrated in Fig. 2, where we show that the "size" of the two models as a whole remains the same. Of course, the notion of "size" is to be clarified. We will return to this in Sect. 5 when discussing the consequences of Requisite Variety, as the "size" connects directly to the variety of the model.

Fig. 2. Trade-off between languages with different sizes of their definition (M1: A expressed in L1; M2: A expressed in L2)
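The trade-off sketched in Fig. 2 can be caricatured numerically. The counts below are invented purely to illustrate the point that whatever a language does not fix by convention must be restated in every model:

```python
# Total informational payload the abstraction A must convey (invented count).
abstraction_facts = 10

# L1 fixes many facts once, by convention; L2 fixes only a few.
l1_conventions = 7
l2_conventions = 2

# The situation-specific part of each model covers whatever remains.
model_in_l1 = abstraction_facts - l1_conventions  # small model, "fat" language
model_in_l2 = abstraction_facts - l2_conventions  # large model, "thin" language

# The "size" of the whole (language definition + model) is the same either way.
assert l1_conventions + model_in_l1 == l2_conventions + model_in_l2 == abstraction_facts
```

On this caricature, choosing a language only moves the border between the grey (convention) and white (model) parts; it does not shrink the whole.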

A final consideration is the fact that the conventions which define a modelling language need to reflect the intended set of valid models, which also means that these conventions need to accommodate all the possible purposes of these intended models. For standardised general-purpose languages, such as UML [26], BPMN [27] or ArchiMate [3,23], this does lead to tensions, as the set of purposes keeps growing, resulting in a continuous growth of the set of allowable models,


thus also triggering a growth in the "size" of the body of conventions defining these languages [6,8]. As such, we should also realise that the grey parts in Fig. 2 amount only to that part of the respective languages that is relevant for the interpretation of the white parts. However, the other parts of the language will also need to be learned by the "users" of the language, or at least be visible as part of the standard.

4 Requisite Variety

The Law of Requisite Variety by W. Ross Ashby [2] is one of the defining contributions to the field of General Systems Theory, and Cybernetics in particular. This law postulates that when a system C aims to control/regulate parts of the behaviour of a system R, then the variety of C should (at least) match the variety of that part of R's behaviour it aims to control; "only variety can destroy variety" [2, p. 207]. The notion of Requisite Variety, which also builds on Shannon's Information Theory [35], refers to the variety that is required from the controlling system, where variety refers to the number of states of a system (system taken in the most general sense).

In the context of the Law of Requisite Variety it is also important to clearly understand the scope of the system that needs to be controlled. For example, controlling a car in the sense of getting it into motion and steering it in a certain direction on an empty car park is quite different from driving a car through busy traffic. The latter system clearly needs to deal with a larger variety.

It is also important to realise that the earlier remark "the variety of C should (at least) match the variety of that part of R's behaviour it aims to control" is directly related to the notion of abstraction as discussed in Fig. 1. The controlling system C needs, in line with its steering goal/purpose, to have access to a relevant abstraction/model of the aspects of R it aims to control. More recently, [20] formulated this as (stress added by us): "The analogy of structure between the controller and the controlled was central to the cybernetic perspective. Just as digital coding collapses the space of possible messages into a simplified version that represents only the difference that makes the difference, so the control system collapses the state space of a controlled system into a simplified model that reflects only the goals of the controller. Ashby's law does not imply that every controller must model every state of the system but only those states that matter for advancing the controller's goals. Thus, in cybernetics, the goal of the controller becomes the perspective from which the world is viewed."

In our understanding, domain modelling involves different flavours of variety. In the next section, we will explore these in more detail. The existence of these varieties (as well as complexities) has inspired us to try to follow the logic behind Ashby's Law of Requisite Variety, and apply it in the context of domain modelling. In doing so, however, we should indeed not forget that the Law of Requisite Variety is defined in the context of one system controlling (aspects of)


another system. As such, it would have been better to refer to it as the Law of Requisite Variety of systems controlling systems.

The latter raises three main questions: (1) how to define variety in the context of domain modelling, (2) what is "that" which may need to have requisite variety, and (3) would this indeed result in a law akin to the Law of Requisite Variety? In line with its explorative nature, this paper will certainly not provide the full answer to these questions. In the next section, while discussing (some of) the flavours of variety as we (currently) understand them to exist in relation to domain modelling, we will also reflect on these three questions.
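As a first, deliberately naive answer to question (1), variety can be operationalised as a count of distinct states, and the law as the requirement that the controller's repertoire match the disturbances it must counter. The toy controller below is our own construction, not Ashby's formalisation; all states and the cancellation table are invented:

```python
# Toy illustration of "only variety can destroy variety": a controller can
# keep a system on course only if it has a distinct response available for
# every distinct disturbance.
cancels = {"gust_left": "steer_right", "gust_right": "steer_left", "gust_none": "hold"}
disturbances = list(cancels)  # variety of the controlled behaviour: 3 states

def on_course(repertoire):
    """True iff every disturbance can be cancelled by some available response."""
    return all(cancels[d] in repertoire for d in disturbances)

rich_controller = {"steer_left", "steer_right", "hold"}  # variety 3: matches
poor_controller = {"hold"}                               # variety 1: falls short

assert on_course(rich_controller)       # matching variety: control succeeds
assert not on_course(poor_controller)   # insufficient variety: control fails
```

The car-park versus busy-traffic example above corresponds to growing the `disturbances` set: the driver's repertoire must grow with it.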

5 Requisite Variety in Domain Modelling

In this section, we explore (some of) the flavours of requisite variety one may encounter in the context of domain modelling.

Requisite variety needed to conceptualise a domain – When conceptualising a domain (see Fig. 1), an observer needs to deal with variety in the domain, uncertainties about properties of the domain, as well as complexity of the domain. The combination of these yields a requisite variety that needs to be met by the observer(s) of the domain that is to be modelled, in the sense that:
– the observer should harbour a conceptualisation in their mind, catering for the variety-in, uncertainties-about, and complexity-of the domain;
– understanding this conceptualisation (e.g. to be able to make an abstraction in line with a modelling purpose) requires a certain "mental state space";
– the latter corresponds to the need for variety from the observer, i.e. requisite variety.

Towards future research, it would be beneficial to better qualify, or even quantify, how the variety-in, uncertainties-about, and complexity-of the domain results in a requisite variety that needs to be (potentially) matched by the cognitive ability of the observer.

Residual requisite variety in relation to the model purpose – Driven by the model purpose, there is a need to capture a relevant (but not trivialised) part of the domain. In other words, the observer(s) will need to make an abstraction (see Fig. 1) of the domain. As the abstraction involves filtering out "details", one would expect that the residual requisite variety needed from the observer, to harbour the abstraction in their mind, is less than the requisite variety needed for the "full" domain conceptualisation. In line with Fig. 1, the resulting model will also need to meet the residual requisite variety, in the sense that the latter needs to be captured in terms of the "informational payload" of the model (indeed, also linking back to the information-theoretical [35] roots of Ashby's Law of Requisite Variety).

The relation between the original requisite variety of a domain and the residual requisite variety of an abstraction is also related to the point made in [20] regarding the need of a controlling system to have a model of the controlled system, tuned to its steering goals.
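That abstraction reduces residual requisite variety can be made concrete by counting states. The sketch below, with invented attributes and values, projects a toy domain's state space onto only the purpose-relevant attributes:

```python
from itertools import product

# Full (toy) domain state space: three independent attributes (invented).
moods = ["calm", "busy", "stressed", "idle"]   # 4 values
depts = ["sales", "hr", "it"]                  # 3 values
loads = range(5)                               # 5 values
full_states = set(product(moods, depts, loads))

# Abstraction for a given purpose: only mood and department matter,
# so the third attribute is filtered out (selection).
abstract_states = {(m, d) for (m, d, _) in full_states}

# The residual variety the model must carry is strictly smaller:
# 4 * 3 = 12 states instead of 4 * 3 * 5 = 60.
assert len(abstract_states) < len(full_states)
```

The "informational payload" of the model then only needs to distinguish the 12 residual states, not the full 60.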


Requisite variety trade-offs related to modelling languages – In Sect. 3, we already pointed (see Fig. 2) at the need to essentially include the (relevant parts of the) definition of the modelling language in a model. The whole of the specified model (the white parts in Fig. 2) and the parts provided by the language (the grey parts in Fig. 2) need to match the variety of the abstraction. The border between these, however, depends on the language used. Of course, when using a "thin" language with a small set of conventions, it may be easy to learn the language, and also easy to create modelling tools that support it. At the same time, specifying the actual models (the white parts in Fig. 2) will require more effort than it would when using a language with more pre-defined concepts and conventions, provided, of course, that these pre-defined concepts and conventions indeed meet the modelling purpose, and domain, at hand.

Here it is also interesting to refer back to older discussions (in the field of information systems engineering) regarding the complexity of modelling notations. As also mentioned by Moody [25], models need to provide some informational payload. A simpler notation might be easier to learn, and easier to look at when used in models, but when it cannot "carry" the needed informational payload, the seemingly simpler notation may actually turn out to be a liability (given that it transfers the modelling complexity to the modeller).

Towards future research, it would also be beneficial to better qualify, or even quantify, how abstraction actually results in a reduction of the requisite variety facing observers of a domain, as well as the "informational payload" needed from the model. In addition, with such insights, it would also be possible to reason more specifically about which part of the variety should preferably be covered by a (domain-specific) language, and which part should be left to the actual model.

Requisite variety originating from social complexity – The context in which the model is to be used may involve different stakeholders, uncertainty about their interests, backgrounds, etc. This is where, based on experiences from the IBIS project [9], Conklin coins the term social complexity [10,11]. Social complexity poses a risk to the successful outcome of development projects. As such, it is an aspect (with its variety) of a system (the development project) which needs to be managed. In this case, the original Law of Requisite Variety, in the sense of the Law of Requisite Variety of systems controlling systems, seems to apply. Part of the variety that is due to social complexity will need to be met/managed by the overall development/engineering process. However, the finer-grained processes/tasks in which domain models are actually created/used, in particular when multiple stakeholders are involved, will also need to deal with this variety, especially since, in terms of Fig. 1, those involved will need to agree on the purpose/goals of the model, the abstraction to be made, and its representation in terms of the model.


A future research challenge would be to once again further qualify, or even quantify, the involved variety, as well as to show how different collaborative modelling strategies [4,36] may deal with it.

6 Conclusion

In this paper, inspired by Ashby's Law of Requisite Variety, we explored some of the forms of variety that confront domain modelling, as well as the potential consequences for models, modelling languages, and the act of modelling. We first provided a review of our current understanding of domain modelling (including enterprise and conceptual modelling) and the role of modelling languages. Using this as a base, we then explored some of the possible consequences/challenges of the notion of requisite variety in a domain modelling context.

As mentioned in the introduction, we see this paper as part of an ongoing "journey", with the aim of deepening our insights into the foundations of domain modelling. In line with this, we certainly do not claim this paper to be fully finished work. These reflections provide a snapshot of our current understanding of these topics. We hope they will trigger debates in the modelling community, which we expect to provide fruitful insights for the next steps of this journey.

Acknowledgements. We would like to thank the anonymous reviewers for their comments on earlier versions of this short paper.

References

1. Apostel, L.: Towards the formal study of models in the non-formal sciences. Synthese 12, 125–161 (1960). https://doi.org/10.1007/978-94-010-3667-2_1
2. Ashby, W.R.: An Introduction to Cybernetics. Chapman & Hall, London (1956)
3. Band, I., et al.: ArchiMate 3.0 Specification. The Open Group (2016)
4. Barjis, J., Kolfschoten, G.L., Verbraeck, A.: Collaborative enterprise modeling. In: Proper, E., Harmsen, F., Dietz, J.L.G. (eds.) PRET 2009. LNBIP, vol. 28, pp. 50–62. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-01859-6_4
5. Batini, C., Mylopoulos, J.: Abstraction in conceptual models, maps and graphs. Tutorial presented at the 37th International Conference on Conceptual Modeling, ER 2018, Xi'an, China (2018)
6. Bjeković, M.: Pragmatics of enterprise modelling languages: a framework for understanding and explaining. Ph.D. thesis, Radboud University, Nijmegen, the Netherlands (2018)
7. Bjeković, M., Proper, H.A., Sottet, J.-S.: Embracing pragmatics. In: Yu, E., Dobbie, G., Jarke, M., Purao, S. (eds.) ER 2014. LNCS, vol. 8824, pp. 431–444. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12206-9_37
8. Bjeković, M., Proper, H.A., Sottet, J.-S.: Enterprise modelling languages. In: Shishkov, B. (ed.) BMSD 2013. LNBIP, vol. 173, pp. 1–23. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-06671-4_1

On Domain Modelling and Requisite Variety




H. A. Proper and G. Guizzardi


Virtual Factory: Competence-Based Adaptive Modelling and Simulation Approach for Manufacturing Enterprise

Emre Yildiz1, Charles Møller1, and Arne Bilberg2

1 Center for Industrial Production, Aalborg University, Fibigerstraede 16, 9220 Aalborg, Denmark
[email protected]
2 Mads Clausen Institute, University of Southern Denmark, 6400 Sønderborg, Denmark

Abstract. The evolution of industries constantly forces enterprises to adapt to ever-changing market dynamics. Companies are challenged to remodel their resources, processes, and competencies, as well as to define new goals in accordance with evolving complex and dynamic environments. The Virtual Factory, as a dynamic, cognitive, open, and holistic system, promises potential as an adaptive enterprise modelling tool to support manufacturing companies in dealing with such challenges. This short paper attempts to frame the theoretical concepts for evolving markets and adaptive organisations (systems) in terms of the theory of industrial cycles, systems theory, and competence theory. Furthermore, the Virtual Factory concept is presented and discussed on the basis of these theories and the four dimensions of competence theory.

Keywords: Enterprise modelling theory · Enterprise modelling practice · Enterprise modelling tool · Multi-level enterprise modelling · Modelling in industry 4.0 · Digital twins

1 Introduction

In the age of Industry 4.0 and smart manufacturing, innovation, increasing competition, and rapidly changing demands can be considered among the main forces shaping business systems, organisation processes, products, and services, as well as new strategies and methods of production. Enterprises need to evolve to adapt to continuously changing market demands, technologies, and regulations in order to stay competitive. This change results in an ever-increasing complexity in the product, process, and system domains, which affects how organisations analyse and formalise business processes and related data structures. Another result of this evolution is the ever-deeper integration of design, simulation, management, and maintenance of product/service and system lifecycle processes, which has also been called the "era of enterprise integration" [1]. The need for more accurate "AS-IS" models of existing enterprise architecture and behaviour, from which to derive more efficient "TO-BE" models and solutions, is becoming more vital as lifecycles keep shortening.

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved. J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 197–207, 2020. https://doi.org/10.1007/978-3-030-63479-7_14

The modular design of products is considered beneficial for faster product evolution [2] and complexity management [3], and improves the strategic flexibility of enterprises in responding to unpredictable futures [4]. However, capabilities for modular products and other strategic flexibilities require the integration of know-what, know-why, and know-how forms of knowledge [5] in the design, management, and maintenance of the product, process, and system domains. Therefore, one of the most relevant challenges faced by manufacturing enterprises is the synchronisation and simultaneous generation of product, process, and system (organisation) models in the early modelling and planning stages [6]. The above-mentioned needs and challenges make Enterprise Modelling (EM) an ever more crucial activity for adapting to ever-changing complex environments and for developing new competencies for the effective strategic alignment of an organisation with its environment. EM is defined by Vernadat "as the art of externalising knowledge in the form of models about the structure, functionality, behaviour, organisation, management, operations and maintenance of whole or part of an enterprise, or of an enterprise network, as well as the relationships with its environment" [1]. The Virtual Factory (VF), a concept initially defined as an integrated simulation model of a factory and its subsystems representing the factory as a whole [7], has evolved in practice over the last decade together with recent technological developments in modelling and simulation (M&S), digital twins (DT), and virtual reality (VR), as well as approaches to developing and utilising VF tools and models [4, 5]. These developments and approaches have enabled the dynamic, cognitive, open, and holistic virtual representation of actual organisations on digital platforms.
This progress provoked a reconsideration of the definition of VF on the grounds of Hegel's notion concerning the existence and definition of concepts, articulating that "things are what they are through the activity of the Concept that dwells in them and reveals itself in them" [8]. Yildiz and Møller suggested that VF is "an immersive virtual environment wherein digital twins of all factory entities can be created, related, simulated, manipulated, and communicate with each other in an intelligent way" [9]. Accordingly, VF promises potential as an adaptive enterprise modelling tool to support manufacturing enterprises during the evolution forced by smart manufacturing and Industry 4.0. This work attempts to frame some theories and concepts for evolving complex environments and the evolution of organisations. Furthermore, we discuss the VF on the basis of the framed concepts and principles in order to interpret its impact on the design, modelling, optimisation, control, and maintenance of enterprise systems and processes. Enterprise models are not static and need to reflect the evolution of reality [1]. Therefore, we aim to contribute to the evolution of EM, as driven by new research trends and technological developments, by addressing model maintenance and updating based on reality. The paper is organised as follows: Sect. 2 frames the theoretical foundations, including the theory of industrial cycles, which explains the dynamics of evolving markets; systems theory, which presents the principles of enterprise dynamics as a system; and competence theory, which grounds the principles for guiding the design and management of evolving systems. Sect. 3 introduces the vocabulary and the VF concept; Sect. 4 evaluates the VF based on the four dimensions of competence theory; Sect. 5 discusses the implementation of basic forms of change in organisation architectures using VF tools, before concluding in Sect. 6.


2 Theoretical Foundations

Charles Fine's work, presented in his book "Clockspeed: Winning Industry Control in the Age of Temporary Advantage", helps us to interpret external forces and the evolving nature of industrial forces and their effects on the domestic domains of enterprises in terms of products, processes, and organisational systems [10]. The concepts and relatively universal principles introduced in his work, also called the Theory of Industrial Cycles [11], are based on the idea that the ultimate core advantage for companies is the capability to adapt to ever-changing business environments. He also proposed 3-dimensional concurrent engineering across these domains in order to handle evolution [11, 12]. The evolution of models in the product, process, and system domains, called the co-evolution paradigm, has also been investigated in recent studies [13, 14], and some consider VF a prerequisite for handling this co-evolution problem [15]. The evolution of enterprises from mechanical systems to organismic systems and eventually social systems is depicted by Russell Ackoff [16], who defines a system as "a whole consisting of two or more parts (1) each of which can affect the performance or properties of the whole, (2) none of which can have independent effect on the whole, and (3) no subgroup of which can have an independent effect on the whole" [16]. This definition and the principles of systems theory state that the fundamental properties of a system taken as a whole stem from the interactions of its parts, not their separate actions. Thus, "a system is not the collection of its parts but the product of the interactions of its parts" [17]. Because of this, a system cannot be understood merely by separating it into parts and analysing those parts. As social systems, enterprises have their own purposes, and they are open systems embedded in larger systems which have their own purposes too.
Therefore, Ackoff [16] concludes that the situations (or problems) faced in enterprises, which are complex systems of strongly interacting problems, require analysis (taking apart to understand) and synthesis (the opposite of analysis) together as complementary activities, and the redesign of either the entity, the complex system, or its environment to solve a problem [17]. In this regard, the theory of complex systems provides principles which enlarge our understanding of the social, natural, and artificial systems under discussion. Herbert Simon defines a complex system as "a system made up of a large number of parts that interact in a nonsimple way" [18]. His study on the structure of complex systems reveals their internal dynamics and structure and argues that highly complex systems can be built faster when there are stable or quasi-stable intermediate forms [19, 20]. Since simulation can facilitate digital integration across manufacturing lifecycles [21] and can be used for diagnostic, predictive, and prescriptive analytics [22], the VF concept can provide a foundation for building the intermediate stable complex forms of the smart factories of the future. The theories addressed above provide concepts and principles that increase our understanding of dynamic phenomena of all types. Finally, Competence-Based Strategic Management Theory (or competence theory) incorporates the above-mentioned concepts and principles in a more inclusive, dynamic, and systemic way [23, 24] and leads to new insights into more feasible and consistent organisation/system design principles and processes [25]. Competence theory extends the systems view of a firm by identifying the strategic goal-seeking behaviours of a firm which correspond to real-world cognitive and decision-making situations. Ron Sanchez defines the essential characteristics of system design and proposes a concept of an organisation structure for the effective strategic alignment of an enterprise with its environment [26]. He identifies four basic types of strategic environments and proposes four basic types of change in organisation resources, capabilities, and coordination to respond to changing environments (convergence, reconfiguration, absorptive integration, and architectural transformation). Sanchez also proposed four cornerstones/dimensions (dynamic, open-systems, cognitive, and holistic) for achieving competence-based strategic management in terms of building and leveraging competencies in dynamically changing complex environments [25]. We suggest that VF can achieve these four cornerstones and provide a useful environment to design, analyse, optimise, and simulate the four types of essential change in complex enterprise models, stimulating management thinking at all levels about the kinds of flexibility and reconfigurability that need to be designed into manufacturing enterprises when future demands may differ significantly from past demands. Therefore, the vocabulary and the VF concept will be presented in the next section before discussing the concept on the basis of the four dimensions of competence theory.

3 Vocabulary and Concept

3.1 Vocabulary: Factory as a Manufacturing Enterprise

The terms organisation, firm, and enterprise are used synonymously in this article, since the theories at hand hold that, as social systems, they have similar characteristics in terms of strategic goal-seeking and openness to larger systems. The factory, which includes social, natural, and artificial systems, defines our scope as the key system of a manufacturing enterprise, identified as an open system of assets and flows which covers tangible assets, such as production tools, and intangible assets, such as capabilities. Capabilities represent repeatable actions that use other tangible and intangible assets in order to pursue specific goals. Goals are the set of interrelated objectives, such as creating/producing products or semi-products, that collectively motivate the actions of a manufacturing enterprise and give direction to its competence building and competence leveraging activities. A manufacturing enterprise achieves competence when it sustains the coordinated deployment of its stock of assets to achieve its goals. A manufacturing enterprise can leverage its competence by using existing assets and capabilities in current or new environmental conditions without qualitative changes. Competence building, however, requires acquiring and using qualitatively different stocks of assets and capabilities to pursue goals. A manufacturing enterprise links, coordinates, and manages the various resources available to it, along with useful assets and capabilities, into a system to carry out goal-seeking activities. The coordination and management of the systemic interdependencies among an enterprise's internal and external resources may evolve alongside its competitive and cooperative interactions. VF may, therefore, be seen as a virtual twin of a goal-seeking open system which supports the competence building and leveraging activities of an enterprise in achieving its strategic goals.
VF can provide data-intensive simulation models of existing systems and processes for the design, management, and maintenance of resources used in creating and adopting new technologies, processes, products, and forms of strategies. Thus, VF can contribute to the effective strategic alignment of manufacturing enterprises with their environments by enabling the predictive capability to test different flexibility aspects (operating, resource, coordination, and cognitive flexibilities) in highly complex manufacturing-system scenarios. VF can therefore support the managerial cognition needed to imagine, develop, and leverage the enterprise competencies which shape competitive environments.

3.2 Virtual Factory Concept

Yildiz and Møller [27] positioned the VF based on its functions and role in the product and production lifecycle processes more distinctly than previously proposed concepts, as seen in Fig. 1. This separation extends the understanding of the link between the production system and product design, as well as process planning. It can also be valuable for identifying the function and role of VF in enterprises in which digitalisation in product development and in production execution systems is uneven. Bidirectional data integration between engineering and execution systems such as enterprise resource planning (ERP), manufacturing execution systems (MES), and product lifecycle management (PLM) enables the creation, relation, and manipulation of digital twins (DTs) using simulation tools, as well as control of the actual systems. Such capabilities promise a high potential for handling complexity and for supporting flexibility and concurrent engineering. This also appears useful for strategic management decision processes, especially in dynamic environments and during architectural transformation, since strategic managers should not only identify the needs for existing resources and capabilities, but should also identify the resources and capabilities required for an imagined future environment in order to compete effectively.

Fig. 1. Virtual factory concept


Modular VF Architecture for Manufacturing Enterprise. Figure 2 shows an example of instantiation at the manufacturing enterprise level. Different types of simulations representing the enterprise and its subsystems can be integrated in a modular way to represent the enterprise holistically. The real-time integration capabilities of M&S and collaborative VR enhance the dynamic and immersive representation of enterprise operations and the interaction with the models [28]. Together with industry- or organisation-specific interfaces between the different levels of simulations, and between the simulations and the actual execution, engineering, and business platforms, the VF concept can achieve predictability/diagnosability, dynamic reconfigurability, and adaptability. Integrating simulations of the different subsystems of an enterprise, together with co-simulation capabilities, has the potential to contribute to an increasingly cognitive and holistic representation of a manufacturing enterprise.

Fig. 2. Virtual factory architecture

Extending VF with additional enterprise-level simulations, such as business process and data flow simulations, can increase its capability to support enterprise architecture governance and transition planning. Various opportunities, solutions, and migration scenarios can be simulated in VF to support decisions during transition planning. Similarly, architectural changes and discrete implementation scenarios can be simulated at different resolutions in VF simulations.
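The modular integration idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: all class and method names are hypothetical, and the subsystem models are deliberately toy-like; the point is only that subsystem simulations sharing a common interface can be stepped together, with the outputs of one level wired into the inputs of the next.

```python
class SubsystemSimulation:
    """Common interface assumed for a simulation of one enterprise subsystem."""

    def __init__(self, name):
        self.name = name
        self.state = {}

    def step(self, inputs):
        """Advance one step; return outputs consumed by other subsystems."""
        raise NotImplementedError


class LineSimulation(SubsystemSimulation):
    def step(self, inputs):
        # Toy model: the line produces exactly what is ordered.
        produced = inputs.get("orders", 0)
        self.state["produced"] = self.state.get("produced", 0) + produced
        return {"finished_goods": produced}


class WarehouseSimulation(SubsystemSimulation):
    def step(self, inputs):
        # Toy model: finished goods accumulate as stock.
        self.state["stock"] = self.state.get("stock", 0) + inputs.get("finished_goods", 0)
        return {"stock": self.state["stock"]}


class VirtualFactory:
    """Co-simulates subsystems in a fixed order, wiring outputs to inputs."""

    def __init__(self, subsystems):
        self.subsystems = subsystems

    def step(self, external_inputs):
        signals = dict(external_inputs)
        for sim in self.subsystems:
            signals.update(sim.step(signals))
        return signals


vf = VirtualFactory([LineSimulation("line"), WarehouseSimulation("warehouse")])
vf.step({"orders": 5})
result = vf.step({"orders": 3})
print(result["stock"])  # stock accumulates across the two steps
```

In a real VF, each `step` would delegate to a full simulation engine, and the signal dictionary would be replaced by the industry- or organisation-specific interfaces mentioned above.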

4 Virtual Factory and Four Dimensions of Competence Theory

4.1 Dynamic System

The first dimension of competence theory states that organisations (systems) must be capable of dynamically performing the necessary changes in their resources and capabilities in order to respond to future needs and opportunities and stay competitive [25]. VF, as a virtual twin of actual systems, can represent models and processes more dynamically and in real time, as seen in Fig. 3. Therefore, VF simulations can be used to model, simulate, and even respond in real time to changes in the actual processes, resources, capabilities, and functions of organisations. Such competence enables a dynamic and realistic representation of the actual systems of manufacturing enterprises and their environment, which allows changes to occur dynamically. Every time a simulation model of a manufacturing line or a machine runs, for example, a number of parameters can be imported from the actual MES, so that designing, modelling, and planning can be performed more dynamically, in accordance with changes in the real world.
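The parameter-import pattern just described can be sketched as follows. This is an illustrative assumption, not the paper's implementation: `fetch_mes_parameters` stands in for a real MES query (in practice an API or database call), and the throughput formula is a deliberately simple placeholder model.

```python
def fetch_mes_parameters():
    """Stand-in for querying the MES; returns hard-coded sample values here."""
    return {"cycle_time_s": 42.0, "machines_up": 3, "shift_hours": 8}


def run_line_simulation(params):
    """Toy throughput model driven entirely by the imported parameters."""
    shift_seconds = params["shift_hours"] * 3600
    units_per_machine = int(shift_seconds / params["cycle_time_s"])
    return units_per_machine * params["machines_up"]


# Refresh parameters on every run, so the model tracks the real system.
params = fetch_mes_parameters()
throughput = run_line_simulation(params)
print(throughput)
```

Because the parameters are re-fetched at every run rather than baked into the model, the simulation reflects the current state of the line without manual remodelling.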

Fig. 3. 3D discrete event virtual factory simulation

4.2 Open System

The second cornerstone of competence theory is the identification of systems as open systems embedded in larger systems [25]. Manufacturing enterprises are integrated into larger systems, such as nations and industries, from which they receive inputs and to which they deliver outputs. To stay competitive while environments change dynamically, manufacturing enterprises need to be open to establishing new connections and relations with their environments. VF simulations likewise represent reality both internally and externally. A whole manufacturing enterprise or its subsystems can be represented by a simulation model which is integrated into larger systems, such as an ERP or supply chain system, just as in the real world. Moreover, each subsystem's model is also represented through its relationships with the other subsystems, as seen in Fig. 2. VF is also open to creating new connections with its environmental systems through DT and Internet of Things (IoT) technologies. Therefore, VF is an open system which can be embedded into other systems and can establish new connections with its evolving environment to access and coordinate a changing array of input resources, as well as to create a changing array of outputs.


4.3 Cognitive (Sense-Making) System

Another aspect of competence theory is the achievement of cognitive systems amid the dynamic and evolving complexity of the internal and external system models of enterprises [25]. To support managerial cognition in identifying the resources and capabilities needed to achieve goals and sustain competition in highly dynamic and complex environments, systems and models need to be cognitive (easy to make sense of). As a result of dynamic representation, together with technological advancements such as 3D discrete event simulation (DES), DT, and VR, the VF concept enables dynamic and cognitive models for both horizontally and vertically diverse managers and engineers. The integration capabilities of these technologies promise an increase in information quality. Modelling, simulating, analysing, and interacting with models collaboratively, as seen in Fig. 4, opens up new possibilities for enterprise modelling and management [27, 28]. Therefore, VF can support the managerial cognition that underpins enterprise imagination for developing and exercising new resources and capabilities. Please scan the QR code in Fig. 5 for more visual material on VF demonstrations.

Fig. 4. Collaborative VR assembly simulation [27, 28]

4.4 Holistic System

The fourth and last aspect of competence theory is the achievement of a holistic view of the enterprise, so that it can function effectively and adapt to environmental changes as an open system [25]. To implement systematic changes in complex and evolving open systems, the management of a manufacturing enterprise needs to mediate the various interdependencies among its internal and external resources and capabilities. As described by the principles of systems theory, "If each part of a system, considered separately, is made to operate as efficiently as possible, the system as a whole will not operate as effectively as possible" [17]. Therefore, VF, as a set of integrated simulations representing a manufacturing enterprise including its subsystems, enables the holistic representation of existing complex systems. Integration and co-simulation capabilities enable the realistic design and simulation of internal and external changes at any level in highly complex and dynamic models. The different levels of simulation in VF can be integrated with each other through an interface that shares either real-time operating states or objectives and targets (Fig. 2). When a change is made in a manufacturing-operations-level simulation model, for example, the effect of that change can be observed simultaneously in the material handling, warehouse, or even supply chain simulation models.

Fig. 5. Scan this for VF demo.
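The propagation of a change from one simulation model to its integrated neighbours, as in the operations-to-warehouse example, resembles an observer pattern. The sketch below is a hedged illustration under that assumption; the class and model names are invented, and real VF models would exchange operating states or targets rather than strings.

```python
class SimulationModel:
    """Illustrative model that notifies integrated models of its changes."""

    def __init__(self, name):
        self.name = name
        self.observers = []
        self.events = []

    def attach(self, other):
        """Register another model as integrated with (observing) this one."""
        self.observers.append(other)

    def apply_change(self, change):
        self.events.append(change)
        # Propagate the change to every integrated simulation model.
        for obs in self.observers:
            obs.on_upstream_change(self.name, change)

    def on_upstream_change(self, source, change):
        self.events.append(f"reacted to {source}: {change}")


ops = SimulationModel("operations")
handling = SimulationModel("material_handling")
supply = SimulationModel("supply_chain")
ops.attach(handling)
ops.attach(supply)

ops.apply_change("add second shift")
print(handling.events)
```

A change applied to the operations model is thus observable in the material handling and supply chain models in the same step, which is the holistic behaviour the section describes.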

5 Discussion

The VF concept has the potential to integrate the engineering models forming the product, system, and process domains in a dynamic, open, cognitive, and holistic way. Moreover, VF simulations can be useful tools for designing and simulating the four types of change (convergence, reconfiguration, absorptive integration, and architectural transformation) described by Sanchez [26] in response to different strategic environments. Advanced M&S technologies can efficiently perform convergence as an incremental improvement of existing competencies. Together with advanced data integration across the whole enterprise, simulations can respond to changes in actual reality. Therefore, VF can be an adaptive enterprise modelling tool that supports managers and engineers in manufacturing enterprises in remodelling enterprise resources, leveraging and building new competencies, and defining and testing new strategies for evolving environments. Thus, competence-based adaptive modelling can be accomplished. VF, with its immersive simulation models, can also provide a platform for (re)designing new functional components into existing enterprise systems and architectures. It is also convenient for analysing the chain reactions of intended changes across related simulation models. VF tools can likewise provide a viable platform for implementing architectural changes in the existing resources, capabilities, and functions, and in their interdependent coordination, within a simulation model. Moreover, it is possible to transform the existing architecture while reconfiguring existing functions. All four types of change can thus be implemented in enterprise architectures with VF tools. Automated and manual data integration, as well as the ability to set constraints, limitations, and collisions in simulation models, can make the simulation of different scenarios more realistic. Competence theory is articulated at a high level of abstraction.
Thus, it is applicable to all kinds of organisational processes, including production systems. Nonetheless, to the best of our knowledge, no previous work has attempted to depict the abstractions of competence theory in a production system context.

6 Conclusion

Designing and developing enterprise models as adaptive dynamic systems in evolving complex environments is challenging for managers. The theory of industrial cycles explains the dynamic forces and their relations in the evolution of industries. Systems theory and complexity theory define the basic principles which need to be considered while designing and developing a complex system. Competence theory inherits the principles of complexity theory and systems theory to formulate more comprehensive, dynamic, and practical principles for organisation design and its processes. These principles and processes form the foundation of the VF concept and its utilisation as an adaptive enterprise modelling tool. In this short paper, we attempted to frame the theoretical concepts and principles underlying the VF concept as a competence-based adaptive enterprise modelling and simulation tool. The VF concept was also discussed on the basis of the four fundamental dimensions of competence theory.

Disclosure Statement. The authors and the stakeholders reported no potential conflict of interest.

Funding. The research presented in this article was funded by the Manufacturing Academy of Denmark (MADE) and Vestas Wind Systems A/S, including equipment and development support (Grant: 6151-00006B).

References

1. Vernadat, F.: Enterprise modelling: research review and outlook. Comput. Ind. 122 (2020). https://doi.org/10.1016/j.compind.2020.103265
2. O’Grady, P.J.: The Age of Modularity: Using the New World of Modular Products to Revolutionize Your Corporation. Adams & Steele, Iowa City (1999)
3. Baldwin, C.Y., Clark, K.B.: Design Rules: The Power of Modularity. MIT Press, Cambridge (2000)
4. Sanchez, R.: Strategic flexibility in product competition. Strateg. Manag. J. 16, 135–159 (1995)
5. Sanchez, R., Heene, A.: Managing articulated knowledge in competence-based competition. In: Strategic Learning and Knowledge Management, pp. 163–187. Wiley (1996)
6. Azevedo, A., Almeida, A.: Factory templates for digital factories framework. Robot. Comput. Integr. Manuf. 27, 755–771 (2011). https://doi.org/10.1016/j.rcim.2011.02.004
7. Jain, S., Choong, N.F., Aye, K.M., Luo, M.: Virtual factory: an integrated approach to manufacturing systems modeling. Int. J. Oper. Prod. Manag. 21, 594–608 (2001). https://doi.org/10.1108/MRR-09-2015-0216
8. Hegel, G.W.F.: The Encyclopaedia Logic. Hackett Publishing Company Inc, Indianapolis (1991)
9. Yildiz, E., Møller, C.: Building a virtual factory: an integrated design approach to building smart factories. J. Glob. Oper. Strateg. Sourc. (2019). (In Review)
10. Fine, C.H.: Clockspeed: Winning Industry Control in the Age of Temporary Advantage. MIT Sloan School of Management (1998)
11. Lepercq, P.: The fruit fly and the jumbo jet: from genetics to the theory of industrial cycles applied to the aircraft industry (2008). http://www.supplychainmagazine.fr/TOUTE-INFO/Lecteurs/Fruit-PLepercq.pdf
12. Fine, C.H.: Clockspeed-based strategies for supply chain design. Prod. Oper. Manag. 9, 210 (2000). https://doi.org/10.1111/j.1937-5956.2000.tb00134.x
13. Tolio, T., et al.: SPECIES-Co-evolution of products, processes and production systems. CIRP Ann. Manuf. Technol. 59, 672–693 (2010). https://doi.org/10.1016/j.cirp.2010.05.008


14. Leitner, K.-H.: Pathways for the co-evolution of new product development and strategy formation processes: empirical evidence from major Austrian innovations. Eur. J. Innov. Manag. 18, 172–194 (2015). https://doi.org/10.1108/EJIM-01-2014-0002
15. Tolio, T., Sacco, M., Terkaj, W., Urgo, M.: Virtual factory: an integrated framework for manufacturing systems design and analysis. Procedia CIRP 7, 25–30 (2013). https://doi.org/10.1016/j.procir.2013.05.005
16. Ackoff, R.L.: Systems thinking and thinking systems. Syst. Dyn. Rev. 10, 175–188 (1994)
17. Ackoff, R.L.: Understanding business: environments. In: Lucas, M. (ed.) Understanding Business Environments, pp. 217–227. Routledge (2005)
18. Simon, H.A.: The architecture of complexity. Proc. Am. Philos. Soc. 106, 467–482 (1962). https://doi.org/10.1080/14759550302804
19. Simon, H.A.: The Sciences of the Artificial. MIT Press, London (1996)
20. Simon, H.A.: How complex are complex systems. In: Proceedings of the Biennial Meeting of the Philosophy of Science Association, pp. 507–522 (1976)
21. Mourtzis, D.: Simulation in the design and operation of manufacturing systems: state of the art and new trends. Int. J. Prod. Res. 58, 1927–1949 (2020). https://doi.org/10.1080/00207543.2019.1636321
22. Jain, S., Shao, G., Shin, S.J.: Manufacturing data analytics using a virtual factory representation. Int. J. Prod. Res. 55, 5450–5464 (2017). https://doi.org/10.1080/00207543.2017.1321799
23. Sanchez, R., Heene, A., Thomas, H.: Introduction: towards the theory and practice of competence-based competition. In: Dynamics of Competence-Based Competition: Theory and Practice in the New Strategic Management, pp. 1–35. Pergamon (1996)
24. Sanchez, R., Heene, A.: Reinventing strategic management: new theory and practice for competence-based competition. Eur. Manag. J. 15, 303–317 (1997). https://doi.org/10.1016/S0263-2373(97)00010-8
25. Sanchez, R.: Strategic management at the point of inflection: systems, complexity and competence theory. Long Range Plann. 30, 939–946 (1997). https://doi.org/10.1016/S0024-6301(97)00083-6
26. Sanchez, R.: Architecting organizations: a dynamic strategic contingency perspective. Res. Competence Based Manag. 6, 7–48 (2012). https://doi.org/10.1108/S1744-2117(2012)0000006004
27. Yildiz, E., Møller, C., Bilberg, A.: Virtual factory: digital twin based integrated factory simulations. In: 53rd CIRP Conference on Manufacturing Systems. Elsevier B.V. (2020). https://doi.org/10.1016/j.procir.2020.04.043
28. Yildiz, E., Møller, C., Melo, M., Bessa, M.: Designing collaborative and coordinated virtual reality training integrated with virtual and physical factories. In: International Conference on Graphics and Interaction 2019, pp. 48–55. IEEE Press (2019). https://doi.org/10.1109/ICGI47575.2019.8955033

Enterprise Ontologies

Relational Contexts and Conceptual Model Clustering

Giancarlo Guizzardi¹, Tiago Prince Sales¹(✉), João Paulo A. Almeida², and Geert Poels³

¹ Conceptual and Cognitive Modeling Research Group (CORE), Free University of Bozen-Bolzano, Bolzano, Italy
{giancarlo.guizzardi,tiago.princesales}@unibz.it
² Ontology & Conceptual Modeling Research Group (NEMO), Federal University of Espírito Santo, Vitória, Brazil
[email protected]
³ UGent Business Informatics Research Group, Ghent University, Ghent, Belgium
[email protected]

Abstract. In recent years, there has been a growing interest in the use of reference conceptual models to capture information about complex and sensitive business domains (e.g., finance, healthcare, space). These models play a fundamental role in different types of critical semantic interoperability tasks. Therefore, it is essential that domain experts are able to understand and reason with their content. In other words, it is important for these reference conceptual models to be cognitively tractable. This paper contributes to this goal by proposing a model clustering technique that leverages the rich semantics of ontology-driven conceptual models (ODCM). In particular, the technique employs the notion of Relational Context to guide automated model breakdown. Such Relational Contexts capture all the information needed for understanding entities “qua players of roles” in the scope of an objectified (reified) relationship (relator).

Keywords: Conceptual model clustering · Complexity management in conceptual modeling

1 Introduction

© IFIP International Federation for Information Processing 2020
Published by Springer Nature Switzerland AG 2020. All Rights Reserved
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 211–227, 2020.
https://doi.org/10.1007/978-3-030-63479-7_15

In recent years, there has been a growth in the use of reference conceptual models, in general, and domain ontologies, in particular, to capture information about complex and critical domains [11]. However, as the complexity of these domains grows, often so does the sheer size and complexity of the artifacts that represent them. Moreover, in sensitive domains (e.g., finance, healthcare), these models play a fundamental role in different types of critical semantic interoperability tasks; therefore, it is essential that domain experts are able to understand and accurately reason with the content of these models. The human capacity for processing unknown information is very limited, containing bottlenecks in visual short-term memory and causing problems to identify and hold stimuli [18]. Therefore, there is an evident need for developing adequate complexity management mechanisms for reference conceptual models.

One type of such complexity management mechanisms is conceptual model modularization or Conceptual Model Clustering (henceforth CMC) [1]. CMC is the process by which a model is fragmented into smaller interconnected parts [17], each of which can be more easily manipulated by a model user than the entire model. The greatest challenge in CMC is the process for module extraction, namely, coming up with adequate criteria for dividing the model into modules that ease model understanding.

Traditionally, different techniques have been used for module extraction (e.g., [1,16]). However, almost the totality of these approaches address modularization in languages that are ontologically neutral [14], such as UML, ER diagrams or OWL¹. While these languages may have a well-defined abstract syntax and a formal (logical) semantics, in general, they lack an ontological semantics. Consequently, the modularization techniques developed for them rely on criteria that leverage almost exclusively the syntactical properties of the models, typically, topological ones [28].

In contrast, ontology-driven conceptual modeling (ODCM) languages are systematically designed to conform to an underlying ontological theory. In particular, an ODCM language contains exactly the modeling primitives that are necessary to represent the ontological distinctions put forth by its underlying ontology. ODCM approaches have enjoyed increasing adoption by the Conceptual Modeling community as a number of independent results consistently show their benefits for improving the quality of conceptual models (e.g., [27]). An example of an ODCM language is OntoUML [10], whose primitives reflect the underlying UFO foundational ontology [10].
In this paper, we leverage the ontologically well-founded semantics of OntoUML to propose a formal approach for automated modularization of conceptual models. The proposed approach breaks down an OntoUML model into a number of Relational Contexts. Intuitively, Relational Contexts are modules that capture all the information needed for understanding entities qua players of roles in the scope of an objectified (reified) relationship (ontologically speaking, the so-called relators).

As reported in [23], Relators and Roles are clearly the most used OntoUML constructs in conceptual models. This is unsurprising, given the strong adoption of OntoUML/UFO in Business (Organizational/Social/Legal) domains [26], as well as the fact that in these realms the bulk of the domain knowledge is concentrated in relationships and roles. As argued in [12], especially in these realms, “we seldom interact with these entities qua-themselves, but we frequently conceive objects qua-playing-certain-roles in given ‘contexts’... For example, most of our interactions with other human beings and, hence, our conceptualizations of these interactions are thought in terms of roles such as parent, employee, student, president, citizen, customer, etc. Analogously, when thinking about, for instance, cars, we think about them as means of transportation, insurable items, work-related resources, product offerings, etc. Moreover, we often conceive these ‘contexts’ as relational ones: marriages, employments, enrollments, and presidential mandates are themselves concrete ‘object-like’ entities that define a scope in which ordinary objects play complementary roles interacting with each other”. This view is also defended by other authors such as [4], who go as far as to claim that “[r]oles are useful not only to model domains that include institutions and organizations. Rather, every object can be considered as an institution or an organization structured in roles”.

The proposal advanced here is, thus, aimed at conceptual models in business (organizational, social and legal) domains, which form the bulk of the Information Systems discipline. For models that are centered on taxonomic relations (e.g., product types, biological taxonomies), we recommend alternative complexity management techniques, in particular, the static ontological views proposed in [6]. In fact, this paper can be seen as a companion to [6] and [13] in a general research program of defining ontology-driven complexity management theories, techniques and tools. While in these two papers the focus is on model recoding with ontology-design patterns, and on model abstraction, respectively, here we propose the notion of relationship-centric conceptual model modularization (or clustering).

¹ There is a long debate in philosophy regarding the ontological neutrality (or lack thereof) of formal languages. We simply mean here that they commit to a simple ontology of formal structures (e.g., that of set theory) in which sorts of types and relations are undifferentiated.
The contributions of this paper are two-fold: (i) firstly, we propose a formalization of the notion of Relational Context by leveraging the theory of relators from UFO/OntoUML; we then use this notion to propose a strategy for relationship-centric modularization termed Relator-Centric Clustering; (ii) secondly, we provide an implementation of this strategy integrated in the OntoUML toolset.

The remainder of the paper is organized as follows. Section 2 positions our work in reference to related efforts; Sect. 3 briefly presents the OntoUML language and some of the ontological notions underlying it; Sect. 4 presents the contributions of this paper. Firstly, it defines the notions of Ontological Views, Relational Contexts, and Modular Breakdown. This is done both formally, in terms of a precise definition of these notions, as well as intuitively, by making use of a running example in the domain of Car Rental. In addition, we report on an implementation of this approach as a plug-in to a model-based OntoUML editor. Finally, Sect. 5 presents some conclusions of the presented approach and some intended directions for future work.

2 Complexity Management of Conceptual Models

The discipline of complexity management of large conceptual models (henceforth CM-CM) has been around for quite some time and has been represented in the literature by a series of different approaches and techniques. In fact, [28] claims that “one of the most challenging and long-standing goals in conceptual modeling... is to understand, comprehend and work with very large conceptual schemas”. The challenge and importance of this discipline lie in the following. On the one hand, real information systems often have large and extremely complex conceptual models [28]. On the other hand, this complexity poses a serious additional challenge to the comprehension and, consequently, the quality assurance of these models. For example, [21] reports on an empirical study conducted with a large and professionally constructed conceptual model.² In that study, the authors managed to show that the model contained 879 occurrences of error-prone structures (anti-patterns), 52.56% of which really introduced representation errors according to the creators of the model.

According to [28], the methods for CM-CM can be classified into three areas, namely, Clustering Methods, Relevance Methods, and Summarization Methods. Clustering is about classifying the elements of a conceptual model into groups, or clusters, according to some criteria (e.g., a similarity function); Relevance Methods are about the application of ranking functions to the elements of a model in order to obtain ordered lists (i.e., a ranking) of model elements according to their perceived relevance in representing the domain at hand; finally, Model Summarization is about producing from an original model a reduced version consisting only of the elements that are judged to be of more relevance for representing the domain at hand.

In clustering methods, the goal is to break down a model in fragments such that the sum of these fragments is informationally equivalent to the whole (i.e., to the original model). In contrast, relevance and summarization methods (including model abstraction) aim to produce partial views of the original model at hand.
In other words, while clustering methods rely on lossless model transformations, the latter classes of methods are based on lossy transformations. A drawback that is common to the majority of existing methods in all these classes is that, since they are based on classic conceptual modeling notations (e.g., UML, ER) [28], they are constrained to rely almost exclusively on syntactic (mainly topological) properties of the addressed models. These properties include closeness (a quantitative evaluation of the links among elements in the model) [8], hierarchical distance (length of the shortest relationship path between entities), structural-connective distance (elements are considered closer if they are neighbors in a mereological or subtyping hierarchy), or category distance (elements are considered to be closer if one subtypes the other) [1]. For example, [5] proposes a (relevance) method based on the assumption that the number of attributes and relations characterizing an element in a model can be used as a (heuristic) measure of its relevance for that model. In the same spirit, [24,25] go as far as proposing PageRank-style algorithms to infer the relevance of elements in entity-relationship diagrams and RDF schemas (even ignoring the difference between association and subtyping relations). The problem with relying solely on these properties is that there is no guarantee that a model element satisfying some topological requirement (e.g., a node with more edges connected to it) by necessity represents the model’s most important concepts. This is related to the work of [19,20], who, while criticizing existing CM-CM methods, refer to this as a lack of cognitive justification.

The method proposed here is a type of clustering method. However, in contrast with all the aforementioned approaches, our proposal focuses mainly on the ontological semantics [10] of the elements represented in a conceptual model. As previously discussed, the idea is to use a formal and ontological notion of Relational Context (see Sect. 4) as a clustering mechanism. Relational Contexts are built from a focal reified relationship (relator), extrapolating from there on to the different roles played by entities in the scope of that relationship, the kinds defining the essential properties and identity principle characterizing the entities playing these roles, among other aspects. This approach (detailed in Sect. 4) is only made possible because it is based on a non-classical CM language, namely, the ODCM language OntoUML (briefly presented in Sect. 3).

There are three CM-CM methods in the literature that are based on the same language, namely, the approaches of (i) [6], (ii) [13], and (iii) [16,17]. The first method is the one that is closest to the work presented here, since it is also a clustering method and, hence, a lossless approach. What is presented there is an approach for what the authors name Model Recoding. The method takes a conceptual model and produces a series of views constituted by ontological design patterns centered around general (as opposed to model-specific) ontological constructs. So, for example, it groups all the kinds of things in the model in one view, all the roles played by things in a relational context in another view, etc. So, instead of breaking down the model into clusters that correspond to what one could intuitively call sub-domains, that approach breaks down the model in terms of general ontological categories. In contrast, the approaches of [13] and [16,17] differ from the approach presented here since these are approaches for model summarization and, hence, lossy approaches. Finally, [16,17] also differs from our approach since it requires user input in selecting a set of entities in the model that are of particular relevance. Our approach, instead, is a fully automated one, which we argue is an important feature in methods dealing with large-scale models.

² This model consisted of 3,800 classes, 61 datatypes, 1,918 associations, 3,616 subtyping relations, 698 generalization sets, 865 attributes, i.e., navigable association ends [21].

3 A Whirlwind Introduction to UFO and OntoUML

OntoUML is a language whose meta-model has been designed to comply with the ontological distinctions and axiomatization of a theoretically well-grounded foundational ontology named UFO (Unified Foundational Ontology) [10,15]. UFO is an axiomatic formal theory based on contributions from Formal Ontology in Philosophy, Philosophical Logic, Cognitive Psychology, and Linguistics. A recent study shows that UFO is the second-most used foundational ontology in conceptual modeling and the one with the fastest adoption rate [26]. That study also shows that OntoUML is among the most used languages in ontology-driven conceptual modeling.

In the sequel, we briefly explain a selected subset of the ontological distinctions put forth by the Unified Foundational Ontology (UFO). We also show how these distinctions are represented by the modeling primitives of OntoUML (as a UML profile). For an in-depth discussion, philosophical justifications, formal characterization and empirical support for these categories, one should refer to [9,10].

Take a domain in reality restricted to endurants [10] (as opposed to events or occurrents). Central to this domain we will have a number of object Kinds, i.e., the genuine fundamental types of objects that exist in this domain. The term “kind” is meant here in a strong technical sense, i.e., by a kind, we mean a type capturing essential properties of the things it classifies. In other words, the objects classified by that kind could not possibly exist without being of that specific kind. Kinds tessellate the possible space of objects in that domain, i.e., all objects belong to exactly one kind and do so necessarily. Typical examples of kinds include Person, Organization, and Car (see Fig. 1; stereotypes reflect the correspondence between the UML profile and UFO³).

We can, however, have other static subdivisions (or subtypes) of a kind. These are naturally termed Subkinds. As an example, the kind ‘Person’ can be specialized in the subkinds ‘Man’ and ‘Woman’ (Fig. 1). Object kinds and subkinds represent essential properties of objects (they are also termed rigid or static types [10]). We have, however, types that represent contingent or accidental properties of objects (termed anti-rigid types [10]).
These include Phases (for example, in the way that ‘being a living person’ captures a cluster of contingent intrinsic properties of a person, or in the way that ‘being a puppy’ captures a cluster of contingent intrinsic properties of a dog) and Roles (for example, in the way that ‘being a husband’ captures a cluster of contingent relational properties of a man participating in a marriage, or that ‘being a rental car’ captures contingent relational properties of a car participating in a car rental, see Fig. 1). In other words, the difference between the contingent properties represented by a phase and a role is the following: phases represent properties that are intrinsic to entities (e.g., ‘being a puppy’ is being a dog that is in a particular developmental phase; ‘being a living person’ is being a person who has the intrinsic property of being alive; ‘being an available car’ is being a car that is functional and, hence, can be rented); roles, in contrast, represent properties that entities have in a relational context, i.e., contingent relational properties (e.g., ‘being a husband’ is to bear a number of commitments and claims towards a spouse in the scope of a marital relationship; ‘being a student’ is to bear a number of properties in the scope of an enrollment relationship with an educational institution).

Kinds, Subkinds, Phases, and Roles are categories of object Sortals. In the philosophical literature, a sortal is a type that provides a uniform principle of identity, persistence, and individuation for its instances [10]. To put it simply, a sortal is either a kind (e.g., ‘Person’) or a specialization of a kind (e.g., ‘Student’, ‘Teenager’, ‘Woman’), i.e., it is either a type representing the essence of what things are or a sub-classification applied to the entities that “have that same type of essence”.

Relators (or relationships in a particular technical sense [9]) represent clusters of relational properties that “hang together” by a nexus (provided by a relator kind). Moreover, relators (e.g., marriages, enrollments, presidential mandates, citizenships, but also car rentals, employments, and car ownerships, see Fig. 1) are full-fledged Endurants. In other words, they are entities that endure in time bearing their own essential and accidental properties and, hence, first-class entities that can change in a qualitative manner while maintaining their identity. As discussed in depth in [9], relators are the truth-makers of relational propositions, and relations (as classes of n-tuples) can be completely derived from relators [10]. For instance, it is ‘the marriage’ (as a complex relator composed of mutual commitments and claims) between ‘John’ and ‘Mary’ that makes true the proposition that “John is the husband of Mary”. Relators are existentially dependent entities (e.g., the marriage between John and Mary can only exist if John and Mary exist) that bind together entities (their relata) by the so-called mediation relations, a particular type of existential dependence relation [10].

³ The model of Fig. 1 is used here for illustration purposes only, as it is a much simplified version of a proper model in this domain. For example, in a more realistic model, we would have cases of “relators mediating relators” (e.g., a car rental mediating a car ownership and an employment). The example avoids these for the sake of space limitations. Our formal definition of RCC (see Sect. 4.7), however, has no such limitation, thus addressing these cases that result in nested contexts (i.e., contexts including other contexts).
As discussed in depth in [9], like in the MERODE approach [22] (but here for ontological reasons), all domain relations in business models (the so-called material relations) can be represented exclusively by employing relators and these existential dependence relations (mediation). Objects participate in relationships (relators) playing certain “roles”. For instance, people play the role of spouse in a marriage relationship; a person plays the role of president in a presidential mandate; a car plays the role of a rental car in the scope of a car rental, see Fig. 1. ‘Spouse’ and ‘President’ (but also, typically, student, teacher, pet) are examples of what we technically term a role in UFO, i.e., a relational contingent sortal (since these roles can only be played by entities of a unique given kind). There are, however, relational and contingent role-like types that can be played by entities of multiple kinds. An example is the role ‘Customer’ (which can be played by both people and organizations), see Fig. 1. We call these role-like types that classify entities of multiple kinds RoleMixins.
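The UFO distinctions just surveyed can be made concrete by tagging the types of the running Car Rental example with their stereotypes. The following Python sketch is purely illustrative: the type names follow Fig. 1, while the dictionary encoding and the helper function are our own assumptions, not part of the paper's formalization (which only starts in Sect. 4).

```python
# A minimal, hypothetical encoding of (part of) the Fig. 1 Car Rental model:
# each type is tagged with its OntoUML stereotype.
STEREOTYPES = {
    "Person": "kind", "Organization": "kind", "Car": "kind",
    "Man": "subkind", "Woman": "subkind",
    "Child": "phase", "Teenager": "phase", "Adult": "phase",
    "Living Person": "phase", "Deceased Person": "phase",
    "Available Car": "phase", "Under Maintenance Car": "phase",
    "Personal Customer": "role", "Corporate Customer": "role",
    "Rental Car": "role", "Car Agency": "role",
    "Customer": "roleMixin",
    "Car Rental": "relator",
}

RIGID = {"kind", "subkind", "relator"}        # essential (static) types
ANTI_RIGID = {"phase", "role", "roleMixin"}   # contingent types

def is_sortal(t: str) -> bool:
    """Kinds, subkinds, phases and roles are object sortals;
    rolemixins (non-sortals) and relators are not in this category."""
    return STEREOTYPES[t] in {"kind", "subkind", "phase", "role"}

print(is_sortal("Personal Customer"))  # a role, hence a sortal
print(is_sortal("Customer"))           # a rolemixin, hence a non-sortal
```

The rigid/anti-rigid split mirrors the essential-versus-contingent distinction discussed above; the `is_sortal` predicate anticipates the sortal/non-sortal case analysis used by the view definitions of Sect. 4.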


Fig. 1. A conceptual model in OntoUML in which relators are highlighted.

4 Views, Relational Contexts, and Relator-Centric Clustering

In this section, we present a formal definition of our structure of ontological views, which are then used to formally define our notion of Relational Context (RC) and of Relator-Centric Clustering (RCC). Built over UFO’s distinctions and for the OntoUML language, the approach presented here proposes rules to extract modules (clusters) from a conceptual model expressed in OntoUML.

4.1 Basic Definitions

Let a Model M be a graph defined such that M = ⟨Θ, Σ, Φ⟩, where Θ = {C1..Cn} is the (non-empty) set of concepts in the model M, Σ = {r1..rn} is the set of directed relations in the model, and Φ = {gs1..gsn} is the set of Generalization Sets in the model. Let CT (Concept Type), RT (Relation Type) and GST be domains of types such that CT = {SORTAL, NON-SORTAL, KIND, SUBKIND, PHASE, ROLE, ROLEMIXIN, RELATOR}, RT = {MEDIATION, SUBTYPING}, and GST = {PHASE-PARTITION, SUBKIND-PARTITION}. Now, let < be a partial order relation defined on CT in the following way, reflecting the specializations in the taxonomy of types in UFO: KIND < SORTAL, SUBKIND < SORTAL, ROLE < SORTAL, PHASE < SORTAL, ROLEMIXIN < NON-SORTAL. Finally, we define a number of auxiliary functions:

– C(M) is a function that maps a model M to its associated set Θ;
– R(M) is a function that maps a model M to its associated set Σ;
– GS(M) is a function that maps a model M to its associated set Φ;
– E HasType T is a relation connecting an element E to a type T in the following manner: if E is a concept, then T ∈ CT; if E is a relation, then T ∈ RT; and if E is a generalization set, then T ∈ GST. We should also add that for any two types T and T′ such that T < T′, if E HasType T then E HasType T′;
– t(r) is a function that maps a relation r to the target (destination) of that directed relation;
– s(r) is the complementary function that maps a relation r to the source (origin) of that directed relation;
– r ∈ gs connects a relation r with a generalization set gs such that r HasType SUBTYPING and: if gs HasType PHASE-PARTITION then s(r) HasType PHASE; if gs HasType SUBKIND-PARTITION then s(r) HasType SUBKIND. Moreover, for any two relations r1 and r2 such that r1 ∈ gs and r2 ∈ gs, we have that t(r1) = t(r2).

As expected, for every model M and every relation r ∈ R(M), we have that both s(r) ∈ C(M) and t(r) ∈ C(M). Moreover, every generalization set gs ∈ GS(M) is such that r ∈ gs implies that r ∈ R(M).

For example, let M be the model depicted in Fig. 1. Then, C(M) amounts to exactly the types represented there, while R(M) includes all the mediation and UML subtyping relations. Finally, GS(M) amounts to the generalization sets: gender (a subkind partition comprising the subtyping relations connecting Man to Person, and Woman to Person); developmental status (a phase partition comprising the subtyping relations connecting Child to Person, Teenager to Person, and Adult to Person); Life Status (a phase partition comprising subtyping relations connecting Living Person to Person, and Deceased Person to Person); and Operational Status (a phase partition comprising subtyping relations connecting Available Car to Car, and Under Maintenance Car to Car).
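These definitions translate almost directly into code. The sketch below is a minimal, assumed encoding (the names `Model`, `Relation`, and `has_type` are ours, not the paper's): a model is a triple of concepts, relations, and generalization sets, and the `HasType` check is closed over the partial order < on concept types.

```python
# Assumed encoding of M = ⟨Θ, Σ, Φ⟩ and of the partial order < on CT.
from dataclasses import dataclass, field

CT_SUPER = {  # T < T' pairs from the UFO taxonomy of types
    "KIND": {"SORTAL"}, "SUBKIND": {"SORTAL"},
    "ROLE": {"SORTAL"}, "PHASE": {"SORTAL"},
    "ROLEMIXIN": {"NON-SORTAL"},
}

def has_type(element_type: str, t: str) -> bool:
    """E HasType T, closed under the partial order < on concept types."""
    return t == element_type or t in CT_SUPER.get(element_type, set())

@dataclass
class Relation:
    source: str   # s(r)
    target: str   # t(r)
    rtype: str    # MEDIATION or SUBTYPING

@dataclass
class Model:
    concepts: dict          # Θ: concept name -> concept type
    relations: list         # Σ: list of Relation
    gen_sets: dict = field(default_factory=dict)  # Φ: gs name -> relations

print(has_type("ROLE", "SORTAL"))       # ROLE < SORTAL
print(has_type("ROLEMIXIN", "SORTAL"))  # rolemixins are non-sortals
```

Encoding the order by listing each type's supertypes keeps the `HasType` closure a constant-time lookup, which suffices because the UFO taxonomy here is only one level deep.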

4.2 Direct Subtyping and (Indirect) Subtyping

Let the functions ST(C, C′) (symbolizing that C is a direct subtype of C′), ST∗(C, C′) (symbolizing that C is a subtype of C′) and IST∗(C, C′) (symbolizing that C is an improper subtype of C′) be defined as follows:

– ST(C, C′) iff there is an r such that r HasType SUBTYPING and s(r) = C and t(r) = C′;
– ST∗(C, C′) iff ST(C, C′) or there is a C′′ such that ST(C, C′′) and ST∗(C′′, C′); and,
– IST∗(C, C′) iff ST∗(C, C′) or C = C′.

We also define the following auxiliary function:

– K(C), mapping a sortal C to its unique supertyping KIND, i.e., we have that K(C) = C′ iff C′ HasType KIND and IST∗(C, C′). (Notice that if C is a KIND, then C = C′.)

Again, using the model M of Fig. 1 as an example, we have that, for instance, K(CarAgency) = Organization and K(PersonalCustomer) = Person.
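The three subtyping predicates and K reduce to a transitive-closure walk over direct-subtyping edges. The sketch below hard-codes a few edges of Fig. 1 for illustration; the edge table and function names are our own assumptions.

```python
# Assumed edge table: ST(C, C') pairs from the Fig. 1 example,
# restricted to single-supertype chains for brevity.
DIRECT_SUPER = {
    "Personal Customer": "Adult",
    "Adult": "Living Person",
    "Living Person": "Person",
    "Corporate Customer": "Organization",
    "Car Agency": "Organization",
}
KINDS = {"Person", "Organization", "Car"}

def st_star(c, c2):
    """ST*(C, C'): transitive closure of direct subtyping."""
    parent = DIRECT_SUPER.get(c)
    return parent is not None and (parent == c2 or st_star(parent, c2))

def ist_star(c, c2):
    """IST*(C, C'): improper subtyping, i.e., it also allows C = C'."""
    return c == c2 or st_star(c, c2)

def kind_of(c):
    """K(C): the unique kind that the sortal C (improperly) specializes."""
    return next(k for k in KINDS if ist_star(c, k))

print(kind_of("Personal Customer"))  # Person
print(kind_of("Car Agency"))         # Organization
```

Because every sortal specializes exactly one kind (kinds tessellate the object space), `kind_of` is well defined: exactly one element of `KINDS` satisfies the test.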

4.3 View

Let M and M′ be models as previously defined. It follows that M is a view of M′ (symbolized as V(M, M′)) iff:

– C(M) ⊆ C(M′) and
– R(M) ⊆ R(M′) and
– GS(M) ⊆ GS(M′).

Notice that, given our definition of a model, we have that all r ∈ R(M) are such that s(r) ∈ C(M) and t(r) ∈ C(M), but also that for all gs ∈ GS(M) and r ∈ gs, we have that r ∈ R(M). In other words, M is necessarily an original subgraph of M′. The views we are ultimately interested in are the so-called Relational Contexts (RC), which will be defined in Subsect. 4.6. Nevertheless, before we reach that, we need to establish two types of auxiliary views: Sortal Identity Paths and Non-Sortal Identity Paths. They are used later to support the definition of Relational Contexts.
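The view check itself is plain subset testing. In this minimal sketch (our own encoding, with models as triples of frozensets of concept names and edge tuples), the well-formedness of M itself, i.e., that every relation's endpoints lie in C(M), is taken as given, and only the three inclusions are tested.

```python
# V(M, M'): M is a view of M' iff its concepts, relations and
# generalization sets are all subsets of M''s (an original subgraph).
def is_view(m, m_prime):
    concepts, relations, gen_sets = m
    c2, r2, g2 = m_prime
    return concepts <= c2 and relations <= r2 and gen_sets <= g2

full = (frozenset({"Person", "Adult", "Customer"}),
        frozenset({("Adult", "Person", "SUBTYPING")}),
        frozenset())
view = (frozenset({"Person", "Adult"}),
        frozenset({("Adult", "Person", "SUBTYPING")}),
        frozenset())

print(is_view(view, full))  # every component is included
print(is_view(full, view))  # fails: "Customer" is outside the view
```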

4.4 Sortal Identity Path

We define that a view M is a Sortal Identity Path of M′ based on a focus type c (symbolized as SIP(M, M′, c)), where c HasType SORTAL, iff:

– V(M, M′), and
– c′ ∈ C(M) iff IST*(c, c′) and IST*(c′, K(c)), and
– r ∈ R(M) iff r HasType SUBTYPING and s(r) ∈ C(M) and t(r) ∈ C(M).

SIP is a generic, parameterizable view definition that, given a sortal type c, yields a view including that type and all its supertypes (if any) up to its corresponding kind. Taking the model of Fig. 1 and picking, for instance, Personal Customer as focus type, the corresponding SIP is constituted by Personal Customer and the types that generalize it, i.e., Adult, Living Person, and, finally, Person. Later, we use SIP to determine which supertypes should be included in a Relational Context, namely those that reveal the nature of the entities in the context.
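The concept condition above can be sketched directly: collect every c′ with IST*(c, c′) and IST*(c′, K(c)). The model fragment is hypothetical (it assumes Personal Customer directly specializes both Adult and Living Person, one possible reading of Fig. 1):

```python
SUBTYPES = {  # child type -> direct supertypes (hypothetical fragment)
    "Personal Customer": ["Adult", "Living Person"],
    "Adult": ["Person"],
    "Living Person": ["Person"],
}
KINDS = {"Person"}

def supertypes(c):
    """All c2 with IST*(c, c2): c itself plus its transitive supertypes."""
    out = {c}
    for p in SUBTYPES.get(c, []):
        out |= supertypes(p)
    return out

def sip_concepts(focus):
    """Concepts of SIP(M, M', focus): the path from focus up to its kind K(focus)."""
    kind = next(t for t in supertypes(focus) if t in KINDS)  # K(focus)
    return {c for c in supertypes(focus) if kind in supertypes(c)}
```

For the focus Personal Customer, the sketch returns the four types named in the example: Personal Customer, Adult, Living Person, and Person.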

4.5 Non-Sortal Identity Path

We define that a view M is a Non-Sortal Identity Path of M′ based on a focus type c (symbolized as NSIP(M, M′, c)), where c HasType NON-SORTAL, iff:

– V(M, M′), and
– c′ ∈ C(M) iff IST*(c′, c) or (there is a c′′ such that IST*(c′′, c) and IST*(c′′, c′) and IST*(c′, K(c′′))), and
– r ∈ R(M) iff r HasType SUBTYPING and s(r) ∈ C(M) and t(r) ∈ C(M).

Relational Contexts and Conceptual Model Clustering


The intention of the NSIP can be explained as follows. Take a non-sortal type c in the model M′; this view should include: (i) c itself and all its non-sortal subtypes; (ii) the first sortals specializing c, as well as the path from each such sortal to the unique kind providing its identity principle [10]. Taking the model of Fig. 1 and picking, for instance, Customer as focus type, the corresponding NSIP contains, besides the rolemixin Customer, the sortals that immediately specialize it (the roles Personal Customer and Corporate Customer), as well as the supertypes of each of these sortals that are in the path between them and their kinds (Person and Organization, respectively, in this case).
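This intuition can also be sketched in code: gather the (improper) subtypes of the non-sortal focus and, for each sortal among them, add its path up to its kind. The subtyping fragment is hypothetical (Corporate Customer is wired directly to the kind Organization for brevity):

```python
SUBTYPES = {  # child type -> direct supertypes (hypothetical fragment)
    "Personal Customer": ["Adult", "Customer"],
    "Corporate Customer": ["Customer", "Organization"],
    "Adult": ["Person"],
}
KINDS = {"Person", "Organization"}
UNIVERSE = {"Customer", "Personal Customer", "Corporate Customer",
            "Adult", "Person", "Organization"}

def supertypes(c):
    out = {c}
    for p in SUBTYPES.get(c, []):
        out |= supertypes(p)
    return out

def kind_of(c):
    """K(c) when c is a sortal; None for non-sortals, which have no kind."""
    kinds = [t for t in supertypes(c) if t in KINDS]
    return kinds[0] if len(kinds) == 1 else None

def nsip_concepts(focus):
    below = {c for c in UNIVERSE if focus in supertypes(c)}  # IST*(c, focus)
    result = set(below)
    for c in below:
        kind = kind_of(c)
        if kind is not None:  # sortal subtype: add its path up to its kind
            result |= {c2 for c2 in supertypes(c) if kind in supertypes(c2)}
    return result
```

For the focus Customer, the sketch yields Customer, Personal Customer, Corporate Customer, Adult, Person, and Organization, matching the example discussed above (minus Living Person, which this simplified fragment omits).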

4.6 Relational Context

We define that M is a Relational Context of M′ with focus on a relator type rel, where rel HasType RELATOR (symbolized as RC(M, M′, rel)), iff the following conditions are satisfied:

– V(M, M′);
– c ∈ C(M) iff:
  • c = rel, or
  • there is an r ∈ R(M) such that t(r) = c, or
  • there is a view M′′ and a c′ ∈ C(M) such that (SIP(M′′, M′, c′) or NSIP(M′′, M′, c′)) and c ∈ C(M′′), or
  • there is a gs ∈ GS(M) and an r ∈ gs such that s(r) = c;
– r ∈ R(M) iff:
  • (r HasType MEDIATION and s(r) ∈ C(M)), or
  • (r HasType SUBTYPING and s(r) ∈ C(M) and (t(r) HasType RELATOR or t(r) ∈ C(M))), or
  • there is a gs ∈ GS(M) such that r ∈ gs;
– gs ∈ GS(M) iff:
  • gs HasType PHASE-PARTITION and there is an r such that r ∈ gs and r ∈ R(M), or
  • gs HasType SUBKIND-PARTITION and, for all r such that r ∈ gs, r ∈ R(M).

Now, this definition can benefit from some unpacking. The Relational Context (RC) starts by (naturally) including the focal relator rel (c = rel). In addition, it includes all types that are connected by that relator via MEDIATION relations (henceforth, mediated types) ((r ∈ R(M) and t(r) = c) and (r HasType MEDIATION and s(r) ∈ C(M))). For example, if we take the relator Car Rental as focus, the corresponding RC also includes the types of entities that are bound by instances of Car Rental in that context, i.e., Customer and Rental Car. Furthermore, this RC should include all the types going from these mediated types to their respective kinds. The rationale here is that, in order to understand the nature of the entities connected by instances of the relator at hand, one must understand what kinds of things those entities essentially are,



i.e., what sort of principle of identity they obey. In case any of these mediated types c′ is a sortal, the RC will include all types in its SIP (c ∈ C(M) and there is a view M′′ such that SIP(M′′, M′, c′) and c ∈ C(M′′)). So, in this example, for the sortal type Rental Car, it also includes the types Available Car and Car. In contrast, if any of the mediated types is a non-sortal, then the relational context will include all types in its NSIP (c ∈ C(M) and there is a view M′′ such that NSIP(M′′, M′, c′) and c ∈ C(M′′)). The rationale here is analogous. However, since different instances of a non-sortal might take their identities from different kinds, in order to understand that context, we need to include all the information in the identity path between that non-sortal mediated type and the relevant kinds. For instance, for a Car Rental Relational Context, we need to understand the notion of Customer and, in order to understand this notion, we have to understand the notions of Personal Customer and Corporate Customer. Finally, in order to understand the latter, we need to understand Organizations and, to understand the former, the notions of Adult, Living Person, and Person. After all, instances of Personal Customer are adult living people.

Besides the types in the SIP and NSIP of mediated types, the Relational Context should also include all types that appear in phase partitions standing in the path between a mediated type and its identity supplier (i.e., its associated kind). The idea is that these types offer a contrasting background that helps clarify the semantics of the types in these paths. For example, in the Car Rental context, in order to understand that personal customers must be living adults, it is important to understand that they cannot be the other alternatives for instances of Person, namely, living children, living teenagers, or deceased persons.
In particular, given the anti-rigidity of these types (phases), all instances of Living Person can cease to be so, thus becoming deceased people, in which case they can no longer play the role of Personal Customer. Formally, if one of the subtyping relations in a (N)SIP is part of a phase partition, then that phase-partition generalization set is included in the view (gs HasType PHASE-PARTITION and there is an r such that r ∈ gs and r ∈ R(M)). Additionally, all other types that share the common supertype in that generalization set are also included in the view (there is a gs ∈ GS(M) and an r ∈ gs such that s(r) = c), and so are all the subtyping relations in that same generalization set (r HasType SUBTYPING and t(r) ∈ C(M) and there is a gs ∈ GS(M) such that r ∈ gs). Notice that subkind partitions are only included (a posteriori) if all subtyping relations comprising them are already included in the view (e.g., gender in an RC with Car Rental as the focus). Furthermore, we include in a relational context all subtyping relations involving two types included in that view (r HasType SUBTYPING and s(r) ∈ C(M) and t(r) ∈ C(M)). Finally, we include all supertypes of relators already included in the view (r HasType SUBTYPING and s(r) ∈ C(M) and t(r) HasType RELATOR). This is because a subtype inherits all the properties of its supertypes, and thus, to understand the context of a sub-relator, we must understand the general notion (e.g., to understand ‘foreign marriage’ as a ‘marriage’ recognized abroad, we must understand ‘marriage’ as a relation binding spouses).




4.7 Relator-Centric Clustering

We are now in a position to define the notion of a Relator-Centric Clustering:

– RCC Definition: a Relator-Centric Clustering of a model M is a set of views, symbolized as RCC(M) = {M1..Mn}, such that for every Mi ∈ RCC(M) there is a type rel such that rel ∈ C(M) and RC(Mi, M, rel).

Figure 2 depicts the application of this notion of RCC to the model of Fig. 1. Here we represent each Relational Context using UML packages and name these packages after the homonymous focal relator. As one can observe, the original model can be broken down into four contexts, namely: the Car Rental, the Marriage, the Car Ownership, and the Employment contexts. Each of these modules contains a view of the original model with all the information required to understand that context. The Car Rental RC shows the roles (and role mixin) directly mediated by the Car Rental relator (Responsible Employee, Rental Car, Customer). The kinds involved are made explicit: Person, Car, and Organization (when playing the role of Corporate Customer). Important business rules the model imposes on a Car Rental are revealed: only an Adult (a Living Person) can rent a car, and only a car that is in the Available Car phase can be rented. A similar observation can be made for the Marriage RC, as it reveals that the original model reflects a heteronormative setting, with gender as a static classification. Finally, the Car Ownership and the Employment RCs are examples of simpler views, as the path from directly mediated entities to the involved kinds is short.

We implemented this approach for relational context identification and relator-centric clustering in JavaScript, as a service within ontouml-js, an open-source library we have been developing for OntoUML.
Currently, this library supports programmatically manipulating OntoUML models, automatically verifying their syntax, and automatically transforming them into OWL specifications compatible with gUFO (the reference implementation of the Unified Foundational Ontology in OWL [3]). These services are then made available to end users via the OntoUML plugin for Visual Paradigm.
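The core of the clustering can be sketched as follows: one context per relator, holding the focal relator, its mediated types, and the identity path (SIP- or NSIP-style concepts) of each mediated type. This is a simplified illustration, not the ontouml-js implementation; the generalization-set rules of the full RC definition are deliberately omitted, and the model fragment is hypothetical.

```python
SUBTYPES = {  # child type -> direct supertypes (hypothetical fragment of Fig. 1)
    "Personal Customer": ["Adult", "Customer"],
    "Corporate Customer": ["Customer", "Organization"],
    "Adult": ["Person"],
    "Rental Car": ["Available Car"],
    "Available Car": ["Car"],
}
KINDS = {"Person", "Organization", "Car"}
NON_SORTALS = {"Customer"}
MEDIATIONS = {"Car Rental": ["Customer", "Rental Car"]}  # relator -> mediated types
UNIVERSE = set(KINDS) | NON_SORTALS | set(SUBTYPES)

def supertypes(c):
    out = {c}
    for p in SUBTYPES.get(c, []):
        out |= supertypes(p)
    return out

def kind_of(c):
    kinds = [t for t in supertypes(c) if t in KINDS]
    return kinds[0] if len(kinds) == 1 else None

def path_to_kind(c):
    """SIP-style concepts: c and its supertypes up to K(c)."""
    kind = kind_of(c)
    return {c2 for c2 in supertypes(c) if kind in supertypes(c2)}

def identity_path(c):
    if c in NON_SORTALS:  # NSIP-style: subtypes of c plus their paths to kinds
        below = {c2 for c2 in UNIVERSE if c in supertypes(c2)}
        out = set(below)
        for c2 in below:
            if kind_of(c2) is not None:
                out |= path_to_kind(c2)
        return out
    return path_to_kind(c)

def relational_context(relator):
    ctx = {relator}
    for mediated in MEDIATIONS.get(relator, []):
        ctx |= identity_path(mediated)
    return ctx

def rcc():
    """One context per relator, as in the RCC definition."""
    return {rel: relational_context(rel) for rel in MEDIATIONS}
```

On this fragment, the Car Rental context collects the relator, Customer with its identity paths (Personal Customer, Corporate Customer, Adult, Person, Organization), and the Rental Car path (Available Car, Car).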


5 Final Considerations

In this paper, we propose a formal approach to conceptual model clustering that leverages the ontologically well-founded semantics of the modeling language OntoUML. In particular, we rely on the theory of relators underlying OntoUML to present a full formal account of the notions of Relational Context (RC) and Relator-Centric Clustering (RCC). An RCC is a modular breakdown of a model in terms of a number of adequate RCs. Each RC, in turn, captures all the information needed to understand the maximal scope of objects in the way they participate

See source code at https://github.com/OntoUML/ontouml-js. See source code at https://github.com/OntoUML/ontouml-vp-plugin.



Fig. 2. An RCC for the model of Fig. 1 organized as (Onto)UML packages.

in certain relationships. The approach is formally characterized (a claim to formal precision) and is based on a well-founded ontological theory of relators (a claim to ontological adequacy).



Additionally, we have reported on a fully implemented plug-in tool for a model-based OntoUML editor that automates this approach (a claim to practical realizability). Following the formal characterization of this framework, the algorithm implementing it is deterministic (i.e., it generates the same RCC for a given model in every execution) and, in the worst possible case, it executes a total of (ne − nr) ∗ nr operations (where ne is the total number of model elements in the model and nr is the total number of classes stereotyped as relators). So, even in the worst possible case, the algorithm is tractable (a claim to computational efficiency and scalability). In practice, nr is on average circa 6% of ne (as observed by analyzing 54 OntoUML models in different domains in the OntoUML repository [21, 23]), and RCs are often largely disjoint, with minimal intersections only at the level of kinds. In other words, in practice, the algorithm will often execute approximately ne steps, as the different RCs tessellate the original model. Despite these encouraging results, we intend to subject the approach to a more comprehensive and systematic analysis and series of tests.

In [2], the authors present an approach for representing reified events (occurrences) as first-class citizens in structural conceptual models. In those models, events have their own properties and can form taxonomic and temporal ordering structures. Moreover, objects participate in these events playing a number of ‘processual roles’ (e.g., the roles of victim and perpetrator in a crime). As an extension of the approach presented here, we intend to characterize contexts and clusters centered around this notion of events.

The notion of Relational Context proposed here also bears a resemblance to the notion of Frames in C. J. Fillmore’s Frame Semantics [7]. In fact, we first considered using the term Ontological Frame (or Relational Frame) for this notion.
Frames, in that tradition, are patterns that describe situations, events, or relationships, and in which elements appear playing interconnected and mutually dependent (semantic) roles. However, unlike our approach, frames have the primary goal of providing a background structure for the interpretation of lexical terms. RCs, in contrast, have ontological transparency as their primary goal, focusing on connecting the entities playing complementary roles in the scope of bundles of relational properties (relators) to their identity-providing kinds.

Finally, in order to properly evaluate the cognitive effectiveness of these contributions, we are already in the process of designing a series of empirical studies. The core focus concerns speed and recall in obtaining information from the business conceptual model, as well as the naturalness to domain experts of the resulting breakdowns.

Acknowledgments. We are grateful to Ricardo A. Falbo (in memoriam) for the spark that led to this investigation. This research is partially funded by the NeXON Project (UNIBZ). J. P. Almeida is funded by CAPES (grant number 23038.028816/2016-41) and CNPq (grant numbers 312123/2017-5 and 407235/2017-5).



References

1. Akoka, J., Comyn-Wattiau, I.: Entity-relationship and object-oriented model automatic clustering. Data Knowl. Eng. 20(2), 87–117 (1996)
2. Almeida, J.P.A., Falbo, R.A., Guizzardi, G.: Events as entities in ontology-driven conceptual modeling. In: Laender, A.H.F., Pernici, B., Lim, E.-P., de Oliveira, J.P.M. (eds.) ER 2019. LNCS, vol. 11788, pp. 469–483. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33223-5_39
3. Almeida, J.P.A., Guizzardi, G., Falbo, R.A., Sales, T.P.: gUFO: a lightweight implementation of the Unified Foundational Ontology (UFO). http://purl.org/nemo/doc/gufo
4. Baldoni, M., Boella, G., van der Torre, L.: Interaction between objects in powerJava. J. Object Technol. 6(2), 5–30 (2007)
5. Castano, S., De Antonellis, V., Fugini, M.G., Pernici, B.: Conceptual schema analysis: techniques and applications. ACM Trans. Database Syst. 23(3), 286–333 (1998)
6. Figueiredo, G., et al.: Breaking into pieces: an ontological approach to conceptual model complexity management. In: Proceedings of the 12th IEEE RCIS, pp. 1–10 (2018)
7. Fillmore, C.J., et al.: Frame semantics. In: Cognitive Linguistics: Basic Readings, vol. 34 (2006)
8. Francalanci, C., Pernici, B.: Abstraction levels for entity-relationship schemas. In: Loucopoulos, P. (ed.) ER 1994. LNCS, vol. 881, pp. 456–473. Springer, Heidelberg (1994). https://doi.org/10.1007/3-540-58786-1_96
9. Guarino, N., Guizzardi, G.: “We need to discuss the relationship”: revisiting relationships as modeling constructs. In: Zdravkovic, J., Kirikova, M., Johannesson, P. (eds.) CAiSE 2015. LNCS, vol. 9097, pp. 279–294. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-19069-3_18
10. Guizzardi, G.: Ontological Foundations for Structural Conceptual Models. CTIT, Centre for Telematics and Information Technology (2005)
11. Guizzardi, G.: Ontological patterns, anti-patterns and pattern languages for next-generation conceptual modeling. In: Yu, E., Dobbie, G., Jarke, M., Purao, S. (eds.) ER 2014. LNCS, vol. 8824, pp. 13–27. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-12206-9_2
12. Guizzardi, G.: Objects and events in context. In: Proceedings of the 11th CONTEXT (2019)
13. Guizzardi, G., Figueiredo, G., Hedblom, M.M., Poels, G.: Ontology-based model abstraction. In: Proceedings of the 13th IEEE RCIS, pp. 1–13 (2019)
14. Guizzardi, G., et al.: The role of foundational ontologies for domain ontology engineering: an industrial case study in the domain of oil and gas exploration and production. Int. J. Inf. Syst. Model. Des. 1(2), 1–22 (2010)
15. Guizzardi, G., et al.: Towards ontological foundations for conceptual modeling: the Unified Foundational Ontology (UFO) story. Appl. Ontol. 10(3–4), 259–271 (2015)
16. Lozano, J., Carbonera, J.L., Abel, M.: A novel approach for extracting well-founded ontology views. In: JOWO@IJCAI (2015)
17. Lozano, J., et al.: Ontology view extraction: an approach based on ontological meta-properties. In: Proceedings of the 26th IEEE ICTAI, pp. 122–129 (2014)
18. Moody, D.: The physics of notations: toward a scientific basis for constructing visual notations in software engineering. IEEE Trans. Softw. Eng. 35(6), 756–779 (2009)



19. Moody, D.L., Flitman, A.: A methodology for clustering entity relationship models: a human information processing approach. In: Akoka, J., Bouzeghoub, M., Comyn-Wattiau, I., Métais, E. (eds.) ER 1999. LNCS, vol. 1728, pp. 114–130. Springer, Heidelberg (1999). https://doi.org/10.1007/3-540-47866-3_8
20. Moody, D.L., Flitman, A.R.: A decomposition method for entity relationship models: a systems theoretic approach. In: Proceedings of ICSTM2000, vol. 72 (2000)
21. Sales, T.P., Guizzardi, G.: Ontological anti-patterns: empirically uncovered error-prone structures in ontology-driven conceptual models. Data Knowl. Eng. 99, 72–104 (2015)
22. Snoeck, M.: Enterprise Information Systems Engineering: The MERODE Approach. TEES. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10145-3
23. Teixeira, M.: An Ontology-based Process for Domain-specific Visual Language Design. Federal University of Espírito Santo, Brazil/Ghent University, Belgium (2016)
24. Tzitzikas, Y., Hainaut, J.-L.: How to tame a very large ER diagram (using link analysis and force-directed drawing algorithms). In: Delcambre, L., Kop, C., Mayr, H.C., Mylopoulos, J., Pastor, O. (eds.) ER 2005. LNCS, vol. 3716, pp. 144–159. Springer, Heidelberg (2005). https://doi.org/10.1007/11568322_10
25. Tzitzikas, Y., Kotzinos, D., Theoharis, Y.: On ranking RDF schema elements (and its application in visualization). J. Univ. Comput. Sci. 13(12), 1854–1880 (2007)
26. Verdonck, M., Gailly, F.: Insights on the use and application of ontology and conceptual modeling languages in ontology-driven conceptual modeling. In: Comyn-Wattiau, I., Tanaka, K., Song, I.-Y., Yamamoto, S., Saeki, M. (eds.) ER 2016. LNCS, vol. 9974, pp. 83–97. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46397-1_7
27. Verdonck, M., et al.: Comparing traditional conceptual modeling with ontology-driven conceptual modeling: an empirical study. Inf. Syst. 81, 92–103 (2018)
28. Villegas Niño, A.: A Filtering Engine for Large Conceptual Schemas. Universitat Politècnica de Catalunya (2013)

A Reference Ontology of Money and Virtual Currencies

Glenda Amaral1(B), Tiago Prince Sales1, Giancarlo Guizzardi1, and Daniele Porello2

1 Conceptual and Cognitive Modeling Research Group (CORE), Free University of Bozen-Bolzano, Bolzano, Italy
{gmouraamaral,tiago.princesales,giancarlo.guizzardi}@unibz.it
2 ISTC-CNR Laboratory for Applied Ontology, Trento, Italy
[email protected]

Abstract. In recent years, there has been a growing interest, within the financial sector, in the adoption of ontology-based conceptual models to make the nature of conceptualizations explicit, as well as to safely establish the correct relations between them, thereby supporting semantic interoperability. Despite the wide number of efforts to create a unified view of the reality related to economic and financial domains, no sufficiently comprehensive formal model has been developed to, on the one hand, accurately describe the semantics of money and currencies and, on the other hand, differentiate them from virtual currencies, of which cryptocurrencies are the most significant representative. This research aims at tackling these questions by conducting an ontological analysis of money and related concepts, grounded in the Unified Foundational Ontology, based on a literature review of the most relevant economic theories, and considering recent innovations in the financial industry.

Keywords: Money · Currency · Virtual currency · Ontology · UFO · OntoUML

1 Introduction

It is a curious paradox that some entities are so ever-present in our daily life that we tend to be oblivious to the importance of the mechanisms that support their operation, as well as to the vital role they play in our lives. One example is breathing. We breathe all the time without even thinking about it, but when unexpected events occur, like the recent worldwide spread of a virus with the potential to threaten our respiratory capacity, we realize the importance of ensuring the proper functioning of our respiratory system. The same goes for money. Money permeates most aspects of life in modern societies; however, the infrastructures that support the monetary system “remain invisible as long as they operate and fulfill their functions. In case of accident, disruption or crisis, their breakdown makes them visible and raises concerns and questions about their operation” [31, p. 6]. The financial crisis of 2007–2008 was an urgent reminder of the importance of money and finance. On that occasion, many banks could not aggregate risk exposures quickly and accurately, which “had severe consequences to the banks themselves and to the stability of the financial system as a whole” [4, p. 1].

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved.
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 228–243, 2020. https://doi.org/10.1007/978-3-030-63479-7_16



Making sense of a plethora of information in a dynamic and complex environment is paramount not only in the aforementioned example, but also in many other activities carried out to ensure the proper functioning of the financial system, such as the formulation of monetary policy, the safeguarding of financial stability, and the maintenance of trust in the monetary system. Moreover, having a clear understanding of the ontological nature of these concepts is fundamental to understanding how the economy evolves in the face of innovations in the finance industry. This can be seen in the case of the advent of cryptocurrencies [13]. Despite their increasing popularization and the impacts they may have on the wider economy, research on this subject still lacks conceptual and semantic rigor, and the definition of a formal concept of cryptocurrencies and their relationship with money is still an open issue. Semantic interoperability is a fundamental aspect for a number of applications in this context in which, for example, values referred to in cryptocurrencies need to be integrated with values referred to in legal tender currencies. For example, in applications such as anti-money laundering, one must analyze information from multiple and heterogeneous sources to detect unusual patterns, such as large amounts of cash flow at certain periods, by particular groups of agents. Despite several efforts to create a unified view of our economic and financial reality [5, 12, 15], no sufficiently comprehensive formal model has been developed to accurately describe the semantics of money and currencies. In a previous work [3], we introduced an initial proposal for a money ontology, focusing on monetary objects and currencies. In this paper, we extend this ontology to cover monetary credit-related concepts, including electronic monetary credit, and improve the considerations on both exchange and purchasing power.
In addition, we characterize the concept of virtual currencies and differentiate them from legal tender money. As a result, we propose a concrete artifact, namely, a Reference Conceptual Model (Reference Ontology) of Money and Virtual Currencies, which is specified in OntoUML [17] and is thus compliant with the ontological commitments of the Unified Foundational Ontology (UFO) [17]. The remainder of this paper is organized as follows. First, in Sect. 2, we briefly introduce the reader to UFO and OntoUML. Then, in Sect. 3, we present some characteristics of money and virtual currencies, as discussed in the literature. In Sect. 4, we analyze the nature of money, currencies, and virtual currencies and present our proposal, the Reference Ontology of Money and Virtual Currencies. We then conclude the paper with some final remarks in Sect. 5.

2 The Unified Foundational Ontology (UFO)

The Unified Foundational Ontology is an axiomatic, domain-independent formal theory, developed by consistently putting together a number of theories originating from areas such as Formal Ontology in philosophy, cognitive science, linguistics, and philosophical logic. Other examples of foundational ontologies include DOLCE [6] and GFO [22]. UFO, however, was created with the specific purpose of providing foundations for conceptual modeling. For example, unlike these other ontologies, UFO includes a rich ontology of relations [16] and an expressive system of formal distinctions among types of universals [19]. Furthermore, it provides an ontological treatment of higher-order domain types and the multi-level structures involving them [17]. As we shall see,



all these aspects are needed for properly dealing with the topic of this paper. Finally, again unlike DOLCE and GFO, UFO is formally connected to a set of engineering tools, including a modeling language (OntoUML), as well as a number of methodological (e.g., patterns, anti-patterns) and computational tools [20].

UFO consists of three main parts: UFO-A [17], an ontology of endurants (roughly, objects); UFO-B [21], an ontology of perdurants (roughly, events and processes); and UFO-C [18], an ontology of social entities built on top of UFO-A and UFO-B. For an in-depth discussion and formalization, one should refer to [17, 20, 21]. In our proposal of a Reference Ontology of Money and Virtual Currencies, we rely mainly on some concepts defined in UFO-C. For this reason, in the remainder of this section, we focus our discussion on this ontology, briefly explaining a subset of its ontological distinctions that are relevant for our analysis.

A basic distinction in UFO-C is that between agents and (non-agentive) objects. An agent is a specialization of a substantial individual (an existentially independent object) that can be classified as physical (e.g., a person) or social (e.g., an organization, a society). Objects are non-agentive substantial individuals that can be further specialized into physical (e.g., a book) and social objects (e.g., a language). UFO-C defines a normative description as a social object that may define rules/norms recognized by at least one social agent, as well as social intrinsic and relational properties (e.g., social commitment types), social objects (e.g., the crown of the King of Spain), and social roles (e.g., president or pedestrian). Examples of normative descriptions include the Italian Constitution and a set of directives on how to perform some actions within an organization.
In the finance domain, the Treaty on the Functioning of the European Union [38] is an example of normative description, which defines euro banknotes and coins as legal tender money in the countries of the euro area. Over the years, UFO has been applied to analyze and (re)design a multitude of modelling languages and standards. One of these applications, however, stands out, namely the conceptual modelling language OntoUML [17, 20]. OntoUML is a version of UML class diagrams that has been designed such that its modelling primitives reflect the ontological distinctions put forth by UFO, and its grammatical constraints follow UFO axiomatization. We here employ OntoUML to model our proposed ontology [20].

3 On Money and Virtual Currencies

3.1 The Origins of Money

Different theories about the origin of money are reported in the literature, and to this day the topic remains a matter of debate. Regarding the emergence of money in society, two leading schools of thought present fundamentally different arguments about its origins. A classic theory, known as the commodity theory of money or the catallactic theory [39], was defended by many classical economists, such as Carl Menger [29], Georg Simmel [36], and Ludwig von Mises [39]. They claim that money is an institution that spontaneously evolved in society, from some commodities (such as tobacco [36], salt [36], and cattle [9, 36]) to the current stage of fiat money, which stands for any legal tender designated and issued by a central authority [34] and which cannot be redeemed for a commodity [41].



Alternatively, there are those who argue that money is a social construction [10, 23, 25, 26, 28], “an instrument representative of a debt owed by the state or even a token created and accepted by it as an instrument to pay taxes” [41, p. 12]. This view is known as chartalism or the state theory of money [26]. According to the chartalist school, money is what is stated in law [41]. In line with the state theory’s argument that money represents a debt owed by the state is the position defended by the credit theory of money (a.k.a. the debt theory of money), which states that money is merely a token of a credit/debt relationship [23, 27]. Questions about the commodity theory versus the state theory of money have been the subject of intense debate in the literature. In the current state of research, the state theory view seems to have stronger arguments than the commodity theory [9]. One of these arguments is that the value of the first metal coined money was too high for everyday consumption, so it is not plausible to think that it was intended to be used in exchanges between private individuals, while it makes sense to conclude that it was issued by city-states for administrative purposes [41]. Another notable argument of the state theorists is the difficulty the commodity theorists have in explaining the decreasing value of money over time (simply put, inflation) [10].

3.2 The Multiple Functions of Money

Although the format of money has changed considerably over time, its functions remain unchanged. From the wide number of definitions proposed in the economics literature, it is possible to deduce a consensus about three main functions, namely:

– medium of exchange: a means of payment with a value that everyone trusts. For example, the statement “I bought this shirt for 20 euros” (from [34]) refers to this function. Note that this also includes the ability to make payments that have nothing to do with buying anything, like taxes and donations.
– a unit of account: money acts as a standard numerical unit for the measurement of prices and costs of goods, services, assets, and liabilities. For example, the statement “My car is worth 10,000 euros” (from [34]) refers to this function.
– a store of value: “money can be saved and retrieved in the future” [13, p. 10]. For example, the statement “I have 1,000 euros in my bank account” (again from [34]) refers to this function.

It is generally accepted in the literature that money performs its functions in virtue of the collective recognition of a certain status that makes it valuable and guarantees its acceptability [10, 23, 25, 26, 28, 31, 35]. When this status is recognized for a certain object, the object acquires a function known as a status function, which “is not performed in virtue of the physical features of the person or object, but in virtue of the fact that a certain status has been assigned to the person or object” [34, p. 1457]. This function can be performed only in virtue of the collective acceptance or recognition of that status in the community in question. Status functions are created by a certain type of speech act that Searle [34, 35] terms a declaration, whereby “you make something the case by declaring it to be the case” [34, p. 1458]. According to Searle [34, p. 1455], “money always requires a declaration whereby some representation makes it the case that it is money”.

232

G. Amaral et al.

Currently, the status function of money is defined by law. The term legal tender refers to anything recognized by law that can be used to pay contractual debts. For example, a twenty-euros banknote fits this definition because it does have a definite status of being a twenty-euros banknote in Europe, as defined in the Treaty on the Functioning of the European Union [38]. People are willing to accept it in exchange for goods and services because they trust the monetary system that supports this status function. In [7], Castelfranchi and Falcone state that trust is the presupposition of money: originally, money relies on the trust of the individuals accepting a monetary item as an instrument to indirectly acquire a certain amount of desirable goods [41]. Trust is therefore a crucial element of every monetary system.

3.3 Currency

The Oxford English Dictionary [1] defines currency as “the system of money that a country uses”. Generally, the national government is the only party authorized to produce and distribute physical currency in its geographical area of control. The government also regulates the production of non-physical currency by banks through its monetary policy, usually implemented via the central bank. In some countries, alternative currencies are permissible (e.g., the Ethiopian Birr and the US dollar in Ethiopia), but only the nationally sponsored currency has the status of legal tender. In still other countries, a foreign-produced currency is both acceptable currency and legal tender (e.g., the US dollar in Ecuador). For example, in the countries of the euro area, only euro banknotes and coins are legal tender and therefore, by law, they must be accepted as payment for a debt within those countries. According to Article 128 of the Treaty on the Functioning of the European Union [38]: “The European Central Bank (ECB) shall have the exclusive right to authorise the issue of banknotes within the Union. The ECB and the national central banks may issue such notes. The banknotes issued by the ECB and the national central banks shall be the only such notes to have the status of legal tender within the Union”.

3.4 Virtual Currencies (VC)

The ECB defines virtual currency as “a digital representation of value, not issued by a central bank, credit institution or e-money institution, which in some circumstances can be used as an alternative to money” [14, p. 4]. We could go a bit further and include non-digital forms of virtual currency in this definition, such as tokens used in casinos. From the point of view of central banks and regulatory authorities, virtual currencies cannot at the moment be regarded as full forms of money [14]. From a legal perspective, they are not considered money either: so far no virtual currency has been declared the official currency of a state, nor does any have legal tender status backed by law. From an economic perspective, the virtual currencies currently known do not fully meet all three functions of money defined in the economic literature [14]. In some cases they “have a limited function as a medium of exchange because they have a very low level of acceptance among the general public” [14, p. 23]. In addition, due to the high volatility of their exchange rates to currencies, they are not considered suitable to be used as store of

A Reference Ontology of Money and Virtual Currencies

233

value. Lastly, “both the low level of acceptance and the high volatility of their exchange rates and thus purchasing power make them unsuitable as a unit of account” [14, p. 24].

However, virtual currencies are similar to money within their user community. They necessarily have their own rules and processes enabling the transfer of value, as well as their own payment systems [14]. These systems of rules and processes are called virtual currency schemes, and are organized into three categories:

1. closed virtual currencies, which have almost no link to the real economy, as they can only be spent on virtual goods and services offered within the virtual community and, at least in theory, cannot be traded outside it.
2. virtual currencies with unidirectional flows, in which “units can be purchased using real money at a specific exchange rate but cannot be exchanged back to the original currency” [14, p. 6], and trading with other users is not allowed. Examples are loyalty programmes such as airlines’ points programmes and Pokemon Go’s PokeCoins [37] (which can be bought using real money and exchanged for in-game items).
3. virtual currencies with bidirectional flows, in which units can be bought and sold according to (floating) exchange rates. Examples include cryptocurrencies [30], such as Bitcoin and Ethereum, to name but a few.
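The three scheme categories can be summarized as a simple classification. The following Python sketch is purely illustrative (the enum and function names are ours, not part of the ECB categorization); the example mappings come from the text above.

```python
from enum import Enum, auto

class VCScheme(Enum):
    """The three virtual currency scheme categories identified by the ECB [14]."""
    CLOSED = auto()          # almost no link to the real economy
    UNIDIRECTIONAL = auto()  # purchasable with real money, not convertible back
    BIDIRECTIONAL = auto()   # bought and sold at (floating) exchange rates

# Illustrative classification, using the examples given in the text.
examples = {
    "airline loyalty points": VCScheme.UNIDIRECTIONAL,
    "PokeCoins": VCScheme.UNIDIRECTIONAL,
    "Bitcoin": VCScheme.BIDIRECTIONAL,
    "Ethereum": VCScheme.BIDIRECTIONAL,
}

def convertible_back_to_currency(scheme: VCScheme) -> bool:
    """Only bidirectional-flow schemes can be exchanged back to legal tender."""
    return scheme is VCScheme.BIDIRECTIONAL
```

The single distinguishing predicate makes the key regulatory difference explicit: only the bidirectional category supports redeeming value back into a legal tender currency.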

4 The Ontology of Money and Virtual Currencies

4.1 Analysing Money, Currency and Related Concepts

In general, we are in line with the widespread position defended by several authors in the literature, who assume that money depends on the collective acceptance or recognition of its status as money [10, 23, 25, 26, 28, 31, 35]. This dependence is straightforward in the case of fiat money, but it also holds for commodity money, as it requires a status function “precisely to the extent that it is collectively recognized as money and not just as a commodity” [34, p. 1460].

In contemporary society the status function of money is constituted as legal tender by the law that creates it. For example, in Europe, the Treaty on the Functioning of the European Union [38] describes the status function that defines euro banknotes and coins as legal tender money in the countries of the euro area. Note that the law specifies both the currency and the objects that are considered legal tender in a particular country or region. It also defines a structure for the currency value domain. Examples of such structures are the one-dimensional structure of numbers with two decimal places defined for the euro, and the one-dimensional structure of integers defined for Paraguay’s Guarani [24].

According to the literature on the history of money, different types of objects have been used as money in all its manifestations, such as (i) tobacco and salt, used as commodity money; (ii) banknotes and paper certificates, used as commodity-backed money; and (iii) banknotes, coins and bank deposits in electronic format, used as fiat money. In our analysis we focus on the objects currently used as fiat money, such as banknotes and coins. We shall refer to these objects as monetary objects henceforth.

Every monetary object has a nominal value (also known as face value) denominated in the currency defined in the law that describes its status function. Only in exceptional cases in history (generally in times of crisis) has there been temporary reutilization of banknotes, “overstamped” with a nominal value different from the original one. For example, in 1986, in Brazil, the prevailing currency, the Cruzeiro, was replaced by a new currency, named Cruzado, at a rate of 1 Cruzado to 1000 Cruzeiros. For a short period of time, some denominations of Cruzeiro banknotes were “overstamped” with the equivalent nominal value denominated in Cruzados [8].

During their life cycle, monetary objects can be considered either valid or not valid. For example, new banknotes are not considered valid until they are released and put into public circulation. Likewise, damaged banknotes fulfilling certain criteria defined in law are not considered valid (for example, a euro banknote is not considered valid if 50% or less of the banknote is presented and there is no proof that the missing parts have been destroyed [11]). Naturally, only valid monetary objects can be exchanged for goods and services in the economy.

In modern economies, money emerges as a standard unit of account in which all other commodities express their exchange values. A valid monetary object has an exchange value that is equal to its nominal value, and an agent holding control of it is endowed with the capacity of making economic transactions in the amount corresponding to its exchange value. For example, a twenty-euros banknote has an exchange value of twenty euros. If the price of a Big Mac is five euros, an agent holding control of a valid twenty-euros banknote is capable of exchanging it for four Big Macs.

Money also presupposes the existence of a credit/debt relation [23, 27]. Monetary objects establish this relation between the agent holding control of them and the monetary authority (e.g. the central bank), which ultimately represents the State. As for bank deposits, they correspond to an electronic monetary credit denominated in a certain currency.
In this case, the credit/debt relation also involves the financial institution in charge of the bank account, as intermediary. Agents holding control of monetary objects or owning electronic monetary credits are endowed with the capacity of making economic transactions in the amount corresponding to the exchange value of the monetary object or the electronic monetary credit value, respectively. This capacity is closely related to the medium of exchange function of money. In this paper we name it exchange power.

Moving in this direction, if we consider the exchange power resulting from the total of electronic monetary credits and monetary objects controlled by an agent, we obtain a kind of aggregated exchange power that corresponds to the total value of the economic transactions the agent is capable of carrying out. As previously mentioned, goods and services have their prices expressed in terms of currencies. As the prices of goods and services can change, influenced by the economic environment and the dynamics of the system of prices, the purchasing power associated with these aggregated exchange powers also changes. The purchasing power describes the quantity of goods an amount of money can buy. It is related to the concepts of inflation and price indexes. The inflation rate measures an increase in the general price level, given by the variation in a price index during a period. When there is inflation, the purchasing power decreases: the exchange value of the transactions that the agent manages to carry out remains the same (and is equal to the aggregated exchange value), but the quantity of goods and services that she manages to get with that value varies, depending on the price of the commodities.

Let us consider an example in which an agent named Mary has twenty euros in her bank account and a ten-euros banknote in her wallet. In this case, she has an aggregated exchange power of thirty euros and is able to carry out economic transactions in the amount corresponding to this value. Considering that the price of a Big Mac is five euros, the purchasing power of Mary is equivalent to six Big Macs. If the Big Mac’s price rises to six euros, Mary’s aggregated exchange power remains the same, but her purchasing power is no longer the same, because now she is able to buy only five Big Macs.

It is worth mentioning that monetary objects can also be traded in the economy as regular commodities, like collectible items. For example, some rare banknotes are traded by banknote collectors at far more than their nominal (or face) value. Even valid banknotes in circulation can be traded as collectible items at a value above their face value. However, for the acquisition of goods and services in the economy, a banknote functions as a medium of exchange and will always be worth its face value.

Finally, another important aspect is money’s dependence on trust [2]. It is clearly recognized in the literature that trust is a crucial element for the well-functioning of any monetary system [7, 31, 34, 41]. A precondition for the system to work is trust that the monetary objects and credits will be generally accepted, as well as that both price and financial stability will be maintained.
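Mary's example can be restated as two small computations. The sketch below is our illustration (the function names are hypothetical, not part of the ontology); only the numbers come from the example in the text.

```python
def aggregated_exchange_power(electronic_credits, monetary_object_values):
    """Total of an agent's electronic monetary credits plus the exchange
    values of the valid monetary objects she controls."""
    return sum(electronic_credits) + sum(monetary_object_values)

def purchasing_power(exchange_power, unit_price):
    """Quantity of a good the agent can acquire at the given unit price."""
    return exchange_power // unit_price

# Mary: twenty euros in her bank account, a ten-euros banknote in her wallet.
mary_power = aggregated_exchange_power([20], [10])   # 30 euros

before = purchasing_power(mary_power, unit_price=5)  # 6 Big Macs
after = purchasing_power(mary_power, unit_price=6)   # after inflation: 5 Big Macs
```

Note that `mary_power` is unchanged by the price rise; only the purchasing power derived from it decreases, which is exactly the distinction the text draws between aggregated exchange power and purchasing power.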
Even in this day and age, in which the legal tender status of money is enforced by law, money depends on the trust of society in the monetary system, which guarantees that mechanisms, infrastructures and protective structures (such as laws, regulations, processes, procedures and government enforcement bodies) are in place to ensure that money is widely accepted, transactions take place, contracts are fulfilled and, above all, agents can count on that happening.

Nonetheless, as trust relations are highly dynamic [2], a decreasing level of trust in a particular monetary system can lead money to gradually lose its functions. When inflation rates are very high, money does not function as an effective store of value and people tend to spend it immediately rather than hold it [40]. Also, as prices start to rise rapidly, the function of money as a unit of account diminishes. Finally, inflation reduces the function of money as a medium of exchange. In situations of hyperinflation, people may abandon the use of one currency for a more stable one [40]. For example, in 2007, hyperinflation was so problematic in Zimbabwe that “people abandoned the Zimbabwean dollar, preferring to conduct transactions in U.S. dollars or South African rands. The Zimbabwean currency became nearly useless as money and was removed from circulation in 2009” [40, p. 2].

4.2 Similarities and Differences Between Money and VC

Virtual currencies have been the subject of intense policy debates; however, there is currently no international agreement on how they should be defined. In this section, we elaborate on the evidence that motivates us to advocate the position put forth by the European Central Bank [14], according to which virtual currencies are neither money nor legal tender currencies. In particular, we explore the roles of status function, legal tender status and trust in the conceptualization of both VC and money.


Status Function. Virtual currencies are similar to money in the sense that both have their value grounded on the collective recognition of a certain status that makes them valuable. In the case of money, this status function is defined by law. As for virtual currencies, it is part of their specifications and dedicated retail payment systems, also known as virtual currency schemes.

Legal Tender Status. According to the ECB [13, p. 5], “virtual currency schemes differ from electronic money schemes insofar as the currency being used as the unit of account has no physical counterpart with legal tender status”. In a virtual currency scheme, all digital representations of value map to “tokenised” representations of virtual currencies, which are not regulated by law. The lack of a legal framework leads to problems in redeeming funds, as the link between virtual currencies and currencies with legal tender status is not regulated by law [13].

Trust. Another similarity between money and virtual currencies is that both are dependent on trust. A precondition for the proper functioning of both the monetary and the VC system is trust that money and virtual currencies, respectively, will be accepted. While in the case of money this acceptance comprises the whole of society and trust includes the belief that both price and financial stability will be maintained, virtual currencies still have a limited level of acceptance among the general public, probably due to the high volatility of their exchange rates to currencies and to the “lack of a proper legal basis for virtual currency schemes” [13, p. 42]. As virtual currencies currently neither have legal tender capacity nor are backed by law, “users do not benefit from legal protection such as redeemability or a deposit guaranty scheme, and are more exposed to the various risks that regulation usually mitigates” [14, p. 21].

4.3 Representing the Ontology of Money in OntoUML

In this section, we present a well-founded ontology that formalizes the characterization of money and currency, as well as the embedded concepts and relations. In the OntoUML diagrams depicting this ontology, we adopt the following color coding: types are represented in purple, objects in pink, qualities and modes in blue, relators in green, and datatypes in white.

Figure 1 depicts the concept of Money Status Function Description as a type of Normative Description (a concept from UFO-C). The Money Status Function Description defines a Currency and the Monetary Object Types that have the status of money. For example, the “Treaty on the Functioning of the European Union” [38] is an example of a Money Status Function Description, which defines euro banknotes and coins as legal tender money in the countries of the euro area. In this case, “euro” is the Currency, while “euro banknote” and “euro coin” are Monetary Object Types. The Money Status Function Description also defines a Currency Quality Space Structure for the Currency Quality Space. The former corresponds to a Social Object (a concept from UFO-C) that prescribes a structure for the domain of values (e.g. numbers with two decimal places), while the latter corresponds to the value domain itself (see [17] for quality spaces).

In the ontology, Monetary Objects represent instances of Monetary Object Types. For example, a “twenty-euros banknote” is a Monetary Object and corresponds to an instance of the “euro banknote” Monetary Object Type, defined in the “Treaty on the Functioning of the European Union” [38]. The nominal value property corresponds to the nominal value stamped on the Monetary Object by the issuing authority. Valid Monetary Object and Not Valid Monetary Object represent two different phases of the Monetary Object’s life cycle. The distinction between “valid” and “not valid” allows for the representation of the life cycle of a monetary object. It is particularly important in the context of central banks, because they need to track monetary objects, such as banknotes, from the time they are printed until their destruction. For example, new banknotes are not considered valid until they are released and put into public circulation. The property exchange value is specific to Valid Monetary Objects, as only they can be exchanged for goods and services in the economy. The exchange value of a Valid Monetary Object is equal to its nominal value.

In UFO, properties can be directly evaluated (projected) into certain value spaces [17]. Both the exchange value and the nominal value of a Monetary Object are modeled as properties that have a value in a Currency Quality Space, which is structured according to a particular Currency Quality Space Structure. For example, the euro has a measurable value in a one-dimensional structure of numbers with two decimal places [24].

Fig. 1. Money and status function
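To make the Fig. 1 fragment concrete, here is a minimal Python rendering of it. The class and attribute names mirror the ontology's terms, but the code itself is our illustration, not part of the OntoUML model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Currency:
    name: str            # e.g. "euro"
    decimal_places: int  # the Currency Quality Space Structure, e.g. 2 for euro

@dataclass
class MonetaryObjectType:
    name: str            # e.g. "euro banknote"
    currency: Currency   # defined by the Money Status Function Description

@dataclass
class MonetaryObject:
    object_type: MonetaryObjectType
    nominal_value: float  # stamped on the object by the issuing authority
    valid: bool = False   # phase: new or damaged notes are not valid

    @property
    def exchange_value(self) -> Optional[float]:
        # Only Valid Monetary Objects carry an exchange value,
        # and it equals the nominal value.
        return self.nominal_value if self.valid else None

euro = Currency("euro", decimal_places=2)
banknote_type = MonetaryObjectType("euro banknote", euro)
note = MonetaryObject(banknote_type, nominal_value=20.0, valid=True)
```

Modeling `valid` as a flag and `exchange_value` as a derived property mimics the phase distinction in the ontology: a not-valid banknote retains its nominal value but has no exchange value.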

Figure 2 depicts monetary objects and electronic monetary credit related concepts. As previously mentioned, money represents a credit/debt relation between the State (Monetary Authority) and an agent (Agent) either owning an Electronic Monetary Credit or holding control of a Monetary Object. Monetary Credit/Debt relations are composed of the Monetary Credit and the Monetary Debt, which have a value projected in a particular Currency Quality Space and inhere in the Agent (creditor) and in the Monetary Authority (debtor), respectively. In the ontology, the Monetary Credit/Debt Relation is specialized into Physical Monetary Credit/Debt and Electronic Monetary Credit/Debt. The former represents the credit/debt relation that a Valid Monetary Object establishes between the Agent that holds Control of it and the Monetary Authority (e.g. central banks). As for the Electronic Monetary Credit/Debt relation, it represents bank deposits and, consequently, also involves the Financial Institution in charge of the bank account, as intermediary.
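The specialization of the credit/debt relation in Fig. 2 can be sketched as a small class hierarchy. The names follow the ontology; the Python typing and the example values are our illustration.

```python
from dataclasses import dataclass

@dataclass
class MonetaryCreditDebt:
    creditor: str  # the Agent
    debtor: str    # the Monetary Authority, ultimately representing the State
    value: float   # projected in a Currency Quality Space

@dataclass
class PhysicalMonetaryCreditDebt(MonetaryCreditDebt):
    # Established by the Agent's control of a valid monetary object.
    monetary_object_id: str = ""

@dataclass
class ElectronicMonetaryCreditDebt(MonetaryCreditDebt):
    # Represents a bank deposit; the financial institution acts as intermediary.
    intermediary: str = ""

held_note = PhysicalMonetaryCreditDebt("Mary", "Central Bank", 10.0,
                                       monetary_object_id="note-001")
deposit = ElectronicMonetaryCreditDebt("Mary", "Central Bank", 20.0,
                                       intermediary="Mary's bank")
```

The shared base class captures what the two specializations have in common (creditor, debtor, value), while only the electronic variant introduces the intermediary role.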


As previously argued, when an Agent holds control of a Valid Monetary Object, she is endowed with the power to make economic transactions in the amount corresponding to its exchange value. In Fig. 3, we capture this by means of the objectified relationship labeled Control, between Valid Monetary Object and Agent. The Exchange Power to carry out economic transactions inheres in the Agent and is grounded either on a Control relationship or on an Electronic Monetary Credit/Debt relation in which the Agent is the creditor. The Exchange Power’s property exchange power value assumes a value in a Currency Quality Space, which is equal to either the exchange value of the Valid Monetary Object or the value of the Electronic Monetary Credit/Debt.

We model the exchange power resulting from the sum of electronic monetary credits and monetary objects controlled by an Agent by means of the entity Aggregated Exchange Power, which is represented as a kind of capability inhering in the Agent. Finally, the Aggregated Exchange Power has an underlying Purchasing Power that corresponds to the quantity of goods and services the Agent manages to get with this Aggregated Exchange Power. As previously discussed, the Purchasing Power depends on the Price of goods and services. We model Price as a quality value that is “attached” to an Object as a result of an assessment made by an Agent. The relationship Pricing represents this assessment. We are aware that the current ontology does not provide a deep analysis of pricing. This analysis falls outside the scope of this paper, as our focus is on modeling the relationship between money and prices.

We make use here of the concepts and relations defined in the Reference Ontology of Trust (ROT) [2] to model the relation between money and trust, presented in Fig. 4. ROT formalizes the general concept of trust and distinguishes between two types of trust, namely social trust and institution-based trust. The latter builds upon the existence of shared rules, regularities, conventional practices, etc., and is related to social systems [2], like the Monetary System. According to ROT, Institution-based Trust is a specialization of Trust in which the Trustee is a social system. In our ontology the entity Institution-Based Trust represents the Trust of society (a social Agent) in the Monetary System.
The latter builds upon the existence of shared rules, regularities, conventional practices, etc. and is related to social systems [2], like the Monetary System. According to ROT, Institution-based Trust is a specialization of Trust in which the Trustee is a social system. In our ontology the entity Institution-Based Trust represents the Trust of the society (a social Agent) in the Monetary System.

Fig. 2. Monetary objects and electronic monetary credit


Fig. 3. Money, exchange power and purchasing power

Fig. 4. Money and trust (instantiating a fragment of ROT [2])

4.4 Modeling Virtual Currencies in OntoUML

Similar to money, virtual currencies have their value grounded on a status function, which is defined in their underlying virtual currency scheme. In Fig. 5, the entity Virtual Currency Scheme Description, which defines the Virtual Currency and the Virtual Currency Token Type, represents this concept. As with money, the Virtual Currency Scheme Description also defines a Virtual Currency Quality Space Structure for the Virtual Currency Quality Space. Frequent flyer program points and cryptocurrencies, such as Bitcoin and Ethereum, are examples of Virtual Currencies. In the ontology, Virtual Currency Token represents instances of Virtual Currency Token Type. The property vc token value represents the token value and is projected in a Virtual Currency Quality Space.

As aforementioned, virtual currencies are similar to money regarding the role played by trust. As we did for money, we make use of the concepts defined in the Reference Ontology of Trust (ROT) [2] to model the relation between virtual currencies and trust. Therefore, in Fig. 5, the entity Institution-Based Trust represents the Trust of Agents in the Virtual Currency System.

Fig. 5. Virtual currency


Fig. 6. Virtual currency and exchange power

Following the categorization proposed by the ECB [14], in Fig. 6 we distinguish Virtual Currency Token into Closed VC Token and Purchasable VC Token. Closed VC Tokens can neither be purchased with nor converted to legal tender currencies. Differently, Purchasable VC Tokens can be purchased using legal tender currencies at a specific exchange rate. For this reason, they have an associated price value that is represented by means of the property vc token price, which takes a value in a Currency Quality Space. Within the category of Purchasable VC Token we can further distinguish between Unidirectional Flow VC Token and Bidirectional Flow VC Token. The difference between them is that only Bidirectional Flow VC Tokens can be exchanged for legal tender currencies. Therefore, Agents holding control of Bidirectional Flow VC Tokens have the power to exchange them for legal tender currencies, as well as for real goods and services. We model this capacity by means of the entity Exchange Power in Currency, which inheres in the Agent and is grounded on the control relation Bidirectional VC Control, between the Agent and the Bidirectional Flow VC Token. Finally, every Agent holding control of a Virtual Currency Token has an exchange power to carry out economic transactions denominated in that particular virtual currency in the amount corresponding to the value of the Virtual Currency Token. The entity Exchange Power in VC represents this capacity.
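The token taxonomy of Fig. 6 maps naturally onto a class hierarchy. In the sketch below, class names mirror the ontology, while the implementation (constructors, the `grants_exchange_power_in_currency` helper, and the example values) is our illustration.

```python
class VirtualCurrencyToken:
    def __init__(self, vc_token_value: float):
        self.vc_token_value = vc_token_value  # in the VC quality space

class ClosedVCToken(VirtualCurrencyToken):
    """Can neither be purchased with nor converted to legal tender."""

class PurchasableVCToken(VirtualCurrencyToken):
    def __init__(self, vc_token_value: float, vc_token_price: float):
        super().__init__(vc_token_value)
        self.vc_token_price = vc_token_price  # price in a legal tender currency

class UnidirectionalFlowVCToken(PurchasableVCToken):
    """Purchasable with real money but not exchangeable back (e.g. PokeCoins)."""

class BidirectionalFlowVCToken(PurchasableVCToken):
    """Bought and sold at floating exchange rates (e.g. Bitcoin)."""

def grants_exchange_power_in_currency(token: VirtualCurrencyToken) -> bool:
    # Only control of a bidirectional-flow token grounds an Exchange Power
    # in Currency; every token grounds an Exchange Power in VC.
    return isinstance(token, BidirectionalFlowVCToken)
```

Note how the `vc_token_price` attribute appears only on `PurchasableVCToken` and its subclasses, matching the ontology's placement of the property on that category alone.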

5 Final Remarks

Despite the financial sector’s interest in the adoption of ontology-based conceptual models to make the nature of its conceptualizations explicit [12, 15, 32, 33], to the best of our knowledge, no sufficiently comprehensive formal model has been developed to accurately describe the semantics of the world of money and currencies. An initiative in this direction is the Financial Industry Business Ontology (FIBO), “an industry standard resource for the definition of business concepts in the financial services industry” [12]. Although FIBO includes a Currency Amount Ontology, it is not comprehensive and only marginally touches the notions of money and currency. For example, concepts related to money functions, types of money, legal aspects and trust are not explored in this ontology.

Our analysis allows us to formally characterize money and related concepts, as well as virtual currencies. The ontology presented here can serve as a basis for future business ontologies and as a conceptual foundation for several types of information analysis and data integration.


We conducted a preliminary evaluation of the ontology by means of interactions with experts in the field of economics and finance, including practitioners directly working on monetary policy in the context of central banks. In addition, as the ontology is specified in OntoUML, it is compliant with the ontological distinctions put forth by UFO, thus preserving ontological consistency by design. As a next direction, we plan to apply our ontology to improve analytical data integration in the finance domain, as well as to support semantic interoperability across multiple cryptocurrency blockchain networks. We also plan to integrate it with well-known ontologies in the finance domain (e.g. FIBO).

Acknowledgments. This work is partially supported by CAPES (PhD grant# 88881.173022/2018-01) and the NeXON project (UNIBZ). The authors would like to thank Nicola Guarino for comments and fruitful discussions on the topics of this article.

References

1. Simpson, J.A., Weiner, E.S.C. (eds.): Oxford English Dictionary (1989)
2. Amaral, G., Sales, T.P., Guizzardi, G., Porello, D.: Towards a reference ontology of trust. In: Panetto, H., Debruyne, C., Hepp, M., Lewis, D., Ardagna, C.A., Meersman, R. (eds.) OTM 2019. LNCS, vol. 11877, pp. 3–21. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33246-4_1
3. Amaral, G., Sales, T.P., Guizzardi, G., Porello, D., Guarino, N.: Towards a reference ontology of money: monetary objects, currencies and related concepts. In: 14th International Workshop on Value Modelling and Business Ontologies (2020)
4. Basel Committee: Principles for effective risk data aggregation and risk reporting (2013). https://www.bis.org/publ/bcbs239.htm
5. Blums, I., Weigand, H.: Financial reporting by a shared ledger. In: JOWO (2017)
6. Borgo, S., Masolo, C.: Foundational choices in DOLCE. In: Staab, S., Studer, R. (eds.) Handbook on Ontologies. IHIS, pp. 361–381. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-540-92673-3_16
7. Castelfranchi, C., Falcone, R.: Trust Theory: A Socio-Cognitive and Computational Model, vol. 18. Wiley, Chichester (2010)
8. Central Bank of Brazil: Synthesis of Brazilian monetary standards (2007). https://www.bcb.gov.br/ingles/museu-espacos/refmone-i.asp
9. De Bonis, R., Vangelisti, M.I.: Moneta - Dai buoi di Omero ai Bitcoin. Collana Universale Paperbacks il Mulino (2019)
10. De Bruin, B., Herzog, L., O’Neill, M., Sandberg, J.: Philosophy of money and finance. In: Stanford Encyclopedia of Philosophy (2018)
11. Decision of the European Central Bank of 19 April 2013 on the denominations, specifications, reproduction, exchange and withdrawal of euro banknotes (ECB/2013/10). Official Journal of the European Union L118, 37–42 (2013)
12. Enterprise Data Management Council: Financial Industry Business Ontology (2015). https://spec.edmcouncil.org/fibo/
13. European Central Bank: Virtual currency schemes. Technical report, European Central Bank, Frankfurt a. M., Germany (2012)
14. European Central Bank: Virtual currency schemes - a further analysis. Technical report, European Central Bank, Frankfurt a. M., Germany (2015)


15. Fischer-Pauzenberger, C., Schwaiger, W.S.A.: The OntoREA accounting and finance model: ontological conceptualization of the accounting and finance domain. In: Mayr, H.C., Guizzardi, G., Ma, H., Pastor, O. (eds.) ER 2017. LNCS, vol. 10650, pp. 506–519. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-69904-2_38
16. Fonseca, C.M., Porello, D., Guizzardi, G., Almeida, J.P.A., Guarino, N.: Relations in ontology-driven conceptual modeling. In: Laender, A.H.F., Pernici, B., Lim, E.-P., de Oliveira, J.P.M. (eds.) ER 2019. LNCS, vol. 11788, pp. 28–42. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-33223-5_4
17. Guizzardi, G.: Ontological foundations for structural conceptual models. Telematica Instituut Fundamental Research Series, No. 15, ISBN 90-75176-81-3 (2005)
18. Guizzardi, G., Falbo, R.A., Guizzardi, R.S.S.: Grounding software domain ontologies in the Unified Foundational Ontology (UFO). In: 11th Ibero-American Conference on Software Engineering (CIbSE), pp. 127–140 (2008)
19. Guizzardi, G., Fonseca, C.M., Benevides, A.B., Almeida, J.P.A., Porello, D., Sales, T.P.: Endurant types in ontology-driven conceptual modeling: towards OntoUML 2.0. In: Trujillo, J.C., et al. (eds.) ER 2018. LNCS, vol. 11157, pp. 136–150. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00847-5_12
20. Guizzardi, G., Wagner, G., Almeida, J.P.A., Guizzardi, R.S.S.: Towards ontological foundations for conceptual modeling: the Unified Foundational Ontology (UFO) story. Appl. Ontol. 10(3–4), 259–271 (2015)
21. Guizzardi, G., Wagner, G., de Almeida Falbo, R., Guizzardi, R.S.S., Almeida, J.P.A.: Towards ontological foundations for the conceptual modeling of events. In: Ng, W., Storey, V.C., Trujillo, J.C. (eds.) ER 2013. LNCS, vol. 8217, pp. 327–341. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41924-9_27
22. Herre, H.: General Formal Ontology (GFO): a foundational ontology for conceptual modelling. In: Poli, R., Healy, M., Kameas, A. (eds.) Theory and Applications of Ontology: Computer Applications, pp. 297–345. Springer, Dordrecht (2010)
23. Innes, A.M.: What is money? The Banking Law Journal, May (1913)
24. ISO: Codes for the representation of currencies - ISO 4217:2015 (2015)
25. Keynes, J.M.: A Treatise on Money, vol. 1: The Pure Theory of Money. Macmillan, St. Martin’s for the Royal Economic Society (1971)
26. Knapp, G.F.: The state theory of money. Technical report, McMaster University Archive for the History of Economic Thought (1924)
27. Macleod, H.: The Theory of Credit, vol. 2. Longmans, Green, and Company (1890)
28. Mann, F.A.: The Legal Aspect of Money. Milford (1938)
29. Menger, C.: On the Origins of Money (trans. Foley, C.A.). Ludwig von Mises Institute, Auburn, AL (2009)
30. Mukhopadhyay, U., Skjellum, A., Hambolu, O., Oakley, J., Yu, L., Brooks, R.: A brief survey of cryptocurrency systems. In: 14th Annual Conference on Privacy, Security and Trust, pp. 745–752. IEEE (2016)
31. Papadopoulos, G.: The ontology of money: institutions, power and collective intentionality. Erasmus University Rotterdam (2015)
32. Polizel, F., Casare, S.J., Sichman, J.S.: Ontobacen: a modular ontology for risk management in the Brazilian financial system. In: JOWO@IJCAI 1517 (2015)
33. Scholes, M., et al.: Regulating Wall Street: The Dodd-Frank Act and the New Architecture of Global Finance, vol. 608. Wiley, Hoboken (2010)
34. Searle, J.R.: Money: ontology and deception. Cambridge J. Econ. 41(5), 1453–1470 (2017)
35. Searle, J.: The Construction of Social Reality. Free Press, New York (1995)
36. Simmel, G.: The Philosophy of Money. Psychology Press, London (2004)
37. Stanley-Smith, J., Schwanke, A.: Pokemon Go could get caught in a tax bubble. Int’l Tax Rev. 27, 22 (2016)

A Reference Ontology of Money and Virtual Currencies

243

38. Treaty on the functioning of the European union: official Journal of the European Union C326, pp. 47–390 (2012) 39. Von Mises, L.: The Theory of Money and Cred-It. Skyhorse Publishing, Inc., New York (2013) 40. Wolla, S.A.: Money and Inflation: A Functional Relationship. Page One Economics (2013) 41. Zelmanovitz, L.: The Ontology and Function of money: The Philosophical Fundamentals of Monetary Institutions. Lexington Books, Lanham (2015)

Ontology-Based Visualization for Business Model Design

Marco Peter1,2(B), Devid Montecchiari1,2, Knut Hinkelmann1,2, and Stella Gatziu Grivas1

1 School of Business, FHNW University of Applied Sciences and Arts Northwestern Switzerland, Olten, Switzerland
{marco.peter,devid.montecchiari,knut.hinkelmann,stella.gatziugrivas}@fhnw.ch
2 School of Science and Technology, UNICAM University of Camerino, Camerino, Italy

Abstract. The goal of this paper is to demonstrate the feasibility of combining visualization and reasoning for business model design by combining the machine-interpretability of ontologies with a further development of the widely accepted business modeling tool, the Business Model Canvas (BMC). Since ontologies are a machine-interpretable representation of enterprise knowledge and thus not well suited for human interpretation, we present a tool that combines the graphical, human-interpretable representation of the BMC with a business model ontology. The tool connects a business model with reusable data and provides interoperability with other intelligent business information systems, enabling additional functionalities such as the comparison of business models. This research follows the design science strategy with a qualitative approach, applying literature research, expert interviews, and desk research. The developed AOAME4BMC tool consists of a frontend (a graphical web-based representation of an enhanced BMC), a web service for the data exchange with the backend, and a specific ontology for the machine-interpretable representation of a business model. The results suggest that an ontology-based representation is suitable for business model design.

Keywords: Business model design · Ontology · Agile and ontology-aided meta-modeling · Business model canvas

1 Introduction

Today, more and more competition takes place not only between products, services, or processes but between business models [15]. Therefore, companies continue to improve and innovate their current business model to sustain themselves in today's fast-paced world [6,8]. Companies that have decided to undertake an innovative transformation of their business model have found themselves to be more competitive in today's market [54]. Several strategies on how to perform business model innovation exist [5,15,16,49], and they all have in common that the current business model is first defined in detail. There are different tools to describe a business model, such as the Business Model Canvas (BMC) by Osterwalder and Pigneur [42] or the Business Modeling Starter Kit by Breuer [5]. Nevertheless, a tool is missing that not only describes the business model in more detail than the BMC but also supports the reusability of data and interoperability with other intelligent business information systems. This reusability of the specified business model would also allow a company to compare its business model to other business models and to gain knowledge from it.

Graphical and text-based business models like the BMC are intended for use by humans, who can easily interpret them. Having a machine-interpretable representation would ease finding and comparing similar business models. Ontology-based representations of enterprise knowledge are machine-interpretable [22]. Combining machine-interpretable and human-interpretable representations allows the same models to be used both by humans and by machines [21,29]. Thus, the goal of this research is to develop a machine-interpretable representation of the Business Model Canvas and combine it with the graphical representation to make business models comparable and the knowledge about business models reusable. The research has the objective to answer the following research question: how can business model design be supported by combining visualization and reasoning over the models?

The developed ontology provides a systematic structure for business models. Furthermore, the approach provides an agile visual representation for a business modeler. Since the approach collects business models based on an ontology's structure, the collected business models can be used for reasoning purposes to extract patterns of business models and enable a comparison of them. The outcome of this research is used for a web-based tool, which supports businesses in their business model innovation process by providing previously collected business model innovation cases and matching them to the company's individual business model. This process allows showing the impact the proposed new business model would have on the current one. It also allows providing information to estimate the business model transformation costs in detail. The challenge is to integrate the visual representation into the ontology and to allow the modeler to have an agile experience during the business modeling process.

This paper is organized as follows: the state of the art is discussed in Sect. 2 and our applied research method in Sect. 3. Section 4 describes our concept of an ontology-based business model. The implementation of the ontology into a web-based agile visualization software is presented in Sect. 5 and evaluated in Sect. 6. The conclusion in Sect. 7 closes this research paper and outlines future research areas.

© IFIP International Federation for Information Processing 2020. Published by Springer Nature Switzerland AG 2020. All Rights Reserved. J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 244–258, 2020. https://doi.org/10.1007/978-3-030-63479-7_17

2 State of the Art

2.1 Business Model

There is no commonly used definition of the term business model within the literature [2,36,43,47]. Yet, researchers agree that a business model is a conceptual model of different parts of a business, which can be seen as a blueprint [43,51]. In-depth literature analyses regarding the elements of business models have also been conducted [47]. The outcome of this literature analysis provides the main elements of a business model: the features of the offering, the customer interface including the market segments, financial aspects such as the revenue streams, as well as the mechanisms to create value, included within the element of infrastructure [9,42,43,45,47,51]. One can also classify the same business model elements into customer value proposition, profit formula, key resources, and key processes [26]. Another view is to distinguish the characteristics of a business model by aggregating four value types: value proposition, value delivery, value capture, and value creation [10]. While the features of the product or service are part of the value proposition, the market segment is part of the value delivery. The revenue streams are covered through the value capture, and the value creation mechanisms can be mapped to value creation.

2.2 Business Model Visualization

There exist different forms to visualize a business model [17,19]. A business model can be visualized as a classification schema, in a transactional form, a cyclical form, or as a sequential model [17]. A classification schema of a business model consists of a set of components and thus categorizes the business model [17]. An often-used tool for this visualization type is the Business Model Canvas by Osterwalder and Pigneur [42]. Its drawback is that the value creation process is not visually shown to the viewer of the business model. A transactional business model is recommended for platform-based business model visualization, as it supports the visualization of exchange between the company, its partners, and its customers [24]. While a sequential form of a business model illustrates the model as a sequence of actions, a cyclical business model shows the model as a perpetual process [17]. The visualization forms differ mainly in the type of information the business model should provide to a viewer. If the business model should deliver information regarding the different elements included within the business model, a categorized visualization is preferable [19]. However, if the business model should deliver information regarding the causal relationships between the business model elements, and thus provide transformational information about a business model, it is usually illustrated using arrows between the different business model elements [18]. The understanding of the causal relationships between the different business model aspects is useful when it comes to, e.g., the understanding of revenue generation [2,37]. A widely used approach to describe a business model in a structured and classified way is the Business Model Canvas (BMC) [42]. It is a tool to describe

the business model of a company on a one-pager [14]. The model consists of nine building blocks: customer segments, value proposition, channels, customer relationships, revenue streams, key resources, key activities, key partnerships, and cost structure [42]. There exist several tools which enable users to visualize their BMC online. A well-known online tool for BMC representation is the tool from Strategyzer AG, of which A. Osterwalder is a founding partner [50]. Another online tool for BMC representation is the award-winning platform smartbusinessmodeler© (SMB) from Smart City Innovation Lab [48]. SMB is a result of academic research on sustainable business model patterns for business model innovation [33,34]. While the tool from Strategyzer is rather a simple virtual sticky-note-based whiteboard with the BMC as the background, the SMB is at first glance very similar. Nevertheless, the SMB uses patterns from the value propositions entered within the business model to analyze it and find companies with similar value propositions. This information helps companies, especially start-ups, to identify possible competitors within their market. Yet, these solutions do not help in making the provided data of the business model machine-interpretable so that it can be treated as knowledge of the company. Also, the data is not reused further for analyses such as comparisons with other business models for suggesting possible business model innovations.

2.3 Ontology-Based Metamodeling

There is a clear gap between a human-intelligible modeling language and a machine-interpretable language [21]. Humans prefer graphical or textual models, but computers require a formal language to interpret models. A possible approach is the usage of models that are "cognitively adequate for humans and processable by machines" [21]. A solution can be a variant of the Meta Object Facility (MOF) metamodeling framework [41] in which ontologies are used instead of UML as the metamodeling language [21]. Through semantic lifting [1], a mapping is created to link the graphical representation with the respective structure and semantics [23]. The ontology provides a formal semantics of the modeling language [25,28] such that the related models are interpretable by machines. Ontology-based metamodeling [23] allows the automatic linking of the graphical notation to the semantics in the ontology, generating models which are both machine-interpretable and human-intelligible. Because of this automatic linking, ontology-based metamodeling is the basis for the solution presented in Sect. 5.
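The automatic linking can be pictured with a minimal sketch: each palette shape carries a reference to an ontology class, so dropping the shape on the canvas creates both the visual element and a typed, machine-interpretable individual. All identifiers below (shape keys, the `bmo:`/`bm:` prefixes) are illustrative assumptions, not the actual AOAME vocabulary.

```python
# Minimal sketch of semantic lifting: each graphical palette element is linked
# to an ontology class, so placing a shape yields machine-interpretable triples.
# Shape keys and prefixes are invented for illustration.

PALETTE = {  # graphical notation -> ontology class (the semantic-lifting map)
    "yellow_sticky_diamond": "bmo:QualitativeValue",
    "blue_sticky_person":    "bmo:Individual",
    "blue_sticky_building":  "bmo:Government",
}

def place_shape(shape: str, label: str) -> list[tuple[str, str, str]]:
    """Create the RDF triples behind one sticky note dropped on the canvas."""
    cls = PALETTE[shape]
    subject = f"bm:{label.replace(' ', '_')}"
    return [
        (subject, "rdf:type", cls),        # the individual gets the class semantics
        (subject, "rdfs:label", f'"{label}"'),  # the text shown on the sticky note
    ]

for triple in place_shape("yellow_sticky_diamond", "One-stop-shop for real estate"):
    print(triple)
```

The visual layer only needs the shape key; everything a reasoner needs lives in the generated triples, which is what makes the same model usable by humans and machines.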

2.4 Enterprise Ontologies

Ontologies represent knowledge of a particular domain of interest that can be interpreted by machines and used for automated reasoning. An enterprise ontology is defined as "[...] the essential knowledge of the construction and the operation of the organization of an enterprise, completely independent of the ways in which they are realized and implemented" [12]. In the literature there are several examples of enterprise ontologies; some of the most well-known are, in chronological order: the TOronto Virtual Enterprise (TOVE) [13], the Enterprise Ontology [52], the Core Enterprise Ontology (CEO) [4], the Context-Based Enterprise Ontology [32], and the ontology-based enterprise architecture [27]. The enterprise ontology ArchiMEO aggregates and semantically enriches the modeling language standards ArchiMate for enterprise architectures [22] and BPMN for business processes [11,38]. In contrast to ontologies for a single modeling language, such as BPMN ontologies [42,43], ArchiMEO contains concepts of several disciplines, allowing for cross-discipline applications like enterprise engineering, supply chain management, and risk assessment [10]. Among the different use cases and extensions of ArchiMEO, the interlinked case-based reasoning approach ICEBERG [35] for ontology-based case-based reasoning (OCBR) is relevant for reasoning over models. This extension of the ArchiMEO ontology enables an ontology-based design of a case-based reasoning system. It can be adapted for identifying and reusing known and successful business model innovations.

3 Research Method

The objective of combining the machine-interpretability of ontologies with the widely accepted Business Model Canvas has led to the decision to apply a design-oriented research approach. We follow the Design Science Research process model [20], which consists of five research phases: awareness of the problem, suggestion, development, evaluation of the artifact, and conclusion [53]. For the problem awareness, besides the literature research on business model design and ontology-based modeling, we collected 17 cases of business models through desk research and semi-structured interviews. The cases are from companies of different sizes and different sectors to have a diversified dataset for the analysis as well as for the evaluation of the ontology-based visualization for business model design. The results of this analysis, in combination with the knowledge from the literature analysis, especially from Osterwalder's book on the Business Model Canvas [42], were used to build a concept of the tool. The procedure to develop the business model ontology followed the approach of Noy and McGuinness [40]. The development and evaluation of the ontology and the business model visualization were done iteratively to create a sophisticated outcome. Once we had extended the tool AOAME [29] with the ontology and the business model visualization, the artifact was applied to the collected business model cases to qualitatively evaluate the outcomes of the business models with the test use cases. The tool could formally populate the ontologies based on the BMC models. The research results were documented and will be used for further development of the tool. This research provides the grounding for the next extension of AOAME4BMC, in which users will identify new business models and thus improve their business model innovation via OCBR and/or other ontology-based reasoning techniques.

4 Conceptual Solution of Ontology-Based Business Models

To integrate business model design into an ontology-aided modeling approach, we started by modeling the current, well-known BMC as an ontology. The resulting BMC ontology is shown in Fig. 1.

Fig. 1. The business model canvas ontology

The BMC has been enhanced as research has progressed since its release in 2010. One of the enhancements concerns the key partnerships of a business model. Here, the concept of an innovation ecosystem for creating innovations and also new business models was incorporated. Innovation ecosystems can be represented as quadruple helixes, which include four types of partners: individuals, the government, academic institutes, and businesses [44–46]. This lets a business declare what type of key partners it does business with. Also, it is simpler for a user to declare the type of their key partners than to know why the partnership was formed in the first place, as suggested by the BMC [42]. For the business model ontology, both views on key partnerships for businesses are included. Another enhancement of the BMC is the distinction between human interaction and non-human interaction when it comes to types of customer relationships. This distinction was made because in today's world, which is driving towards a high level of digitalization, non-human customer relationship types enabled by new technological capabilities, such as chatbots for customer counseling, are emerging [7,39]. Thus, businesses can distinguish their way of interacting with their customers. Furthermore, the chosen customer relationship types can give inputs regarding the level of digitalization of a business. The concept of the final ontology for business model design is illustrated in Fig. 2. In total, the developed business model ontology consists of 97 classes to define a business model in detail.

Fig. 2. The business model ontology

Both ontologies, the BMC ontology as well as the business model ontology, are required to run the developed online tool to create and represent the enhanced BMC. The BMC ontology acts as the language ontology and is required for the syntax of the tool, while the business model ontology represents the domain ontology and is required for the semantics of the tool. Additionally, a palette ontology for the BMC notation was developed. The palette ontology represents the modeling notation [29]. For each BMC notation element, a graphical element in the form of a sticky note has been created. All three ontologies are based on the general language RDFS; thus, the ontologies can be reused in a modular way. The conceptual solution for the graphical representation of an ontology-based business model is shown in Fig. 3.
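The modular split into language, domain, and palette ontology can be sketched as three small, cross-referencing maps. All names and file references below are invented for illustration; the real ontologies are RDFS graphs with their own IRIs.

```python
# Illustrative split of the three RDFS modules: the language ontology fixes the
# BMC syntax, the domain ontology the semantics, and the palette ontology the
# notation. Names and icon files are assumptions made for this sketch.

LANGUAGE = {  # BMC ontology: building blocks of the canvas (syntax)
    "bmc:BuildingBlock": ["bmc:KeyPartnership", "bmc:ValueProposition"],
}
DOMAIN = {    # business model ontology: domain classes per building block
    "bmo:KeyPartnership": ["bmo:Individual", "bmo:Government",
                           "bmo:AcademicInstitute", "bmo:Business"],
}
PALETTE = {   # palette ontology: sticky-note notation per domain class
    "bmo:Individual": "sticky_note_individual.svg",
    "bmo:Government": "sticky_note_government.svg",
}

def palette_for(building_block: str) -> dict[str, str]:
    """Collect the graphical elements the palette offers for one building block."""
    domain_classes = DOMAIN.get(building_block.replace("bmc:", "bmo:"), [])
    return {c: PALETTE[c] for c in domain_classes if c in PALETTE}

print(palette_for("bmc:KeyPartnership"))
```

Because each module only references the others by class name, any one of them can be swapped or extended independently, which is what the RDFS-based modular reuse amounts to.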

5 Implementation of AOAME4BMC

The resulting tool is called AOAME4BMC. It is implemented as an instantiation of the AOAME modeling environment [29], extending the ArchiMEO ontology with the above-mentioned business model ontology for the enhanced BMC. AOAME is implemented via a web service, a web application, and an ontology repository, and is briefly described in the following based on the work of [30]. The web service creates the link between the business model ontology and the web app. (1) The modeling environment, through a web app, calls the web service, (2) from which a query to the ontology is created. (3) The classes and instances retrieved by the query to the business model ontology (4) are shown graphically to the modeler. The interaction flow between the three players, modeling environment, web service, and ontology, is shown in Fig. 4.
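Assuming a simplified in-memory stand-in for the ontology repository (the real AOAME web service queries the ontology repository itself), the four steps can be sketched as:

```python
# Sketch of the AOAME4BMC interaction flow, steps (1)-(4) above. The in-memory
# "repository" is an illustrative stand-in for the real ontology repository.

REPOSITORY = {  # building block -> classes retrievable from the ontology
    "KeyPartnership": ["Individual", "Government", "AcademicInstitute", "Business"],
    "ValueProposition": ["QualitativeValue", "QuantitativeValue"],
}

def web_service_query(building_block: str) -> list[str]:
    """(2) the web service creates a query; (3) classes are retrieved."""
    return REPOSITORY.get(building_block, [])

def modeling_environment(building_block: str) -> str:
    """(1) the web app calls the web service; (4) results are shown graphically."""
    classes = web_service_query(building_block)
    return " | ".join(classes)  # rendered as palette entries for the modeler

print(modeling_environment("KeyPartnership"))
# Individual | Government | AcademicInstitute | Business
```

Keeping the ontology as the single source of truth in step (2)/(3) is what makes the palette adapt automatically when the ontology changes.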

Fig. 3. Conceptual solution for a graphical representation of an ontology-based business model

Fig. 4. Web service communication (adapted from [29])

Retrieving values from the ontology whenever the tool is launched ensures not only consistent modeling of business models but also the adaptability of the system. If new classes or properties are created within the ontology, the modeler can directly use them within the modeling environment, and vice versa. The user interface of AOAME4BMC for the modeler consists of a palette component and a model editor, as shown in Fig. 5. It becomes accessible after the modeler sets up the connection to the web service as described in the previous paragraph. The palette contains all the predefined graphical elements to fill the business model canvas, such as individual or government for key partnerships. These graphical elements represent the classes within the business model ontology, and as such, there are also sub-classes, as shown for Quantitative Values and its sub-class Performance in Fig. 5. By clicking on a graphical element in the palette, the related image for modeling appears in the model editor.

Fig. 5. View of the AOAME4BMC (adapted from [31])

6 Evaluation of AOAME4BMC

The evaluation of the ontology-based modeling approach for a BMC was performed qualitatively by modeling 17 previously defined business model use cases.

This evaluation approach provides insights regarding the feasibility of an ontology-based business model representation that allows machines to interpret the data provided by a user. Seventeen business model cases were modeled to provide a heterogeneous dataset covering different sectors and company sizes. Deeper insights are provided in the following section, which describes one of the use cases, from the insurance sector.

6.1 Description of the Insurance Use Case

This insurance use case was developed through a semi-structured interview with a business architect from a big Swiss insurance company. The use case encompasses a specific business model for an insurance company, whose goal is to partner with a financial institution to provide its insurance services directly to bank customers. Such insurance services can be household insurance, natural hazard insurance, or water damage insurance. These insurances are mainly relevant to real estate owners, who usually need a bank loan to buy their real estate. Therefore, the bank advisor would be the first person to know that a customer will soon become a real estate owner. Since the bank advisor is the first person to get the information about new real estate owners, they are, from an insurance sales perspective, predestined to inform their customers about real estate related insurances. Thus, the Swiss insurance company decided to partner with a Swiss bank to have a new sales channel for their current insurance business model. This new sales channel not only affected their channel strategy but also required new IT systems. New IT interfaces were required to provide real estate insurance information directly to bank advisors for their customer consultancy. Additionally, for bank advisors to provide qualitative insurance consultancy to their customers, the insurance company came up with a training concept and knowledge exchange between insurance advisors and bank advisors.

6.2 Application of AOAME4BMC on the Insurance Use Case

Figure 5 depicts the representation of the business model within AOAME4BMC for the introduced insurance use case. For this use case, the key partner is a Swiss bank, which wants to offer its customers insurance coverage for their mortgage. The use case has two value propositions: first, an efficient way for a bank customer to get an offer for insurance regarding real estate topics, such as water damage insurance; second, to receive all the relevant insurance information directly at the bank, which thus acts as a one-stop-shop for real estate owners. The virtual sticky notes consist of three things to facilitate the identification of each sticky note for a modeler.

– On the background of the sticky note, the class of the building block is mentioned. The ontology guides the user in assigning the most specific class. For example, within the building block value proposition, there are two sticky notes of the sub-class qualitative value. Since qualitative value has sub-classes as well, the individual sub-class, in this case the sub-class convenience, is mentioned on the sticky note.
– Each sticky note has an icon depicted in the upper right corner. This icon represents a class of each of the nine building blocks. For example, within the building block value proposition, since both sticky notes belong to the sub-class qualitative value, both sticky notes have the icon for qualitative value, a diamond.
– The sticky notes are color-coded based on the four domains of a business model and the color psychology for optimal web design by [3]. The domain infrastructure is colored in different shades of blue; the color blue should transmit the feeling of firmness and the virtue of the backbone of the business model. The domain offering is colored in yellow because the value proposition should deliver success to the customer. The domain customer interface is colored in shades of green, the color of growth, since the customer base of the company should grow through the business model. The fourth domain, financial aspects, is colored in shades of red to signal cautiousness, since the revenues should be higher than the costs.

The ontology allows for reasoning and semantic retrieval of information about business models. For example, when asking for qualitative value propositions, the two sticky notes from Fig. 5 were retrieved, because "Convenience" is a subclass of "qualitative value proposition". Also, more complex reasoning is possible: asking for "Automated Services in a business model with qualitative value proposition" delivers "Calculation of insurance price".

7 Conclusion and Future Work

In this work, we highlight the feasibility of combining visualization and reasoning over models for business model design by combining the machine-interpretability of ontologies with the widely accepted business modeling tool, the Business Model Canvas, which is more adequate for human interpretation. We have developed an ontology-based visualization tool for business model design called AOAME4BMC, which supports the following three benefits: human interpretation, machine interpretation, and reasoning over models.

Regarding human interpretation, the tool AOAME4BMC helps a business modeler to create and represent an individual business model in a facilitated manner, since the tool suggests different types of sticky notes for the canvas which might be relevant for the modeler. Also, the evaluation has led to the conclusion that it is possible and helpful for the modeler to create and represent a business model in a facilitated manner using a specific ontology for business model design. The support of the modeling approach with the aid of color-coded business model building blocks, individual icons for each class, and the class name on the background of each sticky note facilitates the identification of each sticky note. This supports human interpretation, since the modeler can identify at a glance to which building block of the business model a sticky note belongs and to which class within the building block. This helps the representation, as it supports an efficient understanding and interpretation of the created business model.

Machine interpretation is a benefit of ontology-based metamodeling, as it relies on an ontology for business model design, which is composed of 97 classes. Future work will extend the business model ontology with the enterprise architecture ontology. This will provide an in-depth analysis of the business model but also support decision making concerning the enterprise architecture on how to implement the business model.

The tool AOAME4BMC benefits from reasoning over models, as it enables data and ontology reuse. Intelligent business information systems can be connected to the tool, through which reasoning over the models can be conducted. Future work will include the combination of the AOAME4BMC tool with ontology-based case-based reasoning. This will allow us to compare different business models and even to recommend business models. If, for example, a business model is similar to the current situation except for the customer segment and channels, the approach could recommend business models that have the same infrastructure and the same features of the offering, yet for a different client base and through different channels.

Concluding, this research has shown that the previously defined research question can be answered positively. Supporting business model design by combining visualization and reasoning over machine- and human-interpretable models is possible.

References

1. Azzini, A., Braghin, C., Damiani, E., Zavatarelli, F.: Using semantic lifting for improving process mining: a data loss prevention system case study. In: 3rd International Symposium on Data-driven Process Discovery and Analysis (SIMPDA 2013), CEUR Workshop Proceedings, vol. 1027, pp. 62–73. Riva del Garda, Italy (2013). http://ceur-ws.org/Vol-1027/paper5.pdf
2. Baden-Fuller, C., Morgan, M.S.: Business models as models. Long Range Plann. 43, 156–171 (2010)
3. Bernard, M.: Criteria for optimal web design (designing for usability) (2003). Accessed 13 Apr 2005
4. Bertolazzi, P., Krusich, C., Missikoff, M.: An approach to the definition of a core enterprise ontology: CEO. In: OES-SEO 2001, International Workshop on Open Enterprise Solutions: Systems, Experiences, and Organizations, pp. 14–15 (2001)
5. Breuer, H.: Lean venturing: learning to create new business through exploration, elaboration, evaluation, experimentation, and evolution. Int. J. Innov. Manage. 17(03), 1–22 (2013). https://doi.org/10.1142/S1363919613400136
6. Burlton, R.T., Ross, R.G., Zachman, J.A.: The business agility manifesto (2017). https://busagilitymanifesto.org/. Accessed 14 Apr 2020
7. Cameron, G., et al.: Towards a chatbot for digital counselling. In: Proceedings of the 31st British Computer Society Human Computer Interaction Conference, HCI 2017. BCS Learning & Development Ltd., Swindon (2017). https://doi.org/10.14236/ewic/HCI2017.24
8. Chesbrough, H.: Business model innovation: opportunities and barriers. Long Range Plann. 43(2–3), 354–363 (2010)
9. Cosenz, F., Noto, G.: A dynamic business modelling approach to design and experiment new business venture strategies. Long Range Plann. 51, 1–14 (2018)
10. Davies, I.A., Doherty, B.: Balancing a hybrid business model: the search for equilibrium at Cafédirect. J. Bus. Ethics (2010). https://doi.org/10.1007/s10551-018-3960-9
11. Di Francescomarino, C., Ghidini, C., Rospocher, M., Serafini, L., Tonella, P.: Reasoning on semantically annotated processes. In: Bouguettaya, A., Krueger, I., Margaria, T. (eds.) ICSOC 2008. LNCS, vol. 5364, pp. 132–146. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89652-4_13
12. Dietz, J.L., Hoogervorst, J.A.: Enterprise ontology in enterprise engineering. In: Proceedings of the 2008 ACM Symposium on Applied Computing, pp. 572–579 (2008)
13. Fox, M.S., Barbuceanu, M., Gruninger, M.: An organisation ontology for enterprise modeling: preliminary concepts for linking structure and behaviour. Comput. Ind. 29(1–2), 123–134 (1996)
14. Frick, J., Ali, M.M.: Business model canvas as tool for SME. In: Prabhu, V., Taisch, M., Kiritsis, D. (eds.) APMS 2013. IAICT, vol. 415, pp. 142–149. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-41263-9_18
15. Gassmann, O., Frankenberger, K., Csik, M.: The Business Model Navigator: 55 Models that will Revolutionise Your Business. Pearson, UK (2014)
16. Geissdoerfer, M., Savaget, P., Evans, S.: The Cambridge business model innovation process. Procedia Manuf. 8, 262–269 (2017)
17. Havemo, E.: A visual perspective on value creation: exploring patterns in business model diagrams. Eur. Manage. J. 36, 441–452 (2018). https://doi.org/10.1016/j.emj.2017.12.002
18. Heiser, J., Tversky, B.: Arrows in comprehending and producing mechanical diagrams. Cogn. Sci. 30(3), 581–592 (2006)
19. Henike, T., Kamprath, M., Hölzle, K.: Effecting, but effective? How business model visualisations unfold cognitive impacts. Long Range Plann. (2019). https://doi.org/10.1016/j.lrp.2019.101925
20. Hevner, A., Chatterjee, S.: Design Research in Information Systems: Theory and Practice. Springer, US (2010). https://doi.org/10.1007/978-1-4419-5653-8
21. Hinkelmann, K., Gerber, A., Karagiannis, D., Thoenssen, B., van der Merwe, A., Woitsch, R.: A new paradigm for the continuous alignment of business and IT: combining enterprise architecture modelling and enterprise ontology. Comput. Ind. 79, 77–86 (2016)
22. Hinkelmann, K., Laurenzi, E., Martin, A., Montecchiari, D., Spahic, M., Thönssen, B.: ArchiMEO: a standardized enterprise ontology based on the ArchiMate conceptual model. In: Proceedings of the 8th International Conference on Model-Driven Engineering and Software Development - Volume 1: MODELSWARD, pp. 417–424. INSTICC, SciTePress (2020). https://doi.org/10.5220/0009000204170424
23. Hinkelmann, K., Laurenzi, E., Martin, A., Thönssen, B.: Ontology-based metamodeling. In: Dornberger, R. (ed.) Business Information Systems and Technology 4.0. SSDC, vol. 141, pp. 177–194. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-74322-6_12
24. Hoffmeister, C.: Digital Business Modelling: Digitale Geschäftsmodelle entwickeln und strategisch verankern. Carl Hanser Verlag GmbH & Co. KG (2015)
25. Hrgovcic, V., Karagiannis, D., Woitsch, R.: Conceptual modeling of the organisational aspects for distributed applications: the semantic lifting approach. In: COMPSACW 2013, 2013 IEEE 37th Annual Computer Software and Applications Conference Workshops, pp. 145–150. IEEE, July 2013. https://doi.org/10.1109/COMPSACW.2013.17
26. Johnson, M.W., Christensen, C.M., Kagermann, H.: Reinventing your business model (2008). https://hbr.org/2008/12/reinventing-your-business-model. Accessed 12 Apr 2020
27. Kang, D., Lee, J., Choi, S., Kim, K.: An ontology-based enterprise architecture. Expert Syst. Appl. 37(2), 1456–1464 (2010)
28. Kappel, G., et al.: Lifting metamodels to ontologies: a step to the semantic integration of modeling languages. In: Nierstrasz, O., Whittle, J., Harel, D., Reggio, G. (eds.) MODELS 2006. LNCS, vol. 4199, pp. 528–542. Springer, Heidelberg (2006). https://doi.org/10.1007/11880240_37
29. Laurenzi, E., Hinkelmann, K., Izzo, S., Reimer, U., van der Merwe, A.: Towards an agile and ontology-aided modeling environment for DSML adaptation. In: Matulevičius, R., Dijkman, R. (eds.) CAiSE 2018. LNBIP, vol. 316, pp. 222–234. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-92898-2_19
30. Laurenzi, E., Hinkelmann, K., van der Merwe, A.: An agile and ontology-aided modeling environment. In: Buchmann, R.A., Karagiannis, D., Kirikova, M. (eds.) PoEM 2018. LNBIP, vol. 335, pp. 221–237. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-02302-7_14
31. Laurenzi, E., Hinkelmann, K., Montecchiari, D., Goel, M.: Agile visualization in design thinking. In: Dornberger, R. (ed.) New Trends in Business Information Systems and Technology. SSDC, vol. 294, pp. 31–47. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-48332-6_3
32. Leppänen, M.: A context-based enterprise ontology. In: Guizzardi, G., Wagner, G. (eds.) Proceedings of the EDOC International Workshop on Vocabularies, Ontologies and Rules for the Enterprise (VORTE 2005), pp. 17–24. Springer, Berlin (2005)
33. Lüdeke-Freund, F., Bohnsack, R., Breuer, H., Massa, L.: Research on sustainable business model patterns: status quo, methodological issues, and a research agenda. In: Aagaard, A. (ed.) Sustainable Business Models. PSSBIAFE, pp. 25–60. Springer, Cham (2019). https://doi.org/10.1007/978-3-319-93275-0_2
34. Lüdeke-Freund, F., Carroux, S., Joyce, A., Massa, L.: The sustainable business model pattern taxonomy - 45 patterns to support sustainability-oriented business model innovation. Sustain. Prod. Consump. 15, 145–162 (2018). https://doi.org/10.1016/j.spc.2018.06.004
35. Martin, A., Hinkelmann, K.: Case-based reasoning for process experience. In: Dornberger, R. (ed.) Business Information Systems and Technology 4.0. SSDC, vol. 141, pp. 47–63. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-74322-6_4
36. Massa, L., Tucci, C.L., Afuah, A.: A critical assessment of business model research. Acad. Manage. Ann. 11, 73–104 (2017). https://doi.org/10.5465/annals.2014.0072
37. Meyer, R.E., Ho, M.A., Jancsary, D.: The visual dimension in organizing, organization, and organization research: core ideas, current developments, and promising avenues. Acad. Manage. Ann. 7, 489–555 (2013). https://doi.org/10.5465/19416520.2013.781867
38. Natschläger, C.: Towards a BPMN 2.0 ontology. In: Dijkman, R., Hofstetter, J., Koehler, J. (eds.) BPMN 2011. LNBIP, vol. 95, pp. 1–15. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-25160-3_1

258

M. Peter et al.

39. Nawaz, N., Gomes, A.M.: Artificial intelligence chatbots are new recruiters. Int. J. Adv. Comput. Sci. Appl. 10, 1–5 (2019). https://thesai.org/PdfFileHandler.ashx? filen=IJACSA Volume10No9 40. Noy, N.F., McGuinness, D.L.: Ontology development 101: a guide to creating your first ontology. Technical report, Stanford Knowledge Systems Laboratory (2001) 41. Object Management Group, Inc. (OMG): Meta object facility (MOF) core specification, version 2.4.2 (2014). https://www.omg.org/spec/MOF/2.4.2/PDF. Accessed 10 Apr 2020 42. Osterwalder, A., Pigneur, Y.: Business Model Generation. Wiley, Hoboken (2010) 43. Osterwalder, A.: The business model ontology - a proposition in a design science approach. In: Ph.D. dissertation. University of Lausanne (2004) 44. Rabelo, R.J., Bernus, P.: A holistic model of building innovation ecosystems. In: IFAC-PapersOnLine, vol. 28, pp. 2250–2257 (2015) 45. Schallmo, D., Rusnjak, A., Anzengruber, J., Werani, T., J¨ unger, M.: Digitale Transformation von Gesch¨ aftsmodellen. Springer, Wiesbaden (2017). https://doi.org/10. 1007/978-3-658-12388-8 46. Sch¨ utz, F., Schroth, F., Muschner, A., Schraudner, M.: Defining functional roles for research institutions in helix innovation networks. J. Technol. Manage. Innov. 13, 47–54 (2018) 47. Shafer, S.M., Smith, H.J., Linder, J.C.: The power of business models. Bus. Horiz. 48, 199–207 (2005) 48. Smart City Innovation Lab: Smart business model canvas (2018). https://app. smartbusinessmodeler.com/. Accessed 15 Mar 2020 49. Smit, S., Fragidis, G., Handschuh, S., Koumpis, A.: Business model boutique: a Prˆet-` a-porter solution for business model innovation in SMEs. In: Afsarmanesh, H., Camarinha-Matos, L.M., Lucas Soares, A. (eds.) PRO-VE 2016. IAICT, vol. 480, pp. 579–587. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-453903 49 50. Strategyzer AG: Business model canvas by strategyzer (ND). https://platform. strategyzer.com. Accessed 15 Mar 2020 51. 
Teece, D.J.: Business models, business strategy and innovation. Long Range Plann. 43, 172–194 (2010) 52. Uschold, M., King, M., Moralee, S., Zorgios, Y.: The enterprise ontology the knowledge engineering review, vol. 13. Special Issue on Putting Ontologies to Use (1998) 53. Vaishnavi, V., Kuechler, W.: Design Science Research Methods and Patterns: Innovating Information and Communication Technology. Auerbach Publications, Taylor & Francis Group, Boca Raton, New York (2007) 54. Witschel, H.F., Peter, M., Seiler, L., Parlar, S., Grivas, S.: Case model for the RoboInnoCase recommender system for cases of digital business transformation: structuring information for a case of digital change. In: Proceedings of the 11th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, KMIS, vol. 3, pp. 62–73. SciTePress, January 2019. https://doi.org/10.5220/0008064900620073

Business Process Modeling

Decentralized Control: A Novel Form of Interorganizational Workflow Interoperability

Christian Sturm1(B), Jonas Szalanczi2, Stefan Jablonski1, and Stefan Schönig3

1 University of Bayreuth, Bayreuth, Germany
{christian.sturm,stefan.jablonski}@uni-bayreuth.de
2 NeuroForge GmbH & Co. KG, Bayreuth, Germany
[email protected]
3 University of Regensburg, Regensburg, Germany
[email protected]

Abstract. Companies deploy workflow management systems which interpret process models to ensure compliance with operational duties by automatically distributing work items to employees or machinery. Enterprises are often forced to outsource services and join interorganizational collaborations, e.g. in supply chain scenarios, to remain competitive on the market. Today, interorganizational process management caters for interconnecting the publicly visible tasks of the participants' local workflows, whereby a predefined message exchange protocol ensures interoperability. Especially in flexible large-scale collaboration scenarios, this strategy lacks global monitoring and transparent data-based routing of the control flow, due to a missing central controlling instance and missing globally accessible data. Blockchain technology promises to run applications automatically in a decentralized fashion without any trusted third party, as participants agree on a common valid state algorithmically. We capitalize on this consensus-finding mechanism and tamper-resistant data storage to propose a novel form of workflow interoperability for interorganizational workflows which autonomously orchestrates the process in between. The approach solves issues regarding the monitoring of the global state and the distribution of work items to the respective business partners, and achieves data-based routing by holding in-process variables decentralized in a trustworthy fashion.

Keywords: Process execution · Choreography · Interorganizational process management · Workflow interoperability · Data-based routing · Blockchain

1 Introduction

© IFIP International Federation for Information Processing 2020
Published by Springer Nature Switzerland AG 2020. All Rights Reserved
J. Grabis and D. Bork (Eds.): PoEM 2020, LNBIP 400, pp. 261–276, 2020. https://doi.org/10.1007/978-3-030-63479-7_18

Business Process Management (BPM) is the discipline of modelling, executing and analyzing business processes [5]. The skeleton of a process [10] comprises the
set of activities to perform and their temporal order (functional and behavioural perspective), the person, group or machinery responsible for the execution (organizational and operational perspective) and related data values (informational perspective), which may affect the other perspectives, for instance when the control-flow is adapted based on current data values, called data-based routing.

BPM distinguishes intraorganizational processes from interorganizational processes. In intraorganizational settings, the organization itself is the process owner, holding all responsibilities and a special interest in successfully completing the process. Technically, all data and information remain inside a certain environment, e.g. the company's boundaries. The process owner implements software that orchestrates internal routines and distributes work items to employees. In interorganizational settings, at least two separate participants or process owners strive to reach one common business goal by outsourcing or distributing the workload. This is accompanied by additional issues like the preservation of company secrets (hiding internal workflows) [13], the lack of unconditional trust between participants [27] or a missing common IT infrastructure [19]. Six different possibilities to establish interorganizational workflows are described in the literature as forms of interoperability, but each of them comes with certain drawbacks, e.g. it prevents parallel execution or lacks monitoring of the global workflow progress [26].

We propose decentralized control as a novel form of interoperability to address the requirements of interorganizational processes and to solve the following dilemma: On the one hand, interorganizational processes do not have a particular process owner and a centralized coordinating instance is not considered applicable, so participants are obliged to hold all data locally. On the other hand, all participants have to agree on a single source of truth to achieve a commonly accepted work item distribution or data-based routing. Decentralized control is built upon a decentralized peer-to-peer network and achieves both decentralization, i.e. local data retention, and common agreement on the global state of the process flow. To counteract intended fraud attempts or unintended technical malfunctions with locally corrupted data, decentralized control includes algorithms called consensus mechanisms, which are applied to establish a fault tolerant system and to eliminate trust issues. In consequence, the group of business partners agrees on a single source of truth, which is responsible for global monitoring and routes the process flow automatically based on the process model. We will use blockchain technology to implement decentralized control in this paper. The Ethereum blockchain is able to store data in a decentralized fashion, and with small programs (smart contracts) process information can be encapsulated and decisions taken automatically.

In the remainder of the paper, we describe decentralized control and elaborate on both modelling and implementation issues in Sect. 2. Section 3 introduces basic concepts of the Ethereum protocol, the blockchain specification which we will use for our implementation. Sturm et al. proposed in [23,24] a concept for a reusable blockchain-ready software artefact which allows interpreting simplistic process models and thus driving the interorganizational process autonomously.


We build upon this work and describe the preliminaries in Sect. 4. However, for trusted and advanced data-based routing, we have to extend this architecture in Sect. 5, so that data and in-process variables are also kept in a decentralized fashion. Section 7 summarizes related work before Sect. 8 concludes the paper.

2 Novel Interoperability: Decentralized Control

In this section, we propose a novel form of interoperability for executing interorganizational workflows. We briefly introduce the present forms and point out weaknesses with regard to the requirements of interorganizational process execution. The novel form is contextualized in terms of modelling and implementation aspects. In the matter of modelling, we refer to BPMN (Business Process Model and Notation) [21] as the de-facto standard in research due to its frequent usage.

Conceptual Architectures and Interoperability. Van der Aalst identified six different forms of interoperability between workflows to establish an interorganizational collaboration [26]. Each of them comes with certain drawbacks, which we discuss in this section. Subcontracting (i) assumes a hierarchical structure of participants, chained execution (ii) requires the process to consist of sequentially ordered parts, and the extended/case transfer (iii/iv) restricts full parallelism, as a case resides at only one business partner at a time [26]. Capacity sharing (CS) and loosely coupled workflows (LCW), the two remaining forms, promise higher flexibility and are in the spotlight now. Instead of partitioning the workflow, CS assumes centralized control (cf. Fig. 1, left): one central workflow engine distributes tasks directly to resources of business partners, based on one global workflow. With LCW (cf. Fig. 1, right), each participant has a local workflow running which is connected to related workflows of other participants. They synchronize at certain points to ensure the correct execution of the overall business process.

Fig. 1. Different forms of interoperability: Capacity sharing (left), Loosely coupled workflows (right) and Decentralized control (middle). Illustration based on [26].


We identify basic requirements for interorganizational process management in the literature (cf. Table 1) and point out the downsides of CS and LCW to stress the necessity of a novel approach. A common IT infrastructure refers to a data storage or processing environment which is accessible within a well-defined network but secured against the global internet, e.g. a company-wide network drive protected behind a firewall. A centralized WFMS holds processes and data centralized by definition (CS), whereas LCWs cannot satisfy this requirement due to the geographical and logical separation of data. The preservation of autonomy demands that participants must be able to apply steady changes to internal workflows at any time [12]. LCW as implemented in [13] supports autonomy, as local process model fragments of the participants are connected with an event system, whereas CS uses an agreed-upon global workflow in a centralized WFMS, where ad-hoc changes of single participants are hard to apply. Global monitoring is essential to query the progress of a particular workflow instance. Trivially, monitoring is easy within a centralized WFMS (CS), but with LCWs, an increasing number of participants raises both the importance and the complexity of process monitoring at the same time [14]. The third requirement refers to privacy [14]. Companies want to protect business secrets to preserve competitive edges, or are obliged to keep data private because of legal constraints (e.g. GDPR). Using LCWs, only the tasks responsible for message exchange need to be publicly visible. Therefore, LCW surpasses CS, where all tasks remain public in the global workflow. Trust has recently been identified as an impeding factor for establishing interorganizational workflows, that is, when a service provider is not trusted to host the infrastructure (CS) or a collaborating party corrupts historic execution logs to their advantage [27].
The latter concerns LCWs, because participants have full control over their fragment of the overall workflow.

Table 1. Basic requirements for interorganizational process execution

Criteria                 | CS | LCW | Decentralized control
Common IT infrastructure | +  | −   | +
Autonomy                 | −  | +   | +
Global monitoring        | +  | −   | +
Privacy                  | −  | +   | +
Trust                    | −  | −   | +

We propose a novel form of interoperability which solves the issue of global monitoring without centralizing the responsibility, and which preserves the local autonomy and privacy of the participants: In decentralized control (cf. Fig. 1, middle), a trusted, decentralized and fault tolerant network builds the common IT infrastructure. Each participant holds the process data (models, instances/cases, in-process variables) locally. At first, this introduces redundant copies of data which may not always correspond to each other, due to intentional manipulation or technical faults, and hence are untrustworthy (byzantine faulty). Consensus mechanisms, and byzantine fault tolerant algorithms in particular, can deal with a certain number of malfunctioning or wilfully attacking nodes and ensure that honest and reliable nodes automatically agree on a common state [3]. This adds a trustworthy layer of global monitoring and still avoids centralization. In the local environment, the participants are free to decouple their confidential local workflows from the global process, so that autonomy and privacy are preserved. In the remainder of this paper, we implement a decentralized system leveraging blockchain technology and the Ethereum protocol in particular. Section 3 discusses in detail how the components of Ethereum address the stated requirements.
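The agreement among honest nodes despite locally corrupted copies can be illustrated with a deliberately simplified sketch (all names are ours; a real deployment would rely on a full byzantine fault tolerant protocol, not a plain majority vote):

```python
from collections import Counter

def agree_on_state(local_states):
    """Toy consensus: the majority value among local copies wins.

    local_states maps each participant to its local copy of the process
    state; the value held by more than half of the nodes is accepted as
    the commonly agreed state.
    """
    counts = Counter(local_states.values())
    state, votes = counts.most_common(1)[0]
    if votes <= len(local_states) / 2:
        raise RuntimeError("no majority - the network cannot agree")
    return state

# Three honest participants outvote one with a corrupted local copy.
copies = {"A": "task B done", "B": "task B done",
          "C": "task B done", "D": "task C done"}
print(agree_on_state(copies))  # -> task B done
```

The corrupted copy held by participant D is simply outvoted; byzantine fault tolerant algorithms generalize this idea to adversarial settings.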

Fig. 2. BPMN choreography [28]


Modelling. Different paradigms for modelling interorganizational processes have been proposed. Message Sequence Charts are used [2] when the sole interaction of workflows is of particular interest. Workflow nets are used to model the global workflow in the Public-to-Private approach [1]. This top-down approach is still current in state-of-the-art methodologies, e.g. when high-level milestones are defined first, before the interactions of business partners with these milestones are identified. Then, global behavioural interfaces of the participants are detected before they are finally linked to local executable workflows in the LCW style [28]. The approaches share a contract-like definition of the intermediate collaboration interface, which must be respected by the local workflows as described for LCWs. BPMN reflects this conceptual architecture in its various diagrams: Local workflows are modelled as private executable processes ([21] §10.2.1.1) which are interpreted in the WFMSs. For communication issues, public processes ([21] §10.2.1.2) define the behavioural interface, i.e. the globally visible tasks of one single participant which are connected to globally visible tasks of another participant (cf. Fig. 3). The overall interorganizational workflow is shown in a choreography ([21] §11), which is conceptually located between the participants. Choreographies are not directly executed by a central controller, but rather specify the to-be interactions of the local workflows and focus on the required exchange of messages.


Fig. 3. Send order is not executed, because the Seller is not informed


Weske et al. describe certain issues that come with choreographies [28], which are a consequence of the tight coupling with LCWs. For instance, the choreography in Fig. 2 is not enforceable ([21] §11.5.6), because the initiator (Seller) of the message exchange Send order is not involved in the previous Send Funds and is therefore not informed that the payment was made. Enforceable choreographies must contain such message exchanges explicitly, which unnecessarily blows up the models. This issue is due to the message-centric modelling in choreographies and the lack of a centralized controlling instance. Decentralized control promotes modelling and executing interorganizational executable processes, which avoids these message-centric issues by nature. All participants are informed about state changes automatically as the process progresses, and in-process variables are globally available, hence there is no need to focus on message exchanges. We will therefore concentrate on the interorganizational control-flow and model the intermediate contract as a private executable process in Sect. 4 and Sect. 5. In essence, no BPMN diagram specializes in the control-flow between cross-enterprise organizational units; e.g. participants are usually modelled within BPMN pools, but the control-flow cannot leave the pool's boundaries ([21] §9.3). This and the different diagrams show that BPMN is significantly influenced by the LCW interoperability and consequently cannot directly support the novel interoperability. However, we will still use BPMN elements and take the officially specified semantics whenever possible. In some places, we will have to reinterpret elements and deviate from the proper meaning of BPMN.

Implementation Aspects of Interorganizational Workflows. Interorganizational workflows, and LCWs in particular, can be implemented using web services (cf. WS-BPEL, the Business Process Execution Language).
This realises an integration of two separate machines of different companies to automatically exchange data, but not process enactment in the sense of decoupling application programs from the process logic to achieve the flexibility of work item distribution or global data-based routing of the process flow. The latter is rather achieved by interpreting process models in a process-aware management tool. In decentralized control, we implement an executable process model in a decentralized WFMS. Speaking of implementing processes, Dumas et al. distinguish between human tasks and service tasks [5]. The former are rather manual tasks which are hard to track automatically, e.g. phone calls or the manual exchange of raw materials in production lines. The latter is for instance a small program or script which sends an email or updates in-process data values. Hence, a WFMS serves as a distributor of work items to the respective resource on the one hand, but may also be responsible for process automation. In this paper, we focus on human tasks and on ensuring proper execution with data-based routing, but do not automate certain tasks. However, it has already been shown that Ethereum can also accomplish basic automated service tasks [16], although that solution requires writing program code during process modelling.

3 Selecting Blockchain as Decentralized Environment

Blockchains are a novel way to build large-scale byzantine fault tolerant networks. Ethereum can execute decentralized applications on top and has proven able to run business processes [16,27]. Hence, it seems to be a suitable solution for implementing decentralized control interoperability. We now introduce basic concepts of Ethereum, but concentrate on the essential details to keep the blockchain as black-boxed as possible for readability purposes. On a high level, we can describe the blockchain using four perspectives.

From a technical perspective, a blockchain is a protocol running within an open peer-to-peer network that everyone can join by running a client software. The protocol includes a gossip functionality so that connected participants, and thus eventually the whole network, are notified about necessary information. From a functional perspective, the blockchain is a distributed append-only data storage, where a consensus mechanism allows the network to decide over new data (new blocks) autonomously. This includes selecting the entity responsible for proposing a new block and deciding on its correctness, which is determined by a protocol-given ruleset. As an example, the ruleset in the Bitcoin protocol includes that a transaction is propagated only if the sender's balance covers the amount of monetary units to be transferred. The winner of a laborious puzzle is chosen to propose a new block (proof-of-work), which must comply with the ruleset so that the block is appended to the chain. The integrity perspective of the blockchain practically prevents historical data from being modified or corrupted ex post. Blocks include a hash value of the previous block, so that a chain back to the initial (genesis) block is established. Data modifications are easy to detect, because the hash value of the respective block changes, and the change propagates through the chain towards the latest block.
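As an illustration of such a protocol-given ruleset, the balance rule mentioned above can be sketched as follows (a toy model with invented names; actual Bitcoin semantics are based on unspent transaction outputs rather than account balances):

```python
# Toy ruleset check (our own simplification): a transaction is only
# accepted and applied if the sender's balance covers the amount.
def valid_transaction(balances, sender, amount):
    return amount > 0 and balances.get(sender, 0) >= amount

def apply_transaction(balances, sender, receiver, amount):
    if not valid_transaction(balances, sender, amount):
        raise ValueError("rejected by ruleset")
    balances[sender] -= amount
    balances[receiver] = balances.get(receiver, 0) + amount

ledger = {"alice": 10}
apply_transaction(ledger, "alice", "bob", 4)
print(ledger)  # -> {'alice': 6, 'bob': 4}
```

Every node applies the same ruleset independently, which is what allows the network to reject invalid blocks without a central authority.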
Convincing the network that a corrupted sibling chain is the valid one is prevented by the laborious calculation of blocks and the rule to follow the longest chain in the network. We will also exploit the extensibility perspective of blockchains. For our purposes, we define this as the possibility to modify the ruleset for database inserts. The level of extensibility culminates in the Ethereum blockchain, where arbitrary (Turing-complete) programs (smart contracts) can be executed.

Speaking in BPM terms, the technical perspective provides a common IT infrastructure. The blockchain can be installed as a private chain, which allows the required access control to processes and data. The functional perspective prevents centralization, because the consensus mechanism ensures that the participants decide over new data, i.e. the process state, autonomously as a group. The integrity perspective establishes a trusted layer. Each participant stores a copy of the data locally and is free to manipulate it in his favour, but after the data has been propagated and accepted by the network, such manipulations are detected easily. We rely on the extensibility perspective and design a smart contract that defines a process (model)-aware ruleset and so enforces model compliance, including automated data-based routing during execution. In essence, the blockchain can satisfy the requirements of interorganizational process management identified in Sect. 2.
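The hash chaining behind the integrity perspective can be made concrete with a minimal sketch (standard library only; helper names are ours, and real chains hash structured block headers and Merkle roots rather than raw strings):

```python
import hashlib

def block_hash(prev_hash, payload):
    # Each block's hash covers its predecessor's hash and its payload.
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64          # placeholder genesis predecessor
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"prev": prev, "payload": p, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["payload"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["start case 1", "task A done", "task B done"])
assert is_valid(chain)
chain[1]["payload"] = "task X done"   # ex-post manipulation ...
assert not is_valid(chain)            # ... is detected immediately
```

Changing any historical payload invalidates that block's hash and, transitively, every later block, which is exactly why ex-post manipulation is easy to detect.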


The operability of a blockchain is not safeguarded by design but ensured at run time. Attacks are theoretically possible, but with economically incentivized consensus mechanisms, attempted manipulation causes the loss of money whereas canonical behaviour is rewarded. This is funded by fee-based transactions and on-chain storage. To minimize the costs of process execution, we focus on a small set of BPMN elements instead of supporting a full-blown modelling language.

4 On-Chain Interpreted Execution of BPMN-Processes

The next two sections introduce the design of our decentralized control implementation. Semantics for certain BPMN modelling elements are defined to model interorganizational processes, which are then executed within a decentralized management system. In [24], a novel approach is presented to interpret BPMN-notated models on the Ethereum blockchain. We briefly recap these preliminaries and extend the architecture in Sect. 5 with data values to facilitate data-based routing of the control-flow.

4.1 Execution Semantics

The BPMN specification defines the execution semantics of the modelling language in textual form with references to the token game of a Petri net, but a formal definition regarding the execution is missing for some elements in the specification, especially in the interorganizational context. BPMN uses pools to model participants ([21] §7.3.1) but does not reference them in the section on execution semantics ([21] §13). Furthermore, the meaning of lanes is intentionally not specified ([21] §10.8). In consequence, multiple interpretations can be found in different systems: sometimes, pools are used for a short process description and process participants are depicted in lanes [16], or Camunda¹ ignores pools and lanes completely and uses its own account system. To avoid any discrepancies, we propose execution semantics for rudimentary BPMN elements and adapt them to the needs of our architecture. The upcoming definitions regarding the execution semantics are built upon requirements and pre-/post-conditions to model the intermediary contract of the interorganizational process. Informally speaking, prior to the execution of a task, we check that all preconditions of this task are fulfilled, for instance that the preceding task was executed, that the task is not locked due to the execution of concurrent tasks, and that all data conditions are fulfilled (cf. Sect. 5). As stated, we do not claim to be BPMN compliant. For instance, we use pools as the process container, and lanes are mapped to the responsible participants so that the control-flow can be distributed over all participants.

¹ A widely used open-source BPMS: https://camunda.com/.

4.2 Sequence and Parallel Gateway

We define an interorganizational process $IOP$ as a 5-tuple $IOP = (\mathcal{T}, \mathcal{G}, \Sigma, \mathcal{F}, \mathcal{L})$, referring to tasks, gateways, sequence flows, fulfilled tasks and locked tasks.

Tasks and Gateways. $\mathcal{T} = \tilde{\mathcal{T}} \cup \{\blacktriangleright\}$ unions the set of tasks $\tilde{\mathcal{T}}$ in the model with the start node, denoted as $\blacktriangleright$, and $\mathcal{G}$ comprises the set of gateways. The function $\gamma : \mathcal{G} \to \{\times, \circ, +\}$ maps gateways to their respective type in the model: $+$ for a parallel gateway, $\times$ for an exclusive gateway and $\circ$ for an inclusive gateway (cf. Sect. 5).

Control-Flow. The set
$$\Sigma = \{\sigma_i \mid i \in \mathbb{N}\} = \{(\sigma_i^1, \sigma_i^2) \in (\mathcal{T} \cup \mathcal{G}) \times (\mathcal{T} \cup \mathcal{G}) \mid \Xi,\; i \in \mathbb{N}\}$$
with condition $\Xi$ describes the control-flow. $\Xi$ claims for each tuple $(\sigma^1, \sigma^2) \in \Sigma$ a corresponding sequence flow from $\sigma^1$ to $\sigma^2$ in the given BPMN model. We define a helper function $\kappa : \mathcal{G} \to \{\prec, \succ\}$ for determining whether a gateway is a forking gateway ($\prec$) or a merging gateway ($\succ$):
$$\kappa : g \mapsto \begin{cases} \prec, & |\{(g, t) \in \Sigma \mid t \in \mathcal{T}\}| > 1 \\ \succ, & |\{(g, t) \in \Sigma \mid t \in \mathcal{T}\}| = 1 \end{cases}$$
For readability purposes, the notation $g_\prec$ is used for forking gateways ($\kappa(g) = \prec$) and $g_\succ$ for merging gateways ($\kappa(g) = \succ$). The gateway type is denoted in superscript, e.g. a merging exclusive gateway is denoted as $g_\succ^\times$ ($\kappa(g) = \succ$, $\gamma(g) = \times$).

Preconditions. For a task $t$ to be executable, the following conditions must apply: the task is neither locked ($t \notin \mathcal{L}$, cf. the data-based postconditions in Sect. 5) nor finished ($t \notin \mathcal{F}$, cf. the postconditions below). Henceforth, let $\Theta$ be an abbreviation for $t \notin \mathcal{F} \cup \mathcal{L}$. Furthermore, a task execution is bound to requirements, i.e. tasks which must have been executed before. We define the requirements as a function $\rho : \mathcal{T} \to \mathcal{P}(\mathcal{T})$.² In the following definition of $\rho$, let $\tilde{t} \in \mathcal{T}$ and $g \in \mathcal{G}$:
$$\rho : t \mapsto \begin{cases} \emptyset, & (\blacktriangleright, t) \in \Sigma \\ \{\tilde{t}\}, & \exists \tilde{t} : (\tilde{t}, t) \in \Sigma \\ \{\tilde{t}\}, & \exists \tilde{t}\, \exists g_\prec : (\tilde{t}, g_\prec) \in \Sigma \wedge (g_\prec, t) \in \Sigma \\ \tilde{T}, & \exists g_\succ^+ \in \mathcal{G} : (g_\succ^+, t) \in \Sigma \end{cases}$$
with $\tilde{T} = \{\tilde{t} \mid \{(\tilde{t}, g_\succ^+), (g_\succ^+, t)\} \subset \Sigma\}$.

² $\mathcal{P}(\mathcal{T})$ denotes the power set of $\mathcal{T}$.


Consider Fig. 4. Task A does not have any requirements, because it directly follows the start node (cf. line 1 of ρ). Task A itself is a requirement for B, because a sequence flow exists in between (line 2). Before executing the tasks after the forking gateway (C and D), B must have been executed (line 3). And finally, the task after the merging gateway (G) must wait for the final tasks of the respective flows, which are F and D in the process model in Fig. 4 (line 4). We now define the preconditions for an execution of t as a function P : T → {✓, ✗}. Again, let t̃ ∈ T and g ∈ G unless otherwise stated. P(t) = ✓ indicates that the task t is executable and, vice versa, P(t) = ✗ indicates that the task t is not executable.

\[
P : t \mapsto
\begin{cases}
✓, & \exists \tilde{t} : (\tilde{t}, t) \in \Sigma \wedge (\rho(t) \subseteq F) \wedge \Theta\\
✓, & \exists g_{\prec}^{+} : (g_{\prec}^{+}, t) \in \Sigma \wedge (\rho(t) \subseteq F) \wedge \Theta\\
✓, & \exists\ {}_{\succ}g^{+} : ({}_{\succ}g^{+}, t) \in \Sigma \wedge (\rho(t) \subseteq F) \wedge \Theta\\
✗, & t \in F \vee t \in L\\
✗, & \text{else}
\end{cases}
\]

Organizational Perspective. We use BPMN lanes to determine the respective resource, i.e. the actor responsible for executing a task. The interaction with blockchain technology is based on cryptographic public keys, which are used as the lanes' names. In Fig. 4, 0x01E... is responsible for the tasks B and D. We omit the mathematical foundations here due to limited space, but the system ensures that tasks are only executable by the respective resource.

Postconditions. After a task t was executed, denoted as t✓, it is no longer executable and is included in the set of finished tasks: F = F ∪ {t✓}
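The pre- and postconditions can be sketched as follows. This is a minimal sketch under our own naming: the requirements dict stands in for ρ (precomputed here for the process of Fig. 4), and the function names executable/execute are illustrative.

```python
# Illustrative sketch: the executability check P and the execution postcondition.
requirements = {"A": set(), "B": {"A"}, "C": {"B"}, "D": {"B"},
                "E": {"C"}, "F": {"E"}, "G": {"F", "D"}}  # rho, precomputed
finished, locked = set(), set()  # the sets F and L of the 5-tuple

def executable(t):
    """P(t) = yes: t is neither finished nor locked (Theta) and rho(t) is a subset of F."""
    return t not in finished and t not in locked and requirements[t] <= finished

def execute(t):
    """Postcondition: a finished task joins F and is thereby no longer executable."""
    if not executable(t):
        raise RuntimeError(f"task {t} is not executable")
    finished.add(t)
```

After executing A and B, both parallel branches (C/E/F and D) become executable, while G stays blocked until F and D are finished.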

Fig. 4. A process with requirements in parallelograms.

Fig. 5. A process with decisions in dark hexagons and competitors in light trapezoids.

Decentralized Control Workflow Interoperability

5 Modelling and Executing Data-Based Decisions

Data is essential to processes from an application-oriented point of view and is one of the five important aspects identified and conflated in the multi-perspective process model [10]. In the context of blockchain-based process execution, the functional and behavioural perspectives were addressed first, and organizational aspects have so far only been considered rudimentarily. In this section, we focus on the informational perspective by introducing data-based decisions that control the flow during the execution of a process. In contrast to current systems, where global data is either centralized (CS) or not available (LCWs), the blockchain as operator is here responsible for holding all process data. The blockchain must consequently be the authority which decides on the control-flow in the case of data-based decisions, whether these are triggered automatically or based on human input. We refer to the definitions given in Sect. 4 and extend these in terms of data-based decisions.

5.1 Modeling Decisions in BPMN

The operation of a business process is rarely predictable a priori but often determined dynamically based on internal data or external interactions and decisions. BPMN provides constructs to model the splitting of the control-flow based on conditions (data-based routing). Here, we focus on exclusive gateways ([21] §10.6.2), inclusive gateways (§10.6.3) and parallel gateways (§10.6.4). To illustrate the usage of data-based routing, we refer to Fig. 5. After the execution of task B, the current value of z determines the upcoming valid execution path(s). The conditions are labeled on the gateway's outgoing sequence flows; for example, C is enabled for z = 5, D is enabled for z = 0 and E is enabled for z = 15.
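The routing semantics of the three gateway types can be sketched as follows. The ASCII type tags ("+", "x", "o" for +, ×, ◦), the function name and the concrete conditions are our assumptions for illustration, not the system's API.

```python
# Illustrative sketch: which outgoing sequence flows a forking gateway enables,
# by gateway type ("+" parallel, "x" exclusive, "o" inclusive).
def enabled_flows(gateway_type, outgoing, z):
    """outgoing: list of (target task, condition) pairs; conditions are predicates on z."""
    matching = [t for (t, cond) in outgoing if cond(z)]
    if gateway_type == "+":   # parallel: every path is taken, conditions ignored
        return [t for (t, _) in outgoing]
    if gateway_type == "x":   # exclusive: exactly one matching path is taken
        return matching[:1]
    if gateway_type == "o":   # inclusive: all matching paths are taken
        return matching
    raise ValueError(f"unknown gateway type: {gateway_type}")

# Conditions from the text: C for z = 5, D for z = 0, E for z = 15.
OUTGOING = [("C", lambda z: z == 5), ("D", lambda z: z == 0),
            ("E", lambda z: z == 15)]
```

For z = 5 an exclusive gateway enables only C, while a parallel gateway would enable all three paths regardless of z.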

5.2 Formalization of the Execution Semantics

Data Storage. We extend the interorganizational process IOP by a global data store Φ, with

Φ = {φᵢ | i ∈ ℕ} = {(φ¹ᵢ, φ²ᵢ) | i ∈ ℕ}.

Φ is responsible for holding all in-process data variables which may be consulted in data-based decisions during process execution, where φ¹ refers to the variable (e.g. z in Fig. 5) and φ² is its current value, which may be updated on occasion. Note that we refer to the value of φᵢ only at the time of evaluation and neglect changes over time.

Conditional Control-Flow. If a gateway splits the control-flow in BPMN, the labels on the outgoing sequence flows determine the set of valid execution paths. For this purpose, we introduce the set of decisions D, with

D = {dᵢ | i ∈ ℕ} = {(φ, λ, o)ᵢ | i ∈ ℕ},


where φ ∈ Φ is the global payload and λ is the reference value on the sequence flow. We define λ here informally as a number, a text or a numerical interval. λ is evaluated against the value of the global payload, φ², using a suitable relational operator o (e.g. >,
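The decision triples (φ, λ, o) can be evaluated as in the following sketch. The dict encoding of Φ, the operator table and the "in" tag for intervals are our assumptions; the paper leaves λ informal.

```python
# Illustrative sketch: evaluating a decision d = (phi, lambda, o) against the
# global data store Phi.
import operator

phi_store = {"z": 5}  # global data store Phi: variable -> current value

OPS = {">": operator.gt, "<": operator.lt, "==": operator.eq,
       ">=": operator.ge, "<=": operator.le}

def evaluate(decision):
    """True iff the current value of the variable satisfies the labeled condition."""
    var, lam, op = decision
    value = phi_store[var]
    if op == "in":            # numerical interval, e.g. [5, 9]
        low, high = lam
        return low <= value <= high
    return OPS[op](value, lam)
```

With z currently 5, the decision ("z", 5, "==") holds, ("z", (5, 9), "in") holds, and ("z", 10, ">") does not.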