Systems, Software and Services Process Improvement: 30th European Conference, EuroSPI 2023, Grenoble, France, August 30 – September 1, 2023, Proceedings, Part I (Communications in Computer and Information Science, 1890). ISBN 3031423062, 9783031423062

This two-volume set constitutes the refereed proceedings of the 30th European Conference on Systems, Software and Services Process Improvement, EuroSPI 2023.


English, 407 pages, 2023


Table of contents:
Preface
Recommended Further Reading
Acknowledgements
Organization
Contents – Part I
Contents – Part II
SPI and Emerging and Multidisciplinary Approaches to Software Engineering
Sustained Enablement of AI Ethics in Industry
1 Introduction
2 State of the Art
2.1 Terms in Research Around AI Ethics
2.2 Trustworthy AI Values in Research
2.3 Enablement Approaches in AI Ethics
3 Research Question and Methodology
4 Development of the Enablement of AI Ethics
4.1 Introduction to a Practical AI Ethics Concept
4.2 Trustworthy AI Values
4.3 AI Ethics Action Fields
4.4 Embedding AI Ethics into Industrial Context
5 Enabling of AI Ethics
5.1 Feasibility Aspects of AI Ethics in Industry
5.2 Measurement Approach for AI Ethics
6 Discussion
7 Conclusions
8 Relationship with the SPI Manifesto
References
Investigating Sources and Effects of Bias in AI-Based Systems – Results from an MLR
1 Introduction
2 Research Methodology
2.1 Methodology
2.2 Search Queries
2.3 Inclusion/Exclusion
2.4 Methodology Limitations
3 Literature Review
3.1 RQ1: What Are AI-Based Systems?
3.2 RQ2: What Are the Types of Bias in AI-Based Systems?
3.3 RQ3: What Are the Potential Risks/Effects of Bias in AI-Based Systems?
3.4 RQ4: How Can Bias Be Resolved/Prevented in AI-Based Systems?
4 Limitations of Research
5 Directions for Future Research
6 Conclusion
References
Quality Assurance in Low-Code Applications
1 Introduction
2 Quality Assurance Factors
2.1 Layouts and Corporate Designs
2.2 Adherence to Internal Processes
2.3 Performance and Security
3 Quality Assurance Approach
3.1 Application Meta-model
3.2 Company Rules (prosa)
3.3 Formalized Rules
3.4 Low-Code Application
3.5 Application Model
3.6 Quality Engine
3.7 Test Results
3.8 Knowledge Base
3.9 Feedback Generator
4 Initial Evaluation: Microsoft PowerApps
4.1 Application “Meta-model”
4.2 Low-Code Application for Test
4.3 Company Rules (prosa)
4.4 Formalized Rules
4.5 Quality Engine and Feedback Generator
4.6 Feedback from Low-Code Developers
5 Related Work
6 Summary and Outlook
References
Towards a DevSecOps-Enabled Framework for Risk Management of Critical Infrastructures
1 Introduction
2 Related Work
3 Towards a DevSecOps Environment that Fosters Risk Management
3.1 Feasibility Analysis
3.2 Risk Management Perspective
3.3 DevSecOps Proposal
3.4 Evaluation by Experts
3.5 Improvements (Continuous Feedback)
3.6 Implementation
3.7 Critical Infrastructure Performance
4 Conclusion and Future Work
References
Gamified Focus Group for Empirical Research in Software Engineering: A Case Study
1 Introduction
2 Background
3 Method
4 Results
4.1 Gamified Focus Group
4.2 Execution of Gamified Focus Group
4.3 Consolidation of Results
5 Discussion
6 Conclusions and Future Work
References
Exploring Metaverse-Based Digital Governance of Gambia: Obstacles, Citizen Perspectives, and Key Factors for Success
1 Introduction
2 Literature Review
2.1 Digital Governance in The Gambia
2.2 The Metaverse
3 Research Methodology
3.1 Research Design
3.2 Sample Selection
3.3 Data Collection Methods
3.4 Descriptive Analysis
4 Results
4.1 Validation of the Measurement Scales
4.2 Hypothesis Testing
5 Discussion
6 Conclusion
References
Identification of the Personal Skills Using Games
1 Introduction
2 Related Works
3 Process to Select or Develop Games for Analyzing Soft Skills
4 Games to Evaluate Flexibility to Change
4.1 Case Study Design and Performance
4.2 Case Study Performance
4.3 Analysis of Data Collected
4.4 Results
4.5 Discussion
5 Conclusion
References
Identifying Key Factors to Distinguish Artificial and Human Avatars in the Metaverse: Insights from Software Practitioners
1 Introduction
2 Background
3 Methodology
4 Findings
5 Conclusion and Future Work
References
Digitalisation of Industry, Infrastructure and E-Mobility
An Approach to the Instantiation of the EU AI Act: A Level of Done Derivation and a Case Study from the Automotive Domain
1 Introduction
2 Literature
3 Methodology
4 Derivation of LoD Layer EU AI Act
5 Case Study for Crosscheck Validation in a Real Project Setup
6 Discussion and Limitations
7 Conclusion and Outlook
References
An Investigation of Green Software Engineering
1 Introduction
2 Research Methodology
2.1 Search Strings
2.2 Inclusion/Exclusion
3 Analysis
3.1 What Are the Principles and Practices of Green Software Engineering?
3.2 How Can Software Developers Reduce the Energy Consumption of Software?
3.3 What Are the Challenges Facing Green Software Engineering?
3.4 Green Software Engineering is Part of Green System Engineering?
3.5 How Can We Measure and Quantify the Impact of Green Software Engineering on the Environment?
4 Research Limitations
5 Directions for Future Research
6 Conclusions
References
Developing a Blueprint for Vocational Qualification of Blockchain Specialists Under the European CHAISE Initiative
1 Introduction
2 The Problem Background
3 Research Question and Methodology
4 Scope and Structure of the CHAISE Blueprint
5 Occupational Profiles
5.1 Blockchain Developer
5.2 Blockchain Architect
5.3 Blockchain Manager
6 Program Specifications and Delivery
6.1 Program Outline
6.2 Learning Outcomes
6.3 Entry Requirements
6.4 Thematic Coverage
6.5 Delivery Methods
6.6 Assessment Criteria
7 Certification Pathways
7.1 National Certification
7.2 ECQA Certification
7.3 Grading and Passing Requirements
8 Requirements for Training Providers
8.1 Resources and Equipment
8.2 Teaching Staff
9 Summary, Conclusion and Outlook
10 Relationship with the SPI Manifesto
References
Trustful Model-Based Information Exchange in Collaborative Engineering
1 Collaboration Along the Supply Chain
2 Related Work and Background
2.1 Collaborative Engineering
2.2 Threat Modeling
2.3 ISO/IEC 270xx
3 Research Approach
4 Insights and Assumptions
5 Collaborative Engineering Scenario
6 Data Security Threats
7 Data Security Guidelines
7.1 Data Minimization
7.2 ISO/IEC 27010-Compliant Model Exchange
8 Discussion
9 Conclusion
References
Supporting the Growth of Electric Vehicle Market Through the E-DRIVETOUR Educational Program
1 Introduction
2 The E-DRIVETOUR Initiative
3 Training Curriculum
4 Program and Course Assessment
5 Project Sustainability
6 Discussion and Conclusions
7 Relationship with the SPI Manifesto
References
Towards User-Centric Design Guidelines for PaaS Systems: The Case of Home Appliances
1 Introduction
2 State of the Art
2.1 PaaS Systems Concepts
2.2 Design and Consumer Acceptance
2.3 Discrete Choice Experiment
3 Methodology
4 Results
5 Conclusions
6 Relationship with the SPI Manifesto
References
Boosting the EU-Wide Collaboration on Skills Agenda in the Automotive-Mobility Ecosystem
1 Background
1.1 Automotive Skills Alliance (ASA) - PfS Large-Scale Partnership
1.2 Project FLAMENCO - Boosting the Collaboration
2 Research on Collaboration Needs and Requirements
2.1 Collaboration Needs
2.2 Collaboration Framework
2.3 Collaboration Outcomes
2.4 Collaboration Challenges and Risks Encountered in Past Collaboration
3 A Way Forward
4 Relationship with the SPI Manifesto
References
Automotive Data Management SPICE Assessment – Comparison of Process Assessment Models
1 Introduction
2 Background and Related Work
2.1 Automotive SPICE Assessment
2.2 Data Management
2.3 Current Research Status
3 Research Question and Methodology
4 Data Management SPICE
4.1 An Automotive Example Scenario and Its Interpretation
5 PAM Mapping and Analysis
6 Evaluation and Discussion
7 Conclusion and Outlook
8 Relationship with the SPI Manifesto
References
A Knowledge Management Strategy for Seamless Compliance with the Machinery Regulation
1 Introduction
2 Background
2.1 Machinery Directive
2.2 Machinery Directive-Related Harmonised Standards
2.3 Machinery Regulation
2.4 MR-Related Regulations
2.5 Centrifugal Pumps
2.6 Multi-concern Assurance
2.7 Variability Management via Base Variability Resolution
3 Knowledge Management Strategy
4 Exemplification of the Strategy
5 Discussion and Synergy with the SPI Manifesto
6 Related Work
7 Conclusion and Future Work
References
SPI and Good/Bad SPI Practices in Improvement
Corporate Memory – Fighting Rework with a Simple Principle and a Practical Implementation
1 Introduction
2 The Problem Today
3 Corporate Memory Principle
4 Grundfos Case
4.1 Requirements
4.2 Features
4.3 Work
4.4 Test
4.5 Correlation Between Requirements and Test
4.6 Risks
4.7 Baselining
4.8 Application - Production and Traceability
4.9 Grand Model
4.10 Overview
4.11 Credits
5 Conclusion
6 Relationship with the SPI Manifesto
References
Managing Ethical Requirements Elicitation
1 Introduction
1.1 Requirements Elicitation
1.2 Computer Ethics
1.3 SPI Manifesto
2 Ethical Challenges in Requirements Elicitation: A Literature Review
3 Defensible Ethical Principles
3.1 Deontological Principles
3.2 Teleological Principles
4 Heuristics
5 Conclusions
References
Process Improvement Based on Symptom Analysis
1 Introduction
2 Literature
3 Method
4 A Survey of Symptoms
5 Symptoms, Causes, and Consequences
6 Mapping from Root Causes to CMMI
6.1 Symptom 22 “We Don’t Know our Performance on Different Tasks”
6.2 Symptom 24: “We Cannot Get the Competences and Resources Needed for the Project”
7 Discussion
8 Conclusion
9 Further Work and the Relationship to the SPI Manifesto
References
SPI and Functional Safety and Cybersecurity
The New Cybersecurity Challenges and Demands for Automotive Organisations and Projects - An Insight View
1 Introduction
2 Example Cybersecurity Item Definition
3 TARA Explanation and Hints
3.1 Asset Identification and Impact Rating
3.2 Threat Scenario Identification
3.3 Attack Path Analysis and Attack Feasibility Rating
3.4 Risk Value Determination and Risk Treatment Decision
3.5 General Remarks/Hints
4 Cybersecurity Design and Requirements at System Level
5 Cybersecurity Design and Requirements at Software Level
6 Automotive SPICE® for Cybersecurity Experiences so Far and Outlook
References
An Open Software-Based Framework for Automotive Cybersecurity Testing
1 Introduction
2 Related Work
2.1 Automotive Security Testbed Implementations
2.2 PENNE Inspiration: PASTA Portable Automotive Security Testbed with Adaptability
3 PENNE Testbed Design
3.1 Vehicle Function ECUs
3.2 Graphical User Interface
3.3 Attacks of the PENNE Testbed
3.4 Security Measures of the PENNE Testbed
4 Results and Evaluation
4.1 Testbed Capabilities
4.2 Message Timings
4.3 Attack Prevention
5 Conclusion
6 Relation to SPI Manifesto
References
Requirements Engineering for Cyber-Physical Products
1 Introduction
2 Literature Review
3 Hypotheses and Objectives
3.1 Hypothesis
3.2 Research Questions
4 Sample Concepts
4.1 Learning from Samples
4.2 Avoiding Racial or Gender Bias
4.3 Protecting Lives
4.4 Learning Concepts
5 Continuous Requirements Engineering
5.1 Collecting Feedback and User Reactions
5.2 Net Promoter® Score (NPS) for Interpreting Feedback
5.3 Importance and Satisfaction with a Product
5.4 Automatization Within the DevOps Cycle
6 Requirements Elicitation
6.1 Modeling Functionality
6.2 Functional Effectiveness
6.3 Embracing Change
6.4 Requirements for Intelligent Systems Using Concepts
7 Limitations
8 Conclusions
References
Consistency of Cybersecurity Process and Product Assessments in the Automotive Domain
1 Introduction
2 Types of Audits
2.1 Automotive SPICE for Cybersecurity
2.2 Automotive Cybersecurity Management System (ACSMS) Audit
2.3 ISO 21434 Product Assessment
3 Implementation of Links Between Product Evaluation and Project Assessment
3.1 Background
3.2 Discussion
3.3 Conclusion
3.4 Implementation
4 Conclusion
5 Relation to SPI Manifesto
References
A Low-Cost Environment for Teaching Fundamental Cybersecurity Concepts in CPS
1 Introduction
2 Related Works and Concepts
3 Methodology
4 Experimentation on Cyberattacks-Based Scenarios
4.1 Procedure and Steps for Experimental Setup
4.2 Cyberattack Scenarios
4.3 Discussion
5 Conclusion and Future Work
6 Relationship with the SPI Manifesto
References
CYBERENG - Training Cybersecurity Engineer and Manager Skills in Automotive - Experience
1 Introduction
2 Automotive Cybersecurity Training Concept and Highlights
3 Training Units and Elements - Overview and Knowledge Examples
3.1 U.1 Cybersecurity Management
3.2 U.2 Cybersecurity Operation and Maintenance
3.3 U.3 Engineering Aspects of Cybersecurity
3.4 U.4 Testing Aspects of Cybersecurity
4 Course Implementation – Experience
5 Summary and Conclusions
6 Future of the CYBERENG Training
7 Relation to SPI Manifesto
References
Author Index

Murat Yilmaz Paul Clarke Andreas Riel Richard Messnarz (Eds.)

Communications in Computer and Information Science

1890

Systems, Software and Services Process Improvement 30th European Conference, EuroSPI 2023 Grenoble, France, August 30 – September 1, 2023 Proceedings, Part I

Communications in Computer and Information Science

Editorial Board Members

Joaquim Filipe, Polytechnic Institute of Setúbal, Setúbal, Portugal
Ashish Ghosh, Indian Statistical Institute, Kolkata, India
Raquel Oliveira Prates, Federal University of Minas Gerais (UFMG), Belo Horizonte, Brazil
Lizhu Zhou, Tsinghua University, Beijing, China


Rationale

The CCIS series is devoted to the publication of proceedings of computer science conferences. Its aim is to efficiently disseminate original research results in informatics in printed and electronic form. While the focus is on publication of peer-reviewed full papers presenting mature work, inclusion of reviewed short papers reporting on work in progress is welcome, too. Besides globally relevant meetings with internationally representative program committees guaranteeing a strict peer-reviewing and paper selection process, conferences run by societies or of high regional or national relevance are also considered for publication.

Topics

The topical scope of CCIS spans the entire spectrum of informatics ranging from foundational topics in the theory of computing to information and communications science and technology and a broad variety of interdisciplinary application fields.

Information for Volume Editors and Authors

Publication in CCIS is free of charge. No royalties are paid; however, we offer registered conference participants temporary free access to the online version of the conference proceedings on SpringerLink (http://link.springer.com) by means of an http referrer from the conference website and/or a number of complimentary printed copies, as specified in the official acceptance email of the event. CCIS proceedings can be published in time for distribution at conferences or as postproceedings, and delivered in the form of printed books and/or electronically as USBs and/or e-content licenses for accessing proceedings at SpringerLink. Furthermore, CCIS proceedings are included in the CCIS electronic book series hosted in the SpringerLink digital library at http://link.springer.com/bookseries/7899. Conferences publishing in CCIS are allowed to use the Online Conference Service (OCS) for managing the whole proceedings lifecycle (from submission and reviewing to preparing for publication) free of charge.
Publication process

The language of publication is exclusively English. Authors publishing in CCIS have to sign the Springer CCIS copyright transfer form; however, they are free to use their material published in CCIS for substantially changed, more elaborate subsequent publications elsewhere. For the preparation of the camera-ready papers/files, authors have to strictly adhere to the Springer CCIS Authors’ Instructions and are strongly encouraged to use the CCIS LaTeX style files or templates.

Abstracting/Indexing

CCIS is abstracted/indexed in DBLP, Google Scholar, EI-Compendex, Mathematical Reviews, SCImago, Scopus. CCIS volumes are also submitted for inclusion in ISI Proceedings.

How to start

To start the evaluation of your proposal for inclusion in the CCIS series, please send an e-mail to [email protected].


Editors

Murat Yilmaz, Gazi University, Ankara, Türkiye
Paul Clarke, Dublin City University, Dublin, Ireland
Andreas Riel, Grenoble Alpes University, Grenoble, France
Richard Messnarz, I.S.C.N. GesmbH, Graz, Austria

ISSN 1865-0929 ISSN 1865-0937 (electronic)
Communications in Computer and Information Science
ISBN 978-3-031-42306-2 ISBN 978-3-031-42307-9 (eBook)
https://doi.org/10.1007/978-3-031-42307-9

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG. The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Preface

This book comprises the proceedings of the 30th Systems, Software and Service Process Improvement and Innovation (EuroSPI) Conference, held during August 30 – September 1, 2023 in Grenoble, France. Earlier conferences were held in Dublin (Ireland, 1994), Vienna (Austria, 1995), Budapest (Hungary, 1997), Gothenburg (Sweden, 1998), Pori (Finland, 1999), Copenhagen (Denmark, 2000), Limerick (Ireland, 2001), Nuremberg (Germany, 2002), Graz (Austria, 2003), Trondheim (Norway, 2004), Budapest (Hungary, 2005), Joensuu (Finland, 2006), Potsdam (Germany, 2007), Dublin (Ireland, 2008), Alcala (Spain, 2009), Grenoble (France, 2010), Roskilde (Denmark, 2011), Vienna (Austria, 2012), Dundalk (Ireland, 2013), Luxembourg (2014), Ankara (Turkey, 2015), Graz (Austria, 2016), Ostrava (Czech Republic, 2017), Bilbao (Spain, 2018), Edinburgh (UK, 2019), Düsseldorf (Germany, 2020), Krems (Austria, 2021), and Salzburg (Austria, 2022). The EuroSPI conference series (and book series) was established in 1994 as a leading conference in the area of Systems, Software, and Service Process and Product Improvement and Innovation, with contributions from leading industry and leading researchers. SOQRATES, a working group of leading German and Austrian industry partners, started in 2003 and has been moderated by the chair of EuroSPI since then; it contributes to the thematic workshops organized at EuroSPI, helping to define the state of the art in system design, safety and cybersecurity, assessments, quality management, agile processes, standards, etc. The EuroSPI academy started in 2020 (based on the learning-compass concept for the European automotive industry from the EU Blueprint project DRIVES) and within a year had trained many hundreds of participants; the DRIVES learning portal now has more than 2000 MOOC trainees.
The exam systems originally developed to support ECQA have been adapted and integrated to support a Europe-wide certification and exam system under the European System, Software, Service Process Improvement EuroSPI Certificates & Services GesmbH, which brings all these activities under one umbrella. EuroSPI is an initiative with the following major action lines (http://www.eurospi.net): • Establishing an annual EuroSPI conference supported by software process improvement networks from different EU countries (https://conference.eurospi.net). • Establishing a Europe-wide academy with online training for qualifications related to the content discussed in the EuroSPI conference series (https://academy.eurospi.net). • Establishing an exam and certification system for all the trainings offered in the EuroSPI academy (https://www.iscn.com/projects/exam_portal/index_asa.php). • EuroSPI entered a strategic partnership with ASA (Automotive Skills Alliance) in Brussels, which includes the association of all car makers and all suppliers in the automotive industry in Europe, and EuroSPI certificates are issued in cooperation with ASA. EuroSPI has also had a special ASA representative as a conference board member since 2022 (https://automotive-skills-alliance.eu/).


• EuroSPI runs its own social media and YouTube channel (https://www.youtube.com/channel/UCIQfOm8-ycv8gY2BuuhKDMQ/videos).

EuroSPI has cooperated with the ASA (Automotive Skills Alliance) from 2022 onwards; the ASA is the result of the EU Blueprint for Automotive project DRIVES (2018–2021), in which leading automotive organizations discussed and presented skills for the Europe 2030 strategy in the automotive sector. EuroSPI also cooperates with the EU Blueprint for Battery Systems ALBATTS (2020–2023), in which leading industrial organizations discuss and present skills for the creation of battery production in Europe for cars, ships, planes, industrial plants, etc. EuroSPI established the SPI Manifesto (SPI: Systems, Software and Services Process Improvement) at EuroSPI 2009 in Alcala, Spain, and this manifesto still provides a framework for improvement and innovation in organizations. From 2013 onwards, new communities (cybersecurity, Internet of Things, Agile) joined EuroSPI, and the meaning of the letter “S” extended to “System, Software, Service, Safety, and Security”, while the letter “I” extended to mean “Improvement, Innovation, and Infrastructure (Internet of Things)”.

In memory of our dear friend and long-term EuroSPI conference series editor, Rory O’Connor of Dublin City University and Lero (the Science Foundation Ireland Research Centre for Software), the committee has, in collaboration with ISCN, ASQ and Lero, established the Rory O’Connor Award for Research Excellence. On an annual basis, the individual presenting the highest-quality work to the conference audience, especially in areas of major importance to our field, is awarded this honor.

A typical characterization of EuroSPI is reflected in a statement made by a company: “... the biggest value of EuroSPI lies in its function as a European knowledge and experience exchange mechanism for SPI and innovation.” Since its beginning in 1994 in Dublin, the EuroSPI initiative has shown that there is no single silver bullet with which to solve SPI issues; rather, one needs to understand and combine different SPI methods and approaches to achieve concrete benefits. Therefore, each proceedings volume covers a variety of topics, and at the conference we discuss potential synergies and the combined use of such methods and approaches.

These proceedings contain 48 research and industrial contributions under nine core themes:

• I: Emerging and Multidisciplinary Approaches to Software Engineering
• II: E-Mobility, Digitalisation of Industry, and Infrastructure
• III: Good Process Improvement Practices
• IV: Functional Safety, Cybersecurity, ADAS, and SOTIF
• V: SPI and Agile
• VI: International Standards and Norms
• VII: Sustainability and Life Cycle Challenges
• VIII: SPI and Recent Innovations
• IX: Virtual Reality and Augmented Reality

For the contributions, only the highest-quality research submissions were accepted. Theme I presents eight papers related to emerging and multidisciplinary software engineering paradigms. Theme II presents nine papers about e-mobility, digitalisation of industry, and infrastructure. Theme III presents three papers about good process improvement practices, while Theme IV contains six papers on functional safety and cybersecurity. Theme V includes five papers focused on Agile software development and SPI. Theme VI presents five papers about international standards and norms. Theme VII presents five further contributions related to sustainability and life cycle challenges, and Theme VIII presents five further contributions focused on recent innovations. Finally, Theme IX contains two contributions focused on virtual reality. To encourage synergy between the best academic and industrial practices, the various core research and industrial contributions were presented side by side at the conference under the nine key themes identified for this EuroSPI edition. September 2023

Murat Yilmaz Paul Clarke Andreas Riel Richard Messnarz

Recommended Further Reading

In [1], the proceedings of three EuroSPI conferences were integrated into a single book, which was edited by 30 experts in Europe. The proceedings of EuroSPI 2005 to 2022 inclusive were published by Springer in [2–19], respectively.

References

1. Messnarz, R., Tully, C. (eds.): Better Software Practice for Business Benefit – Principles and Experience, 409 pages. IEEE Computer Society Press, Los Alamitos (1999)
2. Richardson, I., Abrahamsson, P., Messnarz, R. (eds.): Software Process Improvement. LNCS, vol. 3792. Springer, Heidelberg (2005)
3. Richardson, I., Runeson, P., Messnarz, R. (eds.): Software Process Improvement. LNCS, vol. 4257. Springer, Heidelberg (2006)
4. Abrahamsson, P., Baddoo, N., Margaria, T., Messnarz, R. (eds.): Software Process Improvement. LNCS, vol. 4764. Springer, Heidelberg (2007)
5. O’Connor, R.V., Baddoo, N., Smolander, K., Messnarz, R. (eds.): Software Process Improvement. CCIS, vol. 16. Springer, Heidelberg (2008)
6. O’Connor, R.V., Baddoo, N., Gallego, C., Rejas Muslera, R., Smolander, K., Messnarz, R. (eds.): Software Process Improvement. CCIS, vol. 42. Springer, Heidelberg (2009)
7. Riel, A., O’Connor, R.V., Tichkiewitch, S., Messnarz, R. (eds.): Software, System, and Service Process Improvement. CCIS, vol. 99. Springer, Heidelberg (2010)
8. O’Connor, R., Pries-Heje, J., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 172. Springer (2011)
9. Winkler, D., O’Connor, R.V., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 301. Springer (2012)
10. McCaffery, F., O’Connor, R.V., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 364. Springer (2013)
11. Barafort, B., O’Connor, R.V., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 425. Springer (2014)
12. O’Connor, R.V., Akkaya, M., Kemaneci, K., Yilmaz, M., Poth, A., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 543. Springer (2015)
13. Kreiner, C., Poth, A., O’Connor, R.V., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 633. Springer (2016)
14. Stolfa, J., Stolfa, S., O’Connor, R.V., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 748. Springer (2017)
15. Larrucea, X., Santamaria, I., O’Connor, R.V., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 896. Springer (2018)
16. Walker, A., O’Connor, R.V., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 1060. Springer (2019)
17. Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 1251. Springer (2020)
18. Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 1442. Springer (2021)
19. Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.): Systems, Software and Services Process Improvement. CCIS, vol. 1646. Springer (2022)

Acknowledgements

Some contributions published in this book were funded with support from the European Commission. European projects (supporting ECQA and EuroSPI) that contributed to this Springer volume include the FLAMENCO Project (Automotive Skills Alliance Cooperation Models, Project 101087552), OpenInnotrain (H2020-MSCA-RISE-2018, exchange of researchers), the ALBATTS BLUEPRINT Project (612675-EPP-1-20191-SE-EPPKA2-SSA-B), and TIMS (Agreement Number 2021-1-LV01-KA220-VET000033281, ISO 56000 Innovation Management Norm: Training in Innovation Management System for Sustainable SMEs). These publications reflect the views only of the author(s), and the Commission cannot be held responsible for any use which may be made of the information contained therein. This work was supported, in part, by Science Foundation Ireland grant 13/RC/2094_2 and co-funded under the European Regional Development Fund through the Southern & Eastern Regional Operational Programme to Lero (the Science Foundation Ireland Research Centre for Software, www.lero.ie). These publications reflect the views only of the author(s), and Science Foundation Ireland and Lero cannot be held responsible for any use which may be made of the information contained therein.

Organization

General Chair and Workshop Chair

Richard Messnarz (ISCN GesmbH, Graz, Austria)

Scientific Chairs

Ricardo Colomo-Palacios (Polytechnic University of Madrid, Spain)
Murat Yilmaz (Gazi University, Turkey)
Paul Clarke (Dublin City University, Ireland)
Andreas Riel (Université Grenoble Alpes, Grenoble INP, France)

Organization Chairs

Richard Messnarz (ISCN GesmbH, Graz, Austria)
Andreas Riel (Université Grenoble Alpes, Grenoble INP, France)
Damjan Ekert (ISCN GesmbH, Austria)
Tobias Zehetner (ISCN GesmbH, Austria)
Laura Aschbacher (ISCN GesmbH, Austria)

Local Organization Chairs

Richard Messnarz (ISCN, Austria)
Andreas Riel (Université Grenoble Alpes, Grenoble INP, France)

Emerging and Multidisciplinary Approaches to Software Engineering Co-chairs

Murat Yilmaz (Gazi University, Turkey)
Paul Clarke (Dublin City University, Ireland)
Ricardo Colomo Palacios (Ostfold University College, Norway)
Richard Messnarz (ISCN GesmbH, Graz, Austria)


Digitalisation of Industry, Infrastructure and E-Mobility Co-chairs

Peter Dolejsi (ACEA, The European Automobile Manufacturers Association)
Jakub Stolfa (VSB Ostrava, Czech Republic)
Svatopluk Stolfa (VSB Ostrava, Czech Republic)
Andreas Riel (Université Grenoble Alpes, Grenoble INP, France)
Michael Reiner (University of Applied Sciences Krems, Austria)
Georg Macher (TU Graz, Austria)
Richard Messnarz (ISCN GesmbH, Austria)

Good and Bad Practices in Improvement Co-chairs

Elli Goergiadou (Middlesex University, UK)
Eva Breske (Robert Bosch Engineering, Germany)
Tomas Schweigert (ExpleoGroup, Germany)
Kerstin Siakas (International Hellenic University, Thessaloniki, Greece, and Vaasa University, Finland)
Mirna Munoz (CIMAT, Mexico)

Functional Safety and Cybersecurity Co-chairs
Alexander Much, Elektrobit, Germany
Miklos Biro, SCCH, Austria
Richard Messnarz, ISCN GesmbH, Austria

Experiences with Agile and Lean Co-chairs
Alexander Poth, Volkswagen AG, Germany
Susumu Sasabe, JUSE, Japan
Khaled Badr, VALEO, Egypt
Antonia Mas, University of the Balearic Islands, Spain

Recent Innovations Co-chairs
Bruno Wöran, Merinova, Finland
Georg Macher, TU Graz, Austria
Tom Peisl, Hochschule Munich, Germany
Samer Sameh, VALEO, Egypt
Gabriele Sauberer, ECQA & Termnet, Austria
Joanne Hyland, rInnovationGroup, USA
Richard Messnarz, ISCN, Austria
Laura Aschbacher, ISCN GmbH, Austria

Standards and Assessment Models Co-chairs
Gerhard Griessnig, AVL, Austria
Klaudia Dussa-Zieger, IMBUS, Germany
Samer Sameh, VALEO, Egypt

Sustainability and Life Cycle Challenges Co-chairs
Richard Messnarz, ISCN GesmbH, Graz, Austria
Andreas Riel, Université Grenoble Alpes, Grenoble INP, France

Board Members

EuroSPI Board Members represent centers or networks of SPI excellence having extensive experience with SPI. The board members collaborate with different European SPINs (Software Process Improvement Networks). The following have been members of the conference board for a significant period:

• Richard Messnarz, ISCN, Austria
• Paul Clarke, Dublin City University, Ireland
• Gabriele Sauberer, TermNet, Austria
• Jörg Niemann, University of Applied Sciences Düsseldorf, Germany
• Andreas Riel, Université Grenoble Alpes, Grenoble INP, France
• Miklós Biró, Software Competence Center Hagenberg GmbH, Johannes Kepler Universität Linz, Austria
• Ricardo Colomo-Palacios, Ostfold University, Norway
• Georg Macher, Graz University of Technology, Austria
• Michael Reiner, IMC FH Krems, University of Applied Sciences, Austria
• Murat Yilmaz, Gazi University, Turkey
• Jakub Stolfa, VSB Ostrava, Czech Republic


EuroSPI Scientific and Industry Program Committee

EuroSPI established an international committee of selected well-known experts in SPI who are willing to be mentioned in the program and to review a set of papers each year. The list below represents the Research and Industry Program Committee members. EuroSPI also has a separate Industrial Program Committee responsible for the industry/experience contributions.

EuroSPI2 2023 Scientific Program Committee
Jose A. Calvo-Manzano Villalon, Polytechnic University of Madrid (UPM), Spain
Paul Clarke, Dublin City University, Ireland
Ricardo Colomo Palacios, Ostfold University College, Norway
Jürgen Dobaj, University of Technology Graz, Austria
Elli Georgiadou, Middlesex University, UK
Ebru Gökalb, Hacettepe University, Turkey
Maria Clara Gomez Alvarez, Universidad de Medellin, Colombia
Mario Hirz, University of Technology Graz, Austria
Esra Kidiman, The Republic of Turkey Ministry of National Education, Turkey
Ayça Kolukısa, Hacettepe University, Turkey
Georg Macher, University of Technology Graz, Austria
Samer Makkar, VALEO Egypt, Egypt
Paula Martins, University of the Algarve, Portugal
Christoph Matthies, Hasso Plattner Institute, Germany
Nicolas Mayer, Luxembourg Institute of Science and Technology (LIST), Luxembourg
Filiz Mumcu, Celal Bayar University, Turkey
Mirna Munoz, CIMAT – Unidad Zacatecas, Mexico
Gilbert Regan, Dundalk Institute of Technology, Ireland
Andreas Riel, Grenoble INP, France
Miran Rodic, University of Maribor, Slovenia
Bernhard Sechser, Process Fellows, Germany
Jakub Stolfa, VSB Ostrava, Czech Republic
Svatopluk Stolfa, VSB Ostrava, Czech Republic
Ceara Treacy, Dundalk Institute of Technology, Ireland
Dietmar Winkler, University of Technology Vienna, Austria
Murat Yilmaz, Gazi University, Turkey


EuroAsiaSPI2 2023 Industrial Program Committee
Laura Aschbacher, ISCN GmbH, Austria
Beatrix Barafort, Luxembourg Institute of Science and Technology (LIST), Luxembourg
Eva Breske, Bosch Engineering GmbH, Germany
Tobias Danmayr, ISCN GmbH, Austria
Taz Daughtrey, American Society for Quality, USA
Rainer Dreves, Continental Corporation, Germany
Klaudia Dussa-Zieger, imbus AG, Germany
Damjan Ekert, ISCN GmbH, Slovenia
Thomas Fehlmann, Euro Project Office AG, Switzerland
Gerhard Griessnig, AVL LIST GmbH, Austria
Masao Ito, Nil Software Corp., Japan
Jorn Johansen, Whitebox, Denmark
Onur Kaynak, ASML, Netherlands
Ilgi Keskin Kaynak, ASML, Netherlands
Xabier Larrucea Uriarte, Tecnalia, Spain
Peter Lindermuth, Magna Powertrain, Austria
Nicolas Mayer, Luxembourg Institute of Science and Technology (LIST), Luxembourg
Irenka Mandic, Magna Powertrain, Austria
Richard Messnarz, ISCN GmbH, Austria
Jens Morgenstern, Germany
Alexander Much, Elektrobit Automotive GmbH, Germany
Risto Nevalainen, Falconleader, Finland
So Norimatsu, JASPIC, Japan
Tom Peisl, University of Applied Sciences Munich, Germany
Alexander Poth, Volkswagen AG, Germany
Michael Reiner, IMC Krems, Austria
Susumu Sasabe, JUSE, Japan
Tomas Schweigert, ExpleoGroup, Germany
Bernhard Sechser, Process Fellows GmbH, Germany
Gunther Spork, Magna Powertrain, Austria
Maria Stefanova Pavlova, CITT Global, Bulgaria
Bernhardt Steger, ISCN GesmbH, Austria
Thomas Wegner, ZF Friedrichshafen AG, Germany

Contents – Part I

SPI and Emerging and Multidisciplinary Approaches to Software Engineering

Sustained Enablement of AI Ethics in Industry . . . 3
Martina Flatscher, Anja Fessler, and Isabel Janez

Investigating Sources and Effects of Bias in AI-Based Systems – Results from an MLR . . . 20
Caoimhe De Buitlear, Ailbhe Byrne, Eric McEvoy, Abasse Camara, Murat Yilmaz, Andrew McCarren, and Paul M. Clarke

Quality Assurance in Low-Code Applications . . . 36
Markus Noebauer, Deepak Dhungana, and Iris Groher

Towards a DevSecOps-Enabled Framework for Risk Management of Critical Infrastructures . . . 47
Xhesika Ramaj, Ricardo Colomo-Palacios, Mary Sánchez-Gordón, and Vasileios Gkioulos

Gamified Focus Group for Empirical Research in Software Engineering: A Case Study . . . 59
Luz Marcela Restrepo-Tamayo and Gloria Piedad Gasca-Hurtado

Exploring Metaverse-Based Digital Governance of Gambia: Obstacles, Citizen Perspectives, and Key Factors for Success . . . 72
Pa Sulay Jobe, Murat Yilmaz, Aslıhan Tüfekci, and Paul M. Clarke

Identification of the Personal Skills Using Games . . . 84
Adriana Peña Pérez Negrón, Mirna Muñoz, and David Bonilla Carranza

Identifying Key Factors to Distinguish Artificial and Human Avatars in the Metaverse: Insights from Software Practitioners . . . 96
Osman Tahsin Berktaş, Murat Yılmaz, and Paul Clarke

Digitalisation of Industry, Infrastructure and E-Mobility

An Approach to the Instantiation of the EU AI Act: A Level of Done Derivation and a Case Study from the Automotive Domain . . . 111
Fabian Hüger, Alexander Poth, Andreas Wittmann, and Roland Walgenbach

An Investigation of Green Software Engineering . . . 124
Martina Freed, Sylwia Bielinska, Carla Buckley, Andreea Coptu, Murat Yilmaz, Richard Messnarz, and Paul M. Clarke

Developing a Blueprint for Vocational Qualification of Blockchain Specialists Under the European CHAISE Initiative . . . 138
Giorina Maratsi, Hanna Schösler, Andreas Riel, Dionysios Solomos, Parisa Ghodous, and Raimundas Matulevičius

Trustful Model-Based Information Exchange in Collaborative Engineering . . . 156
David Schmelter, Jan-Philipp Steghöfer, Karsten Albers, Mats Ekman, Jörg Tessmer, and Raphael Weber

Supporting the Growth of Electric Vehicle Market Through the E-DRIVETOUR Educational Program . . . 171
Theodoros Kosmanis, Dimitrios Tziourtzioumis, Andreas Riel, and Michael Reiner

Towards User-Centric Design Guidelines for PaaS Systems: The Case of Home Appliances . . . 186
José Hidalgo-Crespo and Andreas Riel

Boosting the EU-Wide Collaboration on Skills Agenda in the Automotive-Mobility Ecosystem . . . 196
Jakub Stolfa, Marek Spanyik, and Petr Dolejsi

Automotive Data Management SPICE Assessment – Comparison of Process Assessment Models . . . 205
Lara Pörtner, Andreas Riel, Marcel Leclaire, and Samer Sameh Makkar

A Knowledge Management Strategy for Seamless Compliance with the Machinery Regulation . . . 220
Barbara Gallina, Thomas Young Olesen, Eszter Parajdi, and Mike Aarup

SPI and Good/Bad SPI Practices in Improvement

Corporate Memory – Fighting Rework with a Simple Principle and a Practical Implementation . . . 237
Morten Korsaa, Niels Mark Rubin, and Jørn Johansen

Managing Ethical Requirements Elicitation . . . 258
Errikos Siakas, Harjinder Rahanu, Joanna Loveday, Elli Georgiadou, Kerstin Siakas, and Margaret Ross

Process Improvement Based on Symptom Analysis . . . 273
Jan Pries-Heje, Jørn Johansen, Morten Korsaa, and Hans Cristian Riis

SPI and Functional Safety and Cybersecurity

The New Cybersecurity Challenges and Demands for Automotive Organisations and Projects - An Insight View . . . 289
Thomas Liedtke, Richard Messnarz, Damjan Ekert, and Alexander Much

An Open Software-Based Framework for Automotive Cybersecurity Testing . . . 316
Thomas Faschang and Georg Macher

Requirements Engineering for Cyber-Physical Products: Software Process Improvement for Intelligent Systems . . . 329
Thomas Fehlmann and Eberhard Kranich

Consistency of Cybersecurity Process and Product Assessments in the Automotive Domain . . . 343
Christian Schlager, Richard Messnarz, Damjan Ekert, Tobias Danmayr, Laura Aschbacher, Almin Iriskic, Georg Macher, and Eugen Brenner

A Low-Cost Environment for Teaching Fundamental Cybersecurity Concepts in CPS . . . 356
Kanthanet Tharot, Quoc Bao Duong, Andreas Riel, and Jean-Marc Thiriet

CYBERENG - Training Cybersecurity Engineer and Manager Skills in Automotive - Experience . . . 366
Svatopluk Stolfa, Jakub Stolfa, Marek Spanyik, Richard Messnarz, Damjan Ekert, Georg Macher, Michael Krisper, Christoph Schmittner, Shaaban Abdelkader, Alexander Much, and Alen Salamun

Author Index . . . 385

Contents – Part II

SPI and Agile

The Future of Agile Coaches: Do Large Companies Need a Standardized Agile Coach Certification and What Are the Alternatives? . . . 3
Alexander Ziegler, Thomas Peisl, and Alev Ates

Agile Teamwork Quality – Reflect Your Team While Playing and Identify Actions for Empowerment . . . 16
Alexander Poth, Mario Kottke, and Mourine Schardt

Agile Team Autonomy and Accountability with a Focus on the German Legal Context . . . 30
A. Poth, C. Heere, and D.-A. Levien

Foster Systematic Agile Transitions with SAFe® and efiS® Oriented Team Evaluations . . . 46
Alexander Poth, Mario Kottke, and Mourine Schardt

Identifying Agile Practices to Reduce Defects in Medical Device Software Development . . . 61
Misheck Nyirenda, Róisín Loughran, Martin McHugh, Christopher Nugent, and Fergal McCaffery

SPI and Standards and Safety and Security Norms

Challenges in Certification of ISO/IEC 15504 Level 2 for Software for Railway Control and Protection Systems . . . 79
Ayşegül Ünal and Taner Özdemir

Automotive SPICE Draft PAM V4.0 in Action: BETA Assessment . . . 96
Noha Moselhy, Ahmed Adel, and Ahmed Seddik

Automotive Functional Safety Standardization Status and Outlook in China . . . 113
Xuejing Song and Gerhard Griessnig

Digitalizing Process Assessment Approach: An Illustration with GDPR Compliance Self-assessment for SMEs . . . 125
Stéphane Cortina, Michel Picard, Samuel Renault, and Philippe Valoggia

Acceptance Criteria, Validation Targets and Performance Targets in an ISO 21448 Conform Development Process . . . 139
Justus Hofmeister and Dietmar Kinalzyk

Sustainability and Life Cycle Challenges

Sustainable IT Products and Services Facilitated by "Whole Team Sustainability" – A Post-mortem Analysis . . . 151
Alexander Poth and Olsi Rrjolli

Emerging Technologies Enabling the Transition Toward a Sustainable and Circular Economy: The 4R Sustainability Framework . . . 166
Dimitrios Siakas, Georgios Lampropoulos, Harjinder Rahanu, Elli Georgiadou, and Kerstin Siakas

Methodological Transition Towards Sustainability: A Guidance for Heterogeneous Industry . . . 182
Ernesto Quisbert-Trujillo and Helmi Ben-Rejeb

Improvement of Process and Outcomes Through a STEEPLED Analysis of System Failures . . . 193
Dimitrios Siakas, Georgios Lampropoulos, Harjinder Rahanu, Kerstin Siakas, Elli Georgiadou, and Margaret Ross

Supporting Product Management Lifecycle with Common Best Practices . . . 207
Bartosz Walter, Ilija Jolevski, Ivan Garnizov, and Andjela Arsovic

SPI and Recent Innovations

The New ISO 56000 Innovation Management Systems Norm and ISO 33020 Based Innovation Capability Assessment . . . 219
Mikus Zelmenis, Mikus Dubickis, Laura Aschbacher, Richard Messnarz, Damjan Ekert, Tobias Danmayr, Jonathan Breitenthaler, Lara Ramos, Olaolu Odeleye, and Marta Munoz

Frugal Innovation - A Post Mortem Analysis of the Design and Development of a Cyber-Physical Music Instrument . . . 234
Alexander Poth and Gabriel Poth Alaman

Insights into Socio-technical Interactions and Implications - A Discussion . . . 248
Rumy Narayan and Georg Macher

Frugal Innovation Approaches Combined with an Agile Organization to Establish an Innovation Value Stream . . . 260
Alexander Poth and Christian Heimann

Open Innovation Cultures . . . 275
Georg Macher, Rumy Narayan, Nikolina Dragicevic, Tiina Leino, and Omar Veledar

Virtual Reality and Augmented Reality

Augmented Shopping: Virtual Try-On Applications in Eyewear E-retail . . . 289
Bianca Konarzewski and Michael Reiner

On the Service Quality of Cooperative VR Applications in 5G Cellular Networks . . . 300
Tomoki Akasaka, Shin'ichi Arakawa, and Masayuki Murata

Author Index . . . 313

SPI and Emerging and Multidisciplinary Approaches to Software Engineering

Sustained Enablement of AI Ethics in Industry

Martina Flatscher, Anja Fessler, and Isabel Janez
ZF Friedrichshafen AG, 88038 Friedrichshafen, Germany
[email protected]

Abstract. Artificial Intelligence (AI) has become an increasingly pervasive technology in various industries, offering numerous benefits such as increased efficiency, productivity, and innovation. However, the ethical implications of AI adoption in industry have raised concerns, and AI Ethics has emerged as a critical field of study, focusing on the trustworthy development, deployment, and use of AI technologies. In this paper, we explore an AI Ethics concept with a particular focus on sustained enabling factors to guide organizations in navigating the ethical challenges associated with AI adoption.

Keywords: Artificial Intelligence · Ethics · Trustworthy AI · Innovation · Dynamic Framework · Enablement · Measurement

1 Introduction

The rapid advancement of AI has led to its widespread adoption across various industries, including healthcare, finance, manufacturing, transportation, and more. AI technologies, such as machine learning, natural language processing, and computer vision, have enabled organizations to automate processes, gain insights from data, and enhance decision-making. Foundation models such as ChatGPT, one particularly prominent example among many applications, show how firmly AI has arrived in society and industry. Along with the benefits, the increasing use of AI has raised ethical concerns about its impact on society, the economy, and individuals. Bender et al. [1] illustrate the importance of considering AI Ethics because of the risks involved with AI applications such as large language models. AI applications vary widely in criticality, depending on their scope of use and their potential to harm people, for instance through the use of personal data or the direct involvement of the people using the applications [2]; these risks need to be addressed accordingly [3, 4]. However, there are no clear standards for the appropriate handling of ethical considerations in industry. Position papers reflect that current movements target increasingly ethical practice, as industries strongly demand ethical procedures [5–7]. New terms are evolving around AI Ethics, such as Trustworthy AI and Responsible AI [8]. Industry is trying to come up with ethical frameworks, standard processes, and best practices that align with the values of organizations and society to deal with the new challenges arising from the dynamics of AI [9–11]. However, industry struggles to manage AI Ethics holistically. This paper proposes an approach to frame AI Ethics and enable it in a feasible way.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 3–19, 2023. https://doi.org/10.1007/978-3-031-42307-9_1


2 State of the Art

To understand what AI Ethics comprises, it is important to first clarify and narrow down related terms that appear in research and, based on this, to expand the relevant subject areas. Section 2.1 gives an overview of terms used in connection with AI Ethics and arrives at a definition of terms to build a common understanding. Sections 2.2 and 2.3 outline the scope of the study area in this paper, summarizing AI Ethics values and enablement approaches.

2.1 Terms in Research Around AI Ethics

In industry the term AI Ethics is not clearly defined; therefore, related terms are clarified in the following. The term Trustworthy AI (TAI) is often used to ensure that AI systems are moral, open, and responsible. Accuracy, robustness, openness, explainability, human management and monitoring, justice, the elimination of bias, and security are requirements for TAI systems [2–7]. It turns out that TAI is more likely used when it comes to values that engineering and governance departments can fulfill and work on by developing appropriate methods and processes. The term Responsible AI, used inter alia by Gartner [2], applies when the human-centric approach is of major importance [3]. Beyond algorithmic fairness, Responsible AI covers significant facets of AI that may contribute to avoiding AI's apathetic behavior, such as societal impact and human rights protection [4]. Concerns exist over AI's capacity for moral behavior and decision-making, and several ethical frameworks, guidelines, and rules have lately been implemented to address these concerns [5]. The term AI Ethics includes more than just the engineering point of view, referring to standards, guidelines, and frameworks that direct the creation and responsible usage of AI systems. An operationalization of AI Ethics will consider and incorporate all of these components [6]. In the following, we use the term TAI when it comes to the definition of values that ensure ethical behavior in AI development. Thereby, we refer to the HLEG Group and its definition, shown in Sect. 2.2. In distinction to this, we use the term AI Ethics in the further course for the holistic view on the topic. In this context, we examine how industry can be guided towards more ethical AI development, deployment, and use.

2.2 Trustworthy AI Values

Due to the wide adoption of AI, society and industry require ethics in AI. Many principles and guidelines for ethical AI have been issued by private companies (e.g., 84 officially published AI guidelines as of 2019) [7–11], research institutions [12], and the public sector [13]. What they all have in common is that they span the complex field of AI Ethics into values [14]. Many authors propose sets of AI Ethics values [12, 14–20], which the European Group on Ethics in Science and New Technologies summarizes the most


comprehensively. The Trustworthy AI values of the HLEG Group are the following seven values shown in Fig. 1 below [2, 12]:

Fig. 1. TAI values according to HLEG [21]

TAI values can be differentiated by their level of tangibility. For a better understanding of the values and their themes, Hagendorff distinguishes, due to the topic's complexity, between quantifiable values (explainability, privacy, and fairness) and unseizable values. Further, he points out that the scope of AI Ethics is not adequately grasped by the often idealized, quantifiable, and calculable forms of the quantifiable values and needs a holistic view to ensure AI Ethics [15]. TAI requires implementation on a technical and a governmental level. In research, much effort is devoted to investigating ever more approaches and tools to implement the quantifiable values [22]. However, governmental aspects, which fall under the unseizable values, are far more difficult to grasp and are investigated in an enabling context in the next section.

2.3 Enablement Approaches in AI Ethics

Research, industry, and government have frequently come up with enablement approaches for AI Ethics. The identified approaches can be categorized as follows.

Expand AI Ethics into Action Fields to make complexity tangible: National strategy documents derive Action Fields to grasp AI Ethics in a governmental context. Amongst many nations, Europe, the USA, and China [14, 23, 24] are major players in setting


standards. The HLEG provides a framework outlining a set of self-assessment questions specifically regarding the TAI values mentioned in Sect. 2.2 [25]. From an industrial perspective, KPMG has developed the AI in Control Framework to help organizations build greater confidence and transparency along the AI lifecycle - from strategy to development - through tested AI governance constructs as well as methods and tools. It also provides key recommendations and leading practices for implementing AI governance, conducting AI assessments, and developing continuous AI monitoring and visualization [26].

Risk-based approach: The European Commission, as the first major regulator to propose a law on AI, assigns AI applications to risk categories. In an attempt to advance digitization in the EU and make it internationally competitive, the AI Act recommends dedicated actions for each risk class [27]. At a supranational level, another regulatory initiative comes from UNECE, to use AI in trade facilitation [13].

AI Committee: Setting up AI Committees to centrally coordinate AI Ethics initiatives such as an AI use case portfolio, AI opportunities, AI risk management, etc. [18, 23].

Standards: Using standards, where AI Ethics is ensured by a step-by-step approach [28]. IEEE presents the first industry standard, IEEE 7000-2021, that addresses the operationalization of AI Ethics [29].

AI Ethics Measurement approaches: Beyond a feasible approach, enabling AI Ethics depends heavily on knowing the current status and evaluating it. This should incorporate regular assessments and evaluations to ensure that TAI is not only implemented but also maintained and continuously improved. These assessments should be done in a transparent and accountable manner to build trust and confidence in the TAI system [25]. Defining specific evaluation criteria helps to reduce complexity and to get management attention [22, 30, 31].
The AIC4 criteria catalogue allows an independent auditor to conduct an attestation engagement on an AI service's compliance with the criteria [32]. The Bertelsmann Stiftung has laid the foundation for potential measurability in the use of AI through its ethical label approach. AI companies can use such a label to publicly communicate the quality of their products. The label improves the comparability of products on the market for consumers and AI-using organizations and provides a quick overview of whether an algorithmic system meets the required ethical standards. The principles of the HLEG Group are adopted and endorsed [33]. ETAMI also uses AI auditing and an ethics label to align trustworthy and ethical AI between academia, industry, and society [34]. Although the engineering side has been partially worked out, there is a lack of a holistic solution and of a strategic approach [17, 35, 36], which is why industry has not yet fully integrated AI Ethics.

3 Research Question and Methodology

The literature review showed that TAI values and enablement approaches are evolving, but industry is struggling to actively ensure AI Ethics holistically. The research question that emerges from this review can be formulated as follows:


How can a successful approach to enabling AI Ethics be lived in industry?

To answer this question, we propose a concept for AI Ethics and recommend a sustained enablement approach and a measurement concept. This concept is applied to industry that develops AI and applies it in systems. Section 2.1 is based on a literature review on AI Ethics, beginning with definitions of terms, followed by an in-depth review of TAI (Sect. 2.2) and enablement approaches (Sect. 2.3). The advancements in ensuring AI Ethics through technical solutions in coding have been well explained, but the incorporation of governmental procedures has not yet been well addressed in the literature. Therefore, this paper transfers findings from the literature into an AI Ethics approach enhanced through feasibility aspects that consider the industrial context. Section 4.1 introduces the necessary adaptation of theory for industry. The AI Ethics concept is divided into TAI values (Sect. 4.2) and recommended AI Ethics Action Fields (Sect. 4.3). Section 4.4 puts it all into an industrial context, highlighting the existing correlations within an organization. Chapter 5 deals with the feasibility of AI Ethics: Sect. 5.1 elaborates on a living, integrated process approach, and Sect. 5.2 comes up with a measurement approach.

4 Development of the Enablement of AI Ethics

This work aims to define a concept for bringing AI Ethics into industry. To succeed in such a practical endeavor, Sect. 4.1 introduces the various aspects that must be considered in an AI Ethics concept to guarantee holism. In the following three Sects. 4.2–4.4, the elements and success factors of an AI Ethics concept are presented.

4.1 Introduction to a Practical AI Ethics Concept

The literature findings show a variety of approaches and tools for handling the challenges of AI Ethics. However, everyday practice teaches us that theoretical constructs such as technical standards, best practices, and ethical principles for AI often have to be adapted to industry in order to be successful and feasible. This is because researchers frequently have thorough knowledge of AI's technical and ethical issues but may have little practical expertise with commercial operations and real-world applications. Therefore, findings from research need to be transferred into very concrete action plans usable in industry. These action plans must in turn be individually adapted to the specific practical considerations. Also, in terms of business operations, the diversity of AI application fields must be considered. Within industry, various operating levels need to be considered regarding AI Ethics (further elaborated in Sect. 4.4, see Fig. 4). Engineering teams on the one hand ensure that the development of code and algorithms is transparent, fair, and explainable (see quantifiable values, Sect. 2.2), so as not to reinforce or amplify biases. This requires a deep understanding of how AI algorithms work and the ability to identify and address biases and errors. Technical teams also need to consider how to manage data privacy and security, and how to mitigate the risk of data breaches or other security threats.
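To illustrate what such a quantifiable bias check by an engineering team can look like, the sketch below computes the demographic parity difference of a model's decisions across two groups, one of several common fairness indicators that can be automated in a development pipeline. This is an illustrative sketch, not part of the concept proposed in this paper; the function name, the two-group simplification, and the review threshold of 0.1 are hypothetical choices for the example.

```python
def demographic_parity_difference(decisions, groups):
    """Absolute difference in positive-decision rates between groups "A" and "B".

    decisions: list of 0/1 model outcomes (1 = positive decision)
    groups:    list of group labels ("A" or "B"), aligned with decisions
    """
    rates = {}
    for g in ("A", "B"):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)  # positive-decision rate per group
    return abs(rates["A"] - rates["B"])


# Toy data: group A receives a positive decision 3 times out of 4, group B once out of 4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
dpd = demographic_parity_difference(decisions, groups)
print(f"demographic parity difference: {dpd:.2f}")  # 0.50

# A hypothetical governance gate: flag the model for an ethics review if the gap is large.
if dpd > 0.1:
    print("flag model for ethics review")
```

Such a metric makes one facet of fairness measurable, but, as Hagendorff's distinction above suggests, it captures only the quantifiable slice of the value; the governance questions around it remain.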


At the same time, governance teams need to ensure that their regulations and policies address the ethical considerations of AI, including issues such as privacy, security, and accountability (see unseizable values, Sect. 2.2). This likewise requires the ability to identify and address ethical risks and concerns. An additional challenge of applying TAI in industry is the need to balance the competing demands of innovation and ethics. While TAI is critical for ensuring that AI benefits society, it can also place significant constraints on the development and deployment of AI systems. This can create tensions between the technical and ethical considerations of AI and the practical considerations of business operations.

4.2 Trustworthy AI Values

AI Ethics needs to be based on a common understanding of the values that stand behind it. As described in Sect. 4.1, enabling via a theoretical construct only succeeds if it is made tangible for industry. Consequently, [10, 33] defined AI Ethics guidelines that break down the TAI values to increase employee understanding. Hence, the TAI values [21] identified in Chapter 2 are clarified in the following by listing further subject areas (named principles within values in Fig. 2) that fall under the respective value. The interpretation of these values can vary depending on the application context, the cultural region, and the stakeholders involved. Exemplary key tasks are listed in the bottom row of the table to derive TAI values at a work level. Tasks have technical and governmental characteristics. Figure 2 illustrates the subject areas through which TAI can be grasped in industry. This list of principles and tasks is a glimpse into industry, as the list does not claim to be complete.

Fig. 2. Exemplary TAI Value principles and related tasks oriented on [10, 37]

Ultimately, the successful application of TAI in industry will depend on the careful consideration of these foundational values and their interpretation in different application fields. This will require ongoing dialogue and collaboration between stakeholders from industry, academia, and civil society.
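The value/principle/task structure described above can be represented as a simple data structure that a company could adapt to its own context. The following is a hypothetical sketch; the concrete values, principles, and tasks listed are illustrative assumptions in the spirit of Fig. 2, not entries taken from the paper.

```python
# Illustrative sketch: TAI values broken down into principles and
# work-level tasks, mirroring the structure of Fig. 2.
# All concrete entries below are assumptions for demonstration only.
tai_values = {
    "Transparency": {
        "principles": ["explainability", "traceability"],
        "tasks": ["document data lineage", "provide model documentation"],
    },
    "Fairness": {
        "principles": ["non-discrimination", "accessibility"],
        "tasks": ["audit training data for bias", "test across user groups"],
    },
}

def tasks_for(value: str) -> list[str]:
    """Return the work-level tasks recorded for a TAI value."""
    return tai_values.get(value, {}).get("tasks", [])

print(tasks_for("Fairness"))
```

Such a mapping makes the interpretation of each value explicit for a given application context and can be extended or reduced per stakeholder discussion.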

Sustained Enablement of AI Ethics in Industry

9

4.3 AI Ethics Action Fields

Despite the importance of engineering skills that ensure AI Ethics on a technical level, the enablement of purely technical AI Ethics aspects is not further elaborated in this paper, as this is already well covered in the literature. This paper addresses the missing concrete approaches for governmental tasks and proposes AI Ethics Action Fields, inspired by the research study in Chapter 2. These seven Action Fields represent foundation pillars on which to work holistically. Each Action Field can be spanned into several subcategories, illustrated in Fig. 4. For a better understanding of how to scope the Action Fields, only the most relevant subcategories are mentioned in the section that follows [38]. Please note that the listed subcategories do not claim to be exhaustive but should rather be seen as inspiration.

AI Literacy/Education: Practice shows that AI often raises ethical concerns due to a lack of knowledge in the company. Employees are simply often not aware of what an ethical AI model requires or what implications AI applications can have if used incorrectly. It is therefore of enormous importance to train the company specifically, to point out AI Ethics aspects, or even to guide employees through AI Ethics training courses. This involves providing training and education to employees and stakeholders on the technical and ethical considerations of AI systems. This could include training on data privacy and cybersecurity, as well as education on ethical principles and values. Through training and the associated upskilling, employees are made aware of the TAI values in the company and can specifically take them into account in their development work. Practical knowledge management is one subcategory that contributes to the important dissemination of the state of knowledge around AI Ethics.
Furthermore, it builds trust, as it creates relationships and commitment to the topic.

AI Risk Management/Mitigation: Being aware of the risks of using AI sharpens the handling of the technology. Here it is primarily a question of avoiding human-injuring applications beforehand, but also of uncovering the competitive opportunities that would be missed by not using AI. Furthermore, the AI lifecycle in its entirety has to be covered in a continuous process in which unforeseen harm must be thoroughly assessed and managed. Risk mitigation, on the one hand, uses organizational tools like communication and awareness campaigns to avoid a misuse of AI through lacking knowledge; on the other hand, improving an AI model to become more robust and transparent also contributes to reducing AI risks on an engineering level.

AI Awareness/Communication: The more employees understand how an AI model works, and the more engineers (AI Hub) understand their toolset for improving their AI use cases with regard to AI Ethics concerns, the more ethically a company acts in using AI. This Action Field is clearly linked to AI Literacy/Education, as that field contributes to increasing AI awareness. A nice but decisive side effect is that awareness creates trust, which is crucial for AI users to accept the technology, for engineers to consider AI in product/process development [41], and for realizing the full potential of AI in the long term [20]. Psychological trust models name performance, reliability, prediction and explainability as decisive factors in building trust for AI applications [42, 43]. Especially transparency can be called an important factor for trust in AI [44].


AI Compliance: The increasing number of AI regulation papers, norms, standards and laws requires a proper use of AI. Therefore, a compliance and governance approach is needed and has to be rolled out across the group.

AI Policy: As mentioned in Chapter 2, industry is producing a lot of AI Ethics position papers [14]. Outlining a compromise between too flexible and too strict, Raab [47] argues that when prescriptive guidelines are implemented in a top-down and non-flexible fashion, this gives the misleading impression that it is possible to take a formulaic approach to the application of ethical norms, principles and general rules to specific instances [47]. Therefore, AI Policy has, on the one hand, to build the rules and guidelines for the use of AI and, on the other hand, to structure the monitoring needed to really fulfill AI Ethics requirements in the highly interdependent corporate organization processes.

AI Auditing: As also intended in the AI Act, an AI Auditing process has to centrally manage and monitor all AI activities throughout the company to fulfill the TAI values and thus ensure AI Ethics. The use of the technology must be a comprehensible decision that is made based on risk management, also with regard to the due diligence expected by law. Oversight mechanisms have to ensure that AI systems are developed and deployed in a responsible and trustworthy manner. This could include establishing an AI Ethics Committee to conduct reviews, carrying out regular audits and evaluations, and creating mechanisms for reporting and addressing ethical concerns [48, 49].
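Central management of AI activities, as AI Auditing demands, presupposes some form of company-wide use-case registry. The sketch below is a hypothetical minimal example of such a registry; the field names, risk levels (loosely inspired by the AI Act's risk-based approach), and use cases are our own illustrative assumptions, not artifacts described in the paper.

```python
# Hypothetical sketch of a central AI use-case registry supporting
# auditing: each use case carries a risk level and its last audit date.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AIUseCase:
    name: str
    risk_level: str             # e.g. "minimal", "limited", "high" (assumed labels)
    last_audit: Optional[str]   # ISO date of last audit, None if never audited

def audit_backlog(registry: List[AIUseCase]) -> List[str]:
    """Names of high-risk use cases that have never been audited."""
    return [u.name for u in registry
            if u.risk_level == "high" and u.last_audit is None]

registry = [
    AIUseCase("CV screening", "high", None),
    AIUseCase("spam filter", "minimal", "2023-01-15"),
    AIUseCase("credit scoring", "high", "2022-11-02"),
]
print(audit_backlog(registry))  # → ['CV screening']
```

Even this simple view makes the auditing decision comprehensible: the backlog surfaces exactly those systems where due diligence is still outstanding.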

Fig. 3. Action Fields of AI Ethics

4.4 Embedding AI Ethics into Industrial Context

In industry, AI is usually embedded in an interdependent corporate context. As depicted in Fig. 4, AI Ethics is composed of multidimensional aspects in a complex web of inter-organizational dependencies. The corporate top-down framework determines the strategic course of AI. At a technical level AI is developed and applied (upper three levels in the top box), whereas at a non-technical level AI has to be organized with regard to Education, Governance, Communication, etc. (lowest level in the top box). To ensure the TAI values it is essential to have a strong AI hub, where technical standards and best practices are elaborated and defined for developing and deploying AI systems in a way that is safe, reliable, and secure. This could include guidelines for data collection and processing, testing and validation, and cybersecurity. In addition, AI Ethics Guidelines based on the TAI values establish the corresponding value understanding. The lower box illustrates the key elements for AI Ethics, including the Action Fields presented in Sect. 4.3. Within the boxes, but also between the upper and lower box, there are strong dependencies and connections.

Fig. 4. Multidimensional nature of AI Ethics in an industrial context

The correlations between the Action Fields were mentioned in Sect. 4.3. The other correlations shown will be explained in the following, as their consideration in an AI Ethics concept is a crucial factor. It is crucial to define AI values that are strongly related to the corporate values, considering all stakeholder perspectives, in order to align with the needs and values of stakeholders and to enable their incorporation into the design and implementation of AI systems. Concrete Action Fields, shown in italics, are highly dependent on the organizational structure in which the related departments are arranged. It is important to work on the Action Fields in an interdisciplinary way, breaking them down into subcategories (see Sect. 4.3) and adapting them company-specifically. This should involve collaboration between AI developers, industry experts, policymakers, and other stakeholders, including end-users and affected communities.

5 Enabling of AI Ethics

Lockey states that AI Ethics has a high similarity to innovation; thus, the operationalization of ethical aspects requires a feasible concept that guides industry to success. This means that a one-size-fits-all approach to AI Ethics is not appropriate.


In the following, Sect. 5.1 gives feasibility recommendations for succeeding in AI Ethics. Based on that, Sect. 5.2 describes a measurement approach to evaluate the progress.

5.1 Feasibility Aspects of AI Ethics in Industry

Given that AI Ethics comes with intrinsic complexity, combined with high dynamics with regard to the fast adaptation of AI applications, a flexible operationalization is required, with adaptation options in the sense of a living integrated process. As pictured in Fig. 5, roadmapping, a well-established strategic management tool [49–51], seems to be a good analogy to schematically show that companies have to walk through their AI Ethics path individually in order to become better.

Fig. 5. Living integrated process to AI Ethics governance

In the following, feasibility aspects are indicated which show how the path to an AI Ethics-compliant company can be successfully followed.

Clarity about relevant AI Ethics stakeholders and a fruitful working mode: This aspect probably stands out the most, as AI Ethics, with its complex and multidisciplinary particularity, only subsists in companies if the various stakeholders are identified. Companies must define stakeholders according to the various perspectives on the topic, for example AI designers and AI developers of the AI system, data scientists, procurement officers or specialists, front-end staff that will use or work with the AI system, legal/compliance officers, management, etc. [25]. The dynamic inter-company environment regarding employee turnover and priority shifts requires a consistent integration of these stakeholders in order to represent their interests, values and needs in the governmental (Action Fields) and technical procedures of AI Ethics. Regarding AI Governance tasks, a dedicated AI Ethics Stakeholder Committee can facilitate the coordination of the AI Action Fields. A success factor for AI initiatives is holding regular meetings to work constructively and consistently on the actions. The consistent work on the Action Fields increases the level of AI Ethics.

Stakeholder-integrated definition of TAI values and AI Ethics Action Fields: As pointed out in Chapter 4, the specific, stakeholder-involving elaboration of TAI values as well as AI Ethics Action Fields, depending on the company context and the organizational characteristics of AI Ethics, is of major importance. The values recommended in this paper can be used and adjusted specifically in order to get a holistic view of AI Ethics. This is achieved by identifying and tailoring concrete initiatives within the Action Fields that lead to companies being fully ethical with regard to AI development [38].

Flexible operationalization and a living integrated process: Millar et al. [51] recognize that the leadership of disruptive innovation, an intrinsic characteristic of AI Ethics, comes with volatility, uncertainty, complexity and ambiguity (VUCA) [52]. Companies have to deal with their own individual and personalized VUCA world, which requires tact and sensitivity [53, 54]. Therefore, undefined influences of the VUCA environment must be detected [55]. There are inter-company dynamics like shifts of key persons or priorities. From an external perspective, new trends and AI advancements need to be managed because they involve risks and opportunities. Additionally, the AI-specific challenges increase the complexity of the topic [56].

5.2 Measurement Approach for AI Ethics

The enablement of AI Ethics requires knowledge of the implementation status of the AI Action Fields. However, since the topic of AI Ethics is very complex, the assessment must be made on a more concrete level than the Action Fields. As shown in Fig. 3, Action Fields must be defined in tangible subcategories.
For example, the Action Field AI Education can be defined through the subcategories AI Training, AI Awareness, and Collaboration and Research. These terms describe the Action Field in a holistic way and allow the topic to be examined from several perspectives. Subsequently, specific actions can be assigned to the subcategories to obtain measurable elements. Using the subcategory AI Training as an example, contributing actions could be to establish an AI training strategy and to provide AI trainings to employees. Finally, indicators must be defined that provide evidence of the activity within the Action Fields. In this example, the number of provided AI trainings is a quantifiable KPI for the AI Training subcategory. Taken together, the indicators provide a measurement standard on the basis of which a company can improve individually [53]. In this way stakeholders understand the objectives behind AI Ethics through the quantifiable indicators [56]. Figure 6 shows schematically how the measurement of AI Ethics is built. Beyond the transparency of how a company performs in the defined AI Ethics Action Fields, the consistent work on the TAI values themselves has to be measured as well. Therefore, the measurement approach records them in the same way as the Action Fields. The tasks from Fig. 2 can be used to make the TAI values measurable. Indicators then need to be found to remain in the same evaluation logic as with the Action Fields.
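The measurement hierarchy just described (Action Field, subcategory, action, indicator) lends itself to a straightforward rollup computation. The sketch below is a minimal illustration under our own assumptions; the example names and indicator values are hypothetical, not data from the paper.

```python
# Hypothetical sketch of the measurement hierarchy:
# Action Field -> subcategories -> actions -> quantifiable indicators.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Action:
    name: str
    indicator: str
    value: float  # measured KPI value, e.g. number of trainings provided

@dataclass
class Subcategory:
    name: str
    actions: List[Action] = field(default_factory=list)

@dataclass
class ActionField:
    name: str
    subcategories: List[Subcategory] = field(default_factory=list)

    def indicator_total(self) -> float:
        """Aggregate all indicator values recorded under this Action Field."""
        return sum(a.value for s in self.subcategories for a in s.actions)

# Illustrative data (assumed numbers):
education = ActionField(
    "AI Literacy/Education",
    [Subcategory("AI Training", [
        Action("provide AI trainings", "number of trainings provided", 12),
        Action("establish training strategy", "strategy milestones reached", 3),
    ])],
)
print(education.indicator_total())  # → 15
```

A company would replace the aggregation with whatever evaluation logic fits its own indicators; the point is that each level of the hierarchy remains individually inspectable while still rolling up to a per-Action-Field measure.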


Fig. 6. Measurement Approach

Companies are not comparable with each other, as the scope of AI Ethics is framed and organized very differently. Therefore, companies must individually define their Action Fields, subcategories, actions, and indicators in order to get a realistic AI Ethics evaluation. In doing so, companies immediately see what they need to work on within the Action Fields.

6 Discussion

The motivation for defining a dynamic enablement concept for AI Ethics lies in the thought that societies will only ever be able to achieve the full potential of AI if trust can be established in its development, deployment, application and use [21]. If, for example, the general public does not trust autonomous cars, they will never replace common, manually steered cars [57]. That is why considerations of trust must begin in the manufacturing industry itself. Emaminejad and Akhavian [58] even define it as an inevitable user acceptance requirement. At the same time, it is important not to turn TAI into an intellectual land of plenty: it should not be perceived as an umbrella term for everything that would be nice to have regarding AI systems, both from a technical and an ethical perspective [53], since this can only go well if no worst-case scenarios arise. To counteract this general societal issue the European Union is working on the AI Act. Given the need to address the societal, ethical, and regulatory challenges of AI, the EU’s stated added value is to turn these challenges into a competitive advantage under the banner of Trustworthy AI. This vision for AI-enabled technologies, which aims to mitigate potential harm and enable accountability and control, could set Europe apart from its global competitors. It can also serve as a key component in strengthening the EU’s digital sovereignty by giving European users more choice and control [54].


Castro [59], however, sees this critically, wondering whether “Europe Will Be Left Behind If It Focuses on Ethics and Not Keeping Pace in AI Development”. He calls this “a delusion built on three fallacies: that there is a market for AI that is ethical-by-design, that other countries are not interested in AI ethics, and that Europeans have a competitive advantage in producing AI systems that are more ethical than those produced elsewhere”. In addition to the global and European perspectives, the understanding across all industries on the ethical level should also be strengthened. This makes a cross-industry alignment regarding the interpretability of AI Ethics, and its measurement, of interest. As a potential way to reconcile the complexity presented, and to achieve a universal understanding of AI Ethics that is carried globally and across industries, we see the introduction of an AI Ethics label as an option. However, we see challenges in introducing a uniform label that forms a global, Europe-wide, industry-wide consensus. From an individual perspective, there needs to be further discussion on how to bring stakeholders and policy together with the technical perspective [59].

7 Conclusions

The wide adoption of AI makes it an indispensable part of the industrial context, coming with complex ethical challenges that need to be addressed in a holistic approach. AI Ethics and its enablement need to be tackled not only from an engineering point of view (strong AI hub) but also from a governmental side. To overcome the limitations of lacking standards and feasible approaches, this work proposes a concept in which key elements related to AI Ethics, based on the research study, are framed in a way that lets companies individually elaborate their best AI Ethics performance. The framework is based on dedicated AI Ethics values that must align with and enhance the corporate values and must be considered in the whole corporate context. Framing AI Ethics Action Fields enables the operationalization to be dealt with holistically and tangibly. Furthermore, practical implications are given to make the proposed concept as feasible as possible. The starting point for measuring the operationalization is an assessment of existing activities and related indicators that contribute to the Action Fields. Along the pathway, companies assign more and more activities to the Action Fields and AI Ethics becomes more and more complete. Figure 7 brings together the different disciplines that an AI Ethics Enablement requires to be feasible and successful.

Fig. 7. Disciplines of an AI Ethics Enablement

8 Relationship with the SPI Manifesto

Process improvement and innovation, where AI Ethics lies, are at the very core of the long, successful EuroAsiaSPI 2 history. The SPI Manifesto created in this community defines the values and principles required to deploy SPI efficiently and effectively. The enablement of AI Ethics needs to be integrated into processes and continuously improved in order to adapt to the evolving understanding of organizational maturity. The importance of transformation for organizations has clearly been evolving in the digital age, which is why this work is perfectly aligned with the SPI Manifesto [60].

References

1. Bender, E.M., Gebru, T., McMillan-Major, A., Shmitchell, S.: On the dangers of stochastic parrots. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event Canada, pp. 610–623 (2021)
2. Gartner: Gartner Identifies Four Trends Driving Near-Term Artificial Intelligence Innovation. https://www.gartner.com/en/newsroom/press-releases/2021-09-07-gartner-identifies-four-trends-driving-near-term-artificial-intelligence-innovation. Accessed 19 Apr 2022
3. Mikalef, P., Conboy, K., Lundström, J.E., Popovič, A.: Thinking responsibly about responsible AI and ‘the dark side’ of AI. Eur. J. Inf. Syst. 31(3), 257–268 (2022). https://doi.org/10.1080/0960085X.2022.2026621
4. Cheng, L., Varshney, K.R., Liu, H.: Socially responsible AI algorithms: issues, purposes, and challenges. J. Artif. Intell. Res. 71 (2021)
5. Lu, Q., Zhu, L., Xu, X., Whittle, J., Xing, Z.: Towards a roadmap on software engineering for responsible AI (2022)
6. Birhane, A., et al.: The forgotten margins of AI ethics. In: 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul Republic of Korea, pp. 948–958 (2022)
7. BMW Group: Seven principles for AI: BMW Group sets out code of ethics for the use of artificial intelligence (2020)
8. Google: AI at Google: our principles (2018)
9. Robert Bosch GmbH: AI code of ethics: Bosch sets company guidelines for the use of artificial intelligence (2020)
10. Götz, M., Flatscher, M.: AI Ethics Guidelines. https://www.zf.com/ethical-ai. Accessed 27 Apr 2023


11. Microsoft: Responsible AI principles from Microsoft. https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1:primaryr6. Accessed 27 Apr 2023
12. Hagendorff, T.: The ethics of AI ethics: an evaluation of guidelines. Mind. Mach. 30(1), 99–120 (2020). https://doi.org/10.1007/s11023-020-09517-8
13. Malik, T.: White paper on the use of artificial intelligence in trade facilitation (2023)
14. Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1(9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2
15. Hagendorff, T.: Blind spots in AI ethics. AI Ethics (2021). https://doi.org/10.1007/s43681-021-00122-8
16. Gillespie, N., Curtis, C., Bianchi, R., Akbari, A., van Fentener Vlissingen, R.: Achieving trustworthy AI: a model for trustworthy artificial intelligence, Australia (2020). Accessed 11 Apr 2022
17. Gillespie, N., Lockey, S., Curtis, C.: Trust in artificial intelligence: a five country study. Brisbane, Australia (2021). Accessed 10 May 2022
18. OECD: Recommendation of the council on artificial intelligence: artificial intelligence policy observatory. OECD Legal Instruments (2019). Accessed 9 Nov 2021
19. Varshney, K.R.: Trustworthy Machine Learning (2022)
20. Thiebes, S., Lins, S., Sunyaev, A.: Trustworthy artificial intelligence. Electron. Markets 31(2), 447–464 (2020). https://doi.org/10.1007/s12525-020-00441-4
21. HLEG on Artificial Intelligence: Ethics Guidelines for Trustworthy AI (2019)
22. Lechler, T.G., Thomas, J.L.: Examining new product development project termination decision quality at the portfolio level: consequences of dysfunctional executive advocacy. Int. J. Project Manage. 33(7), 1452–1463 (2015). https://doi.org/10.1016/j.ijproman.2015.04.001
23. Kratochwill, L., Richard, P., Mamel, S., Brey, M., Schätz, K.: Globale Trends der künstlichen Intelligenz und deren Implikationen für die Energiewirtschaft: dena-ANALYSE (2020)
24. International Research Center for AI Ethics and Governance: The Ethical Norms for the New Generation Artificial Intelligence, China. Accessed 9 June 2022
25. HLEG on Artificial Intelligence: Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment (2020)
26. KPMG in the UK: Controlling AI (2019)
27. European Commission: Proposal for a Regulation of the European Parliament and the Council: Laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2021)
28. Blackman, R.: Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI. Harvard Business Review Press, Boston (2022)
29. IEEE Standard Model Process for Addressing Ethical Concerns during System Design, IEEE SA. https://standards.ieee.org/ieee/7000/6781/
30. Martinsuo, M., Poskela, J.: Use of evaluation criteria and innovation performance in the front end of innovation. J. Product Innov. Manag. 28(6), 896–914 (2011). https://doi.org/10.1111/j.1540-5885.2011.00844.x
31. Bayer, F., Kühn, H.: Prozessmanagement für Experten: Impulse für aktuelle und wiederkehrende Themen. Springer Gabler, Heidelberg (2013). https://doi.org/10.1007/978-3-642-36995-7
32. Hayes, A.: BSI White Paper – Overview of standardization landscape in artificial intelligence (2019)
33. Bertelsmann Stiftung (Hrsg.): From Principles to Practice – An interdisciplinary framework to operationalise AI ethics (2020)
34. ABB et al.: etami: aligning trustworthy and ethical AI between academia, industry, and society. https://www.etami.eu/etami.eu.html


35. Lukyanenko, R., Maass, W., Storey, V.C.: Trust in artificial intelligence: from a foundational trust framework to emerging research opportunities. Electron. Markets 32(4), 1993–2020 (2022). https://doi.org/10.1007/s12525-022-00605-4
36. Sharkov, G., Todorova, C., Varbanov, P.: Harnessing the power of responsible innovation: the shift towards human-centered skills and competences in AI engineering (2022)
37. Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M.: Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI (2020)
38. Riel, A., Flatscher, M.: A design process approach to strategic production planning for industry 4.0. In: Stolfa, J., Stolfa, S., O’Connor, R.V., Messnarz, R. (eds.) EuroSPI 2017. CCIS, vol. 748, pp. 323–333. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64218-5_27
39. Sjoerdsma, M., van Weele, A.J.: Managing supplier relationships in a new product development context. J. Purch. Supply Manag. 21(3), 192–203 (2015). https://doi.org/10.1016/j.pursup.2015.05.002
40. Aloisi, A., de Stefano, V.: Between risk mitigation and labour rights enforcement: assessing the transatlantic race to govern AI-driven decision-making through a comparative lens. SSRN J. (2023). https://doi.org/10.2139/ssrn.4337517
41. Jacovi, A., Marasović, A., Miller, T., Goldberg, Y.: Formalizing trust in artificial intelligence. In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, Virtual Event Canada, pp. 624–635 (2021)
42. Lee, J., Moray, N.: Trust, control strategies and allocation of function in human-machine systems. Ergonomics 35(10), 1243–1270 (1992)
43. Lazanyi, K., Maraczi, G.: Dispositional trust — do we trust autonomous cars? In: 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, pp. 135–140 (2017)
44. Zerilli, J., Bhatt, U., Weller, A.: How transparency modulates trust in artificial intelligence. Patterns 3(4), 100455 (2022)
45. Dor, L.M.B., Coglianese, C.: Procurement as AI governance. IEEE Trans. Technol. Soc. 2(4), 192–199 (2021). https://doi.org/10.1109/TTS.2021.3111764
46. Ponick, E., Wieczorek, G.: Artificial intelligence in governance, risk and compliance: results of a study on potentials for the application of artificial intelligence (AI) in governance, risk and compliance (GRC), December 2022
47. Raab, C.D.: Information privacy, impact assessment, and the place of ethics. Comput. Law Secur. Rev. 37, 105404 (2020). https://doi.org/10.1016/j.clsr.2020.105404
48. Lotlikar, P., Mohs, J.N.: Examining the role of artificial intelligence on modern auditing techniques. SMQ 9(2) (2021). https://doi.org/10.15640/smq.v9n2a1
49. Koshiyama, A., et al.: Towards algorithm auditing: a survey on managing legal, ethical and technological risks of AI, ML and associated algorithms. SSRN J. (2021). https://doi.org/10.2139/ssrn.3778998
50. Flatscher, M., Riel, A., Kösler, T.: The need for a structured approach towards production technology roadmaps in innovation-driven industries. In: Barafort, B., O’Connor, R.V., Poth, A., Messnarz, R. (eds.) EuroSPI 2014. CCIS, vol. 425, pp. 251–261. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-43896-1_22
51. Millar, C.C.J.M., Groth, O., Mahon, J.F.: Management innovation in a VUCA world: challenges and recommendations. California Manag. Rev. 61(1), 5–14 (2018). https://doi.org/10.1177/0008125618805111
52. Bennett, N., Lemoine, G.J.: What a difference a word makes: understanding threats to performance in a VUCA world. Bus. Horiz. 57(3), 311–317 (2014). https://doi.org/10.1016/j.bushor.2014.01.001
53. Reinhardt, K.: Trust and trustworthiness in AI ethics. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00200-5


54. Brattberg, E., Csernatoni, R., Rugova, V.: Europe and AI: leading, lagging behind, or carving its own way? (2020). Accessed 27 Apr 2023
55. Nowacka, A., Rzemieniak, M.: The impact of the VUCA environment on the digital competences of managers in the power industry. Energies 15(1), 185 (2022). https://doi.org/10.3390/en15010185
56. Sari, R.P.: Integration of key performance indicator into the corporate strategic planning: case study at PT. Inti Luhur Fuja Abadi, Pasuruan, East Java, Indonesia. Agric. Agric. Sci. Procedia 3, 121–126 (2015). https://doi.org/10.1016/j.aaspro.2015.01.024
57. Condliffe, J.: A single autonomous car has a huge impact on alleviating traffic. MIT Technol. Rev. (2017). Accessed 27 Apr 2023
58. Emaminejad, N., Akhavian, R.: Trustworthy AI and robotics: implications for the AEC industry. Autom. Constr. 139, 104298 (2022). https://doi.org/10.1016/j.autcon.2022.104298
59. Castro, D.: Europe will be left behind if it focuses on ethics and not keeping pace in AI development | view. Euronews (2019). Accessed 27 Apr 2023
60. Pries-Heje, J., Johansen, J., Messnarz, R.: SPI Manifesto (2010). https://conference.eurospi.net/images/proceedings/EuroSPI2012-ISBN-978-87-7398-154-2.pdf

Investigating Sources and Effects of Bias in AI-Based Systems – Results from an MLR

Caoimhe De Buitlear1, Ailbhe Byrne1, Eric McEvoy1, Abasse Camara1, Murat Yilmaz2, Andrew McCarren1,3, and Paul M. Clarke1,4(B)

1 School of Computing, Dublin City University, Dublin, Ireland
{caoimhe2.debuitlear4,ailbhe.byrne287,eric.mcevoy23,abasse.camara2}@mail.dcu.ie, {andrew.mccarren,paul.m.clarke}@dcu.ie
2 Department of Computer Engineering, Gazi University, Ankara, Turkey
[email protected]
3 Insight, The Science Foundation Ireland Research Center for Data Analytics, Dublin, Ireland
4 Lero, the Science Foundation Ireland Research Center for Software, Limerick, Ireland

Abstract. AI-based systems are becoming increasingly prominent in everyday life, from smart assistants like Amazon’s Alexa to their use in the healthcare industry. With this rise, evidence of bias in AI-based systems has also been witnessed. The effects of this bias on the groups of people targeted can range from inconvenient to life-threatening. As AI-based systems continue to be developed and used, it is important that this bias be eliminated as much as possible. Through the findings of a multivocal literature review (MLR), we aim to understand what AI-based systems are, what bias is and the types of bias these systems have, the potential risks and effects of this bias, and how to reduce bias in AI-based systems. In conclusion, addressing and mitigating biases in AI-based systems is crucial for fostering equitable and trustworthy applications; by proactively identifying these biases and implementing strategies to counteract them, we can contribute to the development of more responsible and inclusive AI technologies that benefit all users.

Keywords: AI · bias · artificial intelligence · risks

1 Introduction

The concept of artificial intelligence (AI) has been around for a long time. The idea appeared c. 8th century BC when Homer wrote about the Gods being waited on at dinner by mechanical ‘tripods’. It appears consistently throughout history from science fiction writers who wrote about intelligent machines being a possibility, to their presence in religions such as Judaism, where the artificially created being known as a Golem appears [1]. An AI-based system is a “computer system able to perform tasks that ordinarily require human intelligence” [2]. Examples of this include systems that can play games like chess, draughts and checkers, filter spam emails and autocorrect text [3, 4].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 20–35, 2023. https://doi.org/10.1007/978-3-031-42307-9_2


Bias is defined as “the action of supporting or opposing a particular person or thing in an unfair way”, which can be done by making an unreasoned judgement or allowing personal beliefs to influence a decision [5]. Bias may be present in AI-based systems for various reasons, including how they were trained and the very data used to train them. This can lead to “algorithmic bias”, which occurs when an AI-based system produces a result that is systematically incorrect [6]. These incorrect results can have varying effects on the groups of people the AI-based system is biased against. An AI healthcare system was found to be racially biased, facial recognition AI systems in use have been shown to have lower accuracy on darker-skinned females, and search engines using AI were found to discriminate based on race and gender [7].

In this paper, we will examine what AI-based systems are, we will attempt to understand bias and the types of bias which appear in AI-based systems, we will study the potential risks and effects of bias in AI-based systems, and lastly, we will strive to identify techniques to reduce bias in AI-based systems. The objectives of this study are to:

1. Elucidate the nature and functioning of AI-based systems and their increasing prevalence in various aspects of daily life.
2. Define and categorize the different types of bias present in AI-based systems.
3. Investigate the potential risks and consequences stemming from biased AI-based systems, and their impact on targeted populations.
4. Identify effective methodologies and strategies to minimize bias in AI-based systems.
5. Emphasize the significance of reducing bias in AI-based systems and advocate for the development of equitable and reliable technologies.
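To make the notion of a "systematically incorrect" result concrete, the following sketch (our own illustration, not drawn from the reviewed literature) computes one simple group-level fairness check: the gap in positive-outcome rates between two demographic groups, sometimes called the demographic parity difference. The group labels and decision data are hypothetical.

```python
# Illustrative sketch: surfacing possible algorithmic bias by comparing a
# system's positive-outcome rates across two groups.
def positive_rate(outcomes):
    """Fraction of favourable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in positive-outcome rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Hypothetical decisions (1 = favourable) for two demographic groups:
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% positive
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% positive

gap = demographic_parity_difference(group_a, group_b)
print(f"parity gap: {gap:.2f}")  # → parity gap: 0.50
```

A large gap does not by itself prove unfairness (base rates may legitimately differ), but it is the kind of quantitative signal that prompts closer inspection of the training data and model; more elaborate metrics of this family are surveyed in the literature reviewed below.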

2 Research Methodology

2.1 Methodology

This research paper was created as part of a multivocal literature review (MLR). To fulfil the research objective, we adopted an MLR approach that included both academic/peer-reviewed (white) and non-academic/non-peer-reviewed (grey) literature. We implemented only a partial application of the MLR process [78], employing predefined search strings to evaluate white literature alongside non-traditional academic sources. Accordingly, we made use of Google, Google Scholar, ScienceDirect, IEEE and other sources to conduct our research.

2.2 Search Queries

Initial high-level surveys of the related literature allowed us to break the main topic down into subtopics, for which associated research questions (RQs) were elaborated. Individual team members examined different subtopics in detail. The search queries used included strings such as: "AI systems", "bias in AI systems", "what causes bias in AI", "machine learning and bias" and "risks of bias in AI".


C. De Buitlear et al.

2.3 Inclusion/Exclusion

We limited our search space to papers written in English so that no translation was needed during our research. To determine which sources were appropriate, we endeavoured to include only literature from peer-reviewed academic outlets and grey sources offering robust levels of quality (e.g., moderated blogs).

2.4 Methodology Limitations

There are a number of significant methodological limitations in this work; these are discussed in Sect. 4.

3 Literature Review

3.1 RQ1: What Are AI-Based Systems?

Although the concept of AI may have existed previously, it was only in 1955 that the term 'artificial intelligence' was coined by John McCarthy. McCarthy was the organizer of the first academic conference on AI, held at Dartmouth College in Hanover, New Hampshire [3]. In the years following this conference, computers advanced and became more readily available, with larger memory and greater processing power, allowing the field of AI to progress [8].

There are many existing definitions of AI, which is considered to be "intelligence demonstrated by machines", as opposed to the natural intelligence shown by humans and animals [3]. In his 1950 paper "Computing Machinery and Intelligence", Alan Turing asked the question "can machines think?" and devised a test in which a human evaluator must ask questions and determine which answers belong to a human and which to a machine. Although Turing created this test, he did not define what it meant for a machine to be intelligent [9, 10]. Later, Stuart Russell and Peter Norvig published a textbook on AI in which they consider four potential definitions of AI:

• Systems that think like humans,
• Systems that act like humans,
• Systems that think rationally,
• Systems that act rationally [9].

In perhaps simpler terms, AI "is a field, which combines computer science and robust datasets, to enable problem-solving" [9]. In a general sense, AI systems work by analysing data to identify associations and patterns, later using this knowledge to make predictions about future events and states [11]. The data utilised in the analysis is sometimes referred to as training data [11].

AI-based systems may require large amounts of data during the training process, and it is therefore necessary to have access to large and reliable datasets. Such access began in the 1960s with the arrival of the first data centres and the development of relational databases. This was followed in the 2000s by the rise of big data. Big data is "data that contains greater variety, arriving in increasing volumes and with more velocity" [12].


This rise was due to the exponential growth of the internet and of the number of internet users, from humans to objects and devices connected to the internet [12].

While training an AI-based system, different learning types can be applied. These learning types are categories of machine learning. Machine learning is a subfield of AI and is "the field of study that gives computers the ability to learn without being explicitly programmed" [13]. It allows machines to learn from data by finding patterns, gaining insight and improving at the task they are being designed to do [14]. In supervised learning, a labelled dataset is used to allow an algorithm to learn. In unsupervised learning, the algorithm uses an unlabelled dataset and attempts to extract features and patterns. A combination of supervised and unsupervised learning can also be used [3].

Deep learning, a subfield of machine learning, is considered deep because it is composed of a neural network containing more than three layers [9]. Neural networks comprise a series of algorithms that seek to identify relationships in a set of data by attempting to emulate the operation of the human brain [13], using mathematical formulas and functions to make decisions [15]. Deep learning means that a system can learn very complicated behaviours without the need for a feature extractor designed through "careful engineering and domain expertise" [16]. This has allowed notable advances in many challenging areas of AI, surpassing the ability of previous techniques in speech recognition, image recognition, healthcare applications and natural language understanding [16].

AI-based systems can be broken down into two main categories: weak/narrow AI systems and strong AI systems. Weak AI is "AI trained and focused to perform specific tasks" [9]. These systems are explicitly trained to carry out specialised tasks. Although the term 'weak' is used to describe them, these systems are very powerful at what they are trained to do [4]. They have proven able to perfect their task and outperform humans, dating back to 1996, when IBM's Deep Blue defeated chess grandmaster Garry Kasparov. This was possible because Deep Blue could process 200 million positions per second [10]. This category of AI is prevalent in contemporary AI-based systems, for example in smart assistants such as Siri and Alexa, in Google Maps, in recommendation systems on services such as Spotify and Netflix, and in chatbots, which have replaced some customer service agents [4].

Strong AI, also known as artificial general intelligence (AGI), is "the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can understand or learn" [3]. To accomplish this, the system would need to be able to solve problems, learn new skills, apply knowledge to tasks, adapt to changes, communicate in natural language and plan for the future; it would need to be able to carry out all the functions human intelligence can achieve. Some researchers believe strong AI may be impossible to achieve, while others expect it before 2045, as stated by Ray Kurzweil in 'The Singularity is Near' [3, 4].

Two subfields of AI are natural language processing (NLP) and computer vision. NLP is the ability of a computer to understand and manipulate text and speech in the same way that a human can; it does this by using a combination of computational linguistics and machine learning methods [17]. Computer vision is the ability of a computer to


analyse images and videos and identify components within them; it can then take action based on what it has analysed, powered by neural networks and deep learning [9].

AI is currently being used in many industries around the world. For example:

• Research and development - to model experimental scenarios and to automate testing,
• Healthcare - to help diagnose patients using image analysis and to monitor patients via wearable sensors,
• Finance - to predict future outcomes and trends to help improve investing,
• Education - to automate administrative tasks such as marking exams,
• Retail - to eliminate the need for cashiers by using cameras and sensors to track the items customers take and charging their accounts once they leave the store,
• Media and entertainment - to moderate content by scanning for and removing unwanted content,
• Transportation - to manage traffic by collecting and analysing traffic data,
• Agriculture - to produce more accurate weather forecasts and to analyse soil quality [3].

AI is developing extremely quickly and is likely to have a significant impact on the future of many industries. There are advantages to the evolution of AI-based systems: it could lead to increased productivity, as the systems would not need to take breaks, and to lower costs, as employees would not need to be paid; the systems may also be more accurate than humans. The downside is that it could lead to unemployment in many areas, which could be highly disruptive. As AI-based systems progress, the ethics of what they can do must also be taken into consideration, particularly in the areas of security, privacy and safety [10, 18].

3.2 RQ2: What Are the Types of Bias in AI-Based Systems?

Data bias occurs when biased data is used to train the algorithms. This can happen if data generation or data collection does not include disadvantaged groups in the data, or where such groups are "wrongly depicted" in the data [20].
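To make the idea of data bias concrete, the following toy sketch (not from the paper; all scores and group sizes are invented) shows how a model whose single decision threshold is learned mostly from a well-represented group can misclassify an underrepresented group whose scores are distributed differently:

```python
def fit_threshold(data):
    """Learn the midpoint between the mean scores of the two classes.
    data: list of (score, label) pairs, with label 1 (positive) or 0."""
    pos = [s for s, y in data if y == 1]
    neg = [s for s, y in data if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(data, thr):
    # Predict positive when score >= thr; return fraction correct.
    return sum((s >= thr) == (y == 1) for s, y in data) / len(data)

# Invented scores: group B's positives systematically score lower.
group_a = [(0.9, 1), (0.8, 1), (0.4, 0), (0.3, 0)] * 5   # well represented
group_b = [(0.55, 1), (0.50, 1), (0.15, 0), (0.10, 0)]   # underrepresented

train = group_a + group_b[:1]   # group B is barely present in training
thr = fit_threshold(train)      # threshold is dominated by group A

print(accuracy(group_a, thr))   # 1.0 -- perfect for the majority group
print(accuracy(group_b, thr))   # 0.5 -- B's positives fall below thr
```

The model is deliberately trivial, but the same effect appears in real classifiers whenever the training distribution underrepresents a group.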
Data generation acquires and processes observations of the real world and delivers the resulting data for learning [21]. Data may also be synthesised, i.e. "generated from a model that fits to a real data set" [22]. There is ongoing research into synthetic data generation, with some positive results in areas such as healthcare [23].

Data collection is needed where there is insufficient data readily available to train an AI-based system. There are three main steps in data collection: data acquisition, measuring and analysing [24]. Data acquisition harvests data from a variety of sources such as surveys, feedback forms, websites and existing datasets. This data must be gathered in a meaningful way that makes sense for the AI-based system being trained. The data then needs to be measured so that the model can learn correctly from it, which can be done by labelling the raw data and/or adding meaningful tags to it [25]. This process can be done manually or automatically (i.e., by another AI-based system trained to do so). The data can then be analysed to extract "meaningful knowledge from the data and make it readable for a machine learning model" [25].

Institutional biases include discriminatory practices that arise at the institutional level of analysis and operate through mechanisms that go beyond prejudice and discrimination at the


individual level [26]. This bias is not always a result of conscious discrimination: the algorithms and data may appear unbiased, yet their output reinforces societal bias [27]. Even if an individual's negative associations, stereotypes and prejudices against outgroups are removed, the discrimination still happens, even in an ideal environment with no shared biases or prejudices.

Societal bias arises within AI-based systems because they rely heavily on data generated by humans or collected via systems created by humans [28], and human assumptions and inequalities are therefore reflected in that data. This can happen due to certain expectations humans have, or areas about which they are not informed. A global initiative, "Correct the Internet" by DDB Group Aotearoa New Zealand, is trying to change bias within the internet. Using Google to search for "who has scored the most goals in international football?" returns "Cristiano Ronaldo", even though Christine Sinclair holds the record. This might suggest that the Google search engine has learnt to be gender-biased, returning results that might be considered incorrect [29]. Certain common societal definitions become intermingled in search engines and sustain societal norms; for example, "football" could mean not just association football but also rugby football. It is perhaps unavoidable that technology of this nature will sustain existing common language usage, even if some parties consider it biased in some way. The same impact can be seen in emerging AI technology such as ChatGPT, where bias in the training data for large language models may be carried forward into the resulting AI applications [77].

Sampling bias can occur when a dataset is created by selecting particular types of instances more than others. When a model is trained with such a dataset, a group can end up underrepresented.
For example, Amazon created an AI recruiting tool with the aim of reviewing resumes for certain positions. In 2015, however, the company discovered that for software developer roles the system was biased towards men, because the models had been trained on resumes submitted over the previous ten years, when men dominated the tech industry [30].

It therefore seems crucial to keep humans integrated in the evaluation of AI models, as this is important for accurate model performance [31]; the approach is called "human-in-the-loop". Humans can correct a machine's incorrect results using their own expertise, which can improve the machine's performance by teaching it how to handle certain data [32]. However, human evaluation bias can affect the performance of an AI model, since human evaluators must validate the performance of an AI-based system. An example is confirmation bias, the tendency for a human to look for, interpret, focus on and remember information that supports their own prejudices. Labels can be assigned based on prejudices or prior beliefs rather than objective evaluation [33]. Data scientists should assess the data a system uses and judge how representative it is. If biases are identified, the correct adjustments can be made, which means that machine learning biases tend to decrease over time and create much more fairness than harm [34].

There can also be design-related bias, in which biases arise from the limitations of an algorithm or from constraints on the system such as computational power, as in Spotify's shuffle algorithm [33]. How we as humans perceive randomness is


not how it is perceived by computers, because computers essentially use pseudo-random numbers [35]. Spotify users complained that they were getting the same song multiple times within a shuffle; this happened because each song had an equal chance of appearing at any position in the order. Spotify has since changed its algorithm to make it less random (essentially more biased towards spacing out certain songs) so that it appears more random to the listener [36]. This is a particularly interesting observation: although randomness is a property that may be associated with reduced bias, there are clear instances where randomness is not attractive to human agents.

It is important to note that not every bias within algorithms leads to discrimination or less favourable treatment. For example, some algorithms may contain biases that are justifiable in job situations, such as an age limit or certain required qualifications [37]. This is why algorithms have to be assessed to determine any legal implications. For example, New York City passed a measure that bans employers from using automated employment decision tools to screen applicants without a bias audit having been performed on the tool in advance [38]. Laws like this are put in place in order to identify and mitigate the risks of AI-based systems.
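The shuffle anecdote can be sketched in code (Spotify's actual algorithm is not public in this form; the "spread" strategy below is only a plausible stand-in): a uniformly random shuffle lets tracks by the same artist cluster, whereas a deliberately biased shuffle spaces artists out so that the result feels more random to listeners.

```python
import random

def uniform_shuffle(tracks, rng):
    # True (pseudo-)random shuffle: every ordering is equally likely,
    # so same-artist tracks often end up adjacent.
    tracks = list(tracks)
    rng.shuffle(tracks)
    return tracks

def spread_shuffle(tracks, rng):
    # Deliberately biased shuffle: shuffle within each artist, then
    # deal the tracks out round-robin so each artist is spaced apart.
    by_artist = {}
    for artist, title in tracks:
        by_artist.setdefault(artist, []).append((artist, title))
    for group in by_artist.values():
        rng.shuffle(group)
    order = []
    while any(by_artist.values()):
        for group in by_artist.values():
            if group:
                order.append(group.pop())
    return order

playlist = [("A", "a1"), ("A", "a2"), ("B", "b1"),
            ("B", "b2"), ("C", "c1"), ("C", "c2")]
print(spread_shuffle(playlist, random.Random(0)))  # artists alternate A, B, C
```

Here the bias is introduced on purpose: the less uniform ordering better matches human expectations of "random".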
The procedural approach, which focuses on recognizing biases in the decisionmaking algorithms [40], and the relational approach, which focuses on preventing biased decisions in the algorithmic output. The potential risk in procedural approaches is that interventions can be complex and difficult to implement due to the AI algorithms being too sophisticated. This leads to major upbringing of bias in these AI-based systems. They are also trained with monolithic datasets and utilise unsupervised learning structures that might make bias difficult to comprehend. With further advancements in explainable AI, procedural approaches will become more beneficial [40]. However, declaring that an algorithm is free from bias does not ensure a nondiscriminatory algorithmic output. Discrimination can appear as a consequence of bias in training. The metrics for measuring bias from a technical perspective are called statistical measures. Statistical measures focus on investigating relationships between the algorithms’ predicted outcome from the different demographic distribution to the actual outcome achieved. This measure covers group fairness. As an illustration, if 7 out of 10 candidates were given a mortgage, the same ratio from the protected group should have the right to obtain a mortgage. Despite the demand in statistical metrics, a potential risk has appeared that statistical definitions are inadequate to estimate the absence of bias in algorithmic outcomes. This is because they already assume the accessibility of verified outcomes and may ignore other critical attributes of the classified subject rather than the sensitive ones [41].


With the rise of AI-based systems, it remains unclear whether we can render them immune to discriminatory behaviour. Anti-discrimination laws vary across countries; there is no universal law covering the various actions, which raises complex challenges for those engineering AI-based software systems for global audiences. In the European Union, for instance, anti-discrimination legislation is set out in Directives: Directive 2000/43/EC addresses discrimination on the grounds of race and ethnic origin, and Chapter 3 of the EU Charter of Fundamental Rights is also relevant [42]. In the US, anti-discrimination law is described in Title VII of the Civil Rights Act of 1964, which forbids discrimination in employment on the grounds of race, sex, national origin and religion [43].

A major bias risk for AI-based systems in legal trials addressing discrimination concerns the discrimination measures that characterise underrepresented groups, e.g. disparate impact or disparate treatment [44, 45], and the relevant population affected by the case of discrimination [46]. These two risks run parallel to the problems explored in the technical perspective introduced earlier. The Castaneda rule asserts that the number of people selected from the protected group within an applicable population must not fall more than three standard deviations below the number expected under random selection [44]. Although such rules can relieve a number of discriminatory issues, the risk remains complex, as completely different scenarios could arise that an AI-based system will fail to take account of, leading to bias.

Digital discrimination is prevalent in AI-based systems, as it gives a set of individuals unfair treatment based on characteristics such as gender, ethnicity, income and education. When people think of digital discrimination, they tend to think of it as a technical phenomenon regulated by law.
However, it also needs to be considered from a sociocultural perspective. There are countless ways in which bias can arise in AI-based systems from a social perspective. A potential risk of bias in AI, highlighted from a social perspective, is the potential of digital discrimination to reinforce existing social inequalities; this phenomenon is called intersectionality [47]. Ultimately, it is formed by the heterogeneous ways in which gender and race interact with class in the labour market. No agreed evaluation methodology exists among AI researchers for ethically assessing their bias classifications, as bias can differ across contexts and be assessed differently by different people [48]. The way a dataset is defined and maintained may incorporate the assumptions and values of its creator(s) [49]. Hildebrandt and Koops have examined the design of sociotechnical infrastructure that allows humans to anticipate and respond to how they are profiled [50]. Morality and associated moral values, which are not universally agreed upon, must also somehow be considered in the design of AI-based systems.

From an ethical standpoint, Tasioulas states that discrimination does not need to be unlawful to be unfair [51]. Some may say that Isaac Asimov's Three Laws of Robotics could help systems address bias; however, these principles have been criticised as too vague to be helpful. Frameworks for AI ethics to prevent bias have been proposed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems [52]. Ethical questions around AI usage have been arranged into three interconnected levels [51]. One of the levels is the social morality of AI; the risk of bias here is serious, because human emotional responses such as anger, guilt or empathy may influence the creation of AI-based systems.


Another level concerns people's interaction with AI. Citizens have a right to exercise their own moral judgement in relation to appropriate codes of practice, but from a technical point of view it is not yet clear what is considered bias and what is not [51].

3.4 RQ4: How Can Bias Be Resolved/Prevented in AI-Based Systems?

Evidence of unintentional algorithmic bias in AI-based systems has been documented extensively in recent investigations [53]. Algorithmic discrimination can be introduced into a system during its development stages [54]. It is therefore critical for organisations to detect and address the origin of such biases throughout the development process. One way to accomplish this is to attempt to inhibit the introduction of bias by incorporating appropriate specialised capabilities into the system development process [55]. However, because AI-based systems are designed by humans and built on input data supplied by humans, who are inadvertently prone to bias and inaccuracy, AI-based systems can accidentally acquire these human qualities and embed them within the system [56]. Since the ambition of AI-based systems is to deliver impartial, data-driven and neutral decision-making that has a positive effect on many people's lives, algorithmic discrimination has to be largely nullified [54]. Nevertheless, given the many complexities of fairness and its contextual nature, it is typically not feasible to completely de-bias an AI-based system or to guarantee its fairness [57, 58]. There are also moral concerns about algorithmic fairness procedures, given that these procedures are designed to produce predictions that are impartial in aggregate rather than to ensure the impartial treatment of specific individuals [59].

Moreover, modern approaches to meeting regulatory requirements focus on aggregate-level functionality, which can conceal stratification among subpopulations. The first and most important step in preventing biases within AI-based systems is to establish methods for identifying them. There have been numerous research projects aimed at identifying discrimination in AI-based systems; however, most of them require comprehensive understanding of the internal mechanics of the algorithm and/or of the dataset used. For instance, McDuff [60] advocates a framework of "classifier interrogation", which requires labelled data to investigate the input domains that may give rise to bias. Furthermore, procedures for identifying bias within AI-based systems can be somewhat task-specific and complex to establish. A variation of the Implicit Association Test can be used to identify prejudice in word embeddings; although this is a legitimate extension of the original purpose of the Implicit Association Test, it is unclear how to progress beyond bias identification in independent settings [61].

Bias mitigation techniques are classified by the position at which an algorithm can intervene in a given AI-based system, based on the distinction from Calmon et al. [62], which delimits three scopes of interest. If an algorithm has the ability to alter the training data, then pre-processing can be applied [63]. If the algorithm is permitted to alter the learning mechanism of a model, then in-processing can be applied [64, 65]. If the algorithm can only operate on the final model as a black box, with no ability to alter the training data or the learning algorithm, then
Moreover, modern resolutions to accomplish administrative necessities are directed on aggregate-level functionality, which can conceal stratification amongst subpopulations. The first and most important stage when it comes to naturally preventing biases contained within AI-based systems, is to establish methods for identifying them. There have been numerous research projects designed at identifying discrimination in AI-based systems. However, most of them necessitate comprehensive understanding of the internal mechanics of the algorithm and/or the dataset provided. For instance, McDuff [60] advocates a structure of “classifier interrogation” which necessitates characterised data to investigate the capacity domains that may result with a bias. Furthermore, procedures for identifying bias within AI-based systems can be to some extent job specific and complex to establish. A variation of the Implicit Association Task can be utilised in order to identify prejudice in word implants. Regardless of this being a legitimate function of the initial objective of Implicit Association Task, it is ambiguous how to progress above bias identification in independent settings [61]. Bias mitigation techniques are established in the position in which these algorithms can interfere within a determined AI-based system which is based on the distinction from Calmon et al. [62], which delimit three scopes of interest. If an algorithm has the ability to alter the training data, then pre-processing could be applied [63]. If the algorithm is authorized to alter the learning mechanism for a model, then in-processing could be applied [64, 65]. If the algorithm can exclusively operate the concluded model as a black box with the absence of capabilities to alter the training data or learning algorithm, then


post-processing can be applied [66]. The majority of this section concentrates on pre-processing because it is the most adaptable stage in the data science pipeline: it is independent of the modelling procedure and can be unified with data delivery and publishing processes [62, 67].

Omitted variable bias shows that simply disregarding a variable is an inadequate way to prevent prejudice, because any remaining variables that correlate with the omitted variable still carry information about it [68]. Indeed, recent research has found that, in order to establish that an AI-based system does not discriminate, it is essential that data about the protected variable is used when designing the algorithm [69]. One example of this approach is the idea of dropping the gender variable, which originates from Calmon et al. [62], who use gender elimination as a benchmark model. The authors of that publication also note that this technique may not always be effective, as other variables that are not eliminated could be associated with the protected feature and would still introduce bias.

Measuring prejudice in AI-based systems can assist in eliminating bias from datasets that have been identified as biased or incomplete, or that encode inequitable decisions, and can accordingly encourage fairness in such systems [70]. To accomplish this, algorithms translate non-discrimination requirements into machine-readable constraints, and models are then developed subject to those constraints. Software toolkits are currently being established that comprise statistical procedures for measuring and mitigating bias in AI-based systems. Although it is difficult to determine how quickly these modern toolkits are being adopted in practice, their accelerated development suggests a pressing need in both the private and public sectors.
Examples of such software tools include IBM's 'AI Fairness 360 Open Source Toolkit' and Accenture's 'Fairness Tool'. IBM's AI Fairness 360 aims to enable developers to "examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle" [72, 73]; it provides tests and algorithms to measure fairness and prevent discrimination in datasets and models. Accenture's Fairness Tool aims to detect bias and potential proxies for protected attributes inside the datasets used by algorithmic systems [71]; it can remove correlations between sensitive variables and the proxies that can lead to biased outcomes.

Unfortunately, investigating the bias-accuracy tradeoff gives an incomplete picture of the impartiality of an algorithm. Such deficiencies have been analysed by Dwork et al. [41], who show how an adversary could achieve statistical consistency while still treating the protected group unjustly. Fish et al. [64] exhibited these deficiencies in action even in the absence of adversarial manipulation. Among other results, they showed that adapting a classifier by randomly flipping specific output labels with a specific probability already substantially exceeds the previous fairness literature in both bias and accuracy.

An additional challenge for approaches that equalise outcomes across the discriminating factors of AI-based systems is whether they take into consideration genuine recorded disparities or under-representation; in other words, not all unequal outcomes are unfair. One such instance is the disparity between the numbers


of male versus female CEOs in industry. This instance highlights the socio-technical view of data and prejudice, in which an abundance of concerns are social first and technical second [74, 75]. Attempting to resolve these complications through technical means alone may only aggravate current problems [76].
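As an illustration of the pre-processing scope discussed above, the following sketch implements a simplified reweighing scheme in the spirit of Kamiran and Calders, a technique found in toolkits such as AI Fairness 360 (the data and exact shape of the code are invented for illustration): each (group, label) cell receives a weight that makes group membership and label statistically independent in the weighted data.

```python
from collections import Counter

def reweigh(samples):
    # samples: list of (group, label) pairs. Returns a weight per
    # (group, label) cell: w(g, y) = P(g) * P(y) / P(g, y), which makes
    # group and label independent under the weighted distribution.
    n = len(samples)
    group_counts = Counter(g for g, _ in samples)
    label_counts = Counter(y for _, y in samples)
    joint_counts = Counter(samples)
    return {
        (g, y): (group_counts[g] / n) * (label_counts[y] / n)
                / (joint_counts[(g, y)] / n)
        for (g, y) in joint_counts
    }

# Invented skewed data: group "m" receives the positive label far
# more often than group "f".
data = [("m", 1)] * 6 + [("m", 0)] * 2 + [("f", 1)] * 1 + [("f", 0)] * 3
weights = reweigh(data)
print(weights[("f", 1)])  # ~2.33: the rare (f, 1) cell is upweighted
print(weights[("m", 1)])  # ~0.78: the frequent (m, 1) cell is downweighted
```

A learner that honours these instance weights then sees a training distribution in which the protected attribute no longer predicts the label.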

4 Limitations of Research

Research is a crucial instrument in furthering knowledge and understanding of topics in various fields. Throughout our research, we encountered several limitations. One of the foremost was the timeframe available to undertake this MLR, which was conducted over a six-week period in January and February 2023. The time limit arises from the nature of the original assignment: a four-person team research project forming part of a final-year undergraduate module in Software Engineering. It is also the case that this review has largely examined data-driven AI-based systems, but not the earlier knowledge-driven systems that were more prevalent in the 1990s and which continue to be in use today.

Given the undergraduate status of the primary researchers, there was an absence of prior formal research training. However, all researchers received instruction on the MLR technique at the outset and furthermore engaged on a weekly basis with a senior academic researcher who directed their efforts. This training and direction helped to reduce the impact of core-researcher inexperience. Guidance on writing academic papers was also provided so that the core researchers could strive towards high academic quality in their work products.

A final limitation emanates from the adoption of an MLR methodology. While this methodology permits the inclusion of non-peer-reviewed work, which can have a positive impact on the research, it may also result in the inclusion of materials that are not of traditional scientific standing. For example, various arxiv.org papers are included in this work, and some of them may ultimately not succeed in scientific peer review and therefore be of diminished scientific value.

5 Directions for Future Research

This research has identified various complex challenges and risks that arise in AI-based software systems. Important future research could integrate ethical and moral knowledge into software engineering education and practice. This has clear resonance with societal and cultural values and expectations, both of which vary among and within populations, which may indicate a bumpy road ahead for AI-based software systems that seek effectiveness across broader populations.

Evaluating the extent to which a software system might be biased is also worthy of further research. Perhaps the design and deployment of AI-enabled systems can learn from safety-critical systems development, where approaches such as Failure Mode and Effects Analysis (FMEA) help to build safer systems. Building less biased (or appropriately biased) software systems might incorporate an analysis of the possible sources of bias and their impact in production systems; this could become known as Bias Effect Analysis.
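Bias Effect Analysis is named here only as an idea; one hypothetical realisation, borrowed directly from FMEA, would score each candidate bias source for severity, occurrence and detectability and rank by risk priority number (RPN). Everything below, including the bias sources and their scores, is invented for illustration:

```python
# Hypothetical Bias Effect Analysis worksheet, by analogy with FMEA.
# Each bias source gets 1-10 scores: severity of the effect, likelihood
# of occurrence, and difficulty of detection (higher = harder to catch).
bias_sources = [
    ("sampling bias in training data", 8, 7, 4),
    ("label noise from human raters",  6, 5, 6),
    ("proxy for protected attribute",  9, 4, 8),
]

def rpn(severity, occurrence, detection):
    # FMEA-style risk priority number.
    return severity * occurrence * detection

# Highest-risk bias sources first, to prioritise mitigation effort.
ranked = sorted(bias_sources, key=lambda s: rpn(*s[1:]), reverse=True)
for name, sev, occ, det in ranked:
    print(f"{rpn(sev, occ, det):4d}  {name}")
```

In this invented worksheet the protected-attribute proxy ranks highest (RPN 288) because, although rarer, it is both severe and hard to detect.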

Investigating Sources and Effects of Bias in AI

31

6 Conclusion

This research paper provides a review of AI-based systems by exploring their history, the techniques used to train them, the types of AI-based systems that exist and the areas in which they are currently being utilised around the world. It also investigates the various kinds of bias that may occur in AI-based systems by presenting findings on data bias, institutional bias, societal bias, sampling bias, evaluation bias and design-related bias. It highlights the challenges that bias in AI-based systems poses today by looking at technical, legal, social and ethical risks. It also examines the approaches used to measure bias and whether discrimination can be removed from AI-based systems, and touches on best practices suggested by AI experts to help prevent such bias. Finally, it examines how crucial it is to identify bias in order to prevent it, and the process of removing algorithmic bias through pre-processing, in-processing and post-processing techniques.

Interesting questions revolve around such areas as “What is bias?”, “Is bias bad?” and “Should we aspire to remove bias?” These questions have for the most part been the preserve of academic participants outside computer science and software engineering. However, given the rise of AI, these questions now take on much greater significance in day-to-day life, as AI-based systems increasingly support decisions that affect broader swathes of society. It therefore seems wise to integrate the accumulated learning of fields such as ethics into software engineering in the near term, and certainly well before the AI-enabled systems revolution takes full effect.

Acknowledgements. This research is supported in part by Science Foundation Ireland (https://www.sfi.ie/) grant No. SFI 13/RC/2094_P2 to Lero - the Science Foundation Ireland Research Centre for Software. It is also supported in part by Science Foundation Ireland (https://www.sfi.ie/) grant No. SFI 12/RC/2289_P2 to Insight - the Science Foundation Ireland Research Centre for Data Analytics.



Quality Assurance in Low-Code Applications

Markus Noebauer1, Deepak Dhungana2, and Iris Groher3

1 insideAX GmbH, Pasching, Austria
[email protected]
2 IMC University of Applied Sciences, Krems, Austria
[email protected]
3 Johannes Kepler University Linz, Linz, Austria
[email protected]

Abstract. Low-code applications promise to lower the hurdles in designing domain-specific apps based on reusable components without prior knowledge of programming. More and more platforms and tools support this paradigm. However, as soon as such applications go beyond the prototype stage and become part of the IT landscape, they start posing challenges in terms of design cultures, corporate processes, security, and performance. In order to ensure high quality standards in low-code apps, one must implement quality assurance measures and enforce these rules. However, testing these apps in traditional ways seems infeasible, as their developers are not necessarily trained software engineers. This paper presents an approach for enforcing quality assurance measures on low-code apps, while also following the philosophy of low-code in the testing procedures.

Keywords: testing low-code apps · quality assurance in low-code apps

1 Introduction

Low-code platforms enable domain experts to create useful applications without in-depth knowledge of programming languages [5, 7, 9]. This is achieved through the abstraction of technical details and (usually) a graphical modeling framework, enabling experts to quickly drag and drop the required components and to focus on the domain-specific processes rather than the implementation of the function itself. Such low-code platforms (e.g., Microsoft Power Apps1, Mendix2, Caspio3) have several advantages: domain experts are empowered to create their tools independently, and the focus shifts away from the technical implementation of the function towards the usefulness for the end users.

Supported by FFG, project: 891632.
1 https://powerapps.microsoft.com/.
2 https://www.mendix.com/.
3 https://www.caspio.com/.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 36–46, 2023. https://doi.org/10.1007/978-3-031-42307-9_3


A high degree of automation and ease in the creation of apps also means that there is less control over what kinds of applications are being created. The proliferation of self-created apps by power users can drift into “shadow IT”. Companies that can no longer control the IT tools in use and can no longer protect the data of their customers, suppliers, employees, etc. face high penalties and a considerable loss of reputation in the event of a data leak. Apps that were created on low-code platforms and are popular within companies must be maintained, rolled out and further developed. The maintainability and scalability of these apps is a quality criterion that is hardly considered by non-developers. In retrospect, this can result in great effort and cost for IT to rebuild a popular app so that it can be transferred to the regular life cycle of the company software.

In this paper, we present an approach for enforcing quality assurance for low-code platforms. Section 2 discusses quality assurance factors, while Sect. 3 presents a conceptual solution to address these in a low-code environment. Our tool-supported approach and a first evaluation are described in Sect. 4. Section 5 contains related work. In this paper we demonstrate the approach based on the Microsoft Power Platform; however, the general approach could be used with other frameworks and technologies.

2 Quality Assurance Factors

Low-code applications are designed to streamline the development process by providing tools and features that require minimal coding skills. However, this does not mean that quality assurance should be overlooked [8, 13]. In addition to the functionality of the application, where testers focus on the application’s core functionalities and features, it is important to enforce coding standards and best practices, leading to more consistent architecture, layout and design standards across the portfolio of applications in any given organization.

As in traditional software engineering, the quality of low-code applications can be ensured via static and dynamic testing procedures. Given the nature of low-code applications, behavior testing is typically done through the user interface and automated through frameworks such as Selenium. Behavior testing is out of scope of this paper; here we focus on static analysis of low-code applications. In traditional software engineering, static analyses are typically adopted to detect code smells [1], enforce coding conventions [4], and improve code quality in terms of maintenance and evolution. We argue that a similar static analysis of low-code applications is needed to improve their overall quality. Given the nature of these apps, there is hardly any code to analyse, so such analysis needs to consider other aspects of the application.

2.1 Layouts and Corporate Designs

The application should be tested thoroughly to ensure that it is user-friendly and meets the needs of its intended users. Just like any other software application, low-code apps must follow the internal guidelines and conventions of the organization. It is equally important to ensure that the app is compatible with all the devices and browsers on which it is intended to run. In general, the design and layout of the screens must be tested, ensuring that they meet usability standards and accessibility requirements. Some of the quality assurance


factors in this context are: naming conventions, screen sizes, device resolutions, quality of images, logos, fonts and colors, etc.

2.2 Adherence to Internal Processes

Typically, low-code applications work with data provided through external sources, so one of the first quality assurance measures is to validate that the data sources used by the app are accurate and up to date. Additionally, it must be ensured that users can easily navigate through the app and find the information they need, thereby adhering to the internal approval processes and guidelines. It is equally important to test how the application interacts with other systems or APIs, by examining the code and ensuring that it adheres to the appropriate protocols and standards. Some examples of quality assurance factors in this context are: navigation paths and structures, source and validity of data used, API calls and external interactions, etc.

2.3 Performance and Security

As more and more low-code applications make their way into productive software used by customers and accessing sensitive data, it must be verified that only authorized users have access to sensitive data by testing user permissions and roles. Data must be transmitted securely, and the app’s data retention policies must respect the regulations. Static security testing can be done to check whether the application has any known vulnerabilities, such as SQL injection or cross-site scripting. The quality therefore also depends on the data access methods, compliance with regulations, data encryption and retention policies, and the application’s approach to addressing known security vulnerabilities.

3 Quality Assurance Approach

Given the wide range of technology stacks and environments in which low-code applications are embedded in different organizational IT landscapes, we propose a generic approach to quality assurance in low-code applications. The approach can be “instantiated” with a given technology stack, as demonstrated in Sect. 4. As depicted in Fig. 1, the general approach relies on a number of artefacts and steps.

3.1 Application Meta-model

In order to specify the rules and quality assurance measures at the organizational level, a meta-model of the low-code applications is created. This meta-model heavily depends on the choice of the technological platform in use. It is important to note that such a meta-model need not cover the entire scope of all application features (this would more or less replicate the low-code application platform), but only those aspects of the application which are essential for writing the rules. The creation of the meta-model can also be automated to a certain extent. For example, if the applications are available as JSON or XML files, the meta-model for the purpose of static testing is an abstraction of the schema of those documents.
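To make this concrete, the following Python sketch derives such a schema abstraction from a hypothetical JSON export: per element type, it collects the set of property names observed. The element structure and field names (`type`, `children`, etc.) are illustrative assumptions, not the actual export format of any particular platform.

```python
def extract_metamodel(doc, metamodel=None):
    """Walk a (hypothetical) exported app document and record, per element
    type, the set of property names observed. The result is a minimal
    meta-model, sufficient for writing static rules against."""
    if metamodel is None:
        metamodel = {}
    if isinstance(doc, dict):
        el_type = doc.get("type")
        if el_type is not None:
            props = metamodel.setdefault(el_type, set())
            props.update(k for k in doc if k not in ("type", "children"))
        for child in doc.get("children", []):
            extract_metamodel(child, metamodel)
    return metamodel

# Hypothetical exported app: application -> screen -> components.
app = {
    "type": "application", "name": "InventoryApp",
    "children": [
        {"type": "screen", "name": "MainScreen",
         "children": [
             {"type": "label", "text": "Inventory", "font": "Arial",
              "fontsize": 12, "fontcolor": "red", "children": []},
             {"type": "button", "text": "Count", "children": []},
         ]},
    ],
}

mm = extract_metamodel(app)
```

Under these assumptions, `mm` maps each element type (application, screen, label, button) to the properties against which rules can later be formulated.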


Fig. 1. Overview of the proposed approach.

3.2 Company Rules (prosa)

Domain experts, UI and UX experts, and the IT governing bodies within the organization come up with a set of rules to which all applications must adhere. These rules are written in a non-formal way and thus can be written by non-programmers. It is important to note that some rules may require execution of the application (behavior rules), which is not in the scope of this paper.

3.3 Formalized Rules

A super-user takes the company rules and writes them as queries against the application meta-model. This person is not necessarily a programmer, but somebody with skills in writing formal queries (e.g., in SQL, JSON, XML, etc.). Again, depending on the technology stack and the low-code platform, the rules may be written as OCL constraints against a UML meta-model, as MongoDB queries against a JSON document, or even as XQueries against an XML Schema document.
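For illustration, a prosa rule such as "all labels must use the corporate font" could be formalized as a query over the application documents. The sketch below emulates a MongoDB-style equality filter in plain Python; the flattened document structure and the field values are assumptions made for this example only.

```python
def matches(doc, query):
    """Minimal MongoDB-style equality matcher: every key/value pair in the
    query must appear in the document with the same value."""
    return all(doc.get(k) == v for k, v in query.items())

def find(docs, query):
    """Return all documents satisfying the query (cf. MongoDB's find())."""
    return [d for d in docs if matches(d, query)]

# Flattened UI-element documents of an app (illustrative structure).
elements = [
    {"type": "label", "text": "Inventory", "font": "Arial", "fontsize": 12},
    {"type": "label", "text": "Count", "font": "Comic Sans", "fontsize": 10},
    {"type": "button", "text": "Start"},
]

# Formalized rule: every label must use the corporate font "Arial".
violations = [d for d in find(elements, {"type": "label"})
              if d.get("font") != "Arial"]
```

A rule is considered violated when the `violations` list is non-empty; in a production setting the same check would be expressed directly in the platform's query language (e.g., a MongoDB filter with `$ne`).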


3.4 Low-Code Application

The application being developed is the subject of the test. These applications are created by the low-code developers within the organization.

3.5 Application Model

The application under test is typically a complex object, and static testing is not feasible without accessing the (often proprietary) format of the application. Typically, no API is available for testing these applications, so an application model is generated based on the meta-model (for testing purposes only). In general, a transformer may be needed in this step. However, for standard persistence formats such as XML or JSON, the persisted artefacts can be used directly in the translation step.

3.6 Quality Engine

A quality assurance engine helps maintain high application quality by automating and streamlining the testing process. It typically includes features such as test case management, test execution, and test reporting, which in our approach are the rules that are executed against the application.

3.7 Test Results

The output of the quality assurance engine is the result of executing the rules (as queries against the application under test). These results are stored in a database for future reference. Storing past test results can help testers identify issues that were previously found and ensure that they have been resolved in subsequent versions of the software. Organizations can thereby ensure that they have tested all the necessary features and that the software is stable and reliable. By analyzing past test results, teams can identify patterns and trends that help them improve the software over time.

3.8 Knowledge Base

Our approach foresees an organizational knowledge base about the rules that govern all applications. Such a knowledge base (e.g., based on a wiki) is used to support the low-code developers in finding a solution to a problem identified by the test cases. In a step-by-step fashion, users are guided towards a solution.
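The quality engine and test results steps described above can be sketched as a small loop: each formalized rule carries a query and an expected result, the engine evaluates the query against the application model, compares actual with expected, and records a result. The Python below is a minimal, self-contained illustration; the rule fields and the equality matcher are assumptions, not the implemented engine.

```python
def matches(doc, query):
    # Minimal equality matcher standing in for a real query engine.
    return all(doc.get(k) == v for k, v in query.items())

def run_rules(rules, elements):
    """Execute each rule's query against the flattened application model and
    record pass/fail per rule, ready to be stored for future reference."""
    results = []
    for rule in rules:
        actual = sum(1 for d in elements if matches(d, rule["query"]))
        results.append({
            "rule": rule["title"],
            "severity": rule["type"],
            "passed": actual == rule["expected_count"],
            "actual": actual,
        })
    return results

# Illustrative application model and rule set.
elements = [
    {"type": "label", "text": "Inventory", "font": "Arial"},
    {"type": "barcode", "barcodeType": "CODE39"},
]
rules = [
    {"title": "App title label exists", "type": "warning",
     "query": {"type": "label", "text": "Inventory"}, "expected_count": 1},
    {"title": "Barcodes must be EAN13", "type": "error",
     "query": {"type": "barcode", "barcodeType": "EAN13"}, "expected_count": 1},
]

report = run_rules(rules, elements)
```

In this sketch the first rule passes and the second fails, since the sample app uses a CODE39 barcode where an EAN13 barcode is expected.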
3.9 Feedback Generator

A separate component is used to generate user-friendly feedback based on the test results. Generally, the output of the quality assurance engine can seem very “cryptic” to a non-programmer, and therefore a translation step is needed between the execution of the tests and the presentation of the error to the low-code developer. To make the information more accessible and understandable, the feedback generator uses analogies and metaphors, avoids jargon and technical terms, uses visual aids such as diagrams, flowcharts, and graphs, and provides context information about where the errors occurred.
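At its simplest, this translation step maps each failed rule to a plain-language message plus a pointer into the knowledge base. The following Python sketch is illustrative; the result fields, severity labels and the wiki URL scheme are assumptions for the example, not the actual feedback generator.

```python
SEVERITY_LABELS = {
    "info": "Hint",
    "warning": "Please review",
    "error": "Must be fixed before publishing",
}

def render_feedback(result, wiki_base="https://wiki.example.org/rules/"):
    """Turn one raw test result into a developer-friendly message with a
    link into the organizational knowledge base."""
    if result["passed"]:
        return None  # only violations are reported to the developer
    label = SEVERITY_LABELS.get(result["severity"], "Note")
    slug = result["rule"].replace(" ", "-").lower()
    return f"{label}: {result['rule']} (see {wiki_base}{slug})"

msg = render_feedback(
    {"rule": "Barcodes must be EAN13", "severity": "error", "passed": False})
```

The generated message carries the severity in plain words and links directly to the wiki page that explains how to resolve the issue.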


4 Initial Evaluation: Microsoft PowerApps

In order to demonstrate the feasibility of the approach, we have chosen a popular low-code platform and implemented the tools required to support low-code developers in ensuring high quality. In particular, we are using PowerApps4 to develop low-code apps for mobile devices like phones, tablets, and handheld computers. For a first evaluation, we focused on a company-internal business application for inventory counting, developed for Android handheld devices. Figure 2 shows a picture of PowerApps Studio and our tool that presents rule checks for the application. The next sections discuss the components in detail.

The development environment for PowerApps is a web application called PowerApps Studio. Low-code developers only need a supported browser like Google Chrome to implement, test and run their apps (see Fig. 2). The IDE includes UI components (e.g., textboxes, buttons, layouts, etc.), access to hardware components (e.g., camera) and data sources (e.g., SQL databases, files, web applications). PowerApps Studio is also used to manage the application lifecycle, including versioning and publishing apps.

4.1 Application “Meta-model”

PowerApps are stored and managed within the Power Platform environment. However, apps can be exported into a ZIP file. The source code of an exported PowerApp contains multiple JSON files. These JSON files follow a general schema that organizes the app in a hierarchical parent-child structure (i.e., application, screens, containers, components, properties) and defines types and their properties (e.g., the type label and the property fontcolor). The fact that PowerApps source codes are instances of the same schema allows us to create rules that can be applied to different apps. Thus, the element “Application meta-model” in our approach (cf. Fig. 1) is the schema of the JSON files of PowerApps.
4.2 Low-Code Application for Test

The test subject is the low-code application (the PowerApp), stored in MongoDB5, a document-optimized database. In our approach, a PowerApp corresponds to a collection in MongoDB, which can contain multiple JSON files. MongoDB supports a query language to retrieve information from stored files. A query can be used to check the existence of a certain UI element in a JSON file and the value of certain properties, for example, to check whether “a label exists that contains the name of the app and is formatted in Arial 12pt red.”

4 https://powerapps.microsoft.com/.
5 https://www.mongodb.com/.

4.3 Company Rules (prosa)

In this first evaluation, we defined three types of rules: information provides hints and best practices for developers (e.g., corporate identity colors and fonts); warnings are more severe problems that should be addressed by the developer but do not break the


Fig. 2. An inventory counting app developed in PowerApps Studio

app (e.g., the form factor on mobile devices); errors must be addressed by the developer and will prevent the app from being published (e.g., the barcode type has to be set to EAN13). Rules can be applied to all apps or to a specific app. For example, stored goods in the warehouse use EAN13 barcodes; therefore, an app used in the warehouse has to use this type of barcode, otherwise it will not work. All applications should follow the same corporate identity.

4.4 Formalized Rules

In our approach, rules are stored as records in Azure Table Storage6, where they are managed by the software quality assurance team. Each rule has a title, description, type, context, query, expected result and enablement flag, and links to a wiki page. Title and description are used to provide meaningful feedback to the developer. Type and context are used to classify the rule as information, warning or error, and as an app-specific or universal rule. The rule also contains the query executed against the MongoDB collection and the expected result, which is compared to the actual result to decide whether the rule was violated. We also foresee an enablement flag to activate or deactivate rules if needed.

6 https://learn.microsoft.com/en-us/azure/storage/tables/table-storage-overview.

Some rules require a more comprehensive explanation and guidance on how to fix them. Therefore, we set up a wiki that describes each rule’s purpose and how to address it in


PowerApps Studio (e.g., different sizes and screen resolutions of devices used in the company and where to set the form factor and resolution in PowerApps Studio).

4.5 Quality Engine and Feedback Generator

One of our goals was to provide quick feedback for the developer. Because PowerApps Studio is a web application, we have developed a Chrome browser plugin that integrates with the developer experience by working in a browser window. The developer has to open the plugin and trigger the rule check by clicking a button. The plugin instructs a web application to execute the relevant queries against the source code stored in MongoDB. The result is presented to the developer in the browser plugin. By clicking on the help link, the developer can navigate to the wiki page and investigate the rule (cf. Fig. 3).

4.6 Feedback from Low-Code Developers

We conducted an initial workshop at insideAx with a developer who has more than 10 years of experience in business software development and mobile applications. We stated our goal to improve the software quality of low-code apps developed by power users. In a first step, we presented the tool-based approach and explained each component. Next, we explained how a technically experienced super-user can define rules for apps. Then, we presented the development experience for a power user by implementing a low-code inventory counting app with guidance from our solution. Finally, we asked the insideAx developer for feedback. The developer liked the fact that rules can be grouped by severity (error, warning, info), by topic (best practice, corporate identity, etc.), as well as at the general or app level. A downside of the approach is the need to learn another query language to define

Fig. 3. Results of automated checks, linked to the documentation about how to resolve the issues.


the rules. The developer stated that our approach is missing the ability to define rules for runtime behaviour. However, using WordPress for creating a knowledge base was welcomed because it is easy to use and more help topics can be added quickly.
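To make the mechanics of Sects. 4.2–4.4 concrete, the rule record, its query, the comparison with the expected result, and the severity-tagged feedback can be sketched in a few lines of Python. This is an illustrative sketch only: all field names, sample values, and the simplified in-memory query matching are our assumptions, not the actual Azure Table Storage schema or MongoDB filters used in the approach.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    # Fields follow the description in Sect. 4.4 (description omitted for brevity);
    # the exact storage schema is not given in the paper, so names are assumptions.
    title: str
    rule_type: str   # "information" | "warning" | "error"
    context: str     # "universal" or a specific app name
    query: dict      # stands in for the MongoDB filter
    expected: int    # expected number of matching documents
    enabled: bool
    wiki_url: str

def run_query(controls, query):
    """Very simplified stand-in for a MongoDB equality filter: count matches."""
    return sum(all(c.get(k) == v for k, v in query.items()) for c in controls)

def check(rule, controls):
    """Return a feedback record if the rule is enabled and violated."""
    if not rule.enabled:
        return None
    if run_query(controls, rule.query) == rule.expected:
        return None
    return {"severity": rule.rule_type, "title": rule.title, "help": rule.wiki_url}

# UI controls as they might appear in an app's JSON source (made-up values):
controls = [
    {"Type": "label", "Text": "Inventory Counting",
     "Font": "Arial", "Size": 12, "Color": "red"},
    {"Type": "barcodeScanner", "BarcodeType": "Code128"},
]

rules = [
    Rule("App name label in Arial 12pt red", "information", "universal",
         {"Type": "label", "Font": "Arial", "Size": 12, "Color": "red"},
         expected=1, enabled=True,
         wiki_url="https://wiki.example.com/ci-label"),   # placeholder URL
    Rule("Barcode type must be EAN13", "error", "WarehouseApp",
         {"Type": "barcodeScanner", "BarcodeType": "EAN13"},
         expected=1, enabled=True,
         wiki_url="https://wiki.example.com/ean13"),      # placeholder URL
]

feedback = [f for r in rules if (f := check(r, controls))]
print(feedback)  # only the violated EAN13 rule is reported
```

In the actual approach the query is a MongoDB filter executed server-side; the comparison with the expected result and the severity-tagged feedback with a help link map directly onto the rule fields described above.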

5 Related Work

There are several tools available for code quality analysis. SonarQube [10], for example, is an open-source platform for continuous analysis of code quality. It uses a combination of static and dynamic code analysis to identify bugs, vulnerabilities, code smells, and technical debt. SonarQube supports many programming languages and can easily be integrated with different build tools. It allows users to customize the rules for code analysis based on their specific needs, which can help to enforce coding standards and ensure that code meets organizational requirements. It also provides a dashboard and support for reporting. Another example is PMD [11], a code analysis tool focusing on static analysis of Java source code. Similar to SonarQube, PMD allows developers to customize the set of rules used to enforce coding standards and best practices specific to an organization or project. It can also be integrated with various build tools and supports the generation of reports. Other tools include CodeClimate7 and Checkstyle8.

Other tools focus on static application security testing (SAST) [3, 12]. Checkmarx,9 for example, can be used to analyze the code of low-code applications and identify potential security vulnerabilities. It uses a combination of static code analysis and dynamic scanning and can detect vulnerabilities such as SQL injection, cross-site scripting, and buffer overflows. Veracode10 is another tool for analyzing the security of low-code applications. It can scan applications built on a variety of platforms and languages, and provides detailed reporting and analytics capabilities. Other SAST tools that can be used in combination with low-code applications include Kiuwan11 and CAST Highlight,12 which also provides a greenability analysis.

Automated testing of low-code applications has been identified as one of the major challenges in this domain [2]. Khorram et al.
[8] conduct an analysis of the testing components of five commercial low-code development platforms and propose a feature list for low-code testing. They identify challenges with respect to the role of citizen developers in testing, the need for high-level test automation, and cloud testing. In [6], for example, a mocking solution prototype for the OutSystems low-code development platform is presented.

6 Summary and Outlook

In this paper, we presented the challenges of quality assurance in low-code applications in terms of design, processes, security, and performance. We emphasized the need for quality assurance measures to ensure high standards in low-code apps, but traditional

7 https://codeclimate.com.
8 https://checkstyle.org.
9 https://checkmarx.com.
10 https://www.veracode.com.
11 https://www.kiuwan.com.
12 https://learn.castsoftware.com.


testing methods may not be feasible because low-code developers may not be trained software engineers. We presented an approach to enforce quality assurance measures on low-code apps that follows the low-code philosophy in its testing procedures. The generic approach is applicable to different technology stacks for low-code applications. In this paper, we demonstrated its feasibility based on a quality assurance approach implemented for Microsoft PowerApps. Low-code development platforms and tools are still evolving, and there is no industry standard for quality assurance measures, which has resulted in a "wild growth" of these platforms. A holistic approach to quality assurance, including the standardization of quality measures and training for low-code developers in secure coding and testing practices, is required to accelerate the adoption of low-code applications without compromising on quality. Overall, quality assurance for low-code applications requires a balance between agility and control. While low-code development promises to speed up the development process and lower barriers to entry, quality assurance measures should ensure that these benefits do not come at the cost of security, performance, and compliance.

Acknowledgment. The project was partially funded by the Austrian Research Promotion Agency (FFG, project: 891632).

References

1. Refactoring: Improving the Design of Existing Code. Addison-Wesley Longman Publishing Co., Inc., USA (1999)
2. Al Alamin, M.A., Malakar, S., Uddin, G., Afroz, S., Haider, T.B., Iqbal, A.: An empirical study of developer discussions on low-code software development challenges. In: 2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR), pp. 46–57 (2021). https://doi.org/10.1109/MSR52588.2021.00018
3. Aloraini, B., Nagappan, M., German, D.M., Hayashi, S., Higo, Y.: An empirical study of security warnings from static application security testing tools. J. Syst. Softw. 158 (2019). https://doi.org/10.1016/j.jss.2019.110427
4. Boogerd, C., Moonen, L.: Assessing the value of coding standards: an empirical study. In: 2008 IEEE International Conference on Software Maintenance, pp. 277–286 (2008). https://doi.org/10.1109/ICSM.2008.4658076
5. Cabot, J., Clariso, R.: Low code for smart software development. IEEE Softw. 40(01), 89–93 (2023). https://doi.org/10.1109/MS.2022.3211352
6. Jacinto, A., Lourenço, M., Ferreira, C.: Test mocks for low-code applications built with OutSystems. In: Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings, MODELS 2020. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3417990.3420209
7. Juhas, G., Molnár, L., Juhasova, A., Ondrišová, M., Mladoniczky, M., Kováčik, T.: Low-code platforms and languages: the future of software development. In: 2022 20th International Conference on Emerging eLearning Technologies and Applications (ICETA), pp. 286–293 (2022). https://doi.org/10.1109/ICETA57911.2022.9974697
8. Khorram, F., Mottu, J.M., Sunyé, G.: Challenges and opportunities in low-code testing. In: Proceedings of the 23rd ACM/IEEE International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings, MODELS 2020. Association for Computing Machinery, New York, NY, USA (2020). https://doi.org/10.1145/3417990.3420204


9. Luo, Y., Liang, P., Wang, C., Shahin, M., Zhan, J.: Characteristics and challenges of low-code development: the practitioners' perspective. In: Proceedings of the 15th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), ESEM 2021. Association for Computing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3475716.3475782
10. Marcilio, D., Bonifacio, R., Monteiro, E., Canedo, E.D., Luz, W.P., Pinto, G.: Are static analysis violations really fixed? A closer look at realistic usage of SonarQube. In: Proceedings of the 27th International Conference on Program Comprehension, ICPC 2019, Montreal, QC, Canada, 25–31 May 2019, pp. 209–219. IEEE/ACM (2019). https://doi.org/10.1109/ICPC.2019.00040
11. Trautsch, A., Herbold, S., Grabowski, J.: A longitudinal study of static analysis warning evolution and the effects of PMD on software quality in Apache open source projects. CoRR abs/1912.02179 (2019). http://arxiv.org/abs/1912.02179
12. Yang, J., Tan, L., Peyton, J., Duer, K.A.: Towards better utilizing static application security testing. In: Proceedings of the 41st International Conference on Software Engineering: Software Engineering in Practice, ICSE (SEIP) 2019, Montreal, QC, Canada, 25–31 May 2019, pp. 51–60. IEEE/ACM (2019). https://doi.org/10.1109/ICSE-SEIP.2019.00014
13. Zaheri, M.: Towards consistency management in low-code platforms. In: Proceedings of the 25th International Conference on Model Driven Engineering Languages and Systems: Companion Proceedings, MODELS 2022, pp. 176–181. Association for Computing Machinery, New York, NY, USA (2022). https://doi.org/10.1145/3550356.3558510

Towards a DevSecOps-Enabled Framework for Risk Management of Critical Infrastructures

Xhesika Ramaj1, Mary Sánchez-Gordón1(B), Ricardo Colomo-Palacios2, and Vasileios Gkioulos3

1 Østfold University College, 1757 Halden, Norway
[email protected]
2 Escuela Técnica Superior de Ingenieros Informáticos, Universidad Politécnica de Madrid, 28660 Madrid, Spain
3 Norwegian University of Science and Technology, 2802 Gjøvik, Norway

Abstract. Risk management is a cornerstone of daily business operations that ensures their sustained viability over the long term. It is a critical function for any business, as it helps identify potential threats and opportunities and enables informed decision-making. When placed within the context of critical infrastructures, risk management attains a heightened dimension. The integration of security practices within the software development pipeline, commonly known as DevSecOps, is a novel approach to enhance software application security. This approach has been touted as a transformative solution that not only promotes the adoption of security practices but also provides financial benefits by mitigating risk levels and ensuring uninterrupted business operations. The objective of this paper is to present a conceptual framework to manage risk inside critical infrastructures in the DevSecOps context. This framework is built upon three pillars: action, state, and contrivance. It aims to: (i) facilitate the comprehension of risk, (ii) provide an incentive mechanism towards enhancing risk management, (iii) contribute to both human and machine contrivance, and (iv) ensure the quality of the information retrieved by involving all teams.

Keywords: DevSecOps · Risk Management · Critical Infrastructures

1 Introduction

Critical infrastructures encompass a broad range of elements such as networks, facilities, services, and assets, both physical and information technology-oriented, whose disruption or destruction would have a significant impact on the country [1]. Critical infrastructures have long relied on a combination of software and hardware technologies to control their critical processes. Over time, the impact radius of the disrupted elements has expanded to include areas such as the health, safety, and security of society, national economic activity, and the government's functioning [1]. This is explained by the applicability of critical infrastructures in a wide range of sectors and industries.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 47–58, 2023. https://doi.org/10.1007/978-3-031-42307-9_4

48

X. Ramaj et al.

Critical infrastructures extend across many sectors of the economy, including banking and finance, transport and distribution, energy, utilities, health, food supply and communications, as well as key government services [1]. In the aftermath of the September 11, 2001 attacks, most European nations established a definition for "critical infrastructure" and took measures to safeguard it [2]. However, the definition of critical infrastructure is constantly evolving to reflect the ever-changing security and resilience landscape [3]. While some critical elements within these sectors could be classified as infrastructure, others are networks or supply chains that are essential for delivering crucial products or services. Take, for instance, the food and water supply chain to major urban areas. This supply chain depends not only on key facilities but also on a complex network of producers, processors, manufacturers, distributors, and retailers [1]. In the European Programme for Critical Infrastructure Protection [4], the European Union Member States identify critical infrastructures by certain predefined criteria, such as scope, rated by the extent of the geographic area, and severity, assessed on the basis of public, economic, environmental, political, and psychological effects, as well as on public health consequences. As a result, the list of critical infrastructures is virtually limitless, and any organization that deals with critical processes and meets the established criteria could potentially be included. In contrast, the United States (US) recognizes 16 critical infrastructure sectors whose physical and virtual assets, systems, and networks are deemed so essential that their destruction or incapacitation would have a severe impact on national security, economic security, public health, or safety, or any combination of these factors [5].
The US began to address the matter of critical infrastructure protection following a terrorist attack on a federal building in Oklahoma City in 1995 [6]. Today, critical infrastructure sectors in the US include Chemicals, Commercial Facilities, Communications, Critical Manufacturing, Dams, Defense Industrial Base, Emergency Services, Energy, Financial Services, Food and Agriculture, Government Facilities, Healthcare and Public Health, Information Technology, Nuclear Reactors, Materials, and Waste, and Transportation Systems, as well as Water and Wastewater Systems [5]. Regardless of the sector in which they operate, some infrastructures are of such critical importance that the long-term viability of a country depends on their continued functioning [7]. As a result of their significance, the management of the various risks that critical infrastructures might face becomes a pivotal responsibility. Critical infrastructures are vulnerable to potential risks in various forms: they are susceptible to harm, incapacitation, damage, or destruction from both natural and human-caused (whether intentional or accidental) incidents [8]. In response to natural disasters, frameworks such as the Hyogo Framework for Action for 2005–2015 and the Sendai Framework for Disaster Risk Reduction for 2015–2030 work towards enhancing the understanding of resilience in critical aspects and dimensions of society, such as the resilience of critical infrastructure [9]. In addition to natural disasters, the combination of hardware and software makes critical infrastructures susceptible to technical malfunctions and cyber-attacks [3]. Instead of concentrating on a particular threat or hazard, such as terrorism or hurricanes, it is essential to recognize all the risks and hazards that pose the greatest danger to critical


infrastructure. This approach enables more effective and efficient planning and resource allocation [8]. Furthermore, it helps to comprehend the risk and take action, which is also the essence of the risk management process. The European Commission defines risk management as the process of understanding risk and implementing actions that guarantee an acceptable level of risk. The process includes identifying the risk, measuring it, and controlling it by comparing the risk level with an assigned acceptable level [1]. In the critical infrastructures context, the task of pinpointing risks becomes increasingly challenging, given the complexity and interdependencies between systems. As the level of risk increases, there is a corresponding need to intensify efforts to manage it. DevOps frequently minimizes risks by detecting and addressing failures early on, whereas in waterfall approaches, failures may remain unnoticed until the final product is delivered [10]. The DevOps nature of taking early and small steps is merged with the approach of integrating security, which gives rise to DevSecOps, the acronym for DEVelopment, SECurity, and OPerationS. Although there are a number of definitions by several authors [11], security principles and practices lie at the heart of DevSecOps. Security is integrated throughout the whole software development life cycle and at each layer of DevOps. Moreover, this approach shortens development time while also ensuring software quality [12], measured in terms of code, security, and delivery mechanisms [13]. In this paper, we propose a DevSecOps-enabled framework for risk management of critical infrastructures. To the best of our knowledge, there are no studies that address DevSecOps environments and the aspects they encompass to keep track of and manage risk in critical infrastructures.
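The Commission's definition — identify the risk, measure it, and control it against an assigned acceptable level — can be illustrated with a minimal sketch. The ordinal scales, the likelihood × impact scoring, and the threshold below are our assumptions for illustration, not part of any cited framework:

```python
def risk_level(likelihood, impact):
    """Measure risk on a simple ordinal scale (both inputs 1-5, level 1-25)."""
    return likelihood * impact

def requires_treatment(likelihood, impact, acceptable_level=6):
    """Control step: compare the measured level with the assigned acceptable level."""
    return risk_level(likelihood, impact) > acceptable_level

# Example: a cyber-attack on a control system, judged unlikely but severe.
print(requires_treatment(likelihood=2, impact=5))  # True (10 > 6)
```

In a critical infrastructure context, the hard part is precisely the inputs: pinpointing likelihood and impact is complicated by the interdependencies between systems noted above.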
Our proposed framework targets critical infrastructure by emphasizing the need to consider the potential implications of risks, such as those associated with cyber-attacks, from a risk management perspective. By doing so, the proposal aims to enhance the understanding and assessment of the various impacts that can result from such risks. By clarifying the need for a risk management perspective that incorporates different types of impacts, the proposal aims to ensure a more comprehensive understanding of the risks faced by critical infrastructures. This understanding can inform the development of effective strategies and investments in new technology to protect and enhance the resilience of critical infrastructure systems against risks, including but not limited to cyber-attacks. To ensure comprehensive participation from all teams involved, the proposal plans to implement incentive mechanisms for those who actively contribute to the process of risk management. This framework is built upon three main pillars, action, state, and contrivance, and aims to:

1. Facilitate the comprehension of risk by providing a holistic view of it, to make sure that everybody involved understands the risk, is able to identify it, and takes the required actions to minimize it. This goal will be accomplished by acting on user reports and carefully analyzing each step, both to identify patterns that contribute towards risk management and to evaluate the contribution of each action and promote actions based on their impact.
2. Provide an incentive mechanism towards actions and performers who contribute to managing the state of risk.


3. Contribute to the contrivance of both human and technological resources by facilitating risk comprehension and investing in new technologies. One way to facilitate this could be organizing internal risk-focused challenges in order to boost employees' skills and help them better understand the risk.
4. Ensure the quality of the information retrieved from previous experiences by involving all teams in the assessment.

The rest of the paper is structured as follows: Section 2 presents the related work, Section 3 describes the DevSecOps-enabled framework that we propose, and Sect. 4 concludes our work and provides potential future research directions.

2 Related Work

Numerous risk management frameworks are applied across different industries and sectors; however, the frameworks developed by the International Organization for Standardization (ISO) and the National Institute of Standards and Technology (NIST) are the most widely used. ISO 31000 [14] is a risk management standard developed by ISO which provides a generic framework for managing risk across all types of organizations and industries. Additionally, NIST has introduced two distinct frameworks that complement each other and are intended to function in conjunction: the NIST Risk Management Framework (RMF) [15] and the NIST Cybersecurity Framework (CSF) [16]. The RMF is a process that consolidates security, privacy, and cyber supply chain risk management activities into the system development life cycle. The CSF, on the other hand, is a risk management framework that focuses on managing cybersecurity risks by offering a range of best practices, guidelines, and standards specifically designed for organizations. Besides the frameworks provided by ISO and NIST, there are other recognized risk management frameworks, such as the Enterprise Risk Management (ERM) framework [17] by the Committee of Sponsoring Organizations of the Treadway Commission (COSO), the Factor Analysis of Information Risk (FAIR) framework [18] by the FAIR Institute, the PMI Risk Management Framework by the Project Management Institute (PMI) [19], the Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) [20], and the CCTA Risk Analysis and Management Method (CRAMM) [21]. These frameworks have a wide scope, and their targets can range from generic to specific levels, such as enterprise level [17] or project level [19], different domains, including financial or information technology systems, and profiles such as private or governmental.
Additionally, the selection of a framework will be influenced by the unique needs and demands of the organization, the industry it operates in, the goals it has established, and the needs of its stakeholders. The proposed framework in this study is not limited to specific levels, domains, or profiles. Instead, it is focused on critical infrastructures, regardless of whether they are managed by a private company or the government. Moreover, this framework emphasizes the importance of risk management within the context of a DevSecOps environment, which is gaining popularity due to the increasing demand for security implementation among numerous enterprises that have adopted DevOps [22].


Numerous authors have made significant efforts to present frameworks that assist organizations employing DevOps in managing risks. Plant, Hillegersberg, and Aldea [23] provide a framework that aims to help organizations that utilize DevOps govern risks and provide assurance to auditors and stakeholders regarding their software delivery processes while maintaining agility. The framework focuses on mitigating risks associated with operations, reporting, and compliance objectives, which constitute the focus of internal controls. Aljohani and Alqahtani [24] propose a unified framework, demonstrated through a practical case study, as a solution to automate software security analysis in the DevSecOps paradigm. It acts as an intermediary stage in the software development process between the Continuous Integration (CI) and Continuous Delivery (CD) pipelines of software applications and their corresponding security services. During the implementation of the case study, the researchers identified four vulnerabilities with varying risk severity, but they did not delve further into these vulnerabilities and their risk. Instead, they provide future work directions on extending the functionalities of the framework by adding more modules, such as a user-report analysis dashboard. We aim to facilitate the comprehension of risk by analyzing user reports to identify contributive patterns in the process of risk management. While these authors planned to evaluate their proposed framework by conducting a user study, we plan to interview DevOps and security experts as part of the empirical validation of ours. Yasar [25] implements a DevSecOps assessment for highly regulated environments. This study highlights the need for integrating the required risk management framework into the application development process in highly regulated environments, but it does not focus on the risk management process itself. Rafi et al.
[26] identify and develop a prioritization-based taxonomy of DevOps security challenges by reviewing the literature. The proposed taxonomy has been evaluated by experts. Findings indicate that the absence of automated testing tools poses the most significant challenge to securing DevOps implementations. Therefore, they recommend implementing an appropriate automated test to monitor the security risks of DevOps.

3 Towards a DevSecOps Environment that Fosters Risk Management

Our proposal involves a step-by-step approach that includes all the teams operating in a DevSecOps environment, including development, security, operations, and additional teams that may be brought together under DevSecOps, such as customer support, legal, and marketing. The aim is to facilitate discussions on the organization's plans and execute the tasks outlined below. A high-level view of this approach includes a feasibility analysis, a risk management perspective, the DevSecOps proposal, evaluation by experts, improvements (continuous feedback), implementation, and critical infrastructure performance. Figure 1 shows the proposed framework for risk management in a DevSecOps environment consisting of the above-mentioned processes. The first step is to analyze whether the critical infrastructure has the necessary human, technological, and financial resources to implement new solutions. It is important that the


Fig. 1. The DevSecOps-enabled Framework for Risk Management

critical infrastructure develop a perspective on risk management by determining both the company's objectives towards it and how much potential risks will cost. In such a scenario, the use of metrics can provide a holistic view of the ratio between the costs and the business goals towards risk management. Our framework incorporates both safety and compliance aspects as the foundational pillars upon which the DevSecOps proposal is built (see details in Sect. 3.3). The proposal is to be validated by both DevOps and security experts and improved based on the received feedback. The performance of any infrastructure dealing with critical operations will serve as an indicator of the relevance of the implemented solution. In what follows, we describe in detail each of the elements included in the proposed framework.

3.1 Feasibility Analysis

Adopting new technology can be a daunting task, and it becomes even more challenging to implement it effectively. One of the significant obstacles in cybersecurity stems from novel technological strategies that prioritize the advantages of implementation over outlining the essential governance and risk management modifications required to establish


and implement effective safeguards [27]. The feasibility analysis must be conducted in terms of human, technological, and financial resources.

A. Human Resources

According to the State of DevOps Report 2021 [28], less than 2% of top-tier companies express opposition to DevOps at the executive level. Although strong teams can drive significant transformation within themselves and adjacent teams, without meaningful leadership support any success achieved will likely be limited to specific areas, and overall organizational progress will be hindered. Additionally, the State of DevOps Report 2017 [29] points out the pivotal importance of transformational leadership in driving DevOps projects and achieving optimal IT performance. This makes room for considering management at all levels. The involvement of all parties opens up discussion about whether top management will adopt and integrate new approaches into the business environment, whether middle management will be willing to implement the strategies endorsed by senior leadership, and whether lower management is prepared to alter its daily operations routine. Furthermore, staffing is seen as a primary hurdle [30], with the need to upskill the entire team to enable anyone to address operational issues while being on-call. Additionally, training of both new and current staff is necessary so that they learn new technologies and concepts promptly enough to meet business demands. This is because implementing new technology often leads to questioning whether the staff is adequately trained to apply these new solutions to the business.

B. Technological Resources

The infrastructure plays a significant role in facilitating DevOps to a great extent [31], and DevSecOps is no exception. In critical operations, it becomes even more crucial to assess whether the business is equipped with the necessary technological resources and infrastructure to implement new solutions.

C. Financial Resources

Financial value is listed among the four values of software according to a software value map proposed by Khurum et al. [32]. The financial perspective of a company pertains to the implementation and execution of its strategy, with the ultimate goal of improving it. This perspective encompasses the organization's long-term strategic objectives and translates the tangible outcomes of the strategy into traditional financial terms. When evaluating the benefits of implementing new technology, businesses must first consider the associated costs and whether the benefits outweigh them. It is essential to analyze whether the financial resources required for implementing the technology are available and whether the benefits will ultimately offset the costs.

3.2 Risk Management Perspective

Once the feasibility of the proposal has been assessed, the enterprise should determine the perspective of the business concerning risk management. In our previous study about the safety aspects of DevSecOps [33], we identified freedom from risk as one of the aspects, consisting of several actions towards it, such as separation of information about vulnerabilities, risk identification, monitoring, mitigation, reduction, assessment, and analysis. To assist in these efforts, the enterprise must initially establish clear objectives pertaining to risk management. During this step, the preceding financial evaluation


regarding the feasibility of DevSecOps focuses on a more specific goal, that of risk assessment, and identifies the expenses associated with potential risks, thus defining the risk-related costs. To get a whole view of the current state and to realize where the enterprise stands, we suggest the use of metrics to scale the goals-costs relationship.

3.3 DevSecOps Proposal

Our earlier studies focused on identifying safety [33] and compliance aspects [11]. Those previous studies were analyzed to retrieve the common denominator that the aspects work towards. We identified risk-related safety aspects, including risk identification, monitoring, mitigation, reduction, prevention, assessment, risk analysis tools, assurance cases, and frequent feedback, as actions towards achieving freedom from risk. On the other hand, the compliance aspects identified by [11] under the 'Initiation' group, such as compliance issues and requirements, were seen as initial attempts towards compliance; both kinds of aspects were thus labelled as actions. Additional safety aspects of DevSecOps were identified from the literature [33], for instance, users' safety, minimizing use errors, cultural and behavioral changes, and changing work and responsibilities, which we have grouped under the human-related category of safety aspects. In the case of compliance [11], compliance as code and compliance tools were grouped under the 'technicalities' umbrella of compliance aspects. Upon comparing these two aspects of safety and compliance, we observed that they both rely on the skills of both humans and machines to achieve a safe or compliant state. Therefore, we classified them under the contrivance category. Both actions and skills are utilized to achieve a common objective: the state of being safe or compliant. This goal is recognized within the management groups of both safety and compliance aspects.
Safety aspects aim to achieve a safe state through consideration of standards, the security environment, and operational safety. Similarly, compliance can be achieved through compliance awareness and training, automation, testing and verification, validation, control and monitoring, as well as assessment. In summary, the common denominator can be described using three terms: action, state, and contrivance, as shown in Fig. 2.

3.4 Evaluation by Experts

Although DevSecOps has circulated as an approach for about a decade (since 2012) [34], the topic has not been extensively explored in the literature. We also noticed this during our previous studies providing a state of the art on the topic, with a focus on safety [33] or compliance aspects [11]. Therefore, we consider it crucial to obtain an assessment from experts in the industry.

3.5 Improvements (Continuous Feedback)

Among its various advantages, DevOps facilitates rapid and continuous communication between development and operations teams, enabling the timely identification of issues before they affect customers [35]. DevSecOps shares the same objective by involving

Towards a DevSecOps-Enabled Framework


Fig. 2. Overview of Safety and Compliance Aspects of DevSecOps

the security team in the process loop. Our framework adopts a similar approach of ongoing communication and feedback from industry experts, with the aim of improving it. Continuous feedback is not regarded as a separate process, but rather as a loop to refine the framework both before and after its implementation. The post-implementation feedback is intended to serve as a source of learning.

3.6 Implementation

In this step, the proposed framework is implemented with the specific sector of critical operations in mind, such as the energy, transportation, telecommunications and information technology, financial, or healthcare sector. The development of the DevSecOps-enabled framework proposal requires close cooperation between DevOps professionals and security experts with a focus on risk management. Furthermore, professionals possessing expertise in critical operations are an added asset.

3.7 Critical Infrastructure Performance

The last phase concerns measuring the performance of the critical infrastructure after the implementation of the proposed framework. Metrics play a vital role in enhancing the accuracy and precision of measurement in Software Engineering, and suggesting metrics triggers a discussion that can lead to a deeper comprehension of the issues [36]. A set of metrics grounded on both professional and academic perspectives is presented in [22], allowing enterprises to track the progress of their DevSecOps practices. The proposed framework will use these metrics to monitor and estimate the advances of risk management enabled by a DevSecOps environment in critical infrastructures. The performance of the critical operations will be a health indicator, whether positive or


negative. If critical operations perform well, it serves as a positive indicator. However, if the performance of the critical infrastructure falls short of expectations, the framework should be reviewed and improved based on the feedback received.
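A performance check of the kind described can be sketched with the delivery metrics discussed by Forsgren and Kersten [22], such as deployment frequency, lead time, time to restore service, and change failure rate. The thresholds and sample values below are illustrative assumptions for the sketch, not values prescribed by the framework.

```python
from dataclasses import dataclass

@dataclass
class DevSecOpsMetrics:
    """Delivery metrics in the spirit of Forsgren & Kersten [22]."""
    deployment_frequency_per_week: float
    lead_time_hours: float        # commit to production
    mttr_hours: float             # mean time to restore service
    change_failure_rate: float    # fraction of deployments causing incidents

def health_indicator(m: DevSecOpsMetrics) -> str:
    """Return a coarse health signal; the thresholds are assumed for illustration."""
    healthy = (
        m.deployment_frequency_per_week >= 1      # at least weekly releases
        and m.lead_time_hours <= 24 * 7           # under one week commit-to-deploy
        and m.mttr_hours <= 24                    # restore service within a day
        and m.change_failure_rate <= 0.15         # few failing changes
    )
    return "positive" if healthy else "review framework"

# Example measurement of a critical operation (invented data)
current = DevSecOpsMetrics(3.0, 48.0, 6.0, 0.10)
```

A negative indicator would trigger the review-and-improve loop described above, feeding the continuous feedback cycle of Sect. 3.5.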

4 Conclusion and Future Work

The purpose of this paper is to introduce a new framework for managing risk within critical infrastructures, which leverages the DevSecOps approach. By utilizing DevSecOps throughout the software development lifecycle, the framework creates patterns of risk and facilitates risk management through the analysis of the various actions taken. This methodology provides greater visibility into the origins of risks and how to address them. Moreover, the framework can also be utilized to analyze and visualize risk-related data, thereby improving the overall comprehension of the software systems. Several of the methods used, such as incentive mechanisms for risk-related challenges, aim to facilitate the comprehension of risk and the relevant actions towards it. Additionally, they aim to involve all the teams in the process and ensure the quality of the information retrieved from previous experiences. Although the proposed framework is intended to improve the overall performance of the business and its critical operations, it requires validation by experts in the field before being implemented; those professionals must therefore be experts in both DevSecOps and risk management. The framework's focus on critical infrastructures also raises the need for the expertise of critical operations professionals. Once validated, the risk-management-focused DevSecOps proposal can be implemented and its performance must be measured. Future work will include the experts' evaluation and implementation of the proposed framework to validate it and identify potential improvements and drawbacks in the context of risk management.

Acknowledgements. This work is partially funded by the Research Council of Norway (RCN) in the INTPART program, under the project "Reinforcing Competence in Cybersecurity of Critical Infrastructures: A Norway-US Partnership (RECYCIN)", with the project number #309911.

References

1. European Commission: Communication from the Commission to the Council and the European Parliament - Critical Infrastructure Protection in the fight against terrorism. https://eur-lex.europa.eu/legal-content/GA/TXT/?uri=CELEX:52004DC0702
2. Rehak, D., Markuci, J., Hromada, M., Barcova, K.: Quantitative evaluation of the synergistic effects of failures in a critical infrastructure system. Int. J. Crit. Infrastruct. Prot. 14, 3–17 (2016). https://doi.org/10.1016/j.ijcip.2016.06.002
3. Esnoul, C., Colomo-Palacios, R., Jee, E., Chockalingam, S., Eidar Simensen, J., Bae, D.-H.: Report on the 3rd international workshop on engineering and cybersecurity of critical systems (EnCyCriS - 2022). SIGSOFT Softw. Eng. Notes 48, 81–84 (2023). https://doi.org/10.1145/3573074.3573095
4. The European Programme for Critical Infrastructure Protection (EPCIP). https://home-affairs.ec.europa.eu/pages/page/critical-infrastructure_en


5. Critical Infrastructure Sectors | CISA. https://www.cisa.gov/topics/critical-infrastructure-security-and-resilience/critical-infrastructure-sectors
6. Presch-Cronin, K., Marion, N.E.: Critical Infrastructure Protection, Risk Management, and Resilience: A Policy Perspective. CRC Press, Boca Raton (2016)
7. Khan Babar, A.H., Ali, Y.: Framework construction for augmentation of resilience in critical infrastructure: developing countries a case in point. Technol. Soc. 68, 101809 (2022). https://doi.org/10.1016/j.techsoc.2021.101809
8. A Guide to Critical Infrastructure Security and Resilience. https://www.cisa.gov/search
9. Quitana, G., Molinos-Senante, M., Chamorro, A.: Resilience of critical infrastructure to natural hazards: a review focused on drinking water systems. Int. J. Disaster Risk Reduction 48, 101575 (2020). https://doi.org/10.1016/j.ijdrr.2020.101575
10. Fox, M.R.: IT governance in a DevOps world. IT Prof. 22, 54–61 (2020). https://doi.org/10.1109/MITP.2020.2966614
11. Ramaj, X., Sánchez-Gordón, M., Gkioulos, V., Chockalingam, S., Colomo-Palacios, R.: Holding on to compliance while adopting DevSecOps: an SLR. Electronics 11, 3707 (2022). https://doi.org/10.3390/electronics11223707
12. Riungu-Kalliosaari, L., Mäkinen, S., Lwakatare, L.E., Tiihonen, J., Männistö, T.: DevOps adoption benefits and challenges in practice: a case study. In: Abrahamsson, P., Jedlitschka, A., Nguyen Duc, A., Felderer, M., Amasaki, S., Mikkonen, T. (eds.) PROFES 2016. LNCS, vol. 10027, pp. 590–597. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49094-6_44
13. Carturan, S.B.O.G., Goya, D.H.: A systems-of-systems security framework for requirements definition in cloud environment. In: Proceedings of the 13th European Conference on Software Architecture, vol. 2, pp. 235–240. Association for Computing Machinery, New York, NY, USA (2019)
14. ISO: ISO 31000 — Risk management. https://www.iso.org/iso-31000-risk-management.html
15. Computer Security Division, I.T.L.: NIST Risk Management Framework. https://csrc.nist.gov/Projects/risk-management
16. NIST Cybersecurity Framework. NIST (2013)
17. Compliance Risk Management: Applying the COSO ERM Framework. https://www.coso.org/Shared%20Documents/Compliance-Risk-Management-Applying-the-COSO-ERM-Framework.pdf
18. FAIR Institute: The Importance and Effectiveness of Quantifying Cyber Risk. https://www.fairinstitute.org/fair-risk-management
19. Project Management Institute (ed.): PMI Risk Management Framework. Project Management Institute, Newtown Square, PA (2009)
20. The Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE). https://www.enisa.europa.eu/topics/risk-management/current-risk/risk-management-inventory/rm-ra-methods/m_octave.html
21. The CCTA Risk Analysis and Management Method (CRAMM). https://www.enisa.europa.eu/topics/risk-management/current-risk/risk-management-inventory/rm-ra-methods/m_cramm.html
22. Forsgren, N., Kersten, M.: DevOps metrics. Queue 15 (2017). https://doi.org/10.1145/3159169
23. Plant, O.H., van Hillegersberg, J., Aldea, A.: Rethinking IT governance: designing a framework for mitigating risk and fostering internal control in a DevOps environment. Int. J. Account. Inf. Syst. 45, 100560 (2022). https://doi.org/10.1016/j.accinf.2022.100560
24. Aljohani, M.A., Alqahtani, S.S.: A unified framework for automating software security analysis in DevSecOps. In: 2023 International Conference on Smart Computing and Application (ICSCA), pp. 1–6 (2023)


25. Yasar, H.: Implementing Secure DevOps assessment for highly regulated environments. In: Proceedings of the 12th International Conference on Availability, Reliability and Security, pp. 1–3. Association for Computing Machinery, New York, NY, USA (2017)
26. Rafi, S., Yu, W., Akbar, M.A., Alsanad, A., Gumaei, A.: Prioritization based taxonomy of DevOps security challenges using PROMETHEE. IEEE Access 8, 105426–105446 (2020). https://doi.org/10.1109/ACCESS.2020.2998819
27. Woody, C.: DevSecOps pipeline for complex software-intensive systems: addressing cybersecurity challenges (2020)
28. State of DevOps Report 2021 | Puppet by Perforce. https://www.puppet.com/resources/state-of-devops-report
29. State of DevOps Report 2017 (2017)
30. Senapathi, M., Buchan, J., Osman, H.: DevOps capabilities, practices, and challenges: insights from a case study. In: Proceedings of the 22nd International Conference on Evaluation and Assessment in Software Engineering 2018, pp. 57–67. ACM, Christchurch, New Zealand (2018)
31. Lwakatare, L.E., et al.: DevOps in practice: a multiple case study of five companies. Inf. Softw. Technol. 114, 217–230 (2019). https://doi.org/10.1016/j.infsof.2019.06.010
32. Khurum, M., Gorschek, T., Wilson, M.: The software value map — an exhaustive collection of value aspects for the development of software intensive products. J. Softw. Evol. Process 25, 711–741 (2013). https://doi.org/10.1002/smr.1560
33. Ramaj, X., Sánchez-Gordón, M., Chockalingam, S., Colomo-Palacios, R.: Unveiling the safety aspects of DevSecOps: evolution, gaps and trends. Recent Adv. Comput. Sci. Commun. 16, 61–69 (2023)
34. DevOpsSec: Creating the Agile Triangle. https://www.gartner.com/en/documents/1896617
35. López-Peña, M.A., Díaz, J., Pérez, J.E., Humanes, H.: DevOps for IoT systems: fast and continuous monitoring feedback of system availability. IEEE Internet Things J. 7, 10695–10707 (2020). https://doi.org/10.1109/JIOT.2020.3012763
36. Fenton, N., Bieman, J.: Software Metrics: A Rigorous and Practical Approach, 3rd edn. CRC Press (2014)

Gamified Focus Group for Empirical Research in Software Engineering: A Case Study

Luz Marcela Restrepo-Tamayo(B) and Gloria Piedad Gasca-Hurtado

Universidad de Medellín, Carrera 87 No. 30-65, 50026 Medellín, Colombia
[email protected]

Abstract. Focus group discussion is an empirical research method for qualitative studies aimed at eliciting information and perceptions from practitioners. This method is used in software engineering to validate and generalise research results. However, sessions are constrained by the limited time available, more tools are required to structure the discussion, and it is vital to avoid inhibiting the participants. This study proposes a gamification-based strategy for conducting research in which the focus group method is paramount for validation, while overcoming some of the mentioned limitations. For this purpose, a gamification strategy design method consisting of five phases was applied: planning, design, pilot testing, programming and evaluation. The designed strategy was applied in a focus group of seven professionals to prioritise six non-technical factors required in software development teams within Industry 4.0. Applying the gamified strategy allowed us to capture the feelings of all group members methodically within a limited time window. Therefore, the strategy is a potential structured tool for conducting focus group sessions.

Keywords: Focus group · Gamification · Non-technical factors · Software engineering

1 Introduction

Focus groups emerged from the open interview format, extended to group discussion [1], becoming a "research technique that collects data through group interaction on a topic determined by the researcher" [2]. Focus groups are discussions intended to establish the personal perceptions of group members on a defined research topic. Generally, there are between 3 and 12 participants, and the discussion is guided by a moderator-researcher, who follows a pre-defined question structure to keep the discussion focused. The members are selected based on their characteristics concerning the session's topic (intentional sampling). The group setting allows participants to develop responses and ideas from other participants, which increases the richness of the information obtained [3]. Focus group sessions mainly produce qualitative information about the object of the study. One of the advantages of focus groups is that they produce truthful information, in addition to being a method characterised as economical and quick to carry out [4].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 59–71, 2023. https://doi.org/10.1007/978-3-031-42307-9_5


L. M. Restrepo-Tamayo and G. P. Gasca-Hurtado

However, the method has weaknesses typical of other qualitative methods. The sample size is usually small, and the results are biased by the group dynamics, so it can be difficult to generalise them [5]. Poorly conducted focus groups can produce unreliable results due to the difficulty of identifying the contribution of each participant [6, 7]. Thus, adaptations are required to compensate for the technique's weaknesses and increase its effectiveness. The Software Engineering discipline uses models such as Capability Maturity Model Integration (CMMI) or Function Point Analysis (FPA), characterised by group process execution. Thus, it is possible to use a focus group to capture experts' opinions on a specific topic [7]. Moreover, studies have shown that the focus group method is valuable and effective in this discipline for obtaining input from professionals and users [8, 9]. On the other hand, gamification is a strategy that enhances the benefits of other techniques [10]. Therefore, this proposal uses gamification to overcome some of the limitations of the focus group and offer a sustainable alternative for empirical research in Software Engineering, so that the focus group can continue to be used as a qualitative research method, particularly when seeking consensus from a team to assign a priority order to a set of elements. A relevant research topic in Software Engineering is team management, which must consider technical and non-technical factors (NTF) [11, 12]. However, the latter are usually considered less frequently [13] because they are difficult to measure [14, 15]. NTF such as adaptability, problem-solving [16], communication [17], innovation, leadership, and teamwork [18] are identified as relevant within the framework of Industry 4.0.
However, it is helpful to prioritise these six factors so that those managing software development teams can address them sequentially in, for example, productivity estimation models or team intervention plans. This provides a strategic focus, enables efficient resource management, and fosters results orientation, contributing to processes that focus on the critical aspects of team and organisational success [19]. Given that the assigned order of priority may vary from one team to another, it is convenient to have a strategy that promotes consensus, which can be achieved through a gamified focus group for empirical research in Software Engineering. Based on this premise, a case study focused on the NTF required in software development teams, given the interest in identifying the skills needed to face the challenges of Industry 4.0. After the introduction, Sect. 2 presents the background and Sect. 3 the research method used. Section 4 presents the results, which are discussed in Sect. 5. The paper closes with conclusions and future work.

2 Background

Project management in Software Engineering is fundamental, and therefore the human factor is relevant [20], since the software development process is a social process centred on people [21]. Thus, the research approach adopted in this discipline must incorporate this factor, which can be done using empirical methods [22]. There are several approaches to empirical research. One of them is qualitative research, which focuses on studying objects in their natural environment to interpret


a phenomenon based on explanations provided by people [23]. The focus group is a qualitative research method used to capture information from members of a specific group of people through discursive interaction and contrasting opinions [24]. In this study, the group of interest corresponds to software development teams. The focus group method has been used in software engineering: to rate a visual programming language in terms of usability and comprehension [25], to assess the adequacy of a model that evaluates communication in distributed software development [26], to discuss the importance of including only practitioners in discipline-related experiments [27], and to identify how Scrum facilitates achieving the right level of software quality [28]. The method also helps discuss results obtained from quantitative methods [8] or from qualitative methods such as literature reviews [29]. This body of research shows the motivation in the Software Engineering community to use qualitative research methods such as the focus group, since it allows the identification of problems and needs, the generation of ideas and solutions, the evaluation of proposals, and the validation of changes. Its use in this discipline provides detailed information and diverse perspectives, making it a valuable tool for promoting continuous improvement in software development processes. It is a method characterised as practical and cost-effective [30]. However, limitations have been reported related to the time a session may take [31], the method of convening, and influenced group thinking [32]. Thus, group dynamics, social acceptability, trade secrets, and availability of agendas may affect the responses obtained [33]. Some of these limitations, such as the inhibition of participants, can be overcome through gamification.
Gamification is an approach widely used in Software Engineering [34, 35]; it applies game elements in non-game contexts to motivate and engage people in certain activities [36, 37]. It is based on the idea that incorporating game elements, such as challenges or rewards, can increase individuals' participation, engagement and motivation [38]. Games and playful activities help break down initial barriers in focus groups. They can create a more relaxed and conducive environment in which participants feel comfortable expressing their opinions and ideas openly. This technique can motivate people to provide information voluntarily, with more detail and accuracy. Given that both gamification and focus groups have been used in Software Engineering, this proposal seeks to overcome some limitations of focus groups through gamification. The research aims to design a gamified strategy for conducting focus group sessions in contexts related to Software Engineering, particularly when seeking consensus from a team to assign a priority order to a set of elements.

3 Method

To produce the gamified strategy that supports the implementation of the focus group as an empirical research method in Software Engineering, we adapted a method for designing pedagogical strategies based on gamification principles, called the Pedagogic Instrument Design (PID) method [39, 40]. The design method corresponds to the first stage of the methodology used in this research and includes five phases that cover all the formal aspects of the strategy. The


second stage involves the application of the designed strategy, and the third stage involves the consolidation of the results obtained. The phases of each stage are detailed in Fig. 1.

Fig. 1. Stages and phases of the employed method.

4 Results

The three stages presented in Fig. 1 are detailed in the following subsections.

4.1 Gamified Focus Group

Phase A: Planning. The strategy is aimed at professionals in Information Technology or Software Engineering. The theme to be developed relates to the NTF required in Software Engineering to address the challenges of Industry 4.0. The strategy should allow for the capture of team members' ideas, experiences, and perceptions [41] so that qualitative data can be obtained playfully. It should enable data collection through a semi-structured group interview that incorporates gaming elements and revolves around the theme proposed by the research team. Furthermore, it should allow diverse perspectives to be shared within the team context in a rigorous manner that facilitates the formal collection of data, while participants are motivated by being engaged in a playful activity. Finally, the designed strategy should seek to assign a priority order to the identified NTF under a consensus scheme in the focus group, without this consensus-seeking influencing the voting of other participants. Considering the above, the strategy is expected to promote teamwork for the recognition of the NTF that influence the productivity of a software development team.

Phase B: Design. The strategy's objective is to obtain a priority order of the previously identified NTF using a gamified focus group scheme, while obtaining data from the focus group that facilitates the characterisation of those NTF.


Planning poker is a technique for estimating effort based on user stories; it is highly familiar to software development teams and is based on expert judgment [42]. It consists of a development team using estimation cards to vote anonymously on the complexity of a task and then discussing the differences to reach a consensus [43]. An adaptation of planning poker is used to achieve the goal of the proposed gamification strategy: the values used in planning poker allow a priority order to be assigned by replacing user stories with NTF. The application of the strategy requires a facilitator to guide the team of software development experts during the activity. Although the PID guide suggests defining a challenge and rules to determine the winner, in this case all participants have the same role and there is no challenge, so there are no rules for such purposes. A set of souvenirs was provided as a stimulus in appreciation of the time and effort contributed by the participants. Finally, considering the aspects mentioned above, the gamification elements used are Team, Reward and Cooperation [44].

Instructions. The game instructions are as follows:

• Organise groups of seven people if the group is too large. This number matches the recommendation of the focus group technique being gamified.
• Install the Scrum PokerCard app from the Play Store.
• Indicate to the team the value scale defined for the assessment of NTF. In this case, the t-shirt scale will be used with the following labels: XS, S, M, L, XL, XXL, one for each NTF.
• Introduce the topic of planning poker and its variation for this focus group exercise. Conduct a practice exercise if necessary.
• Distribute a set of cards that includes the name of each NTF and its image and, on the back, its respective definition.
• Explain the rules of the game to the participants.
• Record the data on a template that is given to the participants.
• Conclude the strategy with a shared, generalised conclusion and the evaluation of the activity.

Rules of the game. The rules of the game are as follows:

• Select the factor to be analysed. One factor is analysed at a time.
• Each participant reads the definition and analyses the factor from the following question: what are the implications of the absence of this NTF in the software development process?
• For the analysis of each NTF, the possible scenarios presented in Table 1 are considered, allowing up to three cycles per factor.
• The value scale is ordered: XS represents a lower priority, and XXL represents a higher priority.
• A priority cannot be repeated across factors; no label can be left over, and all labels must be assigned to the identified factors.
• The voting must be secret.
• Participants must maintain a "poker face", that is, successfully hide their emotions under an imperturbable expression that does not reveal anything.


Table 1. The action of the participants in each of the possible scenarios.

Scenario | Cycle | Condition | Action by the participants
1 | First cycle | All participants have voted, and there is a consensus | Allocate the result of the consensus vote to the studied NTF
2 | First cycle | All participants have voted, and there is no consensus | Resolve differences by stating the views of the two participants with the highest and lowest votes, and re-vote
3 | Second cycle | All participants have voted, and there is a consensus | Allocate the result of the consensus vote to the studied NTF
4 | Second cycle | All participants have voted, and there is no consensus | Count the frequency of voting, and if there are 5 votes or more, the participants assign the result of the majority vote to the NTF under study
5 | Second cycle | All participants have voted, and there is no consensus | Count the frequency of voting; if there are 4 votes or fewer, they must vote again and record the number of votes
6 | Third cycle | All participants have voted, and there is no consensus | Count the frequency of voting, and if there are 4 or more votes, the participants assign the result of the majority vote to the NTF under study
7 | Third cycle | All participants have voted, and there is no consensus | Count the frequency of voting, and if there are 3 or fewer votes, participants should discuss (without voting) until the value assigned to the NTF is defined
• Participants should only use their cell phones to assign the valuation through Scrum PokerCard.

Material. Each member should be given a set of six cards, each with the definition of an NTF, as shown in Fig. 2. In addition, the team must have a template in which the data of each run for each NTF are recorded. The header of this template is shown in Fig. 3.

Phase C: Pilot Test. The instructions and rules of the game were socialised with a team of Systems Engineering undergraduate students, who applied the strategy. This pilot test indicated that the structure of the strategy is clear. However, it revealed the need for a device (board) on which all participants could see the actions in each scenario (Table 1) and the order of priority assigned to each NTF.
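The cycle rules of Table 1 can be expressed as a small decision function for a group of seven participants. This is only an illustrative sketch of the published rules, not tooling used in the study; the function name and return strings are our own.

```python
from collections import Counter

def next_action(cycle: int, votes: list[str]) -> str:
    """Decide the action for one voting round, following Table 1.

    cycle: 1, 2 or 3; votes: one t-shirt label (XS..XXL) per participant.
    """
    counts = Counter(votes)
    top_label, top_count = counts.most_common(1)[0]
    if top_count == len(votes):                       # scenarios 1 and 3: full consensus
        return f"assign {top_label}"
    if cycle == 1:                                    # scenario 2: state extremes, re-vote
        return "discuss extremes and re-vote"
    if cycle == 2:
        if top_count >= 5:                            # scenario 4: strong majority of 7
            return f"assign {top_label}"
        return "re-vote and record"                   # scenario 5
    if top_count >= 4:                                # scenario 6: majority in third cycle
        return f"assign {top_label}"
    return "discuss without voting until defined"     # scenario 7
```

For example, a second-cycle round with votes `["L", "L", "L", "L", "L", "M", "S"]` yields `"assign L"` (scenario 4), while the same split in the first cycle would trigger a discussion between the extreme voters and a re-vote (scenario 2).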


Fig. 2. Set of cards given to each participant.

Fig. 3. Format Structure - Data Recording

Phase D: Scheduling. The strategy was planned for 26 September 2022 at 18:00 h. To avoid external distractions, a physical space with seven chairs around a table is required, preferably enclosed.

Phase E: Evaluation. A tool was designed and validated to obtain relevant information regarding each participant's experience throughout the activity. The evaluation tool consists of nine items. Four items relate to the purpose of the activity, that is, the usefulness of the activity for achieving the proposed objective. Two items relate to satisfaction, oriented towards how the participant felt during the activity. Finally, three items refer to the use of the Scrum PokerCard, regarding the tool's usefulness as a facilitator for the activity. Each item has four alternatives, and the respondent must choose only one. The alternatives correspond to a Likert scale of the degree of agreement with the propositions, organised as follows: Completely disagree (1), Disagree (2), Agree (3), and Completely agree (4). The evaluation of the activity is obtained from the sum of the scores of the items in each dimension. The expected time for applying the tool is five minutes. Data collection follows the self-administered questionnaire modality. The items are presented in Table 2.

4.2 Execution of Gamified Focus Group

Phase A: Introduction. Considering the lessons learned from the pilot test, the facilitator positioned the participants around the table in the designated place for the exercise and shared the objective of the strategy, the instructions, and the game's rules. An electronic board with the actions for each possible scenario (Table 1) and the list of NTF, used to visualise the order assigned by the team, had been arranged beforehand.
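The per-dimension scoring described in Phase E (summing the 1–4 Likert answers of each dimension's items from Table 2) can be sketched as follows. The item groupings come from Table 2, while the respondent answers are invented for illustration.

```python
# Items per dimension, as listed in Table 2
DIMENSIONS = {
    "purpose": ["P1", "P2", "P3", "P4"],
    "satisfaction": ["S1", "S2"],
    "scrum_pokercard": ["U1", "U2", "U3"],
}

def dimension_scores(answers: dict[str, int]) -> dict[str, int]:
    """Sum the 1-4 Likert answers of each dimension's items."""
    return {dim: sum(answers[item] for item in items)
            for dim, items in DIMENSIONS.items()}

# Invented example respondent (1 = Completely disagree ... 4 = Completely agree)
respondent = {"P1": 4, "P2": 4, "P3": 3, "P4": 2,
              "S1": 1, "S2": 2, "U1": 1, "U2": 4, "U3": 3}
```

Note that some items (P4, S1, S2, U1) are negatively worded, so a low raw score on them indicates a favourable experience; any interpretation of the sums should take this into account.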


Table 2. Items to evaluate the activity.

Aspect | Items
Purpose of the activity | P1. With Planning Poker, I reflected on the level of importance of NTF in the software development process
 | P2. I found it interesting to use planning poker in a different context than agile planning
 | P3. It was very interesting to know the position of other participants with respect to the order of priority of the NTF analysed
 | P4. I feel very little identification with the final result of the order of priority of the NTF studied
Satisfaction | S1. Planning poker was not a fun activity
 | S2. By the nature of Planning Poker, I felt that the group pressured me to change my position on the order of at least one of the factors
Use of Scrum PokerCard | U1. I found the Scrum PokerCard application difficult to use in a context other than agile planning
 | U2. The Scrum PokerCard application is suitable for the activity
 | U3. Prioritisation of factors is made easier by using Scrum PokerCard

Phase B: Execution. A pilot test was conducted to verify that all participants understood the strategy's procedure and to ensure the proper functioning of the Scrum PokerCard. Once the questions that arose had been clarified, the participants gave their consent to record the session and together selected the first NTF to analyse.

Phase C: Data Collection. The format presented in Fig. 3 was used to obtain the data for the analysis of each NTF. Table 3 shows the order in which the team analysed the factors, the number of cycles according to the possible scenarios in Table 1, and the time it took to analyse each factor.

Table 3. Data obtained at each iteration.

Order of analysis | NTF             | Quantity of cycles | Time (min)
1                 | Problem-solving | 3                  | 11:25
2                 | Teamwork        | 3                  | 10:10
3                 | Leadership      | 3                  | 7:40
4                 | Innovation      | 2                  | 5:30
5                 | Adaptability    | 2                  | 4:15
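The per-factor times in Table 3 can be totalled to confirm the 39-minute session length reported later in the Discussion; a short sketch:

```python
# Sum the mm:ss durations reported in Table 3 for the five analysed NTF.
times = ["11:25", "10:10", "7:40", "5:30", "4:15"]  # per-factor analysis time

def total_minutes(ts):
    """Return the total duration of a list of mm:ss strings as (minutes, seconds)."""
    secs = sum(int(m) * 60 + int(s) for m, s in (t.split(":") for t in ts))
    return secs // 60, secs % 60

minutes, seconds = total_minutes(times)
```

The totals come to exactly 39 minutes, matching the consensus time cited in Sect. 5.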

At the end of the activity, snacks were provided, and participants were asked to complete the evaluation, considering the items presented in Table 2.

Gamified Focus Group for Empirical Research in Software Engineering: A Case Study


4.3 Consolidation of Results

Phase A: Order of Priority. All participants stated that the six NTF are relevant to software development. However, after the analysis using the strategy, they agreed that leadership is the lowest-priority NTF and adaptability the highest. The priority order of the six NTF is presented in Fig. 4.

Fig. 4. Assigned priority order.

Phase B: Activity Evaluation. Figure 5 shows the results of the strategy evaluation.

Fig. 5. Results - Strategy evaluation.

The results of the strategy evaluation indicate that participants agreed or strongly agreed that the proposed activity helped achieve the stated objective (P1, P2, P3). Only one participant felt very little identification with the final priority order (P4). Participants agreed or strongly agreed that the activity was fun (S1). Four members disagreed that the group had pressured them to change their position on the order of at least one factor, while three agreed with this premise (S2). Regarding the Scrum PokerCard, only one person found it difficult to use (U1). Participants agreed or strongly agreed that the application suits the activity (U2), and only one person disagreed that assigning the order of the NTF became easier when using the application (U3).


5 Discussion

This study aims to show how a gamified focus group can improve empirical research in Software Engineering. A case study was presented in which a focus group of seven software development experts was asked to assign a priority order to six NTF essential for Software Engineering to address the challenges of Industry 4.0. Based on the activity evaluation results, the strategy helped achieve the stated objective, the participants expressed satisfaction, and the Scrum PokerCard application facilitated its execution.

If a software development team's interest is to assign a priority order, an adaptation of planning poker can be used, as suggested by this research. Using the gamified focus group for this purpose, and presenting the team with a guide of actions for each possible situation (Table 1), it was possible to reach a consensus (see Fig. 4) in 39 min (Table 3), ensuring the participation of all members through the voting scheme and the sharing of opinions. Thus, the limitations concerning time, activity orientation, data recording, and participation of group members were addressed.

While the focus group method has been used in Software Engineering [7, 26–29, 31, 32, 45] and procedures have been defined to facilitate its application in this discipline [45, 46], the gamified strategy presented in this study contributes to empirical research in Software Engineering: it offers a procedure with the advantages of a focus group that, in addition to addressing the limitations mentioned earlier, is carried out in a fun way. The proposal promotes consensus and the sharing of expert perspectives. Moreover, it is easily replicable in other software development teams to assign a priority order to the analysed NTF, and it can be generalised to prioritise other elements of interest, such as the assignment of responsibilities, sprint planning, workflow, task priorities, and even team hierarchy.

A systematic approach that allows consensus to be reached on the order of elements that interest the team, whether technical or non-technical, contributes to the improvement of software development processes by promoting the participation and commitment of team members and organisational learning [19].
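The consensus loop underlying the voting scheme can be sketched as repeated voting cycles until all cards agree, as reflected in the 2–3 cycles per factor in Table 3. The callback name and the scripted votes below are illustrative assumptions; real sessions interleave a discussion step between cycles, following the scenario guide in Table 1.

```python
from typing import Callable, List, Optional, Tuple

def poker_rounds(collect_votes: Callable[[int], List[int]],
                 max_cycles: int = 5) -> Tuple[Optional[int], int]:
    """Run planning-poker cycles until the team converges on a single card.

    `collect_votes` is a hypothetical callback: given the cycle number, it
    returns the card chosen by each participant in that cycle.
    """
    for cycle in range(1, max_cycles + 1):
        votes = collect_votes(cycle)
        if len(set(votes)) == 1:      # unanimity -> consensus reached
            return votes[0], cycle
    return None, max_cycles           # no consensus within the budget

# Scripted example for a seven-member team: consensus emerges in the third
# cycle, matching the 3-cycle pattern of the first factors in Table 3.
script = {1: [5, 8, 5, 5, 3, 5, 5], 2: [5, 5, 5, 5, 3, 5, 5], 3: [5] * 7}
value, cycles = poker_rounds(lambda c: script[c])
```

Terminating on unanimity (rather than a majority) mirrors the paper's emphasis on full-team consensus rather than vote counting.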

6 Conclusions and Future Work

Since the focus group technique is used in Software Engineering to obtain qualitative data from thematic experts [7], gamified focus groups constitute a contribution to empirical research in this discipline. Through gamification, it is possible to overcome some of the identified limitations of traditional focus groups and thus improve the quality of the data obtained.

The proposed gamified focus group method was designed to prioritise six essential NTF required in Software Engineering to address the challenges of Industry 4.0. However, the proposal can be adapted to other Software Engineering contexts where a focus group is needed to prioritise elements, not necessarily NTF. Although not yet documented, it is possible to adapt the proposal to prioritise other aspects of software engineering, such as assigning responsibilities, sprint planning, workflow, task prioritisation, and even hierarchy within the team. This constitutes future work.


The proposed approach assumes the physical presence of team members, which is a limitation. However, the research team intends to continue working on designing a software tool that facilitates its implementation in distributed teams.

References

1. Templeton, J.F.: The Focus Group: A Strategic Guide to Organising, Conducting and Analysing the Focus Group Interview. McGraw-Hill (1996)
2. Morgan, D.L.: Focus groups. Annu. Rev. Sociol. 22, 129–152 (1996). https://doi.org/10.1146/annurev.soc.22.1.129
3. Langford, J., McDonaugh, D.: Focus Groups: Supporting Effective Product Development. Taylor and Francis, London (2003)
4. Widdows, R., Hensler, T.A., Wyncott, M.H.: The focus group interview: a method for assessing users' evaluation of library service. Coll. Res. Libr. 52, 352–359 (1991). https://doi.org/10.5860/crl_52_04_352
5. Maruyama, G., Ryan, C.S.: Research Methods in Social Relations, 8th edn., p. 572 (2014)
6. Rodas Pacheco, F.D., Pacheco Salazar, V.G.: Grupos focales: marco de referencia para su implementación. INNOVA Res. J. 5, 182–195 (2020). https://doi.org/10.33890/innova.v5.n3.2020.1401
7. Kontio, J., Lehtola, L., Bragge, J.: Using the focus group method in software engineering: obtaining practitioner and user experiences. In: Proceedings of the 2004 International Symposium on Empirical Software Engineering, ISESE 2004, pp. 271–280 (2004). https://doi.org/10.1109/ISESE.2004.1334914
8. Bjarnason, E., Gislason Bern, B., Svedberg, L.: Inter-team communication in large-scale co-located software engineering: a case study. Springer, US (2022)
9. Kriglstein, S., Leitner, M., Kabicher-Fuchs, S., Rinderle-Ma, S.: Evaluation methods in process-aware information systems research with a perspective on human orientation. Bus. Inf. Syst. Eng. 58(6), 397–414 (2016). https://doi.org/10.1007/s12599-016-0427-3
10. García, F., Pedreira, O., Piattini, M., Cerdeira-Pena, A., Penabad, M.: A framework for gamification in software engineering. J. Syst. Softw. 132, 21–40 (2017). https://doi.org/10.1016/j.jss.2017.06.021
11. Wagner, S., Ruhe, M.: A systematic review of productivity factors in software development. In: 2nd International Workshop on Software Productivity Analysis and Cost Estimation (SPACE 2008) (2008)
12. Trendowicz, A., Münch, J.: Factors influencing software development productivity – state-of-the-art and industrial experiences. In: Advances in Computers, pp. 185–241 (2009)
13. Prechelt, L.: The mythical 10x programmer. In: Sadowski, C., Zimmermann, T. (eds.) Rethinking Productivity in Software Engineering, pp. 3–11. Apress, Berkeley, CA (2019). https://doi.org/10.1007/978-1-4842-4221-6_1
14. Canedo, E.D., Santos, G.A.: Factors affecting software development productivity: an empirical study. In: ACM International Conference Proceeding Series, pp. 307–316. Association for Computing Machinery (2019)
15. Capretz, L.F., Ahmed, F., da Silva, F.Q.B.: Soft sides of software. Inf. Softw. Technol. 92, 92–94 (2017). https://doi.org/10.1016/j.infsof.2017.07.011
16. World Economic Forum: The Future of Jobs Report (2018)
17. Spiezia, V.: Jobs and skills in the digital economy (2017)
18. Infosys: Talent Radar: How the best companies get the skills they need to thrive in the digital era (2019)
19. Manifesto: Software Process Improvement (SPI). https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf
20. Pressman, R.: Ingeniería del Software: Un Enfoque Práctico. McGraw-Hill (2010)
21. Kirilo, C.Z., et al.: Organisational climate assessment using the paraconsistent decision method. Procedia Comput. Sci. 131, 608–618 (2018). https://doi.org/10.1016/j.procs.2018.04.303
22. Wohlin, C., Höst, M., Henningsson, K.: Empirical research methods in software engineering. In: Conradi, R., Wang, A.I. (eds.) Empirical Methods and Studies in Software Engineering. LNCS, vol. 2765, pp. 7–23. Springer, Heidelberg (2003). https://doi.org/10.1007/978-3-540-45143-3_2
23. Denzin, N.K., Lincoln, Y.S.: Handbook of Qualitative Research. Sage (1994)
24. Monje Álvarez, C.A.: Metodología de la investigación cuantitativa y cualitativa. Guía didáctica (2011)
25. Jbara, A., Bibliowicz, A., Wengrowicz, N., Levi, N., Dori, D.: Toward integrating systems engineering with software engineering through object-process programming. Int. J. Inf. Technol. (2020). https://doi.org/10.1007/s41870-020-00488-8
26. Junior, I. de F., Marczak, S., Santos, R., Rodrigues, C., Moura, H.: C2M: a maturity model for the evaluation of communication in distributed software development. Empir. Softw. Eng. 27 (2022). https://doi.org/10.1007/s10664-022-10211-9
27. Falessi, D., et al.: Empirical software engineering experts on the use of students and professionals in experiments. Empir. Softw. Eng. 23(1), 452–489 (2017). https://doi.org/10.1007/s10664-017-9523-3
28. Alami, A., Krancher, O.: How Scrum adds value to achieving software quality? Empir. Softw. Eng. 27 (2022)
29. Assyne, N., Ghanbari, H., Pulkkinen, M.: The essential competencies of software professionals: a unified competence framework. Inf. Softw. Technol. 151, 107020 (2022). https://doi.org/10.1016/j.infsof.2022.107020
30. Luke, M., Goodrich, K.M.: Focus group research: an intentional strategy for applied group research? J. Spec. Gr. Work. 44, 77–81 (2019). https://doi.org/10.1080/01933922.2019.1603741
31. Kontio, J., Bragge, J., Lehtola, L.: The focus group method as an empirical tool in software engineering. In: Guide to Advanced Empirical Software Engineering, pp. 93–116. Springer (2008). https://doi.org/10.1007/978-1-84800-044-5_4
32. Dingsøyr, T., Lindsjørn, Y.: Team performance in agile development teams: findings from 18 focus groups. In: Baumeister, H., Weber, B. (eds.) XP 2013. LNBIP, vol. 149, pp. 46–60. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38314-4_4
33. Franz, N.K.: The unfocused focus group: benefit or bane? Qual. Rep. 16, 1380–1388 (2011). https://doi.org/10.46743/2160-3715/2011.1304
34. Pedreira, O., García, F., Brisaboa, N., Piattini, M.: Gamification in software engineering – a systematic mapping. Inf. Softw. Technol. 57, 157–168 (2015). https://doi.org/10.1016/j.infsof.2014.08.007
35. Muñoz, M., Hernández, L., Mejia, J., Gasca-Hurtado, G.P., Gómez-Alvarez, M.C.: State of the use of gamification elements in software development teams. In: Stolfa, J., Stolfa, S., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2017. CCIS, vol. 748, pp. 249–258. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64218-5_20
36. Oprescu, F., Jones, C., Katsikitis, M.: I PLAY AT WORK – ten principles for transforming work processes through gamification. Front. Psychol. 5, 1–5 (2014). https://doi.org/10.3389/fpsyg.2014.00014
37. Mounir, M., Badr, K., Sameh, S.: Gamification framework in automotive SW development environment to increase teams engagement. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 278–288. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_18
38. Alsawaier, R.S.: The effect of gamification on motivation and engagement. Int. J. Inf. Learn. Technol. 35, 56–79 (2018). https://doi.org/10.1108/IJILT-02-2017-0009
39. Gomez-Alvarez, M.C., Gasca-Hurtado, G.P., Manrique-Losada, B., Arias, D.M.: Method of pedagogic instruments design for software engineering. In: 11th Iberian Conference on Information Systems and Technologies (CISTI) (2016)
40. Gasca-Hurtado, G.P., Gómez-Álvarez, M.C., Manrique-Losada, B.: Using gamification in software engineering teaching: study case for software design. In: Advances in Intelligent Systems and Computing (2019)
41. Hamui-Sutton, A., Varela-Ruiz, M.: La técnica de grupos focales. Investig. en Educ. Médica 2, 55–60 (2013). https://doi.org/10.1016/s2007-5057(13)72683-8
42. Mahnič, V., Hovelja, T.: On using planning poker for estimating user stories. J. Syst. Softw. 85, 2086–2095 (2012). https://doi.org/10.1016/J.JSS.2012.04.005
43. Sharma, B., Purohit, R.: Review of current software estimation techniques. In: Panda, B., Sharma, S., Roy, N.R. (eds.) REDSET 2017. CCIS, vol. 799, pp. 380–399. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-8527-7_32
44. Werbach, K., Hunter, D.: The Gamification Toolkit: Dynamics, Mechanics, and Components for the Win (2015)
45. Martakis, A., Daneva, M.: Handling requirements dependencies in agile projects: a focus group with agile software development practitioners. In: Proceedings – International Conference on Research Challenges in Information Science (2013). https://doi.org/10.1109/RCIS.2013.6577679
46. Mendoza-Moreno, M., González-Serrano, C., Pino, F.J.: Focus group como proceso en ingeniería de software: una experiencia desde la práctica [Focus group as a software engineering process: an experience from the praxis]. Dyna 80, 51–60 (2013)

Exploring Metaverse-Based Digital Governance of Gambia: Obstacles, Citizen Perspectives, and Key Factors for Success

Pa Sulay Jobe2, Murat Yilmaz1,2(B), Aslıhan Tüfekci1, and Paul M. Clarke3,4

1 Informatics Institute, Gazi University, Ankara, Turkey

{my,aslihan}@gazi.edu.tr

2 Department of Computer Engineering, Gazi University, Ankara, Turkey

[email protected]

3 School of Computing, Dublin City University, Dublin, Ireland

[email protected] 4 Lero, the Science Foundation Ireland Research Center for Software, Limerick, Ireland

Abstract. The metaverse concept has recently garnered substantial attention, with growing interest in its potential application in governance. This study examines the obstacles, citizen perspectives, and crucial factors that may facilitate or impede the success of metaverse-based digital governance in a country. Through an in-depth analysis of survey data, the research reveals that weak internet connections and insufficient infrastructure constitute the primary barriers to adopting metaverse-based digital governance in The Gambia. However, addressing these challenges could significantly contribute to its successful implementation. The findings indicate that citizens' familiarity with the metaverse has a mixed impact on their confidence in the government's capacity to utilize the technology effectively. Additionally, a positive correlation was observed between satisfaction with existing digital governance and the public's propensity to engage in metaverse-driven initiatives. Privacy and security concerns surfaced as notable factors influencing citizens' willingness to participate in digital governance efforts within the metaverse. To ensure the effective adoption and execution of metaverse-based digital governance in The Gambia, the study proposes a roadmap that prioritizes digital literacy programs and infrastructure development, addresses privacy and security concerns, and cultivates trust in the government's ability to manage the transition competently. This research may serve as a valuable resource for other nations considering the adoption of metaverse-based digital governance systems.

Keywords: Metaverse · Digital Governance · The Gambia · Roadmap

1 Introduction In the realm of digital governance, which Scholl [1] defines as “the effective use of information and communication technologies in managing a state or other entity,” the metaverse could play a pivotal role. Technology’s rapid advancement has revolutionized © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 72–83, 2023. https://doi.org/10.1007/978-3-031-42307-9_6


how governments engage with their citizens, leading to the emergence of digital governance [1]. Recently, the concept of the metaverse has garnered attention as technological breakthroughs have allowed individuals to create and participate in immersive virtual experiences. This signifies the next evolution of the web, transitioning from a 2D interface to a 3D environment that fully immerses users [33]. Consequently, metaverse-based digital governance systems may substantially alter how governments deliver services and interact with their constituents, paving the way for innovative governance and civic engagement approaches.

This research aims to assess the metaverse landscape in The Gambia and its potential for digital governance. The study seeks to gauge the current awareness of the metaverse among Gambian citizens. This insight will establish the foundation for evaluating the viability and receptivity of the population regarding a metaverse-based digital governance framework in The Gambia. Subsequently, the research endeavors to identify the primary challenges and opportunities that warrant attention for the successful implementation of a metaverse-based digital governance system in The Gambia. Recognizing these factors will help guide the development of strategies and initiatives tailored to the country's unique context, ensuring potential obstacles are mitigated and opportunities are seized. Moreover, based on the information gathered, we aim to develop a comprehensive plan or roadmap for implementing a metaverse-based digital governance system in The Gambia. This roadmap addresses the identified opportunities and challenges, providing policymakers and stakeholders with a clear and actionable framework for harnessing the metaverse's potential to enhance governance and civic engagement within the nation.

Section 2 presents a literature review of the domain, with Sect. 3 discussing the methodology guiding the work. In Sect.
4, the approach to validating the measurement scales is introduced and the hypotheses examined are reported; Sect. 5 presents a brief discussion. Finally, Sect. 6 outlines the primary conclusions.

2 Literature Review

In this section, we review the existing literature to examine The Gambia's e-government program, focusing on the progress achieved to date. Furthermore, we investigate the potential applications of the metaverse, particularly in the realm of digital governance.

2.1 Digital Governance in The Gambia

Despite facing challenges such as scarce resources and low digital literacy [3], Africa holds potential for a digital governance transformation; key initiatives include digital infrastructure investment and e-government services [5]. Collaboration, public engagement, and innovative approaches are crucial for success [4, 7, 28], alongside addressing drawbacks and ensuring policy compliance [28]. Like other African countries, The Gambia is also improving, with several digital initiatives for its citizens. We reviewed several research studies to understand the present status of digital governance in The Gambia. The use of information technology has been identified as a major area for Africa's entry into the information era, claims


Islam [6], who produced a study on the preliminary evaluation and processes for the construction of an e-government approach for The Gambia. Ceesay and Bojang's [10] study of The Gambia during the pandemic found that e-government practices increase transparency and government-citizen interaction, which in turn enhances the delivery of public services. Similarly, research conducted by Lin et al. [8] focuses on the uptake of e-governance activities by Gambian individuals. Using the TAM (Technology Acceptance Model), the intentions of users toward e-government facilities are examined. Their research indicates that TAM strongly affects the decision to employ e-government services, which may help The Gambian government run more efficiently and cheaply. As stated by Jung [9] in his assessment of the study by Lin et al. [8], key TAM parameters such as PEOU (perceived ease of use), IQ (information quality), and PU (perceived usefulness) are essential to the success of e-government operations in The Gambia. According to this research, people in The Gambia are open to adopting e-governance projects, provided these are created with TAM's fundamental principles in mind. Bojang [11] found that citizens' intentions to embrace e-governance services are directly correlated with the degree of public awareness and the availability of high-quality services, and are strongly influenced by citizens' perceptions of the accessibility, usability, timeliness, accuracy, and responsiveness of government websites.

In their evaluations of The Gambia's e-government, Jung [9], Ceesay and Bojang [10], and Bojang [11] offer insights into the state of the sector. Jung [9] found that the e-government sector in The Gambia is still in its initial stages, while Ceesay and Bojang [10] pointed out that more investment in infrastructure and people is needed to improve the way e-government services are delivered. According to Bojang [11], to enhance the success of e-governance efforts in The Gambia, there should be an emphasis on citizen involvement and participation. Moreover, it is essential to understand how people embrace and utilize e-government applications. More effective and cost-efficient governance operations in The Gambia may be achieved by using TAM's central components in the planning, development, and promotion of e-government systems. PEOU, IQ, and PU are three fundamental TAM characteristics that the authors believe should be considered while developing e-government efforts. To guarantee the success of e-government efforts, governments must take measures to develop confidence and offer excellent service delivery, which in turn requires cooperation between the public sector and individuals.

Considering the metaverse's potential to revolutionize various industries, The Gambia can explore its application in digital governance to provide more immersive and interactive experiences for citizens, enhancing government-citizen interaction, transparency, and public service delivery. By leveraging the insights from the literature and prioritizing the development of an e-governance approach that meets the needs and desires of the population, The Gambia can pave the way for a more efficient and cost-effective digital governance model.


2.2 The Metaverse

The metaverse, encompassing social VR platforms and enabled by 5G and edge computing, holds the potential to revolutionize industries such as education, entertainment, healthcare, and governance [14, 26]. Essential components include immersive realism, ubiquitous access, and seamless user experiences [12].

In education, the metaverse offers immersive learning experiences but raises concerns about support systems, educator preparation, and security [30–32]. Robust support systems and comprehensive teacher training are necessary [13, 15], and blockchain can enhance security [16, 17]. In tourism, the metaverse enhances customer satisfaction through personalized experiences and digitized cultural heritage [18, 19], although further research is needed for practical applications. In healthcare, the metaverse improves patient satisfaction, health education, and medical training accessibility [20–22].

The metaverse also offers opportunities to enhance governance, communication, and service delivery, but faces challenges such as costs, privacy, and safety concerns [34]. South Korea is proactively embracing metaverse technology, while other countries adopt a cautious approach [35, 36]. Metaverse applications in public universities seek to improve education and career preparation. Cities worldwide are building digital twins for urban planning, potentially benefiting developing countries [33]. Despite challenges involving infrastructure, governance, taxes, intellectual property, and cybersecurity, the metaverse can address concerns in urban planning, healthcare, education, and employment [33]. Strategic futures groups, virtual tech frontier ministries, and national digital twin policies can help countries prepare for the metaverse transformation. In Africa, the metaverse can improve public engagement, transparency, and information access, but faces issues such as limited mobile network access and privacy concerns [37, 39].

Blockchain technology can also address privacy issues in the metaverse [27, 38] and strengthen enabling technologies such as IoT, digital twins, AI, and big data [23, 24]. Metaverse design should prioritize privacy, governance, and ethics [25]. By embracing ethical metaverse design, prioritizing privacy and governance, and utilizing blockchain technology to strengthen enabling technologies, The Gambia can harness the potential of the metaverse to drive innovation and progress in various sectors, ultimately benefiting its citizens. Considering its applications in education, tourism, and healthcare, the metaverse can contribute to The Gambia's overall development. However, further research is needed to understand its practical applications.

3 Research Methodology

To comprehensively examine the potential of leveraging the metaverse for digital governance in The Gambia, this study employed a robust methodology that combined various data collection methods and analytical techniques. The methodology aimed to provide a detailed understanding of the present state of digital governance, identify key stakeholders' perspectives, and ultimately develop a metaverse-based digital governance roadmap.


3.1 Research Design

This study adopts a quantitative research approach, employing a combination of literature review, document examination, and a survey to collect and analyze data. The survey, carried out from 29 March 2023 to 7 April 2023, gathered 115 respondents and served as a principal source of data to strengthen the findings from the literature review and document analysis.

3.2 Sample Selection

The study's sample comprises key stakeholders in The Gambia's digital governance, including government officials, technology experts, business representatives, academia, students, and community members. These diverse participants offer insights into the current state of digital governance and the potential for metaverse-based governance. The survey was distributed via personal contacts and word of mouth to reach a wide range of respondents, aiming for an accurate representation of the population interested in digital government usage in The Gambia.

3.3 Data Collection Methods

Besides the literature review and research papers, an online survey built with Google Forms was used to gather primary data from the chosen sample of stakeholders. The survey covered various subjects, including the present state of digital governance in The Gambia, harnessing the metaverse for digital governance, the potential advantages and challenges of metaverse-centric governance, and attitudes towards employing the metaverse for digital governance. This combination of data-gathering techniques ensured a comprehensive and well-balanced understanding of the topic at hand.

3.4 Descriptive Analysis

The survey collected demographic information, such as age, gender, occupation, and monthly earnings, from respondents involved in digital governance in The Gambia to better understand their perspectives on metaverse-based digital governance. Most respondents (59.1%) were aged 26–31, indicating that younger individuals, who are more familiar with digital technology, participated in the survey. Gender distribution showed 74.8% male and 25.2% female respondents, highlighting the need for increased gender diversity in metaverse-based governance discussions and implementations. The occupation breakdown included students (33%), technology experts (27%), and government officials (19.1%), providing a comprehensive view of The Gambia's digital governance landscape and the potential for metaverse-based governance. Regarding monthly earnings, 56.5% reported a range of $101–$500, while 17.4% preferred not to disclose their income. This data emphasizes the importance of designing metaverse-based digital governance strategies that are affordable and easy to use, so that individuals with varying financial capacities can participate and the opportunity to partake in this new realm remains equitable for all citizens. Further insights are provided below when discussing the hypotheses.
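The reported demographic shares can be back-derived from head counts out of the 115 respondents. The counts below are our own reconstruction (an assumption; the paper reports only percentages), chosen so that the rounded shares reproduce the figures in the text:

```python
# Hypothetical head counts out of N = 115 respondents, back-derived from the
# percentages reported in Sect. 3.4; the paper itself gives only percentages.
N = 115
counts = {
    "age 26-31": 68,
    "male": 86,
    "female": 29,
    "students": 38,
    "technology experts": 31,
    "government officials": 22,
}

# Share of each group, rounded to one decimal place as in the paper.
shares = {k: round(v / N * 100, 1) for k, v in counts.items()}
```

Rounding these reconstructed counts yields 59.1% (aged 26–31), 74.8%/25.2% (male/female), and 33.0%/27.0%/19.1% (students/experts/officials), matching the reported figures.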


4 Results

4.1 Validation of the Measurement Scales

The survey questions were designed based on a comprehensive review of the relevant literature in the fields of digital governance and metaverse technologies. This ensured that the survey instrument covered all aspects of the research topic and accurately reflected the theoretical constructs under investigation. Cronbach's reliability test was applied, yielding a satisfactory alpha value of 0.724 and indicating internal consistency among the survey questions [40]. Cronbach's alpha values range from 0 to 1, with higher values demonstrating greater internal consistency. Generally, the acceptable reliability threshold lies between 0.7 and 0.8, with values above 0.8 considered excellent. The obtained alpha value of 0.724 places the survey instrument within the acceptable range, suggesting that the survey items are adequately correlated and effectively measure the intended construct.

4.2 Hypothesis Testing

We developed five hypotheses, drawing on our understanding of the research problem and the relevant literature. These hypotheses cover various aspects of digital governance in the metaverse, including citizens' acquaintance with the metaverse, confidence in governmental entities, anticipated economic implications, concerns about confidentiality and safety, the digital divide, and the interplay between these elements and citizens' inclination to partake in metaverse-based digital governance initiatives. The five hypotheses are:

Hypothesis 1: The degree of satisfaction with the current digital governance condition in The Gambia is negatively correlated with the populace's inclination to partake in metaverse-driven administrative initiatives.

Hypothesis 2: If infrastructure improvements and increased internet accessibility are implemented, then the primary barriers to establishing metaverse-based digital governance in The Gambia will be significantly reduced.

Hypothesis 3: The level of familiarity with the metaverse is negatively associated with the level of trust citizens have in the government's ability to responsibly leverage the metaverse for digital governance.

Hypothesis 4: Prioritizing digital literacy programs in The Gambia will increase the likelihood of successful implementation and widespread adoption of a metaverse-based digital government.

Hypothesis 5: People who are very familiar with the concept of the metaverse will be less likely to participate in metaverse-based digital governance in The Gambia.

Table 1 supports Hypothesis 1, stating that satisfaction with current digital governance in The Gambia positively correlates with citizens' inclination to participate in


P. S. Jobe et al.

Table 1. The relationship between satisfaction in digital governance in The Gambia and the likelihood of participation in the metaverse initiative.

| Measure | Completely likely | Very likely | Moderately likely | Slightly likely | Not at all likely | Total % |
|---|---|---|---|---|---|---|
| Not at all satisfied | 16.52% | 30.43% | 11.30% | 7.83% | 1.74% | 67.83% |
| Slightly satisfied | 2.61% | 14.78% | 5.22% | 0.87% | 0 | 23.48% |
| Moderately satisfied | 0.87% | 4.35% | 2.61% | 0.87% | 0 | 8.70% |
| Total |  |  |  |  |  | 100% |

metaverse-driven initiatives. Among the “Not at all satisfied” respondents, 67.83% indicated varying likelihoods of participating in metaverse-based initiatives, with 30.43% being “Very likely” to participate. This implies that dissatisfaction with current digital governance may drive citizens to seek alternatives such as the metaverse. Among the “Slightly satisfied” respondents, 23.48% expressed some likelihood of participating in metaverse initiatives, with 14.78% being “Very likely”. Moreover, 8.70% of the “Moderately satisfied” respondents indicated some likelihood of taking part in metaverse-based ventures, with 4.35% being “Very likely”. This reveals that, regardless of satisfaction level, a significant portion of respondents are interested in exploring metaverse-based solutions.

Concerning Hypothesis 2, respondents identified the two most significant challenges in leveraging the metaverse for digital governance as “Poor Internet Connections” (73.9%) and “Lack of Good Infrastructures” (53.9%). This supports the hypothesis that infrastructure improvements and increased internet accessibility are crucial to overcoming the primary barriers to metaverse-based digital governance in The Gambia. Additional challenges, such as “Technological barriers and challenges” (59.1%) and “Digital divide and cost of necessary gadgets” (37.4%), could also be addressed through enhanced infrastructure and internet accessibility. “Privacy and security concerns” (40.9%) represent another challenge, necessitating robust security measures and privacy-protection policies within metaverse-based digital governance systems. Furthermore, 7.8% of respondents indicated “Other” challenges, warranting further investigation.
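The Table 1 percentages, where each cell is a share of all respondents, can be reproduced from raw answers with a small cross-tabulation sketch. The response records below are hypothetical and serve only to illustrate the computation, not to reproduce the study's data.

```python
from collections import Counter

# Hypothetical (satisfaction, likelihood) answer pairs -- not the study's data.
responses = [
    ("Not at all satisfied", "Very likely"),
    ("Not at all satisfied", "Completely likely"),
    ("Slightly satisfied", "Very likely"),
    ("Moderately satisfied", "Moderately likely"),
    ("Not at all satisfied", "Very likely"),
    ("Slightly satisfied", "Slightly likely"),
    ("Not at all satisfied", "Moderately likely"),
    ("Moderately satisfied", "Very likely"),
]

n = len(responses)
cells = Counter(responses)  # count per (satisfaction, likelihood) cell
# Each cell as a percentage of ALL respondents, as in Table 1.
cell_pct = {cell: 100.0 * count / n for cell, count in cells.items()}
# Row totals (the "Total %" column): share of respondents per satisfaction level.
row_pct = Counter()
for (satisfaction, _), pct in cell_pct.items():
    row_pct[satisfaction] += pct
```

Summing every cell (or every row total) recovers 100%, matching the table's grand total.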
From Table 2, we can draw the following observations regarding the validity of Hypothesis 3, which states: “The level of familiarity with the metaverse is negatively associated with the level of trust citizens have in the government’s ability to responsibly leverage the metaverse for digital governance.” Among the survey participants who are slightly familiar with the metaverse, the largest share (12.17%) lack confidence in the government’s competence to responsibly implement the metaverse for digital governance, followed by those who exhibit moderate trust (9.57%). This suggests that, as familiarity increases, faith in the government’s capability appears to decline. Similarly, respondents who are moderately familiar with the metaverse show a higher percentage (12.17%) of slight trust in the government’s ability, followed by moderate trust (8.70%). This trend supports the hypothesis that increased familiarity

Exploring Metaverse-Based Digital Governance of Gambia


Table 2. Relationship between the level of familiarity with the metaverse and the level of trust in the government to implement the initiative for digital governance.

| Familiarity | No trust at all | Slight trust | Moderate trust | High trust | Complete trust | Total % |
|---|---|---|---|---|---|---|
| Not at all familiar | 6.96% | 7.83% | 5.22% | 2.61% | 0 | 22.61% |
| Slightly familiar | 12.17% | 7.83% | 9.57% | 2.61% | 0 | 32.17% |
| Moderately familiar | 6.96% | 12.17% | 8.70% | 3.48% | 0 | 31.30% |
| Very familiar | 3.48% | 2.61% | 5.22% | 0 | 0 | 11.30% |
| Extremely familiar | 1.74% | 0 | 0 | 0 | 0.87% | 2.61% |
| Total |  |  |  |  |  | 100% |

is negatively associated with trust in the government’s ability. Interestingly, among the respondents who are not at all familiar with the metaverse, a higher percentage (7.83%) have slight trust in the government’s ability, while 6.96% do not trust the government at all. This might imply that a lack of familiarity leads to more neutral or uncertain trust perceptions. For respondents who are very familiar with the metaverse, the largest share (5.22%) have moderate trust in the government’s ability. This finding does not align with the hypothesis, as it suggests that increased familiarity may not necessarily lead to decreased trust. The data therefore provide mixed support for Hypothesis 3: while the majority of respondents with slight and moderate familiarity appear to have less trust in the government’s ability to responsibly leverage the metaverse for digital governance, the same trend does not hold for respondents who are very familiar with the metaverse. Hence, we categorize the hypothesis as not supported.

Table 3. Importance of digital literacy programs

| How important is it for governments to prioritize digital literacy programs to help citizens participate in the metaverse? | Percentage |
|---|---|
| Very important | 45.22% |
| Extremely important | 44.35% |
| Moderately important | 5.22% |
| Slightly important | 4.35% |
| Not at all important | 0.87% |
| Total | 100.00% |


Table 3 shows that digital literacy programs are considered very important or extremely important by the vast majority of respondents (89.57%) for the successful implementation and widespread adoption of metaverse-based digital governance in The Gambia. This finding supports Hypothesis 4, which suggests prioritizing digital literacy programs to increase the likelihood of successful implementation of a metaverse-based digital government. Only a small fraction of respondents (0.87%) deems digital literacy programs not important, indicating limited resistance to prioritizing such programs in The Gambia (Table 4).

Table 4. Familiarity with the metaverse and the likelihood to participate in digital governance initiatives

| Measure | Not at all likely | Slightly likely | Moderately likely | Very likely | Completely likely | Total % |
|---|---|---|---|---|---|---|
| Not at all familiar | 0.87% | 2.61% | 3.48% | 11.30% | 4.35% | 22.61% |
| Slightly familiar | 0 | 5.22% | 8.70% | 16.52% | 1.74% | 32.17% |
| Moderately familiar | 0.87% | 0.87% | 6.96% | 13.04% | 9.57% | 31.30% |
| Very familiar | 0 | 0.87% | 0 | 7.83% | 2.61% | 11.30% |
| Extremely familiar | 0 | 0 | 0.87% | 1.74% | 0 | 2.61% |
| Total |  |  |  |  |  | 100% |

The data above examine Hypothesis 5, which suggests that those highly familiar with the metaverse concept are less likely to participate in metaverse-based digital governance in The Gambia. Interestingly, respondents with slight familiarity displayed the highest participation likelihood, at 32.17%. As familiarity increases, there is no consistent decline in participants’ willingness to engage in metaverse-based digital governance. Moderately familiar participants accounted for 22.61%, while those highly familiar represented 10.44%. The data therefore do not conclusively support the hypothesis. Moreover, a considerable percentage (15.65%) of respondents unfamiliar with the metaverse remain likely to participate in metaverse-focused digital governance, suggesting that a lack of familiarity does not inherently hinder interest in digital governance initiatives within the metaverse. In conclusion, the data do not provide strong support for Hypothesis 5: although a slight decrease in participation likelihood is observed as familiarity increases, the trend is neither consistent nor significant enough to confirm the hypothesis.
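Hypotheses 3 and 5 both posit a monotonic association between two ordinal variables (familiarity vs. trust, and familiarity vs. participation likelihood), for which a rank correlation such as Spearman's rho is the usual check. The sketch below implements it with the standard library; the encoded response pairs are hypothetical and are not the study's raw data.

```python
from statistics import fmean

def avg_ranks(values):
    """Rank values (1-based), averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman correlation = Pearson correlation of the rank vectors."""
    rx, ry = avg_ranks(x), avg_ranks(y)
    mx, my = fmean(rx), fmean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical ordinal encodings (1 = lowest, 5 = highest) -- not the study's data.
familiarity   = [1, 1, 2, 2, 2, 3, 3, 4, 4, 5]
participation = [4, 5, 4, 3, 5, 4, 2, 3, 2, 2]
rho = spearman_rho(familiarity, participation)
```

A rho close to −1 would favor the hypothesized negative association; the mixed, inconsistent pattern the study reports would instead yield a value near zero.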


5 Discussion

This study investigated key factors affecting the implementation of metaverse-based digital governance in The Gambia and analyzed the public’s perceptions of and attitudes toward these initiatives. Five hypotheses were assessed, covering various aspects of digital governance in the metaverse. The analysis showed mixed results. Hypothesis 1 found a negative correlation between satisfaction with current digital governance and the public’s inclination to participate in metaverse-driven initiatives. Hypothesis 2 demonstrated that addressing primary barriers, such as poor internet connections and inadequate infrastructure, would significantly contribute to the successful implementation of metaverse-based digital governance in The Gambia; however, addressing additional challenges such as privacy and security concerns is crucial for a comprehensive strategy. Mixed support was found for Hypothesis 3, which suggested a negative relationship between metaverse familiarity and trust in the government’s capacity for effective digital governance. While respondents with slight and moderate familiarity displayed less trust, the trend did not hold for those with high familiarity, implying that other factors may influence trust. Hypothesis 4 was supported, as most respondents recognized the importance of digital literacy programs in facilitating the successful implementation and adoption of metaverse-based digital governance. Finally, the data did not strongly support Hypothesis 5, which claimed that individuals with high metaverse familiarity would be less inclined to participate in metaverse-oriented digital governance.

6 Conclusion

In conclusion, this study provides valuable insights for the development of effective strategies and policies to implement metaverse-based digital governance in The Gambia. Policymakers are advised to focus on the execution of metaverse-driven digital governance projects, prioritize infrastructure improvements, foster trust through transparent communication, emphasize digital literacy programs, and conduct further research. Despite limitations such as the cross-sectional design and the reliance on self-reported data, the study serves as a foundation for future research exploring emerging challenges and technological advances in metaverse-based digital governance using larger, more diverse samples and additional data collection methods. Ultimately, the findings of this study will contribute to shaping the future of digital governance in The Gambia, promoting innovation and enhancing civic engagement.

Acknowledgments. This research is supported in part by Science Foundation Ireland (https://www.sfi.ie/) grant No. SFI 13/RC/2094_P2 to Lero – the Science Foundation Ireland Research Centre for Software.

References

1. Scholl, H.J.: Digital government: looking back and ahead on a fascinating domain of research and practice. Dig. Govern. Res. Pract. 1(1), 1–12 (2020)


2. Dunleavy, P., Margetts, H., Bastow, S., Tinkler, J.: New public management is dead—long live digital-era governance. J. Public Adm. Res. Theory 16(3), 467–494 (2006)
3. Holden, K., Van Klyton, A.: Exploring the tensions and incongruities of Internet governance in Africa. Gov. Inf. Q. 33(4), 736–745 (2016)
4. Breuer, A., et al.: Decentralisation in Togo: the contribution of ICT-based participatory development approaches to strengthening local governance. German Development Institute Discussion Paper (6) (2017)
5. Demuyakor, J.: Ghana’s digitization initiatives: a survey of citizens’ perceptions on the benefits and challenges to the utilization of digital governance services. Int. J. Publ. Social Stud. 6(1), 42–55 (2021)
6. Islam, K.M.: Strategy Paper on e-Government Programme for the Gambia. United Nations Economic Commission for Africa, Addis Ababa (2003)
7. Srinivasan, S., Diepeveen, S., Karekwaivanane, G.: Rethinking publics in Africa in a digital age. J. East. Afr. Stud. 13(1), 2–17 (2019)
8. Lin, F., Fofanah, S.S., Liang, D.: Assessing citizen adoption of e-Government initiatives in Gambia: a validation of the technology acceptance model in information systems success. Gov. Inf. Q. 28(2), 271–279 (2011)
9. Jung, D.: “Assessing citizen adoption of e-government initiatives in Gambia: a validation of the technology acceptance model in information systems success”: a critical article review, with questions to its publishers. Gov. Inf. Q. 36(1), 5–7 (2019)
10. Bojang, M.B., Ceesay, L.B.: Embracing e-government during the Covid-19 pandemic and beyond: insights from the Gambia. Glob. J. Manag. Bus. Res. 20(A13), 33–42 (2020)
11. Bojang, M.B.: Critical factors influencing e-government adoption in the Gambia. Society Sustain. 3(1), 39–51 (2021)
12. Dionisio, J.D.N., Burns III, W.G., Gilbert, R.: 3D virtual worlds and the metaverse: current status and future possibilities. ACM Comput. Surv. (CSUR) 45(3), 1–38 (2013)
13. Ng, D.T.K.: What is the metaverse? Definitions, technologies and the community of inquiry. Aust. J. Educ. Technol. 38(4), 190–205 (2022)
14. Davis, A., Murphy, J., Owens, D., Khazanchi, D., Zigurs, I.: Avatars, people, and virtual worlds: foundations for research in metaverses. J. Assoc. Inf. Syst. 10(2), 1 (2009)
15. Mustafa, B.: Analyzing education based on metaverse technology. Technium Soc. Sci. J. 32, 278 (2022)
16. Contreras, G.S., González, A.H., Fernández, M.I.S., Martínez, C.B., Cepa, J., Escobar, Z.: The importance of the application of the metaverse in education. Mod. Appl. Sci. 16(3), 1–34 (2022)
17. Kye, B., Han, N., Kim, E., Park, Y., Jo, S.: Educational applications of metaverse: possibilities and limitations. J. Educ. Eval. Health Prof. 18, 1149230 (2021)
18. Buhalis, D., Lin, M.S., Leung, D.: Metaverse as a driver for customer experience and value cocreation: implications for hospitality and tourism management and marketing. Int. J. Contemp. Hosp. Manag. 35(2), 701–716 (2022)
19. Zhang, X., et al.: Metaverse for cultural heritages. Electronics 11(22), 3730 (2022)
20. Petrigna, L., Musumeci, G.: The metaverse: a new challenge for the healthcare system: a scoping review. J. Funct. Morphol. Kinesiol. 7(3), 63 (2022)
21. Kawarase, M.A., IV., Anjankar, A.: Dynamics of metaverse and medicine: a review article. Cureus 14(11), 1–6 (2022)
22. Sebastian, S.R., Babu, B.P.: Impact of metaverse in health care: a study from the care giver’s perspective. Int. J. Commun. Med. Public Health 9(12), 4613 (2022)
23. Gadekallu, T.R., et al.: Blockchain for the metaverse: a review. arXiv preprint arXiv:2203.09738 (2022)


24. Mozumder, M.A.I., Sheeraz, M.M., Athar, A., Aich, S., Kim, H.C.: Overview: technology roadmap of the future trend of metaverse based on IoT, blockchain, AI technique, and medical domain metaverse activity. In: 2022 24th International Conference on Advanced Communication Technology (ICACT), pp. 256–261. IEEE (2022)
25. Fernandez, C.B., Hui, P.: Life, the metaverse and everything: an overview of privacy, ethics, and governance in metaverse. arXiv preprint arXiv:2204.01480 (2022)
26. Cheng, R., Wu, N., Chen, S., Han, B.: Will metaverse be NextG internet? Vision, hype, and reality. arXiv preprint arXiv:2201.12894 (2022)
27. Duan, H., Li, J., Fan, S., Lin, Z., Wu, X., Cai, W.: Metaverse for social good: a university campus prototype. In: Proceedings of the 29th ACM International Conference on Multimedia, pp. 153–161 (2021)
28. Erkut, B.: From digital government to digital governance: are we there yet? Sustainability 12(3), 860 (2020)
29. Manda, M.I., Backhouse, J.: Addressing trust, security and privacy concerns in e-government integration, interoperability and information sharing through policy: a case of South Africa (2016)
30. Alpcan, T., Bauckhage, C., Kotsovinos, E.: Towards 3D internet: why, what, and how? In: 2007 International Conference on Cyberworlds (CW 2007), pp. 95–99. IEEE (2007)
31. Kavanagh, S., Luxton-Reilly, A., Wuensche, B., Plimmer, B.: A systematic review of virtual reality in education. Themes Sci. Technol. Educ. 10(2), 85–119 (2017)
32. Boud, A.C., Haniff, D.J., Baber, C., Steiner, S.J.: Virtual reality and augmented reality as a training tool for assembly tasks. In: 1999 IEEE International Conference on Information Visualization (Cat. No. PR00210), pp. 32–36. IEEE (1999)
33. Sudan, R.: How should governments prepare for the metaverse? Digital Diplomacy, Medium (2021). https://medium.com/digital-diplomacy/how-should-governments-prepare-for-the-metaverse-90fd03387a2a
34. Metaverse – The Start of a New Era of Government Services. National Informatics Centre (n.d.). https://www.nic.in/blogs/the-metaverse-the-start-of-a-new-era-of-government-services/. Accessed 18 June 2022
35. Should State and Local Governments Care About the Metaverse? GovTech (2022). https://www.govtech.com/products/should-state-and-local-government-care-about-the-metaverse
36. Allen, K.: A Metaverse of Nations: Why Governments are Making Big Moves into the Metaverse. Acceleration Economy (2022). https://accelerationeconomy.com/metaverse/a-metaverse-of-nations-why-governments-are-making-big-moves-into-the-metaverse/
37. Siddo, B.I.: Understanding the Metaverse – Africa’s role. ACE Times (2022). https://www.zawya.com/en/world/africa/understanding-the-metaverse-africas-role-o15wvrrk
38. Rosenberg, L.: Regulation of the metaverse: a roadmap: the risks and regulatory solutions for large-scale consumer platforms. In: Proceedings of the 6th International Conference on Virtual and Augmented Reality Simulations, pp. 21–26 (2022)
39. Ngila, F.: Meta wants to bring the metaverse to Africans through their cell phones. Quartz (2022). https://qz.com/meta-wants-to-bring-the-metaverse-to-africans-through-t-1849886796
40. Tavakol, M., Dennick, R.: Making sense of Cronbach’s alpha. Int. J. Med. Educ. 2, 53 (2011)

Identification of the Personal Skills Using Games

Adriana Peña Pérez Negrón1, Mirna Muñoz2(B), and David Bonilla Carranza1

1 Universidad de Guadalajara, CUCEI, Blvd. Marcelino García Barragán 1421, C.P. 44430 Guadalajara, Jal., Mexico
[email protected], [email protected]
2 CIMAT Zacatecas, C. Lasec y And. Galileo Galilei, Mn 3, L 7 Quantum Ciudad del Conocimiento, C.P. 98160 Zacatecas, Zac., Mexico
[email protected]

Abstract. Software development is typically a team activity due to its complexity. However, integrating a new team can be challenging because each team member needs to adapt to working with the others. In this context, the study of personal or soft skills has become relevant because they are essential for teamwork and individual success in professional life, although evaluating them is challenging. This paper presents a proposal for assessing soft skills, specifically flexibility to change, using games as an alternative. Games absorb participants, making them forget they are being observed, producing an environment in which more natural performance is expected while recreating situations similar to those arising during software development projects. Results from a case study focused on evaluating flexibility to change are also presented. Flexibility to change is a soft skill that can be evaluated at the individual level and contributes strongly to the success of teamwork.

Keywords: games · soft skills · teamwork · flexibility to change · assessment instruments

1 Introduction

Personal or non-technical competencies are closely related to the way people behave. They include abilities, attitudes, habits, and personality traits that help people perform better [1]. However, compared to technical skills, which are measured based on performance in the field of interest, personal skills represent a challenge for formalization and measurement [2]. Within the literature, at the individual level, personal skills include: (1) psychological aspects such as personality traits, characteristics, mental patterns, judgment, prejudices, or motivations; (2) cognitive aspects such as ease of learning or learning preferences, abstract thinking, knowledge sharing, or training; and (3) management aspects such as planning or estimating the effort to perform a task [3]. At the team level, personal skills are known to influence how the team members relate to each other and to the team as a whole [4]. Moreover, soft skills play a main role in team performance.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 84–95, 2023. https://doi.org/10.1007/978-3-031-42307-9_7


The most common instruments for studying personal and behavioral characteristics are those based on self-reports, such as interviews and questionnaires. However, self-reporting is susceptible to bias due to social desirability (the tendency to present a favorable or expected image instead of the actual one), memory, and motivation [5]. Alternatively, situational assessment represents a non-invasive approach to observation and analysis. Here, games offer advantages such as attractive, immersive, computer-generated scenarios: defined spaces in which people can interact and make decisions without suffering consequences, characteristics that, we assume, encourage players to be authentic [6]. In addition, in a game the situation is controlled identically for each participant. In this context, through the analysis of playing style, the video game industry put the study of users’ traits and characteristics on the table for the benefit of the gaming experience. The game experience refers to the players’ interaction with the virtual space and what they can do in it, including the processes and results of such interactions. In a video game, all the actions and reactions of the player represent valuable information for understanding their behavior within the virtual setting; this analysis has been proposed as a means to assess the psychological characteristics of video game players [6, 7]. Therefore, observing players’ behavior is proposed here as a means for the analysis of personal skills. In addition, a game is a medium in which situations can be represented so that the ability to be analyzed actually occurs, and data can be collected in a non-invasive way. Even though games facilitate an appropriate configuration for understanding individual and collective behavioral characteristics, their use as a psychological assessment tool is still at an initial stage.
Furthermore, it is currently widespread to use a psychometric test to validate or corroborate the analysis of the data obtained when the medium of behavior analysis is a video game [8–10]. This article presents the results of a case study in which games were used to analyze flexibility to change according to a set of established needs. Flexibility, or adaptation to change, was selected as an important soft skill for software developers, with a direct impact on team performance.

2 Related Works

In this section, we present several proposals from recent literature that highlight the creation and use of video games for the study of behavior.

Alloza et al. [11] argue that commercial video games are a helpful tool both for the identification of personal skills and for training. Their process starts by identifying open-source video games, to guarantee the possibility of making modifications so that indicators can be included for the evaluation of personal skills. For this purpose, the authors designed a telemetry algorithm based on a literature review. From the selected video games, it is decided which personal skill can be measured. They use a psychometric pre-test coupled with facial recognition to predict the players’ affective states. At the end of the sessions, the players answer the psychometric instrument again to compare results and establish possible improvements in the selected personal skills.

Mayer [3] aimed to demonstrate the validity of soft-skill evaluation and training through video games. They used a commercial, three-dimensional, multiplayer video


game for team training and psychometric testing. Their process includes collecting data through pre-game and post-game surveys, both instruments assessing team characteristics at the individual level through video game records; they consider both team and personal evaluations.

McCord et al. [12] conducted a study to compare traditional personality analysis tools with a similar tool embedded in a video game. Their design is a narrative scenario with options to modify its course. The game gives the player the impression that progress depends on the chosen option, but the options are only related to some personality traits.

Pouezevara et al. [13] described a process for developing and testing a problem-solving video game. They used evidence-centered design (ECD) as a framework for the design of tasks related to the data generated by the video game. In this approach, the ECD uses domain analysis, domain modeling, and conceptual assessment to break down the skill into measurable parts.

Ammannato and Chiesi [9] conducted a study using information about how players act and react in a competitive video game to assess personality traits, applying machine learning to extract data from game logs. The authors selected a model of personality and personal skills (i.e., the honesty–humility and emotionality traits) related to a multi-user game, assuming that a video game includes the pleasure of interacting with others and uncertainty in the results.

Haizel et al. [14] presented a study to establish whether a video game can predict personality and to compare this evaluation with traditional methods. They developed a role-playing video game (RPG) of the “walk and solve” type, with freedom of choice. Their proposal includes understanding the video game habits of the participants, which can be seen as part of the video gamer’s skills and as a way to understand certain tendencies to consider during the psychometric assessment.

Peña Pérez Negrón et al.
[15] presented a study to evaluate interactive styles, that is, consistent behavior in situations with the same arrangements, associated with software development teams. The approach consists of selecting a contingency format according to the interactive style to be analyzed. Next, the expected values of the team members are established. From these steps, the game is selected. Finally, they describe the data that will define the evaluation of the participants.

Based on these studies, a process to select games for the study of software developers’ soft skills was proposed; it is briefly described next.

3 Process to Select or Develop Games for Analyzing Soft Skills

The process is composed of three phases: preproduction, production, and post-production. Table 1 summarizes the phases, with a description and the activities of each. A complete description of the method is provided in [16].

Table 1. Game selection or development process

| Phase | Phase description | Activities | Outputs |
|---|---|---|---|
| Preproduction | Aims to carry out the conception and design of the game | CONCEPTION: identify the personal skill to be evaluated; establish whether the personal skill is intrapersonal or interpersonal; define the level of measures; identify the audience; establish the situation or situations. DESIGN: determine the framework for task design; set task goals; task breakdown; review skills to perform the tasks of the game; define the game elements | Game Design Document (GDD) |
| Production | Aims to select a game or to develop the video game | Select the type of game to use; game settings; review game development; validations and verifications; set the data to be collected from the game or video game | Game or video game |
| Post-production | Aims to validate the results of the personal skills analysis through the collected data | Recognized psychological instrument; judgment of one or more experts; player self-evaluation; data collection and application of AI techniques | Game or video game validation |


4 Games to Evaluate Flexibility to Change

An essential fact is that a team’s performance depends on how each member responds to the requirements and circumstances, and these requirements and circumstances will vary during software development [17]. In this section we present a case study analyzing how flexible to change individuals are, understanding flexibility as changes in responses when facing signaled, non-signaled, or unspecified contingencies [17]. In software development, changes often bring challenges and obstacles; being flexible to them enables adjusting strategies and finding alternative solutions when faced with unexpected circumstances. It also helps team members overcome difficult situations with resilience and creativity. Therefore, flexibility to change was selected because it allows evaluating how individuals face contingencies and their multiple independent responses. In this context, software development generates different situations in each phase, e.g., changes in requirements, technology, team composition, schedule and budget, quality assurance, testing, and continuous improvement, among others, that can generate risks for the development team. Next, the case study performed to evaluate flexibility to change is presented.

4.1 Case Study Design and Performance

Case Study Objective, Research Question and Sample. The case study aims, on the one hand, to identify the viability of the proposed process and, on the other, to determine the viability of the selected games for achieving our target. The research question of this case study is: how flexible to change are individuals? Based on this question, we established two goals: 1) validate whether the process proposed by the authors of this paper is viable for selecting the games, and 2) validate whether the games allow us to evaluate the level of flexibility that participants have.
In the case study, a sample of thirty-two individuals was randomly assigned to eight teams. They played both games at a well-illuminated table.

Methods and Materials

• Games selection. Both games were selected using the process described in Sect. 3.
• Participants. Thirty-two students voluntarily responded to a call through Facebook: twenty-seven men and five women, between 20 and 39 years old. All of them are undergraduate students of Computer Engineering or Informatics Engineering, all are computer programmers, and 24 work in the industry.
• Materials and experimental situation. We used two games; the rule changes are explained next:
  • UNO Spin™, a variation of the popular UNO game. UNO Spin follows the same rules as the classic UNO but includes some special spin cards. When a spin card is thrown into the pot, a wheel with several tasks is spun before moving on to the next player; these tasks can change the game’s rules.
  • Math Fluxx™, a game about numbers. Players use positive integers to achieve a mathematical goal. The game has meta-rules that allow new rules, giving the player a second, more challenging way to win.
  • Both games have incremental requirements, encouraging the persistence of the players. Most importantly, both involve random chances of the rules changing.
• Each participant signed a document explaining that their abilities were not being evaluated and that the data collected from the case study would be used anonymously, for research purposes only.
• Evaluation level scale. The evaluation identifies the degree to which a participant faces contingencies according to their flexibility to change. To evaluate it, we defined a five-level scale, from −2 to 2, as follows:
  • −2: High resistance to change. A participant who hardly adapts to changes.
  • −1: Resistance to change. A participant who has a certain degree of difficulty accepting changes.
  • 0: Neutral. A participant who is neither strongly in favor of nor opposed to changes, maintaining a balanced perspective.
  • 1: Tendency to change. A participant who is receptive to different perspectives and willing to consider alternative viewpoints or approaches.
  • 2: High tendency to change. A participant who looks for changes even when they might not be required.

4.2 Case Study Performance


When one spin card is thrown in the pot, a wheel with several tasks is turned before moving on to the next player; these tasks can change the game’s rules. • Math Fluxx™ is a game about numbers. Players use positive integers numbers to achieve a mathematical goal. In the game are meta-rules that allow new rules that give the player a second way to win but are more challenging. • Both games have incremental requirements, encouraging the persistence of the players. But most important, also both have random chances of changing the rules. • Each participant signed a document explaining that their abilities were not evaluated and that the data collected from the case study was used anonymously for research purposes only. • Evaluation level scale. The evaluation performed to identify the degree to which a participant faces contingencies according to their flexibility to change. To evaluate it, we defined a five levels scale, from −1 to 2 as follows: • −1: High resistance to change. Participant who hardly adapts changes. • −2: Resistance to change. Participant who has a certain grade of difficulty to accept changes. • 0: Neutral. Participant who being neither strongly in favor nor opposed to changes, maintaining a balanced perspective. • 1: Tendency to change. Participant who receptive to different perspectives and willing to consider alternative viewpoints or approaches. • 2: High tendency to change. Participant who looks forward for changes even when they might not be required. 4.2 Case Study Performance After signing the consent form, they fulfilled a format with personal data and information about their appreciation regarding how good they think their cards are according to the game in progress and if they want the game conditions to change. In this context, good cards mean that the player will probably not want to change the game conditions; on the contrary, with a bad set of cards, the player will probably want to change the conditions. 
After registering their personal data, there were 20 game-section entries on the form, and the participants were instructed to ask for more forms if needed (no one asked for another one). The form used by the participants was like the one shown in Fig. 1. Participants were then instructed as follows:
• "You will first play UNO Spin for 30 min. It has the same rules as the classic UNO, but it has spin cards; when someone puts one of these cards in the pot, the next player has to spin the wheel. The spin might give an advantage or a disadvantage. For example, discard cards with the same number or color, or draw until you get a blue card."
• They were asked if they knew how to play UNO. If someone said no, the rules were briefly explained.

A. Peña Pérez Negrón et al.

Evaluation questionnaire
General data: Given name ____ Surname ____ Age ____ Gender ____ Career you are studying ____ Current semester ____ Game ____
Game section — Rate your game: EXCELLENT – GOOD – AVERAGE – BAD – LOUSY | I want the game to change: NO – I AM NOT SURE – YES
(repeated for each game section)

Fig. 1. Evaluation questionnaire with general data and the player's evaluation for each changing situation

Then the instructions followed:
• "Please fill out the EQ each time a spin card is thrown in the pot, before the wheel is spun. Any of you can stop playing at any time. So let us play a test round."
• After 30 min, they were instructed as follows:
• "You will now play the Math Fluxx game for 30 min; it starts with some basic rules, and these can change during the game through some cards. Before drawing one of the rule-changing cards, all participants must fill out an EQ. Then, whenever you want, you can stop playing. First, you will play a test round."
• After the test round, a timer was set, and after 30 min, they were told to stop playing.

4.3 Analysis of Data Collected

The different options a player could choose when the game conditions were about to change are presented in Table 2. The first column holds a consecutive number; the second column holds the player's evaluation of their game before the changing condition; the third column holds the player's answer about whether they wanted the game conditions to change. The fourth column holds the assumption made to assign a value according to the expected player answer. For example, with an excellent or good game, the player is expected to want to keep the conditions in order to preserve their excellent or good game. Still, if the answer with an excellent or good game is "I AM NOT SURE", then the player might have a tendency to change even when the new conditions might not be as favorable as the current ones; a player with an excellent or good game who wants the conditions to change is probably a person with a high tendency to look for a change even when it might not lead to a favorable condition.


To assign an evaluation, the following rules were used: a value of 0 (zero) was given when the player gave the expected answer, a value of 5 (five) with a moderate tendency to change, and a value of 10 (ten) with a high propensity to change. On the contrary, if the player showed resistance to change, that is, if the participant decided that they did not want to change the game conditions even when they considered their cards bad or lousy, it is assumed that this is a person who does not like changes; a value of −5 (minus five) was given when there was resistance to change and −10 (minus ten) when the resistance was considered high.

Table 2. Values for the possible different player evaluations

No.  Game evaluation  Want to change?  Assumption                 Value
1    EXCELLENT        NO               Neutral                      0
2    EXCELLENT        I AM NOT SURE    Tendency to change           5
3    EXCELLENT        YES              High tendency to change     10
4    GOOD             NO               Neutral                      0
5    GOOD             I AM NOT SURE    Tendency to change           5
6    GOOD             YES              High tendency to change     10
7    AVERAGE          NO               Resistance to change        −5
8    AVERAGE          I AM NOT SURE    Neutral                      0
9    AVERAGE          YES              Tendency to change           5
10   BAD              NO               High resistance to change  −10
11   BAD              I AM NOT SURE    Resistance to change        −5
12   BAD              YES              Neutral                      0
13   LOUSY            NO               High resistance to change  −10
14   LOUSY            I AM NOT SURE    Resistance to change        −5
15   LOUSY            YES              Neutral                      0
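The value assignment in Table 2 amounts to a simple lookup from the pair (game evaluation, wants-to-change answer) to a score. The following sketch is an illustrative reconstruction, not the authors' implementation; the function and label names are our own:

```python
# Scores taken from Table 2: (game evaluation, wants change) -> value.
SCORES = {
    ("EXCELLENT", "NO"): 0,  ("EXCELLENT", "I AM NOT SURE"): 5,  ("EXCELLENT", "YES"): 10,
    ("GOOD", "NO"): 0,       ("GOOD", "I AM NOT SURE"): 5,       ("GOOD", "YES"): 10,
    ("AVERAGE", "NO"): -5,   ("AVERAGE", "I AM NOT SURE"): 0,    ("AVERAGE", "YES"): 5,
    ("BAD", "NO"): -10,      ("BAD", "I AM NOT SURE"): -5,       ("BAD", "YES"): 0,
    ("LOUSY", "NO"): -10,    ("LOUSY", "I AM NOT SURE"): -5,     ("LOUSY", "YES"): 0,
}

def score(evaluation: str, wants_change: str) -> int:
    """Return the flexibility-to-change value for one questionnaire answer."""
    return SCORES[(evaluation.upper(), wants_change.upper())]
```

For example, a player who rates their game BAD but does not want the conditions to change receives −10 (high resistance to change).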

4.4 Results

It is essential to note that not all players had the same number of answers, because the changing conditions appear randomly in the game. The results are presented in Table 3. The first column holds the participant ID. The second column, UNO, holds the number of game evaluations in UNO Spin; for instance, participant P01 evaluated their game ten times and decided whether they wanted the game conditions to change. The third column (UAV) holds the average obtained in UNO Spin. The fourth column holds the number of times they were asked about their game conditions in Math Fluxx (MF); the fifth column holds the average of the evaluations obtained by the participant in the Math Fluxx answers (MAV); and the last column holds the average of the two game evaluations (EV).


Table 3. Results from the gamified evaluation

Participant  UNO  UAV    MF  MAV    EV
P01          10    0.5    7   5.0    3
P02          10    0.0    7   0.0    0
P03          10   −4.5    7  −2.9   −4
P04          10   −4.0    7  −0.7   −2
P05          21    0.5    4   0.0    0
P06          21   −0.5    4   1.3    0
P07          21    1.7    4   2.5    2
P08          21    4.0    4   5.0    5
P09           4   −2.5   11   2.9    0
P10           4    0.0   11  −0.4    0
P11           4    0.0   11  −3.8   −2
P12           4    1.3   11   0.4    1
P13           6    1.7    5   0.0    1
P14           6    4.2    5   2.5    3
P15           6    1.7    5   4.0    3
P16           6    0.8    5   1.0    1
P17           4    0.0    9   0.0    0
P18           4    0.0    9  −5.0   −3
P19           4    0.0    9   1.1    1
P20           4   −5.0    9  −2.2   −4
P21           7   −3.6    1   0.0   −2
P22           7    2.1    1  10.0    6
P23           7    5.0    1 −10.0   −3
P24           7   −0.7    1   5.0    2
P25           8    1.9   10   2.0    2
P26           8    0.0   10   0.0    0
P27           8    3.1   10  −2.0    1
P28           8    0.0   10  −1.5   −1
P29           9    0.0    6   1.7    1
P30           9    0.0    6   2.5    1
P31           9   −1.7    6  −2.5   −2
P32           9    0.0    6   0.0    0
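From the numbers in Table 3, the EV column appears to be the mean of UAV and MAV rounded to the nearest integer, with halves rounded away from zero (e.g., P18: (0.0 + (−5.0))/2 = −2.5 → −3). A minimal sketch under that assumption (the paper does not state the rounding rule, and the function names are ours), together with a mapping onto the profile ranges defined in Sect. 4.5:

```python
import math

def overall_evaluation(uav: float, mav: float) -> int:
    """EV: mean of the UNO Spin (UAV) and Math Fluxx (MAV) averages,
    rounded half away from zero (assumption inferred from Table 3)."""
    mean = (uav + mav) / 2
    return int(math.copysign(math.floor(abs(mean) + 0.5), mean))

def profile(ev: int) -> str:
    """Map EV to the profile ranges given in Sect. 4.5."""
    if ev <= -4:
        return "high resistance to change"
    if ev <= -2:
        return "resistance to change"
    if ev <= 1:
        return "neutral"
    if ev <= 3:
        return "tendency to change"
    return "high tendency to change"
```

For instance, P22 (UAV 2.1, MAV 10.0) yields EV 6, a high tendency to change, while P20 (UAV −5.0, MAV −2.2) yields EV −4, a high resistance to change.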


Following the values assigned to the assumptions, a range of values was defined to provide a profile for the participants: from −1 to 1, neutral; from −2 to −3, resistance to change; −4 or less, high resistance to change; from 2 to 3, tendency to change; 4 or more, high tendency to change.

4.5 Discussion

Because the requirements on a team's performance keep changing, we considered it of interest to evaluate flexibility to change, which shows how team members adjust to such changes. Besides, there is validity in observing a person in a natural environment [18]. Based on the process proposed by the authors to select or develop a game, it was possible to identify proper games for the selected soft skill. Moreover, we were able to calculate a degree of flexibility to change, giving participants an evaluation based on their performance in the game and how they reacted to changes. However, it is important to mention one crucial limitation of this case study: the time participants had to play each game differed among them, a fact that might have affected the obtained results.

5 Conclusion

Personal skills still represent an evaluation challenge, even though they are critical factors for success in both personal and professional life. After reviewing the related work, different authors present initiatives with methodologies for evaluating personal skills using games or video games, owing to their advantages over other alternatives for measuring personal characteristics, such as questionnaires. Specifically, games allow recreating situations in a situational context in which these skills must be exercised to resolve tasks that are not always related to the person's daily activities. In this context, this paper uses two games to evaluate flexibility to change. After analyzing the results of this case study, and having covered the established goals, we can conclude:
• Related to the first goal, the process proposed by the authors of this paper is viable for selecting the games required to measure a personal skill. In this case, it helped to select games focused on flexibility to change (UNO Spin™ and Math Fluxx™). Moreover, some features were identified for change in the next version of the process, such as removing the personal self-assessment. Finally, we noticed that a participant could assert something entirely different from how the participant actually performs within the game. Therefore, another way to contrast the results obtained from the game should be defined.
• Related to the second goal, the games allow us to evaluate the level of flexibility that participants have. However, it will be necessary to take care of some aspects that can influence the results, such as the time participants must play each game. By performing this case study, we confirm that the self-assessments do not work as expected.


It is important to highlight that we cannot generalize our results because of the sample size. This fact drives our future work: performing this type of case study in industry and broadening the sample.

References

1. García, I., Pacheco, C., Méndez, F., Calvo-Manzano, J.A.: The effects of game-based learning in the acquisition of "soft skills" on undergraduate software engineering courses: a systematic literature review. Comput. Appl. Eng. Educ. 28(5), 1327–1354 (2020). https://doi.org/10.1002/cae.22304
2. Muzio, E., Fisher, D.J., Thomas, E.R., Peters, V.: Soft skills quantification (SSQ) for project manager competencies. Proj. Manag. J. 38(2), 30–38 (2007). https://doi.org/10.1177/875697280703800204
3. Mayer, I.: Assessment of teams in a digital game environment. Simul. Gaming 49(6), 602–619 (2018). https://doi.org/10.1177/1046878118770831
4. Milczarski, P., Podlaski, K., Hłobaż, A., Dowdall, S., Stawska, Z., O'Reilly, D.: Soft skills development in computer science students via multinational and multidisciplinary gamedev project. In: Proceedings of the 52nd ACM Technical Symposium on Computer Science Education (SIGCSE 2021), Virtual Event, USA, pp. 583–589. ACM (2021). https://doi.org/10.1145/3408877.3432522
5. Desurvire, H., El-Nasr, M.S.: Methods for game user research: studying player behavior to enhance game design. IEEE Comput. Graphics Appl. 33(4), 82–87 (2013). https://doi.org/10.1109/MCG.2013.61
6. Ventura, M., Shute, V., Kim, Y.J.: Video gameplay, personality and academic performance. Comput. Educ. 58(4), 1260–1266 (2012). https://doi.org/10.1016/j.compedu.2011.11.022
7. Ontanon, S., Zhu, J.: The personalization paradox: the conflict between accurate user models and personalized adaptive systems. In: 26th International Conference on Intelligent User Interfaces (IUI '21), pp. 64–66. ACM, College Station (2021). https://doi.org/10.1145/3397482.3450734
8. Aggarwal, S., Saluja, S., Gambhir, V., Gupta, S., Satia, S.P.S.: Predicting likelihood of psychological disorders in PlayerUnknown's Battlegrounds (PUBG) players from Asian countries using supervised machine learning. Addict. Behav. 101, 106132 (2020). https://doi.org/10.1016/j.addbeh.2019.106132
9. Ammannato, G., Chiesi, F.: Playing with networks: using video games as a psychological assessment tool. Eur. J. Psychol. Assess. 36(6), 973–980 (2020). https://doi.org/10.1027/1015-5759/a000608
10. Guardiola, E., Natkin, S.: A game design methodology for generating a psychological profile of players. In: Loh, C.S., Sheng, Y., Ifenthaler, D. (eds.) Serious Games Analytics. AGL, pp. 363–380. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-05834-4_16
11. Alloza, S., Escribano, F., Delgado, S., Corneanu, C., Escalera, S.: XBadges. Identifying and training soft skills with commercial video games (2017). https://doi.org/10.48550/ARXIV.1707.00863
12. McCord, J.-L., Harman, J.L., Purl, J.: Game-like personality testing: an emerging mode of personality assessment. Pers. Individ. Differ. 143, 95–102 (2019). https://doi.org/10.1016/j.paid.2019.02.017
13. Pouezevara, S., Powers, S., Moore, G., Strigel, C., McKnight, K.: Assessing soft skills in youth through digital games. In: 12th Annual International Conference of Education, Research and Innovation, Seville, Spain, pp. 3057–3066 (2019). https://doi.org/10.21125/iceri.2019.0774


14. Haizel, P., Vernanda, G., Wawolangi, K.A., Hanafiah, N.: Personality assessment video game based on the five-factor model. Procedia Comput. Sci. 179, 566–573 (2021). https://doi.org/10.1016/j.procs.2021.01.041
15. Negrón, A.P.P., Muñoz, M., Carranza, D.B., Rangel, N.: Towards the evaluation of relevant interaction styles for software developers. In: Mejia, J., Muñoz, M., Rocha, Á., Avila-George, H., Martínez-Aguilar, G.M. (eds.) CIMPS 2021. AISC, vol. 1416, pp. 137–149. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-89909-7_11
16. Negrón, A.P.P., Carranza, D.B., Muñoz, M., Aguilar, R.: Diseño de videojuegos para el análisis de habilidades personales. RISTI (2023)
17. Rangel, N., Torres, C., Peña, A., Muñoz, M., Mejia, J., Hernández, L.: Team members' interactive styles involved in the software development process. In: Stolfa, J., Stolfa, S., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2017. CCIS, vol. 748, pp. 675–685. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64218-5_56
18. Ahn, S.J., Fox, J., Bailenson, J.N.: Avatars. In: Leadership in Science and Technology: A Reference Handbook, vol. 2, pp. 695–702 (2012)

Identifying Key Factors to Distinguish Artificial and Human Avatars in the Metaverse: Insights from Software Practitioners

Osman Tahsin Berktaş1, Murat Yılmaz1,2(B), and Paul Clarke3,4

1 Informatics Institute, Gazi University, Ankara, Turkey
{osmantahsin.berktas,my}@gazi.edu.tr
2 Faculty of Engineering, Department of Computer Engineering, Gazi University, Ankara, Turkey
3 School of Computing, Dublin City University, Dublin, Ireland
[email protected]
4 Lero, the Science Foundation Ireland Research Center for Software, Limerick, Ireland

Abstract. The Metaverse comprises a network of interconnected 3D virtual worlds, poised to become the primary gateway for future online experiences. These experiences hinge upon the use of avatars, participants’ virtual counterparts capable of exhibiting human-like non-verbal behaviors, such as gestures, walking, dancing, and social interaction. Discerning between human and artificial avatars becomes crucial as the concept gains prominence. Advances in artificial intelligence have facilitated the creation of virtual human-like entities, underscoring the importance of distinguishing between virtual agents and human characters. This paper investigates the factors differentiating human and virtual participants within the Metaverse environment. A semi-structured interview approach was employed, with data collected from software practitioners (N = 10). Our preliminary findings indicate that response speed, adaptability to unforeseen events, and recurring scenarios play significant roles in determining whether an entity in the virtual world is a human or an intelligent agent. Keywords: Metaverse · Avatars · Artificial Agents · Virtual Characters

1 Introduction

In recent years, the notion of the Metaverse has gained significant attention as an interconnected network of 3D virtual environments, which has the potential to serve as the primary gateway for most real-time human experiences in the future. As a collective imaginary shared space, the Metaverse integrates technologies such as virtual environments and augmented reality to form the next-generation Internet, i.e., forging a novel, inclusive social network that transforms human communities [1]. The Metaverse holds immense potential to transform various aspects of society, revolutionizing how people experience the digital world by redefining the boundaries between physical and virtual realities. This unprecedented shift encompasses areas such

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 96–108, 2023. https://doi.org/10.1007/978-3-031-42307-9_8


as communication, education, entertainment, and work, attracting significant interest from the software industry and academia [2]. This shift presents impressive opportunities for individuals, businesses, and organizations to interact, collaborate, and engage in an immersive digital ecosystem with various activities. By seamlessly integrating elements from gaming, social media, e-commerce, and other digital platforms, the future Metaverse has the potential to provide a dynamic and interconnected virtual environment that facilitates innovation, creativity, and collaboration across various domains. However, as the development of the Metaverse accelerates, it becomes increasingly important to address emerging challenges and opportunities, such as maintaining privacy and security and tackling ethical concerns related to virtual interactions and artificial entities. In particular, our ability to discern between human and artificial avatars becomes increasingly crucial. Recent advances in artificial intelligence have enabled the creation of highly realistic virtual human-like entities, highlighting the importance of distinguishing between virtual agents and human characters in this emerging landscape. In this context, the growing need to differentiate human participants from virtual agents is essential to creating a safe, authentic, and engaging environment for all users. By understanding and addressing the factors that distinguish human users from artificial counterparts, researchers and practitioners can contribute to the responsible and sustainable development of the Metaverse, ultimately shaping its impact on society, culture, and the economy in the age of spiritual machines. This paper explores factors that are likely to identify artificial avatars in the Metaverse, which might have a potential impact on society, culture, and the economy in the digital age.
The primary research objective of this paper is to explore ways to differentiate between virtual and human participants in the virtual environment, to enhance the understanding of their distinct characteristics and behaviors within this immersive digital realm. The goal is to facilitate consistent and rigorous evaluations of human-computer interactions and the behavior dynamics of virtual agents and human users in virtual settings. Our approach will contribute to a comprehensive understanding of the Metaverse environment and its implications for human and artificial interactions. The remaining part of the paper proceeds as follows: Sect. 2 reviews the literature, with Sect. 3 presenting the methodology employed for this study. Section 4 details the research findings. Finally, the last section concludes and suggests directions for further research.

2 Background

By harnessing a variety of advanced technologies, the Metaverse is a concept that enables multifaceted applications across numerous domains, including production, culture, social interaction, entertainment, and the economy [4]. In particular, the concept promotes the idea of an inventive, inclusive 3D social network that seamlessly interweaves with human communities, facilitating continuous sharing and collaboration. This shared, collective living space melds virtual 3D environments, augmented reality, and blockchain technologies to create an immersive digital ecosystem with a virtual economy [3]. The Metaverse concept comprises five essential components [2]: Network Infrastructure, Cyber-Reality Interface, Data Management and Applications, Authentication


System, and Content Generation. The first component, Network Infrastructure, employs high-speed end-to-end connections and IoT technology to facilitate rich content rendering and powerful interactive functionalities, with 5G networks being a core element. The second component, the Cyber-Reality Interface, enables seamless linking and transitioning between virtual and real worlds through reality, smart devices, and human-computer interaction. The third component, Data Management and Applications, leverages cloud technology, big data operations, and edge computing for large-scale data acquisition, analysis, storage, and transmission. The fourth component, the Authentication System, relies on blockchain technology to ensure transparency, stability, and reliability, addressing challenges such as data sovereignty, identity authentication, and value attribution and circulation through biometric solutions. The final component, Content Generation, maps the physical world to cyberspace using digital twins and integrates content with artificial intelligence-driven deep learning and cognitive learning methods for content editing and growth scenarios, fostering a rich, comprehensive, and self-contained digital world (ref. Fig. 1). Metaverse environments might not utilize the same methods employed by other applications, such as games, for content conditioning and preparation. While digital game developers can collaborate closely with content designers and process content through a conditioning pipeline to ensure real-time response speeds, virtual worlds must accommodate arbitrary user-generated content. Moreover, game content is typically delivered to players before the application runs, whereas Metaverse users expect to access and use new 3D models instantly by uploading them to cloud world servers. Imposing strict constraints on unoptimized data in the Metaverse could substantially reduce the availability of content and diminish the system’s usability. 
Users should not be burdened with the complex, technical details of 3D content [3].

Fig. 1. The proposed components of the Metaverse [2].


Mar Gonzales-Franco, a researcher at Microsoft, highlighted some challenges and deficiencies related to the Metaverse during her presentation at the ISMAR 2021 Symposium [4]. These concerns, some of which have since been addressed, include:
• Locomotion: How can participants simultaneously navigate both real and virtual environments?
• Technological latency: How can latency or delay issues be resolved?
• Equipment affordability: How will users obtain costly devices?
• Characterization: Should this be achieved through animations or avatars?
• Sound direction: How can sound direction determination issues be addressed, given that sound in virtual spaces does not behave like waves in real life?
• Cross-content interaction: How can problems, such as being unable to read emails while working in the virtual environment, be solved?
• Battery life: How can the battery challenges of auxiliary devices be overcome?
• Isolation: How can the potential for isolation from real life be mitigated?
• User adaptation: How can new technology adoption problems be resolved for users?
Gonzales-Franco [4] indicates that some challenges have been addressed. For instance, wearable devices and cost-effective products have become accessible to users, and adaptation issues are gradually being overcome thanks to investments in the sector. Moreover, the Metaverse has evolved into a social environment with its own ecosystem. As the concept gains increasing attention, it becomes crucial to differentiate between human and artificial avatars to ensure an authentic and immersive experience. This is particularly important because the Metaverse experience relies on avatars that can move freely in a virtual world, exhibiting human-like non-verbal behaviors such as walking, dancing, and interacting with others. Another problem that has been mitigated is the characterization issue, where avatars are created using animated images of real people instead of generic animations.
Human or artificial avatars are defined as avatars that are co-located with the user’s body and viewed from the user’s perspective within an immersive virtual environment [5]. There is growing scientific evidence highlighting the significance of artificial humans in self-avatars. Apart from the apparent necessity of having a virtual representation to interact with others in social VR settings, artificial humans have been shown to enhance users’ cognitive abilities [6] and improve self-recognition and identification in virtual meetings [7]. Nonetheless, the cognitive load impacts of avatars remain poorly understood and may influence outcomes [8, 9]. Lush [10] has expressed concerns that the illusion of presence in a virtual environment could be a response to imaginative suggestion, suggesting that the power of suggestion itself may drive it. A standardized measurement for differentiating characters is essential for comparing and replicating experiments across the field of character differentiation. Participants are unique and can have significantly different experiences and responses within the same virtual setup. For instance, avatars have been shown to affect distance prediction [11, 12] and object size estimation [13, 14], which may further influence distance perception [15]. A standardized differentiation questionnaire sensitive enough to detect character differences could assist researchers in better understanding and interpreting the effects of artificial humans. In every experiment, research using questionnaires should measure differentiation to account for and comprehend intrinsic variables that might impact


results. Differentiating characters presents challenges, given that virtual bodies can take any form. Recent advances in artificial intelligence have made it possible to create virtual human-like entities, further emphasizing the need to distinguish between virtual agents and human characters.

3 Methodology The semi-structured interview is a widely used research method in qualitative data analysis [29]. It combines elements of structured and unstructured interviews, offering advantages of each approach. While a structured interview features a strict set of questions without room for deviation, an unstructured interview is informal and free-flowing, resembling a casual conversation. A semi-structured interview falls between these two extremes, with loosely structured questions that give respondents more opportunity to express themselves fully [30]. In a semi-structured interview, the interviewer generally explores a particular theme but remains open to different perspectives and allows for the emergence of new ideas based on the respondent’s input [29]. This approach provides benefits for both interviewers and respondents. For interviewers, the structured aspect offers a general overview of respondents, enabling objective comparisons that are valuable for qualitative research or job interviews. For respondents, the unstructured part affords more freedom to express thoughts, typically reducing stress during the interview analysis [30]. By fostering a warm and friendly atmosphere, the interviewer can demonstrate better communication skills and establish a personal connection with the respondent. The data collected during the semi-structured interview undergo content and thematic analysis, with coding and categorization applied to the data. In this study, participants were chosen from individuals with a minimum of 5 years of experience in software development. These individuals were selected from those who presented as valuable contributors to our research, based on demonstrable experience and interest in the study domain. Table 1 below presents information on why these individuals were chosen. The semi-structured interview process with participants comprises two stages. 
In the first stage, face-to-face interviews address the differentiation between virtual and real human characters in the Metaverse environment across five topics. In the second stage, participants are asked to evaluate the research study itself, separately from the study's subject matter. This approach allows for an examination of the study's method and content to identify potential areas for improvement. To achieve the stated objectives, this study employed the grounded theory method [20], an approach frequently used across various disciplines for many years. The method offers flexible research and analysis capabilities, particularly in addressing people's problems. By collecting data from individuals and supplementing it with literature sources, the grounded theory method can contribute to solutions such as overcoming difficulties, managing risks, and improving working life in diverse research fields, ranging from manufacturing [21, 22] to agriculture [23], and from software development [24, 25] to educational institutions [26]. The method's applicability to resolving problems in the research area is evident from the overview of recent studies in the literature. Grounded theory is also referred to as an embedded theory method in some sources. In summary, this


Table 1. Participant Profiles.

ID       Role                   Experience  Education
INT. 1   Software Team Leader   13 years    B.Sc
INT. 2   Software Team Leader   20 years    B.Sc
INT. 3   Software Developer     5 years     M.Sc
INT. 4   Software Team Leader   10 years    Ph.D. Candidate
INT. 5   Scientific Researcher  10 years    Ph.D. Candidate
INT. 6   Software Developer     5 years     M.Sc
INT. 7   Software Team Manager  12 years    Ph.D
INT. 8   Assistant Professor    12 years    Ph.D
INT. 9   Assistant Professor    12 years    Ph.D
INT. 10  Software Team Leader   15 years    B.Sc

four-step method involves data collection, qualitative data analysis with coding, classification and grouping of coded or tagged information, and finally, developing a theoretical conclusion based on the research objectives. Figure 2 provides a visual representation of the grounded theory steps. The diagram demonstrates the process, starting from data collection, followed by qualitative data analysis and coding. Next, the coded or tagged information is classified and grouped. Finally, the grouped data is organized under the research objectives, leading to a theoretical conclusion. This illustration serves as a guide to understanding the systematic approach taken in the grounded theory method.

Fig. 2. Grounded Theory Research Steps
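The coding and grouping steps of the grounded theory process described above can be illustrated with a toy sketch. This is purely illustrative; the fragments, codes, and category names are invented for the example and are not drawn from the study's data:

```python
from collections import defaultdict

# Step 2 output (qualitative coding): interview fragments tagged with codes.
fragments = {
    "Responses came back instantly, faster than a person could type.": ["response_speed"],
    "The avatar kept repeating the same greeting in every scenario.":  ["recurring_scenario"],
    "It could not handle my unexpected question at all.":              ["adaptability"],
}

# Step 3: classification of codes into higher-level categories (invented grouping).
CATEGORIES = {
    "response_speed":     "behavioral cues",
    "adaptability":       "behavioral cues",
    "recurring_scenario": "pattern cues",
}

def group_codes(tagged):
    """Group coded fragments by category, as in the classification step."""
    grouped = defaultdict(list)
    for fragment, codes in tagged.items():
        for code in codes:
            grouped[CATEGORIES[code]].append(code)
    return dict(grouped)
```

The final step, developing a theoretical conclusion, is then performed by the researchers over these grouped categories rather than by code.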

To achieve these objectives, we administered 5-item questionnaires and collected data from software practitioners (N = 10). Additionally, a semi-structured interview was conducted based on the answers obtained. In the following stage, the validity of the questionnaires was re-assessed by the software practitioners, providing a starting point for future correlation analyses and pilot studies, such as personality type analysis studies [17]. This paper analyzed the responses from ten software practitioners involved in software development projects. Our research study utilized a semi-structured interview and grounded theory method, incorporating semi-structured interview data as input. The first


step involved conducting semi-structured interviews with ten software practitioners. The semi-structured interview is a data collection method that combines questionnaires and interviews. The 5-item questionnaire was structured around a predetermined thematic framework, with the questions presented in a non-linear order. Figure 3 depicts the exploratory research process for differentiating artificial and natural human characters in the Metaverse. The first stage of this process was carried out in the current study.

Fig. 3. Research Steps for Differentiating Artificial and Human Characters in Metaverse

Our research questions were derived from the five evaluation criteria presented in Table 2. The first criterion focuses on human-likeness questions, which aim to understand the parallels between a real person and an artificial intelligence character. The second criterion examines decision-making situations through questions about the quality of the characters' responses. The third criterion relates to production and investigates how successfully virtual characters respond to actions or verbal expressions directed towards them. The fourth criterion concerns operation, addressing how characters are perceived in areas such as management, reproduction, and maintenance. Lastly, we asked questions about general evaluation scores and potential performance assessments. While formulating these questions, the criteria of Kim et al. [18] were also utilized.

Table 2. Questionnaire gathered from software practitioners.

Evaluation Criteria   Criteria Description                             # of Questions
Human Likeness        Similarities                                     1
Decision Making       Quality of Response                              1
Production            Unit Production per Response or Successfulness   1
Operation             Management, Creation, Maintenance                1
Performance           Overall Evaluation of Scores                     1

Identifying Key Factors to Distinguish Artificial and Human Avatars


This paper aims to validate and enhance the proposed questionnaire by incorporating an additional one, with further analyses planned across three studies. We utilized the questions from Yiqian Han's publication [19]:

Q1: Would you recommend this work to others? A higher score indicates a greater willingness to recommend.
Q2: Do you believe the questionnaire requires improvement? A high score signifies that no improvement is needed.
Q3: Were you able to express your thoughts naturally? The higher the score, the more natural the expression.
Q4: Did any questions make you feel uncomfortable during the questionnaire? A high score implies discomfort.
Q5: Were you satisfied with the work? A high score denotes satisfaction.
Q6: Do you find this interactive technology comfortable to use? A high score suggests comfort.
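Aggregating such validation responses requires care, because Q4 is negatively worded while the other items are positive. The sketch below illustrates one way to handle this; the 5-point scale, the sample values, and the reverse-scoring of Q4 are assumptions for illustration, not data from the study.

```python
# Hypothetical 5-point Likert responses from three participants for Q1..Q6.
responses = {
    "Q1": [5, 4, 5], "Q2": [4, 4, 3], "Q3": [5, 5, 4],
    "Q4": [1, 2, 1],  # a high raw score implies discomfort (negative item)
    "Q5": [5, 4, 4], "Q6": [4, 5, 4],
}

REVERSED = {"Q4"}  # items where a high raw score is unfavourable
SCALE_MAX = 5

def mean_score(question, values):
    """Mean score per question, reverse-scoring negative items so that
    a higher result is always favourable."""
    if question in REVERSED:
        values = [SCALE_MAX + 1 - v for v in values]
    return sum(values) / len(values)

scores = {q: mean_score(q, vals) for q, vals in responses.items()}
```

After reverse-scoring, all six means can be compared on a single "higher is better" axis.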

4 Findings

In this study, we investigated five main headings to differentiate between artificial and real human characters in the Metaverse environment. The study was carried out in two stages using semi-structured interviews. In stage 1, we gathered brief information on Human Likeness, Decision Making, Production, Operation, and Evaluation to inform the participants about the general framework of our research. We believed that prior knowledge of the Metaverse concept, a relatively new field, would benefit the participants. In stage 2, we conducted face-to-face interviews with the participants. During these conversations, we revisited the questions from stage 1 and encouraged the participants to expand on their answers, facilitated by a card-sort exercise. The audio recordings from stage 2 were transcribed into text by experts to capture the nuances of everyday speech. The collected data can be accessed from a shared source in the field, ensuring transparency and collaboration. By conducting a systematic and consistent study of our data, we aimed to contribute valuable insights to the understanding and differentiation of artificial and real human characters within the Metaverse environment. Since the participants' identities are confidential, they are referred to simply as "participants".

• In Interview 1, the participant mentioned that the most critical factor is the detail in physical appearance. He thinks that differentiating factors can be obtained by examining demographic characteristics. The diversity of the avatar's reactions is important, and he stated that improvised words and behaviours might also be used; he considered the richness of reactions to be the second distinguishing factor. According to the participant, the algorithmic patterns of computer-controlled characters are easy to recognize: a character that keeps following the same order of actions can be identified as not being a natural human. In addition, the pattern of responses in sick or disabled people changes dramatically. Another consideration is that continuity problems, which are very difficult for humans, can be easily solved by machines. Performance











measurement systems have difficulty keeping up to date, so a realistic measurement system will give different results after a while. Finally, it will be helpful to examine the emotional state (EQ) as well as intelligence (IQ) in human-human interaction.
• In Interview 2, the participant stated that artificial intelligence is flourishing in terms of human similarity, so detecting it through visual features would not be a successful method. He also said that it would not be possible to differentiate artificial intelligence by following its logical sequence, and that a well-trained artificial character could imitate a natural person very well. He emphasized that smell, one of the five human senses, is absent from the digital environment, and that products addressing the sense of touch are only beginning to appear on the market. An artificial character can describe its recorded life; if something has not been recorded, it cannot relate shared knowledge. Finally, he stated that computers would respond very quickly to some problems.
• According to the Interview 3 participant, the characters' gestures and facial expressions can be distinguishing factors, although growing scientific progress will make these factors increasingly difficult to use. He said virtual humans would have difficulty in edge and corner cases, or when asked to recount a memory, so differentiation can be done quickly in such cases. However, he mentioned that he would have to spend time with a participant to use these factors. While distinguishing virtual and real human characters, observation can be divided into passive and interactive methods. In interaction analysis, asking mathematical or logical questions increases observation efficiency by adding a multidimensional perspective; in addition, carefully chosen balance points are invaluable for not giving the test away to the participant. Using identified characters would address the problem from a different perspective, although this method reduces realism somewhat. Inter-participant levelling or character classification may be beneficial instead of free-for-all environments. He recommends using more than one method in the evaluation.
• According to the Interview 4 participant, observation can be made by comparing the mean and standard deviation of the natural human population with the mean and standard deviation of virtual human characters; the status of an observed character can be distinguished using such population references. He suggested that physical movements and reactions can be more valuable distinguishing factors than physical appearance, and that data per unit time can be used as a differentiating factor for successful responses in decision-making processes.
• According to the Interview 5 participant, a character's facial expressions and body movements create a perception in a Metaverse environment. Hand and arm motions, body movements, or laughing in response to a joke may be useful indicators. She added that the principle of impartiality of measurement systems is essential, and that genuine responses, such as improvisation, could be used as distinguishing data. She suggests that it would be logical to use body movements, mimics, and facial expressions, that is, nonverbal communication, as differentiating factors.













She suggests that emotional observation, as well as logical evaluation, will be useful in decision-making processes.
• The Interview 6 participant stated that he does not consider it possible to use physical movements as a distinguishing factor: an artificial character cannot be distinguished through daily activities such as walking, running, sitting, and getting up. Monitoring managerial decisions, however, can yield a distinction, and it would be helpful to introduce a systematic approach with regular time intervals. He stated that the responses given as a result of interaction could be used for differentiation, and that responding within a short time is easier for characters under computer control. He suggested that a systematic measurement system could be built from the resources used, the outputs produced, the decisions made, and the conclusions reached.
• According to the Interview 7 participant, patterns can be drawn from movement or other qualities, and a real human appearance can be easily simulated using recorded patterns. He stated that visual factors would not be efficient distinguishing factors since they can easily be imitated today. In addition, a natural person has daily needs, so uninterrupted connection to the system for a long time can be a distinguishing factor. Finally, different methods should be used depending on the scenario; purpose and scope are essential, and applying more than one method at the same time will give more efficient results. He also stated that repetitive actions on a specific task could be used as a differentiating factor.
• In response to the questions, the Interview 8 participant said that personality analysis would yield effective results, as would sentiment analysis. In addition to quantitative data, more accurate results can be drawn from qualitative data.
• According to the Interview 9 participant, novel products or intellectual activities will be distinguishing: computer-controlled characters will, on average, score better than natural human characters at computational tasks. He evaluated that concrete measurements are not easy, but that observing interactive situations can yield successful results.
• The Interview 10 participant regarded the character's reactions as distinguishing factors: if the response time is too short, the character can be identified as an artificial human. In addition, solutions using more than one method will be effective, and continuity can be a distinguishing feature. Non-interactive passive methods will be problematic because word and movement analysis alone will not be enough. He evaluated that focusing on success would be misleading in this research, since people in the Metaverse environment do not aim to be successful. He suggested that machine control can be detected by observing cyclical movements and behaviours, noted that it is not easy for an artificial intelligence to terminate an interaction, and stated that extending the observation over time would be useful.
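Several participants (notably Interviews 4 and 10) point to response-time statistics as a distinguishing signal. A minimal sketch of that idea follows; the reference data and the z-score threshold are entirely hypothetical and serve only to illustrate the population-reference approach suggested by the Interview 4 participant.

```python
import statistics

# Hypothetical response times (seconds) from a reference population of
# known natural human avatars.
human_times = [2.8, 3.5, 2.1, 4.0, 3.2, 2.6]

MU = statistics.mean(human_times)
SIGMA = statistics.stdev(human_times)

def likely_artificial(observed_time, z_threshold=2.0):
    """Flag a character whose response time deviates strongly from the
    human population reference, e.g. answering far faster than any human."""
    z = (observed_time - MU) / SIGMA
    return abs(z) > z_threshold
```

A near-instant reply falls far outside the human reference distribution and is flagged, while a typical reply is not; in practice such a single-feature test would be combined with the other factors the participants mention.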

The results from this study suggest that response speed, improvised events, and loop scenarios have significant implications for understanding how humans can be differentiated from intelligent agents in a virtual setting. The participants provided supportive and positive answers in the questionnaire, expressing satisfaction and willingness and answering in their own unique way.



The contribution of our study is summarized as follows: as a result of our semi-structured interview study, three main factors emerged as the distinguishing factors between humans and intelligent agents in the Metaverse: response speed, improvised responses, and loop scenarios. These factors help in establishing a better understanding of how to differentiate between human and artificial characters in virtual environments, contributing to the ongoing research in this area. This knowledge can further improve user experience and interaction within the Metaverse and other virtual settings.

5 Conclusion and Future Work

In this study, we explored the factors that distinguish virtual and human characters within a virtual reality environment. The findings have a number of practical implications. In particular, this study identified three main factors that differentiate virtual and human characters: response speed, improvisation, and composite work in a loop scenario. These factors are crucial in determining the behavior of a character and differentiating it from an artificial intelligence algorithm. Our findings suggest that emotional and logical approaches are more effective than physical features in distinguishing between virtual and human characters. Additionally, distance and size measurement can be used as a differentiating factor. Overall, this study contributes to the ongoing research on the identification of artificial and human characters in virtual environments. The findings provide valuable insights into the cognitive-load impacts of avatars and suggest that a standardized measurement of differentiation is necessary for accurate and replicable results in future studies. Our study also highlights the importance of examining emotional and logical factors in addition to physical features. With the pace of AI advances appearing to increase, for example with ChatGPT [28], the challenge of identifying human and AI agents in the Metaverse may become even more difficult in the near future, so the need for further research in this space is growing. Further studies can build upon our findings by utilizing the data obtained from this research and repeating the study with a larger sample size to increase the generalizability of the results. Ultimately, continued research in this field will contribute to the development of more advanced and realistic virtual environments with enhanced human-like interactions.

Acknowledgement.
This research is supported in part by Science Foundation Ireland (SFI, https://www.sfi.ie/) grant no. SFI 13/RC/2094 P2 to Lero, the Science Foundation Ireland Research Centre for Software.

References

1. Raij, A.B., et al.: Comparing interpersonal interactions with a virtual human to those with a real human. IEEE Trans. Visual. Comput. Graph. 13(3), 443–457 (2007). https://doi.org/10.1109/TVCG.2007.1030



2. Wang, D., Yan, X., Zhou, Y.: Research on metaverse: concept, development and standard system. In: 2021 2nd International Conference on Electronics, Communications and Information Technology (CECIT), Sanya, China, pp. 983–991 (2021)
3. Terrace, J., Cheslack-Postava, E., Levis, P., Freedman, M.J.: Unsupervised conversion of 3D models for interactive metaverses. In: 2012 IEEE International Conference on Multimedia and Expo, pp. 902–907 (2012). https://doi.org/10.1109/ICME.2012.186
4. Gonzalez-Franco, M.: Keynote speaker: metaverse: from fiction to reality and the research behind it. In: 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), p. 17 (2021). https://doi.org/10.1109/ISMAR52148.2021.00011
5. Kilteni, K., Groten, R., Slater, M.: The sense of embodiment in virtual reality. Pres. Teleoper. Virtual Environ. 21, 373–387 (2012). https://doi.org/10.1162/PRES_a_00124
6. Steed, A., Pan, Y., Zisch, F., Steptoe, W.: The impact of a self-avatar on cognitive load in immersive virtual reality. In: 2016 IEEE Virtual Reality (VR), Greenville, SC, 19–23 March 2016, pp. 67–76. IEEE, Piscataway (2016)
7. Gonzalez-Franco, M., Steed, A., Hoogendyk, S., Ofek, E.: Using facial animation to increase the enfacement illusion and avatar self-identification. IEEE Trans. Visual. Comput. Graph. 26, 2023–2029 (2020)
8. Peck, T.C., Doan, M., Bourne, K.A., Good, J.J.: The effect of gender body-swap illusions on working memory and stereotype threat. IEEE Trans. Visual. Comput. Graph. 24, 1604–1612 (2018)
9. Peck, T.C., Tutar, A.: The impact of a self-avatar, hand collocation, and hand proximity on embodiment and stroop interference. IEEE Trans. Visual. Comput. Graph. 26, 1964–1971 (2020)
10. Lush, P., Vazire, S., Holcombe, A.: Demand characteristics confound the rubber hand illusion. Collabra: Psychol. 6, 83 (2020)
11. Ries, B., Interrante, V., Kaeding, M., Anderson, L.: The effect of self-embodiment on distance perception in immersive virtual environments. In: Proceedings of the 2008 ACM Symposium on Virtual Reality Software and Technology, New York, NY, United States, 27 October 2008, pp. 167–170. Association for Computing Machinery, New York (2008)
12. Ebrahimi, E., Hartman, L.S., Robb, A., Pagano, C.C., Babu, S.V.: Investigating the effects of anthropomorphic fidelity of self-avatars on near field depth perception in immersive virtual environments. In: 2018 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Reutlingen, Germany, 18 March 2018, pp. 1–8. IEEE, Piscataway (2018)
13. Jung, S., Bruder, G., Wisniewski, P.J., Sandor, C., Hughes, C.E.: Over my hand: using a personalized hand in VR to improve object size estimation, body ownership, and presence. In: Proceedings of the Symposium on Spatial User Interaction, Berlin, Germany, 13–14 October 2018, pp. 60–68. Association for Computing Machinery, New York (2018)
14. Ogawa, N., Narumi, T., Hirose, M.: Virtual hand realism affects object size perception in body-based scaling. In: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019, pp. 519–528. IEEE, Piscataway (2019)
15. Gonzalez-Franco, M., Abtahi, P., Steed, A.: Individual differences in embodied distance estimation in virtual reality. In: 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019, pp. 941–943. IEEE, Piscataway (2019)
16. Yilmaz, M., O'Connor, R.V., Clarke, P.: An exploration of individual personality types in software development. In: Barafort, B., O'Connor, R.V., Poth, A., Messnarz, R. (eds.) EuroSPI 2014. CCIS, vol. 425, pp. 111–122. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-662-43896-1_10
17. Yilmaz, M., O'Connor, R.V., Colomo-Palacios, R., Clarke, P.: An examination of personality traits and how they impact on software development teams. Inf. Softw. Technol. 86, 101–122 (2017)



18. Kim, M., Kim, K., Kim, S., Dey, A.K.: Performance evaluation gaps in a real-time strategy game between human and artificial intelligence players. IEEE Access 6, 13575–13586 (2018). https://doi.org/10.1109/ACCESS.2018.2800016
19. Han, Y., Oh, S.: Investigation and research on the negotiation space of mental and mental illness based on metaverse. In: 2021 International Conference on Information and Communication Technology Convergence (ICTC), pp. 673–677 (2021). https://doi.org/10.1109/ICTC52510.2021.9621118
20. Glaser, B.G., Strauss, A.L.: The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Publishing Company, Chicago (1967)
21. Liu, Y., Wu, S.: Quality culture management model of Chinese manufacturing enterprises: a grounded theory study. In: 2018 UKSim-AMSS 20th International Conference on Computer Modelling and Simulation (UKSim), pp. 102–107 (2018). https://doi.org/10.1109/UKSim.2018.00030
22. Deng, J., Wu, S., Liu, Y.: Investigating the TQM practice of enterprises winning China quality award: a grounded theory perspective. In: 2017 European Modelling Symposium (EMS), pp. 115–120 (2017). https://doi.org/10.1109/EMS.2017.30
23. Zhang, X.: A framework of value connection route for fresh agri-product e-commerce: a grounded theory approach in the context of China. In: 2021 2nd International Conference on Big Data Economy and Information Management (BDEIM), pp. 500–508 (2021). https://doi.org/10.1109/BDEIM55082.2021.00109
24. Ståhl, D., Mårtensson, T.: Won't somebody please think of the tests? A grounded theory approach to industry challenges in continuous practices. In: 2021 47th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), pp. 70–77 (2021). https://doi.org/10.1109/SEAA53835.2021.00018
25. Lili, Y., Xu, X., Liu, C., Sheng, B.: Using grounded theory to understand testing engineers' soft skills of third-party software testing centers. In: 2012 IEEE International Conference on Computer Science and Automation Engineering, pp. 403–406 (2012). https://doi.org/10.1109/ICSESS.2012.6269490
26. Pang, C., Hindle, A., Barbosa, D.: Understanding DevOps education with grounded theory. In: 2020 IEEE/ACM 42nd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), pp. 107–118 (2020)
27. Lee, L.H., et al.: All one needs to know about metaverse: a complete survey on technological singularity, virtual ecosystem, and research agenda (2021). arXiv preprint arXiv:2110.05352
28. Zhai, X.: ChatGPT for next generation science learning. XRDS: Crossroads, ACM Mag. Stud. 29(3), 42–46 (2023)
29. Galletta, A.: Mastering the Semi-Structured Interview and Beyond: From Research Design to Analysis and Publication, vol. 18. NYU Press, New York (2013)
30. Hove, S.E., Anda, B.: Experiences from conducting semi-structured interviews in empirical software engineering research. In: 11th IEEE International Software Metrics Symposium (METRICS 2005), p. 10. IEEE (2005)

Digitalisation of Industry, Infrastructure and E-Mobility

An Approach to the Instantiation of the EU AI Act: A Level of Done Derivation and a Case Study from the Automotive Domain

Fabian Hüger1(B), Alexander Poth2, Andreas Wittmann1, and Roland Walgenbach1

1 CARIAD SE, Berliner Ring 2, 38440 Wolfsburg, Germany
{fabian.hueger,andreas.wittmann,roland.walgenbach}@cariad.technology
2 Volkswagen AG, Berliner Ring 2, 38440 Wolfsburg, Germany
[email protected]

Abstract. Based on the EU AI Act draft from November 2022, a team of data scientists, quality managers, and legal experts set out to instantiate the AI Act for their project domain. To focus on the product- and service-relevant parts of the extensive EU Act, the Level of Done (LoD) layer approach was applied. Based on this LoD layer for the AI Act, an evaluation was initiated with ongoing Machine Learning (ML) projects. This case study describes how the instantiation was carried out and provides a first insight into the application of the LoD from an engineering perspective.

Keywords: EU AI Act · agile development · software engineering · machine learning

1 Introduction

Artificial intelligence (AI) is transforming the way we work and live [17]. Applications range from speech recognition and image processing to self-driving cars. While the benefits of AI are undeniable, there are also significant risks and challenges associated with its use. The European Union (EU) has recognized the need for a comprehensive framework to regulate the development and deployment of AI and has recently proposed the EU AI Act [4]. As a software developer, it is important to understand the technical and ethical implications of AI [16] and how they will affect the development of AI systems. The EU AI Act represents an important step towards regulating AI in a manner that promotes transparency, accountability, and ethical behavior. Therefore, engineers must familiarize themselves with the details of the EU AI Act and its impact on the development of AI systems. The EU AI Act introduces new requirements in terms of transparency, explainability, and human oversight that must be built into AI systems. Additionally, it is important to acknowledge that many organizations involved in AI development operate in an agile environment [10]. Ensuring that AI systems are not only functional but also compliant

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 111–123, 2023. https://doi.org/10.1007/978-3-031-42307-9_9


F. Hüger et al.

with regulatory requirements is of utmost importance. By adopting an approach that incorporates regulatory considerations into the agile development process, organizations can avoid costly delays and reputational damage, while simultaneously developing more trustworthy and ethical AI systems that gain higher customer acceptance. The research aims to investigate how the generic EU AI Act [4] can be effectively applied to enable engineers to work in compliance during the development process and to incorporate the necessary operational capabilities into products and services intended for the EU. The research questions for this business setup are as follows:

RQ1: How can engineers focus on the engineering-relevant aspects within the extensive and comprehensive AI Act?
RQ2: What is required to establish a setup for engineers in agile organizations to include and retain the relevant aspects of the AI Act in the value stream?
RQ3: How do engineers respond to tasks resulting from the AI Act and LoD, with a specific focus on fostering or impeding innovation?

Section 2 evaluates the related literature, Sect. 3 presents the applied methodology, Sect. 4 presents the derivation of the AI Act LoD layer, and Sect. 5 describes a case study from CARIAD that applies the LoD. Section 6 discusses the research questions and outlines the limitations. Section 7 summarizes the key contributions of this article to research and practice and provides an outlook on the authors' ongoing research activities.

2 Literature

The Act, Its Evaluation and Recommended Improvements

The EU AI Act [4] is currently in the developmental phase. Multiple draft versions have been issued since the initial draft in April 2021. This article is based on the EU AI Act draft version from November 2022. A finalized, non-draft version is scheduled for release in 2023. However, most of the existing literature analyzed in this section focuses on earlier draft versions of the EU AI Act. In [9], the EU AI Act is described as "the new European rules [that] will forever change the way AI is formed". [15] identifies in their review that self-assessments and private bodies of standardization have too much "power" to define and adjust the instantiation, and are too weak to enforce conformity. [5] discusses various aspects that require refinement in proportion to the systems' risk level. In [3], four issues are identified, and solutions with significant impacts on Chapter III of the AI Act are proposed. [14] concludes that a label for AI-based systems outside the "high-risk" focus of the AI Act is missing. [6] evaluates the impact of the AI Act on AI training data and the risks to innovation resulting from potential over-regulation. [2] proposes an ontology



to contextualize the AI Act within established risk management standards such as ISO 31000.

Approaches to Establishing Legal Compliance Requirements in Organizations

Various approaches exist to align organizations and their product and service delivery teams with regulations and standards. One example is the Level of Done approach [11], which is part of the efiS® framework [12]. The LoD approach enables agile teams to prioritize relevant requirements and to integrate the AI Act LoD into a stack of other pertinent LoD layers (e.g., an ISO 27001 or GDPR layer for a specific product or service domain [13]). In [11], a LoD layer for the ISO 14001 environmental management system is proposed. With the Self-Service Kit (SSK) approach of the efiS® framework, autonomous teams can independently carry out their value-stream work [11], supported by regulation experts who provide and maintain LoD layers. Additionally, the SAFe® framework recommends the establishment of "stream-aligned teams" that support compliance and regulations [7], supplemented by "shared services" that possess a specialized skill set in regulatory and compliance matters [8].

3 Methodology

The methodology of the proposed approach is grounded in Action Research (AR) [1]. In order to address the problem from a broad perspective, a team consisting of two quality assurance experts, two legal experts, and two machine learning engineers was formed. The aim was to devise a strategy to harmonize the EU AI Act with the product- and service-specific aspects of an agile development framework. The workflow employed to construct the LoD is depicted in Fig. 1. As the LoD approach itself is available as a Self-Service Kit (SSK), a domain-specific LoD such as this one for the EU AI Act can also be offered to projects via an SSK as the final result of the experts' work.

Fig. 1. Schematic workflow to derive the LoD



In step (a), the EU AI Act from November 2022 was analyzed to extract the most relevant engineering content by the machine learning experts. In step (b), this content was then transferred to the LoD, where each line addresses a specific development topic. The objective was to ensure that engineers without extensive knowledge of artificial intelligence development, operations, and/or quality assurance could comprehend the information. Thus, considering an agile development setup that encompasses a Product Owner (customer), a development team, an operations team, and an organizational level quality department, the LoD was structured into four levels, each represented by dedicated columns. In step (c), the EU AI Act and the draft version of the LoD were scrutinized from a legal perspective to incorporate any missing or incomplete legal content. Moving on to step (d), a suitable use case, covering a wide range of topics in the LoD, was selected for evaluation purposes. This "piloting phase" served as a cross-check to ensure the intended applicability of the LoD in a development project. Finally, step (e) involved fine-tuning the LoD based on feedback received from the use case.

4 Derivation of LoD Layer

EU AI Act Scope and Initial Derivation of the LoD

The aim of this study was to develop an EU AI Act LoD layer specifically tailored to an agile organization operating within the automotive domain. The objective was to create a structured LoD layer that aligns with the value-stream flow of agile teams and can be easily comprehended by autonomous development teams. This would help minimize the reliance on legal and quality experts for guidance. The EU AI Act comprises seven hierarchical levels: Sections (Annex, etc.), Titles, Chapters, Articles, Arabic numbers (1, 2, etc.), Letters (a, b, etc.), and Roman numerals (i, ii, etc.). Furthermore, cross-references are frequently used to establish links between different levels. This type of linking can result in recursive correlations, such as Articles 8 and 17 from Title 3 Chapter 2 referencing the entire Chapter 2. Such a complex structure sets the EU AI Act apart from other standards, management systems (e.g., ISO 9001, ISO 27001, or ISO 14001), and regulations (e.g., the GDPR). Dealing with this complexity necessitates an additional auxiliary structure to facilitate comprehension and maintain an overview. Figure 2 illustrates an intermediate graph that was constructed during the analysis to visualize the references and dependencies within the EU AI Act. The colors employed in the figure represent interim markers utilized during the analysis process. Moreover, this complexity underscores the potential for data scientists and engineers to become overwhelmed by the AI Act without the assistance of legal and quality experts. The scope of the LoD was defined to encompass products or services developed within the EU for the EU market, specifically in the context of in-vehicle systems or enterprise IT systems (excluding finance and HR-related systems). It is possible to merge the development and operations teams into a single DevOps team, which would reduce the four levels of the LoD to three. This reduction is more feasible than the
An Approach to the Instantiation of the EU AI Act

Fig. 2. The authors' aid to gain an overview of the dependencies and relations within the EU AI Act.

This reduction is more feasible than the reverse, i.e., splitting a DevOps level into separate development and operations levels. Considering the complexity and extensiveness of the AI Act, a default combined DevOps level would have created an overly large and unwieldy level. Additionally, while it is conceivable to add a fifth level to the LoD for external assessments, it was not included in this instance, as internal assessments are typically conducted for the selected scope and use case. Nevertheless, the four levels still exceed the customary three-level LoD layers derived from other standards and regulations.

To create a LoD layer that can be understood by engineers without the involvement of quality or legal experts, relevant portions of the AI Act are handled through the following approaches:

a) Literal extraction: when the original text is comprehensible without expert knowledge.
b) Extraction with additional explanations: when additional clarifications are required.
c) Re-phrased text: used to summarize less relevant aspects or to enhance the text's comprehensibility for engineers.

In all three cases, references to the AI Act are provided, enabling engineers to consult the source for optimized interpretations specific to their product or service context. To maintain transparency, the extracted text is presented in black, while additional explanations or re-phrased text are displayed in a different color or, in this document, in italics. Table 1 provides examples of the initial LoD layer showcasing these three cases. The LoD references the original location within the AI Act using a code comprising Title (T), Chapter (C), and Article (A). Table 1 also demonstrates an example of splitting an article into two levels, as seen in T3C3A16, which contains parts for both the Ops and QM/Assessment levels. Overall, the EU AI Act LoD layer is presented as a table with eleven rows and four columns.
However, the word count within the cells varies significantly, ranging from the brief “Ops” section of T3C3A16 to more extensive portions such as T3C2A9 in the second line of the “Dev” column.

Table 1. Examples for the LoD layer EU AI Act (changes and additions to the original text are highlighted in italics and within "***")

Biz/customer:

Scope of legislation (T1A2)(2) For AI systems classified as high-risk AI systems in accordance with Articles 6(1) and 6(2) related to products covered by Union harmonisation legislation listed in Annex II, section B, only Article 84 of this Regulation shall apply. (T1A6)(1) An AI system that is itself a product covered by the Union harmonisation legislation listed in Annex II shall be considered as high risk if it is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the above-mentioned legislation.

(T1A3)(1) 'artificial intelligence system' (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts; ***The system is considered to be an AI system if all above-mentioned criteria are met.*** …

(T1A3)(23) 'substantial modification' means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation, or a modification to the intended purpose for which the AI system has been assessed. For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV shall not constitute a substantial modification. ***The system is in scope of the legislation if a substantial modification is performed on an existing product after the AI Act effective date.*** …

Prohibited AI (T2A5) Prohibited is… an AI system that deploys subliminal techniques beyond a person's consciousness with the objective to or the effect of materially distorting a person's behavior; that exploits any of the vulnerabilities of a specific group of persons due to their age, disability or a specific social or economic situation; for the evaluation or classification of natural persons over a certain period of time based on their social behavior or known or predicted personal or personality characteristics, with the social score…; the use of 'real-time' remote biometric identification systems in publicly accessible spaces by law enforcement authorities.

Dev:

Compliance with requirements (T3C2A8)(1) taking into account the generally acknowledged state of the art. (2) The intended purpose of the high-risk AI system and the risk management system … shall be taken into account when ensuring compliance…

Risk Management System (T3C2A9)(2) ***The system is established and improved over the entire AI system life cycle to foresee risks, use the post-market monitoring, and apply the adoption of measures.*** (4) ***The measures shall address residual risks and mitigate them to acceptable levels.*** The following shall be ensured: (a) elimination or reduction of risks identified and evaluated… as far as possible through adequate design and development of the high-risk AI system; (b) where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated; (c) provision of adequate information pursuant ***to the transparency requirements***…, and, where appropriate, training to users. (5) ***Testing ensures consistency with the intended purpose and compliance with T3C2.*** (7) Testing shall be performed… throughout the development process, ***and has to be performed*** in any event prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the… high-risk AI system. (6) ***Real-world tests (with users etc.) are under specific conditions (A54a). Summary: for each AI system, a specific risk management system is established because the measures and controls are specific to each AI system.***

Ops:

Corrective actions (T3C3A21) Providers of high-risk AI systems… shall immediately investigate… the causes ***of suspected non-conformities*** and take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it …

Obligations of providers of high-risk AI systems (T3C3A16) Providers of high-risk AI systems shall keep the logs … automatically generated by their high-risk AI systems …

QM/Assessment:

Quality management system (T3C3A17)(1) Providers of high-risk AI systems shall put a quality management system in place…, which shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, including a strategy for regulatory compliance…, techniques, procedures and systematic actions to be used for the design, design control and design verification ***and to be used for the development***,… quality control and quality assurance of the high-risk AI system;… examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system…; ***compliance with technical specifications, including standards; systems and procedures for*** data management…; risk management system…; ***the setting-up, implementation and maintenance of a post-market monitoring system; … incident reporting procedures related to the reporting of serious incidents; the handling of communication with authorities***; record keeping and an accountability framework setting out the responsibilities of the management and other staff.

Obligations of providers of high-risk AI systems (T3C3A16) Providers of high-risk AI systems shall ensure that their high-risk AI systems are compliant with the requirements set out in C2 ***(esp. risk management system, data and data governance requirements, technical documentation and record-keeping, transparency and safety requirements)***; have a quality management system in place … and keep documentation and ensure that the high-risk AI system undergoes the relevant conformity assessment procedure.

F. Hüger et al.
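The Title/Chapter/Article codes used throughout the LoD layer (e.g., T3C3A16) lend themselves to a simple machine-readable representation that preserves traceability back to the Act. The sketch below is illustrative only: the `LoDEntry` structure, the level names, and the parsing logic are our assumptions, not part of the published LoD layer.

```python
import re
from dataclasses import dataclass

# The four LoD levels used in Table 1.
LEVELS = ("Biz/customer", "Dev", "Ops", "QM/Assessment")

@dataclass
class LoDEntry:
    ref: str    # e.g. "T3C3A16" (Title 3, Chapter 3, Article 16)
    level: str  # one of LEVELS
    text: str   # extracted or re-phrased requirement text

def parse_ref(ref: str) -> dict:
    """Split a reference code into its Title/Chapter/Article parts.
    Title 1 references such as 'T1A2' carry no chapter component."""
    m = re.fullmatch(r"T(\d+)(?:C(\d+))?A(\d+)", ref)
    if m is None:
        raise ValueError(f"not a valid LoD reference: {ref!r}")
    title, chapter, article = m.groups()
    return {
        "title": int(title),
        "chapter": int(chapter) if chapter else None,
        "article": int(article),
    }

entry = LoDEntry("T3C3A16", "Ops",
                 "Providers of high-risk AI systems shall keep the logs ...")
assert parse_ref(entry.ref) == {"title": 3, "chapter": 3, "article": 16}
assert parse_ref("T1A2")["chapter"] is None
```

Such a representation would allow tooling to validate that every LoD cell cites a resolvable location in the Act, which is exactly the traceability property the layer relies on.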

Review and Optimization

In a subsequent phase, the legal experts conducted a thorough review of the analysis performed by the quality and AI experts. This review aimed to address the potential risks stemming from the engineers' product-focused perspective, which might result in the extraction of only development-relevant aspects while potentially altering the wording and intended meaning of the Act in an effort to enhance accessibility for engineers. It was crucial to avoid omitting important legal considerations during this process. The legal expert review yielded additional extensions and rephrasings of the LoD layer. The recommendations provided by the legal experts were carefully incorporated by the entire LoD team into the finalized LoD layer. The outcome is a comprehensive LoD layer for the EU AI Act, which covers the designated scope and endeavors to be easily understandable for the engineers affected by its provisions. If necessary, it would be feasible to refine and streamline the LoD layer further by narrowing its focus to more specific products or services, as applicable.

5 Case Study for Crosscheck Validation in a Real Project Setup

For the case study, an AI software component currently in the development stage was selected, along with an updated variant of the same component in a pre-development stage. The chosen component is a virtual software sensor used in a driver assistance system, which is a function relevant to homologation. The LoD team conducted several interviews with the pre-development team, including the project lead and engineers, during which they reviewed the LoD document. The team gathered the following findings (excerpts):

Business/Customer

F1. (T1A3)(1) The definition of "AI system" is unclear. The term "elements of autonomy" was not understood. We suggest adding that, in cases of uncertainty, we assume the worst-case scenario, considering the system falls under the definition.

F2. (T1A3)(23) In this situation, the current series development variant is likely to be released before the effective date of the AI Act, while an updated version may be released afterward. The meaning of "substantial modification" and "modification to the intended purpose" was not clear. In this case, the component's architecture will not change, only the weights. Additionally, the purpose is altered to accommodate a higher safety load for an advanced system. We propose marking it as "in scope" if uncertain.

F3. (T1A6)(1) The terms "safety components" and "components covered by EU type-approval rules" were not understood. It was unclear whether the sensor is considered a safety component. We suggest marking it as "in scope" if uncertain.

F4. (T2A5) The meaning of "exploit… disability" was unclear. We discussed an example where the vehicle should offer the same functionality for individuals with only one arm, and corresponding development measures are in place. Further explanations are needed for this part.


Dev

F5. (T3C2A8)(1) The meaning of "taking into account the generally acknowledged state of the art" was unclear. It could refer to either having the same performance as other state-of-the-art systems or employing state-of-the-art development methods. In a refined version of the LoD document, we suggest mentioning both variants to avoid confusion.

F6. (T3C2A9)(2) The regulation requires a Risk Management System (RMS) that includes life cycle considerations. It remains unclear what the RMS and "adequate measures" for an AI system should cover. Providing a reference to an industry norm would help specify the requirements for the development team. In our case, ISO PAS 8800 "Road Vehicles: Safety and Artificial Intelligence" would be appropriate.

F7. (T3C2A10)(3) "… Data sets shall be… to the best extent possible, free of errors and complete." The term "best extent possible" is not clear. It is recommended to provide documents with statistics demonstrating reasonable data quality and distribution as a best practice.

Ops

F8. (T3C3A16) "Providers of high-risk AI systems shall keep the logs." It remains unclear what needs to be logged.

QM/Assessment

F9. (T3C3A16) "Providers… shall ensure that the high-risk AI system undergoes the relevant conformity assessment procedure." Here, we refer to an existing internal development process for AI that incorporates AI Act specifics.

F10. (T3C3A17) "Providers… shall put a quality management system in place." For the given use case, where the AI system is a sensor for a function with broader scope, we apply the quality management system (QMS) at the function level, similar to a hardware sensor.
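Finding F7 recommends documenting statistics that demonstrate data quality ("to the best extent possible, free of errors and complete"). A minimal sketch of such completeness statistics follows; the record fields and toy values are invented for illustration and do not come from the case study.

```python
def data_quality_report(records, required_fields):
    """Compute per-field completeness for a list of data records (dicts).
    Returns, for each required field, the fraction of records in which
    the field is present and not None (a value in [0, 1])."""
    n = len(records)
    report = {}
    for field in required_fields:
        present = sum(1 for r in records if r.get(field) is not None)
        report[field] = present / n if n else 0.0
    return report

# Toy sensor samples with one missing speed value.
samples = [
    {"speed": 13.2, "distance": 41.0},
    {"speed": None, "distance": 39.5},
    {"speed": 12.8, "distance": 40.2},
]
report = data_quality_report(samples, ["speed", "distance"])
assert report["distance"] == 1.0
assert round(report["speed"], 2) == 0.67
```

A report of this kind, extended with distribution summaries per field, could serve as the evidence document F7 asks for.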

6 Discussion and Limitations

The research questions are answered based on the gathered data and its analysis.

RQ1: How can engineers focus on the engineering-relevant aspects within the extensive and comprehensive AI Act?

The complex structure and legal language of the AI Act make it challenging for engineers to work with it directly. From an organizational perspective, it is beneficial to identify cases within their product domains to establish a case-focused selection of the relevant requirements. This can be achieved through the collaboration of expert groups comprising engineers to ensure a customer-focused approach, as well as quality and legal experts to ensure completeness. By structuring the relevant aspects and requirements, an LoD layer for the EU AI Act can be developed, which would be useful for engineers.


Discussion: The presented organization has analyzed upcoming products and services with potential AI features and grouped the results into cases addressed by an LoD layer. This approach simplifies the setup and allows for a reduction in the number of requirements specific to a particular case. However, selecting the appropriate requirements and making them understandable for engineers requires considerable effort. Limitations include the fact that this is only one of several possible approaches that could be applicable. There may be more effective approaches to focusing on the relevant aspects of the AI Act for an organization or its value-stream teams. Additionally, the analyzed use cases are limited and do not cover all real value-stream products and services.

RQ2: What is required to establish a setup for engineers in agile organizations to include and retain the relevant aspects of the AI Act in the value stream?

Engineers in agile organizations expect to work autonomously within their product- and service-specific value streams. To facilitate this, an LoD layer provided by a central and shared governance service is a viable approach. Engineers can utilize the LoD layer, selecting only what is necessary for their specific product domain. With traceability to the source in the AI Act, they can identify potential implications for their value stream. Furthermore, they can seek guidance or share improvement ideas with the experts responsible for building and maintaining the LoD layer.

Discussion: The proposed approach has been successful because the expert team was appropriately skilled and staffed. Early evaluation in the development life cycle allowed for optimal adaptation of the LoD layer to fit the scoped cases. Limitations include cases that may not align with the value streams, requiring continued support from experts. This can result in bottlenecks if experts are overwhelmed with support requests. Additionally, engineers may still overlook or misinterpret parts of the LoD layer. To mitigate this, quality assurance activities with randomized samples are necessary, especially for non-mature teams within the scope of the AI Act.

RQ3: How do engineers respond to tasks resulting from the AI Act and LoD, with a specific focus on fostering or impeding innovation?

Feedback from the development team indicates that, overall, the LoD is a helpful tool for their development process. However, they expressed a need for greater clarity regarding scope definitions and concrete development requirements. Some requirements were perceived as excessive and limiting innovation.

Discussion: The development team is accustomed to following rules and regulations, making the LoD a useful tool for deriving development requirements. Limitations include uncertainties stemming from the wording of the AI Act and the initial iteration of additions within the LoD. We suggest iterative fine-tuning of the LoD on a use-case basis, as outlined in Chapter 5, along with the addition of examples to improve clarity.
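The self-service selection described for RQ2 (teams picking only the LoD entries relevant to their levels) can be sketched as a simple filter over a shared table. The table contents and field layout below are hypothetical placeholders, not the actual LoD layer.

```python
# Hypothetical shared LoD table: (AI Act reference, LoD level, summary).
LOD_LAYER = [
    ("T1A2",    "Biz/customer",  "Scope of legislation"),
    ("T3C2A8",  "Dev",           "Compliance with requirements"),
    ("T3C2A9",  "Dev",           "Risk management system"),
    ("T3C3A16", "Ops",           "Keep automatically generated logs"),
    ("T3C3A17", "QM/Assessment", "Quality management system"),
]

def select_for_team(levels):
    """Return only the LoD entries relevant to the levels a value-stream
    team owns, keeping the AI Act reference for traceability."""
    wanted = set(levels)
    return [entry for entry in LOD_LAYER if entry[1] in wanted]

# A combined DevOps team would merge the Dev and Ops levels:
devops_view = select_for_team({"Dev", "Ops"})
assert [ref for ref, _, _ in devops_view] == ["T3C2A8", "T3C2A9", "T3C3A16"]
```

The retained reference column is what lets a team trace each selected requirement back to its source in the Act, as described above.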


7 Conclusion and Outlook

This work demonstrates the instantiation of the EU AI Act within an enterprise. Initial evaluations and feedback from projects suggest that the LoD layer EU AI Act can provide engineers with a comprehensive understanding of the topic. However, some engineers have expressed concerns about the increased documentation and additional monitoring required for AI functions, which may pose challenges compared to existing non-AI functions. This raises questions about the feasibility and business case for implementing improved AI functions, considering the non-trivial overheads imposed by the AI Act requirements.

The key contributions to practice can be summarized by the following aspects:

– LoD layer EU AI Act: Developing an LoD layer tailored to agile automotive organizations is a practical contribution. This structured layer aligns with agile teams, minimizing reliance on experts and helping engineers understand and comply with the AI Act effectively. It can be made generally available by offering it as a self-service kit to value-stream teams.

– Clarification and interpretation of AI Act requirements: Extracting, explaining, and rephrasing AI Act provisions in a comprehensible manner bridges the legal-technical gap. Engineers consult the LoD layer for optimized interpretations specific to their context, facilitating accurate application.

– Case study for validation and crosscheck: Including a case study during AI software development allows practical validation and feedback gathering. Engaging with the pre-development team helps refine and optimize the LoD layer based on real-world projects, guiding engineers effectively.

Furthermore, it is possible to focus the EU AI Act on the business domain by forming a cross-functional team consisting of legal, quality, and engineering experts. Legal experts play a crucial role in ensuring the completeness of the focused outcome, while engineering experts ensure understandability within autonomous value-stream teams.

The key contributions to theory can be summarized by the following aspects:

– Recognition that the EU AI Act is an extensive and comprehensive document that would benefit from better structuring and simplification, similar to other management system standards such as ISO 9001 or ISO 27001, or regulations like the GDPR.

– The EU AI Act lacks the clarity and refinement needed for a specific product or service, making it useful to have example cases with implementations demonstrating adequate compliance with the requirements. The complex structure of the EU AI Act requires auxiliary structures like the LoD layer to enhance understandability. Managing complexity in regulatory frameworks improves usability and facilitates comprehension.

– The AI Act may not be designed with cost-sensitive embedded systems in mind, especially without a monitoring "back-channel."

The next steps involve gathering more evaluation feedback to design the final or official version of the LoD layer EU AI Act that best meets the needs of value-stream teams. Additionally, the usefulness of developing more use-case-specific LoD layers to reduce their size should be evaluated.


Another research question stemming from these observations is whether the current set of restrictions from the EU AI Act strikes the right balance between promoting innovation and safeguarding people.


An Investigation of Green Software Engineering Martina Freed1 , Sylwia Bielinska1 , Carla Buckley1 , Andreea Coptu1 , Murat Yilmaz2 , Richard Messnarz3 , and Paul M. Clarke1,4(B) 1 School of Computing, Dublin City University, Dublin, Ireland

{martina.freed2,sylwia.bielinska2,carla.buckley38, andreea.coptu2}@mail.dcu.ie, [email protected] 2 Department of Computer Engineering, Gazi University, Ankara, Turkey [email protected] 3 ISCN, the International Software Consulting Network, Graz, Austria [email protected] 4 Lero, the Science Foundation Ireland Research Center for Software, Limerick, Ireland

Abstract. The urgency of sustainability concerns has intensified in recent years, sounding alarm bells over the planet’s condition and prompting nearly every industry and practice to reassess their contributions to the climate crisis. Software engineering is not immune to this scrutiny. Software engineering practices significantly affect the environment and may not align with sustainability goals. Although sustainability is a relatively recent focus in software engineering, it has garnered increased attention, with numerous studies addressing various concerns and practices. Green software engineering aspires to develop dependable, enduring, and sustainable software that fulfills user requirements while minimizing environmental impacts. As this green paradigm gains traction in software engineering, practitioners must incorporate sustainability considerations into future software designs. However, despite the surge in green software engineering research, a universally accepted definition and framework remain elusive. This paper outlines green software engineering by explaining its principles, challenges, and methods for measuring and evaluating software effectiveness in this context.

Keywords: sustainability · energy efficiency · software engineering · green software engineering

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 124–137, 2023. https://doi.org/10.1007/978-3-031-42307-9_10

1 Introduction

Software has the power to support the environment and create environmentally friendly solutions to processes that previously contributed to the carbon footprint and climate change [1]. However, it also has the potential to worsen the climate crisis if proper steps are not taken to manage the energy use and carbon footprint of the software itself. Green software development is becoming a discipline of its own: some have even suggested a new green software engineering (hereafter referred to as "Green SE") process that modifies the traditional software development life cycle with a focus on lowering resource use [2].

With this new discipline come new implications for software developers, new practices, new challenges, and new ways of evaluating whether software is effective with respect to sustainability objectives. Green IT has been described as a discipline that considers and optimizes the resources consumed by the life cycle of Information and Communication Technology [1]. This idea can also be applied to Green SE. However, no single universally accepted definition of Green SE has been identified in the literature. This research sets out to explain Green SE by examining its practices, concerns, challenges, and analysis.

The objectives of this paper are to:

1. Provide a contemporary understanding of Green SE in the context of the growing urgency for sustainability.
2. Examine the environmental implications of traditional software engineering practices and their alignment with sustainable objectives.
3. Review the evolution and current state of Green SE research, addressing its various concerns and practices.
4. Elucidate the principles, challenges, and potential methods for measuring and evaluating the effectiveness of Green SE.
5. Contribute to the development of a universally accepted definition and framework for Green SE, paving the way for future sustainable software design practices.

This paper is organized as follows: Sect. 2 details the research methodology, elucidating our inclusion/exclusion criteria and delineating the related research questions. Section 3 encompasses the analysis, featuring subsections addressing each of the four specific research questions. Section 4 contemplates the limitations of our study, while Sect. 5 highlights avenues for future research. Finally, Sect. 6 offers the research conclusions.

2 Research Methodology

2.1 Search Strings

A multivocal literature review (MLR) [48] was conducted for this research. While most of the sources cited are peer reviewed (published academic literature), grey literature is also included. The grey literature included is from platforms which ostensibly incorporate robust oversight and moderation. Google Scholar was used to identify the literature sources. Search strings included 'green software engineering', 'energy efficiency software', 'green software engineering practices', 'green software engineering sustainable design', 'disadvantages of green software engineering', 'cost of green software engineering', 'green software engineering universal framework', 'life cycle assessment', and 'green metrics'.

2.2 Inclusion/Exclusion

When searching on Google Scholar, we defined our inclusion/exclusion criteria to be the first 20 articles from 2019 or later. Software engineering, and especially Green SE, is an evolving discipline, so we wanted to ensure our research was current. We also took a brief look into the topics of the papers that came up to decide if they were relevant to our research questions. Although we limited our results on Google Scholar to papers since 2019, we did include some sources that were older. This was because they were cited by a paper published in 2019 or later, and they provided useful background information on the topic. 180 papers were identified as potential sources. We read the titles, key words, and/or the abstracts to evaluate the relevance and credibility. We also looked into sources that were cited by the original sources to find more information. Ultimately, we included a total of 48 sources.

To address the central research question of this paper, "What is Green Software Engineering?", we have identified the following four subsidiary research questions:

• What are the fundamental principles and practices of green software engineering?
• In what ways can software developers decrease the energy consumption of software?
• What are the primary challenges confronting green software engineering?
• How can we assess and quantify the environmental impact of green software engineering?

3 Analysis 3.1 What Are the Principles and Practices of Green Software Engineering? The growing demand for software products and services has led to an increase in the energy consumption and carbon footprint of the IT industry. As a result, it has become critical to adopt sustainable practices in software engineering to mitigate the environmental impact of the software development lifecycle [3]. The adoption of a green mindset is essential for all stakeholders, including software developers, software development organisations, end-users, and society as a whole. According to Professor San Murugesan [4], the IT sector and users must develop a positive attitude toward addressing environmental concerns and adopt forward-looking, green-friendly policies and practices. Green SE consists of a set of principles and practices aimed at reducing the environmental impact of software development. The principles are high-level guidelines that describe the core values and concepts of Green SE, while the practices are specific actions that can be taken to implement those principles. The principles provide a conceptual framework for Green SE, while the practices offer practical guidance on how to achieve the goals set out by the principles. By adopting Green SE principles and practices, software developers can aspire to reduce the carbon footprint of software development and contribute to a more sustainable future. The principles help to guide decision-making and provide a sense of direction, while the practices ensure that actions are aligned with the principles and contribute to the overall goal of environmental sustainability. This section explores the key principles and practices of Green SE. The key principles include energy efficiency, sustainable design, life cycle assessment, and renewable energy sources. For each principle, we discuss the key practices that can be implemented to achieve the goals set out by the principles. 
Energy efficiency is one of the key principles of Green SE. It involves optimizing energy usage at every stage of the software development lifecycle. This principle can be

An Investigation of Green Software Engineering

127

achieved by employing best practices such as reducing computational complexity, power management, and using cloud-based services. Minimizing computational complexity involves optimizing the algorithms and data structures used in the software to reduce the amount of processing power required. Power management involves optimizing hardware and software settings to minimize energy consumption [5]. Finally, using cloud-based services is another way to achieve energy efficiency. Cloud-based services enable software to be run on remote servers that are optimized for energy efficiency [6], so the energy required to power the software is not consumed on the user's hardware, which can be less energy-efficient.

Sustainable design is a key principle focused on designing software systems that are environmentally friendly and sustainable. Essential practices for sustainable design include minimizing energy consumption, reducing waste, and designing for the future [7]. Minimizing energy consumption involves designing software systems to be energy-efficient at every stage of the software development lifecycle, while reducing waste involves designing systems that minimize the waste generated at each stage. Designing for the future involves designing software systems that are flexible, scalable, and adaptable to future changes in technology and user needs [8].

Life cycle assessment is the principle of evaluating the environmental impact of software throughout its entire lifecycle. A comprehensive life cycle assessment involves practices such as carbon footprint analysis, energy consumption analysis, and waste generation analysis. Carbon footprint analysis involves measuring the amount of carbon emissions generated by the software at every stage of the software development lifecycle.
This can be achieved using tools such as the Software Sustainability Assessment Framework, which helps to measure the environmental impact of software [9]. Energy consumption analysis involves measuring the amount of energy consumed by the software at every stage of the software development lifecycle, which can be achieved using tools such as the Energy Consumption Analysis Tool.

Renewable energy sources are a key principle of Green SE that can help to reduce the environmental impact of software development. By using renewable energy sources, such as solar or wind power, software development can be made more sustainable, reducing the reliance on non-renewable energy sources [10]. This can be achieved through practices such as the use of green hosting services, the adoption of green data centres, and the implementation of energy-efficient hardware. Green hosting services provide data centres powered by renewable energy sources such as solar or wind power, minimizing the environmental impact of software development. Green data centres are designed to be energy-efficient and are powered by renewable energy sources [11]; they can help to reduce the carbon footprint of software development by minimizing energy consumption.

In summary, Green SE is essential for mitigating the environmental impact of the software development lifecycle. The key principles of energy efficiency, sustainable design, life cycle assessment, and renewable energy sources provide a framework for

128

M. Freed et al.

achieving environmental sustainability. The key practices associated with each principle offer practical guidance on how to achieve the goals set out by the principles, contributing to a more sustainable future.

3.2 How Can Software Developers Reduce the Energy Consumption of Software?

Energy use tends to increase with larger populations and greater reliance on technology. Reducing energy consumption has therefore become critical, as has concern about carbon emissions. Refactoring is the practice of editing code to improve its design without changing its functionality. Refactoring may mean a number of different things: making methods more concise, removing duplicate code, or shortening a class [12]. Refactoring has many potential benefits, including decreasing energy use while the program does the same job. For example, a triple refactoring combination applied to portable-device applications written in C# and Java can considerably lower power consumption [13]. By refactoring code, software developers can not only make their code more readable and elegant, but also improve its energy efficiency.

The programming languages employed in a system may also influence the energy consumption of an application. Researchers used The Computer Language Benchmarks Game framework to collect 13 problems, finding that C used the least energy, used the least memory, and was the fastest [14]. They also stated that "Compiled languages tend to be, as expected, the fastest and most energy efficient ones" [14], as opposed to interpreted and virtual-machine languages. However, faster languages are not always more energy efficient [14], and this does not mean that there is one best programming language for a piece of software when the developer is weighing energy, time, and memory usage.

When searching for Green SE, one may find many resources about the usefulness of cloud computing.
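As an illustrative sketch of the refactoring practice described above (not an example from [13]), the pair of functions below removes duplicated work without changing behaviour: a loop-invariant computation is hoisted out of the loop, and repeated list scans are replaced by a set. The function and variable names are hypothetical.

```python
import math

def normalised_distances_before(points, origin, allowed):
    """Unrefactored: recomputes an invariant and scans a list repeatedly."""
    result = []
    for p in points:
        scale = math.hypot(origin[0], origin[1])   # recomputed every iteration
        if p in allowed:                           # O(n) scan of a list
            result.append(math.dist(p, origin) / scale)
    return result

def normalised_distances_after(points, origin, allowed):
    """Refactored: invariant hoisted, membership tests via a set (O(1) avg.)."""
    scale = math.hypot(origin[0], origin[1])
    allowed_set = set(allowed)
    return [math.dist(p, origin) / scale
            for p in points if p in allowed_set]

points = [(3, 4), (6, 8), (5, 12)]
origin = (1, 1)
allowed = [(3, 4), (5, 12)]
assert normalised_distances_before(points, origin, allowed) == \
       normalised_distances_after(points, origin, allowed)
```

The refactored version produces identical output while performing strictly fewer operations per iteration, which is the sense in which such edits can let the same job consume less energy.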
Cloud computing can enable customers to pay for software on demand instead of owning all of the hardware [15]. Before the cloud, companies and individuals often had more hardware and servers than necessary. The cloud addressed this problem, since these services are delivered over the internet when they are needed, which can be cheaper and more energy efficient for companies [16]. However, simply because something uses the cloud does not mean it is energy efficient. Many large companies run cloud data centers, which consume massive amounts of energy to operate, take up space, and require significant energy to cool [15]. Power consumption of the cloud is on the rise, which inevitably contributes to carbon emissions, and decreasing carbon emissions is a central focus of Green SE [2]. Virtualization (running multiple virtual computers on one piece of hardware), consolidation (reducing the number of servers so there are fewer idle servers), a thermal-aware approach to data centers, and static and dynamic power management are some cloud-based energy-saving approaches [17].

Software-defined networking (SDN) is a similar approach to cloud computing because it "is capable of providing the solutions without the knowledge of underground complex network architecture" [18]. SDN allows software developers to manage network operation through software, in contrast to a traditional network architecture, where elements are not globally controlled [19] and specific knowledge of the infrastructure may be a necessity [20]. Because of the similarities with cloud computing, there are similar concerns about energy efficiency in software-defined networking. Using


the sleep state of end devices and managing traffic have been suggested as techniques to achieve this [18].

Beyond the energy efficiency of programs being difficult to measure, there has been uncertainty about how aware software developers are of energy efficiency. When questions regarding energy efficiency on Stack Overflow were compared with other questions on the platform, it was discovered that questions about energy efficiency are popular and diverse in theme (code design, energy use, noise, general questions), but the answers are not always of good quality [21]. Improving the answers to these questions would help software developers be prepared to write energy-efficient programs.

Although energy efficiency is difficult to measure, some researchers set out to establish commonly agreed terminology around the energy efficiency of software. They found that such terminology did not exist, leading to the creation of a Green Software Measurement Ontology (GSMO) that includes terms and definitions such as Test Case, Test Case Measurement, and Measuring Instrument [22]. A hardware-based approach was used to measure energy use, with an Energy Efficiency Tester (EET) and a software tool called ELLIOT used to process the data. The EET includes a power source, sensors, and a system microcontroller to collect the information gathered by the sensors and store it in memory. The Green Software Measurement process consists of defining the requirements, configuring the measurement environment with the measuring instrument, performing the measurement, and analysing the data for that test case [22]. The work was published in 2021, and the authors hoped that it would establish a practice for measuring energy use. Creating energy-efficient software is easier if developers can gauge the energy use of software: a focus on energy efficiency can only be useful if there is a way to measure energy use and gauge the impact.

3.3 What Are the Challenges Facing Green Software Engineering?
As the topic of environmental sustainability becomes a widespread concern throughout many areas of computer science, there has been a recent spike in research into Green SE and sustainable computing. Although green software is a topic of increasing interest within the IT industry, it is not commonly implemented. The lack of green software implementation can come down to three main challenges a company may face when adopting such technology: a lack of awareness, the lack of a universal framework, and the cost of implementation.

Green software is still a relatively new concept. Results from a recent survey indicate that sustainable software is a new concern for software engineers and that, despite high interest in the subject, they have a low perception of the impacts of sustainability throughout the development cycle [23]. A study carried out in 2017 found that, while viewing software sustainability as important, software engineers are primarily concerned with its technical aspects rather than its environmental aspects [24]. When referring to software sustainability, they address organisational and economic issues but lack consideration of environmental issues. The study revealed a diverse understanding of what sustainable software is, which suggests that a clear definition of sustainable software has yet to be refined and distributed within the IT industry. The results of this survey further reveal that software practitioners have a


skewed understanding of sustainability concepts in the software development process [23], owing to their narrow focus on code reuse as the route to sustainable software. This perception is still relevant to Green SE, as code reuse does have a positive environmental impact. However, it prevents companies from becoming green, because they focus on only one of the four major dimensions of software sustainability defined in a proposed sustainability framework [25]. Companies have yet to fully adopt these dimensions and utilise the other green models, processes, methods, and tools that can support the development of their software in a meaningful way. This is further evidenced by a recent study in which, out of nineteen software engineers interviewed, fourteen reported having worked on software that was not sustainable [26]. This shows that companies are not implementing all dimensions of green software, and are not even prioritising the technical dimension on which they place most emphasis in the development process. It follows that companies do not promote sustainable development internally [23].

While companies tend to focus on the technical aspects of sustainable software, researchers are mainly interested in proposing frameworks, approaches, and models [27]. This reveals a gap in interests between academia and industry that needs to be closed. It is important for companies in the IT industry to recognize the importance of Green SE and the benefits it can provide, in order to spread awareness within the industry and allow for higher rates of research into, and implementation of, green software.

The lack of awareness of Green SE means there is an additional deficit in the research going into developing a universal Green SE framework. While a number of frameworks have been proposed, they are not widely used in the computing industry.
This is because no universal framework exists for companies to use. Universal frameworks are an essential part of software engineering, acting as a common reference point that helps engineers develop software regardless of where in the world they are working. Without such a framework, companies that want to implement green software must research and develop their own framework to work from. Existing software engineering models such as ISO 25000 and ISO 25010 do not consider sustainability as a quality attribute [27]. As a result, over a third of organisations in Europe do not implement green IT practices, and fewer than one fifth monitor how their employees reduce their energy consumption [28]. The main reason given is that there is no official legislation in their countries enforcing green practices.

The majority of Green SE research happens in developed countries, where interest has peaked. In developing countries, little research is carried out on this topic; their focus is mostly on developing software applications and ICT products for the developed countries [29]. As there is a lack of universal interest in Green SE, there is insufficient incentive to invest research towards a universal green software framework. Such a framework is essential for better Green SE processes and would promote the implementation of green software in IT businesses and organisations.

The initial implementation cost of green technology may be significant, as there are many aspects to the implementation. Not only is it necessary to change the coding


standards in accordance with a green framework, but there are also additional payments involved in research and buying new equipment [30]. Although green technology becomes more cost-effective over time, this high upfront cost of the implementation process can deter businesses from making the switch [31]. A further cost concerns maintenance, which may increase due to the need to understand the new system and analyse the required changes. Training may also be necessary to understand the original and new programming languages, systems, and methods [32]. As there is no universal green software framework, there will be additional charges to develop a framework that can be used within the industry. The cost of designing, implementing, evaluating and deploying a framework may be of the order of c.USD$15–20 [33]. The total price of all the resources and maintenance needed for a company to convert from its current software to green software may deter it from making the switch to a more sustainable approach.

3.4 Is Green Software Engineering Part of Green System Engineering?

The European Union has supported the development of green and sustainable concepts and, in the last four years, blueprint projects [51] to develop the skills and concepts needed to move towards a green economy. European studies about the future skills [53] required to empower this development have been performed for, e.g., the automotive sector. When looking at solutions for green technologies in the automotive sector, it is obvious that the product strategy, the system life cycle, and the software life cycle are interlinked [50]; for example, software implementing an electric powertrain supports the green economy. However, this requires a system design integrating high-voltage batteries, electric motors, sensors, and software on electronic control units.
Moreover, if the mechanical design of the car has high wind resistance, the electric power in the battery is consumed inefficiently. So in fact all three layers (product, system, software life cycle) and their integration play a role in achieving a green solution [52]. A modern car, plane, or ship may be largely controlled by functions implemented in software, so software is a key means of changing the functional behavior of systems. For instance, software can switch to a green mode that reduces consumption while constraining the driver to an economy style of driving. Most cars now have over-the-air update functionality, so one strategy could be an over-the-air update of a whole fleet to drive in green economy mode, decided by, e.g., a region or government. This in fact makes software one of the most important keys to a green transition.

3.5 How Can We Measure and Quantify the Impact of Green Software Engineering on the Environment?

Green SE is a field that seeks to reduce the environmental impact of software development and operation. Measuring and quantifying the impact of Green SE on the environment is crucial for promoting sustainable development practices [34]. Quantitative and qualitative methods can be used to achieve a comprehensive understanding of the environmental impact of software development and operation [35].


Quantitative methods include life cycle assessment. Life cycle assessment involves identifying and quantifying the environmental impacts of each stage of a product or service's life cycle, from raw material extraction and processing to production, use, and disposal. In the context of Green SE, it can be applied to assess factors such as energy consumption, greenhouse gas emissions, and resource use [36]. By covering the entire life cycle of a product or service, life cycle assessment provides a comprehensive understanding of environmental impact, enabling decision-makers to identify opportunities for improvement [37].

Energy efficiency metrics such as power usage effectiveness and data center infrastructure efficiency can also be used to measure the amount of energy used by software systems during development and operation [38]. Power usage effectiveness is a ratio that compares the total energy used by a data center facility to the energy used by the IT equipment it houses [39]. Data center infrastructure efficiency is related to power usage effectiveness and expresses the IT equipment's share of the facility's energy use. These metrics can help software developers and data center operators to identify opportunities to improve energy efficiency and reduce energy consumption [40].

Carbon emissions reduction can also be quantitatively measured by calculating the reduction in greenhouse gas emissions achieved through measures such as server consolidation, virtualization, and energy-efficient hardware. Server consolidation involves reducing the number of physical servers in a data center by consolidating multiple applications onto a single server [41]. Virtualization involves creating multiple virtual servers on a single physical server, allowing for more efficient use of hardware resources [42].
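The two data-center metrics just described can be computed directly from metered energy figures. As a sketch: PUE is total facility energy divided by IT equipment energy, and DCiE is conventionally its reciprocal expressed as a percentage; the 1800/1200 kWh figures below are placeholders, not real measurements.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 would mean every joule goes to IT equipment; real facilities sit
    above that because of cooling, lighting, and power-distribution losses."""
    return total_facility_kwh / it_equipment_kwh

def dcie_percent(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Data Center infrastructure Efficiency: the reciprocal of PUE,
    conventionally expressed as a percentage."""
    return 100.0 * it_equipment_kwh / total_facility_kwh

# A facility drawing 1800 kWh in total to support 1200 kWh of IT load:
print(pue(1800.0, 1200.0))                      # 1.5
print(round(dcie_percent(1800.0, 1200.0), 1))   # 66.7
```

Tracking these two numbers over time is how operators typically demonstrate that consolidation or cooling improvements are actually reducing overhead energy.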
By reducing the number of physical servers in a data center and optimizing the use of IT equipment, carbon emissions can be reduced. Overall, these quantitative methods provide a rigorous and systematic approach to measuring the environmental impact of software development and operation. By quantifying factors such as energy consumption, greenhouse gas emissions, and resource use, decision-makers can identify opportunities to improve the environmental sustainability of software systems [41]. These methods also provide a basis for comparing the environmental performance of different software systems and evaluating the effectiveness of Green SE practices.

Qualitative methods can also provide valuable insights into the impact of Green SE on the environment; surveys, interviews, and case studies are commonly used [43]. Surveys can gather data on the attitudes and behaviours of software developers regarding Green SE practices. For example, a survey might ask developers about their awareness of energy-efficient coding practices or their use of sustainable software development tools [44]. The data collected from surveys can be used to identify trends in Green SE practices, as well as barriers to the adoption of these practices.

Interviews with stakeholders such as customers, employees, and management can provide insights into the impact of Green SE on business operations and customer satisfaction. For example, an interview with a customer might reveal that they are more likely to purchase software products developed using sustainable practices [45], while an interview with an employee might reveal that they are more likely to stay with a company that prioritises environmental sustainability. Interviews with management can


provide insights into the cost-effectiveness of Green SE practices, as well as the impact of these practices on the company's bottom line.

Case studies can provide detailed information on specific Green SE projects and their impact on the environment. For example, a case study might examine the development of an energy-efficient software application and the resulting reduction in greenhouse gas emissions [46]. Case studies can also provide insights into the challenges and opportunities associated with Green SE, as well as best practices for implementing sustainable software development [47].

Overall, qualitative methods provide a more nuanced and detailed understanding of the impact of Green SE on the environment. By gathering data on attitudes, behaviours, and specific projects, they offer valuable insights into the human and organisational factors that influence the adoption of Green SE practices [47]. These methods can also help to identify opportunities for collaboration and communication among stakeholders, as well as potential barriers to the adoption of sustainable software development practices.

In summary, Green SE is a vital field for promoting sustainable development practices. Measuring and quantifying its impact on the environment requires both quantitative and qualitative methods. Life cycle assessment, energy efficiency metrics, carbon emissions reduction, surveys, interviews, and case studies can all be combined to reach a comprehensive understanding of the environmental impact of software development and operation. Using these methods, the benefits and challenges of implementing Green SE practices can be identified, and strategies for reducing the environmental impact of software development and operation can be developed.
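As a hedged sketch of the carbon-emissions-reduction quantification discussed in this section, one widely used back-of-the-envelope approach multiplies measured energy use by the carbon intensity of the supplying grid. All figures and the consolidation scenario below are hypothetical placeholders, not results from any cited study.

```python
def operational_carbon_kg(energy_kwh: float, grid_g_co2_per_kwh: float) -> float:
    """Estimated operational emissions in kg CO2-equivalent:
    energy consumed times the carbon intensity of the electricity supply."""
    return energy_kwh * grid_g_co2_per_kwh / 1000.0

# Hypothetical: a service consuming 120 kWh/day on a 400 gCO2/kWh grid...
baseline = operational_carbon_kg(120.0, 400.0)   # 48.0 kg CO2 per day
# ...after server consolidation cuts consumption to 90 kWh/day:
improved = operational_carbon_kg(90.0, 400.0)    # 36.0 kg CO2 per day
reduction_pct = 100.0 * (baseline - improved) / baseline
print(baseline, improved, reduction_pct)         # 48.0 36.0 25.0
```

Expressing the before/after difference as a percentage gives decision-makers a single comparable figure for measures such as consolidation or virtualization, though a full life cycle assessment would also count embodied and end-of-life emissions.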

4 Research Limitations

When discussing Green SE, it is important to consider several limitations that may affect research and implementation. First, the available literature is limited and may not be well-established or robust enough to draw solid conclusions, so research may not provide a comprehensive analysis of the topic. The availability and quality of data can also constrain research, as there may not be enough reliable data to support meaningful conclusions. Moreover, the context in which Green SE practices are implemented varies with industry, organisational culture, and technology infrastructure, so research findings may not generalize across contexts. Finally, potential biases in research design and the lack of a standardized framework or set of practices may affect the validity and reliability of findings.

Another set of limitations concerns the implementation of Green SE practices in real-world settings. Even if practices are identified and validated, implementation may be challenging due to technical, organisational, or financial constraints. Research may not fully address these challenges due to time and resource constraints, leading to a narrow focus on specific aspects of Green SE, such as energy efficiency or sustainable design. In conclusion, a nuanced understanding of these limitations is crucial to interpreting research findings with appropriate caution and to developing effective strategies for implementing Green SE practices.


It is important to highlight that the initial research was undertaken by four final-year undergraduate students over a six-week period. Although preliminary training in academic research and writing was provided, the core primary researchers were essentially novices. To mitigate this risk, a senior academic was available on a weekly basis to address questions and to direct the work, and the work was later reviewed and extended by a team of senior academics. Nevertheless, the inexperience of the core researchers and the limited time frame available for the review reduce the academic completeness of the process; as such, it might be considered methodologically light, tending towards an experience report. An obvious area for improvement concerns the consistent treatment of quality characteristics in the included works, which is not well reported.

5 Directions for Future Research

These findings emerged from a six-week project; future research can address the questions with a broader scope. Ideally, a consensus regarding the definition of Green SE would be reached, along with mutually agreed methods for implementing Green SE practices. One possible avenue is further surveys of software developers to gauge their understanding of Green SE. Additionally, now that sustainability has been a concern in software engineering for at least several years, the effectiveness of Green SE practices can be evaluated, so that software practitioners can discern which practices yield the most significant environmental benefits and are worth incorporating into their work. Inefficient code and the associated algorithmic implementation can increase hardware and maintenance costs, so it is perhaps an appropriate time to refocus efforts on efficiency in existing systems. The waste can be as simple as code included in systems but never actually used, which nevertheless requires hardware resources and an associated electrical supply.

6 Conclusions

The urgency of addressing sustainability concerns has led to growing interest in Green SE, which aims to create reliable, sustainable software that meets the needs of users while reducing environmental impacts. Despite the recent spike in research, there is still no universally accepted definition or framework for Green SE. Through a multivocal literature review, this paper examines the fundamental principles and practices of Green SE, the obstacles confronting the field, and methods for curbing the energy consumption of software systems.

As software practitioners embrace the green agenda, they will need to take sustainability into account in future software design and work towards creating software that not only meets the needs of users but also minimizes the environmental impact of their work. To achieve this goal, software developers need to adopt practices such as code reuse, energy-efficient design, and sustainable software lifecycle management. Part of this task will inevitably involve embracing emerging cloud computing paradigms such as function-as-a-service (FaaS) [49]. However, implementing Green SE practices can pose significant challenges, including technical, economic, and social barriers. To overcome


these challenges, developers can leverage green metrics, quantitative and qualitative methods, and life cycle assessment to evaluate the environmental impact of their software and make data-driven decisions.

In conclusion, Green SE signifies an essential stride towards forging a sustainable future, with software developers holding a pivotal role in this pursuit. By adopting Green SE practices, developers can reduce the carbon footprint of software systems and contribute to global efforts to mitigate climate change. However, achieving this goal will require ongoing research, collaboration, and innovation in the field of software engineering.

Acknowledgements. This research is supported in part by SFI, Science Foundation Ireland (https://www.sfi.ie/) grant No. SFI 13/RC/2094 P2 to Lero, the Science Foundation Ireland Research Centre for Software.

References

1. Kern, E., Dick, M., Naumann, S., Guldner, A., Johann, T.: Green software and green software engineering – definitions, measurements, and quality aspects. In: First International Conference on Information and Communication Technologies for Sustainability, pp. 87–91. ETH Zurich, Zurich (2013)
2. Ray, S.: Green software engineering process: moving towards sustainable software product design. J. Glob. Res. Comput. Sci. 4(1), 25–29 (2013)
3. Raja, S.P.: Green computing and carbon footprint management in the IT sectors. IEEE Trans. Comput. Social Syst. 8, 1172–1177 (2021)
4. Murugesan, S.: Harnessing green IT: principles and practices. IT Prof. 10, 24–33 (2008)
5. Georgiou, S., Rizou, S., Spinellis, D.: Software development lifecycle for energy efficiency. ACM Comput. Surv. 52, 1–33 (2019)
6. Chauhan, N.S., Saxena, A.: A green software development life cycle for cloud computing. IT Prof. 15, 28–34 (2013)
7. Saputri, T.R., Lee, S.-W.: Integrated framework for incorporating sustainability design in software engineering life-cycle: an empirical study. Inf. Softw. Technol. 129, 106407 (2021)
8. Moises, A.C., Malucelli, A., Reinehr, S.: Practices of energy consumption for sustainable software engineering. In: 2018 Ninth International Green and Sustainable Computing Conference (IGSC) (2018)
9. Erdélyi, K.: Special factors of development of green software supporting eco sustainability. In: 2013 IEEE 11th International Symposium on Intelligent Systems and Informatics (SISY), Subotica, Serbia, pp. 337–340 (2013)
10. Verdecchia, R., Lago, P., Ebert, C., de Vries, C.: Green IT and green software. IEEE Softw. 38(6), 7–15 (2021)
11. Yuan, H., Liu, H., Bi, J., Zhou, M.C.: Revenue and energy cost-optimized biobjective task scheduling for green cloud data centers. IEEE Trans. Autom. Sci. Eng. 18, 817–830 (2021)
12. Fowler, M.: Refactoring. Addison-Wesley Professional, Boston (1999)
13. Şanlıalp, İ., Öztürk, M.M., Yiğit, T.: Energy efficiency analysis of code refactoring techniques for green and sustainable software in portable devices. Electronics 11(3), 442 (2022)
14. Pereira, R., et al.: Ranking programming languages by energy efficiency. Sci. Comput. Program. 205, 102609 (2021)


15. Jain, A., Mishra, M., Peddoju, S.K., Jain, N.: Energy efficient computing – green cloud computing. In: 2013 International Conference on Energy Efficient Technologies for Sustainability, pp. 978–982. IEEE, Nagercoil (2013)
16. What is cloud computing? https://aws.amazon.com/what-is-cloud-computing/. Accessed 22 Dec 2023
17. Bharany, S., et al.: A systematic survey on energy-efficient techniques in sustainable cloud computing. Sustainability 14(10), 6256 (2022)
18. Rout, S., Sahoo, K.S., Patra, S.S., Sahoo, B., Puthal, D.: Energy efficiency in software defined networking: a survey. SN Computer Science 2(4), 308 (2021)
19. Singh, S., Jha, R.K.: A survey on software defined networking: architecture for next generation network. J. Netw. Syst. Manag. 25, 321–374 (2017)
20. What is Software-Defined Networking (SDN)? https://www.vmware.com/topics/glossary/content/software-defined-networking.html. Accessed 22 Dec 2023
21. Pinto, G., Castor, F., Liu, Y.D.: Mining questions about software energy consumption. In: Proceedings of the 11th Working Conference on Mining Software Repositories, pp. 22–31. Association for Computing Machinery, Hyderabad (2014)
22. Mancebo, J., Calero, C., García, F., Moraga, M.Á., de Guzmán, I.G.R.: FEETINGS: framework for energy efficiency testing to improve environmental goal of the software. Sustain. Comput. Inf. Syst. 30, 100558 (2021)
23. Karita, L., Mourão, B.C., Machado, I.C.: Software industry awareness on green and sustainable software engineering: a state-of-the-practice survey. In: SBES (2019)
24. Groher, I., Weinreich, R.: An interview study on sustainability concerns in software development projects. In: 2017 43rd Euromicro Conference on Software Engineering and Advanced Applications (2017)
25. Lago, P., Akinli Kocak, S., Crnkovic, I., Penzenstadler, B.: Framing sustainability as a property of software quality. Commun. ACM 58, 70–78 (2015)
26. Souza, M.R., Haines, R., Vigo, M., Jay, C.: What makes research software sustainable? An interview study with research software engineers. In: 2019 IEEE/ACM 12th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE) (2019)
27. Mourão, B.C., Karita, L., Machado, I.C.: Green and sustainable software engineering – a systematic mapping study. In: SBQS (2018)
28. Lago, P., Gu, Q., Bozzelli, P.: A systematic literature review of green software metrics. VU Technical Report (2014)
29. Kumar, A.: An empirical study on green and sustainable software engineering. In: 14th WSEAS International Conference on Software Engineering, Parallel and Distributed Systems (SEPADS 2015), vol. 27 (2015)
30. Iravani, A., Hasan, M., Zohoori, M.: Advantages and disadvantages of green technology; goals, challenges and strengths. Int. J. Sci. Eng. Appl. 6(09) (2017). ISSN 2319-7560
31. Applover.com: Pros and cons of green computing – is it worth the cost? https://applover.com/blog/pros-and-cons-of-green-computing-is-it-worth-the-cost/. Accessed 23 Feb 2023
32. Ibrahim, S.R.A., Yahaya, J., Salehudin, H., Deraman, A.: The development of green software process model: a qualitative design and pilot study. (IJACSA) Int. J. Adv. Comput. Sci. Appl. 12(8), 1–10 (2021)
33. David, O., et al.: A software engineering perspective on environmental modelling framework design: the object modeling system. Environ. Model. Softw. 39, 201–213 (2013)
34. Calero, C., Piattini, M.: Introduction to green in software engineering. In: Green in Software Engineering, pp. 3–27 (2015)
35. Turkin, I., Vykhodets, Y.: Software engineering master's program and green IT: the design of the software engineering sustainability course, Kyiv, Ukraine, pp. 662–666 (2018)
36. Mohankumar, M., Anand Kumar, M.: A green IT star model approach for software development life cycle. Int. J. Adv. Technol. Eng. Sci. 03(01), 548–559 (2015)

An Investigation of Green Software Engineering

137

37. Wolfram, N., Lago, P., Osborne, F.: Sustainability in software engineering. In: Sustainable Internet and ICT for Sustainability, pp. 1–7. SustainIT, Funchal (2017) 38. Kern, E., Guldner, A., Naumann, S.: Including software aspects in green it: How to create awareness for Green Software issues. In: Green IT Engineering: Social, Business and Industrial Applications, pp. 3–20 (2018) 39. Forti, S., Brogi, A.: Green application placement in the cloud-iot continuum. In: Practical Aspects of Declarative Languages, pp. 208–217 (2022) 40. Ganesan, M., Kor. A-L., Pattinson, C., Rondeau, E.: Green Cloud Software Engineering for big data processing. Sustainability 12, 9255 (2020) 41. Almusawi, S.M.Y., Khalefa, M.S.: Study of knowledge management framework to enhance Enterprise Resource Planning system in Green software development process. In: International Conference on Communication & Information Technology (ICICT) , Basrah, Ira, pp. 1–6 (2021) 42. Kern, E., Silva, S., Guldner, A.: Assessing the sustainability performance of Sustainability Management software. Technologies 6(3), 88 (2018) 43. Ahmad Ibrahim, S.R., Yahaya, J., Sallehudin, H.: Green software process factors: a qualitative study. Sustainability 14, 11180 (2022) 44. Abdalkareem, R., Mujahid, S., Shihab, E., Rilling, J.: Which commits can be CI skipped? IEEE Trans. Softw. Eng. 47(3), 448–463 (2021) 45. Raisian, K., Yahaya, J., Deraman, A.: Current challenges and conceptual model of green and sustainable software engineering. J. Theor. Appl. Inf. Technol. 94(2), 428–443 (2016) 46. Shahin, M., Zahedi, M., Babar, M.A., Zhu, L.: An empirical study of architecting for continuous delivery and deployment. Empir. Softw. Eng. 24(3), 1061–1108 (2018). https://doi.org/ 10.1007/s10664-018-9651-4 47. Turkin, I., Vykhodets, Y.: Software engineering sustainability education in compliance with industrial standards and green IT concept. In: Green IT Engineering: Social, Business and Industrial Applications, pp. 
579–604 (2018) 48. Garousi, V., Felderer, M., Mäntylä, M.V.: Guidelines for including grey literature and conducting multivocal literature reviews in software engineering. Inf. Softw. Technol. 106, 101–121 (2019). ISSN 0950–5849 49. Grogan, J.: A multivocal literature review of function-as-a-service (faas) infrastructures and implications for software developers. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 58–75. Springer, Cham (2020). https://doi.org/10. 1007/978-3-030-56441-4_5 50. Messnarz, R., Much, A., Kreiner, C., Biro, M., Gorner, J.: Need for the continuous evolution of systems engineering practices for modern vehicle engineering. In: Stolfa, J., Stolfa, S., O’Connor, R.V., Messnarz, R. (eds.) EuroSPI 2017. CCIS, vol. 748, pp. 439–452. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64218-5_36 51. Stolfa, J., et al.: DRIVES—EU blueprint project for the automotive sector—a literature review of drivers of change in automotive industry. J. Softw. Evol. Process 32(3), e2222 (2020) 52. Messnarz, R., Ekert, D., Grunert, F., Blume, A.: Cross-cutting approach to integrate functional and material design in a system architectural design – example of an electric powertrain. In: Walker, A., O’Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 322–338. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_25 53. Makkar, S.S., et al.: Automotive skills alliance—from idea to example of Sys/SW international standards group implementation. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) Systems, Software and Services Process Improvement. EuroSPI 2022. Communications in Computer and Information Science, vol. 1646, pp. 125–134. Springer, Cham (2022). https:// doi.org/10.1007/978-3-031-15559-8_9

Developing a Blueprint for Vocational Qualification of Blockchain Specialists Under the European CHAISE Initiative

Giorina Maratsi1(B), Hanna Schösler1, Andreas Riel2,3, Dionysios Solomos4, Parisa Ghodous5, and Raimundas Matulevičius6

1 ACQUIN e.V., 95448 Bayreuth, Germany
{maratsi,schoesler}@acquin.org
2 Univ. Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, 38000 Grenoble, France
[email protected], [email protected]
3 ECQA GmbH, 3500 Krems, Austria
4 EXELIA E.E., Athens, Greece
[email protected]
5 Université Claude Bernard Lyon 1, Lyon, France
[email protected]
6 University of Tartu, Tartu, Estonia
[email protected]

Abstract. The EU Blockchain strategy acknowledges the disruptive potential of Blockchain technology for trustful data sharing based on the common European values of data protection and sustainability. Despite the rapid growth of Blockchain application areas, the skills supply and the educational programs offered in the market fall short of demand. Establishing a blueprint that addresses the need for harmonized occupational profiles in the Blockchain area is among the goals of the Erasmus+ CHAISE project. Based on both quantitative and qualitative research, this article elaborates on the aspects that need to be incorporated when creating an EU-wide Blockchain blueprint. A holistic skill perspective that bridges technological, managerial, and transversal skills, as well as the involvement of different actors from education, the market and accreditation, are crucial for the blueprint's uptake at EU and national level.

Keywords: Blockchain Technology · Blockchain Skills · Blueprint · Education · Lifelong Learning · Certification · Sector Skill Alliances

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 138–155, 2023. https://doi.org/10.1007/978-3-031-42307-9_11

1 Introduction

The EU Blockchain strategy emphasizes the leading role Europe can play as an innovator in distributed ledger technologies and a trusted host for platforms and applications [1]. The ability of Blockchain technology to establish trust among people and organizations without the involvement of a third-party authority provides a revolutionary approach for


data sharing in online transactions that is based on European values and the European legal and regulatory framework. These values include environmental sustainability, data protection, digital identities, information security and interoperability of systems [1].

The European Commission introduced in 2016, and revised in 2020, the blueprint for sectoral cooperation on skills, with a view to supporting Cedefop's Skills Intelligence by developing sector skills strategies, designing education and training solutions for new occupations, and making use of EU tools such as the European Qualifications Framework (EQF), the European Skills, Competences, Qualifications and Occupations classification (ESCO), the European Quality Assurance in Vocational Education and Training (EQAVET) and Europass [2]. The CHAISE consortium, representing 23 organizations (plus 5 associated) across 15 European countries, has been commissioned by the Erasmus+ program to develop a blueprint of the new qualifications that have emerged in the Blockchain area and to create a roadmap for the establishment of a Blockchain specialization qualification [3]. The creation of the first such blueprint across Europe has brought together expertise from market representatives, education and training providers, accreditation and certification entities, ministries and regulatory bodies, embracing a holistic approach based on systems thinking, which makes the project a promising candidate for innovation in the Blockchain arena. The essential research challenge for the consortium therefore lies in identifying the elements that constitute such a blueprint. The holistic approach of the blueprint is an attempt to map occupational profiles to their ESCO equivalents, create a reference skill set for the mapped profiles, and establish accreditation requirements and conditions.
The basis of the blueprint has been research into Blockchain market needs in terms of skills, the consolidation of these into a skills framework, the development of a 5-semester Blockchain VET program, and the development of a European Blockchain qualification strategy [4].

2 The Problem Background

Even though Blockchain is acknowledged among the EU's key priorities, with spending totalling $30.6 billion and 30% of European organizations planning to use Blockchain technology in the coming years [5], there is a lack of educational programs and courses offered in the market [6]. The CHAISE consortium's study of the Blockchain market revealed a shortage of talent despite the high demand in the sector, and stakeholders emphasized the need to extend technological skills with business and transversal skills [7]. These conclusions have led to the emergence of three profiles: the Blockchain developer, the Blockchain architect and the Blockchain manager.

The shortage of talent and educated workforce triggers the need to create new, or update existing, professional outlines and educational program specification frameworks [8]. The creation of professional outlines is not a trivial task. Since regulatory frameworks and educational programs in the Blockchain area vary from country to country, and publicly accessible registers are difficult to find [9], national authorities are challenged to update existing occupational profiles and educational frameworks in time [8]. Other challenges include the use of common reference points for the qualifications (described in learning outcomes, using common verbs based on Bloom's taxonomy [10]), the categorization and structure of the terms


and concepts used, their accessibility and interoperability, as well as the use of coherent skills inventories among EU countries [11]. Lastly, existing education programs fail to incorporate both technical and non-technical knowledge [8].

Addressing the aforementioned challenges in the labour and education market across Europe, as well as working towards the alignment of occupational profiles based on the ESCO database [12], are at the heart of the CHAISE initiative. After a decade of development, ESCO provides information on about 2,942 occupations and descriptions of 13,485 skills/knowledge concepts; academic research is attempting to expand the use of ESCO as a source for data analysis (including through artificial intelligence and text-mining techniques), and companies are experimenting with the development of skills assessment tools based on the ESCO classification [13]. With this research, the authors target both the educational gap and the need for a roadmap for establishing Blockchain qualifications (described as a blueprint), with a view to making Blockchain training accessible to interested parties.

3 Research Question and Methodology

The general objective of this paper is to identify and outline the characteristics of a blueprint for the Blockchain sector. More specifically, it aims to contribute to the harmonization of occupational requirements and the recognition of skills for Blockchain specialists at European level by providing a roadmap for the three occupational profiles identified by the CHAISE consortium through qualitative and quantitative research [8]: Blockchain developer, Blockchain architect and Blockchain manager. The target group of the blueprint includes qualification and accreditation bodies across Europe, qualification experts, VET providers, VET trainers and curricula designers in the ICT field, as well as VET learners.

Based on the underlying mission explained in the introduction and background, the key research question this research attempts to answer is the following: What are the necessary aspects that need to be incorporated in the blueprint for Blockchain occupational profiles? To answer this question, we examined insights from the multiple studies conducted during 2020–2023 by the CHAISE consortium [4]. Tables 1 and 2 show the methodological approach used to derive the elements that constitute the blueprint (adapted from [8]).


Table 1. CHAISE research methodology

Blockchain labour market characteristics
• Quantitative research data: survey data from EU organizations; desk research; database research
• Qualitative research data: interviews

Blockchain Skills Capacity EU (Blockchain skill demand and supply)
• Skills needs today and in the future
• Skill development strategies
• Skill typology
• Training system of the future

Table 2. Key elements of the CHAISE research methodology

Definition of Skills Mismatches for the European Blockchain Sector:
• Study on Blockchain labour market characteristics: data collection and analysis of data from official statistical databases
• Study on Blockchain skill demand: in-depth expert interviews on skills in the BC sector; desk research on Blockchain skill needs evidence; analyses of BC-relevant online job vacancies; European Survey on BC Skills; expert validation
• Registry of Blockchain job ads: study on relevant online job vacancies
• Study on Blockchain skill supply: in-depth expert interviews with education and training providers of BC programs in Europe; analysis of existing formal and non-formal BC education provision; analysis of participation in BC-related VET training programs; European Survey on BC Skills; expert validation
• Registry of Blockchain educational and training offerings: desk research on online BC ICT communities and fora

All elements build on the common Research Methodology on Labour Market and Skills Intelligence.

4 Scope and Structure of the CHAISE Blueprint

The proposed blueprint report adheres to the EU quality assurance framework (EQF, ECVET, ESCO, EQAVET), which constitutes the backbone of its structure, in alignment with and respect for the nationally regulated occupational profiles within the EU countries. The EU VET policy framework for 2021–2025 aims to further develop and recognize common qualifications frameworks, which in turn lead to stronger cooperation among the


participating countries, transparency, and the facilitation of learner and staff mobility. According to a Cedefop report [14], the majority of EU countries have endorsed the eight levels of qualifications, with some exceptions such as Slovenia (ten levels) and Spain (eight proposed). In the absence of a blueprint in the Blockchain sector [2], the CHAISE partnership examined the content of methodological approaches for developing program specification frameworks [8] and related ICT occupational profiles, along with a literature review of Cedefop publications on training qualification methodologies across Europe [15, 16]. These influenced the proposed content of the blueprint in terms of the three occupational profiles identified by the CHAISE consortium (Sect. 5), the VET program specifications and methods of delivery (Sect. 6), certification pathways (Sect. 7), requirements for training providers including apprentices (Sect. 8) and the planning of future steps (Sect. 9).

5 Occupational Profiles

In this chapter, the authors describe the three occupational profiles in terms of ESCO classification, workplace requirements and employment opportunities. ESCO is the multilingual classification of European Skills, Competences, Qualifications and Occupations; its first full version was launched in July 2017, having started as a stakeholder consultation project back in 2010. It describes the occupations and the knowledge, skills and competences of all sectors and levels within the European labour market, aiming to close the gap between the world of work and education and to develop a shared and transparent understanding of occupations and skills among member states [17]. ESCO is divided into three interconnected pillars: the occupations, the skills and competences (or skills pillar) and, lastly, the qualifications. The development of the ESCO qualifications pillar is an ongoing process that is populated with qualifications from national databases.

5.1 Blockchain Developer

5.1.1 ESCO Classification

According to the ESCO classification, Blockchain developers: "implement or program blockchain-based software systems based on specifications and designs by using programming languages, tools, and blockchain platforms" [12]. Target groups (including ESCO classification) include: (25-) ICT Professionals, for example (2512) Software Developers, (2513) Web and Multimedia Developers, (2519) Software and Applications Developers and Analysts Not Elsewhere Classified, (2521) Database Designers and Administrators, and (2529) Database and Network Professionals Not Elsewhere Classified.

5.1.2 Workplace Requirements

CHAISE research on skills mismatches [8] identified a large diversity in Blockchain strategy and regulation maturity across Europe. By conducting a topography of Blockchain occupations and job profiles in EU countries, the study gathers the


characteristics of the Blockchain professional profiles in terms of skills needed in the working environment. Table 3 describes the skills of the Blockchain developer.

Table 3. Workplace requirements adapted from [8]

Technical & Blockchain-specific Skills
• Coding (C++, Java, Python)
• Cryptography Development
• Smart Contract Development
• Distributed Network Engineering skills
• Frontend & Backend Development
• Development of decentralized Apps
• Maths and Stats
• Protocol Engineering
• Blockchain Solution Design

Professional / Business Skills
• Product Development skills
• Product Management skills
• Skills in Legal & Compliance matters
• Finance and Controlling skills
• Human Resources Development skills
• Customer Success Design
• Affiliate Marketing
• Marketing skills

Transversal Future Skills
• Self-efficacy & Self-confidence
• Self-determination & Autonomy
• Self-management / organization / regulation & Self-responsibility
• Cooperation Competence
• Communication Competence
• Decision-making Competence & taking Responsibility
• Initiative and Performance competence
• Ambiguity competence
• Design Thinking competence
• Innovation & Creativity competence
• Future orientation & Willingness to Change

To identify the daily routine of Blockchain developers, we based our findings on the CHAISE Registry of Blockchain online job vacancies, which comprises 338 records from 13 partnership countries [18]. The daily routine of Blockchain developers includes:

• Develop and improve blockchain algorithms (coding);
• Define core protocols of a blockchain ecosystem;
• Develop clients;
• Write smart contracts;
• Experiment with consensus mechanisms;
• Debug software;
• Interpret technical requirements;
• Provide technical documentation;
• Use software design patterns;
• Use software libraries;
• Utilise computer-aided software engineering tools.


5.1.3 Employment Opportunities

Blockchain skills are often embedded within the following job profiles: Software Engineer, Full Stack Engineer, Java Software Engineer and Back End Developer.

5.2 Blockchain Architect

5.2.1 ESCO Classification

According to the ESCO classification, Blockchain architects: "are ICT system architects that are specialized in blockchain-based solutions. They design architecture, components, modules, interfaces, and data for a decentralized system to meet specified requirements." [12]. Target groups (including ESCO classification) include: (25-) ICT Professionals, for example (2511) ICT System Architects, (2512) Software Developers, (2513) Web and Multimedia Developers, (2514) Applications Programmers, (2519) Software and Applications Developers and Analysts Not Elsewhere Classified, (2521) Database Designers and Administrators, and (2529) Database and Network Professionals Not Elsewhere Classified.

5.2.2 Workplace Requirements

Based on CHAISE research on skills mismatches [8], Table 4 describes the skills of the Blockchain architect. The daily routine of Blockchain architects, as identified in the CHAISE Registry of Blockchain online job vacancies [18], includes:

• Develop blockchain infrastructures;
• Design architecture, components, modules, interfaces and data for a decentralized system;
• Choose a development platform;
• Determine functionalities;
• Develop prototypes;
• Add privacy features;
• Improve UX;
• Define technical requirements;
• Interpret technical requirements;
• Create business process models;
• Design information systems;
• Define software architecture;
• Analyze ICT systems.

5.2.3 Employment Opportunities

Blockchain skills are often embedded within the following job profiles: Software Engineer, Full Stack Engineer, Java Software Engineer and Back End Developer.


Table 4. Workplace requirements adapted from [8]

Technical & Blockchain-specific Skills
• Blockchain Solution Design
• Data / Network Security Design
• Cloud Infrastructure Design
A basic understanding of:
• Cryptography Development
• Distributed Network Engineering skills
• Smart Contract Development
• Development of Decentralized Apps

Professional / Business Skills
• Business Needs Analysis
• BC Use Case Development
• Product Development skills
• Product Management skills
• Skills in Legal & Compliance matters

Transversal Future Skills
• Learning literacy & Metacognitive skills
• Self-efficacy & Self-confidence
• Self-determination & Autonomy
• Decision Competence & Responsibility-taking
• Design-thinking Competence
• Innovation & Creativity skills
• System & Networked Thinking
• Future Mindset & Willingness to Change
• Cooperation Competence
• Communication Competence
• Ambiguity Competence

5.3 Blockchain Manager

5.3.1 ESCO Classification

The Blockchain manager is not yet listed in the ESCO classification. Target groups (including ESCO classification) include: (24-) Business and Administration Professionals, for example (2412) Financial and Investment Advisers, (2413) Financial Analysts, (2421) Management and Organization Analysts, and (2434) ICT Sales Professionals.

5.3.2 Workplace Requirements

Based on CHAISE research on skills mismatches [8], Table 5 describes the skills of the Blockchain manager. The daily routine of Blockchain managers, as identified in the CHAISE Registry of Blockchain online job vacancies [18], includes:

• Develop blockchain implementation strategies, vision and goals;
• Collaborate and communicate with customers, developers and system architects;
• Work with project and product management tools;
• Lead business analyses;
• Monitor human resources, finance and controlling;
• Conduct sales and marketing (analyses).

Table 5. Workplace requirements adapted from [8]

Technical & Blockchain-specific Skills
A general technical understanding of:
• Blockchain Solution Design
• Data Analysis
• Protocol Engineering
• Smart Contract Development
• Development of Decentralized Apps
• Maths & Stats

Professional / Business Skills
• Business (Needs) Analysis
• Business Development Skills
• Product Development Skills
• Product Management Skills
• Finance and Controlling Skills
• Human Resources Development Skills
• Customer Success Design
• Affiliate Marketing
• Marketing Skills
• BC Use Case Development
• Skills in Legal & Compliance matters

Transversal Future Skills
• Self-efficacy & Self-confidence
• Self-management / organization / regulation & Self-responsibility
• Decision Competence & Responsibility-taking
• Initiative and performance competence
• Ambiguity Competence
• Ethics & Environmental competence
• Innovation & Creativity skills
• Sensemaking
• Future Mindset & Willingness to Change
• Cooperation Competence
• Communication Competence

5.3.3 Employment Opportunities

Blockchain skills are often embedded within the following areas: the ICT sector (information technology and services; computer software; internet; telecommunications), the financial sector (financial services; banking industry), sales, marketing and advertising, management consulting and research.

6 Program Specifications and Delivery

6.1 Program Outline

The proposed design of the program outline was influenced by the description of the occupational profiles in Chapter 5 in terms of technical, business and transversal future skills [8], by the study on Blockchain skill supply profiling graduates in ICT and, where data were available, in the Blockchain field [19], and, lastly, by the registry of Blockchain educational and training offerings among the CHAISE participating countries [20]. The CHAISE VET program has been designed at EQF level 5, with methodological and conceptual advice for each module so that it can be easily adapted to EQF level 6. Following EU standards, the CHAISE Blockchain VET program has a 5-semester duration, broken down into 4 semesters of classroom and lab-based learning (up to 1,200 teaching hours) and 1 semester of work-based learning (up to 900 hours) [21]. The curriculum structure includes the following lectures within each module [22] (Table 6):


Table 6. Illustration of lectures per module adapted from [22] 1. 2.

3.

4.

Modules Introduction to Blockchain Technology Regulation, Legal aspects, and Governance of Blockchain Systems

Fundamentals of Blockchain and Distributed Ledger Technology Blockchain Business Management and Planning

• • • • • • • • • • • • • •

5.

Blockchain Security and Digital Identity

• • • •

6.

Blockchain System Architecture and Consensus Protocols Blockchain Platforms

• • •

8.

Marketing and Customer Support

• •

9.

Applied Cryptography

7.

10. Smart Contracts

• • • •

• • • • • • •

Lectures per module Introduction to Blockchain Technology Blockchain History and Future Blockchain basics to set the regulation and governance context and requirements Governance and regulation background Blockchain ecosystem Regulation strategy Blockchain governance Blockchain as a regulation mean for GDPR Information and communications systems for decentralized solutions - Part 1 & 2 Blockchain components and characteristics Distributed information systems and their information security management principles The Blockchain Sector - An industry overview of Blockchain use cases and applications and scenarios (good practices) Applied Digital Ethics & Technology Assessment for Blockchain Fundamentals of business management methods (applied to Blockchain use cases) - Part 1 & 2 Blockchain Honeypots Smart contract security Security risks analysis of blockchain-based applications Identity management and access control models of blockchain-based applications Basics in blockchain system architecture - Part 1 & 2 Different consensus protocols DLT examples Overview of platform characteristics Performance and Scaling Ethereum platform and ecosystem Comparison of selected platforms: IOTA, Hyperledger, others Use of Blockchain in Marketing Marketing for Blockchain (applied to Blockchain use cases) Marketing and Customer Support - Part 1 & 2 Cryptographic paradigms Hash concept Hashes in blockchain Zero knowledge and blockchain Building simple smart contracts Interacting with the blockchain through smart contracts

(continued)

148

G. Maratsi et al. Table 6. (continued) Modules

11. Developing Use Cases: From Ideas to Service 12. Game Theory in Blockchains

• • • • • • • • • •

Lectures per module Building more advanced smart contracts Tokenizing assets with blockchain Business Model for Blockchain Use Case Blockchain Use Case Redesign Blockchain Use Case MVP Blockchain Use Case Roadmap Basic remote purchase Extended remote purchase Game theory approach for fees Game theory behind Proof of Stake (PoS)
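The core idea behind several of these lectures, in particular "Blockchain components and characteristics" (module 3) and "Hashes in blockchain" (module 9), can be sketched in a few lines. The following self-contained Python example is our own illustration, not CHAISE teaching material: blocks are linked by SHA-256 hashes, so altering any historical block invalidates the chain.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form, excluding its own hash field.
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(index: int, data: str, prev_hash: str) -> dict:
    block = {"index": index, "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False                      # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                      # link to predecessor broken
    return True

genesis = make_block(0, "genesis", "0" * 64)
chain = [genesis, make_block(1, "pay 5 to bob", genesis["hash"])]
print(chain_is_valid(chain))   # True
chain[0]["data"] = "pay 500 to mallory"       # tamper with history
print(chain_is_valid(chain))   # False
```

Production ledgers add consensus protocols, digital signatures and Merkle-tree transaction digests on top of this basic hash-linking, which is exactly what the later modules cover.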

6.2 Learning Outcomes

Learning outcomes are statements of what a learner knows, understands and is able to do on completion of a learning process, defined in terms of knowledge, skills and competences [23]. In the CHAISE VET program, educational modules are described in terms of technical and Blockchain-specific skills, business skills and transversal skills. The alignment of learning outcomes with the three occupational profiles, Blockchain Architect (A), Blockchain Developer (D) and Blockchain Manager (M), is described in Table 7.

Table 7. Educational modules for Blockchain job profiles adapted from [22]

• Transversal Skills (M, A, D): 1. Introduction to Blockchain Technology; 2. Regulation, Legal Aspects and Governance of Blockchain Systems
• Technical Basics (D, A, M): 3. Fundamentals of Blockchain and Distributed Ledger Technologies
• Business Basics (M, A, D): 4. Blockchain Business Management and Planning
• Technical Blockchain Specialisation (D, A): 5. Blockchain Security and Digital Identity; 6. Blockchain System Architecture & Consensus Protocols
• Business Blockchain Specialisation (M): 7. Blockchain Platforms; 8. Marketing and Customer Support
• BC Conception & Use Case Development (A): 9. Applied Cryptography
• BC Engineering & Development (D): 10. Smart Contracts and Digital Currency Programming
• Strategic Business Management (A, M): 11. Developing use cases: From ideas to services
• Operational Business Management (D, M): 12. Game Theory in Blockchain


More specifically, to gain the required professional skills:

1. The Blockchain architect should learn (3), (5), (6) and (9). To acquire business skills, the Blockchain architect should study (4) and (11). Transversal skills are included in (1) and (2).
2. The Blockchain developer should learn (3), (5), (6) and (10). To acquire business skills, the Blockchain developer should study (4) and (12). Transversal skills are included in (1) and (2).
3. The Blockchain manager should learn (3). To acquire business skills, the Blockchain manager should study (4), (7), (8), (11) and (12). Transversal skills are included in (1) and (2).

6.3 Entry Requirements

For enrolling in the MOOC that constitutes the theoretical part of the VET program, no specific knowledge or experience is required for any of the three targeted profiles. For completing the practical assessments and case studies, prior knowledge of ICT, distributed systems, databases, information security or cybersecurity is desirable. Experience can be demonstrated by participation in Blockchain projects for a period of two years. The CHAISE Validation Committee is responsible for designing and approving the relevant criteria. More specifically, the following backgrounds are desirable for the three targeted profiles:

• Blockchain developer: strong IT and programming background;
• Blockchain architect: IT solution development, linking DLTs to business transformation;
• Blockchain manager: strong background in networked IT applications, Customer Relationship Management (CRM) and Enterprise Resource Planning (ERP).

No specific age restriction is imposed.

6.4 Thematic Coverage

The thematic coverage per module is diversified across the three knowledge domains comprising technical, business and transversal skills. The technical knowledge covers (indicatively):

• ICT systems for decentralized solutions: big data, AI, extended reality, internet communication and applications, IoT, and cloud computing.
• Distributed information systems and information security: public-key cryptography, hashing, Merkle trees.
• Blockchain security and digital identities: honeypot placement, smart contract honeypots and security issues, identity management and access control.
• Blockchain system architecture and consensus protocols: data management patterns such as on-chain and off-chain data storage, tokenization, consensus protocols, and DLT examples from industry.

150

G. Maratsi et al.

• Applied cryptography: symmetric cryptography, data encryption standards (DES), public-key cryptography, hash functions, and zero-knowledge proofs.

The business knowledge covers (indicatively):
• Blockchain business management and planning: overview of the Blockchain sector from industry, applied digital ethics, Blockchain business use cases, Blockchain decision models, product and value proposition design, technology assessment methods and scenario planning.
• Marketing and customer support: strategic management planning, marketing canvas, ethical design framework.
• Blockchain sustainability: environmental, social and governance (ESG) factors, Green Blockchain Decision Framework.

The transversal knowledge covers (indicatively):
• Blockchain history: from the first Blockchain protocols to Blockchain 4.0.
• Regulation and legal aspects: legal environment, governmental regulations, implications of blockchain technology, collaborative distributed organizations, transaction-based models, ecosystem characteristics.

The European Commission strongly supports the creation of EU-wide rules for blockchain in an attempt to tackle legal and regulatory misuse, and has adopted a series of legislative proposals for regulating crypto-assets (such as financial market rules and a legal framework for regulatory sandboxes of financial supervisors) [24]. The CHAISE VET curriculum puts special emphasis on the legal framework and addresses this regulatory challenge mainly in module 2 (Regulation, Legal Aspects, and Governance of Blockchain Systems) and module 4 (Blockchain Business Management and Planning).

6.5 Delivery Methods

The CHAISE VET program accommodates three different modes of delivery: a) classroom-based, b) blended (classroom and traineeship combined), and c) distance online learning via a Massive Open Online Course (MOOC).
The course material consists of (video) presentations and lecture notes, practical exercises and case studies, which trainers can adapt to their needs in the classroom or in an online learning environment. Practical exercises and case studies can be delivered in a lab environment and offer learners hands-on practical experience. Cedefop guidelines in lifelong learning [25] highly recommend the facilitation of group learning. A group setting also serves to train important soft skills, among others self-confidence, cooperation, and communication skills.
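To illustrate what such a hands-on lab exercise might look like for the cryptography topics listed earlier (hashing, Merkle trees), the following is a minimal sketch in Python. It is purely illustrative: the function names and leaf data are invented for this example and are not part of the official CHAISE material.

```python
# Illustrative lab exercise: computing a Merkle root over a list of
# data items, as covered by the "hashing, Merkle trees" module topic.
import hashlib


def sha256(data: bytes) -> bytes:
    """One SHA-256 round over raw bytes."""
    return hashlib.sha256(data).digest()


def merkle_root(leaves: list[bytes]) -> bytes:
    """Hash the leaves, then combine digests pair-wise upward
    until a single root digest remains."""
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]


root = merkle_root([b"tx1", b"tx2", b"tx3"])
print(root.hex())
```

An exercise built on such a sketch can ask learners to verify that changing any single leaf changes the root, which motivates the use of Merkle trees for tamper-evident data structures in blockchains.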


Table 8. Delivery methods

| Modern educational methods | Additional initiatives |
| --- | --- |
| Flipped classroom | E-learning, online platforms |
| Project-oriented learning | Interdisciplinary degrees |
| Cooperative learning | Hackathons |
| Gamification | Project calls (Erasmus+) |
| Design-thinking | Awards |
| Competency-based learning | Formal and non-formal talks with professionals |

6.6 Assessment Criteria

Assessment must be carried out according to the standards of national training providers and evaluation methods approved by them. The following assessment tools are included in the training material:

Table 9. Assessment weight

| Assessment tools | Weight |
| --- | --- |
| 5 questions/answers per module | 30% |
| Multiple-choice questions | 30% |
| Case studies (for evaluating autonomy, proactivity, teamwork) | 40% |

The evaluation should include aspects such as: autonomy, proactivity in learning, teamwork capacities and other transversal future skills. The final mark for the course will be an average mark of final grades in all modules.
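The weighting and averaging scheme above can be sketched in a few lines. The following is an illustrative encoding only; the module names, grades, and 0–100 scale are invented for the example and are not prescribed by the CHAISE material.

```python
# Illustrative sketch of the assessment weighting described above:
# 30% questions/answers, 30% multiple choice, 40% case studies,
# with the final mark being the average of all module grades.

WEIGHTS = {"qa": 0.30, "mcq": 0.30, "case_study": 0.40}


def module_grade(scores: dict[str, float]) -> float:
    """Weighted grade of one module; scores assumed on a 0-100 scale."""
    return sum(WEIGHTS[tool] * scores[tool] for tool in WEIGHTS)


def final_mark(modules: list[dict[str, float]]) -> float:
    """Final course mark: the average of all module grades."""
    return sum(module_grade(m) for m in modules) / len(modules)


mark = final_mark([
    {"qa": 80, "mcq": 90, "case_study": 70},   # module 1 -> 79.0
    {"qa": 60, "mcq": 70, "case_study": 80},   # module 2 -> 71.0
])
print(mark)
```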

7 Certification Pathways

The CHAISE VET program provides learners with flexibility in the pathways they can choose to certify their knowledge.

7.1 National Certification

National certification procedures vary across European countries. The certification procedure serves as an assessment tool, or an external verification of quality provided by an officially recognized body [26]. At the national level, the certification possibilities for recognizing Blockchain competences should be aligned with the country-specific, regional and structural traditions of vocational education and training.


7.2 ECQA Certification

The validation of the defined learning outcomes in terms of knowledge, skills and competences acquired by attending the CHAISE VET program is conducted through an online examination hosted on the examination portal of the European Certification and Qualification Association (ECQA). ECQA, certified against ISO 17024, is entitled to issue certificates on the basis of track records of achievements, mainly via multiple-choice exams and practical exercises. Learners who have attended all or parts of the CHAISE VET program are eligible. The CHAISE certification scheme distinguishes between three levels of certification for the targeted occupational profiles:
• a) Theory badge: targets learners who have completed the MOOC CHAISE VET program and successfully passed the multiple-choice questions (MCQs). This badge includes no practical elements and leads to the awarding of the "Theory badge". It is also a prerequisite for obtaining a full ECQA certificate. Learners can take up to three MCQ modules at once.
• b) Practical badge: targets learners who have completed the MOOC CHAISE VET program and successfully passed both the multiple-choice questions and the practical exercises. The practical elements are checked by the Validation Committee, which awards the "Practical badge". It is also a prerequisite for obtaining a full ECQA certificate.
• c) Full ECQA certification: refers to the completion of each Blockchain module with a positive assessment of the respective practical tasks and MCQs. The Theory and Practical badges together lead to the ECQA certificate. Certification records will be maintained in a Blockchain-based environment.

7.3 Grading and Passing Requirements

The evaluation of practical exercises and evidence of working experience will be performed by external experts on a voluntary basis. This need will be served by a Validation Committee during the project duration.
This Validation Committee is composed of CHAISE consortium representatives from both industry and academia with proven practical skills in the Blockchain area. The CHAISE MOOC is also envisaged to be used in the certification procedure.
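The three certification levels can be read as a simple rule set over per-module results. The sketch below is a hypothetical encoding of the badge logic as described in Sect. 7.2; the class and function names are ours, not ECQA's.

```python
# Hypothetical encoding of the CHAISE certification levels: the theory
# badge requires passing the MCQs of all modules, the practical badge
# additionally requires the practical exercises, and the full ECQA
# certificate requires both badges.
from dataclasses import dataclass


@dataclass
class ModuleResult:
    mcq_passed: bool
    practical_passed: bool


def badges(results: list[ModuleResult]) -> set[str]:
    earned = set()
    if all(r.mcq_passed for r in results):
        earned.add("theory")
    if all(r.mcq_passed and r.practical_passed for r in results):
        earned.add("practical")
    if {"theory", "practical"} <= earned:
        earned.add("full_ecqa")
    return earned
```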

8 Requirements for Training Providers

8.1 Resources and Equipment

The resources offered in the program structure include videos, lecture notes, practical exercises, case studies, question/answer series and multiple-choice series. Learners need access to a stable Internet connection and a personal computer with sufficient hardware to execute the Blockchain practical exercises.


8.2 Teaching Staff

For the three Blockchain profiles, teachers and trainers must be able to demonstrate that they meet specific occupational expertise requirements:
• technical knowledge in the area of Blockchain at the same level as the program being offered;
• recent experience in the Blockchain area that is continuously updated through continuing professional development;
• methods for maintaining contacts with employers, associations and other educational institutes in the Blockchain field, to ensure that teachers/trainers stay up to date with legislation, policies, recent developments and codes of practice;
• sound experience of providing training.

9 Summary, Conclusion and Outlook

The CHAISE blueprint initiative addresses the lack of harmonized Blockchain occupational profiles and sets the basis for a sector-specific VET program. The contribution of the blueprint lies in the fact that no comparable initiative exists at the European level, in its holistic and interdisciplinary perspective on Blockchain, and in the multiple validation methods applied within the CHAISE consortium: extensive market research, expert interviews and quarterly observation of Blockchain developments. The incorporation of regulatory and ethical aspects of Blockchain technology in the thematic coverage of the CHAISE VET curriculum contributes to the current discourse of not only creating strong regulation, but also detecting and tracking misuse via quality assurance and technology audit processes [27].

The blueprint is envisaged to serve as a starting point for Blockchain specialization profiles that European countries can exploit at their own discretion by identifying the common parts across the different national qualifications frameworks. Factors that facilitate the recognition of qualifications include, among others, the explicit description of learning outcomes (what is covered and what is not), the explicit expression of the performance level of learning outcomes (knowledge, comprehension, application, analysis, synthesis and evaluation) based on Bloom's taxonomy, and the weighting of learning outcomes (classification into essential and less essential) [10].

Among the limitations of the paper are the lack of existing educational outlines in the Blockchain area for comparison purposes, and the use of graduate profiles from the ICT sector for forecasting skills demand and supply due to the absence of Blockchain-specific graduate registries.
To address these limitations and improve the generalizability of the results, immediate future activities include the promotion of this blueprint in all EU countries represented in the CHAISE consortium, in order to establish the critical mass required to achieve the objective of becoming the unique EU-wide reference for Blockchain skill-based occupations and related qualifications. Furthermore, pilot trainings reaching more than 500 trainees will apply the reference training material created in the project. Feedback loops will be valuable to validate and improve this qualification basis, as will the certification procedures.


10 Relationship with the SPI Manifesto

The CHAISE consortium is a highly collaborative environment that brings together people from the technology, management, education, accreditation and regulatory areas. This creates a unique opportunity for applying the principle "Create a learning organization" of the SPI Manifesto [28], whereby organizations continuously facilitate learning and experience sharing to adapt to a rapidly changing Blockchain environment that is scalable, sustainable and socially responsible.

Acknowledgements. The CHAISE project [29] is financially supported by the European Commission in the Erasmus+ Program under the project number 621646-EPP-1-2020-1-FR-EPPKA2-SSA-B. This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein. The authors acknowledge the enormous contributions of all CHAISE consortium members and regret that it is not possible to cite all the names here.

References

1. European Commission: Shaping Europe's digital future (2023). https://digital-strategy.ec.europa.eu/en/policies/blockchain-strategy. (Accessed 04 2023)
2. European Commission: Blueprint for sectoral cooperation on skills (2023). https://ec.europa.eu/social/main.jsp?catId=1415&langId=en. (Accessed 04 2023)
3. Solomos, D., Tsianos, N., Ghodous, P., Riel, A.: The European CHAISE initiative to shape the future of blockchain skill qualification and certification. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 640–650. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_43
4. The CHAISE project website (2021). https://chaise-blockchainskills.eu. (Accessed 04 2023)
5. IDC: New IDC Spending Guide Forecasts Double-Digit Growth for Investments in Edge Computing (2022). https://www.idc.com/getdoc.jsp?containerId=prUS48772522. (Accessed 04 2023)
6. Faschinger-Sanborn, J., Riel, A., Reiner, M.: Future skill requirements for a blockchain-enabled automotive supply chain. In: Systems, Software and Services Process Improvement: 29th European Conference, EuroSPI 2022, Salzburg, Austria, 31 August–2 September 2022, Proceedings. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_18
7. CHAISE: D2.3.1, Study on Blockchain Skills Demand (2021). https://chaise-blockchainskills.eu/wp-content/uploads/2021/09/CHAISE_D2.3.1_Study-on-Blockchain-Skill-Demand.pdf. (Accessed 04 2023)
8. CHAISE: D2.5.1, Study on Skills Mismatches in the European Blockchain Sector (2021). https://chaise-blockchainskills.eu/wp-content/uploads/2021/11/CHAISE_WP2_D2.5.1_Study-on-Skills-Mismatches-in-the-blockchain-sector.pdf. (Accessed 04 2023)
9. Labor Institute of the General Confederation of Greek Workers (INE-GSEE): Methodological approaches to developing professional outlines and educational program specification frameworks (2021). https://www.inegsee.gr/wp-content/uploads/2021/07/Me8odologia_EP_Ebook.pdf. (Accessed 04 2023)
10. Cedefop: Defining, Writing and Applying Learning Outcomes: A European Handbook. Publications Office, Luxembourg (2017)


11. Cedefop: Comparing Vocational Education and Training Qualifications: Towards a European Comparative Methodology. Publications Office, Luxembourg (2019)
12. ESCO database website. https://skillman.eu/escodatabase/. (Accessed 04 2023)
13. Chiarello, F., et al.: Towards ESCO 4.0 – is the European classification of skills in line with Industry 4.0? A text mining approach. Technol. Forecast. Soc. Change 173 (2021)
14. Cedefop: NQF developments. National qualifications frameworks bring European education and training systems closer together and closer to end users (2019). https://www.cedefop.europa.eu/en/press-releases/how-national-qualifications-frameworks-bring-education-and-training-systems-closer. (Accessed 04 2023)
15. Cedefop: Comparing vocational education and training qualifications: towards methodologies for analysing and comparing learning outcomes. Publications Office, Luxembourg (2022)
16. Cedefop: Analysing and comparing VET qualifications. Cedefop briefing note (2021). https://doi.org/10.2801/356418. (Accessed 04 2023)
17. European Commission: What is ESCO (2023). https://esco.ec.europa.eu/en/about-esco/what-esco. (Accessed 04 2023)
18. CHAISE registry of Blockchain online job vacancies (2021). https://chaise-blockchainskills.eu/registry-of-blockchain-educational-and-training-offerings/. (Accessed 04 2023)
19. CHAISE: D2.4.1, Study on Blockchain Skill Supply (2021). https://chaise-blockchainskills.eu/wp-content/uploads/2021/11/CHAISE_WP2_D2.4.1_Study-on-Blockchain-skill-supply.pdf. (Accessed 04 2023)
20. CHAISE registry of Blockchain educational and training offerings (2021). https://chaise-blockchainskills.eu/registry-of-blockchain-educational-and-training-offerings/. (Accessed 04 2023)
21. CHAISE application form: "Cooperation for innovation and the exchange of good practices – Sector Skills Alliances – Call for proposals EAC/A02/2019" (2019)
22. CHAISE: D5.1.1, Blockchain Learning Outcomes Report (2022). https://chaise-blockchainskills.eu/wp-content/uploads/2022/11/CHAISE-D5.1.1_Blockchain-Learning-Outcomes-Report.pdf. (Accessed 04 2023)
23. EU: The European Qualifications Framework: Supporting Learning, Work and Cross-border Mobility (2018). https://ec.europa.eu/social/BlobServlet?docId=19190&langId=en. (Accessed 04 2023)
24. European Commission: Legal and regulatory framework for blockchain (2023). https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-blockchain. (Accessed 04 2023)
25. Cedefop: Delivering the training modules. Resources for guidance: developing information technologies and labour market information in lifelong guidance (2023). https://www.cedefop.europa.eu/en/tools/resources-guidance/training-modules/delivering-training-modules. (Accessed 04 2023)
26. Cedefop: Accreditation and quality assurance in vocational education and training. Publications Office, Luxembourg (2010)
27. Ellul, J., et al.: Regulating blockchain, DLT and smart contracts: a technology regulator's perspective. In: ERA Forum, vol. 21. Springer, Berlin, Heidelberg (2020). https://doi.org/10.1007/s12027-020-00617-7
28. Pries-Heje, J., Johansen, J., Messnarz, R.: SPI Manifesto (2010). https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf
29. The CHAISE project website. https://chaise-blockchainskills.eu/. (Accessed 04 2023)

Trustful Model-Based Information Exchange in Collaborative Engineering

David Schmelter1, Jan-Philipp Steghöfer2, Karsten Albers3(B), Mats Ekman4, Jörg Tessmer5, and Raphael Weber6

1 Fraunhofer IEM, Paderborn, Germany. [email protected]
2 XITASO GmbH, Augsburg, Germany. [email protected]
3 INCHRON AG, Erlangen, Germany. [email protected]
4 Saab AB, Linköping, Sweden
5 Robert Bosch GmbH, Stuttgart, Germany. [email protected]
6 Vector Informatik GmbH, Regensburg, Germany. [email protected]

Abstract. Automotive and aviation systems are undergoing a radical shift in their software and hardware architectures, affecting the processes and communities used to design them. On a technical level, we see a trend towards integration of heterogeneous function domains on centralized computing platforms. On a process and collaboration level, this trend implies two things: First, heterogeneous communities of OEMs and suppliers on different tiers need to collaborate intensely to create innovative software-intensive products. Second, these communities need to be able to exchange development artifacts efficiently by means of open, model-based exchange formats. Even competing companies will have to collaborate in such heterogeneous communities. We illustrate the challenges of trustful, model-based information exchange in heterogeneous development communities that arise due to intellectual property protection concerns. We identify data security threats for collaborative, model-based engineering processes and suggest guidelines that support trustful information exchange between partners of a heterogeneous community.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 156–170, 2023. https://doi.org/10.1007/978-3-031-42307-9_12

1 Collaboration Along the Supply Chain

The times when an OEM could dump a requirements specification on a Tier-1 supplier and expect them to come back with a finished product a year later are over. The architectural shift in automotive and aviation systems necessitates a shift in the processes used to design them, and the communities that form to support these processes change accordingly. On the technical level, we see a trend toward the integration of heterogeneous function domains on centralized computing platforms, resulting in complex systems: in automotive, vehicles combine sensor systems like radar and lidar with image processing techniques to detect the features of the road, other participants in traffic, and their own position in the world [20]; in aviation, Integrated Modular Avionics (IMA) [14, 17] proposes an architecture with application software that is portable across an assembly of centralized processing modules.

On a process and collaboration level, this trend implies two things: first, heterogeneous communities of OEMs and suppliers on different tiers need to collaborate intensely to create innovative software-intensive products; second, in order to manage the complexity of developing such products, these communities need to be able to exchange development artifacts efficiently. However, the increased heterogeneity of the project partners and the increased complexity of the systems under construction make the existing setup based on bilateral contractual agreements unfeasible.

Until now, collaboration in automotive and avionics development projects has typically been between companies on two adjacent levels of the supply chain (i.e., several pairs of OEM and Tier-1 supplier, Tier-1 supplier and Tier-2 supplier, and so on). In such a 1:n collaboration, bilateral contracts that include non-disclosure agreements (NDAs) are meant to protect the intellectual property of the involved partners and enable trustful collaboration. In the future, competing companies across the supply chain will collaborate in communities consisting of an OEM and potentially competing suppliers on different levels of the supply chain. This kind of cooperation is also called coopetition [12]. A trustful exchange of information in such heterogeneous communities is more difficult to achieve. All parties must avoid that sensitive information is leaked to a competitor. At the same time, the development artifacts exchanged within a community, such as system models, include detailed information in order to deliver high-quality results for, e.g., model-based timing analyses and optimization.

We believe that (1) intellectual property protection hinders effective and efficient information sharing in these communities, and (2) although contracts and NDAs provide the legal framework for collaboration, observing and enforcing them in such environments is difficult and requires adapted technical and organizational approaches. In particular, concepts to ensure data security in coopetitive communities are required to protect the intellectual property of individual partners and their competitiveness. These concepts must be cultivated through software process improvement (SPI) as part of the collaborative process. Based on these hypotheses, we formulate the following research questions:

RQ1: What are the threats of sharing information in a collaborative integration scenario as described above?
RQ2: Which information is particularly worth protecting?
RQ3: Which guidelines can be used to ensure the secure exchange of information in collaborative automotive and aviation development processes?

We conducted a series of workshops to address these research questions. We also identified security threats that arise when sharing sensitive information in collaborative development projects. From this, we derived guidelines that support trustful model exchange between partners. The scope of our work was the exchange of system models, i.e., models that describe hardware, software, and their interplay. Such models are often exchanged between
project partners as the basis for analyses that need to be run throughout the development process. Typical domain-specific languages for such models are Amalthea [5] or EAST-ADL [3].

2 Related Work and Background

In this section, we present related work in the field of collaborative engineering. We also present background on threat modeling methods and the ISO/IEC 270xx series of standards, which we have used to describe threats and guidelines in the exchange of system models.

2.1 Collaborative Engineering

In their study on challenges in collaborative engineering, Kuenzel et al. highlight that "[...] issues of data sovereignty and IP law remain to be addressed to resolve existing concerns among large groups of potential users" [10]. Argiolas et al. [1] describe the software architecture of a web-based workflow management system that ensures trust relationships between the various engineering project teams involved in building construction, with the aim of establishing a virtual organization. Each collaboration activity is modeled as a process that is initiated by an actor (e.g., an architect or structural engineer). A Security Manager component is responsible for managing and updating the qualification and reputation of each actor, and trust relationships are dynamically managed based on a Reputation Register component. However, they consider exchanged resources as single objects, which is not sufficient for system models since these contain intertwined sensitive and non-sensitive elements.

Lu et al. [11] aim to raise awareness of and propose a scientific foundation for collaborative engineering, and describe its benefits and challenges. Borsato and Peruzzini [2] provide fundamental concepts of collaborative engineering, including different models of collaborative ventures (short-term virtual enterprises, extended enterprises, and consortium enterprises), present collaboration in the context of the product lifecycle, and suggest metrics for selecting collaboration tools. However, although they briefly mention confidentiality, security, and trust as important, neither Lu et al. nor Borsato and Peruzzini address the protection of intellectual property in inter-organizational information exchange.

Wiener and Saunders discuss the concept of "forced coopetition" in the context of multiple vendors that support the IT infrastructure of a single client [18]. In contrast to the model we discuss here, however, these vendors have clearly separated areas and are less dependent on each other. In our case, the suppliers create a product together with the OEM and therefore need to cooperate in a much more intimate fashion where sensitive information is exchanged regularly. Nevertheless, we believe that some of their findings, such as encouraging "long-term vendor partnerships", also apply to our scenario.

2.2 Threat Modeling

During our workshops, we discussed data security threats that arise when sharing sensitive information in collaborative development projects via system models. We used the STRIDE and LINDDUN threat modeling methods as a foundation.


STRIDE [6] provides a model of threats for identifying cybersecurity threats in software systems. LINDDUN [4, 19] is a threat modeling method that specifically addresses the collection and mitigation of privacy threats in software systems. As part of our workshop series, we analyzed threats to collaborative development projects that specifically concern the internal exchange of sensitive data between collaborating partners. STRIDE and LINDDUN are not directly suitable for this purpose, as they primarily address cybersecurity and privacy threats to software products rather than threats to collaborative development processes. However, we were able to adapt STRIDE's threat Information Disclosure and LINDDUN's threat Unawareness for our purpose (cf. Sect. 6).

2.3 ISO/IEC 270xx

We investigated international standards that address or are relevant to inter-organizational information sharing. The ISO/IEC 270xx series of standards offers best-practice recommendations on information security management and is noteworthy in this context [7–9]. Particularly relevant for our work is part 27010, "Information security management for inter-sector and inter-organizational communications", which describes how trustful sharing of sensitive information between organizations can be realized by means of information sharing communities. It provides two recommendations for organizing information sharing communities, which we adopt for the exchange of system models (cf. Sect. 7.2):

Warning, Advice and Reporting Points (WARPs) are a proven means of exchanging sensitive information between organizations as part of an information sharing community. "A WARP shares information between people or organizations with similar interests, typically on a voluntary basis. The WARP is based on personal relationships between people representing the members of the information sharing community. [...] Typically, WARPs are small, personal and 'Not-for-Profit'." [9].
Trusted Information Communication Entities (TICEs) are autonomous organizations for exchanging sensitive information within an information sharing community. The responsibilities of a TICE include ensuring proper exchange of sensitive information, analyzing and handling security incidents, and providing information that raises awareness of information security in the participating organizations. Organizationally, a TICE consists of an executive board, which strategically manages the TICE and maintains relationships between members of the information sharing community, and an operational technical team, which analyzes and evaluates business and technical risks in the exchange of sensitive information and develops mitigation measures as necessary.

3 Research Approach

We conducted a series of four workshops over a period of six months in which we systematically addressed the research questions, as shown in Fig. 1. The participants in the workshops came from an avionics OEM and integrator (Saab), an automotive integrator and Tier-1 supplier (Bosch), tool developers that provide timing analysis and optimization tools for software-intensive systems (INCHRON, Vector), and academia
(Chalmers GU, Fraunhofer IEM). The experts worked together in the context of the PANORAMA research project1 .

[Fig. 1. Our research methodology. The figure depicts the four workshops with their goals and activities: achieving a common understanding of "data security"; conceiving and discussing a collaborative engineering scenario (formulating RQs, sketching the scenario); identifying sensitive information in model-based engineering (analyzing Amalthea models and ISO 27010); and identifying the minimum required information in Amalthea models (collecting analysis methods, identifying dataflows, identifying threats, conceiving and discussing guidelines), yielding data security threats and guidelines. Recurring activities include validating and refining the engineering scenario and its sensitive information, as well as workshop planning, review, and retrospective, all guided by the question: how can we ensure data security in tomorrow's collaborative engineering? Participants and roles: Saab (aviation; OEM, integrator), Bosch (automotive; Tier-1 supplier, integrator), Vector and INCHRON (automotive; tool suppliers), Chalmers and Fraunhofer IEM (research).]

Each workshop was designed to be run in an agile manner and facilitated by one of the academic partners. At the beginning of each workshop, we collaboratively decided on potential workshop topics, prioritized them, and then addressed the highest-priority issues. The final step of each workshop was a review phase regarding the workshop contents as well as a retrospective regarding our workshop-internal collaboration. This ensured that we worked efficiently towards the defined RQs, quickly incorporated learned insights into subsequent discussions, and built the workshops on top of each other.

4 Insights and Assumptions

Based on our workshop discussions with OEMs and suppliers, we identified a number of insights and assumptions that drove our work:

Automotive and avionics development projects are and will be carried out based on 1:n contracts along the supply chain. These contracts are a central pillar for establishing trust when sharing sensitive project data during the development project and beyond. We do not expect that this established, contract-based form of collaboration will change in the near future. As such, it presents a baseline for SPI. However, collaborative development and integration of complex systems requires data exchange between multiple partners to be effective and efficient. If a project consortium includes partners who compete with each other, an open information exchange within the consortium is not readily possible, as individual partners will not be willing to share intellectual property directly or indirectly with competing partners.

1 https://www.panorama-research.org.

Trustful Model-Based Information Exchange in Collaborative Engineering


Collaborative, exchange-standards-driven development projects are not well understood yet. Today, standards that are explicitly tailored to exchange systems engineering information, such as Amalthea, ODE², or OASIS VEL³, are still mainly used for the exchange of system models in 1:1 collaborations, e.g., between OEM and Tier-1 supplier, or for the exchange between departments within one company. We are not aware of any collaboration examples in which more than two project partners exchange system models openly.

Additional technical solutions might hinder efficient information exchange. During our workshops, we discussed whether technical solutions like obfuscation or encryption of (partial) model elements increase trust in the exchange of system models. However, the workshop participants agreed that such measures fundamentally impede open data exchange, might result in reduced usability, and might thus lead to lower acceptance of system modeling tools. Before considering technical solutions, future collaborative design processes should be sufficiently defined.

5 Collaborative Engineering Scenario

During the workshops, we conceived a fictional but representative scenario of an automotive advanced driver-assistance system (ADAS) based on the MobSTr dataset [15]. The ADAS calculates throttle, steering, and brake signals to follow a map of predetermined waypoints. We assume that a consortium of four companies along the supply chain forms a community to develop the ADAS collaboratively (cf. left-hand side of Fig. 2). Its system model defines several components (cf. right-hand side of Fig. 2) and how they interact with each other. We describe the involved companies and their roles in the development process in Table 1. We focused on the following objectives during the design of the scenario:

– Internal Data Security. The scenario should allow discussing internal data security aspects resulting from collaborative development communities including more than two partners along the supply chain. The focus should be on the exchange of artifacts between partners, especially system models.
– Minimal and sufficient. The defined development scenario should be compact—in order to keep it manageable, easily explainable with little effort, and comprehensive—to clearly convey the key data security threats in collaborative development processes.
– Critical Information Flows. The scenario should include information flows where intellectual property might be leaked in an undesirable manner.

Although our discussions and results are based on an automotive example, they also apply to the avionics domain. The scenario maps to avionics roles proposed by IMA [14, 17]: The automotive OEM matches the IMA roles Platform and Module Supplier, who provide a centralized processing module consisting of hardware and basic software. The Tier-1 and Tier-2 suppliers match the IMA role Application Supplier, who provide avionics functions by means of application software components. In general,

2 https://deis-project.eu/
3 https://www.oasis-open.org/committees/tc_home.php?wg_abbrev=vel


D. Schmelter et al.

Fig. 2. ADAS Overview: collaborative development (left) and its components (right). [Figure content: software components GPS Grabber, Lidar Grabber, Image Grabber, Lane Detection, Object Detection, Sensor Fusion, Localization, Path Planner, and Controller in an APP4MC system model, assigned to the areas of responsibility of the OEM/Integrator, Supplier A (Tier 1), Supplier B (Tier 1), and Supplier C (Tier 2), deployed on a heterogeneous computing platform; an APP4MC analysis asks "Schedulability?".]

an automotive centralized computing platform could be provided by a Tier-1 supplier as well. In that case, the Tier-1 supplier would match the IMA Platform and Module Supplier. The Integrator matches the IMA System Integrator, who decides how to connect processing modules and how to allocate avionics functions to these modules.

System models are exchanged between the companies for analysis and system integration. In our example, these models are formulated in the Amalthea language using Eclipse APP4MC [5], an open ecosystem for engineering software-intensive systems. Amalthea allows modeling the hardware (CPUs, memory, ...), the software (functions, global variables and memory access, tasks and activations, ...), as well as the SW/HW allocation and other aspects. APP4MC enables detailed model-based timing analyses and optimization of the SW/HW allocation.

We assume that the community follows a standard systems engineering process, roughly using a V-model [16]. Several analysis techniques are used during the process to validate requirements, design choices, and system properties. For the purpose of analyzing the threats and evaluating the sensitivity of exchanged information in a collaborative development project, we consider a relatively late stage in the ADAS development: the system integration of application software components developed by the OEM and the suppliers. We deliberately analyze this phase, as we expect that all potentially sensitive information is available at some company in the community at this stage. This information will be vital to answer the Integrator's crucial high-level question: "Can we integrate the components provided by suppliers A and B in such a way that all safety and timing requirements in particular are met by the overall ADAS?".
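To make the Integrator's question more concrete, the following self-contained Python sketch illustrates the kinds of elements involved (runnables, cores, allocation) and a simplified, utilization-based feasibility check. Amalthea's actual metamodel is EMF-based; all names, fields, and the `is_schedulable` helper here are our own illustrative assumptions, not the APP4MC API.

```python
from dataclasses import dataclass, field

@dataclass
class Runnable:
    # Hypothetical stand-in for an Amalthea runnable: a non-functional
    # description of a supplier function (names and fields are ours).
    name: str
    wcet_ms: float    # worst-case execution time per activation
    period_ms: float  # activation period

@dataclass
class SystemModel:
    cores: list[str] = field(default_factory=list)
    runnables: list[Runnable] = field(default_factory=list)
    allocation: dict[str, str] = field(default_factory=dict)  # runnable -> core

def utilization_per_core(model: SystemModel) -> dict[str, float]:
    # Sum U = WCET / period over the runnables allocated to each core.
    util = {core: 0.0 for core in model.cores}
    for r in model.runnables:
        util[model.allocation[r.name]] += r.wcet_ms / r.period_ms
    return util

def is_schedulable(model: SystemModel, bound: float = 1.0) -> bool:
    # Necessary (not sufficient) condition: no core exceeds the bound.
    return all(u <= bound for u in utilization_per_core(model).values())

# The Integrator merges supplier-provided runnables and checks feasibility.
model = SystemModel(
    cores=["core0", "core1"],
    runnables=[
        Runnable("ObjectDetection", wcet_ms=12.0, period_ms=40.0),  # Supplier A
        Runnable("PathPlanner", wcet_ms=8.0, period_ms=20.0),       # Supplier B
        Runnable("SensorFusion", wcet_ms=5.0, period_ms=10.0),      # Supplier C
    ],
    allocation={"ObjectDetection": "core0", "PathPlanner": "core1",
                "SensorFusion": "core1"},
)
print(is_schedulable(model))  # True: core0 at 0.30, core1 at 0.90
```

A real APP4MC analysis is far richer (response times, memory access, activations), but even this toy check makes clear why the Integrator needs accurate timing figures from every supplier.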


Our exemplary scenario reveals an interesting tension: standardized exchange formats, like Amalthea, promote open collaboration within development projects. Standardization occurs to enable effective and efficient exchange of detailed system models that allow the integrator to perform detailed analyses of the integrated system in short cycles. At the same time, however, protection of intellectual property hinders the open exchange of system models if, as in our example, the collaborating partners are competitors.

Table 1. Description of the partners and their roles in our exemplary scenario

OEM/Integrator: The OEM commissions the development of the ADAS and takes the role of the system integrator. They are thus responsible for ensuring the correct functionality of the ADAS. We assume that the OEM has outsourced the development of a part of the ADAS software components to suppliers A and B. Since the ADAS is a safety-critical system with hard real-time requirements, the analysis methods used are of particular importance. The OEM thus relies on accurate and complete Amalthea system models for precise analysis of the overall integrated system.

Tier-1 Supplier A: We assume that Supplier A's core competency is object recognition. Accordingly, the OEM outsources development of the Lane Detection and Object Detection software components to Supplier A (cf. right-hand side of Fig. 2). These are key ADAS components with a high level of innovation.

Tier-1 Supplier B: The core competencies of Supplier B are vehicle localization and trajectory calculation. Accordingly, the OEM outsources development of the software components Path Planner and Localization to Supplier B. We also assume that Supplier B is a competitor of Supplier A in the field of object recognition. For this reason, Supplier A has no interest in leaking its own IP (components Lane Detection or Object Detection) to Supplier B during the collaboration—although all partners naturally have an interest in the successful, effective, and efficient development of the overall ADAS.

Tier-2 Supplier C: We assume suppliers A and B have outsourced the preparation and fusion of their data with additional sensor data in the software component Sensor Fusion to Supplier C. The software component Controller contains the central control logic of the ADAS, which controls the vehicle's actuators and reacts to identified emergency situations.


Table 2. Minimum required Amalthea information for schedulability analysis. S → I: information flow from supplier to integrator; I → S: information flow from integrator to supplier.

Local Memory Requirements (S → I): Local memory requirements of the software components provided by the suppliers. In our example, these are the memory requirements of the components Object Detection and Lane Detection, Path Planner and Localization, as well as Sensor Fusion, provided by suppliers A, B, and C, respectively.

Developed Runnables (S → I): Non-functional description of the supplier's functions that implement their software components. In our example, the assignment is analogous to the local memory requirements.

Runnable Communication (S → I): Data exchange between runnables (specified via labels in Amalthea). In our example, this comprises data exchange between the components Object Detection and Sensor Fusion.

Affinity Constraints (optional, S → I): Specify whether software elements, e.g., labels, must be deployed on the same core or whether separation on different cores is necessary.

Suggested Activations (optional, S → I): The supplier might want to give recommendations for activation periods of their provided runnables to ensure that their control algorithms are executed correctly.

Requirements Available (I → S): All requirements provided by the integrator to the suppliers.

Hardware Resources (I → S): All hardware resources that are available to the suppliers and that the integrator reserves for software components by the suppliers. For instance, in our example the integrator might provide the processor type, core frequency, number of available cores, and available memory to Supplier A that they reserved for the software component Object Detection.

Provided and Required Labels (I → S): Interface for data exchange. The integrator will, e.g., specify on which labels they expect data resulting from sensor fusion.

Label-based Data Rates (I → S): Frequency of data updates. In our running example, the integrator expects an updated vehicle position every X ms.

Existing Tasks (optional, I → S): The integrator might want to provide information about existing tasks that are already deployed on the target platform to suppliers. In our example, the integrator might want to provide existing tasks of the Controller component.
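The supplier-to-integrator rows of Table 2 effectively act as a whitelist of what a model export for schedulability analysis needs to contain. The following Python sketch shows such purpose-driven filtering; the category names, the dict-based model representation, and the `minimize` helper are our own illustrative assumptions, not Amalthea or APP4MC features.

```python
# Purpose-specific whitelist of model element categories, loosely following
# the S -> I rows of Table 2; all names here are illustrative assumptions.
PURPOSE_WHITELIST = {
    "schedulability_analysis": {
        "local_memory_requirements",
        "runnables",
        "runnable_communication",
        "affinity_constraints",
        "suggested_activations",
    },
}

def minimize(model: dict, purpose: str) -> dict:
    # Keep only the element categories required for the stated purpose.
    allowed = PURPOSE_WHITELIST[purpose]
    return {category: content for category, content in model.items()
            if category in allowed}

supplier_model = {
    "runnables": ["ObjectDetection", "LaneDetection"],
    "runnable_communication": [("ObjectDetection", "SensorFusion")],
    "local_memory_requirements": {"ObjectDetection": "64 MiB"},
    # Sensitive detail that a schedulability analysis does not need:
    "algorithm_parameters": {"ObjectDetection": "proprietary CNN weights"},
}
export = minimize(supplier_model, "schedulability_analysis")
print("algorithm_parameters" in export)  # False: sensitive detail withheld
```

The point of the sketch is that the whitelist is tied to the analysis purpose: a different purpose (e.g., integration testing) would require a different, separately reviewed whitelist.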


Example I: We assume that the Integrator wants to perform a schedulability analysis as part of the system integration of the ADAS under development. Exemplary information flow: 1. Supplier A provides a specification of the components Object Detection and Lane Detection to the Integrator by means of an Amalthea system model. 2. Likewise, Supplier B provides a specification of the components Path Planner and Localization to the Integrator. 3. The Integrator integrates these inputs into her Amalthea model and performs a schedulability analysis. We assume that the analysis fails due to issues in the Amalthea model of Supplier B. 4. The Integrator provides her schedulability analysis results to Supplier B by means of her enriched Amalthea model. Since the merged Amalthea model of the Integrator contains potentially sensitive information that originally stems from Supplier A (cf. step 1) and Supplier A is in competition with Supplier B, there is an information leak at step 4.

Example II: We assume that Supplier C relies on information from both Tier-1 Suppliers A and B to develop the Sensor Fusion component. Exemplary information flow: 1. Supplier A provides the required information by means of an Amalthea system model to Supplier C. 2. Likewise, Supplier B provides the required information to Supplier C. 3. Supplier C tries to merge these inputs into his Amalthea model and, based on his integration tests, needs to clarify requirements received from Supplier B. 4. Supplier C explains his integration attempts to Supplier B by means of his enriched Amalthea model. Since the merged Amalthea model of Supplier C contains potentially sensitive information that originally stems from Supplier A (cf. step 1) and Supplier A is in competition with Supplier B, there is an information leak at step 4.

Fig. 3. Two exemplary, critical information flows of our running example. [Figure shows the numbered flows with trust boundaries and the information leaks at step 4; legend: Amalthea Model Supplier A/B, Integrated Amalthea Model, Information Source, Information Sink, Information Flow, Trust Boundary, Information Leak.]

6 Data Security Threats

The collaborating companies in our scenario form an information sharing community according to ISO/IEC 27010. When a company in our example provides Amalthea system models containing sensitive information to another company, they must have confidence that its use at this other company will be subject to adequate security controls implemented by the receiving organization: in an information sharing community, "[...] each member trusts the other members to protect the shared information, even though the organizations may otherwise be in competition with each other" [9].

During our workshop series, we had extensive discussions about what information in Amalthea system models is potentially sensitive, i.e., allows inferences about details that are intellectual property. Our main finding is that information is particularly sensitive if it allows conclusions to be drawn (a) about the functioning of (sub-)components (e.g., algorithms used for object detection in our ADAS example) or (b) about the performance characteristics of the developed (sub-)components (e.g., the capabilities of a radar system). During the discussion, it became apparent that the context of potentially sensitive model elements is a key challenge in their assessment: certain model information may be less worthy of protection in an early development phase compared to a later phase, because it is simply not yet available in much detail at the beginning. Furthermore, the


need for a high level of detail depends on the intended analysis methods (e.g., schedulability analysis or integration tests).

In Fig. 3, we illustrate two exemplary information flows between collaborating partners in which information may be leaked. Example I shows a top-down information leak that might occur when the Integrator wants to communicate schedulability issues to Tier-1 Suppliers A or B. Example II shows a bottom-up information leak that might occur when Supplier C needs to clarify requirements received from A or B. We derive two key threats based on STRIDE and LINDDUN, respectively:

Unawareness (privacy) according to LINDDUN refers to unawareness of the consequences of sharing (too much) personal data [4]. This fits the collaborative exchange of Amalthea system models: Supplier A may be providing (irrelevant) information to the Integrator that is not needed to perform the schedulability analysis but may include sensitive information. This threat is comparable to LINDDUN's U1, "Providing too much personal data". Moreover, Supplier A may not even be aware in detail of the information contained in the Amalthea system model provided to the Integrator. This threat is comparable to LINDDUN's U2, "Unaware of stored data".

Information Disclosure (security) according to STRIDE "refers to the security threat which reveals information when it shouldn't" [6]. We adopt this definition for the exchange of Amalthea models as follows: The Integrator may not sufficiently ensure in step 4 that the system model sent to Supplier B does not contain any sensitive information from Supplier A. This case is comparable to STRIDE's ID_ds: "Information disclosure of a data store can occur [...] when the data store itself is insufficiently protected against unauthorized access and/or when the data itself is not kept confidential".

Our examples show that in collaborative development projects, sensitive information along the supply chain can in principle be leaked in both directions in an undesirable manner (Tier-n → Tier-(n-1) as well as Tier-n → Tier-(n+1)).
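One conceivable safeguard against the step-4 disclosure (our own sketch, not a mechanism defined by STRIDE, LINDDUN, or Amalthea) is to tag every merged model element with its originating organization and redact elements whose origin competes with the intended recipient before forwarding. All names, the dict-based model, and the competitor relation below are illustrative assumptions.

```python
# Hypothetical provenance check: element names, origins, and the competitor
# relation are illustrative, not part of any standard or tool.
COMPETITORS = {("SupplierA", "SupplierB"), ("SupplierB", "SupplierA")}

def compete(org_a: str, org_b: str) -> bool:
    return (org_a, org_b) in COMPETITORS

def redact_for_recipient(merged_model: dict, recipient: str) -> dict:
    # merged_model maps element name -> originating organization.
    # Drop elements whose originating organization competes with the recipient.
    return {element: origin for element, origin in merged_model.items()
            if not compete(origin, recipient)}

# Step 4 of Example I: the Integrator forwards the enriched model to Supplier B.
merged = {
    "LaneDetection": "SupplierA",      # Supplier A's IP
    "ObjectDetection": "SupplierA",    # Supplier A's IP
    "PathPlanner": "SupplierB",
    "SchedulabilityReport": "Integrator",
}
safe_to_send = redact_for_recipient(merged, recipient="SupplierB")
print(sorted(safe_to_send))  # ['PathPlanner', 'SchedulabilityReport']
```

Such provenance tracking would of course have to survive model merging and enrichment to be effective, which is exactly where tool support would be needed.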

7 Data Security Guidelines

Based on our analysis in the previous section and the discussion in our workshops, we developed general guidelines that we recommend when sharing system models in collaborative development projects.

7.1 Data Minimization

The central measure to minimize the risk of Unawareness is data minimization. System models should only be exchanged for a specific purpose (e.g., to perform a schedulability analysis) and should only contain the data required for this purpose. Using schedulability analysis as an exemplary analysis method, we identified (a) the specific Amalthea model elements that typically need to be exchanged between integrator and Tier-1 supplier, (b) the information flows, and (c) which information is sensitive (cf. Table 2 for a general idea). As a generalization of our example, we suggest the following guideline:


Fig. 4. ISO/IEC 27010-compliant Exchange of Amalthea System Models. [Figure content: (a) model exchange between OEM/Integrator and Suppliers A, B, and C via Warning, Advice, and Reporting Points (WARPs); (b) model exchange via Trusted Information Communication Entities (TICEs) with an executive board, an operational technical team, and security gates. Legend: community ISMS, organization ISMS, community boundary, organization boundary, model exchange.]

7.2 ISO/IEC 27010-Compliant Model Exchange

To minimize the risk of Information Disclosure, it is not sufficient to simply practice data minimization, as the cause of information leaks may be beyond the control of the information source. Compare our running example, left-hand side of Fig. 3: the cause of the information leak might be the Integrator; however, the source of the Amalthea system model is Supplier A. Consequently, further measures are required to minimize the risk of information leaks and, at the same time, to strengthen the trust between the involved collaborating partners.

We recommend establishing an information sharing community according to ISO/IEC 27010 [9] that includes participants from all relevant organizations, as well as an information sharing management system (ISMS) that is community-specific, exists alongside the ISMSs of the participating organizations, and regulates information exchange within the community (cf. ISO/IEC 27001 [7] and 27002 [8]). While ISO/IEC 27010 is not originally intended for this purpose, it does describe trustful sharing of sensitive information between organizations, and we suggest transferring its concepts to collaborative engineering processes since they provide a good fit to the needs we have identified.

ISO/IEC 27010 distinguishes two flavors of communities: we recommend adopting WARPs or TICEs for organizing information sharing communities (cf. Fig. 4 (a, b) and Sect. 2.3). According to ISO/IEC 27010, key WARP services primarily address communication of community-relevant security information. However, WARPs may provide additional services and can therefore be used to organize the exchange of sensitive information like system models in a trusted environment. Trust in the exchange of sensitive information is achieved in particular through personal relationships between WARP members. Based on this consideration, we derive the following guideline:

8 Discussion

The open exchange of information in heterogeneous development projects between several collaborating partners is necessary to meet the trend towards central computing platforms and to develop modern software-intensive systems effectively and efficiently. However, this exchange is hindered if an organization does not have sufficient confidence that its own sensitive information will not be leaked inadvertently. This is particularly the case when collaborating partners are in competition with each other, as our example shows.

RQ1–Data Security Threats. We focused on threats that arise within a heterogeneous community of collaborating companies. We adapted two threats from established threat modeling frameworks for our domain: (1) unintended sharing of too much or unnecessary information if one is the source of a system model (cf. LINDDUN's Unawareness (privacy)) and (2) disclosure of sensitive information if one is the distributor – e.g., the integrator – of a system model (cf. STRIDE's Information Disclosure (security)). We are not aware of any related work that specifically addresses threats in model-based information exchange, but we encourage the research community and practitioners to further explore these threats and possible countermeasures.

RQ2–Sensitive Information. This research question raised the most discussion among our workshop participants and cannot be answered in general terms. Any iterative development project will continuously enrich the system model with more precise information, e.g., by enriching early simulations with precise measurements later. Hence, whether a model reveals sensitive information depends on the context and development stage. We encourage the research community and practitioners to explore means for identifying sensitive information in system models. Our workshop discussions highlight that it is important to look for model elements that might reveal information (1) about the functioning or (2) about the performance characteristics of the system under development.

RQ3–Guidelines. In our workshops, we discussed whether technical means such as encryption of model elements would help build trust in model-based information exchange. One major practical challenge is that each company employs a number of specific


well-established tools and processes. Therefore, technical means should impose as few requirements as possible on a community-wide technical infrastructure but should be implementable individually by each company. We suggest data minimization to be the first priority. However, we show that data minimization cannot prevent leakage of intellectual property completely and propose to adapt organizational models like WARPs or TICEs of ISO/IEC 27010. We encourage the research community and practitioners to explore both technical means for data minimization (e.g., by applying model transformations) and organizational means to build trust.

Establishing such communities will also require an adaptation of the development processes and concerted SPI efforts. We believe that information sharing communities provide the focus on people that is needed to drive these efforts, as stated in the SPI Manifesto [13]. The members of such a community are also the ones who can champion the SPI effort and ensure that the work in the community is embedded in the processes in a way that preserves the necessary development speed while achieving the IP protection required by the partners. At the same time, these communities provide the "adaptable and dynamic model" the manifesto argues for and make sure that all involved partners can focus on the beneficial aspects of the collaboration, which both provides value to the partners and allows creating long-term vendor partnerships [18].

Generalisability. We based our discussions and results on an automotive example where system models are exchanged via Amalthea. However, we do believe our results can and should be considered in any heterogeneous community that develops software-intensive products on centralized computing platforms and exchanges information in a model-based way via open standards like Amalthea, OASIS VEL, EAST-ADL, or ODE.

9 Conclusion

In this paper, we have shown that data minimization, purpose-only sharing of system models, and the establishment of an information sharing community in accordance with ISO/IEC 27010 are meaningful measures to increase trust in the distribution of sensitive information and to reduce threats such as Unawareness or Information Disclosure. Our example illustrates key threats to be aware of when sharing information between multiple collaborating partners. We encourage research communities dealing with open, model-based exchange formats and collaborative development processes to put more focus on the protection of intellectual property in heterogeneous development communities, as there seems to be little prior work on this so far, but an emerging and urgent need in the industry. Future work should also include investigations on how to embed information sharing communities like WARPs and TICEs into software processes and how to systematically improve existing processes to address the information sharing (and hiding) needs in collaborative development processes.

Acknowledgments. This research has been partially funded by the Federal Ministry of Education and Research (BMBF) under grant 01IS18057 and by Vinnova under grant 2018-02228 as part of the ITEA 3 project PANORAMA.


References

1. Argiolas, C., Dessì, N., Fugini, M.: Modeling trust relationships in collaborative engineering projects. In: Kaschek, R., Kop, C., Steinberger, C., Fliedl, G. (eds.) UNISCON 2008. LNBIP, vol. 5, pp. 555–566. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78942-0_53
2. Borsato, M., Peruzzini, M.: Collaborative engineering. In: Computer-Based Design and Manufacturing. Springer, Boston (2015). https://doi.org/10.1007/978-0-387-23324-6_12
3. Cuenot, P., Chen, D., Gerard, S., et al.: Managing complexity of automotive electronics using the EAST-ADL. In: ICECCS. IEEE (2007)
4. Deng, M., Wuyts, K., Scandariato, R., Preneel, B., Joosen, W.: A privacy threat analysis framework: supporting the elicitation and fulfillment of privacy requirements. Requirements Eng. 16 (2010)
5. Höttger, R., Mackamul, H., Sailer, A., Steghöfer, J.P., Tessmer, J.: APP4MC: application platform project for multi- and many-core systems. IT-Inf. Technol. 59(5) (2017)
6. Howard, M., Lipner, S.: The Security Development Lifecycle, vol. 8. Microsoft Press, Redmond (2006)
7. ISO: ISO/IEC 27001:2013 Information technology — Security techniques — Information security management systems — Requirements (2013). https://www.iso.org/standard/54534.html
8. ISO: ISO/IEC 27002:2013 Information technology — Security techniques — Code of practice for information security controls (2013). https://www.iso.org/standard/54533.html
9. ISO: ISO/IEC 27010:2015 Information technology — Security techniques — Information security management for inter-sector and inter-organizational communications (2015). https://www.iso.org/standard/68427.html
10. Künzel, M., Kraus, T., Straub, S.: Collaborative engineering – characteristics and challenges of cross-company partnerships in the integrated engineering of products and supporting services (2020)
11. Lu, S.Y., Elmaraghy, W., Schuh, G., Wilhelm, R.: A scientific foundation of collaborative engineering. CIRP Ann. 56(2) (2007)
12. Padula, G., Dagnino, G.B.: Untangling the rise of coopetition: the intrusion of competition in a cooperative game structure. Int. Stud. Manag. Organ. 37(2), 32–52 (2007)
13. Pries-Heje, J., Johansen, J.: SPI Manifesto (2010). https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf
14. RTCA: DO-297 – Integrated Modular Avionics (IMA) Development Guidance and Certification Considerations (2005)
15. Steghöfer, J.P., et al.: The MobSTr dataset: model-based safety assurance and traceability, June 2021. https://doi.org/10.5281/zenodo.4981481
16. Trei, M., Maro, S., Steghöfer, J.-P., Peikenkamp, T.: An ISO 26262 compliant design flow and tool for automotive multicore systems. In: Abrahamsson, P., Jedlitschka, A., Nguyen Duc, A., Felderer, M., Amasaki, S., Mikkonen, T. (eds.) PROFES 2016. LNCS, vol. 10027, pp. 163–180. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49094-6_11
17. Watkins, C.B.: Integrated modular avionics: managing the allocation of shared intersystem resources. In: IEEE/AIAA Digital Avionics Systems Conference (2006)
18. Wiener, M., Saunders, C.: Forced coopetition in IT multi-sourcing. J. Strateg. Inf. Syst. 23(3), 210–225 (2014)
19. Wuyts, K., Joosen, W.: LINDDUN privacy threat modeling: a tutorial. Technical report, Department of Computer Science, KU Leuven, Leuven, Belgium (2015)
20. Ziegenbein, D., Saidi, S., Hu, X., Steinhorst, S.: Future Automotive HW/SW Platform Design (Dagstuhl Seminar 19502). Dagstuhl Rep. 9 (2019)

Supporting the Growth of Electric Vehicle Market Through the E-DRIVETOUR Educational Program

Theodoros Kosmanis1(B), Dimitrios Tziourtzioumis1, Andreas Riel2,3, and Michael Reiner3

1 International Hellenic University, Thessaloniki, Greece
{kosmanis,dtziour}@ihu.gr
2 Univ. Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, 38000 Grenoble, France
[email protected]
3 ECQA GmbH, 3500 Krems, Austria
[email protected]

Abstract. This paper focuses on one of the current great challenges of electric vehicle technology: the implementation of a highly technical curriculum, especially under the COVID-19 restricted environment. The presented training program is designed to cater to the automotive market regarding basic electric vehicle skills for engineers. The most notable part of the training program is its blended teaching approach. The trainees attended typical online lectures, available in both a synchronous and an asynchronous manner. A significant part of the training is the two teaching mobilities, during which the students participate in technical experiments and work on projects based on Augmented Reality and developed on the principles of a project-based learning approach. The training is completed via a short industrial internship period. The paper elaborates on lessons learnt from the pilot educational procedure and provides a thorough discussion of the sustainability of the program and its importance for the electric vehicle market.

Keywords: Electric vehicles · HV batteries · electric powertrains · education · training

1 Introduction

Among the most common expressions heard during the past few years about Electric Vehicles (EVs) are the vast evolution of their technology and their positive environmental impact in the places of use [1, 2]. Their popularity has increased sharply in the preceding two decades. Reports on their share in the global automotive market, their role in Industry 4.0, or their rapid intrusion into and impact on the activities of everyday life are constantly being published, and personal estimations of their even more promising future are spread [3–6]. On the other hand, many organizations and automotive manufacturers announce a mid-term end of life of their internal combustion engine production [7, 8], whereas governments announce funding of electric vehicle purchases in an effort

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 171–185, 2023. https://doi.org/10.1007/978-3-031-42307-9_13


T. Kosmanis et al.

to increase their market share [9, 10]. Indeed, the electric vehicle industry is nowadays among the most rapidly evolving ones. However, as usually happens with new technologies, the wide public as well as many automotive professionals are not familiar with electric vehicle technology. Consequently, questions arise for everybody, even professionals: what are the attributes and most important shortcomings of "electrified" vehicles compared to "non-electrified" ones, or how can an electric vehicle be troubleshot or maintained?

The problem was highlighted by automotive organizations like the Institute of the Motor Industry (IMI) in Great Britain and the European Automobile Manufacturers' Association, as well as by various socioeconomic stakeholders, not as ignorance of the wide public or professionals, but as an estimated shortage, for the near future, of adequately trained EV technicians and experts [11–14]. Qualitative introductory and advanced training programs are offered by educational institutes as well as professional automotive organizations, especially in industrialized countries. However, the problem remains and grows.

Regarding higher educational levels, on the other hand, several initiatives in the frame of funded European projects focusing on EV technology have been taken, especially during the past decade. Some of them, like projects of the European Association for Electromobility (AVERE) [15], mainly focus on research around EVs rather than training. More educationally oriented projects can be found under the umbrella of the Automotive Skills Association (ASA) [16]; they provide analyses of skill requirements by the industry (DRIVES), focus on specific sectors of EV technology (batteries [17], cyber security [18], etc.), or offer full certification [19]. Similarly, specialized EV training is provided through the very few Master courses around Europe, mostly in the UK [20, 21].
The purpose of these initiatives was to set the basis for future systematic education and training programs. Still, the problem remains and grows, and market needs are not met. A European Erasmus+ educational project entitled "bEyonD the boRder of electrIc VEhicles: an advanced inTeractive cOURse" (E-DRIVETOUR) has been carried out during the past three years, with the training of engineering students in EV technology as its principal objective [22]. The project proposes the implementation of a series of learning approaches, namely synchronous and asynchronous online lecturing, live experiments, simulations, projects based on Augmented Reality (AR) technology [23, 24] leading to scientific-type papers, and a short industrial internship, thus formulating a blended educational scheme for training students. This paper describes the learning activities carried out in the context of the blended learning scheme of the E-DRIVETOUR project. Starting from the curriculum, with focus on the target group of trainees, the teaching topics, the learning outcomes and the teaching material requirements, the educational scheme is analyzed activity by activity. Emphasis is given to the laboratory experiments and the mobility periods, which constitute the most demanding and challenging parts of the training from both the students' and the trainers' points of view, especially after the outbreak of the COVID-19 pandemic, which introduced limitations that had to be overcome. Feedback on the quality of the activities was received from students, educators and company partners after the pilot implementation of the educational procedure, and lessons learnt as well as plans for future, wider versions of the program are discussed.

Supporting the Growth of Electric Vehicle Market Through the E-DRIVETOUR

173

2 The E-DRIVETOUR Initiative

The E-DRIVETOUR project was implemented by a consortium of nine (9) partners: three (3) universities, five (5) small and medium enterprises (SMEs) and one (1) research institute (Table 1). All organizations have long-standing, established experience in their fields. In addition, the consortium comprised individuals with vast experience in EU projects, research and technology diffusion. The aim of the E-DRIVETOUR project was the development of a course on Electric Vehicle technologies that is innovative from both the educational and the technological point of view, in order to fill the market void in technical and maintenance specialists created by the rapid expansion of this industry. The main objectives of the project can be summarized as follows:

• develop and adapt a joint, easily deployable curriculum between the participating universities, designed on the basis of an exhaustive needs analysis and focusing on a "real-life" transnational approach; the course should be recognised by academia and industry throughout the EU by using the ECTS credit system;
• create cost-effective, reconfigurable tools used across universities and companies, as well as an online platform with global access, to reduce the learning cost in academia and empower distant learners;
• ignite entrepreneurship by using interactive teaching and participation methods that boost innovative thinking, and immerse learners in this industry sector;
• offer lifelong learners a chance to augment their skills and keep them at the top of the employment market;
• set the foundations for developing a Master's course on electric mobility across European Union institutions.

Although there are educational programs throughout Europe focusing on the overall electric vehicle technology, they are mostly executed as single courses or parts of various courses in pre- or post-graduate programs on more general subjects.
Additionally, there are vocational training programs organised by industries with a focus on professionals. However, as stated in the introductory section, general unfamiliarity with electric vehicle technology is observed even among professionals. Besides supporting an apparent constant need of the automotive market for skilled personnel, E-DRIVETOUR provides certified hands-on knowledge of electric vehicle technology through a mixture of educational approaches (not only lectures, but also laboratories, tool demonstrators developed by students, industrial practice, and augmented reality tools), in order to align with modern educational trends and increase teaching efficiency. In its preparatory stages, the program included the definition of the course teaching requirements and syllabus, of the specifications for the laboratory experiments, and of the requirements (description and functionality) of the demonstrators that would be specially utilized for the program. Furthermore, the specifications and characteristics of an e-learning platform were set up. Teaching material was developed specifically for the educational procedure, as well as experimental exercises to accompany the lectures. All material was hosted on the e-learning platform and was made available to everyone interested in the subject of vehicle electrification.

Table 1. Partners in the E-DRIVETOUR consortium.

Competency           Consortium Partner                                         Acronym   Skills
Universities         International Hellenic University, GR                      IHU       Academic institutions, fundamental research
                     Kazimierz Pulaski University of Technology                 UTHR
                     and Humanities in Radom, PL
                     University of Craiova, RO                                  UOC
SMEs                 Cerca Trova Ltd, BG                                        CT        Technology development, applied research, analysis & optimization
                     Inteligg P.C., GR                                          INT
                     eZee Europe, BE                                            EZEE
                     eProInn s.r.l. – Energy and Propulsion Innovation, IT      EProInn
                     ECQA GmbH, AT                                              ECQA
Research Institutes  Hellenic Institute of Transport, Centre for Research       HIT       Industrial & Social research, Innovation
                     and Technology Hellas, GR

3 Training Curriculum

The main activity of the project was the development and implementation of the academic curriculum. The curriculum is composed of twenty-four (24) courses divided into two time periods, named the first teaching period (TS1) and the second teaching period (TS2). Besides the lectures, both time periods include two university visits for the students, one at the International Hellenic University (IHU), Thessaloniki, Greece and one at the University of Technology and Humanities in Radom (UTHR), Poland, named mobility periods A & B (Fig. 1).

Fig. 1. Educational schedule including the mobility periods (weeks 1–35, February to September, with the mobility blocks placed at the end of each teaching period).

The educational scheme implemented in the frame of the E-DRIVETOUR project included:

• 127 h of online (web) lectures, divided into two time periods of about two months each;
• 2 mobility periods of 14 days each, taking place at the end of each online lecturing period, during which all attending students gathered at one of the universities to participate with physical presence in lectures and laboratory experiments (about 150 teaching hours);


• 14 days of practical training at one of the industrial partners of the project (third mobility period), taking place after the end of the previous activities;
• 2 medium-sized projects elaborated by the students in groups at each of the two universities, each eventually leading to a scientific-type paper prepared per group.

The initial planning of the program included the enrolment of 40 students: 16 from IHU, 16 from UOC and 8 from UTHR. These 40 students would be taught the same course simultaneously (online) in order to have the same knowledge and understanding when participating in the two large-scale laboratories (Mobility Periods A and B), where they would work on topic laboratories and on their two medium-sized projects, as well as during their practice period (Mobility Period C). Regarding the educational approaches per topic, the students had to elaborate homework or small-scale projects, personally or in groups, as well as lab reports. Besides the online lectures, the remaining lecturing hours, together with laboratory experimental sessions, were offered to the students during the first two mobility periods. The students were divided into six groups and worked for a month in total on appropriately developed experiments on selected topics, such as Automotive Energy Sources, Lightweight Materials, Autonomous Vehicles, etc. Lab reports were required for all experiments, in which the results had to be presented and commented on. Local language lessons and cultural activities gave the mobility periods a multidisciplinary character. A short but significant insight into the real world of vehicle electrification was provided to the students through the practical training mobility. The students were placed in four of the industrial partners, all active in electric vehicles from various points of view. This last learning mobility had a duration of 14 days per student.
The four companies that hosted students during the training mobility were eProInn, Salerno, Italy; TRIGGO, Warsaw (Łomianki), Poland; eZee Europe, Beauvesant, Belgium; and Inteligg P.C./Hellenic Institute of Transport, Thessaloniki, Greece. The medium-sized projects completed the overall training of the students in an innovative way. The students had to familiarize themselves with Augmented Reality (AR) technology through freeware software [25] and appropriately selected or developed 3D models of equipment and electric powertrain parts [26]. Figure 2 presents an indicative AR model of an electric trike developed by the students. The projects were concluded with the corresponding reports and the preparation of a scientific-type paper that was presented in a specially organized session of an international workshop. Finally, the overall training was completed with the exams. All students who had attended the lectures, online and in-class, participated in all three mobility periods, delivered the project paper and succeeded in the final exams would receive the skill certification provided by ECQA GmbH, a partner in the project. By the end of the project, 16 out of the 33 students who completed the course received the final certification. Those who did not succeed in the exams will have the opportunity to retake them in the future. The list of the 24 courses of the curriculum is given in Table 2, with a brief analysis of the teaching hours. The overall equivalence in the European Credit Transfer and Accumulation System is 20 ECTS.


Fig. 2. Students' medium-sized project: indicative small-scale electric vehicle (tricycle). (Top) Construction; (Bottom) model in the AR application.

Table 2. Training topics and teaching methods (R: Remote, I: In-class, S: Simulation). (The per-course breakdown of in-class theory, laboratory and web teaching hours is not reproduced here; the lab type is given in parentheses where unambiguous.)

Courses: Introduction to Vehicle Electrification (R & I); Automotive Energy Sources (I); Lightweight Materials (I); Introduction to Vehicle Dynamics; EV Production Management; Electric Motors & Motor Drives for EVs; NI LabVIEW Training; Data Acquisition and EV Sensors; Autonomous Vehicles (I); EV Business Administration and Automotive Marketing; Language Lessons; EV Energy Storage Systems (I & S); EV Charging Systems (I & S); Mechanical Drivetrains for EVs (I); Control System Development (S); EV Public Policies; EVs and Smart Gridding (I); Life Cycle Assessment of EVs (S); Sustainable Transportation; EV On Board Diagnostics, Troubleshooting & Maintenance; Language Lessons; Intermediate Project 1; EV System Modelling and Simulation (S); Intermediate Project 2; Industrial Practice (80 h).

TOTAL teaching hours row as extracted: 35, 145, 68, 412.

4 Program and Course Assessment

The project in general, and the piloting course in particular, were beneficial to all participants, from trainees and trainers to academic institutions and company partners. Feedback on the opinions of all groups involved was acquired through appropriate questionnaires. A Likert scale was used in order to simplify the answers and present them in a comprehensible way: for each question, the possible answers were five levels ranging from very unsatisfied (1) to very satisfied (5). The results, grouped by trainees, trainers and company partners, are presented below.

Table 3. E-DRIVETOUR Mobility Periods' Assessment by Trainees

Characterization Level     Number of Preferences per Characterization
                             Q1     Q2     Q3     Q4     Q5
Very Unsatisfied (1)          1      1      1      1      1
Unsatisfied (2)               0      0      1      0      0
Neutral (3)                   2      7      1      3      1
Satisfied (4)                 7      7     10      8      4
Very Satisfied (5)           18     13     15     16     22
Overall Average            4.46   4.11   4.32   4.36   4.64
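The averages in the last row of Table 3 are frequency-weighted Likert means over the 28 respondents per question, and the overall satisfaction of about 85% quoted in the text corresponds to the share of "satisfied" and "very satisfied" answers across all five questions. A minimal sketch of both computations (variable and function names are ours, not from the paper):

```python
# Reproduce the summary statistics of Table 3 (mobility-period assessment
# by trainees; each question received 28 responses on a 5-level Likert scale).
counts = {  # question -> responses per level (1)..(5)
    "Q1": [1, 0, 2, 7, 18],
    "Q2": [1, 0, 7, 7, 13],
    "Q3": [1, 1, 1, 10, 15],
    "Q4": [1, 0, 3, 8, 16],
    "Q5": [1, 0, 1, 4, 22],
}

def likert_mean(freqs):
    """Weighted mean of a 5-level frequency vector (levels 1..5)."""
    total = sum(freqs)
    return sum(level * n for level, n in enumerate(freqs, start=1)) / total

for question, freqs in counts.items():
    print(question, round(likert_mean(freqs), 2))  # Q1 4.46 ... Q5 4.64

# Share of "satisfied" (4) plus "very satisfied" (5) answers over all questions
positive = sum(f[3] + f[4] for f in counts.values())
answers = sum(sum(f) for f in counts.values())
print(f"overall satisfaction: {100 * positive / answers:.1f}%")  # ~85.7%
```

Running this reproduces the Overall Average row of Table 3 exactly, and the 120 positive answers out of 140 match the "about 85%" overall satisfaction mentioned in the discussion.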

At first, the students were asked to assess the quality of the mobility periods: accommodation facilities, university and company support in practical issues during mobility, student integration, the cultural schedule, and the overall support from teaching and training personnel were evaluated for each of the mobility periods. Specifically, the students were asked about their level of satisfaction with services related to:

• Q1: their university's support for the mobility;
• Q2: the accommodation conditions;
• Q3: the quality of the classes carried out within the mobility;
• Q4: the cultural program and student integration;
• Q5: the contact, support and relationship with teachers.

Evidently, the general impression is that all activities, educational and otherwise, were considered highly satisfying, with an overall satisfaction of about 85%. This is a notable result, as the number and variety of activities, the efforts for student integration and the extended mobilities constitute a complex educational scheme whose piloting implementation would be expected to cause some dissatisfaction. The dissatisfaction was small, and the negative comments will be taken care of in future versions of the program. Similar feedback on the mobility periods was also received from the trainers. Moreover, the trainers were asked to assess E-DRIVETOUR's trainees. As the latter had been selected according to their academic level and performance and their English language level, the teaching staff was requested to evaluate the quality of the selection procedure, judging mainly from the students' final performance. According to the teaching staff, the selected students had a good academic level, were active in the program, and had adequate to good knowledge of English. One could say, however, that the English level of the students was somewhat lower than expected. This is understandable, because an acquired English degree does not necessarily translate into comfortable attendance and good performance in an English-taught course; adequate time is required to adjust to such a situation. Finally, the teaching staff was satisfied with the students' performance, since the final grade for the quality of the selection was 4.52 out of 5.00. Figure 3 summarizes the aforementioned conclusions. The same questionnaire collected feedback about the students from the company partners. As depicted in Fig. 4, the results are more or less the same. Focusing, though,


Fig. 3. Trainees’ performance in E-DRIVETOUR activities as evaluated by the educators.

on the last question, about satisfaction with the technical skills of the students, lower scores are observed. On the one hand, this is understandable, as the students tend to be trained mainly theoretically, and they cannot be expected to have acquired a full range of technical skills at the age of 22 to 25 years. On the other hand, a significant gap in the current educational systems around Europe is revealed, providing food for thought on reconsidering engineering education.

Fig. 4. Company partners' perspective on E-DRIVETOUR trainees' skills and performance.

Finally, Fig. 5 presents the company partners' feedback on two significant questions: they were asked about the potential recruitment of some of the students and about the impact of the E-DRIVETOUR project on their company. The connection between the two pie charts is obvious. Those who considered the project very beneficial stated that they would possibly recruit students as employees. A beneficial impact was mostly connected to an equal chance of recruiting students or not, whereas a neutral


Fig. 5. Company partners' feedback regarding (a) possible future student recruitment and (b) the impact of participation in the E-DRIVETOUR project.

response regarding benefits was connected to an unlikelihood of recruitment. Of course, the sample of companies was small, but a positive stance on the impact of the E-DRIVETOUR project is rather obvious.

5 Project Sustainability

The E-DRIVETOUR project and educational curriculum proposed the implementation of a series of learning approaches for training engineering students in electric vehicle technologies. Evidently, the mobility periods played a crucial role in the overall educational procedure. As a blended learning approach was to be implemented, the only way to include laboratory sessions was to gather all students at one of the university partners. Although there was the option to execute some of the experiments remotely, it was much more efficient and educational for the students to be in one place all together. The mobilities significantly reinforced the integration of students and educators, something further enhanced by the fact that the students worked in international groups throughout these periods. Similar conclusions can be drawn for the industrial training mobility, whose role in the overall educational procedure is indisputable. The continuation of the project, and the further development and exploitation of its results after its end, involve two general axes of exploitation, each addressing a different target group. The first axis concerns a postgraduate course on electric vehicle technology aimed at university graduates and tertiary-level professionals. The second axis is related to specialized seminars for automotive technicians wishing to gain mainly practical knowledge of electric vehicles in order to expand their business capabilities. Specifically, the first goal is the establishment of a new postgraduate course on Electric Vehicle technology awarding 120 ECTS to the students who successfully attend it. The target group of potential students will therefore mainly be engineers (electrical, mechanical, automotive, etc.), without excluding other persons with an appropriate background. As the course is not unique in Europe, partnership extension to include more stakeholders


in Vehicle Electrification technology, and attempting cooperation or even merging with already existing, successful, similar courses, will be important for the continuation of the program. The course could maintain the basic elements of the piloting E-DRIVETOUR curriculum and its blended learning character, with the addition of carefully selected new topics supported by new partners. Evidently, the e-learning platform can be sustained, as can most of the educational material, with the necessary improvements of course. In general, the course is expected to maintain partly online lectures and obligatory attendance mainly for the laboratory experiments, which shall be enriched. The internship will also be maintained, since it has played a significant role in the program, but with additional options concerning the hosting companies. Finally, a dissertation, as practical as possible, will conclude the course. It is expected that the MSc modules and the internship will correspond to 90 ECTS, with the dissertation adding 30 ECTS for a 120 ECTS curriculum. In order to increase the attractiveness of the course, the increasingly popular micro-certification approach will be implemented for selected combinations of MSc modules. Micro-certification will be a flexible way of providing certified knowledge to students, seminar participants, etc. Instead of having to successfully attend a lengthy course, whose duration (e.g., 2 years) might prevent someone from joining it, learners can obtain certified attendance of specific combinations of course modules. This leads to two benefits: first, this format will attract more people; second, any graduate will have more benefits than just a degree. An effort will also be made to combine the overall MSc with a general certification. It is important to mention that the difficult part in designing the MSc course will be directly connecting all the certificates provided with actual skills dictated by related industrial needs.
For this purpose, efforts will be made to exploit the bonds with industry created during the E-DRIVETOUR project, that is, the existing partners and new potential partners, especially those having already expressed the will to cooperate. A possible cooperation with the Automotive Skills Alliance, and integration of the course into its platform, would definitely help connect the program with actual skills and would increase its dynamics. On this basis, this will not be just another postgraduate course in Europe, but a high-level one with actual benefits for its potential students. Regarding the specialized seminars for automotive technicians, the target group is apparent. As mentioned in the introductory section, there are many reports from professional organisations, automotive and related industries, and other stakeholders about an existing shortage of technicians skilled in electric vehicle technology, which is expected to increase in the forthcoming years. Even though several technical seminars aimed at automotive technicians, or enthusiasts in general, have already been launched, the needs of the automotive labour market are so great that any additional seminar is considered beneficial, especially if it is a certified seminar on skills demanded by the industry. The basic principles for such a seminar are its mainly experimental character and its connection not only with industry needs, but also with national (depending on the country) and EU certification requirements for repair shops to be capable of troubleshooting and maintaining electric and hybrid electric vehicles. Of great importance is the addition of a course on working environment safety, which is currently


missing from the E-DRIVETOUR curriculum. A survey of the current situation in various European countries is ongoing.

6 Discussion and Conclusions

The E-DRIVETOUR project provided in its piloting phase, and will provide in the future, an excellent opportunity for Higher Educational Institutes and labour actors to jointly develop and implement a new learning and teaching experience related to the rapidly evolving electric vehicle industry. Students as well as employees face difficulties in achieving a successful career in the electric automotive industry, mainly due to unfamiliarity with the subject. The shortage of experienced professionals and experts in general is constantly mentioned by professional organisations, automotive and related industries, and other stakeholders. Since this is a domain that is only now emerging and is not adequately covered academically, the project was an innovative attempt. It brought together the needs and trends of the labour market with an advanced, well-prepared academic educational experience, benefiting both parties. There are plenty of highly valued effects resulting from this project. It:

• brought electric vehicles and educational results to the attention of relevant decision-makers (of the labour market, policy makers, etc.);
• increased the awareness of experts in the field of electric vehicle technology;
• placed the focus on project highlights, such as good practices in the field;
• established a fruitful cooperation between academia and labour actors;
• enriched curricula, supporting especially young workers in advancing their employability;
• developed ideas for future collaboration with other stakeholders;
• used parts of the developed training material in undergraduate and postgraduate courses of the academic partners, as well as in procedures at the industrial partners;
• served as a lever for developing similar projects and actions in other academic institutions;
• created interest from industry actors in sending their staff for training.

Regarding the courses, it was proved that a blended learning approach requires a lot of effort to implement. Nevertheless, it can be much more efficient than any traditional teaching approach. It is believed that more experimental sessions during the mobility periods would be even more beneficial for the students; labs are always very educational and increase understanding. In the same direction, the AR project provided a better insight into vehicle electrification technology, as it required the students to work with simplified but realistic powertrains and the datasheets of powertrain parts. It also provided additional motivation, as AR is considered a very attractive technology by trainees. The opportunity provided to the students to work for a short period of time in a professional environment strongly motivated them. Of course, all three mobilities caused inconvenience to some of the students, since they were required to be away from their place of origin for about 1.5 months. However, the remark of one of them in the questionnaire summarizes the case: "A little tiring but awesome!".


In total, it seems that the blended learning approach and the exploitation of modern technology increase the students' interest, commitment and knowledge assimilation, at the cost of the greater effort required from the organizers and from the students themselves.

7 Relationship with the SPI Manifesto

Europe-wide qualification and certification of modern, digital skills has been one of the backbones of EuroAsiaSPI2 ever since its creation. In particular, the integration of diverse competences in product, service, and systems design has been identified as a key principle and success factor [27–29]. The SPI Manifesto [30, 31], created in this community, defines the values and principles required to deploy SPI efficiently and effectively. The principle "Create a learning organization" means that organizations need to continuously qualify and re-qualify personnel to strengthen and renew core competences in the rapidly evolving contexts of the digital era. The Digitalization of Industry and E-Mobility have been among the core thematic workshops and communities of EuroSPI for many years, placing this paper, related not only to the development of a skill certification training program but also to an electric vehicle training program, right at the centre of the SPI community's interest.

Acknowledgements. This project is a highly collaborative endeavour requiring the intense contributions of a huge number of individuals. We regret that we cannot cite all their names here and want to express our thanks to them in this way. Some of them, however, gave particularly valuable input to the work programme published in this article without being cited as co-authors. Special thanks are therefore due to George Katranas, Panagiotis Maroulas, Krzysztof Gorski, Iwona Komorska, Costin Badica, Ionut Muraretu, Matteo Marino, Christos Ioakeimidis, Dimitris Margaritis, Fotis Stergiopoulos, Dimitris Bechtsis, Dimitris Triantafyllidis, Panagiotis Tzionas, Triantafyllia Anagnostaki, Sofia Sarakinioti, Vasileios Kartanos and Emmanouela Maroukian. The E-DRIVETOUR project [22] is financially supported by the European Commission in the Erasmus+ Programme under project number 612522-EPP-1-2019-1-EL-EPPKA2-KA.
This publication reflects the views only of the authors, and the Commission cannot be held responsible for any use which may be made of the information contained therein.

References

1. Ehsani, M., Gao, Y., Longo, S., Ebrahimi, K.: Modern Electric, Hybrid Electric and Fuel Cell Vehicles, 3rd edn. CRC Press (2018). ISBN 978-0429504884
2. Denton, T.: Electric and Hybrid Vehicles. Taylor & Francis Ltd. ISBN 978-0367273231
3. Aravena, C., Denny, E.: The impact of learning and short-term experience on preferences for electric vehicles. Renew. Sustain. Energy Rev. 152 (2021). https://doi.org/10.1016/j.rser.2021.111656
4. International Energy Agency, Global EV Outlook 2022. https://www.iea.org/reports/global-ev-outlook-2022. Accessed 10 Feb 2023


5. Wellings, J., Greenwood, D., Coles, S.R.: Understanding the future impacts of electric vehicles—an analysis of multiple factors that influence the market. Vehicles 3(4), 851–871 (2021). https://doi.org/10.3390/vehicles3040051
6. Krishna, G.: Understanding and identifying barriers to electric vehicle adoption through thematic analysis. Transp. Res. Interdisc. Perspect. 10 (2021). https://doi.org/10.1016/j.trip.2021.100364
7. Volvo Cars Global Newsroom, Volvo Cars to be fully electric by 2030. https://www.media.volvocars.com/global/en-gb/media/pressreleases/277409/volvo-cars-to-be-fully-electric-by-2030. Accessed 10 Feb 2023
8. European Council for an Energy Efficient Economy, EU nations approve end to combustion engine sales by 2035. https://www.eceee.org/all-news/news/eu-nations-approve-end-to-combustion-engine-sales-by-2035/. Accessed 08 Feb 2023
9. European Automobile Manufacturers' Association, Overview – Electric vehicles: tax benefits & purchase incentives in the European Union. https://www.acea.auto/fact/overview-electric-vehicles-tax-benefits-purchase-incentives-in-the-european-union-2022/. Accessed 08 Feb 2023
10. Clinton, B.C., Steinberg, D.C.: Providing the spark: impact of financial incentives on battery electric vehicle adoption. J. Environ. Econ. Manag. 98 (2021). https://doi.org/10.1016/j.jeem.2019.102255
11. Institute of the Motor Industry, Automotive industry ramps up EV qualified technicians. https://tide.theimi.org.uk/industry-latest/news/automotive-industry-ramps-ev-qualified-technicians. Accessed 10 Feb 2023
12. Forbes, Repair Tech Shortage Costing Motorists Time and Money, CCC Study Shows. https://www.forbes.com/sites/edgarsten/2022/03/15/repair-tech-shortage-costing-motorists-time-and-money-ccc-study-shows/?sh=24848ed66ca0. Accessed 10 Feb 2023
13. Bloomberg, Britain's Electric Car Dream Threatened by Shortage of Mechanics. https://www.bloomberg.com/news/articles/2022-12-07/britain-s-electric-car-dream-threatened-by-shortage-of-mechanics#xj4y7vzkg. Accessed 10 Feb 2023
14. Alotaibi, S., Omer, S., Su, Y.: Identification of potential barriers to electric vehicle adoption in oil-producing nations—the case of Saudi Arabia. Electricity 3(3), 365–395 (2022). https://doi.org/10.3390/electricity3030020
15. The European Association for Electromobility (AVERE). https://www.avere.org/projects2
16. Automotive Skills Alliance (ASA). https://automotive-skills-alliance.eu/
17. Alliance for Batteries Technology, Training and Skills (ALBATTS). https://www.project-albatts.eu/en/home
18. Cybersecurity Engineer and Manager – Automotive Sector (CYBERENG). https://www.project-cybereng.eu/
19. ECQA Certified Electric Powertrain Engineer (ECEPE). https://www.project-ecepe.eu/
20. Universitat Rovira i Virgili, Master's Degree in Electric Vehicle Technologies. https://www.urv.cat/en/studies/master/courses/electric-vehicle/
21. Nantes Université, Master's Degree in Electric Vehicle Propulsion and Control (E-PiCo). https://www.ec-nantes.fr/study/erasmus-mundus-joint-master-degrees/electric-vehicle-propulsion-and-control-e-pico
22. Erasmus Plus KA2 GA 612522-EPP-1-2019-1-EL-EPPKA2-KA, E-DRIVETOUR: Beyond the border of electric vehicles: an advanced interactive course. https://www.EDRIVETOUR.eu/. Accessed 04 Mar 2023
23. Gonzalez-Rubio, R., Khoumsi, A., Dubois, M., Trovao, J.P.: Problem- and project-based learning in engineering: a focus on electrical vehicles. In: Proceedings of the IEEE Vehicle Power and Propulsion Conference, pp. 1–6 (2016)



Towards User-Centric Design Guidelines for PaaS Systems: The Case of Home Appliances

José Hidalgo-Crespo(B)

and Andreas Riel

Grenoble Alpes University, CNRS, Grenoble INP, G-SCOP, 46 Avenue Félix Viallet, 38000 Grenoble, France [email protected]

Abstract. Moving towards Circular Economy often implies designing Industrial Product-Service Systems (IPS²). For established products on the consumer market, the most obvious IPS² model to go for is Product-as-a-Service (PaaS), i.e., providing the product to consumers in a sort of leasing model. However, this move is generally confronted with huge challenges of customer acceptance. This research aims at establishing a method for determining design guidelines for the user-centric design of products, services, and business models for Product-as-a-Service (PaaS) systems. It seeks to provide determinants of consumers’ decisions regarding the acceptance and use of PaaS. In particular, it studies leasing options for white goods (washing machines, fridges, and kitchens) through a user-centric methodology based on a customer acceptance survey. The results allowed an understanding of consumer expectations and desires for leasing, and of which socio-demographic factors as well as product and service attributes influence or even determine PaaS acceptance. Keywords: Design · Design Method · User-Centric

1 Introduction

Current production and consumption patterns are widely understood to be a major cause of modern global environmental challenges. Circular business models such as product-as-a-service (PaaS) systems have been heralded as a key means to decouple value creation from resource use and waste generation. Product leasing is a kind of use-oriented PaaS system in which the ownership of the product remains with the service provider [1], while the customer can utilize the service that the product offers over a certain period. However, as of 2022, we are still far from real-life implementation of these concepts, especially in developing countries, where the concept of ownership is deeply rooted. One plausible reason is that user requirements are continuously changing, and PaaS system suppliers must continuously adapt their business models to meet them [2]. In fact, previous studies have shown that consumers’ reluctance to accept PaaS systems is one of the major barriers to their success [2, 3]. For instance, it has been noted that the lack of ownership can significantly affect consumers’ interest in PaaS systems [4]. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 186–195, 2023. https://doi.org/10.1007/978-3-031-42307-9_14


This observation is consistent with Brissaud et al.’s finding that this is a key PSS design challenge, and with their proposal to foster human-centric and value-driven design [2]. From a product design perspective, this challenge relates to the mass customization principle [5]. As for manufacturing systems, the reconsideration of value creation in production through the integration of several disciplines and stakeholders’ viewpoints [6], as well as value co-creation in manufacturing and servicing [7], are particularly relevant aspects addressed previously in the community. An essential finding of [2] is also that designers lack methods and guidelines that allow them to consider the user-centric co-design of services and their underlying products, as well as of business models that link those together with a desired level of user acceptance. In a PSS context, any product will be used by several different users during its lifetime, which implies the need for such design guidelines to consider several user groups, which can be characterized by different kinds of attributes. This research aims at proposing a method for establishing user-centric PSS design guidelines based on the systematic investigation of the relevance of essential product attributes and functions for user group acceptance. More specifically, the research questions guiding this article are: How can guidelines be determined that help designers design products in such a way that consumers will accept them being provided in a PaaS business model? (RQ1) What are the determinants of consumers’ decisions regarding the acceptance and usage of PaaS? (RQ2) By providing elements of answers to these questions, this paper contributes to the Circular Economy and PaaS design literature by giving insights into consumers’ PaaS acceptance behaviors and what drives them, together with the repercussions on the design of products and business models.
The rest of this manuscript is structured as follows: The second section provides a literature review on the concepts of PaaS systems and discrete choice experiments. The third section introduces the method used to come up with design guidelines for the specific product group under investigation. The fourth section elaborates on the design guidelines based on the underlying data analysis and the regression framework. The fifth section concludes with a summary and a critical analysis of the contributions and limitations of this work.

2 State of the Art

2.1 PaaS Systems Concepts

PaaS systems can be understood as a solution that integrates services with products through alternative product uses and adds more value for customers. Their ultimate objective is to improve a company’s profitability and competitiveness, as well as to satisfy customers’ needs while minimizing environmental impact [8]. There are three types of PaaS systems: result-oriented, product-oriented, and use-oriented [1, 9, 10]. Use-oriented PaaS systems are generally acknowledged in the access-based service or sharing economy research field [1, 10]. In this last type of system, the service providers sell the accessibility and use of specific products, but they keep the ownership and retain some responsibilities for keeping the equipment operational, such as maintenance and moving services. Use-oriented PaaS systems, such as leasing, are often suggested to reduce the material usage and environmental footprint of electrical and electronic equipment, such as washing machines, kitchens, and fridges [1]. However, a major condition for their acceptance is that the PaaS system proposal provides the consumer with an innovative and more advantageous alternative to traditional buying systems. To achieve this, manufacturers need to understand what drives consumers and how product and service characteristics may affect the way they live their lives [4].

2.2 Design and Consumer Acceptance

Understanding users’ expectations and desires with PaaS systems is a challenge faced by many manufacturers and needs to be considered during product and business model design. A deeper appreciation of users can allow the development of new product solutions that integrate value as well as cognitive and experiential benefits (user-centric). In fact, designing interaction models in such a manner that the attributes relevant from a user’s point of view are considered would increase the likelihood of user adoption (acceptance) of such systems. One previous study has shown that socio-demographic variables are more influential than personality traits [11]. It indicated that age is a critical variable for foreseeing problems with consumer electronic products (e.g., ease of use is more important for the older generation); education also plays a significant role, sometimes in unexpected ways. It has been observed that highly educated people are more focused on the operability of electronic products [11]. Scholars have identified that consumer acceptance depends on three main aspects: socio-demographics (gender, age, income, social status, household size, and education level), product attributes (washing cycles, fridge volume capacity, and the number of kitchen burners), and service attributes (contract time and monthly fee) [12, 13].
The alignment of companies with these preferable product and service attributes, together with the consideration of the socio-demographic characteristics of the population, can provide a comprehensive understanding that allows companies to coordinate and adjust all necessary changes at the early stages of product and business model development, since modifications become extremely difficult and costly as the design process advances.

2.3 Discrete Choice Experiment

Consumer choice modeling, and particularly the use of Discrete Choice Experiment (DCE) models, has become popular in the engineering design domain due to its ability to exploit target consumers’ preferences and design attributes in order to predict future market demand [14]. A DCE is a survey-based quantitative technique used especially for eliciting individual preferences [15]. It has been used in a variety of disciplines, such as marketing, psychology, environmental science, and economics [16]. In a DCE, consumers’ decisions can be modeled based on the desired product attributes, leasing service attributes, and socio-demographic conditions [17].
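DCE responses are typically analysed with a discrete choice model such as the multinomial logit. The paper does not specify its estimator, so the following is only a generic sketch of how choice probabilities over the three payment models would be computed; the utility values are hypothetical:

```python
import math

def choice_probabilities(utilities):
    """Multinomial logit: P(i) = exp(V_i) / sum_j exp(V_j).

    In a fitted model, each systematic utility V_i would combine the product,
    service, and socio-demographic attributes of option i; here the values
    are purely illustrative.
    """
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical utilities for buy-cash, buy-credit, and pay-per-month leasing
probs = choice_probabilities([0.2, -0.5, 0.8])
print([round(p, 3) for p in probs])  # leasing receives the highest probability
```

The probabilities always sum to one, so the model directly predicts market shares among the alternatives presented in a choice task.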

3 Methodology

The research approach chosen is a DCE for putting white goods in the PaaS context. The DCE choices were based on the specific product and business model attributes that were likely to significantly influence PaaS customer acceptance. The white goods washing machines, kitchens, and fridges were selected due to their prevalence and frequency of use and their environmental impact when not redistributed through proper recovery channels. The Latin American population was chosen for the case study. A survey approach was followed to capture consumers’ acceptance of leasing white appliances. In a DCE, consumers’ decisions can be defined based on the desired product and usage attributes, together with the consumer’s socio-demographic characteristics. In this study, we investigate consumers’ acceptance of PaaS leasing services using a similar set of attributes to construct the choice model. In this DCE, respondents selected their preferred option out of a predetermined set of alternatives. Table 1 provides the types of variables considered in the decision model. These characteristics were obtained by inspecting the websites of the principal retail companies in the country. Specifically, respondents were asked to select their most preferred white appliance out of three different business models: (1) buy-cash, (2) buy-credit, and (3) pay-per-month leasing. In the buy-cash model, the traditional one, the product is bought directly from the retailer. In the buy-credit model, the product is bought from the retailer with a direct 18-month credit, normally with high interest rates, meaning that in the end, double or triple the cost of the product is paid. For the pay-per-month leasing model, the approach of [12] was followed. The participants were given 30 distinct choice tasks (10 per appliance) to complete in succession. Every choice task contained the three options of the DCE ((1) buy-cash, (2) buy-credit, and (3) pay-per-month leasing), each with different product and service attributes. The respondents were then asked which of these choices they would pick, taking into account the “utility” they had gained from their attributes. If they did not want to choose any of the options offered, they could simply choose the none option. To determine the minimum number of samples needed to obtain reasonably accurate data, the central limit theorem was used, as also applied by [1, 18]. The following Eq. (1) was used for the calculations:

n = k²·p·q·N / (ε²·(N − 1) + k²·p·q)    (1)

where n is the minimum number of samples, k is a constant that depends on the level of confidence, ε is the sampling error, p is the proportion of inhabitants that possess the characteristic we seek, and q is the proportion that do not (we assumed 0.5 for each). Counts analysis, despite simply being a type of descriptive statistics, is still an insightful way to look at findings. Counts offer a quick and automatic way to determine how often a particular payment model or attribute level was selected. The number of “wins” for each level is determined by how frequently an option containing that level is selected out of the total number of times it appeared in a choice task [12]. Correlation coefficients are used to measure how strong a relationship is between two variables. There are several types of correlation coefficients, but the most popular ones are the Pearson (r) and Spearman (Rho) correlation coefficients. Like all correlation coefficients, both evaluate the strength of the relationship between two variables, and the two are therefore comparable. All bivariate correlation analyses use a single number between −1 and +1
to describe the degree of relationship between two variables. This number is called the correlation coefficient. A positive correlation coefficient indicates that as the values of one variable rise, the values of the other variable also rise, whereas a negative correlation coefficient expresses a negative relationship (as the values of one variable rise, the values of the other variable fall). If the correlation coefficient is zero, there is no correlation between the variables.

Table 1. Considered variables

Sociodemographic characteristics (all appliances):
– Gender: Male / Female
– Income level: 1. None, 2. < $420.00, 3. $421.00–$840.00, 4. $841.00–$1,260.00, 5. $1,261.00–$1,680.00, 6. $1,681.00–$2,100.00, 7. > $2,100.00
– Social status: 1. Low, 2. Middle-Low, 3. Middle, 4. Middle-Upper, 5. Upper Class
– Household size: 1, 2, 3, 4, 5, 6, > 6
– Age of household head (years old): 1. < 26, 2. 26–36, 3. 37–47, 4. 48–58, 5. 59–69, 6. > 69
– Education level of household head: 1. None, 2. Primary school, 3. High school, 4. Pre-grade, 5. Post-grade

Product attributes:
– Washing machine: washing cycles: 11, 12, 14
– Kitchen: number of burners: 4, 5, 6
– Fridge: volume capacity (liters): 610, 623, 751, 760

Service attributes (all appliances):
– Contract time (months): 6, 12, 18, 24, 36, 48, 60, 72
– Monthly fee (USD): $10.00, $15.00, $20.00, $25.00, $30.00, $35.00
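The counts (“wins”) analysis described in the methodology can be sketched as follows; the choice tasks below are hypothetical, using washing-cycle levels as the attribute under study:

```python
from collections import defaultdict

# Hypothetical choice tasks: each lists the attribute level shown with each
# payment-model option and which option the respondent chose.
tasks = [
    {"shown": {"buy-cash": 12, "buy-credit": 14, "leasing": 11}, "chosen": "leasing"},
    {"shown": {"buy-cash": 11, "buy-credit": 12, "leasing": 14}, "chosen": "buy-cash"},
    {"shown": {"buy-cash": 14, "buy-credit": 11, "leasing": 12}, "chosen": "leasing"},
]

appearances = defaultdict(int)  # times a level was shown in any option
wins = defaultdict(int)         # times the chosen option carried that level
for task in tasks:
    for option, level in task["shown"].items():
        appearances[level] += 1
        if option == task["chosen"]:
            wins[level] += 1

# Win rate per level: wins divided by appearances, as described in [12]
win_rate = {level: wins[level] / appearances[level] for level in appearances}
print(win_rate)
```

In this toy data set, level 11 wins two out of three appearances, level 12 one out of three, and level 14 none, illustrating how the descriptive counts pick out preferred attribute levels.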

The key distinction between the two correlation coefficients is that the Pearson coefficient captures a linear relationship between the two variables, whereas the Spearman coefficient also works with monotonic (non-linear) correlations. The P-value is the probability of obtaining the current result if the correlation coefficient were in fact zero (null hypothesis). If this probability is lower than the conventional 5% (P < 0.05), the correlation coefficient is called statistically significant.


4 Results

Tables 2, 3, and 4 summarize the DCE results. A total of 3,569 surveys were collected during the project, representing a 99% confidence level and a 2.15% margin of error according to Eq. (1). By gender, acceptance rates are very similar. Among the sociodemographic characteristics, only the social status and the level of education of the household head have a significant positive influence on the leasing level of acceptance. For the service attributes, the study did not find any significant relationship. Finally, for the product attributes, this study found a significant positive relationship between the leasing level of acceptance and the fridge volume capacity, meaning that people tend to favor larger fridges. Even though the regression analysis did not reveal further positive or negative influences between the acceptance ratio and socio-demographics, product, and service attributes, the findings still suggest that the younger (<69) population has the highest acceptance rate for the leasing of white appliances. Regarding contract time, the population prefers 48 months for washing machines, 24 months for fridges, and only 6 months for kitchens. The reason behind the short contract time for kitchens may be that the retail price of the best kitchen models on the market (around $400.00) is more approachable than those of fridges and washing machines (between $1,000.00 and $1,500.00). The accepted monthly fee is highest for fridges ($35.00), followed by washing machines ($30.00), with a much lower value for kitchens ($10.00). For the product attributes, higher acceptance rates are found at 12 washing cycles for washing machines, four burners for kitchens, and a 760 L volume capacity for fridges. Given that most of the population in the country lives in houses (76.66% of the sampled population) rather than in apartments (15.95%), people prefer bigger appliances with higher capacities.
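The reported sample size can be cross-checked against Eq. (1) with a short script, taking the standard z-value of 2.576 for a 99% confidence level and p = q = 0.5 as stated in the methodology:

```python
import math

def min_sample_size(N, k, eps, p=0.5):
    """Eq. (1): n = k^2*p*q*N / (eps^2*(N - 1) + k^2*p*q)."""
    q = 1 - p
    return k**2 * p * q * N / (eps**2 * (N - 1) + k**2 * p * q)

def margin_of_error(n, k, p=0.5, N=None):
    """Sampling error implied by Eq. (1) for a given sample size n."""
    q = 1 - p
    if N is None:  # large-population approximation
        return k * math.sqrt(p * q / n)
    return math.sqrt(k**2 * p * q * (N - n) / (n * (N - 1)))

K_99 = 2.576  # z-value for a 99% confidence level
# 3,569 responses imply roughly a 2.16% margin of error for a large population,
# consistent with the 2.15% reported above.
print(round(margin_of_error(3569, K_99) * 100, 2))
```

The `margin_of_error` helper is Eq. (1) solved for ε; the population size N used in the actual study is not given in the text, so the large-population approximation is shown.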
As a key result of the study, Table 2 suggests design guidelines for product and business model development based on the acceptance rates of the model (highlighted in bold in Table 2). While some of these guidelines can be specific to a product or product group, the approach proposed to come up with them can be generalized to guide PaaS design guideline determination:

1. Define hypotheses around product attributes that may influence PaaS acceptance.
2. Define hypotheses around service attributes that may influence PaaS acceptance.
3. Define a PaaS business model (e.g., leasing), and hypotheses around this model’s parameters.
4. Perform a DCE with the PaaS target audience population.
5. Define the significance threshold.
6. For the product/service attributes and business model parameters that show significant influence, formulate guidelines for designers.
7. Evaluate the applicability of the determined guidelines to other product groups and PaaS business models.

This method responds to RQ1 and leads to user-centric design guidelines that should foster PaaS customer acceptance. With their support, designers can make decisions on product and business model development at early stages. More importantly, in the current context of more and more products being moved into circularity through PaaS, designers can use these guidelines to focus their attention on the modification of product attributes that will be decisive for the PaaS. In cases where product design


Table 2. Results: Leasing Acceptance (LA), coefficients, and product/business model design guidelines – Sociodemographic Variables

Table 3. Results: Leasing Acceptance (LA), coefficients, and product/business model design guidelines – Service Attributes

modifications are not possible or desired, these guidelines still help in the design of the service process and the business model. As for RQ2, the study found that social status, the level of education of the household head, as well as the fridge volume capacity had a strong and positive influence on leasing acceptance, meaning that as social status and level of education increase, so does the willingness to accept leasing. Consumer age also has an influence, and it can be inferred that the population will prefer bigger, higher-capacity white appliances when choosing to lease rather than buy. Although the specific study results cannot be generalized without further, more diverse studies, the results show that


Table 4. Results: Leasing Acceptance (LA), coefficients, and product/business model design guidelines – Product Attributes

both socio-demographic and product determinants have a significant influence on PaaS acceptance.

5 Conclusions

This research aims to establish a method for determining PaaS design guidelines based on the identification of the most user-relevant design attributes of products, services, and business models through discrete choice experiments carried out with a representative sample of the target population. The particular case study investigates the leasing of white appliances (washing machines, fridges, and kitchens) in a PaaS context through the development of a DCE and a household survey in Latin America. The proposed method uses different product and service attributes together with socio-demographic characteristics to understand what drives the population to increase their acceptance of leasing white appliances instead of buying them. It delivers insights that are deep enough for specifying design guidelines for the product, the service (process), as well as the underlying business model. These guidelines foster user-centric design and can also provide the basis for establishing service scenario simulations to mitigate PaaS market acceptance risks. Given the intrinsically huge variety of aspects that user-centric design methods for IPS² (and PaaS in particular) need to cover, this contribution has limitations that inspire opportunities for future work. So far, the proposed method has been applied in one case study around white goods. While the method is completely generic, the results achieved for RQ2 may be quite specific to this type of product. Studies in other product domains and with different populations are needed to assess the generalizability of the findings. Furthermore, the design guidelines established from the study findings are of a qualitative rather than quantitative nature. Complementing the method with PaaS scenario simulation facilities could help take the next step towards more quantitative guidelines.


6 Relationship with the SPI Manifesto

The SPI Manifesto requests businesses to “use dynamic and adaptable models as needed” [19]. As major industries, including the automotive industry, are moving into PaaS models to satisfy the market need for pay-per-use models, this work contributes to this area by explicitly considering consumer needs and expectations.

Acknowledgment. This research has been supported by the SCANDERE (Scaling up a circular economy business model by new design, leaner remanufacturing, and automated material recycling technologies) project granted from the ERA-MIN3 program under grant number 101003575 and funded by the project partner countries’ national funding agencies. This particular initiative has been co-funded by the French ADEME (Ecologic Transition Agency) under contract number 2202D0103. The authors want to sincerely thank the SCANDERE consortium members, in particular Prof. Daniel Brissaud, Prof. Joost Duflou, and Prof. Tomohiko Sakao, for their valuable contributions.

References

1. Tukker, A.: Product services for a resource-efficient and circular economy – a review. J. Clean. Prod. 97, 76–91 (2015)
2. Brissaud, D., Sakao, T., Riel, A., Erkoyuncu, J.A.: Designing value-driven solutions: the evolution of industrial product-service systems. CIRP Ann. 71, 553–575 (2022)
3. Moreno, M., et al.: Re-distributed manufacturing to achieve a circular economy: a case study utilizing IDEF0 modeling. Procedia CIRP 63, 686–691 (2017)
4. Rexfelt, O., Hiort af Ornäs, V.: Consumer acceptance of product-service systems. J. Manuf. Technol. Manag. 20(5), 674–699 (2009)
5. Tseng, M.M., Jiao, J., Merchant, M.E.: Design for mass customization. CIRP Ann. 45(1), 153–156 (1996)
6. Kaihara, T., et al.: Value creation in production: reconsideration from interdisciplinary approaches. CIRP Ann. 67(2), 791–813 (2018)
7. Ueda, K., Takenaka, T., Fujita, K.: Toward value co-creation in manufacturing and servicing. CIRP J. Manuf. Sci. Technol. 1(1), 53–58 (2008)
8. Manzini, E., Vezzoli, C.: A strategic design approach to develop sustainable product service systems: examples taken from the ‘environmentally friendly innovation’ Italian prize. J. Clean. Prod. 11(8), 851–857 (2003)
9. Tukker, A.: Eight types of product-service system: eight ways to sustainability? Experiences from SusProNet. Bus. Strateg. Environ. 13, 246–260 (2004)
10. Piscicelli, L., Cooper, T., Fisher, T.: The role of values in collaborative consumption: insights from a product-service system for lending and borrowing in the UK. J. Clean. Prod. 97, 21–29 (2015)
11. Kim, C., Christiaans, H.H.C.M.: The role of design properties and demographic factors in soft usability problems. Des. Stud. 45, 268–290 (2016)
12. Rombouts, S.: Towards a better understanding of consumer acceptance and valuation of product-service systems (PSS) – a discrete choice experiment on laundry solutions (2019)
13. Rousseau, S.: Millennials’ acceptance of product-service systems: leasing smartphones in Flanders (Belgium). J. Clean. Prod., 118992 (2019)


14. Chen, W., Hoyle, C., Wassenaar, H.J.: Decision-Based Design: Integrating Consumer Preferences into Engineering Design. Springer, London (2013). https://doi.org/10.1007/978-1-4471-4036-8
15. Louviere, J.J., Hensher, D.A.: On the design and analysis of simulated choice or allocation experiments in travel choice modeling. Transp. Res. Rec. 890, 11–17 (1982)
16. Johnston, R.J., et al.: Contemporary guidance for stated preference studies. J. Assoc. Environ. Resour. Econ. 4(2), 319–405 (2017)
17. He, L., Wang, M., Chen, W., Conzelmann, G.: Incorporating social impact on new product adoption in choice modeling: a case study in green vehicles. Transp. Res. Part D Transp. Environ. 32, 421–434 (2014)
18. Hidalgo-Crespo, J., Moreira, C.M., Jervis, F.X., Soto, M., Amaya, J.L.: Development of socio-demographic indicators for modeling the household solid waste generation in Guayaquil (Ecuador): quantification, characterization and energy valorization. In: European Biomass Conference and Exhibition Proceedings, pp. 252–259 (2021)
19. Pries-Heje, J., Johansen, J., Messnarz, R.: SPI Manifesto (2010). https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf

Boosting the EU-Wide Collaboration on Skills Agenda in the Automotive-Mobility Ecosystem

Jakub Stolfa1(B), Marek Spanyik1, and Petr Dolejsi2

1 VSB – Technical University of Ostrava, Ostrava, Czech Republic

{jakub.stolfa,marek.spanyik}@vsb.cz

2 European Automobile Manufacturers’ Association (ACEA), Brussels, Belgium

[email protected]

Abstract. The automotive-mobility ecosystem is undergoing rapid changes supporting the green and digital transition. This directly impacts all stakeholders, including companies, education and training providers, social partners, member states, and regions. The impact requires extensive collaboration on the skills agenda at all levels, to boost skills intelligence, to know the trends and needed skills and job roles, and to provide relevant training and education courses. This paper provides an overview of the collaboration on the skills agenda in the automotive-mobility ecosystem in the context of the Pact for Skills, and of the European project FLAMENCO in particular and its current results [1]. Keywords: ERASMUS+ Project · Collaboration · Skills Agenda · Automotive · Mobility · Green · Digital · Transformation

1 Background

The automotive-mobility ecosystem is rapidly evolving, with new players and technologies emerging to meet users’ changing needs and preferences. Private-vehicle ownership is no longer the only mobility option. Traditional OEMs are also expected to drive the development of the whole ecosystem, encompassing everything from raw materials to the completion of vehicles and mobility services related to vehicle use, be it private or shared. As society and industry evolve, all mobility players must define their strategic position and adjust core skills and competencies to successfully navigate towards the future [2, 10]. Technical skills are essential in the industry, and workers with those skills are in great demand. Technical skills boost productivity, reducing expenses and improving quality. Therefore, the industry could grow further, and with new technologies emerging, technical skills will become even more crucial [3]. Analogously to technical skills, soft skills, such as learnability, teamwork, and adaptability, are also rising in importance and relevance [4]. To sustain or improve workers’ job roles and positions, there has to be constant support for continuous lifelong learning activities. The skills and competences needed in the automotive-mobility ecosystem are crucial to meet the changing needs of the industry and to mitigate the challenges of the green and digital transformation. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 196–204, 2023. https://doi.org/10.1007/978-3-031-42307-9_15


1.1 Automotive Skills Alliance (ASA) – PfS Large-Scale Partnership

In response to this development, the Automotive Skills Alliance (ASA) was established in 2020 to satisfy the needs of European automotive-mobility stakeholders. The ASA is a large-scale Pact for Skills partnership that aims to support and promote collaboration on the skills agenda in the automotive-mobility ecosystem. The ASA brings together different kinds of organisations, including industry, education and training providers, regions, and cluster representatives, to develop and support up-/re-skilling activities for today’s and the future automotive-mobility workforce. The ASA supports continuous, pragmatic, and sustainable cooperation on the skills agenda in the automotive-mobility ecosystem. The alliance has over 110 partners, who share information and expertise to collaborate on skills intelligence, the creation or promotion of needed training and education courses, the mutual recognition of skills achievements, the delivery of large-scale training courses, and any other topics relevant to the automotive-mobility skills agenda [5, 6]. The ASA presents its ongoing work to its partners, and they work together to ensure that the automotive-mobility workforce has the necessary skills to meet the changing needs of the industry [6]. Further reading on focus areas, groups, and projects can be found in the following papers [11, 12].

1.2 Project FLAMENCO – Boosting the Collaboration

Subsequently, several projects were launched under the ASA to tackle the mentioned challenges. The European project FLAMENCO is an ERASMUS+ co-funded project that aims to analyse and pilot forward-looking approaches and methods to enable sustainable collaboration on the skills agenda in Europe.
The project’s goals are to define and improve the most effective and pragmatic ways of collaborating on the skills agenda in Europe, and to develop a collaboration framework that enables stakeholders to work together so that the automotive-mobility workforce has the skills needed to meet the changing needs of the industry [7]. The project consortium is composed of 11 partners from 4 European countries [8].

2 Research on Collaboration Needs and Requirements
To achieve the aforementioned goals of the FLAMENCO project, the consortium launched a stakeholder survey to support the assessment of collaboration habits and needs in the mobility ecosystem. The survey aimed to gather information from different stakeholders, including industry, education and training providers, regions, and cluster representatives, to identify the most effective ways to collaborate on the skills agenda. This section provides selected survey results in four main categories:
– Collaboration Needs


J. Stolfa et al.

– Collaboration Framework
– Collaboration Outcomes
– Collaboration Challenges and Risks Encountered in Past Collaboration
Overall, the survey was answered by 112 respondents (industry, training and education providers, social partners, national/regional representatives, and other stakeholders) from more than 17 countries worldwide [1].
2.1 Collaboration Needs
Respondents were asked about the need for overall collaboration in the automotive-mobility ecosystem and the areas requiring improvement (Fig. 1).

Fig. 1. Importance of the Collaboration

The responses strongly recognise the need for collaboration in various aspects of skills development, training, and job definitions within the automotive-mobility ecosystem. Most participants rated the need for collaboration as 4 or 5, demonstrating the high importance placed on collaborative efforts to address the challenges and requirements of the industry. These findings highlight the potential for fostering effective stakeholder collaboration to drive innovation, enhance skills development, and support the automotive sector’s transition to green and digital technologies. The responses also highlight the diverse collaboration needs within the automotive-mobility ecosystem: networking, attending events, and engaging in discussions are highly valued, along with skills development, training, and resource sharing (as seen in Fig. 2). These findings emphasise the significance of collaborative efforts in addressing the challenges and driving progress in the industry. Organisations should prioritise these collaborative activities to foster innovation, knowledge exchange, and practical skills development within the automotive-mobility sector.
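The summary statistic described above (the share of respondents rating the need as 4 or 5 on a five-point scale) can be sketched as a short computation. The `ratings` list below is an invented placeholder, since the raw responses of the 112 participants are not reproduced in this paper:

```python
from collections import Counter

# Hypothetical 1-5 importance ratings; the real survey data is not published here.
ratings = [5, 4, 4, 5, 3, 5, 4, 2, 5, 4]

counts = Counter(ratings)
share_high = (counts[4] + counts[5]) / len(ratings)
print(f"{share_high:.0%} of respondents rated the need for collaboration as 4 or 5")
```

The same aggregation applies to each survey item shown in Fig. 2.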


Fig. 2. Collaboration Needs - Results Overview

2.2 Collaboration Framework
Regarding the collaboration framework, stakeholders prefer regular personal or online meetings, typically on a quarterly basis. The identified characteristics are shown in Fig. 3. Regarding the organisational structure, respondents prefer clear governance, clear decision-making, and project-based collaboration. The overall structure should be focused on common goals and mutual benefits. In conclusion, while there is no unanimous agreement on the ideal format, the survey results highlight the significance of regular communication, information sharing, and stakeholder collaboration in the skills agenda. The preference for online meetings reflects the practicality and efficiency of virtual platforms, while the importance of personal meetings underscores the value of interpersonal connections. A flexible and adaptable approach that combines different formats may be beneficial to accommodate various preferences and maximise collaboration effectiveness.
2.3 Collaboration Outcomes
The respondents’ comments reflect the significance of collaboration in skills development and training. Organisations value outcomes that enable them to define skills and occupations, develop knowledge bases, offer relevant training courses, and stay updated on industry trends. They highlight the importance of collaborative efforts in equipping organisations and their workforce with the necessary skills and knowledge to thrive in a rapidly evolving landscape (Fig. 4). The answers highlight various factors contributing to the evaluation process, such as consensus among all parties, the design and introduction of performance indicators,


Fig. 3. Collaboration Framework - Results Overview

impact on society, a clear definition of key performance indicators (KPIs), and the importance of evaluation and satisfaction surveys. Respondents also emphasise the need for defined targets, the assessment of results, and knowledge exchange (as seen in Fig. 5).
2.4 Collaboration Challenges and Risks Encountered in Past Collaboration
Respondents underlined the importance of resource availability, time management, and effective communication, and the need for commitment and engagement from all parties involved to overcome these challenges and ensure successful collaborations. This also highlights the need for continuous monitoring and evaluation of collaborations to address potential risks and make improvements where necessary (Fig. 6). In conclusion, the responses in this category highlight the various challenges organisations face, such as resource constraints, lack of motivation, difficulty in setting up collaborations, limited opportunities in the sector, and the risks involved. Overcoming these challenges requires addressing resource availability, motivation, open communication, and effective leadership. By doing so, organisations can enhance collaboration outcomes and mitigate associated risks.


Fig. 4. Collaboration Outcomes - Results Overview

Fig. 5. Collaboration Evaluation Aspects


Fig. 6. Challenges and Risks - Results Overview

3 A Way Forward
The FLAMENCO project is currently running a set of workshops to complement the survey results, focusing on the stakeholders in the ecosystem. All results will be compiled in a comprehensive report on the collaboration needs in the automotive-mobility sector (to be published by the end of June 2023). The data will be analysed, and a dedicated methodology will be proposed based on the survey results. The methodology will also be tested within current or newly established working groups under the ASA (Fig. 7).

Fig. 7. ASA Working Groups


The FLAMENCO methodology will be implemented through dedicated communities created under particular ASA working groups (WGs). The overall outcomes of the groups will be measured and assessed in order to improve the proposed methodology. This will enable sustainable use of the methodology under the ASA and beyond. Besides the methodology, additional concrete deliverables, such as training courses, skills and job role definitions, and a micro-credentials framework, will be produced during the project implementation. FLAMENCO represents only one piece of the puzzle towards the overall objective: to support pragmatic and sustainable collaboration on the skills agenda in the mobility and automotive ecosystem. In the end, it should bring benefits to all stakeholders involved: industry through a better-educated labour force, education providers through better-targeted education schemes, social partners through support of the just transition, and regions through mitigation of local labour market challenges. Activities implemented in the FLAMENCO project, such as the methodology, cooperation models, and their implementation, will also be discussed at EuroSPI 2024.

4 Relationship with the SPI Manifesto
With this work, we contribute to the principles and values described in the community’s SPI Manifesto [9]. Specifically, we aim to support the vision of different organisations and empower additional business objectives (5.1).
Acknowledgements. We are grateful to the European Commission, which has co-funded the forward-looking project FLAMENCO (2023–2024) with a consortium of: 1) VSB – Technical University of Ostrava (VSB-TUO); 2) European Automobile Manufacturers’ Association (ACEA); 3) Association for Promoting Electronics Technology (APTE); 4) Automotive Skills Alliance (ASA); 5) EDUCAM; 6) EuroSPI Certificates and Services (EuroSPI); 7) International Software Consulting Network (ISCN); 8) InterTradeCard (ITC); 9) Olife; 10) Transilvania IT Cluster (ATIT); and 11) Technical University of Graz (TUG) [7, 8]. Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or the European Education and Culture Executive Agency (EACEA). Neither the European Union nor EACEA can be held responsible for them.

References
1. Results. FLAMENCO. https://project-flamenco.eu/results/
2. Heineke, K., Hornik, T., Schwedhelm, D., Szilvacsku, I.: Defining and seizing the mobility ecosystem opportunity. McKinsey & Company (2021). https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/defining-and-seizing-the-mobility-ecosystem-opportunity
3. Importance of technical skills to sustain in the automotive industry. PTT EDU. https://ptt.edu/blog/importance-of-technical-skills-to-sustain-in-the-automotive-industry/
4. Project PASS-KSC, Study on Key Competences State of the Art, D3.1 Desk Research (2023). https://project-key-competence.eu/wp-content/uploads/2023/03/PASS-D3.1-Study-on-key-competences-state-of-the-art-FINAL-v2-1.pdf
5. Automotive Skills Alliance. https://automotive-skills-alliance.eu/about-us/


6. Automotive Skills Alliance presents ongoing work to its 90+ partners. CLEPA – European Association of Automotive Suppliers (2022). https://clepa.eu/mediaroom/automotive-skills-alliance-presents-ongoing-work-to-its-90-partners/
7. FLAMENCO. https://project-flamenco.eu/
8. About. FLAMENCO. https://project-flamenco.eu/about/
9. SPI Manifesto. https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf. Accessed 20 Apr 2023
10. Stolfa, J., et al.: DRIVES—EU blueprint project for the automotive sector—a literature review of drivers of change in automotive industry. J. Softw. Evol. Process. 32(3), 2222 (2020)
11. Makkar, S.S., et al.: Automotive skills alliance—from idea to example of Sys/SW international standards group implementation. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) Systems, Software and Services Process Improvement: 29th European Conference, EuroSPI 2022, Salzburg, Austria, August 31 – September 2, 2022, Proceedings, pp. 125–134. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_9
12. Zorman, M., Norberg, A., Sandbacka, K., Spanyik, M., Stolfa, J., Alves, J.: Curriculum preparation for battery value chain skills needs. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) Systems, Software and Services Process Improvement: 29th European Conference, EuroSPI 2022, Salzburg, Austria, August 31 – September 2, 2022, Proceedings, pp. 189–195. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_14

Automotive Data Management SPICE Assessment – Comparison of Process Assessment Models
Lara Pörtner1,2(B), Andreas Riel2, Marcel Leclaire2, and Samer Sameh Makkar3
1 Univ. Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, 38000 Grenoble, France
[email protected], [email protected]
2 CAMELOT Management Consultants AG, Gabrielenstraße 9, 80636 Munich, Germany
[email protected], [email protected]
3 Valeo, Smart Villages Company, F22, KM 28, Al Giza Desert 3650105, Giza Governorate, Egypt
[email protected]

Abstract. Many of the current innovations in the automotive environment revolve around autonomous driving, digitalization, connectivity, AI, and new services in the context of mobility. These innovations are based on the collection and use of data. However, the handling of data often plays a subordinate, barely visible role in today’s development and operations processes, with corresponding risks. ASPICE, as an industry-standard guideline for evaluating system and software development processes, helps automotive suppliers incorporate best practices to identify defects earlier in development and ensure that OEM requirements are met. To create a process model for data management that is aligned with Automotive SPICE® 3.1 and other established standards in the industry, a draft version of Data Management SPICE was initiated by the intacs group. In this work, we identify improvement potential in the content of the pilot draft of the Data Management SPICE assessment, based on our industrial and academic experience in the fields of automotive and data management. A first comparison between the Camelot Data Management Strategy Assessment and the Data Management SPICE Assessment is given. Based on expert knowledge, proposals to improve the quality and content areas of the Data Management SPICE Assessment before the released version of the standard is issued are presented. Keywords: automotive engineering process · automotive SPICE® · data management assessment · maturity model

1 Introduction Data dominates many business models of the 21st century [1]. Moreover, with the development of wireless technology, connectivity is becoming a more common feature of daily life. Connected cars, for example, play an important role in this development, leveraging the advantages of digitalization, IoT, and AI. These innovations are based on the collection and use of data.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 205–219, 2023. https://doi.org/10.1007/978-3-031-42307-9_16


The growing complexity of data creation brings increasing risks, from unstructured to structured data, data automation, lack of processes, and much more [2]. The importance of data and data-driven technologies has increased even more since the outbreak of COVID-19. Many companies are under intense pressure to make better use of their data to predict future events, pivot quickly, and make plans and business models more resilient. 80% of the world’s data will be unstructured by 2025 [3]. It is questionable why reporting and master data are not already being used in the development and operations processes of the automotive industry to achieve efficiency. There is a need to align stakeholder expectations and requirements in the context of the data life cycle by using a common vocabulary. The aim is to close the gap between requirements for software and the data used within it by using the same Process Assessment Model (PAM) framework for software and data. One model trying to close this research gap is the concept proposal of the “Data Management SPICE” model, which explicitly covers process groups for data management, data quality, and data operations (including data life cycle, metadata management, and more) [4]. The focus of the assessment is explicitly on reporting and master data. Reporting data is all raw data relevant for the data reporting process of translating data into a digestible format to assess the ongoing performance of an organization [5]. Master data can be seen as the critical business information related to the transactional and analytical operations of the enterprise. This data needs to be cleansed, standardized, and integrated into an enterprise-wide system [6, 7].

2 Background and Related Work 2.1 Automotive SPICE Assessment The industry-sector-specific Automotive SPICE® Process Assessment Model (PAM) [8] is derived from the ISO/IEC 15504-5 International Standard for process assessments, with the aim of creating a framework for the necessary interpretation of the general standard for the automotive industry and thus increasing the comparability of assessment results. The Automotive SPICE® PAM is increasingly used for objective process assessment and the resulting subsequent process improvement at project and organizational level. The model includes a set of assessment indicators for process execution and capability. With the associated process attributes defined in ISO/IEC 33020, it provides a general basis for performing process capability assessments, which makes it possible to record the results against a general rating scale. The process assessment model defines a two-dimensional model (process dimension and maturity dimension) to evaluate process capability. In the process dimension, the processes are defined and divided into process categories; within a process category, processes are grouped into process groups. In the maturity dimension, a set of process attributes is defined, divided into maturity levels, which provide measurable properties of process capability. 2.2 Data Management Data management refers to a collection of measures, procedures, and concepts with the aim of ensuring the provision of data for optimal support of the various processes in


the company. This includes measures to ensure data quality, consistency, and security, as well as data lifecycle management [9]. At its core, a data management system helps ensure that data is secure, available, and accurate [2]. 2.3 Current Research Status Maturity models (MMs) that focus on data management are not yet widely used in research. A first approach to providing a maturity model in the area of digital transformation was achieved in 2021 in the work of Gökalp and Martinez [10]. They developed a theoretically grounded Digital Transformation Capability Maturity Model (DX-CMM) as a holistic approach including two dimensions (process and capabilities) that can be applied across all industry sectors. Its goal is to identify the maturity level of digital transformation in a company and the improvement opportunities to reach a higher level. Nevertheless, the amount of data continues to increase, driven by digitalization, and the management of data is becoming more and more crucial. Therefore, there is a need for a specific data-focused approach. The work of Pörtner et al. [11] compares the DX-CMM model with a practice-oriented approach by Camelot Management Consultants AG called the Camelot Data Management Maturity Model (Camelot DMM). The model focuses on specific data-related contexts, particularly in relation to master data. The discussion in that work shows that this approach provides a significantly more comprehensive overview of data in six different process groups (Strategy, Processes, Organization, Architecture, Master Data Quality, Data Modelling). The approach attempts to close the gap of an MM applicable in practice across all sectors by providing a roadmap tailored to the status of the data. The work of Gökalp et al. [12] provides a multidisciplinary assessment approach for a data-driven organization called the Data Drivenness Process Capability Determination Model (DDPCDM).
The model is validated regarding applicability and usability through a multiple case study in two organizations. Nevertheless, it has not been validated in small and medium-sized companies, and its applicability for all industries and scales still needs to be validated. Therefore, its practical applicability, especially in the automotive industry, is questionable. The need for a data management maturity model is emerging in the automotive industry, which is why the intacs group presented the pilot draft of the Data Management SPICE assessment in 2022 [4]. Data Management SPICE defines the fundamental business processes of data management capabilities. It is intended as a “state of the art” reference for process improvement. It defines a PAM for the maturity of all types of data, data management, and organizations which process data. The model represents a pilot draft, which is why there is room for improvement. For this reason, the experts at Camelot and the designers of the Data Management SPICE assessment have joined forces and cooperatively contributed their expertise to the further development of the PAM.

208

L. Pörtner et al.

3 Research Question and Methodology Based on the PAM from Automotive SPICE, this study aims to answer the following research question regarding the proposed pilot draft of the Data Management SPICE assessment: RQ: How can a specific data-related approach with a data management capability assessment provide a successful roadmap for OEMs in the automotive industry? To answer the research question, the approach shown in Fig. 1 is followed. First, a direct comparison between the draft of the Data Management SPICE assessment and the Camelot Data Management Maturity Model (DMM), which has been used by Camelot Management Consultants AG for several years, is performed.

Fig. 1. Approach of work

For this purpose, both models are compared according to their process groups. The Camelot DMM [11] is an approach to standardize the data management strategy assessment; it provides an overview of the maturity level of data management in an organization and a basis for deriving a data management roadmap. After the presentation of both models and the direct comparison, an overview of the levels of understandability of the Data Management SPICE assessment dimensions is given. For this purpose, industry experts from Camelot analyze the dimensions independently. After that, the dimensions are mapped to highlight missing dimensions of the Data Management SPICE assessment. The next step is the analysis of the improvement potential, which results in a proposal for an adapted maturity map including base practices. The company specializes in four process groups. It should be noted that the focus of the data is on reporting and master data in the development and operations processes of the automotive industry. The proposal developed here has not yet been tested in practice. Consequently, the following remarks are limited to model building and to the comparison of the two models.
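The mapping step of the methodology can be sketched as a simple set comparison between the two models' process groups. The correspondence below is purely illustrative and does not reproduce the paper's actual mapping; only the group names are taken from the two models:

```python
# Process groups of the two models, as named in this paper.
dm_spice_groups = {"Managing Data", "Data Quality", "Data Operations"}
camelot_groups = {"Strategy", "Processes", "Organization", "Architecture",
                  "Master Data Quality", "Data Modelling"}

# Illustrative correspondence only (one SPICE group may touch several DMM groups).
mapping = {
    "Managing Data": {"Strategy", "Organization", "Processes"},
    "Data Quality": {"Master Data Quality"},
    "Data Operations": {"Architecture", "Processes"},
}

# DMM areas not covered by the SPICE draft under this assumed mapping.
covered = set().union(*mapping.values())
gaps = camelot_groups - covered
print(sorted(gaps))
```

In the paper itself, this gap analysis is performed by the industry experts and visualized in the capability map.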


The results will be presented to the “Verband der Automobilindustrie e.V.” (VDA) working group in order to adapt the framework in a sustainable way and to follow up on the missing steps in the approach. The objective of this work is to improve the draft of the Data Management SPICE assessment and to make a proposal for a Data Management SPICE framework which can be used in practice.

4 Data Management SPICE The Data Management SPICE assessment [4] defines the fundamental business processes of data management capabilities. It can be seen as an add-on or plugin to other PAMs. It defines a PAM for the maturity of all types of data, data management, and organizations processing data, providing a bridge between applications developed in a SPICE framework and the reliable data used by them, through specific criteria and spelled-out expectations related to data quality. The model consists of three process groups (Managing Data, Data Quality, Data Operations).
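Purely as an illustration (this structure is not part of the official model), the two assessment dimensions, applied to the three process groups above, could be modelled roughly as follows; all class and field names are our own invention:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Process:
    pid: str                   # process identifier, e.g. "MGD.1"
    name: str
    capability_level: int = 0  # 0..5, following the ISO/IEC 33020-style scale

@dataclass
class ProcessGroup:
    name: str
    processes: List[Process] = field(default_factory=list)

# The three process groups of the Data Management SPICE draft [4];
# the example process name is invented.
pam = [
    ProcessGroup("Managing Data", [Process("MGD.1", "Business/Use Case Management")]),
    ProcessGroup("Data Quality"),
    ProcessGroup("Data Operations"),
]
print([group.name for group in pam])
```

An assessment then amounts to assigning each process a capability level based on the indicators described below.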

Fig. 2. PAM overview of “Data Management SPICE” [13]

The process groups are evaluated using two types of assessment indicators: process capability indicators, which apply to maturity levels 1 to 5, and process execution indicators, which apply exclusively to maturity level 1. The indicators of process capability are Generic Practices (GP) and Generic Resources (GR). Base Practices (BP) and Work Products (WP) can be seen as indicators of process implementation. In the following, the focus of the analysis is on selected process groups and BPs. The demonstrable implementation of the BPs and the presence of work products with their expected work


product characteristics provide objective evidence for the achievement of the process purpose. A base practice is an activity that is directed towards the purpose of a process. The BPs are described on an abstract level and define “what” to do without specifying “how” to do it. By implementing the BPs of a process, the basic results that reflect the purpose of the process should be achieved. The BPs are only the first step on the way to developing process capability. 4.1 An Automotive Example Scenario and Its Interpretation Figure 3 shows a practical use case of the applicability of the assessment. The data to be analyzed focuses on reporting data and partially on master data. Therefore, the focus is on the backend system, where data processing and integration have their origin. The following describes how to apply the existing Base Practice MGD.1 BP1 to the example (Fig. 3).

Fig. 3. Data flow and point of demarcation [13]

Base Practice MGD.1 BP1: “Create business/use case. Develop and keep up to date a business/use case as the basis for a data management and data quality strategy. Ensure alignment of the business/use case with stakeholder requirements.” The interpretation for the system of interest is given by the intacs group [13]: “A company shall create and regularly update a plan or scenario (business/use case) that outlines how it will manage and maintain the content and quality of its data. This plan should be in line with the needs and expectations of stakeholders, such as customers and


employees. The company’s data management and quality strategy would then be based on this use case, and stakeholders, such as customers and customer service representatives, would be consulted to ensure that the strategy aligns with their needs and expectations. In the described example, the business case is to collect data from different sources, to integrate them into one master database, and finally to tailor sub-databases for various applications and use cases, fulfilling customer needs regarding content (data, metadata) and quality. This follows the defined quality strategy, especially when different sources are used to serve the same type of data set within the databases. The strategy ensures the integrity of the database during processing and takes care that the final “product” fulfils a required level of quality for content and meta information.”

5 PAM Mapping and Analysis First, the process groups of the Data Management SPICE assessment are mapped to the existing Camelot Capability Map in order to get an overview of the areas addressed in the Data Management SPICE assessment and their scope. A detailed description of the BPs, process outcomes, and work products contained in each process group from Fig. 2 can be found in the VDA guidelines [4]. Figure 4 shows the mapping of both models and provides a first overview of the topics covered by the Data Management SPICE assessment. In total, the assessment contains three process groups with 42 BPs, 36 process outcomes, and 41 work products. Topics of the process groups that could be mapped directly to existing capabilities in the Camelot Capability Map are added in grey boxes; those that are not currently part of the capabilities are added to the sides of the respective dimensions. Here, especially the pillars of organization and data have been expanded with topics from the Data Management SPICE assessment.

Fig. 4. Mapping of Data Management SPICE and Camelot Capability Maturity Map

In the next step, the BPs of the three process groups (Managing Data, Data Quality, and Data Operations) are examined in more detail and independently evaluated by


two reviewers, experts in the field of data management, according to their comprehensibility. The scale distinguishes low (1), medium (2), and high (3) comprehensibility and is based on a personal, experience-based assessment, as the assessors will later also need to directly capture the meaning behind each process group and its BPs. On this basis, three process groups with a low level of understandability (cf. Fig. 5) are selected by Camelot’s data management experts, and improvement potential is noted. As experts in operations, Valeo also provides improvement potential for the DOP.1 and DOP.2 categories.
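The two-reviewer rating step could be sketched as follows; the scores and the 1.5 threshold below are invented for illustration, as the actual per-BP ratings are reported in Fig. 6:

```python
# Hypothetical comprehensibility scores (1 = low, 2 = medium, 3 = high)
# from two reviewers per base practice; not the paper's real data.
scores = {  # (reviewer A, reviewer B)
    "MGD.2 BP1": (2, 2),
    "MGD.2 BP2": (2, 1),
    "MGD.2 BP3": (1, 1),
}

def needs_improvement(pair, threshold=1.5):
    """Flag a BP whose mean rating falls at or below the assumed threshold."""
    return sum(pair) / len(pair) <= threshold

flagged = [bp for bp, pair in scores.items() if needs_improvement(pair)]
print(flagged)
```

In the paper, the selection of low-understandability process groups is made by the experts themselves rather than by a fixed numeric threshold.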

Fig. 5. Overview of selected process groups

The improvement potential is shown separately for each process group in the following. Detailed descriptions of the BPs can be found in the VDA guidelines [4]. Figure 6 shows the outcome of the reviewers’ evaluation regarding the understandability of the BPs. Process Group “Managing Data” MGD.2 has been selected for improvement due to its low level of understandability in the reviewers’ evaluation. Often an understandability level of 1 (low) is assigned by both reviewers (e.g., BP3); at most, comprehensibility is rated as medium (e.g., BP2). The short description of MGD.2 should be adjusted, since in addition to the strategy, the vision is also addressed in this process group. Proposal: MGD.2 Data Management Vision and Strategy


Fig. 6. Evaluation of understandability of BPs

The note to MGD.2 BP1 focuses on the metadata strategy. The scope should be expanded to the data strategy in general, not only metadata. Moreover, it is recommended to check whether the vision/goal to fulfil the strategy is available. Proposal: BP1: Create data management strategy. Create and keep up to date a data and metadata management strategy covering content, context, and structure pertaining to data management. [OUTCOME 1] Note 1: A clear vision is defined. Strategic cornerstones are defined. The vision and the related roadmap exist and are documented.

Notes 4 to 7 of BP3 from MGD.2 should be included in BP1 to reduce the number of BPs. Therefore, a deletion of BP3 and a renaming of BP1 is recommended.


Proposal: BP1: Create data management strategy. Create and keep up to date a data and data management strategy covering content, context, and structure pertaining to data management. [OUTCOME 1]

For a better understanding of MGD.2 BP4, further notes regarding organization and governance are recommended to be added. Proposal: BP4: Manage business glossary. Create and keep up to date a business glossary for all data-management-related terms. Agree with and communicate to all affected parties. [OUTCOME 4] Note 9: Define terms for a particular business purpose, including a unique name and description. Note 10: Ensure that terms from the glossary are applied consistently for requirements and their implementation. Note 11: Roles are clearly defined. Ownership and responsibilities are defined and communicated. Relevant users are enabled and empowered to execute their role(s). The organization has decision authority to drive decisions.

The content of MGD.2 BP5 is part of the Data Quality base practices. Therefore, BP5 should be deleted to reduce the number of BPs, and its content should be included in the BP regarding data quality. Moreover, the content of the notes of MGD.2 BP6 should be expanded by adding aspects regarding change management. Proposal: BP6: Establish a communication strategy. Establish a communication strategy and ensure alignment with the data management strategy and all related data management activities. [OUTCOME 6] Note 12: Establish and implement a plan, mechanisms, and feedback loops for communication. Note 13: Typically, stakeholders are defined to ensure active identification and involvement of relevant parties for continued communication and coordination; see MAN.3 (Project Management). Note 14: The vision and strategy take company cultural aspects into account; continuous change management is implemented.

MGD.4 also shows improvement potential. MGD.4 BP2 Note 4 can be extended with further topic-specific examples to give a more detailed picture of the system architecture.

Automotive Data Management SPICE Assessment


Proposal: BP2: Provide platforms for the management of data. Create, keep up to date, and use (meta) data management platforms and maintain the corresponding infrastructure. [OUTCOME 2] Note 3: A data management platform serves as a system of record and as a trusted source of data. Typically, it is a collection of technologies and applications to manage data across the whole data lifecycle. Note 4: Check the following system-related opportunities: single source of truth; data governance system; system-overarching workflow support; automated interfaces; restriction of maintenance and authorization within relevant systems based on the lifecycle.

MGD.4 BP3 Note 5 can be extended with the technologies used to give an overview of the complexity of the systems and the data landscape. Proposal: BP3: Manage technical interfaces. Identify, record, and manage all technical data interfaces based on data management platform and architecture requirements. [OUTCOME 3] Note 5: Typically, the description of the technical capabilities of interfaces includes: technology (IDOC, API, WebService, RFC, etc.), data exchange format, exchange protocol, estimated size and frequency of data exchange, tolerance limits, and criteria for their verification and validation.
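The interface description of Note 5 lends itself to a simple machine-readable record; this sketch is illustrative, and the field names are our own, not prescribed by the standard:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TechnicalInterface:
    """One technical data interface, with the descriptive fields of Note 5."""
    name: str
    technology: str         # e.g. "IDOC", "API", "WebService", "RFC"
    exchange_format: str    # e.g. "JSON", "XML"
    protocol: str           # e.g. "HTTPS", "SFTP"
    est_size_mb: float      # estimated size per exchange
    frequency_per_day: int  # estimated exchange frequency
    tolerance_limit_mb: float

    def within_tolerance(self, observed_mb: float) -> bool:
        # Verification criterion: an observed exchange must stay
        # within the declared tolerance limit.
        return observed_mb <= self.tolerance_limit_mb


interfaces = [
    TechnicalInterface("ERP-to-MDM", "API", "JSON", "HTTPS", 5.0, 24, 10.0),
    TechnicalInterface("MDM-to-BI", "RFC", "IDOC", "SAP RFC", 50.0, 1, 60.0),
]
# Overview of the technologies in use across the data landscape (Note 5 extension).
print(sorted({i.technology for i in interfaces}))  # -> ['API', 'RFC']
```

Recording interfaces in such a uniform structure makes the technology overview suggested for Note 5 a simple query over the records.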

Moreover, currently no BP covers workflow tools, central data management, or system integration. Therefore, notes focusing on the following content should be added to BP3. Proposal: MGD.4 BP3 Note 6: Check workflow tool and system integration opportunities. Technical workflows should be in place for all data-related processes, supported by business rules and with content support. Systems should hold harmonized data, which is synchronized automatically between systems. Documentation of mappings and interfaces is at hand (technical and functional). Note 7: A dedicated data management system is in place, and data is centrally managed (single source of truth).

Process Group “Data Quality”

The short description of DQA.1 should be more detailed to better illustrate the topics of its category.



Proposal: DQA.1 Data Quality Measurement and Reporting

DQA.1 BP1 should capture not only the data quality strategy, but also its scope. Therefore, the description has to be expanded. Proposal: BP1: Create data quality strategy and scope. Create and keep up to date a data quality strategy and scope in line with the data management strategy, data quality requirements, and data lifecycle. [OUTCOME 1]

For a better understanding of DQA.1 BP2, further points regarding data quality criteria are recommended for Note 5. Proposal: BP2: Define criteria for data quality. Define and refine criteria for the required data quality objectives to achieve business/use cases and data quality requirements. [OUTCOME 1] Note 4: Typically, data quality depends on the domain of data and the related business/use case: e.g., accuracy, completeness, consistency, credibility, correctness, accessibility, compliance, confidentiality, efficiency, precision, traceability, understandability, availability, portability, recoverability [see ISO/IEC 25012]. Note 5: The criteria are used to verify that the data is fit for purpose: DQ dimensions defined; DQ KPIs defined; DQ SLAs defined; DQ reports defined (per stakeholder group); change requests (tactical, strategic); DQ reports implemented / DQ visualization (per stakeholder group: operational, tactical, strategic). Note 6: Typically, criteria contain an expected and measurable state, range, or thresholds.
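A measurable criterion in the sense of Note 6 can be sketched as follows; the completeness metric and the 0.95 threshold are illustrative assumptions, not values from the PAM:

```python
def completeness(records, required_fields):
    """Share of records in which every required field is populated."""
    if not records:
        return 1.0
    ok = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )
    return ok / len(records)


# A criterion with an expected, measurable threshold (Note 6).
DQ_CRITERIA = {"completeness": 0.95}

records = [
    {"id": 1, "name": "Pump A", "plant": "X"},
    {"id": 2, "name": "", "plant": "Y"},  # incomplete record
]
score = completeness(records, ["id", "name", "plant"])
print(score, score >= DQ_CRITERIA["completeness"])  # -> 0.5 False
```

Expressing each criterion as a metric plus a threshold makes the "fit for purpose" check of Note 5 directly computable.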

DQA.1 BP4 needs to be adapted, as the focus is now not just on strategy, but more on the scope.



Proposal: BP4: Achieve data quality objectives of the scope. Visualize, monitor, and communicate status of achieving the objectives of the data quality scope. [OUTCOME 3] Note 9: All defined criteria should be tracked. Note 10: The type of monitoring (continuous or event driven) and frequency of reporting depends on the respective criteria. Note 11: Deviations typically trigger problem management (see SUP.9) or further data assessment, profiling, or cleansing activities (see DQA.2).
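The tracking of all defined criteria (Note 9) and the triggering of follow-up activities on deviations (Note 11) can be sketched as a minimal monitoring loop; names and thresholds are illustrative:

```python
def monitor(criteria, measurements):
    """Return the criteria whose measured value misses its threshold."""
    deviations = []
    for name, threshold in criteria.items():
        value = measurements.get(name)
        if value is None or value < threshold:
            deviations.append((name, value, threshold))
    return deviations


criteria = {"completeness": 0.95, "consistency": 0.99}
measurements = {"completeness": 0.97, "consistency": 0.90}

for name, value, threshold in monitor(criteria, measurements):
    # A deviation would typically open a problem record (see SUP.9)
    # or trigger profiling/cleansing activities (see DQA.2).
    print(f"deviation: {name} = {value} < {threshold}")
```

Whether such a check runs continuously or event-driven, and how often its results are reported, depends on the respective criterion (Note 10).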

Process Group “Data Operations”

For DOP.1 and DOP.2, it is necessary to have a deeper look into the notes of the BPs and to reduce the number of notes to avoid misunderstandings. Moreover, the number of process outcomes should be consistent with the number of BPs in the process.

6 Evaluation and Discussion

The objective of this research was to demonstrate improvement potentials in the BPs of selected process groups based on expert knowledge. Regarding the level of understandability, the processes Data Management Strategy (MGD.2), Data Management Platforms and Architecture (MGD.4), Data Quality Strategy (DQA.1), Deployment Planning and Control (DOP.1), and Data Operations and Optimization (DOP.2) have been chosen. In general, the first mapping of the process groups onto the Camelot Capability Map shows that the main topics which have to be considered when measuring data management maturity are included. The Camelot DMM comprises five main pillars (Strategy, Processes, Organization, Architecture, Data) with different capabilities. The assignment shows that the Data Management SPICE assessment affects all areas, but mixes the naming of the process groups and does not consider them clearly separated from each other.

The subsequent analysis of the comprehensibility of the BPs shows a low level in some areas. Therefore, in order to improve the Data Management SPICE assessment standard, the focus has been placed on the processes with a low level of understandability: MGD.2, MGD.4, DQA.1, DOP.1, and DOP.2. In general, the number of notes within the BPs has to be reduced. Additional remarks are provided to make the existing BPs more understandable. In addition, the scaling of the maturity level for each process group is not sufficiently defined in both models, so that no comparability seems possible for determining the maturity level at this point. Experts with extensive experience in the area of data are needed to evaluate the Data Management SPICE assessment. This lack has to be considered from an educational perspective in future work.

The Data Management SPICE model creates a holistic overview without a data focus for providing a data management roadmap in the automotive industry based on reporting data and partially master data. However, this model is not yet established in practice and requires further validation. In general, our work provides some implications for improvements, which are handed over to the intacs working group deriving a sustainable solution for the framework. Nevertheless, there are some limitations to this work. The entire framework and its application in practice require knowledge of Automotive SPICE and expert knowledge in terms of data management. The second limitation is the strong focus on reporting and master data management, which is caused by the specific use case scenarios. In order to identify the state of the art in terms of data quality, it is not enough to include only the quality of reporting and master data in such use cases. The scope of the data is limited and needs to be expanded. In addition, the whole research field of data management assessments meeting OEM requirements in the automotive industry is still at its beginning, which can be seen as a constraint.

Future research should focus on making data management frameworks more precise by analyzing further types of data, not only reporting and master data. This extension also has to be validated and verified in practice. Moreover, it remains questionable whether it is possible to build the future only with data, or whether other capabilities related to data usage (e.g., data literacy) are needed to shape data-driven business models in the automotive industry. For this work, it is planned to analyze and demonstrate statistically the dependency on different base practices and process groups. As a next step, the VDA guidelines have to be derived, including the adapted definitions of the subcategories of the base practices and their notes. Then, the improved version of the Data Management SPICE assessment can be tested and finally implemented in the automotive industry.

7 Conclusion and Outlook

The Data Management SPICE assessment provides a standard framework to define the fundamental business processes of data management capabilities ensuring the OEM requirements in the automotive industry. It defines a PAM for the maturity of all types of data, of data management, and of organizations processing data. This work improved the proposed pilot draft of the PAM with expert knowledge from the area of data management. The results are handed over to the working group to derive a sustainable assessment version. In our future work, we want to finalize the definitions of the process groups with their BPs and to test and validate our proposed improvements with pilot partners from the automotive industry before the PAM is applied during an assessment. From an education perspective, we will investigate how assessors can be trained and empowered to consider the proposed assessment concept.

8 Relationship with the SPI Manifesto

EuroAsiaSPI2 has been built on improvement and innovation around capability maturity models [14, 15]. This also requires cooperative work by experts in this field. Camelot Management Consultants AG has contributed its expert knowledge in the field of data management in order to efficiently advance the Data Management SPICE assessment. The results have been handed over to the intacs working group working on the final version of the Data Management SPICE assessment.



Acknowledgements. This work was supported by experts in the field of data management from Camelot Management Consultants AG. We would like to thank Marcel Leclaire, Markus Weinerth and Helena Betz for supporting the improvement process with their knowledge. In addition, we thank Samer Sameh from Valeo for his cooperative work and introduction to the Data Management SPICE assessment.

References

1. Walters, E. (ed.): Data-Driven Law: Data Analytics and the New Legal Services. Auerbach Publications, New York (2018)
2. Was ist Datenmanagement? Definition, Bedeutung, Prozesse. SAP Insights. https://www.sap.com/germany/products/technology-platform/what-is-data-management.html
3. O'Reilly Media: Council Post: The Unseen Data Conundrum. Forbes. https://www.forbes.com/sites/forbestechcouncil/2022/02/03/the-unseen-data-conundrum/
4. International Assessor Certification Scheme e.V. (intacs): Data Management SPICE (2022)
5. What Is Data Reporting And Why It's Important? Sisense. https://www.sisense.com/glossary/data-reporting/
6. Berson, A., Dubov, L.: Master Data Management and Customer Data Integration for a Global Enterprise. McGraw-Hill, New York (2007)
7. Loshin, D.: Master Data Management. Elsevier/Morgan Kaufmann, Amsterdam, Boston (2009)
8. SPICE User Group/AutoSIG: Automotive SPICE Prozessassessment. Verband der Automobilindustrie (VDA)
9. Luber, S.: Was ist Data Management/Datenmanagement? https://www.storage-insider.de/was-ist-data-managementdatenmanagement-a-850258/
10. Gökalp, E., Martinez, V.: Digital transformation maturity assessment: development of the digital transformation capability maturity model. Int. J. Prod. Res. 60(20), 6282–6302 (2021). https://doi.org/10.1080/00207543.2021.1991020
11. Pörtner, L., Möske, R., Riel, A.: Data management strategy assessment for leveraging the digital transformation: a comparison between two models: DX-CMM and Camelot DMM. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) Systems, Software and Services Process Improvement, pp. 553–567. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_40
12. Gökalp, M.O., Kayabay, K., Gökalp, E., Koçyiğit, A., Eren, P.E.: Assessment of process capabilities in transition to a data-driven organisation: a multidisciplinary approach. IET Softw. 15(6), 376–390 (2021)
13. intacs information letter 2023-04. International Assessor Certification Scheme (2023). https://intacs.info/
14. Biro, M., Messnarz, R.: EuroSPI 1999 (1999). https://conference.eurospi.net/images/proceedings/EuroSPI1999-ISBN-952-9607-29-2.pdf
15. Pries-Heje, J., Johansen, J., Messnarz, R.: SPI Manifesto (2010). https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf

A Knowledge Management Strategy for Seamless Compliance with the Machinery Regulation

Barbara Gallina1(B), Thomas Young Olesen2, Eszter Parajdi3, and Mike Aarup2

1 IDT, Mälardalen University, 883, 721 23 Västerås, Sweden
[email protected]
2 Grundfos, Bjerringbro, Denmark
{tyolesen,maarup}@grundfos.com
3 Grundfos, Székesfehérvár, Hungary
[email protected]

Abstract. To ensure safety, the machinery sector has to comply with the Machinery Directive. Recently, this directive has not only been revised to include requirements concerning further concerns, e.g., safety-relevant cybersecurity and machine learning-based, safety-relevant, reliable self-evolving behaviour, but also transformed into a regulation to avoid divergences in interpretation derived from transposition. To be able to seamlessly and continuously comply with the regulation by 2027, it is fundamental to establish a strategy for knowledge management aimed at enabling traceability and variability management, where chunks of conformity demonstration can be traced, included/excluded based on the machinery characteristics, and ultimately queried in order to co-generate the technical evidence for compliance. Currently, no such strategy is available. In this paper, we contribute to the establishment of such a strategy. Specifically, we build our strategy on top of the notion of multi-concern assurance, variability modelling via feature diagrams, and ontology-based modelling. We illustrate our proposed strategy by considering the requirements for the risk management process for generic machineries, refined into sub-sector-specific requirements in the case of centrifugal pumps. We also briefly discuss our findings and the relationship of our work with the SPI Manifesto. Finally, we provide our concluding remarks and sketch future work. Keywords: Machinery Directive · Machinery Regulation · Seamless and Continuous Compliance · Cyber Security Act · Cyber Resilience Act · Artificial Intelligence Act · EN 809:1998+A1 · Centrifugal pumps

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 220–234, 2023. https://doi.org/10.1007/978-3-031-42307-9_17

1 Introduction

Nowadays, technological innovations such as increased connectivity via the Internet of Things and the usage of artificial intelligence for enabling self-evolving behaviour are being incorporated within machineries. These innovations have been progressively transforming traditional closed systems into open systems [23], i.e., “systems whose boundaries, functions and structure change over time and are recognized and described differently from various points of view”. The concept of liquid modernity [2], coined by Zygmunt Bauman to describe the condition of constant change in all aspects of human life (e.g., identities, relationships, education, and global economics) within contemporary society, is progressively influencing other spheres of human life, including consumer products, which are being transformed from solid into liquid. Centrifugal pumps, for instance, are being transformed as well. The digitalisation process is increasingly making them connected and partly autonomous, e.g., in fault diagnostics for predictive maintenance. These innovations call for new requirements to ensure the safety of the machinery. To face such innovations, the machinery directive has been revised and new requirements have been included concerning, e.g., safety-relevant cybersecurity and machine learning-based, safety-relevant, reliable self-evolving behaviour. In addition, to avoid divergences in interpretation derived from transposition, the directive has been transformed into a regulation. As a consequence of this revision, the machinery sector needs to adapt to the upcoming regulation, expected to be published in the Official Journal by July 2023.

Currently, no ready off-the-shelf solution is available to manage the knowledge concerning the needed multi-concern frame of reference. Hence, in this paper, stemming from the research work conducted in the ET4CQPPAJ project [9], we contribute to the establishment of such a solution, expected to be integrated within DevOps [24] practices combined with processes for open dependability systems [23]. Specifically, we propose a knowledge management strategy for seamless and continuous compliance with the machinery directive and its upcoming revision, built on top of the notion of multi-concern assurance, variability modelling via feature diagrams, and ontology-based modelling.
Our strategy partly contributes to the creation of a frame of reference [23], i.e., a “set of conventions for the constructions, interpretation and use of documents describing a common understanding of and explicit agreements on a system, its purpose, objectives, environment, actual performance, life-cycle and changes thereof”. We illustrate our proposed strategy by considering the requirements describing the risk management process for generic machineries, refined into sub-sector-specific process requirements in the case of centrifugal pumps. Our illustration is limited to a subset of hazard categories. We also provide our lessons learned. Then, we discuss the relationship of our work with the SPI Manifesto. Finally, we provide our concluding remarks.

The rest of the paper is organised as follows. In Sect. 2, we provide essential background information. In Sect. 3, we propose our strategy. In Sect. 4, we illustrate our strategy. In Sect. 5, we discuss our findings and explain the synergy with the SPI Manifesto. In Sect. 6, we briefly discuss related work. Finally, in Sect. 7, we present our concluding remarks.

2 Background

In this section, we present essential background information on the context and problem space, constituted of: the legislative and binding space; the increasingly binding space populated by standards and guidelines; machineries and sector-specific machineries such as pumps. All these spaces may include sector-independent, sector-specific, as well as sub-sector-specific elements. The machinery sector, for instance, may be specialised (sub-sector) by considering specific machinery types, e.g., pumps. This section is not aimed at being exhaustive. Rather, the intention is to exemplify the problem by focusing on the machinery sector, pumps, and related spaces in the context of risk-based machinery engineering and pump-specific machinery engineering, focusing on risk management processes. This section also recalls essential information on the solution space aimed at contributing to the engineering of the knowledge captured by the different spaces. Specifically, Base Variability Resolution is recalled.

2.1 Machinery Directive

The Machinery Directive (MD) [8], Directive 2006/42/EC, is a European Union directive concerning machinery and certain parts of machinery. The MD establishes a regulatory framework for placing machinery on the EU market. The MD has the dual aim of harmonising the Essential Health and Safety Requirements (EHSR) applicable to machinery on the basis of a high level of protection of health and safety, while ensuring the free circulation of machinery on the EU market. Manufacturers of products that fall under the scope of the MD, such as manufacturers of pumps, must issue a Declaration of Conformity (DoC) in order to sell their products in the EU. It shall be noted that the EHSR laid down are mandatory; however, taking into account the state of the art, it may not be possible to meet the objectives set by them. In that event, the machinery must, as far as possible, be designed and constructed with the purpose of approaching these objectives. According to the MD, machinery is defined as an assembly, fitted with or intended to be fitted with a drive system other than directly applied human or animal effort, consisting of linked parts or components, at least one of which moves, and which are joined together for a specific application.
To comply with the MD, the manufacturer has to compile a technical file, which shall comprise: a) a construction file and b) for series manufacture, the internal measures that will be implemented to ensure that the machinery remains in conformity with the provisions of the MD. The construction file is expected to include:

– a general description of the machinery, the overall drawing of the machinery and drawings of the control circuits, as well as the pertinent descriptions and explanations necessary for understanding the operation of the machinery (full detailed drawings, accompanied by any calculation notes, test results, certificates, etc., required to check the conformity of the machinery with the essential health and safety requirements),
– the documentation on risk assessment demonstrating the procedure followed, including (i) a list of the essential health and safety requirements which apply to the machinery, (ii) the description of the protective measures implemented to eliminate identified hazards or to reduce risks and, when appropriate, the indication of the residual risks associated with the machinery,
– the standards and other technical specifications used, indicating the essential health and safety requirements covered by these standards,
– any technical report giving the results of the tests carried out either by the manufacturer or by a body chosen by the manufacturer or his authorised representative,
– a copy of the instructions for the machinery,
– where appropriate, the declaration of incorporation for included partly completed machinery and the relevant assembly instructions for such machinery,



– where appropriate, copies of the EC declaration of conformity of machinery or other products incorporated into the machinery,
– a copy of the EC declaration of conformity.

The MD provides a list of hazard categories (e.g., thermal hazards, electric hazards) that shall be considered when carrying out risk management, i.e., risk assessment and control. For the sake of clarity, it shall be pointed out that the EU also publishes guidance documents (see, for instance, [10]) to guide users in the application of the MD.

2.2 Machinery Directive-Related Harmonised Standards

To assist engineers when they are conducting risk management (i.e., risk assessment and risk control/reduction) on any type of machinery, EN ISO 12100 [26] has been developed. In addition, sub-sector-specific standards have been developed. EN 809:1998+A1 [5] represents a sub-sector-specific standard providing means of conformity with the essential requirements of the MD in the context of liquid pumps. This standard pre-selects the main hazard categories that may affect liquid pumps and focuses on risk reduction; hence, this standard pre-assesses the risk. If a category is already fully covered by this standard, EN ISO 12100 does not need to be considered. Regarding thermal hazards, EN 809:1998+A1 shall be complemented with ISO 13732-1 [25], because EN 809:1998+A1 does not deal with means to reduce hazards from surface temperatures which derive from the temperature at which the pumped fluid is delivered to the pump inlet.

2.3 Machinery Regulation

Given a series of problems that emerged during the latest assessment of the MD, a revision proposal [13] was circulated. The revision proposal addressed the MD-related problems. In addition, to avoid divergences in interpretation derived from transposition, it proposed to transform the MD into a proper regulation. Hence the draft global compromise, which is expected to be published by July 2023 as the Machinery Regulation (MR) [15].
To comply with the MR, the manufacturer has to provide the technical documentation, which shall include a set of evidential elements. In what follows, we recall a subset of those elements with the purpose of highlighting the potential for documentation reuse during the transition from the Machinery Directive to the Machinery Regulation. The recalled elements are taken from the MR text.

(a) a complete description of the machinery or related product and of its intended use;
(b) the documentation on risk assessment demonstrating the procedure carried out, including: (i) a list of the essential health and safety requirements that are applicable to the machinery or related product; (ii) the description of the protective measures implemented to meet each applicable essential health and safety requirement and, when appropriate, the indication of the residual risks associated with the machinery or related product;
(c) design and manufacturing drawings and schemes of the machinery or related product and of its components, sub-assemblies and circuits;



(d) the descriptions and explanations necessary for the understanding of the drawings and schemes referred to in point (c) and of the operation of the machinery or related product;
(e) the references of the harmonised standards referred to in Article 17(1) or common specifications adopted by the Commission in accordance with Article 17(3) that have been applied for the design and manufacture of the machinery or related product. In the event of partial application of harmonised standards or common specifications, the documentation shall specify the parts which have been applied.

For the sake of clarity, it shall be pointed out that at the time being no harmonised standard is available yet. However, since some hazard categories (e.g., thermal hazards) remain the same, we can assume that the corresponding portion of the harmonised standards, valid for the MD, will remain valid also for the MR. It shall also be highlighted that the MR includes essential health and safety requirements in relation to corruption and that an explicit reference to the Cyber Security Act is present. In Article 17, it is stated: Machinery and related products that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme adopted in accordance with Regulation (EU) 2019/881 and the references of which have been published in the Official Journal of the European Union shall be presumed to be in conformity with the essential health and safety requirements set out in Annex III, sections 1.1.9 and 1.2.1, as regards protection against corruption and safety and reliability of control systems in so far as those requirements are covered by the cybersecurity certificate or statement of conformity or parts thereof. Annex III, Section 1.1.9, states which requirements must be fulfilled.

2.4 MR-Related Regulations

The upcoming Machinery Regulation includes requirements that are related to cybersecurity and to the usage of artificial intelligence.
Hence, even if not explicitly harmonised with the corresponding regulations, the MR is related to the Cyber Security Act [11], the Cyber Resilience Act (CRA) [14] and the Artificial Intelligence Act (AI Act) [12].

The published Cyber Security Act (CSA) defines the responsibility of the EU Agency for Cybersecurity (ENISA) and, within Article 51, it defines the security objectives in terms of the CIA triad, i.e., confidentiality, integrity and availability. These objectives are expected to be included within the EU cybersecurity certification schemes that shall be prepared by ENISA.

The CRA is a recent proposal for legislation aimed at ensuring that digital connected products and associated services, placed on the EU market, are cybersecure by design and by default. The CRA seeks to establish common cybersecurity rules. The CRA provides essential requirements, which will be supported by horizontal and vertical/sectorial standards providing the necessary details for the concrete implementation. At the time being, however, it is unclear with which type of standard the CRA will be supported. In its current version, within Article 10(2), it is stated that manufacturers shall undertake an assessment of the cybersecurity risks associated with a product with digital elements and take the outcome of that assessment into account during the planning, design, development, production, delivery and maintenance phases of the product with digital elements with a view to minimising cybersecurity risks, preventing security incidents and minimising the impacts of such incidents, including in relation to the health and safety of users. Regarding vulnerability handling (risk management), within Annex 1, the proposal states that manufacturers of products with digital elements shall:

1. identify and document vulnerabilities and components contained in the product, including by drawing up a software bill of materials in a commonly used and machine-readable format covering at the very least the top-level dependencies of the product;
2. in relation to the risks posed to the products with digital elements, address and remediate vulnerabilities without delay, including by providing security updates.

Similarly to the CSA, it shall be noted that where machinery products are products with digital elements within the meaning of the CRA and for which an EU declaration of conformity has been issued on the basis of the CRA, those products shall be deemed to be in conformity with the essential health and safety requirements set out in Annex III, sections 1.1.9 and 1.2.1, of the Machinery Regulation proposal.

The Artificial Intelligence Act states in its Annex IV that the technical documentation for compliance shall also include a detailed description of the risk management system in accordance with Article 9, where a set of requirements are listed. It shall be noted that the Members of the European Parliament (MEPs) have recently adopted the Parliament's negotiating position on the AI Act. The discussion with EU countries in the Council on the final form of the law is expected to take place as a next step.

2.5 Centrifugal Pumps

Centrifugal pumps are machinery for transporting fluids by the conversion of rotational kinetic energy to the hydrodynamic energy of the fluid flow. The rotational energy typically comes from an engine or electric motor.
The fluid enters the pump impeller via the inlet along or near the rotating axis and is accelerated by the impeller, flowing radially outward into a diffuser or volute chamber (casing), from which it exits (outlet). Nowadays, pumps can be remotely connected. Pumps can also include artificially intelligent components, used for instance for predictive maintenance. Grundfos is a leading pump manufacturer. Centrifugal pumps are part of Grundfos' pump production and can be customised in multiple ways to meet specific requirements, e.g., the need to handle high-temperature liquids or the need to ensure predictive maintenance. In [6], a study was conducted to show that, through cloud-side collaboration, real-time monitoring of the running status of centrifugal pumps and intelligent diagnosis of centrifugal pump faults might become possible, hence enabling failure avoidance via a constant assessment of the reliability of the pump and allowing maintenance to be conducted only when necessary. Since this is an evolving research area, it is not excluded that over-the-air repair or machine learning-based self-healing reconfiguration might take place. Hence, Grundfos is interested in a multi-concern frame of reference as well as in managing its knowledge.


B. Gallina et al.

2.6 Multi-concern Assurance

The notion of Multi-concern Assurance was introduced in the context of the AMASS project [31, 32] to highlight that assurance of cyber-physical systems cannot be limited to a single concern, e.g., safety assurance. Instead, multiple concerns shall be considered, as well as their interplay and trade-offs. This necessity is even more true in the context of Industry 5.0, where cognitive cyber-physical systems are expected to play a major role in society.

2.7 Variability Management via Base Variability Resolution

Base Variability Resolution (BVR) is a metamodel [22] for modelling (VSpec model) and resolving (Resolution model) variability at an abstract level, i.e., without referring to the exact nature of the variability with respect to the base model, and for realising (Realisation model) a new configuration of the base model. In this paper, we limit our attention to the VSpec model. VSpec permits users to model variability in a feature diagram-like fashion, i.e., via a tree structure, where vertices are called Vnodes and arcs (connecting Vnodes) represent implicit logical constraints on their resolutions. In what follows, we recall the BVR modelling elements used in this paper. For a complete introduction to BVR, the reader may refer to [22]. Root (feature) represents the starting node of the tree structure. It is depicted with a rounded rectangle. Choice (feature) represents a yes/no decision. It is also depicted with a rounded rectangle. Constraint, given in BCL (Basic Constraint Language), is a logical formula or expression over VSpecs used to restrict the allowed resolutions. It is depicted as a parallelogram with the textual constraint written inside. The parallelogram is linked to a VSpec, representing the context of the constraint. If not linked, it is interpreted as a global constraint.
Group dictates the number of choice resolutions, e.g., 1..1, which refers to xor, in which exactly one of the child features must be selected; 1..N refers to or, in which at least one of the child features must be selected. The group is depicted with a triangle plus textual notation representing the multiplicity. ChoiceOccurrence is depicted via a choice symbol enriched with textual notation to indicate the type. A dashed link between choices indicates that the child choice is not implied by its parent. A solid link between choices indicates that the child choice is implied by its parent. The cognitive effectiveness of BVR has been assessed by Bernhard et al. [3], based on Moody's principles [28]. Despite some cognitive weaknesses revealed by the assessment, BVR's inherent support for tree-like structures is, from the perspective of semantically transparent relationships, cognitively effective in representing the semantics of the relationships. Hence, we believe that feature diagram-like representations can be intuitive and can represent a helpful modelling means for establishing a multi-concern frame of reference. BVR was extensively used within and beyond the AMASS project to enable intra-domain, cross-domain and cross-concern reuse [27] as well as co-engineering of mono-concern risk-based processes, specifically in the space [16], medical [21] and automotive [4] domains.
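To make the notation above concrete, the following minimal Python sketch (our own illustration, not the BVR tool's API; the feature names are hypothetical) encodes a choice tree with a group multiplicity and checks whether a given resolution respects it:

```python
# Illustrative sketch (not BVR tooling): a minimal feature tree with
# group multiplicities, used to validate a resolution (feature selection).
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Choice:
    name: str
    implied_by_parent: bool = False          # solid link in BVR notation
    children: list["Choice"] = field(default_factory=list)
    group: Optional[tuple[int, int]] = None  # (1, 1) = xor, (1, N) = or

def valid(node: Choice, selected: set[str]) -> bool:
    """Check that a set of selected features respects the group multiplicities."""
    if node.name not in selected:
        return True  # an unselected subtree imposes no constraints
    # children implied by the parent (solid links) must also be selected
    for c in node.children:
        if c.implied_by_parent and c.name not in selected:
            return False
    if node.group:
        lo, hi = node.group
        picked = sum(1 for c in node.children if c.name in selected)
        if not (lo <= picked <= hi):
            return False
    return all(valid(c, selected) for c in node.children)

# Hypothetical compliance line: exactly one directive variant (xor group).
md = Choice("MD-TechnicalFile")
mr = Choice("MR-TechnicalDoc")
root = Choice("MachineryCompliance", children=[md, mr], group=(1, 1))

print(valid(root, {"MachineryCompliance", "MD-TechnicalFile"}))                      # True
print(valid(root, {"MachineryCompliance", "MD-TechnicalFile", "MR-TechnicalDoc"}))   # False
```

Selecting both children of a 1..1 group violates the xor multiplicity, which is exactly the kind of constraint a resolution model makes explicit.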

A Knowledge Management Strategy for Seamless Compliance


3 Knowledge Management Strategy

As seen in the background section, legislations, harmonised standards, and guidance provide requirements that contribute to constraining the engineering of the process-focused artefacts. We see that requirements are refined from high-level process-focused requirements, provided within legislations and guidance on the legislations' application, to technically specialised requirements at generic machinery level (provided by harmonised standards at generic machinery level), down to sub-sector-specific machinery (e.g., requirements provided at liquid-pump level). At each level, we have sets: sets of partly overlapping legislations, sets of partly overlapping standards, etc. Given these sets of overlapping elements, a product line approach consisting of the traditional two-phase engineering process can be adopted (as depicted in Fig. 1). First, the process-focused domain is engineered; then, the process-focused artefacts are engineered based on the constraints imposed by the domain.

Fig. 1. Process-focused Domain and Artifact Engineering.

This high-level product line-oriented approach can be translated into BVR feature diagrams in order to provide an intuitive representation. Once the intuitive representation is understood and shared by all stakeholders/roles, in our strategy, as depicted in Fig. 2, an ontology-based representation is expected to be adopted in order to enable querying of the knowledge graphs and the co-generation of the technical documentation.

Fig. 2. Ontology-based Continuous Compliance Strategy.
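As a rough illustration of the ontology-based step, a knowledge graph can be prototyped as a plain triple store and queried for overlaps. The vocabulary below (predicates such as `overlapsWith` and `refines`, and the annex identifiers) is assumed for illustration only, not taken from a published ontology:

```python
# Minimal triple-store sketch: compliance knowledge stored as
# (subject, predicate, object) triples and queried by pattern matching.
# All names are hypothetical illustrations of the strategy.
triples = {
    ("MD:AnnexVII", "requires", "TechnicalFile"),
    ("MR:AnnexIV", "requires", "TechnicalDocumentation"),
    ("MD:AnnexVII", "overlapsWith", "MR:AnnexIV"),
    ("EN809", "refines", "MD:AnnexVII"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Which annexes overlap with MD Annex VII?
print(query(s="MD:AnnexVII", p="overlapsWith"))
# [('MD:AnnexVII', 'overlapsWith', 'MR:AnnexIV')]
# Which standards refine it?
print(query(p="refines", o="MD:AnnexVII"))
# [('EN809', 'refines', 'MD:AnnexVII')]
```

In a real implementation this role would be played by an RDF store queried with SPARQL; the point of the sketch is only that overlaps, once made explicit as graph edges, become queryable rather than hidden in document prose.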



It shall be noted that at each level, different stakeholders with different expertise play a role. Often these stakeholders work in silos. Hence, a common frame of reference would be beneficial to expose them to the interfaces.

4 Exemplification of the Strategy

In this section, we partly exemplify our knowledge management strategy. Specifically, we limit the exemplification to the BVR representation, and we represent part of the knowledge contained within the legislations, standards, etc., related to the generic machinery and partly to the liquid pumps. The process-focused information is strictly dependent on the characteristics and type of the machinery. Hence, as depicted in Fig. 3, we also need to model the space of possibilities that interests us. At Grundfos, and in the context of our research project, the focus is on the specialisation of machinery with respect to the pump type, which then may be further specialised (e.g., electric, electronic, digital) to distinguish the products within the product line and the need for conformity based on, e.g., the presence/absence of digital elements enabling connectivity, evolutionary behaviour, etc. However, in this paper, we do not provide details regarding the technical pump system, since our attention is limited to process-focused artefacts. The Multi-concern Machinery Compliance Line, which results from considering all legislations considered in the background, is depicted in Fig. 4. This figure shall be further refined to take into consideration the development of the individual features representing the technical evidence as a block (e.g., the technical file (TF) for the machinery directive (MD)), but also as reusable pieces of evidence when MR overlaps with MD or with the other regulations.

Fig. 3. Machinery Characteristics.

From a technical documentation perspective, Annex VII of the MD largely overlaps with Annex IV of the MR. At first glance, this large overlap may remain hidden, given


Fig. 4. Multi-concern Machinery Compliance Line.




Fig. 5. Machinery Directive Technical File.



the different way that these pieces of documentation are expected to be packaged. MD and MR, however, differ not only in the way they require the technical documentation to be structured but also in the way safety is interpreted. MR expects manufacturers to embrace larger categories of risks while conducting their risk assessment. If MD is in focus and if the pump is neither connected nor evolutionary, the technical documentation to be considered is limited to the technical file (TF), as depicted in Fig. 5. It shall be pointed out that Fig. 5 constitutes the resolution (inclusion/exclusion) of features from the Multi-concern Machinery Compliance Line. The root of Fig. 5 corresponds to the expansion of feature f2.1.1 of Fig. 4 based on the evidential elements, which were listed in Sect. 2.1.

5 Discussion and Synergy with the SPI Manifesto

In Sect. 4, we provided a preliminary but timely illustration of our strategy. A more complex implementation/illustration is expected to be developed iteratively and incrementally along with the understanding of the upcoming regulations. Regarding the synergy with the SPI Manifesto [29], which targets software: our strategy is not limited to software; it embraces organisational knowledge management in general and, in the context of this paper, it focuses on knowledge management in relation to commonalities and variabilities among risk management processes described within the machinery directive/regulation and further refined by harmonised standards and company-specific guidelines. Our strategy is specifically related to the SPI principle Create a learning organisation. Our strategy is also related to the SPI principle Ensure all parties understand and agree on process, since by managing knowledge related to risk assessment processes in a traceable and systematic manner, we contribute to ensuring that all parties understand and agree on the processes.

6 Related Work

In the law community, Chiara [7] provides an informal (textual) and brief discussion of the interplay between the CRA and the MR. However, no technical solution for managing the knowledge is envisioned. In the knowledge management community, a large number of works propose knowledge management strategies. However, few of them, mainly stemming from the intersecting computational law community, focus on compliance via ontology-based solutions. Robaldo et al. [30], for instance, conduct experiments to compare semantic web-based technologies for automating compliance. To the best of our knowledge, our work represents a novelty with respect to 1) the technical solution and 2) the case considered. From a technical solution perspective, no approach so far has proposed a two-phase strategy, even though it shall be stated that this work is connected to another ongoing and unpublished work conducted by the first author in the context of a sibling national project [1], where the two-phase strategy is also explored but within a broader scope. This work also borrows the general idea of creating ontologies for compliance purposes from Gallina et al. [17–20], where ontology-based representations of automotive standards were proposed for compliance



purposes as well as for automated generation of safety cases. From a case perspective, no research so far has been conducted to elaborate a knowledge management strategy targeting the multi-concern frame of reference for seamless and continuous compliance with the machinery directive/regulation.

7 Conclusion and Future Work

In this paper, we have presented our knowledge management strategy for enabling seamless and continuous compliance with the machinery directive and its upcoming revision. We have also applied it to the sub-sector populated by centrifugal pumps, focusing on risk management processes. Given the space limits, we have largely simplified the case. Hence, in the future, we plan to consider more complex cases in order to investigate the scalability and efficiency of our strategy. We also intend to translate BVR representations into ontology-based representations. Finally, we plan to continue digging into the same sector and sub-sector, as well as exploring other sub-sectors, to be able to reuse sub-sector-independent modelling chunks.

References

1. 4DSafeOps Team: 4DSafeOps, Standards-Assurance Case-Process-Product-Aware SafeOps #49, Software Center. https://www.software-center.se
2. Bauman, Z.: Liquid Modernity. Polity Press, Cambridge (2000)
3. Bernhard, M., Holøs, Ø.: Building BVR Models Better. Master's thesis, Department of Informatics, University of Oslo (2015)
4. Bramberger, R., Martin, H., Gallina, B., Schmittner, C.: Co-engineering of safety and security life cycles for engineering of automotive systems. Ada Lett. 39(2), 41–48 (2020). https://doi.org/10.1145/3394514.3394519
5. CEN: EN 809:1998+A1 Pumps and pump units for liquids - Common safety requirements (2009)
6. Chen, L., Wei, L., Wang, Y., Wang, J., Li, W.: Monitoring and predictive maintenance of centrifugal pumps based on smart sensors. Sensors 22(6) (2022). https://doi.org/10.3390/s22062106. https://www.mdpi.com/1424-8220/22/6/2106
7. Chiara, P.G.: The Cyber Resilience Act: the EU Commission's proposal for a horizontal regulation on cybersecurity for products with digital elements. Int. Cybersecur. Law Rev. 3(2), 255–272 (2022). https://doi.org/10.1365/s43439-022-00067-6
8. Council of the European Union: Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (recast), May 2006
9. ET4CQPPAJ Team: ET4CQPPAJ, Trace Evidence for Continuous Quality Product Process Assurance Justification, project #50, Software Center. https://www.software-center.se
10. European Commission: Guide to application of the Machinery Directive 2006/42/EC, October 2019
11. European Parliament & Council of the European Union: Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act), April 2019



12. European Parliament & Council of the European Union: Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, April 2021
13. European Parliament & Council of the European Union: Proposal for a Regulation of the European Parliament and of the Council on machinery products, April 2021
14. European Parliament & Council of the European Union: Proposal for a Regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and amending Regulation (EU) 2019/1020, September 2022
15. European Parliament & Council of the European Union: Proposal for a Regulation of the European Parliament and of the Council on machinery products, January 2023
16. Gallina, B.: Quantitative evaluation of tailoring within SPICE-compliant security-informed safety-oriented process lines. J. Softw. Evol. Process - EuroSPI Special Issue 32(3), 1–13 (2020). https://doi.org/10.1002/smr.2212
17. Gallina, B., Castellanos Ardila, J.P., Nyberg, M.: Towards shaping ISO 26262-compliant resources for OSLC-based safety case creation. In: Roy, M. (ed.) 4th International Workshop on Critical Automotive Applications: Robustness & Safety, CARS 2016, Goteborg, Sweden, September 2016. https://hal.archives-ouvertes.fr/hal-01375489
18. Gallina, B., Nyberg, M.: Reconciling the ISO 26262-compliant and the agile documentation management in the Swedish context. In: Roy, M. (ed.) Critical Automotive Applications: Robustness & Safety, CARS 2015, Paris, France, September 2015. https://hal.archives-ouvertes.fr/hal-01192981
19. Gallina, B., Nyberg, M.: Pioneering the creation of ISO 26262-compliant OSLC-based safety cases. In: IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW), pp. 325–330 (2017). https://doi.org/10.1109/ISSREW.2017.41
20. Gallina, B., Padira, K., Nyberg, M.: Towards an ISO 26262-compliant OSLC-based tool chain enabling continuous self-assessment. In: 2016 10th International Conference on the Quality of Information and Communications Technology (QUATIC), pp. 199–204 (2016). https://doi.org/10.1109/QUATIC.2016.050
21. Gallina, B., Pulla, A., Bregu, A., Ardila, J.P.C.: Process compliance re-certification efficiency enabled by EPF-C ∘ BVR-T: a case study. In: Shepperd, M., Brito e Abreu, F., Rodrigues da Silva, A., Pérez-Castillo, R. (eds.) QUATIC 2020. CCIS, vol. 1266, pp. 211–219. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58793-2_17
22. Haugen, Ø., Øgård, O.: BVR – better variability results. In: Amyot, D., Fonseca i Casas, P., Mussbacher, G. (eds.) SAM 2014. LNCS, vol. 8769, pp. 1–15. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11743-0_1
23. IEC: IEC 62853 Open systems dependability (2018)
24. ISO/IEC/IEEE 32675: Information technology - DevOps - Building reliable and secure systems including application build, package and deployment (2022)
25. ISO/TC 159/SC 5: ISO 13732-1:2006 Ergonomics of the thermal environment - Methods for the assessment of human responses to contact with surfaces - Part 1: Hot surfaces (2006)
26. ISO/TC 199: ISO 12100:2010 Safety of machinery - General principles for design - Risk assessment and risk reduction (2010)
27. Javed, M.A., Gallina, B.: Safety-oriented process line engineering via seamless integration between EPF Composer and BVR Tool. In: 22nd International Systems and Software Product Line Conference (SPLC), 10–14 September, Gothenburg, Sweden. ACM Digital Library (2018). https://doi.org/10.1145/3236405.3236406
28. Moody, D.: The "physics" of notations: toward a scientific basis for constructing visual notations in software engineering. IEEE Trans. Softw. Eng. 35(6), 756–779 (2009). https://doi.org/10.1109/TSE.2009.67



29. Pries-Heje, J., Johansen, J. (eds.): MANIFESTO Software Process Improvement, eurospi.net, Alcala, Spain (2010)
30. Robaldo, L., Pacenza, F., Zangari, J., Calegari, R., Calimeri, F., Siragusa, G.: Efficient compliance checking of RDF data. J. Logic Comput. 32, 369–401 (2023). https://doi.org/10.1093/logcom/exad034
31. Ruiz, A., Gallina, B., de la Vara, J.L., Mazzini, S., Espinoza, H.: AMASS: architecture-driven, multi-concern, seamless, reuse-oriented assurance and certification of CPSs. In: 5th International Workshop on Next Generation of System Assurance Approaches for Safety-Critical Systems (SASSUR), Trondheim, Norway, September. Computer Safety, Reliability, and Security (SAFECOMP), Lecture Notes in Computer Science, vol. 9923, pp. 311–321 (2016). https://doi.org/10.1007/978-3-319-45480-1_25
32. de la Vara, J.L., Parra, E., Ruiz, A., Gallina, B.: AMASS: a large-scale European project to improve the assurance and certification of cyber-physical systems. In: Franch, X., Männistö, T., Martínez-Fernández, S. (eds.) PROFES 2019. LNCS, vol. 11915, pp. 626–632. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-35333-9_49

SPI and Good/Bad SPI Practices in Improvement

Corporate Memory – Fighting Rework with a Simple Principle and a Practical Implementation

Morten Korsaa1(B), Niels Mark Rubin2, and Jørn Johansen1

1 Whitebox, Hørsholm, Denmark
{mk,jj}@whitebox.dk
2 Grundfos, Bjerringbro, Denmark
[email protected]

Abstract. Can we avoid the many minor misunderstandings that generate a lot of rework? Can new tools, and changing a few old habits, create more flow in the development work, with less annoying rework? We have become used to the meetings and communications required to fix the misunderstandings between different stakeholders and the rework that follows. An increased number of stakeholders, and complexity in general, mean that many projects are reporting a state of meeting suffocation where they are making unsatisfyingly little progress due to many meetings. One valuable principle that deals with the problem is "Corporate Memory" from Expectations Engineering, which will be briefly described, including the benefit it will bring. To demonstrate the principle in practice, this paper will show an implementation in the IS department of Grundfos. While this is a great example based on a software DevOps environment, it still serves as a general example for implementation in other settings. Keywords: Requirements engineering · Expectation engineering (EE) · Risk management · Configuration management · Test management · Corporate memory · Traceability · Structure

1 Introduction

The goal is to reduce the amount of rework in complex engineering environments, where rework is defined loosely as all the annoying work spent correcting/fixing/debugging defects/issues/errors that could have been easily avoided if only the developers/engineers had had the correct information. Rework is NOT when a prototype is made to test the feasibility of a concept, as is the "normal" in agile development. A fair definition of rework is based on the gut feeling of the engineer/developer: it is rework if they are not proud of the task or are annoyed to do it again just because of the lack of appropriate information. Expectations Engineering is about establishing the best possible input to technical engineering, and one of its principles is that all stakeholders work in the same "Corporate Memory." Please note that the principle addresses the quality of the work when establishing the expectations for the solutions, independent of

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 237–257, 2023. https://doi.org/10.1007/978-3-031-42307-9_18



whether the best time to do this is when refining the backlog in an agile project, in the definition phase of a waterfall project, or in any mix of lifecycles. When you try to understand the concept, look for simplicity, because it is a straightforward concept! The practical implementation, however, is complicated. Sections 2 and 3 explain the simple principle. Section 4 describes how Grundfos has implemented the principle. It is a good practice that has demonstrated its high value – in one specific context. A great inspiration, but your context is different, and you must design your corporate memory for your context.

2 The Problem Today

We have become used to the problem. We fix issues when we see them. It is what we do. But hold on! Some could have been avoided early and cheaply if only the right stakeholder had been asked at the right time. In fact, more than 70% of them [3]. Most organizations spend approx. 30–40% of their time on rework, and the benefit is substantial if 70% of that can be avoided. The problem starts with, and grows exponentially with, the number of stakeholders with expectations for a solution. Each of the stakeholders:

• Is essentially talking about the same new solution!
• Expects that their version is "correct."
• Expects that everyone else has understood them.
• Expects that minor deviations will be fixed along the way.

From each stakeholder's point of view, all of this is reasonable, but their version is not the only one, and their position is not entirely clear to the other stakeholders. When different stakeholders store these slightly different expectations of the solution in different places and formats, the information is typically redundant, missing, and outdated. This is the root cause of most of the rework – rework that manifests when we realize a misunderstanding. The misunderstandings are the "small deviations that will be fixed along the way." The problem is that there are surprisingly many, and they take a surprising amount of communication to solve. One clear symptom of this problem in your daily life is the number of meetings you have to attend to fix misunderstandings. If you are approaching the "meeting suffocation point," you feel the pain. Another problem with redundant, missing, and outdated information is that it creates:

• Uncertainty:
– You cannot be sure that you have the correct information.
– You realize you don't know when it is too late.
• Annoyance:
– The time you spend looking for the correct information.
– The time you spend reworking stuff you would have done right in the first place if only you had had the correct information.

The purpose of the "corporate memory" is to deal with this.

3 Corporate Memory Principle The principle is straightforward, but the implementation is unfortunately not. The simple principle is that all stakeholders read and write in the same storage to avoid redundant information. This principle leaves no room for specific stakeholders’ preferences. All must chip in and use the same system; license issues should not be an obstacle. When all stakeholders work in the same space, there is little room for uncertainty about the information. And that leads us to the trickier part of corporate memory because it supports and relies on a well-known principle in communication. “The sender is responsible for the receiver’s understanding.” The quality of the information is essential to create real consensus among the stakeholders. The quality is good enough if all stakeholders have understood the relevant information in the same way as the sender of the information. Here we are at the root cause of the problem – all the minor misunderstandings. It takes more effort to ensure that all relevant stakeholders have understood your needs and requirements than writing a sketchy note on the back of a napkin and delivering it to a developer you happen to know. But by the end of the project, you will have to explain it in detail anyway. The “Corporate Memory” promise is that it pays off to create consensus early. If the input is not good enough, the designers will make assumptions to keep working. Then later, the designer will recognize the false assumptions and have to go back to the stakeholder, ask for clarification, and then redesign. The false assumption ripples down in the project when other designers build upon the false design. Now picture the situation where a provider of a need is not accessible for a clarification meeting one month from now. Picture how many defects the group of designers can produce in a month if an assumption is wrong and how much rework this creates. 
The purpose of the corporate memory is to facilitate stakeholder consensus by exposing misunderstandings and forcing compromises to be made in due time. Traditionally, this has happened in documents like a requirements specification, Excel sheets like traceability matrices, discussion groups, e-mails, meetings, and minutes of meetings, where different stakeholders have different preferences with regard to the media. It was the only possibility, since there was no proper tool support in the past. But with the emergence of model-based development tools, there is an opportunity to move from a document-centric system to model-based management of all the entities in engineering. Software and DevOps tool stacks have shown the way, and 3D models and related tools in construction are close behind. The movement is inevitable and very welcome, but it is a paradigm shift. Summing up the principles of a "Corporate Memory":

1. Information about expectations for the solution is kept in one place.
2. All stakeholders have access.
3. "The sender is responsible for the receiver's understanding."



In the following sections, we will demonstrate a practical implementation.

4 Grundfos Case

Grundfos is a worldwide pump manufacturing company. Two years ago, we started developing a next-generation Marking System in our Production IT department. The Marking System application handles all printing of the various labels used in pump production and the laser engraving of nameplates used as production identification of the produced pumps. The marking system operates on production data delivered by an SAP system (Fig. 1).

Fig. 1. Grundfos marking system

The Marking System development team consists of 4–5 people. Based on the team's broad experience in software development and requirement management, we decided to use Microsoft Azure DevOps/Visual Studio [4] for software development. Azure DevOps in its current state primarily focuses on supporting software development, so we decided to look for a tool that could complement it with requirements management. We found "Modern Requirements" [5], a plug-in for Azure DevOps, to be the best tool for the job. The following describes our results, based on team discussions during the ongoing software development and our personal experiences from many years of software development.

4.1 Requirements

First, we started with the hard work of requirements elicitation for the application. It is well described that working carefully with requirements before design and coding will save a lot of time, money, and trouble later. But it is hard to encourage IT developers to work with requirements. It seems boring, and nobody sees any visible results except for some documents (Fig. 2).




Fig. 2. Initial information model

Nevertheless, requirements are an essential foundation that clarify the application both from the outside (the customer perspective) and from the inside (the developer perspective). It is essential not to use IT buzzwords or IT slang in the text descriptions, as non-IT stakeholders may become unsure of the meaning, and thereby it does not become a document of shared understanding. The requirements are where developers later seek clarifying information, long after a project has started, and find it nice that somebody did some work defining them in the first place. Ensuring progress with requirements work needs a dedicated person (i.e., a requirement manager) in the project team. The person must accept the role of tirelessly demanding that requirements be described before design and implementation. The person should be interested in and responsible for ensuring that requirements are well defined and reviewed – and should like conducting this process [7]. During our requirements elicitation, we built the information model in Azure DevOps with requirements at the model's center. The Azure DevOps system can inter-link various items, so creating a network of information was easy. We decided to use two kinds of requirements, namely stakeholder requirements (business) and solution requirements (technical/design), to cover all needs (Fig. 3).

Fig. 3. Example of a stakeholder requirement



It is important to identify all relevant stakeholders and describe their interest in the system, and every requirement must have a key stakeholder (persona). Requirements were separated into four buckets:

• Twelve months ahead of us (capture of ideas and descriptions on a goal level)
• Six months ahead (still very general descriptions – on a need/goal level)
• Three months ahead (refined, going to be reviewed soon)
• In progress (in the process of being reviewed and implemented)

4.1.1 Reviews

Requirements reviews took place by inviting stakeholders by email (including hyperlinks to the review) to comment on the descriptions directly in the Modern Requirements plugin, as shown in Fig. 4; each stakeholder can edit, approve, reject, or comment on the requirement.

Fig. 4. Example of several requirements being reviewed by various stakeholders

4.2 Features

We separate requirements from features carefully. Features must be changeable independently of the requirements. Requirements must only describe the needs and wishes that give value, while features describe how the implementation is grouped/structured and the assignment of terms and names to the functionality – for later, when the implementation takes place. Later, you may want to change the implementation/design due to design or technical matters; this can then be done without another round of requirements change and review, as long as it does not collide with any solution requirements (Figs. 5 and 6).

Fig. 5. Requirements and features are separated

Fig. 6. Requirements and features organized in DevOps

We expand our information model to hold collections of requirements and collections of features. To better understand how the various entities are related, we use a graph notation where, for example, requirements relate to features in an N:M relationship.

4.3 Work

We now expand our information model further to include "the work." Work is mainly related to features, since it is here that the implementation work takes place. By doing so, we get full traceability from implemented features to the amount of related work, and vice versa. We use the generic Azure DevOps items Product Backlog Item (PBI), Bug, and Task to hold information about estimates and work-done hours (Figs. 7 and 8).

M. Korsaa et al.

Fig. 7. Updated information model

Fig. 8. Work in DevOps

4.4 Test

Tests are generally done to verify the fulfillment of requirements, not to verify features. Features are tested at a low-level component base using unit tests to ensure consistency and code quality. The model is extended with test cases and test suites, all linked to requirements (Fig. 9).


Fig. 9. Test items added to the model


4.4.1 Test Case

Each stakeholder requirement must be verified by at least one test case. Test cases can be executed manually or automatically. In our model, some test cases are executed by an external computer system that returns the test result to the test case. A primary goal for us has been to introduce automatic tests wherever possible. The possibility to run regression tests when the source code has been modified is valuable, since it can be done outside working hours on a test server. By making automatic user interface tests, we easily verify the stakeholder requirements by simulating the user interaction. Another advantage is that the external system may have a hardware configuration whose behavior takes part in the test (Fig. 10).


Fig. 10. Test case with external test execution

We have chosen "TestComplete" [6] as the tool for the user interface test, simulating the user interactions and expected responses; other tools exist and are being considered. TestComplete allows us to describe the test in the Gherkin language and execute it automatically on the external system. Solution requirements (technical/architectural requirements) are handled differently: they are fulfilled by a generic, completed "DoneByDesign" test case when, e.g., solution architects inspect and accept the design (Fig. 11).
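The Given/When/Then idea behind such Gherkin scenarios can be illustrated with a toy step registry. This is not how TestComplete binds keywords internally; the step texts, pump names, and pattern-matching scheme below are invented for illustration.

```python
import re

# Toy registry mapping Gherkin step text (regex) to Python callables.
steps = {}

def step(pattern):
    def register(fn):
        steps[pattern] = fn
        return fn
    return register

@step(r"the operator opens the pump status screen")
def open_screen(ctx):
    ctx["screen"] = "pump-status"

@step(r'the operator selects pump "(\w+)"')
def select_pump(ctx, name):
    ctx["selected"] = name

@step(r'the status of pump "(\w+)" is shown')
def assert_status(ctx, name):
    assert ctx["selected"] == name

scenario = [
    'Given the operator opens the pump status screen',
    'When the operator selects pump "P1"',
    'Then the status of pump "P1" is shown',
]

def run(scenario):
    """Execute each step of a scenario against the registry."""
    ctx = {}
    for line in scenario:
        text = line.split(" ", 1)[1]          # strip the Gherkin keyword
        for pattern, fn in steps.items():
            m = re.fullmatch(pattern, text)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise KeyError(f"no step matches: {text}")
    return "passed"

print(run(scenario))  # -> passed
```

The point of the plain-language scenario is that stakeholders can read it, while the bound steps drive the simulated user interaction.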


Fig. 11. A test case description written in Gherkin

4.4.2 Test Suite

A test suite is a collection of test cases that can be executed in one run. A test suite often has a standard setup or context required to run the included test cases (Figs. 12 and 13).

Fig. 12. Modern Requirements Test suite


4.4.3 Test Plan

The test plan describes which test suites are to be run, and when.

Fig. 13. Test plans

4.4.4 Test Run

A test run is the execution of a test suite; its result is how many test cases passed (Figs. 14 and 15).
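A test run's result is essentially a tally over its test-case outcomes; a minimal sketch (the outcome labels and test-case ids are our assumption):

```python
from collections import Counter

def summarize(results):
    """Tally outcomes of one test run; 'results' maps test-case id -> outcome."""
    tally = Counter(results.values())
    return {"passed": tally["passed"], "failed": tally["failed"],
            "total": len(results)}

run = {"TC-1": "passed", "TC-2": "passed", "TC-3": "failed"}
print(summarize(run))  # -> {'passed': 2, 'failed': 1, 'total': 3}
```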

Fig. 14. Test run


Fig. 15. Traceability matrix showing relations between requirements and the verifying test cases

4.5 Correlation Between Requirements and Test

Modern Requirements provides a tool to overview the relations between requirements and test cases; this presentation gives a quick overview of missing test cases for a specific requirement.

4.6 Risks

As a natural part of the software development process, we explore risks during development. Risks are characterized by a probability value and an impact value, which together give a rank of severity. They are related to requirements or features (Figs. 16 and 17).
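The severity ranking can be sketched as a simple lookup. The 1-3 scales, the product rule, and the band thresholds below are our assumptions for illustration, not the team's actual scheme.

```python
def severity(probability: int, impact: int) -> str:
    """Rank a risk from probability and impact, each on an assumed 1-3 scale."""
    if not (1 <= probability <= 3 and 1 <= impact <= 3):
        raise ValueError("probability and impact must be in 1..3")
    score = probability * impact
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(severity(probability=2, impact=3))  # -> high
print(severity(probability=1, impact=2))  # -> low
```

Automating this calculation in the work-item tool removes one manual, error-prone step from the monthly risk review.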


Fig. 16. Risks are explored during development.


Fig. 17. Risks in Azure DevOps

When a risk is defined, it must be assigned an impact and a probability value, and the resulting severity is calculated automatically. Risks are revised and classified at monthly project meetings, and if a risk can be countered within the development team, corrective actions can be created as PBIs, scheduled, and initiated by either the project manager or the project team.

4.7 Baselining

To produce a stable requirement platform as the foundation for features and the later builds and code, we make a baseline. A baseline is a snapshot of the set of requirements we want in a release. The snapshot freezes the contents of the requirements (word for word), so we get a consistent relation between the requirements and the later produced application release (Figs. 18 and 19). Tests are executed against and verify the frozen requirements. The requirements may later be modified, but the frozen baseline remains the same (Fig. 20).
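Freezing requirement text "word for word" amounts to taking an immutable snapshot. The deep-copy sketch below is our illustration of the idea, not the mechanism Modern Requirements actually uses; the requirement text is invented.

```python
import copy

def make_baseline(name, requirements):
    """Snapshot the current requirement texts; later edits leave the baseline untouched."""
    return {"name": name, "items": copy.deepcopy(requirements)}

requirements = {101: "The pump shall report flow rate every second."}
baseline = make_baseline("Release 1.0", requirements)

# Requirement text evolves after the freeze...
requirements[101] = "The pump shall report flow rate every 500 ms."

# ...but the baseline still holds the frozen wording.
print(baseline["items"][101])  # -> The pump shall report flow rate every second.
```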


Fig. 18. Information model extended by baselines

Fig. 19. Modern Requirements – a set of requirements in a baseline


Fig. 20. Relations between Test → Requirements → Application code


4.8 Application - Production and Traceability

When the baseline is generated, we have the link from requirements to the source code in a build/release. If a release has bugs found during execution in production, we can backtrack the release to the defining requirements (Figs. 21, 22 and 23).

Fig. 21. Baseline


Fig. 22. The linkage chain from the requirements to release and executed tests

These interlinked items are of high importance for traceability. This relation model ensures we can track down the related requirements and tests for a given release (Fig. 24). The back-track enables us to determine whether a bug found during execution in production was related to weak or faulty requirements (Fig. 25).
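The back-track from a released build to its requirements follows the link chain of Figs. 22 and 24. A simplified sketch, with invented identifiers and link tables standing in for the real work-item links:

```python
# Simplified link tables (identifiers are invented for illustration).
baseline_of_release = {"rel-2.3": "BL-7"}          # build/release -> baseline work item
requirements_in_baseline = {"BL-7": [101, 102]}    # baseline -> frozen requirement ids
test_runs_of_baseline = {"BL-7": ["run-41"]}       # baseline -> test runs

def backtrack(release):
    """From a release with a field bug back to its requirements and test runs."""
    baseline = baseline_of_release[release]
    return {"baseline": baseline,
            "requirements": requirements_in_baseline[baseline],
            "test_runs": test_runs_of_baseline[baseline]}

print(backtrack("rel-2.3"))
# -> {'baseline': 'BL-7', 'requirements': [101, 102], 'test_runs': ['run-41']}
```

Because every hop is an explicit link, the same lookup works in both directions: from a requirement forward to the releases it shipped in, or from a release back to its frozen requirements.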


Fig. 23. A Baseline linked to application releases


Fig. 24. Back-tracking in the model

Fig. 25. Corporate memory model - relations between requirements, features, tasks and tests

4.9 Grand Model

Creating a corporate information model gives us a fully transparent world of information. Different roles with different needs have separate views of the model: requirement management, the development team, test management, release management, project management, and stakeholders can all find the information relevant to them.

4.10 Overview

Azure DevOps provides many easy ways to create overviews of work progress (Figs. 26, 27, and 28).
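Overviews like these can also be pulled programmatically: Azure DevOps accepts WIQL (Work Item Query Language) queries over its REST API. The sketch below only builds a query string; the work-item type name is an assumption specific to this model, and the endpoint noted in the comment is not called.

```python
def state_overview_query(item_type: str) -> str:
    """Build a WIQL query listing work items of one type, ordered by state."""
    return (
        "SELECT [System.Id], [System.Title], [System.State] "
        "FROM WorkItems "
        f"WHERE [System.WorkItemType] = '{item_type}' "
        "ORDER BY [System.State]"
    )

query = state_overview_query("Requirement")
print(query)
# The query would be POSTed as {"query": "..."} to the project's
# /_apis/wit/wiql endpoint on dev.azure.com.
```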

Fig. 26. Requirement’s state overview – a view for the requirement manager

Fig. 27. Backlog view – a view for the development team

4.11 Credits

At Grundfos, the corporate memory model has been developed over time, and we would like to thank Daniel Juhl Christoffersen and Nicolai Sixhøj Christensen from Grundfos for their great contributions, teamwork, and the discussions that led to our model.


Fig. 28. Sprints - Work overview – a view for the scrum master/project manager

5 Conclusion

Clever use of a software development environment has provided the one "easy to find" place where all needs, requirements, designs, risks, specifications, source code, and tests are captured, managed, reviewed, and approved. The team has experienced a good requirement/development flow with few frustrations caused by show-stopping impediments. It takes great tenacity to keep requirements elicitation ahead of development, but everybody is happy when it happens. Combined with ongoing code reviews and daily design discussions, we have experienced a non-stressful working environment where both the development team and stakeholders can orient themselves in the model as familiar surroundings.

Daniel Juhl Christoffersen, a member of the development team and Scrum Master, put it this way: "Implementing this model has proven valuable in many aspects of our daily work. Our decision to consolidate all project-related information in Azure DevOps has proven to be a game-changer. It has established a single source of truth for our team, making it effortless to search and access critical information such as requirements, code, diagrams, specifications, documentation, tests, and results. It also opens the ability to measure everything, helping us lift the quality of our work. One of the standout features of the model is the traceability it offers. With the ability to link everything together, we can easily trace, for example, released code back to baselined requirements and test results and vice versa; we now have comprehensive visibility into the entire development process. This traceability has been instrumental


in ensuring that our work is aligned with project goals and requirements, providing invaluable insights, and mitigating potential risks. The streamlined processes and centralized source of truth have significantly improved our team's efficiency and effectiveness. Knowing that our processes are robust and reliable, I now have more time and mental bandwidth to focus on creative and productive work. We continually refine the model and processes as part of our commitment to continuous improvement. I am confident that the model will continue contributing to our team's success. The reduced stress, increased efficiency, and comprehensive traceability have made a tangible difference in our workflow. I am grateful for the positive impact that the model has had on our team, and I would not want to go back to our previous way of working."

Anette Refsgaard Laugesen, Manager in Grundfos Production IT, says: "We have invested much time in working with Requirements and the model, and now we really see the benefits. The team is much more efficient and knows what to do. We save much time not reaching out to stakeholders several times as it is clearer and more defined what to do."

Generality

We think the relation model is overall generic and not limited to the kind of project shown here. While it has excellent possibilities for interaction with external stakeholders (sub-contractors), we are confident that it can also be used as an overall project corporate memory for joint projects.

Limitations

In the plug-in tool Modern Requirements, we miss the feature that the baseline generation produces a real work item in the DevOps model. Today we must create an artificial baseline work item and hyperlink the baseline to it. A real work item would provide an automatically generated linkage from requirements and tests to released versions of the application.

6 Relationship with the SPI Manifesto

Implementing a Corporate Memory has followed and enjoyed all the principles of the SPI Manifesto [1]:

• People – Know the culture and focus on needs.
  – A Corporate Memory embraces the culture, including the stakeholders, and facilitates their need for consensus.
• People – Motivate all people involved.
  – A Corporate Memory is all about understanding each other.
• People – Base improvements on experience and measurements.
  – The Grundfos Corporate Memory builds upon one team's experiences.


• People – Create a learning organization.
  – A Corporate Memory establishes consensus.
• Business – Support the organization's vision and objectives.
  – The corporate commitment is all about this.
• Business – Use adaptable and dynamic models as needed.
  – This Corporate Memory uses the Azure DevOps model and framework.
• Business – Apply risk management.
  – Risks are incorporated as a central entity in the Corporate Memory.
• Change – Manage the organizational change in your improvement effort.
  – A Corporate Memory is heavily tool-supported.
• Change – Ensure all parties understand and agree on the process.
  – The development of the Corporate Memory has included the stakeholders.
• Change – Don't lose focus.
  – Efficiency and an excellent working place have been the drivers.

References

1. Pries-Heje, J., Johansen, J.: SPI Manifesto, version 1.2. European System & Software Process Improvement and Innovation (2010)
2. Guide for Writing Requirements, Requirements Working Group, International Council on Systems Engineering, INCOSE-TP-2010-006-03, Version/Revision 3, July 2019
3. Kassab, M., DeFranco, J.F., Laplante, P.A.: Software testing: the state of the practice. IEEE Softw. 34(5), 46–52 (2017). https://www.researchgate.net/publication/319995304_Software_Testing_The_State_of_the_Practice
4. Microsoft Azure DevOps/Visual Studio. https://en.wikipedia.org/wiki/Visual_Studio
5. Modern Requirements – application plug-in for Azure DevOps that enhances requirement handling, reviews, etc. https://www.modernrequirements.com/
6. TestComplete, by SmartBear – an application that enables systematic user interface test and verification. https://en.wikipedia.org/wiki/TestComplete
7. Bennett-Therkildsen, A., Jørgensen, J.B., Nørskov, K., Rubin, N.M.: Redefinition of the requirements engineer role in Mjølner's software development process. In: Doerr, J., Opdahl, A.L. (eds.) REFSQ 2013. LNCS, vol. 7830, pp. 285–291. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-37422-7_20

Managing Ethical Requirements Elicitation

Errikos Siakas1(B), Harjinder Rahanu2, Joanna Loveday2, Elli Georgiadou2, Kerstin Siakas3,4, and Margaret Ross5

1 National Archaeological Museum, Athens, Greece
[email protected]
2 Middlesex University London, London, UK
{harjinder2,J.Loveday}@mdx.ac.uk
3 International Hellenic University, Thessaloniki, Greece
[email protected]
4 University of Vaasa, Vaasa, Finland
5 Southampton Solent University, Southampton, UK
[email protected]

Abstract. The process of Requirements Elicitation (RE) demands from a software development team the need to communicate and engage with a variety of stakeholders, for numerous purposes regarding many aspects of the project. The aim is to translate the needs of the “customer” into accurate and actionable requirements. In this initial step of the software life cycle process several ethical challenges are invoked, which, if left unresolved, may lead to unintended consequences. Computer Ethics focuses on the questions of right and wrong that arise from the development and deployment of computers. Thus, it urges that the ethical and social impact of computers must be analysed. The purpose of normative ethics is to scrutinise standards about the rightness and wrongness of actions, the goal being the identification of the true human good. A rational appeal can be made to normative ethical principles to arrive at a judicious, ethically justifiable judgement. In software engineering, the Software Process Improvement (SPI) Manifesto was developed by groups of experts in the field, aimed to improve the software produced, through improving the process, the attitudes of software engineers, and the organisational culture and practices. In this position and constructive design research paper, we argue that software developers, in accordance with the SPI Manifesto aim of improving the software produced, address the ethical challenges invoked in the Requirements Elicitation process. The steps taken in this paper are: First we report on the findings of a broad literature review of related research, which refers to the current challenges in RE. Second, we source from ethical theory, generic Deontological and Teleological ethical principles that can serve as normative guidelines for addressing the challenges identified in the initial step. 
Third, we prescribe a set of ethical rights and duties that must be exercised and fulfilled by software developers for them to exhibit ethical behaviour. Each of these suggested actions is substantiated via an appeal to one or several normative guidelines identified in the second step. By identifying and recommending a set of defensible ethical obligations that must be fulfilled in the RE process, software developers can fulfil their ethical duties and thus reduce the number of unintended consequences that plague Requirements Elicitation. Ultimately, RE must be underpinned with ethical consideration.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 258–272, 2023. https://doi.org/10.1007/978-3-031-42307-9_19


Keywords: Computer Ethics · Requirements Elicitation · Ethical Theory · SPI Manifesto

1 Introduction

Requirements gathering is a vital part of any project, but this exercise can easily become a challenging endeavour. Proper consideration needs to be paid to requirements elicitation in a software project, which is viewed as one of the most salient and difficult tasks of the software process. Hofmann and Lehner [1] state that shortcomings in the handling and treatment of requirements are one of the main causes of failure of software projects. Ferreira Martins et al. [2] argue that the negative effects of a misconducted software requirements elicitation are well known: project delays, cancellations and deliveries of incomplete work. The process of Requirements Elicitation (RE) demands from a software development team the need to communicate and engage with a variety of stakeholders, for numerous purposes regarding many aspects of the project. The aim is to translate the needs of the "customer" into accurate and actionable requirements.

1.1 Requirements Elicitation

Sommerville and Sawyer [3] define Requirements Engineering as "the process of discovering, documenting and managing the requirements for a computer-based system". The goal of this set of logically related activities is to produce a set of system requirements which best reflects what the customer actually wants. In requirements engineering, requirements elicitation is the practice of "researching and discovering the requirements of a system from users, customers, and other stakeholders" [4]. Stakeholders have input at various stages of the system lifecycle: typically input at the initial stage where requirements are elicited, input essential during requirements analysis and negotiation, and finally acceptance of the system into deployment. Thus, Ryan [5] argues that careful selection of appropriate stakeholders is fundamental to the success of the project.
It follows that to gather requirements, the first step is to identify the right stakeholders from whom those requirements are to be gathered. Identification and involvement of stakeholders in the RE process require that stakeholders can be identified and that they are willing to participate in the collaborative elicitation and prioritization process. Stakeholders of globally available platforms, such as Facebook and Instagram, with millions of location-independent, heterogeneous end-users out of organizational reach and people affected by the system, may either be unknown or cannot easily be identified for participation in RE activities. Current RE approaches try to deal with such stakeholders through online polls, questionnaires, or pilot studies. Siakas et al. [6] claim that social media and crowdsourcing are particularly useful in RE for involving stakeholders out of organisational reach working in a dynamic context where requirements evolve regularly. Crowdsourcing denotes the act of outsourcing tasks or business activities through an open call to a large group of external people in a self-selected network of undefined individuals or a community (a crowd) that uses different social media for collaboration [7].

Johnson [8] argues that requirements elicitation is the most salient stage of software development, and that a list of incomplete requirements is one of the most common reasons for IT project failure. There exists a plethora of methods and techniques that have been proposed to acquire information for the purposes of elicitation. Based upon a comprehensive systematic literature review, several RE techniques are identified [9], as presented in Fig. 1.

Fig. 1. Techniques to acquire information for the purposes of requirements elicitation.

In terms of selecting a technique from the wealth of methods available, Davis et al. [10] conclude that there is little consensus among experts on how best to elicit information or knowledge. Valusek and Fryback [11] state that acquisition, comprehension and volatility are three categories of problems that affect the correct definition of software requirements. Requirements volatility is the emergence of new requirements and the modification or removal of existing requirements [12]. The reason for requirements volatility is that requirements are not fully known or understood at the beginning of a project. However, new requirements and alterations to requirements can appear during any development phase. This happens because there may be contextual alterations in, e.g., organisational goals and objectives, policies, structures, work roles and environmental changes that directly influence the system requirements. Stakeholders' needs may also mature due to increased knowledge brought on by the development activities. If such alterations are not taken into consideration, the original requirements will become incomplete and inconsistent with the new situation. In addition, requirements are usually determined by individuals who may have conflicting needs and goals. In a global context these individuals might come from different national, organisational and team cultures with different values and preferences. To lessen volatility risks, an iterative process for requirements elicitation is proposed [12].


This paper focuses on problems concerning acquisition, i.e., information or knowledge elicitation.

1.2 Computer Ethics

In Requirements Elicitation, this initial step of the software life cycle process, several ethical challenges are invoked, which, if left unresolved, may lead to unintended consequences. Kallman and Grillo [13] state that it is clearly dangerous to rely solely on law as a moral guideline because in certain circumstances bad laws exist. Inadequate laws may bind rules on society that fail to provide moral guidance. History has presented us with instances of immoral laws, which have excused society from fulfilling certain obligations and duties or allowed a society to justify its unethical behaviour. Ethical judgments simply do not have the same deductive certainty and objectivity as scientific ones. However, moral judgments should be based upon rational moral principles and sound, carefully reasoned arguments. Normative claims are supported by "an appeal to defensible moral principles, which become manifest through rational discourse" [14]. A normative claim can only be substantiated, and a rational discourse presented, through an appeal to such principles. In Sect. 2 the authors identify the current issues concerning the RE process. With regards to the ethical issues raised by Requirements Elicitation, Sect. 3 of this paper presents a list of defensible ethical principles, which are taken from ethical theory. Several heuristics are suggested in Sect. 4, which if followed may lead to ethical guidance concerning RE. These normative claims are substantiated via the citation of one or a few of the ethical principles from Sect. 3. Thus, each heuristic is based upon rational moral and philosophical principles and sound, carefully reasoned arguments.

1.3 SPI Manifesto

Three core values and ten principles constitute the Software Process Improvement (SPI) Manifesto, which serves as an expression of state-of-the-art knowledge on SPI.
In planning an SPI project, the manifesto can be used to better facilitate the necessary corresponding change in the organisation [15–18]. The argument put forward in this paper is that we, as SPI professionals, need to fulfil ethical duties concerning software process assessment in order to improve the quality and productivity of software development processes, and ultimately to produce high-quality software using a productive and efficient team, all of which correlates with the values outlined in the SPI Manifesto. The SPI values state that SPI must involve people actively and affect their daily activities, that SPI is what you do to make business successful, and that SPI is inherently linked with change. These three values can be decomposed into a set of principles, which in turn serve as foundations for action. The notion of improving the software development process, of which Requirements Elicitation is a pivotal phase, is implicitly present in the SPI Manifesto values and principles. Making ethical choices is not a purely deductive exercise like mathematics. Many people may rely on intuition or personal preferences alone. However, computer ethics can provide a more logical and rational approach, whether formal or informal guidelines


or an academic theory are employed. Computer ethics provides rules and principles that, when applied to a case where ethical guidance is required, lead to a higher quality decision than one that relies on intuition or personal preferences alone [14, 40]. The aim of the paper is to identify the ethical issues invoked in the process of Requirements Elicitation (the case) via a literature review. Then, by applying normative ethical principles to this case, the authors aim to produce several heuristics that, if adhered to by software developers, can lead to higher quality decisions than those attained without any conscious reasoning. These ethically defended decisions, i.e., heuristics, have the potential to address the ethical issues which have been identified as concerns in this initial step of the software life cycle process. This paper is divided into six sections. Section 2 describes the motivation of this study in more detail and introduces the respective background. The research methodology used is described in Sect. 3. Section 4 briefly lists the key milestones in the application of the selected methodology, leading on to Sect. 5 where the results of the review are explained. Finally, Sect. 6 discusses the findings and the limitations of the approach taken.

2 Ethical Challenges in Requirements Elicitation: A Literature Review

The International Council on Systems Engineering (INCOSE) outlines several guidelines for writing better requirements, e.g., the Guide to Writing Requirements (GtWR), the Needs and Requirements Manual (NRM), and the Guide to Needs and Requirements (GtNR). INCOSE states that when defining needs and requirements, it is important that they have the characteristics of well-formed needs and requirements, concluding that the underlying analysis from which a need or requirement was derived is as important as how well the need or requirement statement is formed [38]. ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission) work on the development of International Standards covering fields of technical activity including systems and software engineering and the system life cycle processes; for example, ISO/IEC/IEEE 29148:2018 provides a unified treatment of the processes and products involved in engineering requirements throughout the life cycle of systems and software [39]. Yet over 70% of project failures can be attributed to issues in the requirements gathering process [19]. Failure to gather requirements effectively typically leads to scope creep, running out of resources, missed deadlines, budget overruns, and poor project delivery. Hussain et al. [20] argue that requirements engineering is pivotal and central to every successful software development project. They identified several reasons why software projects fail, of which "poorly elicited, documented, validated and managed requirements contribute grossly to software projects failure". Systems development is a "socio-technical endeavour to make use of human and technological resources to achieve a collective task" [21]. Requirements elicitation, seen as a key and vital facet of requirements engineering, is based on multiple viewpoints to define stakeholder needs.
Close interaction between developers, end-users of the system, the customer, and others is needed during requirements gathering. The resulting system's functionality is therefore intimately tied to this human context.


To commence with the elicitation of stakeholder requirements, the first step must be to identify the stakeholders from whom those requirements are to be gathered. The traditional definition of a stakeholder as someone who "has a stake in the project—that is, someone who is affected by the system in some way or can affect the system in some way" is not useful, because it is often difficult to find someone who is not affected by the system. Therefore, a more useful definition of a stakeholder is proposed: "someone who has a right to influence the system" [5]. But the method for selecting stakeholders must not overlook ethical concerns and marginalized social/stakeholder perspectives. The requirements gathered from only one group (level) will likely be biased by the "level of abstraction from which those people conceive the problem, their planning horizon, detailed acquaintance with the application, personal preconceptions, goals, and responsibilities" [22]. Therefore, a true articulation of the requirements can be obtained only by collecting information from all parties concerned. The authors also recognise the problems in fostering understanding among the different communities affected by the development of a given system. Seyff et al. [23] argue that most approaches to requirements elicitation, prioritization and negotiation promote the involvement of success-critical stakeholders, for example, end users of the system. But these approaches are found lacking because they do not sufficiently support non-traditional contexts such as mobile computing, cloud computing or software ecosystems. The authors argue that in these contexts a project requires the involvement of a vast number of "heterogeneous, globally distributed and potentially anonymous stakeholders".
Thus, these approaches to RE rightly involve success-critical stakeholders; however, there is a lack of suitable elicitation techniques, and novel RE approaches and methods are needed to give end users "their own voice". The requirements elicitation process often starts with an interview between a customer and a requirements analyst. It is in these interviews that ambiguities "in the dialogic discourse may reveal the presence of tacit knowledge that needs to be made explicit". The authors thus argue that it is important to understand the nature of ambiguities in interviews and to provide analysts with "cognitive tools to identify and alleviate ambiguities" [24]. Requirements may be ambiguous, inconsistent, or incomplete, making it difficult for engineers to understand what the system should do. Ferrari et al. [25] argue that ambiguity in communication is often perceived as a major impediment to knowledge transfer, resulting in unclear and incomplete requirements documents. The authors define ambiguity as a class of four main sub-phenomena: unclarity, multiple understanding, incorrect disambiguation and correct disambiguation. Ambriola and Gervasi [26] discuss ambiguity in requirements engineering, focusing on natural language (NL) ambiguities in requirements documents (i.e., textual documents). In the RE stage, in addition to explicit communication and close collaboration between the involved stakeholders, contextual, organizational, and cultural factors also need to be taken into consideration to increase the probability of improved accuracy and completeness of the requirements, and ultimately the success of the projects [6].

264

E. Siakas et al.

3 Defensible Ethical Principles

There is a range of ethical theories that have been developed throughout history, and one or a combination of these can be selected. Fundamentally there are two basic approaches to ethics: Teleological theories (which consider the consequences of an action as the measure of goodness) and Deontological theories (which emphasise the rightness of an action above the goodness it produces). Teleological theories give priority to the good over the right, and they evaluate actions by the goal or consequences that they achieve. Thus, correct actions are those that produce the best or optimise the consequences of choices, whereas wrong actions are those that do not contribute to the good. Three examples of the Teleological approach to ethics are Egoism, Utilitarianism and Altruism, see Fig. 2. According to a Deontological framework, actions are essentially right or wrong regardless of the consequences they produce. An ethical action might be deduced from a duty (Pluralism) or a basic human right (Contractarianism), see Fig. 2, but it never depends on its projected outcome [13].

3.1 Deontological Principles

In duty-based ethics (Pluralism) there are seven basic moral duties that are binding on moral agents. In rights-based ethics (Contractarianism) a right can be defined as an entitlement to something. In the field of Information Technology three specific rights are identified: 1. the right to know, 2. the right to privacy, and 3. the right to property [13]. Seven further rights were advocated as rights in a digital world [28]. All ten rights, and the seven basic duties, are identified and listed in Fig. 2.

3.2 Teleological Principles

Three examples of the Teleological approach to ethics are Egoism, Utilitarianism and Altruism [13]. Egoism is grounded in the concept of self-interest, which is used as justification when something is done to further an individual’s own welfare.
The principle of Utilitarianism embodies the notion of operating in the public interest rather than for personal benefit. The Utilitarian principle determines an action to be right if it maximises benefits over costs for all involved, with everyone counting equally. Altruism is invoked when a decision results in benefit for others, even at a cost to some, including the altruist himself/herself. Thus, an action is determined to be right if it maximises the benefits of some, even at a cost to others involved. Kallman and Grillo [13] present a framework for ethical analysis. Amongst a multitude of other details, it lists basic moral principles and theories that can serve as normative guidelines for addressing moral issues: cases where ethical and professional concerns may have been invoked. The Deontological and Teleological principles outlined in Sects. 3.1 and 3.2, above, constitute this framework. In addition, the normative principles of Autonomy, Informed Consent and the Golden Rule are also considered. Because of their simplicity and concreteness, these principles are seen as serving “a more practical and direct way of coming to terms with a moral dilemma” [14]. These Deontological and Teleological normative principles [13, 14, 27, 28] are enumerated in Fig. 2.
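The Utilitarian calculus described above (an action is right if it maximises net benefit across all involved, with everyone counting equally) can be sketched as a simple computation. The options and the (benefit, cost) values below are entirely hypothetical, for illustration only:

```python
# Hypothetical utilitarian scoring: each option lists a (benefit, cost)
# pair per affected stakeholder; everyone counts equally, so the "right"
# option is simply the one with the greatest total net benefit.
def utilitarian_choice(options: dict) -> str:
    def net(impacts):
        return sum(benefit - cost for benefit, cost in impacts)
    return max(options, key=lambda name: net(options[name]))

options = {
    # (benefit, cost) per stakeholder -- illustrative values only
    "collect all usage data": [(5, 0), (1, 4), (1, 4)],
    "collect opt-in data":    [(3, 1), (2, 1), (2, 1)],
}
print(utilitarian_choice(options))  # collect opt-in data
```

Note how the equal weighting of all stakeholders distinguishes this from Egoism (which would score only the decision-maker’s own row) and from Altruism (which would accept a negative total for the decision-maker if others benefit).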

Managing Ethical Requirements Elicitation

265

Fig. 2. Ethical Normative principles sourced from Ethical Theory.

The appropriate and respective normative principles presented above will be applied to the moral dilemmas that are invoked by systems development and deployment by business process engineers, software engineering teams, process improvement managers, and others.

4 Heuristics

Several heuristics are suggested below which, if followed, may lead to ethical requirements elicitation in RE. Each rule of thumb is substantiated by citing one or several of the ethical normative principles listed in Fig. 2, above. Often there is a lack of relevant knowledge or experience in both developers and clients. These heuristics offer ethical instruction in such circumstances.

1. Use of Direct Observation: Antona et al. [29] advocate the use of Direct Observation as a method of understanding and investigating the user experience, usually deployed in the field research of anthropology, ethnography and ethnomethodology. The authors argue that examining users in context can potentially produce a richer understanding of the relationships between “preference, behaviour, problems, and values”. Four basic principles underlie ethnographic methods: Natural settings (the foundation of ethnography is field work, where people are studied in their everyday activities); Holism (people’s behaviours are understood in relation to how they are embedded in the social and historical fabric of everyday life); Descriptive (ethnographers describe what people do, not what they should do; no judgment is involved); and Members’ point of view (ethnographers create an understanding of the world from the point of view of those studied) [30].

• Deontology (Pluralism): Justice
• Deontology (Pluralism): Beneficence
• Deontology (Pluralism): Non-injury
• Deontology (Contractarianism): The right not to be discriminated against
• Deontology (Contractarianism): The right to fair access to, and development of, communication resources
• Principle of Autonomy
• Teleology: Utilitarianism

2. Use of Social Networks and Collaborative Filtering: Mulla [31] argues for methods to identify and prioritize stakeholders and their respective requirements, particularly in large-scale software projects. The author advocates a method for eliciting requirements in large-scale software projects using social networks and collaborative filtering. A social network is a structure comprising actors (individuals, corporate/collective units, etc.) and the relation(s) conferred upon them. Actors are linked to one another via relational or social ties. Ties can be an evaluation of one person by another (friendship, liking or respect); a transfer of material resources (e.g., a business transaction); an association/affiliation (e.g., belonging to the same social group); or a formal relation (e.g., authority). Valued relations are determined by stakeholders assigning values to the ties; over time a well-connected network is achieved, which can be interrogated to identify and prioritize requirements.

• Deontology (Pluralism): Justice
• Deontology (Pluralism): Beneficence
• Deontology (Pluralism): Non-injury
• Deontology (Contractarianism): Political Participation
• Deontology (Contractarianism): Freedom of Expression
• Deontology (Contractarianism): The right not to be discriminated against
• Teleology: Utilitarianism
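As a minimal sketch of interrogating a valued stakeholder network, the snippet below ranks actors by weighted degree centrality (the sum of their tie strengths). The actors and tie values are invented, and the collaborative-filtering part of Mulla’s method is omitted:

```python
from collections import defaultdict

# Hypothetical valued ties: (actor_a, actor_b, strength assigned by stakeholders)
ties = [
    ("analyst", "end_user", 5), ("analyst", "manager", 3),
    ("end_user", "support", 4), ("manager", "sponsor", 2),
]

# Weighted degree centrality: actors with the strongest combined ties
# are candidates to approach first when eliciting requirements.
def rank_stakeholders(ties):
    strength = defaultdict(int)
    for a, b, w in ties:
        strength[a] += w
        strength[b] += w
    return sorted(strength, key=strength.get, reverse=True)

print(rank_stakeholders(ties))
# ['end_user', 'analyst', 'manager', 'support', 'sponsor']
```

The ranking is only a starting point: as the heuristic’s cited principles stress, a purely connectivity-based priority must still be checked against marginalized actors who may be weakly connected precisely because they are marginalized.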

3. Use of Natural Language Processing (NLP): Natural Language Processing is a field of research and application that analyses how, with the help of machines, we can comprehend and manipulate natural language, utilizing numerous computational techniques for the automated analysis and representation of human language [32]. Ambiguity in Natural Language Processing can be reduced using Word Sense Disambiguation; a Part-of-Speech tagger; an HMM (Hidden Markov Model) tagger; and/or a hybrid combination of taggers with machine learning techniques. These examples of cognitive tools can help identify and alleviate the issues of ambiguity that may be found in the textual documents generated in the requirements elicitation process.

• Deontology (Pluralism): Beneficence
• Deontology (Pluralism): Non-injury
• Deontology (Contractarianism): The right to fair access to, and development of, communication resources
• Teleology: Utilitarianism
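The taggers named above require trained models; the core idea of Word Sense Disambiguation can nevertheless be illustrated with a toy, hand-made sense inventory. Words with more than one recorded sense are flagged for clarification with the stakeholder:

```python
# Toy sense inventory (hand-made, purely illustrative): words with more
# than one recorded sense are flagged as needing disambiguation.
SENSES = {
    "bank":  ["financial institution", "edge of a river"],
    "table": ["furniture", "data structure"],
    "user":  ["person operating the system"],
}

def needs_disambiguation(requirement: str) -> dict:
    """Map each known ambiguous word to its candidate senses."""
    words = requirement.lower().replace(".", "").split()
    return {w: SENSES[w] for w in words if len(SENSES.get(w, [])) > 1}

print(needs_disambiguation("The user updates the table."))
# {'table': ['furniture', 'data structure']}
```

A production tool would draw senses from a lexical database and use context (e.g., an HMM tagger or a machine-learned classifier) to pick the intended sense rather than merely flagging the word.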

4. Conduct an Operational Feasibility Study: Stair and Reynolds [33] define an operational feasibility study as the process of determining how a system will be accepted by people (assessing employee resistance to change, gaining managerial support for the system, providing sufficient motivation and training, and rationalising any conflicts with organisational norms and policies) and how well it will meet various system performance expectations (for example, response time for frequent online transactions, the number of concurrent users it must support, reliability, and ease of use). There is an ethical duty to assess the requirements elicited in the context of an operational feasibility study. In the first instance the study should determine how the requirements will be accepted by marginalized social/stakeholder perspectives. This may, for example, imply dialogue between analysts and trade union representatives (expressing employee resistance). These representatives should be entitled to be part of the consultation and elicitation process, because the resulting system that is designed and deployed, including the introduction and adoption of new technology, will affect their members’ working practices.

• Deontology (Pluralism): Beneficence
• Deontology (Pluralism): Non-injury
• Deontology (Contractarianism): Political Participation
• Deontology (Contractarianism): Freedom of Expression
• Deontology (Contractarianism): The right not to be discriminated against
• Teleology: Utilitarianism
• Principle of Informed Consent

5. Harness Social Network Sites (SNS): Seyff et al. [23] report on the efficacy of using a popular social network site to support requirements elicitation, prioritization and negotiation. The use of SNS in this approach allows potential stakeholders to actively participate in the RE activities of projects. Although limitations are reported, the results show that a popular social network site can effectively support distributed RE. The use of SNS is advantageous for coping with short time-to-market periods, when the methods to be used need to be fast, easy and inexpensive. Kengphanphanit and Muechaisri [34] report on an approach to extract requirements automatically from user feedback on social media and classify feedback into requirements and non-requirements using Naïve Bayes machine learning.

• Deontology (Pluralism): Justice
• Deontology (Pluralism): Beneficence
• Deontology (Pluralism): Non-injury
• Deontology (Contractarianism): Political Participation
• Deontology (Contractarianism): Freedom of Expression
• Deontology (Contractarianism): The right not to be discriminated against
• Teleology: Utilitarianism
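A Naïve Bayes classifier of the kind used in [34] can be sketched compactly. The training examples below are invented, and the real pipeline, features and corpus in the cited work differ; this only shows the shape of the technique (multinomial Naïve Bayes with Laplace smoothing and uniform priors):

```python
import math
from collections import Counter

# Toy training data (invented): social-media feedback labelled as
# expressing a requirement or not.
TRAIN = [
    ("add export feature", "requirement"),
    ("should support login", "requirement"),
    ("add dark mode", "requirement"),
    ("love this app", "other"),
    ("great app", "other"),
    ("nice design", "other"),
]

def train(samples):
    counts = {"requirement": Counter(), "other": Counter()}
    for text, label in samples:
        counts[label].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, vocab

def classify(text, counts, vocab):
    # Multinomial Naive Bayes with Laplace smoothing; uniform priors.
    def log_likelihood(label):
        c, total = counts[label], sum(counts[label].values())
        return sum(math.log((c[w] + 1) / (total + len(vocab)))
                   for w in text.split())
    return max(counts, key=log_likelihood)

counts, vocab = train(TRAIN)
print(classify("add csv export", counts, vocab))  # requirement
```

Even this toy model generalises to the unseen word “csv” because the smoothing assigns it a small non-zero probability in both classes, letting the known words “add” and “export” decide.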

6. Ethical User Stories in Agile Software Development: A user story is used to acquire the details of a requirement from an end-user’s point of view in the Agile software development approach [35]. The user story articulates a simple, concise description of a requirement told from the user’s perspective, which specifies i) what type of user you are, ii) what you want, and iii) the reason behind it. The authors of this paper argue that a fourth element be introduced into the structure that permits the end user to specify ethical rights and/or duties that could be exercised via the requirement (Fig. 3):

As a <User Role> I want <Goal> so that <some reason>, and I can exercise <Right/Duty>.

With the proposed introduction of “I can exercise <Right/Duty>” the user story can be a call to conversation spoken in an ethical language. Although a user story is not part of a contractual agreement, it does provide a place for the conversation, capturing the essence of what type of functionality the businesspeople want, why it is needed [36], and its ethical defence. Thus stakeholders, the development team, and the product owner negotiate the details in the context of ethical aspects. All development should be seen through this ethics lens. The British Computer Society (BCS) [37] argues that this is vital if practitioners are to become responsible computing professionals. If an end user is uncertain of the ethical rights/duties which they can exercise, then it is the professional duty of the developer to inform them [37].

• Deontology (Pluralism): Justice
• Deontology (Pluralism): Self-Improvement
• Deontology (Pluralism): Non-injury
• Deontology (Contractarianism): Political Participation
• Deontology (Contractarianism): Freedom of Expression
• Deontology (Contractarianism): The right not to be discriminated against
• Teleology: Utilitarianism
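The four-element story can be represented directly in a tool. The field names and example values below are our own illustration of the proposal, not part of the paper’s template:

```python
from dataclasses import dataclass

@dataclass
class EthicalUserStory:
    """Four-element user story: the fourth field names the ethical
    right or duty the user can exercise via the requirement."""
    user_role: str
    goal: str
    reason: str
    right_or_duty: str

    def render(self) -> str:
        return (f"As a {self.user_role} I want {self.goal} "
                f"so that {self.reason}, "
                f"and I can exercise {self.right_or_duty}.")

# Illustrative instance; the ethical element is drawn from the
# rights listed in Fig. 2 (here, the right to know).
story = EthicalUserStory(
    user_role="job applicant",
    goal="to see why my application was rejected",
    reason="I can correct errors in my data",
    right_or_duty="the right to know",
)
print(story.render())
```

Making the ethical element a mandatory field nudges the conversation the authors call for: a story cannot be written down without naming the right or duty it serves.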

7. Select Success-Critical Stakeholders: In RE a judgement needs to be made as to whether a stakeholder is important enough to be engaged in the process. Stakeholders are important to identify and manage for businesses and/or projects to have a high chance of success, but how can we ensure that we use our limited resources, e.g., time and money, in an efficient way? Mendelow [41] proposes a power-interest grid, see Fig. 4, which considers stakeholder power and expectations, and therefore their likely interest(s). This matrix can be used to determine and present the potential influence of stakeholder groups.

Fig. 4. A Mendelow Power-Interest grid (a 2×2 matrix with Power, low to high, on one axis and Interest, low to high, on the other)

Therefore, in the approach to requirements elicitation, the potential use of the Mendelow power-interest grid can assist in the identification, prioritization and negotiation of success-critical stakeholders.

• Deontology (Pluralism): Justice
• Deontology (Pluralism): Beneficence
• Deontology (Pluralism): Non-injury
• Deontology (Contractarianism): Freedom of Expression
• Teleology: Utilitarianism
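Placing a stakeholder on the grid is a simple two-threshold classification. The quadrant labels used here (“manage closely”, “keep satisfied”, “keep informed”, “monitor”) are the usual textbook glosses of the grid, not quoted from Mendelow [41], and the scores are hypothetical:

```python
def mendelow_quadrant(power: float, interest: float,
                      threshold: float = 0.5) -> str:
    """Place a stakeholder on the power-interest grid (scores in [0, 1])."""
    if power >= threshold:
        return "manage closely" if interest >= threshold else "keep satisfied"
    return "keep informed" if interest >= threshold else "monitor (minimal effort)"

# Hypothetical stakeholder scores
for name, p, i in [("sponsor", 0.9, 0.8), ("regulator", 0.8, 0.2),
                   ("end user", 0.2, 0.9), ("general public", 0.1, 0.1)]:
    print(f"{name}: {mendelow_quadrant(p, i)}")
```

Read ethically, the grid must not become a licence to ignore the low-power/high-interest quadrant: the heuristics above argue precisely that such stakeholders (e.g., affected end users) retain elicitation rights despite their low power score.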

5 Conclusions

The rationale of applying the ethical framework presented in this paper was to identify and defend ethical stances that can be taken regarding requirements elicitation, a crucial stage in Requirements Engineering. In doing so, the authors conclude that the importance of ethical considerations in RE can be brought to the attention of the systems development and software engineering community, and all stakeholders, thus helping to raise the visibility of its ethical use. It can also contribute to project success, avoiding the pitfalls present in failed software development projects that neglect the issues in the RE process. The paper contributes to the current ethical and philosophical discourse relating to requirements elicitation. A set of heuristics for ethical guidance has been proposed, which will raise awareness of the moral issues and help guide analysts and users in the RE process. The development of the set of ethical heuristics presented in this paper is important. There are instances where the relationship between law and ethics breaks down, and the law fails to provide moral guidance. Thus, relying solely on the law for guidance, exclusively fulfilling legal duties, may lead to occasions where an individual fails to accomplish their ethical responsibility. A corresponding legal duty may not exist binding an analyst to undertake the activities proposed in the stated heuristics, but this should not be an obstacle stopping them from fulfilling these ethical obligations. Additional research could include an ethical analysis of each phase of the systems development life cycle (SDLC). Thus, at each stage of the process for planning, analysing, designing, coding, testing, deploying and maintaining information systems,

systems analysts and developers will be conscious of their duty to incorporate ethical considerations into the system’s specification and design. Another area of investigation is to conduct an ethical analysis, using the principles outlined in Fig. 2, of the various elicitation techniques outlined in Fig. 1, and of the circumstances in which they are most effective and efficient. The results of such an analysis may permit the identification of the ethical efficacy of each approach. This may assist RE engineers to identify the most appropriate RE technique(s) to deploy, whilst concurrently maximising the ethical advantages. Future research will involve empirical work based on the observation and measurement of IS development projects to confirm, and substantiate, the efficacy of the heuristics proposed in this paper. A future literature review broader in coverage may flag further ethical challenges in the requirements elicitation process, and thus enable additional heuristics to be identified and presented. The notion of ethical duty needs to be explicitly addressed in the SPI Manifesto. Although ethical duties are implied in the manifesto’s three values and ten respective principles, there needs to be a much more unequivocal statement with regard to how these notions must govern organisational and personal behaviour in relation to Software Process Improvement work. Thus, a fourth value could be appended to the SPI Manifesto: Ethical Duties, with corresponding principles declared and adopted.

References

1. Hofmann, H.F., Lehner, F.: Requirements engineering as a success factor in software projects. IEEE Softw. 18(4), 58–66 (2001)
2. Ferreira Martins, H., et al.: Design thinking: challenges for software requirements elicitation. Information 10, 371 (2019). https://doi.org/10.3390/info10120371
3. Sommerville, I., Sawyer, P.: Requirements Engineering: A Good Practice Guide. Wiley (1997)
4. Rowel, R., Alfeche, K.: Requirements Engineering: A Good Practice Guide. John Wiley and Sons (1997)
5. Ryan, M.J.: The role of stakeholders in requirements elicitation. John Wiley & Sons Inc. (2014). https://onlinelibrary.wiley.com/doi/pdf/10.1002/j.2334-5837.2014.tb03131.x
6. Siakas, E., Rahanu, H., Georgiadou, E., Siakas, K.: Towards reducing communication gaps in multicultural and global requirements elicitation. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 257–277. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_17
7. Siakas, D., Siakas, K.: Value proposition through knowledge sharing and crowdsourcing: engaging users of social networking. J. Media Manag. Entrep. (JMME) 2(1), 124–138 (2020). https://doi.org/10.4018/JMME.2020010108
8. Johnson, J.: CHAOS Report: Decision Latency Theory: It Is All About the Interval. The Standish Group (2018)
9. Alflen, N.C., Prado, E.P., Grotta, A.: A model for evaluating requirements elicitation techniques in software development projects. In: ICEIS, vol. 2, pp. 242–249 (2020)
10. Davis, A., Dieste, O., Hickey, A., Juristo, N., Moreno, A.M.: Effectiveness of requirements elicitation techniques: empirical results derived from a systematic review. In: 14th IEEE International Requirements Engineering Conference (RE 2006), Minneapolis/St. Paul, MN, USA, pp. 179–188 (2006). https://doi.org/10.1109/RE.2006.17

11. Valusek, J.R., Fryback, D.G.: Information requirements determination: obstacles within, among and between participants. In: Galliers, R. (ed.) Information Analysis: Selected Readings, pp. 139–151. Addison Wesley, Reading, MA, USA (1987)
12. Siakas, E., Rahanu, H., Georgiadou, E., Siakas, K.: Requirements volatility in multicultural situational contexts. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) EuroSPI 2022. CCIS, vol. 1646, pp. 633–655. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_45
13. Kallman, E.A., Grillo, J.P.: Ethical Decision Making and Information Technology: An Introduction with Cases. McGraw-Hill, New York (1996)
14. Spinello, R.A.: Ethical Aspects of Information Technology. Prentice-Hall, Englewood Cliffs, NJ (1995)
15. Korsaa, M., et al.: The SPI Manifesto and the ECQA SPI manager certification scheme. J. Softw.: Evolut. Proc. 24(5), 525–540 (2012)
16. Korsaa, M., et al.: The people aspects in modern process improvement management approaches. J. Softw.: Evolut. Proc. 25(4), 381–391 (2013)
17. Messnarz, R., et al.: Social responsibility aspects supporting the success of SPI. J. Softw.: Evolut. Proc. 26(3), 284–294 (2014)
18. Sanchez-Gordon, M.L., Colomo-Palacios, R., Amescua, A.: Towards measuring the impact of the SPI Manifesto: a systematic review. In: Proceedings of the European System and Software Process Improvement and Innovation Conference, pp. 100–110 (2013)
19. Stieglitz, C.: Beginning at the end—requirements gathering lessons from a flowchart junkie. Paper presented at the Project Management Institute Global Congress, Vancouver, British Columbia, Canada. Project Management Institute, Newtown Square, PA (2012). https://www.pmi.org/learning/library/requirements-gathering-lessons-flowchart-junkie-5981 (Accessed 12 April 2023)
20. Hussain, A., Mkpojiogu, E.O.C., Kamal, F.M.: The role of requirements in the success or failure of software projects. Int. Rev. Manag. Mark. 6(7), 306–311 (2016)
21. Raza, S.A.: Managing ethical requirements elicitation of complex socio-technical systems with critical systems thinking: a case of course-timetabling project. Technol. Soc. 66, 101626 (2021). https://doi.org/10.1016/j.techsoc.2021.101626
22. Christel, M., Kang, K.: Issues in Requirements Elicitation. Technical Report CMU/SEI-92-TR-012, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA (1992). http://resources.sei.cmu.edu/library/asset-view.cfm?AssetID=12553 (Accessed 12 April 2023)
23. Seyff, N., Todoran, I., Caluser, K., Singer, L., Glinz, M.: Using popular social network sites to support requirements elicitation, prioritization and negotiation. J. Internet Serv. Appl. 6(1), 1–16 (2015). https://doi.org/10.1186/s13174-015-0021-9
24. Elrakaiby, Y., Ferrari, A., Spoletini, P., Gnesi, S., Nuseibeh, B.: Using argumentation to explain ambiguity in requirements elicitation interviews. In: IEEE 25th International Requirements Engineering Conference (RE 2017), Lisbon, Portugal, pp. 51–60 (2017). https://doi.org/10.1109/RE.2017.27
25. Ferrari, A., Spoletini, P., Gnesi, S.: Ambiguity and tacit knowledge in requirements elicitation interviews. Requirements Eng. 21(3), 333–355 (2016). https://doi.org/10.1007/s00766-016-0249-3
26. Ambriola, V., Gervasi, V.: On the systematic analysis of natural language requirements with Circe. Autom. Softw. Eng. 13, 107–167 (2006). https://doi.org/10.1007/s10515-006-5468-2
27. Ross, W.D.: The Right and the Good. Clarendon Press (1930)
28. Hamelink, C.J.: The Ethics of Cyberspace. Sage Publications Limited (2000)
29. Antona, M., Ntoa, S., Adami, I., Stephanidis, C.: User requirements elicitation for universal access. In: The Universal Access Handbook. Taylor & Francis (2009)

30. Blomberg, J., Burrell, M., Guest, G.: An ethnographic approach to design. In: Jacko, J.A., Sears, A. (eds.) The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications, pp. 964–986. Lawrence Erlbaum Associates, Mahwah, NJ (2002)
31. Mulla, N.: A new approach to requirement elicitation based on stakeholder recommendation and collaborative filtering. Int. J. Softw. Eng. Appl. 3(3) (2012). https://doi.org/10.5121/ijsea.2012.3305
32. Vashistha, S.: Ambiguity in Natural Language Processing (2021). http://blog.ncuindia.edu/2021/02/ambiguity-in-natural-language-processing.html (Accessed 12 April 2023)
33. Stair, R., Reynolds, G.: Fundamentals of Information Systems, 9th edn. Cengage Learning (2017)
34. Kengphanphanit, N., Muechaisri, P.: Automatic requirements elicitation from social media (ARESM). In: Proceedings of the 2020 International Conference on Computer Communication and Information Systems, pp. 57–62 (2020). https://doi.org/10.1145/3418994.3419004
35. Cohn, M.: User Stories Applied: For Agile Software Development. Pearson Education, Boston, MA (2004)
36. Agile Alliance: What does INVEST Stand For? (2023). https://www.agilealliance.org/glossary/invest/ (Accessed 17 May 2023)
37. British Computer Society: BCS Code of Conduct (2023). https://www.bcs.org/membership-and-registrations/become-a-member/bcs-code-of-conduct/ (Accessed 17 May 2023)
38. INCOSE: INCOSE Guide to Writing Requirements V3.1 Summary Sheet (2022). https://www.incose.org/docs/default-source/working-groups/requirements-wg/rwg_products/incose_rwg_gtwr_summary_sheet_2022.pdf?sfvrsn=a95a6fc7_2 (Accessed 17 May 2023)
39. ISO: ISO/IEC/IEEE 29148:2018 Systems and software engineering – Life cycle processes – Requirements engineering (2018). https://www.iso.org/obp/ui/#iso:std:iso-iec-ieee:29148:ed-2:v1:en (Accessed 17 May 2023)
40. Johnson, D.G.: Ethics online. Commun. ACM 40(1), 60–65 (1997)
41. Mendelow, A.L.: Environmental scanning: the impact of the stakeholder concept. In: Proceedings of the Second International Conference on Information Systems, pp. 407–418. Cambridge, MA (1981)

Process Improvement Based on Symptom Analysis

Jan Pries-Heje1(B), Jørn Johansen2, Morten Korsaa2, and Hans Cristian Riis2


1 Department of People and Technology, Roskilde University, Roskilde, Denmark

[email protected]

2 Whitebox, Hørsholm, Denmark

{jj,mk,hc}@whitebox.dk

Abstract. A symptom is a feature which is regarded as indicating a condition or disease. For an organization a symptom may indicate a problematic condition. Based on 600 CMMI maturity assessments we identified 32 common symptoms across the organizations. We developed a web site with a survey instrument asking 44 companies whether they recognized the symptoms. Thus, from this survey we know which symptoms are common and which are rarely perceived to be present. We then analyzed the symptoms using the Cognitive Mapping technique and identified consequences and root causes for each symptom. We also identified relationships between the symptoms and present a map thereof. Further, we mapped the root causes from the cognitive maps to CMMI and its recommendations for improvement. Finally, we discuss whether and how one can use the symptoms to make recommendations for improvements as a kind of “discount improvement model”. We conclude with an example of the intended use.

Keywords: Cognitive map · Process Improvement · Maturity · Improvement · CMMI

1 Introduction

Symptoms are easy to discuss. We do that in all types of diagnosis and analysis, and the authors are curious to find out whether that is possible in a complex area like providing improvement recommendations to product development organizations. The tricky part is the number of steps between a symptom and a root cause. If your bike’s tyre is flat, you know the root cause is a puncture or a defective valve, and once that is established you know what to do. If a project is late, there can be numerous root causes, often even in combination. In 1980 Philip Crosby wrote “Quality is free” [1], in which he pointed at five different “levels” which described how companies were approaching the challenge of producing good quality. In 1994 he elaborated the concepts in “Quality is still free” [2] and simplified his observations into a simple 5×5 “Quality Management Process Maturity Grid” with the five maturity levels on one axis and five different aspects on the other. In the
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 273–286, 2023. https://doi.org/10.1007/978-3-031-42307-9_20

274

J. Pries-Heje et al.

cross-sections there are “symptoms” that help the user identify which maturity level his organization is on from the symptoms he recognizes, and from that it was easy to create recommendations for the organization [2, p. 54]. Crosby’s five levels have been the inspiration for numerous maturity models, including CMMI® [3]. These models have elaborated on Crosby’s concept in specific domains and have produced very thorough and precise models that describe the characteristics of the practices that lead to achieving the desired performance goals in defined process areas. These models have proven their worth over more than 30 years. The models can be used to establish strengths and weaknesses in a work practice and can be applied with different levels of rigour and precision, ranging from a SCAMPI [4] class A appraisal, the most rigorous, designed to meet the requirements for use in a court of law in disputes, to a SCAMPI class C appraisal based on a self-evaluation. Even the lightest type, self-evaluation, still requires significant effort, and the variance of answers to questions that address a practice where the interviewee is not capable represents a problem. And while it is evident that all the practices are important, they are not prioritized, and a weak practice may, under the given circumstances, not be a great risk to the project. This led to our research question: Can we use symptoms to generate recommendations for improvement? What if, based on experience and data from more than 600 assessments, we can make a tool that generates recommendations based on symptoms with a reasonable level of precision and, with that, helps a manager to select from the hundreds of improvement possibilities those few that will increase performance the most? The scope is restricted to the practices included in product and service development as defined by the CMMI® version 1.3 model [5], for which we have data.
This scope has been appropriate for the industry for decades, but it does unfortunately exclude symptoms like “we often change management”, “we have many re-organizations” and the like.

2 Literature

Other people have over time used symptoms for different purposes. In general, a symptom is defined as a feature which is regarded as indicating a condition or disease. For an individual, a symptom may indicate a medical disease for which a medical doctor can then recommend a cure. For an organization, a symptom may indicate a problematic condition, or just a problem that needs a solution or at least an improvement recommendation. In computer science and software engineering symptoms have been used in many ways. Lee et al. [6] used symptoms for identifying software problems. Their approach was based on the assumption that failures due to the same fault often share common symptoms. However, they looked for symptoms within the computer. They write about their approach: “A memory dump captures the processor state at the time of a failure. Given a dump, analysts investigate key failure symptoms such as the software function being executed, the apparent reason for the halt, and the error pattern” [6, p. 321]. Chahal and Singh [7] used symptoms to indicate bad software designs and developed metrics for studying the symptoms. The metrics they applied included characteristics such as abstraction, coupling, cohesion, inheritance, polymorphism, and encapsulation. Lee and Iyer [8] used symptoms as an indication of rediscovered software problems.

Process Improvement Based on Symptom Analysis

275

Macia et al. [9] used symptoms to understand what they call architectural erosion, understood as the difference between the actual, extracted architecture and the intended architecture. More recently, Jia et al. [10] studied symptoms, causes, and repairs of bugs inside a deep learning library. Here they focused not only on the symptoms but also on the root causes behind the symptoms, and found that “root causes are more determinative than symptoms”. This focus on symptoms and what lies behind them is known from the Japanese five-times-why technique, where you focus your problem solving by asking “why” five times in order to move past surface symptoms and determine the root causes underneath. The technique is known from the Six Sigma approach [11] as well as from the Japanese company Toyota [12]. Another approach that focuses on symptoms, what causes them, and what lies underneath was developed by Colin Eden [13], who came up with what he called “Cognitive Mapping”. In this technique the relationship (e.g., this causes that) between two problem constructs is shown in the map as an arrow. Eden distinguishes, however, between an arrow out of a construct, showing a consequence of a symptom (or problem), and an arrow into a construct, showing a cause or an explanation [16, p. 5]. Later Colin Eden, together with Fran Ackermann [14], developed cognitive mapping into a technique that could be used to develop a strategy for a company coping with the forces in its surroundings. Last but not least, Pries-Heje, Johansen and Korsaa [15, 16] have developed cognitive maps to be used for software process improvement.
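An Eden-style map can be represented as a directed graph in which an edge a → b reads “a leads to b”. Root causes then fall out as nodes that cause things but are not themselves caused by anything in the map. The symptom and edges below are invented for illustration:

```python
# Invented edges: (a, b) reads "a leads to b" (Eden-style arrows).
edges = [
    ("no estimation practice", "unrealistic plans"),
    ("unclear requirements", "unrealistic plans"),
    ("unrealistic plans", "project is late"),        # the symptom
    ("project is late", "overtime and stress"),      # a consequence
]

def root_causes(edges):
    """Nodes with outgoing but no incoming arrows: candidate root causes."""
    sources = {a for a, _ in edges}
    targets = {b for _, b in edges}
    return sorted(sources - targets)

print(root_causes(edges))  # ['no estimation practice', 'unclear requirements']
```

This mirrors the five-times-why discipline: starting from the symptom node and walking the incoming arrows downward terminates at exactly these root-cause nodes.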

3 Method
“Can we use symptoms to generate recommendations for improvement?” is the research question we ask. To answer it, we first gathered a list of symptoms. This was done through analysis of notes from the more than 600 assessments that two of the authors of this paper have performed. The result was a list of 32 symptoms. We then needed to verify that the 32 symptoms were not just a product of our imagination and possible bias. To do that, we developed a website with a survey instrument where we asked respondents whether they could recognise a given symptom or not. In short, the outcome was that the respondents did recognise nearly all the symptoms, some being very common and some rarer. The specific outcome of using the survey instrument is presented later in this paper. Next, we developed cognitive maps [13] for each of the symptoms. We relied on the five-times-why technique [11, 12] in that we tried to identify two levels of consequences above each symptom and two levels of causes below it, making five levels in total. In doing that we found that some symptoms ‘merged’ together either in their consequences (two different symptoms ended up having the same consequences) or in their causes (two different symptoms were caused by the same thing). We used these overlapping maps to develop a meta-map of all the symptoms and their relationships. The resulting map is shown and described in more detail later in the paper. Finally, we took the cognitive maps as well as the meta-map and compared the (root) causes with the CMMI model [3, 5] and the recommendations therein. By doing that

276

J. Pries-Heje et al.

we hoped to elicit an answer to the question of what to do if you see a given symptom (in our list). And we believe it worked. We were able to map many of the processes in CMMI to our symptoms through the cognitive maps we had developed. So the answer to our research question was positive: yes, we can. We describe the details of our mapping to CMMI and the relationships found between symptoms and recommendations later in the paper.
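The five-level map structure described in this method (symptom, two levels of consequences above, two levels of causes below) can be sketched as a simple data structure. This is our hypothetical illustration of the idea, not the authors' actual tooling; the class and field names are ours.

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveMap:
    """Five-level map for one symptom: two levels of consequences
    above it and two levels of causes below it."""
    symptom: str
    consequences_l1: list = field(default_factory=list)  # direct consequences
    consequences_l2: list = field(default_factory=list)  # consequences of consequences
    causes_l1: list = field(default_factory=list)        # direct causes
    causes_l2: list = field(default_factory=list)        # root causes

    def root_causes(self):
        # The root causes are what the method later maps to CMMI practices.
        return self.causes_l2

# Example, populated with constructs from symptom #22 discussed later in the paper.
m = CognitiveMap(
    symptom="We don't know our performance on different tasks",
    consequences_l1=["No respect for plans"],
    consequences_l2=["Bad business"],
    causes_l1=["Large variation in employee performance"],
    causes_l2=["Insufficient competencies", "No tool support",
               "Resistance against registration"],
)
print(m.root_causes())
```

Maps of this shape can be compared pairwise: two symptoms whose `causes_l2` or `consequences_l2` overlap are candidates for merging into a meta-map.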

4 A Survey of Symptoms
To determine the prevalence of certain symptoms we developed a website with a survey instrument where we asked respondents whether they could recognise the 32 symptoms, one by one, or not. After removing some incomplete answers, we had 37 respondents – 19 leaders/managers, 14 project managers, and 4 developers – representing approximately 25,000 developers/engineers.

Fig. 1. How respondents rated six of the 32 symptoms

From the 37 respondents, we gathered insights into the perceived relevance of these symptoms from their perspective. Thus, in our survey on the website, each symptom was phrased as a statement so that the relevance of the symptom (not the symptom itself) could be either agreed upon or rejected by the respondent. On a scale from 1 (totally agree) to 5 (totally disagree) the average relevance of the symptoms was 2.7. Although none of the symptoms were deemed irrelevant, some appeared to be perceived as more prevalent in companies than others, as shown in Fig. 1. For instance, symptom number 13 to the left in Fig. 1 was not considered particularly relevant, while others displayed a wider range of opinions, making them relevant further on in the process.
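The relevance scoring just described can be sketched as follows; the individual ratings below are invented for illustration (per-respondent data is not given in the paper), but the scale and the averaging match the description above.

```python
# Each respondent rates a symptom-statement on a 1 (totally agree) ..
# 5 (totally disagree) scale; per-symptom averages are then compared.
def average_relevance(responses):
    """responses: list of Likert ratings, 1..5. Lower means more relevant."""
    return sum(responses) / len(responses)

# Hypothetical ratings for one symptom from ten respondents.
ratings = [1, 2, 2, 3, 3, 4, 2, 1, 5, 4]
print(round(average_relevance(ratings), 1))  # → 2.7
```

Note that because 1 means "totally agree", a *lower* average indicates a symptom perceived as more prevalent.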


At the beginning of the survey, we asked for the respondents’ professional role, scoping for people with either a Leader, Project Leader, or Developer title. When the answers are compared to the respondents’ roles, certain patterns show. In Fig. 2 we see that all 10 respondents who answered “totally agree” to symptom no. 1, “We are not able to account for the time used by individuals on a given activity”, were people with a leadership role. Developers lean more towards the middle or the disagreeing end.

Fig. 2. Responses to symptom no. 1 - We are not able to account for the time used by individuals on a given activity

Fig. 3. Occurrences of totally agree and partially agree for all symptoms.

Looking at the 32 symptoms filtered by what respondents voted most and second most relevant, as we do in Fig. 3, symptoms 13 and 19 stand out as being less relevant than others. While these two will be reviewed in the next version, the remaining 30 symptoms remain relevant. Considering that the survey’s primary goal was to identify any significantly deviant symptoms, it can be deemed successful. This objective, along with the insights gathered about the perceived importance and prevalence of the symptoms, contributes to a better understanding of common challenges in project and improvement work.


5 Symptoms, Causes, and Consequences
Before developing the website we developed cognitive maps [13] for each of the symptoms. We relied on the five-times-why technique [11, 12] in that we tried to identify two levels of consequences above each symptom and two levels of causes below it. We assume that business problems depend on a set of patterns and relationships between symptoms and underlying problems, with symptoms leading to the experienced business problems. An example map is shown in Fig. 4 for the symptom “We don’t know our performance on different (types of) tasks”.

Fig. 4. Example of the five levels for symptom #22

At the bottom of Fig. 4 we have shown our mapping of the underlying problems – also known as root causes – to CMMI [3, 5] processes. By doing that we get the recommendation to focus on Organizational Process Definition (OPD), SG1 (Establish Organizational Process Assets), Establish Standard Processes (SP 1.1), Establish the Organization’s Process Asset Library (SP 1.5) and Establish Work Environment Standards (SP 1.6), and on Project Planning (PP), Provide Resources (GP 2.3). We believe that identifying the most relevant symptoms, followed by extra questions to identify the most relevant and expected problems below and above the symptom, will give an insightful story about the actual situation. When synthesising all the cognitive maps for the symptoms we found that some symptoms ‘merged’ together either in their consequences (two different symptoms ended up having the same consequences) or in their causes (two different symptoms were caused


by the same thing. We used these overlapping maps to develop a meta-map of all the symptoms and their relationships as shown in Fig. 5.

Fig. 5. The meta-map of related symptoms.

After having developed the meta-map, it was possible to sort the different levels of problems and their relation to CMMI practices, and also to check the connections across symptoms.
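The merging step behind the meta-map can be sketched as a small graph computation: two symptoms become linked when their maps share a cause or a consequence. The function and the example entries below are our illustration of the idea, not the authors' tool.

```python
def build_meta_map(maps):
    """maps: dict symptom -> {'causes': set, 'consequences': set}.
    Returns the set of symptom pairs whose maps overlap."""
    links = set()
    symptoms = sorted(maps)
    for i, a in enumerate(symptoms):
        for b in symptoms[i + 1:]:
            shared = ((maps[a]['causes'] & maps[b]['causes']) |
                      (maps[a]['consequences'] & maps[b]['consequences']))
            if shared:
                links.add((a, b))  # symptoms a and b merge in the meta-map
    return links

# Two symptoms sharing the root cause "No tool support" (consequences invented).
maps = {
    "#22 unknown performance": {"causes": {"No tool support"},
                                "consequences": {"Bad business"}},
    "#24 cannot get resources": {"causes": {"No tool support"},
                                 "consequences": {"Delayed projects"}},
}
print(build_meta_map(maps))
```

With all 32 maps as input, the resulting link set corresponds to the relationships drawn in Fig. 5.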

6 Mapping from Root Causes to CMMI
In a CMMI-based assessment, the performed practices are assessed against the model and rated to identify the best and worst performing processes. This input is used to generate process improvements, and it is the assessor’s responsibility to optimize the value of the recommendations based on the given context. This has the potential to make the recommendations more specific to the organisation’s practices, tools, and vocabulary. It may not be the lowest scoring process that will benefit most from process improvement; maybe there is a “low-hanging fruit” that will bring high benefit with little effort. This is a value that the assessor brings to the assessment, and it is not possible in our set-up since no assessor is present. In the following sub-sections we show how we mapped the root causes at the bottom of Fig. 4 to CMMI. The legend we use is that “text in quotes” is from our cognitive maps and text in italics is from CMMI. More tips and tricks can be found in CMMI version 1.3 [5] next to the referred specific practices.


6.1 Symptom 22: “We Don’t Know Our Performance on Different Tasks”
See Fig. 4 for an overview of the root causes we discuss in this sub-section.
Root cause: “Insufficient competencies”. We know that it is causing “Large variation in employee performance”, which in turn causes “we don’t know our performance on different tasks”. This points to the process area Organizational Training: The purpose of Organizational Training (OT) is to develop skills and knowledge of people so they can perform their roles effectively and efficiently. There are two goals to achieve:
• SG1: A training capability, which supports the roles in the organization, is established and maintained. However, since the higher-level problem is a large variation in employee performance, we can assume that training is available and that some employees have taken it, but not all. Hence SG1 is not relevant.
• SG2: Training for individuals to perform their roles effectively is provided. This is the relevant goal. The first practice (SP2.1) seems to hit the target here: Deliver training following the organizational training tactical plan. The second (SP2.2) says Establish and maintain records of organizational training. If this practice were performed, the organization would know that not all project members have had adequate training, which leads to “large variation in employee performance”, which shows as the symptom “we don’t know our performance on different tasks”. The third (SP2.3) says Assess the effectiveness of the organization’s training program and is even more reactive than SP2.2, but if performed, it would address the issue.
We have three practices to consider for the recommendation. We assume that the user scenario for the “Discount Improvement Model” is a project/line manager with an immediate problem. Hence, we prioritize SP2.1 because it is the most proactive, and rephrase it as a recommendation:
Recommendation: Ensure that all relevant employees receive the same training, in the skills required to perform the tasks where performance is unknown, that the top-performing employees have had.
This recommendation is generic by nature, but in context the manager knows what training is available and can assign the remaining employees to it.
Root cause: “Insufficient project management competence”. This looks like the previous root cause, but we know that it is focused on the project management process. We also know which of the two problems above it is relevant:
• “Large variation in employee performance”
• “Insufficient breakdown of tasks”
If the problem is “Large variation in employee performance”, then we must look for Project Management practices that deal with an overview of individual performances.


The obvious process areas are “Project Planning” and “Project Monitoring and Control” for, respectively, planning and following up on the skills required and the performance. The goals we want to achieve are:
• Estimates of project planning parameters are established and maintained (SG1 in Project Planning). The basis is a proper breakdown of the work, which you have when (SP1.1) Establish a top-level work breakdown structure (WBS) to estimate the scope of the project is performed well. The critical point here is then to (SP1.2) Establish and maintain estimates of work product and task attributes, to know what performance to expect from the tasks.
• A project plan is established and maintained as the basis for managing the project (SG2 in Project Planning). The main concern here is to Plan for resources to perform the project (SP2.4) and assign specific resources to the tasks to form a baseline to track against.
• Actual project progress and performance are monitored against the project plan (SG1 in Project Monitoring and Control). This is achieved by Monitor actual values of project planning parameters against the project plan (SP1.1), with focus on the variance of performance for the same task.
Recommendation: For the types of tasks where performance is unknown, be sure to have a thorough breakdown into very specific tasks, estimate them and assign specific resources to them. Then track progress and act when the performance varies more than expected.
If the problem is “Insufficient breakdown of tasks”, then the goal is again that:
• Estimates of project planning parameters are established and maintained (SG1 in Project Planning), and to achieve this: (SP1.1) Establish a top-level work breakdown structure (WBS) to estimate the scope of the project, and do this better than it appears to be done now.
Recommendation: Define the work packages/backlog items in sufficient detail so that the estimates of project tasks, responsibilities, and schedule can be specified (sub-practice 2).
Root cause: “No tool support”. We know that the relevant tool is a project management tool to manage the tasks and the registration of the effort spent. Today this is done either in an agile lifecycle, in a tool like Jira, or in a sequential lifecycle, in a tool like MS Project. Either way, the tool is a resource used to perform the “Project Planning” process. This is addressed in the generic practice (GP2.3) Provide resources.
Recommendation: Establish tool support from a project management tool that can support the management of tasks, assignments, estimates and actuals.
Root cause: “Resistance against registration” (e.g. of time used). This is a Project Monitoring and Control process and is addressed by the generic practice (GP2.1) Establish an Organizational Policy.
Recommendation: Be very clear in the communication of why it is important for the teams and the organization to register.
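The symptom-to-recommendation relationships derived in this sub-section amount to a lookup table. The sketch below is our illustration of that idea (the dictionary layout and abbreviated recommendation texts are ours, condensing the mapping above; the authors' model is a list of relationships, not necessarily code).

```python
# symptom -> root cause -> (CMMI practice, recommendation), abbreviated
# from the mapping derived above for symptom #22.
DIM = {
    "We don't know our performance on different tasks": {
        "Insufficient competencies": (
            "OT SP2.1",
            "Provide the training the top-performing employees have had "
            "to all relevant employees."),
        "No tool support": (
            "PP GP2.3",
            "Establish tool support for tasks, assignments, estimates "
            "and actuals."),
        "Resistance against registration": (
            "PMC GP2.1",
            "Communicate clearly why registration matters to the teams "
            "and the organization."),
    },
}

def recommend(symptom, root_cause):
    practice, advice = DIM[symptom][root_cause]
    return f"[{practice}] {advice}"

print(recommend("We don't know our performance on different tasks",
                "No tool support"))
```

A user who recognises a symptom and one of its root causes gets the matching "discount" recommendation directly, without an assessor.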


6.2 Symptom 24: “We Cannot Get the Competences and Resources Needed for the Project”
In this sub-section we show another example of our mapping between the cognitive maps and the CMMI processes. The example we have taken is symptom no. 24, shown in Fig. 6.

Fig. 6. The cognitive map of symptom 24: “We cannot get the competences and resources needed for the project.”

Root cause: “Insufficient line and department management competence”. Note from Fig. 6 that there are three higher-level problems related to this root cause. If the higher-level problem is “Lack of resource planning”, this points to the generic practice (GP2.3) Provide resources in the Project Planning process area. If the higher-level problem is “Silos are forming”, this points to the goal that (SG2) Coordination and collaboration between the project and relevant stakeholders are conducted in the Integrated Project Management (IPM) process area. If the higher-level problem is “No portfolio management”, it is addressed in the specific practice (SP1.4) Integrate the project plan and the other plans that affect the project to describe the project’s defined process. In all three cases, the recommendation is the same and addresses the need for fact-based management:
Recommendation: Establish a measure of the organizational cost of understaffed projects.
Root cause: “Insufficient project management competence”. Note from Fig. 6 that there are three higher-level problems related to this root cause.


If the higher-level problem is “Lack of resource planning”, this points to the specific practice (SP2.4) Plan for project resources in the Project Planning process area. This falls inside the PM’s responsibility, leading to this recommendation:
Recommendation: Ensure that the project’s resource requirements are very well defined, including the consequences of under-allocations.
If the higher-level problem is “Silos are forming”, this points to the goal that (SG2) Coordination and collaboration between the project and relevant stakeholders are conducted in the Integrated Project Management (IPM) process area. If the higher-level problem is “No portfolio management”, this is addressed in the specific practice (SP1.4) Integrate the project plan and the other plans that affect the project to describe the project’s defined process. These two instances must be solved at project portfolio level, in what is often known as the project management office. The recommendation addresses the quality of the facts available.
Recommendation: Establish a measure of the difference in total project performance in two scenarios: (1) long and thin projects; (2) fat and short projects.
Root cause: “No tool support”. Note from Fig. 6 that there are two higher-level problems related to this root cause. If the higher-level problem is “Lack of resource planning”, this points to the generic practice (GP2.3) Provide resources in the Project Planning process area. This falls inside the PM’s responsibility, so the recommendation addresses this:
Recommendation: Implement a tool to support the allocation of resources to tasks.
If the higher-level problem is “No portfolio management”, this points to the goal that Coordination and collaboration between the project and relevant stakeholders are conducted in the Integrated Project Management (IPM) process area, addressed in the generic practice Provide resources (GP2.3).
Recommendation: Implement a tool that supports project portfolio management.

7 Discussion
We have now established a relationship between symptoms and recommendations. We call the resulting relationship list our “Discount Improvement Model”. Its value will come both from the relevance of the recommendations and from the deselection of the hundreds of improvement possibilities the user does not need to look at. The precision of such a simple list of relationships between symptoms and recommendations for improvement is obviously compromised compared to a recommendation from a trained assessor. Nevertheless, the simple “discount” elicitation of recommendations may have value and utility, especially if we can mitigate the risk of imprecise recommendations. As clearly found in the work on the ImprovAbility [17] model, improvements happen when there is motivation and action. Hence, the user’s prioritization of experienced symptoms means that the recommendation will fall into an area where problems are known. The user has been guided through two levels of analysis, and if the questions were found relevant, then the recommendation will be relevant too, and the user will be motivated to take action because of this relevance.


The recommended improvement action is always one of many possible approaches. This may be confusing, but we believe that any action out of the many will support improvement in relation to the symptom. There can, however, easily be an obviously smarter way to achieve the goal, seen from the user’s perspective, that we will never point to. We can only hope that it is so obvious that the user will take that action as well. For all we know, some action is better than no action, and that is as far as we can validate our list for now. So far, we have only worked with recommendations based on the symptoms and the underlying causes. But if the respondents have marked their role, we may customize the recommendations even further. Some recommendations will be perfectly valid for a project manager, but outside the scope of what a developer can influence. The last thing we will discuss is whether we are missing important areas. In the validation survey, we asked for missing symptoms. This brought forward two suggestions: (1) frequent re-organizations, and (2) distance to decision makers. Both these symptoms sound very relevant and therefore indicate that our list of 32 symptoms is not complete. A strategy to embrace this is to include a feedback loop that will enhance our model over time.

8 Conclusion
We have now provided an answer to our research question: Can we use symptoms to generate recommendations for improvement? In short, the answer is “yes”. By combining common symptoms with the most relevant CMMI processes through cognitive mapping, we can point to recommendations for each of the symptoms. By doing that we have provided what we have coined our “Discount Improvement Model”. Concretely, our model could consist of a list of relationships between commonly found symptoms and relevant recommendations. An example of using such a list is the following: A line manager goes through the list of symptoms and finds no. 22, “We don’t know our performance on different tasks”, to be particularly relevant. The line manager recognises that this is causing “No respect for plans”, which leads to “Bad business”. It is certainly a relevant symptom to address. Among the three underlying causes, the line manager then recognises that the organisation he or she is managing is indeed lacking proper time registration. And from the next level of causes the manager realizes that a time registration system does exist, but there is strong resistance against using it. Hence, our “discount recommendation” is: “Be very clear in the communication of why it is important to the teams and the organization to do the registration.” We believe it makes good sense: when the manager communicates why proper time registration is important, team members will be less resistant, and the registration serves its purpose of establishing data about specific tasks and establishing respect for the plans. With reliable plans it is easier to do good business.

9 Further Work and the Relationship to the SPI Manifesto
We are still working on getting more data, and more people to answer the questionnaire, to improve our Discount Improvement Model. We are also working on improving the maps, the links to CMMI and the recommendations. We believe this work will


implement a simple, usable system to identify the most relevant practices to improve and recommendations on how to improve them. The real benefit will come when the most optimal paths in the maps are identified through a set of questions for each level of problems, implemented in an online system. The development of our “Discount Improvement Model” (DIM) supports the principles in the SPI Manifesto very well:
• People – Know the culture and focus on needs.
o The recommendations from DIM are derived from the user’s needs and will be implemented from inside the same culture.
• People – Motivate all people involved.
o Discussing symptoms establishes the “why” of the change, which is known to be motivating.
• People – Base improvements on experience and measurements.
o The recommendations are derived from the user’s experience.
• Business – Support the organization’s vision and objectives.
o It is the manager’s responsibility to support these.
• Business – Use adaptable and dynamic models as needed.
o DIM utilizes the combination of a stable and rigid CMMI model while being itself highly adaptable.
• Business – Apply risk management.
o The manager’s prioritization of symptoms will be based on his or her own risk assessment.
• Change – Don’t lose focus.
o DIM will help the manager focus on a few actions among the many possibilities.

References
1. Crosby, P.B.: Quality is Free: The Art of Making Quality Certain, vol. 2247. Signet Book (1980)
2. Crosby, P.B.: Quality is Still Free: Making Quality Certain in Uncertain Times. McGraw-Hill Companies (1996)
3. Chrissis, M.B., Konrad, M., Shrum, S.: CMMI for Development: Guidelines for Process Integration and Product Improvement. Pearson Education (2011)
4. SCAMPI Upgrade Team: Standard CMMI Appraisal Method for Process Improvement (SCAMPI) A, Version 1.3: Method Definition Document (2011)
5. CMMI Product Team: CMMI® for Development, Version 1.3. Preface, SEI, CMU (2006)
6. Lee, I., Iyer, R.K., Mehta, A.: Identifying software problems using symptoms. In: Proceedings of the IEEE 24th International Symposium on Fault-Tolerant Computing, 1994. IEEE (1994)
7. Chahal, K.K., Singh, H.: Metrics to study symptoms of bad software designs. ACM SIGSOFT Softw. Eng. Notes 34(1), 1–4 (2009)


8. Lee, I., Iyer, R.K.: Diagnosing rediscovered software problems using symptoms. IEEE Trans. Softw. Eng. 26(2), 113–127 (2000)
9. Macia, I., et al.: On the relevance of code anomalies for identifying architecture degradation symptoms. In: 2012 16th European Conference on Software Maintenance and Reengineering. IEEE (2012)
10. Jia, L., et al.: The symptoms, causes, and repairs of bugs inside a deep learning library. J. Syst. Softw. 177, 110935 (2021)
11. Pyzdek, T., Keller, P.: The Six Sigma Handbook. McGraw-Hill Education (2014)
12. Liker, J.K.: The Toyota Way: 14 Management Principles from the World’s Greatest Manufacturer. McGraw-Hill Education (2021)
13. Eden, C.: Cognitive mapping. Eur. J. Oper. Res. 36(1), 1–13 (1988)
14. Ackerman, F., Eden, C.: Making Strategy. SAGE Publications, London (1998)
15. Pries-Heje, J., Johansen, J., Korsaa, M.: Symptom-based improvement advice: a new relevant-focused problem-based framework. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 139–150. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_10
16. Pries-Heje, J., Johansen, J., Korsaa, M.: Symptom-based improvement recommendations. J. Softw.: Evol. Process, e2375 (2021)
17. Pries-Heje, J., Johansen, J.: ImprovAbility: Success with Process Improvement. Delta (2013)

SPI and Functional Safety and Cybersecurity

The New Cybersecurity Challenges and Demands for Automotive Organisations and Projects - An Insight View Thomas Liedtke1 , Richard Messnarz2(B) , Damjan Ekert2 , and Alexander Much3 1 VECTOR Consulting Services GmbH, Stuttgart, Germany

[email protected] 2 ISCN GesmbH, Graz, Austria [email protected] 3 Elektrobit AG, Erlangen, Germany [email protected]

Abstract. INTACS has developed and rolled out Automotive SPICE® for Cybersecurity assessor training and has developed training materials to prepare assessors to rate processes like SEC.1–SEC.4 and MAN.7 Cybersecurity Risk Management. This requires automotive projects to have a well-structured TARA (Cybersecurity Threat Analysis and Risk Assessment) and a basic understanding of automotive cybersecurity architectural frameworks in order to analyse cybersecurity scenarios and derive cybersecurity controls and requirements. This paper outlines the expectations placed on automotive projects and provides experiences from the first year of training and assessments on the market applying Automotive SPICE® for Cybersecurity. It also gives hints on how to create additional cybersecurity views in the system and software architecture. Keywords: Cybersecurity Assessment · cybersecurity threat and risk analysis · TARA · cybersecurity architectural analysis and vulnerability analysis · Automotive SPICE® rating examples

1 Introduction In June 2020, UNECE (WP.29 /GRVA) announced two new regulations on vehicle type homologation for vehicle manufacturers [51, 54]: • UN Regulation Number 155 (short: UN R 155) Establishment of a Cybersecurity Management System [55, 56] • UN Regulation Number 156 (short: UN R 156) Establishment of a Software Update Management System [57]. For the European Economic Area, the General Safety Regulation [GSR19] specifies two dates for effectiveness: • July 6, 2022 for new vehicle type approvals © The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 289–315, 2023. https://doi.org/10.1007/978-3-031-42307-9_21

290

T. Liedtke et al.

• July 7, 2024 for first registration of vehicles (e.g., as a transition period for vehicles that received type approval prior to July 6, 2022).
The UNECE regulations are marketplace oriented, i.e., they affect all vehicle manufacturers who want to sell vehicles in one of the 58 member states of WP 29, even if they do not develop and produce in a member state themselves. In the case of vehicle type approval, vehicle manufacturers figuratively put their hand in the fire for the entire supply chain in order to best achieve the following four main objectives of the two UNECE regulations mentioned above:

• Managing vehicle cyber risks
• Securing vehicles by design to mitigate risks along the value chain
• Detecting and responding to security incidents across the vehicle fleet
• Providing safe and secure software updates and ensuring vehicle safety is not compromised, introducing a legal basis for so-called “Over-the-Air” (OTA) updates to on-board vehicle software.
The UNECE requirements apply to all phases of the vehicle life cycle, namely:

• Development phase
• Production phase
• Post-production phase
In the distributed development typical of the automotive industry, this leads to corresponding market demands on the suppliers in particular:
• Establishment of a CSMS (e.g., according to ISO/PAS 5112 [47, 52], related to UN R 155 requirements 7.2.x)
• Extended commitments/assurances, especially for the post-development phase, e.g., monitoring of products in the field and the ability to provide SW updates.
ISO/SAE 21434 [53] is used as the “state of the art” basis for developing and establishing a CSMS for suppliers. The interpretation document for UNECE R 155 [54] maps the requirements to corresponding clauses of ISO/SAE 21434. While a CSMS focuses on clauses 5, 8 (partly 12–14) of ISO/SAE 21434 and can be audited, e.g., according to the audit criteria from ISO/PAS 5112 (alternatively ACSMS [50]), the complementary engineering part is covered by Automotive SPICE® for Cybersecurity [3] (see Fig. 1). In Fig. 1, orange boxes are organization-related and green ones are development-related. Automotive SPICE® is based on the assumption that deficiencies in development processes can lead to deficiencies in products [1, 2, 11, 17–19], which is why vehicle manufacturers want to monitor and evaluate the processes used by their suppliers to develop cybersecurity for their products. For this reason, vehicle manufacturers have been conducting cybersecurity-focused assessments of their suppliers for several years. With the release of the Automotive SPICE® for Cybersecurity Process Assessment Model (PAM) [3] at the end of 2021, a standardized assessment model became available on the market. Automotive SPICE® for Cybersecurity [3, 34] captures the TARA (threat analysis and risk assessment) and – if the risk treatment decision is “reduce” – the subsequent

The New Cybersecurity Challenges and Demands for Automotive Organisations

291

Fig. 1. Domains of Cybersecurity Activities in ISO 21434

development activities up to the Start of Production (SOP) (see Fig. 2). The intent is to conform to the ISO/SAE 21434 standard, but in a process-oriented manner. Current Automotive SPICE® assessments now typically include the cybersecurity aspect.

Fig. 2. Overview SEC-PAM model [60]

2 Example Cybersecurity Item Definition
One part of a cybersecurity item definition is a high-level (preliminary) architecture of the system [6, 21, 22, 26, 30] which shows all main external and internal interfaces, functions of the item, important data, and item trust boundaries. Each interface can carry an attack, and each attack can maliciously influence the functions or data of the system.


Other components of an item definition are functions describing the intended behaviour of the item during all lifecycle phases (product development, production, operations and maintenance, and decommissioning). The cybersecurity item also allows grouping the assets of the system. Assets are, for example, interfaces, functions, data, or any kind of system element that has a value for the stakeholders (based on ISO/SAE 21434, the road user, and in Automotive SPICE®, any impacted stakeholder, e.g., OEM, service provider, …). Furthermore, assets can be safety goals, safety states, the software, an ECU itself, the process for flashing software, … [61]. For example, data has a value if, by maliciously changing the data, the impacted functional behaviour leads to a loss of a function, an incorrect behaviour of a function, a hazardous behaviour of a function, etc. An interface has a value if it carries a message which can be maliciously used or changed and leads to the execution of a malicious incorrect command, a denial of service, etc. To better explain the required methods in practice, an example cybersecurity item for an ESCL (Electronic Steering Column Lock) system is used; see Fig. 3 below.

Fig. 3. Cybersecurity Item for the ESCL System


Functions of the item (representing the system) (see Fig. 3):

• On a lock command, an electric motor moves a bolt into the locking position of the steering column.
• On an unlock command, an electric motor moves the bolt into the open position, not locking the steering column.
• The lock and unlock functions are executed only if the vehicle speed is lower than the parking mode speed limit.
• The threshold of the parking mode speed limit can be configured.
• Besides the classical steering lock, the system also includes a digital on/off message from the vehicle control unit. The motor of the steering rack is powered on only if the digital on message has been received.
• Note: steering without EPS (Electric Power Steering) support from standstill is impossible for cars of more than 2 t weight.

Technical elements and (sub-)components of the item (representing the system) (see Fig. 3):

• The locking position is identified by simple contact sensors at the end positions. In case of contact the measured voltage is above a threshold (high), in case of no contact below a threshold (low).
• Two functional cases: (1) open position high and closed position low means open position, or (2) open low and closed high means closed/locked position.
• Invalid case: after the defined number of steps the stepper motor reached neither option (1) nor option (2).
• The ESCL is a function of the steering ECU. In the cybersecurity item it is shown as a separate ESCL function.
• The motor moving the bolt for the steering lock is a 5 V stepper motor. The stepper motor has a set of configuration parameters, e.g. the travel distance from open to closed position (and vice versa) in terms of number of steps.
• An app on the mobile phone allows locking or unlocking the car; it sends the command via the standard Cellular V2X protocol to the telematics ECU, which forwards it through a gateway server of the vehicle over the vehicle bus to the ESCL function.
• The internal bus in the car is based on real-time Ethernet between the telematics ECU and the gateway, and CAN FD between the gateway and the ESCL.

A cybersecurity item allows grouping the assets into A[x] asset groups. This asset grouping is done differently by each supplier. The main benefit is that it allows structuring the later chapters of the TARA. E.g.:

A01 Input commands
A02 Output messages
A03 Mobile phone and apps
A04 OBD (On-Board Diagnosis) connection/internal busses


A05 App and interface to vehicle
A06 Sensors
A07 ESCL function
A08 Data
etc.

There are more such asset groups than shown in the example. Trust boundaries are shown as dotted lines. In Fig. 3, for instance, the mobile phone is not inside the trust boundary and thus not the responsibility of the car maker. In modern vehicles with an automatic gearbox the steering lock system is in many cases replaced by the parking brake. However, due to functional safety, the parking brake function is decomposed into ASIL B(D) for the parking brake itself and ASIL B(D) for the P position of the gearbox (locking position of the gearbox). This means that, depending on the vehicle architecture, the cybersecurity item looks different, see Fig. 4. In such a system the parking brake is a function of the brake ECU, which uses an electric servo motor to move into a locking position by putting pressure on the brake discs via the brake callipers. The servo motor is a 3-phase motor using a Hall sensor and an index sensor. In such a system the function is also decomposed and realised by an independent second channel using the parking lock function of the gearbox. In cars with an automatic gearbox usually a planetary gear system is used, where the gears are switched by electrohydraulic valves.

Fig. 4. Cybersecurity Item for a Parking Brake System
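As an illustration only, the speed-gated lock/unlock behaviour of the ESCL item described above can be sketched in a few lines of Python. All names and the threshold value are our own assumptions for the sketch, not part of ISO/SAE 21434 or any real ECU software.

```python
# Minimal sketch of the ESCL lock/unlock gating described above.
# Illustrative only; a real implementation runs on an AUTOSAR stack.

PARKING_MODE_SPEED_LIMIT_KMH = 5.0  # configurable threshold (assumed value)

def handle_command(command: str, vehicle_speed_kmh: float) -> str:
    """Execute lock/unlock only below the parking-mode speed limit."""
    if command not in ("lock", "unlock"):
        return "ignored"                       # unknown command
    if vehicle_speed_kmh >= PARKING_MODE_SPEED_LIMIT_KMH:
        return "rejected"                      # vehicle too fast: do not move the bolt
    # Move the bolt with the stepper motor into the requested end position.
    return "bolt_locked" if command == "lock" else "bolt_open"

assert handle_command("lock", 0.0) == "bolt_locked"
assert handle_command("lock", 50.0) == "rejected"
```

The point of the sketch is the gating condition itself: it is exactly this kind of check that an attacker tries to bypass, e.g. by spoofing the speed signal, which is why speed appears as an asset in the TARA below.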


3 TARA Explanation and Hints

One of the key points of the recently published cybersecurity standard ISO/SAE 21434:2021 [53] is the performance of a threat analysis and risk assessment (TARA). The risk assessment method (the so-called TARA, described in clause 15) is universally applicable in different phases of the development life cycle (e.g. at the high-level system phase, or at a lower level for an architectural analysis). In a chain of step-by-step identification and analysis activities, it describes how to proceed from asset identification via threat scenario identification, impact rating, attack path analysis, attack feasibility rating, and risk value determination to the risk treatment decision. In the last step, one of four predefined options (see also ISO 31000 [62]) is selected. Once the decision has been made to reduce the risk, cybersecurity goals, cybersecurity controls, and cybersecurity requirements have to be selected and specified. The clauses (requirements) of ISO/SAE 21434 mainly describe what to do, but not exactly how to do it. Automotive SPICE® for Cybersecurity describes cybersecurity risk management in process MAN.7. The mapping of the seven steps from ISO/SAE 21434 to the Automotive SPICE® MAN.7 base practices can be found in Fig. 5.

Fig. 5. Mapping of ISO/SAE 21434:2021–15 and Automotive SPICE® MAN.7

The risk assessment method according to ISO/SAE 21434 is carried out in seven steps. The steps can be performed in a different order; in our example we will perform step 3 before step 2 for efficiency reasons. For some steps, exemplary possibilities for implementation are given. However, many options remain open; experience will have to prove the best application of the methods in the future. For each step, it is possible to consult further guidance or useful sources of information. For our item definition (see Chap. 2) we present the step-wise approach.

3.1 Asset Identification and Impact Rating

The initial start is the enumeration of assets. As shown in Chap. 2, different groups of assets can be enumerated. For all cybersecurity-relevant assets, to perform the next


steps, at least one cybersecurity property has to be assigned. As a minimum, the selection can be made from confidentiality, integrity, and availability; further cybersecurity properties such as authenticity, non-repudiation, or authorization can be used as well. The last part of this initial step is to identify the adverse consequence (damage scenario) for the road user if the asset is not sufficiently protected and the cybersecurity property is compromised (see Table 1 below). The cybersecurity properties in Table 1 were taken from the STRIDE [48] definition.

Table 1. Identification of damage scenario and impact rating

The impact rating is performed according to the minimum required set of four impact categories: safety (S), financial (F), operational (O), and privacy (P). For each impact rating one of four levels has to be determined: severe, major, moderate, or negligible (rating tables are defined in the ISO/SAE 21434 annex). The selection should be justified for better understandability, verifiability, and maintainability. The rating for safety must be performed by a safety expert, looking up the S value stated in the HARA for the hazard attributable to the damage scenario. Hazards and damage scenarios belong together: the severity of a hazard from the functional safety point of view can be the same as that of the related damage scenario from the cybersecurity point of view; at the end of the day, the adverse impact (level) for the road user is the same.

3.2 Threat Scenario Identification

Knowledge about the asset to be protected and the adverse damage that can result from compromising a cybersecurity property is used to identify threat scenarios. In Table 4, MS STRIDE [48, 63] is referenced as a systematic approach to identify threat scenarios directly from the cybersecurity property. In our example we have to evaluate threats which can compromise the assigned cybersecurity properties authenticity, integrity, non-repudiation, and availability (confidentiality and authorization were determined to be not applicable).
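The STRIDE-to-property link used here can be written down as a simple lookup. The mapping below is the standard Microsoft STRIDE definition; the small helper function around it is our own illustration.

```python
# Standard STRIDE threat categories mapped to the cybersecurity property
# each one violates (Microsoft STRIDE definition).
STRIDE = {
    "Spoofing":               "Authenticity",
    "Tampering":              "Integrity",
    "Repudiation":            "Non-repudiation",
    "Information disclosure": "Confidentiality",
    "Denial of service":      "Availability",
    "Elevation of privilege": "Authorization",
}

def threats_for(properties: set[str]) -> list[str]:
    """Return the STRIDE threats that can compromise the given properties."""
    return [threat for threat, prop in STRIDE.items() if prop in properties]

# In the running example, confidentiality and authorization were ruled out:
relevant = threats_for({"Authenticity", "Integrity", "Non-repudiation", "Availability"})
assert "Information disclosure" not in relevant
assert "Spoofing" in relevant
```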


Functions are complex assets to evaluate when conducting a TARA. In terms of architecture, a function typically "runs" over several interfaces and components and thus offers a large attack surface (see Chap. 2) with multiple possibilities to violate cybersecurity properties and thereby compromise assets. As can be seen in our example, the first three identified damage scenarios are identical (and thus also their impact on the road user). However, completely different threat scenarios can lead to identical damage scenarios. When searching for threat scenarios, one often finds that the description of the function as an asset is not sufficiently detailed. For example, authorization conditions and quality of service are often specified in additional requirements or conditions. E.g., not all safety mechanisms may be known when determining the threat scenarios, which makes it hard to decide at that point whether a "valid" command is sufficient to actually execute the function (see Table 2).

Table 2. Threat scenario and attack path analysis

3.3 Attack Path Analysis and Attack Feasibility Rating

To determine the attack feasibility, the attack-potential-based approach described in ISO/SAE 21434 is used [58, 59]. This method rates five attributes:

• Elapsed time: from less than one day to more than six months
• Specialist expertise: from layman (ordinary person) to multiple experts (multiple highly experienced engineers with expertise in different fields)
• Knowledge of the item or component: from public to strictly confidential
• Window of opportunity: the dimension of the possibility of access and the necessary time, from unlimited (remote without precondition) to difficult
• Equipment: from standard up to bespoke equipment (e.g. for decapping an IC or cracking a cryptographic key)


After determining the five attributes individually, the attack feasibility value can be determined, leading to a very low, low, medium, or high attack feasibility.

3.4 Risk Value Determination and Risk Treatment Decision

The last step of the threat analysis and risk assessment is to determine the risk value. According to the ISO/SAE 21434 standard, the value must be between 1 and 5. Applying the matrix provided in the (informative) annex yields the combined table shown in Table 3.

Table 3. Risk value determination
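The two lookups above can be sketched as follows. Note that the attribute point values, the feasibility thresholds, and the risk matrix in this sketch are our own assumptions chosen for illustration; the corresponding ISO/SAE 21434 annex tables are informative and are usually tailored per organisation.

```python
# Illustrative sketch of the attack-potential-based feasibility rating and a
# risk matrix lookup. All numeric values are assumptions, not the standard's.

ATTRIBUTES = ("elapsed_time", "expertise", "knowledge", "window", "equipment")

def attack_feasibility(points: dict[str, int]) -> str:
    """Sum the five attribute point values; a higher attack potential
    (more effort needed by the attacker) means LOWER feasibility."""
    total = sum(points[a] for a in ATTRIBUTES)
    if total >= 25:
        return "very low"
    if total >= 20:
        return "low"
    if total >= 14:
        return "medium"
    return "high"

IMPACT = ["negligible", "moderate", "major", "severe"]
FEASIBILITY = ["very low", "low", "medium", "high"]

def risk_value(impact: str, feasibility: str) -> int:
    """Risk value between 1 and 5 (illustrative matrix, capped at 5)."""
    return min(1 + IMPACT.index(impact) + FEASIBILITY.index(feasibility), 5)

assert risk_value("negligible", "very low") == 1
assert risk_value("severe", "high") == 5
```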

3.5 General Remarks/Hints

Our experience has led to various findings regarding possible supporting measures. The following part highlights some of them:

• If the threat analysis and risk assessment is performed in a component-oriented manner to enable the use of STRIDE for threat scenario identification and threat modeling, at least an initial data flow diagram (DFD) containing the key entities has to be created.
• An essential aspect of the item definition is choosing the right level of abstraction, as it is very easy to get lost in unnecessary details.
• The level of abstraction is crucial: if the goal is to identify high-level threats, we are only interested in the entire data flow, not in each individual CAN signal or message.
• Be cautious in applying pre-filtering mechanisms. Significant risks may be overlooked if assumptions and conclusions are flawed.
• Discard low-impact (e.g., low severity) damage scenarios.


• Discard threats/attack paths with low attack feasibility.
• Possibly focus on remote attacks.
• Take care with assumptions (you have to specify claims for them and keep them current).
• It is good to consider UNECE Annex 5 (a mandatory prerequisite for UNECE homologation) for identifying threats and cybersecurity controls, but do not rely solely on it, as it is grossly incomplete.
• It is crucial to perform the threat analysis and risk assessment as a team activity. Necessary stakeholders: system architects, software and hardware experts, testers, skilled attackers, cybersecurity and safety experts. But:
• Multiple stakeholder involvement costs effort.
• Often one or a few people in a single group (e.g., system engineering) create TARAs.
• Involvement of testers and skilled attackers in the development phase is rare due to missing availability.
• Modeling/assessing is useful for the design; it cannot guarantee a secure implementation.
• A TARA helps as an analysis method to identify open points, but often it rather confirms the controls that one has already planned to implement.
• A TARA does not prescribe any iteration regarding the evaluation of the effectiveness of the intended cybersecurity controls. This makes it difficult to assess the acceptability of the residual risk.
• For the evaluation of the risk value, existing measures for possible risk reduction (e.g., safety measures that can make attacks more difficult) are not taken into account.

A complete risk assessment in the sense that all possible types of assets are considered requires huge effort. Our experience shows that this is a long-term, project-based activity that needs to be carefully planned in advance. Approaches/parameters to be considered when evaluating efficiency:

• Divide the risk assessment into different risk assessments and start with a high-level component-based risk assessment. Component-based risk assessments are easier to perform. The reason for this is that cybersecurity assets and their compromise are much easier to isolate and assess. Cybersecurity properties can be more easily understood and attacks on them considered. Also, the impact of an attack is easier to describe and evaluate. The attack surface is limited to the attack surface of the component. The difficulty with this approach remains to then map the component properties to the criticality of the functions (which must ultimately be secured).
• Another approach is to start with the functions of an item/component as the set of assets and, based on the functions that can be attacked, to perform a function-oriented risk assessment. Ultimately, functions are the most important assets, but they are not easy to analyze. Functions typically span multiple interfaces through multiple components, creating a large attack surface. The number of attacks can increase, as


can the ease with which an attack can be carried out. Nevertheless, this is a frequently observed phenomenon.
• Assets that are always difficult to analyze are, for example, safety goals, safety mechanisms and their combination, as well as complex functions that have a large attack surface. Our experience in the implementation of TARAs shows that they are usually not considered at the beginning and are gradually added when further information (a refined architecture) of the project becomes available. During the implementation and analysis of functions and components, other relevant processes to be investigated often come to light. These must be analyzed as additional assets. In our TARAs for SW analysis, for example, the question of the secure flash data update process always came up (a function in the SW plus a process of the organization). Cybersecurity controls allocated to organizational measures should already be known at the organizational level and be correspondingly easy to address. Good knowledge of one's own cybersecurity management system greatly facilitates the analysis.
• Break down the risk assessment into different levels (system, software, hardware). At the beginning of the project, not enough details are known to consider all relevant information needed to identify all cybersecurity risks at all levels.

Another dimension that needs to be thought through is the types of attacks to be considered. Here, in case of doubt, attack chains and trees of any length can be set up. Focusing on attacks with perceived easier feasibility compared to others, or on the increased attractiveness of a potential damage (compromised asset) from a potential attacker's perspective, can reduce the complexity of the risk assessment:

• Focusing exclusively on remote attack types leads to higher risk scores when the attack vector approach is used to determine attack feasibility.
• The situation is different for remote attacks without code insertion/execution.
• Define how social engineering attacks can be accounted for. These types of attacks are difficult to classify; countermeasures are often defined via organizational rules in the cybersecurity management system rather than simply "implemented" down the road.
• Tailor the attributes used to assess attack feasibility to your needs:
• some of the attributes proposed by the standard are difficult to assess and cannot be evaluated unambiguously;
• some attributes frequently used for practical reasons are currently not considered (especially attacker motivation, potential benefits, attack scalability, … (experience from the ISO discussion on ISO/IEC 18045 [58])).
• Use attack trees for evaluating the feasibility of attack paths with special cases carefully, otherwise you will get bogged down in minutiae that are not helpful.
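To make the attack-tree hint concrete, one common aggregation convention (our choice for this sketch, not something the standard prescribes) is: an OR node is as feasible as its easiest child, while an AND node, whose children must all succeed, is only as feasible as its hardest child.

```python
# Illustrative attack-tree feasibility aggregation.
# Convention assumed here: OR = most feasible child, AND = least feasible child.

FEAS = {"very low": 0, "low": 1, "medium": 2, "high": 3}
INV = {v: k for k, v in FEAS.items()}

def combine(node_type: str, children: list[str]) -> str:
    """Aggregate child feasibility ratings at an OR or AND node."""
    scores = [FEAS[c] for c in children]
    return INV[max(scores) if node_type == "OR" else min(scores)]

# Remote entry OR physical entry, then key extraction AND code injection:
entry = combine("OR", ["high", "very low"])     # attacker picks the easy path
assert entry == "high"
assert combine("AND", [entry, "low"]) == "low"  # chain limited by hardest step
```

Exactly this "hardest step dominates" effect is why long attack chains rarely add information: one infeasible step caps the whole path, which supports the hint above not to expand trees indefinitely.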


4 Cybersecurity Design and Requirements at System Level

Cybersecurity design [20, 21, 26, 33, 44] requires additional views and methods to derive cybersecurity controls and cybersecurity requirements based on an architectural vulnerability analysis. An architectural vulnerability analysis at system level draws attack vectors on the system architecture, marks the interfaces which can be used for an attack, shows the assets which can be maliciously changed (data, functions, etc.), and, with the so-called threat modelling, also documents the threats and the counteractions (cybersecurity controls) to be implemented. For threat modelling, different state-of-the-art notations are allowed. In this paper we use attack vector modelling [21, 33, 35, 53], which can also be drawn as a security-critical signal or function path. A vulnerability analysis at the system architecture level is usually based on a number of cybersecurity analysis principles, e.g.:

(1) Every interface can carry an attack. This means that the first step is to identify and mark all interfaces of electronics and software and to analyse in detail the communication protocol on these interfaces. Mark those interfaces which can carry data or a command that can lead to a malicious attack on assets as cybersecurity critical.
(2) The assets need to be highlighted in the cybersecurity design view/threat model. This means that an architect works together with the cybersecurity experts and highlights in the architecture the assets which were identified in the asset list of the TARA. Moreover, the architects might identify further data, functions, or interfaces which need to be added to the TARA.
(3) An attack is modelled as a chain of events (attack scenario/attack path). The attacker might first use an interface to enter the system, then influence a function malignantly (e.g. via tampering of data), which then leads to adverse consequences for the functional behaviour. In more comprehensive attacks the attacker might get access to a vehicle through an interface via a gateway or back-end server and launch a number of further attacks from there. The idea of threat modelling, attack vector modelling, etc. is to highlight this flow.
(4) Along the attack path/vector, different counteractions (called cybersecurity controls) are designed. One attack path/vector can contain a number of cybersecurity controls. The cybersecurity controls depend on the number of functions offered by the CSM (Crypto Service Manager) module, which is interfaced with an HSM (Hardware Security Module).
(5) A system and software architect has to analyse the HSM and the CSM to know which pool of cybersecurity control functions/mechanisms is available in the product to be used in the cybersecurity design modelling.
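Principles (3) and (4) can be captured in a minimal data model: an attack path is a chain of steps over interfaces towards an asset, and cybersecurity controls are placed along that path. All class and field names below are our own illustration, not notation from ISO/SAE 21434.

```python
# Minimal data model of an attack path with controls placed along it.
from dataclasses import dataclass, field

@dataclass
class AttackStep:
    interface: str                  # entry point carrying the attack
    effect: str                     # e.g. "spoof lock request"
    controls: list[str] = field(default_factory=list)  # counteractions here

@dataclass
class AttackPath:
    asset: str
    violated_property: str
    steps: list[AttackStep]

    def uncontrolled_steps(self) -> list[AttackStep]:
        """Steps with no cybersecurity control assigned yet."""
        return [s for s in self.steps if not s.controls]

path = AttackPath(
    asset="ESCL lock function",
    violated_property="Authenticity",
    steps=[
        AttackStep("Vehicle bus (CAN FD)", "spoof lock request", ["SecOC MAC"]),
        AttackStep("RTE signal layer", "tamper speed signal"),
    ],
)
assert [s.effect for s in path.uncontrolled_steps()] == ["tamper speed signal"]
```

A check like `uncontrolled_steps()` mirrors what the architect and cybersecurity engineer do manually in the design review: walk each path and confirm that every step an attacker must pass is covered by at least one control.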


To explain the approach, we use the example in Fig. 3. The cybersecurity item in Fig. 3 allows a number of attack vectors. In ISO/SAE 21434 all attack vectors (see below) relate to an attack that compromises a specific asset by violating a specific cybersecurity property assigned to that asset.

• Attack Vector: Malicious OBD Access
  • Malicious Tampering of Speed Limit
  • Malicious Tampering of Index Sensor Data (e.g. offset)
  • Malicious Tampering of Motor Zero Position
  • Etc.
• Attack Vector: Malicious Command over Vehicle Bus
  • Malicious Spoofing of Lock Request
  • Malicious Spoofing of Unlock Request
  • Malicious Spoofing/Tampering of Vehicle Speed
  • Etc.
• Attack Vector: Malicious Access to JTAG
  • Malicious Tampering/Change of Software
  • Malicious Tampering/Change of Data
  • Etc.

In Fig. 6 you can see the example attack vector for "Malicious Command over Vehicle Bus". Table 4 shows the example of STRIDE [48, 63] as a typical mindset for the cooperation of system and software architects with cybersecurity engineers: it links threats with cybersecurity properties, which enables the selection of cybersecurity controls to mitigate cybersecurity risks. Using Fig. 6 and Table 4, cybersecurity controls are selected along the attack vector, and cybersecurity requirements can be derived and specified to establish these controls. An example of deriving cybersecurity requirements from cybersecurity controls as counteractions is shown in Tables 5 and 6. Tables 5 and 6 also show that the derived cybersecurity requirements are linkable to the related cybersecurity goals specified during the conduct of the TARA. Cybersecurity goals have to be achieved by implementing CS controls (to be shown via validation). CS controls are established by the implementation of CS requirements, which have to be fulfilled (to be shown via verification).


Fig. 6. Attack Vector Malicious Command over Vehicle Bus

Table 4. Cybersecurity Property - Attack/STRIDE Term – Cybersecurity Control

5 Cybersecurity Design and Requirements at Software Level

Also at SW level, cybersecurity design requires additional views and methods to derive cybersecurity controls and cybersecurity requirements based on an architectural vulnerability analysis. An analysis at the software architecture level to identify vulnerabilities is usually based on a number of cybersecurity analysis principles, e.g.:

(1) SW is usually based on a (base software) cybersecurity stack, and the options which the stack offers highly depend on the security architecture of the underlying processor. Available cybersecurity control functions/options will differ. This means


Table 5. Architecture Step 1: From cybersecurity properties to attack and CS controls

Table 6. Architecture Step 2: Converting cybersecurity controls to requirements for development

that every SW architect, together with the cybersecurity engineer, first checks what the underlying HSM and base SW are offering.
(2) If a given cybersecurity stack (base SW components) is integrated, the SW architect and the cybersecurity engineer ensure that the security-manual-based integration steps and tests have been followed. In the case of Vector, for example, this requires AUTOSAR greater than 4.3, the DaVinci tool set, and the use of the recommended integration and checking tool set.
(3) In SW-architecture-based SW threat modelling, the most common strategy is to analyse the state machine of the SW and to regard each state as a specific session of the software; each session must run in secure mode. As a consequence, the minimum number of threat models at SW level is one per SW session/state plus an integrated model which monitors the transitions between the states.
(4) Along the threat models per session/state, different counteractions (called cybersecurity controls) are selected. One SW threat model can lead to a number of cybersecurity controls.
(5) Cybersecurity controls to be implemented in software are derived into cybersecurity software requirements, which are forwarded to software development.

Cybersecurity controls are provided by the CSM module, e.g. from AUTOSAR (see the public CSM specification, Fig. 7).


Fig. 7. Example Services of the CSM

To explain the approach, we use the example of the software session/state lock/unlock. Figure 8 shows the session concept before the cybersecurity analysis. Figure 9 shows the same session after adding the analysis of threats and the assignment of cybersecurity properties to be achieved. And Fig. 10 shows the session after adding the counteractions in the form of cybersecurity controls using the CSM functionality.

Explanations for Fig. 8:
1. Based on the speed (lock and unlock require the speed) and the lock/unlock request, the CAN module COMM_IN reads the speed and the lock/unlock request every 20 ms and
2. updates the RTE/signal layer. The SW state machine in state Lock/Unlock
3. calls the SW module Basic Power Lock Motor,
4. which actuates the motor and
5. moves the bolt into the locking position.

Fig. 8. Simplified functional flow of the lock/unlock session

Figure 10 shows the following integrated cybersecurity controls; they are derived along the attack path of the threat scenario.

1. The code block of the SW in the ROM is signed with a key (ensuring no unauthenticated code change has happened).


Fig. 9. Simplified functional flow of the lock/unlock session after adding the threats and cybersecurity properties

Fig. 10. Simplified functional flow of the lock/unlock session after adding the cybersecurity controls

2. Authenticity and integrity of the messages lock/unlock request and speed are protected by SecOC (Secure Onboard Communication) and a MAC (Message Authentication Code).
3. The SecOC library uses the CSM MacVerify function to check the message authenticity.
4. On the RTE, a fault flag is set if MacVerify fails.
5. The speed parameter is secured by a hash (data integrity).
6./7. Before a lock/unlock command is interpreted further (whitelisting strategy of functions), a validation session drive mode function checks the state of the system (speed authentication was ok, lock/unlock request message authentication is ok, etc.) and then calls the Basic Power Lock Motor function of the electronic motor control unit.

Once the cybersecurity controls are selected and allocated, the cybersecurity requirements are derived, see Table 7.

Terms used:


Table 7. Deriving SW Requirements from Cybersecurity Controls

CAN: A Controller Area Network (CAN bus) is a robust vehicle bus standard designed to allow microcontrollers and devices to communicate with each other's applications without a host computer. It is a message-based protocol, designed originally for multiplex electrical wiring within automobiles to save on copper, but it can also be used in many other contexts. For each device, the data in a frame is transmitted serially, but in such a way that if more than one device transmits at the same time, the highest-priority device can continue while the others back off. Frames are received by all devices, including the transmitting device. Source: https://en.wikipedia.org/wiki/CAN_bus. Note: there are further protocols used in car communication, such as FlexRay, CAN FD, and real-time Ethernet.

COMM_IN: In the base software (lower operating-system-layer communication software) there is usually a SW module (here named COMM_IN as an example) which is configured to read the incoming messages (including functions like checksum checking, filtering, etc.) and to write the signals of a message (e.g. one message on the bus can contain the speed, another message the command) to an interface layer called the RTE (Run-Time Environment).

RTE: Run-Time Environment is a term from AUTOSAR and is based on a standard architecture concept for ECU (Electronic Control Unit) operating systems. It is the main communication interface between the lower layer and the application layer of the software, and also between the components of the software. E.g. the speed signal is written to the RTE and can be read from the RTE. AUTOSAR separates interface types: a sender/receiver interface carries data (e.g. a speed value, a close request command) which can be written to and read from the RTE, while a client/server interface offers a service function which can be called or provided. This way one module can call a function/port of another module.
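The SecOC-style message check described in steps 2-4 above can be illustrated with a generic MAC sketch. This is NOT the AUTOSAR SecOC or CSM API; the key handling, the truncation length, and the freshness scheme below are our own assumptions to show the principle (authenticity plus replay protection before a command is acted upon).

```python
# Generic sketch of a SecOC-style message check: verify a truncated MAC
# (here HMAC-SHA256) plus a monotonic freshness counter.
import hashlib
import hmac
import struct

KEY = b"\x00" * 16   # placeholder key; in a real ECU keys live in the HSM
TRUNC = 8            # transmitted MAC length in bytes (assumption)

def mac(payload: bytes, freshness: int) -> bytes:
    """Compute the truncated MAC over payload plus freshness counter."""
    msg = payload + struct.pack(">Q", freshness)
    return hmac.new(KEY, msg, hashlib.sha256).digest()[:TRUNC]

def verify(payload: bytes, freshness: int, received_mac: bytes,
           last_freshness: int) -> bool:
    """Accept only messages that are both authentic AND fresh."""
    if freshness <= last_freshness:      # replay protection
        return False
    return hmac.compare_digest(mac(payload, freshness), received_mac)

frame = b"LOCK_REQUEST"
tag = mac(frame, freshness=42)
assert verify(frame, 42, tag, last_freshness=41)       # authentic and fresh
assert not verify(frame, 42, tag, last_freshness=42)   # replayed counter
assert not verify(b"UNLOCK_REQUEST", 43, tag, 41)      # tampered payload
```

The sketch shows why step 4 sets a fault flag on the RTE rather than silently dropping the frame: the application layer must be able to distinguish "no command received" from "a command was received but failed authentication".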

6 Automotive SPICE® for Cybersecurity: Experiences so Far and Outlook

Automotive SPICE® assessments in the past had already been extended with the VDA Guidelines, with safety questions by the working group SOQRATES, and with discussions about how to apply Automotive SPICE® in the case of cybersecurity [1,


2, 7, 13, 14, 25, 27, 28, 29, 36, 46, 49]. Automotive SPICE® for Cybersecurity [3] was published in 2021 and extended ASPICE 3.1 with additional cybersecurity-related processes [34, 63] (see Fig. 2):

MAN.7 Cybersecurity Risk Management (relates to the TARA, ISO/SAE 21434 clause 15)
SEC.1 Cybersecurity Requirements Elicitation (see the system and software requirements examples in Tables 6 and 7 in this paper)
SEC.2 Cybersecurity Implementation (see the system and software design examples in this paper)
SEC.3 Cybersecurity Verification
SEC.4 Cybersecurity Validation
ACQ.2 Supplier Request and Selection
ACQ.4 Supplier Monitoring

For each of the processes the base practices are rated N/P/L/F, and the ISO 33020:2015 scale for the rating of levels 2 to 3 is applied as well (see Fig. 11, Fig. 12, and Fig. 13, which show an example from a typical assessment result).
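The N/P/L/F scale can be expressed as a lookup on the achievement percentage of a practice or process attribute. The percentage bands below follow the ISO/IEC 33020 rating scale; the function itself is our own sketch.

```python
# ISO/IEC 33020 N/P/L/F rating scale as a lookup on achievement percentage.
def rating(achievement_pct: float) -> str:
    if not 0.0 <= achievement_pct <= 100.0:
        raise ValueError("achievement must be between 0 and 100 %")
    if achievement_pct <= 15.0:
        return "N"   # not achieved
    if achievement_pct <= 50.0:
        return "P"   # partially achieved
    if achievement_pct <= 85.0:
        return "L"   # largely achieved
    return "F"       # fully achieved

assert [rating(p) for p in (10, 40, 70, 95)] == ["N", "P", "L", "F"]
```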

Fig. 11. ASPICE for Cybersecurity Assessment


Fig. 12. ASPICE for Cybersecurity Process Attribute Profile

Fig. 13. ASPICE for Cybersecurity Process Capability Level Profile

When performing assessments in 2022/2023 a number of lessons were learned; some important issues are highlighted below.

If the cybersecurity assessment was organised as an add-on assessment (after the Automotive SPICE® 3.1 assessment), this usually led to a re-consolidation of a number of Automotive SPICE® processes as well:

• E.g. if you find that not all cybersecurity work products are maintained in the configuration item list, this influences the SUP.8 rating.
• E.g. if you find that cybersecurity-related problems cannot be filtered and tracked and incident reporting is not considered, this influences the SUP.9 rating.
• E.g. if you find that cybersecurity roles and plans are not properly considered, this impacts not only MAN.7 but also MAN.3: MAN.3 requires a project handbook with roles assigned, and if the cybersecurity roles are missing, the project handbook in general misses required evidence.
• E.g. if you find that cybersecurity requirements are not properly analysed, this should impact the SWE.1 and SYS.2 ratings as well. And so forth.

This led to the situation that usually, in a cybersecurity add-on assessment, the processes MAN.3 and SUP.1-SUP.10 are revisited for just the cybersecurity-related content, while the SEC.1-SEC.4 and MAN.7 processes are fully interviewed. Another observation is that in no assessment so far was ACQ.2 selected. The reason is that ACQ.2 is usually the responsibility of the acquisition department and takes place before a project officially starts.


A typical wrong interpretation by suppliers is the following: the customer provided the entire cybersecurity stack and the project just needed to integrate it, so no separate security requirements specification or analysis of its own was necessary. Assessors then rate Not or Partially achieved, because customers do not guarantee that their analysis is complete. They expect the supplier to do its own TARA, add further cybersecurity requirements, and cover all of them (the customer's and its own).

A typical wrong interpretation by suppliers is the following: the project does not separate in the metrics between cybersecurity and non-cybersecurity, so cybersecurity coverage cannot be seen separately. Then traceability cannot be rated high, because in cybersecurity the goal is to check the coverage of the cybersecurity case. Only if 100% is already achieved might this work.

A typical wrong interpretation by suppliers is the following: the project shows a system and software architecture in which the cybersecurity-critical elements and interfaces are marked, but no threat model or attack vector is defined. Is that sufficient to rate the SEC.2 base practices Fully? The answer is no. Suppliers are expected to perform an architectural vulnerability analysis, and that requires drawing attack trees, attack vectors, and threat models.

A typical wrong interpretation by assessors is the following: in the ASPICE for Cybersecurity assessor training, SecOC (Secure Onboard Communication) has been explained, so all projects must implement it; or the training explains that keys are stored in a secure HSM (Hardware Security Module), so all projects have to store them this way. The answer here is no, because there are less critical systems where the customer and the supplier have agreed on tailored cybersecurity concepts.
In this case the assessor must check what the TARA at vehicle level delivered and whether the concept has been tailored. A typical wrong interpretation by manufacturers is the following: SEC.4 is still the verification of cybersecurity requirements. No, it is the validation of cybersecurity goals. In the best case, the product together with the cybersecurity goals is given to a penetration test team that does not know the internal details and simulates a real hacker. Since the model is relatively new, all assessors and projects are still learning how to apply the rating in a consistent way; we plan to collect more such experiences in the cybersecurity workshops at EuroSPI over the next years. In future, the networking of vehicles and the integration of vehicles into an IT infrastructure with AI and big data will continue. This is also shown by studies of the EU Blueprint project DRIVES [8, 9, 41–43]. The integration of cybersecurity concepts into vehicles and the surrounding infrastructure therefore started some time ago, has now become a must, and will grow over the next years [4–6, 10, 12, 23, 24, 31, 32, 37, 38, 45]. The challenge will be how to combine different domains within feasible assessments with realistic time frames. Projects do not want to spend all their time on assessments if they have to deal with functional safety assessments, cybersecurity assessments, Automotive SPICE® assessments, CSMS audits, technical revisions, add-on/plug-in assessments, etc. Another challenge will be to interpret the result of a passed Automotive SPICE® for Cybersecurity assessment. Assessors and projects should be careful not to be lulled into

The New Cybersecurity Challenges and Demands for Automotive Organisations

311

a false sense of security, for instance regarding the homologation of systems and vehicles: on the one side the assessment focuses on processes, while on the other side, at the end, products are developed.

Acknowledgements. We are grateful to the INTACS working group for cybersecurity which developed the Automotive SPICE® for Cybersecurity training. The author Liedtke is lead of that group, and the authors Messnarz, Ekert and Much are members of that group and contributed to this paper. We are grateful to the EU-funded Erasmus+ project Grant Agreement No. 101087552 – FLAMENCO, which supports the implementation of new skills in the automotive industry for ISCN in this paper. In these cases the publications reflect the views only of the author(s), and the Commission cannot be held responsible for any use which may be made of the information contained therein. We are grateful to the working party of automotive suppliers SOQRATES [39] (https://soqrates.eurospi.net) who provided inputs for cybersecurity best practices.
This includes: Dallinger Martin (ZF), Dorociak Rafal (HELLA), Dreves Rainer (Continental), Ekert Damjan (ISCN), Forster Martin (ZKW), Gasch Andreas (Cariad), Geipel Thomas (Robert BOSCH GmbH), Grave Rudolf (Tasking), Griessnig Gerhard (AVL), Gruber Andreas (CERTX), Habel Stephan (Continental), Karner Christoph (KTM), Kinalzyk Dietmar (AVL), König Frank (ZF), Kotselidis Christos (Pierer Innovation), Kurz-Griessnig Brigitte (Magna ECS), Lindermuth Peter (Magna Powertrain), Macher Georg (TU Graz), Mandic Irenka (Magna Powertrain), Mayer Ralf (BOSCH Engineering), Messnarz Richard (ISCN), Much Alexander (Elektrobit AG), Nikolov Borislav (MSG Plaut), Oehler Couso Daniel (Magna Powertrain), Pernpeintner Michael (Schäffler), Riel Andreas (Grenoble INP, ISCN Group), Rieß Armin (BBraun), Santer Christian (AVL), Shaaban Abdelkader (AIT), Schlager Christian (Magna ECS), Schmittner Christoph (AIT), Sebron Walter (MSG Plaut), Sechser Bernhard (Process Fellows), Sporer Harald (Infineon), Stahl Florian (AVL), Wachter Stefan, Walker Alastair (MSG Plaut), Wegner Thomas (ZF), Geyer Dirk (AVL), Dobaj Jürgen (TU Graz), Wagner Hans (MSG Systems), Aust Detlev, Zurheide Frank (KTM), Suhas Konanur (ENX), Erik Wilhelm (Kyburz), Noha Moselhy (VALEO), Jakub Stolfa (VSB TUO), Michael Wunder (Hofer Powertrain), Svatopluk Stolfa (VSB TUO).

Relationship with the SPI Manifesto. A platform where such new cross-cutting approaches can be discussed is EuroAsiaSPI². Its mission is to develop an experience and knowledge exchange platform for Europe where Software Process Improvement (SPI) practices can be discussed and exchanged, and knowledge can be gathered and shared [5, 15, 16, 38, 39, 40, 64]. The connected SPI Manifesto defines the values and principles required for the most efficient SPI work. The principle "Use dynamic and adaptable models as needed" means that cybersecurity norms and views in future need to be integrated into the existing processes.

References

1. Automotive SPICE® 3.1, Process Assessment Model, VDA QMC Working Group 13/Automotive SIG (2017)
2. Automotive SPICE® Guidelines, 2nd edn., Nov 2017, VDA QMC Working Group 13 (2017)
3. Automotive SPICE for Cybersecurity, 1st edn., Feb 2021, VDA QMC Working Group 13 (2021)


4. Armengaud, E., et al.: Development framework for longitudinal automated driving functions with off-board information integration (2019). arXiv preprint arXiv:1906.10009
5. Biró, M., Messnarz, R.: Key success factors for business based improvement. In: Proceedings of the EuroSPI 1999 Conference, Pori School of Technology and Economics. Ser. A., Pori, vol. 25 (1999)
6. Dobaj, J., Macher, G., Ekert, D., Riel, A., Messnarz, R.: Towards a security-driven automotive development lifecycle. J. Softw. Evol. Process (2021). https://doi.org/10.1002/smr.2407
7. Ekert, D., Messnarz, R., Norimatsu, S., Zehetner, T., Aschbacher, L.: Experience with the performance of online distributed assessments – using advanced infrastructure. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 629–638. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_47
8. EU Blueprint Project DRIVES. https://www.project-drives.eu/. Accessed 6 Apr 2021
9. European Sector Skill Council: Report. EU Skill Council Automotive Industry (2013)
10. Feuer, E., Messnarz, R., Sanchez, N.: Best practices in e-commerce: strategies, skills, and processes. In: Smith, B.S., Chiozza, E. (eds.) Proceedings of the E2002 Conference, E-Business and E-Work, Novel Solutions for a Global Networked Economy. IOS Press, Amsterdam (2002)
11. Höhn, H., Sechser, B., Dussa-Zieger, K., Messnarz, R., Hindel, B.: Software Engineering nach Automotive SPICE: Entwicklungsprozesse in der Praxis – Ein Continental-Projekt auf dem Weg zu Level 3. dpunkt.verlag (2015)
12. Innerwinkler, P., et al.: TrustVehicle – improved trustworthiness and weather-independence of conditionally automated vehicles in mixed traffic scenarios. In: International Forum on Advanced Microsystems for Automotive Applications, pp. 75–89 (2018)
13. ISO – International Organization for Standardization: ISO 26262 Road vehicles – Functional safety, Parts 1–10 (2011)
14. ISO – International Organization for Standardization: ISO CD 26262:2018, 2nd edn., Road vehicles – Functional safety (2018)
15. Korsaa, M., et al.: The SPI Manifesto and the ECQA SPI manager certification scheme. J. Softw. Evol. Process 24(5), 525–540 (2012)
16. Korsaa, M., et al.: The people aspects in modern process improvement management approaches. J. Softw. Evol. Process 25(4), 381–391 (2013)
17. Christian, K., Messnarz, R., Riel, A., et al.: The AQUA automotive sector skills alliance: best practice in an integrated engineering approach. Softw. Qual. Prof. 17(3), 35–45 (2015)
18. Kreiner, C.J., et al.: Integrating functional safety, automotive SPICE and six sigma – the AQUA knowledge base and integration examples. In: Systems, Software and Services Process Improvement, 21st European Conference, EuroSPI 2014, pp. 285–295 (2014)
19. Kreiner, C.J., et al.: Automotive knowledge alliance AQUA – integrating automotive SPICE, six sigma, and functional safety. In: Systems, Software and Services Process Improvement, 20th European Conference, EuroSPI 2013, Dundalk, Ireland, 25–27 June 2013, Proceedings, pp. 333–344 (2013)
20. Macher, G., Sporer, H., Brenner, E., Kreiner, C.: Supporting cyber-security based on hardware-software interface definition. In: Kreiner, C., O'Connor, R.V., Poth, A., Messnarz, R. (eds.) Systems, Software and Services Process Improvement: 23rd European Conference, EuroSPI 2016, Graz, Austria, September 14–16, 2016, Proceedings, pp. 148–159. Springer International Publishing, Cham (2016). https://doi.org/10.1007/978-3-319-44817-6_12
21. Macher, G., Messnarz, R., Kreiner, C., et al.: Integrated safety and security development in the automotive domain. Working Group 17AE-0252/2017-01-1661. SAE International (2017)
22. Macher, G., Much, A., Riel, A., Messnarz, R., Kreiner, C.: Automotive SPICE, safety and cybersecurity integration. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017.

LNCS, vol. 10489, pp. 273–285. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_23
23. Macher, G., Diwold, K., Veledar, O., Armengaud, E., Römer, K.: The quest for infrastructures and engineering methods enabling highly dynamic autonomous systems. In: Walker, A., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 15–27. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_2
24. Macher, G., Druml, N., Veledar, O., Reckenzaun, J.: Safety and security aspects of fail-operational urban surround perceptION (FUSION). In: Papadopoulos, Y., Aslansefat, K., Katsaros, P., Bozzano, M. (eds.) IMBSA 2019. LNCS, vol. 11842, pp. 286–300. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32872-6_19
25. Messnarz, R., et al.: Integrated automotive SPICE and safety assessments. Softw. Process: Improv. Pract. 14(5), 279–288 (2009). https://doi.org/10.1002/spip.429
26. Messnarz, R., Kreiner, C., Riel, A.: Integrating automotive SPICE, functional safety, and cybersecurity concepts: a cybersecurity layer model. Softw. Qual. Prof. 18(4), 13 (2016)
27. Messnarz, R., König, F., Bachmann, V.O.: Experiences with trial assessments combining automotive SPICE and functional safety standards. In: Winkler, D., O'Connor, R.V., Messnarz, R. (eds.) Systems, Software and Services Process Improvement, pp. 266–275. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-31199-4_23
28. Messnarz, R., Ekert, D., Zehetner, T., Aschbacher, L.: Experiences with ASPICE 3.1 and the VDA automotive SPICE guidelines – using advanced assessment systems. In: Walker, A., O'Connor, R.V., Messnarz, R. (eds.) Systems, Software and Services Process Improvement: 26th European Conference, EuroSPI 2019, Edinburgh, UK, September 18–20, 2019, Proceedings, pp. 549–562. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_42
29. Messnarz, R., Ekert, D.: Assessment-based learning systems – learning from best projects. Softw. Process Improv. Pract. 12(6), 569–577 (2007). https://doi.org/10.1002/spip.347
30. Messnarz, R., Much, A., Kreiner, C., Biro, M., Gorner, J.: Need for the continuous evolution of systems engineering practices for modern vehicle engineering. In: Stolfa, J., Stolfa, S., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2017. CCIS, vol. 748, pp. 439–452. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64218-5_36
31. Messnarz, R., Macher, G., Stolfa, J., Stolfa, S.: Highly autonomous vehicle (system) design patterns – achieving fail operational and high level of safety and security. In: Walker, A., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 465–477. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_36
32. Messnarz, R., et al.: Automotive cybersecurity engineering job roles and best practices – developed for the EU blueprint project DRIVES. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 499–510. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_37
33. Messnarz, R., Colomo-Palacios, R., Macher, G., Riel, A., Biro, M.: Recent advances in cybersecurity and safety architectures in automotive, IT, and connected services. J. Univ. Comput. Sci. (2021). https://lib.jucs.org/article/72072/
34. Messnarz, R., et al.: First experiences with the automotive SPICE for cybersecurity assessment model. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 531–547. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_35
35. SAE J3061, Cybersecurity Guidebook for Cyber-Physical Vehicle Systems. SAE – Society of Automotive Engineers, USA (2016)
36. Schlager, C., Messnarz, R., Sporer, H., Riess, A., Mayer, R., Bernhardt, S.: Hardware SPICE extension for automotive SPICE 3.1. In: Larrucea, X., Santamaria, I., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2018. CCIS, vol. 896, pp. 480–491. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-97925-0_41


37. Schmittner, C., et al.: Innovation and transformation in a digital world – 27th Interdisciplinary Information Management Talks. Trauner Verlag Universität 2019, 401–409 (2019)
38. Schmittner, C., Macher, G.: Automotive cybersecurity standards – relation and overview. In: Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F. (eds.) Computer Safety, Reliability, and Security: SAFECOMP 2019 Workshops, ASSURE, DECSoS, SASSUR, STRIVE, and WAISE, Turku, Finland, September 10, 2019, Proceedings, pp. 153–165. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-26250-1_12
39. SOQRATES, Task Forces Developing Integration of Automotive SPICE, ISO 26262 and SAE J3061 and ISO/SAE 21434. http://soqrates.eurospi.net/
40. SPI Manifesto. http://2018.eurospi.net/index.php/manifesto. Accessed 2 Apr 2019
41. Stolfa, J., et al.: Automotive quality universities – AQUA alliance extension to higher education. In: Kreiner, C., O'Connor, R.V., Poth, A., Messnarz, R. (eds.) Systems, Software and Services Process Improvement: 23rd European Conference, EuroSPI 2016, Graz, Austria, September 14–16, 2016, Proceedings, pp. 176–187. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44817-6_14
42. Stolfa, J., et al.: Automotive engineering skills and job roles of the future? In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 352–369. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_26
43. Stolfa, J., et al.: DRIVES – EU blueprint project for the automotive sector – a literature review of drivers of change in automotive industry. J. Softw. Evol. Process 32(3), 2222 (2020)
44. Stolfa, J., et al.: Automotive cybersecurity manager and engineer skills needs and pilot course implementation. In: Systems, Software and Services Process Improvement, 28th European Conference, EuroSPI 2021, Krems, Austria, 1–3 September 2021, Proceedings. CCIS, vol. 1442, pp. 335–348. Springer, Heidelberg (2021). https://doi.org/10.1007/978-3-031-15559-8_24
45. Veledar, O., Damjanovic-Behrendt, V., Macher, G.: Digital twins for dependability improvement of autonomous driving. In: Walker, A., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 415–426. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_32
46. Wegner, T., et al.: Enough assessment guidance, it's time for improvement – a proposal for extending the VDA guidelines. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 462–476. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_34
47. Automotive Cybersecurity Management System Audit Guideline, 1st edn. VDA-QMC (2020)
48. The STRIDE Threat Model. Microsoft
49. Messnarz, R., Ekert, D., Zehetner, T., Aschbacher, L.: Experiences with ASPICE 3.1 and the VDA automotive SPICE guidelines – using advanced assessment systems. In: Walker, A., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 549–562. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_42
50. Automotive Cybersecurity Management System Audit. Quality Management in the Automotive Industry, 1st edn. (2020). https://webshop.vda.de/QMC/de/acsms-de_2020 and https://webshop.vda.de/QMC/de/acsms-eng_2020
51. Regulation (EU) 2019/2144 of the European Parliament and of the Council. Official Journal of the EU (2019). Accessed 16 Dec 2019
52. Road vehicles – Guidelines for auditing cybersecurity engineering, 2022-03. ISO/PAS 5112
53. Road vehicles – Cybersecurity engineering, 2021-08. ISO/SAE 21434
54. UN Regulations on Cybersecurity and Software Updates to pave the way for mass roll out of connected vehicles (2020). https://unece.org/press/un-regulations-cybersecurity-and-software-updates-pave-way-mass-roll-out-connected-vehicles


55. Proposal for the Interpretation Document for UN Regulation No. 155 on uniform provisions concerning the approval of vehicles with regards to cyber security and cyber security management system (2020). https://unece.org/fileadmin/DAM/trans/doc/2020/wp29/WP29-18205e.pdf
56. Uniform provisions concerning the approval of vehicles with regards to cyber security and cyber security management system (2021). https://unece.org/sites/default/files/2021-03/R155e.pdf
57. Uniform provisions concerning the approval of vehicles with regards to software update and software updates management system (2021). https://unece.org/sites/default/files/2021-03/R156e.pdf
58. Information technology – Security techniques – Methodology for IT security evaluation. ISO/IEC 18045:2008(E)
59. E-safety vehicle intrusion protected applications. https://www.evita-project.org/, https://www.evita-project.org/deliverables.html
60. Expert Review of SEC-PAM – Briefing. VDA/QMC (2020)
61. ENISA good practices for security of smart cars (2019). https://www.enisa.europa.eu/publications/smart-cars
62. Risk management – Guidelines. DIN ISO 31000 (2018)
63. Messnarz, R., Ekert, D., Macher, G., Stolfa, S., Stolfa, J., Much, A.: Automotive SPICE for cybersecurity – MAN.7 cybersecurity risk management and TARA. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) Systems, Software and Services Process Improvement: 29th European Conference, EuroSPI 2022, Salzburg, Austria, August 31 – September 2, 2022, Proceedings, pp. 319–334. Springer International Publishing, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_23
64. Aschbacher, L., Messnarz, R., Ekert, D., Zehetner, T., Schönegger, J., Macher, G.: Improving organisations by digital transformation strategies – case study EuroSPI. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) Systems, Software and Services Process Improvement: 29th European Conference, EuroSPI 2022, Salzburg, Austria, August 31 – September 2, 2022, Proceedings, pp. 736–749. Springer International Publishing, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_51

An Open Software-Based Framework for Automotive Cybersecurity Testing

Thomas Faschang(B) and Georg Macher

Graz University of Technology, 8010 Graz, Austria
{faschang,georg.macher}@tugraz.at

Abstract. With the rise of cyberattacks in the last years, cybersecurity is of high importance in the context of the automotive domain [10, 22]. As current cars are more connected and reliant on embedded system technologies, the need for security engineering has tremendously accelerated. While ISO/SAE 21434 is available as a security engineering standard for the domain, frameworks and tools for cybersecurity training and testing of concepts are scarce. Automotive cybersecurity testbeds provide a specified and controlled environment for testing, evaluating, and learning cybersecurity solutions for vehicles, allowing researchers and engineers to be trained and upskill faster. Therefore, this work focuses on an embedded automotive systems framework for cybersecurity testing. The presented framework simulates a CAN controller network and allows researchers and engineers to test attack vectors and mitigation methods in a simulated environment, providing also basic implementations for the most common attack types. The presented framework is extendable for training and testing purposes with series controllers and real-world demonstrators. Keywords: Automotive Cybersecurity · Software Security Testbed · Controller Area Network

1 Introduction

The automotive industry has always been a synonym for innovative solutions and concepts [11]. This also implies that the automotive domain is undergoing continuous changes and ongoing adaptations. One of these changes was the tremendous impact of cybersecurity engineering on all levels of the supply chain. The business values related to cybersecurity engineering demonstrate the increasing importance and awareness in the automotive domain, as well as the need for continued investment in and development of effective cybersecurity solutions and trainings for the industry [12, 19]. In order to achieve safe, automated, and interacting vehicles, cybersecurity needs to be improved [7], since evaluations and disclosures have revealed multiple vulnerabilities in almost all connected elements of current vehicles [1, 16, 20]. Cybersecurity's increasing importance is evident from the growing number of regulations and standards being developed to address different types of related issues.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 316–328, 2023. https://doi.org/10.1007/978-3-031-42307-9_22


The most prominent example is the ISO/SAE 21434 [8], a standard developed specifically for cybersecurity engineering in the automotive industry. Since its publication, researchers have made various contributions toward its implementation in the industry [2, 3]. The standard provides a framework for implementing cybersecurity throughout the development lifecycle of automotive products, but it provides no standardized procedure for cybersecurity validation and verification. It thereby triggers the need for accessible testing and training frameworks [17]. As modern vehicles are not completely developed from scratch, the need for secure and reliable automotive embedded systems has become increasingly important. As such, the in-vehicle network known as the Controller Area Network (CAN) is still widely in use in automotive applications for communication between Electronic Control Units (ECUs). Although the original CAN protocol was not designed for security-related applications and lacks security feature support, enhancements of the protocol and adaptations of the embedded system architectures keep it applicable, yet still vulnerable to various cyber-attacks. Therefore, testing of implemented concepts and training of the most important cybersecurity mitigation approaches in a realistic automotive environment is of high importance. To address these topics, this paper presents an open and flexible CAN-based embedded automotive system cybersecurity framework called PENNE. The framework provides a comprehensive solution to enhance the cybersecurity training of engineers and the evaluation of in-vehicle network concepts to detect and prevent cyber-attacks on the CAN bus.
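For reference, the classic CAN frame that such a framework exchanges over socketcan has a fixed 16-byte binary layout: a 32-bit identifier, an 8-bit data length code, padding, and up to 8 data bytes. A stdlib-only sketch of packing and unpacking this layout (our illustration of the Linux `struct can_frame` wire format, not code taken from the framework):

```python
import struct

# Layout of the classic Linux socketcan frame (struct can_frame):
# 32-bit CAN ID, 8-bit data length code, 3 padding bytes, 8 data bytes.
CAN_FRAME_FMT = "<IB3x8s"

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack ID and payload into the 16-byte socketcan frame layout."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

def unpack_can_frame(frame: bytes):
    """Return (can_id, payload), trimming the padded data field to the DLC."""
    can_id, dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, data[:dlc]

frame = pack_can_frame(0x123, b"\x01\x02")
assert len(frame) == 16
assert unpack_can_frame(frame) == (0x123, b"\x01\x02")
```

The 8-byte payload limit visible here is also what makes adding security metadata (counters, authentication tags) to classic CAN messages so constrained.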
The proposed framework consists of three main components: (a) an open ECU architecture that implements basic vehicle functionalities and communication, (b) an intrusion detection and gateway concept implementation accompanied by an authenticated encryption scheme for CAN messages, and (c) demonstrative implementations of the most common cybersecurity attacks on CAN bus, for testing and evaluation. The proposed framework is implemented based on open simulated ECUs and CAN network communication to evaluate its performance in detecting and preventing various types of cyber-attacks, such as denial-of-service, spoofing, and replay attacks. The rest of the paper is organized as follows: Sect. 2 provides an overview of related work, Sect. 3 describes the proposed framework in detail, Sect. 4 presents the experimental setup and results, and finally, Sect. 5 concludes the paper.
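The authenticated encryption scheme of component (b) is not detailed at this point; one common pattern for authenticating CAN payloads within the 8-byte data field (similar in spirit to AUTOSAR SecOC) is a truncated MAC plus a freshness counter. A sketch under those assumptions — the key, tag length, and message layout below are illustrative, not PENNE's actual scheme:

```python
import hashlib
import hmac

KEY = b"demo-shared-key"   # hypothetical pre-shared key between ECUs
TAG_LEN = 4                # truncated MAC; leaves 4 payload bytes per frame

def protect(can_id: int, counter: int, payload: bytes) -> bytes:
    """Append a truncated HMAC over (ID, counter, payload) to the payload."""
    assert len(payload) <= 8 - TAG_LEN
    msg = can_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def verify(can_id: int, counter: int, data: bytes) -> bool:
    """Recompute the tag with the receiver's expected counter and compare."""
    payload, tag = data[:-TAG_LEN], data[-TAG_LEN:]
    msg = can_id.to_bytes(4, "big") + counter.to_bytes(4, "big") + payload
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:TAG_LEN]
    return hmac.compare_digest(tag, expected)

data = protect(0x1A0, 7, b"\x42\x00")
assert verify(0x1A0, 7, data)      # genuine frame with matching counter
assert not verify(0x1A0, 8, data)  # replayed frame fails the freshness check
```

The counter mismatch in the last line is why such schemes defeat replay attacks: a recorded frame carries a stale counter, so its tag no longer verifies.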

2 Related Work

In this work, a special emphasis is put on the PASTA security testbed presented by Toyama et al. [21]. This framework provided the basic inspiration for the development of the PENNE framework. Aside from this, a general overview of related work on automotive security testbeds is given.

2.1 Automotive Security Testbed Implementations

Luo et al. [5] conducted a systematic literature review of existing automotive cybersecurity testing solutions between 2010 and 2021. In their work, the authors showed an increasing number of research papers on automotive cybersecurity testing in 2015. The
same year Miller and Valasek [14] demonstrated a remote attack on an unaltered Jeep, leading to recalls of 1.4 million vehicles. Before their prominent Jeep attack, Miller and Valasek [13] built a security testbed by extracting and isolating ECUs from real vehicles. They provide researchers with methods to build their own testbed with different low-cost ECUs. In 2017, Fowler et al. [6] used a commercial Hardware-In-The-Loop (HIL) system as a cybersecurity testbed. The authors showed that the HIL testbed is capable of testing ECUs for vulnerabilities from attacks via different aftermarket OBD-II dongles. In the same year, Zheng et al. [23] presented a testbed based on a real-time CAN bus simulation and a simulated infotainment system to explore security vulnerabilities in modern vehicle systems. They demonstrated different attacks on the CAN bus through an Ethernet-to-CAN gateway that connects the infotainment system with the ECUs. Toyama et al. [21] presented a portable and adaptable testbed at the BLACK HAT EUROPE conference in 2018. This solution based on white-box ECUs inspired the present research paper and is described in more detail in Sect. 2.2. Oruganti et al. [15] developed a HIL-based cybersecurity evaluation testbed. Their solution integrates virtual models of the vehicle, the controllers, and the traffic, as well as a virtual in-vehicle network simulation. In addition to CAN exploits, it can evaluate sensor attacks and the effect on surrounding traffic of a vehicle being attacked while driving. In 2022, Shi et al. [18] presented a security testbed framework based on real vehicle data by connecting multiple Arduino boards via the CAN bus. They demonstrated various attacks and showed that their CAN message timestamps have high similarity with the network traffic of a real vehicle.
2.2 PENNE Inspiration: PASTA – Portable Automotive Security Testbed with Adaptability

To protect the intellectual property of vehicle manufacturers and suppliers, the firmware of many ECUs is not accessible to researchers. This increases the security of the ECUs to a certain extent, as attackers have to perform time-intensive reverse engineering in order to find faults in the firmware. However, it also increases the difficulty for benign security researchers who want to secure modern vehicles. In contrast to these black-box ECUs, Toyama et al. propose the usage of programmable white-box ECUs for a security testbed, which brings the following benefits:

– With white-box ECUs, researchers can program on actual hardware.
– The in-vehicle network can be flexibly designed similar to that of an actual vehicle, which makes it possible to apply and evaluate security technology in an almost realistic environment.
– The white-box ECUs are based on open technology. Therefore, it is possible to evaluate security without any contract with stakeholders such as OEMs and suppliers.

Toyama et al. [21] combined 4 of these white-box ECUs together with various displays and actuators in a suitcase to create their "Portable Automotive Security Testbed with Adaptability" (PASTA). The four white-box ECUs (Body, Chassis, Powertrain, Gateway) serve as the backend of their solution, representing the different domains of a simulated passenger vehicle.


While the body, chassis and powertrain ECUs provide the regular functionality of the simulated vehicle, the gateway ECU allows researchers to implement security measures against CAN bus attacks. The ECUs can be connected directly on one CAN bus line or separately over the gateway ECU. The PASTA testbed offers two attack vectors for CAN bus intrusion attempts, one via the OBD-II port and another via tapped CAN wires. However, the researchers provide no implemented attacks and no available security measures out-of-the-box. In addition to making the software publicly available, the researchers provided the vehicle simulation in a suitcase and connected their solution to the software-based CARLA vehicle simulator and to a remote miniature car. Meanwhile, however, support for the software implementations and the sale of the PASTA hardware have been suspended. In this paper, we improve on the idea of PASTA and present a cybersecurity framework called PASTA-Emulator With No Need for Hardware ECUs, short PENNE.
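Replacing PASTA's hardware ECUs with software processes means the bus itself becomes a broadcast medium between processes. The broadcast semantics can be modeled in a few lines — a purely illustrative toy model with hypothetical CAN IDs; PENNE itself uses socketcan virtual CAN interfaces and socat relays rather than in-process objects:

```python
class VirtualCanBus:
    """Toy stand-in for a virtual CAN bus: every frame is broadcast to all
    attached ECUs, which filter by CAN ID themselves."""
    def __init__(self):
        self.ecus = []

    def attach(self, ecu):
        self.ecus.append(ecu)

    def send(self, can_id, data, sender=None):
        # CAN is a broadcast bus: everyone except the sender sees the frame.
        for ecu in self.ecus:
            if ecu is not sender:
                ecu.on_frame(can_id, data)

class Ecu:
    def __init__(self, name, accepted_ids):
        self.name = name
        self.accepted_ids = set(accepted_ids)
        self.received = []

    def on_frame(self, can_id, data):
        # Acceptance filtering: ignore IDs this ECU does not subscribe to.
        if can_id in self.accepted_ids:
            self.received.append((can_id, data))

bus = VirtualCanBus()
body = Ecu("body", {0x100})        # 0x100: hypothetical turn-signal state ID
powertrain = Ecu("powertrain", {0x200})
bus.attach(body)
bus.attach(powertrain)
bus.send(0x100, b"\x01")           # e.g. a chassis input change
assert body.received == [(0x100, b"\x01")]
assert powertrain.received == []
```

The lack of sender authentication in this model is also the root of the spoofing problem: any attached node may transmit any ID, and receivers filter only by ID, not by origin.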

3 PENNE Testbed Design

The aim of the PENNE cybersecurity testbed is to provide a free and open-source solution that is independent of any proprietary hardware and software components, while still giving a sophisticated simulation of a passenger vehicle. Therefore, the architecture of the testbed mimics a vehicle with 5 ECUs connected via the CAN bus using the socketcan implementation of the Linux kernel. Users of the PENNE testbed interact with a graphical user interface which allows them to apply various security measures and launch different attacks against the vehicle. Figure 1 gives a compact overview of the PENNE testbed. The basic functionalities are provided by the three vehicle function ECUs on the left side of the depiction, while the security ECUs on the right side support the security testing of direct and indirect attack vectors. The implementation of the PENNE framework in its current form can be openly accessed on the project's GitHub page [4]. As the focus of the paper lies on the provision of a novel test framework and not on the validation of a vehicle simulation, threat modeling with STRIDE or a risk assessment using TARA was considered to be outside the scope of this paper. The following sections describe the components of the PENNE security testbed in greater detail.

3.1 Vehicle Function ECUs

Similar to the PASTA testbed of Toyama et al. [21], the basic functionality of PENNE's simulated vehicle is provided by three domain ECUs, namely the body, chassis and powertrain ECUs. The ECUs communicate with each other via a virtual CAN bus implemented using socketcan. The communication between the ECUs and the GUI is handled by socat relays.

Body. The body ECU represents the exterior part of the vehicle, including the doors and windows, turn signals, headlights, brake lights, wipers and horn.
When the testbed operator interacts with the vehicle’s inputs via the GUI, the chassis ECU notifies the body ECU with messages about its changed state over the CAN bus. The body ECU responds

Fig. 1. Overview of the PENNE Security Testbed. (Diagram: the vehicle function ECUs — Body, Chassis, Powertrain — and the security ECUs — Observer, Gateway — communicate on vcan0, with the OBD-II port on vcan1 behind the gateway; socat relays connect the ECUs to the graphical user interface. The direct attack vector targets vcan0, the indirect attack vector targets vcan1.)

according to its inherent control algorithms and updates the state of its components, which is then visualized via the GUI again. In addition, the body domain is affected by updates from the powertrain ECU, which also receives input updates from the chassis ECU. To visualize the state of the body ECU, a top-view depiction of the vehicle is provided by the GUI.

Chassis. The chassis ECU takes inputs from all the vehicle parts that the PENNE operator can interact with and controls the gauges and lamps on the dashboard. The user can actively interact with the steering wheel, horn button, gear shift lever, wipers and turn indicator levers, parking brake lever, light switch, throttle and brake pedals, door handles, door lock button, window buttons, engine start/stop button and the hazard light button. As the chassis ECU contains all the devices the user can interact with, it is responsible for most of the communication triggered by user input on the CAN bus.

Powertrain. The powertrain ECU is responsible for the speed calculation, the automatic shifting and the power-steering of PENNE's simulated vehicle. It also controls the engine state of the vehicle, which can be turned on with a button in the GUI when the shift lever is in the parking (P) position. When the engine is running and the shift lever is put in the driving (D) or reverse (R) position, the speed of the vehicle can be increased by pressing the throttle pedal in the GUI. The powertrain ECU receives the inputs of the pedals, the steering wheel rotation, the hand brake, the engine button and the shift lever via CAN messages from the chassis ECU, and sends the calculated results of its control algorithms back to the bus.

3.2 Graphical User Interface

The GUI of the PENNE security testbed is based on the pygame library for Python 3 and consists of a setup panel, the PENNE dashboard, and an attack panel. When

An Open Software-Based Framework for Automotive Cybersecurity Testing

321

initiating the testbed, the operator can choose which security measures are activated during the session. Depending on this choice, the appearance of the PENNE dashboard changes. Figure 2 shows the dashboard with all three security measures activated, next to the attack panel.

Fig. 2. The PENNE GUI With All Activated Security Measures and the Attack Panel.

During the operation of the PENNE testbed, the user can launch different attacks against the vehicle by clicking the buttons on the attack panel. The user can specify whether the chosen attack is executed directly via the intra-vehicle CAN bus on vcan0 or via the OBD-II port on vcan1. These two attack vectors represent the main attack groups that can be explored with the framework.

3.3 Attacks of the PENNE Testbed

In order to provide operators with out-of-the-box demonstrations of common attacks on the CAN bus, the PENNE cybersecurity testbed is equipped with 10 pre-implemented attacks that generate visual or audible responses on the dashboard. The attacks can be divided into the four categories described below.

CAN Frame Spoofing. In a CAN frame spoofing attack, the malicious party sends a CAN frame with a specific ID and payload to the network. Messages with a specific ID are usually sent by one specific ECU; in the PENNE framework, for example, only the chassis ECU sends CAN messages about the state of the throttle pedal. By sending a message with such an ID, an attacker can manipulate the operation of other ECUs, which accept the message as if it came from the original sender.

CAN Bus Flooding. A CAN bus flooding attack happens when the attacker sends a large number of malicious messages within a certain time window. This can cause legitimate messages to be delayed or lost among the many malicious messages on the bus. Due to the prioritization of low-ID CAN messages, such flooding attacks can completely shut down communication in a physical CAN network and thereby serve as denial-of-service attacks. For the virtual CAN bus of PENNE, however, the scheduling system
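The two attack primitives above can be sketched in a few lines of Python; frames are modeled as plain dictionaries, and the IDs and payloads below are placeholders rather than PENNE's real message set:

```python
# Illustrative sketch of spoofing and flooding primitives.
# Arbitration ID and payload values are placeholders, not PENNE's definitions.

def spoof_frame(arbitration_id: int, payload: bytes) -> dict:
    """Craft a single spoofed CAN frame; on a plain CAN bus, receivers
    cannot distinguish it from a legitimate frame (no sender authentication)."""
    return {"id": arbitration_id, "data": payload[:8]}  # classic CAN: max 8 data bytes

def flooding_burst(arbitration_id: int, payload: bytes, count: int) -> list:
    """A flooding attack is essentially the same spoofed frame sent rapidly
    and repeatedly, delaying or drowning out legitimate traffic."""
    return [spoof_frame(arbitration_id, payload) for _ in range(count)]
```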

322

T. Faschang and G. Macher

of the socketcan implementation does not allow such a complete denial of service. The PENNE security testbed includes flooding attacks for the RPM, throttle, brake, horn and random CAN messages.

CAN Bus Fuzzing. The CAN bus fuzzing attack is similar to the flooding attack in that it also sends a large number of messages in a specific time window, yet it serves a different purpose. The fuzzing attack iterates over all arbitration IDs from 0x000 to 0xFFF and sends messages with all possible values for the first byte of the message payload. This way, an attacker can investigate how the ECUs react to all the different messages. Such fuzzing techniques are a viable tool in many fields of cybersecurity, as they can reveal weaknesses in the implementation of many systems. PENNE's pre-implemented fuzzing attack sends 4095 (0xFFF) × 255 (0xFF) = 1,044,225 messages on the CAN bus, which takes around 20 s to complete.

Sniffing and Replay Attack. The final attack category consists of two parts: first, the attacker sniffs the network's traffic and stores the messages; then, the attacker can send the recorded messages to the CAN bus. This is possible because of the broadcast-based nature of the CAN protocol, where all bus nodes can read any message sent by others. Sniffing attacks can collect secret keys or intellectual property, and replay attacks can unlock doors or steal vehicles. In contrast to the other attacks, PENNE provides the possibility to fine-tune the sniffing/replay attack in a separate window.
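A fuzzing sweep matching the message count quoted above can be written as a generator; the exact ranges used here (IDs 0x001–0xFFF, first-byte values 0x00–0xFE) are an assumption chosen to reproduce the 4095 × 255 product stated in the text:

```python
# Sketch of a CAN fuzzing sweep. The exact ID and byte ranges are assumed
# so that the total matches the 4095 x 255 = 1,044,225 messages in the text.

def fuzz_messages():
    """Yield (arbitration_id, payload) pairs covering every ID with every
    possible value of the first payload byte."""
    for arb_id in range(0x001, 0x1000):      # 4095 arbitration IDs
        for first_byte in range(0x00, 0xFF): # 255 first-byte values
            yield arb_id, bytes([first_byte])
```

Using a generator keeps memory flat; a real attack loop would send each pair onto the bus instead of materializing the list.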

3.4 Security Measures of the PENNE Testbed

In addition to the pre-implemented attacks described above, the PENNE testbed offers its users three different security measures that can be activated individually or in combination. By launching the described attacks against these security measures, their efficacy can be demonstrated out of the box.

Intrusion Detection System. In order to detect attackers, an intrusion detection system is implemented as an additional ECU that observes the CAN bus. This observer ECU stores the expected timings of all CAN messages at startup and listens to the communication on the vcan0 interface. It checks whether the arbitration ID of a message is present in the timing table and whether the payload is within the defined range. If a message does not match the defined criteria, the observer ECU sends a message to the GUI via the socat relay, and the ECU's visualization changes its color to orange while outputting the ID and cause of the detected message. Even though the observer ECU is able to detect attackers sending arbitrary messages or violating message frequencies, it cannot prevent the attack but only send a warning message to the operator.

Security Gateway. To decouple the intra-vehicle CAN bus vcan0 from the OBD-II port on vcan1, PENNE implements a security gateway as an additional ECU. This gateway ECU implements a whitelist approach with a read- and a write-whitelist, allowing only certain CAN messages to be transmitted between the two domains. The ECU detects and blocks any unauthorized messages and notifies the PENNE operator through a message to the GUI. In addition, the ECU's visualization changes its color to orange and prints the arbitration ID of the blocked read or write attempt.
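The core checks of the observer ECU and the gateway ECU can be sketched as follows; the timing table, tolerance value, and whitelist contents are hypothetical examples, not PENNE's actual configuration:

```python
# Hypothetical IDS and gateway checks. The table entries and whitelist
# below are invented for illustration; the real observer ECU records
# expected timings at startup.

TIMING_TABLE = {
    0x101: {"period": 0.010, "tolerance": 0.008, "payload_range": (0, 100)},
}
WRITE_WHITELIST = {0x101}  # IDs allowed to pass from OBD-II into vcan0

def ids_check(arb_id, payload, now, last_seen):
    """Observer ECU: return a warning string for suspicious messages,
    or None if the message matches the expected ID, payload and timing."""
    rule = TIMING_TABLE.get(arb_id)
    if rule is None:
        return "unknown arbitration ID"
    lo, hi = rule["payload_range"]
    if not lo <= payload[0] <= hi:
        return "payload out of range"
    expected = last_seen + rule["period"]
    if abs(now - expected) > rule["tolerance"]:
        return "timing deviation"
    return None  # message looks legitimate

def gateway_allows_write(arb_id):
    """Gateway ECU: whitelist filter for frames entering from the OBD-II side."""
    return arb_id in WRITE_WHITELIST
```

Note that, as in the testbed, `ids_check` only classifies a message; it cannot stop delivery, whereas the gateway actively drops non-whitelisted frames.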


CAN Bus Authenticated Encryption. PENNE supports authenticated encryption of CAN messages using the AES-GCM scheme. When the authenticated encryption security measure is activated, the ECUs share a randomly generated 128-bit key at startup. The AES-GCM scheme generates a tag and an authenticated timestamp for each encrypted message, which provides integrity and authenticity. The receiving ECU confirms the message's original state and freshness by checking the tag and timestamp. If the timestamp is more than one second apart from the reception time, the tag does not match, or the decryption fails, the message is considered malicious and therefore discarded. PENNE uses the thoroughly tested and optimized AES-GCM implementation of the openssl library.
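The framework itself uses AES-GCM from the openssl library; as a standard-library-only illustration of just the verification logic (authentication tag plus the one-second freshness window), the sketch below substitutes an HMAC tag for the AES-GCM tag and leaves the payload unencrypted:

```python
# Stand-in sketch: HMAC tag + timestamp instead of real AES-GCM, to
# illustrate the tag/freshness checks only. Not PENNE's implementation.
import hmac
import hashlib
import struct

MAX_AGE = 1.0  # seconds; older messages are treated as replays

def protect(key: bytes, payload: bytes, timestamp: float) -> bytes:
    """Append a timestamp and an authentication tag to the payload."""
    ts = struct.pack(">d", timestamp)
    tag = hmac.new(key, payload + ts, hashlib.sha256).digest()[:8]
    return payload + ts + tag

def verify(key: bytes, message: bytes, now: float):
    """Return the payload if tag and freshness check out, else None (discard)."""
    payload, ts_raw, tag = message[:-16], message[-16:-8], message[-8:]
    expected = hmac.new(key, payload + ts_raw, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted message: discard
    (timestamp,) = struct.unpack(">d", ts_raw)
    if abs(now - timestamp) > MAX_AGE:
        return None  # stale timestamp: likely a replay, discard
    return payload
```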

4 Results and Evaluation

With the PENNE framework, we present a purely software-based testbed for cybersecurity in the automotive domain, which was initially inspired by the PASTA testbed of Toyama et al. [21]. In contrast to PASTA, PENNE removes the need for physical white-box ECUs; we therefore named the developed testbed "PASTA Emulator with no need for hardware ECUs" (PENNE). The ten pre-implemented attacks together with the three available security measures make the PENNE testbed stand out from other security testing solutions in the automotive domain. To operate the testbed, users need no more than a PC or a VM running Linux to explore potential attacks and mitigation methods on vehicle ECU networks.

4.1 Testbed Capabilities

In general, the PENNE testbed is capable of two things: (a) providing demonstrations of prominent attacks and countermeasures out of the box, and (b) being easily adaptable and extendable for training and cybersecurity testing purposes. Neither of these key features implies high costs or risks for the operators of the testbed. Due to the use of the socketcan implementation and the simulation of real ECUs, the architecture of PENNE closely resembles the electronic system connection architecture of a real vehicle. With PENNE, we provide a visually appealing platform for the demonstration and testing of cybersecurity attacks and countermeasures. In addition to the software vehicle simulation, a connection of a FREENOVE 4WD model car to the PENNE testbed via Bluetooth is implemented. A subset of the CAN messages can thereby be translated into Bluetooth commands so that the car can be controlled based on the activities of the PENNE testbed. This provides an additional physical demonstration of attacks and countermeasures for training purposes.

4.2 Message Timings

The timing accuracy of the received CAN messages is crucial for the time-based intrusion detection system of the observer ECU.
To determine the threshold for warnings from the observer ECU, the time deviations of 30,000 CAN messages were measured and
plotted on the density distribution graph in Fig. 3. The measurements during regular operation of the PENNE testbed showed that more than half of the received messages had a time deviation between −150 µs and 150 µs. Based on the measurements, a threshold of 8 ms was set for the IDS of the observer ECU, which flags any message with a larger deviation as an intrusion attempt. No false-positive alerts were generated during the tests, and all relevant attacks could still be detected. In addition to the measurement during regular usage, we measured the timing deviations during a CAN bus flooding attack. This resulted in a significantly wider distribution curve due to the higher timing deviations caused by the high CAN bus load, which demonstrates the severity of such a flooding attack. Every bar in Fig. 3 sums the number of received CAN messages over an interval of 100 µs.

[Density plot: Time Deviation [ms] from −6 to 6 on the x-axis, Number of Messages on the y-axis; two series: Regular Usage and Flooding Attack.]

Fig. 3. Distribution of the Time Deviation of 30000 Incoming CAN Messages During Normal Operation and During a Flooding Attack.

4.3 Attack Prevention

The effectiveness of the implemented attacks was tested against the three different security measures of PENNE. Table 1 summarizes the performance of the observer ECU, the gateway ECU, and the authenticated encryption scheme against the different attack categories, which were tested on both the simulated OBD-II diagnostic port on vcan1 and the intra-vehicle CAN bus vcan0, representing real-life attack scenarios. The analysis shows that no security measure on its own was able to prevent all the different attacks entirely. In the following sections, the effectiveness of the implemented measures against the four attack categories is described.

CAN Frame Spoofing. Unlike the CAN bus flooding attacks, CAN frame spoofing attacks send individual messages without affecting the timings of other messages. The observer ECU is able to detect such spoofing attacks by identifying irregular arbitration IDs, message timings or payload values of the spoofed messages. The gateway ECU blocks spoofed messages over the OBD-II port on vcan1 that are not included in its


Table 1. Effectiveness of the Attacks on the Security Measures of PENNE.

Attack                         | None | IDS | Gateway | AES-GCM
-------------------------------|------|-----|---------|--------
CAN Frame Spoofing - OBD-II    |  ×   |  ∼  |    ✓    |    ✓
CAN Bus Flooding - OBD-II      |  ×   |  ∼  |    ✓    |    ∼
CAN Bus Fuzzing - OBD-II       |  ×   |  ∼  |    ✓    |    ✓
Sniffing Attack - OBD-II       |  ×   |  ×  |    ✓    |    ∼
Replay Attack - OBD-II         |  ×   |  ∼  |    ✓    |    ✓
CAN Frame Spoofing - Clipping  |  ×   |  ∼  |    ×    |    ✓
CAN Bus Flooding - Clipping    |  ×   |  ∼  |    ×    |    ∼
CAN Bus Fuzzing - Clipping     |  ×   |  ∼  |    ×    |    ✓
Sniffing Attack - Clipping     |  ×   |  ×  |    ×    |    ∼
Replay Attack - Clipping       |  ×   |  ∼  |    ×    |    ✓

× … The attack is neither noticed nor prevented by the security measure.
∼ … The attack is noticed or partially prevented by the security measure.
✓ … The attack is fully prevented by the security measure.

write-whitelist. Message authentication and encryption completely prevent CAN frame spoofing attacks: the attacker cannot encrypt or authenticate messages without the shared secret key, and recipients discard all malicious messages, so the attack has no impact.

CAN Bus Flooding. CAN bus flooding attacks affect the functionality of the PENNE security testbed by forcing a certain behavior of the simulated vehicle or by influencing the timing of legitimate CAN messages on the bus. The observer ECU can detect such an attack and display a warning, but cannot prevent it. The gateway ECU can block flooding attacks from the OBD-II port on vcan1 without affecting the in-vehicle CAN bus vcan0; this is only possible if the arbitration IDs of the flooding attack are excluded from the write-whitelist of the gateway ECU. The AES authenticated encryption scheme prevents the functional impact of flooding attacks, but the temporal impact remains because the CAN bus still carries the heavy load of discarded attacker messages.

CAN Bus Fuzzing. Fuzzing attacks on the CAN bus may also affect the regular message timings due to the high message frequency they impose on the bus. We do not, however, consider this impact in the attack analysis, as this side effect is not the intention of a fuzzing attack. The observer ECU reliably detects fuzzing attacks due to the irregular arbitration IDs and payload values of the sent CAN messages. The gateway ECU blocks all fuzzed messages from the OBD-II port on vcan1 that do not comply with its write-whitelist. The authenticated encryption scheme prevents fuzzing attacks, as the ECUs discard all messages that are not encrypted and tagged correctly.


Sniffing Attack. Sniffing attacks read CAN messages without interfering with the communication on the bus; therefore, the IDS of the observer ECU cannot detect them. The gateway ECU is able to deny a sniffing attack via the OBD-II port on vcan1 if the arbitration IDs of the desired messages are not included in the read-whitelist. Sniffing attacks on vcan0, however, cannot be prevented by the gateway ECU. The AES-GCM authenticated encryption scheme prevents malicious parties from obtaining any information aside from the arbitration ID and timing of sniffed CAN messages, since the payload of the messages cannot be decrypted without the shared secret key.

Replay Attack. After a successful sniffing attack, malicious parties may launch a replay attack with the captured messages. The observer ECU can detect such a replayed message due to its timing irregularities. The gateway ECU can prevent replay attacks via the OBD-II port on vcan1 if the arbitration ID of the message is not included in its write-whitelist. Authenticated encryption can also prevent replay attacks because an attacker cannot change the authenticated timestamp that is sent with every message; receivers discard any message with a mismatched tag or an old timestamp.

5 Conclusion

In this work, we presented the design, implementation and evaluation of an embedded automotive system framework for cybersecurity testing. The presented PENNE framework simulates a CAN controller network and allows researchers and engineers to test attack vectors and mitigation methods in a simulated environment. It further provides basic implementations of the most common attack types for CAN networks, is used for training and testing purposes, and can be extended for series controllers and real-world demonstrator hardware. With the PENNE framework, we present a purely software-based testbed for cybersecurity in the automotive domain, initially inspired by the PASTA testbed work. The ten pre-implemented attacks together with the three available security measures make the PENNE testbed stand out from other security testing solutions in the automotive domain. The proposed framework consists of three main components: (a) an open ECU architecture that implements basic vehicle functionalities and communication, (b) an intrusion detection and gateway concept implementation accompanied by an authenticated encryption scheme for CAN messages, and (c) demonstrative implementations of the most common cybersecurity attacks on the CAN bus, for testing and evaluation.

6 Relation to SPI Manifesto

With this work, we contribute to the principles and values described in the SPI manifesto of the community [9]. Specifically, we aim to enhance the involvement of people through training formats and thus improve the competitiveness of organisations (A.2). Our further objectives are to enhance learning organizations and learning environments (4.1) and thereby also to support the vision of different organizations and empower additional business objectives (5.1).


Acknowledgments. This work was supported by TEACHING, a project funded by the EU Horizon 2020 research and innovation programme under GA n. 871385 - www.teaching-h2020.eu, and the ECQA Certified Cybersecurity Engineer and Manager – Automotive Sector project (CYBERENG), which is co-funded by the Erasmus+ Call 2020 Round 1 KA203 Programme of the European Union under the agreement 2020-1-CZ01-KA203-078494. This work is partially supported by Grants of SGS No. SP2021/87, VSB - Technical University of Ostrava, Czech Republic.

References

1. Miller, C., Valasek, C.: Remote Exploitation of an Unaltered Passenger Vehicle. Technical report, Black Hat 2015 (2015)
2. Dobaj, J., Ekert, D., Stolfa, J., Stolfa, S., Macher, G., Messnarz, R.: Cybersecurity threat analysis, risk assessment and design patterns for automotive networked embedded systems: a case study. JUCS – J. Univ. Comput. Sci. 27(8), 830–849 (2021). https://doi.org/10.3897/jucs.72367
3. Dobaj, J., Macher, G., Ekert, D., Riel, A., Messnarz, R.: Towards a security-driven automotive development lifecycle. J. Softw. Evol. Process., e2407 (2021). https://doi.org/10.1002/smr.2407
4. Faschang, T., Heinz, R.: PENNE GitHub repository (2023). https://github.com/AstroTV/PENNE
5. Luo, F., et al.: Cybersecurity testing for automotive domain: a survey. Sensors 22(23), 9211 (2022). https://doi.org/10.3390/s22239211
6. Fowler, D.S., Cheah, M., Shaikh, S.A., Bryans, J.: Towards a testbed for automotive cybersecurity. In: 2017 IEEE International Conference on Software Testing, Verification and Validation (ICST), pp. 540–541. IEEE, Tokyo (2017). https://doi.org/10.1109/ICST.2017.62
7. Intel: Safety First for Automated Driving (2019)
8. ISO - International Organization for Standardization: ISO/SAE 21434 Road Vehicles - Cybersecurity engineering (2021)
9. Korsaa, M., et al.: The SPI manifesto and the ECQA SPI manager certification scheme. J. Softw. Evol. Process 24(5), 525–540 (2012)
10. Levy, Y.: Global Automotive Cybersecurity Report. Technical report, Upstream Security Ltd. (2022)
11. Macher, G., Veledar, O.: Balancing exploration and exploitation through open innovation in the automotive domain – focus on SMEs. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 336–348. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_22
12. MeticulousResearch: Automotive Cybersecurity Market - Global Opportunity Analysis and Industry Forecast (2023–2030). Technical report, Meticulous Research (2023)
13. Miller, C., Valasek, C.: Car Hacking: For Poories (2014)
14. Miller, C., Valasek, C.: Remote exploitation of an unaltered passenger vehicle (2015)
15. Oruganti, P.S., Appel, M., Ahmed, Q.: Hardware-in-loop based automotive embedded systems cybersecurity evaluation testbed. In: Proceedings of the ACM Workshop on Automotive Cybersecurity, pp. 41–44. ACM, Richardson (2019). https://doi.org/10.1145/3309171.3309173
16. Ring, M., Durrwang, J., Sommer, F., Kriesten, R.: Survey on vehicular attacks - building a vulnerability database. In: 2015 IEEE International Conference on Vehicular Electronics and Safety (ICVES), pp. 208–212. IEEE, Yokohama (2015)


17. Schmittner, C., Wieland, K., Macher, G.: Cooperative and distributed cybersecurity analysis for the automotive domain. In: AmE 2022 - Automotive Meets Electronics, GMM-Symposium, vol. 13, pp. 1–5 (2022)
18. Shi, D., Kou, L., Huo, C., Wu, T.: A CAN bus security testbed framework for automotive cyber-physical systems. Wirel. Commun. Mob. Comput. 2022, 1–11 (2022). https://doi.org/10.1155/2022/7176194
19. Schmittner, C., et al.: Automotive cybersecurity - training the future. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 211–219. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_14
20. Strobl, S., Hofbauer, D., Schmittner, C., Maksuti, S., Tauber, M., Delsing, J.: Connected cars - threats, vulnerabilities and their impact. In: 2018 IEEE Industrial Cyber-Physical Systems (ICPS), pp. 375–380. IEEE, St. Petersburg (2018)
21. Toyama, T., Yoshida, T., Oguma, H., Matsumoto, T.: PASTA: Portable Automotive Security Testbed with Adaptability (2018)
22. Umawing, J.: TikTok car theft challenge: Hyundai, Kia fix flaw (2023). https://www.malwarebytes.com/blog/news/2023/02/tiktok-car-theft-challenge-hyundai-kia-fix-flaw
23. Zheng, X., Pan, L., Chen, H., Di Pietro, R., Batten, L.: A testbed for security analysis of modern vehicle systems. In: 2017 IEEE Trustcom/BigDataSE/ICESS, pp. 1090–1095. IEEE, Sydney (2017). https://doi.org/10.1109/Trustcom/BigDataSE/ICESS.2017.357

Requirements Engineering for Cyber-Physical Products: Software Process Improvement for Intelligent Systems

Thomas Fehlmann1(B) and Eberhard Kranich2

1 Euro Project Office, 8032 Zurich, Switzerland
[email protected]
2 Euro Project Office, 47051 Duisburg, Germany

Abstract. Today's cyber-physical products are software-intense. That means that the software process is decisive for the ability of these products to learn and adapt their behavior, but in turn also for their potential to physically harm humans or the environment. The fact that such systems learn, change their behavior, unlearn, and adapt to their environment makes not only testing a challenge, but also requirements engineering. A new model for knowledge representation opens a possibility to make intelligent systems predictable, at least for certain specific aspects. This in turn opens new challenges for identifying the areas where such predictability should take precedence over the adaptability that is characteristic of intelligent systems. Adapting the software process leads to Continuous Requirements Engineering as an extension to Continuous Integration/Continuous Deployment & Delivery and Autonomous Real-time Testing.

Keywords: Artificial Intelligence · Knowledge Representation · Knowledge Concepts · Continuous Requirements Engineering · Safe Design

1 Introduction

Intelligent systems are characterized by software that uses methods of Artificial Intelligence (AI) to learn intelligent behavior, rather than relying on programmers to make them smart, adaptive, autonomous, or similar. However, learned behavior is much more difficult to specify as a requirement than programmed behavior. The reason is that, although learning success depends on the training set, it is not possible to safely predict that success. One of the reasons is that the models used to learn behavior are multilinear optimization functions that are not symmetric. The order of learning thus plays an important role and must not be neglected. Moreover, with Deep Learning (DL), systems can also unlearn, for instance when an Advanced Driver Assistance System (ADAS) learns from users who exhibit reckless behavior. What should prevent the intelligent system from adopting such undesirable behavior? The answer we propose is to connect knowledge acquired with AI methods with conventional programming and DevOps. This works thanks to the Graph Model of Combinatory Logic, as proposed by Engeler and Scott 40 years ago.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023. M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 329–342, 2023. https://doi.org/10.1007/978-3-031-42307-9_23

It provides a structure
that represents both AI knowledge and programming concepts. The Lambda Theorem formulated by Barendregt [1] proves that the graph model implements not only base knowledge but also programmable functions. We use this kind of Lambda programming to write Concepts that need not be trained but provide a strict framework for keeping intelligent systems under control. It is something like a moral guideline, but contrary to humans, intelligent systems and robots have no choice except to follow concepts. As with every such certainty, one must be aware of the Russell antinomy and admit that no concept can ever fully prevent system failure. Concepts might be incomplete or contain programming errors. However, like morality in humans, they can make malfunction much less likely, because concepts can be tested, just like the behavior of any other software.

2 Literature Review

Already in Greek mythology there are many artificial birds, walking and talking statues, and artificial servants. Homer reports in his Iliad that Hephaistos, the god of crafts, had made self-propelled vehicles and even artificial servants who were intelligent and learned crafts. There are many accounts by historians of ancient Greece and ancient Rome with detailed descriptions of self-propelled and self-walking mechanisms and androids. Similar narratives are known from other early cultures, especially China. Homer in his Iliad, book 18, tells us about “golden handmaids” that “worked for him, and were like real young women, with sense and reason [νόος], also voice, and strength, and all the learning of the immortals” [2]. The Attic notion of νόος is particularly interesting, because it not only means intelligence but also mindfulness in the sense of perception, heart, soul; and thought, purpose, design. In philosophy, it is the mind or intellect, reason, both rational and emotional. In English, νόος is related to “notion” but not to “term”. Thus, it was very wise to start talking about “Artificial Intelligence” rather than “Artificial Nous” in the 20th century, when first logicians and then computer scientists started to develop perceptrons, constructing machines that collected enormous amounts of data to develop multilinear models allowing them to recognize objects, and even natural language, somewhat like neural networks. Today’s AI is far away from nous [νόος]; the mindfulness, the reason is missing.

Let L be a basic set of knowledge about objects that are recognizable by AI. In the second half of the last century, Engeler and Scott published the Graph Model of Combinatory Logic [3], an algebraic structure constructed from a basic set of knowledge L. It contains Arrow Terms: ordered pairs consisting of a set of arrow terms on the left-hand side and a single arrow term on the right-hand side.
These terms describe the constituent elements of directed graphs with multiple origins and a single node. Arrow terms are written as

b_i → a    (1)

The right-hand side a is an arrow term, and i is a choice function that selects a finite set of arrow terms from the class of arrow terms b. Both a and b might be basic terms recognizable by some AI model, thus level-0 arrow terms without an arrow, or higher-level terms. The construction of a graph model for representing knowledge acquired by
AI is described in full detail in Fehlmann & Kranich [4]. With the above conventions, (b_i → a)_j denotes a Concept, i.e., a non-empty finite set of arrow terms b_i → a of level 1 or higher, together with two choice functions i, j. Each set element of a concept has at least one arrow. Obviously, the definition (1) is recursive and closed under the following application (2):

M • N = (b_i → a) • N = {a | ∃ b_i → a ∈ M; b_i ⊂ N}    (2)

where M and N are sets of arrow terms, and (b_i → a) ⊂ M is the subset of arrow terms in M that are level 1 or higher. The parentheses () indicate a finite or infinite subset; the choice function i used as an index ()_i indicates a finite subset with i as its choice function serving as its construction rule. Barendregt [1] proved that this graph model is Turing-complete, which means that every computable function can be represented as an arrow term set, called a Lambda Concept, and written mnemonically as

λx.M    (3)

To prove this, Barendregt [5] showed that for every knowledge set M, another set of arrow terms, denoted as λx.M, can be constructed that acts as a function (λx.M) • N, with x as a placeholder for some other knowledge N. The result is a combination of knowledge in the sense that each occurrence of the “variable” x in M is replaced by N. Arrow terms of level 0, i.e., elements of the base set L, are referred to as Observations, because traditional AI can tag them, recognize them with a suitable model, deal with them, and handle them. When we talk of Knowledge, we mean finite or infinite sets of arrow terms, including observations and concepts. You can apply any knowledge to any other knowledge. This makes the graph model sufficiently rich to describe whatever world you want. Knowledge in the sense of the graph model provides a common ground for AI and deterministic programming, quite like the Turing machine’s tape provides a common medium for data and programs [6]. The impact on the development of AI might be quite comparable to what Turing’s idea brought to computer science. At least, it eases the task of making AI safe and compliant with certain regulations, e.g., for medical instruments. According to Engeler [3], the motivation behind the graph model definition is that, starting with observations about some domain, arrow terms represent knowledge about that domain. This could be observable, temporal behavior or, thanks to Lambda programming, strict rules imposed on how knowledge should be used and how it shall behave. Examples include the Neural Algebra of Engeler [7]. Thus, it might be of interest to engineers who want to handle knowledge, and in fact, AI was the ultimate vision at the time the graph model was conceived [8]. It is obvious that with concepts, we have the computer implementation of the Greek νόος in mind.
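To make the construction concrete, here is a tiny Python sketch of the application operation, equation (2), under the simplifying assumptions that level-0 observations are plain strings and an arrow term is encoded as a pair (frozenset of premises, conclusion):

```python
# Sketch of graph-model application, equation (2):
# M . N = { a | (b -> a) in M, b a subset of N }.
# Level-0 observations are strings; arrow terms are (frozenset, term) pairs.

def apply(M, N):
    """Apply knowledge M to knowledge N."""
    result = set()
    for term in M:
        if isinstance(term, tuple):  # only arrow terms can be applied
            b, a = term
            if b <= N:               # all premises of the arrow are found in N
                result.add(a)
    return result
```

For example, an arrow term stating "if it is wet and cold, conclude winter" fires only when both premises are present in the applied knowledge set.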
Recently, the question of whether AI is safe or can be made safe has been debated, as has the related question of whether AI adheres to any moral standard, for example, with respect to racial or gender bias. In fact, whether AI is biased very much depends on the training set. Ganguli and his team [9] have observed that Large Language Models (LLM) have the capability to
“morally self-correct” – to avoid producing harmful outputs – if instructed to do so. However, this effect is not entirely reproducible; the LLM can also tilt in the opposite direction. There are attempts to test the training sets, for instance with a bias benchmark for question answering [10]. But these tests do not explain how to modify the training set to get rid of the bias. Meeting safety standards by such tests, or by self-correction, seems very hard, and the outcome is not exactly predictable. Whether this is sufficient for meeting safety and privacy standards, notwithstanding EC legislation aiming at protecting the freedom and the rights of citizens, is at least questionable.

3 Hypotheses and Objectives

The existence of a data structure that represents knowledge about objects and conceptual knowledge equally well leads to novel ways of dealing with AI. In one respect, intelligent systems can use AI to learn about the world's objects that they are supposed to handle; in the other respect, intelligent systems can be programmed to enforce concepts that prevent them from breaking any law, or they can be fed with background informational concepts that would otherwise take tens of thousands of training samples to learn. Moreover, learning success is difficult to assess or measure, while concepts can be tested.

3.1 Hypothesis

Intelligent systems should have a means of programming concepts into them. This means that part of the knowledge is not acquired by training but is injected by programmers, subject to the rules and laws applicable to the domain in which the intelligent systems act. In some sense, this reflects how nature creates intelligent beings. Part of their skills is pre-programmed by genetics; the rest is learned by observation or teaching by parents and the flock, or tribe. Overriding the genetic part is difficult, but it usually makes behavior quite predictable. When creating intelligent systems artificially, this option should be considered for making intelligent systems compliant with regulations and product certification in regulated business domains.

3.2 Research Questions

• What does it entail to program concepts for intelligent systems?
• How does requirements engineering adapt to intelligent systems?
• What are the consequences for the software process and thus the Software Process Improvement (SPI) manifesto [11]?

Requirements Engineering for Cyber-Physical Products


4 Sample Concepts

Programming concepts for intelligent systems is not straightforward. It entails two different tasks:

1. Define the choice functions for selecting objects from the intelligent system’s domain. In AI, this process is called Grounding [12].
2. Define the conceptual functionality as a Lambda Concept [5].

Programming concepts is thus quite different from programming algorithms. It is a two-step approach: it requires grounding the observations by learning with a suitable AI model, and programming the intended algorithmic behavior. Suitable programming environments for concepts are not yet available. Such an environment requires a structure that can describe objects tagged and recognized by some AI model, implementing the graph model, and a programming environment that accommodates Lambda programs. Once this works, there are several performance challenges to overcome, as the way Barendregt proved the Lambda theorem is certainly not optimized for computer execution. Moreover, computer architectures may also change and move from today’s digital machines back to more analog computing principles, such as quantum computers. Since the topic of our paper is Requirements Engineering, we do not have to bother with implementation details. Whether concepts are implemented using the Lambda theorem or not is not critical.

4.1 Learning from Samples

An easy case for programming concepts is learning from samples. If a small number of samples always leads to similar outcomes, then the choice function of these outcomes can be reused to predict the outcome of a new challenge.

Fig. 1. An intelligent system should be able to derive the fourth conclusive image from only three samples. Today’s AI typically needs thousands of samples to learn how to solve this problem (Courtesy of [13]).


T. Fehlmann and E. Kranich

This setting is reflected in the ARC Challenge, shown in Fig. 1, invented by Chollet [14], who proposes a measurement method for “intelligence” that looks similar to intelligence tests used for humans. Traditional AI has no other means of dealing with such intelligence tests than processing a learning set of sufficient size. This is the reason why the ARC challenge is still unresolved. In an intelligence test for humans, the humans find out the concept behind the challenges and apply it.

With our graph term model, we can do as humans do. Let {b_j → a} be the concept learned by studying the samples given in the ARC challenge. The choice function j describes the characteristics of the preconditions given in the ARC challenge; e.g., select all black squares where all four sides are bounded by green squares. These are then colored in yellow. Then,

{b_j → a}_j    (4)

describes the solution for any other selection of green and black squares observed in this context, selecting those colored in yellow according to the same choice function j. While learning (4) is hard, writing a Lambda program applying such a concept is easy. Why should we stick to the long and windy road of learning, when simple to use and well-known concepts are easy to program, based on the Lambda theorem?

4.2 Avoiding Racial or Gender Bias

Concepts that are programmed to eliminate bias are also quite straightforward. Basically, you only have to cross-check your result by exchanging choice functions; evaluation results should still be identical if the exchanged attribute has nothing to do with the business domain under scrutiny. For instance, this allows AI to help with hiring people. Another approach could be concepts that check training sets for equal representation of certain attributes in their choice functions, simply by counting how many males, females, or people of color are represented in the training set. Excess samples can then simply be omitted.
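As an illustration, the choice function and the recoloring concept described above can be sketched in a few lines of Python. The grid representation and all function names are our own assumptions for this sketch, not part of any actual ARC solver.

```python
# Sketch of a programmed concept for the ARC-style task described above.
# A concept pairs a choice function j (select the relevant objects) with
# an action (what to do with them). Grid cells are color strings; all
# names here are illustrative.

def choice_function_j(grid):
    """Select coordinates of black cells bounded by green on all four sides."""
    selected = []
    for r in range(1, len(grid) - 1):
        for c in range(1, len(grid[0]) - 1):
            if grid[r][c] == "black" and all(
                grid[rr][cc] == "green"
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            ):
                selected.append((r, c))
    return selected

def apply_concept(grid, choose, color="yellow"):
    """The concept {b_j -> a}_j: recolor every cell chosen by j."""
    out = [row[:] for row in grid]
    for r, c in choose(grid):
        out[r][c] = color
    return out

g = [
    ["green", "green", "green"],
    ["green", "black", "green"],
    ["green", "green", "green"],
]
result = apply_concept(g, choice_function_j)
assert result[1][1] == "yellow"   # the enclosed black cell is recolored
assert g[1][1] == "black"         # the input grid is left unchanged
```

Note that the same `apply_concept` works for any other grid observed in this context; only the choice function encodes the precondition, which is exactly the point of formula (4).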
4.3 Protecting Lives

More complex are concepts that are important for safety, for instance to protect lives that might be threatened by cyber-physical systems. Programming such concepts depends very much on a solid familiarity with the technical constraints and the business domain. However, you can Lambda-program concepts that force intelligent systems to act in a morally sound and uncompromising way. Such concepts might become an obligation through regulation in certain business domains, especially traffic with autonomous cars.

4.4 Learning Concepts

Other concepts can be learned by an intelligent system. For instance, by learning from the behavior and the wishes of a user, intelligent systems might be able to predict routes that fit the user’s preferences, overruling strict optimization that would select the fastest and most direct ones. Such a learned concept could compete with a fixed,


programmed concept, for instance if the driver of an autonomous car prefers to drive too fast through areas with speed limits. In such cases, the intelligent system must have a means of selecting the most appropriate concept. Concepts open a wide range of new challenges for requirements engineering. Who else should prescribe fixed Lambda Concepts if not the requirements engineer?
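As an illustration of such concept selection, here is a minimal Python sketch. The `Concept` structure and the priority rule (compulsory concepts always overrule learned ones) are illustrative assumptions of ours, not a published mechanism.

```python
# Sketch: selecting between a learned concept and a fixed, programmed one,
# as in the speed-limit example above. All names and the arbitration rule
# are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Concept:
    name: str
    applies: Callable[[dict], bool]   # choice function over the situation
    propose: Callable[[dict], float]  # proposed target speed in km/h
    compulsory: bool                  # programmed (law) vs. learned

speed_limit = Concept(
    name="respect speed limit",
    applies=lambda s: "speed_limit" in s,
    propose=lambda s: min(s["desired_speed"], s["speed_limit"]),
    compulsory=True,
)
driver_preference = Concept(
    name="learned driver preference",
    applies=lambda s: True,
    propose=lambda s: s["desired_speed"],
    compulsory=False,
)

def select_speed(situation, concepts):
    """Compulsory concepts always overrule learned ones; among the
    chosen concepts, the most restrictive proposal wins."""
    applicable = [c for c in concepts if c.applies(situation)]
    compulsory = [c for c in applicable if c.compulsory]
    chosen = compulsory if compulsory else applicable
    return min(c.propose(situation) for c in chosen)

s = {"desired_speed": 70.0, "speed_limit": 50.0}
assert select_speed(s, [driver_preference, speed_limit]) == 50.0
```

Without a speed limit in the situation, only the learned preference applies and the desired speed is used; with one, the programmed concept wins regardless of what has been learned.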

5 Continuous Requirements Engineering

Intelligent systems relying on AI with concepts require programmers. They deliver the concepts when needed, maintain them, and correct errors. This is not much different from traditional DevOps, with the exception that not only testing but also requirements elicitation and collection becomes a continual activity within the product lifecycle. Between the process of learning and that of developing concepts there is a direct, continuous feedback loop that makes sure new requirements are captured, recognized, evaluated, and possibly implemented. This is necessary as a safeguard against continuous learning that might jeopardize lawful and moderate behavior.

5.1 Collecting Feedback and User Reactions

The key to continuous requirements engineering is collecting feedback and user reactions. This is possible by instrumenting the product with usage statistics collectors. Whether some functionality is often repeated and used more frequently than others can be counted. Errors encountered or actions cancelled by the user also reveal difficulties or misunderstandings in the execution of some functions. Care must be taken that human users of an intelligent system consent to providing feedback and information about what they are going to do. Privacy protection overrules the wish for collecting feedback, in any case. However, if users trust the system’s manufacturer, knowing for what purposes the feedback is collected, they might give such consent.

5.2 Net Promoter® Score (NPS) for Interpreting Feedback

This immediately leads to the most popular solution currently available, namely, to build Net Promoter® Score (NPS) feedback into the product itself. This works when users are humans who consciously use the product, for instance an ADAS. It works less well if the intelligent product is hidden from the user, or possibly not used by a human at all, but rather by another machine.
However, in many cases such an NPS survey is built into the Ops part of a product’s DevOps cycle, as part of the pipeline. The principle of NPS is to learn whether a user is a promoter, neutral, or a detractor for some specific product or service. Promoters recommend the product with a score of 9 or 10 on a scale from 0 to 10; detractors select some value between 0 and 6. Additionally, respondents can indicate in some free text why they are giving the selected score. This information is very valuable for understanding their requirements for a given product.


The NPS score is calculated as the percentage of promoters minus the percentage of detractors, out of the total of all respondents. Respondents are classified into customer segments, such that a profile arises. With a profile, it is now possible to use SSTF to understand customer needs. The idea of how this is done is shown in Fig. 2:

[Figure: an SSTF matrix linking customer segments (rows) to customer needs (columns); the NPS per segment forms the input profile, the explained NPS the output profile, and the convergence gap measures how well the customer needs explain the scores.]

Fig. 2. Explain Customer Segments’ Scores by SSTF
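The NPS calculation and the convergence gap between the profile vectors y and Ax can be sketched in a few lines of Python. The scores, the needs-per-segment matrix A, and the candidate weighting x below are invented sample data for illustration only.

```python
# Sketch of the NPS calculation and the convergence gap described above.
# All numbers are invented sample data.

import math

def classify(score):
    if score >= 9:
        return "promoter"
    if score <= 6:
        return "detractor"
    return "neutral"

def nps(scores):
    """Percentage of promoters minus percentage of detractors."""
    n = len(scores)
    p = sum(1 for s in scores if classify(s) == "promoter")
    d = sum(1 for s in scores if classify(s) == "detractor")
    return 100.0 * (p - d) / n

def unit(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def convergence_gap(y, A, x):
    """Length of the difference between the unit profiles y and Ax."""
    ax = unit([sum(a * b for a, b in zip(row, x)) for row in A])
    yn = unit(y)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(yn, ax)))

# Observed NPS per customer segment (profile y)
y = [nps([9, 10, 7, 3]), nps([9, 9, 10, 8])]   # 25.0 and 75.0

# A[i][j]: how often segment i mentioned customer need j
A = [[4, 1], [2, 5]]
x = [0.4, 0.6]  # candidate weighting of the customer needs

gap = convergence_gap(y, A, x)
print(round(gap, 3))  # a small gap means the needs explain the scores well
```

Because both profiles are normalized to unit length before comparison, the gap depends only on the shape of the explanation, not on its scale.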

The first step is to guess the candidate customer needs. This is usually not difficult, as customers, especially promoters, tend to hint at their true needs when asked why they selected their score. The suspected customer needs should explain the recorded scores. To measure this, we simply count the frequency of mention. This method for analyzing the customer’s voice is well known from analyzing social media. First, you need to cluster the verbatims from the NPS free text to avoid duplicates, then to identify the customer segment from which each verbatim originates, and finally to understand whether the verbatim comes from a positive or negative context. Today’s AI-powered chatbots can do all this in natural language, and the score is no longer needed. Nevertheless, the scores will not disappear totally, because they are a means to get the consent of customers to give their opinion away. This is important in view of privacy protection. The convergence gap is the difference (in length) between the profile vectors y and Ax. Because profiles are unit-length vectors, they can be compared for determining the convergence gap. The convergence gap is thus an indication of how well the proposed customer needs explain the observed scores.

5.3 Importance and Satisfaction with a Product

The total number of mentions of a specific customer need is a measure of its importance, per customer segment. The positive or negative mention, however, is indicative of satisfaction with the product. Thus, we get two SSTF, I for the importance count, and S for


satisfaction. Both should ideally have a low convergence gap, but that is not always the case, especially not if the number of surveys is too low. Let S′ be the transpose of S. The matrix SS′ is symmetric and, if positive, or otherwise complying with the prerequisites of the theorem of Perron-Frobenius [15, p. 359ff], it can be solved with the eigenvector method. That might work even if some of the satisfaction scores are negative. If all that is the case, and the satisfaction matrix does not become negatively determined because dissatisfaction is too big, then the explanation profiles gained from both the importance and the satisfaction SSTF can be combined. Typically, when importance is low but dissatisfaction high, the total priority profile should reflect that and increase the value, to compensate for dissatisfaction. For details, consult the book Managing Complexity [15, p. 117].

5.4 Automatization Within the DevOps Cycle

Except for requirements traceability, applied to data movements, the process is fully automatable in a DevOps cycle; see Fig. 3.

[Figure: a DevOps cycle (Create, Build, Deploy, Operate, Analyze, Sanitize) with continuous requirements engineering: user values and requirements as user stories feed the Create stage, while feedback from usage counts, Net Promoter scores, support tickets, and learnings is collected during operations.]

Fig. 3. Continuous Requirements Engineering with Usage Counts, NPS, and Support Tickets

What is not automatable, in turn, is finding suitable user stories when the convergence gap of the effectiveness SSTF starts diverging. It is likely that this will happen, because users learn together with the intelligent systems and thus change their NPS rating. Whenever this happens, it is an indication of changing customer needs, and it is the task of requirements engineers to perform requirements elicitation based on the new customer needs. The three additional requirements capture steps (usage count statistics, NPS, and support ticket analysis) become part of the DevOps pipeline. Chatbots handle support


tickets where possible, and with human supporters, transcripts can make sure valuable information is captured, scanned, and made available for further analysis. Both chatbots and human supporters can easily distinguish between promoters and detractors. Requirements elicitation remains manual work by requirements engineers. Upcoming steps might include a new release of the knowledge base for our intelligent systems, be it by learning new kinds of objects, or entities, or by learning or acquiring new concepts from programmers. NPS is already part of many product lifecycle processes. However, with software-intense products, we need NPS as an additional step in the DevOps pipeline. For some products, there are no human users available who can fill out NPS surveys. In this case, AI can scan social media to gather the required feedback.

6 Requirements Elicitation

Requirements elicitation refers to how new or changed customer needs are transferred into functionality. For this, we need to understand how to represent functionality in the software process and how to formulate requirements for intelligent systems that recognize objects and use concepts to handle actions in the physical world accordingly.

6.1 Modeling Functionality

Intelligent products need a model of their functionality. The model of choice is the ISO/IEC 19761 COSMIC standard [16], because it identifies objects of interest – thus the representation of the world objects the intelligent system is working on – and the data groups that are moved across these objects. At the same time, the model also allows for counting functionality – sizing functionality is a precondition for using Six Sigma Transfer Functions (SSTF) for transforming user needs and user feedback into requirements for product features. The standard ISO/IEC 14143 is needed to define the granularity of the modelled functionality in a consistent manner [17], notwithstanding that intelligent systems often rely on training and not only on programmed algorithms. The details can be found in the book Managing Complexity [15]. For this paper, we only need the basics, namely the existence of four kinds of objects – functional processes, persistent data, devices, and other applications – and the four kinds of data movements between them: Entry, eXit, Read, and Write. This is exemplified in the Data Movement Map shown in Fig. 4. The fact that each data movement also carries information about the data group moved is not shown in a data movement map. The “Other App” in Fig. 4 might represent some AI model, e.g., a neural network that has been trained to tag objects in video sequences recorded by yet another device, for instance a camera.
The data group sent from the “Functional Process” in turn might be used to calculate and select some other specific data group among those processed by “Other App”. Functionality is identified by the data groups moved and measured by counting the number of data movements within a certain application, as shown in the bar on top of Fig. 4. The granularity adopted here is not indicative of all technical details but of the level of detail that we want as requirements. When working with


[Figure: Data Movement Map with the objects Functional Process, Device Data Log, Sensor, Actuator, and Other App; 3 Entry (E) + 2 eXit (X) + 1 Read (R) + 1 Write (W) = 7 CFP. Data movements: 1. Move some Data; 2. Restore Past Data; 3. Move some Data into Other App; 4. Show Data from Other App; 5. Move Data to Actuator; 6. Action Completed; 7. Remember Data.]

Fig. 4. Sample Data Movement Map According to ISO/IEC 19761

predefined objects such as sensors and actuators, or an AI model, we do not want detailed requirements for everything inside these objects. This might be left to another stage in the product development process, or to the supplier.

6.2 Functional Effectiveness

Values, such as customer needs, are not yet requirements. A requirement is a value transferred to some specific use case. In today’s software and product development environment, it most often takes the form of a user story. It expresses an expectation that the user of a system has: that the system does something with some defined quality and for fulfilling some specific goal. Whether such a system is intelligent or not is not decisive. Using ISO/IEC 19761 COSMIC, it can be measured whether a set of user stories meets customer needs. Since functionality is measured in data movements between objects of interest, and the data movements are known for each user story, all that is needed is requirements traceability in the form of assigning user values to each data movement. This means that whenever a data movement is implemented, it must have at least one justification by the customer needs. If there is no need to fulfil, the data movement is probably not necessary. With this method, we get yet another SSTF matrix, by counting the number of data movements that are associated with some specific customer need, and doing this for every user story. The resulting matrix again is positive and thus has an eigenvector solution explaining the importance profiles of the user stories. This can be used to identify user stories that can be omitted, or to help clear disputes that might arise during sprint planning within the development team. The series of standards that explains how to use SSTF for functional effectiveness, and many more statistical methods in product development, is ISO 16355 [18].
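As an illustration of this counting approach, the following Python sketch builds a small traceability matrix (data movements per user story, justified by customer needs) and derives an importance profile via the eigenvector of the resulting positive matrix. The user stories, needs, and counts are invented sample data.

```python
# Sketch of functional effectiveness as described above: count data
# movements per user story, trace each to customer needs, and derive an
# importance profile as the principal eigenvector of the positive
# matrix M M'. All data is invented for illustration.

import math

# M[i][j] = number of data movements in user story i justified by need j
M = [
    [3, 1, 0],   # story "log sensor data"
    [1, 2, 2],   # story "trigger actuator"
    [0, 1, 3],   # story "notify user"
]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def principal_eigenvector(A, iterations=100):
    """Power iteration; converges for positive matrices (Perron-Frobenius)."""
    v = [1.0] * len(A)
    for _ in range(iterations):
        w = matvec(A, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Symmetric positive matrix over user stories
MMt = [[sum(a * b for a, b in zip(r1, r2)) for r2 in M] for r1 in M]
profile = principal_eigenvector(MMt)
# profile[i] is the relative importance of user story i; a story with a
# near-zero weight is a candidate for omission
print([round(p, 2) for p in profile])
```

Power iteration is used here only as the simplest way to obtain the eigenvector solution; any linear algebra library would serve equally well.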


6.3 Embracing Change

With intelligent systems, both parties, users and systems, learn continuously. The customers’ needs will change, and consequently the functionality must adapt as well. Intelligent systems need a DevOps lifecycle to keep learning what users want and what keeps them happy. Continuous requirements elicitation is a software process that should go into a future version of the SPI manifesto [11], when it becomes updated for the world of 4th generation intelligent software-intense products. Most of the process steps needed for embracing change and requirements elicitation can be automated and become part of a DevOps pipeline: usage statistics, the NPS survey, and support tickets handled by either chatbots or support centers, or both.

6.4 Requirements for Intelligent Systems Using Concepts

We therefore need to specify not only the objects that our intelligent systems must be able to recognize, but also the concepts that they must be able to learn and especially those that are compulsory, most often the result of Lambda programming.

Table 1. Requirements for Intelligent Systems

Type: What objects should it recognize?
  Requirement: • Classical Artificial Intelligence • Pattern Recognition • Tagging scenarios & objects
  Technical Solution (Training Models): • Typical samples from a typical world • DL, continuous updates • Neural networks

Type: What concepts should it learn?
  Requirement: • Some concepts are easier to learn than being programmed • Continuously adapting to the user’s behavior and preferences
  Technical Solution (Learning Concepts): • Learning to perform actions typical for the world they are built for • Continuously adapting by collecting and evaluating experiences

Type: Which concepts are compulsory?
  Requirement: • Safety • Security • Legal rules
  Technical Solution (Lambda Concepts): • Reacting to specific scenarios • Predefining behavior • Compulsory decisions

Thus, requirements are classified into three categories that correspond to three different technical solutions. Compulsory concepts can be identified with FUR (functional user requirements) for traditional programs, i.e., deterministic functionality, while type 1 and type 2 requirements have a significantly higher degree of indeterminacy and in turn are more difficult to validate. Table 1 shows the three categories of requirements that should be clearly identified and distinguished in the requirements elicitation process. Traditional functional requirements are always in category 3, as no level of uncertainty in implementation is allowed there. Functional tests can – strictly speaking – only be performed for category 3 requirements; however, the other categories shall be subject to Autonomous Real-time Testing, as explained in last year’s paper [19] and the book [20].


7 Limitations

The current paper presents an extension of classical requirements engineering for intelligent, AI-driven systems, with a focus on cyber-physical systems that can cause harm to the physical environment. Concept programming for AI is not yet available, but there is research underway that focuses on moral self-correction and other ideas, such as concepts for intelligent systems [12]. For requirements engineers, it is not important how engineers solve their problems and implement intelligent systems. What matters is that strict requirements are clearly stated and formulated, even if the system is intelligent and able to learn and adapt its behavior.

8 Conclusions

We have shown what it entails to program concepts for intelligent systems. It basically requires using the graph model for representing knowledge [4] and making Lambda programming available for AI. Intelligent systems require special treatment for implementing strict and compulsory concepts. This also solves the problem of how intelligent systems can be registered with authorities as safe and secure. The EU is preparing such regulations in the framework of cyber-resilience, which call for Lambda Concepts in intelligent systems. The SPI manifesto [11] should be adapted to intelligent and self-learning products of the 4th industrial revolution and enhanced by specifying both requirements engineering and testing for software relying on AI or exhibiting self-modifying behavior through DL.

References

1. Barendregt, H.P.: The type-free lambda-calculus. In: Barwise, J. (ed.) Handbook of Mathematical Logic, vol. 90, pp. 1091–1132. North-Holland, Amsterdam (1977)
2. Butler, S. (ed.): Homer, The Iliad, vol. 18. Longmans, Green and Co., London, New York and Bombay (1898)
3. Engeler, E.: Algebras and combinators. Algebra Universalis 13, 389–392 (1981)
4. Fehlmann, T.M., Kranich, E.: A general model for representing knowledge – intelligent systems using concepts (2023, preprint). https://web.tresorit.com/l/mCAQY#ALX3kSa3fAifnE0lB5Na6w. Accessed 16 June 2023
5. Barendregt, H., Barendsen, E.: Introduction to Lambda Calculus. University of Nijmegen, Nijmegen (2000)
6. Turing, A.: On computable numbers, with an application to the Entscheidungsproblem. Proc. Lond. Math. Soc. 42(2), 230–265 (1937)
7. Engeler, E.: Neural algebra on “how does the brain think?” Theoret. Comput. Sci. 777, 296–307 (2019)
8. Engeler, E.: The Combinatory Programme. Birkhäuser, Basel (1995)
9. Ganguli, D., et al.: The capacity for moral self-correction in large language models. arXiv:2302.07459 [cs.CL] (2023)
10. Parrish, A., et al.: BBQ: a hand-built bias benchmark for question answering. arXiv:2110.08193 [cs.CL] (2021)
11. Korsaa, M., et al.: The SPI manifesto and the ECQA SPI manager certification scheme. J. Softw. Evol. Process 24(5), 525–540 (2012)


12. Zhong, V., Mu, J., Zettlemoyer, L., Grefenstette, E., Rocktäschel, T.: Improving policy learning via language dynamics distillation. arXiv:2210.00066 [cs.LG] (2022)
13. Pfeifer, R.: Lab42, Mindfire AG, Pfäffikon (2023). https://lab42.global/. Accessed 22 Feb 2023
14. Chollet, F.: On the measure of intelligence. arXiv:1911.01547 [cs.AI] (2019)
15. Fehlmann, T.M.: Managing Complexity – Uncover the Mysteries with Six Sigma Transfer Functions. Logos Press, Berlin (2016)
16. ISO/IEC 19761: Software engineering – COSMIC: a functional size measurement method. ISO/IEC JTC 1/SC 7, Geneva (2019)
17. ISO/IEC 14143: Information technology – Software measurement – Functional size measurement – Part 1: Definition of concepts. ISO/IEC JTC 1/SC 7, Geneva (2019)
18. ISO 16355: Applications of Statistical and Related Methods to New Technology and Product Development Process – Part 1: General Principles and Perspectives of Quality Function Deployment (QFD). ISO TC 69/SC 8/WG 2 N 14, Geneva (2021)
19. Fehlmann, T.M., Kranich, E.: Designing and testing cyber-physical products – 4th generation product management based on AHP and QFD. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) EuroSPI 2022. CCIS, vol. 1646. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_26
20. Fehlmann, T.M.: Autonomous Real-time Testing – Testing Artificial Intelligence and Other Complex Systems. Logos Press, Berlin (2020)

Consistency of Cybersecurity Process and Product Assessments in the Automotive Domain

Christian Schlager1(B), Richard Messnarz2, Damjan Ekert2, Tobias Danmayr2, Laura Aschbacher2, Almin Iriskic1, Georg Macher1, and Eugen Brenner1

1 Technical University of Graz, Graz, Austria

[email protected] 2 ISCN, Graz, Austria [email protected] https://www.tugraz.at/institute/iti/home/, http://www.iscn.com

Abstract. A modern car is like an IT network. Car makers have become IP service providers, and each car has a gateway server with a fixed IP address. Gateway servers are connected to domain controllers, and each domain controller has a subnet of ECUs. An ECU (Electronic Control Unit) represents an embedded system integrating electronics, sensors, software, and actuators. Such an IT-service- and communication-based architecture makes the vehicle vulnerable to attacks from outside. The UN (United Nations) reacted to this situation and published the UN 155 regulation for Cybersecurity Management Systems and UN 156 for Software Update Management Systems in automotive. This paper discusses which assessments and audits the automotive industry has implemented to address the requirements of UN 155 and UN 156 and illustrates recent research done to link the different types of assessments and audits for cybersecurity more closely. These different types of assessments and audits can be supported by the tool Capability Advisor (CapAdv).

Keywords: assessment · quality · security · safety · process

1 Introduction

To comply with UN 155 [36] and UN 156 [37], car makers, for instance in Europe, must be able to show that they have developed a complete cybersecurity case [3, 7, 20, 33, 35]. To provide clear guidance for Cybersecurity Management Systems (CSMS) under the UN 155 regulation, the ISO 21434 [15] norm was developed and released in August 2021. To provide guidance for the UN 156 regulation, the ISO 24089 [13] norm for a SUMS (Software Update Management System) was published in February 2023. The Upstream reports from the European Agency for Cybersecurity show an exponential growth of cybersecurity attacks on vehicles, so that UN 155 [36] and UN 156 [37] are seen as mandatory for a type approval on the European market. Based on UN 155 and ISO 21434, the German automotive association then developed an ACSMS audit [38]

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 343–355, 2023. https://doi.org/10.1007/978-3-031-42307-9_24


book and an Automotive SPICE for Cybersecurity assessment model. While the cybersecurity management system audit is done for the organization every 3 years, the Automotive SPICE for Cybersecurity assessment is applied to each project which has interfaces to the outside that can be maliciously attacked by hackers [1]. In fact, 3 types of checks (see Fig. 1) have been agreed and are explained in the cybersecurity training materials of INTACS:

1. ACSMS (Automotive Cybersecurity Management System) Audit [7, 21–23, 38]
2. Automotive SPICE (ASPICE) for Cybersecurity Assessment [4, 11, 12, 16–19, 25, 29, 32, 39–41]
3. ISO 21434 Product Assessment [15]

Fig. 1. Overview of Audits

2 Types of Audits

2.1 Automotive SPICE for Cybersecurity

In 2021, an extension of the existing Automotive SPICE 3.1 assessment model [41] was published which included new cybersecurity-related processes to be assessed:

– MAN.7 Cybersecurity Risk Management
– SEC.1 Cybersecurity Requirements Elicitation
– SEC.2 Cybersecurity Implementation


– SEC.3 Cybersecurity Verification
– SEC.4 Cybersecurity Validation
– ACQ.2 Supplier Request and Selection

Figure 2 highlights the relationships between the processes MAN.7 and SEC.1 to SEC.4. MAN.7 assesses the management of the TARA (Threat and Risk Analysis) and results in cybersecurity goals and risk treatment decisions [26].

Fig. 2. TARA

SEC.1 receives the cybersecurity goals as top-level requirements and refines them (also considering threat modelling at different layers) into cybersecurity system, software, and hardware requirements [6, 20]. SEC.2 refines the architecture, integrates cybersecurity controls [5, 24–26, 30], and implements them. SEC.3 plans and implements the cybersecurity verification; the cybersecurity requirements are tested. SEC.4 plans and implements a validation of the product by simulating hackers (e.g., by penetration testing by an independent expert team). In each of the processes SEC.1 to SEC.4, further vulnerabilities and risks might be identified, which leads to an update of the TARA (Threat and Risk Analysis). In an ASPICE for Cybersecurity assessment, the processes are rated at capability levels 1 to 3:

1. Performed
   – Process Attribute (PA) 1.1: Process Performance
2. Managed
   – PA 2.1 Performance Management
   – PA 2.2 Work Product Management
3. Established
   – PA 3.1 Process Definition
   – PA 3.2 Process Deployment

The ASPICE model is known worldwide, and its rating scale is published in ISO 33020 [14] as a measurement framework. Applying ISO 33020, each PA (Process Attribute) is rated as N(ot), P(artially), L(argely), or F(ully) achieved, and ISO 33020 contains


Fig. 3. Levels for ASPICE [39]

a table which explains how to aggregate process attribute ratings into a capability level. To reach a level ≥ 2, the process attribute(s) of the levels below shall be rated F (see Fig. 3). The NPLF ratings are used by assessors to assess processes according to the predefined scope. The scope of the processes assessed is agreed between the assessor and the person responsible for the project before the assessment starts. The scope of the assessment depends on the scope of the project and may contain the processes of the VDA (Verband der Automobilindustrie) scope. One result of the assessment is the capability profile; an example of a capability profile is shown in Table 1. Currently, the minimum expectation of all car makers is that suppliers achieve at least capability level 1 (and no level 0) in each cybersecurity process, and premium car makers already check for the achievement of capability level 2 in all cybersecurity processes.

Table 1. Capability Profile

Process   PA 1.1   PA 2.1   PA 2.2   PA 3.1   PA 3.2   Level
MAN.7     F        L        F        X        X        2
SEC.1     F        L        F        X        X        2
SEC.2     L        L        F        X        X        1
SEC.3     L        L        F        X        X        1
SEC.4     L        L        F        X        X        1
ACQ.2     F        L        F        X        X        2
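The aggregation rule quoted above can be sketched in a few lines of Python. The rule encoded here (level N requires the level-N attributes to be at least Largely achieved and all lower-level attributes Fully achieved) is a simplified reading of ISO 33020, and the function is purely illustrative, not part of any assessment tool.

```python
# Sketch of the ISO 33020 aggregation rule described above: a process
# achieves level N when all process attributes below level N are rated
# F(ully) and the attributes of level N itself are at least L(argely).
# Simplified to the levels shown in the capability profile; "X" marks
# attributes that were not rated.

LEVEL_PAS = {1: ["PA1.1"], 2: ["PA2.1", "PA2.2"], 3: ["PA3.1", "PA3.2"]}
AT_LEAST_L = {"L", "F"}

def capability_level(ratings, max_level=3):
    level = 0
    for n in range(1, max_level + 1):
        pas = [ratings.get(pa, "N") for pa in LEVEL_PAS[n]]
        below_fully = all(
            ratings.get(pa) == "F" for m in range(1, n) for pa in LEVEL_PAS[m]
        )
        if all(r in AT_LEAST_L for r in pas) and below_fully:
            level = n
        else:
            break
    return level

# Rows of the capability profile in Table 1
assert capability_level({"PA1.1": "F", "PA2.1": "L", "PA2.2": "F"}) == 2  # MAN.7
assert capability_level({"PA1.1": "L", "PA2.1": "L", "PA2.2": "F"}) == 1  # SEC.2
```

SEC.2 stays at level 1 even though its level-2 attributes are at least L, because PA 1.1 is only Largely achieved, which matches the rule that all attributes below the target level must be Fully achieved.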

2.2 Automotive Cybersecurity Management System (ACSMS) Audit

The Automotive Cybersecurity Management System audit is an organizational audit which is done every year by the IATF 16949 auditor using the checklist from the VDA red book. Either the IATF 16949 [10] auditor has cybersecurity skills, or he appoints a cybersecurity expert who joins the audit team. The set of checkpoints to be applied


Fig. 4. Example ACSMS Audit Question

are defined in the ACSMS audit book [38] published by the VDA in December 2020. The questions in that book assess whether the organization supporting all projects (and covering the entire life cycle) has established a cybersecurity management system. The result is an audit protocol with either no deviations or with deviations. In case of no deviation, a CSMS certificate is issued for the organization; in case of deviations, an action plan for improvement is established (Fig. 4).

2.3 ISO 21434 Product Assessment

The ISO 21434 product assessment involves independent cybersecurity experts who check the work products. In the case of automotive, the work products to be checked relate to embedded systems [20]. The ISO norms and UN regulations do not specify the skills of these experts, and therefore the state of the art is used as an argument; i.e., experts with proven cybersecurity experience check the work products against the state of the art. Currently, the state-of-the-art content of work products is being discussed in working parties, and this term will evolve; i.e., the state of the art can change over time.

Example 1 – Work Product Review Checklist: MAN.7 – Work Product: TARA (Threat Assessment and Risk Analysis). Expected content based on the state of the art (example review checklist from the SOQRATES [34] working group):

– The assets shown in the cybersecurity item shall be consistent with all assets shown in the TARA.
– The outcome of threat models at item level, system level, software level, and hardware level has been considered in the TARA.
– For each threat related to an asset, a damage scenario has been identified.
– The threats considered are derived from a known method such as STRIDE, CVSS, HEAVENS, etc.
– For the damage scenario rating, a mapping to the rating attributes defined in ISO 21434 Annex F has been done.
– For the attack feasibility rating, a mapping to the rating attributes defined in ISO 21434 Annex G has been done.
– For the risk rating, a defined method to combine the damage scenario rating and the attack feasibility rating has been applied.


C. Schlager et al.

– For each risk treatment decision (reduce the risk, avoid the risk), a cybersecurity goal has been specified.
– For all cases with the risk treatment decision "share the risk", a risk list has been created and jointly reviewed with the customer.
– Etc.

For each work product, specific review checklists are applied. Deviations are documented and must be solved before the start of production. If they cannot be solved, a residual risk report needs to be submitted to the customer.

Example 2 - Work Product Review Checklist: SEC.1 - Work Product: 17–11 Software Requirements Specification

Expected content based on the state of the art (example review checklist from the SOQRATES [34] working group):

– Are the system cybersecurity requirements linked to cybersecurity goals which are an output of the TARA?
– Do the cybersecurity system requirements relate to a cybersecurity control (as a standard syntax, is a consistent naming convention used according to state-of-the-art concepts such as STRIDE, CVSS, HEAVENS, etc.)?
– Are system cybersecurity requirements decomposed into cybersecurity software and hardware requirements?
– Are cybersecurity software requirements assigned to software features/components in the cybersecurity stack of the software system?
– Are cybersecurity software requirements related to software modules and interfaces in the software architecture?
– Are cybersecurity software requirements consistent with the automotive-specific AUTOSAR version and integration manual?
– Are there cybersecurity hardware requirements clearly outlining the processor family and HSM (Hardware Security Module) functionality?
– Are there requirements for the crypto stack concerning the functions to be supported?
  – Key management [9]
  – MAC (Message Authentication Code) [9]
  – Signature
  – Symmetric encryption
  – Asymmetric encryption
  – Hashing
  – Random generator with seed key
  – One-Time Password (OTP)
  – Registers used for storing the keys
– Are there threat models at system level which led to the definition of cybersecurity system requirements?
– Are there threat models at software level which led to the definition of cybersecurity software requirements?
– Are there threat models at hardware level which led to the definition of cybersecurity hardware requirements?
– Can the project demonstrate that cybersecurity experts/engineers participated in the elicitation and review of the requirements?
– Etc.
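One checklist item in Example 1 asks for a defined method that combines the damage scenario rating and the attack feasibility rating into a risk rating. The following sketch shows the shape of such a combination; the additive 1–5 matrix is purely illustrative and is not the normative ISO 21434 mapping, which each project defines for itself:

```python
# Illustrative risk-rating combination (NOT the normative ISO 21434 matrix).
# Each project substitutes its own agreed damage/feasibility scales and matrix.
IMPACT = ["negligible", "moderate", "major", "severe"]          # Annex F style scale
FEASIBILITY = ["very low", "low", "medium", "high"]             # Annex G style scale

def risk_value(impact: str, feasibility: str) -> int:
    """Combine the two ratings into a 1..5 risk value (higher = riskier)."""
    i = IMPACT.index(impact)            # 0..3
    f = FEASIBILITY.index(feasibility)  # 0..3
    # Simple additive matrix clipped at 5; a reviewer checks that some such
    # defined, documented method was applied consistently across the TARA.
    return min(5, i + f + 1)
```

A reviewer applying the checklist would verify that the project's matrix, whatever its exact values, is documented and applied uniformly to every threat/damage-scenario pair.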

Consistency of Cybersecurity Process and Product Assessments


3 Implementation of Links Between Product Evaluation and Project Assessment

3.1 Background

While evaluating processes in Automotive SPICE for Cybersecurity is a very formal procedure, evaluating product quality involves technological considerations and work-product-related review checklists. This section publishes results from a recent PhD work showing how the ASPICE for Cybersecurity [41] and ISO 21434 [15] work product assessments are related and how to track the consistency between both. The assumption is also that by linking them, time can be saved by taking over knowledge gathered across both types of assessment.

The two assessment types (Automotive SPICE for Cybersecurity and ISO 21434 Product Assessment) differ in their procedure. While Automotive SPICE for Cybersecurity follows a very formal approach to assess processes for the development of embedded systems [39], the ISO 21434 Product Assessment focuses on the technical aspects of the product. In order to enhance one type of assessment by the other, it is useful to find out what the basis for their assessment result is. Performing process assessment and product assessment shows that most of the questions are about the defined work products, such as 17–11 Software Requirements Specification. The questions for the two types of assessment differ, however. Therefore, the process assessment model (PAM) defined by [39] is extended by the entities Attribute and Question. These newly introduced entities are marked in color in Fig. 5; the entities shown with a white background do not differ between the two types of evaluation. The new entity types have been introduced to capture the different questions for the two types of assessment.

Fig. 5. Extended Relationship Model
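The extension of Fig. 5 can be sketched with illustrative data structures: a work product carries attributes, and each attribute carries questions tagged by assessment type. All names and question texts below are assumptions for illustration, not taken from the actual PAM or the Capability Adviser code base:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Question:
    text: str
    kind: str  # "process" (ASPICE for Cybersecurity) or "product" (ISO 21434 review)

@dataclass
class Attribute:
    name: str
    questions: List[Question] = field(default_factory=list)

@dataclass
class WorkProduct:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

    def questions_of(self, kind: str) -> List[str]:
        """Filter the attached questions by assessment type."""
        return [q.text for a in self.attributes
                for q in a.questions if q.kind == kind]

# Hypothetical example: one work product holding both question types.
tara = WorkProduct("TARA", [Attribute("Evaluation Questions", [
    Question("Has a damage scenario been identified for each asset?", "product"),
    Question("Is cybersecurity risk management (MAN.7) performed?", "process"),
])])
```

Separating the two question kinds on the same work product is what lets one data model serve both assessment views.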


3.2 Discussion

The implementation of the additional view is based on the existing Capability Adviser assessment portal [8, 26–29]. In cooperation with TU Graz, a software architectural analysis took place to analyze the combined approach and come up with a procedure that (1) allows to integrate the views and (2) delivers an assessment approach that actually takes less time. The following two approaches have been discussed:

1. Ask product questions and process questions in the same assessment tool.
2. Allow product experts to use their known product review checklists and provide an interface to enter findings into the assessment system. Findings would directly propagate to base practices and facilitate the process assessment.

3.2.1 Approach 1

Advantage:
• Product assessment and process assessment in one system with one interface.

Disadvantages:
• Product experts use review checklists and ready-to-use confirmation review forms. They are not trained as process assessors; they are rather acknowledged technical cybersecurity experts who can judge the vulnerability of a software, hardware, or system.
• If both enter ratings, how can two different ratings later be integrated into one?
• Asking each product checklist question separately in a process assessment tool takes a lot of effort and time. While process assessors work this way, product experts rather challenge the product first and then return to the checklist (at technical detail level) and enter the test/examination results.

3.2.2 Approach 2

Advantages:
• The product assessors still use review and confirmation checklists, which means that the product experts work as usual, in a work environment they already know. Process assessors work as before in the process assessment system Capability Adviser. So both can keep the mode of work they are used to.
• The time for the process assessments can be reduced by the following findings propagation algorithm and procedure.
Product experts enter the findings of their reviews in defined work product comment sections of the Capability Adviser portal system. The work products in the tool are linked to specific base practices, and the tool automatically propagates the findings to the weakness section of those base practices. The propagated findings are displayed with a "product finding" tag. This means that process assessors see the weaknesses immediately once they open the process and base practices view, which allows them to focus the interviews and save time.
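The propagation step just described can be sketched as follows; the link table, identifiers, and finding texts are invented for illustration and do not reflect the actual Capability Adviser implementation:

```python
# Minimal sketch of findings propagation: product review findings attached to
# work products are copied, tagged "product finding", into the weakness list
# of every base practice linked to that work product.

WP_TO_BP = {  # hypothetical work product -> base practice link table
    "17-11 SW requirements spec": ["SEC.1.BP1", "SEC.1.BP2"],
    "TARA": ["MAN.7.BP3"],
}

def propagate(findings: dict) -> dict:
    """findings: work product name -> list of review finding texts."""
    weaknesses: dict = {}
    for wp, notes in findings.items():
        for bp in WP_TO_BP.get(wp, []):
            for note in notes:
                weaknesses.setdefault(bp, []).append(f"[product finding] {note}")
    return weaknesses

w = propagate({"TARA": ["damage scenario missing for asset 'vehicle CAN key'"]})
```

The same link table, traversed in the opposite direction, supports the reverse flow mentioned later in the paper: exporting process-assessment weaknesses as input to the product reviews.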


• The process assessment portal only provides a process rating and no product rating, so there is no mix of two rating scales. Each rating scale stays consistent (product reviews show deviations; process assessments show process ratings separately).

Disadvantage:
• Product reviewers would not like to manually enter findings into the work products table of the process assessment portal. Here, an update of the review checklists is required to provide an interface file that can be imported into the process assessment portal.

3.3 Conclusion

Approach (2) will be implemented because only (2) offers time savings, does not lead to a conflict of two rating scales, and allows a meaningful content mapping supporting a process assessor. The team also considered that process assessments might take place before product assessments. If a process assessment takes place first, the weaknesses of the base practices, based on their links to work products, can be used to generate an output file that serves as an input to the product reviews, leading in this case to less review time for the product experts.

3.4 Implementation

Fig. 6. Extension CapAdv

The software team defined a database concept as shown in Fig. 6, an implementation of an ER model in the Capability Adviser database. The tables in Fig. 6 are based on an extension of the existing database model of the Capability Adviser assessment system. The work products table includes the ASPICE 3.1 and ASPICE for Cybersecurity


process assessment model work products. The attribute Evaluation Questions contains the product-related questions (see the Work Product table in Fig. 6). Note: since in approach (2) the tool is not used to rate each question (this is still done in the review checklists external to the tool), this is informative content and not a set of additional practices or questions to rate. The Wp pc table is the relationship table between the work products and the base practices of the assessment model. The Wp results table creates a relationship between the work product (WP id), a specific assessment (Assessment id), a specific assessor (Assessor id), an instance (if an assessment with process instances has been created), a flag indicating whether it was already checked in a product review (0/1), and a comment attribute holding all findings/deviations imported from the review. If comments are available in the work products table, process assessors can decide per base practice whether to take over the findings into the weakness section of that base practice.
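Assuming a relational backend, the tables just described can be sketched in portable SQL; the column names are paraphrased from the description in the text and are not the actual Capability Adviser schema:

```python
import sqlite3

# In-memory sketch of the Fig. 6 table extension (illustrative names).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE work_products (
    wp_id TEXT PRIMARY KEY,
    name TEXT,
    evaluation_questions TEXT        -- product-related questions (informative)
);
CREATE TABLE wp_pc (                 -- work product <-> base practice links
    wp_id TEXT REFERENCES work_products(wp_id),
    bp_id TEXT
);
CREATE TABLE wp_results (            -- one row per work product per assessment
    wp_id TEXT, assessment_id TEXT, assessor_id TEXT, instance TEXT,
    product_reviewed INTEGER,        -- 0/1: already checked in a product review
    comment TEXT                     -- findings/deviations imported from the review
);
""")
con.execute("INSERT INTO work_products VALUES ('17-11', 'SW requirements spec', 'see checklist')")
con.execute("INSERT INTO wp_pc VALUES ('17-11', 'SEC.1.BP1')")
con.execute("INSERT INTO wp_results VALUES ('17-11', 'A1', 'ASR1', NULL, 1, 'no threat model link')")

# The query a process assessor needs: product findings per base practice.
rows = con.execute("""
    SELECT p.bp_id, r.comment
    FROM wp_results r JOIN wp_pc p USING (wp_id)
    WHERE r.product_reviewed = 1
""").fetchall()
```

The join over the wp_pc link table is what turns a work-product-level finding into a base-practice-level weakness candidate.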

4 Conclusion

Section 2 describes the two different assessment types which are used to assess products in the automotive domain that also have to consider cybersecurity. While the assessment type Automotive SPICE for Cybersecurity follows the procedure defined in [27, 28, 39–41], the assessment type ISO 21434 Product Assessment considers technical aspects of the product to be developed (Fig. 7).

Fig. 7. Work Products and Base Practices [28]

The team of TU Graz and ISCN is currently implementing the proposed approach, which will allow entering findings from product reviews into the assessment portal and automatically copying the findings as product-related weakness inputs into the weakness sections of the base practices. The process assessors then decide whether or not to consider them in the process rating. This will lead to consistency between process ratings and product findings, to a quicker overview of current weaknesses, and to a facilitation of the work of process assessors. It will also work the other way round: if a process assessment takes place first, the weaknesses of practices are related to work products and can be exported to form an input for the work product reviewers. Tools supporting Automotive SPICE assessments show the base practices and display the work products [8, 26–29]. Base practices (for PA 1.1) come with rating rules and recommendations [40], and assessors need to document weaknesses and strengths. If a weakness is identified, the Fully rating


Fig. 8. Rating and Commenting of Practices [28]

might be changed to a Largely (or even lower) rating, and the weakness is documented (Fig. 8). Currently, the research work at TU Graz, in cooperation with ISCN, elaborates the work product review checklists together with an experience exchange and innovation group of EuroSPI comprising leading automotive suppliers and electronics companies [2, 34], and links that content in the tool to the work products and practices. This leads to a more consistent assessment and allows understanding the evaluation from both views: a process view and a detailed product view.

5 Relation to SPI Manifesto

The SPI Manifesto [31] defines the following principles, which are considered in this paper:

1. Know the culture and focus on needs: the assessment must not last longer with cybersecurity in scope.
2. Motivate all people involved: if work products are used by more than one process, the time for assessment is reduced.
3. Create a learning organization: the mapping of work products reduces the time for assessment.

The proposals of Sect. 3 help assessors to reduce the time for performing an assessment. The time reduction grows with the number of performed assessments.

References

1. Ahmad, F., Adnane, A., Franqueira, V., Kurugollu, F., Liu, L.: Man-in-the-middle attacks in vehicular ad-hoc networks: evaluating the impact of attackers' strategies. Sensors 18 (2018). https://doi.org/10.3390/s18114040
2. Biro, M., Messnarz, R.: Key success factors for business based improvement. In: Proceedings of the EuroSPI 1999 Conference, Pori (1999). (Pori School of Technology and Economics, Ser. A, 25)


3. Brennich, T., Moser, M.: Automotive Security auf dem Prüfstand. ATZelectronics, 48–53 (2020)
4. Cheng, B., Doherty, B., Polanco, N., Pasco, M.: Security patterns for connected and automated automotive systems. Autom. Softw. Eng. 1(1), 51–77 (2021). https://doi.org/10.2991/jase.d.200826.001
5. Dobaj, J., Ekert, D., Stolfa, J., Stolfa, S., Macher, G., Messnarz, R.: Cybersecurity threat analysis and risk assessment and design patterns for automotive networked embedded systems: a case study. JUCS – Univ. Comput. Sci. 27(8), 830–849 (2021)
6. Dobaj, J., Macher, G., Ekert, D., Riel, A., Messnarz, R.: Towards a security-driven automotive development lifecycle. J. Softw. Evol. Process 24 (2021). https://doi.org/10.1002/smr.2407
7. Ebert, C.: Efficient implementation of standards for security, safety and UNECE. ATZelectronics Worldwide 9, 40–43 (2020)
8. Ekert, D., Messnarz, R., Norimatsu, S., Zehetner, T., Aschbacher, L.: Experience with the performance of online distributed assessments – using advanced infrastructure. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 629–638. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_47
9. Groza, B., Murvay, P.: Identity-based key exchange on in-vehicle networks: CAN-FD and FlexRay. Sensors 19 (2019). https://doi.org/10.3390/s19224919
10. IATF: IATF 16949 – Quality management system requirements for automotive production and relevant service parts organizations (2016)
11. intacs: HW SPICE, intacs Working Group HW Engineering Processes (2019)
12. intacs: Process Assessment Model SPICE for Mechanical Engineering, intacs Working Group MECH Engineering Processes (2020)
13. ISO: ISO 24089 Road vehicles – Software update engineering (2023)
14. ISO: ISO 33020 Information technology – Process assessment – Process measurement framework for assessment of process capability (2019)
15. ISO/SAE: ISO/SAE 21434 Road vehicles – Cybersecurity engineering (2021)
16. Ivančič, J., Riel, A., Ekert, D.: An interpretation and implementation of automotive hardware SPICE. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 684–695. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_51
17. Jadhav, A.: Automotive cybersecurity. In: Kathiresh, M., Neelaveni, R. (eds.) Automotive Embedded Systems. EICC, pp. 101–114. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-59897-6_6
18. Kim, S., Shrestha, R.: Introduction to automotive cybersecurity. In: Automotive Cyber Security, pp. 1–13. Springer, Singapore (2020). https://doi.org/10.1007/978-981-15-8053-6_1
19. Laborde, R., Bulusu, S., Wazan, A., Oglaza, A., Benzekri, A.: A methodological approach to evaluate security requirements engineering methodologies: application to the IREHDO2 project context. Cybersecur. Privacy 1(3), 422–452 (2021). https://doi.org/10.3390/jcp1030022
20. Leveson, N.: Engineering a Safer and More Secure World (2016)
21. Macher, G., Schmittner, C., Dobaj, J., Armengaud, E.: An integrated view on automotive SPICE and functional safety and cyber-security. In: SAE Technical Paper (2020). https://doi.org/10.4271/2020-01-0145
22. Macher, G., Schmittner, C., Veledar, O., Brenner, E.: ISO/SAE DIS 21434 automotive cybersecurity standard – in a nutshell. In: Casimiro, A., Ortmeier, F., Schoitsch, E., Bitsch, F., Ferreira, P. (eds.) SAFECOMP 2020. LNCS, vol. 12235, pp. 123–135. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-55583-2_9
23. Macher, G., Armengaud, E., Messnarz, R., Brenner, E., Kreiner, C., Riel, A.: Integrated safety and security development in the automotive domain (2017). https://doi.org/10.4271/2017-01-1661
24. Macher, G., Much, A., Riel, A., Messnarz, R., Kreiner, C.: Automotive SPICE, safety and cybersecurity integration. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10489, pp. 273–285. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_23
25. MacGregor, J., Burton, S.: Challenges in assuring highly complex, high volume safety-critical software. In: Gallina, B., Skavhaug, A., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2018, pp. 252–264. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-99229-7_22
26. Messnarz, R., Ekert, D., Macher, G., Stolfa, S., Stolfa, J., Much, A.: Automotive SPICE for cybersecurity – MAN.7 cybersecurity risk management and TARA. In: Yilmaz, M., Clarke, P., Messnarz, R., Woeran, B. (eds.) EuroSPI 2022. CCIS, vol. 1646. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_23
27. Messnarz, R., Ekert, D., Macher, G., Much, A., Zehetner, T., Aschbacher, L.: Experiences with the automotive SPICE for cybersecurity assessment model and tools. J. Softw. Evol. Process (2022). https://doi.org/10.1002/smr.2519
28. Messnarz, R., Ekert, D., Zehetner, T., Aschbacher, L.: Experiences with ASPICE 3.1 and the VDA automotive SPICE guidelines – using advanced assessment systems. In: Walker, A., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 549–562. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_42
29. Messnarz, R., et al.: First experiences with the automotive SPICE for cybersecurity assessment model. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 531–547. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_35
30. Petho, Z., Intiyaz, K., Torok, A., Pasco, M.: Analysis of security vulnerability levels of in-vehicle network topologies applying graph representations. Electron. Test. 37, 613–621 (2021). https://doi.org/10.1007/s10836-021-05973-x
31. Pries-Heje, J., Johansen, J.: SPI Manifesto, European system and software improvement and innovation (2010). https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf
32. Schlager, C., Messnarz, R., Sporer, H., Riess, A., Mayer, R., Bernhardt, S.: Hardware SPICE extension for automotive SPICE 3.1. In: Larrucea, X., Santamaria, I., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2018. CCIS, vol. 896, pp. 480–491. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-97925-0_41
33. Singh, M.: Cybersecurity in automotive technology. In: Information Security of Intelligent Vehicles Communication. SCI, vol. 978, pp. 29–50. Springer, Singapore (2021). https://doi.org/10.1007/978-981-16-2217-5_3
34. SOQRATES: Task Forces Developing Integration of Automotive SPICE, ISO 26262, ISO 21434 and SAE J3061. http://soqrates.eurospi.net/
35. Stolfa, J., et al.: DRIVES – EU blueprint project for the automotive sector – a literature review of drivers of change in automotive industry. J. Softw. Evol. Process 32(3) (2020). Special Issue: Addressing Evolving Requirements Faced by the Software Industry
36. UN: UN Regulation No. 155 – Cyber security and cyber security management system (2021)
37. UN: UN Regulation No. 156 – Software update and software update management system (2021)
38. VDA QMC: Automotive Cybersecurity Management System Audit (2020)
39. VDA QMC: Automotive SPICE Process Reference Model/Process Assessment Model (2015)
40. VDA QMC: Automotive SPICE Guidelines, 2nd Edition (2017)
41. VDA QMC: Automotive SPICE for Cybersecurity Process Reference and Assessment Model (2021)

A Low-Cost Environment for Teaching Fundamental Cybersecurity Concepts in CPS

Kanthanet Tharot1,2(B), Quoc Bao Duong1, Andreas Riel1, and Jean-Marc Thiriet2

1 Univ. Grenoble Alpes, CNRS, Grenoble INP, G-SCOP, 46 Avenue Félix Viallet,

38000 Grenoble, France
[email protected]
2 Univ. Grenoble Alpes, CNRS, Grenoble INP, GIPSA-Lab, 11 rue des Mathématiques, 38402 Saint-Martin-d'Hères, France

Abstract. Cyberattacks targeting Cyber-Physical Systems (CPS) have become increasingly concerning since the well-known Stuxnet attack. These systems are mostly based on Programmable Logic Controllers (PLCs), whose low cybersecurity protection levels make them vulnerable to the next generation of cyberattacks. Therefore, building resilience to such attacks through cybersecurity has become a significant concern for Industry 4.0. Given the lack of published research on effective methods to train for cyberattacks on manufacturing systems, this experimental paper proposes a low-cost platform for cyberattack scenarios to demonstrate some possibilities to attack CPS. Attacks on critical CPS assets may have severe consequences in the physical world (e.g., accidents). An experimental environment used to train students and operators in such attacks and related prevention and mitigation measures can be used to sensitize and train staff in CPS cybersecurity-related challenges and mitigation strategies. This paper proposes such an experimental setup, together with some fundamental and accessible training scenarios.

Keywords: Cybersecurity · Cybersecurity training · Cyber-Physical Systems (CPS) · Industrial Control Systems (ICS) · Programmable Logic Controller (PLC)

1 Introduction

Industrial control systems (ICS) are crucial to the functioning of critical infrastructure such as manufacturing environments, electric network grids, traffic control, etc. [1]. ICSs can be considered Cyber-Physical Systems (CPS), which means that they have a direct behavior both in the digital and in the physical world [2]. ICSs are composed of multiple subsystems and components, including Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), Human Machine Interfaces (HMIs), Remote Terminal Units (RTUs), and Programmable Logic Controllers (PLCs). PLCs are an essential element of cyber-physical systems for connecting the physical and the cyber world in ICSs [3]. Protecting PLC vulnerabilities poses increasingly significant challenges due to evolving cyberattacks on S7 PLCs, such as the well-known Stuxnet, which in 2010 was the first to cause physical damage [4]. Moreover,

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 356–365, 2023. https://doi.org/10.1007/978-3-031-42307-9_25


Firoozjaei et al. [5] analyzed the attack mechanisms of six real-world ICS cyber incidents in the energy and power industries: Stuxnet, BlackEnergy, Crashoverride, Triton, Irongate, and Havex. Some of these cyberattacks targeted the control network to manipulate PLCs and alter plant processes. In addition to these types of attack, ransomware such as WannaCry and Petya can cause a range of additional negative impacts [6]. At a minimum, ransomware can result in unplanned shutdowns, which may require restoring control systems from backups and lead to data loss for days or even weeks. In the worst case, ransomware can cause irreparable damage to important equipment, necessitating new purchases and installations that may take several months to become fully operational.

Given the rapidly increasing frequency and impact of such attacks on ICS, there is an urgent need to train and empower ICS designers and operators in essential cybersecurity concepts. In our recent publication [12], we elaborated on published approaches addressing this need, as well as the related challenges. One particular challenge lies in the simplicity and effectiveness of the training environment and pedagogic training approach. In this context, the purpose of this paper is to propose a simple, easily affordable experimental training and education environment for teaching fundamental ICS cybersecurity concepts, along with ideas for pedagogically effective teaching scenarios. To accomplish our goal, we explore the following research question: How to come up with a simple ICS environment that allows the effective teaching of fundamental cybersecurity concepts?

Section 2 elaborates on the essential constituents we use in our concept and experimental setup. Section 3 describes the methodology we use to come up with a fundamental ICS cybersecurity training and education concept, of which the training setup described in Sect. 4 is a part.
Finally, we review the results, draw conclusions, and outline future work.

2 Related Works and Concepts

In this section, we first discuss cybersecurity in both Operational Technology (OT) and Information Technology (IT). As defined by the Gartner glossary, IT [7] includes software, hardware, communication technologies, and related services. OT [8], on the other hand, refers to hardware and software that can detect or cause a change by directly monitoring and/or controlling industrial equipment, assets, processes, and events. Our work focuses on cybersecurity in OT.

Vulnerability assessments [3] can identify potential weaknesses in the system, while penetration testing can simulate an attack to test the system's resilience and identify areas for improvement. According to a survey report by Trend Micro [9], industrial cybersecurity is a critical consideration for any organization that relies on PLCs [1] or other critical control systems. Programmable logic controllers (PLCs) used in industrial control systems are vulnerable to cyberattacks [10], which can cause production downtime, equipment damage, and even loss of life. To prevent these attacks, industrial cybersecurity measures such as network segmentation, access control, and intrusion detection systems should be implemented, along with regular vulnerability assessments and penetration testing [1].


K. Tharot et al.

Network segmentation [11] is a crucial measure that can limit the spread of an attack by dividing a network into smaller, isolated subnets. Access control systems can be used to enforce user authentication, ensuring that only authorized personnel have access to the system. Intrusion detection systems can be deployed to monitor and detect abnormal behavior on the network as well as in the process, alerting security personnel to potential threats.

Stuxnet [3, 4] is a highly advanced cyberattack that focused on PLCs. It utilized zero-day exploits to attack machines running Microsoft Windows and reprogram Siemens PLCs. The attack impacted over 200,000 computers and resulted in physical harm to hundreds of machines, highlighting the potential for compromised PLCs to cause physical damage [6]. Stuxnet specifically targeted certain PLC models and updated the main cycle block with rogue code to increase the rotating frequency of connected centrifuges to damaging levels.

Man-in-the-middle (MitM) is an attack strategy that breaks the security paradigm for encrypted data in transit. Protective schemes must be designed to maintain the confidentiality and integrity of data in transit using transport-level encryption. Authentication is also a crucial issue, because in the field of OT in particular, the traceability of people acting on the system should be achieved. These are some examples of attacks which can be envisaged on PLCs; others can also be envisaged [12].

Settimino1 is an open-source Ethernet library that is both versatile and powerful. It facilitates native interfacing between microcontrollers such as the Arduino and Siemens S7 PLCs. Particularly noteworthy is that the library can fully access the PLC memory. Additionally, Settimino is PDU (Protocol Data Unit) independent, meaning that data transfers are limited only by available board memory, making it possible to transfer large amounts of data without memory constraints.
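Of the defensive measures mentioned above, network segmentation is the simplest to illustrate. The following toy sketch, assuming made-up subnet and host addresses, shows the core idea of confining PLC traffic to an isolated control subnet:

```python
import ipaddress

# Toy segmentation check: the control network is an isolated subnet, and
# traffic involving the PLC is only permitted when both endpoints are inside
# it. Addresses and subnet are invented for the example.
CONTROL_NET = ipaddress.ip_network("192.168.10.0/24")

def allowed(src: str, dst: str) -> bool:
    """Permit traffic only when both endpoints sit in the control subnet."""
    return (ipaddress.ip_address(src) in CONTROL_NET
            and ipaddress.ip_address(dst) in CONTROL_NET)
```

For instance, `allowed("192.168.10.5", "192.168.10.20")` (HMI to PLC, same segment) passes, while traffic from an office network host such as `10.0.0.7` to the PLC is rejected.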
The Siemens S7 protocol2 is a vital communication protocol commonly used for communication with Siemens devices. It is a function- or command-oriented protocol that enables the transmission of commands or replies between devices. The protocol uses ISO-on-TCP (RFC 1006), which is a block-oriented protocol; the blocks are called PDUs. The maximum length of the PDUs depends on the CP (Communication Processor) and is negotiated during the connection.
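The outermost framing used by this protocol stack can be sketched in a few lines. The 4-byte TPKT header comes from RFC 1006; the payload bytes shown are an illustrative COTP data header, and the COTP and S7 layers themselves are not decoded here:

```python
import struct

# ISO-on-TCP (RFC 1006) wraps every PDU in a 4-byte TPKT header:
# version (3), reserved (0), and the total frame length including the header.

def tpkt_wrap(payload: bytes) -> bytes:
    return struct.pack(">BBH", 3, 0, len(payload) + 4) + payload

def tpkt_unwrap(frame: bytes) -> bytes:
    version, _reserved, length = struct.unpack(">BBH", frame[:4])
    if version != 3 or length != len(frame):
        raise ValueError("not a valid TPKT frame")
    return frame[4:]

# Example payload: a COTP data (DT) header, followed in real traffic by the S7 PDU.
frame = tpkt_wrap(b"\x02\xf0\x80")
```

Tools that intercept S7 traffic, including the attack scenarios later in this paper, first peel off exactly this framing before touching the S7 command inside.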

3 Methodology

Our objective is to create a security scenario-based training program for PLC-based ICS that teaches cybersecurity fundamentals through a combination of practical classroom and laboratory training [13]. Our approach (Fig. 1) is therefore centered around the design of concrete cybersecurity scenarios derived from real-world attacks on ICS. Probably the richest and most exhaustive catalogue of such attacks and related defense tactics is the MITRE ATT&CK® Navigator [14]. This continuously updated knowledge base has been made available to the public through a web-based tool that allows exploring various attack and defense tactics, including their chaining into attack/defense strategies, in a visual and interactive way. Our concept suggests preparing students for the

1 https://settimino.sourceforge.net/.
2 https://snap7.sourceforge.net/siemens_comm.html.


hands-on laboratory experiments in this environment, where their learning focus is clearly on ICS cybersecurity terminology, tactics, and strategies. Students are then grouped into red and blue teams, the former planning attack technique experiments, the latter the corresponding defense counterparts. As for the experimental environment, whose basic configuration is described in the following section, the key requirement is to facilitate spoofing of any communication interfaces, as well as the injection of malicious commands through hardware devices. In our specific case, we require a platform that is compatible with the Siemens PLC S7-1200, since our ICS teaching factory is based on this type of PLC. Scenarios will involve carrying out experiments and executing cyberattacks using the S7 protocol (frequently used in automobile applications) and the Settimino library. The proposed methodology and concept, however, are both independent of the particular environment. This means that the selected cybersecurity scenarios can be applied to several other ICS domain protocols such as Modbus (general industrial networks), Ethernet/IP (time-critical applications), or HART IP (legacy wiring) [15].

Fig. 1. Scenario-based cybersecurity training concept (phases: Security Scenarios Design; Teaching Process Setup: Scenario Planning (MITRE ATT&CK), Experimental Environment Lab; Teaching Process Deployment: Classroom Teaching via Scenario Planning, Practical Scenario Experiment in Lab; Teaching Process Assessment: Teaching Effectiveness, Student Satisfaction)

The validation methodology of our concept is based on applying the teaching process to a set of students and comparing both the teaching effectiveness (through the evaluation of students' performance in implementing the scenarios successfully, as well as through exams) and their satisfaction levels against another student sample that has only had classroom teaching.

4 Experimentation on Cyberattack-Based Scenarios

This section provides further information on the materials necessary for designing security scenarios on the Siemens PLC S7-1200, as well as instructions for their proper execution. It is crucial to have a deep understanding of both normal scenarios and those under attack, in order to conduct a thorough test of the implemented security measures and guarantee that the system functions as intended according to the established methodology.

4.1 Procedure and Steps for Experimental Setup

In our experimental setup, as listed in Table 1, we used the Settimino library with the following environment: an Arduino (Uno R3) equipped with an Ethernet shield (W5100),

360

K. Tharot et al.

a Siemens PLC (S7-1200), a switch (D-Link), TIA Portal V17, the Arduino IDE, and a SCADA system (PcVue). In order to connect an Arduino Uno with a PLC (Fig. 2), follow these steps:

1. Use a communication protocol supported by the PLC, such as the S7 protocol (ISO-S7).
2. Configure the PLC to communicate with the Arduino over Ethernet. You can do this through specialized software such as TIA Portal.
3. Establish communication in a local network with a switch.
4. Use the Arduino to read and write data to the PLC's memory areas using the Settimino library.
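Settimino addresses PLC memory through numeric area identifiers passed to its read/write calls. As an illustrative sketch (the hex values follow the open-source Snap7 constants, from which Settimino derives; the helper function and its name are our own assumption, not course material), the Siemens operand prefixes used in this paper map to area codes as follows:

```python
# Mapping of Siemens operand prefixes to S7 area identifiers, as passed to
# Snap7/Settimino-style ReadArea/WriteArea calls. The hex values follow the
# open-source Snap7 constants (an assumption if your S7 library differs).
S7_AREAS = {
    "%I":  0x81,  # process inputs (S7AreaPE)
    "%Q":  0x82,  # process outputs (S7AreaPA)
    "%M":  0x83,  # internal memory / flags (S7AreaMK)
    "%DB": 0x84,  # data blocks (S7AreaDB) -- the area manipulated in Sect. 4.2
}

def area_code(operand: str) -> int:
    """Return the S7 area identifier for an operand such as '%DB1.DBX0.0'."""
    for prefix in ("%DB", "%I", "%Q", "%M"):
        if operand.startswith(prefix):
            return S7_AREAS[prefix]
    raise ValueError(f"unknown S7 operand: {operand}")

print(hex(area_code("%DB1.DBX0.0")))  # → 0x84
```

The 0x84 identifier for %DB is the same value that reappears in the attack scenario of Sect. 4.2.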

Fig. 2. An experimentation setup on cyberattack-based scenarios (Switch (D-Link), PC (TIA Portal V17), Laptop (Arduino IDE), Microcontroller (Uno R3 + W5100), SCADA (PcVue), PLC (S7-1200))

Table 1. List of cybersecurity tools

Tools           | Version | Type                      | Role
----------------|---------|---------------------------|-------------------------------
TIA Portal      | 17      | Software for Siemens PLC  | Target
PLC             | S7-1200 | Physical PLC              | Target
PcVue           | 15.2.1  | HMI                       | Target
Arduino         | Uno R3  | Low-cost microcontroller  | Attacker development environment
Ethernet Shield | W5100   | Ethernet port             | Attacker development environment
Settimino       | 2.0.0   | S7 protocol library       | Attacker development environment

A Low-Cost Environment for Teaching Fundamental Cybersecurity Concepts in CPS


4.2 Cyberattack Scenarios

As pointed out in Sect. 3, we propose using the MITRE ATT&CK framework and navigator tool in order to choose from real-world cyberattack and defense scenarios, and the underlying tactics and techniques. In order to demonstrate the potential and use of the experimental setup described before, we selected a Man-in-the-Middle (MitM) type of attack, which is at the heart of numerous real-world attacks on ICS, including e.g. Stuxnet. The aim of such attacks is to intercept and modify communications between the PLC and the industrial devices that constitute the control network. Figure 3 illustrates the setup in both normal and under-attack situations.

In normal scenarios, it is important to take the following steps to ensure smooth operation: First, assign unique IP addresses to all components depicted in Fig. 2, including the PC, laptop, microcontroller, PLC, and SCADA (STEP 1). Once this is done, execute all programs with the PLC to verify that it is functioning properly (STEP 2). In order to test the input/output functionality of the system, program some ladder programs (STEP 3). Finally, to ensure that the program is synchronized with the interface, check the PcVue SCADA system (STEP 4).

In situations where the system is being targeted by an attack, it is important to take additional measures to ensure its security. During a botnet attack inspired by Stuxnet and MitM, numerous compromised devices (bots) from a specific botnet are deployed throughout the targeted scenario (see Fig. 3). These bots have the ability to intercept and modify the messages exchanged between control devices. The objective of this attack is to alter commands such as %M, %I, %Q, and %DB, which transmit and control I/O, in order to impact the physical state of each I/O.

Fig. 3. Scenarios in normal and under attack context (adapted from [6]). Normal scenario: Human-Machine Interface (HMI) – SCADA server – PLC – input/output (I/O). Under attack scenario: a MitM bot is inserted between the SCADA server and the PLC.


Once the trainees have completed the normal scenarios, they proceed as follows:

1. Install all the necessary libraries for the Arduino IDE, including Settimino (STEP 1).
2. Confirm that the PLC component is functioning properly and run some examples to test the PLC I/O (STEP 2).
3. Begin experimenting with basic microcontroller programming by making some LEDs blink (STEP 3).
4. Use the library example to exchange some data with the PLC before assigning an IP address to the microcontroller, making sure that all devices are on the same local network via Ethernet, as shown in Fig. 2 (STEP 4).
5. Understand the connection functions in C programming, such as ConnectTo() and SetConnectionType() (STEP 5).
6. Allow data access in TIA Portal in order to have full access to attributes (STEP 6).
7. Read an existing database value on the PLC; the aim of this experiment is to modify a command in the database (STEP 7).
8. Verify that the database value has been successfully modified (STEP 8).

These scenarios describe the environment setup and provide instructions on how to launch an attack. Specifically, the bot utilizes an Arduino board with the Settimino library and communicates using the S7-ISO technique. To elaborate further, the physical layer can be accessed through the database by controlling the parameter %DB, whose memory area identifier is 0x84, as shown in Fig. 4. Once this value is read, it can be modified in Boolean format, for example by changing False into True. The impact of modifying this command will be discussed in the next section.

Fig. 4. MitM attack on the %DB (0x84) parameter (under attack scenario: MitM bot controlling %DB (0x84); HMI on PcVue; LED as I/O from the PLC)
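The manipulation in STEPs 7 and 8 — reading a Boolean from a data block and writing it back modified — amounts to flipping one bit in the DB byte buffer before it is sent back over the S7 connection. A minimal, PLC-free model of that bit manipulation (pure Python; the byte and bit offsets are illustrative, and a real bot would move the buffer over the network via Settimino's read/write calls):

```python
def get_db_bool(buffer: bytearray, byte_index: int, bit_index: int) -> bool:
    """Read a Boolean (BOOL) from a DB byte buffer, e.g. DBX0.0."""
    return bool(buffer[byte_index] & (1 << bit_index))

def set_db_bool(buffer: bytearray, byte_index: int, bit_index: int, value: bool) -> None:
    """Write a Boolean into the DB byte buffer before sending it back to the PLC."""
    if value:
        buffer[byte_index] |= (1 << bit_index)
    else:
        buffer[byte_index] &= ~(1 << bit_index)

# Model of STEP 7/8: the bot reads DBX0.0 (False) and rewrites it as True.
db = bytearray(4)                  # stand-in for a DB buffer read from the PLC
assert get_db_bool(db, 0, 0) is False
set_db_bool(db, 0, 0, True)        # the malicious modification
assert get_db_bool(db, 0, 0) is True
```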

4.3 Discussion

Both scenarios aim at teaching attack techniques such as MitM, highlighting the system's vulnerabilities and the steps required to exploit them. Additionally, these two scenarios can cause harmful behavior at STEP 8, demonstrating that an attacker can control I/O without anyone being able to observe the changes in the system. It is important to note


that the actions taken in this experiment could have potential impacts on the sensors, robots, or people working around the machine, as mentioned in the objective. These impacts could modify or disrupt commands such as %M, %I, %Q, and %DB, which may affect the environment around the PLC and the process systems themselves. By controlling the parameter %DB (0x84), an attacker can interrupt process systems and control I/O, such as an LED, freely and unnoticed. Therefore, it is important to prioritize safety considerations in future experiments to ensure that potential risks are identified and appropriately mitigated. Since this experimental setup provides access to the essential PLC communication interfaces, numerous different attack scenarios can be emulated, and therefore integrated into practical hands-on training.

5 Conclusion and Future Work

We proposed a practice-oriented concept and a simple training environment for teaching fundamental ICS cybersecurity concepts. We presented an initial implementation of this experimental environment in an S7-1200 PLC-based setup with an Arduino board-driven extension to facilitate attack scenarios based on intercepting and modifying communication between the PLC and connected control devices. The results clearly demonstrate that PLCs are susceptible to attacks by a low-cost microcontroller, which can potentially cause physical harm in a real industrial environment. It is also crucial for industrial personnel to be well-trained in understanding the significance of cyberattacks such as Stuxnet and MitM, which are becoming increasingly probable with the development of connectivity and communication at various levels, in companies and beyond. This education platform can facilitate the training of students and staff to become a major element in mitigating such attacks and their impact.

To further develop this concept, we will explore different ways to set up security scenarios, provide guidelines for the process, and also include functional safety aspects in the teaching in order to demonstrate the critical relationship between functional safety and cybersecurity [16]. Teaching functional safety will also follow a practical, real-case-based approach as demonstrated in [17]. Moreover, we can teach people how to defend against these attacks and develop defense tools, such as an IDS, depending on the type of attack. Expanding on the concept, we can experiment with new, highly performing platforms, such as the Raspberry Pi, or other microcontrollers. This will help us improve the overall effectiveness of the experiment and further enhance our understanding of cyberattacks in industrial environments.

In order to reduce the dependency on a physical experimentation environment, we intend to investigate the targeted use of simulation tools to establish a digital twin of the experimental environment that is suitable for teaching [18]. Lastly, we can assess the learning performance of students or industrial staff to evaluate whether they have learned about real cyberattacks and have a better grasp of the concepts presented. We believe that our concept and related training setup can be a valuable extension to recent practical cybersecurity training approaches such as [19].


6 Relationship with the SPI Manifesto

Training students and staff is a fundamental prerequisite to implementing several SPI Manifesto principles [20]: "Create a learning organisation", "Apply risk management", and "Manage the organisational change in your improvement efforts". Cybersecurity of industrial systems is becoming a major topic in the EuroSPI community, as software-controlled systems are getting increasingly connected.

References

1. Ramirez, R., Chang, C.K., Liang, S.H.: PLC cyber-security challenges in industrial networks. In: MESA 2022 - 18th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Proceedings (2022)
2. DeSmit, Z., Elhabashy, A.E., Wells, L.J., Camelio, J.A.: Cyber-physical vulnerability assessment in manufacturing systems. Procedia Manuf. 5, 1060–1074 (2016)
3. Hui, H., McLaughlin, K., Sezer, S.: Vulnerability analysis of S7 PLCs: manipulating the security mechanism. Int. J. Crit. Infrastruct. Prot. 35, 100470 (2021)
4. Shakarian, P., Shakarian, J., Ruef, A.: Attacking Iranian nuclear facilities: Stuxnet. In: Introduction to Cyber-Warfare, pp. 223–239 (2013)
5. Firoozjaei, M.D., Mahmoudyar, N., Baseri, Y., Ghorbani, A.A.: An evaluation framework for industrial control system cyber incidents. Int. J. Crit. Infrastruct. Prot. 36, 100487 (2022)
6. Perales Gómez, Á.L., et al.: SafeMan: a unified framework to manage cyber-security and safety in manufacturing industry. Softw. Pract. Exp. 51, 607–627 (2021)
7. Definition of Information Technology (IT): Gartner Information Technology Glossary. https://www.gartner.com/en/information-technology/glossary/it-information-technology. Accessed 1 May 2023
8. Definition of Operational Technology (OT): Gartner Information Technology Glossary. https://www.gartner.com/en/information-technology/glossary/operational-technology-ot. Accessed 11 June 2023
9. TrendMicro: Rethinking Tactics. https://www.trendmicro.com/vinfo/fr/security/research-and-analysis/threat-reports/roundup/rethinking-tactics-annual-cybersecurity-roundup-2022. Accessed 1 May 2023
10. Ramirez, R., Chang, C.K., Liang, S.H.: PLC cybersecurity test platform establishment and cyberattack practice. Electronics 12, 1195 (2023)
11. Ghaleb, A., Zhioua, S., Almulhem, A.: On PLC network security. Int. J. Crit. Infrastruct. Prot. 22, 62–69 (2018)
12. Matoušek, P.: Security of smart grid communication. Habilitation, Brno University of Technology (2021)
13. Tharot, K., Quoc, B.D., Riel, A., Thiriet, J.-M.: A cybersecurity training concept for cyber-physical manufacturing systems (2023, preprint)
14. MITRE ATT&CK: The adversarial tactics techniques (2020). https://attack.mitre.org/
15. Nawrocki, M., Schmidt, T.C., Wählisch, M.: Industrial control protocols in the internet core: dismantling operational practices. Int. J. Network Manag. 32(1) (2022)
16. Riel, A., Kreiner, C., Macher, G., Messnarz, R.: Integrated design for tackling safety and security challenges of smart products and digital manufacturing. CIRP Ann. 66(1), 177–180 (2017)
17. Messnarz, R., et al.: Implementing functional safety standards – experiences from the trials about required knowledge and competencies (SafEUr). In: McCaffery, F., O'Connor, R.V., Messnarz, R. (eds.) EuroSPI 2013. CCIS, vol. 364, pp. 323–332. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39179-8_29


18. Dobaj, J., Riel, A., Macher, G., Egretzberger, M.: A method for deriving technical requirements of digital twins as industrial product-service system enablers. In: Systems, Software and Services Process Improvement: 29th European Conference, EuroSPI 2022, Salzburg, Austria, August 31 – September 2, 2022, Proceedings, pp. 378–392. Springer, Cham (2022)
19. Schmittner, C., et al.: Automotive cybersecurity - training the future. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 211–219. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_14
20. Pries-Heje, J., Johansen, J., Messnarz, R.: SPI Manifesto (2010). https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf

CYBERENG - Training Cybersecurity Engineer and Manager Skills in Automotive - Experience

Svatopluk Stolfa1(B), Jakub Stolfa1, Marek Spanyik1, Richard Messnarz2, Damjan Ekert2, Georg Macher3, Michael Krisper3, Christoph Schmittner4, Shaaban Abdelkader4, Alexander Much5, and Alen Salamun6

1 FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic

[email protected]

2 I.S.C.N. GESMBH, Graz, Austria
3 Institute of Technical Informatics, TU Graz, Graz, Austria
4 AIT Austrian Institute of Technology GmbH, Vienna, Austria
5 Elektrobit, Erlangen, Germany
6 Real Security d.o.o., Maribor, Slovenia

Abstract. As cybersecurity becomes an integral part of car homologation, cybersecurity skills become crucial in automotive project development teams. It is not just about the experts in automotive security themselves, but also about the whole system development team, which needs the skills to understand, cope with, and make cybersecurity an integral part of the system. In this paper, we give an overview of the content of the training and the skill sets defined as the main required skills/competences and knowledge of automotive cybersecurity managers and engineers, and we present the experience from the pilot course implementation.

Keywords: skills · job roles · automotive cybersecurity · training · automotive cybersecurity manager · automotive cybersecurity engineer · cybersecurity

1 Introduction

Cars are controlled to a large extent by computers, Electronic Control Units (ECUs) and many millions of lines of software code, and are connected to the infrastructure by a gateway server in the vehicle. Cars are connected by V2I (Vehicle to Infrastructure), V2V (Vehicle to Vehicle), WLAN, mobile phones, and buses (e.g. OBD), and all those interfaces can be attacked maliciously [2–4, 27, 45]. Car ECUs (e.g. the steering lock ECU) receive commands on the bus that need to be protected/authenticated. Software in ECUs needs to be signed so that any change of software by unauthorised access can be recognised. And so forth. With the publication of ISO/SAE 21434 "Road vehicles - Cybersecurity engineering" in 2021, the first international cybersecurity engineering standard for road vehicles was published [14].

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2023
M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 366–383, 2023. https://doi.org/10.1007/978-3-031-42307-9_26

CYBERENG - Training Cybersecurity Engineer


In addition to ISO/SAE 21434, the UNECE World Forum for Harmonization of Vehicle Regulations (WP.29) developed UN R-155 [41]. This regulation introduced and described the requirements for the cybersecurity-related aspects of type approval [14, 31, 32]. The training also explains two newly published VDA standards:

• VDA guideline for an ACSMS (Automotive Cybersecurity Management System) audit.
• Automotive SPICE for Cybersecurity assessment model.

Extending from cybersecurity, there is a similar situation for software updates (cf. ISO 26262): ISO 24089 "Road vehicles - Software update engineering" was published in February 2023 and relates in a similar way to UN R156 [1, 12, 13, 15, 20, 42]. The training developed in the EU project explains the background and uses best-practice examples to outline how to practically implement these requirements in an automotive organisation. In this paper, we briefly overview the automotive cybersecurity managerial and engineering knowledge that was developed based on the analysis and is supported by the development of new comprehensive training material in the modular course EuroSPI Certified Cybersecurity Engineer and Manager – Automotive Sector (CYBERENG).

2 Automotive Cybersecurity Training Concept and Highlights

EuroSPI Certified Cybersecurity Engineer and Manager – Automotive Sector (CYBERENG) was started as a European Union Erasmus+ project to develop training for automotive cybersecurity. The project was coordinated by the Technical University of Ostrava (https://www.vsb.cz/en), and the project consortium included Real Security (https://www.real-sec.com/), ISCN (https://www.iscn.com/), TU Graz (https://www.tugraz.at/home), Elektrobit (https://www.elektrobit.com/) and the AIT Austrian Institute of Technology (https://www.ait.ac.at/en/). The project outcome - the training course - was focused on training for the two fundamental roles [33]:

• An automotive cybersecurity engineer possesses the basic skills for active technical work on achieving automotive cybersecurity for a product throughout the complete lifecycle.
• In comparison, an automotive cybersecurity manager focuses more on the process level, on standard and regulatory compliance, and on managing automotive cybersecurity in a distributed process.

The automotive-mobility sector is shifting towards connected, digital, cyber-secure and automated vehicles with a significant degree of dynamic and adaptive behaviour,


S. Stolfa et al.

driven by software. To be able to cope with this change and to propose innovative solutions [29, 30], engineers must be able to understand domain-specific knowledge. In this course, we introduce the skill set defined as the basic skills/competences and knowledge of automotive cybersecurity managers and engineers and present the course developed to cover these competence needs and requirements. The skill card is based on analysing stakeholders' viewpoints and combined views from different technical domains, mainly mechatronics, computer science, software engineering, and cybersecurity [9, 10, 16, 21] (Table 1).

Table 1. Skills Card Structure

Skill Card Item                                                      | Engineer  | Manager
---------------------------------------------------------------------|-----------|-------------
U.1 Cybersecurity Management                                         |           |
U1.E1 Legal Aspects and Privacy                                      | —         | Practitioner
U1.E2 Organisational Structure                                       | —         | Practitioner
U1.E3 Cybersecurity Planning                                         | —         | Practitioner
U.2 Cybersecurity Operation and Maintenance                          |           |
U2.E1 Life Cycle Assessment                                          | —         | Expert
U2.E2 Cybersecurity Processes and Audits                             | —         | Expert
U2.E3 Incident Response Management                                   | —         | Expert
U2.E4 Supply Chain Security                                          | —         | Expert
U.3 Engineering Aspects of Cybersecurity                             |           |
U3.E1 System Threat Analysis and Cybersecurity Goals                 | Expert    | Awareness
U3.E2 System Design and Vulnerability Analysis                       | Expert    | Awareness
U3.E3 Software Design and Vulnerability Analysis                     | Expert    | Awareness
U3.E4 Software Detailed Design and Cybersecurity                     | Expert    | Awareness
U3.E5 Cybersecurity Hardware and Firmware Design                     | Expert    | Awareness
U.4 Testing Aspects of Cybersecurity                                 |           |
U4.E1 Cybersecurity Verification and Validation at SW Level          | Awareness | —
U4.E2 Cybersecurity Verification and Validation at HW Level          | Awareness | —
U4.E3 Cybersecurity Verification and Validation at the System Level  | Awareness | —

Maturity levels seen in the second and third columns represent the level of skills/competence or knowledge that the specific job role should have, as defined in [5].


3 Training Units and Elements - Overview and Knowledge Examples

In this section, we provide an overview and training material examples of each element. The presented list is not exhaustive; it gives only a very selective and brief piece of information to provide some example insight into the much broader knowledge and skills taught in the specific element. The full course structure overview without examples may be found in the following papers [26, 33, 34, 37, 38]. Every element is taught by example: trainees develop their own examples and thereby apply the element knowledge under the supervision of an experienced trainer, building skills.

3.1 U.1 Cybersecurity Management

The unit outlines the subject of cybersecurity with a focus on management topics, such as: (1) Legal Aspects and Privacy; (2) Organisational Structure; and (3) Cybersecurity Planning and Incident Management.

Element 1: Legal Aspects and Privacy
This element gives an overview of the legal situation, cases and the business impact of legal aspects and privacy. Important norms and their main meaning for the homologation of cars in the context of dependability engineering are described; the same applies to issuing complex mechatronic products. A brief overview of the main global automotive standards and regulations for cybersecurity:

UN
• UN / WP.29 – World Forum for Harmonization of Vehicle Regulations
• UN Regulation 155 on Cybersecurity
• UN Regulation 156 on SW updates

ISO
• ISO TC22/SC32/WG11 – Cybersecurity
• ISO/SAE 21434 Cybersecurity Engineering
• ISO TC22/SC32/WG12 – Software update
• ISO 24089 Software Update Engineering

EU
• Cybersecurity Act
• General Data Protection Regulation (GDPR)
• NIS Directive (Network and Information Security)

China
• Cybersecurity Law & Data Security Law
• Personal Information Protection Law
• Information Security: SAC/TC 260


• Automotive: SAC/TC 114/SC 34/WG Cyber

US
• National Highway Traffic Safety Administration (NHTSA) Cybersecurity Guidelines

Others
• Germany: VDA QMC
• UK: BSI PAS 1885:2018
• and more…

Element 2: Organisational Structure
This element gives an overview of cybersecurity-related roles in connection to cybersecurity planning within various organisational structures. The organisation institutes governance and a cybersecurity culture that shall boost cybersecurity engineering, including:

• Cybersecurity awareness management,
• Competence management, and
• Continuous improvement.

The organisation shall demonstrate and maintain organisation-specific rules and procedures to:

• Enable the implementation of the requirements according to ISO 21434;
• Support the implementation of the corresponding activities.

The organisation is responsible for implementing management systems for cybersecurity, particularly a quality management system, and for managing the technologies used in cyber-engineering [36].

Production
Apply the cybersecurity requirements allocated to production. Prevent the introduction of vulnerabilities during production.
Example: Ensure the integrity of software flashed on ECUs; prevent the adding of third-party dongles and access by unauthorised personnel.
Example: Debug interfaces are closed and/or not accessible.

Operation and Maintenance
Determine and implement the necessary remediation actions for cybersecurity incidents. Maintain cybersecurity during and after updates to the item or components after production until their end of cybersecurity support.
Example: Prevent manipulation of OTA (Over-the-Air Update) channels; provide cybersecurity updates to close vulnerabilities.

End of Cybersecurity Support and Decommissioning
Communicate the end of cybersecurity support (e.g. after 15 years) to the customers. Enable decommissioning of items and components with regard to cybersecurity.
Example: Keep private data confidential under all circumstances.
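One of the production examples above, ensuring the integrity of software flashed on ECUs, is typically realised by signing the firmware image and verifying the tag before flashing. A simplified, illustrative sketch using an HMAC as the integrity tag (real ECUs generally use asymmetric signatures and hardware-protected keys; the key and image bytes here are invented for illustration):

```python
import hashlib
import hmac

# Illustrative key only: in practice the key lives in an HSM or secure key store.
SECRET_KEY = b"illustrative-key-kept-in-hsm"

def sign_firmware(image: bytes) -> bytes:
    """Compute an integrity tag over the firmware image."""
    return hmac.new(SECRET_KEY, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes) -> bool:
    """Check the tag before flashing; reject tampered images."""
    return hmac.compare_digest(sign_firmware(image), tag)

firmware = b"\x01\x02ECU application v1.0"
tag = sign_firmware(firmware)
assert verify_firmware(firmware, tag)                # untampered image accepted
assert not verify_firmware(firmware + b"\x00", tag)  # modified image rejected
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking tag information through timing, which matters even in a flashing tool.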


Example: Have solutions in place to "wipe" no-longer-supported items if feasible.

Element 3: Cybersecurity Planning
This element deals with the establishment of a project plan and the selection of cybersecurity engineering methods. Considerations related to development, production, and series maintenance (covering the entire lifecycle) are outlined.

• Contract: Cybersecurity Interface Agreement (CIA).
• Plan: Cybersecurity Project Plan.
• Cybersecurity configuration: Cybersecurity Configuration Item List; Configuration Management Plan.
• Qualified people: Cybersecurity Competence Matrix.

3.2 U.2 Cybersecurity Operation and Maintenance

The unit introduces cybersecurity-related lifecycle assessment, cybersecurity processes, audits, incident response management, and supply chain security aspects.

Element 1: Life Cycle Assessment
This element teaches the skills needed to assess threats throughout the automotive lifecycle, not only focusing on the engineering aspects of cybersecurity: analysis of threats, assessment of vulnerabilities and their resolution (key handling, update of SW according to UNECE regulations, EOL in production, trust provisioning and more). There are several standards and regulations that utilise lifecycle phases/stages. Some are already directly related to the automotive industry, while others are more generic. Some of these are more aligned than others, but mostly they redefine or respecify the lifecycle for themselves. The threat analysis process can be integrated into different lifecycles. Two main viewpoints on lifecycles are observed across the set of standards and regulations; one or a mix of these viewpoints is found in each standard and regulation.

Series Production Viewpoint
This is the viewpoint typically seen by a manufacturer, e.g. a vehicle OEM. It could also be considered the "project" view.
Example: A series production road vehicle has a conceptual design stage, followed by development (e.g. creation of a prototype), then goes into production, followed by post-production, i.e. the stage where the OEM is no longer producing vehicles of that series.


After a certain amount of time following post-production, the vehicle is no longer supported by the OEM, e.g. for spare parts or updates to its systems.

Physical Instance Viewpoint
This is the viewpoint typically seen by a user, e.g. a vehicle owner.
Example: The vehicle is produced, shipped to a dealership, and sold to a customer, whereby it comes into use. Whilst in use, it likely goes into a garage for service and repairs and may be sold to other people. Finally, the vehicle may be decommissioned, which may be purposeful (e.g. the owner can get more money for scrap than from selling to a new owner) or by accident (e.g. the vehicle is involved in a collision and written off by an insurer).

Element 2: Cybersecurity Processes and Audits
This element gives an overview of the requirements needed to collect evidence and to prepare for a cybersecurity process audit, including clauses of the ISO/SAE 21434 norm and their mapping to the processes of an organisation. Other cybersecurity guidelines are described, such as SAE J3061 and ISO PAS 5112. UNECE regulation UN R155 requires the operation of a certified cybersecurity management system (CSMS). UNECE regulation UN R156 requires a software update management system (SUMS) as a future condition of type approval.

Cybersecurity Management System
VDA QMC Audit Schema for the implementation of a UNECE CSMS [39]. In the project, a CSMS audit model has been configured and used in the Capability Adviser assessment system [44, 46, 47]. See Fig. 1.

Example audit question
Define a cybersecurity policy. Define a cybersecurity policy for the Cybersecurity Management System area of application.
Examples for Q1:
– Organisation-specific policy/handbook for cybersecurity processes and activities (or similar), including commitment from the top management for ensuring cybersecurity
– Organisation chart, roles and responsibility matrix for the cybersecurity organisation
– Guidelines for the communication of cybersecurity-related information to external instances and relevant stakeholders
– Internal regulations for cybersecurity processes (e.g. directives, strategies, secure development handbook, security incident management handbook)
– Regular management reviews of cybersecurity aspects


Fig. 1. CSMS Audit Tool

Automotive SPICE® for Cybersecurity
The purpose of Automotive SPICE® for Cybersecurity is to measure the capability of cybersecurity-related development processes according to ISO/SAE 21434. The objective of an Automotive SPICE® for Cybersecurity assessment is either to identify the process-related product risks or to evaluate process improvement.
Important! Not all aspects of ISO/SAE 21434 are in the scope of Automotive SPICE® for Cybersecurity, as some are not performed in development-relevant phases (e.g. production).
In the project, the Automotive SPICE for Cybersecurity assessment model has been configured in the Capability Adviser assessment system. After its configuration, it has also been used in different cybersecurity assessments in cooperation with Tier 1 suppliers and OEMs [44, 46, 47] (Fig. 2).

Fig. 2. ASPICE for Cybersecurity Assessment Tool


In the context of UNECE R-155 (subchapter 7.2.2.5), cybersecurity process risk shall be managed. Automotive SPICE® for Cybersecurity has been created to identify such process-related product risk [28].

Cybersecurity Product Assessment
An independent, objective technical review by a cybersecurity engineer/manager (e.g. from another project, or external) with mandatory qualifications (to ensure that the technical approach is sufficient in the given engineering context). Its purpose is the judgement of the cybersecurity of the item or component, and it is input for the decision whether the item is ready for release (e.g. transfer to the post-development phase). Inputs: Automotive SPICE for Cybersecurity assessment report, organisational CS audit report, project plan, CS plan, TARA results, and work product characteristics from Automotive SPICE for Cybersecurity and from ISO/SAE 21434 [17].

Element 3: Incident Response Management
This element gives an overview of methods and approaches to handle and react to public incidents in the field, with connected procedures to alert consumers and the authorities. Incident response management, procedures with all relevant suppliers and the forming of an urgent response team are described within this element. General guidelines for IT Incident Response Management (IRM) also apply to the automotive sector. Basic principles are described in ISO/IEC 27035-1:2016.

Make sure external suppliers also have an IRM plan.
• All external suppliers of components that are susceptible to security incidents must have an IRM plan in place.
Dedicate 24/7 contacts on both sides.
• Contacts on both sides should be available 24/7.
Share information about security incidents.
• All information about existing or possible security incidents should be shared.
Work together on solutions.
• Work together with the supplier on solutions for further mitigation of similar security incidents.
Element 4: Supply Chain Security
This element gives an overview of the entire supply chain and the necessary controls to keep up a secure environment. The setting up of Cybersecurity Interface Agreements based on security requirements with suppliers is discussed, as well as the definition of secure interfaces between suppliers during development, operation, and maintenance. Planning and preparation for cybersecurity audits and potential human risks are outlined. A Cybersecurity Interface Agreement (CIA) is needed to manage the responsibility shared along the chain OEM → Supplier 1 → Supplier 2 → …


3.3 U.3 Engineering Aspects of Cybersecurity

This unit gives an overview of analysis and design techniques for cybersecurity during development, such as: (1) system threat analysis and cybersecurity goals; (2) system/software design and vulnerability analysis; (3) software detailed design and cybersecurity; and (4) hardware and firmware design.

Element 1: System Threat Analysis and Cybersecurity Goals
This element addresses types of attacks, known threat lists, cybersecurity assets, Threat and Risk Analysis (TARA), cybersecurity goals and threat modelling tools. System items, including the assets that could be attacked, are described [7, 8]. Input sources for the analysis: cybersecurity item - block diagram (item), critical function analysis, tech stack, asset list, and system-level threat model. The analysis proceeds as: impact analysis → threat analysis (scenarios and attacks) → risk assessment → cybersecurity goals.

Element 2: System Design and Vulnerability Analysis
This element addresses the related cybersecurity methods at the system level. This includes the application of cybersecurity design patterns on the system level, performing an attack tree analysis and a vulnerability analysis, and the integration of proper defence mechanisms. Methods on how to write cybersecurity requirements and how to integrate cybersecurity views into the system architectural design are described.

Attack Vector Modelling
Additional system design views to model attack vectors are required. The system design views highlight assets that can be attacked and illustrate the attack flow. There is no fixed modelling notation for drawing attack vector designs; in automotive, different modelling techniques are currently allowed:

• Attack path on a UML diagram:
  • sequence diagram
  • activity diagram
  • DFD diagram
  • …

In the iNTACS training for Automotive SPICE, the assessors are asked to check whether threat models/modelling of attack vectors are included as an additional view in the system design.
Attention: a dynamic view is also required: an attack flow, a security signal flow, a STRIDE threat model, or an attack tree (if it also shows the hierarchy of interactions needed to launch the attack) [18, 25].

Element 3: Software Design and Vulnerability Analysis
This element looks at cybersecurity methods at the SW level, such as cyber-secure data analysis, function analysis and the development of cybersecurity software requirements. The integration of cybersecurity views into the SW architectural design as well as the application of up-to-date SW-related design patterns is outlined [22, 23]. Additional software design views are required to model attack vectors. These views highlight the interfaces that can be attacked and to which cybersecurity controls can be assigned. The software threat models are done per state of the software system. There is no fixed modelling notation for drawing attack vectors/threat models. Requirements derivation follows a defined cybersecurity pattern: attack → cybersecurity property → cybersecurity control → cybersecurity requirement, and each requirement must be traceable back to the cybersecurity goal.

Element 4: Software Detailed Design and Cybersecurity
This element considers cybersecurity methods at the software detailed design level, including SW design principles, critical code inspections, reviews, and the selection of development tools and environments (secure session key generation by a random generator, encryption of signals, or a secure key store). A Security Critical Function (SCF) implements a functionality (or sub-function) that was identified as security-critical in the system-level TARA:
• SCFs shall only be active/accessible in SW states where they are needed.
• SCFs always process/access Security Critical Data (SCD).
• SCFs shall meet their required security objectives.
Security Critical Data (SCD) is data accessed by Security Critical Functions:
• SCD shall follow a Create-Read-Update-Delete (CRUD) lifecycle analysis.
• SCD is “instantiated” at different places using different protection mechanisms.
• SCD shall meet its required security objectives.

Element 5: Cybersecurity Hardware and Firmware Design
This element looks at methods at the hardware detailed design level, including the integration of an HSM (Hardware Security Module) on the ECU (Electronic Control Unit), the architecture of the HSM, the controller operating systems, firmware interfaces, and the configuration of the secure communication stack and the main diagnostic security services. All relevant communication between different hardware devices should be properly encrypted (with the possible exception of real-time-critical direct communication), and devices should feature signed certificates for authentication. The HSM is responsible for everything related to cryptography (true random number generator, cryptographic keys, encryption functions, …) [24]. The real-time operating system (RT-OS) used by the HW modules should already provide a cybersecurity-hardened ecosystem for the developers and fulfil the requirements of the automotive industry.
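The measures named in Elements 4 and 5 (secure session key generation by a random generator, protection of signals) can be illustrated with a short sketch. Python's stdlib CSPRNG and HMAC are used purely as stand-ins: on a real ECU the key would come from the HSM's true random number generator and never leave the secure key store, and confidentiality would additionally require an authenticated cipher (e.g. AES-GCM) provided by the HSM.

```python
import hashlib
import hmac
import secrets

def generate_session_key() -> bytes:
    """256-bit session key from a cryptographically secure RNG.

    Stand-in for the HSM TRNG described in Element 5."""
    return secrets.token_bytes(32)

def protect_signal(key: bytes, payload: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag

def verify_signal(key: bytes, message: bytes) -> bytes:
    """Check the tag in constant time; raise on a spoofed or altered frame."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("signal failed authentication")
    return payload

key = generate_session_key()
frame = protect_signal(key, b"\x01\x42 speed=87")
assert verify_signal(key, frame) == b"\x01\x42 speed=87"
```

Note that this sketch provides integrity and authenticity only, not encryption; it shows why the session key must come from a true random source, since a predictable key lets an attacker forge valid tags.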


3.4 U.4 Testing Aspects of Cybersecurity

Unit 4 addresses the different test levels and test methods to be applied in cybersecurity development: testing aspects of cybersecurity, testing types and methods, as well as the methods proposed by automotive norms [9].

Element 1: Cybersecurity Verification and Validation at SW Level
This element includes aspects and requirements of SW testing to cover the cybersecurity-relevant SW requirements: (1) test methods proposed by automotive norms and guidebooks (ISO/SAE 21434, SAE J3061, UNECE regulations and other sources such as OWASP); (2) SW unit testing (MISRA) and verification; (3) SW integration testing; (4) SW function testing; and (5) penetration testing. An overview of all aspects of verification and validation at the SW level is introduced.

Element 2: Cybersecurity Verification and Validation at HW Level
This element includes aspects and requirements of HW testing to cover the cybersecurity-relevant HW requirements: certified HSM architectures, HSM module verification, integration of SSL/SSA libraries, and customer-specific test environments/tools. An overview of all aspects of verification and validation at the HW level is introduced.

Element 3: Cybersecurity Verification and Validation at System Level
This element includes aspects and requirements of system testing to cover the cybersecurity-relevant system requirements, such as general knowledge of the test methods proposed by the norms and guidebooks (ISO/SAE 21434, SAE J3061, and OWASP). Further validation and verification at the system level during system integration via system testing and penetration testing is discussed. An overview of all aspects of verification and validation at the system level is introduced. What to test? Processes, implementation technologies, infrastructure, and people.
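A flavour of the SW-level test types above (positive, negative and fuzz-style cases) can be sketched against a hypothetical diagnostic-request parser. Both the parser and its service allow-list are invented for illustration; they are not taken from ISO/SAE 21434 or any guidebook, although 0x10, 0x22 and 0x27 are real UDS service identifiers.

```python
import secrets

def parse_diag_request(frame: bytes) -> dict:
    """Accept only well-formed frames: [service_id][length][payload]."""
    if len(frame) < 2:
        raise ValueError("frame too short")
    service_id, length = frame[0], frame[1]
    payload = frame[2:]
    if len(payload) != length:
        raise ValueError("length field does not match payload")
    if service_id not in (0x10, 0x22, 0x27):  # allow-list of services
        raise ValueError("unknown service")
    return {"service": service_id, "payload": payload}

# Positive test: a valid request parses.
assert parse_diag_request(bytes([0x22, 0x02, 0xF1, 0x90]))["service"] == 0x22

# Negative tests: malformed frames must be rejected, never processed.
for bad in (b"", b"\x22", bytes([0x22, 0x05, 0x00])):
    try:
        parse_diag_request(bad)
        raise AssertionError("malformed frame was accepted")
    except ValueError:
        pass

# Fuzz-style test: random frames must either parse cleanly or raise
# ValueError -- any other exception or a crash would be a finding.
for _ in range(1000):
    frame = secrets.token_bytes(secrets.randbelow(6))
    try:
        parse_diag_request(frame)
    except ValueError:
        pass
```

The fuzz loop captures the core idea of robustness testing at the SW level: the property under test is not a specific output but the absence of any behaviour outside the specified error handling.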

4 Course Implementation – Experience

The Covid-19 situation slowed down both the development of the training and its integration into the university programmes (all universities online, slow negotiations, …). We therefore decided to change the project plan: instead of the planned local courses, we ran a joint pilot course for all partners’ students and for trainees from industry. Students and industrial partners attended the first pilot one-week training programme (online, split into several training days) from 11/2022 to 3/2023. In addition, several students were trained during university courses, mainly during 2022 and the beginning of 2023, by VSB-TUO and TUG, and in-house/online courses were performed for the industry partners. Overall, more than 200 trainees/students were trained by 4/2023. The training was also used for train-the-trainers, where trainers from industry (ISCN, RealSec, AIT) and university lecturers (VSB-TUO, TUG) learned from each other. The EuroSPI online academy hosted the training, and the SOQRATES group integrated the courses into their regular meetings. This ensured that major automotive suppliers attended and field-tested the course (Fig. 3).


S. Stolfa et al.

Fig. 3. Training in the EuroSPI Academy

5 Summary and Conclusions

The overall target to train more than 100 trainees was reached (more than 209 in total). The quality of implementation even exceeded expectations, as the students and trainees got the opportunity to work together on examples, listen to each other’s ideas during practical work, get in contact with colleagues from different regions and companies, and learn from each other. The training was also used for train-the-trainers, where trainers from industry (ISCN, RealSec, AIT) and university lecturers (those who participated in the material creation and intended to become trainers) learned from each other. Moreover, trainers benefited from the internal recordings of the training events, so all partners’ trainers could use the recordings to learn, understand and demonstrate the knowledge. The trainers’ pool is envisioned to grow; this has already started with the acceptance of two trainers from outside the project partnership. So far, more than 10 trainers have been certified. The project contributes to the achievement of some of the most relevant European priorities:

Transparency and recognition of skills and qualifications. The project applied the EuroSPI schema with a Europe-wide certificate and micro-credentials. The trainers were trained to hold courses in industry and at the universities as standard higher education courses. Trainers also shared knowledge with other trainers. Courses were ECTS-granted courses with a certificate obtained from EuroSPI. In addition, an online training environment was set up as part of the EuroSPI online campus, allowing joint teaching across the universities.

Promoting and rewarding excellence in teaching and skills development. The training uses new best practices for cybersecurity architectures and testing. Three partners in this project were members of the EU Blueprint project DRIVES [6], which analysed the key job roles to support future developments in the automotive industry.
One of the key job roles is to support the automotive industry in developing highly autonomous vehicles, including connected and highly automated services embedded in infrastructure with the IoT (Internet of Things). However, this new approach makes cars vulnerable to cybersecurity attacks, leading e.g. to crashes or the non-availability of cars. All suppliers and car makers need to develop cybersecurity design strategies and skills for the workforce to develop such cyber-secure solutions; this was recognized as one of the most needed job roles by the pan-European survey run by the European New Skills Agenda Blueprint project DRIVES – The Development and Research on Innovative Vocational Educational Skills (coordinated by VSB-TUO).

Tackling skills gaps and mismatches. All manufacturers and suppliers are currently moving towards highly autonomous vehicle and e-mobility designs and urgently need to retrain engineers to develop secure and safe car systems. No courses at the university level covered that need until the CYBERENG project was launched.

6 Future of the CYBERENG Training

From the experience of building up sustainable European and international partnerships, we know that building a win-win situation for all partners participating in the alliance is indispensable for the sustainability and growth of the alliance after the funding period. This is formally complemented and supported by the following sustainability measures:

– The rules and guidelines for creating new curricula and for deploying and sustaining existing ones are clearly defined by the EuroSPI framework, which, as a branch of ECQA, has proven to function successfully in more than 20 different living European qualification programmes. All the materials, including the proposed programme, are entirely integrated into this framework.
– All the training partners signed, and new ones will sign, a standard EuroSPI training body agreement. The schema is open to adding further training bodies across Europe.
– The EuroSPI infrastructure is used for exams and for issuing certificates. This way, the partners can organize courses and exams in cooperation with EuroSPI (independently of the training bodies, as required by the ISO 17024 standard [43]) at the end of each course.
– Rules for extending the job role committee with new members are defined.
– The course is promoted to the Automotive Skills Alliance and the ASA Skills Framework.
– The university partners integrated the course into their existing lecturing programmes with ECTS.
– A Job Role Committee (all partners, based on standard EuroSPI guidelines) was formed to maintain the skills card and test questions in regular meetings and workshops, based on exchanging experiences and feedback from deploying the training on the market. This is essential to regularly align the skillset with the evolving market needs and to enrich and improve the pool of case studies and test questions.

The main products are:
– the skillset
– the set of training materials
– the established e-learning portal – https://academy.eurospi.net
– certification and certificate issuing established in cooperation with EuroSPI Certificates and Services – https://www.eurospi.net


7 Relation to SPI Manifesto

With this work, we contribute to the principles and values described in the SPI Manifesto of the community [11, 19]. Specifically, we aim to enhance the involvement of people through training formats and thus improve the competitiveness of organisations (A.2). We aim to further enhance learning organisations and learning environments (4.1) and thereby also support the vision of different organisations and empower additional business objectives (5.1).

Acknowledgements. The project ECQA Certified Cybersecurity Engineer and Manager – Automotive Sector (CYBERENG) is co-funded by the Erasmus+ Call 2020 Round 1 KA203 Programme of the European Union under agreement 2020-1-CZ01-KA203-078494. This work is partially supported by Grant SGS No. SP2023/065, VSB – Technical University of Ostrava, Czech Republic. We are grateful to the SOQRATES working party of automotive suppliers [40] (https://soqrates.eurospi.net), who provided inputs for cybersecurity best practices.
This includes: Dallinger Martin (ZF), Dorociak Rafal (HELLA), Dreves Rainer (Continental), Ekert Damjan (ISCN), Forster Martin (ZKW), Gasch Andreas (Cariad), Geipel Thomas (Robert BOSCH GmbH), Grave Rudolf (Tasking), Griessnig Gerhard (AVL), Gruber Andreas (CERTX), Habel Stephan (Continental), Karner Christoph (KTM), Kinalzyk Dietmar (AVL), König Frank (ZF), Kotselidis Christos (Pierer Innovation), Kurz-Griessnig Brigitte (Magna ECS), Lindermuth Peter (Magna Powertrain), Macher Georg (TU Graz), Mandic Irenka (Magna Powertrain), Mayer Ralf (BOSCH Engineering), Messnarz Richard (ISCN), Much Alexander (Elektrobit AG), Nikolov Borislav (MSG Plaut), Oehler Couso Daniel (Magna Powertrain), Pernpeintner Michael (Schaeffler), Riel Andreas (Grenoble iNP, ISCN Group), Rieß Armin (BBraun), Santer Christian (AVL), Shaaban Abdelkader (AIT), Schlager Christian (Magna ECS), Schmittner Christoph (AIT), Sebron Walter (MSG Plaut), Sechser Bernhard (Process Fellows), Sporer Harald (Infineon), Stahl Florian (AVL), Wachter Stefan, Walker Alastair (MSG Plaut), Wegner Thomas (ZF), Geyer Dirk (AVL), Dobaj Jürgen (TU Graz), Wagner Hans (MSG Systems), Aust Detlev, Zurheide Frank (KTM), Suhas Konanur (ENX), Erik Wilhelm (Kyburz), Noha Moselhy (VALEO), Jakub Stolfa (VSB TUO), Michael Wunder (Hofer Powertrain), Svatopluk Stolfa (VSB TUO).

References

1. Much, A.: Automotive security: challenges, standards and solutions. Software Quality Professional, September 2016
2. Riel, A., Kreiner, C., Messnarz, R., Much, A.: An architectural approach to the integration of safety and security requirements in smart products and systems design. CIRP Ann. 67(1), 173–176 (2018)
3. Automotive cybersecurity by design. GuardKnox, 22 May 2022. https://www.guardknox.com/automotive-cybersecurity-by-design/. Accessed 15 Apr 2022


4. Automotive Cybersecurity: Vector Informatik GmbH, 15 April 2022. https://www.vector.com/int/en/products/solutions/safety-security/automotive-cybersecurity/. Accessed 15 Apr 2022
5. DRIVES Project: DRIVES Framework: Deliverable 4.4.2 Full Definition and Rules of the ERFA. https://www.project-drives.eu/Media/Publications/211/Publications_211_20211201_113520.pdf
6. EU Blueprint Project DRIVES. https://www.project-drives.eu/. Accessed 6 Apr 2023
7. Macher, G., Sporer, H., Berlach, R., Armengaud, E., Kreiner, C.: SAHARA: a security-aware hazard and risk analysis method. In: Design, Automation Test in Europe Conference Exhibition (DATE), pp. 621–624, March 2015
8. Macher, G., Messnarz, R., Kreiner, C., et al.: Integrated safety and security development in the automotive domain, Working Group 17AE-0252/2017-01-1661. SAE International, June 2017
9. GEAR 2030: European Commission, Commission launches GEAR 2030 to boost competitiveness and growth in the automotive sector (2016). http://ec.europa.eu/growth/tools-databases/newsroom/cf/itemdetail.cfm?item_id=8640
10. Hirz, M.: Automotive mechatronics training programme – an inclusive education series for car manufacturer and supplier industry. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 341–351. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_25
11. SPI Manifesto. https://conference.eurospi.net/images/eurospi/spi_manifesto.pdf. Accessed 20 Apr 2023
12. ISO – International Organization for Standardization: ISO 26262 Road vehicles – Functional Safety, Parts 1–10 (2011)
13. ISO – International Organization for Standardization: ISO 26262 2nd Edition Road vehicles – Functional Safety (2018)
14. ISO/SAE 21434, Road vehicles – Cybersecurity engineering (ISO Standard No. 21434). International Organization for Standardization (2021). https://www.iso.org/standard/70918.html. Accessed 6 Apr 2023
15. Ito, M.: Supporting process design in the autonomous era with new standards and guidelines. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 525–535. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_39
16. Stolfa, J.: DRIVES – EU blueprint project for the automotive sector – a literature review of drivers of change in automotive industry. J. Softw. Evol. Process 32(3) (2020). Special Issue: Addressing Evolving Requirements Faced by the Software Industry
17. KGAS, Konzerngrundanforderungen Software, Version 3.2, Volkswagen LAH 893.909: KGAS_3602, KGAS_3665, KGAS_3153, KGAS_3157, November 2018
18. Khan, R., McLaughlin, K., Laverty, D., Sezer, S.: STRIDE-based threat modeling for cyber-physical systems. In: 2017 IEEE PES Innovative Smart Grid Technologies Conference Europe (ISGT-Europe). IEEE (2018). https://doi.org/10.1109/ISGTEurope.2017.8260283
19. Korsaa, M., et al.: The SPI manifesto and the ECQA SPI manager certification scheme. J. Softw. Evol. Process 24(5), 525–540 (2012)
20. Macher, G., et al.: A study of electric powertrain engineering – its requirements and stakeholders perspectives. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 396–407. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_29
21. Macher, G., Brenner, E., Messnarz, R., Ekert, D., Feloy, M.: Transferable competence frameworks for automotive industry. In: Walker, A., O’Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 151–162. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_12
22. Macher, G., Much, A., Riel, A., Messnarz, R., Kreiner, C.: Automotive SPICE, safety and cybersecurity integration. In: Tonetta, S., Schoitsch, E., Bitsch, F. (eds.) SAFECOMP 2017. LNCS, vol. 10489, pp. 273–285. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-66284-8_23
23. Macher, G., Druml, N., Veledar, O., Reckenzaun, J.: Safety and security aspects of fail-operational urban surround perceptION (FUSION). In: International Symposium on Model-Based Safety and Assessment, pp. 286–300 (2019)
24. Macher, G., Sporer, H., Brenner, E., Kreiner, C.: Supporting cyber-security based on hardware-software interface definition. In: Systems, Software and Services Process Improvement – 23rd European Conference, EuroSPI 2016 Proceedings. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-319-44817-6_12
25. Mahmood, H.: Application threat modeling using DREAD and STRIDE (2017). https://haiderm.com/application-threat-modeling-using-dread-and-stride. Accessed 3 Aug 2018
26. Messnarz, R., et al.: Automotive cybersecurity engineering job roles and best practices – developed for the EU blueprint project DRIVES. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 499–510. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_37
27. Messnarz, R., Much, A., Kreiner, C., Biro, M., Gorner, J.: Need for the continuous evolution of systems engineering practices for modern vehicle engineering. In: Stolfa, J., Stolfa, S., O’Connor, R.V., Messnarz, R. (eds.) EuroSPI 2017. CCIS, vol. 748, pp. 439–452. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64218-5_36
28. Messnarz, R., Kreiner, C., Riel, A.: Integrating automotive SPICE, functional safety, and cybersecurity concepts: a cybersecurity layer model. Softw. Q. Prof. (2016)
29. Korsaa, M., et al.: The people aspects in modern process improvement management approaches. J. Softw. Evol. Process 25(4), 381–391 (2013). Special Issue: Selected Industrial Experience Papers of EuroSPI 2010
30. Messnarz, R., et al.: InnoTEACH – applying principles of innovation in school. In: Stolfa, J., Stolfa, S., O’Connor, R.V., Messnarz, R. (eds.) EuroSPI 2017. CCIS, vol. 748, pp. 294–301. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-64218-5_24
31. SAE J3061, Cybersecurity Guidebook for Cyber-Physical Vehicle Systems, SAE – Society of Automotive Engineers, USA, January 2016
32. Schmittner, C., Macher, G.: Automotive cybersecurity standards – relation and overview. In: International Conference on Computer Safety, Reliability, and Security, pp. 153–165 (2019)
33. Schmittner, C., et al.: Automotive cybersecurity – training the future. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 211–219. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_14
34. Skills set for an ECQA Certified Cybersecurity Engineer and Manager (2021). https://www.project-cybereng.eu/wp-content/uploads/2021/08/CYBERENG-IO2-Skills-Set.pdf. Accessed 15 Apr 2023
35. SOQRATES, Task Forces Developing Integration of Automotive SPICE, ISO 26262 and SAE J3061. http://soqrates.eurospi.net/
36. Stolfa, J., et al.: Automotive quality universities – AQUA alliance extension to higher education. In: Kreiner, C., O’Connor, R.V., Poth, A., Messnarz, R. (eds.) EuroSPI 2016. CCIS, vol. 633, pp. 176–187. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44817-6_14
37. Stolfa, J., et al.: Automotive engineering skills and job roles of the future? In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 352–369. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_26


38. Stolfa, S., et al.: Automotive cybersecurity manager and engineer skills needs and pilot course implementation. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) EuroSPI 2022. CCIS, vol. 1646. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_24
39. VDA QMC, Automotive Cybersecurity Management System Audit, 1st edn., December 2020
40. Chauhan, Y.: The 7 security objectives of any organization for IT and network security. https://yogeshchauhan.com/the-7-security-objectives-of-any-organization-for-it-and-network-security. Accessed 15 Apr 2022
41. UNECE World Forum for Harmonization of Vehicle Regulations (WP.29) UN R155. https://unece.org/transport/vehicle-regulations-wp29/standards/addenda-1958-agreement-regulations-141-160. Accessed 1 Apr 2023
42. UNECE World Forum for Harmonization of Vehicle Regulations (WP.29) UN R156. https://unece.org/transport/vehicle-regulations-wp29/standards/addenda-1958-agreement-regulations-141-160. Accessed 1 Apr 2023
43. ISO/IEC 17024:2012 Conformity assessment – General requirements for bodies operating certification of persons. https://www.iso.org/standard/52993.html. Accessed 1 Apr 2023
44. Messnarz, R., Ekert, D., Zehetner, T., Aschbacher, L.: Experiences with ASPICE 3.1 and the VDA automotive SPICE guidelines – using advanced assessment systems. In: Walker, A., O’Connor, R.V., Messnarz, R. (eds.) EuroSPI 2019. CCIS, vol. 1060, pp. 549–562. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-28005-5_42
45. Aschbacher, L., Messnarz, R., Ekert, D., Zehetner, T., Schönegger, J., Macher, G.: Improving organisations by digital transformation strategies – case study EuroSPI. In: Yilmaz, M., Clarke, P., Messnarz, R., Wöran, B. (eds.) EuroSPI 2022. CCIS, vol. 1646. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-15559-8_51
46. Ekert, D., Messnarz, R., Norimatsu, S., Zehetner, T., Aschbacher, L.: Experience with the performance of online distributed assessments – using advanced infrastructure. In: Yilmaz, M., Niemann, J., Clarke, P., Messnarz, R. (eds.) EuroSPI 2020. CCIS, vol. 1251, pp. 629–638. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-56441-4_47
47. Messnarz, R., et al.: First experiences with the automotive SPICE for cybersecurity assessment model. In: Yilmaz, M., Clarke, P., Messnarz, R., Reiner, M. (eds.) EuroSPI 2021. CCIS, vol. 1442, pp. 531–547. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-85521-5_35

Author Index

A Aarup, Mike I-220 Abdelkader, Shaaban I-366 Adel, Ahmed II-96 Akasaka, Tomoki II-300 Albers, Karsten I-156 Arakawa, Shin’ichi II-300 Arsovic, Andjela II-207 Aschbacher, Laura I-343, II-219 Ates, Alev II-3

B Ben-Rejeb, Helmi II-182 Berktaş, Osman Tahsin I-96 Bielinska, Sylwia I-124 Bonilla Carranza, David I-84 Breitenthaler, Jonathan II-219 Brenner, Eugen I-343 Buckley, Carla I-124 Byrne, Ailbhe I-20

C Camara, Abasse I-20 Clarke, Paul I-96 Clarke, Paul M. I-20, I-72, I-124 Colomo-Palacios, Ricardo I-47 Coptu, Andreea I-124 Cortina, Stéphane II-125 Cristian Riis, Hans I-273

D Danmayr, Tobias I-343, II-219 De Buitlear, Caoimhe I-20 Dhungana, Deepak I-36 Dolejsi, Petr I-196 Dragicevic, Nikolina II-275 Dubickis, Mikus II-219 Duong, Quoc Bao I-356

E Ekert, Damjan I-289, I-343, I-366, II-219 Ekman, Mats I-156 F Faschang, Thomas I-316 Fehlmann, Thomas I-329 Fessler, Anja I-3 Flatscher, Martina I-3 Freed, Martina I-124 G Gallina, Barbara I-220 Garnizov, Ivan II-207 Gasca-Hurtado, Gloria Piedad I-59 Georgiadou, Elli I-258, II-166, II-193 Ghodous, Parisa I-138 Gkioulos, Vasileios I-47 Griessnig, Gerhard II-113 Groher, Iris I-36 H Heere, C. II-30 Heimann, Christian II-260 Hidalgo-Crespo, José I-186 Hofmeister, Justus II-139 Hüger, Fabian I-111 I Iriskic, Almin I-343

J Janez, Isabel I-3 Jobe, Pa Sulay I-72 Johansen, Jørn I-237, I-273 Jolevski, Ilija II-207 K Kinalzyk, Dietmar II-139 Konarzewski, Bianca II-289

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023 M. Yilmaz et al. (Eds.): EuroSPI 2023, CCIS 1890, pp. 385–387, 2023. https://doi.org/10.1007/978-3-031-42307-9


Korsaa, Morten I-237, I-273 Kosmanis, Theodoros I-171 Kottke, Mario II-16, II-46 Kranich, Eberhard I-329 Krisper, Michael I-366

L Lampropoulos, Georgios II-166, II-193 Leclaire, Marcel I-205 Leino, Tiina II-275 Levien, D.-A. II-30 Liedtke, Thomas I-289 Loughran, Róisín II-61 Loveday, Joanna I-258

M Macher, Georg I-316, I-343, I-366, II-248, II-275 Makkar, Samer Sameh I-205 Maratsi, Giorina I-138 Matulevičius, Raimundas I-138 McCaffery, Fergal II-61 McCarren, Andrew I-20 McEvoy, Eric I-20 McHugh, Martin II-61 Messnarz, Richard I-124, I-289, I-343, I-366, II-219 Moselhy, Noha II-96 Much, Alexander I-289, I-366 Munoz, Marta II-219 Muñoz, Mirna I-84 Murata, Masayuki II-300

N Narayan, Rumy II-248, II-275 Noebauer, Markus I-36 Nugent, Christopher II-61 Nyirenda, Misheck II-61

O Odeleye, Olaolu II-219 Olesen, Thomas Young I-220 Özdemir, Taner II-79

P Parajdi, Eszter I-220 Peisl, Thomas II-3 Peña Pérez Negrón, Adriana I-84 Picard, Michel II-125 Pörtner, Lara I-205 Poth Alaman, Gabriel II-234 Poth, Alexander I-111, II-16, II-30, II-46, II-151, II-234, II-260 Pries-Heje, Jan I-273

Q Quisbert-Trujillo, Ernesto II-182

R Rahanu, Harjinder I-258, II-166, II-193 Ramaj, Xhesika I-47 Ramos, Lara II-219 Reiner, Michael I-171, II-289 Renault, Samuel II-125 Restrepo-Tamayo, Luz Marcela I-59 Riel, Andreas I-138, I-171, I-186, I-205, I-356 Ross, Margaret I-258, II-193 Rrjolli, Olsi II-151 Rubin, Niels Mark I-237

S Salamun, Alen I-366 Sánchez-Gordón, Mary I-47 Schardt, Mourine II-16, II-46 Schlager, Christian I-343 Schmelter, David I-156 Schmittner, Christoph I-366 Schösler, Hanna I-138 Seddik, Ahmed II-96 Siakas, Dimitrios II-166, II-193 Siakas, Errikos I-258 Siakas, Kerstin I-258, II-166, II-193 Solomos, Dionysios I-138 Song, Xuejing II-113 Spanyik, Marek I-196, I-366 Steghöfer, Jan-Philipp I-156 Stolfa, Jakub I-196, I-366 Stolfa, Svatopluk I-366

T Tessmer, Jörg I-156 Tharot, Kanthanet I-356 Thiriet, Jean-Marc I-356 Tüfekci, Aslıhan I-72 Tziourtzioumis, Dimitrios I-171

U Ünal, Ayşegül II-79

V Valoggia, Philippe II-125 Veledar, Omar II-275

W Walgenbach, Roland I-111 Walter, Bartosz II-207 Weber, Raphael I-156 Wittmann, Andreas I-111

Y Yılmaz, Murat I-20, I-72, I-96, I-124

Z Zelmenis, Mikus II-219 Ziegler, Alexander II-3