Frank J. Furrer
Future-Proof Software-Systems A Sustainable Evolution Strategy
Frank J. Furrer Computer Science Faculty Technical University of Dresden Dresden, Germany
ISBN 978-3-658-19937-1 ISBN 978-3-658-19938-8 (eBook) https://doi.org/10.1007/978-3-658-19938-8 Library of Congress Control Number: 2019936175 Springer Vieweg © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer Vieweg imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH part of Springer Nature. The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany
Foreword 1
I believe that the success of agile software development lies in software refactoring with continuous regression testing. When a refactoring takes place, the testing process guarantees that the modifications preserve the functions of the program. Preservation here means "to retain the invariant of the semantics of the program", i.e., to keep something very important, the "brain" of the program, while changing something less important, the morphology of the program. The developer, so to say, stays on a safe, large enough ice shelf in a cold sea. That accounts for the success of the method. However, when software-systems get large and consist of many programs and models, these cannot be refactored together, and regression testing no longer does the job. Maintaining and extending large software-systems destroys another important intrinsic value of a program: its architecture. Then other methods of "staying on the safe ice shelf" must take effect, methods that preserve the extensibility, the maintainability, the quality of the architecture, and thereby its business value. However, beyond refactoring, few methods exist for architectural evolution, so developers often lose their battle against disorder and chaos. It seems that entropy is the winner, and that is the reason why most of our software-systems become legacy after a number of years: they lose their heart, the flexible architecture. Into this gap, Frank J. Furrer positions his book "Future-Proof Software-Systems". The method of Managed Evolution fights entropy, technical debt, and architecture erosion. At any point in time, the Managed Evolution process makes sure that the principles of a clean, flexible architecture are not violated. This is ensured by rigid process rules enforcing well-defined architectural principles, as well as the continuous employment of metrics to measure the flexibility of the architecture, so that the software can stay in a "quality corridor".
It is well known that refactoring a (small) program comes with some effort, because the refactoring does not introduce new functions but only restructures the program for better quality. For large software, Managed Evolution has the same effect: first it costs, but overall it pays off, because the effort of a disciplined evolution retains the business value of the software-system. Therefore, it could be said that Managed Evolution is a form of "agile development in the large". And why will it be successful? Because, for huge software-systems, it shows how to stay on the safe ice shelf.
Frank J. Furrer has been teaching Future-Proof Software-Systems here at Technische Universität Dresden for more than five years now, reaching out to many enthusiastic students in seminars and lectures. He has been able to share his longstanding experience in software-systems and software research with these students, and I wish this experience to now also be shared with you. So, make yourself knowledgeable about how software-systems can be kept in the "Managed Evolution corridor". Learn how to stay on the safe, large enough ice shelf. October 2018
Prof. Dr. Uwe Aßmann Institut für Software- und Multimediatechnik Fakultät Informatik Technische Universität Dresden D-01062 Dresden/Germany
Foreword 2
This book is about software architecture. Software architecture prescribes rules for the organization of the parts of a large software-system and establishes principles that govern the interactions among those parts. The book develops the architecture principles that must be followed in order to arrive at a future-proof software-system. A future-proof software-system is one that is not only of value at the time of its inception but one that can provide a useful service in the future of a changing world. The Greek philosopher Heraclitus observed more than 2000 years ago that Panta rhei: all entities move and nothing remains still. In today's world of accelerating technological and societal change, system requirements are in continuous flux, and only those software-systems will survive that are capable of evolving and integrating the necessary changes without destroying the underlying system architecture. It is the purpose of a software product to provide an effective service to a person or an organization that uses this software in their daily work. This service-oriented view of software includes a deep understanding of the needs of the application environment and is open to the alterations that are required to maintain the utility of the software-system in a dynamic world, where the evolving business environment requires the steady modification of existing functions and the addition of new software functions. This service-oriented view is thus much broader than a strictly product-oriented view of software. The author of this book, Frank J. Furrer, has worked with a world-renowned banking organization and has acquired many years of first-hand experience of the impact of a changing business environment on the requirements and operation of large software-systems that are central to the success of an international bank.
Out of this real-life experience, he distilled important software architecture principles that should be followed in order to arrive at a future-proof software-system. The comprehensive discussion of these principles forms the core of this interesting book.
The book is an enjoyable read. It is a wonderful experience to absorb a text where the practical wisdom acquired in the life of an outstanding system architect is presented in a succinct form and supported by a number of real-life examples. I am sure that many readers of this book, whether young students or experienced practitioners in the field of software engineering, will greatly benefit from the insights gained by working through the text. October 2018
Prof. Dr. Hermann Kopetz Em. o. Univ. Prof. Vienna University of Technology
Preface
Software is a key success factor for today's and, even more, tomorrow's products and services. Software has brought us a powerful communication and computing infrastructure, impressive business successes, highly useful products and services, a remarkable increase in the efficiency of work, and many more benefits. In fact, we live in a software world. An Internet search for "software" results in 2'350'000'000 hits (May 2017). Unfortunately, software is also responsible for a number of accidents. Such accidents range from fatal airplane crashes (Airbus A400M crash near Sevilla in May 2015: https://www.theregister.co.uk/2015/06/10/airbus_a400m_probe_torque_data, last accessed 13 May 2017) and huge financial losses (Knight Capital Group lost $440 million in 30 min on August 1, 2012: https://www.bloomberg.com/news/articles/2012-08-02/knight-shows-how-to-lose-440-million-in-30-minutes, last accessed 13 May 2017) to serious software-based security deficiencies (partially successful attempt to steal $1 billion from the Bangladesh Bank in February 2016: https://www.wired.com/2016/05/insane-81m-bangladesh-bank-heist-heres-know, last accessed 17 May 2017). An Internet search for "software accident" results in 39'500'000 hits (May 2017). The tremendous impact of software on all areas of work and life is undisputed and is growing every day. The key success factor for most products and services is software, and this trend is accelerating. The software community has a strong responsibility to produce and operate dependable, trustworthy, and useful software. The software should at the same time provide business value and guarantee a number of application-dependent quality of service properties, such as security, safety, performance, maintainability, etc. Quote: "Products are becoming more capable and complex at an exponential rate. Product delivery cycles decrease monotonically.
The safety, reliability and security concerns for these systems are making these systems much more difficult to engineer. We need to be able to produce more capable systems in less time and with fewer defects”. [Douglass16]
This goal first requires an adequate development and evolution strategy for the software. The strategy must assure the continuous generation of business value and at the same
time the controlled improvement of specific quality of service properties of the software. Such a strategy has been developed and implemented, and it has proven its long-term value for a very large information system [Murer11, Furrer15]. This strategy was named Managed Evolution, expressing the stepwise, risk-controlled, integrated approach. A large number of theoretical papers starting in 1972 [Parnas72, Parnas79, Brooks75] and many case studies have conclusively shown that most of the properties of a software-system are heavily dependent on the architecture of the system. The equation "good architecture = good software properties/bad architecture = bad software properties" is painfully true for almost all software properties. Thus, "good architecture" must be defined, designed, maintained, and evolved. Architecture principles are a powerful and proven instrument for this: architecture principles are enforceable rules which define structural properties of the software-system. Thus, if a proven, complete set of architecture principles is available, "good architecture" can be equated with "adherence to the architecture principles". This methodology is therefore called principle-based architecting. Combining Managed Evolution and principle-based architecting leads to the type of software we need: software delivering high business value and providing adequate quality of service properties to assure dependable and trustworthy operation in a long-term sustainable way. This type of software is called Future-Proof Software-Systems in this book. Quote: "This book is not a solution book, but a foundations book, addressing the fundamental issues of the evolution of future-proof software-systems"
Producing and evolving Future-Proof Software-Systems is not a purely engineering discipline. The whole company or organization must be aligned to this evolution strategy. Especially the business divisions and all the management levels must actively support the Managed Evolution. Is this wishful thinking? Indeed not: several organizations have successfully adopted some implementation of Managed Evolution. The increasing pressure from society, lawmakers, competition, and the operational environment will force all producers of software to increase their business value and to considerably improve the quality of service properties of their software-systems. Managed Evolution is a way to do so. This monograph is strongly based on 13 years of active architecting in a global financial institution (reported in [Murer11]). I sincerely thank my colleagues Stephan Murer and Bruno Bonati for the development of the Managed Evolution basics. Since the winter term of 2013/2014, I have been teaching a full lecture, "Future-Proof Software-Systems", at the Computer Science Faculty of the Technical University of Dresden (Germany) every winter term. This has led to a considerable broadening of the material presented in [Murer11]. This monograph extends this knowledge. Stein am Rhein (Switzerland) December 2018
Frank J. Furrer
References

[Brooks75] Brooks FP (1975) The mythical man-month—essays on software engineering. Addison-Wesley Longman, Boston. ISBN 978-0-201-83595-3

[Murer11] Murer S, Bonati B, Furrer FJ (2011) Managed evolution—a strategy for very large information systems. Springer, Berlin. ISBN 978-3-642-01632-5

[Furrer15] Furrer FJ (2015) Zukunftsfähige Softwaresysteme – Zukunftsfähig trotz zunehmender SW-Abhängigkeit. Informatik Spektrum. Springer, Heidelberg. First online: 30 June 2015. https://doi.org/10.1007/s00287-015-0909-6, http://link.springer.com/article/10.1007/s00287-015-0909-6. Accessed 31 Dec 2015

[Parnas72] Parnas DL (1972) On the criteria to be used to decompose systems into modules. Communications of the ACM, Vol 15, No 12, December 1972. https://www.cs.umd.edu/class/spring2003/cmsc838p/Design/criteria.pdf. Accessed 16 May 2017

[Parnas79] Parnas DL (1979) Designing software for ease of extension and contraction. Third international conference on software engineering, Atlanta, GA, May 1978. IEEE Transactions on Software Engineering, Vol SE-5, No 2, March 1979. http://www.cs.umd.edu/class/spring2003/cmsc838p/Design/family.pdf. Accessed 28 May 2017
Acknowledgements
My first sincere thanks go to my coauthors of [Murer11] and long-term colleagues Stephan Murer (former global chief architect of CREDIT SUISSE) and Bruno Bonati (former CIO of CREDIT SUISSE). The work in the global architecture team of CREDIT SUISSE from 1997 to 2009 led to the fundamental ideas and the Managed Evolution strategy on which this book is based. Thanks go also to all the CREDIT SUISSE IT-architects involved at that time. This book grew out of my lectures as an Honorary Professor at the Faculty of Computer Science at the Technical University of Dresden (Germany), which started in the winter term of 2013/2014. Special thanks go to the chair of software technology, Prof. Dr. Uwe Aßmann, for his continued support. In addition, my numerous students contributed both to the quality of the content and to the didactic flow through their active participation during the lectures and their active engagement in my seminars. A book is never the achievement of a single author: he stands on the shoulders of giants, of which the extensive reference section at the end of each chapter is proof. Authoring an English-language book as a native German speaker is not a simple task. I would like to acknowledge the great help of the language checker "Grammarly" (https://www.grammarly.com). Last but not least, I wish to express my gratitude to SPRINGER-Verlag for their extensive support during the creation of this book. Especially Hermann Engesser for his encouragement, Sybille Thelen for her continuous, valuable assistance, Dorothea Glaunsinger for the administration, and Stefan Kreickenbaum for his continuous support made the publication process a successful pleasure. Finally, thanks go to you, my reader, for investing your valuable time in reading this book.
Contents
Part I Foundation

1 Software Everywhere ..... 3
   1.1 Software Everywhere ..... 4
   1.2 Opportunities for Software ..... 6
   1.3 Risks of Software ..... 7
   1.4 Balancing Opportunities and Risks of Software ..... 7
   1.5 Toward Dependable Software ..... 8
   References ..... 9
2 Force of Entropy ..... 11
   2.1 Entropy in Software ..... 11
   2.2 Technical Debt ..... 14
   2.3 Architecture Erosion ..... 15
   2.4 Fighting the Force of Entropy ..... 16
   References ..... 18
3 Three Devils of Systems Engineering ..... 21
   3.1 Systems Engineering ..... 21
   3.2 Three Devils of Systems Engineering ..... 23
   3.3 Complexity ..... 24
   3.4 Change ..... 26
   3.5 Uncertainty ..... 27
   3.6 Structure of Complex Systems ..... 29
      3.6.1 Horizontal Architecture Layers ..... 30
      3.6.2 Vertical Architecture Layers ..... 32
      3.6.3 Hierarchy Levels ..... 34
   3.7 Types of Information Processing Systems ..... 35
      3.7.1 Enterprise Computing ..... 35
      3.7.2 Embedded Computers ..... 36
      3.7.3 Cyber-Physical Systems ..... 36
   3.8 Systems-of-Systems ..... 39
   References ..... 40

4 Future-Proof Software-Systems ..... 45
   4.1 Why Future-Proof Software-Systems? ..... 45
   4.2 Future-Proof Software-Systems Definition ..... 46
   4.3 Systems Engineering and Software Engineering ..... 47
   4.4 Business-IT Alignment ..... 50
   4.5 Business Value ..... 52
   4.6 Changeability ..... 52
   4.7 Dependability ..... 53
   4.8 Architecture, Architecture, Architecture! ..... 54
   References ..... 54
5 Evolution Strategies ..... 57
   5.1 The Pathway to Future-Proof Software-Systems ..... 57
   5.2 Managed Evolution Strategy ..... 59
      5.2.1 Principle ..... 59
   5.3 Business Value Metric ..... 64
   5.4 Changeability Metric ..... 66
   5.5 Dependability Metric ..... 69
   5.6 Other Quality of Service Property Metrics ..... 74
   5.7 Software Quality Metrics ..... 75
   5.8 Quality of Service Properties Scorecard ..... 75
   5.9 Managed Evolution Operationalization ..... 76
   5.10 Continuous Improvement, Constant Rearchitecting, Regular Refactoring ..... 76
   5.11 Progress Tracking ..... 80
   5.12 Periodic Architecture Programs ..... 80
   5.13 Processes for Managed Evolution ..... 81
   5.14 Architecture Process ..... 82
   5.15 The Value of Managed Evolution ..... 83
   5.16 Final Words ..... 84
   References ..... 85
6 Architecture ..... 91
   6.1 Architecture Definition ..... 91
      6.1.1 Structure ..... 92
      6.1.2 Behavior ..... 92
      6.1.3 Properties ..... 93
      6.1.4 Levels of Architecture ..... 93
      6.1.5 Parts, Relationships, and Models in the Application Architecture Hierarchy ..... 94
   6.2 Key Importance of Architecture ..... 96
      6.2.1 Impact of Architecture ..... 96
      6.2.2 Longevity of Architecture ..... 97
   6.3 Legacy Systems ..... 98
   6.4 Architecture Integration Challenge ..... 99
   6.5 Evolutionary Architecture ..... 102
   6.6 Architecture Evaluation ..... 105
   6.7 How much Architecture? ..... 105
   6.8 Architecture Tools ..... 107
   References ..... 108
7 Principle-Based Architecting ..... 113
   7.1 Principles in Science ..... 113
   7.2 Software Architecture Principles ..... 114
   7.3 Horizontal Architecture Principles ..... 115
   7.4 Vertical Architecture Principles ..... 116
   7.5 Software Architecture Principle Formalization ..... 117
   7.6 Software Architecture Patterns ..... 118
   7.7 Patterns ..... 118
   7.8 Anti-Patterns ..... 120
   7.9 Principle-Based Architecting ..... 120
   References ..... 121
8 Context for Managed Evolution ..... 125
   8.1 Context Requirements ..... 125
   8.2 Company Management ..... 126
   8.3 Architecture Knowledge ..... 127
      8.3.1 Formalized Architecture Knowledge ..... 127
      8.3.2 Architecture Governance ..... 129
   8.4 Architecture Organization ..... 130
   8.5 Company Culture ..... 132
   8.6 IT Staff ..... 133
   8.7 Metrics ..... 133
   8.8 A Final Look ..... 135
   References ..... 135
9 The Future ..... 139
   9.1 The Raise in Power of the Three Devils ..... 140
      9.1.1 Complexity ..... 140
      9.1.2 Change ..... 142
      9.1.3 Uncertainty ..... 144
   9.2 Increased Risk and More Sophisticated Threats ..... 145
   9.3 Increasing Abstraction Levels ..... 146
      9.3.1 Separation of Concerns ..... 146
      9.3.2 Abstractions ..... 147
   9.4 Models-to-Code ..... 149
   9.5 Provably Correct Software ..... 150
   References ..... 152
10 Special Topics ..... 159
   10.1 Agile Methods ..... 159
      10.1.1 The Agile Manifesto ..... 159
      10.1.2 Agile Application Spectrum ..... 160
      10.1.3 Agile Methods and Future-Proof Software-Systems ..... 162
      10.1.4 Large-Scale Agile ..... 164
      10.1.5 Agility against Architecture? ..... 165
      10.1.6 Agile Requirements Engineering ..... 166
      10.1.7 Agile Risk Management ..... 168
   10.2 Continuous Delivery and DevOps ..... 170
      10.2.1 Continuous Delivery ..... 171
      10.2.2 DevOps ..... 172
      10.2.3 Continuous Delivery, DevOps and Future-Proof Software-Systems ..... 174
   10.3 Legacy Software Modernization/Migration ..... 176
      10.3.1 Legacy Software ..... 176
      10.3.2 Legacy Software Modernization/Migration ..... 179
      10.3.3 Legacy Software Replacement ..... 181
      10.3.4 Legacy Software Rearchitecting ..... 182
      10.3.5 Legacy Software Reengineering ..... 184
      10.3.6 Legacy Software Refactoring: Code ..... 186
      10.3.7 Legacy Software Refactoring: Data/Information ..... 188
      10.3.8 Legacy Systems Modernization/Migration Strategies ..... 189
      10.3.9 Stop Legacy Software Creation ..... 190
   10.4 Time in Software-Systems ..... 191
      10.4.1 Time ..... 191
      10.4.2 Systems Time ..... 191
      10.4.3 Sequence ..... 191
      10.4.4 Time Interval ..... 191
      10.4.5 Absolute Time ..... 192
      10.4.6 Relative Time ..... 192
      10.4.7 Modeling Time in Software-Systems ..... 192
   References ..... 193
Part II Principles

11 Principles for Business Value ..... 201
   11.1 Introduction ..... 201
   11.2 α-Architecture Principle #1: IT-System as Investment ..... 202
   11.3 α-Architecture Principle #2: Managed Evolution Culture ..... 203
   11.4 α-Architecture Principle #3: Business-IT Alignment ..... 204
   References ..... 204

12 Architecture Principles for Changeability ..... 207
   12.1 Introduction ..... 207
   12.2 ρ-Architecture Principle #1: Architecture-Layer Isolation ..... 209
      12.2.1 Architecture Layers ..... 209
      12.2.2 Isolation ..... 209
   12.3 ρ-Architecture Principle #2: Partitioning, Encapsulation, and Coupling ..... 211
      12.3.1 Introduction ..... 211
      12.3.2 Partitioning ..... 214
      12.3.3 Partitioning Rules ..... 216
      12.3.4 Encapsulation ..... 217
      12.3.5 Interfaces ..... 219
      12.3.6 Coupling ..... 220
   12.4 ρ-Architecture Principle #3: Conceptual Integrity ..... 222
      12.4.1 Conceptual Integrity in an Organization ..... 222
      12.4.2 Achieving Conceptual Integrity ..... 223
      12.4.3 Domain Software Engineering ..... 224
      12.4.4 Hidden Assumptions ..... 225
   12.5 ρ-Architecture Principle #4: Redundancy ..... 227
      12.5.1 Redundancy ..... 227
      12.5.2 Unmanaged Redundancy ..... 228
      12.5.3 Managed Redundancy ..... 231
      12.5.4 Redundancy Synchronization ..... 233
      12.5.5 Infiltration of Redundancy ..... 234
      12.5.6 Redundancy Avoidance Pattern ..... 234
   12.6 ρ-Architecture Principle #5: Interoperability ..... 235
      12.6.1 Interoperability Levels ..... 235
      12.6.2 Contracts ..... 237
      12.6.3 Interface Contracts and Service Contracts ..... 238
      12.6.4 Interoperability Standards ..... 239
   12.7 ρ-Architecture Principle #6: Common Functions ..... 241
      12.7.1 Common Data and Functionality ..... 241
      12.7.2 Synchronization Mechanisms ..... 243
      12.7.3 Software Infrastructure ..... 244
12.8 ρ-Architecture Principle #7: Reference Architectures, Frameworks and Patterns . . . 245
12.8.1 Architecture Knowledge . . . 245
12.8.2 Reference Architectures . . . 245
12.8.3 Architecture Frameworks . . . 247
12.8.4 Patterns . . . 249
12.9 ρ-Architecture Principle #8: Reuse and Parametrization . . . 251
12.9.1 Introduction . . . 251
12.9.2 Types of Reuse . . . 252
12.9.3 Business Case for Reuse . . . 256
12.9.4 Context for Reuse . . . 257
12.10 ρ-Architecture Principle #9: Industry Standards . . . 258
12.10.1 Introduction . . . 258
12.10.2 Types of Standards . . . 259
12.10.3 Value of Standards . . . 260
12.11 ρ-Architecture Principle #10: Information Architecture . . . 262
12.11.1 Introduction . . . 262
12.11.2 Information and Information Architecture . . . 263
12.11.3 Enterprise Information Architecture . . . 264
12.11.4 Real-Time Information Architecture . . . 269
12.12 ρ-Architecture Principle #11: Formal Modeling . . . 272
12.12.1 Introduction . . . 272
12.12.2 The Power of Models . . . 273
12.12.3 Modeling Basics . . . 273
12.12.4 Modeling Paradigms and Languages . . . 274
12.12.5 Boxology . . . 275
12.12.6 Taxonomy . . . 275
12.12.7 Ontology . . . 276
12.12.8 Object-Oriented Software Technology . . . 278
12.12.9 Domain-Specific Modeling . . . 279
12.13 Formal Languages . . . 280
12.13.1 Architecture Description Languages . . . 281
12.13.2 Model Explosion . . . 282
12.13.3 Structural Models . . . 283
12.13.4 Domain Model . . . 284
12.13.5 Business Object Model . . . 286
12.14 ρ-Architecture Principle #12: Complexity and Simplification . . . 290
12.14.1 Complexity . . . 290
12.14.2 Complexity Tracking . . . 292
12.14.3 Complexity Management . . . 293
12.14.4 Active Simplification Step in Development Process . . . 294
12.14.5 Complexity Reduction Architecture Program . . . 295
References . . . 295
13 Architecture Principles for Resilience . . . 309
13.1 Dependability and its Elements . . . 309
13.2 Resilience Architecture Principles . . . 312
13.3 Resilience Architecture Principle #1: Policies . . . 312
13.3.1 Introduction . . . 312
13.3.2 Policy Process . . . 313
13.4 Resilience Architecture Principle #2: Vertical Architectures . . . 315
13.5 Resilience Architecture Principle #3: Fault Containment Regions . . . 318
13.6 Resilience Architecture Principle #4: Single Points of Failure . . . 320
13.7 Resilience Architecture Principle #5: Multiple Lines of Defense . . . 323
13.8 Resilience Architecture Principle #6: Fail-Safe States . . . 325
13.9 Resilience Architecture Principle #7: Graceful Degradation . . . 327
13.10 Resilience Architecture Principle #8: Dependable Foundation (Infrastructure) . . . 329
13.11 Resilience Architecture Principle #9: Monitoring . . . 331
References . . . 333
14 Architecture Principles for Dependability . . . 337
14.1 Introduction . . . 337
14.2 Information Security . . . 339
14.2.1 Context . . . 339
14.2.2 Information Protection . . . 341
14.3 Functional Security . . . 344
14.3.1 Infiltration . . . 345
14.3.2 Malicious Modification . . . 346
14.3.3 Secure Software Development . . . 347
14.4 Safety . . . 348
14.4.1 Introduction . . . 348
14.4.2 Safety Analysis . . . 349
14.4.3 Certification . . . 352
14.4.4 Safety-Systems Infrastructure . . . 353
14.5 Engineering Trustworthy Cyber-Physical Systems . . . 355
14.5.1 CPS and CPSoS . . . 355
14.5.2 Internet of Things . . . 356
References . . . 358
Conclusion . . . 365
References . . . 367
Index . . . 369
About the Author
Frank J. Furrer graduated in July 1970 as Diplomelektroingenieur from the Eidgenössische Technische Hochschule in Zürich, Switzerland, and earned his PhD (Doktor der technischen Wissenschaften) from the same institution in July 1974. From 1975 to 2009, he worked in industry, both as an entrepreneur and as a management consultant for information technology and IT systems architecture. In March 2013, he was invited by the Technische Universität Dresden, Germany (Faculty of Computer Science) to start teaching, and in July 2015 he was appointed honorary professor there. This book is the result of the accumulated knowledge and experience of Prof. Dr. Frank J. Furrer over a professional lifetime in systems and software engineering, both in industry and as a teacher.
List of Figures
Fig. 1.1 Software-defined radio—from hardware to software. a Full hardware implementation (Collins Radio 1970). b Mixed HW-SW implementation (Status 2017). c Software-defined radio vision . . . 5
Fig. 2.1 Introduction of technical debt and architecture erosion via projects . . . 13
Fig. 2.2 Internal and external sources of technical debt . . . 15
Fig. 2.3 Internal and external sources of architecture erosion . . . 16
Fig. 2.4 Migration strategy . . . 18
Fig. 3.1 Systems engineering process . . . 22
Fig. 3.2 Phases of the systems engineering process (simplified) . . . 23
Fig. 3.3 Systems engineering devil #1—complexity . . . 24
Fig. 3.4 Systems engineering devil #2—change . . . 26
Fig. 3.5 Systems engineering devil #3—uncertainty . . . 27
Fig. 3.6 Horizontal architecture layers (functional partitioning) . . . 29
Fig. 3.7 Vertical architecture layers (quality of service properties partitioning) . . . 30
Fig. 3.8 System architecture framework . . . 30
Fig. 3.9 Security architecture—access control . . . 33
Fig. 3.10 Software hierarchy . . . 34
Fig. 3.11 Anti-Skid Braking System (ABS) . . . 37
Fig. 3.12 MAPE-K reference architecture . . . 38
Fig. 3.13 Vertical architecture “Autonomy” . . . 39
Fig. 3.14 Governance regions in a system-of-systems . . . 40
Fig. 4.1 Software-system evolution as a transformation . . . 47
Fig. 4.2 Evolution cycles of a software-system . . . 48
Fig. 4.3 Business-IT alignment via domain model and business object model . . . 51
Fig. 5.1 Transformation of system structure by evolution . . . 58
Fig. 5.2 Transformation of system properties by evolution . . . 58
Fig. 5.3 Managed evolution coordinate-system . . . 60
Fig. 5.4 Transformation of the primary characteristics by a project . . . 61
Fig. 5.5 Sequence of projects generating an evolution trajectory . . . 62
Fig. 5.6 Essence of managed evolution—the managed evolution channel . . . 63
Fig. 5.7 Trajectory to death for a software-system . . . 64
Fig. 5.8 Business value erosion as counterforce to business value generation . . . 66
Fig. 5.9 Resource consumption for an extension of the application landscape . . . 67
Fig. 5.10 Software-system with run-time infrastructure and operating environment . . . 69
Fig. 5.11 Structure of a taxonomy . . . 71
Fig. 5.12 Dependability taxonomy for a financial institution . . . 73
Fig. 5.13 Metrics for the lowest level dependability concepts in the taxonomy . . . 73
Fig. 5.14 Response time distribution . . . 74
Fig. 5.15 Additional allocation of resources to improvement of changeability and dependability . . . 77
Fig. 5.16 Some techniques for the improvement of QoS properties . . . 77
Fig. 5.17 Database extension . . . 78
Fig. 5.18 Tracking business value and changeability over time . . . 80
Fig. 5.19 Introduction of digital certificates for authentication . . . 81
Fig. 5.20 Architecture process for managed evolution . . . 82
Fig. 5.21 Opportunistic evolution trajectory . . . 84
Fig. 6.1 Structure and behavior of a software-system . . . 93
Fig. 6.2 Architecture hierarchy . . . 94
Fig. 6.3 Application architecture hierarchy in a car . . . 95
Fig. 6.4 Integration challenge . . . 100
Fig. 6.5 Additional work done during integration (managed evolution) . . . 101
Fig. 6.6 Evolutionary architecture . . . 102
Fig. 6.7 Architecture effort . . . 106
Fig. 7.1 Horizontal and vertical architecture principles . . . 115
Fig. 7.2 Horizontal architecture principles . . . 115
Fig. 7.3 Vertical architecture principles . . . 117
Fig. 7.4 Principle-pattern hierarchy . . . 118
Fig. 7.5 The RBAC security pattern . . . 119
Fig. 7.6 Principle-based architecting process . . . 121
Fig. 8.1 Management tension fields . . . 126
Fig. 8.2 Architecture knowledge . . . 128
Fig. 8.3 Architecture governance . . . 129
Fig. 8.4 Example for a functional IT architecture . . . 131
Fig. 8.5 IT agility evolution in CREDIT SUISSE in 2005–2009 . . . 134
Fig. 9.1 The three devils of software engineering . . . 140
Fig. 9.2 Emergent behaviour and emergent information . . . 142
Fig. 9.3 Size of software in an automobile . . . 143
Fig. 9.4 Software-system evolution under time-to-market pressure . . . 144
Fig. 9.5 Application architecture layer with element hierarchy . . . 148
Fig. 9.6 Abstractions in application architecture . . . 149
Fig. 9.7 Code generation from models . . . 150
Fig. 9.8 The ProCoS tower . . . 152
Fig. 10.1 SCRUM™ development process . . . 161
Fig. 10.2 Architecture escort for the agile methods . . . 163
Fig. 10.3 Large-scale scrum (LeSS) . . . 165
Fig. 10.4 SafeScrum™ model . . . 166
Fig. 10.5 Assurance of functionality and quality of service properties . . . 168
Fig. 10.6 Extension of agile development by risk management . . . 169
Fig. 10.7 Continuous delivery and DevOps . . . 171
Fig. 10.8 The essence of DevOps . . . 173
Fig. 10.9 DevOps cycle . . . 173
Fig. 10.10 Strengthening agile software production processes . . . 174
Fig. 10.11 DevSecOps essence . . . 175
Fig. 10.12 Legacy software context . . . 177
Fig. 10.13 Legacy software modernization/migration . . . 179
Fig. 10.14 Replacement of a communications application . . . 182
Fig. 10.15 Rearchitecture case examples . . . 183
Fig. 10.16 Information technology silos . . . 184
Fig. 10.17 Reengineering case examples . . . 185
Fig. 10.18 Component/module refactoring . . . 186
Fig. 10.19 Accidentally removed dependency during refactoring . . . 187
Fig. 10.20 Legacy software causes . . . 190
Fig. 11.1 Impact of architecture principles . . . 202
Fig. 12.1 Impact of architecture principles on system quality properties . . . 208
Fig. 12.2 Horizontal architecture-layer isolation . . . 209
Fig. 12.3 SQL misuse . . . 210
Fig. 12.4 System consisting of parts and dependencies . . . 212
Fig. 12.5 Partitioning hierarchy . . . 214
Fig. 12.6 System partitioning . . . 215
Fig. 12.7 Good partitioning following high cohesion . . . 216
Fig. 12.8 Bad partitioning violating cohesion . . . 216
Fig. 12.9 Encapsulation of partitions . . . 218
Fig. 12.10 External access to partitions via interfaces . . . 219
Fig. 12.11 Access protocol . . . 220
Fig. 12.12 Two different concepts of “time” . . . 223
Fig. 12.13 Sanction filter . . . 226
Fig. 12.14 Functional and data redundancy in the IT system . . . 228
Fig. 12.15 Unmanaged data redundancy—pirate database . . . 229
Fig. 12.16 Unmanaged functional redundancy—cut-and-paste code . . . 230
Fig. 12.17 Managed data redundancy—database mirroring . . . 232
Fig. 12.18 Configuration and version management system . . . 233
Fig. 12.19 Synchronization of replicas . . . 233
Fig. 12.20 Redundancy avoidance pattern . . . 235
Fig. 12.21 Interoperability levels . . . 236
Fig. 12.22 Contracts, preconditions, and postconditions . . . 238
Fig. 12.23 Interface and service contracts . . . 239
Fig. 12.24 Common data and functionality . . . 242
Fig. 12.25 Common functionality and data for interest calculation . . . 243
Fig. 12.26 Synchronization mechanisms . . . 244
Fig. 12.27 AUTOSAR structural reference architecture . . . 247
Fig. 12.28 Enterprise architecture as top architecture layer . . . 248
Fig. 12.29 TOGAF overview . . . 249
Fig. 12.30 Pattern hierarchy . . . 250
Fig. 12.31 Types of reuse . . . 253
Fig. 12.32 Component/service reuse cycle . . . 254
Fig. 12.33 Engine control system . . . 255
Fig. 12.34 Business case for reuse . . . 257
Fig. 12.35 Digital certificate format X.509v3 (simplified) . . . 261
Fig. 12.36 Information architecture structure . . . 262
Fig. 12.37 Enterprise information usage (transaction) . . . 265
Fig. 12.38 Transaction rollback for database consistency . . . 265
Fig. 12.39 Enterprise information architecture . . . 266
Fig. 12.40 Enterprise data/information domains . . . 268
Fig. 12.41 Real-time data/information architecture . . . 270
Fig. 12.42 Data validation in an anti-skid braking system . . . 271
Fig. 12.43 Degree of formalization of a modeling language . . . 275
Fig. 12.44 Car ontology . . . 276
Fig. 12.45 OWL example . . . 277
Fig. 12.46 UML and semantics . . . 279
Fig. 12.47 UML model for a system-of-systems (SoS) . . . 284
Fig. 12.48 Domain model for a financial institution . . . 285
Fig. 12.49 Subdomains and application/data assignments . . . 286
Fig. 12.50 Top level business object model for a financial institution . . . 287
Fig. 12.51 First refinement level of the business object model . . . 288
Fig. 12.52 Business object structure for “address” . . . 289
Fig. 12.53 Utilization of business objects in business processes . . . 289
Fig. 12.54 System complexity metric . . . 291
Fig. 12.55 Complexity generation visualization . . . 292
Fig. 12.56 Complexity management processes . . . 293
Fig. 13.1 Dependability—protection process . . . 310
Fig. 13.2 Policy process . . . 313
Fig. 13.3 Information security policy architecture . . . 315
Fig. 13.4 Top-level internet security architecture for a financial institution . . . 317
Fig. 13.5 Faults, errors, and failures . . . 319
Fig. 13.6 Fault containment region . . . 319
Fig. 13.7 Single points of failure (a) and redundancy (b) . . . 322
Fig. 13.8 Multiple lines of defense . . . 324
Fig. 13.9 System failure classification . . . 325
Fig. 13.10 Graceful degradation after failure of system components . . . 327
Fig. 13.11 Degraded automatic teller machine operation . . . 328
Fig. 13.12 Technical infrastructure evolution . . . 330
Fig. 13.13 Extrusion detection by monitoring . . . 332
Fig. 14.1 Dependability—protection process . . . 338
Fig. 14.2 Topology of architecture principles . . . 339
Fig. 14.3 Conceptional model for information security . . . 341
Fig. 14.4 Entry points for undesired code/software . . . 344
Fig. 14.5 Malicious database destruction . . . 347
Fig. 14.6 Conceptional model of hazards in a CPSoS . . . 350
Fig. 14.7 Time-triggered architecture (TTA) . . . 354
Fig. 14.8 IoT architecture . . . 357
List of Tables
Table 2.1 Examples of damage of technical debt introduced by projects . . . 12
Table 3.1 Managing complexity . . . 25
Table 4.1 Quality properties of a software-system . . . 49
Table 5.1 NPV calculation example . . . 65
Table 5.2 Project Table for the Changeability Metric . . . 68
Table 5.3 Managed Evolution quality scorecard . . . 76
Table 5.4 Database modernization . . . 79
Table 5.5 Value of managed evolution . . . 83
Table 6.1 Parts, relationships, and models in the architecture hierarchy . . . 95
Table 6.2 Risk evaluation for legacy system modernization . . . 99
Table 6.3 Some Suggestions for Evolutionary Architecture . . . 103
Table 7.1 The 12 application landscape architecture principles . . . 116
Table 7.2 Architecture principle documentation template . . . 117
Table 11.1 Business value of a software-system . . . 202
Table 12.1 12 Architecture principles for changeability . . . 208
Table 12.2 Common functionality and data for interest calculation . . . 243
Table 12.3 Types of reuse . . . 253
Table 12.4 Types of standards . . . 259
Table 12.5 AADL component-type declaration . . . 282
Table 12.6 Functionality listing of subdomain 1.1.9 . . . 286
Table 12.7 Simplification checklist . . . 294
Table 13.1 Resilience architecture principles . . . 311
Table 13.2 Input data stream fault detection . . . 321
Table 14.1 Data/information security classification . . . 342
Table 14.2 IoT device limitations . . . 356
List of Definitions
Definition 1.1 Dependable Software . . . 8
Definition 2.1 Technical Debt . . . 14
Definition 2.2 Architecture Erosion . . . 15
Definition 3.1 Systems Engineering . . . 21
Definition 3.2 Complexity . . . 24
Definition 3.3 Application Landscape . . . 35
Definition 3.4 Cyber-Physical System . . . 36
Definition 3.5 System-of-Systems . . . 39
Definition 4.1 Future-Proof Software-System . . . 46
Definition 4.2 Business-IT-Alignment . . . 50
Definition 5.1 Managed Evolution . . . 59
Definition 5.2 Business Value Metric . . . 64
Definition 5.3 Changeability Metric . . . 68
Definition 5.4 Dependability Taxonomy . . . 70
Definition 5.5 Metric . . . 72
Definition 5.6 Quality of Service Properties Scorecard . . . 75
Definition 5.7 Managed Evolution Architecture Process . . . 82
Definition 6.1 Architecture . . . 91
Definition 6.2 Structure . . . 92
Definition 6.3 Behavior . . . 93
Definition 6.4 Legacy System . . . 98
Definition 6.5 Architecture Evaluation . . . 105
Definition 7.1 Principle . . . 113
Definition 7.2 Architecture Principle . . . 114
Definition 7.3 Pattern . . . 118
Definition 7.4 Anti-Pattern . . . 120
Definition 7.5 Principle-Based Architecting . . . 120
Definition 8.1 Architecture Knowledge . . . 127
Definition 8.2 Architecture Governance . . . 129
Definition 8.3 Corporate Culture . . . 132
xxxiv
Definition 9.1 Definition 9.2 Definition 9.3 Definition 9.4 Definition 10.1 Definition 10.2 Definition 10.3 Definition 10.4 Definition 10.5 Definition 10.6 Definition 10.7 Definition 10.8 Definition 10.9 Definition 10.10 Definition 10.11 Definition 10.12 Definition 10.13 Definition 10.14 Definition 10.15 Definition 10.16 Definition 12.1 Definition 12.2 Definition 12.3 Definition 12.4 Definition 12.5 Definition 12.6 Definition 12.7 Definition 12.8 Definition 12.9 Definition 12.10 Definition 12.11 Definition 12.12 Definition 12.13 Definition 12.14 Definition 12.15 Definition 12.16 Definition 12.17 Definition 12.18 Definition 12.19 Definition 12.20 Definition 12.21 Definition 12.22
List of Definitions
Emergent Property/Behavior. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 Cyber-Physical System-of-Systems. . . . . . . . . . . . . . . . . . . . . . . 142 Abstraction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 Correctness. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 Agile Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160 Architecture Escort. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164 Requirements Engineering. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Project Risk. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168 Cross-Platform Risk Generation. . . . . . . . . . . . . . . . . . . . . . . . . . 170 Continuous Delivery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171 DevOps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172 DevSecOps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175 Recoverability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 Forensic Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176 Legacy Software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177 Legacy Software Modernization/Migration. . . . . . . . . . . . . . . . . 180 Legacy Software Replacement. . . . . . . . . . . . . . . . . . . . . . . . . . . 181 Legacy Software Rearchitecting. . . . . . . . . . . . . . . . . . . . . . . . . . 183 Legacy Software Reengineering. . . . . . . . . . . . . . . . . . . . . . . . . . 184 Legacy Software Refactoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . 186 Dependency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212 Partitioning. . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . 214 Cohesion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214 Encapsulation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 Coupling. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220 Conceptual Integrity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222 Domain Software Engineering. . . . . . . . . . . . . . . . . . . . . . . . . . . 224 Hidden Assumptions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 Redundancy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 Unmanaged Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 Managed Redundancy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231 Interoperability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236 Contract. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 Industry Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239 Reference Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246 Architecture Framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247 Reuse. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251 Parametrization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254 Business Rule. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256 Industry Standard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258 Enterprise Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
List of Definitions
Definition 12.23 Definition 12.24 Definition 12.25 Definition 12.26 Definition 12.27 Definition 12.28 Definition 12.29 Definition 12.30 Definition 12.31 Definition 12.32 Definition 12.33 Definition 12.34 Definition 12.35 Definition 12.36 Definition 13.1 Definition 13.2 Definition 13.3 Definition 13.4 Definition 13.5 Definition 13.6 Definition 13.7 Definition 13.8 Definition 13.9 Definition 13.10 Definition 13.11 Definition 13.12 Definition 14.1 Definition 14.2 Definition 14.3 Definition 14.4 Definition 14.5 Definition 14.6 Definition 14.7 Definition 14.8
xxxv
Real-Time Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 Analytic Information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 Information Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264 Domain Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 Business Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 Business Object Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 Metadata. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273 Modeling Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274 Ontology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276 Domain-Specific Language (DSL). . . . . . . . . . . . . . . . . . . . . . . . 280 Formal Language . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281 Architecture Description Language . . . . . . . . . . . . . . . . . . . . . . . 281 Complexity. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 292 (Repetition): Dependable Software. . . . . . . . . . . . . . . . . . . . . . . . 310 Resilience. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310 Policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312 Vertical Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315 Fault Propagation & Fault-Containment Region . . . . . . . . . . . . . 318 Single Point of Failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322 Multiple Lines of Defense . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 324 Fail-Safe State. . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . 325 Graceful Degradation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327 Fault-Tolerance. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 Resilience Infrastructure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330 Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331 Information Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340 Functional Security. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345 Secure Software Development Process. . . . . . . . . . . . . . . . . . . . . 348 Safety . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349 Certification. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352 Safety Case. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353 Trustworthy Cyber-Physical System and Cyber-Physical System-of-Systems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 Internet of Things. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
List of Principles
Principle 5.1  Continuous Rearchitecting, Refactoring, and Reengineering  79
Principle 8.1  Central Architecture Department  132
Principle 11.1  IT-System as Investment  203
Principle 11.2  Managed Evolution Culture  204
Principle 11.3  Business-IT Alignment  204
Principle 12.1  Architecture Layer Isolation  211
Principle 12.2  Partitioning, Encapsulation & Coupling  222
Principle 12.3  Conceptual Integrity  226
Principle 12.4  Redundancy  235
Principle 12.5  Interoperability  241
Principle 12.6  Common Functions, Data, and Tables  244
Principle 12.7  Reference Architectures, Frameworks, and Patterns  251
Principle 12.8  Reuse and Parametrization  258
Principle 12.9  Industry Standards  261
Principle 12.10  Information Architecture  272
Principle 12.11  Formal Modeling  289
Principle 12.12  Complexity and Simplification  295
Principle 13.1  Policies  314
Principle 13.2  Vertical Architectures  317
Principle 13.3  Fault Containment Regions  320
Principle 13.4  Single Point of Failure  323
Principle 13.5  Multiple Lines of Defense  324
Principle 13.6  Fail-Safe States  326
Principle 13.7  Graceful Degradation  329
Principle 13.8  Dependable Foundation (Resilience Infrastructure)  330
Principle 13.9  Monitoring  332
Principle 14.1  Information Security  343
Principle 14.2  Functional Security  348
Principle 14.3  Safety  354
Principle 14.4  Internet of Things (IoT)  358
Part I Foundation
1  Software Everywhere
Abstract
This chapter introduces the important role of software in nearly all products and services of our modern world. Our life, work, and society have become highly dependent on software—in fact, we live today in a software world! The positive side of software is that it offers nearly unlimited power to implement functionality. The negative side is that faults, failures, and errors in software can heavily impact comfort, business, and interactions and may even endanger health, life, and property. Software has made the transition from an internal part of products and services to a valuable industrial asset of companies and institutions. The software assets of many companies represent their core value. The care for and the evolution of these software assets are therefore central to the business case of such companies. Software-systems need to satisfy today's business operations requirements in a timely and cost-effective way. At the same time, they must be evolved to meet future business, environmental, and societal needs. Therefore, the software-systems must be future-proof. The long-term evolution of future-proof software-systems is an emerging field of knowledge and practice. Long-term evolution strategies lie in the tension field of business success, investment decisions, and software architecture. Such strategies need adequate mindsets, principles, and processes to assure the continuous success of the companies depending on the software.
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019 F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8_1
1.1 Software Everywhere

Quote: "Software, because it's virtual and does not conform to any physical laws, can be bended and twisted into just about anything you can imagine. There are so many ways to get from point A to point B. And, whatever the end goal is could be infinitely improved upon. We only stop because we run out of time, money, or interest." (Jake Trent 2011, https://jaketrent.com/post/software-dev-hard-dont-be-negative-about-it/)
Software is infinitely malleable—it is pure thinking, which becomes hard reality when executed on a computer. This great freedom in building software-systems is the reason for the tremendous success of software in nearly all areas of our life and work. It is difficult to find a modern product or service that is not strongly based on functionality implemented in software. In fact, more and more functionality is removed from specialized hardware and implemented in software: "Software is everywhere". This is true not only for the visible functionality of software, such as electronic banking, word processing, drawing programs, etc., but even more for "embedded software". Embedded software is invisible software, hidden in devices or products, such as the brake control unit in a car, a pacemaker for the human heart, or the microcontroller in your mobile phone. Example 1.1 demonstrates this strong trend of replacing hardware functionality by software functionality: Up to 1985, the full functionality of a radio transceiver was purely implemented in hardware. From then on, hardware functionality was continuously replaced by software functionality running on embedded processors and specialized signal processors (DSPs [Lapsley97]). Apart from a cost advantage, the software implementation allowed for a great variety of modulation schemes, etc., on the same hardware. Also, updates to new algorithms, e.g., improvements in error-correcting coding, became possible by simply downloading new software.

Example 1.1: Software-Defined Radio
Software-defined radio (SDR, [Clark16]) is an impressive example of how software continuously replaces hardware elements. Up to 1984, all elements of both the receiver and the transmitter of a radio set were hardware, such as amplifiers, mixers, filters, etc. Figure 1.1a shows such a traditional radio transceiver block diagram. Zero software! In 1984, the idea came up to start replacing some of the hardware functions by software implementations using digital signal processing [Tan13]. Figure 1.1c shows the vision of a fully digital, software-defined transceiver: The wireless signal is converted to a digital signal directly after the antenna. Note that, of course, hardware is still necessary—in the form of digital signal processors, embedded processors, memory, …—but nearly all the functionality is implemented in software! Due to current technological restrictions, however, this vision is far in the future. Today's state of the art (2017) is shown in Fig. 1.1b: The high-frequency processing (= front-end processing) is still in hardware. As soon as the signal is transformed to the much lower intermediate frequency (IF), it is sampled and digitized. The now digital signal is processed by software.

The tremendous and unstoppable pervasiveness of software offers great opportunities and benefits for people and society, but it also forms the base for some serious risks. This book only deals with risks and consequences of a technical nature. The effect of software on social issues, employment, and the environment is not considered (see, e.g., [Larrey17]).
[Fig. 1.1 Software-defined radio—from hardware to software. a Full hardware implementation (Collins Radio 1970): RF amplifier, TX/RX switch, RF power amplifier, RX/TX mixers, IF filters and amplifiers, detector and modulator, carrier and local oscillators—everything in hardware. b Mixed HW-SW implementation (status 2017): analog RF front end down to the IF filter, then A/D conversion; digital signal processors and embedded processors run the remaining functionality in software. c Software-defined radio vision: A/D conversion directly at the antenna; all signal processing in software.]
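The receive path of Fig. 1.1b can be sketched in a few lines of code. The sketch below is illustrative only—the sample rate, intermediate frequency, AM modulation, and the simple moving-average filter are assumptions, not taken from the book. It shows how mixing, filtering, and demodulation, classically done in analog hardware, become plain arithmetic on digitized samples:

```python
import numpy as np

fs = 1_000_000   # sample rate of the A/D converter (1 MHz), assumed
f_if = 100_000   # intermediate frequency (100 kHz), assumed
t = np.arange(0, 0.01, 1 / fs)

# What the A/D converter would deliver from the analog front end:
# a 1 kHz message tone amplitude-modulated onto the IF carrier.
message = 0.5 * np.sin(2 * np.pi * 1_000 * t)
if_signal = (1.0 + message) * np.cos(2 * np.pi * f_if * t)

# From here on, everything is "software radio": arithmetic on samples.
# 1) Digital down-conversion: multiply by a software local oscillator.
i = if_signal * np.cos(2 * np.pi * f_if * t)
q = if_signal * -np.sin(2 * np.pi * f_if * t)

# 2) Low-pass filter (simple moving average) to remove the 2*f_if term.
kernel = np.ones(200) / 200
i_bb = np.convolve(i, kernel, mode="same")
q_bb = np.convolve(q, kernel, mode="same")

# 3) AM demodulation: envelope of the complex baseband signal.
demodulated = 2 * np.sqrt(i_bb**2 + q_bb**2) - 1.0
```

Exchanging the modulation scheme now means exchanging step 3 (a function), not soldering a new detector stage—which is exactly the flexibility the example describes.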
1.2 Opportunities for Software

In the last decades, the software industry has brought us tremendous success stories. Software now holds the world championship in chess and Go, it devastatingly beats four of the world's best poker players, it runs our (nearly) autonomous transport systems (cars, trains, airplanes), it enables global electronic commerce, it has completely changed the military landscape, it scores daily new medical achievements in diagnostics and treatment, it enriches our personal and business life, it provides a rich and easily accessible knowledge base, it supports human and social interactions, it guides modern and helpful robots—and gives us countless more benefits.

Quote: "Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not." (Marc Andreessen, 2011 [Andreessen 11])
Moreover, the story goes on: We will see more and more software! Software will generate new businesses, new forms of interaction, and new formats of entertainment; it will continue to change the way we live and work; and it will implement more and more of the functionality of our products and services. Software generates many opportunities. It will continue to transform our environment, our working conditions, and our possibilities for enriched lives.
1.3 Risks of Software

Software is produced by people—although its production is more and more supported by automatic generation from precise models. Software is complex: Most products and services contain software counting many millions of source lines of code (SLOC). Software is continuously being modified and extended, and it is relentlessly growing. People involvement, complexity, and growth in the production of software are a difficult mix! When resource pressures such as time limits and funding restrictions are added to the development cycle, dangerous situations often result: Shortcuts are taken, testing is neglected, technical debt is accumulated, and the architecture erodes. This often results in faulty software, which may cause massive harm to the operation, to the use, and possibly to life and property. An Internet search for "software-related accidents" returned 2,510,000 hits (June 2017). The inevitable fact is that faulty software poses a significant risk to products and services.

Quote: "The best way not to have any defects in your system is not to put defects in your system." (Bruce Powel Douglass, 2016)
How can this risk be managed? The simple answer is: by building dependable software-systems. Unfortunately, this simple answer is extremely difficult to implement. It touches the whole development and operations chain of software: starting from an adequate architecture, through dependable design and careful implementation, up to serious testing, etc. Tens of thousands of papers and books have been written on this subject, and many methodologies and techniques have been introduced. This vast body of knowledge constitutes the engineering discipline of systems and software engineering (see, e.g., [INCOSE16, Douglass16, Romanovsky17, Wasson15, DOD16, Nakajima17]). The systems and software engineering of dependable software-systems is therefore one of the great challenges of today—and tomorrow! This book offers a contribution to the engineering of dependable, long-lived, and mission-critical software-systems.
1.4 Balancing Opportunities and Risks of Software

Looking at the impact of software, history suggests that producing and using software is a summit walk between success stories and catastrophes [Furrer17]. The software community has the responsibility to deliver as many success stories as possible and to cause as few catastrophes as possible. Note that we refer not only to major catastrophes with loss of life or property but also to minor catastrophes, such as the loss of data because of a computer virus, a crash of the operating system, the hacking of a personal computer, a blue screen, your car stopping because of "Error 457", being locked out of your home because of a malfunction, or the delay or cancellation of your flight because of a computer problem.
Producing dependable software is a balancing act between the investment into the implementation of business functionality and the investment into the quality of service properties of the software. If these two investments are not balanced, the software will deteriorate over time and even become dangerous for its users. In addition, the software-system's resistance to change will continuously grow until it becomes unmodifiable. Maintaining this balance is one of the key responsibilities of the management of any company producing software. Maintaining the balance needs not only strong commitment and funding by management but also a number of other elements: an adequate evolution strategy, a strong architecture-based approach, a suitable company culture, highly formalized and effective processes, and—last but not least—excellent people in the right positions. This book covers one possible, proven evolution strategy and principle-based architecting for software-systems.
1.5 Toward Dependable Software

In order to reduce the risks in the use of software, we need to produce and operate dependable software.

Definition 1.1: Dependable Software
"Dependability" refers to the user's ability to depend on a system in its intended environment and with its intended use, as well as when these assumptions are violated or external events cause disruptions.

In practice, this translates into: "The software does what it should do, and does not do what it should not do." Dependability is a complex term: It encompasses a number of quality properties. Typical quality properties of a system are safety, security, availability, trustworthiness, recoverability, etc. The totality of the quality properties is often called "the …ilities". A system is only dependable if all the relevant quality properties are guaranteed to a sufficient degree for the intended application.

Quote: "When someone builds a bridge, he uses engineers who have been certified as knowing what they are doing. Yet when someone builds you a software program, he has no similar certification, even though your safety may be just as dependent upon that software working as it is upon the bridge supporting your weight." (David Parnas, 1979. Read more at: https://www.brainyquote.com/search_results.html?q=software+dependability)
Which quality properties are most relevant for dependability? This depends upon the application field of the software. In a car, the most important quality properties are safety (= no accidents), security (= no hostile influence), reliability (= no engine failures on the motorway), and conformance to all laws and regulations. In an e-banking system, however, we need the quality properties security (= defense against hackers), integrity (= don't digitally lose my money), and availability (= 24 h/7 days). As a last example, in a medical diagnostic system such as MRI [Westbrook11], the important quality properties are safety (= never harm a patient), trustworthiness (= deliver reliable results), and precision (= deliver significant image data).

Implementing all quality of service properties to 100% would be a commercially unfeasible project. Therefore, for each field of application, the following decisions must be taken:

• Which quality of service properties must be continuously improved?
• Which quality of service properties must be made "as good as necessary"?

The answers to these two central questions form the foundation for the dependability of the software-system (see, e.g., [Aviziensis04, Bernardi13, Axelrod13, Barbacci95, Leveson11, Visser16, Knight12, Knight14]).
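These two decisions can be recorded per field of application (the book's front matter lists a "Quality of Service Properties Scorecard" for this purpose). The following Python sketch is a hypothetical illustration of such a scorecard—the data structure, property names, and API are assumptions, not the book's notation:

```python
from dataclasses import dataclass, field


@dataclass
class QosScorecard:
    """Records, per application field, how each quality of service
    property is to be treated. Names and API are illustrative."""
    application_field: str
    continuously_improved: set = field(default_factory=set)
    good_as_necessary: set = field(default_factory=set)

    def classify(self, prop: str) -> str:
        if prop in self.continuously_improved:
            return "continuously improved"
        if prop in self.good_as_necessary:
            return "as good as necessary"
        # Every relevant property must land in one of the two classes;
        # anything else still awaits an explicit decision.
        return "not yet assessed"


# The e-banking example from the text: security, integrity, and
# availability are the critical properties.
ebanking = QosScorecard(
    application_field="e-banking",
    continuously_improved={"security", "integrity", "availability"},
    good_as_necessary={"usability"},  # assumed, for illustration
)

print(ebanking.classify("security"))  # continuously improved
```

The value of such an explicit record is the third answer: a property that is "not yet assessed" is a dependability decision that has not been made, rather than one silently defaulted.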
References

[Aviziensis04] Avizienis A, Laprie JC, Randell B, Landwehr C (2004) Basic concepts and taxonomy of dependable and secure computing. IEEE Transactions on Dependable and Secure Computing 1(1), January–March. https://www.nasa.gov/pdf/636745main_day_3-algirdas_avizienis.pdf. Accessed 6 June 2017
[Axelrod13] Axelrod CW (2013) Engineering safe and secure software-systems. Artech House, Norwood, MA, USA. ISBN 978-1-60807-472-3
[Barbacci95] Barbacci M, Klein MH, Longstaff TA, Weinstock CB (1995) Quality attributes. Software Engineering Institute (SEI), Carnegie Mellon University, Technical Report CMU/SEI-95-TR-021, December. http://resources.sei.cmu.edu/asset_files/TechnicalReport/1995_005_001_16427.pdf. Accessed 6 June 2017
[Bernardi13] Bernardi S, Merseguer J, Petriu DC (2013) Model-driven dependability assessment of software systems. Springer, Berlin. ISBN 978-3-642-39511-6
[Clark16] Clark P, Clark D (2016) Field expedient SDR—introduction to software defined radio, 2nd edn. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-5368-1476-7
[DOD16] U.S. Department of Defense, Space Science Library (2016) Systems engineering fundamentals. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-5377-0346-6
[Douglass16] Douglass BP (2016) Agile systems engineering. Morgan Kaufmann, Waltham. ISBN 978-0-12-802120-0
[Furrer17] Furrer FJ (2017) Software—Gratwanderung zwischen Erfolgen und Katastrophen? Informatik-Spektrum, Springer, Heidelberg, 40(3): 264–269. First online: 19 April 2016. https://doi.org/10.1007/s00287-016-0973-6
[INCOSE16] International Council on Systems Engineering (2016) Systems engineering handbook—a guide for system life cycle processes and activities, 4th edn. INCOSE, San Diego. ISBN 978-1-118-99940-0
[Knight12] Knight J (2012) Fundamentals of dependable computing for software engineers. Chapman and Hall/CRC, London. ISBN 978-1-439-86255-1
[Knight14] Knight JC (2014) Fundamentals of dependable computing tutorial, ICSE 2014. http://2014.icse-conferences.org/sites/default/files/icse.tutorial.session.1.pdf. Accessed 14 Aug 2017
[Lapsley97] Lapsley P, Bier J, Shoham A, Lee EA (1997) DSP processor fundamentals—architectures and features (IEEE Press Series on Signal Processing). Wiley, New York. ISBN 978-0-780-33405-2
[Larrey17] Larrey P (2017) Connected world—from automated work to virtual wars: the future, by those who are shaping it. Portfolio Penguin, London. ISBN 978-0-241-30842-4
[Leveson11] Leveson NG (2011) Engineering a safer world—systems thinking applied to safety. MIT Press, Cambridge. ISBN 978-0-262-01662-9
[Andreessen 11] Andreessen M (2011) Why software is eating the world. The Wall Street Journal, New York, USA, August 20, 2011. https://www.wsj.com/articles/SB10001424053111903480904576512250915629460. Accessed 22 June 2019
[Nakajima17] Nakajima S, Talpin J-P, Toyoshima M, Yu H (eds) (2017) Cyber-physical system design from an architecture analysis viewpoint (Communications of NII Shonan Meetings). Springer Nature Singapore Pte Ltd., Singapore. ISBN 978-981-10-4435-9
[Romanovsky17] Romanovsky A, Ishikawa F (eds) (2017) Trustworthy cyber-physical systems engineering. CRC Press, Boca Raton. ISBN 978-1-4978-4245-0
[Tan13] Tan L, Jiang J (2013) Digital signal processing—fundamentals and applications, 2nd edn. Elsevier, Amsterdam. ISBN 978-0-124-15893-1
[Visser16] Visser J (2016) Building maintainable software—ten guidelines for future-proof code. O'Reilly Media, Inc., Sebastopol. ISBN 978-1-491-95352-5
[Wasson15] Wasson CS (2015) System engineering analysis, design, and development—concepts, principles, and practices (Wiley Series in Systems Engineering and Management), 2nd edn. Wiley, Hoboken. ISBN 978-1-118-44226-5
[Westbrook11] Westbrook C, Roth CK, Talbot J (2011) MRI in practice, 4th edn. Wiley-Blackwell, Chichester. ISBN 978-1-4443-3743-3
2  Force of Entropy
Abstract
Entropy is a concept from thermodynamics: It measures the disorder of a physical system. The force of entropy is expressed by the second law of thermodynamics, which states that the entropy in a closed system increases—and so does its disorder. The same can be observed in a software-system: An ideal initial software-system, i.e., one having a perfect architecture and no defects, will continuously deteriorate due to evolution activities. Adaptive and corrective maintenance will gradually introduce technical debt and architecture erosion into the system. The software-system thus becomes harder and harder to extend and more difficult to maintain. In order to maintain the viability of a software-system, the force of entropy must be actively and consistently resisted. The means to do so are, first, strict adherence to proven architecture principles and patterns to avoid architecture erosion and the generation of technical debt, and, second, periodic rearchitecting programs to restore an adequate architecture and to eliminate technical debt.
2.1 Entropy in Software Entropy [Kafri13] is a concept from thermodynamics: It has a long and interesting history. Entropy in a physical system has various definitions, coming from thermodynamics, statistical mechanics, and more. Generalized, entropy expresses the amount of order or disorder in a system. Low entropy means a well-organized or well-structured system, whereas high entropy means a system with a high degree of disorder. Entropy is a property of a system. The force of entropy is a process, which over time leads to an increase or decrease of order in the system. It is the force of entropy, which has such a large impact on software-systems. Entropy enters the software-system via the © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019 F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8_2
evolution process, i.e., modifications and maintenance (see, e.g., [Lehmann80, Hunt00, Mannaert12, Hubert02]). The force of entropy means that any modification to an existing software-system potentially increases its disorder and thus its entropy. The reason is that project teams often neglect architecture, take shortcuts, violate best practices, produce bad code, must accept wrong management decisions, or work under tremendous time pressure, among a myriad of other detrimental factors (examples in Table 2.1).
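The thermodynamic analogy can even be made loosely quantitative. As an illustration only (this formula is not proposed in the book), Shannon's entropy H = −Σ p·log₂ p yields zero for a module whose callers all go through a single public entry point and grows as access scatters over internal routines:

```python
import math

def entropy(counts):
    """Shannon entropy (bits) of a discrete distribution given as raw counts."""
    total = sum(counts)
    probs = (c / total for c in counts if c > 0)
    return -sum(p * math.log2(p) for p in probs)

# A well-structured system: all access to a module via its one public entry point.
ordered = [10, 0, 0, 0]
# An eroded system: callers reach into four internal routines from everywhere.
eroded = [3, 3, 2, 2]

assert entropy(ordered) == 0.0              # perfect order: zero entropy
assert entropy(eroded) > entropy(ordered)   # disorder shows up as higher entropy
```

The numbers are hypothetical access counts; the point is only that scattering (disorder) is measurable in principle.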
Table 2.1 Examples of damage from technical debt introduced by projects (each example carries a disorder/entropy metric of –1):

P1: Functional extension. A programmer copies and pastes a routine, modifies it, and uses it in his own code. Damage: unmanaged functional redundancy.
P2: Database access. A designer decides to use vendor-specific database function calls (not part of the SQL standard). Damage: vendor lock-in.
P3: Functional extension. An architect decides to allow direct access to internal data of a module. Damage: violation of partitioning and encapsulation.
P4: Functional extension. A designer plans direct access to the operating system functionality of a server. Damage: break of the isolation principle of architecture layers.
P5: Database extension. A DB designer adds new fields to the DB instead of extending the adequate DB schema. Damage: unmanaged data redundancy and loss of conceptual integrity.
P6: Corrective maintenance. A programmer does not understand a module producing faults in operation. He rewrites the module from scratch and exchanges it. Damage: unmanaged functional redundancy, possible future divergence.
P7: Functional extension. A new data access authentication mechanism is introduced in a large software-system. Instead of explementing the old access check code, only the new functionality is implemented. Damage: dead code which may allow unauthorized access to sensitive data.
P8: Requirements divergence. The business requirements for new functionality are split and assigned to five different teams for implementation. Damage: due to lack of architectural coordination, some functionalities and data are implemented in overlapping form; unmanaged functional and data redundancy.
P9: Functional extension. A programmer violates the naming conventions. Damage: loss of conceptual integrity, documentation difficulties.
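Damage of the P1 kind (unmanaged functional redundancy through copy-and-paste) can often be surfaced mechanically. The following sketch is purely illustrative (the function names and the normalization rule are invented, not taken from the book): it strips comments and whitespace and reports function bodies that are then identical:

```python
import re

def normalize(src):
    """Collapse whitespace and strip comments so formatting differences don't hide clones."""
    src = re.sub(r"#.*", "", src)          # drop line comments
    return re.sub(r"\s+", " ", src).strip()

def find_clones(functions):
    """Return (original, copy) name pairs whose normalized bodies are duplicates."""
    seen, clones = {}, []
    for name, body in functions.items():
        key = normalize(body)
        if key in seen:
            clones.append((seen[key], name))
        else:
            seen[key] = name
    return clones

# A routine and its pasted-and-reformatted copy (hypothetical example).
funcs = {
    "calc_fee":  "rate = 0.01\nreturn amount * rate",
    "calc_fee2": "rate = 0.01  # copied!\nreturn   amount * rate",
    "calc_tax":  "return amount * 0.2",
}
assert find_clones(funcs) == [("calc_fee", "calc_fee2")]
```

Real clone detectors work on token or syntax-tree level, but even this crude check makes the "–1" of row P1 visible.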
Quote: "The force of entropy means that disorder is the only thing that happens automatically and by itself. If you want to create a completely ad-hoc IT-architecture, you do not have to lift a finger … it will happen automatically as a result of day-to-day IT activity." (Richard Hubert, 2002)
Thus, the software-system suffers continuous degradation, resulting in an increase of disorder or entropy. The visible result is that the software-system becomes more resistant to change, needs more and more effort for adaptive and corrective maintenance, exhibits reduced quality properties, may become unstable, and shows any number of negative behaviors. This slow and deadly process is shown in Fig. 2.1: many of the projects executed on the software-system generate technical debt and gradually erode the good initial architecture. As a consequence, the properties of the software-system degrade. In the end, the software-system becomes extremely costly to extend and maintain, and thus commercially useless. The only way to prevent this accumulation of disorder is to work against it in a planned, controlled, and effective manner. An increase in disorder/entropy can be introduced: (a) through all information assets (requirements, specifications, architecture, design decisions, implementation forms) and (b) in all phases of the evolution process of the software-system.

Example 2.1: Impact of the Force of Erosion
A software-system is modified by adaptive and corrective maintenance activities, as shown in Fig. 2.1 by the individual projects Pi: each of these projects may introduce some damage in the form of technical debt with respect to the disorder/entropy of the software-system (see Table 2.1).

Fig. 2.1 Introduction of technical debt and architecture erosion via projects (a sequence of projects, Project 1 … Project n, each contributing to technical debt and architecture erosion)
Quote: "Every project has time pressures - forces that can drive the best of us to take shortcuts." (Andrew Hunt, 2000)
Unfortunately, there are myriads of ways to introduce damaging disorder/entropy into a software-system. In principle, every person or team touching any of the information assets—starting from business requirements all through to the implemented code—can damage the system. Damaging disorder/entropy can also be introduced into a software-system in every phase of the development process. In fact, the earlier in the process the damaging disorder/entropy is generated, the worse its consequences will be. For example, writing greenfield requirements without aligning them with the existing functionality and data in the software-system is a sure recipe for the generation of redundancy and thus technical debt.
2.2 Technical Debt

Much of the damaging disorder/entropy in a software-system is caused by technical debt [Fowler09, Lilienthal16, Suryanarayana14, Beine18, Sterling13, Tornhill18].

Definition 2.1: Technical Debt
Technical debt in an IT system is the result of all those necessary things that you choose not to do now, but will impede future evolution if left undone. (Ward Cunningham, 1992)

Technical debt slowly but mercilessly accumulates in small steps in the software-system. It is most often the result of uncoordinated or unprincipled actions, sometimes also of lazy, incompetent, careless, or forced behavior of people. There are two sources of technical debt (Fig. 2.2):

a) Internal sources: actions executed from inside the organization owning the software-system;
b) External sources: impacts from the environment.

Internal sources of technical debt are under the control of the organization maintaining the software-system. They include the accumulation of mistakes, shortcuts, and careless acts; missed explementations (dead code); missed upgrades; bad configurations; violations of principles and best practices; and more. External sources of technical debt are not under the control of the organization maintaining the software-system; they are imposed by the environment. A heavy and costly source is the change of a programming paradigm, such as the transition from procedural programming to modules, then to objects, later to components, then to services, and a few years ago to contracts. The advantages of each new programming paradigm were so obvious that all existing code had to be migrated.
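Cunningham's debt metaphor can be turned into a toy model (my own illustration; the numbers are arbitrary): every project pays "interest" proportional to the debt already accumulated, so a team that never repays sees its total effort climb steadily, while continuous repayment keeps it nearly flat:

```python
def evolution_cost(n_projects, debt_per_project=1.0, interest_rate=0.05, repay=0.0):
    """Total effort for n projects when each project adds debt and pays
    'interest' (extra effort) proportional to the debt already present."""
    debt, total = 0.0, 0.0
    for _ in range(n_projects):
        total += 1.0 + interest_rate * debt   # base effort plus interest on existing debt
        debt = max(debt + debt_per_project - repay, 0.0)  # debt grows unless repaid
    return total

neglect = evolution_cost(40)                 # never repay any debt
managed = evolution_cost(40, repay=0.8)      # continuously remove most new debt
assert managed < neglect                     # continuous cleanup keeps total effort down
```

The rates are invented; the qualitative behavior (compounding cost of deferred cleanup) is the point.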
Fig. 2.2 Internal and external sources of technical debt. Internal sources: accumulation of mistakes and shortcuts; dead code; careless or skipped upgrades; violation of architecture principles and patterns; bad (or ignored) programming best practices and guidelines. External sources: architecture paradigm evolution (procedural programming ⇒ modules ⇒ objects ⇒ components ⇒ contracts); new programming languages; technology progress; laws and regulations
2.3 Architecture Erosion

The second driver of the increase in disorder/entropy in a software-system is architecture erosion [Perry92, DeSilva11, Albin03, Bell16, Erl05, Fairbanks10, Li13, Gutteridge18].

Definition 2.2: Architecture Erosion
Architecture erosion is the process whereby an initially well-designed, adequate architecture of a software-system is gradually destroyed by the activities of evolution and maintenance of the software-system.

Architecture erosion is a serious source of damage to the software-system. It occurs through daily evolution (extension and maintenance) activities when they are not strictly executed in accordance with the relevant architecture principles and the target architecture. The sources of architecture erosion can also be classified into internal and external (Fig. 2.3). Internal sources of architecture erosion are under the control of the organization maintaining the software-system. They include, e.g., the violation of proven architecture principles and patterns, deferred rearchitecting and refactoring, or a lag behind progress in technology platforms. External sources of architecture erosion are not under the control of the organization maintaining the software-system; they are generated by the environment. A typical and deep-reaching example is the change of an architecture paradigm [Buschmann11]: the evolution of the monolithic architecture to the client/server architecture, then the introduction of the service-oriented architecture (SOA), and lately the migration to cloud-based
web service architectures. An often overlooked source of massive architecture erosion is the integration of third-party software into an existing software-system. This may introduce clashes between architectures in many ways: redundancy, incompatibility, restrictions, breaking of partitions or models, etc. Also, a change of database technology may generate architecture erosion: a migration from classical DB technology to relational DB technology or to NoSQL technology may become necessary. Finally, new laws, regulations, and industry standards may make part of a software architecture obsolete.

Fig. 2.3 Internal and external sources of architecture erosion. Internal sources: violation of architecture principles; ignorance of architecture patterns; deferred re-architecting and refactoring; lag behind technology progress. External sources: evolution of architecture paradigms (monolith ⇒ C/S ⇒ SOA ⇒ web services); 3rd-party SW technology change; new laws and regulations; new threats from the environment
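Several of the erosion symptoms listed in this section are mechanically checkable. As a hypothetical sketch (the rule, layer names, and module names are invented for illustration): declare the allowed layer order and flag every dependency that points upward or skips more than one layer downward:

```python
LAYERS = ["business", "application", "information", "integration", "technical"]

def violations(dependencies, layer_of):
    """Flag dependencies that break strict layering: a module may use only
    its own layer or the layer directly below it."""
    bad = []
    for src, dst in dependencies:
        gap = LAYERS.index(layer_of[dst]) - LAYERS.index(layer_of[src])
        if gap < 0 or gap > 1:               # upward call or layer-skipping call
            bad.append((src, dst))
    return bad

layer_of = {"orders": "application", "order_db": "information",
            "os_api": "technical", "billing": "business"}
deps = [("orders", "order_db"),   # OK: one layer down
        ("orders", "os_api"),     # erosion: skips two layers (cf. row P4 in Table 2.1)
        ("order_db", "billing")]  # erosion: upward dependency
assert violations(deps, layer_of) == [("orders", "os_api"), ("order_db", "billing")]
```

Running such a check in every build is one concrete way to resist erosion before it accumulates.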
2.4 Fighting the Force of Entropy

The force of entropy is very powerful—and quite invisible. But its impact can be highly detrimental for the software-system: gradually, technical debt is accumulated and the initially sound architecture is eroded. A viable software-system thus continuously becomes more costly to extend functionally, becomes harder to maintain, and generates malfunctions in operation. Can we fight against the force of entropy? Can we win the fight? In fact, we need a strategy which does three things in a timely, effective, and controlled way:

• Avoid the generation of technical debt and architecture erosion during initial development and modifications (extension, maintenance) of the software-system;
• Continuously eliminate technical debt and architecture damage (= continuous rearchitecting and refactoring);
• Remove accumulated technical debt and repair architecture damage periodically (= execute refactoring/rearchitecting programs).
Avoiding the generation of technical debt and architecture erosion requires strong processes for the evolution and maintenance of the software-system. All development activities must be architecture-driven, and an empowered architecture team must lead and review the system design process. Continuous elimination of technical debt and ongoing rearchitecting is the essence of the Managed Evolution strategy presented later. Each project modifying the software-system must not only generate new functionality, but must at the same time improve some part of the existing system. This balance between creating business value and improving the software-system is the key to future-proof software-systems. As it is impossible to completely avoid the accumulation of technical debt and to prevent architecture erosion, the software-system must be "cleaned" periodically. The organization owning the software-system must provide the resources to carry out periodic refactoring/rearchitecting programs. Such a program targets a specific improvement and may be quite costly. In many cases, it includes the refactoring of legacy code. This is especially true when an architecture paradigm change occurs, such as the migration from COBOL/CORBA applications to JAVA/SOA applications (see Example 2.2).

Example 2.2: COBOL/CORBA to JAVA/SOA + Security
Some decades ago, many commercial software-systems were based on the programming language COBOL [Murach04] and the middleware CORBA [Bolton01] and were running on mainframes. When the programming language JAVA [Niemeyer17] and the new architecture paradigm of service-oriented architecture (SOA, [Erl05]) were introduced, the obvious advantages and scalability of this new technology forced a migration of the existing code base of nearly all organizations operating large software-systems. At the same time, the organization (a large financial institution) decided to enhance its access security by introducing personal digital certificates stored on a smart card. The organization at that time had more than 5'000 applications with several hundred million lines of code. Thus, a major rearchitecture program was started. The migration strategy is shown in Fig. 2.4: it consists of three risk-controlled steps.

1. Change the programming language COBOL ⇒ JAVA (functionality unchanged);
2. Change the interface CORBA ⇒ SOA (functionality unchanged);
3. Add the functional extensions (authentication by #certificates).

Each step was fully executed and completely tested before the next migration step was initiated.
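The pattern behind these three steps (change one dimension at a time and verify completely before proceeding) can be written down generically. A schematic sketch, where the system dictionaries and checks are illustrative stand-ins, not a real migration tool:

```python
def migrate(system, steps):
    """Apply migration steps one at a time; each step must pass its check
    before the next one starts (risk-controlled migration)."""
    for name, transform, check in steps:
        system = transform(system)
        if not check(system):
            raise RuntimeError(f"step failed: {name}")   # stop before compounding risk
    return system

legacy = {"language": "COBOL", "interaction": "CORBA", "auth": "password"}
steps = [
    ("1: refactor code COBOL -> JAVA",
     lambda s: {**s, "language": "JAVA"},    lambda s: s["language"] == "JAVA"),
    ("2: re-architect CORBA -> SOA",
     lambda s: {**s, "interaction": "SOA"},  lambda s: s["interaction"] == "SOA"),
    ("3: extend: certificate-based authentication",
     lambda s: {**s, "auth": "certificate"}, lambda s: s["auth"] == "certificate"),
]
migrated = migrate(legacy, steps)
assert migrated == {"language": "JAVA", "interaction": "SOA", "auth": "certificate"}
```

The discipline encoded here is the important part: a failed check halts the pipeline, so only one dimension of change is ever at risk.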
Fig. 2.4 Migration strategy: the legacy application (programming language: COBOL, interaction: CORBA) is transformed into the migrated application (programming language: JAVA, interaction: SOA) by a three-step migration process. Step 1: refactor code COBOL → JAVA (functionality unchanged). Step 2: re-architect CORBA → SOA (functionality unchanged). Step 3: extend functionality (enhance security by #Certs)
References

[Albin03] Albin ST (2003) The art of software architecture. Wiley, Indianapolis. ISBN 978-0-8493-0440-7
[Beine18] Beine G (2018) Technical debts—economizing agile software architecture. De Gruyter, Oldenbourg. ISBN 978-3-1104-6299-9
[Bell16] Bell M (2016) Incremental software architecture—a method for saving failing IT implementations. Wiley, Hoboken. ISBN 978-1-119-11764-3
[Bolton01] Bolton F (2001) Pure CORBA. Sams Publishing, Indianapolis. ISBN 978-0-6723-1812-2
[Buschmann11] Buschmann F, Henney K (2011) Software architecture, styles and paradigms. OOP, January 24–28, Munich. http://www.sigs.de/download/oop_2011/downloads/files/Fr2_Buschmann_Henney_Architecture_Styles_And_Paradigms.pdf. Accessed 11 June 2017
[Cunningham92] Cunningham W (1992) The WyCash portfolio management system. OOPSLA '92 experience report, March 26, 1992. http://c2.com/doc/oopsla92.html. Accessed 30 Aug 2019
[DeSilva11] de Silva L, Balasubramaniam D (2011) Controlling software architecture erosion: a survey. Journal of Systems and Software. https://www.researchgate.net/profile/Dharini_Balasubramaniam/publication/220377694_Controlling_software_architecture_erosion_A_survey/links/554879370cf2e2031b387506.pdf. Accessed 11 June 2017
[Erl05] Erl T (2005) Service-oriented architecture—concepts, technology, and design. Prentice Hall, Indianapolis. ISBN 978-0-133-85858-7
[Fairbanks10] Fairbanks G (2010) Just enough software architecture—a risk-driven approach. Marshall & Brainerd, Boulder. ISBN 978-0-9846181-0-1
[Fowler09] Fowler M (2009) Technical debt. http://martinfowler.com/bliki/TechnicalDebt.html. Accessed 24 May 2013
[Gutteridge18] Gutteridge L (2018) Avoiding IT disasters—fallacies about enterprise systems and how you can rise above them. Thinking Works, Vancouver. ISBN 978-1-7753-5750-6
[Hubert02] Hubert R (2002) Convergent architecture. Wiley, New York. ISBN 978-0-471-10560-0
[Hunt00] Hunt A, Thomas D (2000) The pragmatic programmer—from journeyman to master. Addison-Wesley, Boston. ISBN 978-0-201-61622-4
[Kafri13] Kafri O, Kafri H (2013) Entropy—god's dice game. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-48268-769-9
[Lehmann80] Lehmann MM (1980) Programs, life cycles, and laws of software evolution. Proceedings of the IEEE, Vol 68, No 9, September, p 1060–1076. https://www.ifi.uzh.ch/dam/jcr:00000000-2f41-7b40-ffff-ffffd5af5da7/lehman80.pdf. Accessed 8 June 2017
[Li13] Li H (2013) The myth of enterprise system pollutions—the hidden demons. CreateSpace Independent Publishing Platform. ISBN 978-1-4812-8050-1
[Lilienthal16] Lilienthal C (2016) Langlebige Softwarearchitekturen—Technische Schulden analysieren, begrenzen und abbauen. Dpunkt, Heidelberg. ISBN 978-3-86490-292-5
[Mannaert12] Mannaert H, De Bruyn P, Verelst J (2012) Exploring entropy in software systems—towards a precise definition and design rules. ICONS 2012, The seventh international conference on systems. ISBN 978-1-61208-184-7. https://www.researchgate.net/profile/Herwig_Mannaert/publication/266350680_Exploring_Entropy_in_Software_Systems_Towards_a_Precise_Definition_and_Design_Rules/links/54d487c80cf25013d02991ec.pdf. Accessed 8 June 2017
[Murach04] Murach M, Price A, Menendez R (2004) Murach's mainframe COBOL. Mike Murach & Associates, Fresno. ISBN 978-1-890774-24-0
[Niemeyer17] Niemeyer P, Leuck D (2017) Learning Java—a bestselling hands-on Java tutorial, 5th edn. O'Reilly, Farnham. ISBN 978-1-4919-4218-5
[Perry92] Perry DE, Wolf AL (1992) Foundations for the study of software architecture. ACM SIGSOFT Software Engineering Notes, Vol 17, No 4, p 40. http://users.ece.utexas.edu/~perry/work/papers/swa-sen.pdf. Accessed 11 June 2017
[Sterling13] Sterling C (2013) Managing software debt—building for inevitable change. Pearson Education, Boston. ISBN 978-0-321-94861-8
[Suryanarayana14] Suryanarayana G, Samarthyam G, Sharma T (2014) Refactoring for software design smells—managing technical debt. Morgan Kaufmann, Waltham. ISBN 978-0-128-01397-7
[Tornhill18] Tornhill A (2018) Software design X-rays—fix technical debt with behavioral code analysis. O'Reilly, Sebastopol. ISBN 978-1-680-50272-5
3
Three Devils of Systems Engineering
Abstract
Systems engineering is the engineering discipline for building, extending, and maintaining information technology systems. During the system engineering process, the developers encounter the three devils of systems engineering: Complexity, Change, and Uncertainty. Coping with the three devils needs special attention and dedicated effort.
3.1 Systems Engineering

A software-system is never complete: new business requirements, changes in the operating environment, and the force of entropy require continuous adaptation. The adaptation process provides new functionality, implements new legal, compliance, or regulatory requirements, adapts to new technology, and compensates for the force of entropy. This is done through the systems engineering process [Jenney10, Kossiakoff11, Stevens11, MITRE14, INCOSE16, Dickerson10].

Definition 3.1: Systems Engineering
Systems engineering is an interdisciplinary field of engineering and engineering management that focuses on how to design, implement, maintain, and manage complex systems over their life cycles. (https://en.wikipedia.org/wiki/Systems_engineering)

The adaptation process is shown in Fig. 3.1: we see the software-system at a certain time tn. The software-system is exposed to new business requirements (new
functionality), to a changing operating environment (legal, technological), and to the force of entropy. This requires a transformation of the software-system to remain viable and useful to its users. This transformation is executed via the systems engineering process: the software-system is evolved via a number of projects, which continuously modify the software-system and compensate for the effects of the force of entropy.

Fig. 3.1 Systems engineering process (business requirements, operating environment requirements, and the force of entropy act on the software-system at time tn; the evolution process of systems engineering transforms it into the software-system at time tn+τ, impacted by the devils of systems engineering: complexity, change, uncertainty)

The systems engineering process is impacted by the three devils of systems engineering (Fig. 3.1):

• Complexity;
• Change;
• Uncertainty.

The systems engineering process has a number of phases. A number of different process models exist; a simplified version is shown in Fig. 3.2. The systems engineering process is not a topic throughout this book, but here we stress the fact that the production of future-proof software-systems requires a strongly architecture-driven process. In all phases, architecture must govern the activities!
Fig. 3.2 Phases of the systems engineering process (simplified): concept, requirements, specification, development, deployment, and operation, with architecture governing every phase and new requirements and architecture changes feeding back over time
3.2 Three Devils of Systems Engineering

Decades of software engineering have demonstrated that the systems engineering process is made difficult by three facts:

1. Software-systems are highly complex constructs. During their life cycles, the complexity increases continuously. Complexity makes the systems very difficult to understand, to predict in their behavior, and to evolve.
2. Software-systems are forced to change all the time. They must be adapted to new requirements, respond to environmental changes, and undergo corrective and predictive maintenance. Change cycles may be very short. This continuous, rapid change makes the software-systems difficult to manage and makes it hard to ensure their conceptual integrity.
3. Uncertainty is an established companion of the modern world. In the systems engineering process, we find uncertainty in all phases, from requirements to operation. It forces decisions which may not be well founded and requires assumptions which may later prove to be wrong.

We call these three facts the three devils of systems engineering. A systems engineering process can only be successful if we deal adequately with the three devils.
3.3 Complexity

Complexity [Holland14, Mitchell09, Johnson10, Goldreich08, Maurer17, Flood93] is a highly interesting and varied topic. We find complexity in many scientific disciplines, such as biology, sociology, and weather science—and also in computer science. Intuitively, complexity makes a system complicated, difficult to understand, hard to modify, and tough to evolve. The definition we will use in this book is:

Definition 3.2: Complexity
Complexity is that property of an IT system which makes it difficult to formulate its overall behavior, even when given complete information about its parts and their relationships.

Quote: "How do you make things simple? Simple. Get rid of complexity. Understand it, recognize it, eliminate it, and banish it." (Roger Sessions, 2008)
In software engineering, complexity manifests itself as structural (architectural) complexity [Sessions08, Sessions09, Fairbanks10] and as functional complexity [Bundschuh08]. Structural complexity refers to the number of parts in the software-system and to the intensity of their relationships. Functional complexity is measured as the size of the functionality of the parts and their interfaces. Principles to minimize structural complexity and to reduce functional complexity will be presented in Chap. 12. Complexity impacts the systems engineering process in all phases: therefore, we may look at complexity as the first devil of systems engineering (Fig. 3.3). In fact, successfully managing complexity is a key activity for effective development and for long-term viable, i.e., future-proof, software-systems.

Fig. 3.3 Systems engineering devil #1—complexity. (© www.123rf.com [used with permission])
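Structural complexity (number of parts plus the intensity of their relationships) can be approximated by simple graph measures. The score below is an invented illustration, not a published metric: it adds a coupling-density term to the part count, so a fully meshed system scores higher than a layered chain of the same size:

```python
def structural_complexity(parts, relations):
    """Toy structural-complexity score: one point per part, plus a term that
    grows with the density of relationships among the parts."""
    n = len(parts)
    max_edges = n * (n - 1) / 2 or 1          # avoid division by zero for n < 2
    return n + len(relations) / max_edges * n  # size plus weighted coupling

# Same four parts, different relationship structure (hypothetical systems).
layered = structural_complexity(["a", "b", "c", "d"],
                                [("a", "b"), ("b", "c"), ("c", "d")])
meshed = structural_complexity(["a", "b", "c", "d"],
                               [("a", "b"), ("a", "c"), ("a", "d"),
                                ("b", "c"), ("b", "d"), ("c", "d")])
assert meshed > layered   # full coupling raises the score for identical size
```

The absolute numbers are meaningless; the comparison illustrates why reducing coupling reduces structural complexity even when the part count is fixed.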
Complexity in an IT system has two forms [Brooks95, Moseley06]:

1. Essential complexity: caused by the problem to be solved. Nothing can remove it. It represents the inherent difficulty of the requirements to be implemented.
2. Accidental complexity: caused by solutions that we create on our own or by impacts from our environment. It can be managed and minimized.

Unnecessarily complex systems are risky systems: it is difficult to ensure their quality properties, and they become hard to evolve and maintain. Table 3.1 shows some methods to manage complexity; more information follows later in this book. The important fact is that essential complexity can be contained and its impact minimized by an adequate and consistent architecture of the software-system. Accidental complexity must be detected, avoided, and eliminated in all phases of the development and maintenance process. Even with the best process, it will not be possible to completely avoid accidental complexity. Therefore, the organization responsible for the software-system needs to carry out periodic simplification programs, in which the application landscape is examined with the goal of identifying, assessing, and removing accidental complexity. Because complexity is highly difficult to understand and to document, it impacts our software negatively in the following ways:

• Loss of conceptual integrity;
• Duplication of models, functionality, data, and implementation;
• Inconsistent architecture;
• "Far" effects: changing one part of the system may have bad and unexpected effects in another part;
• Emergence: the complex system develops unexpected, potentially harmful, properties, behavior, or information.

Complexity, therefore, must be managed throughout the whole systems engineering process.
Table 3.1 Managing complexity

Structural complexity:
  Essential: enterprise architecture; business-IT alignment; strong architecture process; architecture principles (e.g., "partitioning, encapsulation, and coupling")
  Accidental: enterprise governance (IT); development process; architecture principles (e.g., "redundancy", "simplification")

Functional complexity:
  Essential: domain models; architecture evaluation; architecture principles (e.g., "formal modeling")
  Accidental: requirements management; business-IT alignment; architecture principles (e.g., "re-use")
3.4 Change

A viable software-system is continuously changing: new business requirements, updates due to technology or the operating environment, and the corrective maintenance process force relentless change. The change occurs at a rapid and unpredictable rate, sometimes in places where it was not expected! The time cycles for the implementation of changes are becoming shorter and shorter. The pressure from the market, the competition, the technology, and the users is merciless. Managing change successfully is a difficult but decisive activity for future-proof software-systems. It requires two preconditions:

1. An adequate, well-maintained architecture of the software-system;
2. A strong, compulsory development process.

Quote: "…if you're afraid to change something it is clearly poorly designed." (Martin Fowler)
Why is change considered the systems engineering devil #2 (Fig. 3.4)? It is the impact of the change requests: They force fast modifications of already complex software-systems. Often many modifications are executed in parallel, with difficult coordination.
Fig. 3.4 Systems engineering devil #2—change. (© www.123rf.com [used with permission])
This unfortunately encourages teams to neglect front-end work (proper architecture, clean designs), to generate technical debt (shortcuts, conceptual carelessness, redundancy), and to foster architecture erosion (violation of architecture principles, local optimization with global damage). Because change is highly intricate to coordinate and balance, it impacts our software negatively in the following ways:

• Uncoordinated projects and systems engineering efforts;
• Redundancy in requirements, specifications, models, and implementation artifacts;
• Architecture erosion;
• Accumulation of technical debt;
• Conflicting requirements, specifications, and implementations.
Change, therefore, must be organized and coordinated through the whole systems engineering process and within the whole organization.
3.5 Uncertainty

Uncertainty refers to situations where no information, or only incomplete information, is available. In IT processes, uncertainty is present in all phases. Uncertainty, both during development and during operation, forces weakly founded decisions with possibly far-reaching consequences. Therefore, uncertainty is devil #3 of systems engineering (Fig. 3.5).

Fig. 3.5 Systems engineering devil #3—uncertainty. (© www.123rf.com [used with permission])
Uncertainty may be classified into:

1. External uncertainty: generated by changes in the market, in the operating environment, and by threats;
2. Internal uncertainty: generated by all internal activities.

Coping with uncertainty during the production or evolution of software-systems requires specific methods [Garlan10, Letier14, McConnell06, Salay12, Bergler94]. These consider options as part of the processes or even of the architecture. Uncertainty during the operation of the software-system is becoming a serious problem. More and more software-systems—especially cyber-physical systems-of-systems (CPSoS)—have to work correctly in operating environments which are fast-changing, unpredictable, complex, and dangerous [Alur15, Lee17, Nakajima17, Romanovsky17, Loukas15]. Good examples are autonomous cars, electric grid management, train control systems, pilotless planes, etc.

Quote: "It is time to embrace uncertainty as a first-class entity, recognizing that in a world where we cannot hope to achieve perfection, we must rethink many of the ways in which we conceive, engineer and validate our software-based systems." (David Garlan, 2010)
Such systems must be equipped with significant autonomous behavior, based on awareness of their environment and on autonomous decision-making using artificial intelligence techniques. A remarkable architecture for such systems is the MAPE-K architecture, introduced with the technology of autonomic computing. Autonomic computing [IBM06, Lalanda13] provides the software runtime systems with self-* properties, such as self-configuration, self-healing, self-optimization, and self-protection. Additional self-* properties have been defined in the literature (see, e.g., [Lalanda13]). Systems based on the autonomic concept therefore exhibit optimized behavior, driven by objectives, reacting intelligently to their environment. Many of these systems even have self-learning capabilities [Alpaydin16], making them more powerful still (see Example 3.3). Because uncertainty has unknown and unforeseen impacts and effects, it negatively affects our software in the following ways:

• Unfounded or inadequate decisions;
• Maladjusted implementations;
• Unanticipated risks or hazards;
• Disasters and catastrophes for which the system is unprepared;
• Sudden changes in markets, the operating environment, or user behavior.
Uncertainty, therefore, must be assessed and risk mitigated through the whole systems engineering process and must be tracked throughout the life span of the software.
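The MAPE-K loop mentioned above (Monitor, Analyze, Plan, Execute over a shared Knowledge base) fits in a few lines of illustrative Python; the load threshold and the scale-out action are invented for this sketch:

```python
def mape_k_step(sensors, knowledge):
    """One pass of the MAPE-K loop: Monitor, Analyze, Plan, Execute
    against a shared Knowledge base (thresholds are illustrative)."""
    # Monitor: read the managed system's sensors
    load = sensors["load"]
    # Analyze: compare the observation against the knowledge base (goals, thresholds)
    overloaded = load > knowledge["max_load"]
    # Plan: choose an adaptation (here, self-optimization by adding a worker)
    plan = "scale_out" if overloaded else "steady"
    # Execute: apply the plan to the managed system
    if plan == "scale_out":
        knowledge["workers"] += 1
    return plan

knowledge = {"max_load": 0.8, "workers": 2}
assert mape_k_step({"load": 0.95}, knowledge) == "scale_out"
assert knowledge["workers"] == 3
assert mape_k_step({"load": 0.40}, knowledge) == "steady"
```

A real autonomic manager runs this loop continuously and enriches the knowledge base over time; the sketch only shows the four phases and the shared state that gives the architecture its name.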
3.6 Structure of Complex Systems

Today's software-systems are highly complicated and complex. They may contain hundreds of millions of source lines of code (see: http://www.informationisbeautiful.net/visualizations/million-lines-of-code, last accessed 24.06.2017). They are built of millions of parts and are highly interconnected. In addition, in many cases, they need to guarantee stringent quality of service properties, such as safety, security, real-time behavior, etc. In order to build, evolve, and maintain these systems, they need a strong, comprehensive, and adequate structure. The structure must foremost allow partitioning of the system and suitable isolation of the partitions. In fact, partitioning with adequate granularity, functional separation, and respect for non-functional properties (such as the rate of change) is the indispensable foundation of a future-proof software-system. We need a generic architecture for this structure. One proven possibility for partitioning very large software-systems is to decompose them along three axes:

• Horizontal functional decomposition (= horizontal architecture layers, Fig. 3.6);
• Vertical quality of service decomposition (= vertical architecture layers, Fig. 3.7);
• Hierarchy levels (Fig. 3.8).
Fig. 3.6 Horizontal architecture layers (functional partitioning)
3 Three Devils of Systems Engineering
Fig. 3.7 Vertical architecture layers (quality of service properties partitioning)
Fig. 3.8 System architecture framework
3.6.1 Horizontal Architecture Layers

The horizontal architecture layers (Fig. 3.6) provide containers for the unique and unambiguous assignment of functionality. Every type of function is exclusively located in one of the five horizontal layers, and each of the layers has its adequate architecture.
In addition, the five layers are isolated from each other by using standardized access mechanisms (see: Sect. 12.2). The horizontal layers are defined as follows:

Business Architecture (see, e.g., [Simon15, Muller12]): This layer contains the business model of the organization, the set of business objects with their properties and relationships, the business rules, the business and management processes, and—last but not least—the domain model [Evans04, Millett15] and the enterprise architecture [Bernard12, Ahlemann12]. Enterprise architecture (EA) is an important framework for linking strategy, business, and technology of an organization.

Applications Architecture (see, e.g., [Lattanze09, Bass13, Rozanski12]): This layer contains the full set of applications (= applications landscape). An application is the set of programs, services, and data which executes part of a business process. The application landscape is the totality of all applications and their relationships in an organization. This layer also contains all management software, such as enterprise resource planning, human resources management, etc. Some of the programs in a large organization may be third-party software. The integration of third-party software into the organization's application landscape may pose a considerable integration problem, e.g., with respect to semantics and synchronization of data.

Information Architecture (see, e.g., [Rosenfeld15, Kleppmann17, Sarkar15]): Information (or data) is the raw material for information processing systems. The programs process data in the applications according to the business processes and business rules. In an IT system, many types of information can be found, such as customer information, reference information, accounting information, archive information, reporting information, etc. Note that in cyber-physical systems additional types of information exist (see: Sect. 12.11).
The organization, precise semantic definitions, storage and access structures, search methods, etc. are highly important for the IT system. This layer contains the adequate artifacts, such as information models [Simsion05], semantic definitions (e.g., in the form of ontologies [Stuart16] or taxonomies [Stewart08]), database schemas and tables, and database management and access algorithms.

Integration Architecture (see, e.g., [Ramanathan13, Roshen09, Ferreira13]): Information processing systems consist of many different parts, sometimes from different manufacturers. Data, information, and control have to be exchanged in an effective, dependable, and sometimes high-performance way between the parts. The rate of change forces the system to accept changes in the configuration, i.e., parts leaving or joining the information processing system. The integration architecture provides the means and techniques to allow this data, information, and control exchange between highly heterogeneous systems, often using bus systems [Chappell04]. The integration architecture also provides the techniques to govern, manage, maintain, find, and use services [Erl11, Alonso04].

Technical Architecture (see, e.g., [Erl17, Arrasjid16, Laan17]): Any software needs an execution platform, i.e., the required hardware, networks, systems and communications software to run the software. The execution platform is often highly distributed and is provided on-site or in the cloud, or a mixture of both. Furthermore, in many applications,
the execution platform is mission-critical and needs to provide high availability, transactional integrity, business continuity, archiving, and monitoring capabilities. Therefore, the execution platform needs an adequate, flexible, and versatile architecture. This layer contains the organization’s technology strategy [Peppard16], the technology portfolio [Maizlish05], the policies and processes for updating, maintaining, configuring, monitoring and reporting, and the strategies/processes/technologies for disaster prevention and disaster recovery.
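The isolation of the horizontal layers via standardized access mechanisms can be illustrated in code. The following is a minimal sketch only; all class and method names are invented for this illustration and are not taken from the book:

```python
from abc import ABC, abstractmethod

# Each horizontal layer exposes a standardized access interface;
# a caller in the layer above depends only on the interface, never
# on the implementation details of the lower layer.

class InformationLayer(ABC):
    @abstractmethod
    def read(self, entity: str, key: str) -> dict: ...

class SqlInformationLayer(InformationLayer):
    """Concrete information architecture: storage details stay here."""
    def __init__(self):
        self._tables = {"customer": {"42": {"name": "A. Smith"}}}
    def read(self, entity: str, key: str) -> dict:
        return self._tables[entity][key]

class ApplicationLayer:
    """Applications architecture: business functionality only —
    no storage or access-path knowledge leaks into this layer."""
    def __init__(self, info: InformationLayer):
        self._info = info
    def customer_name(self, customer_id: str) -> str:
        return self._info.read("customer", customer_id)["name"]

app = ApplicationLayer(SqlInformationLayer())
print(app.customer_name("42"))  # -> A. Smith
```

Replacing `SqlInformationLayer` with any other implementation of `InformationLayer` leaves the application layer untouched — the essence of layer isolation.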
3.6.2 Vertical Architecture Layers

The horizontal architecture layers cover the full business functionality of the organization. Based on these layers, the organization would be able to execute its business and management processes. However, the functionality of the horizontal architecture layers does not guarantee quality of service properties! A system based purely on the horizontal architecture layers cannot provide, e.g., sufficient security, safety, real-time performance, etc. Therefore, it is not viable and needs additional components. These additional components are provided by the vertical architecture layers (Fig. 3.7). One vertical architecture layer is defined for each of the quality of service properties. So we have a security architecture, a safety architecture, a real-time architecture, etc. The number of vertical architectures corresponds to the number of quality properties which are relevant for the application field (more information in Chap. 14). Why do we need vertical architecture layers? Could we not implement, e.g., the security functionality directly in the horizontal applications architecture layer? There are two reasons for defining and maintaining vertical architecture layers:

1. Most quality of service properties impact many or all of the horizontal architecture layers (Example 3.1). Therefore, defining, specifying, and documenting a vertical architecture layer independently leads to a massive reduction of system complexity. This structure also allows the use of "architectural views" [Clements10], greatly simplifying the understanding, governance, and documentation of the IT system;
2. Orthogonality: Most of the quality of service requirements will impose restrictions or demands on the implementation of the horizontal architecture layer elements.
It is a tremendous simplification if the vertical architecture layers can be defined, specified, and documented (architecture view) independently, i.e., if they are orthogonal to the horizontal layers.

Example 3.1: Vertical Architecture "Security"—Access Control
In a financial institution’s information system, access to sensitive data must be strongly protected. Only the correctly authenticated and authorized persons must have access to their financial records. Granting access to any information record includes two steps:
1. Authentication of the user's identity;
2. Access control according to a rights database (= Authorization).

A modern security architecture solution requires the following steps:

a) Identify the user and assign a digital identity. Modern systems use smart cards with a personal digital certificate [Buchmann13] to uniquely identify the person. The digital certificates are stored on a credit-card-like smart card;
b) Define and store the access rights of the person (= digital ID) in an access rights database;
c) Whenever the person (= digital ID) attempts to access an application or a data set, check the access rights and allow/deny access.

The steps a)–c) are requirements of a dependable security architecture for access control. For the implementation, the security architecture elements need to be integrated into the horizontal (functional) architecture layers. This integration is shown in Fig. 3.9. The security architecture elements are integrated as follows:

• Business architecture layer: Execute an identification process for the person, issue the digital identity (= digital certificate), and define the access rights;
• Applications architecture layer: Check the access rights for the digital identity before any access to applications or information is granted;
Fig. 3.9 Security architecture—access control
• Information architecture layer: Maintain the digital ID and the access rights database;
• Integration architecture layer: Provide secure services for authentication, authorization, monitoring, and record keeping;
• Technical architecture layer: Provide the smart cards, load the corresponding digital certificates, and operate the secure smart card readers.

Figure 3.9 shows two important facts:

1. The security architecture for access control is defined and specified independently from the horizontal, functional architectures;
2. No direct security functionality is implemented in the horizontal architectures (only calls to the security infrastructure!). Never implement security functionality directly into applications!
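The access-control flow of Example 3.1 can be sketched as follows. All classes, the digital ID, and the permission string are hypothetical; the point of the sketch is only that the application layer calls the security infrastructure instead of implementing security itself:

```python
# The rights database lives in the information architecture layer;
# the security services live in the integration architecture layer;
# the application only calls them — it contains no security logic.

class AccessRightsDatabase:          # information architecture layer
    def __init__(self):
        self._rights = {"digital-id-007": {"accounts:read"}}
    def rights_of(self, digital_id: str) -> set:
        return self._rights.get(digital_id, set())

class SecurityServices:              # integration architecture layer
    def __init__(self, rights_db: AccessRightsDatabase):
        self._db = rights_db
    def authorize(self, digital_id: str, permission: str) -> bool:
        return permission in self._db.rights_of(digital_id)

class AccountApplication:            # applications architecture layer
    def __init__(self, security: SecurityServices):
        self._security = security
    def read_account(self, digital_id: str, account: str) -> str:
        # Call to the security infrastructure, no own security code:
        if not self._security.authorize(digital_id, "accounts:read"):
            raise PermissionError("access denied")
        return f"balance of {account}"

sec = SecurityServices(AccessRightsDatabase())
app = AccountApplication(sec)
print(app.read_account("digital-id-007", "CH-1234"))
```

An unknown digital ID raises `PermissionError`, while the application code itself never inspects certificates or rights.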
3.6.3 Hierarchy Levels

The third axis in the structural organization in Fig. 3.8 is the hierarchy. The following levels of hierarchy can be identified (we are mainly interested in the hierarchy of software elements as in Fig. 3.10):
Fig. 3.10 Software hierarchy
• Program, Module: These are the smallest units of software. They have a clearly defined functionality and are the building blocks of components;
• Component: The component is an encapsulated section of functionality with defined interfaces. Components are the building blocks of applications;
• Application: An application is a composition of components, which work together to achieve a specific, high-level task;
• Application Landscape: The application landscape is the totality of all applications of an organization (see Definition 3.3);
• System-of-Systems: For many advanced tasks, the cooperation of different application landscapes, different organizations, and different governance regions is required. This leads to the formation of systems-of-systems (see: Sect. 3.14).

Note: When cyber-physical systems are considered, the interfaces between the cyber part and the physical part must be included. These are sensors for the input of physical information to the software-system and actuators for the control of the physical devices by software (Fig. 3.8).

The key fields of architectural interest for future-proof software-systems are the application landscapes and their cooperation in systems-of-systems (see Definition 3.3 below).

Definition 3.3: Application Landscape
Set of interacting applications and data, cooperating to achieve a common objective: for example, operate a bank, drive a car, or control a manufacturing process.
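The software hierarchy of Fig. 3.10 can be sketched as a simple composition of data structures; all names below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Module:                 # smallest unit, clearly defined functionality
    name: str

@dataclass
class Component:              # encapsulated functionality, defined interfaces
    name: str
    modules: list = field(default_factory=list)

@dataclass
class Application:            # composition of components for a high-level task
    name: str
    components: list = field(default_factory=list)

@dataclass
class ApplicationLandscape:   # totality of all applications of an organization
    applications: list = field(default_factory=list)

landscape = ApplicationLandscape([
    Application("payments", [Component("ledger", [Module("posting")])]),
])
print(len(landscape.applications))  # -> 1
```

A system-of-systems would, in the same spirit, be a collection of cooperating `ApplicationLandscape` instances across organizations.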
3.7 Types of Information Processing Systems

It has become common to categorize information processing systems coarsely into:

• Enterprise computing;
• Embedded computer systems;
• Cyber-physical systems.

Although the principles for building future-proof software-systems are mostly the same for all three types, they differ considerably in their quality of service properties, i.e., in their vertical architectures.
3.7.1 Enterprise Computing

Enterprise computing designates the information technology which is used to execute a company's or organization's business operations (e.g., [Stair17]). This includes business applications, management applications, databases and database management, enterprise
resource planning, etc. An enterprise computing system supports the operations of one specific company, such as a financial institution, a car reservation system, an insurance company, or a government tax office. Such systems are typically query systems, where users input their queries or information via a peripheral device (PC, laptop, mobile device) and receive the answer back via the same device.
3.7.2 Embedded Computers

Embedded computers are "invisible" computers (e.g., [Wolf00]): they are embedded into a product and provide the user with a specific functionality. Examples of embedded computing are our smartphones (which today contain quite powerful processors and large memories), smart watches (with all sorts of added functionality), and GPS navigation systems. Embedded computers are also frequently used to execute control functions, such as the anti-skid braking control or the automatic distance keeping in a car, the autopilot in an aeroplane, cardiac pacemakers, etc. Such embedded computers with interfaces to the real world are called cyber-physical systems (see below and, e.g., [Lee17]).
3.7.3 Cyber-Physical Systems

Cyber-physical systems have proliferated tremendously: They can be found in myriads of devices, where they interact with and control a specific part of the real world. A precise definition is given in Definition 3.4.

Definition 3.4: Cyber-Physical System
A cyber-physical system (CPS) consists of a collection of computing devices communicating with one another and interacting with the physical world in a feedback loop. (R. Alur, 2015)

A comprehensive example of a cyber-physical (embedded) system is given in Example 3.2.

Example 3.2: Automotive Anti-Skid Braking Control (ABS)
In critical situations, a car must be stopped in the shortest possible time—even if the road and weather conditions are bad. To enable a short stopping distance, all modern cars have computer-controlled anti-skid braking control systems onboard [Zaman15]. The ABS-system is shown in Fig. 3.11. The physical part of the system consists of:

• The four wheels of the car;
• The four wheel rotation sensors (measuring the rotation speed of each wheel);
• The four hydraulic braking calipers (braking each wheel individually);
Fig. 3.11 Anti-Skid Braking System (ABS)
• Four analog/digital converters (converting the analog rotation speed signals to a digital format);
• Four digital/analog converters (converting the digital control signal back to an analog signal);
• The ABS warning light indicating an ABS-system failure.

The cyber part of the system is the embedded ABS-computer. The ABS-system is not a stand-alone system, but is networked with many other electronic systems in the car, such as the automatic distance control, the electronic stability control computer, etc. The operating mode of the ABS-system is as follows:

• The four wheel rotation sensors (front right, …, rear left) measure the wheel rotation rate of each individual wheel (100x/sec);
• The embedded ABS-computer identifies differences in the wheel rotation rates (e.g., indicating a skid), calculates the situation-adequate rotation rate for each wheel (the ABS-computer may receive additional information from other systems in the car, e.g., the gyroscope or the electronic stability system), and issues individual braking signals for each hydraulic caliper;
• The wheels are individually slowed down to stabilize the car.

This system—as many cyber-physical systems are—is a closed control loop system governed by software [Astrom11].
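One cycle of such a software-governed control loop can be sketched drastically simplified. The slip threshold and the two-valued control law below are invented for illustration only and have nothing to do with a real ABS control law:

```python
def abs_control_step(wheel_speeds, vehicle_speed, threshold=0.15):
    """One cycle of a (highly simplified) anti-skid loop: a wheel
    turning much more slowly than the vehicle is about to lock, so
    its brake pressure is released; otherwise braking is allowed."""
    commands = {}
    for wheel, speed in wheel_speeds.items():
        slip = (vehicle_speed - speed) / vehicle_speed if vehicle_speed else 0.0
        commands[wheel] = "release" if slip > threshold else "brake"
    return commands

# Front-left wheel is skidding (30 km/h against a vehicle speed of 50 km/h):
print(abs_control_step(
    {"FR": 49.0, "FL": 30.0, "RR": 48.5, "RL": 49.5}, vehicle_speed=50.0))
```

A real ABS runs such a cycle at a high rate (the text mentions 100x/sec per sensor) and modulates brake pressure continuously rather than switching it on and off.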
Many modern cyber-physical systems have to work correctly in highly complex, fast-changing, and uncertain environments—such as driverless cars [Maurer16], unmanned aircraft [Valavanis14], conductorless trains, intelligent traffic control systems [USA-GA15], medical robots [Schweikard15], rescue robots [Sreejith12], planetary explorer vehicles [Gao16], and many more. In order to complete their mission in a successful and safe way, such systems need a high degree of autonomy, i.e., they must be able to adapt to unknown situations, learn from their environment, and take stand-alone decisions. Autonomy thus becomes an important capability (property) of such advanced systems. Autonomy has been studied for a long time. In 2006, a reference architecture for autonomic computing, "MAPE-K" [IBM06, Lalanda13], was presented, which has found many applications. Such challenging autonomic systems rely on artificial intelligence [Tianfielda04, Russell17] and machine learning [Alpaydin16]. Example 3.3 shows both the MAPE-K reference architecture and how it fits into the generic architecture presented in Sect. 3.6 above.

Example 3.3: MAPE-K Reference Architecture
The MAPE-K (Monitor-Analyze-Plan-Execute-Knowledge) reference architecture [IBM06] is shown in Fig. 3.12. Basically, this is a software-governed feedback control loop. The physical world is sensed via sensors (measuring physical quantities such as temperature, speed, etc.—but also including cameras, gyroscopes, etc.). The input data from the physical world is read in and preprocessed and then analyzed. The analysis results are transferred to the planning functionality, which decides on the actions to be taken. Finally, the execute block generates the necessary signals to control the physical world, which are then applied via actuators. All four functional blocks use knowledge and artificial intelligence capabilities of the central block.

Fig. 3.12 MAPE-K reference architecture
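A minimal sketch of the MAPE-K loop structure follows; the thermostat domain, the thresholds, and all names are invented for illustration — only the Monitor-Analyze-Plan-Execute chain over shared Knowledge corresponds to [IBM06]:

```python
class MapeK:
    def __init__(self, target):
        self.knowledge = {"target": target}        # K: shared knowledge

    def monitor(self, sensor_value):               # M: read the sensors
        return {"temperature": sensor_value}

    def analyze(self, data):                       # A: detect the deviation
        return data["temperature"] - self.knowledge["target"]

    def plan(self, deviation):                     # P: decide on an action
        if deviation > 1.0:
            return "cool"
        if deviation < -1.0:
            return "heat"
        return "idle"

    def execute(self, action):                     # E: drive the actuators
        return f"actuator: {action}"

    def step(self, sensor_value):
        """One pass through the feedback loop."""
        return self.execute(self.plan(self.analyze(self.monitor(sensor_value))))

loop = MapeK(target=21.0)
print(loop.step(25.5))  # -> actuator: cool
```

In a real autonomic system, the Analyze and Plan blocks would apply machine learning and knowledge-based reasoning instead of fixed thresholds.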
Fig. 5.11 Structure of a taxonomy
5 Evolution Strategies
and time must be invested—the taxonomy will stay for a long time in the organization and it is very difficult to change it once it is used for many applications! In Example 5.5, the taxonomy for the dependability property for a financial institution is shown. Example 5.5: Dependability Taxonomy for a Financial Institution
The dependability taxonomy for a financial institution has three levels: The top-level concept is "dependability"—our focus of attention. The n-1 level includes Security, Compliance, Accountability, and Business Continuity. Security is the top priority for the customers of a financial institution. Compliance with all laws and regulations of the legislations in which the institution is active is an essential part of the business. The number and the impact of banking regulations are rising every year (see, e.g.: http://www.moodysanalytics.com/risk-perspectives-magazine/integrated-risk-management/regulatory-spotlight/global-banking-regulatoryradar). Accountability in all interactions, business processes, and customer interactions is also a strong requirement. Finally, Business Continuity must assure that the business processes are only interrupted for a short time and no data is lost in the case of failures, disasters, or attacks. The n-1 levels are then further divided into ten n-2 concepts. To assure dependability, all ten n-2 concepts have to be cared for sufficiently. This means that, on all levels of the architecture defined in Fig. 3.8, effective protection means and strong technology must be implemented.

The next step is to introduce metrics: Metrics are necessary for most of the lowest-level concepts (n-2 in Fig. 5.12) in order to ensure the required dependability. The metric and the corresponding data are only defined and acquired for the lowest-level concepts—the metrics for the concepts higher up in the taxonomy are aggregated (see Fig. 5.13).

Definition 5.5: Metric
Standards of measurement by which efficiency, performance, progress, or quality of a plan, process, or product can be assessed. (http://www.businessdictionary.com/definition/metrics.html)

Here a word of caution is in order: Although meaningful and reliable metrics are indispensable in most applications, it is a (very hard) fact that metrics are difficult, expensive, and laborious in the long run!
Defining the metric and the corresponding data-acquisition process must, therefore, be done extremely carefully, always with the cost/benefit in mind (see, e.g., the introduction in [Fenton15]). Unreliable or unfocussed metrics do more harm than good!

Quote: "Correctness is clearly the prime quality. If a system does not do what it is supposed to do, then everything else about it matters little." (Bertrand Meyer)
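The aggregation of leaf metrics up the taxonomy — values acquired only for the lowest-level concepts, higher-level values derived from them — can be sketched as follows. The taxonomy fragment, the numeric values, and the "weakest leaf wins" aggregation rule are illustrative assumptions, not prescriptions from the book:

```python
# Two n-1 concepts with their n-2 leaf metrics (values invented):
taxonomy = {
    "Security":   {"Confidentiality": 0.97, "Integrity": 0.99,
                   "Availability": 0.95},
    "Compliance": {"National Laws & Regulations": 0.90,
                   "International Laws & Regulations": 0.88},
}

def aggregate(node):
    """Aggregate leaf metrics upward; here the minimum (i.e., the
    weakest leaf) is taken as the conservative value of the parent."""
    if isinstance(node, dict):
        return min(aggregate(child) for child in node.values())
    return node

for concept, leaves in taxonomy.items():
    print(concept, aggregate(leaves))
print("Dependability", aggregate(taxonomy))   # top-level concept
```

Other aggregation rules (weighted averages, checklist counts) fit the same recursive pattern; the essential point is that data is acquired only at the leaves.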
5.5 Dependability Metric

Fig. 5.12 Dependability taxonomy for a financial institution (Dependability → Security: Confidentiality, Integrity, Availability; Compliance: National Laws & Regulations, International Laws & Regulations; Accountability: Traceability, Auditability, Trustworthiness; Business Continuity: Disaster Recovery, Fault Tolerance)
Fig. 5.13 Metrics for the lowest level dependability concepts in the taxonomy
Any metric must be focussed and tailored specifically to the application (based on the taxonomy) and the organizational needs. The second, also very challenging task after building the taxonomy, is to define the expressiveness and value of the required metrics. A vast literature on dependability metrics exists, e.g., [Eusgeld08, Jaquith07, Brotby13, Hayden10, Young10, Brotby08, Wong00, Mateski17, Herrmann07, Aroms12, Janicak15, Hubbard16, Freund14], and many more.
5.6 Other Quality of Service Property Metrics

The key ideas of the Managed Evolution strategy (Definition 5.1) are:

1. Business value, changeability, and dependability are continuously improved;
2. Business value, changeability, and dependability are tracked by reliable metrics;
3. All (other) quality attributes are as good as necessary, i.e., as requested by the business units.

Points 1 and 2 have been handled above. We now have to deal with the other quality of service properties. There is a vast number of additional quality of service properties which characterize a software-system (see Table 4.1). Some of them may be needed to manage specific software-systems, and thus they may also need a suitable metric, such as the response time metric in Example 5.6.

Example 5.6: Response Time
Response time of an Internet banking application is a good example of a quality of service property. The metric is the time required from pressing "ENTER" to the appearance of the first element of the response on the user's screen. The response time is dependent upon the load, i.e., on the number of concurrent users. The number and activity of the users vary over the day, resulting in a measured response time distribution as shown in Fig. 5.14. Figure 5.14 also shows the slowest response time (peak), the 24 h average, and the desired target response time.
Fig. 5.14 Response time distribution (response time over the day, showing the peak, the 24 h average, and the target)
The 24 h average response time is higher than the response time target required by the business: Therefore, the metric shows unsatisfactory performance, and more servers and load balancers must be installed to reduce the response time. Fortunately, technologies exist today to adapt the server power dynamically to the response time.
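The response time metric of Example 5.6 can be computed from raw measurements as in the following sketch; the sample values and the target are invented for illustration:

```python
def response_time_report(samples_sec, target_sec):
    """Derive the peak and the 24 h average from a day's measurements
    and compare the average against the business target."""
    peak = max(samples_sec)
    average = sum(samples_sec) / len(samples_sec)
    return {"peak": peak,
            "24h average": round(average, 2),
            "target": target_sec,
            "satisfactory": average <= target_sec}

# One (invented) response-time sample per measurement period, in seconds:
measurements = [0.8, 1.2, 4.9, 7.5, 3.1, 1.0]
print(response_time_report(measurements, target_sec=2.0))
```

Here the 24 h average exceeds the 2 s target, so the report flags the performance as unsatisfactory — the situation described above.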
5.7 Software Quality Metrics

The sections above presented metrics for the business value, the changeability, and the dependability, as well as for other quality of service properties. These are external metrics, because they measure some externally visible property of the software-system. Some of the classical internal metrics, which express some internal quality attribute of the software, are often also highly valuable for future-proof software-systems [Fenton15, Kan02, Jones17, Galin17, Abran10, Genero05, Ejiogu05, Oo11, Gupta17]. Such metrics—when chosen by the architecture team—must also be restricted to a positive cost/benefit value and be carefully maintained.
5.8 Quality of Service Properties Scorecard

How can IT management and the architecture team keep a clear view of the tracked quality of service properties of their software-systems? One proven presentation form is the quality of service properties scorecard (Definition 5.6).

Definition 5.6: Quality of Service Properties Scorecard
The quality of service properties scorecard is a table which presents the quality of service properties selected for the specific software-system and their quantitative evolution over time.

For a software-system under Managed Evolution, the scorecard has a basic form as shown in Table 5.3: First, the primary quality of service properties are listed. Remember that dependability is not an individual quality of service property, but is compounded from a number of application-specific quality of service properties (Example 5.5). The other quality of service properties (such as performance, response time, throughput, energy consumption, etc.) are also application-specific. The same holds true for (internal) software metrics. All are listed in the scorecard, and their successive values for time periods T1, T2, T3, … are recorded in the scorecard. Statistically significant periods must be chosen (meaningful averages). For some properties, e.g., compliance in Example 5.5, a checklist instead of a numerical metric is used: Compliance is measured by comparing the application with a list of laws and regulations and noting no/partial/full compliance. For a very large software-system, such as a financial institution or car electronics, the scorecard must be adapted to individual subsystems—therefore, more than one scorecard exists for the software-system.
76
5 Evolution Strategies
Table 5.3 Managed Evolution quality scorecard

QoS Property                        Metric           Period T1     Period T2     Period T3     …
Key Managed Evolution Properties:
  Business Value                    NPV              Value1        Value2        Value3        …
  Changeability                     Definition 5-3   Value1        Value2        Value3        …
  Dependability:
  • QoS Property A                  Metric A         Value1        Value2        Value3        …
  • QoS Property B                  Checklist        Assessment1   Assessment2   Assessment3   …
  • QoS Property C                  Metric C         Value1        Value2        Value3        …
  • QoS Property D                  Metric D         Value1        Value2        Value3        …
  • QoS Property E                  Checklist        Assessment1   Assessment2   Assessment3   …
Other Quality of Service Properties:
  …
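A scorecard like Table 5.3 can be kept in machine-readable form, so that trends over the periods become computable; the properties, metrics, and values below are invented for illustration:

```python
# One entry per selected QoS property: the metric used and the
# recorded value per period (invented example data):
scorecard = {
    "Business Value":  {"metric": "NPV",         "values": {"T1": 1.2, "T2": 1.4}},
    "Changeability":   {"metric": "Def. 5-3",    "values": {"T1": 0.6, "T2": 0.7}},
    "Confidentiality": {"metric": "audit score", "values": {"T1": 0.9, "T2": 0.92}},
}

def trend(entry):
    """Is the property improving over the recorded periods?"""
    vals = list(entry["values"].values())
    return "improving" if vals[-1] >= vals[0] else "degrading"

for prop, entry in scorecard.items():
    print(f"{prop:15s} {entry['metric']:12s} {trend(entry)}")
```

Checklist-based properties (no/partial/full compliance) can be mapped to ordinal values and handled by the same trend computation.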
5.9 Managed Evolution Operationalization

Definition 5.1 and Fig. 5.6 presented the Managed Evolution strategy. So far, the theoretical foundation and justification have been laid. The next question is: "How can we operationalize the strategy?" How are business value, changeability, and dependability continuously improved? The precondition is that (most) projects are allocated development money and development time not only for the generation of business value, but also for improving changeability and dependability (Fig. 5.15). For (almost) all projects, an additional budget in money and longer time-to-market should be allowed. In today's competitive market, this requires the appreciation by the business units (which allocate the money), and a management strongly committed to long-term sustainability of the information technology systems.
5.10 Continuous Improvement, Constant Rearchitecting, Regular Refactoring

The project team must not only present the solution, the estimated cost, and time-to-market for the implementation of the requested new functionality, but at the same time also the additional cost and additional development time necessary for the local improvement
Fig. 5.15 Additional allocation of resources to improvement of changeability and dependability
of the software-system. When these additional resources are approved, the project team—with the help of additional specialists within the organization—can implement both the functional extension and the improvement of the quality of service properties of the software-system in parallel. For the improvements, a number of techniques exist (Fig. 5.16): A selection suitable for the objectives of the project at hand must be chosen (see, e.g., Example 5.7).
Fig. 5.16 Some techniques for the improvement of QoS properties
Example 5.7: Database Extension
For the introduction of new Internet banking functionality an existing database had to be extended by a number of new fields. The existing database was a legacy database without a data model, with redundancy and overlaps. The quick (and dirty) extension would be just to add the new fields. The Managed Evolution approach, however, required a thorough modernization of the database. The modernization included a full data model [Simsion05, VanRenssen14, Kleppmann17], the elimination of redundancy and overlaps, the fit of the new fields into the data model, and the programming of a compatibility layer to allow legacy applications to still access the database without modifications of the old applications—but unfortunately, with some loss of performance—as shown in Fig. 5.17. Quote: “Data models are perhaps the most important part of developing software, because they have such a profound effect: not only on how the software is written, but also on how we think about the problem that we are solving.” (Martin Kleppmann, 2017)
The activities in this project are shown in Table 5.4. An interesting fact is the effort allocation: Only 20% of the effort was used for the functional extension asked for by the application, and 80% of the effort was invested into the rearchitecting/refactoring (modernization) of the database! The legacy applications can now be migrated to the new database model individually and at their own pace. A serious problem in large software-systems is the downward compatibility with the existing, large base of legacy applications (which takes a long time to be modernized and migrated). When the last legacy application has been migrated, the compatibility layer can be explemented (removed).
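The compatibility layer of Example 5.7 can be sketched as a thin adapter; the classes and field names below are invented for illustration:

```python
class ModernCustomerStore:
    """New, redundancy-free data model."""
    def __init__(self):
        self._rows = {"42": {"family_name": "Smith", "iban": "CH93-0000"}}
    def get(self, customer_id):
        return self._rows[customer_id]

class LegacyCompatibilityLayer:
    """Maps the old flat field names onto the new model, so legacy
    applications keep working unchanged — at the cost of an extra
    indirection (the performance loss mentioned in the example)."""
    _FIELD_MAP = {"NAME": "family_name", "ACCT_NO": "iban"}
    def __init__(self, store):
        self._store = store
    def read_field(self, customer_id, legacy_field):
        return self._store.get(customer_id)[self._FIELD_MAP[legacy_field]]

compat = LegacyCompatibilityLayer(ModernCustomerStore())
print(compat.read_field("42", "NAME"))   # legacy call still works
```

Once the last legacy application has been migrated to the new field names, the adapter can be removed without touching the modernized store.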
Fig. 5.17 Database extension: (a) existing database; (b) extended and modernized database with compatibility layer
5.10 Continuous Improvement, Constant Rearchitecting, Regular Refactoring
Table 5.4 Database modernization

                        Business Value                 Rearchitecting & Refactoring
                        (New Functionality)
Deliverables            Seven new fields added         • Data model developed and implemented
                        (for the new Internet          • Redundancy eliminated
                        applications)                  • Overlaps eliminated
                                                       • Conceptual integrity established
                                                       • Compatibility layer introduced
Effort (DevC and TtM)   20%                            80%
The Managed Evolution Strategy is based on continuous improvement, constant rearchitecting [Feathers19, Bernstein15, Miller98, Seacord03, Ulrich02], and regular refactoring [Fowler99, Burchard17, Ambler11, Kerievsky04] in parallel with the development of new business value. Some of the possible improvement activities include:

• Elimination of technical debt
• Rearchitecting for adherence to architecture principles
• Refactoring to applicable patterns
• Refactoring to improve existing code
• Migration to relevant industry standards
• Implementation of new resilience mechanisms
• Rearchitecting parts of the software-system
• Refactoring applications
• Technology replacement
• Model improvement
• …

We formulate this as our first principle (Principle 5.1):

Principle 5.1: Continuous Rearchitecting, Refactoring, and Reengineering
Whenever a part of the software-system is modified or extended to generate new business value, some existing part of the software-system must be improved by rearchitecting, refactoring, or reengineering to improve the quality of the software-system.

On average, the investment (in money and time) of the rearchitecting, refactoring, or reengineering effort should be a fixed percentage of the investment into the creation of new business value. The percentage of the investment assigned to rearchitecting, refactoring, or reengineering activities is a much-disputed question. Experience has shown [Murer11] that sustainability in the sense of the Managed Evolution Strategy requires a percentage of 8–11%.
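The fixed-percentage rule of Principle 5.1 amounts to a simple budget calculation. The sketch below is illustrative; only the 8–11% range comes from [Murer11], while the function name and the 10% default are assumptions:

```python
def improvement_budget(business_value_investment: float,
                       percentage: float = 0.10) -> float:
    """Budget for the rearchitecting/refactoring/reengineering work coupled
    to a business-value project (Principle 5.1).

    Experience [Murer11] suggests a fixed percentage of 8-11%;
    the 10% default here is an illustrative choice, not a source value.
    """
    if not 0.08 <= percentage <= 0.11:
        raise ValueError("sustainability requires roughly 8-11% [Murer11]")
    return business_value_investment * percentage

# A project creating new business value for 2,000 k€ should therefore carry
# roughly 160-220 k€ of improvement work:
low = improvement_budget(2000, 0.08)   # 160.0
high = improvement_budget(2000, 0.11)  # 220.0
```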
5 Evolution Strategies
Fig. 5.18 Tracking business value and changeability over time: two diagrams showing the cumulative business value (NPV) and the changeability (k€ and days per UCP) over the measurement periods P1–P10
5.11 Progress Tracking

Consistently following the Managed Evolution Strategy leads to a measurable improvement (Fig. 5.6) of the key characteristics. Thanks to the metrics, the progress over time can be tracked. Business value and changeability are single numeric quantities and can be shown over time in diagrams (Fig. 5.18). In Fig. 5.18, the cumulative business value is shown, i.e., the new business value accumulated in each period by the projects. Estimating the business value erosion—which should be deducted from the generated business value—is difficult and is not considered here. Tracking dependability is more complicated: because dependability is a conglomerate of quality of service properties, all lines of the Managed Evolution scorecard (Table 5.3) must be evaluated and assessed.
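The cumulative business value plotted in Fig. 5.18 is a running sum of the value generated per period. A minimal sketch, with purely illustrative numbers (the source shows the idea only graphically):

```python
# Sketch: cumulative business value over measurement periods (cf. Fig. 5.18).
# The per-period figures are invented for illustration; business value
# erosion is not deducted, as in the text.
from itertools import accumulate

# New business value generated by the projects in periods P1..P5 (e.g., NPV in k€)
new_value_per_period = [120, 80, 150, 90, 110]

# Cumulative business value, as plotted over time
cumulative_value = list(accumulate(new_value_per_period))
# [120, 200, 350, 440, 550]
```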
5.12 Periodic Architecture Programs

There are some causes of degeneration of the software-system that cannot be rectified by the continuous improvement introduced by the Managed Evolution strategy. Such causes include externally generated technical debt (see: Force of Entropy) or the introduction of new technology. In such cases, a specific architecture program must be launched to adapt the software-system.
Example 5.8: Introduction of Digital Certificates
Until the year 2000, a financial institution used the standard UserName/Password scheme to allow its employees to access data-sensitive applications (Fig. 5.19). After 2000, the institution decided to strengthen the authentication process by assigning digital certificates [Feghhi98, Buchmann13], stored on smart cards, to all employees (Fig. 5.19). Because such a new technology deployment cannot be done via incremental managed evolution steps, a company-wide architecture program had to be launched. As a single sign-on infrastructure already existed (https://archive.is/20140315095827/http://www.authenticationworld.com/Single-Sign-On-Authentication), the architecture program had the following objectives:

• Implement and deploy a unified digital certificate management infrastructure;
• Define and deploy the necessary processes for digital certificate management.
Fig. 5.19 Introduction of digital certificates for authentication: access to the data-sensitive applications via UserName/Password before 2000; via UserName/Password plus certificate, supported by the single sign-on infrastructure and the digital certificate management infrastructure, afterwards

5.13 Processes for Managed Evolution

Software is built, evolved, and maintained via specific processes. Each organization depending on software-systems has a large number of particular processes, which are defined in accordance with company structure, company culture, and tradition. For our focus, the most important process is the software development process. A very large number of different software development processes exist (examples: [Thayer12a, Thayer12b, Thayer12c, Kuhrmann16, Münch14, Kruchten03, Ahern08, Oram10, Shuja07, Armour03, Jeffries15, Microsoft11, Humble10, Nygard16, Lines12]).
5.14 Architecture Process

In Managed Evolution, the crucial process is the architecture process:

Definition 5.7: Managed Evolution Architecture Process
The Managed Evolution architecture process is a formal operational procedure to define, specify, review, and enforce architectural principles, patterns, reference models, and standards during software development, evolution, and maintenance.

An adequate, consistent, and well-maintained architecture is the foundation of future-proof software-systems. Architecture is the result of a strong, effective, and respected architecture process.

Quote: “The process of architecting is a series of decisions. Documenting these architecture decisions with the justifying rationale is very valuable later in the life cycle of the system of interest.” (Tim Weilkiens, 2016)
The architecture process for Managed Evolution is shown in Fig. 5.20. The process has three phases:
Fig. 5.20 Architecture process for managed evolution: backed by top management, the architecture team defines principles, patterns, and standards in the architecture directives repository; it consults and supports the projects (1st review) during the specification & design phase and reviews and enforces the directives (2nd and 3rd reviews) during the implementation and deployment phases
1. In the first phase, the architecture directives valid for all software development, evolution, and maintenance activities are defined, formalized, and deposited in a repository;
2. In the second phase, which corresponds to the individual specification and design phases of the projects, the architecture team acts as consultants and advisors. They actively help the project team to comply with the architecture directives and assist their decisions;
3. During the third phase, which corresponds to the implementation and deployment phases of the individual projects, the architecture team reviews the project work and—if necessary—enforces the full adherence to the architecture directives.

Three reviews take place: the first checks the design done by the project, including the correct integration into the existing software-system. If any violations of architecture directives are found, the project must rectify the design before it is allowed to proceed. The second review examines the implementation. Again, if any deviations from the design or any violations of architecture directives are found, these must be corrected before deployment is allowed. Finally, a third review scrutinizes the actual deployed system and corrects any faults.

Each review may delay the project: if the review is unsuccessful and the project has to do some correction work, time (and money) is lost. This impacts development cost and time-to-market—which is not pleasant for the business unit waiting for the new functionality. Therefore, a strong backing of the architecture process by the top management is indispensable. In addition, competent IT architects with the necessary soft skills are needed [Chou13].
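The gate character of the reviews can be sketched as a small pipeline. Everything below — the directive names, the function names, and the "rectify until clean" loop — is an illustrative assumption, not a process definition from the source:

```python
# Illustrative sketch of the three review gates: a project may only proceed
# to the next phase when the current review finds no violated directives.

DIRECTIVES = {"layer-isolation", "use-reference-model", "industry-standards"}

def review(satisfied: set) -> set:
    """Return the architecture directives the project artifact still violates."""
    return DIRECTIVES - satisfied

def architecture_process(project: dict) -> list:
    """Run the 1st (design), 2nd (implementation), and 3rd (deployment) reviews.

    Any violation forces rework before the project may proceed - this rework
    is where the time (and money) mentioned in the text is lost.
    """
    log = []
    for phase in ("design", "implementation", "deployment"):
        violations = review(project[phase])
        if violations:
            log.append(f"{phase}: rework needed for {sorted(violations)}")
            project[phase] |= violations  # simulate rectification by the project
        log.append(f"{phase}: review passed")
    return log
```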
5.15 The Value of Managed Evolution

Table 5.5 demonstrates the financial value of Managed Evolution. It assumes three organizations A, B, and C and lists the quantified changeability of their application landscapes. Organization A develops software at the rate of 4.2 k€/UCP and 0.8 days/UCP [Murer11]. Organization B needs 10.0 k€/UCP and 4.0 days/UCP. The last organization, C, has to invest 50.0 k€/UCP and 10.0 days/UCP. These are measured values at the end of a fiscal year. Now assume that in the following year each organization wants to implement 10,000 (Case 1) or 50,000 (Case 2) Use Case Points of new functionality. The development costs for the three organizations are listed in Table 5.5.

Table 5.5 Value of managed evolution

Organization   Changeability                  Planned Project   Total DevC       Disadvantage
               DevC          TtM              Volume (#UCP)     (€/year)         (Penalty, €)
A              4.2 k€/UCP    0.8 days/UCP     10,000            42,000,000       0
                                              50,000            210,000,000      0
B              10.0 k€/UCP   4.0 days/UCP     10,000            100,000,000      58,000,000
                                              50,000            500,000,000      290,000,000
C              50.0 k€/UCP   10.0 days/UCP    10,000            500,000,000      458,000,000
                                              50,000            2,500,000,000    2,290,000,000

The financial penalty due to low changeability is worrying. Organizations B and C are in a very difficult competitive situation compared to organization A: they will need a tremendously higher effort—year after year! The situation is presented in Fig. 5.21: organizations B and C followed an opportunistic strategy with the main emphasis on generating business value, neglecting the important quality of service properties of the application landscape. The continuously deteriorating ability to adapt the system (changeability) will at some point make the system unmanageable (= trajectory to death). As high changeability strongly depends on good architecture, the business value of good architecture can be recognized [Schoen18].

Fig. 5.21 Opportunistic evolution trajectory: changeability plotted against business value for the organizations A, B, and C, with their measured k€/UCP and days/UCP values
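The figures in Table 5.5 follow from a one-line multiplication per organization. The sketch below reproduces them; the rates are from the table, while the function and variable names are illustrative:

```python
# Reproducing Table 5.5: total development cost = rate (k€/UCP) x volume (UCP),
# expressed in EUR; the penalty is the extra cost relative to organization A.

RATES_KEUR_PER_UCP = {"A": 4.2, "B": 10.0, "C": 50.0}

def total_devc_eur(org: str, ucp: int) -> float:
    """Total development cost in EUR for implementing `ucp` Use Case Points."""
    return RATES_KEUR_PER_UCP[org] * ucp * 1000  # k€ -> €

def penalty_eur(org: str, ucp: int) -> float:
    """Cost disadvantage relative to the high-changeability organization A."""
    return total_devc_eur(org, ucp) - total_devc_eur("A", ucp)

# Case 1 (10,000 UCP): A: 42,000,000 € (penalty 0);
#                      B: 100,000,000 € (penalty 58,000,000 €);
#                      C: 500,000,000 € (penalty 458,000,000 €)
```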
5.16 Final Words

A few final words on Managed Evolution:

1. The creation of business value is essential, but not the only objective;
2. Continuously improving changeability is the key to the sustainable competitiveness of the organization;
3. Dependability is the foundation of customer satisfaction and of survivability in a hostile environment (these facts are exemplified in a nontechnical novel worth reading [Rodin00]);
4. Guaranteeing the other quality of service properties “as good as necessary” is a cost tradeoff;
5. Managed Evolution is only successful if it is consistently applied for extended times (decades) and if it is supported by committed top management.

If the IT system is evolved following another strategy—such as the opportunistic strategy in Fig. 5.21—at some point in time the system becomes nonviable. It then cannot support the business operations any longer and has to be replaced. Replacing a large, mission-critical IT system is a very high risk. A recent example (April 2018) of a (nearly) failed IT system replacement is the British bank TSB (https://www.ft.com/content/55582946-4856-11e8-8ae9-4b5ddcca99b3, last accessed 29.04.2018).
References [Abran10] Abran A (2010) Software metrics and software metrology. Wiley, Piscataway. ISBN 978-0-470-59720-0 [Ahern08] Ahern DM, Clouse A (2008) CMMI distilled—a practical introduction to integrated process improvement, 3rd edn. Addison-Wesley Professional, Upper Saddle River (SEI Series in Software Engineering). ISBN 978-0-321-46108-7 [Ambler11] Ambler SW, Sadalage PJ (2011) Refactoring databases—evolutionary database design. Addison Wesley, Upper Saddle River. ISBN 978-0-321-77451-4 [Anda08] Anda B, Dreiem H, Sjoberg DIK, Jorgensen M (2008) Estimating software development effort based on use cases—experiences from industry. http:// www.bfpug.com.br/Artigos/UCP/Anda-Estimating_SW_Dev_Effort_Based_ on_Use_Cases.pdf [Armour03] Armour PG (2003) The laws of software process—a new model for the production and management of software. Auerbach, Boca Raton. ISBN 978-0-849-31489-6 [Aroms12] Aroms E (2012) NIST special publication 800–55 rev1: security metrics guide for information technology systems. CreateSpace, Scotts Valley. ISBN 978-1-4701-5204-8 [Aviziensis04] Avizienis A, Laprie J-C, Randell B, Landwehr C (2004) Basic concepts and taxonomy of dependable and secure computing IEEE transactions on dependable and secure computing, Vol. 1, No. 1, January–March. https://www.nasa. gov/pdf/636745main_day_3-algirdas_avizienis.pdf. Accessed: 6. June 2017 [Bernstein05] Bernstein L, Yuhas CM (2005) Trustworthy systems through quantitative software engineering. Wiley Interscience, Hoboken (IEEE Book Series). ISBN 978-0-471-69691-9 [Bernstein15] Bernstein D (2015) Beyond legacy code – nine practices to extend the life (and value) of your software. O’Reilly UK Ltd, Dallas. ISBN 978-1-680-50079-0
[Boehm00] Boehm BW, Abts C, Brown AW, Chulani S, Clark BK, Horowitz E, Madachy R, Reifer D, Steece B (2000) Software cost estimation with COCOMO II. Prentice Hall PTR, New Jersey. ISBN 978-0-13-026692-2 [Brotby08] Brotby WK (2008) Information security management metrics—a definitive guide to effective security monitoring and measurement. Taylor & Francis, Boca Raton. ISBN 978-1-420-05285-5 [Brotby13] Brotby WK, Hinson G (2013) PRAGMATIC security metrics – applying metametrics to information security. Taylor & Francis, Boca Raton. ISBN 978-1-439-88152-1 [Buchmann13] Buchmann JA, Karatsiolis E, Wiesmaier A (2013) Introduction to public key infrastructures. Springer, Berlin. ISBN 978-3-642-40656-0 [Burchard17] Burchard E (2017) Refactoring JavaScript—turning bad code into good code. O’Reilly UK Ltd, Beijing. ISBN 978-1-491-96492-7 [Cusumano10] Cusumano MA (2010) Staying power—six enduring principles for managing strategy & innovation in an uncertain world. Oxford University Press, Oxford. ISBN 978-0-19-921896-7 [Chou13] Chou W (2013) Fast-tracking your career—soft skills for engineering and IT professionals. Wiley, Hoboken. ISBN 978-1-118-52178-6 [Doane17] Doane M (2017) Enterprise taxonomy governance—practical advice for building and maintaining your enterprise taxonomy. CreateSpace, Scotts Valley. ISBN 978-1-54637-377-3 [Eickhoff11] Eickhoff J (2011) Onboard computers, onboard software and satellite operations—an introduction. Springer, Berlin (Springer Aerospace Technology). ISBN 978-3-642-25169-6 [Ejiogu05] Ejiogu LO (2005) Software metrics—the discipline of software quality. Booksurge Publishing, Charleston. ISBN 978-1-4196-0242-9 [Eliot17] Eliot LB (2017) Advances in AI and autonomous vehicles: cybernetic selfdriving cars: practical advances in artificial intelligence (AI) and machine learning. LBE Press Publishing, South Carolina. ISBN 978-0-6929-1517-2 [Eusgeld08] Eusgeld I (2008) Dependability metrics—advanced lecture. 
Springer Lecture Notes in Computer Science, Berlin (GI-Dagstuhl Research Seminar, 2005). ISBN 978-3-540-68946-1 [Feathers19] Feathers M (2019) Brutal Refactoring – More Working Effectively with Legacy Code. Addison Wesley, Boston, MA, USA, ISBN 978-0-321-79320-1 [Feghhi98] Feghhi J, Feghhi J, Williams P (1998) Digital certificates—applied internet security. Addison-Wesley, Amsterdam. ISBN 978-0-201-30980-5 [Fenton15] Fenton N, Bieman J (2015) Software metrics—a rigorous and practical approach, 3rd edn. Chapman & Hall/CRC Innovations in Software Engineering and Software Development Series. CRC Press, Boca Raton. ISBN 978-1-439-83822-8 [Fowler99] Fowler M (1999) Refactoring—improving the design of existing code. Addison Wesley, Boston (Object Technology Series). ISBN 978-0-201-48567-7 [Freund14] Freund J, Jones J (2014) Measuring and managing information risk—a FAIR approach. Butterworth-Heinemann, Oxford. ISBN 978-0-124-20231-3 [Furrer15] Furrer FJ (2015) Zukunftsfähige Softwaresysteme—Zukunftsfähig trotz zunehmender SW-Abhängigkeit. Informatik Spektrum & Springer, Heidelberg. 30 June. https://doi.org/10.1007/s00287-015-0909-6, http://link.springer.com/article/10.1007/s00287-015-0909-6. Accessed: 31. Dec 2015 [Galin17] Galin D (2017) Software quality. Wiley & IEEE Computer Society, Murray Hill. ISBN 978-1-119-13449-7
[Garmus01] Garmus D, Herron D (2001) Function point analysis—measurement practices for successful software projects. Addison-Wesley, Boston. ISBN 978-0-201-69944-3 [Garmus10] Garmus D, Russac J, Edwards R (2010) Certified function point specialist examination guide. Routledge, Boca Raton. ISBN 978-1-4200-7637-0 [Genero05] Genero M, Piattini M, Calero C (eds) (2005) Metrics for software conceptual models. Imperial College Press, London. ISBN 978-1-8609-4497-0 [Gupta17] Gupta R (2017) Measurement of software quality factors using CK metrics. LAP LAMBERT Academic Publishing, Saarbrücken. ISBN 978-3-6598-9331-5 [Harris17] Harris M (2017) The business value of software. Productivity Press, Milton. ISBN 978-1-4987-8286-9 [Hayden10] Hayden L (2010) IT security metrics—a practical framework for measuring security and protecting data. McGraw-Hill, New York. ISBN 978-0-071-71340-5 [Herrmann07] Herrmann DS (2007) Complete guide to security and privacy metrics—measuring regulatory compliance, operational resilience, and ROI. Auerbach, Boca Raton. ISBN 978-0-8493-5402-1 [High14] High PA (2014) Implementing world class it strategy—how it can drive organizational innovation. Wiley, Bognor Regis. ISBN 978-1-118-63411-0 [Hopkinson16] Hopkinson M (2016) Net present value and risk modelling for projects, New edn. Advances in Project Management. Routledge, Abingdon. ISBN 978-1-4724-5796-7 [Hubbard16] Hubbard DW, Seiersen R (2016) How to measure anything in cybersecurity risk. Wiley, Hoboken. ISBN 978-1-119-08529-4 [Humble10] Humble J, Farley D (2010) Continuous delivery—reliable software releases through build, test, and deployment automation. Addison Wesley, Boston. ISBN 978-0-321-60191-9 [Janicak15] Janicak CA (2015) Safety metrics—tools and techniques for measuring safety performance, Revised edn. Bernan Print, Lanham. ISBN 978-1-5988-8754-9 [Jaquith07] Jaquith A (2007) Security metrics—replacing fear, uncertainty, and doubt. Addison-Wesley Professional, Upper Saddle River. 
ISBN 978-0-321-34998-9 [Jeffries15] Jeffries R (2015) The nature of software development—keep it simple, make it valuable, Build It piece by piece. The Pragmatic Bookshelf, Dallas. ISBN 978-1-94122-237-9 [Jones17] Jones C (2017) A guide to selecting software measures and metrics. Taylor & Francis, Boca Raton. ISBN 978-1-138-03307-8 [Kan02] Kan SH (2002) Metrics and models in software engineering, 2nd edn. Addison-Wesley Longman, Amsterdam. ISBN 978-0-201-72915-3 [Kerievsky04] Kerievsky J (2004) Refactoring to patterns. Addison Wesley, Boston. ISBN 978-0-321-21335-8 [Kleppmann17] Kleppmann M (2017) Designing data-intensive applications—the big ideas behind reliable, scalable, and maintainable systems, Revised edn. O’Reilly UK Ltd, Sebastopol. ISBN 978-1-449-37332-0 [Kochs18] Kochs H-D (2018) System dependability evaluation including s-dependency and uncertainty—model-driven dependability analyses. Springer, Cham. ISBN 978-3-319-64990-0 [Kruchten03] Kruchten P (2003) The rational unified process—an introduction, 3rd edn. Addison-Wesley Professional, Upper Saddle River. ISBN 978-0-321-19770-2
[Kuhrmann16] Kuhrmann M, Münch J, Richardson I, Rausch A, Zang H (eds) (2016) Managing software process evolution: traditional, agile and beyond—how to handle process change. Springer, Cham. ISBN 978-3-319-31543-0 [Laprie13] Laprie J-C (ed) (2013) Dependability: basic concepts and terminology: in English, French, German, Italian and Japanese. Springer, Berlin (Softcover reprint of the original 1st edition 1992). ISBN 978-3-709-19172-9 [Lines12] Lines MW, Ambler S (2012) Disciplined agile delivery—a practitioner’s guide to agile software delivery in the enterprise. Prentice Hall Inc. & IBM Press, Upper Saddle Rive. ISBN 978-0-132-81013-5 [Mateski17] Mateski M, Trevino CM, Veitsch CK, Harris M, Maruoka S, Frye J (2017) Cyber Threat Metrics. CreateSpace, Scotts Valley. ISBN 978-1-5424-7775-8 [Mathew14] Mathew RG, Bandura A (2014) Progressive function point analysis—advanced estimation techniques for IT projects. CreateSpace, Scotts Valley. ISBN 978-1-5023-5416-7 [Mausberg16] Mausberg F (2016) Aufwandsschätzung mit Use Case Points—Manipulation durch Subjektivität. Grin Publishing, München. ISBN 978-3-6683-3784-8 [McConnell06] McConnell S (2006) Software estimation—demystifying the black art. Microsoft Press, New York. ISBN 978-0-735-60535-0 [Microsoft11] Microsoft Official Academic Course (2011) Software development fundamentals—exam 98–361. Microsoft Official Academic Co, Hoboken. ISBN 978-0-470-88911-4 [Miller98] Howard Wilbert Miller (1998) Reengineering legacy software systems. Butterworth-Heinemann & Digital Press, Woburn. ISBN 978-1-55558-195-1 [Münch14] Münch J, Armbrust O, Kowalczyk M, Soto M (2014) Software process definition and management. Springer, Berlin (The Fraunhofer IESE Series on Software and Systems Engineering). ISBN 978-3-642-42842-5 [Murer11] Murer S, Bonati B, Furrer FJ (2011) Managed evolution—a strategy for very large information systems. Springer, Berlin. 
ISBN 978-3-642-01632-5 [NISO05] ANSI/NISO Z39-19-2005 (R2010) Guidelines for the construction, format, and management of monolingual controlled vocabularies. ISBN 1-88012465-3. http://www.niso.org/apps/group_public/download.php/12591/z39-192005r2010.pdf. Accessed: 5. July 2017 [Nygard16] Nygard M (2017) Release it!—design and deploy production-ready software, 2nd edn. O’Reilly UK Ltd, Raleigh. ISBN 978-1-680-50239-8 [Oo11] Oo T, Oo AK (2011) Analyzing object-oriented systems with software quality metrics—an empirical study for software maintainability. LAP LAMBERT Academic Publishing, Saarbrücken. ISBN 978-3-8433-7748-5 [Oram10] Oram A (2010) Making software—what really works, and why we believe it. O’Reilly and Associates, Sebastopol. ISBN 978-0-596-80832-7 [Peppard16] Peppard J, Ward J (2016) The strategic management of information systems—building a digital strategy, 4th edn. Wiley, Chichester. ISBN 978-0-470-03467-5 [Redmond13] Redmond-Neal A (2013) Starting a taxonomy project—taxonomy basics SLA annual conference, June 9. https://www.sla.org/wp-content/uploads/2013/05/ StartingTaxProject_Redmond-Neal.pdf. Accessed: 5. July 2017 [Rodin00] Rodin R (2000) Free, perfect, and now—connecting to the three insatiable customer demands: a CEO’s true story, revised edn. Free Press, New York. ISBN 978-0-6848-6312-2
[Ross06] Ross JW, Weill P, Robertson DC (2006) Enterprise architecture as strategy— creating a foundation for business execution. Harvard Business Review Press, Boston. ISBN 978-1-5913-9839-4 [Schoen18] Schön H, Furrer FJ (2018) Gute Softwarearchitektur ist Business Value—Ein Ansatz zur Bewertung von SW-Architektur. Informatik Spektrum & Springer, Heidelberg. Vol. 41, Nr. 4, p 240–249. http://link.springer.com/article/10.1007/ s00287-018-1108-z [Schweikard15] Schweikard A, Ernst F (2015) Medical robotics. Springer, New York. ISBN 978-3-319-22890-7 [SCN01] SCN Education B.V. (ed) (2001) Electronic banking—the ultimate guide to business and technology of online banking. Vieweg Verlagsgesellschaft, Wiesbaden. ISBN 978-3-528-05754-1 [Seacord03] Seacord RC, Plakosh D, Lewis GA (2003) Modernizing legacy systems—software technologies, engineering processes, and business practices. Addison Wesley, Boston. ISBN 978-0-321-11884-7 [Shuja07] Shuja AK, Krebs J (2007) IBM rational unified process reference and certification guide—solution designer. Addison Wesley Publishing Inc, Upper Saddle River. ISBN 978-0-131-56292-9 [Simsion05] Simsion GC (2005) Data modeling essentials, 3rd edn. Morgan Kaufmann, Amsterdam. ISBN 978-0-12-644551-0 [Stewart11] Stewart DL (2011) Building enterprise taxonomies. Mokita Press, Lexington. ISBN 978-0-5780-7822-9 [Thayer12a] Thayer RH, Dorfman M (2012) Software engineering essentials, volume 1: the development process, 4th edn. Software Management Training Press, Carmichael. ISBN 978-0-9852-7070-4 [Thayer12b] Thayer RH, Dorfman M (2012) Software engineering essentials, volume 2: the supporting processes—a detailed guide to the IEEE SWEBOK and the IEEE CSDP/CSDA exam, 4th edn. Software Management Training Press, Carmichael. ISBN 978-0-9852-7071-1 [Thayer12c] Thayer RH, Dorfman M (2012) Software engineering essentials, volume 3: the engineering fundamentals, 4th edn. Software Management Training Press, Carmichael. 
ISBN 978-0-9852-7072-8 [Ulrich02] Ulrich WM (2002) Legacy Systems Transformation Strategies. Prentice Hall, Upper Saddle River. ISBN 978-0-13-044927-X [VanRenssen14] van Renssen A (2014) Semantic information modeling in formalized languages. www.gellish.net. ISBN 978-1-304-51359-5 [Weill04] Weill P, Ross JW (2004) IT Governance. Harvard Business School Press, Boston. ISBN 978-1-59139-253-8 [Wong00] Wong C (2000) Security metrics—a beginner’s guide. Osborne Publisher, New York. ISBN 978-0-071-74400-3 [Wong18] Wong W (2018) The risk management of safety and dependability—a guide for directors, managers and engineers. Woodhead Publishing, Oxford. ISBN 978-0-0810-1439-4 [Young10] Young C (2010) Metrics and methods for security risk management. Syngress, Burlington. ISBN 978-1-8561-7978-2
6 Architecture

Abstract
The central question in modern systems engineering is without doubt: “Which mechanisms, methods, and processes are required to successfully manage complexity, change, and uncertainty?” Long and proven experience has shown that the underlying structure, i.e., the systems architecture, determines most of the properties of a complex system! An adequate, well-maintained, and strictly enforced systems architecture during system generation, evolution, and maintenance is the key success factor for the value of long-lived, dependable, trustworthy, and economically viable software-systems. Fortunately, systems and software architecture are becoming more and more a true engineering discipline with accepted principles, patterns, processes, and models. Gone are the days when architecture was a “black art” mastered by only a few professionals. This chapter introduces the key concepts of software architecture.
6.1 Architecture Definition

The central theme of this book is architecture. A considerable number of architecture definitions exist; we have chosen the IEEE [IEEE42010] definition of software and systems architecture:

Definition 6.1: Architecture
The fundamental organization of a system embodied in its parts, their relationships to each other and to the environment, their properties, and the principles guiding its design and evolution. (IEEE 42010, 2011: IEEE Recommended Practice for Architectural Description of Software-Intensive Systems)
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019 F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8_6
The key concepts of an architecture are the parts and their relationships within a system, including their properties. There is nearly limitless freedom both in the delineation and in the definition of parts and relationships—and even more so in their conceptual specifications. In addition, there are uncountable ways to implement them!

Quote: “Because architecture design is declarative, it is deceptively easy to do. However, it is difficult to get it right, difficult to do well, and dreadfully easy to make mistakes.” (Anthony J. Lattanze, 2009)
The architect, therefore, first has to delineate and define the parts within the system. Then he has to determine which relationships between the parts and the system’s environment are needed. He has to do that both when a system is initially constructed and during each evolution cycle. Fortunately, there are proven principles to guide the architect in his work. These will be presented in Part 2. Future-proof software-systems are mainly based on software architecture (e.g., [Lattanze09, Bass13, Gorton06, Hohmann03]). In many cases, however, the execution environment and the operating environment also have to be considered: the execution environment consists of all the hardware, systems and communications software, monitoring, backup, business continuity elements, etc. The operating environment encompasses all the system-external parts, such as partners, stock exchanges, government organizations, etc. In such cases, the more general term systems engineering is used (see: Systems Engineering and Software Engineering above and, e.g., [Kossiakoff11, Wasson15, Weilkiens16, Douglass16]). Architecture has to describe both the structure and the behavior of a software-system, as well as its evolution over time.
6.1.1 Structure

Structure is the static composition of a system. Structure is changed by evolution, i.e., by executing projects that modify the system. Structure is visualized in Fig. 6.1.

Definition 6.2: Structure
The structure of a software-system identifies and describes all parts within the system, their relationships to each other, the relationships to the system’s operating environment, and the properties of the parts and the relationships. Structures consist of elements, relations among the elements, and the important properties of both.
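The definition of structure — parts, relations, and the properties of both — can be made concrete as a tiny data model. Everything below (the class names, the `"environment"` sentinel for external relations, the example part names) is an illustrative assumption, not notation from the book:

```python
# Minimal sketch of Definition 6.2: a structure consists of parts,
# relations among the parts (and to the environment), and properties of both.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    properties: dict = field(default_factory=dict)   # e.g., quality attributes

@dataclass
class Relation:
    source: str
    target: str          # a part name, or "environment" for external relations
    properties: dict = field(default_factory=dict)   # e.g., link speed, constraints

@dataclass
class Structure:
    parts: list
    relations: list

    def external_relations(self) -> list:
        """Relations from the system to its operating environment."""
        return [r for r in self.relations if r.target == "environment"]
```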
6.1.2 Behavior

When a system is stimulated by an input, the input data is processed by the functionality of the system, and the result is returned as an output (Fig. 6.1). The relationship between input and output is behavior. Note that behavior also includes the actions in case of errors, malfunctions, and failures.
Fig. 6.1 Structure and behavior of a software-system: parts with internal relationships, external relationships to the operating environment, input, and output
Definition 6.3: Behavior
Behavior of a software-system is the reaction of the system (= output) after receiving an input. Behavior includes functionality, timing, and error/failure handling.
6.1.3 Properties

Both the parts and the relationships have properties. “Properties” in the definition is taken widely, i.e., properties can be quality attributes (such as safety or security attributes), technical properties (such as the speed of a link), or constraints (such as access limitations).
6.1.4 Levels of Architecture

Architecture is hierarchical: it follows the natural order of composition of systems, as shown in Fig. 6.2. The system is composed of smaller units forming larger units (= bottom-up). Here, we focus on the application architecture layer of Fig. 3.8. For each level within the hierarchy, a specific architecture exists (Fig. 6.2): the parts, the relationships, and the properties of the architectural elements are different for each level. Note that Fig. 6.2 shows a very important, additional architecture: the conceptual architecture. The conceptual architecture is an overarching architecture and contains all the definitions, terminology, concepts, evolution principles, etc. to assure the conceptual integrity of the complete system. The conceptual architecture should also include the meta-models for the hierarchical models on each level. An example of a practical architecture hierarchy is given in Example 6.1.
Fig. 6.2 Architecture hierarchy: the conceptual architecture spans all levels; System-of-Systems, Application Landscape, Application, Component, and Sensor/Actuator each have their own architecture, linked by composition from the bottom up, with the interface architecture mediating the interaction with the physical world
Example 6.1: Architecture Hierarchy in a Car
As an example, Fig. 6.3 shows the application architecture hierarchy in a modern car (see, e.g., [Staron17]). The “visible” parts are the high-level functionalities, such as electronic stability control (ESC), Pedestrian Alert, Brake Control, Motor Control, etc. They form the individual applications in the architecture hierarchy. These high-level functionalities communicate and interact with each other to constitute the full (electronic) functionality of the car—this corresponds to the application landscape. Finally, the car operates in its environment, i.e., the road infrastructure and the traffic—which embeds the car as a constituent system in a large system-of-systems. Figure 6.3 is a good implementation of the strong architecture principle “Layering” (see: ρ-Architecture Principle #1: Architecture Layer Isolation). A hierarchical layer structure with encapsulated layers, strongly separated by formal means—usually contract-based services—greatly reduces the complexity of large systems.
6.1.5 Parts, Relationships, and Models in the Application Architecture Hierarchy Table 6.1 shows the parts, relationships, and (some) models in the application architecture hierarchy. For each level of the hierarchy, the constituent parts and relationships are different. Therefore, the tools and models also vary.
Fig. 6.3 Application architecture hierarchy in a car
Table 6.1 Parts, relationships, and models in the architecture hierarchy

Architecture hierarchy level | Parts | Relationships | Models
Conceptual architecture | Concepts, Terminology, Models, Principles, Patterns | Semantic connections between the parts | Domain Model, Domain Ontology
Systems-of-Systems | Application Landscapes of different Organizations | Service Contracts | Formal Service Contract Models and Languages
Application Landscape | Applications | Interface Contracts (the difference between interface contracts and service contracts will be explained later) | Formal Interface Contract Models and Languages
Application | Components and Data | Various Access Mechanisms to Functionality and Data | Component Composition Models, Data Base Schemas
Component | Program Modules and Data Base Segments | Programming Constructs for Interactions and Access | Design Models
Interface | Software Drivers for Sensors and Actuators | Various Techniques for Data Fusion | Models of the physical Sensors and Actuators
6.2 Key Importance of Architecture Quote: “Although software has no mass, it does have weight, weight that can ossify any system by creating inertia to change and introducing crushing complexity.” (Grady Booch)
A long history of software engineering has clearly shown that architecture is the single most important success factor for software projects. The following correlation is almost a platitude: • Bad architecture ⇒ bad software, failing projects; • Good architecture ⇒ good software, successful projects. The larger and more complex systems become, the more important architecture gets. Architecture is a powerful means to manage complexity, absorb change, and deal with uncertainty. It does so by building and maintaining a sound foundation for the construction and integration of software. Quote: “If you think good architecture is expensive, try bad architecture.” (Brian Foote, Joseph Yoder)
Architecture is not static: It is initially defined when a software-system is born and is continuously adapted and changed with each modification done to the system during its life cycle. The architecture of a software-system, therefore, needs continuous attention and care in order to avoid architecture erosion. This is the reason for the high importance architecture has for future-proof software-systems—and also the reason for the existence of this book.
6.2.1 Impact of Architecture Software architecture is the undisputed, reliable foundation for: • Taming complexity (see Complexity): Continuously growing complexity is an unpleasant fact of modern software-systems. At the same time, complexity is the cause of many of the difficulties in developing, maintaining, and operating software-systems. Part of the complexity—the essential complexity—cannot be reduced; it can only be managed. The other part—accidental complexity—can be mostly eliminated through good architecture and careful design. Taming complexity thus means structuring the system in such a way that the impact of complexity on understanding, evolving, maintaining, and operating the system is minimized.
• Managing change (see Change): The perpetual pressure for change requires continuous modifications of the software-systems. In larger systems, many projects modify the software in parallel. If the software-system does not have a high, sustainable changeability, the impact of weak architecture results in higher cost and longer time-to-market for the changes—and may finally make the system commercially unviable. Again, correct architecting maintains or improves changeability (see Managed Evolution Strategy) and thus assures the long-term competitiveness of the software-system. • Coping with uncertainty (see Uncertainty): We face uncertainty in all phases of the software-system life cycle: Future requirements are unpredictable, specifications may be imprecise, the environment is not fully known and changes dynamically, the design may be incomplete, tools may inhibit some desired actions, etc. Coping with uncertainty means making intelligent decisions by limiting the impact of uncertainty to containment regions, which best absorb uncertainty. An adequate underlying architecture greatly simplifies such decisions. Architectural choices and decisions, therefore, have a powerful and long-lasting impact on the software-system, both in successfully coping with the three devils of systems engineering and on all of the system’s quality properties. Modern studies demonstrate that the success of companies and organizations depends strongly on their high-quality IT systems [High09, High14].
6.2.2 Longevity of Architecture Software architecture is very long-lived: It is conceived when the software-system is initially designed, lives through all the modifications during the life cycle, and impacts development and operations at all times. Even an excellent architecture is in danger of eroding and deteriorating over time. Architecture erosion is slow and gradual: It is often not felt immediately, but its effects impair the evolution of the software-system. Therefore, the architecture of the software-system must be treated as a highly valuable asset of the company and should be maintained very carefully. In fact, the defense of the architectural integrity of the whole system is key to the sustainability and commercial viability of the system! Maintaining—or even improving—the architectural integrity is not so much a technical task as a central management responsibility. The company must, therefore, establish a suitable architecture organization (headed by an IT chief architect), populated by excellent people, providing strong architecture-centric development processes, and establishing an architecture-aware company culture [Murer11]. Quote: “Good architects don’t want to be in a place where architecture is seen as a form of corporate entertainment.” (Gregor Hohpe, 2016)
6.3 Legacy Systems Most of today’s software-systems contain parts of considerable age. Some commercial systems have parts which are older than 30 years and are still in heavy use. Such parts were built when the software world was different: The systems engineering process was different, now-vanished programming languages were used, the architecture paradigms were markedly different, and the development and runtime infrastructure of that time is now obsolete. Such software-systems are called legacy systems (see: Legacy Software Modernization/Migration). Definition 6.4: Legacy System A legacy system is an obsolete computer system which is still in use, because its data cannot be changed to newer or standard formats, its application programs cannot be upgraded, or its development/execution platform cannot be changed. Note: “cannot” = not with a reasonable effort (money, time, and people) or an acceptable risk. Legacy systems have a number of bad properties:
• Very low changeability (= high resistance to change),
• Weak resilience,
• Eroded architecture,
• Badly or not documented,
• Obsolete technology (hardware and software),
• Large technical debt,
• Lost knowledge (people left),
• Difficult integration context.
However, they also have some good properties:
• Invaluable implicit knowledge of the domain and the business processes,
• Stable operation (mature),
• Good solutions/algorithms,
• Often: surprisingly good code.
Quote: “Every day, we lose time, money, and opportunities because of legacy code.” (David Scott Bernstein, 2015)
Most of today’s legacy systems are under pressure to be modernized. The reasons are a high resistance to change for new business requirements (low changeability), weak dependability (susceptibility to attacks, faults, …), technology pressure, new architecture paradigms, and a serious knowledge shortage (because the people who built the systems are no longer available, and the systems may be badly documented—the author encountered operational legacy systems where even the source code was lost). Modernizing legacy systems is a specific engineering discipline [Bernstein15, Brodie95, Feathers07, Heuvel07, Miller98, Seacord03, Ulrich02, Warren99]. Basically, two choices exist for legacy system modernization:
• Migrate: Gradual, stepwise rearchitecting, reengineering, and refactoring of the system;
• Replace: Rebuild from scratch (or use third-party software) to replace the system or parts of the system.
Both ways carry a significant amount of risk and must be carefully planned. Some of the concerns useful in deciding which method to use are given in Table 6.2: Highest on the list of decision criteria is the risk of operational failures, faults, and unavailability. While modernizing a legacy system, the two software-systems—the legacy system and the replacement—must run in parallel in order to allow comparison and fallback.

Table 6.2 Risk evaluation for legacy system modernization

Concern | Topics
Operational risk (ongoing operation) | Minimizing the risk of operational failures
Fit-for-future | Technical debt?
Cost/time | Total effort
Additional constraints | e.g., certification

The Managed Evolution strategy attempts continuous, gradual rearchitecting, reengineering, and refactoring of legacy system parts whenever they are touched by a new development (Fig. 5.6). This is possible in many cases (e.g., refactoring part of a database), but not if larger functional units have to be modernized. In the latter case, specific migration/replacement projects have to be executed (see: Legacy Software Modernization/Migration).
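The parallel run of legacy system and replacement can be sketched as a simple shadow-comparison harness (a hypothetical illustration; the routing and comparison logic are assumptions, not from the book): every request is served by the legacy system, the replacement runs in the shadow, and mismatches are logged for analysis, with the legacy answer kept as the fallback.

```python
def shadow_run(request, legacy, replacement, mismatch_log):
    """Serve from the legacy system while comparing against the replacement.

    The legacy result stays authoritative (fallback), so a faulty or even
    crashing replacement can never harm ongoing operation."""
    legacy_result = legacy(request)
    try:
        new_result = replacement(request)
        if new_result != legacy_result:
            mismatch_log.append((request, legacy_result, new_result))
    except Exception as err:                  # replacement failure is tolerated
        mismatch_log.append((request, legacy_result, repr(err)))
    return legacy_result                      # comparison and fallback

# Hypothetical usage: the replacement still disagrees for one input
legacy = lambda x: x * 2
replacement = lambda x: x * 2 if x < 10 else x * 3   # faulty for large inputs
log = []
results = [shadow_run(x, legacy, replacement, log) for x in (1, 5, 12)]
print(results)   # → [2, 10, 24]  (always the legacy answers)
print(len(log))  # → 1  (one mismatch recorded, for x = 12)
```

The mismatch log then drives the decision when the replacement is trustworthy enough to take over.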
6.4 Architecture Integration Challenge By far the largest number of software developments concerns the extension of an existing system; “greenfield developments” are quite rare. The newly developed software, therefore, has to be integrated into the existing system, possibly also cooperating with systems outside of the organization itself.
Application integration must be carefully planned and designed: All software artifacts generated by the new or extended application—architecture, design decisions, data structures and semantics, models, etc.—must be consistently integrated, and care must be taken not to damage the existing architecture. The integration aspects, therefore, form an essential part of the development process. Note that integration aspects can even limit the possibilities and opportunities expected from the new developments. Integration, as shown in Fig. 6.4, means that new parts, modified parts, and deleted parts must be merged into the existing system. Integration can become difficult if legacy systems are involved: Their structure, partitioning, and technical infrastructure may require some modernization of the legacy parts involved. Successful integration—preserving or improving the architecture, modernizing the affected parts of the legacy system, and adapting the runtime infrastructure—requires specific skills, knowledge, and techniques. Because of the longevity of IT systems, this should be a continuous, managed process in the organization [Hammer07a, Halevy12, Duvall07, Ghosh16, Paulheim11, Microsoft04, Allen15, Brackett12]. During integration, the Managed Evolution strategy must also be applied: Whenever an interaction with existing software is generated, this existing part should be improved. A simple example is shown in Example 6.2.
Fig. 6.4 Integration challenge (green: new, yellow: modified, red: deleted; functionality and data elements connected by internal and external dependencies (relationships))
Example 6.2: Database Extension
For a new business requirement, an existing database had to be extended by a number of new data fields. The existing database was a DB2 implementation [Mullins12] and had not been formally modeled at the time of its creation. The addition of the new fields was a small task; however, following the Managed Evolution strategy, the following additional work was executed (see Fig. 6.5):
Quote: “The data model is a relatively small part of the total systems specification but has a high impact on the quality and useful life of the system.” (Graeme C. Simsion, 2005)
a) Formally modeling all the affected database segments, including the legacy part [Simsion05a,b]; b) Checking the elements of the modified database segments for redundancy against the databases of the whole system (the redundancy found could not be eliminated because of the many applications accessing the database segment—but at least the redundancy was known and documented); c) Assessing the access security of the database segments, including the audit processes [Nathan05]. Here, some weaknesses were detected and rectified. As a result, not only was the integration successful, but the existing system had been improved and additional documented insight into the legacy system had been gained.
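Step (b) of the example, the redundancy check, can be sketched as a simple scan across the schemas of the system (a hypothetical illustration; the schema representation and the field names are assumptions, not from the book): every field of the modified segment is compared against the fields of all other databases, and matches are reported as candidate redundancies for documentation.

```python
def find_redundant_fields(modified_segment, system_schemas):
    """Report fields of the modified segment that also appear elsewhere.

    system_schemas maps database/segment names to their field sets; matches
    are candidate redundancies, to be documented even if not eliminated."""
    candidates = {}
    for field in modified_segment["fields"]:
        hits = [name for name, fields in system_schemas.items()
                if name != modified_segment["name"] and field in fields]
        if hits:
            candidates[field] = hits
    return candidates

# Hypothetical schemas of the surrounding system
schemas = {
    "customer_core": {"customer_id", "name", "birth_date"},
    "marketing": {"customer_id", "segment_code"},
}
extended = {"name": "customer_core_ext", "fields": ["customer_id", "risk_class"]}
print(find_redundant_fields(extended, schemas))
# → {'customer_id': ['customer_core', 'marketing']}
```

In practice such a check would run against the data dictionary or the formal models produced in step (a).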
Fig. 6.5 Additional work done during integration (managed evolution): the DB2 extension of the existing (legacy) database segments, with formal model, redundancy check, and access security audit and improvement
6.5 Evolutionary Architecture It has been mentioned several times in this book that software architecture is not static, but must be adaptive. New requirements—from business, from the operating environment, from technology evolution, from legal and regulatory changes, etc.—force relentless changes in the software-system. Newer developments, such as Continuous Delivery [Humble10, Weller17] and DevOps [Bass15, Kim16], have accelerated this trend by shortening the development/deployment cycle. Therefore, the software architecture must evolve in sync with the functional modifications of the system, i.e., we must build and maintain evolutionary architectures [Hohpe16, Ford17, Bell16, Erder16], sometimes also called continuous, incremental, or reactive architectures. Figure 6.6 shows the process for evolutionary architecture: • New requirements drive the development of new functionality; • The architecture is continuously evolved in sync with the new functional landscape (adapted structure and behavior of the system); • The evolved architecture preserves or improves the desired quality properties (safety, security, integrity, performance, …) of the system. In order to preserve or improve the software architecture during the evolution cycle, some suggestions are given in Table 6.3.
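Table 6.3 references a fitness function as a means to guard quality properties during evolution. A minimal sketch of such an architectural fitness function, here for a layering rule in the spirit of ρ-Architecture Principle #1, could look as follows (the module names, layer numbering, and the rule itself are illustrative assumptions): the check runs with every build and reports violations introduced by a change.

```python
# Hypothetical layering rule: a module may only depend on modules
# in the same or a lower layer (cf. Architecture Layer Isolation).
LAYERS = {"ui": 3, "application": 2, "domain": 1, "infrastructure": 0}

def layering_fitness(dependencies):
    """Architectural fitness function: return all layer-rule violations.

    dependencies is a list of (from_module, to_module) pairs, extracted,
    e.g., from import statements; an empty result means the property holds."""
    violations = []
    for src, dst in dependencies:
        if LAYERS[dst] > LAYERS[src]:          # upward dependency: forbidden
            violations.append((src, dst))
    return violations

# Evolving the system introduces one illegal upward dependency
deps = [("ui", "application"), ("application", "domain"),
        ("domain", "ui")]                      # domain must not depend on ui
print(layering_fitness(deps))                  # → [('domain', 'ui')]
```

Run automatically in the build pipeline, such checks keep the architecture in sync with the functional evolution instead of relying on manual reviews alone.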
Fig. 6.6 Evolutionary architecture (new requirements drive the functionality, the evolutionary architecture, and the quality properties)
Table 6.3 Some suggestions for evolutionary architecture

Phase | Architecture concerns | Question | Reference
Requirements | Conceptual integrity | Are all new concepts, terminology, and attributes precisely defined and consistent with the existing conceptual architecture model? | Ax: Conceptual integrity
Requirements | Domain match | Are all new concepts defined in the domain model? Are all used exactly as defined in the domain model? | Domain model; Ax: Formal modeling
Requirements | Orthogonality | Are the functional requirements and the quality property requirements clearly separated? | …
Requirements | Simplification | Are the requirements reduced to the absolutely necessary? Is accidental complexity removed? | Simplification
Requirements | Redundancy | Is unmanaged redundant functionality or data introduced into the system? | Ax: Redundancy
Specifications | Precision | Are the specifications precise, complete, consistent, and as formal as possible? | …
Specifications | Partitioning | Are the new functionality and data assigned to the correct partitions? | Architecture framework (…)
Specifications | Redundancy | Is all the redundancy introduced into the system necessary and, if yes, managed? | Ax: Redundancy
Specifications | Managed evolution | Are the parts and relationships to be rearchitected, refactored, or reengineered unambiguously delineated and specified? | Managed evolution
Specifications | Legacy system | Is the impact on changeability and dependability understood? Is it sufficiently positive? | Managed evolution
Specifications | Industry standards | Are all relevant industry standards respected? Is none violated by client-specific enhancements? | Ax: Industry standards
Specifications | Horizontal architecture principles and patterns | Are all … | …
Specifications | Vertical architecture principles and patterns | Are all … | …
Implementation | Quality properties | Has the impact on all quality properties been assessed and is it acceptable? | Fitness function
Implementation | Legacy system | Is the impact on the legacy system fully evaluated, understood, and risk analyzed? Is the best integration approach used? | Legacy system
Implementation | Architecture choices | Are all architecture choices well documented and justified? Also changes to the legacy system? | Architecture evaluation
Implementation | Parallel projects | Are projects running in parallel and modifying the same parts of the system sufficiently coordinated? | Process
Implementation | Process | Is the development process strong enough to validate and verify all design decisions? | Reviews, code checkers, …
6.6 Architecture Evaluation When is an architecture or an architecture extension good? When is it adequate? When is it tolerable? When is it unacceptable? These questions can (at least partly) be answered by an architecture evaluation. The architecture evaluation is executed as soon as the architecture has been designed and documented, but before the implementation starts. Definition 6.5: Architecture Evaluation Architecture evaluation is a process (mainly based on reviews) to assess the suitability of a planned architecture or extension, in order to: a) Assess the adequacy of the architecture for the system which it should support, b) Choose the optimum architecture in case of two or more competing architectures, c) Assure that the architecture will meet the quality properties of the planned system, d) Guarantee that the legacy system architecture will be maintained or improved, i.e., is never damaged. (Adapted from Paul Clements, 2005) The most important participant in the architecture evaluation is the evaluation team: an experienced group of highly knowledgeable people with deep knowledge of the organization’s IT systems. In many cases, additional groups, such as stakeholders or specialists in disciplines like security and safety, are called into the architecture evaluation activity. A number of methods for architecture evaluation exist [Clements02, Knodel16, Zayaraz10]. Quote: “Usually, any reasonable architecture will support the desired functionality, but only a carefully chosen architecture will enable the desired qualities.” (George Fairbanks, 2010)
Architecture evaluation must be an early, established, and accepted step in the software or system development process of an organization. It must be executed with care and diligence, because architectural mistakes which are not detected at an early stage may have grave consequences—they may impair the viability of the emerging software, and they are very costly to remedy later.
6.7 How much Architecture? Architecture is expensive. It needs excellent people, strong processes, proven knowledge, and unwavering management support. Most architecting must be done up-front, i.e., at the beginning of a software project: This activity requires time and money—and
Fig. 6.7 Architecture effort (architecture effort as a share of the total project or product effort, rising from about 5 % for low-risk to a substantial share for high-risk projects or products)
therefore could be seen as a delay and unnecessary cost. Thus, a legitimate question is: “How much architecture is necessary?” Obviously, the answer depends on the size of the project or product to be implemented and also on the size and complexity of the existing system into which it must be integrated. One proven approach to estimating the necessary up-front architecture effort is to look at it from a risk perspective [Fairbanks10, Ford17]. This approach is shown in Fig. 6.7: Projects or products with low risk need little architecture effort, whereas projects or products with high risk need considerable architecture effort. The risk in this context includes:
• The potential damage which could be generated for the company (liability risk, reputation damage, competitive disadvantage, legal or regulatory consequences, product recall probability, etc.);
• The risks to the existing system into which the new functionality must be integrated (architecture erosion, redundancy generation, far-effects, cannibalization, stability and other quality-of-service attributes, accidental complexity, etc.);
• The risks to the conceptual integrity of the total system (by introducing new, badly defined, or incompatible concepts, by not respecting the domain models, by duplicating existing functionality or data, by breaking partitions, etc.).
The first step when starting a software development is to assess the above risks associated with the new project or product. The risks must be made explicit, i.e., they must
be clearly listed with all their possible consequences. Furthermore, the locations in the existing system where the risks have an impact must be identified. Based on this risk assessment, the architecture team can formulate the architecture requirements, and the system architecture can be suitably evolved. An invaluable help in this assessment are models, especially precise, complete, and consistent domain and business object models: Fitting the model of the new parts into the model of the existing system provides great insight and confidence.
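The risk-to-effort relationship of Fig. 6.7 can be turned into a rough scoring aid (a purely hypothetical sketch: the three risk categories follow the list above, but the 0–10 scores, the equal weighting, and the 5–40 % effort range are invented assumptions, with only the 5 % low-risk anchor taken from the figure):

```python
def architecture_effort_share(company_risk, system_risk, integrity_risk):
    """Map three risk scores (each 0..10) to a suggested share of the total
    project effort to spend on up-front architecture (cf. Fig. 6.7).

    The 5%..40% range and the equal weighting are illustrative assumptions."""
    total = company_risk + system_risk + integrity_risk   # 0..30
    share = 0.05 + (total / 30) * 0.35                    # 5% .. 40% of total effort
    return round(share, 3)

print(architecture_effort_share(1, 1, 1))   # low risk  → 0.085
print(architecture_effort_share(9, 8, 9))   # high risk → 0.353
```

Such a mapping is only a starting point for discussion; the explicit risk list and its consequences remain the real input to sizing the architecture effort.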
6.8 Architecture Tools Modern, large-system architecture is not possible without architecture tools. The most important are listed here with references for additional information (a detailed description would exceed the size of this book): • Architecture principles and architecture patterns: Architecture principles are the fundamental carriers of architecture knowledge. They are precisely formulated, can be checked against architecture designs, and are enforceable. Architecture principles are the soul of this book. Architecture patterns are proven, generic solutions for specific scopes which can be adapted to actual solutions. The rest of this book is focused on architecture principles. • Models: Models are abstractions of the real world. They zoom in on specific structural and behavioral properties of software-system parts and represent them in a syntactically and semantically rich language, often with a graphical representation. Modeling is the topic of the architecture principle ρ-Architecture Principle #11: Formal Modeling. • Industry standards: Standards are precise specifications for various areas of technology. Their primary use is to enable interoperability, validation and certification, security and safety, and other system properties. A large body of different standards, issued and maintained by many standardization bodies, exists. Industry standards are the topic of the architecture principles ρ-Architecture Principle #5: Interoperability and ρ-Architecture Principle #9: Industry Standards. • Views: Architecture models for larger systems soon become too large to be understood, presented, and discussed. This fact is called “model explosion”. Fortunately, two possibilities exist to make the models more readable: Views partition a model according to stakeholder concerns. The model is represented as the full set of elements, relationships, and properties; a view extracts (only for presentation) the concerns of a specific stakeholder, such as the security or safety engineer. • Model refinement: Models of complex systems must be constructed hierarchically. They start from a top-level model and are refined into lower-level submodels.
Formal modeling of structure, behavior, and quality properties of a system is a strong help in avoiding mistakes during the whole life cycle of the system. Formal modeling is described in ρ-Architecture Principle #11: Formal Modeling. • Reference Architectures: A reference architecture provides a template solution for an architecture in a particular application domain, such as automotive, aerospace, etc. The reference architecture is a strong interoperability agent to technically synchronize the work products of different suppliers. See ρ-Architecture Principle #7: Reference Architectures, Frameworks and Patterns. • Architecture Frameworks: An architecture framework establishes a common practice for creating, interpreting, analyzing, and using architecture descriptions within a particular application domain [IEEE42010]. Architecture frameworks will be explained in ρ-Architecture Principle #7: Reference Architectures, Frameworks and Patterns. • Architecture Description Languages: Architecture description languages (ADLs) are languages for the representation of the structural software model of a system. They have graphical and textual notations. Their advantage is that they describe architecture in a formal, exchangeable, and communicable way. A considerable number of ADLs are available today [Delange17, Dissaux05]. • Architecture Editors: Architecture editors are commercial tools which allow one to define, specify, model, and represent the architecture of a software-system. For future-proof software-systems, it is essential that the organization decides on and enforces a comprehensive and consistent set of the available architecture tools from this list.
References [Allen15] Allen M, Cervo D (2015) Multi-domain master data management: advanced MDM and data governance in practice. Morgan Kaufmann, Waltham. ISBN 978-0-128-00835-5 [Bass13] Bass L, Clements P, Kazman R (2013) Software architecture in practice (SEI Series), 3rd edn. Addison-Wesley (Pearson Education), Upper Saddle River. ISBN 978-0-321-81573-6 [Bass15] Bass L, Weber I, Zhu L (2015) DevOps—a software architect’s perspective (SEI Series in Software Engineering). Addison Wesley, Upper Saddle River. ISBN 978-0-134-04984-7 [Bell16] Bell M (2016) Incremental software architecture—a method for saving failing IT implementations. Wiley, Hoboken. ISBN 978-1-119-11764-3 [Bernstein15] Bernstein D (2015) Beyond legacy code—nine practices to extend the life (and value) of your software. O’Reilly UK Ltd., Dallas. ISBN 978-1-680-50079-0
[Brackett12] Brackett M (2012) Data resource integration: understanding and resolving a disparate data resource. Technics Publications, LLC, Seattle. ISBN 978-1-9355-0423-8 [Brodie95] Brodie ML, Stonebraker M (1995) Migrating legacy systems—gateways, interfaces & the incremental approach. Morgan Kaufmann Publishers Inc., San Francisco. ISBN 978-1-55860-330-1 [Clements02] Clements P, Kazman R, Klein M (2002) Evaluating software architectures— methods and case studies (SEI Series in Software Engineering). AddisonWesley, Boston. ISBN 0-201-70482-X [Delange17] Delange J (2017) AADL (Architecture Analysis and Design Language) in practice—become an expert of software architecture modeling and analysis. Reblochon Development Company ISBN 978-0-6928-9964-9 [Dissaux05] Dissaux P (ed) (2005) Architecture description languages (IFIP Advances in Information and Communication Technology). Springer, New York. ISBN 978-1-461-49895-7 [Douglass16] Douglass BP (2016) Agile systems engineering. Morgan Kaufmann Publishers (Elsevier), Waltham. ISBN 978-0-12-802120-0 [Duvall07] Duvall PM, Matyas S, Glover A (2007) Continuous integration: improving software quality and reducing risk. Addison-Wesley, New Jersey. ISBN 978-0-321-33638-5 [Erder16] Erder M, Pureur P (2016) Continuous architecture—sustainable architecture in an agile and cloud-centric world. Morgan Kaufmann (Elsevier), Waltham. ISBN 978-0-12-803284-8 [Fairbanks10] Fairbanks G (2010) Just enough software architecture—a risk-driven approach. Marshall & Brainerd Publishers, Boulder. ISBN 978-0-9846181-0-1 [Feathers07] Feathers M (2007) Working effectively with legacy code. Prentice Hall International, Upper Saddle River. ISBN 978-0-13-117705-5 [Ford17] Ford N, Parsons R, Kua P (2017) Building evolutionary architectures—support constant change. O’Reilly Media Inc, Sebastopol. 
ISBN 978-1-491-98636-3 [Ghosh16] Ghosh P (2016) Semantic integration of applications: application integration by linking semantically related objects shared across applications. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-5305-7375-2 [Gorton06] Gorton I (2006) Essential software architecture. Springer, Heidelberg. ISBN 978-3-540-28713-1 [Halevy12] Halevy A, Doan A, Ives ZG (2012) Principles of data integration. Elsevier Ltd, Oxford. ISBN 978-0-124-16044-6 [Hammer07a] Hammer K, Timmerman T (2007) Fundamentals of software integration. Jones & Bartlett Publication Inc., Sudbury. ISBN 978-0-7637-4133-4 [Heuvel07] WJ van den Heuvel (2007) Aligning modern business processes and legacy systems—a component-based perspective. MIT Press, Cambridge. ISBN 978-0-262-22079-9 [High09] High PA (2009) World-class IT—why businesses succeed when IT triumphs. Wiley, Jossey-Bass. ISBN 978-0-470-45018-5 [High14] High PA (2014) Implementing world class IT strategy—how IT can drive organizational innovation. Wiley, New York. ISBN 978-1-118-63411-0 [Hohmann03] Hohmann L (2003) Beyond software architecture—creating and sustaining winning solutions. Pearson Education, Addison-Wesley, Boston. ISBN 978-0-201-77594-8
[Hohpe16] Hohpe G (2016) 37 things one architect knows about IT transformation—a chief architect’s journey. Leanpub Publishing, Wroclaw. ISBN 978-1-53708-298-1
[Humble10] Humble J, Farley D (2010) Continuous delivery—reliable software releases through build, test, and deployment automation. Addison-Wesley, New Jersey. ISBN 978-0-321-60191-9
[IEEE42010] ISO/IEC/IEEE 42010 (2011) Systems and software engineering—architecture description. International Standards Organization (ISO), 2017. https://www.iso.org/standard/50508.html
[Kim16] Kim G, Willis J, Debois P, Humble J (2016) The DevOps handbook—how to create world-class agility, reliability, and security in technology organizations. IT Revolution Press, Portland. ISBN 978-1-942788-00-3
[Knodel16] Knodel J, Naab M (2016) Pragmatic evaluation of software architectures (The Fraunhofer IESE Series on Software and Systems Engineering). Springer, Cham. ISBN 978-3-319-34176-7
[Kossiakoff11] Kossiakoff A, Sweet WN, Seymour SJ, Biemer SM (2011) Systems engineering—principles and practice, 2nd edn. Wiley, Hoboken. ISBN 978-0-470-40548-2
[Lattanze09] Lattanze AJ (2009) Architecting software intensive systems—a practitioner’s guide. Auerbach Publications, Taylor & Francis Group, LLC, Boston. ISBN 978-1-4200-4569-7
[Microsoft04] Microsoft Corporation (2004) Guidelines for application integration (patterns & practices). Microsoft Press, Redmond. ISBN 978-0-735-61848-0
[Miller98] Miller HW (1998) Reengineering legacy software systems. Butterworth-Heinemann, Digital Press, Woburn. ISBN 978-1-55558-195-1
[Mullins12] Mullins CS (2012) DB2 developer’s guide—a solutions-oriented approach to learning the foundation and capabilities of DB2 for z/OS. IBM Press, Prentice Hall, revised edition. ISBN 978-0-132-83642-5
[Murer11] Murer S, Bonati B, Furrer FJ (2011) Managed evolution—a strategy for very large information systems. Springer, Heidelberg. ISBN 978-3-642-01632-5
[Nathan05] Nathan RB (2005) Implementing database security and auditing. Elsevier Digital Press, Oxford. ISBN 1-55558-334-2
[Paulheim11] Paulheim H (2011) Ontology-based application integration. Springer, New York. ISBN 978-1-461-41429-2
[Seacord03] Seacord RC, Plakosh D, Lewis GA (2003) Modernizing legacy systems—software technologies, engineering processes, and business practices. Addison-Wesley, Boston. ISBN 978-0-321-11884-7
[Simsion05a] Simsion GC (2005) Data modeling essentials, 3rd edn. Morgan Kaufmann Publishers, San Francisco. ISBN 978-0-12-644551-0
[Simsion05b] Simsion GC, Witt GC (2005) Data modeling essentials, 3rd edn. Morgan Kaufmann (Elsevier), San Francisco. ISBN 978-0-12-644551-0
[Staron17] Staron M (2017) Automotive software architectures—an introduction. Springer International Publishing, Switzerland. ISBN 978-3-319-58609-0
[Ulrich02] Ulrich WM (2002) Legacy systems transformation strategies. Prentice Hall, Upper Saddle River. ISBN 978-0-13-044927-X
[Warren99] Warren I (1999) The renaissance of legacy systems—method support for software-system evolution. Springer, London. ISBN 978-1-85233-060-6
[Wasson15] Wasson CS (2015) System engineering analysis, design, and development—concepts, principles, and practices (Wiley Series in Systems Engineering and Management), 2nd edn. Wiley, Hoboken. ISBN 978-1-118-44226-5
[Weilkiens16] Weilkiens T, Lamm JG, Roth S, Walker M (2016) Model-based system architecture. Wiley, Hoboken. ISBN 978-1-118-89364-7
[Weller17] Weller C (2017) Continuous delivery: a brief overview of continuous delivery. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-98139-294-0
[Zayaraz10] Zayaraz G (2010) Quantitative approaches for evaluating software architectures—frameworks and models. VDM Verlag, Saarbrücken. ISBN 978-3-6392-4041-2
7
Principle-Based Architecting
Abstract
In the previous chapters, the essential knowledge about software architecture, software evolution, and software engineering has been presented. The questions coming up at this point are: (1) How is good architecture defined? (2) How is good architecture formalized? (3) How is good architecture taught? (4) How is good architecture enforced? The answers in this book are: by defining, formalizing, strictly applying, and enforcing architecture principles. Architecture principles represent proven, long-tested, reliable knowledge about successful system architecting—sometimes called “the eternal truths of software architecture”. Today, architecture principles (and their derivatives: architecture patterns) are available for all disciplines of architecting, i.e., for all horizontal and vertical architecture layers. The following chapters introduce, explain, and justify a substantial number of architecture principles. The main focus is on architecture principles for changeability and dependability. For this approach to architecting software-systems, the term “Principle-Based Architecting” has been coined.
7.1 Principles in Science

Principles have long been used in science to formalize fundamental insights in a specific area of scientific knowledge (see, e.g., [Devons1923, Russell1903, Dirac1930, Stark1910, Graupe13, Kandel1992, Born1999, Harms00, Alur15]).

Definition 7.1: Principle A principle is a fundamental truth or proposition that serves as the foundation for a system of belief, or behavior, or for a chain of reasoning. (Oxford Dictionary)
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019 F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8_7
Whenever a scientific or technical discipline has reached a sufficient degree of maturity, the underlying principles become clear and can be formalized. Scientific principles represent an objective truth which has been verified in many applications.
7.2 Software Architecture Principles

Architecture principles target the engineering discipline of software engineering—or, more precisely, software architecting. They have been distilled from a long history of theoretical work, practical experience, and reflections on failed projects.

Definition 7.2: Architecture Principle Architecture principles are fundamental insights—formulated as enforceable rules—about how a good software-system should be built.

The history of software architecture principles started around 1970. Three landmark papers must be mentioned:
• Edsger W. Dijkstra: “Go To Statement Considered Harmful” [Dijkstra68],
• D. L. Parnas: “On the Criteria to be Used in Decomposing Systems into Modules” [Parnas71],
• L. A. Belady, M. M. Lehman: “A Model of Large Program Development”, 1976 [Belady76].
These three profound thinkers founded the art of principle-based software architecting by identifying software construction principles of general validity. Software architecture principles are focused rules for constructing software-systems. They impose strict prescriptions on the construction of software-systems. If all architecture principles are respected during development, maintenance, and evolution, good architecture results.

Quote: “Software engineering principles should capture the nature and behavior of software-systems and guide their development. Such principles would help in restricting degrees of freedom in software development and achieving degrees of predictability and repeatability similar to those of classical engineering disciplines.” (Hadar Ziv, 1996)
Architecture principles are focused on particular quality properties of the system. Figure 7.1 shows the classification according to the architecture framework of Fig. 3.8:
• Horizontal Architecture Principles: This set of principles focuses on the functional architecture layers. They mainly affect the property “changeability”, i.e., they enable software-systems to be efficiently developed, maintained, evolved, and operated. These architecture principles are valid and important for any software-system.
• Vertical Architecture Principles: This set of principles impacts the quality properties of the system, such as “security”, “safety”, “real-time capability”, etc. They stem from the highly specialized fields of implementing specific quality properties in the system (from Table 4.1).
Fig. 7.1 Horizontal and vertical architecture principles (the horizontal architecture layers—business, application, information, integration, and technical architecture—are crossed by vertical layers such as real-time, security, and safety)
7.3 Horizontal Architecture Principles

The business architecture (Fig. 7.2) defines the organization, the structure, and the processes of the company. Part of the business architecture is the enterprise architecture [Ross06, Lankhorst17, Bernard12a, Bernard12b, Bente12, Perroud13]. The enterprise architecture has some impact on the application landscape architecture, mainly through the domain and the business object models. However, in many cases, the longevity of the application landscape is significantly higher than that of the enterprise architecture.
Fig. 7.2 Horizontal architecture principles (business architecture principles, application landscape architecture principles, and technical architecture principles)
Table 7.1 The 12 application landscape architecture principles
A1: Architecture layer isolation
A2: Partitioning, encapsulation, and coupling
A3: Conceptual integrity
A4: Redundancy
A5: Interoperability
A6: Common functions
A7: Reference architectures, frameworks, and patterns
A8: Reuse and parametrization
A9: Industry standards
A10: Information architecture
A11: Formal modelling
A12: Complexity and simplification
Changeability is arguably the most important property of the application landscape: a low resistance to change assures the technical and commercial viability of the software-system in the long run. Changeability is a property of the application landscape, not only of the individual applications. Therefore, the full application landscape must be considered: Application Architecture, Information Architecture, and Integration Architecture (Fig. 7.2) must be taken into account. Amazingly, the results of the author’s decades-long work as an application landscape architect and of some research work done at the Technical University of Dresden have shown that 12 application landscape architecture principles (horizontal principles) are sufficient to construct future-proof software-systems with good changeability. These 12 principles are listed in Table 7.1 and are presented in detail in part 2. The lowest architecture level in Fig. 7.2 is the technical architecture. The technical architecture has its own set of architecture principles [Murer11, Laan17, Arrasjid16, Robertson04]. The interaction of technical architecture and application landscape architecture is governed by Architecture Principle #1: Architecture Layer Isolation.
7.4 Vertical Architecture Principles

The vertical architecture principles (Fig. 7.3) govern the quality properties of the system. The rules for a vertical architecture principle often cut across all the horizontal layers, i.e., to implement the vertical architecture principle, activities in all horizontal layers may be necessary.
Fig. 7.3 Vertical architecture principles (real-time, security, safety, … cutting across the horizontal architecture layers)
In most organizations, particular quality properties—such as safety or security—are the responsibility of specialized departments. They define the principles, patterns, and standards to be used and supervise the implementation. More about vertical architecture principles can be found in Chap. 13.
7.5 Software Architecture Principle Formalization

Software architecture principles must be unambiguously understandable at all levels of the hierarchy in the IT department, i.e., from the CIO to the programmer. Also, they must be enforceable—usually via reviews or code analysis. Therefore, a standard format as shown in Table 7.2 is useful. This template will be used throughout part 2.

Table 7.2 Architecture principle documentation template
Principle: title
Statement 1: …
Statement 2: …
Statement 3: …
…
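Such a template can also be kept in machine-readable form, so that each principle becomes a checkable artifact rather than prose only. The following Python sketch is an illustration of this idea; the field names and the sample statement texts are assumptions for the example, not taken from this book:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ArchitecturePrinciple:
    """One principle documented along the lines of the Table 7.2 template."""
    number: str            # e.g., "A1" from Table 7.1
    title: str             # principle title
    statements: tuple = () # the enforceable rule statements

# Hypothetical instance; the statement wording is invented for illustration.
a1 = ArchitecturePrinciple(
    number="A1",
    title="Architecture layer isolation",
    statements=(
        "Applications use the technical architecture only via defined interfaces.",
        "No application code depends directly on a concrete infrastructure product.",
    ),
)
```

A review tool could then iterate over all principle objects and attach check results to each individual statement.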
7.6 Software Architecture Patterns

Although architecture principles are clear, precise, and enforceable, their actual implementation may not be evident during development. A great help for the development teams are patterns: many existing patterns support the application of architecture principles in more detail for particular problems. Patterns live in a hierarchy, as shown in Fig. 7.4.
7.7 Patterns

Patterns form a very valuable treasure of proven knowledge in many scientific disciplines. The idea of a pattern as reusable expert knowledge was introduced by Christopher Alexander in 1977 for the support of building houses and towns [Alexander1977, Alexander1980]. Since then, patterns have been identified and made available in many disciplines, especially software engineering—starting with the famous book by the “gang of four” (Erich Gamma, Richard Helm, Ralph E. Johnson, John Vlissides) [Gamma94].

Definition 7.3: Pattern A pattern is a proven, generic solution to recurring architectural or design problems, which can be adapted to the task at hand.
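To make this definition concrete, here is one classic design pattern from [Gamma94]—Strategy—reduced to a minimal Python sketch: the generic solution (a fixed algorithm with a pluggable step) is adapted to the task at hand by supplying concrete behavior. The pricing example is invented for illustration only.

```python
from typing import Callable

def checkout(amount: float, pricing: Callable[[float], float]) -> float:
    """Generic algorithm (the reusable pattern part); the variable
    pricing step is supplied as an interchangeable strategy."""
    return round(pricing(amount), 2)

# Two concrete strategies adapting the generic solution to a task:
regular = lambda amount: amount
seasonal = lambda amount: amount * 0.9  # hypothetical 10% promotion

print(checkout(100.0, regular))   # 100.0
print(checkout(100.0, seasonal))  # 90.0
```

The pattern itself is the stable shape (algorithm with exchangeable step); only the strategies change per task.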
Fig. 7.4 Principle-pattern hierarchy (abstraction decreasing from architecture principles via architecture patterns and design patterns to implementation patterns)
The literature about patterns in software engineering is very rich. A Google search for “software pattern” delivered 1’460’000’000 results (02.04.2018). We find pattern books about business and enterprise architecture (e.g., [Greefhorst11, Hruby06, Perroud13]), about the horizontal architecture layers (e.g., as a small sample: [Cloutier08, Buschmann96, Fowler02, Hohpe03]), and about the vertical architecture layers (see, e.g., as a small selection: [Hanmer07, Schumacher05, Fernandez-Buglioni13, Dyson04]). Also, for the IT infrastructure, many patterns have been published (e.g., [Völter02, Cope15, Berczuk02]).

Example 7.1: RBAC Pattern [Fernandez-Buglioni13]
Role-based access control is the standard protection method in today’s modern commercial systems. It is based on a standardized pattern and is implemented in many products. The situation is that a user wants to access protected/confidential information and needs to be authorized (Fig. 7.5). The User and Role classes describe registered users and their predefined roles. Users are assigned to roles; roles are given rights according to their functions. Access is granted or denied based on the rights of the role and the ID of the user.
Architecture and software patterns are also an excellent source for the continuous education of software engineers (my students who want to become software architects have to read at least six pattern books per year—life-long!). Understanding patterns greatly improves the insight into the foundations of successful software engineering. For the practical application, many pattern catalogs are available on the Web.
Fig. 7.5 The RBAC security pattern
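The RBAC structure of Fig. 7.5 can be sketched in a few lines of Python. This is a hedged illustration of the pattern only—class, user, and right names are invented, and real products add sessions, constraints, and administration:

```python
class Role:
    """A predefined role carrying a set of rights."""
    def __init__(self, name, rights):
        self.name = name
        self.rights = set(rights)

class User:
    """A registered user; gains rights only through assigned roles."""
    def __init__(self, user_id):
        self.user_id = user_id
        self.roles = []

def access_granted(user, right):
    # The decision is based on the rights of the user's roles,
    # never on the individual user directly.
    return any(right in role.rights for role in user.roles)

# Invented example data
teller = Role("teller", {"read_account"})
auditor = Role("auditor", {"read_account", "read_audit_log"})

alice = User("alice")
alice.roles.append(teller)

print(access_granted(alice, "read_account"))    # True
print(access_granted(alice, "read_audit_log"))  # False
```

Changing what a whole group of users may do then means changing one role, not many user records—the core benefit of the pattern.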
7.8 Anti-Patterns

In 1998, Scott W. Ambler came up with an interesting idea: anti-patterns [Ambler98]. While patterns guide a user on how to solve a problem, anti-patterns try to prevent the user from making mistakes. Many useful anti-patterns have been published (e.g., [Brown98, Tanzer18]).

Definition 7.4: Anti-Pattern Anti-patterns are common approaches to solving recurring problems that prove to be ineffective. (Scott W. Ambler, 1998)
Example 7.2: Anti-Pattern “Design by Committee” [Norman13, Ambler98]
This anti-pattern occurs when everyone, from the company founder to the security guy, takes part in architectural decision-making. Make sure that the only people influencing software architecture are the ones who are actually experts in the field. Design by committee is a term sometimes used to describe a design that is flawed because too many people provided input. The phrase implies a lack of a coherent vision and, perhaps as a result, a failure to successfully solve the problems the design was intended to solve. Donald Norman: “You don’t do good software design by committee. You do it best by having a dictator (= Chief architect)”.
7.9 Principle-Based Architecting

Many processes and methods exist to create, maintain, and evolve the architecture of software-systems (see, e.g., [ITManagement13, Wout10, Cervantes16, Rozanski11]). In this book, we use the principle-based architecting process.

Definition 7.5: Principle-Based Architecting Principle-based architecting is a process where the knowledge used to create, maintain, and evolve a software architecture is contained in architecture principles and patterns. These principles and patterns are consistently applied throughout the evolution of the software-system and are reasonably enforced.

A simplified form of the principle-based architecting process is shown in Fig. 7.6: New requirements drive a system evolution. The new requirements may force an adaptation of the software architecture. The adaptation of the architecture is governed by the architecture principles and patterns. While evolving the architecture, the preservation or improvement of the existing software architecture (possibly a legacy system) is an essential task. The applicable architecture principles and patterns must be consistently applied and reasonably enforced throughout the whole development process (in some cases, trade-offs between quality properties or compromises because of business pressure must be allowed—but the resulting technical debt must be listed and eliminated later!). Enforcing the architecture principles and patterns requires periodic reviews. During the reviews, the designs are checked for any violation of the architecture principles. Unjustified violations must be corrected before the project can continue.

Fig. 7.6 Principle-based architecting process
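Some of the checks performed in such reviews can be automated as code analysis. As a hedged sketch (the layer map, module names, and the allowed-dependency rule below are invented for illustration), the layer-isolation idea behind principle A1 could be verified mechanically over a module dependency list:

```python
# Which layers a module of a given layer may depend on (illustrative rule).
ALLOWED = {
    "application": {"application", "integration"},
    "integration": {"integration", "technical"},
    "technical":   {"technical"},
}

def violations(dependencies, layer_of):
    """Return (module, imported_module) pairs that break layer isolation."""
    found = []
    for module, imports in dependencies.items():
        for imported in imports:
            if layer_of[imported] not in ALLOWED[layer_of[module]]:
                found.append((module, imported))
    return found

# Invented example: an application module reaching into the technical layer.
deps = {"billing_app": ["msg_bus", "db_driver"], "msg_bus": ["db_driver"]}
layers = {"billing_app": "application", "msg_bus": "integration",
          "db_driver": "technical"}

print(violations(deps, layers))  # [('billing_app', 'db_driver')]
```

A review then only has to judge the reported pairs: justified exceptions are recorded as technical debt, unjustified ones are corrected.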
References

[Alexander1977] Alexander C (1977) A pattern language: towns, buildings, construction. Oxford University Press, Oxford. ISBN 978-0-195-01919-3
[Alexander1980] Alexander C (1980) The timeless way of building. Oxford University Press, Oxford. ISBN 978-0-195-02402-9
[Alur15] Alur R (2015) Principles of cyber-physical systems. MIT Press, Cambridge. ISBN 978-0-262-02911-7
[Ambler98] Ambler SW (1998) Process patterns: building large-scale systems using object technology. Cambridge University Press, Cambridge. ISBN 0-521-64568-9
[Arrasjid16] Arrasjid JY, Gabryjelski M, McCain C (2016) IT architect: foundation in the art of infrastructure design—a practical guide for IT architects. IT Architect Resource, Upper Saddle River. ISBN 978-0-9966-4770-0
[Belady76] Belady LA, Lehman MM (1976) A model of large program development. IBM Syst J 3:225–252. https://cseweb.ucsd.edu/~wgg/CSE218/BeladyModel-10.1.1.86.9200.pdf. Accessed 28 Oct 2018
[Bente12] Bente S, Bombosch U, Langade S (2012) Collaborative enterprise architecture: enriching EA with lean, agile, and enterprise 2.0 practices. Morgan Kaufmann, Waltham. ISBN 978-0-124-15934-1
[Berczuk02] Berczuk SP (2002) Software configuration management patterns: effective teamwork, practical integration. Addison-Wesley, Boston. ISBN 978-0-201-74117-9
[Bernard12a] Bernard SA (2012) An introduction to enterprise architecture, 3rd edn. AuthorHouse, Bloomington. ISBN 978-1-4772-5800-2
[Bernard12b] Bernard SA (2012) EA3 An introduction to enterprise architecture—linking strategy, business and technology. AuthorHouse, Bloomington. ISBN 978-1-4772-5800-2
[Born1999] Born M, Wolf E (1999) Principles of optics: electromagnetic theory of propagation, interference and diffraction of light. Cambridge University Press, Cambridge. ISBN 978-0-5216-4222-4
[Brown98] Brown WH, Malveau RC, McCormick HW, Mowbray TJ (1998) AntiPatterns—refactoring software, architectures, and projects in crisis. Wiley, Hoboken. ISBN 978-0-471-19713-3
[Buschmann96] Buschmann F, Meunier R, Rohnert H (1996) Pattern-oriented software architecture, vol 1: a system of patterns. Wiley, Hoboken [also vols 2, 3 and 4]. ISBN 978-0-471-95869-7
[Cervantes16] Cervantes H, Kazman R (2016) Designing software architectures: a practical approach (SEI Series in Software Engineering). Pearson Education, London. ISBN 978-0-134-39078-9
[Cloutier08] Cloutier R (2008) Applicability of patterns to architecting complex systems—making implicit knowledge explicit. VDM, Saarbrücken. ISBN 978-3-8364-8587-6
[Cope15] Cope R, Naserpour A, Erl T (2015) Cloud computing design patterns. Pearson Education, London. ISBN 978-0-133-85856-3
[Devons1923] Devons WS (1923) The principles of science—a treatise on logic and scientific method. Richard Clay & Sons, London. https://ia801407.us.archive.org/0/items/theprinciplesof00jevoiala/theprinciplesof00jevoiala.pdf. Accessed 31 Mar 2018
[Dijkstra68] Dijkstra E (1968) Go to statement considered harmful. Commun ACM 11(3). https://homepages.cwi.nl/~storm/teaching/reader/Dijkstra68.pdf. Accessed 31 Mar 2018
[Dirac1930] Dirac PAM (1930) The principles of quantum mechanics. Oxford University Press, Oxford (Reprint: www.Snowballpublishing.com, 2013). ISBN 978-1-6079-6560-2
[Dyson04] Dyson P, Longshaw A (2004) Architecting enterprise solutions: patterns for high-capability internet-based systems. Wiley, Hoboken. ISBN 978-0-470-85612-3
[Fernandez-Buglioni13] Fernandez-Buglioni E (2013) Security patterns in practice: designing secure architectures using software patterns. Wiley, Hoboken. ISBN 978-1-119-99894-5
[Fowler02] Fowler M (2002) Patterns of enterprise application architecture. Pearson Professional, USA. ISBN 978-0-321-12742-6
[Gamma94] Gamma E, Helm R, Johnson RE, Vlissides J (1994) Design patterns—elements of reusable object-oriented software. Pearson Professional, USA, Reprint. ISBN 978-0-201-63361-0
[Graupe13] Graupe D (2013) Principles of artificial neural networks, 3rd edn. World Scientific, Singapore. ISBN 978-9-8145-2273-1
[Greefhorst11] Greefhorst D, Proper E (2011) Architecture principles—the cornerstones of enterprise architecture. Springer, Heidelberg. ISBN 978-3-642-20278-0
[Hanmer07] Hanmer R (2007) Patterns for fault tolerant software. Wiley, Hoboken. ISBN 978-0-470-31979-6
[Harms00] Harms AA, Kingdon DR, Schoepf KF (2000) Principles of fusion energy. World Scientific, Singapore. ISBN 978-9-8123-8033-3
[Hohpe03] Hohpe G, Woolf B (2003) Enterprise integration patterns: designing, building, and deploying messaging solutions. Pearson Professional, USA. ISBN 978-0-321-20068-6
[Hruby06] Hruby P (2006) Model-driven design using business patterns. Springer, Berlin. ISBN 978-3-540-31054-7
[ITManagement13] IT Architecture Management Institute (2013) Architecture management body of knowledge: AMBOK® guide for information technology, 2nd edn. Architecture Management Institute, Ontario. ISBN 978-0-9868-6261-8
[Kandel1992] Kandel ER, Schwartz JH, Jessell TM (1992) Principles of neural science. Appleton & Lange, Norwalk. ISBN 978-0-8385-8034-9
[Laan17] Laan S (2017) IT infrastructure architecture. Lulu Press, Morrisville. ISBN 978-1-3269-1297-0
[Lankhorst17] Lankhorst M (2017) Enterprise architecture at work: modelling, communication and analysis, 4th edn. Springer, Berlin. ISBN 978-3-662-53932-3
[Murer11] Murer S, Bonati B, Furrer FJ (2011) Managed evolution—a strategy for very large information systems. Springer, Berlin. ISBN 978-3-642-01632-5
[Norman13] Norman DA (2013) Design of everyday things, 2nd edn. MIT Press, Cambridge. ISBN 978-0-262-52567-1
[Parnas71] Parnas DL (1971) On the criteria to be used in decomposing systems into modules. Department of Computer Science, Carnegie-Mellon University, Pittsburgh. http://repository.cmu.edu/cgi/viewcontent.cgi?article=2979&context=compsci. Accessed 31 Mar 2018
[Perroud13] Perroud T, Inversini R (2013) Enterprise architecture patterns: practical solutions for recurring IT-architecture problems. Springer, Berlin. ISBN 978-3-642-37560-6
[Robertson04] Robertson B, Sribar V (2004) The adaptive enterprise: IT infrastructure strategies to manage change and enable growth. Intel Press, USA. ISBN 978-0-9712-8872-0
[Ross06] Ross JW, Weill P, Robertson DC (2006) Enterprise architecture as strategy—creating a foundation for business execution. Harvard Business Review Press, Brighton. ISBN 978-1-5913-9839-4
[Rozanski11] Rozanski N, Woods E (2011) Software systems architecture—working with stakeholders using viewpoints and perspectives, 2nd edn. Addison-Wesley, Boston. ISBN 978-0-321-71833-4
[Russell1903] Russell B (1903) The principles of mathematics. George Allen & Unwin, London
[Schumacher05] Schumacher M, Fernandez-Buglioni E, Hybertson D, Buschmann F, Somerlad P (2005) Security patterns: integrating security and systems engineering. Wiley, Hoboken. ISBN 978-0-470-85884-4
[Stark1910] Stark J (1910) Prinzipien der Atomdynamik, vol 3. S. Hirzel, Leipzig
[Tanzer18] Tanzer D (2018) Quick glance at agile anti-patterns. Independently published. ISBN 978-1-9802-2631-4
[Völter02] Völter M, Schmid A, Wolff E (2002) Server component patterns: component infrastructures illustrated with EJB. Wiley, Hoboken. ISBN 978-0-470-84319-2
[Wout10] van’t Wout J, Waage M, Hartman H, Stahlecker M, Hofmann A (2010) The integrated architecture framework explained: why, what, how. Springer, Berlin. ISBN 978-3-642-11517-2
8
Context for Managed Evolution
Abstract
In the previous chapter, architecture principles have been introduced, explained, and justified. A number of essential architecture principles are defined in part 2. Architecture principles are the treasure of modern software architecting, distilled from decades of experience and the valuable work of many architects. To have an impact on actual systems, they must be taught, ingrained into the development process, and finally enforced. The organization of the company must be geared toward Principle-Based Architecting. The strategy of Managed Evolution, paired with Principle-Based Architecting, forms a robust ecosystem for the creation, maintenance, and evolution of today’s complex software-systems. This chapter introduces the context for Managed Evolution, i.e., the most important processes required to successfully implement Managed Evolution and Principle-Based Architecting.
8.1 Context Requirements

What is necessary to successfully implement sustainable Managed Evolution and Principle-Based Architecting in an organization? Seven major factors are decisive [Murer11, Boar99, Beijer10]:
1. Dedicated and supportive company management;
2. Sound, formalized architecture knowledge;
3. A competent and trusted architecture organization;
4. An efficient and respected system evolution process;
5. An IT-aware, quality company culture;
6. Excellent and loyal staff;
7. Realistic metrics.
These seven factors must be ingrained in the organization and work harmoniously together. Continuous attention and effort are required to achieve and maintain this objective.
8.2 Company Management

Quote: “The war of competition is a war of information systems. Those companies that can build, maintain, and extend their IT infrastructure with alacrity will have significant advantage.” (Bernard H. Boar, 1999)
Many decades ago, information technology systems were perceived by the company management as cost centers, possibly with some (limited) rationalization potential. Today, the perception has completely changed: Powerful information technology is the indispensable precondition for business success. The turning point was the landmark book “Enterprise Architecture As Strategy: Creating a Foundation for Business Execution” [Ross06]. Since then, most industries—and their management—have recognized the crucial value of information technology for their success. However, the tension field between company management, IT architecture, and project management as shown in Fig. 8.1 will never go away: The fundamental interests and objectives of the three stakeholders remain different.
Fig. 8.1 Management tension fields (company management: balance sheet, profit, strategic future; IT architecture: technical debt avoidance, future-proof software-systems, clean architecture evolution; project management: functionality, short time-to-market, low cost)
Most differences occur between project management and IT architecture: IT architecture has to restrict the freedom of the project management by enforcing the future-proof architecture, avoiding technical debt, limiting the choices of technologies, and requesting complete models and documentation. These disputes are resolved during the software development (evolution) process. In most cases, controversies can be resolved amicably. However, if this is not the case, the IT architecture must enforce its objectives with the firm and explicit backing of the company management.
8.3 Architecture Knowledge

8.3.1 Formalized Architecture Knowledge

In the last decades, information system architecture has—fortunately—evolved from a “black art” mastered by only a few experienced individuals to a (relatively) mature engineering discipline. The main reason is that architecture knowledge is today available in proven, well-documented, and actionable form.

Definition 8.1: Architecture Knowledge Architecture knowledge is proven, actionable, and well-documented theoretical and practical expert knowledge of how to define, build, maintain, and evolve information technology architectures with desired, well-defined properties.

In our case, the primary properties are business value, changeability, and dependability. The secondary properties are domain-specific choices from Table 4.1.

Quote: “A successful architecture greatly depends on the formulation of its principles. Only effective and coherent principles can result in an unambiguous architecture that fits its purpose and drives change.” (Peter Beijer, 2010)
Information systems architecture—or software architecture—is today a huge field of engineering (a Google search for “information system architecture” resulted in 392’000’000 hits, March 9, 2018). This fact poses problems for finding, understanding, teaching, applying, consolidating, and enforcing IT architecture. Therefore, the first task for the architecture team in an organization is to reduce, condense, and concisely document the relevant, binding architecture knowledge applicable to the organization. The architecture knowledge is divided into two groups of insights, which form the fundamental theme of this book:
• Architecture knowledge which is valid for all systems, i.e., covering the primary properties “business value”, “changeability”, and “dependability”,
• Additional architecture knowledge, specific to an application domain, covering particular quality properties.
The fundamental organization of architecture knowledge is shown in Fig. 8.2: In the center, we find the system evolution process, which is driven by new requirements and continuously transforms the system—and thus also evolves its architecture. The system evolution process is steered by the architecture principles and architecture patterns (see also Fig. 8.3). The system evolution process is strongly supported by models—especially the domain and business object models—and additional tools, such as architecture description languages (ADLs). The first shell—the reference architectures—surrounds the process. Reference architectures are used by many industries, such as automotive [Scheid15, Autosar17], avionics [Eveleens06, Annighoefer15], telecommunications [Czarnecki17], and manufacturing [Heidel17]. Reference architectures facilitate the cooperation of different partners by setting technical standards. The second, outer shell is architecture frameworks. Architecture frameworks are powerful collections of methods, principles, processes, governance, etc., to support the IT development of an organization. A number of architecture frameworks exist (e.g., [TOGAF11, Wout10]). Architecture frameworks are very useful when an organization is building up, revising, auditing, or modernizing its IT operation.
Fig. 8.2 Architecture knowledge (the system evolution process in the center, steered by architecture principles and architecture patterns and supported by domain & business object models, models, and ADLs; surrounded by the reference architecture shell and the outer architecture framework shell)
8.3 Architecture Knowledge
Fig. 8.3 Architecture governance (the architecting activity, steered by rules and an IT architecture governance review, transforms new requirements through development & deployment under existing (legacy) system constraints)
8.3.2 Architecture Governance

Quote: "IT Governance is no longer some stand-alone function, but is an integral part of any organization's overall corporate governance. If an organisation cannot survive as a competitive player without IT, then the Board cannot apply acceptable corporate governance without overt IT Governance." (Deloitte & Touche, 2003)
Formalizing and documenting the architecture knowledge relies on a number of formalized techniques (see: Architecture Tools). This formalized knowledge is applied and enforced during the development process as shown in Fig. 8.3. Adherence to the rules of good architecting is assured by the architecture governance.

Definition 8.2: Architecture Governance
Architecture governance is the practice and orientation by which enterprise architectures and other architectures are managed and controlled at an enterprise-wide level (http://pubs.opengroup.org/architecture).

Architecture governance is a management responsibility. It is based on a hierarchy of governance structures:

• Corporate governance (company and management structure, business and organizational processes),
• Technology governance (enterprise architecture, technology portfolio management),
• Information system governance (IT department management and processes),
• Architecture governance (IT architecture definition, application, and enforcement).

In a large organization, the IT governance may exist at different geographic locations, i.e., be federated [Murer11]. A strong, consistent, and focused IT governance is the foundation of both Managed Evolution and Principle-Based Architecting.
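The four-level governance hierarchy can be pictured as a chain of responsibility with a defined escalation path. The following Python sketch is purely illustrative: the level names come from the list above, but the data structure and escalation logic are our assumptions, not from the book.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GovernanceLevel:
    """One level in the governance hierarchy described in the text."""
    name: str
    scope: str
    parent: Optional["GovernanceLevel"] = None

    def escalation_path(self) -> list:
        """Walk up the hierarchy, e.g. to route an unresolved dispute."""
        path, level = [], self
        while level is not None:
            path.append(level.name)
            level = level.parent
        return path

# The four levels from the text, wired bottom-up.
corporate = GovernanceLevel("Corporate governance",
                            "company and management structure")
technology = GovernanceLevel("Technology governance",
                             "enterprise architecture, technology portfolio",
                             corporate)
info_system = GovernanceLevel("Information system governance",
                              "IT department management and processes",
                              technology)
architecture = GovernanceLevel("Architecture governance",
                               "IT architecture definition, application, enforcement",
                               info_system)

print(architecture.escalation_path())
```

In a federated setup [Murer11], several such chains would exist per location, all rooted in the same corporate governance.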
8.4 Architecture Organization

Quote: "Principles provide enduring overall direction and guidance for the long-term evolution of IT assets. They provide a basis for dispersed but integrated decision-making and serve as a tiebreaker in settling disputes. Principles address the perpetual management problem of influence at a distance. Though the decision maker (= Chief IT Architect) cannot be everywhere, and neither can nor should make every decision, agreed-to principles provide influence without presence. This is very important if one hopes to promote coordinated but independent actions across a large and often opinionated organizational community over time." (Bernard Boar, 1999)
An IT architecture must be:

• Defined and specified,
• Formulated,
• Communicated,
• Applied,
• Enforced,
• Maintained,
• Improved.

This list requires an adequate IT architecture organization. All architecture functions must be made the responsibility of explicitly assigned people. The chain of responsibility (reporting structure) must be consistent and transparent. Defined, appropriate escalation procedures must be present. Interaction with all stakeholders—especially the business units and the development departments—must be formalized and heavily used.

Example 8.1: Functional Architecture Organization
One possible architecture organization structure is to align it with the functional architecture. Figure 8.4 shows the functional architecture at CREDIT SUISSE [Murer11].
As shown in Fig. 8.4, architecture is first an engineering discipline: here, the architecture principles, patterns, standards, guidelines, and best practices, as well as the architecture metrics, are developed and made binding for the whole organization. Second, architecture is a process producing deliverables: the first deliverable is the continuous alignment between the expectations of the business units and the possibilities of the IT department (business-to-IT alignment). The second deliverable is a subprocess which permanently manages the complexity of the IT landscape, aiming to reduce complexity as much as possible and to introduce methods and tools for complexity management. All architecture knowledge and decisions must be communicated in a suitable form to the entire affected community; this is the communications subprocess. Communication alone, however, is not sufficient for a solid application of the binding architecture knowledge. Therefore, a training subprocess comprises regular training activities and instruction material for all stakeholders. The final activity in this stream is the maintenance and improvement of the architecture process as part of the overall system development process of the organization. Functional architecture also serves as a governance instrument, responsible for enforcing principles, patterns, and standards. An important part of the governance activity is the management of the technology portfolio, the service portfolio, and the applications portfolio.
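Technology portfolio management of this kind is often supported by a simple rule check: each technology used by an application is looked up in the portfolio, and its lifecycle state decides the governance finding. The sketch below is a minimal illustration; the lifecycle states, portfolio entries, and application name are invented, not taken from the book or from any real portfolio.

```python
from enum import Enum

class Lifecycle(Enum):
    APPROVED = "approved"     # may be used in new development
    CONTAINED = "contained"   # existing use tolerated, no new use
    PHASE_OUT = "phase-out"   # must be migrated away from

# Hypothetical, centrally maintained technology portfolio.
technology_portfolio = {
    "Java 17": Lifecycle.APPROVED,
    "PostgreSQL 15": Lifecycle.APPROVED,
    "CORBA": Lifecycle.PHASE_OUT,
    "SOAP services": Lifecycle.CONTAINED,
}

def review_application(name, technologies):
    """Return governance findings for one application's technology choices."""
    findings = []
    for tech in technologies:
        state = technology_portfolio.get(tech)
        if state is None:
            findings.append(f"{name}: '{tech}' not in portfolio -> architecture review required")
        elif state is not Lifecycle.APPROVED:
            findings.append(f"{name}: '{tech}' is {state.value} -> migration plan required")
    return findings

for finding in review_application("PaymentsHub", ["Java 17", "CORBA", "Erlang"]):
    print(finding)
```

Such a check can run automatically in the development pipeline, turning the portfolio from a document into an enforced governance instrument.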
Fig. 8.4 Example for a functional IT architecture (engineering discipline: architecture principles, guidelines & best practices, standards, metrics; architecture process: business–IT alignment, complexity management, architecture communication, architects' training, architecture process maintenance; governance instrument: IT principles enforcement, IT standards enforcement, technology/service/applications portfolio management; structure: business, application, information, integration, technical, and vertical architectures)
Finally, the results of all architecture activities are the IT architectures, i.e., the horizontal and the vertical architectures (as introduced in Fig. 3.8). These are the foundation of the organization's IT system.

Principle 8.1: Central Architecture Department
Sustainable Managed Evolution and successful Principle-Based Architecting are only possible with a competent, powerful, and responsible central architecture department under the lead of a respected Chief Architect.
8.5 Company Culture

Company (or corporate) culture is a soft phenomenon with hard consequences. Serious studies have demonstrated that an adequate company culture enables an organization to achieve excellent results. Organizations with a bad or inappropriate company culture can never perform in the top league [Dyer18, Jones03, Goffee03].

Definition 8.3: Corporate Culture
Corporate (or company) culture refers to the beliefs and behaviours that determine how a company's employees and management interact and how they handle outside business transactions (https://www.investopedia.com).

Quote: "In fact, today it is increasingly recognized that one element matters the most: The nature of relationships within the organization—the way people act toward each other, the 'social capital' of the organization." (Rob Goffee, 2003)
The most crucial facet of company culture is its values [Cameron11]. Values are the tenets of a company—believable, true, and long-lasting. They must be accomplishable and assessable on all levels of the organization. The values are expressed in the company culture statement—an explicit document guiding the behaviour of the company and its employees. Which values should be introduced and maintained in the company culture with respect to future-proof software-systems? They can be summarized as follows:

1. Establish, foster, and esteem trust between the business units and the IT department (this requires a mature alignment between business units and the IT department). Note: this trust is indispensable to achieve the Managed Evolution channel (Fig. 5.6), i.e., to obtain the necessary money and time.
2. Recognize and accept the necessity and merit of enforcing principles. Note: some principles—such as ethical principles—must be enforced without fail, whereas for architecture principles some room for justified compromise exists. However, remember not to accumulate technical debt (see: Technical Debt).
8.6 IT Staff

The system evolution process (Fig. 8.3) is executed by people. Even though the automated support for software development is increasing year by year (see Models-to-Code, Provably Correct Software), the people involved in the evolution process are crucial for the quality of the resulting system. Although people management is not a topic of this book, its importance for the sustained success of Managed Evolution must be stressed [Halvorson16].
8.7 Metrics

Metrics (Definition 5.5) have two characteristics:

• They are necessary and useful,
• They are expensive and dangerous.

Quote: "Two key characteristics of measurement in top-performing software organizations are that it is: (1) highly integrated into management and technical processes (2) supported by the corporate culture." (John McGarry, 2002)
They are necessary and useful because, first, metrics help us to understand the evolution. Second, they allow us to control progress. Third, they indicate how to improve our products and processes [Fenton15]. They are expensive and dangerous because they are difficult to implement, they can lead to wrong results, and their conclusions can be misused for inappropriate means (such as qualifying people). Much work has been done on the definition and application of metrics. For Managed Evolution, the two metrics business value (Business Value Metric) and changeability (Changeability Metric) are essential. The changeability evolution is shown in Example 8.2.

Example 8.2: Changeability Metric
The changeability metric introduced in [Murer11] (see Definition 5.3) expresses the efficiency of the software development process in an organization. Note that [Murer11] used the term "agility" for "changeability"; the terminology was changed later to avoid confusion with agile methods. The changeability metric is a property of an organization, i.e., all influence factors of the software development cycle are included. However, if we assume that an organization has a reasonably efficient software development process, the single most crucial factor for high changeability is the architecture quality of the applications landscape.
The evolution of the changeability (agility) at CREDIT SUISSE for 2005–2009 is shown in Fig. 8.5: A gradual improvement can be seen, which is the result of continuous improvement of the software development activity. A colossal number of metrics for specialized fields have been developed, such as for software quality [Abran10, Ejiogu05, Fenton15, Gupta17, Jones17, Oo11], for reuse [Poulin97], for security [Hayden10, Herrmann07, Jaquith07, Aroms12, Young10, Brotby08, Brotby13, Eusgeld08, Wong00], resilience [Francis14], models [Genero05, Kan02], safety [Janicak15], cybersecurity [Mateski17], and many more fields. Quote: “Measurement and risk management are synergistic. Both disciplines emphasize the prevention and early detection of problems rather than waiting for problems to become critical.” (John McGarry, 2002)
Therefore, an organization needs a carefully defined, designed, implemented, and executed measurement program [Fenton15]. It starts with a crystal-clear definition of the purpose of each envisaged metric: you need to know precisely how the metric will be used and what knowledge you can gain from it. Next, you define the process for gathering the required data, for calculating the metric, and for drawing conclusions from the metric—or, more precisely, from the time series provided by the metric. Note that the metric's process must be seamlessly integrated into the affected processes of the organization and that it should be automated as much as possible. Finally, the metric's results and conclusions must be used in the relevant management processes to understand and improve the operations.
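The gather–calculate–conclude pipeline can be made concrete with a toy metric. The sketch below uses a deliberately simplified changeability proxy (delivered requirements per person-day of effort); the real changeability metric of [Murer11] is defined differently, and all figures are invented for illustration.

```python
def changeability(delivered_requirements, effort_person_days):
    """Simplified proxy: delivered change per unit of development effort."""
    return delivered_requirements / effort_person_days

# Step 1: gather the required data (one invented measurement per year).
raw_data = {
    2005: (120, 4_000),
    2006: (135, 4_100),
    2007: (150, 4_050),
    2008: (170, 4_200),
}

# Step 2: calculate the metric, yielding a time series.
series = {year: changeability(req, days) for year, (req, days) in raw_data.items()}

# Step 3: draw conclusions from the time series, not from a single value.
years = sorted(series)
trend = series[years[-1]] - series[years[0]]
print("improving" if trend > 0 else "deteriorating")
```

The point is the shape of the pipeline: the conclusion ("improving") is only meaningful against the time series, which is exactly how the trend in Fig. 8.5 is read.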
Fig. 8.5 IT agility evolution in CREDIT SUISSE in 2005–2009
8.8 A Final Look

Implementing Managed Evolution and Principle-Based Architecting is not only a technical challenge. The whole organization needs to be adapted at all levels to realize sustainable success. An organization which does not adapt its culture, processes, and management objectives to Managed Evolution will not succeed in the long run.
References

[Abran10] Abran A (2010) Software metrics and software metrology. Wiley, Piscataway. ISBN 978-0-470-59720-0
[Annighoefer15] Annighoefer B (2015) Model-based architecting and optimization of distributed integrated modular avionics. Shaker, Aachen. ISBN 978-3-8440-3420-2
[Aroms12] Aroms E (2012) NIST Special Publication 800-55 Rev1: security metrics guide for information technology systems. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-4701-5204-8
[Autosar17] AUTOSAR Consortium (2017) AUTOSAR—layered architecture. https://www.autosar.org/fileadmin/user_upload/standards/classic/4-3/AUTOSAR_EXP_LayeredSoftwareArchitecture.pdf. Accessed 11 Mar 2018
[Beijer10] Beijer P, de Klerk T (2010) IT architecture—essential practices for IT business solutions. www.lulu.com. ISBN 978-1-4457-0603-0
[Boar99] Boar BH (1999) Constructing blueprints for enterprise IT architectures. Wiley, New York. ISBN 978-0-471-29620-1
[Brotby08] Brotby WK (2008) Information security management metrics—a definitive guide to effective security monitoring and measurement. Taylor & Francis, Boca Raton. ISBN 978-1-420-05285-5
[Brotby13] Brotby WK, Hinson G (2013) PRAGMATIC security metrics—applying metametrics to information security. Taylor & Francis, Boca Raton. ISBN 978-1-439-88152-1
[Cameron11] Cameron KS, Quinn RE (2011) Diagnosing and changing organizational culture: based on the competing values framework, 3rd edn. Jossey-Bass, San Francisco. ISBN 978-0-4706-5026-4
[Czarnecki17] Czarnecki C, Dietze C (2017) Reference architecture for the telecommunications industry: transformation of strategy, organization, processes, data, and applications. Springer, Cham. ISBN 978-3-319-46755-9
[Dyer18] Dyer C (2018) The power of company culture: how any business can build a culture that improves productivity, performance and profits. Kogan Page Publishers, London. ISBN 978-0-749-48195-7
[Ejiogu05] Ejiogu LO (2005) Software metrics—the discipline of software quality. Booksurge Publishing, Charleston. ISBN 978-1-4196-0242-9
[Eusgeld08] Eusgeld I (2008) Dependability metrics—advanced lectures (GI-Dagstuhl Research Seminar, 2005). Springer Lecture Notes in Computer Science. ISBN 978-3-540-68946-1
[Eveleens06] Eveleens RLC (2006) Open systems integrated modular avionics—the real thing. Mission Systems Engineering, pp 2-1–2-22. National Aerospace Laboratory, Report RTO-EN-SCI-176. https://www.sto.nato.int/publications. Accessed 11 Apr 2018
[Fenton15] Fenton N, Bieman J (2015) Software metrics—a rigorous and practical approach, 3rd edn. Chapman & Hall/CRC Innovations in Software Engineering and Software Development Series. ISBN 978-1-439-83822-8
[Francis14] Francis R, Bekera B (2014) A metric and frameworks for resilience analysis of engineered and infrastructure systems. Reliability Engineering and System Safety 121(2014):90–103. https://blogs.gwu.edu/seed/files/2012/07/Reliability-Engineering-and-System-Safety-2014-Francis-1y5jkh9.pdf. Accessed 3 Sep 2017
[Genero05] Genero M, Piattini M, Calero C (eds) (2005) Metrics for software conceptual models. Imperial College Press, London. ISBN 978-1-8609-4497-0
[Goffee03] Goffee R, Jones G (2003) The character of a corporation—how your company's culture can make or break your business, 2nd edn. Profile Books, London. ISBN 978-1-86197-639-0
[Gupta17] Gupta R (2017) Measurement of software quality factors using CK metrics. LAP LAMBERT Academic Publishing, Saarbrücken. ISBN 978-3-6598-9331-5
[Halvorson16] Halvorson C (2016) People management: everything you need to know about managing and leading people at work. CreateSpace Independent Publishing Platform. ISBN 978-1-5229-7235-8
[Hayden10] Hayden L (2010) IT security metrics—a practical framework for measuring security and protecting data. McGraw-Hill Education, New York. ISBN 978-0-071-71340-5
[Heidel17] Heidel R, Hoffmeister M, Hankel M, Döbrich U (2017) Basiswissen RAMI 4.0: Referenzarchitekturmodell und Industrie 4.0-Komponente. Beuth-Verlag, Berlin. ISBN 978-3-41026-482-8
[Herrmann07] Herrmann DS (2007) Complete guide to security and privacy metrics—measuring regulatory compliance, operational resilience, and ROI. Auerbach, Boca Raton. ISBN 978-0-8493-5402-1
[Janicak15] Janicak CA (2015) Safety metrics—tools and techniques for measuring safety performance, revised edn. Bernan Print, Maryland. ISBN 978-1-5988-8754-9
[Jaquith07] Jaquith A (2007) Security metrics—replacing fear, uncertainty, and doubt. Addison-Wesley Professional, USA. ISBN 978-0-321-34998-9
[Jones03] Jones G, Goffee R (2003) The character of a corporation: how your company's culture can make or break your business. Profile Books, London. ISBN 978-1-8619-7639-0
[Jones17] Jones C (2017) A guide to selecting software measures and metrics. Taylor & Francis, Boca Raton. ISBN 978-1-138-03307-8
[Kan02] Kan SH (2002) Metrics and models in software engineering, 2nd edn. Addison-Wesley Longman, Amsterdam. ISBN 978-0-201-72915-3
[Mateski17] Mateski M, Trevino CM, Veitsch CK, Harris M, Maruoka S, Frye J (2017) Cyber threat metrics. CreateSpace Independent Publishing Platform. ISBN 978-1-5424-7775-8
[Murer11] Murer S, Bonati B, Furrer FJ (2011) Managed evolution—a strategy for very large information systems. Springer, Heidelberg. ISBN 978-3-642-01632-5
[Oo11] Thida O, Aung KO (2011) Analyzing object-oriented systems with software quality metrics—an empirical study for software maintainability. LAP LAMBERT Academic Publishing, Saarbrücken. ISBN 978-3-8433-7748-5
[Poulin97] Poulin JS (1997) Measuring software reuse—principles, practices, and economic models. Addison-Wesley Longman, Reading. ISBN 0-201-63413-9
[Ross06] Ross JW, Weill P, Robertson DC (2006) Enterprise architecture as strategy—creating a foundation for business execution. Harvard Business Review Press, Boston. ISBN 978-1-5913-9839-4
[Scheid15] Scheid O (2015) AUTOSAR compendium, part 1: application & RTE. CreateSpace Independent Publishing Platform, Bruchsal. ISBN 978-1-5027-5152-2
[TOGAF11] The Open Group (2011) TOGAF® Version 9.1, 10th edn. Van Haren Publishing, Zaltbommel. ISBN 978-9-0875-3679-4
[Wong00] Wong C (2000) Security metrics—a beginner's guide. Osborne, New York. ISBN 978-0-071-74400-3
[Wout10] van't Wout J, Waage M, Hartman H, Stahlecker M, Hofmann A (2010) The integrated architecture framework explained: why, what, how. Springer, Berlin. ISBN 978-3-642-11517-2
[Young10] Young C (2010) Metrics and methods for security risk management. Syngress, Burlington. ISBN 978-1-8561-7978-2
9 The Future

Abstract
The coming years will change the way we engineer and operate software-systems. On the one hand, society's dependency on software is rising fast. On the other hand, the systems become more and more complicated, autonomous, and artificially intelligent, thus generating increased risks. The methods, processes, and technologies of systems engineering will have to cope with these challenges in order to build sustainable, dependable software. This chapter describes some of the likely influences on the software of the future.
Quote: “The best way to predict the future is to create it”. (Abraham Lincoln)
Predicting the future is difficult and in many cases useless. This is no different in software engineering. However, a few bad and a few good trends can be identified which will dramatically influence software engineering in the coming years. Only a very few of the apparent trends are presented here, but decisive ones. The bad trends are:

• The three devils of systems engineering will gain more and more impact;
• The risk of software-based systems increases, and more sophisticated threats appear.

On the good side, we will see (already partly manifest):

• The massive use of higher abstraction levels in software engineering;
• The widespread use of models to directly generate code (executable models);
• The possibility to formally and reliably check software artifacts with respect to verification and validation (provably correct software).

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019
F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8_9
9.1 The Rise in Power of the Three Devils

We met the three devils of systems engineering and their influence earlier in this book (see: Three Devils of Systems Engineering). Their power and impact will increase considerably in the future (Fig. 9.1).
9.1.1 Complexity

The first devil represents complexity. There is no doubt that the amount of functionality, the degree of interconnected operation, the rate of innovation, and the technical possibilities increase continuously over time. This results in ever larger and more complex systems (metrics for complexity will be introduced later). This significantly increases the difficulty of understanding, documenting, evolving, and operating the systems and contributes to the generation of risk [Mahajan14].

Quote: "Adding 25 percent more functionality doubles the complexity of the software-system." (Roger Sessions, 2008)
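Taken literally, the Sessions quote implies a power-law relation between functionality and complexity: if multiplying functionality by 1.25 doubles complexity, then complexity grows as functionality to the power k, where 1.25^k = 2. The following short computation (our extrapolation from the quote, not from the book) makes the consequence explicit.

```python
import math

# 1.25 ** k == 2  =>  k = ln 2 / ln 1.25
k = math.log(2) / math.log(1.25)
print(round(k, 2))  # ~3.11: complexity grows roughly with the cube of functionality

def relative_complexity(functionality_ratio):
    """Complexity multiplier for a given functionality multiplier."""
    return functionality_ratio ** k

print(round(relative_complexity(1.25), 2))  # 2.0 -- the quote itself
print(round(relative_complexity(2.0), 1))   # ~8.6 -- doubling functionality
```

So under this model, doubling a system's functionality makes it almost nine times as complex, which is why the complexity devil grows faster than the systems themselves.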
As soon as we move from systems to systems-of-systems (SoS, [Jamshidi09a, b, Luzeaux11]), additional challenges appear:

• In most cases, the constituent systems are not all under the same governance, i.e., some of the constituent systems have different owners (Fig. 3.14). This adds significant organizational and managerial complexity to the already high technical complexity;
Fig. 9.1 The three devils of software engineering: complexity, change, and uncertainty [© www.123rf.com (used with permission)]
• The individual constituent systems have their own objectives, i.e., they have been built for their specific tasks. Combining such systems into systems-of-systems needs alignment and coordination; • Systems-of-systems may introduce emergence [Bedau08, Bondavalli16, Mittal18, Charbonneau17, Sethna06] in the form of emergent properties, emergent behaviour or emergent information (Definition 9.1).
Definition 9.1: Emergent Property/Behavior
An emergent property/behavior is a property, behavior, or aggregated information which a collaboration of constituent systems has, but which the individual constituent systems do not have.

Achieving an emergent property, an emergent behaviour, or emergent information is the objective when building or assembling systems-of-systems (Example 9.1).

Example 9.1: Emergent Behaviour "Flying"
If you look at an airplane, none of its individual parts (engines, seats, the coffee machine in the plane, …) can fly. When you assemble the parts, suddenly the assembly—the "plane"—takes off and flies!

Emergence is thus a typical feature of very complex assemblies, i.e., systems-of-systems. Emergent properties can be positive (wanted, beneficial) or negative (unwanted, detrimental). Emergence must be understood and managed in complex systems [Mittal18]. Emergence can be functional, or it can create new information through the cooperation of the constituent systems (Fig. 9.2). Functional emergence is new functionality or behaviour of the SoS which is not part of any of the constituent systems. It can be desired/expected, i.e., be the design goal of the SoS. It can, however, also be unexpected and undesired, resulting in damaging behaviour or even accidents. Information emergence [Marr15, Schneier16] results from the aggregation of data from different constituent systems—in some cases without their consent (virtual SoS). Figure 9.2 shows the three characteristics of a system-of-systems:

1. The constituent systems with their contracts;
2. The conditions for joining the system-of-systems, i.e., the ensemble of all collaborating constituent systems;
3. The relationships (exchange of functionality, control, and information) between the constituent systems.

The next step in systems complexity is cyber-physical systems-of-systems (CPSoS, [Romanovsky17, Alur15, Lee17, Bondavalli16]).
Fig. 9.2 Emergent behaviour and emergent information (constituent systems within a governance region form an ensemble; conditions for joining the SoS govern membership; functional emergence and information emergence arise from the collaboration)
Definition 9.2: Cyber-Physical System-of-Systems
A cyber-physical system-of-systems (CPSoS) is a set or arrangement of collaborating constituent processing systems and cyber-physical systems resulting in unique capabilities of the integrated system.

With CPSoS, again, a new level of complexity is reached. Because of the close interaction between the computing systems and the real world, much of the complex behaviour, uncertainty, change, and randomness of the real world is imported into the computing system—and thus into the software! How can this increasing complexity be tamed? The following engineering practices are essential:

• Define, implement, maintain, and evolve an adequate structure (= architecture) of the complete system;
• Ensure conceptual integrity throughout the system;
• Use formal methods and concepts as much as possible.

These are covered in detail by the architecture principles in Part 2.
9.1.2 Change

Change is the relentless driver of the evolution of software-systems [Steffen16]. New functional and nonfunctional requirements from the markets, new technologies, new service delivery channels, etc., require a continuous adaptation of the software-system.
Fig. 9.3 Size of software in an automobile
As an example, the growth of software size in automobiles is shown in Fig. 9.3. More and more code is required to match the expectations of the users. Not only is the size of the software-systems increasing, but the rate of change is also rising: the pressure of short time-to-market calls for the production of more code in less time, leading to tensions between the business units and the IT department within an organization. The business units want the shortest possible time-to-market, whereas the IT department requires sufficient time for the delivery of high-quality software. Often, a situation as shown in Fig. 9.4 arises. Three scenarios are possible:

• A standard, responsible development process is executed—leading to the regular project end in Fig. 9.4. In this case, no damage is done to the software-system;
• The development team is forced by time constraints to take shortcuts—leading to a shorter time-to-market, but damage is done to the software-system (project end with shortcuts in Fig. 9.4);
• The development team is allowed to work according to the Managed Evolution strategy, thus using more development time to improve some part of the software-system related to the development at hand (project end following Managed Evolution in Fig. 9.4).

The impact of the second devil (= change) is to favour shortcuts and allow deterioration of the software-systems. How can this tendency be counteracted? The following prerequisites are essential:

• A strong, active management which is committed to Managed Evolution (on all levels of the organization);
• A competent, trustworthy central architecture team—with an excellent chief architect—having enough power to enforce Managed Evolution;
• A sufficient, comprehensive set of proven architecture principles, patterns, guidelines, and standards to direct the evolution of the software-system—to which this book contributes.
Fig. 9.4 Software-system evolution under time-to-market pressure (timeline from project start: project end with shortcuts, regular project end, project end following Managed Evolution; TtM = time-to-market)
9.1.3 Uncertainty

Uncertainty is the third devil of systems engineering. Uncertainty is a situation of imperfect, unknown, or fuzzy information [Kochenderfer15, Bergman15, Liu16, Ziv97, Garlan10]. The risk in such a situation is taking wrong or perilous decisions or actions, either by man or by machine (= software). It is this lack of reliable information which makes the third devil dangerous. Uncertainty is present during all phases of the software lifecycle. During the requirements definition and specification phase, unclear, misleading, contradictory, or incomplete information about the system to be built or extended is generated. During the development phase, many explicit or implicit assumptions, approximations, and simplifications are used. These may be unsubstantiated, wrong, incomplete, or unrealistic. The result is implementations which are based on shaky ground, opening the field for problems during later operation. Finally, during the operation phase of the software-system, we face the largest uncertainty: a violation of the underlying assumptions, faults and malfunctions in the infrastructure [Setola16], problems with partner systems, or disruptions due to hostile activities or natural disasters may trigger a (possibly dangerous) misbehaviour of our software-systems.

Quote: "Uncertainty permeates software development but is rarely captured explicitly in software models." (Hadar Ziv, 1996)
In autonomous systems, such as self-driving cars [Eliot17] or autonomous robots [Siegwart11, Sejnowski18], the working environment is very often uncertain: many
situations occur which cannot be predicted and in some cases are too complex and fast-changing to be foreseen. Autonomous systems are strongly based on artificial intelligence technologies, especially machine learning, and are sensitive to uncertainty [Li17a]. Therefore, dealing with uncertainty during the whole life cycle of our software-systems becomes essential. The means which help us to do so are:

• Formal methods and formal models to introduce precision in all artifacts of the software development process (lack of precision and completeness is a substantial factor for uncertainty);
• Monitoring the system's operation to detect anomalous, unsafe, critical, unmanageable, or uncontrollable situations and take safe action;
• Built-in resilience, such as fault-tolerance, fail-safe states, and graceful degradation.
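The "monitor and take safe action" idea can be sketched as a tiny runtime monitor that refuses to act on implausible inputs and degrades to a fail-safe state. The class, thresholds, and state names below are illustrative assumptions, not a real safety mechanism.

```python
class Monitor:
    """Watches one input channel and latches into a fail-safe state on anomalies."""

    def __init__(self, lower, upper):
        self.lower, self.upper = lower, upper
        self.state = "NORMAL"

    def observe(self, value):
        """Enter a fail-safe state on anomalous input instead of acting on it."""
        if not (self.lower <= value <= self.upper):
            self.state = "FAIL_SAFE"  # e.g. controlled stop, graceful degradation
        return self.state

# Hypothetical plausibility limits for a vehicle speed sensor (km/h).
speed_monitor = Monitor(lower=0.0, upper=130.0)
print(speed_monitor.observe(87.0))   # NORMAL
print(speed_monitor.observe(412.0))  # FAIL_SAFE: implausible sensor reading
print(speed_monitor.observe(90.0))   # stays FAIL_SAFE until explicitly reset
```

Note the latching behaviour: once an anomaly is seen, the monitor stays in the safe state rather than silently resuming, which reflects the fail-safe-state idea from the list above.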
9.2 Increased Risk and More Sophisticated Threats

A first consequence of increasing complexity is that systems will have to cope with more and more failures, faults, errors, malfunctions of parts, and unavailability of partner systems [Gertsbakh11, Verma13]. Our systems must be able to survive such incidents—also multiple or chained incidents—with acceptable quality of service and never with fatal consequences. We must be aware of these serious threats and understand, assess, and mitigate the risks they generate.

Quote: "If you think technology can solve your security problems, then you don't understand the problems and you don't understand the technology." (Bruce Schneier)
The second consequence is the increasingly hostile environment in which our systems have to operate. The first adverse condition is malicious attacks on our systems, such as hacking, denial-of-service, phishing, tracking cookies, and malware [Volynkin09]. Many of these attacks have a commercial background; in fact, the digital crime industry [Kshetri10, Goodman15, Taylor14] is a flourishing, highly profitable, often well-organized endeavour. Digital warfare [Segal16, Hadji-Janev15] has become a real, highly dangerous threat to many of our information and cyber-physical systems [Schneier18]. Digital espionage [Corera17, DeSilva15] has also reached a very sophisticated and damaging level of success, leaking important technical development and research results to unknown parties. Furthermore, stealing confidential and personal information, such as e-mails, and using it to generate fake news or to damage the reputation or the political activities of persons has become a fact in recent years. The third consequence is the current trend of unlimited data gathering [Schneier16, Mitnick17, O'Neil17], covering web surfing history, purchase habits, movement patterns, communication activities, or circles of friends. This constitutes an enormous base of highly personal information in mostly unknown hands. Such information can be misused; its use for building targeted advertisement profiles is the least worrying.
9 The Future
Last but not least, the tremendous progress in intelligent and autonomous systems—especially systems based on machine learning [Alpaydin16, Sejnowski18]—generates new, severe fears and risks [Harris12, Kaplan17, Ross17]. Note that we need to consider not only obvious autonomous systems, such as self-driving cars and trains, but also uncounted applications in robotics, finance, commerce, marketing, etc. It can be expected that the increase in complexity, the acceleration of the rate of change, and the higher uncertainty during development and operation of the systems will open up new technical opportunities for unintentional and intentional danger. These technical possibilities carry potentially immeasurable risk and need to be addressed in our systems engineering activities—at least partially. An immense responsibility lies with lawmakers, political institutions, and the public [Marion16]. The two sections above described some of the very likely adverse impacts on systems and systems engineering. Fortunately, there is also some good news (there is more light on the horizon, but these three topics will likely have the strongest impact in the near future): • Increasing the abstraction level in software and systems engineering is a potent instrument to manage complexity; • Defining and specifying the software—including functional and non-functional properties—by formal models reduces complexity and is a strong defense against uncertainty during the development process. Automatically generating the code from these executable models massively reduces the risk of errors in the code; • A powerful ally in producing dependable software is the formal verification and validation of software artefacts, starting from models and reaching down to operating systems. This will eventually lead to provably correct software, covering not only the functional behaviour but also the non-functional properties (security, safety, …).
9.3 Increasing Abstraction Levels 9.3.1 Separation of Concerns Quote: “The core of software development is the design of abstractions. An abstraction is not a module, or an interface, class, or method: it is a structure, pure and simple—an idea reduced to its essential form”. (Daniel Jackson, 2006)
One powerful method to manage the complexity of software-systems is to use abstractions. The primary abstraction structure of our software-system is generated by partitioning the software-system into layers. Abstraction structures are not limited to two-dimensional structures: See Fig. 3.8 for a three-dimensional abstraction of a cyber-physical systems-of-systems architecture, where on each layer, only the necessary information with an adequate degree of detail is presented. The uppermost layer only contains the
fundamental concepts and a few properties and relationships. The lower layers successively add more and more information. Definition 9.3: Abstraction Abstraction is the act of representing essential features without including the background details or explanations. In the computer science and software engineering domain, the abstraction principle is used to reduce complexity and allow efficient design and implementation of complex software-systems (www.techopedia.com) The fundamental principle of abstraction [Jackson06] is: • Identify, define, and specify the specific concerns with respect to the development, implementation, and documentation of the software-system; • Separate individual concerns; • Define individual abstractions for each concern.
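The abstraction principle above can be illustrated in code. The following is a minimal sketch (all names are hypothetical, chosen for illustration only): one concern, the storage of sensor readings, is captured in its own abstraction, so the logic that uses it never depends on a concrete implementation.

```python
from abc import ABC, abstractmethod

# The "persistence" concern, reduced to its essential form: an abstraction
# that states what a store must do, without any background details.
class SensorStore(ABC):
    @abstractmethod
    def save(self, sensor_id: str, value: float) -> None: ...

    @abstractmethod
    def latest(self, sensor_id: str) -> float: ...

# One possible realization of the abstraction; others (database-backed,
# remote, ...) could be substituted without touching the client code.
class InMemoryStore(SensorStore):
    def __init__(self) -> None:
        self._data: dict[str, float] = {}

    def save(self, sensor_id: str, value: float) -> None:
        self._data[sensor_id] = value

    def latest(self, sensor_id: str) -> float:
        return self._data[sensor_id]

def record_reading(store: SensorStore, sensor_id: str, value: float) -> None:
    # Client logic depends only on the abstraction, not on any concrete store
    store.save(sensor_id, value)

store = InMemoryStore()
record_reading(store, "t1", 21.5)
print(store.latest("t1"))  # 21.5
```

The separation of the two concerns is exactly what the three bullet points prescribe: the concern is identified, isolated, and given its own abstraction.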
9.3.2 Abstractions Unfortunately, the practice of describing and documenting software-systems is very fragmented. For a complete documentation, a number of different abstractions are required. A full example is presented here (Example 9.2): Example 9.2: Application Landscape Abstractions
We restrict ourselves to the application landscape as defined in Fig. 3.8. First, we extract the elements of interest from Fig. 3.8: • The horizontal layer “application architecture”; • The hierarchy; resulting in Fig. 9.5. In Fig. 9.5, we see the constituent elements of the application architecture layer: • Lowest hierarchy level: Sensors and actuators, which allow the exchange of signals with the physical world. Of course, there are systems without sensors and actuators, such as an Internet banking application; we treat here the general case; • The sensors feed their signals into the software components (first level of processing), and software components issue signals for influencing the physical world via actuators; • Components are assembled into applications; • The totality of all applications of one organization (= under the same governance) forms the application landscape; • Application landscapes of different organizations cooperate to reach a common goal (which each individually could not reach), thus forming a system-of-systems.
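This containment hierarchy can be sketched as a simple data structure. The following is an illustrative sketch only (the class and example names are not from the book): sensors and actuators feed components, components form applications, applications under one governance form a landscape, and cooperating landscapes form a system-of-systems.

```python
from dataclasses import dataclass, field

@dataclass
class Sensor:                      # lowest level: exchange with the physical world
    name: str

@dataclass
class Component:                   # first level of signal processing
    name: str
    sensors: list[Sensor] = field(default_factory=list)

@dataclass
class Application:                 # assembled from components
    name: str
    components: list[Component] = field(default_factory=list)

@dataclass
class ApplicationLandscape:        # all applications under the same governance
    organization: str
    applications: list[Application] = field(default_factory=list)

@dataclass
class SystemOfSystems:             # cooperating landscapes with a common goal
    goal: str
    landscapes: list[ApplicationLandscape] = field(default_factory=list)

sos = SystemOfSystems(
    goal="coordinated traffic management",
    landscapes=[
        ApplicationLandscape(
            "CityOps",
            [Application("TrafficControl",
                         [Component("FlowEstimator", [Sensor("loop-detector-7")])])],
        )
    ],
)
print(len(sos.landscapes))  # 1
```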
Fig. 9.5 Application architecture layer with element hierarchy
A number of abstractions can now be introduced in Fig. 9.5, resulting in Fig. 9.6. The abstractions in Fig. 9.6 are: • The system-of-systems (SoS) model [COMPASS12a, b, Jamshidi09a, b]; • Formal contracts [Plösch04, Benveniste12, Paik17, Newcomer02]; • The application landscape model (UML [Seidl15, Lano09], SysML [Weilkiens08], and ADL [Dissaux05, Feiler12]); • Individual application models (UML [Seidl15, Lano09], SysML [Weilkiens08, Dori16], and ADL [Dissaux05, Feiler12]); • Architecture views [Rozanski11, Clements10]; • The domain model [Evans04, Mayr16]; • The Business Object Model; • Domain-specific languages [Kleppe09, Fowler10, Voelter13]; • Application and domain ontologies [Gaševic09, Tudorache08]; • Interface contracts [Pugh06, Cambell16]; • The component model [Cheesman00, Stevens04]; • The component composition model [Assmann03, Szyperski02, Heineman01, Rausch08]. Quote: “Abstractions are articulated, explained, reviewed and examined deeply, in isolation from the details of the implementation”. (Daniel Jackson, 2006)
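One of the abstractions listed above, the interface contract, can be made concrete in a few lines. The sketch below (a hypothetical example, not taken from the cited works) checks pre- and postconditions at a component boundary, so a client can rely on the published behaviour without knowing the implementation.

```python
# Interface contract, design-by-contract style: the assertions express the
# contract between caller and component, separate from the business logic.
def transfer(balance: float, amount: float) -> float:
    # Preconditions: obligations of the caller
    assert amount > 0, "contract violation: amount must be positive"
    assert amount <= balance, "contract violation: insufficient balance"

    new_balance = balance - amount

    # Postcondition: guarantee of the component
    assert new_balance == balance - amount
    return new_balance

print(transfer(100.0, 30.0))  # 70.0
```

A call violating the contract, such as `transfer(10.0, 20.0)`, fails immediately at the boundary instead of producing a silently wrong result deep inside the implementation.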
The significant number of possible and useful abstractions in Example 9.2 shows that software engineering still has a long way to go to develop new, consistent abstractions
Fig. 9.6 Abstractions in application architecture
with larger coverage. Such progress is expected in the future, especially in the wake of more formal methods.
9.4 Models-to-Code Quote: “Modeling and translation are hard because abstraction is hard and requires practice to master”. (Leon Starr, 2017)
One big step forward in software quality and productivity is the generation of executable code directly from models (models-to-code). Today, many approaches exist, mostly based on procedures as shown in Fig. 9.7: The starting point is a sufficiently detailed and complete model, defining both structure and behaviour. The model must correctly reflect all functional and non-functional properties. The model should then be verified by a formal model checker (model checking). The next step is a translator, which translates the model syntax into executable code. During the translation, the generated code should, whenever possible, also be formally checked [Nieves17, Boulanger12, Hinchey13]. The key to models-to-code is a modeling language with enough precision and sufficient completeness. There are basically three possibilities: • Specific modeling languages, such as xUML (= Executable UML, [Mellor02, Starr17, Raistrick04]) or SysADL [Oquendo16]; • Starting with a domain model, using a domain-specific language (DSL), and generating the code [Kelly08]; • Specifying the business logic in the form of business rules and using a business-rule engine for their direct execution [Chisholm03, Ross03, Halle01].
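The third possibility can be sketched in miniature. The following is an illustrative toy (not any of the cited products): the business logic is stated as declarative rules, and a minimal engine executes them directly instead of the logic being hand-coded.

```python
# A business rule is (name, condition, action); the engine interprets the
# rules directly, so changing the business logic means changing data, not code.
rules = [
    ("vip_discount",  lambda o: o["customer"] == "vip", lambda o: o.update(discount=0.15)),
    ("bulk_discount", lambda o: o["quantity"] >= 100,   lambda o: o.update(discount=0.10)),
]

def run_rules(order: dict) -> dict:
    for name, condition, action in rules:
        if condition(order):
            action(order)
            break  # first matching rule wins in this simple engine
    return order

order = {"customer": "vip", "quantity": 10, "discount": 0.0}
print(run_rules(order)["discount"])  # 0.15
```

Real rule engines add conflict resolution, rule chaining, and authoring tools, but the principle of direct execution of the specification is the same.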
Fig. 9.7 Code generation from models
As of today (2018), the models-to-code approach does not have a significant industrial following. The reasons are (1) that modeling is difficult, (2) the available tools are not mature enough, and (3) integrating a new model into an existing, larger model is hard. It is expected, however, that the importance of automatic code generation will considerably grow in the future.
9.5 Provably Correct Software One dream of the software community—especially for safety-critical systems—is provably correct software. For many systems—especially the mission-critical cyber-physical systems—provable correctness is an essential, demanding future requirement. Provable correctness is a difficult property to achieve and requires both a strongly partitioned architecture and a powerful formal mathematical foundation. Quote: “Program testing can be used to show the presence of bugs, but never to show their absence”. (E. W. Dijkstra)
We start with the definition of correctness: Definition 9.4: Correctness Correctness means that the program and the hardware faithfully implement the control formulas of the mathematical model for the total system, and nothing else. (EU Project ProCoS II: Provably Correct Systems)
Correctness in this sense means that the program meets its specifications 100%. The specifications are generated from the business expectations, via formal requirements, leading to formal specifications, which then form the foundation for source program construction. Quote: “Although correctness proofs are invaluable, they provide no validation that a specification is correct. If the formal requirements erroneously state that our autopilot software shall keep the aircraft upside down in the southern hemisphere, we can analyze our program to prove that it will indeed flip our plane as it crosses the equator. We still need validation through testing or other means to show that we are building the right application”. (John W. McCormick, 2015)
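The distinction between verification and validation can be illustrated on a deliberately tiny example (a hedged sketch, not a real proof system): the specification is a predicate, the program is checked exhaustively against it over a small finite domain, which constitutes a proof of conformance for that domain, yet says nothing about whether the specification itself captures the stakeholders' intent.

```python
# Formal specification: the result shall be the maximum of x and y.
def spec(x: int, y: int) -> int:
    return max(x, y)

# Implementation under verification.
def program(x: int, y: int) -> int:
    return x if x >= y else y

# Exhaustive verification over the chosen finite domain: for this domain,
# conformance of program to spec is proven, not merely sampled.
domain = range(-50, 51)
assert all(program(x, y) == spec(x, y) for x in domain for y in domain)
print("program conforms to spec on the domain")
```

If the specification had erroneously demanded the minimum, the same check would succeed against a minimum-computing program: verification proves conformance to the specification, while validation must establish that the specification is the right one.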
Once the correctness of the source code has been proven, the focus shifts to the compiler translating the source code into executable code. The compiler must also be provably correct [Palsberg92]. In addition, the operating system, including all required system software, must operate in a provably correct way [Zhu01]. Finally, the execution platform, i.e., the run-time hardware, must execute the programs correctly [Jifeng93]. Much effort has been invested in this field. An illustrative example is shown in Example 9.3: the result of the ProCoS/ProCoS II projects [He95]. Example 9.3: ProCoS Tower
The ProCoS Tower is the strongly partitioned architecture used in the project ProCoS (= provably correct systems, [Hinchey17]). In Fig. 9.8, the ProCoS Tower is shown embedded in the horizontal architecture layers of Fig. 3.6. The ProCoS Tower strongly partitions the software to be proven into: • The informal expectations—part of the business architecture layer: Here, the stakeholders of the future functionality describe their expectations; • The formal requirements and the formal system specification—part of the application and information architecture layers: Here, the expectations are strongly formalized to form the foundation for the proving algorithms; • The programs—part of the application and information architecture layers: The source code is developed from the formal requirements and system specifications; • The code—part of the integration and technical architecture layers: This is the executable format tailored to the underlying hardware; • The hardware (processor): Run-time environment for the executable code. A number of formal languages have been developed and proven correct in the last few decades, such as the Z-language [Jacky97, Diller94, Knight12], the B-language [Abrial10, Kernighan99], or specialized languages like SPARK [McCormick15]. With the rising importance of safety-critical systems—and, at the same time, higher product liability and regulatory obligations—a renewed interest in provably correct software is visible [Hinchey17, Hobbs15, Rierson13, Griffor16].
Fig. 9.8 The ProCoS tower
Progress toward the wide application of provably correct systems will be slow, but it is necessary. Specialized industries are carefully taking it up, which requires a new breed of systems engineers and improved tool chains.
References [Abrial10] Abrial JR (2010) Modeling in event-B: system and software engineering. Cambridge University Press, Cambridge. ISBN 978-0-521-89556-9 [Alpaydin16] Alpaydin E (2016) Machine learning—the new AI. The MIT Press, Cambridge. ISBN 978-0-262-52951-8 [Alur15] Alur R (2015) Principles of cyber-physical systems. The MIT Press, Cambridge. ISBN 978-0-262-02911-7 [Assmann03] Assmann U (2003) Invasive software composition. Springer, Berlin. ISBN 978-3-540-44385-8 [Bedau08] Bedau MA, Humphreys P (eds) (2008) Emergence—contemporary readings in philosophy and science. The MIT Press, Cambridge. ISBN 978-0-262-02621-5 [Benveniste12] Benveniste A, Caillaud B, Nickovic D, Passerone R, Raclet JB, Reinkemeier P, Sangiovanni-Vincentelli A, Damm W, Henzinger T, Larsen K (2012) Contracts for systems design. INRIA Research Report No. 8147, November 2012, ISSN 0249-6399. http://hal.inria.fr/docs/00/75/85/14/PDF/RR-8147.pdf. Accessed 23 Sep 2017 [Bergman15] Bergman N (2015) Surviving the techstorm—strategies in times of technological uncertainty. LID Publishing Ltd, London. ISBN 978-1-9106-4919-0
[Bondavalli16] Bondavalli A, Bouchenak S, Kopetz H (eds) (2016) Cyber-physical systems of systems: foundations—a conceptual model and some derivations: the AMADEOS legacy. Springer Lecture Notes in Computer Science, Heidelberg. ISBN 978–3-319-47589-9 [Boulanger12] Boulanger JL (ed) (2012) Formal methods—industrial use from model to the code. Wiley, London, 2012. ISBN 978–1-848-21362-3 [Cambell16] Cambell E (2016) Microservices architecture—make the architecture of a software as simple as possible. CreateSpace Independent Publishing Platform. ISBN 978–1-5300-0053-1 [Charbonneau17] Charbonneau P (2017) Natural complexity—a modeling handbook. Princeton University Press, Princeton. ISBN 978–0-691-17035-0 [Cheesman00] Cheesman J (2000) UML components—a simple process for specifying component-based software. Addison-Wesley Longman (Component Software Series), Amsterdam. ISBN 978–0-201-70851-6 [Chisholm03] Chisholm M (2003) How to build a business rules engine—extending application functionality through metadata engineering. Morgan Kaufmann, Heidelberg. ISBN 978–1-558-60918-1 [Clements10] Clements P, Bachmann F, Bass L, Garlan D et al (2010) Documenting software architectures—views and beyond, 2nd edn. Addison-Wesley (SEI Series in Software Engineering), New York. ISBN 978–0-321-55268-6 [COMPASS12a] COMPASS (Comprehensive Modelling for Advanced Systems of Systems, EU-Project, FP7-287829) (2012) Initial report on SoS architectural models Document Number D22.1, March 2012. http://www.compass-research.eu. Accessed 23 Sep 2017 [COMPASS12b] COMPASS (Comprehensive Modelling for Advanced Systems of Systems, EU-Project, FP7-287829) (2013) Initial report on guidelines for architectural level SoS modelling Document Number D21.2, March 2013. http://www.compass-research.eu. Accessed 23 Sep 2017 [Corera17] Corera G (2017) Cyberspies—the secret history of surveillance, hacking, and digital espionage. Pegasus Books, New York. 
ISBN 978-1-6817-7459-6 [DeSilva15] De Silva E (ed) (2015) National security and counterintelligence in the era of cyber espionage. Information Science Reference (Advances in Digital Crime, Forensics, and Cyber Terrorism). ISBN 978-1-4666-9661-7 [Diller94] Diller A (1994) Z—an introduction to formal methods, 2nd edn. Wiley, New York. ISBN 978-0-471-93973-3 [Dijkstra76] Dijkstra EW (1976) A discipline of programming. Pearson Education & Prentice-Hall, Englewood Cliffs. ISBN 978-0-132-15871-8 [Dissaux05] Dissaux P (ed) (2014) Architecture description languages. Springer (IFIP Advances in Information and Communication Technology), New York. ISBN 978-1-461-49895-7 [Dori16] Dori D (2016) Model-based systems engineering with OPM and SysML. Springer, New York. ISBN 978-1-493-93294-8 [Eliot17] Eliot LB (2017) Advances in AI and autonomous vehicles: cybernetic self-driving cars: practical advances in Artificial Intelligence (AI) and machine learning. LBE Press Publishing, South Carolina. ISBN 978-0-6929-1517-2 [Evans04] Evans E (2006) Domain-driven design—tackling complexity in the heart of software, 7th edn. Pearson Education (Addison-Wesley), Boston. ISBN 978-0-321-12521-5
[Feiler12] Feiler PH, Gluch DP (2012) Model-based engineering with AADL. AddisonWesley Longman (SEI Series in Software Engineering), Amsterdam. ISBN 978-0-321-88894-5 [Fowler10] Fowler M, Parsons R (2010) Domain specific languages. Addison-Wesley, New York. ISBN 978-0-321-71294-3 [Garlan10] Garlan D (2010) Software engineering in an uncertain world FoSER 2010, November 7–8, Santa Fe, New Mexico, USA. https://www.nitrd.gov/ nitrdgroups/images/0/08/Software_Engineering_in_an_Uncertain_World_-_ p125.pdf. Accessed 12 June 2017 [Gaševic09] Gaševic D, Djuric D, Devedžic V (2009) Model driven engineering and ontology development, 2nd edn. Springer, Heidelberg, Germany. ISBN 978-3-642-00281-6 [Gertsbakh11] Gertsbakh I, Shpungin Y (2011) Network reliability and resilience. Springer, Heidelberg. ISBN 978-3-642-22373-0 (SpringerBriefs in Electrical and Computer Engineering) [Goodman15] Goodman M (2015) Future crimes—everything is connected, everyone is vulnerable and what we can do about it. Doubleday Publisher, London. ISBN 978-0-385-53900-5 [Griffor16] Griffor E (2016) Handbook of system safety and security—cyber risk and risk management, cyber security, threat analysis, functional safety, software systems, and cyber physical systems. Syngress, Boston. ISBN 978-0-1280-3773-7 [Hadji-Janev15] Hadji-Janev M, Bogdanoski M (eds) (2015) Handbook of research on civil society and national security in the era of cyber warfare. Information Science Reference (Advances in Digital Crime, Forensics, and Cyber Terrorism), Hershey. ISBN 978-1-4666-8793-6 [Halle01] von Halle B (2001) Business rules applied—building better systems using the business rules approach. Wiley, New York. ISBN 978-0-471-41293-9 [Harris12] Harris R (2012) The fear index. Arrow, London. ISBN 978-0-099-55327-4 [He95] He J, Hoare CAR, Fränzle M, Müller-Olm M, Rüdgier Olderog E, Schenke M, Hansen MR, Ravn AP, Rischel H (1995) Provably correct systems ESPRIT project 7071 „Provably Correct Systems“. 
https://pdfs.semanticscholar.org/706 b/67d2aad1f7d1a45ba15d4c639b9c0d0654e2.pdf. Accessed 24 Sep 2017 [Heineman01] Heineman GT, Councill WT (2001) Component-based software engineering—putting the pieces together. Addison-Wesley, Boston. ISBN 978-0-201-70485-3 [Hinchey13] Hinchey MG (2013) Requirements to design to code—towards a fully formal approach to automatic code generation. Nasa Technical Reports Server. Hampton, VA, US (NASA Langley Research Center). ISBN 978-1-2892-5458-2 [Hinchey17] Hinchey M, Bowen JP, Olderog ER (eds) (2017) Provably correct systems. Springer, Cham, Switzerland (NASA Monographs in Systems and Software Engineering). ISBN 978-3-319-48627-7 [Hobbs15] Hobbs C (2015) Embedded software development for safety-critical systems. Taylor & Francis, London. ISBN 978-1-498-72670-2 [Jackson06] Jackson D (2006) Software abstractions—logic, language, and analysis. MIT Press, Cambridge. ISBN 978-0-262-10114-1 [Jackson10] Jackson S (2010) Architecting resilient systems—accident avoidance and survival and recovery from disruptions. Wiley, Hoboken. ISBN 978-0-470-40503-1 [Jacky97] Jacky J (1997) The way of Z—practical programming with formal methods. Cambridge University Press, Cambridge. ISBN 978-0-521-55976-8
[Jamshidi09a] Jamshidi M (ed) (2009) Systems of systems engineering—principles and applications. CRC Press & Taylor & Francis Group, Boca Raton. ISBN 978-1-4200-6588-6 [Jamshidi09b] Jamshidi M (ed) (2009) Systems of systems engineering—innovations for the 21st century. Wiley, Hoboken. ISBN 978-0-470-19590-1 [Jifeng93] Jifeng H, Page I, Bowen J (2005) Towards a provably correct hardware implementation of OCCAM CHARME 1993: correct hardware design and verification methods, pp 214–225. https://link.springer.com/chapter/10.1007/BFb0021726. Accessed 13 June 2005 [Kaplan17] Kaplan J (2017) Humans need not apply—a guide to wealth and work in the age of artificial intelligence. Yale University Press, New Haven. ISBN 978-0-3002-2357-6 [Kelly08] Kelly S, Tolvanen JP (2008) Domain-specific modeling—enabling full code generation. Wiley, Hoboken. ISBN 978-0-470-03666-2 [Kernighan99] Kernighan BW (1999) A tutorial introduction to the language B. Bell Laboratories, Murray Hill. https://web.archive.org/web/20150611114644/https://www.bell-labs.com/usr/dmr/www/btut.pdf. Accessed 9 Oct 2017 [Kleppe09] Kleppe A (2009) Software language engineering—creating domain-specific languages using metamodels. Addison-Wesley, Upper Saddle River. ISBN 978-0-321-55345-4 [Knight12] Knight J (2012) Fundamentals of dependable computing for software engineers. Chapman and Hall und CRC, Boca Raton. ISBN 978-1-439-86255-1 [Kochenderfer15] Kochenderfer MJ (2015) Decision making under uncertainty—theory and application. MIT Lincoln Laboratory Series, Cambridge. ISBN 978-0-262-02925-4 [Kshetri10] Kshetri N (2010) The global cybercrime industry—economic, institutional and strategic perspectives. Springer, Heidelberg. ISBN 978-3-642-11521-9 [Lano09] Lano K (2009) Model-driven software development with UML and java. Cengage Learning, London. ISBN 978-1-844-80952-3 [Lee17] Lee EA, Seshia SA (2017) Introduction to embedded systems—a cyberphysical systems approach, 2nd edn. The MIT Press, Cambridge. 
ISBN 978-0-262-53381-2 [Li17a] Li D, Du Y (2017) Artificial intelligence with uncertainty. Taylor & Francis, Boca Raton. ISBN 978-1-498-77626-4 [Liu16] Liu B (2016) Uncertainty theory. Springer, Berlin. ISBN 978–3-662-49988-7 (Softcover reprint of the original 4th edition 2015) [Luzeaux11] Luzeaux D, Ruault JR, Wipplere JL (eds) (2011) Complex systems and systems of systems engineering. iSTE Publishing, London. ISBN 978–1-84821253-4 (Distributed by John Wiley & Sons Inc., New York) [Mahajan14] Mahajan S (2014) The art of insight in science and engineering—mastering complexity. The MIT Press, Cambridge. ISBN 978-0-262-52654-8 [Marion16] Marion N (2016) Introduction to cybercrime—computer crimes, laws, and policing in the 21st century. Praeger Security International, Boston. ISBN 978-1-4408-3533-9 [Marr15] Marr B (2015) Big data—using SMART big data, analytics and metrics to make better decisions and improve performance. Wiley, New York. ISBN 978-1-118-96583-2 [Mayr16] Karagiannis D, Mayr HC, Mylopoulos J (eds) (2016) Domain-specific conceptual modeling—concepts, methods and tools. Springer, Cham, Switzerland. ISBN 978-3-319-39416-9
[McCormick15] McCormick JW, Chapin PC (2015) Building high integrity applications with SPARK. Cambridge University Press, Cambridge. ISBN 978-1-107-65684-0 [Mellor02] Mellor SJ, Balcer M (2002) Executable UML—a foundation for model driven architecture. Addison-Wesley Longman, Amsterdam. ISBN 978–0-201-74804-8 [Mitnick17] Mitnick K (2017) The art of invisibility—the world’s most famous hacker teaches you how to be safe in the age of big brother and big data. Little, Brown and Company, New York. ISBN 978-0-3165-5454-1 [Mittal18] Mittal S, Diallo S, Tolk A (eds) (2018) Emergent behaviour in complex systems—a modeling and simulation approach. Wiley, Hoboken. ISBN 978-1-119-37886-0 [Newcomer02] Newcomer E (2002) Understanding web services: XML, WSDL, SOAP, and UDDI. Addison-Wesley Professional (Independent Technology Guides), Boston. ISBN 978-0-201-75081-2 [Nieves17] Nieves J (2017) Formal methods industrial use from model to the code. CreateSpace Independent Publishing Platform. ISBN 978–1-9746-1992-4 [O’Neil17] O’Neil C (2017) Weapons of math destruction—how big data increases inequality and threatens democracy. Penguin, London. ISBN 978–0-141-98541-1 [Oquendo16] Oquendo F, Leite J, Batista T (2016) Software architecture in action—designing and executing architectural models with SysADL grounded on the OMG SysML standard. Springer. ISBN 978–3-319-44337-9 [Paik17] Paik HY, Lemos AL, Barukh MC, Benatallah B, Natarajan A (2017) Web service implementation and composition techniques. Springer. ISBN 978–3-319-55540-9 [Palsberg92] Palsberg J (1992) A provably correct compiler generator Proceedings ESOP’92. Springer (LNCS 582), pp 418–434. http://web.cs.ucla. edu/~palsberg/paper/esop92.pdf. Accessed 8 Oct 2017 [Plösch04] Plösch R (2004) Contracts, scenarios and prototypes—an integrated approach to high quality software. Springer, Berlin. ISBN 978-3-540-43486-0 [Pugh06] Pugh K (2006) Interface-oriented design. Pragmatic Bookshelf. 
ISBN 978–0-9766-9405-2 [Raistrick04] Raistrick C, Francis P, Wright J, Carter C, Wilkie I (2004) Model driven architecture with executable UML. Cambridge University Press, Cambridge. ISBN 978–0-521-53771-1 [Rausch08] Rausch A (ed) (2008) The common component modeling example—comparing software component models. Springer Lecture Notes in Computer Science/ Programming and Software, Berlin. ISBN 978-3-540-85288-9 [Rierson13] Rierson L (2013) Developing safety-critical software—a practical guide for aviation software and DO-178C compliance. Taylor & Francis, Boca Raton. ISBN 978-1-439-81368-3 [Romanovsky17] Romanovsky A, Ishikawa F (eds) (2017) Trustworthy cyber-physical systems engineering. CRC Press, Boca Raton. ISBN 978-1-4978-4245-0 [Ross03] Ross RG (2003) Principles of the business rule approach. Pearson Education & Addison-Wesley Information Technology, Boston. ISBN 978-0-201-78893-8 [Ross17] Ross A (2017) The industries of the future. Simon + Schuster, London. ISBN 978-1-4711-3526-2 [Rozanski11] Rozanski N, Woods E (2011) Software systems architecture—working with stakeholders using viewpoints and perspectives, 2nd edn. Addison-Wesley, Upper Saddle River. ISBN 978-0-321-71833-4
[Schneier16] Schneier B (2016) Data and Goliath—the hidden battles to collect your data and control your world. Norton & Company, New York. ISBN 978–0-3933-5217-7 [Schneier18] Schneier B (2018) Click Here to Kill Everybody: Security and Survival in a Hyper-connected World. Norton & Company, New York, ISBN 978-0-393-60888-5 [Sessions08] Sessions R (2008) Simple architectures for complex enterprises.Microsoft Press, Redmond. ISBN 978-0-7356-2578-5 [Segal16] Segal A (2016) The hacked world order—how nations fight, trade, maneuver, and manipulate in the digital age. PublicAffairs, New York. ISBN 978-1-6103-9415-4 [Seidl15] Seidl M, Scholz M, Huemer C, Kappel G (2015) UML @ classroom— an introduction to object-oriented modeling. Springer, Cham. ISBN 978–3-319-12741-5 [Sejnowski18] Sejnowski TJ (2018) Deep learning revolution. The MIT Press, Cambridge. ISBN 978-0-262-03803-4 [Sethna06] Sethna JP (2006) Entropy, order parameters, and complexity. Oxford University Press, Oxford. ISBN 978-0-19-856677-9 [Setola16] Setola R, Rosato V, Kyriakides E (eds) (2016) Managing the complexity of critical infrastructures—a modelling and simulation approach. Springer International Publishing, Cham. ISBN 978-3-319-51042-2 [Siegwart11] Siegwart R, Nourbakhsh IR, Scaramuzza D (2011) Introduction to autonomous mobile robots. The MIT Press, Cambridge. ISBN 978–0-262-01535-6. (Intelligent Robotics and Autonomous Agents Series) [Starr17] Starr L, Mangogna A, Mellor S (2017) Models to code—with no mysterious gaps. Apress, New York. ISBN 978–1-4842-2216-4 [Steffen16] Steffen B (ed) (2016) Transactions on foundations for mastering change I, vol 1. Springer Lecture Notes in Computer Science, Cham. ISBN 978–3-319-46507-4 [Stevens04] Stevens P, Pooley R, Maciaszek L (2004) Using UML—software engineering with objects and components. Addison-Wesley, Munich. ISBN 978–0-582-89596-6 [Szyperski02] Szyperski C, Gruntz D, Murer S (2002) Component software—beyond object-oriented programming. 
Addison-Wesley (Component Software Series), London. ISBN 978–0-201-74572-6 [Taylor14] Taylor RW, Fritsch EJ, Liederbach J (2014) Digital crime and digital terrorism. Prentice Hall. Upper Saddle River, New Jersey, USA. ISBN 978-0-133-45890-9 [Tudorache08] Tudorache T (2008) Ontologies in engineering—modeling, consistency and use cases. VDM Verlag, Saarbrücken. ISBN 978-3-639-04979-4 [Verma13] Verma AK, Ajit S, Kumar M (2013) Dependability of networked computer-based systems. Springer, Heidelberg (Springer Series in Reliability Engineering). ISBN 978-1-447-12693-5 [Voelter13] Voelter M (2013) DSL engineering—designing, implementing and using domain-specific languages. CreateSpace Independent Publishing Platform. Upper Saddle River, New Jersey, USA. ISBN 978-1-4812-1858-0 [Volynkin09] Volynkin A (2009) Modern malicious software—taxonomy and advanced detection methods. VDM Verlag, Saarbrücken. ISBN 978-3-6391-2295-4
[Weilkiens08] Weilkiens T (2008) Systems engineering with SysML/UML—modeling, analysis, design. Morgan Kaufmann (The MK/OMG Press), Heidelberg. ISBN 978-0-123-74274-2 [Zhu01] Zhu MY, Luo L, Xiong GZ (2001) A provably correct operating system: δ-Core ACM SIG operating systems review, Vol. 35, No 1, January 2001. https://www.researchgate.net/publication/234830984_A_Provably_Correct_ Operating_System_DeltaCORE. Accessed 8 Oct 2017 [Ziv97] Ziv H, Richardson DJ (1997) The uncertainty principle in software engineering ICSE’97. http://jeffsutherland.org/papers/zivchaos.pdf. Accessed 19 Sep 2017
10 Special Topics
Abstract
The chapters up to this point have presented an ideal, architecture-centric view of software-systems. Daily experience shows that this idealized world is under continuous pressure from many fronts: cost factors, time-to-market, short-sighted managers, myopic project teams, and, last but not least, new development paradigms trying to reduce the cost and time-to-market of software development. The philosophy of future-proof software-systems is endangered. This chapter evaluates some of the recent developments, i.e., Agile Methods, Continuous Delivery, and DevOps. Finally, the important topic of legacy system modernization is presented.
10.1 Agile Methods 10.1.1 The Agile Manifesto Quote: “Most people that say they follow the Agile methodology, aren’t. They use Agile as an excuse to not document and not do things properly”. (Neil Rerup, 2018)
In 2001, 17 software engineers—many of them distinguished, respected scientists—published a new software development methodology: the Agile software development method. This proposal was a reaction to the increasing heaviness, slowness, and overhead of the software development processes of that time, such as CMMI (Capability Maturity Model Integration, [Chrissis11]) or ISO/IEC 25010:2011 (Systems and Software Engineering/Systems and Software Quality Requirements and Evaluation (SQuaRE)/System and Software Quality Models [https://www.iso.org/obp/ui/#iso:std:iso-iec:25010:ed-1:v1:en]).
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019 F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8_10
They published the basic vital statements of the Agile method in their Manifesto for Agile Software Development (http://agilemanifesto.org, Definition 10.1). The Manifesto was later supplemented by 12 principles, such as principle 11: “The best architectures, requirements, and designs emerge from self-organizing teams” [http://agilemanifesto.org/principles.html].

Quote: “In today’s fast-paced, often “agile” software development, how can the secure design be implemented? In my experience, tossing requirements, architectures, and designs ‘over the wall’ and into the creative, dynamic pit of Agile development is a sure road to failure”. (Brook S.E. Schoenfield, 2015)
The Agile method immediately triggered heated discussions between Agilists and Traditionalists. Agilists saw the dawn of a new, more productive software era, whereas Traditionalists identified an unforgivable step backward into the dark ages of software engineering. The truth is somewhere in the middle: If Agile methods are adequately utilized in an adapted organization, they can indeed bring significant progress concerning time-to-market and cost. However, if you have to deal with large, complex, long-lived systems with stringent quality of service properties—such as security, safety, or business continuity—Agile methods become dangerous because they curtail the conceptual, modeling, architecture, and design effort [Meyer14, Boehm04].

Definition 10.1: Agile Method
The Agile method is a software development methodology based on the following four key statements:
1. Individuals and interactions over processes and tools;
2. Working software over comprehensive documentation;
3. Customer collaboration over contract negotiation;
4. Responding to change over following a plan.
http://agilemanifesto.org/
10.1.2 Agile Application Spectrum

The embittered battle between Agilists and Traditionalists continues to this day. However, there is no doubt that Agile methods, especially in conjunction with the SCRUM™ framework (Example 10.1), have brought significant advantages under the following conditions:
• The targeted requirements can be implemented by a small team (~ 8 people) working face-to-face and including stakeholder representatives;
• The requirements have only a minimal, strictly local impact on the software-system. This is often the case if the system is based on a microservice architecture [Amundsen16, Newman15];
• A first overarching, effective process is in place to ensure the adequate overall architecture of the system and its evolution. The process must detect and prevent any architecture violation and architecture erosion;
• A second overarching, undebatable process is in place to enforce the global quality of service properties (e.g., security, safety, business continuity, conformance to laws and regulations) of the system (or no global quality of service properties are required).

Example 10.1: SCRUM™ [Schwaber17]
“Scrum is a process framework that has been used to manage work on complex products since the early 1990s. Scrum is not a process, technique, or definitive method. Rather, it is a framework within which you can employ various processes and techniques. Scrum makes clear the relative efficacy of your product management and work techniques so that you can continuously improve the product, the team, and the working environment [Schwaber17]”. Scrum has been adopted by the Agile community as the most popular Agile process [Rubin12]. It starts with a backlog of requirements (Fig. 10.1). The requirements are then split into suitable specifications for a Sprint-cycle. A Sprint-cycle is the essential development phase for incremental software development and should have a typical duration of 30 days. During a Sprint-cycle, the full team meets every day (every 24 hours) to discuss progress, difficulties, and roadblocks. The result of each Sprint-cycle is progress in the functionality of the software or product.
Fig. 10.1 SCRUM™ development process
10.1.3 Agile Methods and Future-Proof Software-Systems

Quote: “The sheer scale of systems will change everything. ULS systems will necessarily be decentralized in a variety of ways, developed and used by a wide variety of stakeholders with conflicting needs, evolving continuously, and constructed from heterogeneous parts”. (Linda Northrop, 2006)
The focus of this book is very large, long-lived, mission-critical IT-systems with strong dependability, easy changeability, and high business value. Traditionally, such systems were associated with a solid foundation, based on systems architecture, (semi-)formal modeling, reliable development processes, and extensive documentation. Such a reliable foundation made the construction and operation of today’s—and even more of tomorrow’s—megasystems or ultra-large-scale systems (= ULS systems) feasible [Stevens11, Northrop06a, b, Northrop13]. On the other hand, the commercial pressure to react in a significantly shorter time to new requirements, changes in the environment, and new legal and compliance regulations is rising every year. The promise of the Agile methods was seen as the light at the end of the tunnel. However, the Agile method traded damage to the foundation against the desired time-to-market (Definition 10.1). The Agile methods advocated severely reducing the amount of work invested in architecture, modeling, and documentation—especially the up-front work, i.e., the work done before the coding starts [Meyer14, Boehm04]. So, the question comes up: What can future-proof software-systems safely take over from the Agile community? How far can we trade speed against quality? The undebatable answer is:

1. NO compromise in the Managed Evolution strategy (Managed Evolution Strategy);
2. NO compromise in architecture policies, principles, and standards (Principle-Based Architecting);
3. NO compromise in quality of service properties (security, safety, business continuity, etc.);
4. Use of an adapted Agile incremental development process, e.g., Scrum™, is possible.

The possible gain in speed lies in an adapted development process. This, however, also requires an adapted organization. One possible solution is shown in Fig. 10.2 [Leffingwell07, Bloomberg13, Erder15, Coplien10, Rumpe19, Beine18]. The Scrum™ cycle is at the center.
However, there are now two sets of requirements driving the cycle:
• Architecture requirements, leading to the architecture specifications backlog: These represent the architectural demands for the business functionality to be implemented, such as the actions to maintain or improve the quality of service properties. They also include the implementation of the Managed Evolution strategy, e.g., by stating which reengineering or refactoring tasks have to be done in the current Sprint run;
• Business (functional) requirements, leading to the Sprint business requirements backlog—which are handled by the standard Scrum™ methods, roles, and tools.
Fig. 10.2 Architecture escort for the agile methods
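The idea of two backlogs feeding one Sprint can be sketched in a few lines of Python. This is only an illustration of the planning idea, not part of Scrum™ or of this book's method; the class, the priority scheme, and the capacity model are assumptions made for the example:

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Requirement:
    priority: int                     # lower value = more urgent
    title: str = field(compare=False)
    kind: str = field(compare=False)  # "architecture" or "business"

def plan_sprint(architecture_backlog, business_backlog, capacity):
    """Fill one Sprint from both backlogs, highest priority first.

    Architecture items (e.g., refactoring tasks mandated by the Managed
    Evolution strategy) compete with business items on equal terms, so
    neither backlog can starve the other.
    """
    candidates = sorted(architecture_backlog + business_backlog)
    return candidates[:capacity], candidates[capacity:]
```

A Sprint with capacity 2 would then pick the two most urgent items regardless of which backlog they come from.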
Three elements from the future-proof software-system canon are transferred to the adapted, now Agile development environment:

1. The architecture governance, organization, and process: The imperative power of architecture remains (Architecture, Architecture, Architecture!);
2. The development process is embedded in—and must respect—all architecture principles, standards, and patterns (Principle-Based Architecting);
3. A new role is added to the Scrum™ method: the architecture escort (Definition 10.2). Experienced architects from all disciplines involved—application architects, security architects, etc.—are delegated from the central architecture organization and work as esteemed members of the Scrum™ team, not coding, but consulting.

Quote: “The larger and more complex the system and the higher the criticality of failure, the more the team will need to base their daily decisions on an agreed-upon and intentional architecture”. (Dean Leffingwell, 2007)
It is through the architecture escort that conformity to the adequate architecture is established and assured. The security architect, e.g., will ensure that all necessary security concerns are correctly implemented. He/she will do that through continuous involvement with the project and the team, gently enforcing security. In the (hopefully) rare cases where he/she cannot convince the team to implement a necessary protection, he/she will have to escalate, and the team will be forced by management to comply.
Definition 10.2: Architecture Escort
An experienced, specialized architect (application architect, security architect, safety architect, etc.) delegated to be a member of a SCRUM™-team with the responsibility to:
• Consult and guide the team to conform to the architectures (horizontal and vertical architecture layers) defined by the central architecture team;
• (Gently) enforce the architecture policies, principles, and standards;
• Escalate (as a last resort) if the SCRUM™-team violates any of the architecture policies, principles, and standards.
10.1.4 Large-Scale Agile

In the last decade, serious progress has been made in applying Agile methods to large-scale systems [Leffingwell07, Leffingwell10, Larman16, Larman10, Larman08]. The preconditions for large-scale agile are:
• Multiple Agile/Scrum-teams (~ 8 people) work in parallel on the same project (= Multi-team);
• The Agile/Scrum-teams may work in different sites, possibly in locations far away from each other (= Multi-site).

Quote: “Large-Scale Scrum (LeSS) achieves the same balance as Scrum for larger product groups. It adds a bit more concrete structure to Scrum”. (Craig Larman, 2016)
The basic principles are the same, e.g., there is one synchronized Sprint for all teams, and the Agile/Scrum-teams have the same organization. However, some additional roles for coordination are added, and effective teleconferencing capabilities are provided. The literature teaches two levels of large-scale agile: (a) Large-Scale Scrum (LeSS) for up to eight teams, and (b) Large-Scale Scrum Huge (LeSS Huge, Example 10.2) for more than eight teams and multi-site teams.

Example 10.2: Large-Scale Scrum (LeSS)
Large-Scale Scrum (LeSS) is an extension of Scrum for large-scale systems and products [Larman16]. Figure 10.3 shows the two expansions: Multi-team capability and Multi-site ability. The work of the teams is steered by disjoint requirement areas—which correspond to some kind of domain model (Organizing and distributing the work to be done). The application of Large-Scale Scrum to future-proof software-systems necessitates the addition of architecture effort in the form of an intentional architecture definition and maintenance process (Fig. 10.3, Agility against Architecture?). This process assures the adequate, future-proof evolution of the system or product architecture (according to the architecture principles of this book).
Fig. 10.3 Large-scale scrum (LeSS)
10.1.5 Agility against Architecture?

Agile methods entail a loss of discipline. The reduced front-end work, less documentation, and omitted reviews are some of the causes. Less discipline endangers the architecture and the quality properties of the software-system. Therefore, a recipe is required to ensure the long-lived architecture, the required quality of service properties, and the governance of the tradeoff between agility and architecture.

Quote: “I have seen security and Agile work seamlessly together as a whole, more than once. This process requires that security expertise be seen as part of the team”. (Brook S.E. Schoenfield, 2015)
The following two solutions are possible:

1. Embedding the development process into an environment which ensures adherence to the intentional architecture (Figs. 10.2 and 10.3);
2. Enriching the Agile development process with specific activities and artifacts (Example 10.3).

In an organization which produces mission-critical software-systems, both solutions will have to be implemented at the same time. If the structure and the management processes of the organization are not adapted to Agile, the effect of introducing Agile may well be negative overall [Moran15, Crowder16].
Example 10.3: SafeScrum™
SafeScrum™ is a typical example of an enriched Agile development process [Hanssen18, Myklebust18]. SafeScrum™ was motivated by requirements from the railway signaling domain—a highly safety-critical application of software-systems. The development of safety-critical systems is guided by document-centric standards and by heavy processes. The enriched Agile process (Fig. 10.4) brings the flexibility of Agile methods while still complying with the governing safety standards EN 50128:2011 and IEC 61508-3. This is achieved by adding the artifacts and activities shown in Fig. 10.4. An important part is the addition of a second backlog: the safety requirements backlog (as specialized from Fig. 10.2). This allows separating the frequently changing functional requirements from the more stable safety requirements.
10.1.6 Agile Requirements Engineering

Quote: “Requirements: No other part is more difficult to rectify later”. (Frederick P. Brooks, 1986)
The starting point—and a bridge to future-proof software-systems—for effective Agile methods is the adequate management of requirements [Leffingwell10, McDonald15]. Requirements engineering (Definition 10.3) is mostly a process issue and must be successfully adapted to the organization and the software development methodology used.
Fig. 10.4 SafeScrum™ model
Definition 10.3: Requirements Engineering
Requirements engineering is a systematic and disciplined approach to the specification and management of requirements with the following goals:
1. Knowing the relevant requirements, achieving a consensus among the stakeholders about these requirements, documenting them according to given standards, and managing them systematically;
2. Understanding and documenting the stakeholders’ desires and needs, specifying and managing requirements to minimize the risk of delivering a system that does not meet the stakeholders’ desires and needs, including cost and time-to-market.
(Klaus Pohl, 2015)

Three types of requirements can be distinguished in a software-system [Pohl15]:

1. Functional requirements: These define the functionality to be built or extended;
2. Quality requirements: These define and quantify the necessary quality of service properties (Table 4.1) of the system. As explained before, the quality of service requirements often influence both the viability of the system and the system architecture more than the functional requirements do;
3. Constraints: Requirements of this type restrict either the system itself or the development process. For future-proof software-systems, by far the most important constraint is the mandatory use of Managed Evolution (Managed Evolution Strategy) as the evolution strategy.

Agile methods put most emphasis on the (flexible) management of functional requirements. However, for future-proof software-systems, both the quality of service properties and the Managed Evolution strategy are of crucial significance. In fact, the Agile community has understood this, and valuable literature about building large, long-lived, and mission-critical software-systems exists [Larman08, Larman10, Larman16, Gruver12, Scheerer19, Schneider-Winters15]. All these extensions integrate both the quality requirements and the constraints into the requirements management process and the Agile method (Fig. 10.5).
Many possibilities exist to integrate the conformity assurance for functional requirements, quality of service properties, and constraints into the Agile development process (Fig. 10.5): The actual implementation depends on the methods and procedures of the organization.
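As a minimal illustration of such an integration, the sketch below routes incoming requirements of the three types into separate backlogs, as in Fig. 10.5. The backlog names and the routing table are assumptions made for this example, not a normative mapping:

```python
from enum import Enum

class ReqType(Enum):
    FUNCTIONAL = "functional"        # business functionality
    QUALITY = "quality_of_service"   # e.g., security, availability
    CONSTRAINT = "constraint"        # e.g., Managed Evolution mandates

# Hypothetical mapping of the three requirement types to backlogs.
BACKLOG_FOR = {
    ReqType.FUNCTIONAL: "business_requirements_backlog",
    ReqType.QUALITY: "architecture_requirements_backlog",
    ReqType.CONSTRAINT: "sprint_constraints_backlog",
}

def route(requirements):
    """Sort incoming (text, type) requirements into the three backlogs."""
    backlogs = {name: [] for name in BACKLOG_FOR.values()}
    for text, req_type in requirements:
        backlogs[BACKLOG_FOR[req_type]].append(text)
    return backlogs
```

Routing the requirements explicitly keeps the stable quality and constraint requirements visible next to the frequently changing functional ones, instead of letting them disappear into a single undifferentiated backlog.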
10.1.7 Agile Risk Management

Projects potentially generate two types of risk:

1. Operational (or product) risk: Risk resulting from insufficient implementation of the quality of service properties, such as security flaws, safety deficiencies, inadequate availability, etc. Such risks lead to potentially harmful events during the operation of the software-system or product. This risk is reduced by an adequate extension of the Agile method (e.g., the review/quality gate in Fig. 10.5);
2. Project risk: Potentially harmful uncertainty in all project activities during the full development process (Definition 10.4), such as missing cost or time-to-market deadlines, unreliable suppliers, lack of sufficient people skills, etc.
Definition 10.4: Project Risk
Project risk is potentially harmful uncertainty relating to project objectives, such as development cost, time-to-market, team skills, schedule risk, supplier risk, etc.
[Moran14]
Quote: “Many Agile software development processes at best implicitly tackle risk and those methodologies that lack a risk management framework suffer from deficiencies”. (Alan Moran, 2014)
Fig. 10.5 Assurance of functionality and quality of service properties
Both the operational risk and the project risk management processes are well understood for traditional, i.e., planned developments (e.g., [Hopkin18, Fairbanks10, Griffor16, Freund14, Hodson19, Letier14, Young10], and Fig. 13.1). Their high predictability allows the installation of interwoven (risk-)subprocesses and mandatory reviews. The high flexibility and fluidity of Agile processes handicap serious risk identification, assessment, mitigation, and management processes. Again, when large, long-lived, mission-critical systems—depending on future-proof software-systems—are developed using Agile methods, extensions of the Agile development process are required [Moran14]. One possible solution is suggested in Fig. 10.6: A process step “Risk Identification & Evaluation/Mitigation” is added before each iteration (Sprint), and a checkpoint “Risk Mitigation Assessment” is performed before the release of the new or modified software. Only if the residual risk is acceptable can the software be released into production.

Quote: “We encourage enterprise-level identification of risk drivers and that the project be risk-scoped within this context during project initiation”. (Alan Moran, 2014)
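The “Risk Mitigation Assessment” checkpoint can be sketched as a simple gate function. The risk scoring below (probability times impact, reduced by a mitigation factor) is an illustrative assumption for the example, not a normative risk model:

```python
def residual_risk(risks, mitigations):
    """Residual risk after applying mitigations.

    Each risk is (name, probability, impact); a mitigation maps a risk
    name to a reduction factor in [0, 1] (1.0 = fully mitigated).
    """
    total = 0.0
    for name, probability, impact in risks:
        reduction = mitigations.get(name, 0.0)
        total += probability * impact * (1.0 - reduction)
    return total

def release_gate(risks, mitigations, threshold):
    """'Risk Mitigation Assessment': release only if the residual risk
    stays below the accepted threshold."""
    return residual_risk(risks, mitigations) <= threshold
```

The essential point is the hard gate: the Sprint result is not released into production until the quantified residual risk falls below an explicitly agreed threshold.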
Agile methods are more prone to generate cross-platform risks (Definition 10.5). In such cases, a change made in part A of the software-system may generate a new risk in—the seemingly unrelated—part R of the software-system. The developer team of part A has no knowledge of a possible impact on part R (= hidden relationship).
Fig. 10.6 Extension of agile development by risk management
Definition 10.5: Cross-Platform Risk Generation
A change in a part of the software-system generates an unexpected risk in a different part of the software-system, such as a new attack entry or an unforeseen fault opportunity.

Cross-platform risks are difficult to foresee in individual projects but can have very harmful effects. Experience in traditional, planned processes has shown that reviews by the experienced members of a central architecture team are often able to point out such possibilities.

Example 10.4: Cross-Platform Vulnerability
Facebook Hack (https://www.nytimes.com/2018/09/28/technology/facebook-hack-data-breach.html [last accessed 01.10.2018])
In September 2018, unknown hackers gained access to more than 50 million user accounts. This was possible due to a security vulnerability in a relatively minor function: Facebook’s “View As” feature, which lets users see how their profile appears on other people’s screens. The vulnerability allowed hackers to steal access tokens. The stolen access tokens could then be used to take over people’s accounts. The stolen access tokens theoretically also gave access to any other service in which a person uses their Facebook account to register—including applications like Tinder, Spotify, Airbnb, or a niche smartphone game. As a consequence, highly personal information could be viewed and copied. Facebook temporarily disabled the “View As” feature and executed a thorough security review. This is an example of a relatively small, peripheral function which was implemented with deficient security and massively impacted the whole platform (cross-platform risk, Definition 10.5).
10.2 Continuous Delivery and DevOps

The search for methods and processes to realize shorter time-to-market for software-systems resulted in two innovative solutions:
• Continuous Delivery: An automated process to transfer relatively small shares of a specification into production, thereby achieving frequent delivery and reducing time-to-market;
• DevOps: A cooperation culture involving all stakeholders—from specification to production personnel—to minimize waste of time and resources and shorten time-to-market.
10.2.1 Continuous Delivery

Quote: “Continuous Delivery provides the business with faster availability of new features and with more reliable IT-systems”. (Eberhard Wolff, 2017)
The basic ideas of continuous delivery are: (1) split the functional specifications of new requirements into smaller shares with frequent deployment, leading to short development times, and (2) accelerate the processes for the delivery of new or changed software into production by a high degree of automation (Definition 10.6, [Wolff17, Humble10, Narayan15, Rossel17]). By optimizing and frequently executing the automated delivery processes, including extensive testing, the quality of the operational software is enhanced.

Definition 10.6: Continuous Delivery
A set of seamless, optimized, reliable, repeatable, automated processes to frequently transfer new or changed software into production (software release).
[Wolff17]

A key element of continuous delivery is the continuous delivery pipeline (CD pipeline). A possible CD pipeline for a generic process is shown in Fig. 10.7: First, the requirements for new features of the software are split into smaller, manageable shares. They are then fed to the development process (traditional or Agile), and the software is built or modified. The new or changed source modules are then checked in to the source repository via the versioning system. From then on, the automated delivery process starts by executing the unit/module tests. Following these tests, the new software is integrated into
Fig. 10.7 Continuous delivery and DevOps
the software-system, followed by automated integration/regression tests. The next step—in most cases manual—is the user acceptance test: The new software is checked either by the business representatives or by selected customers. When all checks are successfully completed, the software is ready to be deployed and is stored in the release repository. The release decision is (usually) also made by a person, who launches the automated deployment into production. From all tests, extensive feedback is routed back to the development team, allowing the team to fix errors or improve quality. All steps of the continuous delivery process are regularly improved. Therefore, the quality and reliability of the continuous delivery pipeline increase (especially of the automated tests)—which in turn enhances the quality of the software introduced into production.
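The staged pipeline described above can be sketched as a chain of checks with feedback on failure. The stage names and check functions below are hypothetical simplifications for the example; a real pipeline would invoke the actual build, test, and deployment tools:

```python
def run_pipeline(change, stages):
    """Push a change through the delivery pipeline.

    Each stage is a (name, check) pair; a check returns (passed, feedback).
    The run stops at the first failing stage and returns the feedback
    that is routed back to the development team.
    """
    for name, check in stages:
        passed, feedback = check(change)
        if not passed:
            return {"released": False, "failed_stage": name, "feedback": feedback}
    return {"released": True, "failed_stage": None, "feedback": "all checks passed"}

# Hypothetical stage checks operating on a simple change record.
def unit_tests(change):
    return change.get("units_pass", False), "unit/module test results"

def integration_tests(change):
    return change.get("integration_pass", False), "integration/regression test results"

PIPELINE = [("unit/module tests", unit_tests),
            ("integration/regression tests", integration_tests)]
```

The design point is that every change takes exactly the same path: the stages are ordered, each one either passes the change on or stops it with concrete feedback, and only a change that survives all stages reaches the release repository.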
10.2.2 DevOps

Successful DevOps relies heavily on a well-implemented continuous delivery chain. It adds the team cooperation dimension to the automated continuous delivery processes ([Dibbern18, Bass15, Davis16], and Definition 10.7).

Definition 10.7: DevOps
DevOps is a software development practice where development teams and operations teams closely cooperate. The information on how the application runs is used to improve how the application is being built. Cooperation is executed as a sequence of rapid interactive development and deployment steps.

Quote: “DevOps is a movement that envisions no friction between the development groups and the operations groups”. (Len Bass, 2015)
The DevOps objective is to reduce the time-to-market and increase the quality of a committed change to the software-system. It addresses the loss of time, resources, and quality from the requirements phase to the availability to the customers. The essence of DevOps is to remove obstacles in the delivery pipeline (Fig. 10.7). The long-standing way of software development is shown in Fig. 10.8. Part a) traces the flow: Individual, specialized teams are responsible for requirements gathering, specification formalization, architecture and design, development, testing, deployment, and operations. When such a team completes its work, it hands it over to the following team. Often, the teams are then disassembled, and the people are individually assigned new work. When an application fails in operation, the Ops-team determines the cause, often calling back on the developers—who may no longer be available, or whose knowledge of the specific application has faded. This process may cause a lot of friction, loss of time, and unacceptable outages. Part b) of Fig. 10.8 illustrates the essence of DevOps: The barriers in the process are removed, and selected people from development and operations form a specific team caring for the change of the particular application. This team accompanies the specific
application throughout the entire lifecycle. This, e.g., allows requirements from Ops to be recognized at the earliest possible moment and to flow into the change request—reducing possible failures during deployment or operations. On the other hand, the development people accompany the deployment and operations phases and are available for a quick diagnosis and repair of the application after a fault. The DevOps cycle—which is run through for each change—is shown in Fig. 10.9. The DevOps cycle starts with the planning of a requested change.
Fig. 10.8 The essence of DevOps
Fig. 12.26 Synchronization mechanisms
“Access via services” forces the use of common functions, data, and tables through services. This ensures that the requestor always receives the latest, correct information.
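The “access via services” mechanism can be illustrated with a minimal sketch. The service below is hypothetical (the class name, the table content, and the versioning scheme are assumptions made for the example, not part of the book's architecture); it shows only the core idea that consumers read common data exclusively through the owning service instead of keeping private copies:

```python
class CurrencyTableService:
    """Single master source for a common table ("single source of truth").

    Consumers never keep private copies; every read goes through the
    service, so they always receive the latest approved version.
    """
    def __init__(self):
        self._table = {}
        self._version = 0

    def update(self, key, value):
        # Only the owning domain is allowed to change the master data.
        self._table[key] = value
        self._version += 1

    def lookup(self, key):
        # "Access via services": requestors always get the current value,
        # together with the version of the table they read from.
        return self._table[key], self._version
```

Because every lookup goes through the master service, there is no unmanaged redundancy: a consumer can never act on a stale private copy of the table.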
12.7.3 Software Infrastructure

Providing all the common functions, data, and tables with the corresponding, reliable, and standardized synchronization mechanisms from a dedicated, specialized domain to all users has great benefits for changeability and dependability. During development, less time is lost because of reuse, and higher quality is achieved because only proven and standardized mechanisms are used. During runtime, the operational reliability is significantly higher because only proven, adequate synchronization mechanisms are deployed. Such a domain represents an enterprise-wide software infrastructure (Fig. 12.26), located on top of the technical infrastructure. The architecture principle is summarized in Principle 12.6.

Principle 12.6: Common Functions, Data, and Tables
1. Identify all common functions, common data, and common tables (= cross-cutting concerns in an IT architecture); 2. Define a responsible, unambiguous master source for all common functions, common data, and common tables (= single source of truth);
3. Provide adequate, reliable synchronization techniques for all copies, strictly avoiding unmanaged redundancy (= copy management); 4. Enforce the use of the approved synchronization techniques during development, deployment, and runtime; 5. Whenever possible provide and enforce a company-wide software infrastructure.
12.8 ρ-Architecture Principle #7: Reference Architectures, Frameworks and Patterns

12.8.1 Architecture Knowledge

Quote: “Good software architecture follows good rules. One thing has changed: Back then, we did not know what the rules were. Consequently, we broke them, over and again. Now, with half a century of experience behind us, we have a grasp of those rules“. (Robert C. Martin, 2018)
In the last decades, software engineering has progressed from a “black art” to an accepted engineering discipline. Key to this engineering discipline is systems- and software architecture knowledge. How is the knowledge about systems- and software architecture formulated, documented, taught, and applied? In the course of the long history of systems- and software engineering, a number of valuable approaches have been developed: • Architecture principles (Sect. 7.2); • Reference architectures; • Architecture frameworks; • Patterns. Their value for systems- and software engineering, both in education and in development, cannot be overestimated—they are true gems of easily available software engineering knowledge.
12.8.2 Reference Architectures

Reference architectures [Cloutier10] are essential instruments for the cooperation of partners in a specific industry. Reference architectures represent the knowledge of long-lived, proven, historically grown architecture understanding for particular application domains. They are detailed templates—often with a strong normative character—for all architecture concerns (Definition 12.16).
12 Architecture Principles for Changeability
Definition 12.16: Reference Architecture
A reference architecture in the field of software architecture or enterprise architecture provides a template solution for an architecture for a particular domain (such as automotive, avionics, financial systems, etc.). It also provides a common vocabulary with which to discuss implementations, often with the aim to stress commonality.
(https://en.wikipedia.org/wiki/Reference_architecture)

Reference architectures exist for many application domains, such as automotive (www.autosar.org), aerospace [Annighöfer15], telecommunications [Czarnecki17], Industry 4.0 (www.plattform-i40.de/I40/Redaktion/EN/Downloads/Publikation/rami40-an-introduction.pdf), the Internet of Things IoT (www.opengroup.org/iot/wp-refarchs/p3.htm), and Big Data (http://www.oracle.com/technetwork/topics/entarch/oracle-wp-big-data-refarch-2019930.pdf). In many cases, reference architectures significantly contributed to the success of an application domain by forcing the whole supply chain to conform to the standards set by the reference architecture. As a formidable example, we introduce AUTOSAR (AUTomotive Open System Architecture), which was founded by a consortium in 2003 and has had an unequaled impact on the progress of the automotive industry (Example 12.12). The AUTOSAR consortium today has more than 150 global partners.

Quote: “Modern cars have evolved from mechanical devices into distributed cyber-physical systems which rely on software to function correctly”. (Miroslaw Staron, 2017)
Example 12.12: Automotive Reference Architecture AUTOSAR
One of the most influential and successful reference architectures is AUTOSAR (AUTomotive Open System Architecture, www.autosar.org), a reference architecture for the vehicle industry. The AUTOSAR documentation today comprises far more than 1,500 pages. It covers significantly more than only the structural automotive reference architecture: it also includes a meta-model, a development methodology, middleware specifications, certification support, and a great number of templates. The native AUTOSAR documentation is well organized but specifically geared to automotive development specialists; comprehensive secondary literature exists, such as [Scheid15] and [Schaeuffele16]. The structural AUTOSAR reference architecture is shown in Fig. 12.27. As with any modern structural architecture, it is a layered architecture. Figure 12.27 also illustrates the correspondence of the AUTOSAR reference architecture with the horizontal architecture layers of Fig. 3.6.

Quote: "Cooperate on Standards—Compete on Implementations". (AUTOSAR)
The normative impact of AUTOSAR on the whole global car and truck industry is awesome: The entire value creation chain from basic parts to the finished car is dominated by the AUTOSAR standardized software and process models.
12.8 ρ-Architecture Principle #7: Reference Architectures

[Figure 12.27 shows the layered AUTOSAR structure: Application, Sensor, and Actuator Software Components (the applications and information architecture layers) access the AUTOSAR Runtime Environment (RTE, the integration architecture layer) via AUTOSAR interfaces; below the RTE, the AUTOSAR Basic Software (services, communication, ECU abstraction, operating system, complex drivers, and microcontroller abstraction; the technical architecture layer) runs on the Electronic Control Unit (ECU) hardware, connected to the bus, the sensor inputs, and the actuator outputs.]
Fig. 12.27 AUTOSAR structural reference architecture
Note that a reference architecture provides a significant amount of domain-specific knowledge; in the case of AUTOSAR, knowledge about car and truck development.
12.8.3 Architecture Frameworks

A reference architecture (Sect. 12.8.2) provides significant information and knowledge about the target domain, i.e., the area of application (such as AUTOSAR for the car and truck domain). An architecture framework is different: It adds an architecture layer on top of the horizontal architecture layers (Fig. 3.6). This top layer is the enterprise architecture layer, as shown in Fig. 12.28. The objective of architecture frameworks is to provide methods and tools to define, implement, and evolve the IT architecture of an organization (Definition 12.17). An architecture framework can be seen as a rich, proven toolbox supporting the architecture department of the organization.

Definition 12.17: Architecture Framework
An architecture framework is a set of methods and tools for developing a broad range of different IT architectures. It enables IT users to design, evaluate, and build the right architecture for their organization, and reduces the costs of planning, designing, and implementing architectures based on open systems solutions. (http://pubs.opengroup.org/architecture)

An enterprise architecture [Lankhorst17, Behara15, Bernard12] covers all areas of an enterprise, including the organization, management hierarchies, governance, policies, etc., and, of course, also all the IT architectures. An architecture framework can be seen as a "cookbook" for organizing an enterprise.
[Figure 12.28 shows the enterprise architecture as the top layer above the horizontal architecture layers: business architecture, application architecture, information architecture, integration architecture, and technical architecture.]
Fig. 12.28 Enterprise architecture as top architecture layer
Quote: “An architecture framework is a set of foundational structures which can be used for developing a broad range of different architectures”. (TOGAF, 2010)
A number of architecture frameworks exist, such as the Zachman Framework for Enterprise Architecture© (https://www.zachman.com), The Open Group Architecture Framework (TOGAF©, http://www.opengroup.org), the U.S. Federal Enterprise Architecture (FEA©, https://www.whitehouse.gov), and the Gartner Methodology© (https://www.gartner.com), among others. All these frameworks have their advantages, disadvantages, and a varying degree of industry acceptance. In the following, we focus on TOGAF (http://www.opengroup.org/subjectareas/enterprise/togaf, [Blokdyk18a, TOGAF11]) and present the framework in Example 12.13.

Example 12.13: The TOGAF Architecture Framework
TOGAF (The Open Group Architecture Framework, http://www.opengroup.org/subjectareas/enterprise/architecture) is an industry-neutral set of methods and tools for the development and evolution of enterprise architecture [TOGAF11, Blokdyk18a]. It covers all areas of enterprise architecture and is continuously refined by the Open Group Architecture Forum. The current version (2018) is TOGAF 9.2. Figure 12.29 shows an overview of the six key TOGAF areas: (1) the Architecture Development Method (ADM), a tested and repeatable process for developing architectures; (2) the ADM guidelines and techniques, a collection of guidelines and techniques for the application of TOGAF; (3) the Architecture Content Framework, with methods for specializing generic foundation architectures into organization-specific architectures; (4) a set of reference models, e.g., the Foundation Architecture; (5) the Enterprise Continuum, with taxonomies and tools for the categorization and storage of architecture artifacts; (6) the Architecture Capability Framework, covering the organization, processes, skills, roles, and responsibilities needed to establish and operate an enterprise architecture function.

[Figure 12.29 arranges these six areas around the central Architecture Development Method.]

Fig. 12.29 TOGAF overview

TOGAF, like most architecture frameworks, contains an impressive amount of knowledge, experience, and wisdom. As such, it is very valuable for all architects in the enterprise, even if the organization does not implement TOGAF.
12.8.4 Patterns

Quote: "There are patterns to the way we do engineering. By learning to recognize the patterns, we can use the robust solutions over and over". (Elecia White, 2012)
Patterns are a very rich source of tried and tested knowledge for all areas and levels of systems and software development (Fig. 12.30). Patterns provide generic, proven solutions to a problem at hand (Definition 7.3). In most cases, patterns are not directly applicable, but must be understood and adapted by the architect or designer to the problem to be solved. There are therefore generic patterns (from the literature) and adapted patterns (matched to the architecture and organization).

Definition: Architecture Pattern (See Definition 10.3): A pattern is a proven, generic solution to recurring architectural or design problems, which can be adapted to the task at hand.

Quote: "To become and remain a valuable, successful software professional you need to read and understand six pattern books every year". (Frank J. Furrer, 2018)
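The difference between a generic pattern and its adapted form can be sketched with a small example, here the classic Strategy pattern [Gamma94]. The pricing domain, the 25% discount rule, and all names are invented for this illustration; they are not taken from any pattern catalog.

```python
from dataclasses import dataclass
from typing import Callable

# Generic pattern: an algorithm is selected at run-time through a common
# interface, instead of being hard-wired into the client code.
PricingStrategy = Callable[[float], float]

def regular_price(amount: float) -> float:
    return amount

def volume_discount(amount: float) -> float:
    # Adapted pattern: a domain-specific rule (25% off above 1000),
    # invented for this illustration.
    return amount * 0.75 if amount > 1000 else amount

@dataclass
class Invoice:
    amount: float
    strategy: PricingStrategy = regular_price

    def total(self) -> float:
        return self.strategy(self.amount)

# The client composes the generic structure with the adapted behaviour.
print(Invoice(2000.0, volume_discount).total())  # 1500.0
```

The generic part (the interchangeable `PricingStrategy`) comes from the literature; the adapted part (the discount rule) is matched to the organization, exactly the distinction made above.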
[Figure 12.30 shows the pattern hierarchy along an abstraction axis from high to low: business process and product line patterns at the top, then architecture patterns, design patterns, and implementation patterns, each spanning the concerns structure, behaviour, security, and safety.]
Fig. 12.30 Pattern hierarchy
Numerous patterns are available for nearly all areas of information technology, such as:

• System structure and system architecture: [Cloutier07, Gamma94, Buschmann96, Microsoft04];
• Enterprise architecture: [Dyson04, Fowler02, Hohpe03, Perroud13];
• Business patterns: [Kelly12, Hruby06];
• Software requirements: [Withall07];
• Processes: [Ambler98];
• Services: [Burns18];
• Configuration management: [Berczuk02];
• Interaction design: [Borchers01];
• Refactoring: [Brown98, Kerievsky04];
• Metadata: [Voss13, Hay06];
• Pattern languages: [Buschmann97, Coplien95, Vlissides96];
• Cloud: [Cope15];
• Java, C++: [Metsker06, Nesteruk18];
• Security: [Fernandez-Buglioni13, Schumacher05];
• Fault tolerance: [Hanmer07];
• Parallel programming: [Mattson13];
• Model-driven engineering: [Hruby06, Jenney10];
• Service-oriented architecture (SOA): [Rotem-Gal-Oz12];
• Domain engineering: [Millett15];
• Agile development: [Tanzer18];
• Components: [Völter02].
The rigorous and consistent use of patterns greatly contributes to the conceptual integrity of the IT system. Because the use of reference architectures, frameworks, and patterns depends strongly on the organization and the development process, Principle 12.7 cannot be formulated in an absolute form (as the other architecture principles are). Principle 12.7 has, therefore, more the character of a strong recommendation.

Principle 12.7: Reference Architectures, Frameworks, and Patterns
1. If a reference architecture exists in your field of application, extract as much as possible from it to improve your architecture and to achieve industry-cooperation capability;
2. While developing and maintaining your enterprise architecture, choose an adequate architecture framework and try to use as much knowledge from this framework as possible;
3. In your organization, maintain a well-organized, comprehensive repository of patterns for all levels and areas of software development. Actively encourage and review the usage of patterns;
4. Make the use of reference architectures, architecture frameworks, and patterns an integral, valuable part of your architecture development process.
12.9 ρ-Architecture Principle #8: Reuse and Parametrization

12.9.1 Introduction

Quote: "We know that through reuse we can achieve quality, productivity, reliability, flexibility, low cost, and all kinds of benefits. Unfortunately, in most cases we do not know how to achieve reuse". (Paolo Predonzani, 2000)
Successful reuse (Definition 12.18, [Jacobson97, Ezran13, Hooper13, Lutowski05]) is a solid contributor to high changeability. However, note the adjective "successful": Reuse is difficult, demanding, and needs an adequate organization and governance behind it. Building reusable software assets is a planned, managed, and controlled activity, strongly supported by management.

Definition 12.18: Reuse
Utilization of software artifacts in another context or in a different application.

The power of well-organized reuse was first articulated in a seminal paper by Doug McIlroy in 1968 and later recognized and elaborated in the groundbreaking work of Ivar Jacobson in 1997 [Jacobson97]. Since then, reuse has been the topic of much attention and publication. Reuse is best known in the field of component software engineering [Sametinger97, Szyperski11, Heineman01, Cheesman00, Ramachandran08, Apperly02, Lau18].
Component reuse and component composition [Hamlet10] are still important software development technologies on the application level of the software hierarchy (Fig. 3.10). In the wake of service-oriented architecture (SOA, [Erl05, Erl16, Roshen09, Wik15]), service reuse became important [Murer11]. However, a maturing understanding of reuse has demonstrated that nearly all artifacts in the software development process can be successfully reused (= reusable assets):

• Requirements;
• Specifications;
• Adapted patterns;
• Frameworks;
• Reference architectures;
• Programs/modules;
• Components;
• Services;
• Systems (for systems-of-systems).

Successful reuse is primarily a governance and process issue [GAO18, Murer11, Korra13, Predonzani00, Jalender10, Leach13]. For reuse to succeed, both technically and commercially, the organization, and especially the development process, must be geared towards the reusability of selected software assets.
12.9.2 Types of Reuse

There are three basic types of reuse (Table 12.3):

• Noninvasive reuse: The software asset is reused without any modification. This is often called "black box reuse" (Fig. 12.31);
• Invasive reuse: The software asset is reused with some in-house modifications. This is often called "grey box" or "white box" reuse (Fig. 12.31);
• Parametrization and business rules: The software asset (especially a component) is built in such a way that it can be adapted by setting parameters or loading business rules.

Reuse can be applied either to in-house built software assets or to third-party assets.

Quote: "Reuse requires some blind faith on the part of the reuser that the component being used will be as suitable and reliable as documented. It would not take more than a couple of bad experiences in using other components for an entire software team to lose faith in the reusability". (Sampath Korra, 2013)
Table 12.3 Types of reuse

Noninvasive:
• In-house artifacts: The reuse asset is built in-house and used unmodified. Necessary extensions are handled via consecutive, upward-compatible versions.
• Third-party artifacts: The reuse asset is acquired from a third party and used unmodified. In-house modifications are usually not possible.

Invasive:
• In-house artifacts: The reuse asset is built in-house and used with a varying degree of modification. This generates a chain of dependent reuse assets (both in content and in time).
• Third-party artifacts: The reuse asset is acquired from a third party and used with a varying degree of modification. This requires access to the source code and generates a chain of dependent reuse assets (both in content and in time).

Parametrization:
• In-house artifacts: The reuse asset is built in-house and used unmodified. The reuse asset has a set of parameters which allow varying degrees of adaptation at run-time.
• Third-party artifacts: The reuse asset is acquired from a third party and used unmodified. The reuse asset has a set of parameters which allow varying degrees of adaptation at run-time.

Business rules:
• In-house artifacts: The reuse asset is built in-house and used unmodified. The functionality, or part of it, is loaded at run-time in the form of executable business rules.
• Third-party artifacts: The reuse asset is acquired from a third party and used unmodified. This is often a business rules engine. The functionality, or part of it, is developed in-house and loaded at run-time in the form of executable business rules.
[Figure 12.31 distinguishes three forms: black box reuse (unmodified, 1:1 reuse), grey box reuse (limited modification, specific changes below 25%), and white box reuse (significant modification, specific changes of 25% or more).]

Fig. 12.31 Types of reuse
Black box reuse (Fig. 12.31), either of components or of services, is the ideal case: Components or services are well tested, documented, and easily accessible in an organized repository. The software teams search for suitable components or services and reuse them. If this works well in an organization, the results are higher software quality, lower development cost, faster time to market, and higher conceptual integrity of the IT system [Leach13]. However, in some cases, the functionality required by the application cannot be entirely provided by a reusable component or service in the repository. In such a case, the required change must be made by the software team owning the component or service. The component or service is then extended in an upward-compatible form, extensively tested, versioned, and provided to the potential users (Fig. 12.32). The successful implementation of this component/service reuse cycle is a very demanding task for an organization!

Grey box reuse or white box reuse, either of components or of services, bears the risk of unmanageable fault propagation (e.g., Example 12.7). Any undetected fault in a reusable asset will be transferred to all components or services based on it, making it very difficult to track and fix the propagated faults. Grey box and white box reuse are therefore not recommended. A particularly bad practice is to download code snippets from the Internet and use them in one's own programs.

[Figure 12.32 shows the component/service reuse cycle: applications draw black box reusable assets from a repository; new requirements go to the reusable asset owner, who extends the asset in upward-compatible versions (Vx.y+0.1, Vx.y+0.2) and publishes each version back to the repository.]

Fig. 12.32 Component/service reuse cycle

Parametrization (Definition 12.19) is a powerful technique for adapting the behaviour of software to a specific task without changing the code. The software is written in such a way that loadable parameters can influence it. An example of parametrization is given in Example 12.14.

Definition 12.19: Parametrization
In information technology, a parameter is an item of information—such as a name, a number, a selected option, a table, or a curve family—that is passed to a program by a user, another program, or a configuration file. Parameters affect the operation of the program receiving them. (https://whatis.techtarget.com/definition/parameter)
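The idea behind parametrization can be sketched in a few lines: the code stays unmodified, and the behaviour is adapted through a parameter set loaded at start-up. The JSON content, the discount table, and all names below are invented for this illustration.

```python
import json

# A parametrized component: the code never changes; behaviour is adapted
# through a parameter set loaded at start-up (here from a JSON string,
# standing in for a configuration file).
PARAMETERS = json.loads("""
{
  "max_retries": 3,
  "discount_table": {"gold": 0.25, "silver": 0.05}
}
""")

def discounted(amount: float, customer_class: str) -> float:
    # The discount "curve" lives in the parameter set, not in the code:
    # changing a rate requires no code modification and no re-release.
    rate = PARAMETERS["discount_table"].get(customer_class, 0.0)
    return amount * (1.0 - rate)

print(discounted(100.0, "gold"))    # 75.0
print(discounted(100.0, "bronze"))  # 100.0 (no entry, no discount)
```

Exchanging the JSON content changes the component's behaviour without touching the code, which is exactly the mechanism the engine management example below uses on a much larger scale.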
Example 12.14: Engine Management System
All modern combustion engines are controlled by sophisticated engine management software. Such engine control systems are among the most complex systems in a vehicle. Their functions include the management of the combustion processes in the engine, with the objective of achieving the desired performance requirements and low fuel consumption while keeping pollutant emissions low. Engine control systems [Reif12, Schaeuffele13] are typical cyber-physical systems (Sect. 3.13), i.e., software controls physical machinery. The structure of an engine control system is shown in Fig. 12.33. The key element in Fig. 12.33 is the engine management controller, a powerful microprocessor with the engine management software. The engine management controller receives input from the engine via sensors. Input signals include the state of the accelerator pedal, the angle of the crankshaft, the air temperature, the air humidity, etc. The input signals are processed and generate the output signals, i.e., the engine control signals. Output signals include the pulses for the injection system, the signal for the fuel pump, the control of the turbocharger, etc. The interesting part of Fig. 12.33 is the box "parametrization": It represents a large set of parameters, in the form of curve families, parameter tables, etc. The set of
[Figure 12.33 shows the engine management controller (microcontroller, software, memory) with its communications interface, sensor inputs from the engine, actuator outputs to the engine, and a parametrization block holding parameter tables and curve families.]

Fig. 12.33 Engine control system
parameters is part of the new engine development process, where the parameter set is adjusted to provide the desired operation and emissions of the new engine type (without modification of the code). This example shows the power of the parametrization technique.

Another possibility for adapting the behaviour of software without modifying the code is business rules (Definition 12.20, [Ross03, Boyer11, Witt12, VonHalle01, Chisholm03]). Business rules are formulated externally to the software and are executed at run-time, often in a business rules engine (BRE).

Definition 12.20: Business Rule
A business rule is a formalized statement that describes a step in a business policy or procedure. Business logic describes the sequence of operations that is associated with data in a database to carry out the rule. A business rules engine (BRE) is a software component that allows non-programmers to add or change business logic in a business process management (BPM) system via business rules. (https://searchmicroservices.techtarget.com/definition/business-rules-engine-BRE)

Business rule engines are used mainly in commercial applications as part of the overall software-system. They have the additional advantage that the formulation of business rules, using formal, specific business rule languages (see, e.g., http://wiki.ruleml.org/index.php/RuleML_Home), can be done by business analysts and does not need trained programmers.
12.9.3 Business Case for Reuse

Quote: "Reusable software requires considerably more effort in planning, design, development, documentation, and implementation."
Reuse has costs and benefits. The business case for reuse is shown in Fig. 12.34. Developing a piece of functionality requires a certain amount of development time and development cost. Because the functionality has only a one-time usage, its value remains constant over time. When developing reusable functionality, the initial cost and time to market are (considerably) higher. Because the software can be reused, its value rises with each reuse, since most of the development time and cost are saved each time. Developing reusable software calls for a certain foresight: Deciding what to make reusable and how to implement reusability must be based on anticipated future needs.

Quote: "The best way to bootstrap reuse is to get management to invest in the development of reusable assets". (Jeffrey S. Poulin, 1997)
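The trade-off can be made concrete with a small break-even calculation. All numbers below are assumptions chosen for illustration (they are not figures from this book): building the reusable version is taken to cost 1.5 times a one-off implementation, and each later reuse only the integration effort.

```python
# Illustrative break-even for reuse; all cost figures are assumptions.
ONE_OFF_COST = 100.0          # cost units for a single-use implementation
REUSABLE_BUILD_COST = 150.0   # assumed up-front cost of the reusable version
INTEGRATION_COST = 20.0       # assumed cost per additional reuse

def cumulative_cost(n_uses: int, reusable: bool) -> float:
    # Total cost after n_uses deployments of the functionality.
    if reusable:
        return REUSABLE_BUILD_COST + (n_uses - 1) * INTEGRATION_COST
    return n_uses * ONE_OFF_COST

# Smallest number of uses at which the reusable asset is cheaper overall.
break_even = next(n for n in range(1, 100)
                  if cumulative_cost(n, True) < cumulative_cost(n, False))
print(break_even)  # 2  (150 + 20 = 170 < 200 at the second use)
```

Under these assumed ratios the reusable asset already pays off at its second use; with a higher up-front cost or lower reuse frequency the break-even point moves out, which is why the decision must be based on anticipated future needs.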
[Figure 12.34 contrasts (a) a one-time-use project, where development cost and development time are lower but the value remains constant, with (b) a reusable-software project, where development cost and development time are higher but the value rises with each reuse.]
Fig. 12.34 Business case for reuse
In order to prove the business case for reuse in an organization, reuse must be measured. Reuse measurement [Poulin79, Murer11] can be done in many different ways. The most significant successes in reuse have been realized via service-oriented architecture, which makes heavy reuse of services. An example of the application of business rules is given in Example 12.15.

Example 12.15: Business Rules
A car rental agency has implemented its application logic as business rules. One rule is:

Verbal expression: "A car with accumulated mileage greater than 5,000 since its last service must be scheduled for service."

Formal expression as an executable business rule:

If Car.miles-current-period > 5000
then invoke Schedule-service(Car.id)
End if
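The rule above can also be sketched in executable form. The snippet below is a minimal stand-in for a business rules engine, not a real BRE product; the class names, the `run_rules` helper, and the test data are all invented for this illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Car:
    id: str
    miles_current_period: int

# A rule is data: a condition plus an action, kept outside the core
# application logic so it can be changed without touching that logic.
@dataclass
class Rule:
    condition: Callable[[Car], bool]
    action: Callable[[Car], None]

scheduled: list[str] = []

def schedule_service(car: Car) -> None:
    scheduled.append(car.id)

# "A car with accumulated mileage greater than 5,000 since its last
#  service must be scheduled for service."
service_rule = Rule(
    condition=lambda car: car.miles_current_period > 5000,
    action=schedule_service,
)

def run_rules(rules: list[Rule], cars: list[Car]) -> None:
    # A trivial evaluation loop, standing in for a real rules engine.
    for car in cars:
        for rule in rules:
            if rule.condition(car):
                rule.action(car)

run_rules([service_rule], [Car("A-1", 6200), Car("B-2", 1200)])
print(scheduled)  # ['A-1']
```

The point of the separation is that the rule (the threshold, or the whole condition) can be replaced at run-time without redeploying the surrounding application.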
12.9.4 Context for Reuse

Successful reuse is difficult, although many organizations have implemented highly effective component and service reuse strategies. Long-term, successful reuse requires a specific context consisting of:
• A committed management;
• A farsighted reuse strategy;
• An adequate reuse organization;
• Competent software architects (on all levels).

Principle 12.8: Reuse and Parametrization
1. Use only the black box (noninvasive) concept to build reusable software. Add new requirements via versioning;
2. Whenever possible, configure the reusable modules via parameters or business rules (loaded or initiated at run-time);
3. Install and consistently use a configuration management system to control the distribution of reusable, versioned software assets;
4. Provide the four elements of successful reuse: committed management, a reuse strategy, a reuse organization, and competent software architects;
5. Adapt your software development process to produce reusable software.
12.10 ρ-Architecture Principle #9: Industry Standards

12.10.1 Introduction

For many centuries, products were manufactured by a few persons in the same room. When industrialization started, this changed: Different groups of people or several companies each contributed by manufacturing a part of the product. This required agreements to make sure that the individually manufactured parts fit each other. When the number of manufacturers grew, a new method of enforcing cooperation appeared: the industry standard (Definition 12.21). One of the first industry standards was the "Deutsche Industrie Norm [DIN] 1", published on March 1, 1918 (https://www.din.de/en).

Definition 12.21: Industry Standard
A standard is:
• A formally established norm for (technical) systems;
• A document which establishes uniform (engineering or technical) criteria, principles, methods, processes, and practices.

Today, industry standards are indispensable in all areas and on all levels of technology. A rich and extensive literature on standards and standardization exists (e.g., [Schneiderman15, Uslar13, Russell14, Jakobs06, Jakobs00, DeVries99, Loshin99]). Here we focus on information technology standards.
12.10.2 Types of Standards

Four types of standards exist (Table 12.4):

Table 12.4 Types of standards

Industry-specific standard: Developed and maintained by consortia of industrial organizations working in a specific field. Examples:
• AUTOSAR (www.autosar.org): A worldwide partnership which develops the standardized software framework for intelligent mobility;
• IMA (https://standards.globalspec.com/std/2018378/rtca-do-297): Integrated modular avionics, real-time computer networked airborne systems;
• RSSB (https://www.rssb.co.uk/): British railway industry standardization consortium.

Standards association: Developed and maintained by overarching associations in order to develop, e.g., fundamental technologies or sample processes. Examples:
• ISO (https://www.iso.org), International Organization for Standardization: Large, international standardization body;
• ITU (https://www.itu.int), International Telecommunication Union: The premier global forum through which parties work towards consensus on a wide range of issues affecting the future direction of the ICT industry;
• IETF (https://www.ietf.org/), Internet Engineering Task Force: Premier Internet standards body, developing open standards through open processes.

Government standard: Developed and maintained by government agencies to enforce specific quality properties of systems or to guide certification. Examples:
• DSGVO (https://dsgvogesetz.de/), Europäische Datenschutz-Grundverordnung (European General Data Protection Regulation, GDPR);
• FAA (https://www.faa.gov/), US Federal Aviation Administration: Technical, operational, and certification standards for the aviation industry;
• CC (https://www.commoncriteriaportal.org), Common Criteria: Driving force for the widest available mutual recognition of secure IT products.

Company standard: Developed and maintained by a specific company as an in-house standard for internal use. Examples:
• Domain model;
• Business object model.
1. Industry-specific standards: These standards are developed and maintained by consortia of industrial organizations working in a specific field, such as the road vehicle, aerospace, or train industry;
2. Association standards: These standards are developed and maintained by overarching associations to develop, e.g., fundamental technologies, reference architectures, or sample processes;
3. Government standards: These standards are established and maintained by government agencies to enforce specific quality properties of systems, such as security or safety, or to state requirements for certification;
4. Company standards: In some cases, no suitable standard is available. In such situations, an organization is forced to develop its own in-house standards for internal use.
12.10.3 Value of Standards

Standards are of high value because:
• They are reliable carriers of proven knowledge in many areas, such as architecture, technology, processes, and certification;
• They enable interoperability and cooperation of organizations;
• They contribute to the quality of products and services, such as safety, security, etc.;
• They facilitate the certification of products and services.

Drawbacks of the standardization process are that it takes a long time to reach agreement and publication, and that in some standardization areas a fierce competition between major market players takes place, resulting in different standards for the same topics. An excellent example of a powerful, highly successful standard is digital certificates (Example 12.16).

Quote: "A very simple data structure (= X.509) relating a name to a key becomes the critical piece of a very complex puzzle when applied to electronic commerce in general". (Jalal Feghhi, 1999)
Example 12.16: Digital Certificates
The concept of a digital certificate [Feghhi98] enabled the whole world of secure electronic commerce and signified the dawn of a new era with far-reaching applications. The digital certificate is the key element of the public key infrastructure (PKI) [Austin01], which is the foundation of most electronic transactions.
[Figure: structure of an X.509 digital certificate, with fields such as the version and the certificate serial number, signed by the certification authority.]
Eric Evans showed in his pioneering book of 2004 [Evans04] that the conceptual gap between the people working in the business domains and the engineers working in the IT domains contributed massively to the creation of accidental complexity. In fact, right from the beginnings of business support by IT, the two populations used different terminology and different concepts and exhibited dissimilar thinking: Where business people talked about "mortgage", "account", "wheel rotation sensor", or "lateral acceleration", the IT people talked about "classes", "subclasses", "inheritance", or "interfaces". Evans proposed to align business and IT via domain-driven design, which later evolved into domain software engineering (Definition 12.8). The key idea was to "force" the IT engineers to adopt the business terminology and concepts and to use them as directly as possible in their code: The classes then represent business objects with all their properties and behaviour. The conceptual bridges between the business areas and IT are the domain model (Sect. 12.13.4) and the business object model.

Definition 11.8 (Repetition): Domain Software Engineering
Domain Software Engineering (DSE) is an architectural methodology for evolving a software-system that closely aligns with business domains.

Quote: "One thing we can be certain of is that information technology professionals will be spending their time and efforts on creating languages and the tooling to support their use." (Anneke Kleppe, 2009)
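The key idea, classes that carry the business vocabulary directly, can be sketched in a few lines. The domain objects, attribute names, and the interest rule below are invented for this illustration; they are not taken from Evans' book.

```python
from dataclasses import dataclass

# Domain-driven style: class and attribute names come straight from the
# business vocabulary ("account", "mortgage"), not from technical jargon.
@dataclass
class Account:
    owner: str
    balance: float

@dataclass
class Mortgage:
    account: Account
    principal: float
    annual_rate: float

    def monthly_interest(self) -> float:
        # The rule reads almost as a domain expert would state it:
        # "the monthly interest is the principal times the annual rate,
        #  divided by twelve" (an illustrative simplification).
        return self.principal * self.annual_rate / 12

m = Mortgage(Account("Alice", 5000.0), principal=240000.0, annual_rate=0.03)
print(round(m.monthly_interest(), 2))  # 600.0
```

A business analyst can read this code and check the rule, which is precisely the alignment between business and IT that domain-driven design aims for.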
An interesting development which followed domain-specific modeling is domain-specific languages (DSLs, Definition 12.33, [Fowler10, Voelter13, Kleppe09, Combemale16]). DSLs are created specifically for targeted application domains and contain a significant amount of (business) domain knowledge, as well as many constructs which help model and generate behaviour.

Definition 12.33: Domain-Specific Language (DSL)
A domain-specific language is any (formal) language that is created to describe and create software-systems. DSLs are unique because of their focus on a certain application domain. A significant part of the domain knowledge is, therefore, included in a DSL. (Anneke Kleppe 2009)
12.13 Formal Languages

A number of formal languages (Definition 12.34) have been developed to build provably correct systems. The most used are the Z language [Jacky97, Diller94], the general-purpose language B [Abrial96, Kernighan99], the event-oriented language Event-B [Abrial10], and specialized languages such as SPARK for embedded systems development [McCormick15]. Due to their sound foundation on a logic
12.13 Formal Languages
281
system (predicate logic and set theory), the consecutive steps of the development can be proven to be correct and consistent.

Definition 12.34: Formal Language
A formal language is a method and notation for specifying, designing, and coding software-systems. It supports the central aspects of the software life cycle: the technical specification, the design by successive refinement steps, the layered architecture, and the executable code generation. (Jean-Raymond Abrial 1996)

Formal languages represent a promising approach to dependable software construction. However, after a high point in the last decades of the previous century, e.g., in safety-critical software development for railway systems, the use of this approach faded. Today, unfortunately, this method of software development seems almost extinct.
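The flavour of such specifications can be hinted at with explicit pre- and postconditions. The sketch below uses Python assertions merely as a stand-in for the predicate-logic notation of Z or B; it illustrates the style of contract-based reasoning only, not the actual B method, and the bank-account operation is invented for the illustration.

```python
# Contract-style sketch: preconditions and postconditions made explicit,
# in the spirit (not the notation) of formal languages such as Z or B.
def withdraw(balance: int, amount: int) -> int:
    # Precondition, as it would appear in a formal specification:
    #   amount > 0  AND  amount <= balance
    assert amount > 0 and amount <= balance, "precondition violated"
    new_balance = balance - amount
    # Postcondition: the invariant balance >= 0 is preserved, because
    # new_balance = balance - amount >= 0 follows from amount <= balance.
    assert new_balance >= 0, "postcondition violated"
    return new_balance

print(withdraw(100, 30))  # 70
```

In a true formal development the postcondition would not be checked at run-time but proven once, for all inputs satisfying the precondition; that proof obligation is what distinguishes the formal approach from ordinary defensive programming.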
12.13.1 Architecture Description Languages

Architecture Description Languages (ADLs) are formalized languages created to describe and analyze the structure and properties of hardware-software systems. Mostly, they use a textual representation with a strong syntax and support for the semantics of the application domain. A number of ADLs have been created in the past decades: • Some of them standardized by industry consortia, such as AADL [Delange17, Feiler12], EAST-ADL [Blom12], or ArchiMate© [https://publications.opengroup.org/c162]; • Others developed by companies and used in their proprietary products (such as SimuLink [https://ch.mathworks.com/de/products/simulink.html] or Wolfram System Modeler [http://www.wolfram.com/system-modeler/]). Definition 12.35: Architecture Description Language Architecture Description Languages (ADLs) are computer languages describing the software and hardware architecture of a system. The description may cover software features such as processes, threads, data, and subprograms as well as hardware components such as processors, devices, buses, and memory. The connections between the components can be described in logical as well as physical terms. (Stefan Björnander 2011) The use of ADLs is increasing, and many application domains are using them more and more to achieve higher software quality, primarily by analyzing the properties of the models before implementation starts. Acceptance and broad use of ADLs is rising in many industries.
12 Architecture Principles for Changeability
Example 12.21: AADL (Architecture Analysis and Design Language) Quote: “AADL is a technical foundation for architecture-centric model-based software-systems engineering.” (Peter H. Feiler, 2012)
AADL [Delange17, Feiler13] has been standardized by the SAE (https://www.sae.org/). AADL was initially developed for avionics and was formerly known as the Avionics Architecture Description Language. AADL models both the software and the hardware architecture of an embedded, real-time system. Its modeling constructs cover hardware (such as processors, buses, ports), software (such as components, relationships), properties (such as latency, execution times, threads), and analysis constructs (such as end-to-end flows). An example of the textual representation of an AADL component declaration is shown in Table 12.5. The AADL language is precise, rich, and (reasonably) understandable by humans. AADL allows the modeling and analysis of very large systems—specifically real-time and safety-critical systems [Dissaux14].
12.13.2 Model Explosion Quote: “One of the most difficult skills to master in modeling is how to exclude information from a model or a representation.” (Benjamin A. Liebermann, 2007)
This section will explore the topic “model explosion”: Model explosion refers to the fact that models of real systems tend to become colossal in the number of elements and relationships. This very often makes a representation—e.g., in graphical form—impossible.

Table 12.5 AADL component-type declaration
Component Type Declaration

process control
features
    input_speed: in data port speed_data;
    toggle_mode: in event port;
    throttle_cmd: out data port throttle_data;
    error_set: feature group all_errors;
flows
    speed_signal_path: flow path input_speed -> throttle_cmd;
properties
    Period => 20 ms;
end control;
© Feiler/Gluch (ISBN 978-0-321-88894-5), 2013
The understanding of such giant models is also difficult. Three valuable techniques are available to manage model explosion: • Partitioning; • Hierarchical models, model refinement (levels of detail); • Views. Partitioning means the subdivision of large models into cohesive submodels: This requires a careful grouping of related model elements into containers (e.g., using the domain model, Sect. 12.13.4), which can then be modeled and represented independently. This categorization is called horizontal partitioning because all the submodels are on the same hierarchical level of modeling. Another method to reduce the apparent complexity of models is model refinement: The model is split into hierarchical levels, the topmost level being the overview model. The levels below successively show more details (refinement). This hierarchical breakdown is called vertical partitioning because the submodels are arranged in a vertical hierarchy. Another very useful way to improve the readability and representation of models is to generate views (Vertical Architecture Layers). Views partition a model into stakeholder concerns. The model contains the full set of elements, relationships, and properties. However, a view extracts—only for presentation—the concerns of a specific stakeholder, such as the security or safety engineer. In modeling, the model elements are annotated with their affiliation to one or more specific stakeholder concerns.
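The view mechanism can be sketched in a few lines: model elements carry annotations of stakeholder concerns, and a view is a presentation-only filter (element names and concerns here are invented):

```python
# Sketch: model elements annotated with stakeholder concerns;
# a "view" extracts only the elements relevant to one stakeholder.
model = [
    {"element": "LoginService",   "concerns": {"security", "performance"}},
    {"element": "AuditLog",       "concerns": {"security", "compliance"}},
    {"element": "BrakeActuator",  "concerns": {"safety"}},
    {"element": "ReportRenderer", "concerns": {"performance"}},
]

def view(model, concern):
    """Presentation-only filter: the full model stays untouched."""
    return [e["element"] for e in model if concern in e["concerns"]]

print(view(model, "security"))  # → ['LoginService', 'AuditLog']
```

The security engineer sees two elements instead of the whole model, yet nothing is removed from the model itself—only from the presentation.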
12.13.3 Structural Models Structural Models represent the structure of the system, i.e., the parts and their relationships. A modeling language with a graphical representation—such as UML or SysML (Object-Oriented)—is well suited for these models. As an illustration, Example 12.22 shows a possible structural UML-model of the top level of a system-of-systems (SoS). Example 12.22: Structural Model of a System-of-Systems
The expressivity and understandability of structural models are shown in Fig. 12.47: It represents a possible top-level model for a system-of-systems.
Definition 11.25 (Repetition): Domain Model A domain model is a high-level conceptual grouping/categorization of all entities, functionalities, and their properties into disjunct containers, called (business) domains. The rule for categorization is cohesion, i.e., all entities and functionality supporting a specific business area are assigned to the same domain.
(Figure content: a UML class diagram relating the Environment, the Users, the System-of-Systems (SoS), the Mission Owner (MO) who implements the Mission (M), the Coordinator who enforces the Cooperation Standards (CS), the Constituent System Domains (CSD) with their Cooperation Mechanisms and Cooperation Contracts, the Constituent Systems (CS) which deliver Capabilities built from Functions (F), the Processes (Proc), the global (synchronized) Time, and the Function Owners (FO), with multiplicities on each association.)
Fig. 12.47 UML model for a system-of-systems (SoS)
12.13.4 Domain Model

We have encountered the notion of a domain model [Godinez10, McGovern03, Lieberman06] several times in previous sections (Definition below). Domain models are among the most useful and valuable models in IT architecture. Their impact: • Significant avoidance of redundancy creation in all phases of the software-systems development process; • Management of essential complexity and massive reduction of accidental complexity (Definition 12.36); • Excellent instrument for business–IT alignment as a commonly understood and accepted language (Definition 7.2); • Powerful categorization instrument for discussions about the application landscape (note that domain models can be created for all the horizontal layers of the architecture framework (Fig. 3.8); they are also powerful for the technical infrastructure). The domain model is best explained by an example (Example 12.23). Example 12.23: Domain Model for a Financial Institution
Global financial institutions operate IT megasystems [Murer11, Seese16, Keyes00]. Their governing organization is huge and fragmented. Dependability properties of the IT system, such as integrity, availability, security, confidentiality, are of utmost importance. The financial institution and its customers depend 100% on the IT system.
To assure the dependability properties, models have been used for a long time, both as an important IT management tool and for the guidance of the software development process. The domain model—as the top model for the application landscape—plays a decisive role. A representative domain model [Murer11] is shown in Fig. 12.48: The top level contains 20 domains (organized in 7 areas), covering the complete operation of the financial institution. The domain model is a consistent, cohesive functional categorization scheme and should not be influenced by management structures or by reorganizations—in fact, the domain model will survive managerial and organizational changes, even mergers & acquisitions! The domain model soon becomes enormous (Sect. 12.13.2): The number of individual functions and data items easily goes into the 10,000s. Therefore, the domain model has to be hierarchically subdivided into subdomains (Fig. 12.49). Each subdomain includes a redundancy-free, disjunct list of functions and data (Table 12.6). The full categorization of the functionality and data of the financial institution (extract in Table 12.6) forms the foundation of the successful, correct, and redundancy-free management of functionality and data/information throughout the complete software-systems development process. Note that the textual representation in Table 12.6 is only the first step: A representation as a taxonomy (Sect. 12.12.6) or preferably an ontology (Sect. 12.12.7) should follow.
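The requirement that domains be disjunct, redundancy-free containers can be checked mechanically. A minimal sketch (domain and function names are invented; the duplicate assignment is deliberate):

```python
# Sketch: domains as disjunct containers of functions; a simple check
# flags functionality that is redundantly assigned to several domains.
domains = {
    "Payments":           {"execute transfer", "validate IBAN"},
    "Customer & Partner": {"open relationship", "update address"},
    "Trading":            {"place order", "validate IBAN"},  # redundancy!
}

def redundant_functions(domains):
    seen, duplicates = {}, set()
    for domain, functions in domains.items():
        for fn in functions:
            if fn in seen and seen[fn] != domain:
                duplicates.add(fn)
            seen[fn] = domain
    return duplicates

print(redundant_functions(domains))  # → {'validate IBAN'}
```

In a real organization such checks would run against the full function/data catalog of Table 12.6, not a toy dictionary.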
(Figure content: 20 domains organized into 7 areas — 1: Partners & Persons; 2: Finance, Investment & Sales; 3: Trading and Markets; 4: Cash and Asset Operations; 5: Communications & Collaboration; 6: Accounting, Controlling and Reporting; 7: Enterprise Common Services — including domains such as Customer & Partner (CUS), Wealth Management & Advisory (WMA), Financial Instruments, Research & Market Data (FIN), Trading (TRD), Payments (PAY), Settlement and Clearing (SCL), Custody (CY), Corporate Actions (CA), Single Accounts (SA), Credits and Syndication (CRS), Client Communication, Street Side Interfaces (SSI), Business Partner Applications (BPA), Enterprise Content Management (ECM), Regulatory, Risk and Liquidity (RRL), Financial Accounting (FA), Accounting Control (ACO), Product Control (PRC), Logistics (LOG), and Basic Facilities (BAS).)
Fig. 12.48 Domain model for a financial institution
(Figure content: a Domain decomposed into Subdomains, with applications (code + data) assigned to the subdomains.)
Fig. 12.49 Subdomains and application/data assignments

Table 12.6 Functionality listing of subdomain 1.1.9
Subdomain 1.1.9: Analytics and Intelligence
1.1.9-1 Maintain the intelligence and rules to enable and support the controlled receipt and distribution of trade and non-trade data and information from/to clients
1.1.9-2 Co-ordinate client relationship activities, instruction gathering (client operational attributes), and research recommendations for the customers
1.1.9-3 Assign and maintain Sales force to Client Coverage roles and perform analytics on coverage information
1.1.9-4 Manage client marketing campaign and advertisement distribution
1.1.9-5 Enable sales force to do client account planning and review on key performance metrics such as revenue target and visit target
12.13.5 Business Object Model

The business object model ([Murer11, Godinez10, McGovern03], Definition 12.28) is the most effective instrument for business–IT alignment (Sect. 11.4). It introduces and defines the terminology familiar to the business areas and provides a very strong structuring impact on the IT implementation. Definition 11.27 (Repetition): Business Object Model The business object model is a (semi-formal, in most cases hierarchical) representation of all the business objects, their properties, their functionality, and their relationships in the enterprise.
As seen above (Sect. 12.13.2), models for real systems become very large, with possibly 10,000s of business objects and even more relationships. Therefore, these models must be organized by refinement, i.e., by hierarchical layering adding more and more detail in the lower layers. Refinement is shown in Example 12.24, Example 12.25, and Example 12.26 for a financial institution. Example 12.24: Top Level Business Object Model for a Financial Institution
The top level (= enterprise level) of the business object model for a financial institution [Murer11] contains 11 business objects and is depicted in Fig. 12.50. Note that—although the model looks impressively simple—a tremendous effort by many people went into developing it. Example 12.25: Business Object Refinement Level
The first hierarchical refinement level of the top-level model of Fig. 12.50 is shown in Fig. 12.51: The top-level business object “Party” was broken down—again hierarchically—into 13 2nd-level business objects. For understandability, part of the refinement of “Agreement” is also included.
(Figure content: the 11 enterprise business objects (eBO) — Party, Agreement, Agreement Portfolio, Request, Term Condition, Operation, Product, Economic Resource, Financial Instrument, Document/Report, and Organization Entity — connected by relationships such as “obligates/entitles”, “contains”, “issues/acts on”, “initiates/results from”, “transfers/transforms”, “specifies”, “offers”, “produces”, “aggregates”, “owns/controls”, and “is contractual base for”.)
Fig. 12.50 Top level business object model for a financial institution
(Figure content: the enterprise-level (eBO) business objects “Party” and “Agreement” refined into domain-level (dBO) business objects such as Partner, Partner Group, Partner Context, Dossier Context, Segmentation, Dossier, Partner Agreement, Instruction, Address, Addressing Instruction, Compliance, Various Data, Servicing, and Contact.)
Fig. 12.51 First refinement level of the business object model
Example 12.26: Business Object Structure
The lowest level of refinement is the individual business objects, i.e., their form and content. Figure 12.52 shows the generic structure of the business object “Address”. It lists the attributes and the operations which can be executed on the business object. This business object template is then instantiated (= assigned to a real client) and implemented in the code, e.g., as a class or EJB. The lower part of the business object structure contains the operations (note that this is a simplified example: In reality, error conditions, error processing, pre- and postconditions (Example 12.10), and other information are necessary in the full definition of the business object). Once all business objects in the enterprise are defined, they can be instantiated (in the languages of choice) and used in the business processes as shown in Fig. 12.53. The business processes are implemented by the business logic. This architecture principle has introduced a number of modeling paradigms and languages. The presentation is not exhaustive but covers the currently (2018) most important paradigms and languages. It is the responsibility of the architecture department in the organization to evaluate, choose, install, and enforce a suitable set of modeling suites. The principle is formalized in Principle 12.11.
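As a sketch of how such a business object template might be instantiated in code (simplified: no error conditions, pre-/postconditions, or persistence; the attribute and operation names are assumptions, not the book's definition):

```python
# Sketch: the business object "Address" as a class with attributes
# (upper part) and operations (lower part), then instantiated.
from dataclasses import dataclass

@dataclass
class Address:
    # Attributes
    street: str
    city: str
    postal_code: str
    country: str

    # Operations on the business object
    def format_label(self) -> str:
        return f"{self.street}, {self.postal_code} {self.city}, {self.country}"

    def relocate(self, street: str, city: str, postal_code: str) -> None:
        self.street, self.city, self.postal_code = street, city, postal_code

addr = Address("Main St 1", "Zurich", "8001", "CH")
addr.relocate("Bahnhofstrasse 2", "Basel", "4051")
print(addr.format_label())  # → Bahnhofstrasse 2, 4051 Basel, CH
```

A full definition would add the error conditions, pre-/postconditions, and other information mentioned above.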
Fig. 12.52 Business object structure for “address”
(Figure 12.52 content: the business object “Address” with its attributes and operations. Figure 12.53 content: a trigger starts a business process which uses business objects — each with attributes, operations, and data — and delivers a result.)
Fig. 12.53 Utilization of business objects in business processes
Principle 12.11: Formal Modeling
1. Install and foster a modeling culture in your organization, i.e., teach the value of models and enforce their standardized use; 2. Agree on a set of modeling techniques (and tools) and formulate them as binding internal standards; 3. Model as many parts of your IT system as possible (organization and skills constraints may limit your choices); 4. Use the highest possible degree of formalization;
5. Use industry-standard modeling instruments & tools (without any vendor-specific extensions); 6. Treat models as long-term, highly valuable assets in your company and maintain them in a repository; 7. Keep models complete and up-to-date. They shall represent the current state of your IT system at all times; 8. Involve the business units (business analysts, product line designers) during the modeling process.
12.14 ρ-Architecture Principle #12: Complexity and Simplification

12.14.1 Complexity

Complexity (introduced in Sect. 3.3) is a strange and fascinating phenomenon: Although we find it widespread in all areas of science, philosophy, society, technology, etc. [Johnson10, Mahajan14, Downey18, Kopetz19], it is generally not well understood. What we experience every day is that complex systems are difficult to understand, to maintain, to evolve—and in some cases also to use. In addition, highly complex systems tend to be more susceptible to faults, failures, errors, attacks, and accidents. On the other hand, complex systems are required to solve complex problems—of which there are enough in our world of cyber-physical systems and software universes. Complexity—or rather dealing with complexity—is, therefore, an essential issue in software engineering. Managing complexity can make or break a system [Jeffries15, Brand13, Sessions08, Sessions09]. Many complexity metrics have been introduced in the literature. The most intuitive complexity metric is based on the number of parts, the number of relationships, and the complexity of the individual parts and relationships (Fig. 6.1, [Flood10]), as shown in Example 12.27. Example 12.27: System Complexity Metric
Figure 12.54 shows systems with growing complexity in the number of parts and the number of relationships. For simplicity it is assumed that each part has only two states (which is a gross oversimplification). The relevant numbers for the size of the state space are shown. The complexity factor is calculated as the product of the three columns. The growth of the complexity factor is scary.

Software Complexity

Complexity (Sect. 3.3), change (Sect. 3.4), and uncertainty (Sect. 3.5) are the three devils of systems engineering (Sect. 3.2). Of these three, complexity—i.e., the rising complexity of our systems over time—may be the most dangerous. A relentless and accelerating flow of new requirements from users, business, environment, and the law forces the development of more and more complex systems.
(Figure content: systems of 2 to 6 fully interconnected parts, each part with two states, and the resulting numbers:)

Number of Parts | Number of Relationships | State Space | Complexity Factor
2 | 1 | 4 | 8
3 | 3 | 8 | 72
4 | 6 | 16 | 384
5 | 10 | 32 | 1'600
6 | 15 | 64 | 5'760
Fig. 12.54 System complexity metric
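The rows of this metric can be reproduced with a few lines of code: for n fully interconnected parts, the number of relationships is n(n−1)/2, the state space with two states per part is 2^n, and the complexity factor is the product of the three columns:

```python
# Reproducing Example 12.27: n fully connected parts, two states each.
def complexity_row(n_parts):
    relationships = n_parts * (n_parts - 1) // 2   # fully connected graph
    state_space = 2 ** n_parts                     # two states per part
    factor = n_parts * relationships * state_space # product of the columns
    return n_parts, relationships, state_space, factor

for n in range(2, 7):
    print(complexity_row(n))
# prints (2, 1, 4, 8) up to (6, 15, 64, 5760)
```

The exponential state-space term dominates: it is the reason the factor grows so frighteningly fast.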
Quote: “Model-driven engineering (MDE) is the discipline that focuses on how modeling can be used to better manage essential complexity and reduce accidental complexity associated with developing software-intensive systems.” (Benoit Combemale, 2017)
Complexity is generated by adding new functionality to existing systems. Two types of complexity must be differentiated (Definition 12.36): essential complexity and accidental complexity. Essential complexity is inherent in the problem to be solved: It cannot be reduced (except by reducing the requirements). Implementing essential complexity requires a minimum of software size/complexity. Accidental complexity, however, is unnecessary complexity added during the development process. Accidental complexity may be generated by a lack of conceptual integrity, carelessness, bad architecture, the introduction of redundancy, and many other causes. Quote: “Containing complexity growth requires continuous and substantial architectural intervention and strong management commitment.”
Complexity is a dangerous property of an IT system: It creeps up, growing incrementally over a long time. It occurs locally in many different specifications, programs, and interfaces, but its impact is global. Complexity may grow to such a state that the IT system becomes unmanageable or commercially unviable (Fig. 5.7). The serious management of complexity is, therefore, an essential task during the development and evolution process of any system.
Definition 12.36: Complexity Complexity is that property of an IT system which makes it difficult to formulate its overall behavior, even when given complete information about its parts and their relationships. Essential complexity: Is rooted in the problem to be solved. Nothing can remove it. Represents the inherent difficulty. Accidental complexity: Is caused by solutions that we create on our own or by impacts from our environment.
12.14.2 Complexity Tracking

Complexity in the system implementation can be visualized as in Fig. 12.55. The codebase of the software-system consists of two types of code: code implementing essential complexity and code fragments implementing accidental complexity. Note that Fig. 12.55 is only a visualization: Essential and accidental code cannot be distinguished in the implementation! Any extension of the system (Fig. 12.55) will in many cases introduce both essential code or data (desired) and accidental code or data (undesired). The objective of complexity management is to optimize essential code/data and to minimize accidental code/data. Quote: “The more lines of code there are, the more complicated something gets. The more complicated something gets, the harder it is to understand it.” (https://stackoverflow.com)
(Figure content: the existing system codebase and the system extensions, each consisting of essential complexity code and accidental complexity code.)
Fig. 12.55 Complexity generation visualization
In Fig. 12.55 a complexity metric was implicitly assumed: The size of the code, measured as the number of Source Lines of Code (#SLOCs). Source lines of code measures the size of a computer program by counting the number of lines in the text of the program’s source code, without counting nonfunctional lines, such as titles, comments, versioning, etc. #SLOCs is not the best metric [Singh17]: In fact, for the Managed Evolution we used #UCP or #FP (Sect. 5.2). However, #SLOCs is a pragmatic metric and can easily be implemented and automated. Note that different programming languages require different #SLOCs to implement the same functionality. Conversion factors exist in the literature [Boehm00].
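A minimal #SLOC counter in the spirit described above — count source lines, skipping blank and comment-only lines — might look as follows (a sketch for Python-style `#` comments only; real tools handle block comments, strings, and per-language syntax):

```python
# Minimal #SLOC sketch: counts non-blank, non-comment-only lines.
def count_sloc(source: str) -> int:
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1  # a functional source line
    return count

sample = """
# configuration loader
import json          # stdlib only

def load(path):
    with open(path) as fh:
        return json.load(fh)
"""
print(count_sloc(sample))  # → 4
```

Tracking this number per module over releases is one pragmatic, automatable way to watch complexity growth, even though #SLOCs is, as noted above, not the best metric.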
12.14.3 Complexity Management

Successfully managing complexity is fundamentally a process issue: Its objective is to optimize essential complexity and to minimize (or eliminate) accidental complexity in all phases of the system development and evolution. Two processes must be implemented in the organization (Fig. 12.56): 1. Introduce a simplification step into the development process; 2. Periodically carry out complexity-reduction architecture programs.
(Figure content: (a) the development process — Reqs, Specs, Arch, Design, followed by a Simplify step with a review against a checklist, then Build and Test; (b) a periodically executed re-architecture program — analyze, assess, act: eliminate, refactor, replace, redesign.)
Fig. 12.56 Complexity management processes
12.14.4 Active Simplification Step in Development Process

The upper part of Fig. 12.56 shows the simplification step in the system development process. It is located after the system or the system extension has been designed, i.e., when the solution proposal and its integration into the existing system are ready for implementation. The simplification step consists of an in-depth review executed by a very experienced team with excellent knowledge both of software architecture and of the existing system. It is preferably led by the domain architect, i.e., the architect responsible for the domain (Sect. 12.13.4). The review is guided by a checklist (Table 12.7). The results of the review are recommendations for simplification in all steps of the development process. Note that the review results may impact even the requirements: If a specific requirement results in a high (essential) complexity of the solution, a discussion with the business unit or the product owner must be sought to reduce or adapt the requirements (this is often possible and benefits both partners). Note that the simplification step must be understood and backed by management.

Table 12.7 Simplification checklist

Optimize essential complexity:
• Identify the most complex parts of the requirements—and their impact on the proposed solution: Can they be simplified (without compromising the business objectives)?
• Does the proposed solution fit into the domain and business object model?
• Can standard or COTS (Commercial Off-The-Shelf) products be used instead of an own development?
• Are all the nonfunctional requirements, such as safety, security, performance, business continuity, etc. (vertical architecture layers) well specified?—and not overspecified?
• Are all manual process steps automated and eliminated as far as possible or reasonable?
• Are the recovery and rollback mechanisms optimized? Are they using the organization’s standard procedures?
• Are the legal & compliance requirements clearly defined, approved, and planned correctly?

Minimize accidental complexity:
• Is any redundancy introduced into the system? Is any unmanaged redundancy eliminated? Is any managed redundancy justified and correctly designed? Note: valid for all artifacts, i.e., models, specifications, business objects, …
• Does the proposed solution offer the least possible complexity? Are other architectural or design choices with less complexity possible?
• Is the implementation technology (implementation language, team, infrastructure technology, run-time platform, …) well suited for the solution? Are there alternatives?
• Are all horizontal architecture principles (changeability) respected? Are any deviations well documented and justified?
• Are the impacts of the system extensions on the organization’s processes fully known and acceptable?
12.14.5 Complexity Reduction Architecture Program

The lower part of Fig. 12.56 shows the execution of a re-architecture program. Even the most carefully designed and implemented systems accumulate accidental complexity, technical debt, architecture erosion, and dead code over time. Some of it is continuously eliminated by Managed Evolution (Sect. 5.2). However, Managed Evolution is not enough: The chief architect needs to launch (on budgets assigned to him) periodic re-architecture programs. A re-architecture program has a clearly defined objective, such as re-architecting, reengineering, or refactoring a specific part of the application landscape or the technology portfolio, e.g., with the goal of reducing accidental complexity. A dedicated team will then work for the duration of the program to achieve the objectives.
1. Add an explicit step “Simplification” after the design phase to your system development process. Explicitly carry out reviews by an experienced and knowledgeable team in order to optimize essential complexity and to minimize accidental complexity; 2. Scrutinize all phases of the development process, exploring all simplification avenues. If necessary, try to reduce the complexity of the requirements; 3. Use and maintain a “simplification checklist” to guide the simplification review. Expand the checklist with issues found during all reviews; 4. Periodically execute well-defined re-architecture programs to eliminate specific complexity issues from your application landscape, technology portfolio, or product.
References [Abrial96] Abrial J-R (1996) The B-Book—assigning programs to meanings. Cambridge University Press, Cambridge. ISBN 978-0-521-02175-3 [Abrial10] Abrial J-R (2010) Modeling in Event-B: system and software engineering (Englisch) Gebundene Ausgabe—13. Mai 2010. Cambridge University Press, Cambridge. ISBN 978-0-521-89556-9 [Allemang11] Allemang D, Hendler J (2011) Semantic web for the working ontologist—effective modeling in RDFS and OWL. Morgan Kaufmann, Waltham. ISBN 978-0-123-85965-5 [Alur15] Alur R (2015) Principles of cyber-physical systems. MIT Press, Cambridge. ISBN 978-0-262-02911-7 [Ambler98] Ambler SW (1998) Process patterns: building large-scale systems using object technology. Cambridge University Press, Cambridge. ISBN 0-521-64568-9
[Annighöfer15] Annighöfer B (2015) Model-based architecting and optimization of distributed integrated modular avionics. Shaker Publishing, Aachen. ISBN 978-3-8440-3420-2 [Apperly02] Apperly H et al (2002) Service- and component-based development— using the select perspective and UML. Addison-Wesley, London. ISBN 978-0-321-15985-4 [Arp15] Arp R, Smith B, Spear AD (2015) Building ontologies with basic formal ontology. MIT Press, Cambridge. ISBN 978-0-262-52781-1 [Austin01] Austin T (2001) PKI: a Wiley tech brief—implementing and planning digital certificate systems. Wiley, New York. ISBN 978-0-471-35380-5 [Baader10] Baader F (ed) (2010) The description logic handbook—theory, implementation and applications, 2nd edn. Cambridge University Press, Cambridge. ISBN 978-0-521-15011-8 [Baader17] Baader F, Horrocks I, Lutz C, Sattler U (2017) An introduction to description logic. Cambridge University Press, Cambridge. ISBN 978-0-521-69542-8 [Behara15] Behara GK, Paradkar SS (2015) Enterprise architecture—a practitioner’s handbook. Megan-Kiffer Press, Tampa. ISBN 978-0-9296-5256-6 [Benveniste12] Benveniste A, Caillaud B, Nickovic D, Passerone R, Raclet J-B, Reinkemeier P, Sangiovanni-Vincentelli A, Damm W, Henzinger T, Larsen K (2012) Contracts for systems design. Inria research report, N°8147, November 2012. ISSN 0249-6399. http://hal.inria.fr/ docs/00/75/85/14/PDF/RR-8147.pdf. Accessed 23 Sep 2017 [Berczuk02] Berczuk SP (2002) Software configuration management patterns: Effective teamwork, practical integration. Addison-Wesley Professional, Boston. ISBN 978-0-201-74117-9 [Bernard12] Bernard SA (2012) An introduction to enterprise architecture, 3rd edn. AuthorHouse, Bloomington. ISBN 978-1-4772-5800-2 [Bernstein09] Bernstein PA, Newcomer E (2009) Principles of transaction processing. Morgan Kaufmann, Burlington. ISBN 978-1-558-60623-4 [Blokdyk18a] Blokdyk G (2018) The open group architecture framework—the ultimate step-by-step guide. 5starcooks, Texas. 
ISBN 978-0-6551-7423-3 [Blokdyk18b] Blokdyk G (2018) Enterprise information archiving—a complete guide. 5starcooks, Texas. ISBN 978-0-6551-7675-6 [Blokdyk18c] Blokdyk G (2018) Information policy—a clear and concise reference. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-9869-4593-6 [Blom12] Blom H, Lönn H, Hagl F, Papadopoulos Y, Reiser M-O, Sjöstedt C-J, Chen D-J, Kolagari RT (2012) An architecture description language for automotive software-intensive systems EAST-ADL. White Paper, Version 2.1.12. http://www.maenad.eu/public/conceptpresentations/ EAST-ADL_WhitePaper_M2.1.12.pdf. Accessed 11 Aug 2018 [Boehm00] Boehm BW, Abts C, Brown AW, Chulani S, Clark BK, Horowitz H, Madachy R, Reifer D, Steece B (2000) Software cost estimation with COCOMO II. Prentice Hall PTR, New Jersey. ISBN 978-0-13-026692-2 [Boerger18] Börger E, Raschke A (2018) Modeling companion for software practitioners. Springer, Berlin. ISBN 978-3-662-56639-8 [Booch01] Booch G, Maksimchuk R, Engle M (2001) Object-oriented analysis and design with applications, 3rd edn. Addison-Wesley Object Technology, Upper Saddle River, N.J., USA. ISBN 978-0-201-89551-3
[Borchers01] Borchers J (2018) A pattern approach to interaction design. Wiley, Chichester. ISBN 978-0-471-49828-5 [Borky18] Borky JM, Bradley T (2018) Effective model-based systems engineering. Springer, Berlin. ISBN 978-3-319-95668-8 [Bouwers11] Bouwers E, van Deursen A, Visser J (2011) Quantifying the encapsulation of implemented software architectures. Delft University of Technology, Software Engineering Research Group, report TUDSERG-2011-031-a. https://repository.tudelft.nl/islandora/object/ uuid:3121dc72-8c47-448a-a207-69f9f47f095b/datastream/OBJ. Accessed 15 Mai 2018 [Box98] Box D (1998) Essential COM. Addison-Wesley Professional, Reading. ISBN 978-0-201-63446-4 [Boyer11] Boyer J (2011) Agile business rule development—process, architecture, and JRules examples. Springer, Berlin. ISBN 978-3-642-19040-7 [Brambilla17] Brambilla M, Cabot J, Wimmer M (2017) Model-driven software engineering in practice, 2nd edn. Morgan & Claypool, San Rafael. ISBN 978-1-681-73233-6 [Brand13] Brand F (2013) Komplexe Systeme—Neue Ansätze und zahlreiche Beispiele. Oldenbourg Wissenschaftsverlag, München. ISBN 978-3-486-58391-5 [Brown98] Brown WH, Malveau RC, McCormick HW, Mowbray TJ (1998) Anti patterns—refactoring software, architectures, and projects in crisis. Wiley, New York. ISBN 978-0-471-19713-3 [Burns18] Burns B (2018) Designing distributed systems—patterns and paradigms for scalable, reliable services. O’Reilly, Sebastopol. ISBN 978-1-491-98364-5 [Buschmann96] Buschmann F, Meunier R, Rohnert H (1996) Pattern-oriented software architecture, vol 1. A system of patterns. Wiley, Chichester (Also Vols 2, 3 and 4). ISBN 978-0-471-95869-7 [Buschmann97] Martin RC, Riehle D, Buschmann F (1997) Pattern languages of program design (Part 3). Addison-Wesley, Reading. ISBN 978-0-201-31011-5 [Carter16] Carter PA (2016) SQL server AlwaysOn revealed, 2nd edn. Apress, Berkeley. 
ISBN 978-1-4842-2396-3 [Cheesman00] Cheesman J (2000) UML components—a simple process for specifying component-based software. Component software series. AddisonWesley Longman, Amsterdam. ISBN 978-0-201-70851-6 [Chen15] Chen PPS (2015) The entity-relationship model—a basis for the enterprise view of data. Component software series. Sagwan Press, (Reproduction of the classical original). ISBN 978-1-3400-7470-8 [Chisholm03] Chisholm M (2003) How to build a business rules engine—extending application functionality through metadata engineering. Morgan Kaufmann, Amsterdam. ISBN 978-1-558-60918-1 [Cloutier07] Cloutier R, Verma D (2007) Applying pattern concepts to systems (enterprise) architecture. Syst Eng 10(2):138–154. http://calimar.com/ JEA-Cloutier-Verma.pdf. Accessed 18 Aug 2017 [Cloutier10] Cloutier R, Muller G, Verma D, Nilchiani R, Hole E, Bone M (2010) The concept of reference architectures. Syst Eng 13(1). https://doi.org/10.1002/sys.20129. http://wiki.lib.sun.ac.za/images/c/cb/ TheConceptOfReferenceArchitectures.pdf. Accessed 17 June 2018
12 Architecture Principles for Changeability
[Cope15] Cope R, Naserpour A, Erl T (2015) Cloud computing design patterns. Pearson Education, Boston. ISBN 978-0-133-85856-3 [Coplien95] Coplien JO, Schmidt D (1995) Pattern languages of program design (Part 1). Addison-Wesley, Reading. ISBN 978-0-201-60734-5 [Czarnecki17] Czarnecki C, Dietze C (2017) Reference architecture for the telecommunications industry: transformation of strategy, organization, processes, data, and applications. Springer, Berlin. ISBN 978-3-319-46755-9 [Daum03] Daum B (2003) Modeling business objects with XML schema. Morgan Kaufmann, Burlington. ISBN 978-1-558-60816-0 [Debbabi10] Debbabi M, Hassaïne F, Jarraya Y, Soeanu A, Alawneh L (2010) Verification and validation in systems engineering—assessing UML/SysML design models. Springer, Berlin. ISBN 978-3-642-15227-6 [Delange17] Delange J (2017) AADL (Architecture Analysis and Design Language) in practice—become an expert of software architecture modeling and analysis. Reblochon Development Company, Toulouse, France. ISBN 978-0-6928-9964-9 [Delligatti13] Delligatti L (2013) SysML distilled—a brief guide to the systems modeling language. Addison-Wesley Professional, Upper Saddle River, N.J., USA. ISBN 978-0-321-92786-6 [DeVries99] De Vries HJ (1999) Standardization—a business approach to the role of national standardization organizations. Springer, Boston. ISBN 978-0-792-38638-4 [Dietz06] Dietz J (2006) Enterprise ontology—theory and methodology. Springer, Berlin. ISBN 978-3-540-29169-5 [Dijkstra76] Dijkstra EW (1976) A discipline of programming. Pearson Education (Prentice-Hall), Englewood Cliffs. ISBN 978-0-132-15871-8 [Diller94] Diller A (1994) Z—an introduction to formal methods. Wiley, New York. ISBN 978-0-471-93973-3 [Ding17] Ding W, Lin X, Zarro M (2017) Information architecture—the design and integration of information spaces. Morgan & Claypool, San Rafael. 
ISBN 978-1-6270-5976-3 [Dissaux14] Dissaux P (ed) (2014) Architecture description languages (IFIP Advances in information and communication technology, vol 176). Springer. ISBN 978-1-461-49895-7 [Dori16] Dori D (2016) Model-based systems engineering with OPM and SysML. Springer, New York. ISBN 978-1-493-93294-8 [Downey18] Downey AB (2018) Think complexity—complexity science and computational modeling, 2nd edn. O’Reilly, Beijing. ISBN 978-1-492-04020-0 [Drewer17] Drewer P, Schmitz K-D (2017) Terminologie-Management—Grundlagen, Methoden, Werkzeuge. Springer Vieweg, Heidelberg. ISBN 978-3-662-53314-7 [Dubrova13] Dubrova E (2013) Fault-tolerant design. Springer, New York. ISBN 978-1-461-42112-2 [Duffy04] Duffy DJ (2004) Domain architectures—models and architectures for UML applications. Wiley, West Sussex. ISBN 978-0-470-84833-3 [Dyson04] Dyson P, Longshaw A (2004) Architecting enterprise solutions: patterns for high-capability internet-based systems. Wiley, Chichester. ISBN 978-0-470-85612-3
[EDM12] Enterprise Data Management Council (EDM) (2012) The financial industry business ontology—demystifying financial industry semantics. https://www.omg.org/hot-topics/documents/finance/1110_Bennett.pdf. Accessed 5 July 2018 [Effingham13] Effingham N (2013) An introduction to ontology. Polity Press, Cambridge. ISBN 978-0-7456-5255-9 [Erl05] Erl T (2016) Service oriented architecture—concepts, technology, and design, 2nd edn. Prentice Hall Computer, New Jersey. ISBN 978-0-133-85858-7 [Erl08] Erl T (2008) Web service contract design and versioning for SOA. Prentice Hall, Boston. ISBN 978-0-136-13517-3 [Erl11] Erl T et al (2011) SOA governance—governing shared services: on-premise and in the cloud. Prentice Hall service-oriented computing series. Pearson Education, Hoboken, NJ, USA. ISBN 978-0-138-15675-6 [Erl16] Erl T (2016) Service-oriented architecture: analysis and design for services and microservices, 2nd edn. Prentice Hall, Boston. ISBN 978-0-133-85858-7 [Erl17] Erl T (2017) Service infrastructure- on-premise and in the cloud. Prentice Hall service technology series. Pearson Education, Harlow. ISBN 978-0-133-85872-3 [Euzenat13] Euzenat J, Shvaiko P (2013) Ontology matching, 2nd edn. Springer, Berlin. ISBN 978-3-642-38720-3 [Evans04] Evans E (2004) Domain-driven design—tackling complexity in the heart of software, 7th edn. Pearson Education, Addison-Wesley, Boston. ISBN 978-0-321-12521-5 [Ezran13] Ezran M (2013) Practical software reuse, 1st edn. Springer, London. ISBN 978-1-852-33502-1 [Farley17] Farley J (2017) Online and under attack—what every business needs to do now to manage cyber risk and win its cyber war. CreateSpace Independent Publishing Platform, ISBN 978-1-5423-4290-2 [Feghhi98] Feghhi J, Feghhi J, Williams P (1998) Digital certificates—applied internet security. Addison-Wesley Longman, Amsterdam. ISBN 978-0-201-30980-5 [Feiler12] Feiler PH, Gluch DP (2012) Model-based engineering with AADL. SEI series in software engineering. 
Addison-Wesley Longman, Amsterdam. ISBN 978-0-321-88894-5 [Fensel09] Fensel D, Lausen H, Polleres A, de Bruin J, Stollberg M, Roman D, Domingue J (2009) Enabling semantic web services: the web service modeling ontology. Springer, Berlin. ISBN 978-3-642-070884 [Fensel10] Fensel D, Kerrigan M, Zaremba M (2010) Implementing semantic web services—the SESA framework. Springer, Berlin. ISBN 978-3-642-09575-7 [Fensel12] Fensel D (2012) The Knowledge Acquisition and Representation Language (KARL). Springer, Heidelberg, Germany. ISBN 978-1-461-35959-3 [Fernandez-Buglioni13] Fernandez-Buglioni E (2013) Security patterns in practice: designing secure architectures using software patterns. Wiley, Chichester. ISBN 978-1-119-99894-5
[FIBO17] FIBO (2017) Financial industry business ontology community group reports. https://www.w3.org/community/fibo/. Accessed 5 July 2018 [Flood10] Flood RL, Carson ER (2010) Dealing with complexity—an introduction to the theory and application of systems science, 2nd edn. Springer, New York. ISBN 978-1-441-93227-3 [Fowler02] Fowler M (2002) Patterns of enterprise application architecture. Pearson Professional, Boston. ISBN 978-0-321-12742-6 [Fowler03] Fowler M (2003) UML distilled—a brief guide to the standard object modeling language, 3rd edn. Addison-Wesley Professional, Boston. ISBN 978-0-321-19368-1 [Fowler10] Fowler M, Parsons R (2010) Domain specific languages. Addison-Wesley, Boston. ISBN 978-0-321-71294-3 [Friedenthal17] Friedenthal S, Oster C (2017) Architecting spacecraft with SysML—a model-based systems engineering approach. CreateSpace Independent Publishing Platform. ISBN 978-1-5442-8806-2 [Furrer12] Furrer FJ (2012) Eine kurze Geschichte der Ontologie—Von der Philosophie zur modernen Informatik (A short history of ontology—from philosophy to modern information technology). Informatik-Spektrum 37(4), August 2014, 308–317 (Sonderheft Ontologie). Springer, Heidelberg. https://doi.org/10.1007/s00287-012-0642-3. SpringerLink: http://link.springer.com/article/10.1007/s00287-012-0642-3. Accessed 3 Aug 2018 [Gamma94] Gamma E, Helm R, Johnson RE, Vlissides J (1994) Design patterns—elements of reusable object-oriented software. Pearson Professional. ISBN 978-0-201-63361-0 [GAO18] United States General Accounting Office (GAO) (2018) Software reuse—major issues need to be resolved before benefits can be achieved. CreateSpace Independent Publishing Platform. ISBN 978-1-7190-8044-6 [Gartner14] Gartner Inc. (2014) The Gartner enterprise information management framework, Gartner summits. https://blogs.gartner.com/andrew_white/files/2016/10/On_site_poster.pdf. 
Accessed 18 July 2018 [Gartner16] Gartner R (2016) Metadata—shaping knowledge from antiquity to the semantic web. Springer, Cham. ISBN 978-3-319-40891-0 [Gaševic09] Gaševic D, Djuric D, Devedžic V (2009) Model driven engineering and ontology development, 2nd edn. Springer, Berlin. ISBN 978-3-642-00281-6 [Godinez10] Godinez M, Hechler E, Koenig K, Lockwood S, Oberhofer M, Schroeck M (2010) The art of enterprise information architecture— a systems-based approach for unlocking business insight. IBM Press & Wiley, New Jersey. ISBN 978-0-137-03571-7 [Gomez04] Gomez-Perez A, Fernandez-Lopez M, Corcho O (2004) Ontological Engineering – With examples from the areas of Knowledge Management, e-Commerce and the Semantic Web. Springer-Verlag, London, UK. ISBN 978-1-849-96884-3. [Gómez-Pérez04] Gómez-Pérez A, Fernandez-Lopez M, Corcho O (2004) Ontological engineering: with examples from the areas of knowledge management, e-Commerce and the semantic web, 2nd edn. Springer, London. ISBN 978-1-852-33551-9
[Govindappa18] Govindappa M (2018) Aerospace software certification DO178b/ EA-12b standard. CreateSpace Independent Publishing Platform, ISBN 978-1-9849-0440-9 [Großjean09] Großjean A (2009) Corporate terminology management—an approach in theory and practice. VDM Verlag Dr. Müller, Saarbrücken. ISBN 978-3-6391-24217 [Grosso01] Grosso W (2001) Java RMI. O’Reilly & Associates, Sebastopol. ISBN 978-1-5659-2452-9 [Gudera16] Gudera A (2016) Beware of car hacking—a systematic analysis. Tredition GmbH, Hamburg. ISBN 978-3-7323-6368-1 [Gutteridge18] Gutteridge L (2018) Avoiding IT disasters—fallacies about enterprise systems and how you can rise above them. Thinking Works, Vancouver. ISBN 978-1-7753-5750-6 [Hamlet10] Hamlet D (2010) Composing software components—a software-testing perspective. Springer, Berlin. ISBN 978-1-489-99821-7 [Hanmer07] Hanmer R (2007) Patterns for fault tolerant software. Wiley, Hoboken. ISBN 978-0-470-31979-6 [Hay06] Hay DC (2006) Data model patterns—a metadata map. Morgan Kaufmann, Amsterdam. ISBN 978-0-120-88798-9 [Hay11] Hay DC (2011) UML and data modeling—a reconciliation. Technics Publications & LLC, Westfield. ISBN 978-1-9355-0419-1 [Hebeler09] Hebeler J, Fisher M, Blace R, Perez-Lopez A (2009) Semantic web programming. Wiley, Indianapolis. ISBN 978-0-470-41801-7 [Heineman01] Heineman GT, Councill WT (2001) Component-based software engineering—putting the pieces together. Addison-Wesley, Boston. ISBN 978-0-201-70485-3 [Hinton15] Hinton A (2015) Understanding context—environment, language, and information architecture. O’Reilly and Associates, Sebastopol. ISBN 978-1-449-32317-2 [Hitzler09] Hitzler P, Rudolph S, Krötzsch M (2009) Foundations of semantic web technologies. Chapman & Hall/CRC Publishers, Boca Raton, FL, USA. ISBN 978-1-420-09050-5 [Hohpe03] Hohpe G, Woolf B (2003) Enterprise integration patterns: designing, building, and deploying messaging solutions. Pearson Professional, Boston. 
ISBN 978-0-321-20068-6 [Holt14] Holt J, Perry S (2014) SysML for systems engineering—a modelbased approach, 2nd edn. IET Professional Applications of Computing, Computing and Networks, London. ISBN 978-1-8491-9651-2 [Hooper13] Hooper JW (1991) Software reuse—guidelines and methods, 1st edn. Springer, Heidelberg, Germany. ISBN 978-1-461-36677-5 [Hruby06] Hruby P (2006) Model-driven design using business patterns. Springer, Berlin. ISBN 978-3-540-31054-7 [ISO9075] International Organization for Standardization (ISO) (2016) ISO/IEC 9075-14:2016—information technology—database languages—SQL, 5th edition. https://www.iso.org/standard/63566.html [Jacky97] Jacky J (1997) The way of Z—practical programming with formal methods. Cambridge University Press, Cambridge. ISBN 978-0-521-55976-8 [Jacobson97] Jacobson I (1997) Software reuse—architecture, process and organization for business success. Addison-Wesley, Harlow. ISBN 978-0-201-92476-3
[Jakobs00] Jakobs K (2000) Information technology standards & standardization— a global perspective. IGI Publishing, Hershey. ISBN 978-1-8782-8970-4 [Jakobs06] Jakobs K (2006) Advanced topics in information technology standards and standardization research. Idea Group Publishing, Hershey. ISBN 978-1-5914-0938-0 [Jakus13] Jakus G (2013) Concepts, ontologies, and knowledge representation. Springer, New York. ISBN 978-1-461-47821-8 [Jalender10] Jalender B, Govordhan A, Remchand P (2010) A pragmatic approach to software reuse. J Theor Appl Inf Technol 14(3). https://pdfs.semanticscholar.org/b72a/a8cddb12044d26fbf818bad30bb1fb22d8e4.pdf. Accessed 5 July 2018 [Jeffries15] Jeffries R (2015) The nature of software development—keep it simple, make it valuable, build it piece by piece. Pragmatic Bookshelf, Dallas. ISBN 978-1-94122-237-9 [Jenney10] Jenney J, Gangl M, Kwolek R, Melton D, Ridenour N, Coe M (2010) Modern methods of systems engineering—with an introduction to pattern and model based methods. Amazon Distribution, Leipzig. ISBN 978-1-14637-7735-7 [Johnson10] Johnson N (2010) Simply complexity—a clear guide to complexity theory. Oneworld Publications, Oxford. ISBN 978-1-8516-8630-8 [Kaye03] Kaye D (2003) Loosely coupled—the missing pieces of web services. RDS Press, Marin County. ISBN 978-1-881378-24-2 [Kelly08] Kelly S, Tolvanen J-P (2008) Domain-specific modeling—enabling full code generation. Wiley, Hoboken. ISBN 978-0-470-03666-2 [Kelly12] Kelly A (2012) Business patterns for software developers. Wiley, West Sussex. ISBN 978-1-119-99924-9 [Kleppe09] Kleppe A (2009) Software language engineering—creating domain-specific languages using metamodels. Addison-Wesley, New Jersey. ISBN 978-0-321-55345-4 [Kent12] Kent W (2012) Data and reality—a timeless perspective on perceiving and managing information in our imprecise world, 3rd edn. Technics Publications LLC, Westfield. ISBN 978-1-9355-0421-4 [Kerievsky04] Kerievsky J (2004) Refactoring to patterns. 
Addison-Wesley, Boston. ISBN 978-0-321-21335-8 [Kernighan99] Kernighan BW (1999) A tutorial introduction to the language B. Bell Laboratories, New Jersey. https://web.archive.org/ web/20150611114644/https://www.bell-labs.com/usr/dmr/www/btut. pdf. Accessed 9 Oct 2017 [Keyes00] Keyes J (2000) Financial services information systems. Auerbach, London. ISBN 978-0-849-39834-6 [Kleppmann17] Kleppmann M (2017) Designing data-intensive applications—the big ideas behind reliable, scalable, and maintainable systems. O’Reilly, Sebastopol. ISBN 978-1-449-37332-0 [Klosterboer10] Klosterboer L (2010) Implementing ITIL configuration management, 2nd edn. Pearson Education, Upper Saddle River. ISBN 978-0-131-38565-8 [Knight19] Knight A (2019) Hacking connected cars—tactics, techniques, and procedures. Wiley & Sons Inc., New York, USA. ISBN 978-1-119-49180-4
[Koehler17] Koehler TR (2017) Understanding cyber risk—protecting your corporate assets. Chapman Hall, Boca Raton. ISBN 978-1-472-47779-8 [Kopetz19] Kopetz H (2019) Simplicity is Complex. Springer Nature, Cham, Switzerland. ISBN 978-3-030-20410-5 [Korra13] Korra S, Raju SV, Babu AV (2013) Strategies for designing and building reusable software components.Int J Comput Sci Inf Technol 4(5), 655– 659. http://ijcsit.com/docs/Volume%204/Vol4Issue5/ijcsit2013040501. pdf. Accessed 30 June 2018 [Lacy06] Lacy LW (2006) OWL—representing information using the web ontology language. Trafford Publishing, Victoria. ISBN 978-1-412-03448-7 [Lambe07] Lambe P (2007) Organising knowledge—taxonomies, knowledge and organisational effectiveness. Chandos Publishing, Oxford. ISBN 978-1-8433-4227-4 [Lankhorst17] Lankhorst M (2017) Enterprise architecture at work: modelling, communication and analysis, 4th edn. Springer, Berlin. ISBN 978-3-662-53932-3 [Loshin99] Loshin P (1999) Essential ethernet standards – RFCs and protocols made practical. Wiley, USA. ISBN 978-0-471-34596-1 [Lau18] Lau K-K, Di Cola S (2018) An introduction to component-based software development. World Scientific Publishing Company, New Jersey. ISBN 978-9-813-22187-1 [Leach13] Leach RJ (2013) Software reuse—methods, models, costs, 2nd edn. Ronald J Leach Publishing, ISBN 978-1-9391-4235-1 [Leffingwell10] Leffingwell D (2010) Agile software requirements—lean requirements practices for teams, programs, and the enterprise. Addison Wesley, Upper Saddle River. ISBN 978-0-321-63584-6 [Lewis01] Lewis PM, Bernstein A, Kifer M (2001) Databases and transaction processing—an application-oriented approach. Pearson Education, Upper Saddle River. ISBN 978-0-201-70872-1 [Lieberman06] Lieberman BA (2006) The art of software modeling. Auerbach, Boca Raton. ISBN 978-1-420-04462-1 [Little03] Little D, Chapa DA (2003) Implementing backup and recovery— the readiness guide for the enterprise. Wiley, New York. 
ISBN 978-0-471-22714-4 [Loukas15] Loukas G (2015) Cyber-physical attacks—a growing invisible threat. Butterworth-Heinemann (Elsevier), Oxford. ISBN 978-0-12-801290-1 [Lutowski05] Lutowski R (2005) Software requirements: encapsulation, quality, and reuse. Auerbach, Boca Raton. ISBN 978-0-8493-2848-0 [Mahajan14] Mahajan S (2014) The art of insight in science and engineering—mastering complexity. The MIT Press, Cambridge. ISBN 978-0-262-52654-8 [Marcus03] Marcus E (2003) Blueprints for high availability. Wiley, Indianapolis. ISBN 978-0-471-43026-1 [Marz15] Marz N, Warren J (2015) Big data—principles and best practices of scalable realtime data systems. Manning, Shelter Island. ISBN 978-1-617-29034-3 [Mattson13] Mattson TG, Sanders B, Massingill B (2013) Patterns for parallel programming. Software patterns series. Addison-Wesley, Boston. ISBN 978-0-321-94078-0
[Mayr16] Karagiannis D, Mayr HC, Mylopoulos J (eds) (2016) Domain-specific conceptual modeling—concepts, methods and tools. Springer, Cham. ISBN 978-3-319-39416-9 [McCormick15] McCormick JW, Chapin PC (2015) Building high integrity applications with SPARK. Cambridge University Press, Cambridge. ISBN 978-1-107-65684-0 [McGovern03] McGovern J, Ambler SW, Stevens ME, Linn J, Sharan V, Jo EK (2003) Practical guide to enterprise architecture. Prentice Hall, Upper Saddle River. ISBN 978-0-131-41275-0 [Metsker06] Metsker SJ, Wake WC (2006) Design patterns in Java. Software patterns series. Addison-Wesley, Upper Saddle River. ISBN 978-0-321-33302-5 [Meyer98] Meyer B (1998) Object-oriented software construction, 2nd edn. Prentice Hall, Upper Saddle River. ISBN 978-0-136-29155-8 [Meyer09] Meyer B (2009) A touch of class—learning to program well with objects and contracts. Springer, Berlin. ISBN 978-3-540-92144-5 [Micouin14] Micouin P (2014) Model based systems engineering—fundamentals and methods. Wiley-ISTE, London. ISBN 978-1-848-21469-9 [Microsoft04] Microsoft Corporation (2004) Guidelines for application integration (Patterns & practices). Microsoft Press, Redmond. ISBN 978-0-735-61848-0 [Millett15] Millett S, Tune N (2015) Patterns, principles, and practices of domaindriven design. Wiley, Indianapolis. ISBN 978-1-118-71470-6 [Moeller16] Moeller DPF (2016) Guide to computing fundamentals in cyber-physical systems. Springer, Cham. ISBN 978-3-319-25176-9 [Moller06] Moller A, Schwartzbach M (2006) An introduction to XML and web technologies. Addison-Wesley, Essex. ISBN 978-0-321-26966-9 [Murer11] Murer S, Bonati B, Furrer FJ (2011) Managed evolution—a strategy for very large information systems. Springer, Berlin. ISBN 978-3-642-01632-5 [Nesteruk18] Nesteruk D (2018) Design patterns in modern C++—reusable approaches for object-oriented software design. Apress, New York. ISBN 978-1-484-23602-4 [Nielson19] Nielson F, Nielson HR (2019) Formal methods—an appetizer. 
Springer, New York. ISBN 978-3-030-05155-6 [Oezsu11] Özsu MT, Valduriez P (2011) Principles of distributed database systems, 3rd edn. Springer, New York. ISBN 978-1-441-98833-1 [Pan12] Pan JZ, Staab S, Aßmann U, Ebert J, Zhao Y (2012) Ontology-driven software development. Springer, Berlin. ISBN 978-3-642-31225-0 [Parnas71] Parnas DL (1971) On the criteria to be used in decomposing systems into modules. Department of Computer Science, Carnegie Mellon University, Pittsburgh. http://repository.cmu.edu/cgi/viewcontent.cgi?article=2979&context=compsci. Accessed 31 March 2018 [Pastor07] Pastor O, Molina JC (2007) Model-driven architecture in practice—a software production environment based on conceptual modeling. Springer, Berlin. ISBN 978-3-540-71867-3 [Paulheim11] Paulheim H (2011) Ontology-based application integration. Springer, New York. ISBN 978-1-461-41429-2 [Perroud13] Perroud T, Inversini R (2013) Enterprise architecture patterns: practical solutions for recurring IT-Architecture problems. Springer, Berlin. ISBN 978-3-642-37560-6
[Pohl05] Pohl K, Böckle G, van der Linden FJ (2005) Software product line engineering—foundations, principles and techniques. Springer, Berlin. ISBN 978-3-540-24372-4 [Pohl12] Pohl K, Hönninger H, Achatz R (2012) Model-based engineering of embedded systems—the SPES 2020 methodology. Springer, Berlin. ISBN 978-3-642-34613-2 [Pollock04] Pollock JT, Hodgson R (2004) Adaptive information—improving business through semantic interoperability, grid computing, and enterprise integration. Wiley, New York. ISBN 978-0-471-48854-5 [Pomerantz15] Pomerantz J (2015) Metadata. MIT Press, Cambridge. ISBN 978-0-262-52851-1 [Poulin79] Poulin JS (1997) Measuring software reuse—principles, practices, and economic models. Addison-Wesley Longman, Reading. ISBN 0-201-63413-9 [Prasath10] Prasath R (2010) Message passing approaches in interconnection networks: towards distributed applications. VDM Verlag, Saarbrücken. ISBN 978-3-6392-6732-7 [Provost13] Provost F, Fawcett T (2013) Data science for business—what you need to know about data mining and data-analytic thinking. O’Reilly and Associates, Sebastopol. ISBN 978-1-449-36132-7 [Rahimi10] Rahimi SK, Haug FS (2010) Distributed database management systems—a practical approach. IEEE Computer Society Press & Wiley, New York. ISBN 978-0-470-40745-5 [Ramachandran08] Ramachandran M (ed) (2008) Software components—guidelines & applications. Nova Science Publishers Inc, New York. ISBN 978-1-6045-6870-7 [Ray03] Ray ET (2003) Learning XML, 2nd edn. O’Reilly and Associates, Beijing. ISBN 978-0-5960-0420-0 [Redmond13] Redmond-Neal A (2013) Starting a taxonomy project—Taxonomy basics SLA annual conference, June 9, 2013. https://www.sla.org/ wp-content/uploads/2013/05/StartingTaxProject_Redmond-Neal.pdf. Accessed 5 July 2017 [Reif12] Reif K (2012) Automobilelektronik—Eine Einführung für Ingenieure, 4th edn. Vieweg + Teubner, Wiesbaden. 
ISBN 978-3-8348-1498-2 [Roshen09] Roshen W (2009) SOA-based enterprise integration—a step-by-step guide to services-based application integration (Programming & web development—OMG). Osborne Publishing, ISBN 978-0-0716-0552-6 [Ross03] Ross RG (2003) Principles of the business rule approach. Pearson Education & Addison-Wesley Information Technology, Boston. ISBN 978-0-201-78893-8 [Rotem-Gal-Oz12] Rotem-Gal-Oz A (2012) SOA patterns. Manning, Shelter Island. ISBN 978-1-933-98826-9 [Rumpe17] Rumpe B (2017) Agile modeling with UML—code generation, testing, refactoring. Springer, Cham. ISBN 978-3-319-58861-2 [Russell14] Russell AL (2014) Open standards and the digital age—history, ideology, and networks. Cambridge University Press, Cambridge. ISBN 978-1-107-61204-4 [Sametinger97] Sametinger J (2013) Software engineering with reusable components, 1st edn. Springer, Berlin. ISBN 978-3-642-08299-3
[Schaeuffele13] Schäuffele J, Zurawka T (2013) Automotive software engineering—Grundlagen, Prozesse, Methoden und Werkzeuge effizient einsetzen, 5th edn. Springer Fachmedien, Wiesbaden. ISBN 978-3-8348-2469-1 [Schaeuffele16] Schäuffele J, Zurawka T (2016) Automotive software engineering—principles, processes, methods, and tools, 2nd edn. SAE International, Warrendale. ISBN 978-0-7680-7992-0 [Scheid15] Scheid O (2015) AUTOSAR compendium, Part 1: application & RTE. CreateSpace Independent Publishing Platform, Bruchsal. ISBN 978-1-5027-5152-2 [Schneiderman15] Schneiderman R (2015) Modern standardization—case studies at the crossroads of technology, economics, and politics. Standards Information Network, IEEE, USA. ISBN 978-1-1186-7859-6 [Schoen19] Schön H, Strahringer S, Furrer FJ, Kühn T (2019) Business role-object specification—a language for behavior-aware modeling of business objects. 14th International Conference on Wirtschaftsinformatik, February 24–27, Siegen, Germany [Schumacher05] Schumacher M, Fernandez-Buglioni E, Hybertson D, Buschmann F, Somerlad P (2005) Security patterns: integrating security and systems engineering. Wiley, New York. ISBN 978-0-470-85884-4 [Scribner00] Scribner K, Stiver MC (2000) Understanding SOAP: simple object access protocol. SAMS, Indianapolis. ISBN 978-0-6723-1922-8 [Sedkaoui18] Sedkaoui S (2018) Data analytics and big data. ISTE Ltd & Wiley, London. ISBN 978-1-786-30326-4 [Seese16] Seese D, Weinhardt C, Schlottmann F (2008) Handbook on information technology in finance, 1st edn. Springer, Berlin. ISBN 978-3-662-51827-4 [Sessions08] Sessions R (2008) Simple architectures for complex enterprises. Microsoft Press, Redmond. ISBN 978-0-7356-2578-5 [Sessions09] Sessions R (2009) The IT complexity crisis—danger and opportunity. White paper, November 2009. http://www.objectwatch.com/whitepapers/ITComplexityWhitePaper.pdf. Accessed 8 Feb 
2013 [Seidl15] Seidl M, Scholz M, Huemer C (2015) UML @ classroom—an introduction to object-oriented modeling. Springer, Cham. ISBN 978-3-319-12741-5 [Shirley94] Shirley J, Hu W, Magid D (1994) Guide to writing DCE applications (OSF distributed computing environment), 2nd edn. O’Reilly & Associates, Sebastopol. ISBN 978-1-5659-2045-3 [Simsion05] Simsion GC (2005) Data modeling essentials, 3rd edn. Morgan Kaufmann, Amsterdam. ISBN 978-0-12-644551-0 [Singh17] Singh J (2017) Functional software size measurement methodology with effort estimation and performance indication. Wiley-IEEE Computer Society Press, Hoboken. ISBN 978-1-119-23805-8 [Slama99] Slama D, Garbis J, Russell P (1999) Enterprise Corba. Pearson Education & Addison-Wesley, Upper Saddle River. ISBN 978-0-130-83963-3 [Snedaker13] Snedaker S (2013) Business continuity and disaster recovery planning for IT professionals, 2nd edn. Syngress Publisher, Amsterdam. ISBN 978-0-1241-0526-3
[Sowa99] Sowa JF (1999) Knowledge representation: logical, philosophical, and computational foundations. Thomson Learning, Brooks. Pacific Grove, CA, USA. ISBN 978-0-534-94965-5 [Spivak01] Spivak SM, Brenner B (2001) Standardization essentials: principles and practice. CRC Press, Boca Raton, FL, USA. ISBN 978-0-824-78918-3 [Starr17] Starr L, Mangogna A, Mellor S (2017) Models to code—with no mysterious gaps. Apress, New York. ISBN 978-1-4842-2216-4 [Stepanov15] Stepanov AA, Rose DE (2015) From mathematics to generic programming. Addison-Wesley, Upper Saddle River. ISBN 978-0-321-94204-3 [Sterling17] Sterling T, Anderson M, Brodowicz M (2017) High performance computing—modern systems and practices. Morgan Kaufmann, Cambridge. ISBN 978-0-124-20158-3 [Stevens04] Stevens P, Pooley R, Maciaszek L (2004) Using UML—software engineering with objects and components. Addison-Wesley, Harlow. ISBN 978-0-582-89596-6 [Stewart11] Stewart DL (2011) Building enterprise taxonomies. Mokita Press, USA. ISBN 978-0-5780-7822-9 [Sutton18] Sutton D (2018) Business continuity in a cyber world—surviving cyberattacks. Business Expert Press & LLC, New York. ISBN 978-1-9474-4146-0 [Szyperski11] Szyperski C (2011) Component software—beyond object-oriented programming, 2nd edn. Addison-Wesley, New York. ISBN 978-0-321-75302-1 [Tanzer18] Tanzer D (2018) Quick glance at agile anti-patterns. Independently published, ISBN 978-1-9802-2631-4 [Tarr99] Tarr T, Ossher H, Harrison W, Sutton SM (1999) N degrees of separation—multi-dimensional separation of concerns ICSE 1999, Los Angeles. http://www.cs.bilkent.edu.tr/~bedir/CS586-AOSD/Syllabus/ NDegreesOfSeparation.pdf. Accessed 15 May 2018 [Thomas97] Thomas P, Weedon R (1997) Object-oriented programming in Eiffel, 2nd edn. Addison-Wesley, Harlow. ISBN 978-0-201-33131-8 [TOGAF11] The Open Group (2011) TOGAF® Version 9.1.Van Haren Publishing, 10th edn, 2011. 
ISBN 978-9-0875-3679-4 [Uslar13] Uslar M et al (2013) Standardization in smart grids (Power systems). Springer, Berlin. ISBN 978-3-642-42961-3 [VanRenssen14] van Renssen A (2014) Semantic information modeling in formalized languages. www.gellish.net. ISBN 978-1-304-51359-5 [Viescas18] Viescas JL (2018) SQL queries for mere mortals—a hands-on guide to data manipulation in SQL, 4th edn. Pearson Professional & AddisonWesley, Boston. ISBN 978-0-134-85833-3 [Vlissides96] Vlissides J, Coplien JO, Kerth NL (1996) Pattern languages of program design (Part 2). Addison-Wesley, Reading. ISBN 978-0-201-89527-8 [Voelter13] Voelter M (2013) DSL engineering—designing, implementing and using domain-specific languages. CreateSpace Independent Publishing Platform, ISBN 978-1-4812-1858-0 [Volter06] Volter M (2006) Model-driven software development. Wiley, Hoboken. ISBN 978-0-470-02570-3
[Völter02] Völter M, Schmid A, Wolff E (2002) Server component patterns: component infrastructures illustrated with EJB. Wiley, Hoboken. ISBN 978-0-470-84319-2 [VonHalle01] von Halle B (2001) Business rules applied—building better systems using the business rules approach. Wiley, Hoboken. ISBN 978-0-471-41293-9 [Voss13] Voss J (2013) Describing data patterns—a general deconstruction of metadata standards. Create Space Publishing. ISBN 978-1-4909-31869. https://edoc.hu-berlin.de/handle/18452/17446. Accessed 27 June 2018 [Walicki16] Walicki M (2016) Introduction to mathematical logic. World Scientific, Singapore. ISBN 978-9-8147-1996-4 [Walmsley12] Walmsley P (2012) Definitive XML schema. Prentice Hall, Upper Saddle River. ISBN 978-0-132-88672-7 [Warwick90] Warwick K, Tham MT (eds) (1991) Failsafe control systems—applications and emergency management. Springer, Dordrecht. ISBN 978-0-412-37740-2 [Weerawarana05] Weerawarana S, Curbera F, Leymann F, Storey T, Ferguson DE (2005) Web services platform architecture: SOAP, WSDL, WS-policy, WS-addressing, WS-BPEL, WS-reliable messaging, and more. Prentice Hall, Upper Saddle River. ISBN 978-0-131-48874-8 [Weilkiens08] Weilkiens T (2008) Systems engineering with SysML/UML—modeling, analysis, design. Morgan Kaufmann & MK/OMG Press, Amsterdam. ISBN 978-0-123-74274-2 [Weill04] Weill P, Ross JW (2004) IT governance. Harvard Business School Press, Boston. ISBN 978-1-59139-253-8 [Wik15] Wik P (2015) Service-oriented architecture—principles and applications. CreateSpace Independent Publishing Platform, ISBN 978-1-5238-0794-9 [Withall07] Withall S (2007) Software requirement patterns. Microsoft Press, Redmond. ISBN 978-0-735-62398-9 [Witt12] Witt G (2012) Writing effective business rules. Morgan Kaufmann, San Francisco. ISBN 978-0-123-85051-5 [Wlaschin17] Wlaschin S (2017) Domain modeling made functional. Pragmatic Bookshelf, Raleigh. 
ISBN 978-1-6805-0254-1 [Yourdon79] Yourdon E, Constantine LL (1979) Structured design—fundamentals of a discipline of computer program and systems design. Prentice Hall, Englewood Cliffs. ISBN 978-0-138-54471-3 [Yu16] Yu L (2016) A developer’s guide to the semantic web. Springer, Berlin. ISBN 978-3-662-50652-3 [Zeng16] Lei Zeng M, Qin J (2016) Metadata, 2nd edn. Neal Schuman Publishing, Chicago. ISBN 978-1-7833-0052-5 [Zimmermann05] Zimmermann O, Tomlinson M, Peuser S (2005) Perspectives on web services: applying SOAP, WSDL and UDDI to real-world projects, 2nd edn. Springer, Berlin. ISBN 978-3-540-00914-6
13 Architecture Principles for Resilience
Abstract
In today’s very complex systems, errors, faults, malfunctions, attacks, and malicious activities are no longer exceptions, but everyday facts. Such events may lead to unavailability, failures, or unacceptable delays in the services or products implemented by software. The consequences of such events may be grave—possibly endangering life, health, property, profit, or reputation. The systems must, therefore, be designed and implemented to exhibit sufficient resilience against adverse incidents. Building resilient systems is a significant engineering discipline with a rich history and extensive literature. This chapter provides an introduction to resilience engineering and presents some of the fundamental principles for designing and evolving the architecture of resilient systems.
13.1 Dependability and its Elements
Quote: Dependability = "A system does what it should do, and does not do what it should not do. … and it does that for all wanted and unwanted, or all predicted and unpredicted situations." (Anonymous)
Today's society, business world, and private lives are contingent on software-systems (and their underlying execution infrastructure). Failures in software-systems may cause loss of service or property, endanger lives or the environment, or make users' lives unpleasant. Therefore, we need software-systems with high dependability (Definition 13.1, [Leveson11, Hollnagel06, Jackson10, Smith11, Knight12]), which is the third pillar of future-proof software-systems (Sect. 4.7).
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019 F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8_13
Definition 13.1 (Repetition): Dependable Software
Dependability is the property of a software-system to work correctly in its intended environment, with its intended use, as well as when these assumptions are violated or external events cause disruptions. A dependable system exhibits high resilience against operating errors, failures, attacks, malfunctions, design and implementation errors, etc.
Dependability is a (relatively) complicated concept: It consists of two pillars (Fig. 13.1):
1. General resilience: The capabilities common to all systems to absorb a disruption and recover to an acceptable level of performance (Definition 13.2). Resilience is implemented by nine resilience architecture principles (presented in Sect. 13.2);
2. Application-specific dependability properties: Which properties (from Table 4.1) constitute the dependability of a specific system depends upon the application area. For an Internet banking system, properties such as security and availability are essential. For a car electronics system, properties such as safety and reliability are foremost. The method to determine which dependability properties are necessary for a specific application is the use of a property taxonomy (Fig. 5.13).
Definition 13.2: Resilience
Resilience is the capability of a system with specific characteristics, before, during, and after a disruption:
Fig. 13.1 Dependability—protection process
• to absorb the disruption and recover to an acceptable level of performance;
• and to sustain that level for an acceptable period of time.
http://www.incose.org/practice/techactivities/wg/rswg/.
Figure 13.1 shows the process for building dependable systems: First, dependability is founded on nine general resilience principles (Table 13.1), which are valid for all systems and cover both the software-system and its underlying execution infrastructure [Francis14, Gertsbakh11, Hollnagel06, Hollnagel13, Jackson10, Zongo18a, Zongo18b, Flammini18]. Second, the quality properties affecting the application (such as security, safety, performance, and availability) are determined, and the dependability properties taxonomy for the specific field of application (Fig. 5.12) is generated. For both resilience and application-specific properties, proven, enforceable principles exist.
For each of the dependability properties, the threats to the protection assets are determined. Protection assets may be accounts, engineering drawings, passwords, business plans, etc. Threats include theft, unauthorized modification, destruction, etc. Concerning information technology, the protection assets are stored in digital form in computer or database systems. Once the threats are established, the risk is assessed: How probable is the materialization of the threat? What is the possible damage if the threat materializes? This step is called risk management, and many different processes exist for this task (e.g., [Wheeler11, Hopkin18, Hodson19]).
The dependability architect's work starts after the risk analysis is done: The architect and the team design mitigation measures (protective hardware and software instruments) to prevent the success of the threats or to minimize the impact of a successful threat. Rich knowledge, literature, and experience are available for this task.
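The asset/threat risk assessment step described above can be sketched as a simple expected-loss computation. The asset names, probabilities, damage figures, and the acceptance threshold below are hypothetical illustrations; real risk-management processes are considerably richer:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """A threat against a protection asset (all figures are hypothetical)."""
    asset: str          # protection asset, e.g., an account database
    name: str           # e.g., theft, espionage, unauthorized modification
    probability: float  # estimated probability of materialization per year
    damage: float       # estimated damage if the threat materializes

    def risk(self) -> float:
        # Classic risk metric: expected annual loss = probability x damage
        return self.probability * self.damage

# Hypothetical example values for illustration only
threats = [
    Threat("customer accounts", "theft", 0.02, 5_000_000),
    Threat("engineering drawings", "espionage", 0.2, 1_000_000),
]

# Acceptance threshold as set, e.g., by the risk/compliance officer (hypothetical)
ACCEPTABLE_RESIDUAL_RISK = 120_000

for t in threats:
    verdict = "acceptable" if t.risk() <= ACCEPTABLE_RESIDUAL_RISK else "improve mitigation"
    print(f"{t.asset}/{t.name}: risk = {t.risk():,.0f} -> {verdict}")
```

Here the second threat exceeds the acceptance threshold, so the architects would return to work and strengthen the mitigation measures, exactly as the feedback loop in the text describes.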
Table 13.1 Resilience architecture principles
R1 Policies
R2 Vertical architectures
R3 Fault containment regions
R4 Single points of failure
R5 Multiple lines of defense
R6 Fail-safe states
R7 Graceful degradation
R8 Dependable foundation (infrastructure)
R9 Monitoring
Quote: “The way in which we achieve dependability is through a rigorous series of engineering steps, with which many engineers are unfamiliar.” (John Knight, 2012)
Finally, the last step of risk assessment is done: Zero risk is not possible in a technical system; a residual risk always exists. The residual risk may be very low; however, it must be known, quantified, and documented. Then follows a decision (usually by the risk or compliance officer of the organization) whether the residual risks are acceptable or not. If they are acceptable, the project can continue. If some of the residual risks are not acceptable, the dependability architects return to work and improve the mitigation measures for the affected protection asset/threat pairs until the residual risk becomes low enough to be acceptable.
13.2 Resilience Architecture Principles
The first line of defense in the software-system is the built-in resilience. Resilience against disruptions relies on both the resilience of the execution platforms and the resilience of the applications. The basic foundation of resilient systems is the set of nine resilience architecture principles listed in Table 13.1, which are explained in detail below.
13.3 Resilience Architecture Principle #1: Policies
13.3.1 Introduction
Achieving and maintaining high dependability in an IT system (especially in a large or very large IT system, or in a complicated product) is an extremely challenging task. Not only the intriguing complexity but also the rapid evolution with continuous changes makes this a very difficult endeavour. In addition, IT systems and products are built, maintained, and evolved by large or very large, often global, organizations with a huge workforce and fragmented management structures. This constitutes a significant risk of divergence and of the introduction of weaknesses affecting dependability.
Quote: "If you do not have the support of your upper management, your program is doomed to fail before you finish writing the policy." (Scott Barman, 2007)
Definition 13.3: Policy A policy is a binding document with a general commitment, direction, or intention and is formally stated by top management. A policy covers one topic, such as information security, vehicle safety, or product quality assurance.
313
13.3 Resilience Architecture Principle #1: Policies
http://www.praxiom.com/iso-definition.htm#Policy. We need, therefore, an instrument to define, communicate, and enforce the dependability objectives of the organization throughout the development cycle and during operation. The adequate solution is policies. Policies are documents which cover specific dependability sectors and are binding for all people and departments of the organization (Definition 13.3, [Blokdyk18c, Flynn09, Barman07, Landoll16]).
13.3.2 Policy Process
IT policies are a valuable asset not only for the IT department, but for the whole organization. Well-written policies strongly support high-quality software development. However, assembling a complete set of consistent policies in an organization is a formidable task. A policy process is shown in Fig. 13.2: The policy process must be under the governance of top management. The most important task falls to the policy coordination team: Its responsibility is to define a complete, consistent, and non-overlapping set of policies for all relevant areas of the IT system.
Quote: "Policy is the most important nontechnology-related aspect of computer security." (Paul E. Proctor, 2002)
Fig. 13.2 Policy process: under the governance of top management, the policy coordination team steers the cycle of policy creation, policy implementation, policy enforcement, policy assessment (with metrics), and policy review, producing consistent, non-overlapping, focused policies backed by supporting standards (industry and proprietary)
The first step of the process is policy creation: An experienced team authors the policies. Policy implementation means publishing, communicating, and periodic training of all stakeholders. Policy enforcement is the activity of checking adherence to all relevant policies in all activities of building, maintaining, evolving, and operating the IT system. At regular intervals, the effectiveness of the policies must be assessed by controls (in any case whenever a security breach occurs). Finally, weaknesses in the policies must be reviewed and rectified (feedback).
A policy obliges the people in the organization to behave in conformance with the objectives. If such guidance is not available, correct behaviour cannot be expected. One intentional or accidental misbehaviour can open a severe vulnerability of the company with respect, e.g., to the confidentiality or integrity of its information. In addition, having effective policies in force may affect liability in case of damage and support litigation. Note that policies are not only required for commercial enterprises, but also for companies producing products.
Quote: "Important concept to remember: Information is an asset. You might not be able to assign it a value, but your competitors might pay thousands or even millions of dollars to understand or even steal those assets." (Scott Barman, 2002)
Example 13.1: Information Security Policy
For many organizations, their information is the most valuable asset. Information assets are mostly stored in electronic form and come in many forms: Customer data, financial strategies, blueprints for a new fighter plane, new trading algorithms, control programs for autonomous cars, the source code of popular programs, etc. Because most organizations are connected electronically to the outside world—via Internet, e-mail, partner networks, etc.—their information is subject to threats, such as theft, copying, unauthorized modifications, espionage, damage, or deletions. Successful protection of electronic information assets starts with an effective information security policy [Proctor02, Barman07, Peltier04, Wahe11]. Because an information security policy has to cover a number of areas, the information security policy is split up into hierarchical, focused, concise policies (Fig. 13.3, [Bacik08]).
Principle 13.1: Policies
1. Define a complete, consistent, and nonoverlapping framework of policies covering all areas of IT system dependability (= policy framework); 2. Develop and enforce precise, concise, and comprehensive policies for all areas according to the policy framework; 3. Support the policies by carefully selected standards, methodologies and best practices; 4. Keep all policies and supporting material up-to-date and adapted to the changing environment (= regular policy reviews);
Fig. 13.3 Information security policy architecture: based on international standards, laws, regulations, and recommendations, and on expert knowledge, the information security policy is refined into focused sub-policies (information classification and ownership policy, Internet security policy, access control policy, monitoring and audit policy, configuration policy, security technology policy, etc.)
5. Periodically assess the impact of the policies and improve them whenever a deficiency is found, an international standard/law/regulation is updated, or a new major threat appears;
6. Consistently apply the policies to the evolution of the applications and the infrastructure.
13.4 Resilience Architecture Principle #2: Vertical Architectures Figure 3.7 introduced the architecture framework with horizontal architecture layers and vertical architecture layers (Definition 13.4). The vertical architecture layers contain all policies, methods, processes, the architecture, hardware, and software to implement (and guarantee) a specific quality of service property—such as security, safety, real-time capability, etc. Definition 13.4: Vertical Architecture Container for all policies, methods, processes, the architecture, hardware, and software to implement (and guarantee) a specific quality of service property. Vertical architecture layers (Sect. 3.6.2) have been introduced to symbolize the orthogonality of functions and data between the “functional” (horizontal) layers and the “properties” (vertical layers): The implementations of horizontal functionality and data should be as independent as possible from the implementations of the quality of service properties. The main reasons for the desired orthogonality are the changeability and the different speed of evolution (applications change much faster than infrastructure elements).
For the vertical layers, the same construction rules apply: They need individual policies, adequate architectures guided by architecture principles, standards, methods, and metrics. However, the universe of vertical layers is much larger than that of the horizontal layers, because the number of possible quality properties of a software-system is large (Table 4.1). Whereas the upper four horizontal layers can be covered by three (Chap. 11) plus twelve (Chap. 12) architecture principles, a nearly countless number of architecture principles and patterns exists for the vertical layers. It is possible in this book to state nine foundational principles for resilience, but for the specific quality of service property layers, only examples and general guidance can be given. Example 13.2 presents a top-level Internet security architecture for a financial institution [Murer11], which forms part of the vertical architecture layer "security".
Example 13.2: Internet Security Architecture
The Internet is omnipresent in nearly all applications and products of modern life. It offers fantastic opportunities, but also severe risks (e.g., [Kshetri10, Wall07]). For sensitive industries, such as financial institutions, Internet security is a serious concern.
Quote: "No matter how secure a system is, that system will come under attack. Some attacks will succeed." (O. Sami Saydjari, 2018)
Internet security starts with a policy (Fig. 13.3), followed by a strong security architecture. A rich literature about building Internet security architectures exists (see e.g., [Saydjari18, Killmeyer06, Scholz14, Mead16]). Figure 13.4 shows a possible top-level Internet security architecture for a financial institution. The applications of the bank communicate via the Internet with customers, partners, and the authorities over a secure link (SSL, TLS [Oppliger16]). The bank identifies itself via a digital certificate. The communication is fed through Firewall I and investigated in front-end server A: The data stream is checked for malware and other possible threats. Front-end server A converts the data stream to another protocol, P2. The converted data stream is fed through Firewall II and investigated in a second front-end server, B. The two firewalls and the two front-end servers are from different manufacturers and run on different system software, malware detection, etc. No two elements are identical: This is a typical application of the multiple lines of defense principle (Resilience Architecture Principle #5: Multiple Lines of Defense). All critical elements, such as virus signatures, are updated frequently.
Fig. 13.4 Top-level Internet security architecture for a financial institution: traffic from the Internet passes Firewall I and front-end server A (TLS, authentication against the contracts database), is converted to protocol P2, passes Firewall II and front-end server B (authorization against the rights database), and only then reaches the banking applications; all elements receive frequent updates and write log files, which are analyzed for monitoring and incident response
Next follow authentication (using the contracts database) and authorization (based on the rights database). If both checks are positive, access to the respective applications or information is granted. A significant activity is (near) real-time monitoring: All elements generate exceptions and log files and store them in the log database. The exceptions and log files are analyzed, and any detected anomaly is fed to the incident response team.
The dependability of the system is directly dependent on the quality of the vertical architectures (and the quality of their implementation). The effort invested in defining, specifying, and implementing the vertical architectures must be matched to the risk/damage potential of your application (caution: be on the safe side). If you have a critical application, the investment is high. If you have a low-risk application, the effort may be limited. Remember that in today's world, both the requirements and the environment are continuously changing: Maintain and evolve your vertical architectures in sync with both of them.
Principle 13.2: Vertical Architectures
1. Separate the different quality of service properties into vertical architecture layers; 2. Define specific policies, methods, processes, principles, the architecture, hardware, and software for each vertical architecture layer (i.e., for each quality of service property);
3. Avoid the implementation of functionality or data for vertical architecture layers in the horizontal architecture layers (orthogonality);
4. The dependability of the system is directly dependent on the quality of the vertical architectures (and the quality of their implementation);
5. Match the quality of the vertical architectures to the risk/damage potential of your application (caution: be on the safe side);
6. Continuously maintain/evolve your vertical architectures in sync with changing environments and requirements.
13.5 Resilience Architecture Principle #3: Fault Containment Regions The components of a software-system are subject to external and internal faults. External faults include invalid input data, the early or late arrival of events, malfunction of the underlying execution platform, attacks, etc. Internal faults include programming errors, cache overflows, malware, etc. An external or internal fault causes an error in the software component. The error (often) results in the failure of the component (Fig. 13.5, [Kopetz11]). Quote: “A bug in a program can cause some piece of data to be incorrect, and that data can be used in subsequent calculations so that more (perhaps a lot more) data becomes incorrect. Some of that data can be transmitted to peripherals and other computers, thereby extending the erroneous state to a lot of equipment.” (John Knight, 2012)
The failure of one component may constitute a fault for the successive component: A fault may thus propagate through a chain of components and finally crash the system. This sequence of events is called fault propagation (Fig. 13.5) and is very detrimental to dependability!
Definition 13.5: Fault Propagation & Fault Containment Region
Fault propagation: Transfer of a fault from a failed system to the following system, causing a fault that is again transferred to a following system, thus possibly generating a chain reaction which crashes the system (domino effect);
Fault containment region: Carefully architected component (or group of components) which, when it fails, does not negatively impact dependent components.
Avoiding fault propagation (Definition 13.5) is therefore an essential task in architecting dependable software-systems. The technique to do so is to build fault containment regions into the system (Definition 13.5).
Fig. 13.5 Faults, errors, and failures: an external or internal fault causes an error within a component; the error leads to a failure of the component, which acts as a fault on the next component, and the propagation chain may end in a system crash
Fig. 13.6 Fault containment region
Figure 13.6 shows two components A and B. Component A fails due, e.g., to an external fault. Without fault containment, the fault would propagate to component B and possibly also cause a failure. Fault containment can either be done at the boundary of the failing component (here: component A) or at the boundary of a dependent component (here: component B).
Fault containment means:
1. Fault detection: The boundary of the fault containment region detects a faulty input from a failed component;
2. Fault handling: The faulty input is rejected or only partly accepted, corrected, or replaced (e.g., by simulated data), or its impact is ignored.
The difficult task is fault detection, especially in data streams transferred in the system. The possible detection mechanisms are shown in Table 13.2 ([Knight12, Kopetz11, Zeigler17, Bondavalli16]). Temporal domain faults can be detected by a synchronized time reference, e.g., a global clock. An excellent example of a fully fault-contained, safe architecture is the time-triggered architecture (TTA, [Kopetz11, Peti08]). Value (syntactic) faults can be recognized by defining, using, and checking a strict syntax, typed variables, and range boundaries for all messages and their elements. For this purpose, formal contracts (Sect. 12.6.2) are strong mechanisms [Benveniste12, Erl08, Meyer09, Plösch04]. Content (semantic) faults are difficult to identify. The most promising method is to use models of the "correct" (= intended) behaviour, e.g., using the business object model for the check of admissible operations [Dubrova13].
Principle 13.3: Fault Containment Regions
1. Carefully partition the system into fault containment regions (= Architecture task); 2. Use functional cohesion and degree of criticality as a guiding principle to define the fault containment regions; 3. Build fault propagation boundaries around each system part (→ Interfaces), if possible both at the outgoing and the incoming boundary; 4. Strictly and consistently use effective fault detection mechanisms at the fault containment region boundaries (temporal, syntactic and semantic examination); 5. Provide sufficient redundant information, e.g., in the form of models about the intended behavior of the system parts (components).
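The three detection domains of Table 13.2 (temporal, syntactic, semantic) can be sketched as checks at the boundary of a fault containment region. The message format, the freshness window, the value range, and the set of admissible operations below are hypothetical assumptions chosen for illustration:

```python
import time

ADMISSIBLE_OPS = {"credit", "debit"}   # hypothetical semantic model (business object model)
MAX_AGE_S = 0.5                        # temporal window: message must be fresh (assumed)
AMOUNT_RANGE = (0.0, 1_000_000.0)      # syntactic range check (assumed)

def check_boundary(msg: dict, now: float) -> list[str]:
    """Return the list of fault domains detected for an incoming message."""
    faults = []
    # Temporal domain: arrival too early, too late, or not at all
    if not (0.0 <= now - msg.get("timestamp", -1.0) <= MAX_AGE_S):
        faults.append("temporal")
    # Value (syntactic) domain: type and range of every field
    amount = msg.get("amount")
    if not isinstance(amount, (int, float)) or not AMOUNT_RANGE[0] <= amount <= AMOUNT_RANGE[1]:
        faults.append("syntactic")
    # Content (semantic) domain: only admissible operations pass
    if msg.get("op") not in ADMISSIBLE_OPS:
        faults.append("semantic")
    return faults

now = time.time()
ok = {"timestamp": now, "op": "credit", "amount": 100.0}
bad = {"timestamp": now - 10.0, "op": "format_disk", "amount": -1.0}
print(check_boundary(ok, now))    # []
print(check_boundary(bad, now))   # ['temporal', 'syntactic', 'semantic']
```

A message that fails any check is stopped at the boundary and handled (rejected, corrected, or replaced) instead of propagating the fault to the next component.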
13.6 Resilience Architecture Principle #4: Single Points of Failure Quote: “Effective business continuity management must provide for end-to-end system availability from both a technical and a business perspective.” (Robert W. Buchanan, 2003)
Table 13.2 Input data stream fault detection

Type: Temporal domain (arrival too early, too late, or not at all)
Detection: Comparison to timing frame (global clock) and timing model
Failure handling: Repeat (N times), redundancy, fail-safe state
Information: Time reference (system-wide clock)

Type: Value (syntactic) domain (incorrect or out-of-range values, faulty syntax, false checksum)
Detection: Interface contract violation, integrity protection (e.g., certificates)
Failure handling: Repeat request, redundancy, fail-safe state
Information: Formal contract (e.g., XML)

Type: Content (semantic) domain (wrong, faked, malicious, or unallowed content)
Detection: Semantic comparison: taxonomy, ontology, business object model
Failure handling: Repeat (N times), algorithms (e.g., averaging over time), fail-safe state
Information: Model of the generalized "correct" behavior of the system
Single points of failure (Definition 13.6) are a significant danger for dependable systems: They can be well hidden in any of the horizontal and vertical architecture layers and remain undetected for years. When the single point of failure fails, whole large software-systems may stop correct operation. Therefore, the avoidance, detection, and elimination of SPOFs is essential [Buchanan02].
Definition 13.6: Single Point of Failure
A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working.
The remedy against single points of failure is managed redundancy: Once a SPOF is identified (or has manifested itself through a system failure), an adequate type of redundancy (possibly multiple redundancy) must be introduced. Single points of failure can be present in the architecture (Example 13.3), in the hardware, or in the software of the system. The task of the system architect is to eliminate SPOFs in all phases of the development process [Schmidt06, Marcus03, Hole16].
Example 13.3: Architectural Redundancy
Figure 13.7a shows a simple architecture for a web application: The customer interacts with the applications via one firewall, one web server, two application servers, and one database server. A single power source powers the whole system. It is evident that if either the firewall, the web server, the database server, or the power supply fails, the system will be unavailable. They all constitute a single point of failure!
Fig. 13.7 Single points of failure (a) and redundancy (b): in (a), one firewall, one web server, one database server, and one power supply each form a SPOF; in (b), duplicated firewalls, web servers, and application servers, synchronized database servers, and an emergency power supply eliminate them
Eliminating all single points of failure results in the architecture of Fig. 13.7b: It is obviously much more complicated (and costly). However, this architecture will survive any single failure—and even specific multiple failures. Principle 13.4: Single Point of Failure
1. Identify possible single points of failure early in the architecture/design process (note: single points of failure can occur on all levels of the architecture stack, and in both the horizontal and vertical architecture layers);
2. Carefully avoid introducing single points of failure during the development process (reviews);
3. Eliminate identified single points of failure by introducing adequate managed redundancy.
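The effect of the managed redundancy in Fig. 13.7b can be sketched as a simple failover wrapper: the system fails only if every redundant replica fails. The replica functions and the failure model below are hypothetical stand-ins for real redundant servers:

```python
def call_with_failover(replicas, request):
    """Try each redundant replica in turn; fail only if all replicas fail."""
    errors = []
    for replica in replicas:
        try:
            return replica(request)
        except Exception as exc:      # this replica failed; try the next one
            errors.append(exc)
    # Only reached when every replica failed (no redundancy left)
    raise RuntimeError(f"all {len(replicas)} replicas failed: {errors}")

# Hypothetical replicas: the primary database server is down, the synced replica is up
def primary_db(request):
    raise ConnectionError("primary down")

def replica_db(request):
    return f"result for {request}"

print(call_with_failover([primary_db, replica_db], "SELECT ..."))
```

With a single replica, `primary_db` alone would be a SPOF and the call would fail; adding the second replica turns the same failure into a transparent failover.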
13.7 Resilience Architecture Principle #5: Multiple Lines of Defense Quote: “Your identity can be stolen, your company’s intellectual property can be copied and sold, and even hacks that just a few years ago sounded like science fiction will soon be possible: vehicle systems can already be hacked, and our power grid can be manipulated or sabotaged by terrorists.” (Nick Selby, 2018)
Malicious attacks on computer systems, such as commercial systems, infrastructure control systems, vehicles, etc., are today a regular fact of life. All network-connected systems will come under attack at some time with a very high probability. Unfortunately, many of these attacks will be more or less successful. Attackers and defenders play a continuous "cat-and-mouse" game. Both attack and defense technologies are becoming more and more sophisticated (and dangerous): For example, both sides have started to use machine learning to attack and to defend systems [Chio18]. Threats to the system come either:
• over connected networks (Internet, e.g., [Saxe18, Selby17, Diogenes18]);
• or from insiders (Intranet, e.g., [Mehan16, Stolfo09]).
Defending against network attacks is difficult; however, defending against insider attacks is even more challenging. One general, powerful principle is to use multiple lines of defense (Definition 13.7). The basic idea is to use different computer technologies in assuring the defense functionality (see Example 13.2) and multiple levels of organizational checks.
Definition 13.7: Multiple Lines of Defense
Multiple lines of defense represent the use of multiple computer technologies or multiple organizational checks to help mitigate the risk of one component of the defense being compromised or circumvented.
Because each of the computer security technologies has weaknesses (some known, many unknown, e.g., zero-day exploits [Shein04]), the probability of a successful attack is significantly reduced when two or more completely different security technologies are used consecutively (Fig. 13.8). Even when one technology has been broken or bypassed, the second technology will probably stop the attack and keep the system secure. One effective, additional line of defense is (real-time) monitoring: Spotting and identifying anomalies or deviations from expected behaviour can stop an attack, or at least a repetition of the same attack.
Principle 13.5: Multiple Lines of Defense
1. For each threat and incident implement multiple, independent lines of defense; 2. For outsider threats implement multiple technological lines of defense, for insider threats implement multiple organizational checks; 3. For each line of defense use different methods, techniques and technologies; 4. Use—if possible: real-time—monitoring for the early detection of security breaks.
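A minimal sketch of two consecutive, independent lines of defense (cf. Fig. 13.8): the two "technologies" below are toy checks standing in for products of different vendors, and the detection rules are purely illustrative assumptions. A payload reaches the protection assets only if it passes both lines; every block is reported to monitoring:

```python
# Two deliberately different detection technologies (hypothetical toy rules):
def technology_a(payload: bytes) -> bool:
    """'Vendor A': signature-based check (toy rule)."""
    return b"<script>" not in payload.lower()

def technology_b(payload: bytes) -> bool:
    """'Vendor B': anomaly heuristic on the share of non-printable bytes (toy rule)."""
    if not payload:
        return True
    binary = sum(1 for b in payload if b < 9 or b > 126)
    return binary / len(payload) < 0.3

def defense_in_depth(payload: bytes, monitor: list) -> bool:
    """Payload reaches the protection assets only if every line of defense passes."""
    for name, line in (("A", technology_a), ("B", technology_b)):
        if not line(payload):
            monitor.append(f"attack blocked by line {name}")  # real-time monitoring alert
            return False
    return True

alerts = []
print(defense_in_depth(b"GET /index.html", alerts))            # True: passes both lines
print(defense_in_depth(b"<SCRIPT>evil()</SCRIPT>", alerts))    # False: blocked by line A
print(defense_in_depth(bytes(range(256)), alerts))             # False: blocked by line B
print(alerts)
```

The second attack payload defeats neither line, and the third defeats line A but is caught by the completely different line B, which is exactly the point of using dissimilar technologies.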
Fig. 13.8 Multiple lines of defense: a threat must pass the first line of defense (Technology A) and the second line of defense (Technology B) before it can reach the protection assets; real-time monitoring observes both lines and raises an attack alert
13.8 Resilience Architecture Principle #6: Fail-Safe States
For computer-controlled systems there is one outstanding expectation: No harm to people, property, the environment, or society should be caused by a failure of the controlling system. In case of a failure, the system should go automatically and in a timely manner into a fail-safe state (Definition 13.8).
Quote: "As engineers we sometimes find designing equipment to be well-built is much easier than designing it to fail predictably." (Peter Herena, 2011)
Architecting and designing fail-safe systems is a challenging task and needs much effort, but is of decisive importance in many application domains (e.g., [Anwar18, Jain17, Mahmoud13, Al-Malki11, Flammini12, Allocco10, Ericson15]). Engineering fail-safe systems is a subdiscipline of fault-tolerant computing, for which a rich set of methodologies and literature exists [Koren07, Butler07, Dubrova13, Pelliccione07, Hanmer07, Knight12]. A classification of fault-tolerant behaviour is shown in Fig. 13.9: Fail-safe systems are by far the hardest to design and implement. Definition 13.8: Fail-Safe State Fail-safe means that a system will not endanger lives, property, society, or the environment when it fails. It will go into a fail-safe state and stop working.
Fig. 13.9 System failure classification (by architecture and design effort): a conventional system (industry-standard effort) has various failure modes; a graceful degradation system (high effort) has planned, graceful failure behavior; a fail-safe system (very high effort) guarantees safe failure behavior
A fail-safe system does not mean that failure is impossible or improbable, but that the system's design and implementation prevent unsafe consequences of the failure. For safety-critical systems, fail-safe behaviour is a fundamental requirement, as shown in Example 13.4. In the analysis and design of fail-safe systems, the state-machine approach is often used [Borger03, Wagner06].
Example 13.4: Automotive Cruise Control
In a (simplified) automotive cruise control system (ACC), the operation is controlled by software in an ECU (electronic control unit). The driver sets the desired speed, and the ACC maintains this speed, independently of flat, uphill, or downhill driving. While operating correctly, the ACC shall:
• Maintain the set speed within narrow margins (less than ±3 km/h, because of speed limits);
• Have a high reliability (>0.999);
• Immediately stop operation when Cancel or the brake pedal is pressed.
In case of a failure, the ACC shall:
• Immediately stop operation: NEVER accelerate or brake;
• Light the "ACC failed" indicator in the dashboard.
The fail-safe state in this case is: "Immediately stop any intervention with the car and light the ACC failed indicator in the dashboard" (see e.g., [Holzmann09]). Of course, the mechanical braking still works.
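The ACC behaviour of Example 13.4 can be sketched as a finite state machine with an explicit fail-safe state. The states, events, and the rule that unknown events count as faults are simplified assumptions for illustration, not the actual ECU design:

```python
# Simplified ACC state machine with an explicit fail-safe state.
FAILSAFE = "FAILED"   # stop all intervention, light the "ACC failed" indicator

TRANSITIONS = {
    ("OFF",    "set_speed"): "ACTIVE",
    ("ACTIVE", "cancel"):    "OFF",
    ("ACTIVE", "brake"):     "OFF",
}

def step(state: str, event: str) -> str:
    """Every state has a reliable path to the fail-safe state on 'fault';
    unknown (state, event) pairs are treated as faults too (defensive design)."""
    if state == FAILSAFE:
        return FAILSAFE                      # the fail-safe state is terminal
    if event == "fault":
        return FAILSAFE
    return TRANSITIONS.get((state, event), FAILSAFE)

s = "OFF"
for ev in ["set_speed", "brake", "set_speed", "fault", "set_speed"]:
    s = step(s, ev)
print(s)  # FAILED: after a fault the ACC never intervenes again
```

This mirrors Principle 13.6: the harmful states are avoided by design, the fail-safe state is explicit, and every node has a reliable path into it.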
Principle 13.6: Fail-Safe States
1. Execute a careful hazard analysis of your full system to identify all (= goal) critical or harmful states;
2. Document all paths to the critical or harmful states in a formal way, such as state chart diagrams;
3. Model your application (or the software part of it) as a finite state machine;
4. Define fail-safe state(s);
5. Implement the fail-safe state;
6. Implement reliable paths from all nodes to the fail-safe state(s).
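Steps 3 to 6 of this principle can be illustrated with a minimal sketch of the cruise control of Example 13.4 as a finite state machine. All state names, event names, and the restart policy are illustrative assumptions, not taken from the book:

```python
# Minimal sketch of Principle 13.6 for the ACC of Example 13.4.
# State and event names are illustrative assumptions.

class CruiseControl:
    """Finite state machine with a dedicated fail-safe state.

    Every state has a guaranteed transition to FAILSAFE (step 6:
    reliable paths from all nodes to the fail-safe state).
    """

    def __init__(self):
        self.state = "OFF"
        self.indicator_lit = False

    def handle(self, event):
        # Any detected fault forces the fail-safe state from every node.
        if event == "fault_detected":
            self._enter_failsafe()
            return self.state
        if self.state == "OFF" and event == "set_speed":
            self.state = "ENGAGED"
        elif self.state == "ENGAGED" and event in ("cancel", "brake_pedal"):
            self.state = "OFF"
        # All other events are ignored; in particular, the machine never
        # leaves FAILSAFE except by a full restart (assumed policy).
        return self.state

    def _enter_failsafe(self):
        self.state = "FAILSAFE"     # stop any intervention with the car
        self.indicator_lit = True   # light the "ACC failed" indicator


acc = CruiseControl()
acc.handle("set_speed")        # → ENGAGED
acc.handle("fault_detected")   # → FAILSAFE, indicator lit
```

The crucial design point is that the transition to FAILSAFE is checked before any other transition logic, so no reachable state can bypass it.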
13.9 Resilience Architecture Principle #7: Graceful Degradation

Many of our modern software-systems are large, complex, and depend on many components to operate to the full extent of their functionality. It is a daily fact that some of these components can fail at any time: What is the desired behaviour of the system after one or several of its components have failed? During development, systems should be engineered in such a way that they can tolerate a specified number of component failures without becoming unavailable to their users. This engineering task is called graceful degradation (Definition 13.9): Instead of the system failing completely, component failures cause a degradation in the quality of service, e.g., by offering reduced functionality, temporarily limited security, or the degradation of other quality properties.

Definition 13.9: Graceful Degradation
Property of a system to continue operation at some reduced level of functionality or performance, or with lower quality of a system property, after one or several of its components have failed (= reduced service quality).

Graceful degradation is shown in Fig. 13.10: The system retains its full functionality and quality of service when a few components fail. When more components fail, some properties of the system become degraded, but the system is still available to the users. When even more components fail, the result is a system failure and thus unavailability to the users. The key technology for graceful degradation is fault-tolerance (Definition 13.10, [Dubrova13, Mansouri16, Goloubeva06, Pullum01, Butler07, Butler10]). In some
Fig. 13.10 Graceful degradation after failure of system components
systems, such as electronic banking, fault-tolerance is a welcome attribute. In other systems, such as safety-critical systems or autonomous systems, fault-tolerance is a basic requirement of their design (see e.g., [Yang17, Riascos10, Caccavale03, Mahmoud13]).

Definition 13.10: Fault-Tolerance
Providing functionality or service that is consistent with the specification in spite of faults or failures.

Implementation of fault-tolerance is based on planned, managed redundancy. Some components are duplicated (or even triplicated) so that the back-up component takes over the functionality in case of failure of the primary component. An example of graceful degradation is given in Example 13.5.

Example 13.5: Reduced ATM Functionality
An automatic teller machine (ATM) allows you to retrieve cash at any time using your bank card or a credit card. The amount of cash which can be retrieved depends on the balance in your account. During normal operation the following steps are executed (Fig. 13.11):

1. The ATM reads the card and checks its formal validity;
2. The function “Cash” is chosen by the customer;
3. The PIN is checked;
4. The desired amount (e.g., € 1000.-) is inputted;
5. The ATM requests permission (balance available) from the bank computer (account database) via a remote link;
6. The bank computer clears the transaction and allows payout;
7. Cash is disbursed.
Fig. 13.11 Degraded automatic teller machine operation
If the ATM cannot contact the bank computer (due to an internal IT problem in the bank or Internet unavailability), the ATM goes into degraded operation:

1. The ATM reads the card and checks its formal validity;
2. The function “Cash” is chosen by the customer;
3. The PIN is checked;
4. The desired amount (e.g., € 1000.-) is inputted;
5. The ATM requests permission (balance available) from the bank computer (account database) via a remote link;
6. Failure: The bank computer cannot return authorization;
7. The ATM goes into degraded mode: Each customer can only retrieve max. 100 €/day (locally stored on the card).

Although it may be annoying for the customer to have such a strict limit (his balance would readily permit the disbursement of the 1000 €), he is at least not left without cash. At the same time, the risk for the bank is limited, even if the degraded operation lasts for a longer time.

Principle 13.7: Graceful Degradation
1. Investigate the possibility for graceful degradation in your planned system (= Business task);
2. Define the allowable degraded modes of operation and assess their risk;
3. Architect and implement proven graceful degradation technologies (for specific resilience properties, such as availability, performance, safety, security, …);
4. Compensate component failures by carefully planned redundancy.
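The degraded ATM operation of Example 13.5 can be sketched in a few lines. The daily limit of 100 € and the failure scenario are taken from the example; the per-card counter, the exception-based failure model, and all names are illustrative assumptions:

```python
# Sketch of the degraded ATM operation of Example 13.5.
# The 100 €/day limit is from the example; all names are assumptions.

DEGRADED_DAILY_LIMIT = 100  # € per day, locally enforced on the card


class BankUnreachable(Exception):
    """Raised when the bank computer cannot return an authorization."""


def authorize_withdrawal(amount, ask_bank, card):
    """Return the amount to disburse, degrading gracefully when the
    bank computer cannot be reached (steps 5-7 of the degraded flow)."""
    try:
        if ask_bank(amount):            # normal operation: bank clears it
            return amount
        return 0                        # bank reachable, balance too low
    except BankUnreachable:
        # Degraded mode: strict local limit instead of total unavailability.
        allowed = max(0, DEGRADED_DAILY_LIMIT - card["withdrawn_today"])
        granted = min(amount, allowed)
        card["withdrawn_today"] += granted
        return granted


def bank_down(amount):
    raise BankUnreachable()


card = {"withdrawn_today": 0}
print(authorize_withdrawal(1000, bank_down, card))  # 100: degraded, not failed
```

The design choice is that the failure path returns a reduced service (100 € instead of 1000 €) rather than propagating the error to the customer, which is exactly the behaviour Definition 13.9 asks for.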
13.10 Resilience Architecture Principle #8: Dependable Foundation (Infrastructure)

Over the last decades of software engineering, a clear tendency has emerged: More and more of the standardized or productized functionality was shifted into the technical infrastructure (Fig. 13.12). Many dependability functions are now implemented in powerful, proven, and reliable products, such as RBAC products (role-based access control, Example 7.1), network monitoring, SSO products (Single Sign-On for large application landscapes, [Orondo14]), the synchronization of distributed databases [Oezsu11], or malware filters [Saxe18].
Fig. 13.12 Technical infrastructure evolution
The technical infrastructure (= Technical architecture layer in Fig. 3.8), therefore, supports more and more of the dependability functionality—and is often called resilience infrastructure (Definition 13.11).

Definition 13.11: Resilience Infrastructure
Set of proven resilience technologies, products, and services located in the technical infrastructure and supporting the dependability properties (availability, security, performance, …) of software-systems.

Products supporting dependability properties have a different life cycle than applications. First, they need an evaluation, selection, and procurement phase. Later follows the regular update phase dictated by the vendor. Technical infrastructure engineers are better suited to care for products than the application engineers using the products (via services). Therefore, actively managed care for a resilient infrastructure is a powerful contribution to the dependability of the software-system.

Principle 13.8: Dependable Foundation (Resilience Infrastructure)
1. Use a resilience infrastructure as part of the foundation for dependable software-systems;
2. Only use proven resilience technologies and services supporting the resilience properties (availability, security, performance, …) through the technical infrastructure;
3. Whenever possible, use industry-standard based resilience techniques (avoid vendor lock-in).
13.11 Resilience Architecture Principle #9: Monitoring

Monitoring (Definition 13.12) is the supervision of the IT-system during operation (see also Fig. 13.4). Specific software products gather data about the operational parameters; this data is used to detect anomalies, protect applications, and enable forensic analyses. There are two types of monitoring:

• Real-time monitoring: The data about the system operation is collected in real-time, immediately analyzed, and corrective or protective actions are taken instantly;
• Logging: The data is collected, stored in files (log files), and examined at a later time, e.g., if an incident or breach of security has been detected.

Definition 13.12: Monitoring
An IT system monitor is a combination of hardware and software components used to collect data about the operation of the system, the network, and other components, such as applications. Real-time monitoring gathers the data in real-time, immediately analyzes it, and instantly takes corrective or protective actions. Logging acquires the data continuously and writes it unprocessed and time-stamped to files (log files) for later analysis.

The idea of monitoring is simple. The correct and effective implementation, however, is difficult. The monitor, i.e., the combination of hardware and software elements acquiring, storing, and analyzing data, is tightly integrated with systems software (operating system, network software, device drivers, etc.) and interacts closely with the applications [Julian17, Josephsen13]. Example 13.6 gives an illustration of the value of real-time monitoring.

Example 13.6: Extrusion Detection
Figure 13.13 shows the monitoring of the network boundary of an enterprise. In this case, the number of incoming and outgoing TCP packets [Forouzan10] per second was counted by the monitor. The average number of outgoing packets/sec was 28,000. At some time, the monitor recorded 401,452 outgoing packets/sec over several minutes. This raised a network alarm. The analysis showed that the large number of packets could not have been generated by the applications. The investigation then revealed the presence of a maliciously installed Distributed Denial-of-Service (DDoS, [Yu14]) attacker, which had not been detected by the installed anti-malware protection software. The malware could subsequently be identified and removed. Without monitoring, it could possibly have remained active for a long time, potentially creating legal issues for the enterprise.
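The alarm logic of this example can be sketched as a simple threshold monitor. The baseline of 28,000 packets/sec and the recorded spike of 401,452 packets/sec are taken from the example; the alarm factor and all names are assumed tuning parameters for illustration:

```python
# Sketch of the real-time monitor of Example 13.6: the outgoing packet
# rate is compared against a threshold derived from the observed baseline.
# The alarm factor is an assumption, not from the book.

BASELINE_PPS = 28_000     # observed average outgoing packets/sec
ALARM_FACTOR = 5          # assumption: alarm at 5x the normal rate


def check_outgoing_rate(packets_per_second):
    """Return an alarm record if the rate is anomalous, else None."""
    threshold = ALARM_FACTOR * BASELINE_PPS
    if packets_per_second > threshold:
        return {
            "alarm": "network",
            "observed_pps": packets_per_second,
            "threshold_pps": threshold,
        }
    return None


assert check_outgoing_rate(28_000) is None   # normal traffic: no alarm
alarm = check_outgoing_rate(401_452)         # the recorded spike
print(alarm["alarm"])                        # a network alarm is raised
```

In a real deployment the baseline itself would be computed from a moving window of measurements rather than hard-coded, but the decision rule is the same.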
Fig. 13.13 Extrusion detection by monitoring
An additional—highly effective, but costly—monitoring task is the monitoring of application interfaces. Application interface monitoring checks the conformance to the service contracts in real-time and detects any violation of a service contract (Sect. 12.6.3). This type of monitoring greatly reduces the error rate and improves dependability significantly.
Principle 13.9: Monitoring
1. Define the objectives of monitoring, both for technical monitoring and business (applications) monitoring;
2. Carefully specify the metrics, analytics, results, and alerts to be extracted from monitoring;
3. Define the processes for data analysis, including incident/emergency responses;
4. Specify the actions following alerts—wherever possible, implement fully automated responses (with no or little human interaction);
5. Recommendation: Use commercial monitoring tools whenever possible.
References
References [Allocco10] Allocco M (2010) Safety analyses of complex systems—considerations of software, firmware, hardware, human, and the environment. Wiley, Hoboken. ISBN 978-0-470-58770-6 [Al-Malki11] Al-Malki MF (2011) Fault-tolerant flight control—system design with application to bell-205 helicopter. VDM, Müller. ISBN 978-3-6392-3928-7 [Anwar18] Anwar S (2018) Fault tolerant drive by wire systems—impact on vehicle safety and reliability. Bentham Science Publishers, Sharjah. ISBN 978-1-6080-5667-5 [Bacik08] Bacik S (2008) Building an effective security policy architecture. CRC Press Inc., Boca Raton. ISBN 978-1-420-05905-2 [Barman07] Barman S (2007) Writing information security policies. Pearson Technology Group, Upper Saddle River. ISBN 978-1-578-70264-0 [Benveniste12] Benveniste A, Caillaud B, Nickovic D, Passerone R, Raclet J-B, Reinkemeier P, Sangiovanni-Vincentelli A, Damm W, Henzinger T, Larsen K (2012) Contracts for systems design. INRIA Research Report, N° 8147. ISSN 0249-6399. http:// hal.inria.fr/docs/00/75/85/14/PDF/RR-8147.pdf. Accessed 23 Sep 2017 [Blokdyk18c] Blokdyk G (2018) Information policy—a clear and concise reference. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-9869-4593-6 [Bondavalli16] Bondavalli A, Bouchenak S, Kopetz H (eds) (2016) Cyber-physical systems of systems: foundations—a conceptual model and some derivations: the AMADEOS legacy. Springer Lecture Notes in Computer Science, Heidelberg. ISBN 978-3-319-47589-9 [Borger03] Borger E (2003) Abstract state machines—a method for high-level system design and analysis. Springer, Berlin. ISBN 978-3-540-00702-9 [Buchanan02] Buchanan R (2002) Disaster proofing information systems—a complete methodology for eliminating single points of failure. McGraw-Hill Education, New York. ISBN 978-0-071-40922-3 [Butler07] Butler M, Jones C, Romanovsky A, Troubytsina E (eds) (2007) Rigorous development of complex fault-tolerant systems. 
Springer, Berlin (Lecture Notes in Computer Science, Vol. 4157). ISBN 978-3-540-48265-9 [Butler10] Butler M, Jones CB, Romanovsky A, Troubitsyna E (2010) Methods, models and tools for fault tolerance. Springer, Berlin (Lecture Notes in Computer Science, Vol. 5454). ISBN 978-3-642-00866-5 [Chio18] Chio C, Freeman D (2018) Machine learning and security—protecting systems with data and algorithms. O’Reilly UK Ltd., Beijing. ISBN 978-1-491-97990-7 [Diogenes18] Diogenes Yuri, Ozkaya Erdal (2018) Cybersecurity—attack and defense strategies: infrastructure security with red team and blue team tactics. Packt Publishing Inc., Birmingham. ISBN 978-1-7884-7529-7 [Dubrova13] Dubrova E (2013) Fault-tolerant design. Springer, Berlin. ISBN 978-1-461-42112-2 [Ericson15] Ericson CA (2015) Hazard analysis techniques for system safety, 2nd edn. Wiley, Hoboken. ISBN 978-1-118-94038-9 [Erl08] Erl T (2008) Web service contract design and versioning for SOA. Prentice Hall, Upper Saddle River. ISBN 978-0-136-13517-3 [Flammini12] Flammini F (ed) (2012) Railway safety, reliability, and security—technologies and systems engineering. Information Science Reference (IGI Global), Hershey. ISBN 978-1-4666-1643-1
[Flammini18] Flammini F (ed) (2018) Resilience of cyber-physical systems—from risk modelling to threat counteraction. Springer, Berlin. ISBN 978-3-319-95596-4 [Flynn09] Flynn N (2009) The e-Policy handbook—rules and best practices to safely manage your company’s e-mail, blogs, social networking, … and other internet communication tools. American Management Association, New York. ISBN 978-0-8144-1065-3 [Forouzan10] Forouzan BA (2010) TCP/IP protocol suite, 4th edn. McGraw-Hill Inc., Chennai. ISBN 978-0-070-70652-1 [Francis14] Francis R, Bekera B (2014) A metric and frameworks for resilience analysis of engineered and infrastructure systems. Reliab Eng Sys Safety 121:90–103. https://blogs.gwu.edu/seed/files/2012/07/Reliability-Engineering-and-SystemSafety-2014-Francis-1y5jkh9.pdf. Accessed 3 Sep 2017 [Gertsbakh11] Gertsbakh I, Shpungin Y (2011) Network reliability and resilience. SpringerBriefs in Electrical and Computer Engineering. Springer, Heidelberg. ISBN 978-3-642-22373-0 [Goloubeva06] Goloubeva O, Rebaudengo M, Reorda MS, Violante M (2006) Softwareimplemented hardware fault tolerance. Springer, Berlin. ISBN 978-0-387-26060-0 [Hanmer07] Hanmer R (2007) Patterns for fault tolerant software. Wiley, Hoboken. ISBN 978-0-470-31979-6 [Hodson19] Hodson C (2019) Cyber risk management. Kogan Page, New York. ISBN 978-0-749-48412-5 [Hole16] Hole KJ (2016) Anti-fragile ICT systems. Springer, Berlin. ISBN 978-3-319-30068-9 [Hollnagel06] Hollnagel E, Woods DD, Leveson N (eds) (2006) Resilience engineering—concepts and precepts. Ashgate Publishing Ltd., Aldershot. ISBN 978-0-7546-4904-5 [Hollnagel13] Hollnagel E, Paries J, Woods DD, Wreathall J (2013) Resilience engineering in practice—a guidebook. CRC Press, Boca Raton. ISBN 978-1-472-42074-9 [Holzmann09] Holzmann F (2009) Adaptive cooperation between driver and assistant system— improving road safety. Springer, Berlin. 
ISBN 978-3-642-09388-3 [Hopkin18] Hopkin P (2018) Fundamentals of risk management—understanding, evaluating and implementing effective risk management, 5th edn. Kogan Page, New York. ISBN 978-0-749-48307-4 [Jackson10] Jackson S (2010) Architecting resilient systems—accident avoidance and survival and recovery from disruptions. Wiley, Hoboken. ISBN 978-0-470-40503-1 [Jain17] Jain T, Yamé JJ, Sauter D (2017) Active fault-tolerant control systems—a behavioral system theoretic perspective. Springer, Berlin. ISBN 978-3-319-68827-5 [Josephsen13] Josephsen D (2013) Nagios—building enterprise-grade monitoring infrastructures for systems and networks, 2nd edn. Prentice Hall Inc., Upper Saddle River. ISBN 978-0-133-13573-2 [Julian17] Julian M (2017) Practical monitoring—effective strategies for the real world. O’Reilly UK Ltd., Farnham. ISBN 978-1-491-95735-6 [Killmeyer06] Killmeyer J (2006) Information security architecture—an integrated approach to security in the organization. Auerbach Publishers Inc., Boca Raton. ISBN 978-0-849-31549-7 [Knight12] Knight J (2012) Fundamentals of dependable computing for software engineers. Chapman and Hall/CRC Inc, Boca Raton. ISBN 978-1-439-86255-1
[Kopetz11] Kopetz H (2011) Real-time systems—design principles for distributed embedded applications. Springer, New York. ISBN 978-1-4419-8237-7 [Koren07] Koren I, Mani Krishna C (2007) Fault-tolerant systems. Morgan Kaufmann Publishing, San Francisco. ISBN 978-0-120-88525-1 [Kshetri10] Kshetri N (2010) The global cybercrime industry—economic, institutional and strategic perspectives. Springer, Heidelberg. ISBN 978-3-642-11521-9 [Landoll16] Landoll DJ (2016) Information security policies, procedures, and standards—a practitioner’s reference. CRC Press, Boca Raton. ISBN 978-1-482-24589-9 [Leveson11] Leveson NG (2011) Engineering a safer world—systems thinking applied to safety. MIT Press, Cambridge. ISBN 978-0-262-01662-9 [Mahmoud13] Mahmoud MS, Xia Y (2013) Analysis and synthesis of fault-tolerant control systems. Wiley, Hoboken. ISBN 978-1-118-54133-3 [Mansouri16] Mansouri H (2016) Fault tolerance in mobile and ad hoc networks via checkpointing. LAP LAMBERT Academic Publishing, Saarbrücken. ISBN 978-3-330-00310-1 [Marcus03] Marcus E (2003) Blueprints for high availability. Wiley, Hoboken. ISBN 978-0-471-43026-1 [Mead16] Mead NR, Woody CC (2016) Cyber security engineering—a practical approach for systems and software assurance. Addison-Wesley Professional, Boston. ISBN 978-0-13-418980-2 [Mehan16] Mehan JE (2016) Insider threat—a guide to understanding, detecting, and defending against the enemy from within. IT Governance Publishing, Ely. ISBN 978-1-8492-8839-2 [Meyer09] Meyer B (2009) A touch of class—learning to program well with objects and contracts. Springer, Berlin. ISBN 978-3-540-92144-5 [Murer11] Murer S, Bonati B, Furrer FJ (2011) Managed evolution—a strategy for very large information systems. Springer, Berlin. ISBN 978-3-642-01632-5 [Oppliger16] Oppliger R (2016) SSL and TLS—theory and practice, 2nd edn. Artech House Publishers, Norwood. ISBN 978-1-608-07998-8 [Orondo14] Orondo O (2014) Identity and access management—a systems engineering approach. 
CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-4993-5706-6 [Oezsu11] Özsu MT, Valduriez P (2011) Principles of distributed database systems, 3rd edn. Springer, New York. ISBN 978-1-441-98833-1 [Pelliccione07] Pelliccione P, Muccini H, Guelfi N, Romanofsky A (2007) Software engineering and fault tolerant systems. World Scientific Publishing Inc., Singapore. ISBN 978-9-8127-0503-7 [Peltier04] Peltier TR (2004) Information security policies and procedures—a practitioner’s reference, 2nd edn. Taylor & Francis, Boca Raton. ISBN 978-0-8493-1958-7 [Peti08] Peti P (2008) Diagnosis and maintenance in an integrated time-triggered architecture—tackling the trouble-not-identified phenomenon. VDM, Müller. ISBN 978-3-8364-8310-0 [Plösch04] Plösch R (2004) Contracts, scenarios and prototypes—an integrated approach to high quality software. Springer, Berlin. ISBN 978-3-540-43486-0 [Proctor02] Proctor PE, Byrnes FC (2002) The secured enterprise—protecting your information assets. Prentice Hall, Upper Saddle River. ISBN 978-0-130-61906-8 [Pullum01] Pullum LL (2001) Software fault tolerance techniques and implementation. Artech House, Norwood. ISBN 978-1-580-53137-5
[Riascos10] Riascos LAM, Miyagi PE (2010) Fault tolerance in manufacturing systems— applying petri nets. VDM, Müller. ISBN 978-3-6392-7556-8 [Saxe18] Saxe J, Sanders H (2018) Malware data science—attack detection and attribution. No Starch Press Inc., San Francisco. ISBN 978-1-5932-7859-5 [Saydjari18] Sami Saydjari O (2018) Engineering trustworthy systems—get cybersecurity design right the first time. McGraw-Hill Education, New York. ISBN 978-1-260-11817-9 [Schmidt06] Schmidt K (2006) High availability and disaster recovery—concepts, design, implementation. Springer, Berlin. ISBN 978-3-540-24460-8 [Scholz14] Scholz JA (2014) Enterprise architecture and information assurance—developing a secure foundation. Auerbach, Boca Raton. ISBN 978-1-439-84159-4 [Selby17] Selby N, Vescent H (2017) Cyber attack survival manual—from identity theft to the digital apocalypse and everything in between. Weldon Owen, San Francisco. ISBN 978-1-6818-8175-1 [Shein04] Shein R (2004) Zero-day exploit—countdown to darkness. Syngress, Rockland. ISBN 978-1-931836-09-8 [Smith11] Smith DJ, Simpson KGL (2011) Safety critical systems handbook—a straightforward guide to functional safety, IEC 61508 and related standards, 3rd edn. Butterworth-Heinemann, Oxford. ISBN 978-0-08-096781-3 [Stolfo09] Stolfo SJ, Bellovin SM, Hershkop S, Keromytis A, Sinclair S, Smith SW (eds) (2009) Insider attack and cyber security—beyond the hacker. Springer, Berlin. ISBN 978-1-441-94589-1 [Wagner06] Wagner F, Schmuki R, Wagner T, Wolstenholme P (2006) Modeling software with finite state machines—a practical approach. Auerbach, Boca Raton. ISBN 978-0-849-38086-0 [Wahe11] Wahe S (2011) Open enterprise security architecture—a framework and template for policy-driven security. Van Haren Publishing, Zaltbommel. ISBN 978-9-0875-3672-5 [Wall07] Wall DS (2007) Cybercrime—the transformation of crime in the information age. Polity, Cambridge. 
ISBN 978-0-7456-2736-6 [Wheeler11] Wheeler E (2011) Security risk management—building an information security risk management program from the ground up. Syngress, Rockland. ISBN 978-1-5974-9615-5 [Yang17] Yang M, Hua G, Feng Y, Gong J (2017) Fault tolerance techniques for spacecraft control computers. Wiley, Hoboken. ISBN 978-1-119-10727-9 [Yu14] Yu S (2014) Distributed denial of service attack and defense. Springer, New York. ISBN 978-1-461-49490-4 [Zeigler17] Zeigler BP, Sarjoughian HS (2017) Guide to modeling and simulation of systems-of-systems, 2nd edn. Springer, London. ISBN 978-3-319-64133-1 [Zongo18a] Zongo P (2018) The five anchors of cyber-resilience—why some enterprises are hacked into bankruptcy while others easily bounce back. Broadcast Books, Sydney. ISBN 978-0-6480078-4-5 [Zongo18b] Zongo P (2018) The five anchors of cyber resilience—why some enterprises are hacked into bankruptcy, while others easily bounce back. CISO Advisory. ISBN 978-0-6480-0784-5. https://cisoadvisory.com.au/
14 Architecture Principles for Dependability
Abstract
Resilience is a generic property of dependable systems which is accomplished by adequate architecture features, such as redundancy or fail-safe behavior. Dependability of a system, however, embraces more: Protection of particular assets and defense against specific threats—such as unauthorized access to confidential information. The system is analyzed using a sophisticated risk management methodology. For each threat identified, essential mitigation measures or controls are defined and implemented to reduce the residual risk of the threat to an acceptable level. This chapter introduces a methodology and a number of illustrative principles for building dependable systems.
14.1 Introduction

One of the key characteristics of future-proof software-systems is dependability. Dependability has two pillars:

• Resilience: General property of dependable systems;
• Application-specific dependability properties: Chosen and quantified by the protection process (Fig. 14.1).

Both for resilience and for application-specific dependability properties, powerful and proven principles for constructing the adequate architecture are known. There are nine general resilience architecture principles (see Chap. 13).

Quote: “Architecture principles are eternal truths of software engineering.”
Fig. 14.1 Dependability—protection process (Repetition of Fig. 13.1)
In this part, some selected architecture principles for application-specific dependability properties will be presented. For each of the application-specific dependability properties (see Table 7.1), a vast body of knowledge and rich literature exists. Figure 14.2 repeats the topology of the architecture principles. There are 15 architecture principles which govern the changeability of the software-system:

• 3 principles for the business architecture layer (Chap. 11);
• 12 principles for the application, information, and integration architecture layers (Chap. 12).

The number of architecture principles for dependability is colossal: For each of the application-specific dependability properties—such as security, safety, real-time capability, etc.—a very advanced, specific body of knowledge exists. This chapter, therefore, has to restrict the material. The choice was to present some principles for the two crucial application-specific dependability properties: security and safety. Remember that the relevant application-specific dependability properties differ between applications: They are determined by the construction of a dependability taxonomy for the specific application (Fig. 5.13). The desired dependability properties are then defined in a policy (Resilience Architecture Principle #1: Policies), specified via the risk management process (Fig. 14.1), implemented, and finally assessed via a metric (Fig. 5.13).
Fig. 14.2 Topology of architecture principles
14.2 Information Security

14.2.1 Context

Quote: “The theft of information assets and the intentional disruption of online processes are among the most important business risks facing major institutions.” (James M. Kaplan, 2015)
Today, billions of € of business are done via the Internet. Many processes are digitized and executed online. Enormous amounts of digital values are stored in computers. Large parts of the economy are directly dependent on the distributed information infrastructure [Kaplan15]. More and more tasks are digitized and have become dependent on networked computers. In addition to the well-established Internet of Commerce, two far-reaching developments—the Internet of Things (IoT) and the Internet of Value—are being adopted with great strides. Unfortunately, the technologies of distributed computers and the software they run are continuously at risk: They can be attacked via the networks or by insiders, they may show weaknesses in the implementation which can be maliciously exploited, they can host unnoticed malware, and they may have intentionally built-in backdoors. Protection of all parts of the information and computer technology is the responsibility of information security (Definition 14.1). Information security is today a crucial engineering discipline, on which a large part of our economy and lives rely.
Definition 14.1: Information Security
Information security protects the confidentiality, integrity, and availability (CIA) of computer system data and functionality from unauthorized and malicious accesses.

Example 14.1 shows successful attacks, whereas Example 14.2 shows the grave consequences of a weakness in the software of an application.

Example 14.1: Successful Attacks on Bitcoin
Bitcoin was the first digital currency [Antonopoulos17]. It exists purely in cyberspace, with no real, tangible value to back it in the real world. The Bitcoin technology is based on the blockchain [Swan15]—an inherently secure scheme, well protected by modern cryptography. The Bitcoins are stored in a wallet, which is a secure store either on a PC or on servers provided by a wallet service. However, through hacking and fraud the following Bitcoin losses were incurred (examples):

2011: A user wrote: “I just woke up to see a huge chunk of my Bitcoin balance gone (500,000 $)”. He believed that someone had hacked into his PC and stolen the bitcoins from his hard drive, transferring them to an account controlled by the hackers;
2012: Bitcoinica suffered a second hack that cost the company 18,000 bitcoins;
2012: Bitcoin exchange Bitfloor suffered a massive attack. Attackers stole 24,000 bitcoins, then worth around $250,000;
2014: Collapse of the world’s leading Bitcoin exchange Mt. Gox after 850,000 bitcoins had gone missing—likely stolen by hackers, the company said;
2016: The Bitcoin exchange Bitfinex announced that hackers had stolen $77 million worth of bitcoins;
2018: Following a cyberattack on the South Korean bitcoin exchange Coinrail, Bitcoin suffered a massive sell-off, destroying a whopping 42 billion $ of its market value;

… and—as an Internet search reveals—many more. In all cases, the losses were made possible by weaknesses in the blockchain or the individual wallet implementations. This brutally demonstrates that information security must be taken very seriously.
Knight Capital has been a successful company doing automated trading [Cartea15]. In automated trading, the orders to buy or sell securities are generated by computer algorithms without human supervision. Each of these algorithms has a specific strategy and can generate vast numbers of orders in very short time periods. Algorithms react much faster than human traders and can, therefore, make use of small market movements to earn money.
On Wednesday, August 1, 2012, at 09:30, the trading algorithm of Knight Capital began generating an unprecedented number of erroneous trading orders into the market [Perez13]. The trading volume was so great and erratic that the exchange shut down and stopped trading. However, within 30 min, Knight Capital had lost 440 million $.

Quote: “Computers do what they’re told. If they are told to do the wrong thing, they’re going to do it and they’re going to do it really, really well.” (Lawrence Pingree, 2015)
Later analysis showed that a programming bug was the cause of the runaway. But not only had the company produced dangerous software, it had also completely neglected security measures, such as last-ditch stop mechanisms to immediately shut down trading when anomalies were detected (Resilience Architecture Principle #5: Multiple Lines of Defense). There should be several warning signals and stop conditions built into every automated trading program.
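Such a last-ditch stop mechanism can be sketched as a guard around the order path: trading halts automatically when the order rate or the running loss exceeds a hard limit. All limits, names, and the failure model below are illustrative assumptions, not Knight Capital's actual safeguards:

```python
# Sketch of a "last-ditch stop mechanism" for automated trading.
# All limits and names are illustrative assumptions.

class TradingHalted(Exception):
    """Raised when the kill switch has tripped."""


class GuardedTrader:
    MAX_ORDERS_PER_MINUTE = 1_000   # assumed hard limit
    MAX_LOSS = 1_000_000            # assumed hard limit in $

    def __init__(self):
        self.orders_this_minute = 0
        self.running_loss = 0
        self.halted = False

    def submit(self, order, realized_loss=0):
        if self.halted:
            raise TradingHalted("trading stopped by kill switch")
        self.orders_this_minute += 1
        self.running_loss += realized_loss
        # Last line of defense: shut down on any anomaly.
        if (self.orders_this_minute > self.MAX_ORDERS_PER_MINUTE
                or self.running_loss > self.MAX_LOSS):
            self.halted = True
            raise TradingHalted("anomaly detected, trading halted")
        return f"submitted {order}"


trader = GuardedTrader()
trader.submit("BUY 100 XYZ")   # normal operation passes the guard
```

The essential property is that the halt is latched: once `halted` is set, every further order is rejected until a human explicitly restarts the system.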
14.2.2 Information Protection Figure 14.3 shows the conceptional model for information security: It is based on a security classification for each data/information item; a possible security classification scheme is given in Table 14.1. Each data/information item is assigned to a unique owner (= information governance): The owner is responsible for his or her data/information items and takes all measures to protect them adequately.
Fig. 14.3 Conceptional model for information security (users pass through authentication and authorization; data classified "Internal Use Only", "Confidential", or "Secret" is protected—"Secret" additionally by encryption—under governance, with all accesses logged)
14 Architecture Principles for Dependability
Table 14.1 Data/information security classification

• Public (P): Accessible to the general public (e.g., company website). Examples: information material, marketing downloads, event announcements;
• Internal Use Only (I): Accessible only to employees. Examples: internal documents, internal presentations, employee e-mails;
• Confidential (C): Restricted access to a defined group of employees. Examples: customer data, transaction data, project information, patient records;
• Secret (S): Highly restricted access to specifically named managers. Examples: strategic plans, product planning, mergers and acquisitions, submarine construction blueprints.
The primary functions shown in Fig. 14.3 for protecting access are: • Authentication: The access attempt is identified as initiated by a registered, entitled user. The digital identity is recognized and is allowed to proceed to authorization ([Dasgupta17, Todorov07], Example 7.1); • Authorization: The identity requests access to specific data/information items. The authorization system decides to either grant or deny access based on a table of rights; • Encryption: The data/information is mathematically transformed into an unintelligible form. Only legitimate users—possessing the required decryption key—can transform the encrypted data back to readable files [Paar10]. Encryption is applied (at least) to "secret"-classified information; • Logging: A sufficient amount of trace data to track and reconstruct all authentication and authorization decisions, and all accesses to any data/information item (= audit trail, [Maier06]); • Governance: The management responsibility and accountability organization to assure and enforce correct and effective data/information protection [Wildhaber17]. The information for the decisions in the authentication and authorization procedures is stored in tables of rights. Tables of rights define the access privileges: Access privileges should be restricted as much as possible (= principle of least privilege). These tables are frequently updated, e.g., whenever new customers register or customers depart. The tables are fundamentally important for information security and must be kept up to date and specially protected.
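The authentication, authorization, and logging chain described above can be sketched in a few lines. This is an illustrative sketch only, not taken from the book: the identities, the classification codes, and the table-of-rights contents are hypothetical, and a real system would use a hardened identity and access management product.

```python
# Illustrative sketch: a minimal table of rights combining the
# classification levels of Table 14.1 with a least-privilege check.
# All names and entries are hypothetical.

LEVELS = {"P": 0, "I": 1, "C": 2, "S": 3}  # Public < Internal < Confidential < Secret

# Table of rights: the highest classification each identity may access.
TABLE_OF_RIGHTS = {
    "alice": "C",   # project member with access to confidential items
    "bob":   "I",   # regular employee
}

REGISTERED_USERS = {"alice", "bob"}
AUDIT_LOG = []  # every decision is logged (= audit trail)

def authenticate(user):
    """Authentication: is this a registered, entitled identity?"""
    ok = user in REGISTERED_USERS
    AUDIT_LOG.append(("authn", user, ok))
    return ok

def authorize(user, item_classification):
    """Authorization: grant access only up to the user's clearance."""
    ok = LEVELS[TABLE_OF_RIGHTS.get(user, "P")] >= LEVELS[item_classification]
    AUDIT_LOG.append(("authz", user, item_classification, ok))
    return ok

def access(user, item_classification):
    """Full access decision: authentication first, then authorization."""
    return authenticate(user) and authorize(user, item_classification)
```

Note how the unregistered identity never even reaches the authorization step, and every decision leaves a trace in the log, as the audit-trail function requires.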
Security breaches will occur: It is not possible to completely secure a technical system—especially not the complex IT system of an organization. New functionality, new COTS (= commercial off-the-shelf) products to be integrated, new access channels, new mobile devices, migration to new technologies, changes in security procedures, and many more evolutionary steps open up new attack possibilities and new risks. In parallel, more powerful attack tools, new vulnerabilities (often not published), rising insider crime risk, etc., raise the stakes continuously. Because of these developments, two additional defense mechanisms must be carefully implemented: 1. Regular, periodic security audits in which the information security of the system is seriously challenged by experts, possibly using attack tools (such as penetration testing [Oriyano16]); 2. A fast and professional forensic capability: After a security breach has been detected, the cause and the underlying weakness must be identified and remedied quickly. This requires a detailed, accurate, and trustworthy audit trail, which enables the reconstruction of all relevant operations [Vacca18]. Note that a sufficient forensic capability may also be mandated by legal and compliance requirements.
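One well-known way to make an audit trail trustworthy, as the forensic capability requires, is to chain the log entries cryptographically so that any later tampering becomes detectable. The following is an illustrative sketch under simple assumptions (the function names and entry layout are hypothetical; production systems would additionally sign and replicate the chain):

```python
# Illustrative sketch: a tamper-evident audit trail using a hash chain.
# Each entry's hash covers the previous entry's hash, so modifying or
# removing any past entry invalidates all subsequent hashes.
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log, event):
    """Append an event, chaining it to the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    h = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"event": event, "prev": prev, "hash": h})

def verify_chain(log):
    """Recompute every hash; any tampering breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

After a breach, the forensic team can trust any log whose chain still verifies, and can pinpoint the first entry at which verification fails.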
Principle 14.1: Information Security
1. Appoint a Chief Security Officer and allocate sufficient funding for the security organization and defense measures adequate for the risk exposure of the organization; 2. Formulate and enforce a comprehensive security policy (with international standards as a foundation); 3. Assign a security classification to all data/information items; 4. Allocate each data/information item to a unique owner and unambiguously specify his/her responsibilities; 5. Choose adequate security mechanisms fitting the value of the information to be protected (e.g., multi-factor authentication for confidential data/information); 6. Develop and implement a dependable, trustworthy security architecture (adequate for your application domain and risk tolerance); 7. Grant only the most restricted access rights to employees or customers—so that they can do their job, but nothing more (= principle of least privilege); 8. Keep the authentication and authorization tables of rights up to date and specially protected; 9. Execute regular, periodic security audits by competent, experienced experts (possibly with the help of external consultants and hacking tools); 10. Implement a forensic capability (based on sufficient, complete, and trustworthy logs from all relevant systems).
14.3 Functional Security The functionality of the software-system is implemented in code/software. Correct operation requires that the code behaves precisely according to the specification—nothing more, nothing less. A key problem is keeping undesired code/software out of the software-system. There are numerous channels through which harmful code/software can enter the system—intentionally or unintentionally. Figure 14.4 presents an overview: The software-system is delineated by its protection boundary, and at its core is the functionality implemented by software. Here, "software" means all the code in the system, i.e., the system software, the application landscape, COTS (commercial off-the-shelf) products, protection software, etc. Undesired software—resulting in undesired functionality—can enter the system via two paths: • From external sources, i.e., through the protection boundary (= infiltration). Typical cases include malware from the Internet; • From internal sources, i.e., from activities or software within the system. Typical sources include carelessness by programmers, malicious code written by employees, weaknesses in COTS, or risks associated with business partners (= insider). Functional security—implemented in principles, tools, and processes—ensures that the software-system remains free of undesired code/software (Definition 14.2).
Fig. 14.4 Entry points for undesired code/software (infiltration through the defense boundary via attacks on weaknesses; insiders; process deficiencies; COTS; partner systems—each able to add, modify, or inhibit software-system functionality)
Definition 14.2: Functional Security Functional security protects the software-system from malicious, infiltrated code, both from the outside and from the inside of the organization.
14.3.1 Infiltration Quote: “It is clear that the stakes couldn’t be any higher: Cybercrime is costing the global economy [2017] more than 500 billion $ annually, and the costs are soaring.” (Phillimon Zongo, 2018)
Infiltration of undesired code/software from outside sources is the most serious threat to networked computer systems—and thus to the whole economy and society. Not a day passes without several severe security incidents being reported: stolen information, defaced websites, denial-of-service attacks, digital espionage, various types of malware, cryptographically locked systems, and more. The damage done to the world economy is huge—and rising every year [Kshetri10, Moore10, Owen19, Wall07, White18, Zongo18a, b]. Example 14.3: Hospital Extortion
In 2016, the Hollywood Presbyterian Medical Center in Los Angeles (USA) was attacked: The invaders placed malware called ransomware [Liska16] on the hospital's servers. The ransomware encrypted all medical records and made them inaccessible to the hospital staff for 10 days. The cybercriminals demanded a financial ransom, payable in Bitcoins (untraceable, [Aumasson17]). Quote: "Cyber risk is a business risk, not a technology problem." (Phillimon Zongo, 2018)
The hospital was forced to transfer critical patients to other hospitals for treatment. All medical procedures became high risk because the medical history of the patients was not available to the medical practitioners. The hospital had no option but to pay the ransom of 20 k$. The intruders then decrypted the files, and the hospital went back to normal operation. The ransomware was infiltrated by a Trojan: a piece of malware deposited on the computers of the victims, e.g., via e-mail attachments or visits to an unsafe website. The Trojan carries a payload of highly undesired functionality—in this case, an encryption program which encrypts all the files on the storage devices. The defense boundary in Fig. 14.4 has the responsibility to detect and prevent the infiltration of undesired software. The security industry has developed many defense tools, such as virus scanners, malware detection programs, and intrusion detection and removal tools (see e.g., [Elisan15, Hoffman14, Saxe18, SolisTech16]). However, securing the boundary is a continuous race between the attackers and the defenders—a cat-and-mouse game. Especially dangerous are zero-day exploits [Shein04], where
unknown weaknesses in products or application software are exploited by the attackers. An interesting approach is to use artificial intelligence to safeguard the defense boundary [Chio18]. Quote: "It takes only one lethargic systems administrator to ignore database security logs and allow hackers to exfiltrate volumes of sensitive data to the Darknet over a prolonged period." (Phillimon Zongo, 2018)
Note that the defense boundary is a generic term: It is not necessarily the physical boundary of the system. Elements of the defense boundary, such as logging and anomaly monitoring (see Example 13.2)—which are powerful in detecting malware—can be located anywhere in the software-system. Unfortunately, human behavior is often a dangerous entry point for undesired code/software: Opening infected e-mail attachments or visiting unsafe websites may trigger the intake of malware. Another doorway for undesired code/software is weaknesses in COTS (= commercial off-the-shelf) software in the system (Fig. 14.4): Such third-party software may have unknown (or uncommunicated) weaknesses which allow intrusion. Finally, undesired code/software may be distributed unknowingly by partners of our systems.
14.3.2 Malicious Modification Software is developed by employees (or contractors) who operate inside the defense boundary of Fig. 14.4. They construct applications, which then go into production. In principle, these programmers have the possibility to add undesired, possibly malicious functionality to the specified functionality they are supposed to implement: This is called an insider threat [Arduin18, Bunn17, Cappelli12, Hoffman14]. Insider threats are dangerous and difficult to detect. Potentially endangering modifications in programs can be [Hoffman14] • Unintentional: The programmer makes mistakes, forgets a security call, or copy-and-pastes an insecure piece of code; • Intentional and non-malicious: The programmer takes shortcuts, ignores principles or policies, or simply makes dumb mistakes; • Intentional and malicious: The programmer is fully aware and writes malicious code with criminal energy (Example 14.4).
Example 14.4: Database Destruction
A disgruntled programmer added a malicious modification to a program that he was tasked to modify. The legitimate function of the program was to store transaction data in a database. The malicious functionality he added was as follows (Fig. 14.5): 1. Any time a legitimate access, i.e., a legitimate update of the database, went through the application, the malicious code called a random generator; 2. Based on the random generator, in 1% of the calls the malicious code minimally changed the content of one randomly chosen field of the database. The effect was that the database became incrementally corrupted. Because the destructive effect was slow and small, it took a long time to notice: It manifested itself in errors and inconsistencies in transactions and customer actions. Unfortunately, the gradual destruction had also propagated to all backup and save copies; a direct restore was, therefore, not possible. A code review revealed the malicious code, and the configuration system gave the date of the malicious modifications and the name of the programmer. However, the reconstruction of the database was a very elaborate task.
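Silent, incremental corruption of this kind can be detected much earlier by verifying a per-record integrity checksum on every read. The following is an illustrative sketch, not the remedy used in the example above; all names are hypothetical, and real databases offer comparable mechanisms (page checksums, triggers, constraints):

```python
# Illustrative sketch: per-record checksums verified on every read,
# so any modification that bypasses the legitimate write path (and
# therefore does not update the checksum) is detected immediately.
import hashlib
import json

def checksum(record):
    """Deterministic digest of a record's content."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def store(db, key, record):
    """Legitimate write path: store the record together with its digest."""
    db[key] = {"data": record, "crc": checksum(record)}

def load(db, key):
    """Read path: verify the digest before returning the data."""
    row = db[key]
    if checksum(row["data"]) != row["crc"]:
        raise ValueError(f"integrity violation in record {key!r}")
    return row["data"]
```

Had such a check been in place, the very first randomly flipped field would have raised an alarm, before the corruption propagated into the backups.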
14.3.3 Secure Software Development Avoiding the entry of malicious code/software during the development process requires a secure software development process (Definition 14.3, [Ransome13]).
Fig. 14.5 Malicious database destruction (a legitimate access passes through the application as a legitimate update; in parallel, a random generator triggers the destructive modification)
Definition 14.3: Secure Software Development Process Software development life cycle process that treats nonfunctional requirements (such as resilience and security) and quality requirements as a core element of every phase, and has effective subprocesses for secure requirements, specifications, modeling, implementation, deployment, metrics, reviews, and enforcement in place. (Mark S. Merkow 2011)
Quote: “The only reliable way to ensure that software is constructed secure and resilient is by integrating a security and resilience mindset and process throughout the entire software development life-cycle.” (Mark S. Merkow, 2010)
The secure software development process starts with secure modeling [Matulevičius17], continues with architecting for security [Rerup18, Schoenfield15] using security patterns [Fernandez-Buglioni13], and leads through the complete software development life cycle [Merkow10, Merkow11, Talukder08]. Two activities actively support the process: • Reviews: All artifacts, especially the code, are critically reviewed by an experienced team of experts; • Automated code analysis [Huizinga07, Nielson04]: The source code is scanned by specialized tools. Various tools exist, and they find an amazing number of weaknesses in the source code. Some tools can even check for particular questions, such as which TCP/IP ports are used by the program (detecting a possible leak or backdoor). Principle 14.2: Functional Security
1. Appoint a Chief Security Officer and allocate sufficient funding for the security organization and defense measures adequate for the software development risk exposure of the organization; 2. Formulate and enforce a comprehensive security policy (with international standards as a foundation); 3. Install and enforce a secure software development process; 4. Monitor the execution/operation of the software (both in real time and with log analysis) to detect and react to anomalies;
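As a toy illustration of the automated code analysis mentioned in Sect. 14.3.3: even a simple pattern scanner can flag suspicious constructs such as hardcoded credentials or uses of eval. Real static analysis (SAST) tools are vastly more sophisticated; the rules below are hypothetical examples for sketching the idea only.

```python
# Illustrative sketch: a toy source-code scanner that reports
# (line number, message) pairs for a few suspicious patterns.
import re

RULES = [
    (re.compile(r"""password\s*=\s*['"]"""), "hardcoded credential"),
    (re.compile(r"\beval\s*\("), "use of eval on possibly untrusted input"),
    (re.compile(r"\.bind\(\s*\(?\s*['\"0-9]"), "hardcoded TCP/IP port binding"),
]

def scan(source):
    """Scan source text line by line against the rule set."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings
```

Such findings feed directly into the review activity: each hit is either fixed or explicitly justified by the reviewing team.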
14.4 Safety 14.4.1 Introduction Safety (Definition 14.4) is an integral part of dependability, but not the only one: The dependability of a system is constructed from a taxonomy fitting the field of application
(see Definition 8.4). However, for many systems—especially cyber-physical systems (Sect. 3.7.3)—safety is a primary concern. Definition 14.4: Safety Safety is the state of being protected against faults, errors, failures, or any other event that could be considered non-desirable in order to achieve an acceptable level of risk concerning loss of property, damage to life, health or society, or harm to the environment. Many of today’s systems are of mixed criticality, i.e., they have interwoven safety-critical system parts and non-safety-critical system parts. In addition, most of today’s interesting systems are cyber-physical systems-of-systems (CPSoS, Definition 12.2). In a cyber-physical system-of-systems (CPSoS), software controls part of the physical environment. It receives input via sensors and controls the physical world via actuators. Familiar examples are industrial control systems, autonomous cars, driverless trains, autopilots, collaborating robots, and many more (e.g., [Liu17]).
14.4.2 Safety Analysis Quote: “Safety is demonstrated not by compliance with prescribed processes, but by assessing hazards, mitigating those hazards, and showing that the residual risk is acceptable.” (Chris Hobbs, 2016)
Assuring the safety of such complex systems is a formidable and responsible task. It is primarily a process challenge, supported by many elaborated international standards (see e.g., [Smith10]). Quote: “The iceberg is the hazard. The risk is that a ship will run into it. One mitigation might be to paint the iceberg yellow to make it more visible. Even painted yellow, there is still the residual risk that a collision might happen at night or in fog.” (Chris Hobbs, 2016)
Safety analysis starts with hazard analysis (see e.g., [Ericson15]). Note that a hazard in safety plays the same role as a threat in security, as shown in Fig. 14.1. Therefore, the first step in the safety analysis of a system is to identify as many hazards as possible: Of course, the goal is to identify all possible hazards—which in most cases is not achievable. A hazard generates a risk. Risks may lead to dangerous situations. To manage risks, they need to be mitigated by countermeasures, i.e., actions or measures implemented to reduce or eliminate the consequences. After all mitigation measures are defined, a final risk assessment is carried out: If the remaining, residual risk is acceptable, the system can go into operation. If not, additional mitigation measures must be devised and implemented (analogous to Fig. 14.1 for security). Figure 14.6 shows the simplified conceptional hazard model for a cyber-physical system-of-systems.
Fig. 14.6 Conceptional model of hazards in a CPSoS
The following types of hazards are identified: 1. Sensor/actuator signals are corrupted: The software relies on the sensors to receive authentic, complete, and up-to-date information about the physical world, such as the digital camera of an autonomous vehicle or the velocity of an airplane. If unrecognized faulty information is delivered, dangerous situations may develop. The results of the computations must be transmitted by the actuators to the physical world. If this is not the case, i.e., some distortion or loss takes place, again dangerous situations can occur; 2. System-internal failure: In a system, any part may fail. In fact, for large and complex systems, part failures are normal. Failures may propagate (Resilience Architecture Principle #3: Fault Containment Regions) and inhibit the correct functioning of the system; 3. Failure of a partner system: A failure can also take place in a partner system. If our system depends on functionality or information of the failed partner system, this again inhibits the correct functioning of our system; 4. Incorrect information from a partner system: A partner system may deliver incorrect information to our system, in content, timing, or completeness. If this is not recognized, the correct functioning of our system is impaired; 5. Attack from outside: Attacks from outside have been considered in the security context (see Sect. 14.2). However, in cyber-physical systems, successful attacks can have grave consequences for the physical world (Example 14.5);
6. Unknown dependency: An unknown dependency can create surprising, often very negative consequences. Because it is unknown, mitigation measures can be taken only in a very limited way and the consequences might be grave (Example 14.6). Example 14.5: Malicious Reprogramming of Implanted Medical Devices
1. Implanted insulin pumps automatically measure and inject the correct dose of insulin into the patient. Insulin pumps need an occasional readjustment of their parameters, to adapt, e.g., to changing patient conditions. For this function, a wireless link is established between the insulin pump and the external control unit. The current parameters can be read out, modified by the internist, and stored back in the insulin pump. In 2011, it was experimentally demonstrated that a hacking attack allowed the wireless commandeering of these devices when an attacker was near the patient (https://www.theregister.co.uk/2011/10/27/fatal_insulin_pump_attack/). Software and a special antenna designed by the demonstrator allowed him to locate and seize control of any device within 100 m. The demonstrator explained: "With this device and the software I created, I could actually instruct the pump to perform all manner of commands. I could make it dispense its entire reservoir of insulin, which is about 300 units." 2. Pacemakers are implanted biomedical devices that support the heart function of people with cardiovascular insufficiencies. They measure the heartbeat and inject electrical stimulation impulses when the software decides so. Pacemakers need occasional readjustment of their parameters, to adapt, e.g., to changing patient conditions. For this function, a wireless link is established between the pacemaker and the external control unit. The current parameters can be read out, modified by the cardiologist, and stored back in the pacemaker. In 2008, American researchers demonstrated that it is possible to turn off a patient's pacemaker or maliciously change its parameters through a wireless hacking attack [Halperin08]. Hackers could turn off the device or—even worse—deliver unnecessary electrical shocks to the patient. Unfortunately, some vulnerabilities still existed in 2018 (https://www.cnbc.com/2018/08/17/security-researchers-saythey-can-hack-medtronic-pacemakers.html).
These cases show the technical difficulty of securing such devices, mainly because of the limited processing power and the very restricted energy availability of the implanted device. Example 14.6: Accidental Systems (Internal Failure)
In a safety experiment, a ship was sent into an area of the North Sea where the signals from GPS (= Global Positioning System [Hofmann-Wellenhof03]) were deliberately jammed. As expected, the position reported by the ship's systems became unreliable, jumping randomly to Norway, Germany, and other parts of Europe. But, very unexpectedly, the ship's radar also completely stopped working! The radar manufacturing company
assured the ship's crew that GPS signals were not used in the radar system. However, after contacting manufacturers of subsystems and components used in the radar system, one small component relying on GPS timing signals was identified. This radar system is an example of an accidental system [Dale12, Hobbs15]: The manufacturer of the component knew that the component used GPS; he was, however, not aware of its third-party integration into the ship's radar system. The designers of the radar system, on the other hand, were unaware of this GPS dependency. Result: A system with accidental dependencies had been built! Each organization needs to define an adequate process for building safety-critical systems. Many valuable and proven standards—often industry-specific—exist which can be used as a foundation (e.g., [ISO 26262 Functional Safety standard for road vehicles], [US Federal Motor Vehicle Safety Standards (FMVSS)], [DO-178B, Software Considerations in Airborne Systems and Equipment Certification], [SEI CERT C Secure Coding Standard], [Railway Safety Directive 2004/49/EC]). This section cannot delve deeper into the field of safety; the reader is referred to the rich literature available (e.g., [Allocco10, Anwar18, Dale12, Flammini12, Hanssen18, Hobbs15, Hollnagel14, Janicak15, Lee17, Leveson11, Rierson13, Ross16, Smith10, Wong18]).
14.4.3 Certification For many software-based systems—and for all safety-critical systems (e.g., [DeFlorio16])— certification (Definition 14.5) is an essential precondition for the use and commercialization of the systems. During the certification process, the product or the product extension is critically examined by an accredited agency. Once the agency accepts the safety of the product, it issues the permission to bring it to market (e.g., airworthiness certificates for airplanes or roadworthiness certificate for cars). Definition 14.5: Certification Formal procedure by which an accredited or authorized person or agency assesses and verifies (and attests in writing by issuing a certificate) the attributes, characteristics, quality, qualification, or status of individuals or organizations, goods or services, procedures or processes, or events or situations, in accordance with established requirements or standards. http://www.businessdictionary.com/definition/certification.html.
Safety Case Most certification agencies require a safety case as the basis for their certification work. The safety case is a set of documents proving all the safety investigations and measures the company has executed to make the product safe for use (Definition 14.6, [Axelsson15, Maguire06, Myklebust18]).
Definition 14.6: Safety Case A safety case consists of a rational argument and detailed evidence to justify and demonstrate that a system or product is tolerably safe in its use, and that it has a management program to ensure that this remains so. (Richard Maguire, 2006) For many industries, the required certification is a heavy, costly, and time-consuming activity. The certification effort should, therefore, be made in parallel with the development—not started at the end.
14.4.4 Safety-Systems Infrastructure In most systems, the run-time infrastructure (hardware, communication links, operating system, system software, etc.) is a decisive part of high dependability. If the run-time infrastructure fails, the application software is in most cases strongly affected. In safety-critical applications, the run-time infrastructure must not only guarantee the operating parameters, such as performance, availability, etc., but also enable the proof of safety-relevant properties, such as timing. One interesting example of a safety-relevant run-time infrastructure is the Time-Triggered Architecture (Example 14.7). Example 14.7: Time-Triggered Infrastructure (TTA)
Real-time cyber-physical systems are based on one of two paradigms: 1. Event-based: An event, such as a sensor value change or an action launched by a program, results in the launch of a processing task executing the desired control function. This is the classical, interrupt-driven way of building real-time systems; 2. Time-triggered: The system's processing tasks are assigned to fixed time slots. In each time slot, an unambiguous processing sequence has to take place, and predefined messages have to be sent and received. Failing to complete the tasks, or loss of expected messages, constitutes a failure with respect to the real-time specifications. The time-triggered technology or time-triggered architecture (TTA) has been developed at the Technical University of Vienna [Kopetz11, Nahas13, Obermaisser11, Pont17, Steinhardt16] and is now in commercial use in various industries. The time-triggered architecture is centered around a time-triggered communications channel: All nodes possess a common, synchronized time base (global clock) and can therefore precisely access the intended time slots in the communication packets to send or receive their messages (Fig. 14.7).
Fig. 14.7 Time-triggered architecture (TTA)
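The time-slot discipline of Example 14.7 can be sketched as a cyclic schedule in which the globally synchronized time alone determines which task owns the current slot. This is an illustrative sketch only; the slot length, cycle contents, and task names are hypothetical:

```python
# Illustrative sketch: a time-triggered cyclic schedule. There are no
# interrupts; the (synchronized) global time alone decides which task
# owns each slot, which is what makes the timing behavior verifiable.

CYCLE = ["read_sensors", "control_law", "send_actuators", "broadcast_state"]
SLOT_MS = 5  # each slot is a fixed 5 ms window in this sketch

def slot_for(time_ms):
    """Map global time to the task owning that slot (wraps cyclically)."""
    return CYCLE[(time_ms // SLOT_MS) % len(CYCLE)]

def run_cycle(start_ms, executed):
    """Simulate one full cycle; record which task runs in each slot."""
    for i in range(len(CYCLE)):
        t = start_ms + i * SLOT_MS
        executed.append((t, slot_for(t)))
```

Because the slot assignment is a pure function of time, a monitor can verify at run time that every slot was served—missing or late execution is immediately a detectable failure, as described above.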
The safety properties of time-triggered architectures can be formally verified (which is extremely difficult for event-triggered systems). TTA has been shown to be safe in all operating conditions [Rushby01, Rushby99] and is therefore used in many safety-critical applications, such as aerospace, rail, and road. Although we could not present the full available knowledge on safety (see References), Principle 14.3 summarizes the most important points.
Principle 14.3: Safety
1. Define the safety requirements for your application, including the dependability expectations of the users; 2. Specify the safety requirements in unambiguous form. To each safety property, add a metric or an assessment method (quantification); 3. Include (if required) the certification requirements in the specification of the safety requirements; 4. Define and maintain an architecture (vertical architecture layer "safety") which supports all safety issues in an optimum way; 5. In mixed-criticality systems, strictly separate the safety-critical functionality and the non-safety-critical functionality in all hardware and software artifacts (building fault containment regions);
6. Base your safety-critical systems on a suitable run-time infrastructure which assures the required quality-of-service properties; 7. Introduce and maintain a safety-aware development process (and seamlessly include certification, if required); 8. Audit both the architecture and the process at regular intervals. Use an experienced, resourceful team (possibly assisted by an external consultant); 9. Monitor (if possible in real time) the operation of the system. Implement "last-ditch" safety mechanisms which stop the operation of a faulty system.
14.5 Engineering Trustworthy Cyber-Physical Systems 14.5.1 CPS and CPSoS Quote: “Trustworthiness is a holistic property that calls for the co-engineering of safety and cybersecurity, among other qualities. It is not sufficient to address one of these attributes in isolation, nor is it sufficient simply to assemble constituent systems that are themselves trustworthy. Composing trustworthy constituent systems may imply an untrustworthy SoS.” (Flavio Oquendo, 2015)
A large part of our work, life, and society in the next decades will be a cooperation with—or rather: a strong dependency from—cyber-physical systems, especially from cyber-physical systems-of-systems (CPSoS). The upcoming technology of the Internet of Things (IoT, see e.g., [ERCIM15, Raj17, Tsiatsis14, Zhuge12]) will massively affect our ecosystem, industry, and lifestyle. Much of the software produced in the next decades will control cyber-physical systems (CPS). Because of this heavy dependency the CPS, CPSoS, and IoT need to be trustworthy. Trustworthiness is a subset of dependability (Definition 4.1): Trustworthiness includes security and safety (Definition 14.7). Definition 14.7: Trustworthy Cyber-Physical System and Cyber-Physical System-ofSystems Cyber-physical system or cyber-physical system-of-systems with an adequate degree of security and safety to fulfill the trust expectations of its users. Note that for trustworthy cyber-physical systems and cyber-physical systems-of-systems—such as Example 12.18—the Managed Evolution, as well as all principles of future-proof software-systems are valid. Trade-offs between the quality of service properties, e.g., safety or security, versus time-to-market or cost are entirely management decisions, although with possibly dangerous consequences.
14.5.2 Internet of Things Quote: “In IoT, each connected device could be a potential doorway into the IoT infrastructure or personal data. The potential risks with IoT will reach new levels, and new vulnerabilities will emerge.” (Shancang Li, 2017)
The Internet of Things (IoT, Definition 14.8) dangerously changes the game of trade-offs. IoT devices have severe limitations:

• Processing power constraints: The microcontrollers used in IoT devices have very limited processing power. Most of it is consumed by their main functionality, i.e., the sensor or actuator functions. Only a fraction remains available for security or safety, e.g., for secure cryptography;
• Limited energy supply: Many IoT devices are battery-powered and connected via wireless networks. They should have a long, maintenance-free lifespan. Therefore, the energy available for processing security or safety functions is very limited;
• Cost pressure: The IoT will be a tremendously large mass market; IoT devices will be installed by the billions in the next decades. The fierce competition in this market generates strong cost pressure, thus reducing the incentive to implement adequate safety and security measures.

These severe limitations of many IoT devices force risky engineering compromises (Table 14.2).

Definition 14.8: Internet of Things
The Internet of Things (IoT) is an extension of the internet by integrating mobile networks, the internet, social networks, and intelligent things to provide services or applications to users. (Shancang Li 2017)

Figure 14.8 shows the IoT architecture (mapped to the layers of Fig. 3.8). The IoT literature distinguishes three layers (e.g., [Li17b]):

1. Sensor-/Actuator Layer (lowest layer);
2. Network Layer;
3. Applications- and Service Layer.

Table 14.2 IoT device limitations

| Device                | Processor                    | Memory  | Energy      | Cryptography              |
|-----------------------|------------------------------|---------|-------------|---------------------------|
| Servers, desktops     | 32-, 64-bit                  | GBytes  | “unlimited” | Conventional cryptography |
| Tablets, smartphones  | 16-, 32-, 64-bit             | GBytes  | “unlimited” | Conventional cryptography |
| Embedded systems      | 8-, 16-, 32-bit              | MBytes  | Wh          | Lightweight cryptography  |
| RFID, sensor networks | 4-, 8-, 16-bit, ASICs, FPGAs | kBytes  | mWh         | Lightweight cryptography  |
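The energy column of Table 14.2 translates directly into an engineering budget. The following back-of-the-envelope sketch (all figures are invented for illustration; real per-message costs depend on the radio, the cipher, and the implementation) shows why a battery-powered sensor node cannot afford heavyweight cryptographic operations on every message:

```python
def battery_life_days(battery_mwh: float, msgs_per_day: int,
                      crypto_mwh_per_msg: float, base_mwh_per_day: float) -> float:
    """Days until the battery is exhausted, given a fixed baseline drain
    (sleep + sensing) plus a per-message cryptographic cost."""
    daily = base_mwh_per_day + msgs_per_day * crypto_mwh_per_msg
    return battery_mwh / daily

BATTERY = 2400.0   # battery budget in mWh (hypothetical)
MSGS = 1440        # one signed/encrypted message per minute
BASELINE = 0.5     # non-crypto drain per day in mWh (hypothetical)

# Hypothetical per-message energy costs: a public-key-class operation
# versus a cipher designed for constrained devices.
conventional = battery_life_days(BATTERY, MSGS, 0.05, BASELINE)
lightweight = battery_life_days(BATTERY, MSGS, 0.0005, BASELINE)
print(f"conventional: {conventional:.0f} days, lightweight: {lightweight:.0f} days")
```

Under these assumed numbers, the conventional variant drains the battery in roughly a month, while the lightweight variant lasts for years; this gap is the engineering motivation behind lightweight cryptography [McKay17, Poschmann09].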
Quote: “The security model in IoT must be able to make its own judgments and decision about whether to accept a command or execute a task.” (Shancang Li, 2017)
As shown in Fig. 3.8, the vertical architecture layers for security and safety also appear in the IoT architecture of Fig. 14.8. The most common mechanism for providing safety and security is cryptography: cryptographic techniques [Aumasson17, Paar10, Shemanske17] are used to ensure authentication, authorization, access control, privacy, confidentiality, and integrity. An IoT-based system or system-of-systems has a large number of possible attack points (Fig. 14.8). Securing and safeguarding such a system is, therefore, a formidable engineering task [Fiaschetti18, Sabella18, Smith17]. The starting point is the risk analysis (Fig. 13.1) for security issues and the hazard analysis (Fig. 14.6) for safety concerns. The results of the risk and hazard analyses, together with the relevant architecture principles, form the foundation of the security and safety architectures (Fig. 14.8). Because of the constraints of some IoT devices (Table 14.2), not all desired cryptographic techniques can be implemented: for this reason, the technology of lightweight cryptography has been introduced [McKay17, Poschmann09].
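As a concrete instance of cryptography providing integrity and authentication, a gateway can verify that a sensor reading really originates from a device holding a shared key. The sketch below uses Python's standard hmac module; the device ID and key are invented for illustration, and on the most constrained devices a NIST-approved lightweight algorithm would take the place of HMAC-SHA256:

```python
import hashlib
import hmac

# Pre-shared key provisioned at manufacture (hypothetical device and key).
DEVICE_KEYS = {"sensor-17": b"pre-shared-secret-key"}

def sign_reading(device_id: str, payload: bytes) -> bytes:
    """Device side: append an HMAC-SHA256 tag (32 bytes) so the gateway
    can check both integrity (payload unchanged) and authenticity
    (sender holds the right key)."""
    tag = hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).digest()
    return payload + tag

def verify_reading(device_id: str, message: bytes) -> bool:
    """Gateway side: recompute the tag and compare in constant time."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg = sign_reading("sensor-17", b"temp=21.5")
print(verify_reading("sensor-17", msg))

# A forged payload reusing the old tag fails verification.
forged = b"temp=99.9" + msg[-32:]
print(verify_reading("sensor-17", forged))
```

Note the constant-time comparison (hmac.compare_digest): a naive byte-by-byte comparison would itself be an attack point via timing side channels.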
[Fig. 14.8 IoT architecture: horizontal architecture layers (Applications- & Service Layer with apps, mapped to the Business and Application Architectures; Network Layer with WLAN, WSN, Mobile and the communication/sensor/actuator protocols, mapped to the Information and Integration Architectures; Sensor-/Actuator Layer with RFID, sensors, actuators, Bluetooth, mapped to the Technical Architecture), crossed by the vertical Security and Safety layers]

Emerging Properties
A serious risk for security and safety in IoT systems or IoT systems-of-systems is the possibility of emergent properties or emergent behavior (Definition 12.1, [Bedau08, Bondavalli16, Charbonneau17, Mittal18, Sethna06]). The cooperation of the many parts of the system and its environment may produce unexpected behavior, exhibit unanticipated properties, or open up unforeseeable weaknesses.

Principle 14.4: Internet of Things (IoT)
1. Execute a careful, complete, and thorough risk analysis (security) and hazard analysis (safety) for the complete system;
2. Design your security and safety architecture according to the results of the risk and hazard analysis, respecting all architecture principles for dependability;
3. Be very careful in allowing trade-offs between security/safety and time-to-market/cost. Each saving in cost, both in development cost and in the cost of the devices, may come later with a serious security or safety penalty;
4. If you have to use lightweight cryptography, use only NIST (US National Institute of Standards and Technology) approved technologies;
5. Beware of emergent properties of your system, which may open up unforeseeable weaknesses and risks.
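Item 1 of Principle 14.4, the risk and hazard analysis that drives the architecture, can be operationalized as a simple likelihood times impact ranking. The sketch below is a hypothetical illustration (the scales and findings are invented); it orders findings so that mitigation effort goes to the highest scores first, rather than to whatever is cheapest or fastest to ship:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    description: str
    kind: str        # "security" (risk analysis) or "safety" (hazard analysis)
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

findings = [
    Finding("default credentials on field gateway", "security", 5, 4),
    Finding("actuator runaway on lost heartbeat", "safety", 2, 5),
    Finding("unsigned firmware update channel", "security", 3, 5),
]

# Highest risk first: this ordering, not time-to-market pressure,
# should decide which mitigations enter the security/safety architecture.
for f in sorted(findings, key=lambda f: f.score, reverse=True):
    print(f"{f.score:2d}  [{f.kind}] {f.description}")
```

Real methods (e.g., the hazard analysis techniques surveyed in [Ericson15]) use richer scales and severity classes, but the principle is the same: the architecture follows the ranked analysis, not the other way around.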
References

[Allocco10] Allocco M (2010) Safety analyses of complex systems—considerations of software, firmware, hardware, human, and the environment. Wiley, Hoboken. ISBN 978-0-470-58770-6
[Antonopoulos17] Antonopoulos A (2017) Mastering bitcoin—unlocking digital cryptocurrencies, 2nd edn. O’Reilly, Farnham. ISBN 978-1-491-95438-6
[Anwar18] Anwar S (2018) Fault tolerant drive by wire systems—impact on vehicle safety and reliability. Bentham Science, Sharjah. ISBN 978-1-6080-5667-5
[Arduin18] Arduin P-E (2018) Insider threats. ISTE Ltd & Wiley, London & Hoboken. ISBN 978-1-848-21972-4
[Aumasson17] Aumasson J-P (2017) Serious cryptography—a practical introduction to modern encryption. No Starch Press, San Francisco. ISBN 978-1-5932-7826-7
[Axelsson15] Axelsson J (2015) Safety analysis for systems-of-systems. ERCIM NEWS 102:22–23. Special theme “Trustworthy Systems of Systems”. https://ercim-news.ercim.eu/images/stories/EN102/EN102-web.pdf. Accessed 7 Sep 2018
[Bedau08] Bedau MA, Humphreys P (eds) (2008) Emergence—contemporary readings in philosophy and science. MIT Press, Cambridge. ISBN 978-0-262-02621-5
[Bondavalli16] Bondavalli A, Bouchenak S, Kopetz H (eds) (2016) Cyber-physical systems of systems: foundations—a conceptual model and some derivations: the AMADEOS legacy. Springer Lecture Notes in Computer Science, Heidelberg. ISBN 978-3-319-47589-9
[Bunn17] Bunn M, Sagan SD (eds) (2017) Insider threats. Cornell Studies in Security Affairs. Cornell University Press, Ithaca. ISBN 978-1-501-70517-5
[Cappelli12] Cappelli DM, Moore AP, Trzeciak RF (2012) The CERT guide to insider threats—how to prevent, detect, and respond to information technology crimes (SEI Series in Software Engineering). Addison Wesley, Boston. ISBN 978-0-321-81257-5
[Cartea15] Cartea Á, Jaimungal S, Penalva J (2015) Algorithmic and high-frequency trading. Cambridge University Press, Cambridge. ISBN 978-1-107-09114-6
[Charbonneau17] Charbonneau P (2017) Natural complexity—a modeling handbook. Princeton University Press, Princeton. ISBN 978-0-691-17035-0
[Chio18] Chio C, Freeman D (2018) Machine learning and security—protecting systems with data and algorithms. O’Reilly, Farnham. ISBN 978-1-491-97990-7
[Dale12] Thomas M (2012) Accidental systems, hidden assumptions, and safety assurance. In: Dale C, Anderson T (eds) Achieving systems safety. Proceedings of the twentieth safety-critical systems symposium, Bristol, 7–9 February 2012. Springer, Berlin. ISBN 978-1-447-12493-1
[Dasgupta17] Dasgupta D, Roy A, Nag A (2017) Advances in user authentication. Springer, Berlin. ISBN 978-3-319-58806-3
[DeFlorio16] De Florio F (2016) Airworthiness—an introduction to aircraft certification and operations, 3rd edn. Butterworth-Heinemann, Oxford. ISBN 978-0-081-00888-1
[Elisan15] Elisan CC (2015) Advanced malware analysis. McGraw-Hill Education, New York. ISBN 978-0-071-81974-9
[ERCIM15] ERCIM NEWS (2015) Special theme “Trustworthy Systems of Systems” 102. https://ercim-news.ercim.eu/images/stories/EN102/EN102-web.pdf. Accessed 7 Sep 2018
[Ericson15] Ericson CA (2015) Hazard analysis techniques for system safety, 2nd edn. Wiley, Hoboken. ISBN 978-1-118-94038-9
[Fernandez-Buglioni13] Fernandez-Buglioni E (2013) Security patterns in practice: designing secure architectures using software patterns. Wiley, Hoboken. ISBN 978-1-119-99894-5
[Fiaschetti18] Fiaschetti A, Noll J, Azzoni P, Uribeetxeberria R (eds) (2018) Measurable and composable security, privacy, and dependability for cyber-physical systems—the SHIELD methodology. Taylor & Francis, Boca Raton. ISBN 978-1-138-04275-9
[Flammini12] Flammini F (ed) (2012) Railway safety, reliability, and security—technologies and systems engineering. Information Science Reference (IGI Global), Hershey. ISBN 978-1-4666-1643-1
[Halperin08] Halperin D, Heydt-Benjamin TS, Ransford B, Clark SS, Defend B, Morgan W, Fu K, Kohno T, Maisel WH (2008) Pacemakers and implantable cardiac defibrillators—software radio attacks and zero-power defenses. In: 2008 IEEE symposium on security and privacy. https://www.secure-medicine.org/hubfs/public/publications/icd-study.pdf. Accessed 3 Sep 2018
[Hanssen18] Hanssen GK, Stålhane T, Myklebust T (2018) Safescrum—agile development of safety-critical software. Springer, Berlin. ISBN 978-3-319-99333-1
[Hobbs15] Hobbs C (2015) Embedded software development for safety-critical systems. Taylor & Francis, Boca Raton. ISBN 978-1-498-72670-2
[Hoffman14] Hoffman J (2014) Intruders at the gate—building an effective malware defense system. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-5004-7957-2
[Hofmann-Wellenhof03] Hofmann-Wellenhof B, Lichtenegger H, Collins J (2001) Global positioning system—theory and practice, 5th edn. Springer-Verlag. ISBN 978-3-211-83534-0
[Hollnagel14] Hollnagel E (2014) Safety-I and safety-II—the past and future of safety management. Routledge, Abingdon. ISBN 978-1-4724-2308-5
[Huizinga07] Huizinga D, Kolawa A (2007) Automated defect prevention—best practices in software management. Wiley-IEEE Computer Society Press, Hoboken. ISBN 978-0-470-04212-0
[Janicak15] Janicak CA (2015) Safety metrics—tools and techniques for measuring safety performance. Bernan Print, Lanham. ISBN 978-1-5988-8754-9
[Kaplan15] Kaplan JM, Bailey T, O’Halloran D, Marcus A, Rezek C (2015) Beyond cybersecurity—protecting your digital business. Wiley, Hoboken. ISBN 978-1-119-02684-6
[Kopetz11] Kopetz H (2011) Real-time systems—design principles for distributed embedded applications. Springer, Berlin. ISBN 978-1-4419-8237-7
[Kshetri10] Kshetri N (2010) The global cybercrime industry—economic, institutional and strategic perspectives. Springer, Heidelberg. ISBN 978-3-642-11521-9
[Lee17] Lee EA, Seshia SA (2017) Introduction to embedded systems—a cyber-physical systems approach, 2nd edn. MIT Press, Cambridge. ISBN 978-0-262-53381-2
[Leveson11] Leveson NG (2011) Engineering a safer world—systems thinking applied to safety. MIT Press, Cambridge. ISBN 978-0-262-01662-9
[Li17b] Li S, Xu LD (2017) Securing the internet of things. Syngress, Cambridge. ISBN 978-0-12-804458-2
[Liska16] Liska A, Gallo T (2016) Ransomware—defending against digital extortion. O’Reilly, Farnham. ISBN 978-1-491-96788-1
[Liu17] Liu S, Li L, Tang J, Wu S, Gaudiot J-L (2017) Creating autonomous vehicle systems. Morgan & Claypool, San Rafael. ISBN 978-1-681-73007-3
[Maguire06] Maguire R (2006) Safety cases and safety reports—meaning, motivation and management. CRC Press, Boca Raton. ISBN 978-0-754-64649-5
[Maier06] Maier PQ (2006) Audit and trace log management—consolidation and analysis. Auerbach, Boca Raton. ISBN 978-0-849-32725-4
[Matulevičius17] Matulevičius R (2017) Fundamentals of secure system modelling. Springer, Berlin. ISBN 978-3-319-61716-9
[McKay17] McKay KA, Bassham L, Turan MS, Mouha N (2017) Report on lightweight cryptography (US National Institute of Standards and Technology Report NISTIR 8114). CreateSpace Independent Publishing Platform. ISBN 978-1-9811-1346-0. https://nvlpubs.nist.gov/nistpubs/ir/2017/NIST.IR.8114.pdf. Accessed 9 Sep 2018
[Merkow10] Merkow MS, Raghavan L (2010) Secure and resilient software development. Auerbach Publications (Taylor & Francis), Boca Raton. ISBN 978-1-439-82696-6
[Merkow11] Merkow MS, Raghavan L (2011) Secure and resilient software—requirements, test cases, and testing methods. Auerbach Publications (Taylor & Francis), Boca Raton. ISBN 978-1-439-86621-4
[Mittal18] Mittal S, Diallo S, Tolk A (eds) (2018) Emergent behaviour in complex systems—a modeling and simulation approach. Wiley, Hoboken. ISBN 978-1-119-37886-0
[Moore10] Moore R (2010) Cybercrime—investigating high-technology computer crime, 2nd edn. Anderson Publishing, Oxon. ISBN 978-1-4377-5582-4
[Myklebust18] Myklebust T, Stålhane T (2018) The agile safety case. Springer, Berlin. ISBN 978-3-319-70264-3
[Nahas13] Nahas M (2013) Time-triggered embedded systems—bridging the gap between scheduling algorithms and scheduler implementations in time-triggered embedded systems. LAP Lambert Academic Publishing, Saarbrücken. ISBN 978-3-6593-8047-1
[Nielson04] Nielson F, Nielson HR, Hankin C (2004) Principles of program analysis, 2nd edn. Springer, Berlin. ISBN 978-3-540-65410-0
[Obermaisser11] Obermaisser R (2011) Time-triggered communication. CRC Press, Boca Raton. ISBN 978-1-439-84661-2
[Oriyano16] Oriyano S-P (2016) Penetration testing essentials. Sybex, Hoboken. ISBN 978-1-119-23530-9
[Owen19] Owen T, Noble W, Speed FC (2019) New perspectives on cybercrime (Palgrave Studies in Cybercrime and Cybersecurity). Palgrave Macmillan, New York. ISBN 978-3-319-85258-4
[Paar10] Paar C, Pelzl J (2010) Understanding cryptography—a textbook for students and practitioners. Springer, Berlin. ISBN 978-3-642-04100-6
[Perez13] Perez E (2013) Knightmare on Wall Street—the rise and fall of Knight Capital and the biggest risk for financial markets. Edgar Perez, New York. ISBN 978-0-9896577-0-9
[Pont17] Pont MJ (2017) The engineering of reliable embedded systems—developing software for ‘SIL 0’ to ‘SIL 3’ designs using time-triggered architectures. SafeTTy Systems, Great Dalby. ISBN 978-0-9930-3554-8
[Poschmann09] Poschmann A (2009) Lightweight cryptography—cryptographic engineering for a pervasive world. Bochumer Universitätsverlag Westdeutscher Universitätsverlag, Bochum. ISBN 978-3-89966-341-9
[Raj17] Raj P, Raman AC (2017) The internet of things—enabling technologies, platforms, and use cases. Taylor & Francis, Boca Raton. ISBN 978-1-498-76128-4
[Ransome13] Ransome J, Misra A (2013) Core software security—security at the source. Taylor & Francis, Boca Raton. ISBN 978-1-466-56095-6
[Rerup18] Rerup N, Aslaner M (2018) Hands-on cybersecurity for architects—plan and design robust security architectures. Packt, Birmingham. ISBN 978-1-7888-3026-3
[Rierson13] Rierson L (2013) Developing safety-critical software—a practical guide for aviation software and DO-178C compliance. Taylor & Francis, Boca Raton. ISBN 978-1-439-81368-3
[Ross16] Ross H-L (2016) Functional safety for road vehicles—new challenges and solutions for e-mobility and automated driving. Springer International, Switzerland. ISBN 978-3-319-33360-1
[Rushby01] Rushby JM (2001) Bus-architectures for safety-critical embedded systems. In: EMSOFT 01 Proceedings of the first International Workshop on Embedded Software, 8–10 October 2001, 306–323. Springer, Berlin. ISBN 3-540-42673-6
[Rushby99] Rushby J (1999) Systematic formal verification for fault-tolerant time-triggered algorithms. IEEE 25(5):651–661. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.174.7976&rep=rep1&type=pdf. Accessed 20 Dec 2018
[Sabella18] Sabella A, Irons-Mclean R, Yannuzzi M (2018) Orchestrating and automating security for the internet of things—delivering advanced security capabilities from edge to cloud for IoT. Cisco Systems, Indianapolis. ISBN 978-1-5871-4503-2
[Saxe18] Saxe J, Sanders H (2018) Malware data science—attack detection and attribution. No Starch Press, San Francisco. ISBN 978-1-5932-7859-5
[Schoenfield15] Schoenfield BSE (2015) Securing systems—applied security architecture and threat models. CRC Press, Boca Raton. ISBN 978-1-482-23397-1
[Sethna06] Sethna JP (2006) Entropy, order parameters, and complexity. Oxford University Press, Oxford. ISBN 978-0-19-856677-9
[Shein04] Shein R (2004) Zero-day exploit—countdown to darkness. Syngress, Rockland. ISBN 978-1-931836-09-8
[Shemanske17] Shemanske TR (2017) Modern cryptography and elliptic curves—a beginner’s guide. American Mathematical Society, Rhode Island. ISBN 978-1-470-43582-0
[Smith10] Smith DJ, Simpson KGL (2010) Safety critical systems handbook—a straightforward guide to functional safety, IEC 61508 (2010 edition) and related standards, including process IEC 61511 and machinery IEC 62061 and ISO 13849, 3rd edn. Butterworth-Heinemann, Oxford. ISBN 978-0-080-96781-3
[Smith17] Smith S (2017) The internet of risky things—trusting the devices that surround us. O’Reilly, Farnham. ISBN 978-1-491-96362-3
[SolisTech16] Solis Tech (2016) Malware—malware detection & threats made easy!, 2nd edn. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-5236-9310-8
[Steinhardt16] Steinhardt G (ed) (2016) The faculty of informatics—key technology of the information society. Böhlau, Wien. ISBN 978-3-205-20129-8
[Swan15] Swan M (2015) Blockchain—blueprint for a new economy. O’Reilly and Associates, Farnham. ISBN 978-1-491-92049-7
[Talukder08] Talukder AK, Chaitanya M (2008) Architecting secure software-systems. Auerbach Publishers, Boca Raton. ISBN 978-1-420-08784-0
[Todorov07] Todorov D (2007) Mechanics of user identification and authentication—fundamentals of identity management. Auerbach Publishers, Boca Raton. ISBN 978-1-420-05219-0
[Tsiatsis14] Tsiatsis V, Mulligan C, Karnouskos S, Holler J, Boyle D (2014) From machine-to-machine to the internet of things—introduction to a new age of intelligence. Academic Press, Amsterdam. ISBN 978-0-124-07684-6
[Vacca18] Vacca JR (2018) Computer forensics—computer crime scene investigation, 3rd edn. Jones & Bartlett, Sudbury. ISBN 978-0-7637-7997-9
[Wall07] Wall DS (2007) Cybercrime—the transformation of crime in the information age. Polity, Cambridge. ISBN 978-0-7456-2736-6
[White18] White RA (2018) Cybercrime—the madness behind the methods. CreateSpace Independent Publishing Platform, Scotts Valley. ISBN 978-1-9798-4857-2
[Wildhaber17] Wildhaber B, Hagmann J, Burgwinkel D, Holländer S, Neuenschwander P, Spichty D (2017) Information governance—a practical guide: how to regain control over your information. The Swiss Information Governance Competence Center, Zollikon. ISBN 978-3-9524430-3-3
[Wong18] Wong W (2018) The risk management of safety and dependability—a guide for directors, managers and engineers. Woodhead Publishing, Illinois. ISBN 978-0-0810-1439-4
[Zhuge12] Zhuge H (2012) The knowledge grid—toward cyber-physical society, 2nd edn. World Scientific Publishing Company, Singapore. ISBN 978-9-8142-9177-4
[Zongo18a] Zongo P (2018) The five anchors of cyber-resilience—why some enterprises are hacked into bankruptcy while others easily bounce back. Broadcast Books, Sydney. ISBN 978-0-6480078-4-5
[Zongo18b] Zongo P (2018) The five anchors of cyber resilience—why some enterprises are hacked into bankruptcy, while others easily bounce back. CISO Advisory. ISBN 978-0-6480-0784-5. https://cisoadvisory.com.au/
Conclusion
We have now completed a long journey through the rich field of software-systems architecture. How shall we end this book? With a word of direction: people and organizations producing software-systems have the great responsibility to make these software-systems trustworthy. The use and operation of the systems should not endanger or damage people, property, the environment, or society. This responsibility has become even more substantial with the advent of cyber-physical systems, in which software directly affects the real world. Such cyber-physical systems will increasingly become our companions in life and work, thus raising the trustworthiness requirements.

Building trustworthy software-systems is a very challenging task. It involves people, organizations, methods, strategies, principles, tools, and many more elements. History has shown that the underlying architecture of a software-system is of crucial value for enabling the quality of service properties, especially trustworthiness. Therefore, the focus of this book is software-systems architecture. My intent was to provide well-founded knowledge about two essential elements of future-proof, trustworthy software engineering: (1) the sustainable software-systems development strategy “Managed Evolution”; (2) the software engineering process “Principle-Based Architecting”.

This book contains the crystallized knowledge of an essential part of my lifetime’s work: “Crystallized intelligence is a human’s lifetime of intellectual achievement, as demonstrated largely through his vocabulary and general knowledge” [Cattell 1966].
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2019 F. J. Furrer, Future-Proof Software-Systems, https://doi.org/10.1007/978-3-658-19938-8
References
The list of references for this book may seem overwhelming. The author aimed to make this book as self-contained as possible, but also intended to give the best available printed sources for a broader engagement of the reader with the addressed topics. Note that a tremendous amount of information is also available on the Internet.

[Cattell 1966] Cattell RB (1996) Intelligence—its structure, growth and action. Elsevier Science Ltd., Amsterdam. ISBN 978-0-4448-7922-6
[Edwards 1997] Edwards J, DeVoe D (1997) 3-tier client/server at work: ten of the world's most demanding mission-critical applications. Wiley, New York. ISBN 978-0-471-18443-0
[Fensel04] Fensel D (2004) Ontologies—a silver bullet for knowledge management and electronic commerce, 2nd edn. Springer, Heidelberg (2010 softcover reprint of hardcover 2nd edition 2004). ISBN 978-3-642-05558-4
[IEEE00] IEEE-SA Standards Board, Standard IEEE 1471-2000, 14 November 2000: IEEE recommended practice for architectural description of software-intensive systems. The Institute of Electrical and Electronics Engineers, New York. ISBN 978-0-7381-2518-0
[Johnson07] Johnson N (2007) Simply complexity—a clear guide to complexity theory. Oneworld Publications, Oxford. ISBN 978-1-85168-630-8
[King01] King CM, Dalton CE, Osmanoglu TE (2001) Security architecture—design, deployment and applications. McGraw-Hill Professional, Osborne. ISBN 978-0-072-13385-1
[Martin13] Martin RC (2013) Agile software development. Prentice Hall, Upper Saddle River. ISBN 978-1-292-02594-0
[McGinn18] McGinn R (2018) The ethically responsible engineer—concepts and cases for students and professionals. Wiley, Hoboken. ISBN 978-1-119-06019-2
[Reinheimer17] Reinheimer S, Robra-Bissantz S (eds) (2017) Business-IT-Alignment—Gemeinsam zum Unternehmenserfolg [Business-IT alignment—jointly toward business success]. Springer Vieweg, Wiesbaden. ISBN 978-3-658-13759-5
[Sherwood05] Sherwood J (2005) Enterprise security architecture—a business-driven approach. McGraw-Hill Education, Boca Raton. ISBN 978-1-578-20318-5
[Yu14] Yu S (2014) Distributed denial of service attack and defense. Springer, New York. ISBN 978-1-461-49490-4
Index
#SLOCs, 293 4 C’s, 273 α-principle, 202 ρ-principle, 208
A AADL, 281, 282 Abstraction, 146, 148 Acceptable risk, 46 Access protocol, 220 security, 101 Accidental complexity, 25, 50, 96, 279, 291, 292, 295 dependency, 352 redundancy, 227 Actuator, 35 Adaptation, 52, 142 Adapted pattern, 249 Advisor, 83 Agile process reinforcement, 174 Agile software development, 159 Anti-pattern, 120 Application, 31, 35, 213 architecture, 31 hierarchy, 94 layer, 93 interoperability, 236, 240 landscape, 31, 35, 116, 147, 213, 284 Application-specific dependability property, 337 Architecting for security, 348 Architectural
challenge, 224 complexity, 24 integrity, 97 Architecture, x, 25, 26, 46, 54, 57, 82, 91, 96, 142, 365 department, 288 description language (ADL), 108, 128, 281 directive, 83 editor, 108 erosion, 7, 13, 15, 27, 48, 96, 295 escort, 163 evaluation, 105 framework, 108, 128, 247 governance, 129 knowledge, 107, 125, 127, 128 layer, 209 organization, 125, 130 structure, 130 paradigm, 15 pattern, 107, 128 principle, x, 107, 114, 120, 128, 201, 207 for changeability, 208 process, 82, 131 program, 80 quality, 68 requirement, 162 tool, 107 view, 32 Architecture-based approach, 8 Architecture-driven process, 22 Artificial intelligence, 145 Asynchronous protocol, 221 Attack point, 357 Authentication, 33, 342
Authorization, 33, 342 Automotive cruise control system, 326 Autonomic computing, 28, 38 Autonomy, 38 AUTOSAR, 246 reference architecture, 246
B B-language, 151, 280 Behavior, 92 Big data, 263 Black box reuse, 254 Boxology, 275 Business architecture, 31, 115, 266 case for reuse, 256 IT alignment, 50, 202, 284 logic, 288 object model, 51, 52, 204, 225, 267, 280, 286, 288 process, 288 requirement, 21 rule, 149, 252, 256 engine, 256 language, 256 success, 126 value, ix, 47, 50, 52, 60, 80, 133, 201 erosion, 65, 80
C Car ontology, 276 Central architecture team, 143 CERT, 239 Certification, 260, 352 Change, 22, 26, 46, 52, 97, 142 impact on software, 27 Changeability, 50, 52, 53, 60, 68, 84, 97, 114, 116, 133, 210, 213, 218, 222, 244, 251, 338 Checklist, 75 Chief architect, 143, 224, 295 Clarity, 273 COBOL, 178 Code migration, 186 Codebase, 292 Cohesion, 214, 215, 268 rule, 215, 217
Index Commitment, 273 Committed management, 143 Common data, 241, 242 function, 241 functionality, 242 Communication, 273 Company culture, 8, 81, 97, 126, 132 management, 125, 127 Competitive advantage, 53 Competitiveness, 84 Complexity, 22, 24, 40, 46, 96, 131, 140, 214, 219, 284, 290, 349 factor, 290 impact on software, 25 management, 293 metric, 290 reduction architecture program, 293 Component, 35, 213 composition, 252 software engineering, 251 Computer science, 24 Conceptual architecture, 93 gap, 279 integrity, 23, 106, 142, 222, 225, 251, 267, 291 Concern, 147 Configuration management system, 232 Consistency, 264 Constituent system, 39, 140 Constraints, 167 Consultant, 83 Content fault, 320 Continuous degradation, 13 delivery, 102, 170, 171, 180 Contract, 40, 141, 237 Control, 273 CORBA, 220 Correctness, 150 Coupling, 211 CPS, 355 CPSoS, 355 Cross-platform risk, 169 Cryptography, 357 Cut and paste, 230 Cyber-physical system, 35, 36, 150, 255, 263, 355
of systems, 141, 349, 355 threat, 271
D Data, 227, 262 migration, 188 Database, 264 mirroring, 232 technology, 16 DCE, 220 DDoS, 331 Dead code, 187 Decidability, 278 Decision logic, 278 Defense boundary, 345 Dependability, 9, 47, 50, 53, 60, 69, 80, 232, 244, 309, 312, 337, 338, 348 architect, 311 metric, 73 objective, 313 property, 310 taxonomy, 53, 70 Dependable software, 7, 8 Dependency, 188, 212 Design-by-contract, 237 Development cost, 52, 66 money, 76 phase, 144 process, 46, 162, 273 time, 76 Devils of systems engineering, 23 DevOps, 47, 102, 170, 172, 180 DevSecOps, 175 Digital certificate, 81, 260 identity, 33, 261 signal processing, 4 signature, 261 Disaster prevention, 32 recovery, 32 Discipline, 165 Disorder, 11 Domain architect, 234, 267, 294 model, 31, 51, 52, 115, 128, 204, 217, 224, 234, 240, 242, 267, 280, 284, 285
software engineering, 224 Domain-driven design, 280 Domain-specific language (DSL), 280 modeling, 279, 280 Dominant decomposition, 217 partitioning rule, 217 Driverless car, 38 DSL. See domain-specific language
E Effort, 46 EIA, 266 EIFFEL, 240 Electronic information asset, 314 Embedded computer, 36 software, 4 Emergence, 141 Emergent behavior, 141, 357 information, 141 property, 141, 357 Encapsulated partition, 218 Encapsulation, 211, 218 Encryption, 342 Enforcement, 83 Engine management software, 255 Ensemble, 141 Enterprise, 51 architecture, 31, 115, 247, 248, 266 layer, 247 computing, 35, 50 information architecture, 266 Entity-relationship diagram (ERD), 269 Entropy, 11 Error, 318 Essence of Managed Evolution, 62 Essential complexity, 25, 96, 279, 291, 292 Evaluation team, 105 Event-based, 353 Event-B-language, 280 Evolution cycle, 48, 50, 102 process, 17 strategy, viii, ix, 46 trajectory, 61
Evolutionary architecture, 102 Executable code, 149 Execution environment, 92 platform, 31 Expressivity, 278 Extension, 99 External uncertainty, 28
F Fail-safe state, 325 Failure, 318 Fault, 318 containment, 319 region, 318 detection, 320 handling, 320 propagation, 318 Fault-tolerance, 327 Fault-tolerant computing, 325 Faulty software, 7 Feedback control loop, 38 Financial institution, 287 value of Managed Evolution, 83 Force of entropy, 11, 16, 22 Forensic analysis, 175, 176 capability, 343 Formal language, 151, 280 method, 142 model, 145 Functional categorization scheme, 285 complexity, 24 emergence, 141 requirements, 167 Functionality, 47, 227, 262 Function point (FP), 66 Future, 139 Future-proof software-system, x, 17, 24, 35, 45, 54, 96, 132, 162, 174, 201, 203, 225, 337
G Generic architecture, 29, 38 pattern, 249
H Hazard, 349 analysis, 349, 357 Hidden assumption, 225 redundancy, 234 Hierarchical refinement, 287 Hierarchy, 34, 93 High changeability, 66 High-quality IT systems, 97 High resistance to change, 66 Horizontal architecture layer, 30, 315 principle, 114 https, 240
I IBM 360/370, 179 IEEE, 239 Impact, 96 of software, 7 Incomplete information, 144 Industry standard, 51, 107, 239, 258 Infiltration, 345 Information, 31, 262, 263 architecture, 31, 263, 264 asset, 13, 314 emergence, 141 policy, 266, 267 protection policy, 269 security, 339 security policy, 314 system architecture, 127 Input, 92 Insider attack, 323 Insider threat, 346 Instantiation, 288 Integration, 100 architecture, 31 Interface, 219 contract, 237, 238 Internal uncertainty, 28
L Large-scale agile, 164 Layered architecture, 246 Legacy application, 78 code, 17 software, 98, 177, 190 modernization and migration, 99, 179, 180, 189 LeSS, 164 LeSS Huge, 164 Lifecycle, 239 Lightweight cryptography, 357 Logging, 342 Long-term evolution, 3 sustainability, 76 Long-time competitiveness, 97 Loose coupling, 222 Loosely coupled, 221 Low changeability, 66 Low resistance to change, 66
process, 17 Malicious attack, 323 functionality, 346 modification, 347 Managed copy, 243 Managed Evolution, v, x, 46, 48, 54, 125, 135, 162, 167, 189, 190, 203 channel, 62 coordinate-system, 60 culture, 202 definition, 59 strategy, 17, 79, 99, 143, 225 Managed redundancy, 227, 231 Management, 46 of complexity, 291 responsibility, 97, 129 Manifesto for Agile software development, 160 MAPE-K, 38, 39 architecture, 28 MARTE, 279 Master authority, 242 Mathematical logic, 278 Measurement program, 134 Megasystem, 162 Metadata, 269 Metric, 72, 126, 133 Microservice architecture, 160 Migration, 99 Mission-critical, 32 Mitigation measure, 311 Model, 51, 107, 224, 272 checking, 149 explosion, 282 refinement, 283 Modeling, 273 activity, 274 convention, 274 language, 149, 273–275 methodology, 274 time, 192 Models-to-code, 149 Modification project, 57, 61 Module, 35, 213 Monitoring, 145, 317, 324 Monolith, 179 Multiple lines of defense, 323
M Machine learning, 145 Maintenance, 13
N Net present value (NPV), 64, 65 Network boundary monitoring, 331
Internet attack, 323 of Commerce, 339 of Things (IoT), 339, 355 device, 356 of Value, 339 security, 316 Interoperability, 235 level, 236 Invariant, 237 Invasive reuse, 252 Investment program, 203 ISO, 239 Isolation, 210 IT architecture, 130 IT investment, 202 IT megasystem, 284 IT policy, 313 IT system, 262
K Key success factor, ix
374 NIST, 239 Noninvasive reuse, 252 Normative impact, 246
O Object Management Group (OMG), 239, 278 Object-orientation, 278 Object-oriented modeling, 278, 283 Ontology, 276 language, 277 matching, 240 Open-source ontology editor, 278 Operating environment, 22, 48, 92 Operation of the software-system, 28 phase, 144 Operationalization, 76 Operational risk, 168 Opportunistic strategy, 84, 85 Order, 11 Output, 92
P Parametrization, 252, 254 Part, 92 Partitioning, 29, 211, 283 rule, 214, 216 Pattern, 118, 249 Pervasiveness of software, 5 Phoenix project, 174 Pirate database, 229 Planned redundancy, 227 Policy, 313, 338 process, 313 Portfolio, 131 Postcondition, 237 Precondition, 237 Primary property, 50 Principle, 92, 113 enforcement, 121, 132 Principle-based architecting, x, 8, 46, 54, 120, 125, 135 Process, 81, 349 ProCoS, 151 Profile, 279 Program, 35 Programming paradigm, 14 Project, 22, 47
Index risk, 168 Property, 93 Protection asset, 311 boundary, 344 Protégé, 278 Provable correctness, 150 Provably correct software, 150 Public key infrastructure (PKI), 260
Q Quality of service properties scorecard, 75 of service property, 32, 46, 47, 58, 167, 174 property, viii, ix, 114, 201, 207, 269, 316 requirements, 167
R Radio transceiver, 4 Ransomware, 345 Re-architecture program, 295 Real-time data/information, 263 information architecture, 269 monitoring, 331 Rearchitecting, 17, 79, 180, 182, 203 Recoverability, 175 Redundancy, 101, 227, 231, 284, 322, 328 avoidance pattern, 234 Redundancy-free system, 227 Reengineering, 180, 184, 203 Refactoring, 17, 79, 180, 186, 203 Reference architecture, 38, 108, 128, 245, 246 Relationship, 92, 141 Relentless change, 26 Replacement, 99, 180, 181 Requirements area, 164 engineering, 166 management, 166 process, 167 Residual risk, 312, 349 Resilience, 145, 310, 312, 337 infrastructure, 330 principle, 311 Resistance to change, 8, 13, 52, 116, 208 Response time metric, 74 Responsibility, ix, 266, 365
Index Reusable asset, 252 Reuse, 251 context, 257 cycle, 254 measurement, 257 type, 252 Reverse-engineering, 181 Review, 83, 121, 294 Risk, 8, 25, 60, 99, 106, 144, 168, 177, 181, 204, 349 analysis, 357 management, 311 Risk-based migration strategy, 178 RMI, 220 Role-based access control, 119 Run-time infrastructure, 353
S Safety, 338, 348 case, 352 Safety-critical system, 151, 352 Sanction filter, 225 list, 225 Scorecard, 75, 80 Scrum, 161 Secondary property, 50 Secure interoperability, 261 software development process, 347 Security, 338 architecture, 33, 316 audit, 343 classification, 341 Self-configuration, 28 Self-healing, 28 Self-optimization, 28 Self-protection, 28 Semantic alignment, 50, 51 interoperability, 50, 236, 240 Sensor, 35, 38 Separation of concerns, 215 Service, 243 contract, 237, 238, 332 reuse, 252 Service-oriented architecture, 17 Simplification, 294 program, 25
375 step, 293 Single point of failure, 322 Single source of truth, 233, 241 SLOC, 66 Software, ix, 4 accident, ix architecture, 92, 127 knowledge, 245 asset, 3 development process, 81 engineering, 139, 290 hierarchy, 252 infrastructure, 244 investment, 8 life cycle, 144 model, 7 opportunity, 6 system, 12, 272, 279, 309 world, ix, 3 Software architecting, 114 Software-defined radio, 4 Source lines of code, 293 of technical debt, 14 SPARK, 151 Specification, 151 Staff, 126 Standard type, 259 State-machine, 326 Strategy, 16, 46, 54, 58 Structural alignment, 51 complexity, 24 model, 225, 283 Structure, 57, 92, 142 Structured query language (SQL), 210 Subdomain, 285 Success factor, 96 story, 6 Survivability, 85 Synchronization, 241 mechanism, 231, 233 Synchronous protocol, 221 Syntactic interoperability, 236, 240 SysML, 278, 283 System and software engineering, 7, 47, 92 architecture knowledge, 245 engineering process, 21, 22
376 evolution, 293 process, 125, 128 modeling language, 278 state, 60 System-of-systems, 35, 39, 40, 50, 140, 147, 283
T Taxonomy, 70, 72, 240, 275, 310 Technical architecture, 31, 116 debt, 7, 13, 14, 27, 48, 132, 188, 295 infrastructure, 329 interoperability, 236, 240 standard, 240 Technology change, 185 portfolio, 32 strategy, 32 Temporal fault, 320 logic, 192 The Open Group Architecture Framework, 248 Third-party software, 16 Threat, 269, 314 Three devils of systems engineering, 22, 140 Tightly coupled, 221 Time, 191, 192, 223 Time-to-market, 52, 66, 143 Time-triggered architecture, 320, 353 TOGAF, 248 Trade-off, 356 Trajectory to death, 62, 84 Transaction, 264 Transformation, 47 Trust, 132 Trustworthiness, 174, 355 Trustworthy software, 365 TTA. See time-triggered architecture
U Ultra-large-scale system, 162 Uncertainty, 22, 23, 27, 46, 97, 144 impact on software, 28
Index Unified modeling language (UML), 278, 283 model, 269, 283 Unmanageable fault propagation, 254 Unmanaged redundancy, 227, 228, 234, 241 functional, 230 Unnecessary complexity, 291 Use case point (UCP), 66
V Validation algorithm, 271 process, 270 step, 270 Value, 132 fault, 320 of standards, 260 Vendor lock-in, 210 Vertical architecture layer, 32, 315 principle, 114, 116 Very large software-system, 47 View, 107, 283
W Web ontology language (OWL), 277, 278 OWL DL, 278 OWL Full, 278 OWL Lite, 278 White box reuse, 254 WSDL, 241
X X.509, 261 XML, 220, 240
Z Z-language, 151, 280 Zachman Framework, 248