Cross-Impact Balances (CIB) for Scenario Analysis: Fundamentals and Implementation
ISBN 3031272293, 9783031272295

Cross-Impact Balances (CIB) is a method frequently used for research, in companies and in administrations for the system


English · 286 pages · 2023


Table of Contents
Acknowledgments
Contents
Abbreviations
List of Figures
List of Tables
Chapter 1: Introduction to CIB
References
Chapter 2: The Application Field of CIB
2.1 Scenarios
2.2 Scenarios and Decisions
2.3 Classifying CIB
References
Chapter 3: Foundations of CIB
3.1 Descriptors
3.2 Descriptor Variants
3.2.1 Completeness and Mutual Exclusivity of the Descriptor Variants
3.2.2 The Scenario Space
3.2.3 The Need for Considering Interdependence
3.3 Coping with Interdependence: The Cross-Impact Matrix
3.4 Constructing Consistent Scenarios
3.4.1 The Impact Diagram
3.4.2 Discovering Scenario Inconsistencies Using Influence Diagrams
3.4.3 Formalizing Consistency Checks: The Impact Sum
3.4.4 The Formalized Consistency Check at Work
3.4.5 From Arrows to Rows and Columns: The Matrix-Based Consistency Check
3.4.6 Scenario Construction
3.5 How to Present CIB Scenarios
3.6 Key Indicators of CIB Scenarios
3.6.1 The Consistency Value
Descriptor Consistency Values
Scenario Consistency Values
Nonconsideration of Autonomous Descriptors
Inconsistency Scale
Global Inconsistency
3.6.2 The Consistency Profile
Consistency Profile and Scenario Stability
Consistency Profile and Judgment Uncertainty
3.6.3 The Total Impact Score
3.7 Data Uncertainty
3.7.1 Estimating Data Uncertainty
3.7.2 Data Uncertainty and the Robustness of Conclusions
3.7.3 Other Sources of Uncertainty
References
Chapter 4: Analyzing Scenario Portfolios
4.1 Structuring a Scenario Portfolio
4.1.1 Perspective A: If-Then
4.1.2 Perspective B: Order by Performance
4.1.3 Perspective C: Portfolio Mapping
Scenario Axes
Using Scenario Axes Diagrams in CIB Analysis
Special form of the Scenario Axes Diagram: Probability vs. Effect
4.2 Revealing the Whys and Hows of a Scenario
4.2.1 How to Proceed
4.2.2 The Scenario-Specific Cross-Impact Matrix
4.3 Ex Post Consistency Assessment of Scenarios
4.3.1 Intuitive Scenarios
4.3.2 Reconstructing the Descriptor Field
4.3.3 Preparing the Cross-Impact Matrix
4.3.4 CIB Evaluation
4.4 Intervention Analysis
4.4.1 Analysis Example: Interventions to Improve Water Supply
4.4.2 The Cross-Impact Matrix and its Portfolio
4.4.3 Conducting an Intervention Analysis
Compilation of the Intervention Options
Testing a Proposed Intervention: E1
Testing a Proposed Intervention: A2
Robustness Check
Side Effect Control
Alternative Forms of Intervention Analysis
4.4.4 Surprise-Driven Scenarios
4.5 Expert Dissent Analysis
4.5.1 Classifying Dissent
4.5.2 Rule-Based Decisions
4.5.3 The Sum Matrix
Consensus and Dissent in the Matrix Ensemble
Evaluation of the Sum Matrix
Significance of Inconsistencies in Sum Matrices
Sum Matrix Construction in the Case of Nonuniform Rating Scales
Sum Matrix vs. Mean Value Matrix
Summary: Interpreting the Sum Matrix
4.5.4 Delphi
4.5.5 Ensemble Evaluation
Step 1: Individual Evaluation of the Expert Matrices
Step 2: Compiling the Ensemble Table
Step 3: Analyzing the Ensemble Table
Sensitivity Analysis
4.5.6 Group Evaluation
Step 1: Identification of the Key Dissent
Step 2: Grouping the Matrices Along the Key Dissent
Step 3: Group Sum Matrix Building and Evaluation
Comparing the Results of the Group Evaluation and the Ensemble Evaluation
4.6 Storyline Development
4.6.1 Strengths and Weaknesses of CIB-Based Storyline Development
4.6.2 Preparation of the Scenario-Specific Cross-Impact Matrix
4.6.3 Storyline Creation
4.7 Basic Characteristics of a CIB Portfolio
4.7.1 Number of Scenarios
Scenario Counts in Practice
Sparse Matrices: A Prerequisite for Large Scenario Portfolios
Frequency Distribution of the Inconsistency Value
4.7.2 The Presence Rate
4.7.3 The Portfolio Diversity
The Distance Table
Measuring Portfolio Diversity
Typical Diversity Scores
References
Chapter 5: What if... Challenges in CIB Practice
5.1 Insufficient Number of Scenarios
5.2 Too Many Scenarios
5.2.1 Statistical Analysis
Interpreting the Frequency Data
Requirements for a Probabilistic Interpretation of Frequency Data
5.2.2 Diversity Sampling
5.2.3 Positioning Scenarios on a Portfolio Map
Method Comparison
5.2.4 Further Procedures
Cluster Analysis
Correspondence Analysis
5.3 Monotonous Portfolio
5.3.1 Unbalanced Judgment Sections
5.3.2 Unbalanced Columns
5.4 Bipolar Portfolio
5.4.1 Causes of Bipolar Portfolios
5.4.2 Special Approaches for Analyzing Bipolar Portfolios
Single Intervention
Dual Interventions
5.5 Underdetermined Descriptors
5.6 Essential Vacancies
5.6.1 Resolving Vacancies by Expanding the Portfolio
5.6.2 Cause Analysis
5.7 Context-Dependent Impacts
References
Chapter 6: Data in CIB
6.1 About Descriptors
6.1.1 Explanation of Term
6.1.2 Descriptor Types
Formal Typology: Classification by Interdependence Type
Content-Oriented Typology: Classification by Roles in Terms of Content
6.1.3 Methodological Aspects
Completeness of the Descriptor Field
Number of Descriptors
Aggregation Level
Documentation
6.2 About Descriptor Variants
6.2.1 Explanation of Term
6.2.2 Types of Descriptor Variants
State Descriptors Versus Trend Descriptors
Descriptor Variants: Scales of Measurement
Descriptor Variant Classification According to Occurrence in the Portfolio
6.2.3 Methodological Aspects
Definition
Completeness
Mutual Exclusivity
Absence of Overlap
6.2.4 Designing the Descriptor Variants
Gradation of the Descriptor Variants
Range of the Descriptor Variants: Conservative Scenarios vs. Extreme Scenarios
Plausibility of Descriptor Variants
6.3 About Cross-impacts
6.3.1 Explanation of Term
6.3.2 Methodological Aspects
Rating Interval
Empty Judgment Sections and the Omission of Very Weak Influences
Ensuring Coding Quality: Avoid Coding Indirect Influences
What Are Indirect Influences?
Why Is It a Problem to Code Indirect Influences in the Matrix Together with Direct Influences?
Implementation Hints
Ensuring Coding Quality: Avoiding Inverse Coding
Ensuring Coding Quality: Balancing Positive and Negative Cross-impacts
Comparability as a Criterion for the Coding Style
Conventions to Ensure Comparability of Cross-impact Ratings
“Standardization” as a Strict but also Restrictive Instrument to Balance Positive and Negative Cross-impacts
Ensuring Coding Quality: Calibrating Strength Ratings
Ensuring Coding Quality: Sign Errors and Double Negations
Ensuring Coding Quality: Predetermined Descriptors
Phantom Variants as a Cause of Bias
Ensuring Coding Quality: Absolute Cross-impacts
Avoiding Conflicts Between Absolute Cross-impacts
6.3.3 Data Uncertainty
6.4 About Data Elicitation
6.4.1 Self-Elicitation
Examples
6.4.2 Literature Review
Descriptor Screening
Descriptor Variants
Cross-impact Data
Coding Literature Quotations: An Example from Practice
Assessment
Examples
6.4.3 Expert Elicitation (Written/Online)
Descriptor Screening
Descriptor Ranking
Descriptor Variants
Cross-impact Data
Partitioning the Matrix for Expert Elicitation
Assessment
Examples
6.4.4 Expert Elicitation (Interviews)
Descriptor/Variant Screening
Cross-impact Data
Assessment
Examples
6.4.5 Expert Elicitation (Workshops)
Descriptor Screening
Descriptor Ranking
Cross-impact Data
Assessment
Number of Participants
Time Management
General Recommendations
Pretest
Combining Elicitation Methods
Iteration and Scenario Validation
Examples
6.4.6 Use of Theories or Previous Research as Data Collection Sources
Assessment
Examples
References
Chapter 7: CIB at Work
7.1 Iran Nuclear Deal
7.2 Energy and Society
7.3 Public Health
7.4 IPCC Storylines
References
Chapter 8: Reflections on CIB
8.1 Interpretations
8.1.1 Interpretation I (Time-Related): CIB in Scenario Analysis
8.1.2 Interpretation II (Unrelated to Time): CIB in Steady-State Systems Analysis
8.1.3 Interpretation III: CIB in Policy Design
8.1.4 Classification of CIB as a Qualitative-Semiquantitative Method of Analysis
8.2 Strengths of CIB
8.2.1 Scenario Quality
8.2.2 Traceability of the Scenario Consistency
8.2.3 Reproducibility and Revisability
8.2.4 Complete Screening of the Scenario Space
8.2.5 Causal Models
8.2.6 Knowledge Integration and Inter- and Transdisciplinary Learning
8.2.7 Objectivity
8.2.8 Scenario Criticism
8.3 Challenges and Limitations
8.3.1 Time Resources
8.3.2 Aggregation Level and Limited Descriptor Number
8.3.3 System Boundary
8.3.4 Limits to the Completeness of Future Exploration
8.3.5 Discrete-Valued Descriptors and Scenarios
8.3.6 Trend Stability Assumption
8.3.7 Uncertainty and Residual Subjectivity in Data Elicitation
8.3.8 Context-Sensitive Influences
8.3.9 Consistency as a Principle of Scenario Design
8.3.10 Critical Role of Methods Expertise
8.3.11 CIB Does Not Study Reality but Mental Models of Reality
8.4 Unsuitable Use Cases: A Checklist
8.5 Alternative Methods
References
Appendix: Analogies
Physics
Network Analysis
Game Theory
Glossary
Cross-impact matrix (in the context of CIB)
Portfolio (in CIB)
Scenarios (in the context of CIB)
Index


Contributions to Management Science

The series Contributions to Management Science contains research publications in all fields of business and management science. These publications are primarily monographs and multiple-author works containing new research results; selected conference-based publications are also considered. The focus of the series lies in presenting the development of the latest theoretical and empirical research across different viewpoints. This book series is indexed in Scopus.

Wolfgang Weimer-Jehle

Cross-Impact Balances (CIB) for Scenario Analysis
Fundamentals and Implementation

Wolfgang Weimer-Jehle
ZIRIUS, University of Stuttgart
Stuttgart, Germany

ISSN 1431-1941    ISSN 2197-716X (electronic)
Contributions to Management Science
ISBN 978-3-031-27229-5    ISBN 978-3-031-27230-1 (eBook)
https://doi.org/10.1007/978-3-031-27230-1

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2023

This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland

Acknowledgments

The fact that it is possible today to present the CIB method based on an extensive body of practice and multifaceted methodological research is due to a growing community of method users and researchers, from which numerous suggestions, inspirations, methodological innovations, and critical questions have emerged, thus helping the method mature. I would like to express my gratitude to my colleagues worldwide who have been inspired by the CIB method and shared their insights, experiences, and criticism.

It is my pleasure to thank Dr. Diethard Schade, who initiated scenario research at the Center for Technology Assessment, Prof. Georg Förster, my companion during my first steps toward CIB, and Prof. Ortwin Renn, who fostered the development of CIB at the University of Stuttgart for 15 years through his continuous support.

I especially wish to thank my colleagues at the CIB-Lab of ZIRIUS at the University of Stuttgart for the journey they have undertaken together with me for so many years and for their inestimable contributions to the further development and maturation of the CIB method. Their research, motivation, wealth of ideas, and untiring support have always been an inspiration and encouragement to me. Without them, this book would not have come about in the way it has.

I would also like to thank Dr. Wolfgang Hauser, Dr. Hannah Kosow, Prof. Vanessa Schweizer, and M.A. Sandra Wassermann for valuable comments on the book’s manuscript. All remaining errors are mine.



Abbreviations

ABM     Agent-based modelling
BASICS  Batelle Scenario Inputs to Corporate Strategy (scenario method)
C       Consistency score of a descriptor or scenario
CO2     Carbon dioxide
CIB     Cross-Impact Balances (scenario method)
D       Diversity score of a scenario portfolio
FAR     Field Anomaly Relaxation (scenario method)
Gt C    Giga tons (billion tons) of carbon
IC      Inconsistency score of a descriptor or scenario
ICS     Significance threshold of a scenario inconsistency
IL      Intuitive Logics (scenario method)
IPCC    Intergovernmental Panel on Climate Change
KSIM    Kane’s Simulation Model (simulation method)
m       Number of matrices of a matrix ensemble
MINT    A group of academic disciplines, consisting of mathematics, informatics, natural sciences, and technology
N       Number of descriptors of a cross-impact matrix
OECD    Organization for Economic Co-operation and Development
q       Quorum applied in an ensemble evaluation
SD      Systems Dynamics (simulation method)
SRES    Special Report on Emission Scenarios
TIS     Total impact score
Vi      Number of states of descriptor i
Z       Number of possible configurations of a morphological field
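Three of these symbols are directly related: the size Z of a scenario space follows from the number of descriptors N and their variant counts Vi. The relation below is the standard combinatorics of a morphological field and is given here only for orientation; it is implied by the definitions above rather than quoted from the book.

    % Number of possible configurations (scenarios) of a morphological field
    % with N descriptors, where descriptor i has V_i variants:
    \[ Z = \prod_{i=1}^{N} V_i \]
    % Example: N = 10 descriptors with V_i = 3 variants each give Z = 3^{10} = 59049.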


List of Figures

Fig. 2.1  The scenario funnel
Fig. 2.2  A classification of scenarios and scenario methods
Fig. 2.3  Example of a qualitative network of interacting nodes
Fig. 2.4  Workflow of a CIB analysis
Fig. 3.1  The descriptor field for the “Somewhereland” analysis
Fig. 3.2  Descriptor variants (alternative futures) for the descriptor “A. Government”
Fig. 3.3  Compilation of descriptors and their variants for Somewhereland
Fig. 3.4  Cross-impact assessment of the influence between two descriptors
Fig. 3.5  Representing cross-impact data without use of numbers
Fig. 3.6  Representation of cross-impact data following Weitz et al. (2019)
Fig. 3.7  The cross-impact matrix of Somewhereland
Fig. 3.8  The cross-impact matrix printed without influence-free judgment sections
Fig. 3.9  One of 486 scenarios for Somewhereland
Fig. 3.10  Consulting the cross-impact matrix on the influence relationship A2 → B1
Fig. 3.11  The impact diagram of scenario [A2 B1 C3 D1 F1 E1]
Fig. 3.12  The influence diagram Fig. 3.11 with the impact sums of the descriptors
Fig. 3.13  Demonstration of inconsistency of descriptor variant D1
Fig. 3.14  Consequential inconsistency in Descriptor E after adjustment of Descriptor D
Fig. 3.15  Matrix-based calculation of impact sums for scenario [A2 B1 C3 D1 E1 F1]
Fig. 3.16  Matrix-based calculation of the complete impact balances of a scenario
Fig. 3.17  The Somewhereland scenarios in tableau format with integrated descriptor listing
Fig. 3.18  The Somewhereland scenarios in tableau format with separate descriptor listing
Fig. 3.19  Impact diagram of Somewhereland scenario no. 10
Fig. 3.20  Consistency values of the descriptors calculated for the test scenario
Fig. 3.21  Comparison of consistency and inconsistency scales
Fig. 3.22  Two examples of consistency profiles
Fig. 4.1  The Somewhereland-plus matrix
Fig. 4.2  Somewhereland-plus portfolio arranged according to the “if-then” perspective
Fig. 4.3  Evaluation of descriptor variants according to their dissimilarity with the present state of Somewhereland (example values)
Fig. 4.4  Arrangement of Somewhereland scenarios on a performance scale
Fig. 4.5  Somewhereland scenario tableau, ordered by dissimilarity to the present
Fig. 4.6  Example of a scenario axes analysis (The example draws from work done by the IPCC on the future of global climate gas emissions (Nakićenović et al. 2000))
Fig. 4.7  Two-dimensional rating of the Somewhereland descriptor variants
Fig. 4.8  A portfolio map of the Somewherland scenarios
Fig. 4.9  Ordering the portfolio according to probability and effect
Fig. 4.10  Elucidating the background of Somewhereland Scenario no. 1 “Society in crisis”
Fig. 4.11  The specific cross-impact matrix of Somewhereland Scenario no. 1
Fig. 4.12  Positive part of a scenario-specific cross-impact matrix
Fig. 4.13  Justification form for Descriptor variant “E3 Social cohesion: Unrest”
Fig. 4.14  Illustration of the justifications of a descriptor variant
Fig. 4.15  Scenario axes diagram and the “Somewhereland City mobility” example
Fig. 4.16  Descriptors and descriptor variants of the “Somewhereland City” intuitive mobility scenarios
Fig. 4.17  “Somewhereland City” cross-impact matrix
Fig. 4.18  “Somewhereland City” CIB analysis results
Fig. 4.19  Consistency critique of “The Unfinished” intuitive scenario
Fig. 4.20  Corrected and extended scenario axes diagram according to the result of the CIB analysis
Fig. 4.21  Nutshell I—Workflow of a CIB intervention analysis
Fig. 4.22  The descriptors and descriptor variants of the “Water supply” intervention analysis
Fig. 4.23  “Water supply” cross-impact matrix (basic matrix)
Fig. 4.24  The “Water supply” portfolio without interventions (basic portfolio)
Fig. 4.25  “Water supply” cross-impact matrix with intervention at E1
Fig. 4.26  The IC0 portfolio of the E1 intervention matrix
Fig. 4.27  “Water supply” cross-impact matrix with intervention at A2
Fig. 4.28  The IC0 portfolio of the A2 intervention matrix
Fig. 4.29  Implementation of the “Global economic crises” wildcard into the Somewhereland matrix
Fig. 4.30  The Somewhereland portfolio under the impact of the “Global economic crises” wildcard
Fig. 4.31  Different qualities of rating differences. Adapted from Jenssen and Weimer-Jehle (2012)
Fig. 4.32  Example of a matrix ensemble and its sum matrix
Fig. 4.33  Fully consistent solutions of the sum matrix
Fig. 4.34  Solutions of the “Resource economy” sum matrix, including all scenarios with nonsignificant inconsistency
Fig. 4.35  Procedure of a Delphi survey
Fig. 4.36  Dissent management using the Delphi method
Fig. 4.37  Compilation of scenarios of the individual evaluations of the “Resource economy” matrix ensemble
Fig. 4.38  The ensemble table of the “Resource economy” matrix ensemble
Fig. 4.39  Key dissent of the “Resource economy” matrix ensemble
Fig. 4.40  Nutshell II—Dissent analysis by group evaluation
Fig. 4.41  Specific cross-impact matrix for Somewhereland scenario no. 10
Fig. 4.42  Data basis for storyline development for Somewhereland scenario no. 10
Fig. 4.43  Data basis for the development of a storyline in graphical representation
Fig. 4.44  Example of a perfectly sequencable specific matrix
Fig. 4.45  Improved descriptor order for Somewhereland scenario no. 10
Fig. 4.46  Frequency distribution of the inconsistency value in the Somewhereland matrix
Fig. 4.47  The cross-impact “Oil price” matrix (Weimer-Jehle 2006)
Fig. 4.48  The three fully consistent scenarios of the “Oil price” matrix
Fig. 4.49  Descriptor variant vacancies of the “Oil price” matrix (empty squares)
Fig. 4.50  Distance table of the “Oil price” portfolio
Fig. 4.51  Distance table of the Somewhereland portfolio with marking of an N/2 selection
Fig. 5.1  IC1 portfolio of the “oil price” matrix
Fig. 5.2  “Global socioeconomic pathways” matrix. Adapted and modified from Schweizer and O’Neill (2014)
Fig. 5.3  Occurrence frequencies of the descriptor variants in the “Global socioeconomic pathways” portfolio
Fig. 5.4  Exploring the future space through scenario selection
Fig. 5.5  Nutshell III - Procedure for creating a selection with high scenario distances (diversity sampling)
Fig. 5.6  Scenario selection according to the “max-min” heuristic (diversity sampling)
Fig. 5.7  Evaluation of descriptor variants according to the criteria of social and economic development (exemplary data)
Fig. 5.8  Portfolio map of the “Global socioeconomic pathways” portfolio
Fig. 5.9  Cross-impact matrix on the social development of an emerging country. Adapted and modified from Cabrera Méndez et al. (2010) (Translation from Spanish by the author)
Fig. 5.10  Unbalanced judgment sections in the “Emerging country” matrix
Fig. 5.11  Column sums of the “Emerging country” matrix
Fig. 5.12  Example of a cross-impact matrix on social sustainability. Adapted and modified from Renn et al. (2007)
Fig. 5.13  A bipolar portfolio
Fig. 5.14  Cross-impact matrix “Social sustainability” with sorted descriptor variants
Fig. 5.15  Intervention effects: worst case (dark shading) and best case (light shading)
Fig. 5.16  Effect of dual interventions in the “Social sustainability” matrix
Fig. 5.17  Fictitious neighboring countries BigCountry and SmallCountry
Fig. 5.18  Economic-social development of the fictitious country SmallCountry
Fig. 5.19  Portfolio of the “SmallCountry” matrix according to conventional evaluation
Fig. 5.20  “SmallCountry” portfolio when considering the underdetermination of descriptor D
Fig. 5.21  Column sums of descriptor “E. Oil price”
Fig. 5.22  Conventional (left) and context-dependent impact on B (right)
Fig. 5.23  Nutshell IV - Processing context-dependent impacts
Fig. 5.24  Context-dependent impacts in the “Mobility demand” matrix
Fig. 5.25  Conditional cross-impact matrices (the top matrix is valid for E1 scenarios, that below for E2 scenarios)
Fig. 5.26  Mobility demand portfolio after consideration of context dependencies
Fig. 5.27  Flawed scenario logic due to neglect of context dependency
Fig. 6.1  “Group opinion” matrix and its portfolio
Fig. 6.2  “Group opinion” cross-impact matrix and portfolio after removing the passive descriptor
Fig. 6.3  Post hoc determination of the consistent variant of a passive descriptor
Fig. 6.4  Descriptor roles in the “Water supply” matrix
Fig. 6.5  Portfolios with (top) and without (bottom) intermediary descriptor D
Fig. 6.6  Examples of state descriptors and trend descriptors
Fig. 6.7  Examples of nominal, ordinal, and ratio descriptors
Fig. 6.8  Example of descriptor variants and their definitions
Fig. 6.9  Incorrect definition of descriptor variants due to overlap of topics
Fig. 6.10  Examples of central and peripheral descriptor variants
Fig. 6.11  Nutshell V. Using subscenarios as descriptor variants
Fig. 6.12  Partitioning of the assessment task according to knowledge domains
Fig. 6.13  Two ways to construct a cross-impact matrix by expert group elicitation
Fig. 6.14  Nutshell VI. Workflow in a scenario validation workshop
Fig. 7.1  “Iran 1395” scenarios and their thematic core motifs. Own illustration based on Ayandeban (2016)
Fig. 7.2  Map of German societies in 2050 and their CO2 emissions. Modified from Pregger et al. (2020)
Fig. 7.3  Section of the network of impact relations between the factors influencing the energy balance of an individual. Own representation based on data from Weimer-Jehle et al. (2012)
Fig. 7.4  CIB analysis of obesity risks for children and adolescents for four case examples. Data from Weimer-Jehle et al. (2012)
Fig. 7.5  Scenario axes diagram of the forty SRES emissions scenarios. Own illustration based on data from Nakićenović et al. (2000)
Fig. 7.6  Initial phase of the emissions trajectories of the forty SRES scenarios (own illustration based on data from Nakićenović et al. (2000) (SRES scenario emissions) and IPCC (2014) (historical CO2 emissions))
Fig. 7.7  Number of SRES and CIB scenarios in four classes of carbon intensity. Own illustration based on data from Schweizer and Kriegler (2012)
Fig. 8.1  Qualitative system model and its semiquantitative representation
Fig. 8.2  Comparative classification of CIB as a qualitative-semiquantitative analysis method
Fig. A.1  Analogy of the equilibrium of forces: valleys as rest points for heavy bodies
Fig. A.2  “Terrain profiles” of Somewhereland descriptors in the case of an inconsistent scenario
Fig. A.3  “Terrain profiles” in a consistent scenario
Fig. A.4  Somewhereland as a dynamic network
Fig. A.5  Analogy between CIB and game theory

List of Tables

Table 2.1  Performance matrix for the evaluation of planning variants. Own depiction based on Fink et al. (2002)
Table 2.2  Types and functions of scenarios
Table 3.1  Number of scenarios in scenario spaces of different sizes
Table 3.2  Seven-part cross-impact rating scale
Table 3.3  A Somewhereland scenario in list format
Table 3.4  The 10 Somewhereland scenarios in short format
Table 3.5  Significance of inconsistency classes depending on the number of descriptors
Table 4.1  The scenarios of Somewhereland-plus in short format
Table 5.1  Portfolio of the “Global socioeconomic pathways” matrix
Table 5.2  Only solution of the “Emerging country” matrix
Table 5.3  Portfolio after intervention on “A. Economic performance”
Table 5.4  Portfolio after intervention on “D. Social engagement”
Table 5.5  Portfolio after intervention on “F. Equity of chances”
Table 5.6  Portfolio after intervention on “H. Education”
Table 5.7  Portfolios of the two conditional matrices “Mobility demand”
Table 6.1  Descriptor variant classification
Table 6.2  Example of an inverse coding
Table 6.3  Correction of an inverse coding
Table 6.4  Coding an influence relationship using positive impacts
Table 6.5  Coding an influence relationship using negative impacts
Table 6.6  Coding an influence relationship using mixed impacts
Table 6.7  Example of a double negation
Table 6.8  A judgment section contributing to predetermination
Table 6.9  A phantom variant
Table 6.10  Using absolute cross-impacts
Table 6.11  Coding a text passage
Table 7.1  Descriptor field of the CIB analysis “Iran 1395”
Table A.1  Representation of the Somewhereland descriptor column “B. Foreign Policy” in Boolean rules. (Just like the cross-impact matrix, the rule set allows two B variants in certain cases)

Chapter 1

Introduction to CIB

Keywords: Cross-Impact Balances · CIB · Scenario · Foresight · Qualitative impact network

Dealing sensibly with future indeterminacy and uncertainty is increasingly important in a world where organizations at all levels of society must make long-term, high-stakes decisions and where these decisions must prove their correctness in an increasingly turbulent environment. Because they enable us to identify the scope for action and examine the prospects of our strategies, plans, and decisions, scenarios, i.e., sketches of alternative futures, have emerged in recent decades as a key tool for systematic preparation for an unknown future.

Scenarios are generated by different actors for different purposes using different methods. The methods range from simply thinking about the future to complex mathematical simulations. Surprisingly, there is a rather limited stock of methods for the middle range between mental reflection and mathematical simulation. This is all the more surprising when we realize that in our preparations for the future, we often must cope with “systems” that, on the one hand, are too complex to be penetrated by mental reflection but that, on the other hand, we understand (at least in part) only qualitatively, making a credible mathematical simulation difficult.

In this middle ground between simple and mathematically treatable questions about the future, cross-impact balances (CIB) has established itself since its publication (Weimer-Jehle, 2006) as a method for the algorithmic construction of qualitative scenarios and for qualitative systems analysis. With its help, scenarios and systems analyses have been produced on the topics of waste, the working world, education, biotechnology, energy, societal change, health and health infrastructure, industry and services, information technology, innovation, climate, management, mobility and transport, sustainability, politics, risk and security, urban and spatial planning, technology management, behavior change, and water supply. This range of application fields underscores that CIB has been accepted as a generic method of analysis despite its origin in energy economics research: CIB was originally


developed in 2001 as a scenario tool in a study by the Center for Technology Assessment in Baden-Württemberg on the liberalization of European electricity markets.1 About this Book In the meantime, an extensive and steadily growing body of literature on CIB applications, methodological research, and methodological reflections has emerged.2 However, a cohesive presentation of the method and its most important evaluation and interpretation approaches is lacking. This is a disadvantage for application practice because, on the one hand, CIB analysis is, contrary to its simple appearance, by no means without difficulties and pitfalls, while on the other hand, it offers much more analytical potential than mere scenario construction, which has been the focus of application practice in the past. Optimal use of the CIB method, including an exhaustive data evaluation and an appropriate interpretation of the results, requires a thorough understanding of the method and a sensible selection and application of evaluation approaches. The present volume is intended to provide this comprehensive presentation and, as an introduction, is also aimed at users who have little or no experience with CIB. It is intended as a guide for them in their first applications of the method, providing them with the tools required for solid basic use and for the interpretation of results. It refrains, however, from discussing in-depth methodological issues or addressing the more complex analytical procedures that have only recently been developed, such as hybrid scenarios (Weimer-Jehle et al., 2016), the combination of CIB with structureseeking statistical procedures (Pregger et al., 2020), or the coupling of CIB analyses performed at different regional levels (Schweizer & Kurniawan, 2016; Vögele et al., 2017, 2019). Additionally, the CIB concept of scenario succession and the related interpretation issues and analysis opportunities are only briefly discussed in an appendix. These and other advanced issues and possible applications of the CIB method are to be addressed in a subsequent volume. Scenarios As mentioned, scenario construction is the most common application of CIB thus far. It is therefore to this field of application that the descriptions in this book refer, with a few exceptions. This focus is not intended to disregard the value of CIB for qualitative systems analysis but is motivated by the expectation that the transfer of the methodological descriptions and considerations formulated here for scenario analysis to the field of qualitative systems analysis be straightforward. To fulfill their function as instruments for preparing for the future, scenarios must be well constructed. They must capture what we can reasonably assume today about the future and the forces that will shape it. Taken together, well-constructed scenarios must express the different directions in which these forces can steer events. There have been differing views about how best to achieve this purpose since the early days

1 Method development: Weimer-Jehle (2001). First method application: Förster (2002).
2 See the CIB bibliography at www.cross-impact.org/english/CIB_e_Pub.htm


There have been differing views about how best to achieve this purpose since the early days of scenario making in the 1950s and 1960s, from which two distinct “scenario cultures” developed.

Simply Thinking
Herman Kahn, the creator of the modern scenario concept, argued that the most important thing is to “think about the problem” (Schnaars, 1987: 109), in other words, to prepare scenarios without the use of formal construction methods. From Kahn’s perspective, formal construction techniques are perceived as a distraction and an impediment to inspiration, intuition, and free thinking. Following Kahn’s approach, the intuitive logics (IL) method emerged (Huss & Honton, 1987; Wilson, 1998), according to which scenarios are designed “by gut feeling” in expert discussions.3 The first groundbreaking successes of the scenario technique are due to this approach,4 and it is by far the most widely used scenario methodology to date, except probably in the area of scientific scenarios.

The Magical Number Seven Plus/Minus Two
Almost simultaneously with the preceding approach, however, another view of scenario construction emerged, which emphasized the value of formal methods in the collection of information and in actual scenario construction. One of the founders of this school of thought is Olaf Helmer, co-developer of the Delphi method for structured collection of expert assessments (Dalkey & Helmer, 1963) and co-inspirer of the first cross-impact techniques for formal analysis of expert judgments (Gordon & Hayward, 1968). Advocates of formal scenario construction can draw on weighty arguments from cognition research. In a 1956 essay that would become one of the most frequently cited publications in psychology textbooks,5 American psychologist George Miller evaluated a series of cognition experiments (Miller, 1956). He concluded that there is an upper limit to our mental capacity to accurately and reliably process information about simultaneously interacting elements6 and that this limit is seven plus or minus two elements. The essay triggered extensive and continuing research on the question, with the result that Miller’s “magical number” must be regarded as optimistically high (Cowan, 2001). The transfer of these findings of cognition research to the problem of scenario construction is inevitable and sobering.

3 Schnaars (1987:106): “Scenario writing is a highly qualitative procedure. It proceeds more from the gut than from the computer, although it may incorporate the results of quantitative models. Scenario writing assumes that the future is not merely some mathematical manipulation of the past, but the confluence of many forces, past, present and future that can best be understood by simply thinking about the problem.”
4 This refers in particular to the Shell scenarios on the eve of the oil crisis (Wack, 1985a, 1985b).
5 According to Gorenflo and McConnell (1991).
6 According to the interpretation of Saaty and Ozdemir (2003). Specifically, the cognitive tasks in the experiments analyzed by Miller were, for example, the identification of n discriminable stimuli and the ability to correctly reproduce n items from a read-out list.


If a scenario analysis addresses ten factors that will define the future (a rather modest number), then 90 potential interactions arise between these ten factors. If only about half of the potential interactions actually matter (which, as we will see in Sect. 6.3.2, is about average), persons attempting the mental construction of a scenario will have to keep in mind and weigh approximately 45 interrelationships to extract from them a scenario that considers all relevant interrelationships. Given the limits of our mental capacities shown by cognition research, can we hope to do justice to this task by intuitive scenario construction? A challenge for mental scenario construction also arises from another angle, that is, from the combinatorial weight of the task. Even if we content ourselves with a rough analysis and grant each of the ten factors three conceivable future developments, which we then must combine into meaningful scenarios, this process results in 3 to the power of 10, i.e., approximately 59,000 combinatorial alternatives, each of which must be considered a possible scenario until disproven. How many of these alternatives can be evaluated by mental reflection, and how many relevant scenarios with potentially massive implications go unnoticed when we finally find ourselves at the end of our time resources after intuitively identifying a few plausible combinations? Incidentally, as we will see later, combinatorial spaces with 59,000 combinatorial alternatives are among the lesser challenges faced in scenario analysis. However, the question of intuition-based versus formal construction of scenarios is not the only fundamental controversy in the scenario community. A second controversy is whether (or for what purposes) scenarios should rely essentially on quantitative data or whether they should also build substantially on qualitative bodies of knowledge.

If You Can’t Count It, It Doesn’t Count
This phrase represents the viewpoint that analyses of the future should focus on quantitative methods (e.g., mathematical system models) and quantitative data.7 The use of qualitative methods is perceived as a loss of mathematical rigor, and qualitative information as a “gateway” for data that in the worst case are ill-defined or ambiguous to interpret or that, it is argued, are too often put forward without a solid evidential base. Many advocates of this perspective acknowledge that unquantifiable factors can have an important influence on future events, particularly in systems whose evolution is shaped by human decisions. Nevertheless, they consider the disadvantage of the loss of rigor when opening the analysis to qualitative aspects to be more serious than the disadvantage of not taking such factors into account.

Better Approximately Right than Precisely Wrong
This phrase summarizes the counterperspective.8
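The two quantities invoked here are simple to compute. The following minimal Python sketch (the function names are illustrative and not CIB terminology) reproduces the arithmetic for ten factors:

    # Directed pairwise interactions among n factors: each factor can influence
    # each of the other n - 1 factors, giving n * (n - 1) potential interactions.
    def potential_interactions(n_factors: int) -> int:
        return n_factors * (n_factors - 1)

    # Combinatorial alternatives when each factor is granted the given number
    # of conceivable future developments.
    def combinatorial_alternatives(developments_per_factor: list[int]) -> int:
        total = 1
        for k in developments_per_factor:
            total *= k
        return total

    print(potential_interactions(10))            # 90 potential interactions
    print(combinatorial_alternatives([3] * 10))  # 59049, i.e., roughly 59,000 alternatives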

7 Huss (1988:378), for example, reports the prevalence of this perspective among forecasters during the 1980s.
8 The phrase is attributed to various individuals, including economist John Maynard Keynes and philosopher Karl Popper. However, the oldest published source known to the author refers to the British philosopher Carveth Read (1920:351) (“Better to be vaguely right than precisely wrong”).


Its advocates argue that the rigor of mathematical methods and quantitative data is meaningless and produces pseudoprecision if essential factors are excluded because they are incompatible with the preferred analysis technique. From this perspective, an approximate but, in essence, accurate picture of the problem is considered more helpful than a picture that is drawn in detail but misleading because of insufficient problem scoping. Action research teaches that ignoring important problem aspects in favor of “working convenience” is not uncommon among decision-makers. In The Logic of Failure, German psychologist Dietrich Dörner analyzed experiments on the behavioral patterns of decision-makers. He found various patterns that can easily lead to failure when one is dealing with complex real-world problems. Among typical causes of failed problem-solving, he found the tendency to tailor the problem view to one’s familiar arsenal of methods (rather than the other way around), noting critically: “We do not solve the problems we are supposed to solve, but the problems we can solve.”9 Many perceptive future researchers have long been aware that this danger also exists in their own research domain. The old master of French future research, Michel Godet, a proven friend of mathematical methods, nevertheless deplored in his article Reducing the Blunders in Forecasting the tendency of his professional colleagues to exclude the poorly quantifiable factors from consideration to the disadvantage of forecast reliability and wrote of the “. . . dangers of excessive quantification (the ever-present tendency to concentrate on things which are quantifiable to the detriment of those which are not). . .”.10

CIB and the Concept of “Mechanical Reasoning”
It is pointless to argue about which positions in these discourses are better-founded. However, it is not the case that the truth lies somewhere in the middle and that “everyone is a little bit right.” Rather, in future research, we encounter an immense variety of very different topics. The challenge lies in recognizing anew from case to case which arguments carry the greater weight in the application at hand. At its core, CIB analysis consists of collecting qualitative information on “cross-impacts,” i.e., the influence relationships between scenario factors, and coding these relationships using an ordinal scale. A software-supported simple balance algorithm is then applied en masse to determine which system developments form a self-stabilizing trend network and can thus be accepted as consistent scenarios. Therefore, with respect to the previously described discourse, the application area of CIB is research questions that (a) are too complex for exclusively mental treatment and at the same time (b) urgently require the inclusion of qualitative knowledge. When synthesizing the overall system picture, CIB combines the collected partial information about the pair relationships between system elements and assembles them into coherent constructions.
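Read as pseudocode, the balance idea can be sketched in a few lines. The following Python fragment is only an illustrative sketch under simplifying assumptions (the data structures and the function name are invented here; the precise algorithm and its terminology are developed in Chap. 3): a candidate combination of developments counts as self-stabilizing if, for every factor, the chosen development receives at least as much net support from the other chosen developments as any of its alternatives.

    # cim[(src_factor, src_dev)][(tgt_factor, tgt_dev)] = cross-impact rating (e.g., -3 ... +3)
    def is_self_stabilizing(combination, alternatives, cim):
        # combination: dict factor -> chosen development
        # alternatives: dict factor -> list of possible developments
        for factor, chosen in combination.items():
            def support(dev):
                # Net impact received by 'dev' from the developments chosen for the other factors.
                return sum(cim.get((f, d), {}).get((factor, dev), 0)
                           for f, d in combination.items() if f != factor)
            if any(support(alt) > support(chosen) for alt in alternatives[factor]):
                return False  # some alternative development is better supported
        return True

Applied “en masse,” i.e., to every combination in the combinatorial space, a check of this kind leaves exactly those combinations that form a self-stabilizing trend network in the sense described above.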

9 Dörner (1997), own translation from the German edition.
10 Godet (1983), pages 181, 182, 189. Godet does not conclude from this view that quantitative methods should be renounced but, rather, recommends a combination of qualitative and quantitative methods for prognostics. These considerations are transferable to the field of scenario methodology.


Correspondingly, American knowledge integration researcher Vanessa Schweizer compares CIB analysis with the procedure of mechanical reasoning (Schweizer, 2020). She refers to a discourse initiated by the psychologist Paul Meehl in 1954 and conducted for decades on how case predictions in psychology can best be obtained from case data (e.g., “Does therapy X have prospects of being successful in case Y?” or “Will offender Z recidivate or rather not?”). Are such case predictions best made through expert assessments and case conferences? Or are they better based on formal (today, we would say algorithmic) evaluation of case data, i.e., mechanical reasoning? In psychology, the evidence points to the superiority of mechanical reasoning, and Schweizer refers to the analogy to the process of creating and evaluating scenarios either through intuitive-discursive processes in expert workshops (“case conferences”) or through CIB (“mechanical reasoning”). Human reasoning is without question much deeper, more multifaceted and more adaptive than any form of mechanical reasoning. However, this comes at the price that far fewer factors and alternatives can be considered. Whenever the mass application of simple reflections yields more benefit than a small number of selectively deepened reflections, mechanical reasoning is likely to have an advantage over human reasoning. This is especially true for combinatorial problems, i.e., problems that are characterized by a large number of successive forks, each with several alternatives. Combinatorial problems arise, for example, in chess, where computer programs today can defeat any human opponent, and also in scenario construction.

How to Work with This Book
The structure of this book follows the concept of a self-study course. Chapter 2 presents a short introduction to the scenario technique and the position of CIB with respect to this technique.

Chapter 3
Chapter 3 presents the technical basics of CIB. We follow step by step how the workflow of a simple CIB analysis is structured and describe the concepts that are applied in it. The core of the chapter is a detailed explanation of the CIB algorithm, which, as experience shows, requires some time to understand despite its structural simplicity and the absence of complex mathematics. The reader should persist in this effort until success is attained because only a detailed understanding of how CIB evaluates cross-impact data allows access to the full application potential of the method and a solid interpretation of its results.

Chapter 4
After “basic training” has been completed, Chap. 4 takes a detailed look at the application of the method by presenting various analysis procedures with examples. This presentation of the spectrum of possible analyses is important for overcoming the narrow focus on scenario generation, which remains observable in CIB practice, and bringing the other, diverse analysis opportunities offered by the method to the reader’s attention.

Chapter 5
For most users, the desired result of a CIB analysis is probably a manageable portfolio of perhaps 3–6 clearly different scenarios. Such a result is in fact not atypical for a CIB analysis.


However, CIB does not return a result with standardized properties. Rather, the scenario portfolio it generates is an expression of the systemic relationships formulated in the cross-impact matrix. The consequent result from a system-analytical perspective can be a small or large, diverse or rather monotonous scenario portfolio, independently of the wishes and expectations of the user. Chapter 5 therefore addresses the case in which the result of a CIB analysis does not meet expectations. It describes using other or supplementary analysis approaches to arrive at a result that meets one’s needs or at least at an understanding of why one’s expectations are at odds with the system picture that was input into the CIB analysis.

Chapter 6
Now that it is clear how CIB functions in principle and what can be achieved with it, it is time to take a closer look at the three central data objects of the method: descriptors, descriptor variants, and cross-impact data. Hidden beneath the surface of the technical application are many differentiations and design decisions that can be handled well or poorly, unconsciously or purposefully. Chapter 6 therefore presents four “dossiers” that compile key information about these data objects and how to collect them. The dossiers are designed to provide in-depth information and can be dispensed with when reading the book for the first time. However, the reader is then advised to read the chapter in a second pass.

Chapter 7
Theory is followed by a visit to the workshops. Chapter 7 outlines four selected studies in which CIB was used by different research teams to analyze the future, to analyze systems or to critically review existing scenarios. The selection of examples is intended to reveal the thematic diversity of the application of the method. The examples also make clear that it is precisely in disciplines with entrenched methodological traditions that new perspectives can be gained by using this still young method.

Chapter 8
The final chapter is dedicated to reflection. Based on the literature and the author’s own practice, the chapter describes CIB’s strengths but also the challenges it poses. The limitations of CIB are also addressed because a comprehensive understanding of the method has only been acquired when it has also been understood in which cases the use of this method is not promising.

Some of the book’s content is presented in forms that require explanation. Where it seems useful, statistical analyses of method practice are presented in statistics boxes. These boxes are intended to enable the reader to position his or her own application within the spectrum of method applications. Nutshells are compact, self-contained presentations of selected analytical procedures. Memos (M for short) highlight important principles that one should remain aware of in method practice. Several small examples of cross-impact matrices are used to illustrate methodological issues. Occasionally, these matrices are adopted from the literature, which is indicated accordingly. Mostly, however, CIB miniatures developed especially for didactic purposes are used. These miniatures do not claim to treat their respective topics substantially, and the analysis results they present should not be misunderstood as solid statements about the subject. The purpose of the miniatures is exclusively to demonstrate the method.


The use of abstract matrices would have prevented these demonstrations from being misunderstood as content-oriented statements about the topics of the miniatures, but this approach would have reduced the miniatures’ vividness. The practical implementation of CIB analysis requires specialized computer software. At the time of printing this book, the free software ScenarioWizard is used almost exclusively in the published literature.11 All CIB evaluations in this book were performed using Version 4.4 of this software. As an invitation to reproduce the method demonstrations, the ScenarioWizard project files for all miniatures used in the book are provided at https://cross-impact.org/miniatures/miniatures.htm. Since this book is only intended as a support for CIB users and not as a general textbook about scenario methods, it makes little mention of other methods, and comparative methodological considerations are limited to a brief discussion of method alternatives in Sect. 8.5. The sole purpose of the text is to explain the CIB method, and it avoids burdening readers with explanations that are not necessary for this purpose. For more detailed information on other methods and for comparative analyses, I refer the reader to the general literature on scenario techniques and to more specifically focused research literature. For the same reason, there is little discussion in the book of how scenarios, once created, can be used to prepare for the future by organizations or academics. This question is not CIB-specific and therefore does not require a CIB-specific discussion, and there are many good descriptions of the question in the general scenario literature.12 The task of this book essentially ends with the completion of scenario analysis in the narrow sense, i.e., the development and analysis of scenarios.

References
Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24, 87–185.
Dalkey, N., & Helmer, O. (1963). An experimental application of the Delphi method to the use of experts. Management Science, 9, 458–467.
Dörner, D. (1997). The logic of failure: Recognizing and avoiding error in complex situations. Basic Books.
Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management – Prinzip und Werkzeuge der strategischen Vorausschau. Campus.
Förster, G. (2002). Szenarien einer liberalisierten Stromversorgung. Akademie für Technikfolgenabschätzung.
Godet, M. (1983). Reducing the blunders in forecasting. Futures, 15, 181–192.
Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of forecasting. Futures, 1(2), 100–116.
Gorenflo, D. W., & McConnell, J. (1991). The most frequently cited journal articles and authors in introductory psychology textbooks. Teaching of Psychology, 18, 8–12.

11 Available at: https://www.cross-impact.org/english/CIB_e_ScW.htm
12 E.g., Kosow and Gaßner (2008), Ringland (2006), Fink et al. (2002).


Huss, W. R. (1988). A move toward scenario analysis. International Journal of Forecasting, 4, 377–388.
Huss, W. R., & Honton, E. (1987). Alternative methods for developing business scenarios. Technological Forecasting and Social Change, 31, 219–238.
Kosow, H., & Gaßner, R. (2008). Methods of future and scenario analysis – Overview, assessment, and selection criteria. DIE Studies 39. Deutsches Institut für Entwicklungspolitik.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. The Psychological Review, 63, 81–97.
Pregger, T., Naegler, T., Weimer-Jehle, W., Prehofer, S., & Hauser, W. (2020). Moving towards socio-technical scenarios of the German energy transition – lessons learned from integrated energy scenario building. Climatic Change, 162, 1743–1762. https://doi.org/10.1007/s10584-019-02598-0
Read, C. (1920). Logic – deductive and inductive (4th ed.). Simpkin & Marshall.
Ringland, G. (2006). Scenario planning – Managing for the future. John Wiley.
Saaty, T. L., & Ozdemir, M. S. (2003). Why the magic number seven plus or minus two. Mathematical and Computer Modelling, 38, 233–244.
Schnaars, S. P. (1987). How to develop and use scenarios. Long Range Planning, 20(1), 105–114.
Schweizer, V. J. (2020). Reflections on cross-impact balances, a systematic method constructing global socio-technical scenarios for climate change research. Climatic Change, 162, 1705–1722.
Schweizer, V. J., & Kurniawan, J. H. (2016). Systematically linking qualitative elements of scenarios across levels, scales, and sectors. Environmental Modelling & Software, 79, 322–333. https://doi.org/10.1016/j.envsoft.2015.12.014
Vögele, S., Hansen, P., Poganietz, W.-R., Prehofer, S., & Weimer-Jehle, W. (2017). Scenarios for energy consumption of private households in Germany using a multi-level cross-impact balance approach. Energy, 120, 937–946. https://doi.org/10.1016/j.energy.2016.12.001
Vögele, S., Rübbelke, D., Govorukha, K., & Grajewski, M. (2019). Socio-technical scenarios for energy intensive industries: The future of steel production in Germany in the context of international competition and CO2 reduction. Climatic Change, 1–16. (Also: STE preprint 5/2017, Forschungszentrum Jülich.)
Wack, P. (1985a). Scenarios – uncharted waters ahead. Harvard Business Review, 62(5), 73–89.
Wack, P. (1985b). Scenarios – shooting the rapids. Harvard Business Review, 63(6), 139–150.
Weimer-Jehle, W. (2001). Verfahrensbeschreibung Szenariokonstruktion im Projekt Szenarien eines liberalisierten Strommarktes. Akademie für Technikfolgenabschätzung in Baden-Württemberg.
Weimer-Jehle, W. (2006). Cross-impact balances: A system-theoretical approach to cross-impact analysis. Technological Forecasting and Social Change, 73(4), 334–361.
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T., Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.energy.2016.05.073
Wilson, I. (1998). Mental maps of the future: An intuitive logics approach to scenario planning. In L. Fahey & R. M. Randall (Eds.), Learning from the future: Competitive foresight scenarios (pp. 81–108). Wiley.

Chapter 2

The Application Field of CIB

Keywords Cross-Impact Balances · CIB · Scenario · Foresight · Qualitative impact network

2.1 Scenarios

Scenarios are a future research concept for dealing with future openness and uncertainty. According to the definition by Michael Porter (1985), a scenario is:

. . . an internally consistent view of what the future might turn out to be—not a forecast, but one possible future outcome.

Scenarios thus assume that there are multiple possible futures and that it is not possible to recognize in the present which of them will occur. From the perspective of scenario technique, preparing for the future means dealing with a variety of possible futures instead of focusing, as in forecasting, on one expected future and aligning our actions specifically with this expectation. Figure 2.1 visualizes this concept by means of the so-called “scenario funnel” (e.g., Kosow & Gaßner, 2008). Here, the development of a system is sketchily represented by two quantitatively measurable key variables (“Trend A” and “Trend B”). At time P (the present), the state of the trend variables is known. In the future, the trend variables may evolve away from their present state. The further we look into the future, the greater the uncertainty about the state of the trend variables becomes, and the funnel enclosing the possibility space of the system opens. In the case of a forecast, one would rely on the center of the funnel. This is appropriate when the opening of the funnel is narrow. Often, however, in long-term decision problems, the opening of the funnel is so wide that the different locations of the opening represent essentially different ideas of the future and require different decisions. Then, it would be inappropriate to focus on the center, and the width of the opening is better addressed by a “portfolio” of different scenarios wisely distributed across the opening. Figure 2.1 has only an illustrative function and makes the basic idea of the scenario technique understandable.


Fig. 2.1 The scenario funnel. Two key variables (Trend A, Trend B) are plotted over time; starting from the present state P, the funnel of possible developments widens toward the future and opens onto Scenarios I, II, and III.

The front surface of the funnel is not necessarily circular but can take on more complicated shapes. In reality, more than two key variables are usually required to describe the future development of a system in a meaningful way, often including qualitative variables that cannot be measured on a numerical scale. The development of the modern scenario concept is usually attributed to Herman Kahn, who worked at the RAND Corporation in the 1950s, when he advised the US government on military strategy issues (Kahn, 1960). Roots of thought, however, can be traced further back historically (cf. von Reibnitz, 1987). After a striking and economically momentous application of the concept in the corporate sector by the Shell company shortly before the first oil crisis (Wack, 1985a, 1985b), the use of scenario techniques quickly spread in academia, business, and administration. As early as the beginning of the 1980s, in a corporate survey, approximately 50% of the responding large US companies already confirmed that they used scenarios. All users had had sufficiently positive experiences with the method to want to continue using it (Linneman & Klein, 1983), and later surveys showed further increases in usage rates. The high importance of the concept in future research also is reflected in the use of terms in the literature. Textual analyses of electronically recorded English-language books show that “scenario” became a dominant term in foresight around 1994, surpassing the frequencies of use of the competing terms “projection” and “forecast” (Trutnevyte et al., 2016).

2.2 Scenarios and Decisions

Table 2.1 Performance matrix for the evaluation of planning variants: the performance of Planning Variants A–E under Environmental Scenarios I–IV, rated on the five-part scale --, -, o, +, ++. Own depiction based on Fink et al. (2002)

Scenarios are instruments for preparing for the future. As an example of the role that scenarios can play here, we consider their use in the development of long-term planning and decision-making. Preparation for the future is necessary here because planning must be aligned with its environment to be successful, not just at the time of planning but for the entire planning period. For example, given the very long-term investments involved, the planning of an urban drainage system cannot be based solely on current needs. Rather, planning must consider that, in the long term, population level, age structure, lifestyles, and consumption habits may change, changes may occur in the amount and structure of industrial and commercial activity in the settlement, and legal standards, such as environmental requirements, also may change.1 Only if these and other environmental conditions and their future uncertainty are adequately considered can we hope for robust planning. For short-term planning, it can be assumed for simplicity that the current environment will continue to exist. For medium-term planning, however, a change in the planning environment must be assumed, although the changes are still foreseeable (predictable). In the case of long-term planning, however, as in the case of urban drainage, this no longer applies. The planning environment can change so much that the changes lead to completely different environmental conditions. Trends and structures can turn, and it is not clearly foreseeable in which direction. This uncertainty cannot be eliminated by scenarios, but it can be made manageable. For this, the environmental uncertainty is represented by a series of scenarios, and for each scenario, how a proposed planning variant would perform in that scenario can be analyzed. This leads to a performance matrix (Table 2.1). Planning success is represented here in simplified form by a five-part ordinal scale [--, -, o, +, ++]. Decision-makers can now use the performance matrix to assess which planning variant best meets their objectives. Planning variants D and E can be discarded immediately, since planning variants A and B offer alternatives that promise at least the same or better performance for all scenarios.2

1 The example is based on a study of the impact of demographic changes on a wastewater company (John, 2009).
2 In short, only Pareto-optimal planning variants must be considered.


Table 2.2 Types and functions of scenarios

Explorative scenarios. Focus: What can happen? Function: Expanding the understanding of trends and interrelationships in the system under study. Strategy development and assessment of environmental influences on planning. Testing the robustness of planning.
Normative scenarios. Focus: What should happen? Function: Development and concretization of objectives. Reflection on the desirability and feasibility of certain developments.
Backcasting scenarios. Focus: What do we need to do to make . . . happen? Function: Identify actions needed to realize a targeted development path.
Policy analysis scenarios (strategy scenarios). Focus: What happens if . . .? Function: Development of several scenarios, each assuming a different decision today. Comparison of the strengths and weaknesses of the alternative paths.
Communication scenarios. Focus: How can I make my idea of the future understandable to others? Function: Promote shared understanding about problems and perspectives. Organizational learning.

Risk-affine
decision-makers will opt for planning variant A, since a good result can be expected for most environmental scenarios. Risk-averse decision-makers, however, will refrain from planning variant A because of the result for environmental scenario IV and prefer planning variant B instead because planning failure (i.e., negative performance) is not to be expected for any environmental scenario. However, as mentioned above, a robustness test for planning variants is only one example of the possible uses of scenarios. Scenarios can have very different functions in preparing for the future. They can aim at knowledge enhancement, organizational learning, goal formation or decision preparation (Kosow & Gaßner, 2008). In each case, the focus is on different questions (Table 2.2).
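The reasoning applied to the performance matrix can also be written down algorithmically. The following Python sketch uses hypothetical performance values (coded as -- = -2, - = -1, o = 0, + = 1, ++ = 2); the numbers are illustrative stand-ins, not a reconstruction of Table 2.1, and the two selection rules are simple proxies for the risk attitudes discussed above.

    # Hypothetical performance of planning variants A-E under environmental scenarios I-IV.
    performance = {
        "A": [ 2,  2,  2, -2],
        "B": [ 1,  0,  2,  0],
        "C": [-2,  2,  0,  2],
        "D": [ 0,  0, -2, -1],
        "E": [-1, -2,  0, -1],
    }

    def dominated_by(p, q):
        # q dominates p: at least as good in every scenario and strictly better in one.
        return all(b >= a for a, b in zip(p, q)) and any(b > a for a, b in zip(p, q))

    # Pareto filter: discard variants that some other variant dominates.
    admissible = {k: v for k, v in performance.items()
                  if not any(dominated_by(v, w) for j, w in performance.items() if j != k)}

    risk_affine = max(admissible, key=lambda k: sum(admissible[k]))  # best aggregate outcome
    risk_averse = max(admissible, key=lambda k: min(admissible[k]))  # best worst case
    print(sorted(admissible), risk_affine, risk_averse)              # ['A', 'B', 'C'] A B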

2.3 Classifying CIB

The position of CIB in the field of scenario methods has already been indicated in the introduction: CIB is particularly suitable for the analysis of complex future developments that can be understood only with the help of qualitative information (Fig. 2.2). Here, CIB has a unique position in some respects (cf. Sect. 8.1). For other requirements, other methods are available.

Fig. 2.2 A classification of scenarios and scenario methods. The diagram spans two dimensions: the data type (quantitative: all important information about the system and the system relationships can be quantified; qualitative: all or a significant portion of the information about the system and system interrelationships is in qualitative form) and the type of data processing (formal: the system interrelationships are complex and their implications cannot be grasped by mental processes; mental: the system interrelationships are of low complexity and their implications can be grasped by mental processes). Sector I (quantitative, formal): mathematical simulations. Sector II (qualitative, formal): CIB, consistency matrix. Sector III (qualitative, mental): intuitive logics, scenario axes. Sector IV (quantitative, mental).

Scenarios for quantitative systems are often created with the help of mathematical models, for example, scenarios for traffic flows in a city assuming different variants of road construction projects. Scenarios whose subject matter can be described entirely or partially by qualitative factors and which are created quickly and without a sophisticated methodological framework are often developed using the Intuitive Logics (Wilson, 1998) or Scenario Axes methods (van ’t Klooster & van Asselt, 2006). An example would be a citizen panel on urban development where citizens are asked to express their ideas, preferences, and concerns in a series of alternative district scenarios. Approaches to mental processing of quantitative systems (Sector IV) do not play a significant role in scenario practice.3 For the field in the upper right (“Sector II”), in which the necessity of the inclusion of qualitative information coincides with the necessity of dealing with complexity, there are only a few methods established in broad application practice. The classical method for this profile of requirements is the Consistency Matrix method (aka Field Anomaly Relaxation) developed around 1970 (Rhyne, 1974; von Reibnitz, 1987; Johansen, 2018), and CIB can be understood as its further development.

3 However, both estimative (“intuitive”) and computational quantifications of qualitative scenarios partly play a role in the elaboration of qualitative “raw scenarios” into comprehensive pictures of the future (see, e.g., Alcamo, 2008 or Weimer-Jehle et al., 2016, 2020).


Fig. 2.3 Example of a qualitative network of interacting nodes (nodes A to L, connected by arrows representing bilateral influence relationships)

While the consistency matrix collects and utilizes information about which developments are mutually exclusive, CIB goes beyond this and works with a qualitative causal model of the system under investigation, which allows deeper access to the what and why of system behavior (Weimer-Jehle, 2009). CIB can be understood as a form of qualitative network analysis. The system under study is conceived as an array of network nodes, between which a network of bilateral influence relationships operates (Fig. 2.3). The network is qualitative because only a limited number of discrete states are assigned to each node, with the influence relationships between nodes determining which states are active in the nodes. The arrows between the nodes abbreviate a matrix indicating which state of the source node promotes or hinders which state of the destination node. The configurations of active states that can occur under the influence of the interactions are the scenarios of the network behavior. CIB’s general approach to analyzing a qualitative network consists of four steps, as shown in Fig. 2.4, using the example of a simple future analysis on the topic of societal development. First, the factors that should be part of the scenarios and are able to represent the most important system interrelationships are selected. These factors are the nodes of the network and are called “descriptors” in CIB.4 Next, a small number of alternative futures (“descriptor variants”) are formulated for each descriptor.

4 The earliest use of the descriptor term in the scenario technique known to the author goes back to Honton et al. (1985).


Fig. 2.4 Workflow of a CIB analysis. Step I: Selecting scenario factors (e.g., government, foreign policy, economy, distribution of wealth, social cohesion, social values). Step II: Representing future uncertainty by alternative futures for each factor (e.g., foreign policy: cooperation, rivalry, or conflict; social cohesion: social peace, tensions, or unrest; social values: meritocratic, solidarity, or family). Step III: Identifying interdependences through impact assessments, rated as promoting (+) or hindering (-); for example, Expert A: “Social peace in Somewhereland promotes the emergence of a meritocratic value system, since stable social conditions are a prerequisite for social reward systems.” Step IV: Constructing scenarios, e.g., Scenario no. 1: Government ‘Social party’, Foreign policy Cooperation, Economy Stagnant, Distribution of wealth Balanced, Social cohesion Social peace, Social values Solidarity.

These describe which future uncertainty (or future openness) is assumed for the descriptor. The descriptor variants represent the discrete states of the network nodes in Fig. 2.3. In this respect, CIB follows the program of morphological analysis, a general method for structuring possibility spaces (Zwicky, 1969). In the third step, however, CIB takes a different approach than classical morphological analysis and turns, in the style of a cross-impact analysis (CIA, Gordon & Hayward, 1968), to the influence relationships between the descriptors, i.e., the arrows between the network nodes. To do this, information is collected on whether the development of one descriptor X influences which development prevails in another descriptor Y. This information is then coded on an ordinal scale from strongly hindering to strongly promoting. These relationships are referred to as “cross-impacts.” Cross-impact analysis is the name given to a relatively broad group of methods developed from the 1960s onward to examine qualitative information on interacting events and trends in very different ways. With the name “Cross-Impact Balances,” CIB places itself in this tradition, but with the special name variant, it also refers to a characteristic peculiarity that distinguishes it from other cross-impact analyses: the use of impact balances as its central analysis instrument. Finally, in the fourth step, the collected information about the pair relationships of the system is synthesized into coherent images of the overall system, i.e., plausible network configurations, with the help of an algorithmic procedure. The results are interpreted as consistent scenarios. How exactly this synthesis step is performed will be the subject of Chap. 3.


Next comes a step that is no longer part of the CIB analysis in a strict sense and yet is its objective: the utilization of scenarios by individuals or organizations in the context of their decision-making and their preparation for the future. However, the different uses for scenarios in planning and decision-making processes are not CIB-specific and therefore are not the subject of this book. Here, reference must be made to the general scenario literature. As a rule, different people are involved in different roles in a CIB analysis. For further use in the text, four terms are introduced:

Core team: The core team consists of the person(s) who design, conduct, evaluate, and document the CIB analysis.
Sources: The information on the main factors and interdependencies of a system required for a CIB analysis can be obtained from the literature and/or by interviewing experts. The term “(knowledge) sources” is used as an umbrella term for both resources of information acquisition.
Experts: When people play the role of knowledge sources for a CIB analysis, they are referred to as the “experts.”
Target audience: The preparation of the CIB analysis aims to provide the “target audience” with orientation to the system under study and thereby support goal setting or decision-making.

In practice, the roles may overlap. For instance, people from the target audience of a CIB analysis also may have expertise on the issue under investigation and contribute to the analysis by participating in the expert panel. In the remainder of the text, these terms are therefore used as role designations, irrespective of the personnel involved.

References
Alcamo, J. (2008). The SAS approach: Combining qualitative and quantitative knowledge in environmental scenarios. In J. Alcamo (Ed.), Environmental futures – the practice of environmental scenario analysis (Vol. 2, pp. 123–150). Elsevier.
Fink, A., Schlake, O., & Siebe, A. (2002). Erfolg durch Szenario-Management – Prinzip und Werkzeuge der strategischen Vorausschau. Campus.
Gordon, T. J., & Hayward, H. (1968). Initial experiments with the cross impact matrix method of forecasting. Futures, 1(2), 100–116.
Honton, E. J., Stacey, G. S., & Millet, S. M. (1985). Future scenarios – the BASICS computational method. Economics and policy analysis occasional paper (Vol. 44). Battelle Columbus Division.
Johansen, I. (2018). Scenario modelling with morphological analysis. Technological Forecasting and Social Change, 126, 116–125.
John, S. (2009). Bewertungen der Auswirkungen des demographischen Wandels auf die Abwasserbetriebe Bautzen mit Hilfe der Szenarioanalyse. Dresdner Beiträge zur Lehre der betrieblichen Umweltökonomie 34/09. University of Dresden.
Kahn, H. (1960). On thermonuclear war. Oxford University Press.
van ’t Klooster, S. A., & van Asselt, M. B. A. (2006). Practising the scenario-axes technique. Futures, 38(1), 15–30.
Kosow, H., & Gaßner, R. (2008). Methods of future and scenario analysis – Overview, assessment, and selection criteria. DIE Studies 39. Deutsches Institut für Entwicklungspolitik.


Linneman, R. E., & Klein, H. E. (1983). The use of multiple scenarios by U.S. industrial companies: A comparison study 1977–1981. Long Range Planning, 16, 94–101.
Porter, M. E. (1985). Competitive advantage. Free Press.
von Reibnitz, U. (1987). Szenarien – Optionen für die Zukunft. McGraw-Hill.
Rhyne, R. (1974). Technological forecasting within alternative whole futures projections. Technological Forecasting and Social Change, 6, 133–162.
Trutnevyte, E., McDowall, W., Tomei, J., & Keppo, I. (2016). Energy scenario choices: Insights from a retrospective review of UK. Renewable and Sustainable Energy Reviews, 55, 326–337.
Wack, P. (1985a). Scenarios – uncharted waters ahead. Harvard Business Review, 62(5), 73–89.
Wack, P. (1985b). Scenarios – shooting the rapids. Harvard Business Review, 63(6), 139–150.
Weimer-Jehle, W. (2009). Szenarienentwicklung mit der Cross-Impact-Bilanzanalyse. In J. Gausemeier (Ed.), Vorausschau und Technologieplanung (pp. 435–454). HNI-Verlagsschriftenreihe 265.
Weimer-Jehle, W., Buchgeister, J., Hauser, W., Kosow, H., Naegler, T., Poganietz, W.-R., Pregger, T., Prehofer, S., von Recklinghausen, A., Schippl, J., & Vögele, S. (2016). Context scenarios and their usage for the construction of socio-technical energy scenarios. Energy, 111, 956–970. https://doi.org/10.1016/j.energy.2016.05.073
Weimer-Jehle, W., Vögele, S., Hauser, W., Kosow, H., Poganietz, W.-R., & Prehofer, S. (2020). Socio-technical energy scenarios: State-of-the-art and CIB-based approaches. Climatic Change, 162, 1723–1741. https://doi.org/10.1007/s10584-020-02680-y
Wilson, I. (1998). Mental maps of the future: An intuitive logics approach to scenario planning. In L. Fahey & R. M. Randall (Eds.), Learning from the future: Competitive foresight scenarios (pp. 81–108). Wiley.
Zwicky, F. (1969). Discovery, invention, research through the morphological approach. Macmillan.

Chapter 3

Foundations of CIB

Keywords Cross-Impact Balances · CIB · Scenario · Algorithm · Qualitative systems analysis · QSA · Consistency · Morphological analysis · ScenarioWizard

This chapter provides a step-by-step description of how to conduct a CIB analysis. The data objects used in CIB (descriptors, descriptor variants, and cross-impact ratings) are introduced, and CIB’s data evaluation algorithm is explained. The goal of the chapter is to make the technical process of applying the method and the concept of consistent scenarios, as practiced in CIB, understandable. Many methodological and practical issues are left aside for the time being and are addressed in subsequent chapters. The process of a CIB analysis is outlined in this chapter in the context of a simple demonstration. The object of the demonstration is the fictitious country “Somewhereland” and the question, already briefly addressed in Sect. 2.3, regarding which societal futures are plausible for this country in, say, 20 years’ time under the effect of the interdependencies of political, economic, and social developments.

3.1 Descriptors

Descriptors are the key topics that are used to compose the scenarios during the analysis. Together, they should allow us to describe the system under study and its most important internal interactions. The term originates in librarianship and computer science to describe words that can be used to index the content of texts or datasets. Since the 1980s, the term also has been used in scenario techniques (Honton et al., 1985). In some cases, the term “(scenario) factors” is used instead in the scenario literature (e.g., Gausemeier et al., 1998). To identify the necessary descriptors, it can be helpful to adopt a fictitious future perspective: Imagine that the target year of the scenario analysis had been reached and that you, as a chronicler, were faced with the task of concisely describing the “past” development and explaining it in retrospect. Which topics would then appear to be particularly worth mentioning?


Fig. 3.1 The descriptor field for the “Somewhereland” analysis: A. Government, B. Foreign policy, C. Economy, D. Distribution of wealth, E. Social cohesion, F. Social values

Which connections and cause-effect
relationships would have to be explained? In a limited text, it is not possible and not necessary to go into every detail. However, the chronicler must dissect the system into various partial developments to the extent that the developments that have occurred can be made understandable. Six descriptors are used for the “Somewhereland” demonstration analysis. Somewhereland is a multiparty democracy. Which party governs the country and thus shapes its political course is consequently an important but open question. Because Somewhereland has many neighboring countries with which it shares a variable history, the stance of its foreign policy also is an essential part of the story to be told. Economic development and the distribution of wealth also will contribute to shaping how the country develops. Whether social cohesion is strengthened or fractured will be the result but at the same time the cause of developments in other areas. Finally, closely interwoven as a cultural undercurrent with all these developments is the question of the social values that prevail in Somewhereland. In summary, the analysis addresses the descriptor field shown in Fig. 3.1. The selection of the descriptors is a first and decisive work step for the analysis quality. Different procedures for the practical execution of this step are described in Sect. 6.4.

3.2 Descriptor Variants

Openness of the future means that the descriptors could generally take different developments. Only then does the scenario approach make sense. If the future were foreseeable for all descriptors, the system would be predictable, and it would be unnecessary to develop different scenarios for the system. The openness of the future for each descriptor is captured in CIB by assigning a number of different developments to each descriptor as “descriptor variants.” Figure 3.2 shows an example of possible variants for the Somewhereland descriptor “A. Government.”


Fig. 3.2 Descriptor variants (alternative futures) for the descriptor “A. Government”: A1 ‘Patriots party’, A2 ‘Prosperity party’, A3 ‘Social party’

Fig. 3.3 Compilation of descriptors and their variants for Somewhereland

A. Government: A1 ‘Patriots party’, A2 ‘Prosperity party’, A3 ‘Social party’
B. Foreign policy: B1 Cooperation, B2 Rivalry, B3 Conflict
C. Economy: C1 Shrinking (ca. -2 %/a), C2 Stagnant (ca. 0 %/a), C3 Dynamic (ca. 2 %/a)
D. Distribution of wealth: D1 Balanced, D2 Strong contrasts
E. Social cohesion: E1 Social peace, E2 Tensions, E3 Unrest
F. Social values: F1 Meritocratic, F2 Solidarity, F3 Family

It is assumed for Somewhereland that three political parties compete democratically for power, which they can win and often maintain for a long time in a majority electoral
system without coalitions. The parties characterize their programmatic emphases by the catchwords “patriotic,” “prosperity,” and “social,” but they also pursue the other political goals to a lesser extent in each case. The programmatic emphases of the parties are placed in quotation marks because they are self-attributions. The choice of variants defines the range of possible futures that are considered realistic for the descriptor and relevant for the analysis. Typically, 2–4 variants are assigned to each descriptor, although the number of variants may vary from descriptor to descriptor, and occasionally, more variants are used. For special uses, individual descriptors with only one variant may be considered. In the “Somewhereland” example, the descriptor variants shown in Fig. 3.3 are used. The alphabetical labeling of descriptors (A, B, C, . . .) with the numbering of descriptor variants (B1, B2, B3) used in the example is applied throughout this book. However, this is not a binding convention in CIB.

3.2.1 Completeness and Mutual Exclusivity of the Descriptor Variants

CIB assumes that the range of possible futures of a descriptor is fully described, albeit qualitatively, by its set of descriptor variants. The structure in Fig. 3.3 thus means, for example, that the future rise of a fourth party, which is insignificant today, is ruled out at least for the time period under consideration.


Otherwise, the possibility of this development would have to be considered by inserting an additional descriptor variant for the fourth party. Only then could the subsequent CIB analysis examine whether and under which circumstances this fourth option is “activated.” On the other hand, it would be excessive and impracticable to cover every even remotely conceivable development with an additional descriptor variant. A restriction to the alternatives that are actually considered relevant and significantly different is therefore unavoidable; this requires dexterity and an awareness that, at this point, the arena of possibilities in which the analysis will take place is determined. To uniquely represent each possible future of the system under study by exactly one scenario, the variants of each descriptor also must be defined in a mutually exclusive manner. This means that for the development of a descriptor that is to be described by the descriptor variant X, no other variant of the same descriptor may be applicable at the same time. Together, completeness and mutual exclusivity mean that each relevant future development of a descriptor must always correspond to one and only one of its variants.
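Stated operationally, the two requirements mean that a well-formed scenario assigns to every descriptor exactly one of that descriptor's defined variants. A minimal Python check (an illustrative helper, not CIB terminology) could look as follows:

    def is_well_formed(scenario, descriptor_variants):
        # scenario: dict mapping each descriptor to the single variant chosen for it.
        # descriptor_variants: dict mapping each descriptor to its list of defined variants.
        # Completeness: every descriptor receives an assignment, and nothing else is assigned.
        if set(scenario) != set(descriptor_variants):
            return False
        # The assigned development must be one of the defined, mutually exclusive variants.
        return all(scenario[d] in variants for d, variants in descriptor_variants.items())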

3.2.2 The Scenario Space

Defining a scenario based on a set of descriptors and descriptor variants means selecting one variant for each descriptor from its range of variants. For example:

A3  B1  C2  D1  E3  F1

is an abbreviation for the scenario:

A. Government: A3 “social party”
B. Foreign policy: B1 cooperation
C. Economy: C2 stagnant
D. Distribution of wealth: D1 balanced
E. Social cohesion: E3 unrest
F. Social values: F1 meritocratic

With the descriptors and descriptor variants defined, the number of scenarios that can be constructed from a combinatorial point of view also is fixed, forming the scenario space from which we can select scenarios. The number of scenarios Z in the scenario space is:

Z = V1 ∙ V2 ∙ V3 ∙ . . . ∙ VN

where Vi is the number of variants of descriptor i and N is the number of descriptors. Table 3.1 shows some examples of scenario spaces of different numbers of descriptors N.

Table 3.1 Number of scenarios in scenario spaces of different sizes

N      Vi = 2       Vi = [3,2,3,2, . . .]   Vi = 3
5      32           108                     243
10     1024         7776                    59,049
15     32,768       839,808                 14.4 m
20     1.05 m       60.5 m                  3.49 bn
25     33.6 m       6.53 bn                 847 bn
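The formula and the table can be reproduced directly in a few lines of Python; the numbers of variants below follow the Somewhereland descriptor field of Fig. 3.3 (the code is a small illustration, not part of the method itself):

    from itertools import product
    from math import prod

    # Number of variants per Somewhereland descriptor (A-F), cf. Fig. 3.3.
    n_variants = {"A": 3, "B": 3, "C": 3, "D": 2, "E": 3, "F": 3}

    # Z = V1 * V2 * ... * VN
    print(prod(n_variants.values()))   # 486 combinatorially possible scenarios

    # The scenario space itself can be enumerated explicitly.
    space = list(product(*[[f"{d}{i}" for i in range(1, n + 1)]
                           for d, n in n_variants.items()]))
    print(len(space), space[0])        # 486 ('A1', 'B1', 'C1', 'D1', 'E1', 'F1')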

Capturing possibility spaces through a list of central characteristics and their optional variants is a general reflection technique called morphological analysis (Zwicky, 1969). It is widely used in scenario analysis (Rhyne, 1974; Honton et al., 1985; von Reibnitz, 1987; Gausemeier et al., 1998; Johansen, 2018). Thus, this first step of the CIB analysis is not CIB-specific. Characteristic of CIB, however, is how scenarios are selected from the scenario space.

3.2.3 The Need for Considering Interdependence

CIB emphasizes the perspective that it would be too simplistic to see a credible scenario in every combination of descriptor variants. This would imply that the developments of the descriptors are all completely independent of each other, so that, for example, the future of descriptor “E. Social cohesion” is completely open, regardless of the direction in which, for example, descriptors “D. Distribution of wealth” or “F. Social values” develop. However, in reality, social cohesion in a society certainly depends to some extent on whether or not society’s wealth is distributed according to widely accepted criteria. Blindly combining descriptor variants would mean ignoring such interrelationships, and a scenario thus easily loses the central quality that distinguishes a good scenario from an arbitrary imagination of the future and that also is at the heart of the definition of the term “scenario” (cf. Sect. 2.1): its inner logic, its consistency. The next step on the way to creating credible scenarios must therefore be to address the interdependencies between the various descriptors and their variants. Unlike the traditional consistency matrix method (cf. Sect. 8.5), however, CIB is not limited to a correlational understanding of interdependence (which developments can occur together and which cannot). Instead, CIB works with a causal understanding of interdependence (who influences whom, and how).

3.3 Coping with Interdependence: The Cross-Impact Matrix

From a combinatorial point of view, the scenario space for Somewhereland is easy to describe: When the variants of all descriptors are freely combined, there are 3 ∙ 3 ∙ 3 ∙ 2 ∙ 3 ∙ 3 = 486 possible scenarios. For larger numbers of descriptors, the number of scenarios increases rapidly, as Table 3.1 shows.


Table 3.2 Seven-part cross-impact rating scale

Rating scale for cross-impact judgments:
-3  Strongly hindering influence
-2  Hindering influence
-1  Weakly hindering influence
 0  No influence
+1  Weakly promoting influence
+2  Promoting influence
+3  Strongly promoting influence

As explained in Sect. 3.2, to ensure the internal consistency of the scenarios, it is necessary to examine the interdependencies between the descriptors, and this means first to identify them. In CIB, this is done by assessing the “cross-impacts” between the descriptor variants, i.e., by assessing the influence that the development of one descriptor exerts on the development of another descriptor.1

M1: The cross-impact x → y answers the question: If development x were to occur for descriptor X, would this promote or hinder development y for descriptor Y?

For the rating of the cross-impacts, an integer scale is used, which is mostly chosen from -3 (strongly hindering) to +3 (strongly promoting) (Table 3.2).2 However, other rating intervals, for example, from -2 to +2, also are possible in CIB. A cross-impact assessment brings together all cross-impact ratings that relate the influence of one descriptor on another descriptor. Ideally, the deliberations underlying the ratings are documented along with the cross-impact values. For the influence of descriptor “A. Government” on descriptor “B. Foreign Policy,” the cross-impact assessment might look like Fig. 3.4. Here, among other things, the judgment is coded that a government that places patriotic issues in the foreground of political discourse would have (in Somewhereland) a hindering effect of medium strength on the emergence of a cooperative style in foreign policy (entry -2 in the upper left of the number field in Fig. 3.4). It also is possible to express the cross-impact ratings by cumulating plus and minus signs in the matrix cells (Fig. 3.5) instead of using the numerical values -3 . . . +3.

1 The term “cross-impact” places CIB in the tradition of “cross-impact analysis” first proposed in the 1960s by Gordon and Helmer and Gordon and Hayward (Gordon & Hayward, 1968). See Sect. 6.3.1.
2 Strictly speaking, the type of scale used in CIB is an interval scale with a discrete metric. It is more presupposing than an ordinal scale because it assumes, for example, that the effect of “+2” is twice as strong as the effect of “+1,” whereas an ordinal scale would assume only that the effect of “+2” is stronger than the effect of “+1.”


                          B. Foreign policy
  A. Government            B1 Cooperation   B2 Rivalry   B3 Conflict
  A1 'Patriots party'            -2             +1            +1
  A2 'Prosperity party'          +2             +1            -3
  A3 'Social party'               0              0             0

The emphasis on national interests hinders Somewhereland in developing cooperative international relations, since the primacy of national self-interest stands in the way of balancing interests with partners. On the other hand, an economy-oriented policy specifically seeks cooperation in order to create the political environment for economic partnerships. A certain degree of rivalry in the case of protective measures for domestic industries remains conceivable, however. Serious conflicts, however, are regarded as a disruptive factor by an economy-oriented policy and are resolutely avoided.

Fig. 3.4 Cross-impact assessment of the influence between two descriptors
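CIB does not prescribe how such a judgment section is held in software. As a minimal illustration, the ratings of Fig. 3.4 could be stored in a simple nested mapping (the data structure is an assumption for illustration only, not part of the method):

# Cross-impact judgment section "A. Government" -> "B. Foreign policy"
# (ratings as in Fig. 3.4; rows are impact sources, columns are impact targets)
cross_impact_A_to_B = {
    "A1": {"B1": -2, "B2": +1, "B3": +1},  # A1 'Patriots party'
    "A2": {"B1": +2, "B2": +1, "B3": -3},  # A2 'Prosperity party'
    "A3": {"B1":  0, "B2":  0, "B3":  0},  # A3 'Social party'
}

# The cross-impact x -> y: if A2 were to occur, would this promote or hinder B1?
print(cross_impact_A_to_B["A2"]["B1"])  # +2: promoting influence of medium strength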

                          B. Foreign policy
  A. Government            B1 Cooperation   B2 Rivalry   B3 Conflict
  A1 'Patriots party'            --              +             +
  A2 'Prosperity party'          ++              +            ---
  A3 'Social party'               0              0             0

Fig. 3.5 Representing cross-impact data without use of numbers

Kosow et al. (2022), following Weitz et al. (2019), point out another alternative form of representation (Fig. 3.6). The evaluation of the data is identical in all cases. The difference lies solely in the style of presentation. Documentation of the justifications for the cross-impact ratings, as indicated in Fig. 3.4, is not required for the technical procedure of scenario construction using CIB. However, it is still recommended for several reasons. First, the documentation is helpful for the core team itself to understand the internal logic of the completed scenarios and to be able to explain them to others. The documented arguments also make it easier for third parties to independently understand the cross-impacts and thus the foundation of the scenarios. This makes it easier for the target audience of the analysis to convince themselves of the plausibility of the scenarios or to offer more targeted and constructive criticism.


                          B. Foreign policy
  A. Government            B1 Cooperation   B2 Rivalry   B3 Conflict
  A1 'Patriots party'            --              +             +
  A2 'Prosperity party'          ++              +            ---
  A3 'Social party'

Fig. 3.6 Representation of cross-impact data following Weitz et al. (2019)

Finally, it also can be difficult for the core team itself after some time to remember in detail the reasons for the cross-impact assessments. The documentation is then the key to being able to understand and explain one's own work even years later.

All cross-impact assessments taken together form the cross-impact matrix. In the case of Somewhereland, it takes the form shown in Fig. 3.7. The cross-impact data used here were chosen by the author and are merely illustrative. In practice, cross-impact data are usually collected through literature review and/or expert elicitation (cf. Sect. 6.4).

Just as for Fig. 3.4, it is also true for the entire matrix that in CIB the descriptors and descriptor variants in the rows are regarded as impact sources and in the columns as impact targets.

Thus, the judgment cell marked by a small circle on the right in Fig. 3.7 describes the strongly promoting impact that social unrest (E3) would have on the emergence of family orientation as a dominant social value (F3). It is crucial to carefully observe the convention of rows as the source of impact and columns as the target of impact because confusing the roles of rows and columns leads to the reversal of cause and effect in the impact relationship and thus to the corruption of the internal logic of the scenarios. As a rule, the diagonal evaluation fields remain empty since they would not describe interdependencies but the influence of a descriptor on itself. In special cases, however, the diagonal fields also can be used to describe self-influences.3 The CIB algorithm is able to handle this issue as well.

3 An example of the use of diagonal fields is described in Weimer-Jehle et al. (2012). In this study, a diagonal field is used to represent that the practice of sports promotes the enjoyment of physical activity and that this further strengthens the inclination to engage in sports.

Fig. 3.7 The cross-impact matrix of Somewhereland. The rows and columns comprise all descriptors and their variants (A. Government: A1 'Patriots party', A2 'Prosperity party', A3 'Social party'; B. Foreign policy: B1 Cooperation, B2 Rivalry, B3 Conflict; C. Economy: C1 Shrinking, C2 Stagnant, C3 Dynamic; D. Distribution of wealth: D1 Balanced, D2 Strong contrasts; E. Social cohesion: E1 Social peace, E2 Tensions, E3 Unrest; F. Social values: F1 Meritocratic, F2 Solidarity, F3 Family); the figure additionally marks an example of a judgment cell, a judgment group, and a judgment section

Optionally, the cross-impact matrix also can be printed without the judgment sections that are completely filled with zeros (Fig. 3.8). This can improve clarity, especially for matrices that (unlike Somewhereland) have a high proportion of empty judgment sections. In this book, cross-impact matrices are generally displayed without empty judgment sections. However, representations with visible empty judgment sections also are correct and widespread in the literature.

Only direct influences should be considered when assessing cross-impacts. The consideration that social peace in Somewhereland encourages investments and thus has a beneficial effect on economic development describes a direct relationship that should be coded. In contrast, the consideration that social peace makes it easier for people to develop a meritocratic attitude, which in turn subsequently has a favorable effect on the electoral prospects of the "Prosperity party," would be an indirect influence of social cohesion on government, because this chain of reasoning leads via a third descriptor, namely from Descriptor E to Descriptor F and only then, in a second step, from Descriptor F to Descriptor A. Such indirect relationships are considered automatically by the analysis algorithm if their components are coded as direct impacts in the matrix. The additional coding of the indirect influence would therefore lead to a double counting of the effect.

Fig. 3.8 The cross-impact matrix printed without influence-free judgment sections

Thus, the correct way to express the previously described train of thought in CIB is to code both parts of the indirect effect path separately: promoting meritocracy through social peace (E1 → F1) and promoting the electoral prospects of the "Prosperity party" through meritocracy (F1 → A2). For a detailed discussion of indirect influences, see Sect. 6.3.2.

The completed cross-impact matrix is a formal representation of the system view of the experts who made the assessments. It can only be as good and realistic as the experts' understanding of the system, and other experts may arrive at different assessments. The role of a CIB analysis is to capture the systemic implications of the interdependencies laid out in the matrix, whether or not the coded system view is contestable.

3.4 Constructing Consistent Scenarios

Having defined the space in which scenarios can be searched by specifying the descriptors and their variants, and having formulated a conceptual system model by collecting the cross-impacts that will be the database for distinguishing between consistent and inconsistent scenarios, we turn to the heart of the CIB method: scenario construction using the CIB algorithm. The idea of CIB can be demonstrated graphically or in tabular form. We start with the graphical visualization of the approach.


Fig. 3.9 One of 486 scenarios for Somewhereland (A. Government: A2 'Prosperity party'; B. Foreign policy: B1 Cooperation; C. Economy: C3 Dynamic; D. Distribution of wealth: D1 Balanced; E. Social cohesion: E1 Social peace; F. Social values: F1 Meritocratic)

3.4.1 The Impact Diagram

Without considering the interdependencies between the descriptors, any combination of descriptor variants could be considered a scenario. As mentioned in Sect. 3.3, in the case of the Somewhereland example, this would be 486 scenarios. With the help of the cross-impact matrix, it is now possible to assess for each of these scenarios whether it is in line with the interdependencies formulated in the matrix or whether contradictions, i.e., inconsistencies, appear. Figure 3.9 shows one of these 486 scenarios: the scenario [A2 B1 C3 D1 E1 F1], i.e., a scenario in which for descriptor A the variant A2 is active, for descriptor B the variant B1 is active, and so on. The formal cross-checking between this scenario and the cross-impact matrix is done in CIB by looking up for each descriptor pair which influence relationship exists between their active descriptor variants. For the influence relationship between the descriptor boxes "A. Government" and "B. Foreign policy," this is shown in Fig. 3.10. The extract from the cross-impact matrix in the upper left-hand corner of Fig. 3.10 shows that if the descriptor "A. Government" is assigned the variant "A2 'Prosperity party'" and the descriptor "B. Foreign policy" is assigned the variant "B1 Cooperation," the impact of A on B will be a promotion of medium strength (+2, see the highlighted judgment cell in the upper left-hand corner of Fig. 3.10). This is represented graphically in Fig. 3.10 by a green arrow of medium strength. In terms of content, this expresses the judgment that an economy-focused party is likely to seek cooperation with other countries to promote trade relations and thus the domestic economy. A reverse effect from B1 to A2 is not coded in the matrix (cross-impact: 0). Likewise, the cross-impact matrix can be used to graphically represent all other descriptor relationships for the scenario under study. Figure 3.11 shows this result.


Fig. 3.10 Consulting the cross-impact matrix on the influence relationship A2 → B1

It is important for an understanding of CIB to keep in mind that the diagram shown in Fig. 3.11 applies exclusively to the scenario studied here. Each of the 486 combinatorial scenarios of Somewhereland has its own impact diagram.

3.4.2 Discovering Scenario Inconsistencies Using Influence Diagrams

The influence diagram allows us to assess the consistency of each descriptor variant and of the scenario as a whole. Descriptor "C. Economy" with its variant "C3 Dynamic" attracts numerous green arrows. This means that many pro-economic developments take place in the scenario under study, that is, an economy-focused government, a cooperative relationship with neighboring countries, a society that is at peace with itself, and a culture that values performance. Overall, the large number of green arrows arriving at "C3 Dynamic economy" indicates that the assumption made here is plausible and understandable against the background of the other descriptor variants active in the scenario, provided one accepts the system description encoded in the matrix. The situation is different for descriptor "D. Distribution of wealth," which is assigned the variant "D1 Balanced" in the scenario under study. There is not a single green arrow pointing to this descriptor, whereas there are three red arrows. This expresses that, although the assumption "Balanced distribution of wealth" is part of the scenario under study, actually nothing in the scenario speaks in favor of this assumption, but much speaks against it:

Fig. 3.11 The impact diagram of scenario [A2 B1 C3 D1 E1 F1]. The diagram connects the descriptor boxes A2 'Prosperity party', B1 Cooperation, C3 Dynamic, D1 Balanced, E1 Social peace, and F1 Meritocratic with arrows whose color and thickness encode the cross-impacts, from strongly hindering (-3) to strongly promoting (+3)

Pro-business policy of the government and meritocracy in society tend to fuel income and wealth contrasts in the population (at least in Somewhereland), and even though dynamic economic growth brings wealth gains for many, the gains turn out to be higher for the wealthy than for the lower income groups. Thus, it is largely incomprehensible why a balanced distribution of wealth should be assumed in this environment, and the bundle of incoming red arrows at Descriptor D is the formal expression of the lack of plausibility of this component of the scenario. The negative result of this plausibility check is anticipated in Fig. 3.11 by drawing the questionable descriptor variant "D1 Balanced" against a red background. Thus, according to this form of graphical plausibility check, three descriptors in Fig. 3.11 are assigned unambiguously plausible descriptor variants (A, B, and C). Two descriptors receive mixed impacts (E and F). However, since the green arrows predominate, the selected descriptor variants appear acceptable in these cases as well. One descriptor, however (D), is clearly at odds with its environment and therefore, as a part of this particular scenario, implausible. In CIB, the entire scenario must thus be discarded. It does not form an intact fabric of mutually supporting assumptions about the descriptor futures, and the scenario is therefore discredited as a logical construct. Just as in mathematics, where a proof is valid only if no link in the chain of reasoning fails, CIB accepts a scenario as a consistent solution of the cross-impact matrix only if no logical weakness shows up at any descriptor. Rather, in a consistent scenario, for each descriptor a variant must be active that is understandable in light of the influences of the other descriptors. As will be shown, this is typically true for only a very small part of the possible combinations of descriptor variants.

3.4.3 Formalizing Consistency Checks: The Impact Sum

In view of the high number of scenarios to be checked in real applications, a consistency check is ultimately practicable only if it can be performed with software support. This requires a formalization of the described visual inspection of the impact diagrams. The key to this formalization is the impact sum, which is obtained in the impact diagram for each descriptor by summing up all the influences acting on the descriptor, considering the signs and strength ratings. The impact sum expresses the net balance of the promoting and hindering influences on the descriptor. In Fig. 3.12, the impact sums are shown below each descriptor. For instance, Descriptor E attains the impact sum +4, since the promoting impacts +3 (Descriptor C) and +3 (Descriptor D) and the hindering impact -2 (Descriptor F) act on it. In total, this results in +3 + 3 - 2 = +4. The plausibility argument for the impact diagram developed in Sect. 3.4.2 requires that for each descriptor, a variant is active that attracts as many green arrows as possible and as few red arrows as possible, whereby strong influences bear a higher weight than weak ones. Translated to the concept of impact sums, this means that a scenario is plausible and internally consistent if for each descriptor the impact sum is as high as possible. In accordance with the above visual inspection of the impact diagram in Fig. 3.11, a glance at the impact sums in Fig. 3.12 now shows that the assumed variant for Descriptor C is particularly plausible and the assumed variant for Descriptor D is strikingly implausible. Next, we need to clarify how to interpret the "highest possible impact sum" requirement. In CIB, the following understanding applies (Weimer-Jehle, 2006):

The impact sum of a descriptor is not "as high as possible," and its active variant is therefore inconsistent with the rest of the scenario, if the impact sum could be increased by changing to a different descriptor variant.

Fig. 3.12 The influence diagram of Fig. 3.11 with the impact sums of the descriptors (A2: +3, B1: +2, C3: +10, D1: -7, E1: +4, F1: +2)

If even one of the descriptors has an inconsistent descriptor variant, then the scenario as a whole must be rejected as inconsistent. Only if the impact sum of none of the descriptors can be increased by changing the active variant is the consistency of the scenario confirmed and the scenario accepted as a valid solution of the matrix.4 Thus, CIB imposes a comparative criterion on the impact sums ("strong consistency"). Absolute criteria, such as requiring the impact sum to be at least 0 for all descriptors ("weak consistency"), are possible but can lead to scenarios that appear questionable against the background of the strong consistency principle; they are rarely used in practice.

4 Mathematically, the search for consistent scenarios in CIB is equivalent to discrete-valued multiobjective optimization: multiobjective because all descriptors should simultaneously achieve the highest possible impact sums, and discrete-valued because each descriptor can choose only one option from a set of enumerable options. The solutions of this optimization task are referred to in mathematics as "Nash equilibria" (cf. Nash, 1951).
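The strong consistency criterion itself fits into a few lines of code. The sketch below is only an illustration (not the ScenarioWizard implementation); it uses the impact sums that the variants of Descriptor D attain in the test scenario examined in the next subsection, namely -7 for D1 and +7 for D2:

def variant_is_consistent(active_variant, impact_sums):
    """Strong consistency: the impact sum of the active variant must not be
    exceeded by the impact sum of any other variant of the same descriptor."""
    best_alternative = max(v for k, v in impact_sums.items() if k != active_variant)
    return impact_sums[active_variant] >= best_alternative

# Impact sums of the variants of Descriptor D in the test scenario (cf. Sect. 3.4.4)
impact_sums_D = {"D1 Balanced": -7, "D2 Strong contrasts": +7}

print(variant_is_consistent("D1 Balanced", impact_sums_D))          # False -> inconsistent
print(variant_is_consistent("D2 Strong contrasts", impact_sums_D))  # True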

3.4.4 The Formalized Consistency Check at Work

Figure 3.13 shows that the impact sum for Descriptor D can actually be increased in our test scenario by changing the active variant for this descriptor. At the top, the part of Fig. 3.12 responsible for the impact sum of Descriptor D in the test scenario is shown. The bottom of Fig. 3.13 shows for comparison how the impact sum for Descriptor D changes if, with an otherwise unchanged scenario, D2 (strong contrasts in wealth distribution) is assumed to be active instead of D1: The impact sum for Descriptor D increases from -7 to +7 due to the change of the active variant. This expresses that the assumption of strong contrasts in wealth distribution is more strongly favored by the variants of the other descriptors in this scenario than the originally assumed descriptor variant. This observation proves that no consistent variant was selected for Descriptor D in the test scenario [A2 B1 C3 D1 E1 F1]. Thus, the scenario is rejected, and the consistency check of the test scenario has produced a clear answer. It is noteworthy that the local recovery of consistency at an inconsistent descriptor by replacing the inconsistent descriptor variant with its consistent counterpart does not necessarily mean that the modified scenario is now automatically consistent. In the example, the modified scenario is actually inconsistent because the improvement made to Descriptor D has caused a new inconsistency elsewhere (Fig. 3.14).


Fig. 3.13 Demonstration of the inconsistency of descriptor variant D1 (the Descriptor D column of the cross-impact matrix, evaluated once with D1 and once with D2 as the active variant of Descriptor D)

The variant D2 now selected for Descriptor D fits into the picture as long as the rest of the scenario is assumed to continue. However, the new variant D2 calls into question whether the assumption "E1 Social peace" for Descriptor E (Social cohesion) is still tenable, since strong contrasts in wealth undermine social peace in Somewhereland (according to the matrix author). This interdependence of the plausibility of the individual components of a scenario underscores the complexity of the task of finding a thoroughly consistent scenario, and it underlines the special quality of the few scenarios that are completely flawless from the point of view of the CIB consistency check.

Fig. 3.14 Consequential inconsistency in Descriptor E after adjustment of Descriptor D (impact diagram of the modified scenario [A2 B1 C3 D2 E1 F1]; the impact sum of the now active variant D2 rises to +7, while the impact sum of E1 Social peace drops to -2)

3.4.5 From Arrows to Rows and Columns: The Matrix-Based Consistency Check

The consistency check based on the impact diagrams is illustrative and plausible. However, for larger systems, impact diagrams become confusing as the number of arrows rises. In addition, an impact diagram only allows the impact sums of the active descriptor variants to be traced. The comparison with the impact sums of the nonactive variants, which is decisive for the consistency assessment, cannot be done directly but requires, as in Fig. 3.13, the comparison with further impact diagrams, which show the descriptors with modified variants. Therefore, if in application practice it is necessary to perform a consistency assessment by hand, for example, to verify the results of the software-based evaluation, this is usually better done based on a tabular implementation of the CIB algorithm. Figure 3.15 shows how the calculation of the impact sums known from Fig. 3.12 can be carried out in tabular form. For this purpose, all rows and columns belonging to the active descriptor variants of the examined scenario are marked in the cross-impact matrix. The intersection cells of the marked rows and columns (highlighted in dark in Fig. 3.15) represent, if they do not carry the value 0, the impact arrows shown in the corresponding impact diagram. Thus, the entry "3" in the intersection cell of Row F1 and Column A2 corresponds to the thick green arrow drawn from descriptor box F to descriptor box A in Fig. 3.12. The column sums of the intersection cells in Fig. 3.15 equal the impact sums of the corresponding descriptors, as confirmed by comparison with Fig. 3.12.

Fig. 3.15 Matrix-based calculation of impact sums for scenario [A2 B1 C3 D1 E1 F1]. In the cross-impact matrix, the rows and columns of the active descriptor variants are marked; the column sums of the intersection cells yield the impact sums A2: +3, B1: +2, C3: +10, D1: -7, E1: +4, F1: +2

The practical advantage of the matrix-based consistency check is that the impact sums of the nonactive descriptor variants also can be easily derived. For this purpose, only the rows, but not the columns, of the active descriptor variants are marked, as shown in Fig. 3.16. The sum of all marked rows then yields the impact balances,5 i.e., the compilation of the impact sums of all descriptor variants, regardless of whether they are active (bottom row in Fig. 3.16). In the row "Impact balances," the impact sums of the active descriptor variants (shown inverted) can now be easily compared with the impact sums of the nonactive variants of the same descriptor. In this way, it can be determined for which descriptors a nonactive descriptor variant would achieve a higher impact sum than the active variant and thus indicate an inconsistency (a tie would be acceptable). For our test scenario, we already know from the graphical consistency check in Sect. 3.4.4 what the result will be. The matrix-based consistency check in Fig. 3.16 leads to the same result: Descriptor D (and only Descriptor D) violates the consistency condition because the nonactive descriptor variant D2 achieves a higher impact sum of +7 than the active variant D1, which achieves only -7.

5 The term "impact sums" refers to the sum of impacts on a single descriptor variant, while the "impact balance" consists of all impact sums of a descriptor.

Fig. 3.16 Matrix-based calculation of the complete impact balances of a scenario. Only the rows of the active descriptor variants are marked; summing the marked rows yields the impact balances A: (A1: 0, A2: +3, A3: -3), B: (B1: +2, B2: +1, B3: -3), C: (C1: -9, C2: -1, C3: +10), D: (D1: -7, D2: +7), E: (E1: +4, E2: -1, E3: -3), F: (F1: +2, F2: -1, F3: -1)

All other descriptors, however, satisfy the consistency condition in this scenario: The active descriptor variant A2 achieves an impact sum of +3 and thus lies above the impact sums of A1 (0) and A3 (-3). The same applies to the impact sums of the variants of Descriptors B, C, E, and F. Of course, even for a small CIB matrix such as that of Somewhereland, it would be unfeasible to obtain the solutions by manual consistency checks of all descriptor variant combinations. This check must be executed in a software-based manner. However, the matrix-based consistency check makes it possible to convince oneself of the correctness of the calculated scenarios quickly and easily with the help of paper and pencil. On the one hand, this can help the core team better understand the inner logic of the identified scenarios. Moreover, the manual check also can serve as a confidence-building tool to demonstrate the validity of the computer results to participants of the scenario exercise who are not familiar with the CIB method or to the target audience of the scenario analysis. Because of this possibility of retrospective validation without technical aids, CIB analysis can avoid or at least mitigate black-box effects.
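Once the impact balances of a scenario are known, the matrix-based check can also be reproduced in a few lines of code. The following sketch is an illustration only (not the ScenarioWizard implementation) and uses the impact balances of the test scenario as reported in Fig. 3.16:

# Impact balances of the test scenario [A2 B1 C3 D1 E1 F1] (cf. Fig. 3.16)
impact_balances = {
    "A": {"A1": 0, "A2": 3, "A3": -3},
    "B": {"B1": 2, "B2": 1, "B3": -3},
    "C": {"C1": -9, "C2": -1, "C3": 10},
    "D": {"D1": -7, "D2": 7},
    "E": {"E1": 4, "E2": -1, "E3": -3},
    "F": {"F1": 2, "F2": -1, "F3": -1},
}
active = {"A": "A2", "B": "B1", "C": "C3", "D": "D1", "E": "E1", "F": "F1"}

for descriptor, balance in impact_balances.items():
    act = active[descriptor]
    best_alternative = max(v for k, v in balance.items() if k != act)
    if balance[act] < best_alternative:  # a tie would still be acceptable
        print(f"{descriptor}: active variant {act} is inconsistent "
              f"({balance[act]} < {best_alternative})")
# Output: D: active variant D1 is inconsistent (-7 < 7)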

3.4.6 Scenario Construction

The described consistency analysis ("CIB algorithm") allows us to check each proposed scenario for its internal consistency and its consistency with the cross-impact matrix. The result of the check is, in simple terms, a "yes" (scenario is consistent and can be used) or a "no" (scenario has at least one flaw and should be discarded).6 Strictly speaking, the CIB algorithm is therefore a scenario consistency assessment tool, not a scenario construction tool. However, this assessment tool also is the key to scenario construction: In the standard CIB procedure, the construction is done by checking all descriptor variant combinations of the scenario space and identifying the few combinations that pass the consistency check as "solutions of the matrix," i.e., as consistent scenarios. Since the consistency check is formulated in a programmable way by introducing the impact sums, cross-impact matrices with millions or billions of variant combinations can be evaluated.7 In the Somewhereland example, 10 of the 486 combinations of descriptor variants pass the consistency check. These 10 successful scenarios will be referred to below as the "scenario portfolio" of the matrix and will be considered in detail in the next subchapter.

6 Sect. 3.6 describes how gradual consistency ratings can be calculated with the CIB consistency test and what additional information can be derived from them.
7 At the time of printing of this book, nearly all published CIB applications were carried out using the freely available software ScenarioWizard (https://www.cross-impact.org). However, the algorithm in its basic form does not pose great challenges to experienced programmers and allows self-developed software solutions.
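In code, the standard procedure amounts to a brute-force loop over the scenario space with the consistency check as a filter. The sketch below is a schematic illustration of this idea only: it uses an invented mini-matrix with three descriptors and two variants each (not the Somewhereland data), and real applications would normally rely on tested software such as ScenarioWizard:

from itertools import product

# Invented illustrative cross-impact data: cim[source_variant][target_variant] = rating
variants = {"X": ["X1", "X2"], "Y": ["Y1", "Y2"], "Z": ["Z1", "Z2"]}
cim = {
    "X1": {"Y1": +2, "Y2": -2, "Z1": +1, "Z2": -1},
    "X2": {"Y1": -2, "Y2": +2, "Z1": -1, "Z2": +1},
    "Y1": {"X1": +1, "X2": -1, "Z1": +2, "Z2": -2},
    "Y2": {"X1": -1, "X2": +1, "Z1": -2, "Z2": +2},
    "Z1": {"X1": 0, "X2": 0, "Y1": +1, "Y2": -1},
    "Z2": {"X1": 0, "X2": 0, "Y1": -1, "Y2": +1},
}

def impact_sum(target_variant, scenario):
    """Sum of the impacts that the active variants of the other descriptors
    exert on the given target variant."""
    return sum(cim[source].get(target_variant, 0) for source in scenario
               if not source.startswith(target_variant[0]))

def is_consistent(scenario):
    """Strong consistency: for no descriptor may an alternative variant
    achieve a higher impact sum than the active variant."""
    for descriptor, active_variant in zip(variants, scenario):
        active_sum = impact_sum(active_variant, scenario)
        for alternative in variants[descriptor]:
            if alternative != active_variant and impact_sum(alternative, scenario) > active_sum:
                return False
    return True

# Check all combinations of the scenario space and keep the consistent ones
portfolio = [s for s in product(*variants.values()) if is_consistent(s)]
print(portfolio)  # [('X1', 'Y1', 'Z1'), ('X2', 'Y2', 'Z2')]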

3.5 How to Present CIB Scenarios

As mentioned above, 10 of the 486 combinatorially possible scenarios of the Somewhereland matrix prove to be successful in the CIB consistency test and are considered solutions of the Somewhereland matrix (“consistent scenarios”). These scenarios can be presented in several ways. The obvious way is to simply list the descriptors together with their active variants (“list format”). Table 3.3 shows this for one of the consistent scenarios of the Somewhereland matrix. However, this format is cumbersome for larger scenario portfolios and makes it difficult to perceive the differences between scenarios. The “short format” is more compact: Scenario No. 7: [A3 B1 C3 D2 E1 F1].


Table 3.3 A Somewhereland scenario in list format

Scenario No. 7
  A. Government               A3 'Social party'
  B. Foreign policy           B1 Cooperation
  C. Economy                  C3 Dynamic
  D. Distribution of wealth   D2 Strong contrasts
  E. Social cohesion          E1 Social peace
  F. Social values            F1 Meritocratic

Table 3.4 The 10 Somewhereland scenarios in short format

 1. [A1 B3 C1 D2 E3 F3]    or, even more compact:   [1 3 1 2 3 3]
 2. [A1 B3 C2 D1 E1 F3]                             [1 3 2 1 1 3]
 3. [A1 B3 C2 D1 E1 F2]                             [1 3 2 1 1 2]
 4. [A1 B2 C2 D1 E1 F2]                             [1 2 2 1 1 2]
 5. [A3 B2 C2 D1 E1 F2]                             [3 2 2 1 1 2]
 6. [A3 B1 C2 D1 E1 F2]                             [3 1 2 1 1 2]
 7. [A3 B1 C3 D2 E1 F1]                             [3 1 3 2 1 1]
 8. [A3 B2 C3 D2 E1 F1]                             [3 2 3 2 1 1]
 9. [A2 B2 C3 D2 E2 F1]                             [2 2 3 2 2 1]
10. [A2 B1 C3 D2 E2 F1]                             [2 1 3 2 2 1]
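If scenarios are processed in software, switching between the two notations of Table 3.4 is a matter of a small helper function. The following sketch is merely an illustrative convention, not part of the CIB method itself:

def compact_form(short_format):
    """Turn a short-format scenario such as 'A1 B3 C1 D2 E3 F3' into the
    compact numeric form [1, 3, 1, 2, 3, 3] by keeping only the variant numbers."""
    return [int(token[1:]) for token in short_format.split()]

print(compact_form("A1 B3 C1 D2 E3 F3"))  # [1, 3, 1, 2, 3, 3]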

Using the "short format," all 10 consistent scenarios of the Somewhereland matrix resulting from the consistency check of the 486 descriptor variant combinations can be reported in a concise form (Table 3.4). The numbering of the scenarios can basically be chosen freely because the order of listing implies no judgment of the scenarios.8 It is also important to be aware that neither the order of the descriptors in the matrix nor the order of the variants within a descriptor has an effect on the algorithm and thus on the formation of the scenarios. Both the order of the descriptors and the order of their variants can be chosen freely. However, reading the content of a scenario portfolio listed in short format is tedious. The tableau format often used in CIB studies is better suited for this, either with integrated (Fig. 3.17) or separate descriptor listing (Fig. 3.18).9 In the tableau format, the scenarios are to be read vertically from top to bottom. If neighboring scenarios share the same variant of a descriptor, the relevant table cells are merged to highlight the similarity of the scenarios at this point and to increase the readability of the tableau by reducing the amount of text. The order of the scenarios can be changed to bring similar scenarios into proximity to each other. Such a sorting has already been applied here.

8 The scenario order of the scenario list shown in Table 3.4 is not identical with the result output of the ScenarioWizard, as it has already been rearranged to prepare the following figures.
9 The tableau format for CIB scenarios is based on a proposal by Dipl.-Ing. Christian D. León.

Fig. 3.17 The Somewhereland scenarios in tableau format with integrated descriptor listing. In the tableau, the ten scenarios are grouped under the scenario titles "Society in crisis," "The principle of hope," "Protectionism," "Cozy society," "Us vs. them," and "Prosperity in a divided society"

Fig. 3.18 The Somewhereland scenarios in tableau format with separate descriptor listing

The top row of the tableau contains scenario titles (or mottos) that summarize the essence of the scenario. These titles are not a result of the CIB evaluation but are created by interpreting the scenarios. Different scenarios with similar characteristics can be combined into a scenario family by means of a shared title. The individual scenarios within the scenario family then present themselves as subvariants of the shared title. Often a suitable title is suggested by reading the scenario. It can also be helpful to look at the scenario’s impact diagram (or its tabular equivalent) to gain an understanding of the cause-effect relationships in the scenario and to draw inspiration for the choice of title. Occasionally, one also finds that certain descriptor variants are exclusive to a single scenario or scenario family and can therefore be considered characteristic features that suggest an informative scenario title. For example, “C1 Shrinking economy” and “E3 Unrest” occur only in Scenario no. 1 and were therefore the inspiration for the title of Scenario no. 1 chosen in Figs. 3.17 and 3.18. “A2 ‘Prosperity party’” and “E2 Tensions” occur only in scenario family no. 9/no. 10 and therefore shape the view of these scenarios. A particularly thorough but time-consuming way to find a title is to formulate a storyline for the scenario and then condense the storyline into a title. The procedure for storyline development is discussed in detail in Sect. 4.6.


Fig. 3.19 Impact diagram of Somewhereland scenario no. 10 [A2 B1 C3 D2 E2 F1]; the arrows range from strongly hindering (-3) to strongly promoting (+3)

Scenario no. 10 builds on the test scenario used to explain the CIB consistency check in Figs. 3.12 and 3.14. This scenario follows the original test scenario with respect to Descriptors A, B, C, and F but considers the change in Descriptor D, the necessity of which was made clear in Fig. 3.12, and adjusts for the consequential inconsistency in Descriptor E, which arose after the correction of D in Fig. 3.14. Finally, with these changes, the complete consistency of the scenario is achieved, which is evident by its appearance on the solution list. The corresponding impact diagram can be seen in Fig. 3.19. In Fig. 3.19, several aspects are worth noting. As expected, the CIB algorithm ensures that no descriptor in a consistent scenario has a particularly poor impact sum. However, this does not mean, as Fig. 3.19 and the impact diagrams shown earlier might suggest, that impact sums in consistent scenarios are generally nonnegative10 or that inconsistent descriptors have generally negative impact sums. Since the CIB consistency principle is designed as a comparative requirement for the impact sums, the consistency of a descriptor is decided only by comparing its impact sum with the impact sums that the other variants of the descriptor would achieve. These are not directly visible in the impact diagram, so that the level of the impact sum in the impact diagram is only a provisional indication of whether there is consistency for the descriptor in question. In the end, however, the impact sums of the alternative descriptor variants must be calculated and compared.

10 However, they often are, and in fact always are if the cross-impacts fulfill the so-called standardization condition (Weimer-Jehle, 2009, cf. Sect. 6.3.2).


Figure 3.19 further shows that impact diagrams of consistent scenarios also can contain descriptors that are exposed to hindering (red) impacts. However, the consistency condition also guarantees for these descriptors that their active variant was a good choice despite the hindering impacts, either because the hindering impacts are countered by sufficient promoting impacts or because the alternative variants of the descriptor would perform even worse or at least not better in their impact sums. For example, Fig. 3.19 shows that assumed dynamic economic development is hindered by social tensions (which can cause a deterioration in investor confidence, impair the consumer climate and disturb industrial peace in companies). On the other hand, business-friendly policies of the government, cooperative foreign relations and a widespread meritocratic culture provide so many forces promoting dynamic economic development that the assumption of weak economic development would be more questionable than the dynamic economic development assumed in the scenario.

3.6 Key Indicators of CIB Scenarios

As described, the core function of CIB is to confirm or refute the consistency of a scenario. However, CIB generates additional information as part of this review process, which can also be useful for understanding and assessing the scenario.

3.6.1 The Consistency Value

The consistency value is the most important key indicator of a scenario in CIB practice. Its sign determines the two-valued statement "consistent" or "not consistent." In addition, it allows for a gradation of the consistency assessment on an integer scale. Consistency values can be calculated for individual descriptors as well as for the whole scenario.

Descriptor Consistency Values
As explained in Sect. 3.4.3, the decisive consistency criterion in CIB is that the impact sum of an active descriptor variant is not exceeded by any impact sum of a nonactive variant of the same descriptor. It is therefore natural to define the degree of consistency by the difference by which the impact sum of the active variant exceeds the impact sums of the nonactive variants of the same descriptor. We can read these differences from the impact balances of the descriptors. The impact balances of the already known test scenario [A2 B1 C3 D1 E1 F1] are shown in Fig. 3.16 and are summarized in Fig. 3.20 (active descriptor variants and their impact sums are printed inverted).

Impact balances of the test scenario:
  A: A1 0, A2 +3, A3 -3 | B: B1 +2, B2 +1, B3 -3 | C: C1 -9, C2 -1, C3 +10 |
  D: D1 -7, D2 +7 | E: E1 +4, E2 -1, E3 -3 | F: F1 +2, F2 -1, F3 -1
Consistency values:
  A: +3 | B: +1 | C: +11 | D: -14 | E: +5 | F: +3

Fig. 3.20 Consistency values of the descriptors calculated for the test scenario (active variants: A2, B1, C3, D1, E1, F1)

In the case of Descriptor A, the impact sum of the active descriptor variant A2 is +3. For the nonactive variants, the impact sums amount to 0 for A1 and -3 for A3. The advantage of the active impact sum over the highest impact sum of all nonactive variants is +3 - Max{0, -3} = +3 - 0 = +3. This advantage is called the consistency value of Descriptor A, or the consistency of A for short. The consistency values for the other descriptors are calculated accordingly. The consistency value for Descriptor D is negative, which indicates its inconsistency. In summary, this definition means that the consistency value for consistent descriptors is at least 0. The higher the consistency value is, the clearer the plausibility advantage of the active descriptor variant over its alternatives. Negative consistency values denote inconsistent descriptors because negative values indicate that a change in the active descriptor variant leads to a gain in the impact sum of the descriptor in question. Accordingly, the more negative this value is, the more pronounced the lack of plausibility.

Scenario Consistency Values
The result of the consistency value calculation for the descriptors also is the basis for the consistency evaluation of the scenario as a whole. As already stated in Sect. 3.4.2, CIB requires that the inner logic of the scenario must not have any local flaws. The scenario must therefore be evaluated according to the descriptor with the lowest consistency, and the consistency value of the scenario as a whole is thus defined as the minimum of the consistency values of all descriptors. For the test scenario, the descriptor consistencies CD take the following values, according to Fig. 3.20: CD = [3, 1, 11, -14, 5, 3]. The consistency value of the scenario [A2 B1 C3 D1 E1 F1] is therefore CS = Min{CD} = -14, which identifies it as a clearly inconsistent scenario. A scenario is considered consistent if it consists exclusively of consistent descriptors and thus has a scenario consistency value of CS = 0 or higher.
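The calculation of the consistency values can be sketched in a few lines (an illustration based on the impact balances of the test scenario from Fig. 3.20, not the ScenarioWizard implementation):

# Impact balances of the test scenario [A2 B1 C3 D1 E1 F1] (cf. Fig. 3.20)
impact_balances = {
    "A": {"A1": 0, "A2": 3, "A3": -3},
    "B": {"B1": 2, "B2": 1, "B3": -3},
    "C": {"C1": -9, "C2": -1, "C3": 10},
    "D": {"D1": -7, "D2": 7},
    "E": {"E1": 4, "E2": -1, "E3": -3},
    "F": {"F1": 2, "F2": -1, "F3": -1},
}
active = {"A": "A2", "B": "B1", "C": "C3", "D": "D1", "E": "E1", "F": "F1"}

# Descriptor consistency: impact sum of the active variant minus the highest
# impact sum among the nonactive variants of the same descriptor
consistency = {
    d: balance[active[d]] - max(v for k, v in balance.items() if k != active[d])
    for d, balance in impact_balances.items()
}
print(consistency)                # {'A': 3, 'B': 1, 'C': 11, 'D': -14, 'E': 5, 'F': 3}

# Scenario consistency: the minimum of the descriptor consistencies
print(min(consistency.values()))  # -14 -> the scenario is clearly inconsistent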


Nonconsideration of Autonomous Descriptors
Descriptors that are not under the influence of other descriptors and therefore have an empty matrix column represent external influences on the system in CIB (cf. Sect. 6.1.2). Accordingly, their impact balance necessarily consists of zero values, regardless of the scenario. Hence, the impact balances of autonomous descriptors do not contain any information about scenario consistency. Autonomous descriptors are therefore disregarded in determining the scenario consistency value. Somewhereland does not include any autonomous descriptor.

Inconsistency Scale
The inconsistency value of a descriptor or a scenario describes the result of the consistency assessment from the opposite perspective: A scenario with a consistency value of -4 has an inconsistency value of 4. Inconsistency values, however, do not take negative values by convention, i.e., a scenario with a consistency value of +1 has an inconsistency value of 0 (cf. Fig. 3.21). This is to account for the understanding that the characterization "zero inconsistency" expresses the absence of inconsistency and should therefore encompass all consistent scenarios, regardless of their varying degrees of positive consistency. Scenarios with inconsistency values of 0, 1, 2, etc. (i.e., consistency values ≥ 0, -1, -2, . . ., respectively) are abbreviated as IC0 scenarios, IC1 scenarios, IC2 scenarios, and so on. Thus, inconsistency values do not contain any new information compared to the consistency values. They merely provide an additional linguistic option to avoid the use of negative indicator values when describing inconsistent scenarios.
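Expressed as code, the conversion from the consistency scale to the inconsistency scale is a single clamping operation (a minimal sketch):

def inconsistency_value(consistency_value):
    """Inconsistency values are nonnegative by convention: all consistent cases
    (consistency >= 0) map to inconsistency 0."""
    return max(0, -consistency_value)

print(inconsistency_value(-4))  # 4
print(inconsistency_value(+1))  # 0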

Consistency scale:     -3  -2  -1   0  +1  +2  +3
Inconsistency scale:    3   2   1   0   0   0   0

Fig. 3.21 Comparison of consistency and inconsistency scales

Global Inconsistency
In addition to the concept of inconsistency described here, the concept of "global inconsistency" also has been introduced. It is rarely used in CIB practice. It is not based on the most inconsistent descriptor alone but includes all inconsistent descriptors of a scenario by adding up their inconsistency values (Weimer-Jehle, 2006).

For scenarios with only one inconsistent descriptor (as in the case of the test scenario in Fig. 3.20), there is accordingly no difference between global and ordinary (local) inconsistency. Consistent scenarios are characterized by a global inconsistency value of 0, just as in the case of ordinary inconsistency.

3.6.2 The Consistency Profile

The set of descriptor consistency values of a scenario is called the scenario consistency profile. Thus, the descriptor consistencies [3, 1, 11, -14, 5, 3] of the test scenario [A2 B1 C3 D1 E1 F1], which can be obtained from Fig. 3.20, constitute its consistency profile. In this example, the central message of the consistency profile is quite obvious, namely, that the scenario in question is inconsistent and that Descriptor D is responsible for this. However, the consistency profile also provides insightful information for consistent scenarios. Figure 3.22 shows the consistency profiles of the consistent Somewhereland scenarios no. 3 and no. 10, taken from Table 3.4. The profiles were computed according to the calculation scheme shown in Fig. 3.16 and the calculation rules for the descriptor consistencies (Fig. 3.20). In both cases, the consistency values of the descriptors within a scenario scatter widely, and this is typical for CIB scenarios. The consistency of a descriptor can be interpreted as an indicator of the well-foundedness of its active variant. That is, the active variant for wealth distribution in Scenario no. 10 (D2 Strong contrasts) is a particularly well-founded assumption within this scenario because it is far better founded than the opposite assumption (D1 Balanced).

Fig. 3.22 Two examples of consistency profiles: Scenario No. 3 [A1 B3 C2 D1 E1 F2] and Scenario No. 10 [A2 B1 C3 D2 E2 F1], with the consistency values of the descriptors A to F shown on a scale from 0 to 15


Consistency Profile and Scenario Stability
On the other hand, both scenarios shown in Fig. 3.22 include descriptors with zero consistency. These descriptors are marginally consistent, and the foundation of the corresponding scenario assumptions is less convincing: At least one other assumption for these descriptors would be similarly plausible. One possible interpretation of the descriptor consistency score is that systems are presumably most likely to be vulnerable at the low-consistency descriptors, in particular at the marginally consistent descriptors, where they can most easily lose their stability due to internal or external perturbations. They could be the "breaking points" of the system state, where the system begins to move away from its previous stable state when perturbed and begins to search for another stable system state. Therefore, marginally consistent descriptors also are referred to as marginally stable.

Consistency Profile and Judgment Uncertainty
For a similar reason, the consistency profiles also indicate which cross-impact judgments require the most attention when critically reviewing the role of data uncertainty in the scenario construction process. For example, in Scenario no. 3, data uncertainty in the columns of Descriptors A, D, and E is of little relevance because the consistency values for these descriptors are so high that their consistency would not be jeopardized even if individual cross-impact judgments had to be slightly revised. In contrast, in the columns of Descriptors B and F, even minor judgment revisions could cause the consistency of Scenario no. 3 to be lost. As practice shows, it is the rule that CIB scenarios contain one or more marginally stable descriptors. Scenarios without marginally stable descriptors are rather an exception. Following the definition of the scenario consistency CS in Sect. 3.6.1, this also leads to the conclusion that consistent scenarios as a rule carry a consistency value of 0 and that scenarios with higher consistency values are rather rare. This also is true for the Somewhereland scenarios, of which 9 of the 10 scenarios have scenario consistency CS = 0. The fact that the presence of marginally stable descriptors is the normal case in CIB scenarios can be taken as an implicit commentary of the CIB method on the nature of complex systems, according to which the vast majority of these systems are usually in system states that have at least one vulnerability to their stability. From the CIB perspective, robustly stable complex systems must be understood as exceptions.

3.6.3 The Total Impact Score

The total impact score of a scenario is defined as the sum of the impact sums of all active descriptor variants. For our familiar test scenario [A2 B1 C3 D1 E1 F1], the total impact score can be taken from Fig. 3.15:

TIS = 3 + 2 + 10 - 7 + 4 + 2 = +14

The same result is obtained by adding all intersection cells in Fig. 3.15. The addition of the strength values of all arrows printed in the impact diagram Fig. 3.11, considering their signs, also leads to the same result. The latter calculation method also conveys the meaning of the total impact score: Scenarios in which there are numerous and strong promoting impacts between the descriptors (green arrows in Fig. 3.11) and only a few hindering impacts (red arrows) show a strong internal logic of the scenario. The total impact scores are high in these cases. Conversely, scenarios with a low total impact score correspond to impact diagrams in which few or weak green arrows and/or numerous and strong red arrows, i.e., hindering impacts, are active, indicating a weak inner logic of the scenario. Similar to the consistency value, the total impact score can be calculated for all scenarios, regardless of whether they are consistent or inconsistent. While the consistency value, as a "local" indicator, assesses scenarios according to their weakest point, the total impact score supplements this assessment by a "global" indicator, assessing the entire scenario. Scenarios of high consistency also tend to have a relatively high total impact score. However, the correlation is not strict, and therein lies the added value of the total impact score as a separate metric. In CIB, however, it is primarily the consistency value that is decisive for the scenario assessment, which is why the total impact score is usually used only as a supplement for the comparison of scenarios with the same consistency value. In a group of scenarios with the same consistency, the total impact score can be used to determine which of them has the higher overall logical strength.
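Computed from the impact sums of the active descriptor variants, the total impact score is a plain sum (a minimal sketch using the values reported above for the test scenario):

# Impact sums of the active descriptor variants of the test scenario
# [A2 B1 C3 D1 E1 F1] (cf. Fig. 3.15)
active_impact_sums = {"A2": 3, "B1": 2, "C3": 10, "D1": -7, "E1": 4, "F1": 2}

total_impact_score = sum(active_impact_sums.values())
print(total_impact_score)  # 14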

3.7 Data Uncertainty

In principle, CIB categorically distinguishes between consistent and inconsistent scenarios. However, it must be kept in mind that the consistency assessment of the scenarios is based on the cross-impact data and that these are usually the result of expert estimates. Therefore, cross-impact data usually cannot be considered exact and unquestionable, but some uncertainty must be assumed for them. The main reason for uncertainty is that the cross-impact values usually do not directly express evidence. Instead, the experts must perform a translation of their body of knowledge to the cross-impact rating scale, and this translation is fraught with uncertainty, even if the underlying knowledge is reliable.

In addition, the use of an integer rating scale for the strength assessments leads to rounding errors even from a purely technical point of view, since an intermediate strength rating ("strength lies between 2 and 3") that is perceived as appropriate by the experts must be rounded up or down when assessing the impacts. The consequence is that the impact balances, which decide the scenario consistency, are also sums of uncertain values and thus uncertain themselves. Therefore, there are good reasons not to interpret the consistency condition as a mathematically sharp cutoff in practice but to introduce a significance threshold for inconsistency. An inconsistency below the significance threshold is to be regarded as a marginal inconsistency, and the respective scenarios cannot be discarded with certainty.

3.7.1 Estimating Data Uncertainty

The uncertainty margin may vary from case to case. However, there is empirical evidence of the range of uncertainty that can be considered typical. It is provided by systematic comparisons of the assessments of different experts on the same influence relationship. This is described in more detail in Sect. 6.3.3. As a result, empirical data suggest that a scenario of N descriptors can be discarded as surely inconsistent only if its inconsistency value exceeds the significance threshold (Table 3.5) of

ICS ≈ ½ · √(N - 1)

In the Somewhereland matrix with N = 6 descriptors, IC1 scenarios are therefore not significantly inconsistent but marginally inconsistent, while IC2 scenarios must be considered significantly inconsistent. These orientation values apply to cross-impact ratings with average estimation uncertainty. In particular cases, for example, for system relationships that are particularly difficult to assess, other values may be appropriate.

Table 3.5 Significance of inconsistency classes depending on the number of descriptors
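As a quick check of this rule of thumb, the threshold can be evaluated directly (a minimal sketch of the formula above):

import math

def inconsistency_significance_threshold(n_descriptors):
    """Empirical significance threshold ICS ≈ 0.5 * sqrt(N - 1) for the
    inconsistency value of a scenario with N descriptors."""
    return 0.5 * math.sqrt(n_descriptors - 1)

print(round(inconsistency_significance_threshold(6), 2))
# 1.12 -> for Somewhereland (N = 6), IC1 is marginal, IC2 is significantly inconsistent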
