Measurement and Understanding in Science and Humanities: Interdisciplinary Approaches
ISBN 3658369736, 9783658369736

This anthology is a unique compilation of scientific contributions on the topic of measurement and understanding.


Table of contents:
Foreword
Contents
Notes on Contributors
Introduction
Introduction
1 Why Interdisciplinary Research?
2 Challenges and Success Factors of Interdisciplinary Research
2.1 Challenges
2.2 Success Factors
Brief Presentation of the Contributions
Main Part
Counting and Narrating: A Methodological Dialogue of Medieval Literature and History
1 Theoretical Approach and Methods in Medieval Literary and Historical Studies
1.1 Medieval German Studies
1.2 Medieval Historiography
1.3 Methodological Commonalities
2 How Do We Count and Interpret? Historical Narratology and Historical Network Analysis as an Example
2.1 Literary Counting and Narrating: Historical Narratological Perspectives
2.2 Counting History: Historical Network Analyses
3 Counting and Narrating in Interdisciplinary Methodological Dialogue: Results
3.1 Structures and Patterns
3.2 Result and Gain of Knowledge
3.3 Humanities vs. Natural Sciences?
Does Klio Count, Too? Measurement and Understanding in Historical Science
1 Methods and Approaches
2 Counting and Measuring in Historical Research
3 What Is a Scientific Result? How Do We Recognize and Interpret Patterns?
4 Counting and Measuring—Quantification in Politics and Society in Medieval Europe
5 Conclusion
Quantitative Data and Hermeneutic Methods in the Digital Classics
1 What Do We Count?
2 How Do We “Read”?
3 Close Reading Supported by Digital Corpora
4 Quantitative Data Generated Through Interpretation
5 Closing Remark
Metaphors and Models: On the Translation of Knowledge into Understanding
1 Scientific Self-reference via the Metaphors and Models of Scientific Language
2 Use of Metaphors and Models as a Problem
3 Contingency and Contextuality
4 Example: The Humanistic Treatment of Surrogacy
5 Transdisciplinary Research Perspectives: Metaphor, Model and Pattern
6 Conclusion
Critical Mass: Aspects of a Quantitatively Oriented Hermeneutics Using the Example of Computational Legal Linguistics
1 Law, Language and Algorithms—From the Micro to the Macro Perspective
2 Preliminary Definitions
2.1 Recognition and Understanding as a Starting Point
2.2 Systematizing—Counting—Measuring—Interpreting as Work Steps?
2.3 On the Lack of Methodological Metatheory
3 Computer-Aided Pattern Recognition as a Method
4 Three Theses on the Relationship Between Quality and Quantity in Knowledge Production
4.1 Thesis: The Introspective and Single-Case Experience of the World Is Absolute
4.2 Antithesis: In the Unity of Number, Individual Experience Becomes Collectively Relevant
4.3 Synthesis: Only the Cognitive Contextualization of the Individual Case in the Pattern of Contingent Multiplicity Creates Meaning
5 The Perspective: Everything Flows
Quantifying and Operationalizing the Commensurability of International Economic Sanctions
1 How/What Do We Count?
2 How/What Do We Measure?
3 How Do We Recognize and Interpret Patterns?
4 What Is the Significance of Patterns and Numbers?
4.1 Jurisprudence
5 Political Science and Economics
6 What Is a Scientific Result? What Perspectives Does the WIN Project Offer?
6.1 Dogmatic Jurisprudence and Social Science Approach
6.2 Formalization as a Common Formal Object of Legal and Quantitative Social Science
6.3 Different Formalizations as an Obstacle to Reception
7 Conclusion
Science, Numbers and Power: Contemporary Politics Between the Imperative of Rationalization and the Dependence on Numbers
1 Introduction
2 Counting, Measuring and Interpreting in Politics (Political Science)
3 Promise and Temptation of “Scientific Politics”
4 The Contribution of the Research Project “Science, Numbers and Power”
4.1 Section I: Historical Genesis of the Relationship Between Science and Politics
4.2 Section II: Science and Contemporary Politics
4.3 Section III: Case Study—European Education Policy
Measuring and Understanding Financial Risks: An Econometric Perspective
1 Measurement and Interpretation in Econometrics
2 Statistical Significance: The Trojan Horse
3 Measuring and Understanding Financial Risks
4 Measuring Financial Risks: Limitations and Challenges
5 Understanding, Measuring and Predicting Financial Risks: A New Perspective
6 Conclusion
Regulating New Challenges in the Natural Sciences: Data Protection and Data Sharing in Translational Genetic Research
1 Recognition and Understanding
2 The Rights and Expectations of Stakeholders in Macro and Micro Perspective—Normative Challenges
2.1 The Volume and Diversity of Data and Its Impact on Data Protection
2.2 The Individual Patient
2.3 Cooperating Research Centres, Partners and Countries
2.3.1 Transfer of Data from the European Union to Third Countries
2.3.2 Commercial Cloud Providers
2.3.3 The Individual Researcher
3 Generalisation and Solution Pattern of the Regulation
4 Consequences for the Interdisciplinary Approach in the Project and Need for Further Discussion
Psychology and Physics: A Non-invasive Approach to the Functioning of the Human Mirror Neuron System
1 Importance of Number and Measurement
1.1 Psychology
1.2 Physics and Computational Neuroscience
2 Synergies Through Cooperation Between the Two Subjects
3 Conclusion
Reflections and Perspectives on the Research Fields of Thermal Comfort at Work and Pain
1 Perception and Adaptation
2 How/What Do We Count?
3 How/What Do We Measure?
4 How Do We Recognize and Interpret Patterns?
5 What Is the Significance of Patterns and Numbers?
6 What Is a Scientific Result for Us? What Perspectives Does the WIN Project Offer?
7 Conclusion
Measuring and Understanding the World Through Geoinformatics Using the Example of Natural Hazards
1 From Counting and Measuring to Understanding in Geoinformatics
1.1 The Digital Representation of the World—The View of Geoinformatics
1.2 Counting
1.3 Measuring
1.4 Understanding
1.5 Interpretation
1.6 Pattern
1.7 Scientific Results
2 Neogeography of Digital Earth: Geoinformatics as a Methodological Bridge in Interdisciplinary Natural Hazard Analysis (NEOHAZ)
2.1 Project Objective and Object of Investigation
2.2 Counting and Measuring
2.3 Pattern
2.4 Scientific Findings and Results
3 Conclusion
Measuring Art, Counting Pixels? The Collaboration of Art History and Computer Vision Oscillates Between Quantitative and Hermeneutic Methods
1 Measuring, Counting, Recognizing Patterns in Art History
2 Measuring, Counting, Pattern Recognition in Computer Science and Computer Vision
3 Collaboration Between Art History and Computer Vision
Through Numerical Simulation to Scientific Knowledge
1 Basic Concepts and Considerations
2 Process of a Numerical Simulation
3 Modelling and Simulation of Fluid Flows
4 Importance of Modelling and Numerical Simulation
5 Challenges, Criticism and Outlook
6 Conclusion
Final Part
Communication Cultures
1 Reflection on Goals and Methods of Science Communication
1.1 Why Do We Communicate? (Goals)
1.1.1 Communication and Discussion
1.1.2 Verification Tool
1.1.3 Reputation
1.2 How Is Communication Done? (Methods)
2 Trends in Knowledge Communication and their Consequences
2.1 Trends
2.2 Consequences
3 Potentials and Limits of Interdisciplinary Communication
4 Conclusion
Conclusion: Measuring and Understanding the World Through Science
1 Scientific Disciplines Involved
2 Considered Objects of Investigation
3 Conclusions
Glossary
Author Index
Index of Topics
Subject Index

Measurement and Understanding in Science and Humanities Interdisciplinary Approaches

Edited by Marcel Schweiker · Joachim Hass · Anna Novokhatko · Roxana Halbleib

Measurement and Understanding in Science and Humanities

Marcel Schweiker • Joachim Hass • Anna Novokhatko • Roxana Halbleib
Editors

Measurement and Understanding in Science and Humanities Interdisciplinary Approaches

Editors
Marcel Schweiker, Healthy Living Spaces lab, Institute for Occupational, Social, and Environmental Medicine, Medical Faculty, RWTH Aachen University, Aachen, Germany
Joachim Hass, Faculty of Applied Psychology, SRH University Heidelberg, Heidelberg, Germany
Anna Novokhatko, Department of Classical Philology, Aristotle University, Thessaloniki, Greece
Roxana Halbleib, Institute of Economics, Faculty of Economics and Behavioral Sciences, University of Freiburg, Freiburg, Germany

ISBN 978-3-658-36973-6    ISBN 978-3-658-36974-3 (eBook)
https://doi.org/10.1007/978-3-658-36974-3
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024
This book is a translation of the original German edition „Messen und Verstehen in der Wissenschaft - Interdisziplinäre Ansätze“ by Schweiker et al. (eds), published by Springer-Verlag Germany in 2017. The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com).
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Palgrave Macmillan imprint is published by the registered company Springer Fachmedien Wiesbaden GmbH, part of Springer Nature. The registered company address is: Abraham-Lincoln-Str. 46, 65189 Wiesbaden, Germany

Foreword

Every human being strives for knowledge. He wants to gain knowledge about the people he meets, about the world that surrounds him, about the future that awaits him. He perceives the world with his senses, can remember and consolidate memory into experience. He compares differences and develops similarities. Man recognizes the general and the regular. The transition from curiosity to science begins. The scientist does not stop at the object which he observes, which he can explore experimentally, which he can define, but searches for causes and reasons. In ancient Greece, the wise observed that fire burns, sensed that it gives warmth, and sought beyond that to understand why fire burns. They discovered scientific causalities, but also learned to marvel at the inexplicable. Yet the scientist does not stop at this amazement, but will—even if he knows all that is necessary for life and has achieved everything he needs for a good living—continue to ponder in order to escape from ignorance. In this removal of man from bondage in nature, from imprisonment in the useful, he becomes free, even though he knows that this act of liberation is by nature beyond human power, likely reserved for the gods. Nevertheless, it would be unworthy of man not to seek the knowledge that belongs to him, not to seek this freedom. The practical usefulness of the sciences—in medicine, in technology, in economics—serves man directly. Thinking liberated from the laws of nature and from utility leads human knowledge to the edge of the recognizable. In the interaction of the sciences, the advanced civilization in which we presently live comes into being.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.


If a geneticist recognizes more and more clearly the meaningful rules of man and of life, he will miss in his scientific concerns and methods the probably most important meaning of human life: “The dignity of man is inviolable”. If a scholar of the humanities unfolds this meaning as the basis for the equality of human beings in individuality, freedom and peacefulness and seeks to bring it to binding force, he will never come across the basics of human genes. That is why the present age of highly specialized fields of science must exchange ideas in dialogue, to mutually stimulate each other, but also to moderate the use of scientific findings. Great scholars in the history of science—Aristotle, Thomas Aquinas, Nicolaus Cusanus, Martin Luther—have repeatedly urged the scientist not to dwell superficially on the object observed, but to inquire into causes and effects. Let him find the good in man through knowledge of human nature and the world, warn against the supposed omnipotence of science, cultivate a culture of modesty of the knowing human being. Today, the unlimited pursuit of knowledge is part of man’s self-image and self-confidence. The thirst for knowledge is a virtue. But the more man has the courage to “use his own mind” and to emancipate himself to self-confident freedom, the more he loses common responsibility and common sense in conscious subjectivity and tends to compulsion, unjust violence, war. In the Sachsenspiegel (1235), Eike von Repgow already warned against the destructive power of a will that refers only to itself. Today we observe that the discovery of the atom has led to nuclear weapons, that the development of technology pollutes the environment, that epidemics are also researched as instruments of warfare, that pharmacy also gives people power over the human psyche, that the investigative possibilities of IT technology call into question confidentiality and trust as a foundation of human coexistence, and that some experts see the identity of the human being endangered by the development of genetic research. That is why the various scientific disciplines must work together. Science must not only ask what man can do, but also ask what man is allowed to do. If man sees himself as the ruler of nature and as a strong-willed individual shaping his own life, his responsibility to the community becomes weaker, the breadth of his vision and understanding becomes narrower, his curiosity becomes more superficial. Carl Friedrich von Weizsäcker turned his gaze to the dispute between Galileo and his church and adopted the view of Cardinal Bellarmine, Galileo’s opponent: had Bellarmine foreseen the consequences of the approaching age of unbridled research, he would have shuddered. Science, he said, had faced the straight path from classical mechanics to the mechanics of atoms, from atomic mechanics to the atomic bomb. Whether this bomb would destroy Western civilization, we do not yet know—according to von Weizsäcker in 1999.


These dangers posed by science are to be answered above all by more scientific openness and exchange, as well as by the parallel development of, and the dialogue between, the science of the “can” and the science of the “may”. We need a science that is open to all sides, that asks about the fundamental contexts and connections, that explores the general and the original for its own sake, that questions its findings and recommendations again and again, that examines their generalizability, their compatibility with the community, and their confidence in freedom, and that keeps the standards of ability and permissibility in harmony. We must return to the university, which keeps scientific experience and humanistic research in dialogue, gains its fundamental meaning in the idea of interdisciplinarity and openness to the world, constantly increases knowledge and moderates its application. This is a concept of trust in freedom and of the enlightened scientist, who again and again comes up against the limits of his thinking and encounters the incomprehensible. This openness alone ensures the humanity of science. Man wants to count reality succinctly in tables and balances, but also to tell it experientially and interpretatively. He wants to measure things, but reserve a wilful discretion for the decision that follows. The experience of his inadequacy guards him against hubris in measuring the world. He seeks a measure that protects him from lack of measure and thus from intemperance. He wants to understand the world, to make it his own, but in doing so he cannot learn everything in experiment and count it in comparison; he depends on estimation and assessment, on the weighing and weighting of knowledge and experience. The number specifies, proves, provides security through the certainty of numerical logic. It simplifies external life in postcodes, account numbers and telephone numbers, promises more certainty than is possible for human beings in target forecasts—of economic growth or the weather—and spurs the will to perform through target markings—from examination grades to expected returns—or overtaxes it. The number presents—as with the company and tax balance sheet—concise overviews, but also simulates or disguises success; it informs and it dizzies. It binds the view and the thinking to detail and distracts from context and connections, from the general. The number is always a symbol for reality, for a development, for an idea; it is not the counted itself. Therefore, the counted reality is always a preliminary stage of comprehension, fathoming and understanding. All sciences want to understand the world and man. In order to achieve this goal, some sciences choose quantification and empirical proof and seek to grasp normality in the model, in the pattern, in the type, and to determine its conditionality experimentally and empirically. Other sciences seek to understand the world and man in the intellectual generalization of observations and experiences, seek to discover their meaning and to create meaning. Still others strive to advance in abstraction to an idea which frees man from causalities and from the usefulness of life, uniting nature and culture in this freedom. The fascination of all this thinking sets one free to know and to marvel before the unknowable.


Out of this will of man to recognize, to fathom and to understand grows the will and the ability to teach scientifically, to express the world artistically in design and style, to ask religious questions about the undiscoverable. The Heidelberg Academy of Sciences and Humanities has brought together young scientists who have already distinguished themselves through outstanding scientific achievements to create a forum for questioning, experiencing and understanding together. Under the theme “Measuring and Understanding the World Through Science”, it offers researchers the opportunity to reflect on their questions, methods and goals in conversation, and to mutually stimulate, disturb and inspire each other in exchange. In this process, the limits of a numbers-based science and those of a theory-driven science become visible. We have cultivated a deeper dialogue among scientists, made them aware of thinking in patterns and models, in experiment and theory building, in concretization and abstraction, in observation and comparison, and shared the scientific progress of the respective projects in regular meetings. In doing so, we have encountered insights and people that will be significant for future science. In this volume, the young scientists present provisional results of their research, in which interdisciplinarity, cosmopolitanism, sustainability, and hope are recognizable on a common basis.

Heidelberg, Germany, September 2021

Paul Kirchhof

Contents

Introduction  1
Introduction  3
Joachim Hass and Anna Novokhatko
Brief Presentation of the Contributions  11
Joachim Hass and Roxana Halbleib

Main Part  19
Counting and Narrating: A Methodological Dialogue of Medieval Literature and History  21
Claudia Lauer and Jana Pacyna
Does Klio Count, Too? Measurement and Understanding in Historical Science  39
Andreas Büttner and Christoph Mauntel
Quantitative Data and Hermeneutic Methods in the Digital Classics  51
Stylianos Chronopoulos, Felix K. Maier, and Anna Novokhatko
Metaphors and Models: On the Translation of Knowledge into Understanding  61
Chris Thomale


Critical Mass: Aspects of a Quantitatively Oriented Hermeneutics Using the Example of Computational Legal Linguistics  71
Hanjo Hamann and Friedemann Vogel
Quantifying and Operationalizing the Commensurability of International Economic Sanctions  85
Matthias Valta
Science, Numbers and Power: Contemporary Politics Between the Imperative of Rationalization and the Dependence on Numbers  103
Markus J. Prutsch
Measuring and Understanding Financial Risks: An Econometric Perspective  119
Roxana Halbleib
Regulating New Challenges in the Natural Sciences: Data Protection and Data Sharing in Translational Genetic Research  131
Fruzsina Molnár-Gábor and Jan O. Korbel
Psychology and Physics: A Non-invasive Approach to the Functioning of the Human Mirror Neuron System  149
Daniela U. Mier and Joachim Hass
Reflections and Perspectives on the Research Fields of Thermal Comfort at Work and Pain  161
Susanne Becker and Marcel Schweiker
Measuring and Understanding the World Through Geoinformatics Using the Example of Natural Hazards  177
Bernhard Höfle
Measuring Art, Counting Pixels? The Collaboration of Art History and Computer Vision Oscillates Between Quantitative and Hermeneutic Methods  191
Peter Bell and Björn Ommer
Through Numerical Simulation to Scientific Knowledge  201
Mathias J. Krause


Final Part  217
Communication Cultures  219
Marcel Schweiker
Conclusion: Measuring and Understanding the World Through Science  237
Mathias J. Krause and Susanne Becker
Glossary  245
Author Index  263
Index of Topics  265
Subject Index  267

Notes on Contributors

Susanne Becker  Psychologist (University of Mannheim), Dr. rer. Soc. (University of Mannheim), Master—Methods and Models of Mathematics (Open University Hagen) Professor for Clinical Psychology, Department of Experimental Psychology, Heinrich Heine University Düsseldorf Working/research interests: Perception of pain and pain modulation, interaction of reward and learning with pain, neural correlates and neurochemical mechanisms Chapter: Reflection and perspectives on the research fields of thermal comfort at work and pain, Conclusion Peter  Bell  Studied History/Business Administration/Graphic Arts & Painting (Philipps-Universität Marburg), Dr. phil. History of Art (Philipps-Universität Marburg) Professor of Art History and Digital Humanities, Philipps University Marburg, Germany Working/research interests: Digital art history, digital humanities, alterity and stereotype formation in early modern art (SFB 600, University of Trier) Chapter: Measuring art, counting pixels? The collaboration between art history and computer vision oscillates between quantitative and hermeneutic methods Andreas Büttner  M.A. (Ruprecht-Karls-Universität Heidelberg/Università degli Studi di Catania), Dr. phil. (Ruprecht-Karls-Universität Heidelberg) Research Assistant, Department of History, Ruprecht-Karls-University ­Heidelberg


Working/research interests: Political and constitutional history, economic history, rituals of power, numismatics and everyday history Chapter: Does Klio Count, Too? Measurement and Understanding in Historical Science Stylianos Chronopoulos  Dr. phil. (University of Ioannina, Greece) Department of Classics, University of Ioannina, Greece Research group leader in the WIN project: “The Digital Turn in Classical Studies: Perception—Documentation—Reflection” Working/research interests: Greek comedy, ancient lexicography and grammar, and digital editions Chapter: Quantitative data and hermeneutic methods in the Digital Classics Roxana Halbleib  M.A. Economics (University of Konstanz), Dr. rer. Pol. (University of Konstanz) Professor of Statistics and Econometrics at University of Freiburg, Germany Working/research interests: Financial risk measurement and prediction, empirical applications of simulation-based estimation techniques Chapter: Brief presentation of the contributions, Measuring and understanding financial risks—An econometric perspective Hanjo Hamann  Dr. iur. (Faculty of Law and of Economics, University of Bonn), Dr. rer. pol. (Faculty of Economic Sciences, University of Jena) Assistant Professor (with tenure track) for private law, commercial law, and intellectual property, with particular emphasis on digitalisation and legal linguistics, EBS University of Business and Law, Wiesbaden Working/research interests: Contract law, empirical legal research, legal linguistics, behavioral economics, corporate law, legal studies and legal theory Chapter: Critical Mass: Aspects of a Quantitatively Oriented Hermeneutics Using the Example of Computational Legal Linguistics Joachim Hass  Graduate in physics (University of Göttingen), Dr. rer. Nat. Theoretical and Computational Neuroscience (University of Göttingen, Göttingen Graduate School for Neurosciences and Molecular Biosciences, GGNB) Professor of Research Methodology, Faculty of Applied Psychology, SRH University Heidelberg, and visiting scientist, Department of Theoretical Neurosciences, Central Institute of Mental Health Mannheim Working/research interests: Theoretical brain research, simulation models of time perception and cognitive functions, and statistical methods


Chapter: Introduction, Brief introduction of the contributions, Psychology and Physics: A non-invasive approach to the functioning of the human mirror neuron system Bernhard Höfle  Prof. Mag. Dr. rer. nat., Geography with focus on Geoinformatics and Physical Geography (University of Innsbruck, Austria) Full Professor of Geoinformatics and 3D Geodata Processing at the Institute of Geography, Ruprecht-Karls-Universität Heidelberg Working/research interests: Digital 3D geodata, laser scanning, 3D crowdsourcing and citizen science, natural hazard analysis and algorithms of geographic processes Chapter: Measuring and understanding the world through geoinformatics using the example of natural hazards Jan  O.  Korbel  Diplom-Ingenieur Biotechnology, specialization Medical Biotechnology (Technical University Berlin), Dr. rer. Nat. Molecular Biology (Humboldt University, Berlin and European Molecular Biology Laboratory (EMBL), Heidelberg) Group leader and Senior Scientists at the Genome Biology Unit, EMBL, Heidelberg. Head of Data Science, EMBL, Heidelberg. Honorarprofessor, Universität Heidelberg. Working/research interests: Human genetic variation, structural variation, chromothripsis, cancer research, scientific self-regulation, machine learning/artificial intelligence Chapter: Regulating new challenges in the natural sciences: Data protection and data sharing in translational genetic research Mathias J. Krause  Graduate in Mathematics in Economics (University of Karlsruhe (TH)), Dr. rer. Nat. (Karlsruhe Institute of Technology) Group Leader, Lattice Boltzmann Research Group at the Institute for Applied and Numerical Mathematics and at the Institute for Mechanical Process Engineering and Mechanics, both at the Karlsruhe Institute of Technology Working/research interests: Mathematical modelling, numerical simulation and optimal control of flows Chapter: Through numerical simulation to scientific knowledge, Conclusion Claudia Lauer  First state examination (teaching profession) for the subjects German, French, History (University of Freiburg), Dr. phil. German Philology (University of Mainz)


Temporary Professor of Older German Literature, University of Mainz; Head of the Junior Research Group “Counting and Narrating”, WIN-College “Measuring and Understanding”, Heidelberg Academy of Sciences and Humanities (HAdW) Working/research interests: Poetry of the Middle Ages and early modern period, courtly epic of the Middle Ages, historical narratology and poetics, historical semantics, historical cultural studies and medieval reception Chapter: Counting and Narrating: A Methodological Dialogue of Medieval Literature and History Felix K. Maier  Prof. Dr. (University of Zürich) Research Group Leader of the WIN Research Group on “The Digital Turn in Classical Studies: Perception—Documentation—Reflection” Working/research interests: Greek and Roman historiography, emperors in late antiquity, concepts of space and time in antiquity, and escalation dynamics in ancient diplomacy Chapter: Quantitative data and hermeneutic methods in Digital Classics Christoph  Mauntel  PD (Eberhard Karls Universität Tübingen), Dr. phil. (Ruprecht-­Karls-University Heidelberg) Postdoctoral Researcher at the Department of Medieval History, Eberhard Karls University Tübingen Working/research interests: Medieval history, violence and uprisings, transculturality, travelogues, historical geography and cartography Chapter: Does Klio Count, Too? Measurement and Understanding in Historical Science Daniela U. Mier  Graduate in psychology (University of Giessen), Dr. rer. Nat. psychology (University of Gießen) Head of the research group Social-Affective Neuroscience and Experimental Psychology, in the Department of Clinical Psychology, at the Central Institute of Mental Health, University of Heidelberg/Medical Faculty Mannheim, now Professor of Clinical Psychology and Psychotherapy, University of Konstanz Working/research interests: Social cognition, functional magnetic resonance imaging, schizophrenia, borderline personality disorder and somatic symptom ­disorder Chapter: Psychology and Physics: A non-invasive approach to the functioning of the human mirror neuron system Fruzsina  Molnár-Gábor  Law (Eötvös Loránd University Budapest, Ruprecht-­ Karls University Heidelberg), Dr. iur. (Ruprecht-Karls University Heidelberg) Research group leader, Heidelberg Academy of Sciences and Humanities (HAdW)


Working/research interests: Public international law, European law, biolaw, bioethics, data protection and intellectual property law Chapter: Regulating new challenges in the natural sciences: Data protection and data sharing in translational genetic research Anna Novokhatko  Prof. Dr. Aristotle University, Thessaloniki Department of Classical Philology, Aristotle University Thessaloniki, Greece Research group leader of the WIN research group on “Der digital turn in den Altertumswissenschaften: Perception—Documentation—Reflection”, Heidelberg Academy of Sciences and Humanities Working/research interests: History of philology in antiquity, cognitive approaches to Classical texts, ancient Greek comedy, ancient metaphor theories and Digital Classics Chapter: Introduction, Quantitative data and hermeneutic methods in the Digital Classics Björn Ommer  Prof. Dr. sc. ETH Zurich (Computer Science) Head of Computer Vision & Learning Group, Ludwig Maximilian University of Munich, Germany Work/research interests: Computer vision, machine learning, cognitive science, biomedical image analysis and digital humanities; in particular: Visual object recognition in images and videos, action recognition, shape analysis, graphical models, compositionality, perceptual organization and its applications Chapter: Measuring art, counting pixels? The collaboration between art history and computer vision oscillates between quantitative and hermeneutic methods Jana Pacyna  Magister Artium Medieval and Modern History and History of Art (University of Leipzig, Université libre de Bruxelles), Dr. phil. Medieval History (Friedrich Schiller University, Jena) Head of the Junior Research Group “Counting and Narrating”, WIN-College “Measuring and Understanding”, Heidelberg Academy of Sciences and Humanities (HAdW), Lecturer, University of Heidelberg Working/research interests: Historical network analysis, theory of action, methodology and philosophy of science, medieval history (social structures, church politics, religious coexistence, law). Chapter: Counting and Narrating: A Methodological Dialogue of Medieval Literature and History Markus J. Prutsch  Master of Philosophy, major in History (University of Salzburg/Heidelberg University); Master of Philosophy, major in Political Science (University of Salzburg/Heidelberg University); Master of Research (European
University Institute, Florence); Doctor of History and Civilization (European University Institute, Florence) Researcher and Administrator in charge of cultural and educational policies, European Parliament; Associate Professor (Privatdozent), Heidelberg University Working/research interests: Politics of the European Union, European constitutional history, and comparative theory of democracy and dictatorship Chapter: Science, numbers and power: contemporary politics between the imperative of rationalization and the dependence on numbers Marcel  Schweiker  Dipl. Ing. Architecture (University of Kassel), Dr. Environmental and Information Science (Tokyo City University, Japan) Professor at RWTH Aachen University, Medical Faculty, Institute for Occupational, Social and Environmental Medicine, Healthy Living Spaces lab Working/research interests: User behaviour, user satisfaction, thermal comfort, well-being, health and building simulation Chapter: Reflection and perspectives on the research fields of thermal comfort at work and pain, Communication cultures Chris Thomale  Studied law and philosophy in Heidelberg, Cambridge and Geneva; Dr. iur. (Free University of Berlin); LL.M. (Yale University) Academic councillor at the Heidelberg Institute for Foreign and International Private and Commercial Law Working/research interests: Private international law, corporate law and legal theory Chapter: Metaphors and models: On the translation of knowledge into ­understanding Matthias  Valta  Dr. iur. (University of Heidelberg), Habilitation (University of Heidelberg) Professor for Public Law and Tax Law at Heinrich Heine University Düsseldorf, Germany. Working/research interests: Public law, German and international tax law, and international law Chapter: Quantifying and operationalizing the commensurability of international economic sanctions


Friedemann Vogel  Master of Arts in German Studies, Psychology, Philosophy, Dr. phil. (University of Heidelberg) Professor and head of the research group on Computer-­assisted Socio- and Discourse Linguistics (CoSoDi) at the University of Siegen. Working/research interests: Specialized communication, language and sociality in computer-assisted communication, conflicts and conflict resolution, media discourse and computer-assisted methods of language analysis Chapter: Critical Mass: Aspects of a Quantitatively Oriented Hermeneutics Using the Example of Computational Legal Linguistics

Introduction

Introduction
Joachim Hass and Anna Novokhatko

1 Why Interdisciplinary Research?

When scientists from different research disciplines work together, this is initially associated with increased effort for everyone involved—common goals and terminology have to be found, perhaps even prejudices have to be overcome. Is all this work worthwhile, given that there are enough unsolved problems in each discipline? Here, we aim to illustrate the opportunities and advantages that interdisciplinary research brings, the specific challenges that have to be overcome, and how, in the experience of the authors of this book, research between disciplines can succeed. The benefits of collaboration between several disciplines can be seen on at least three different levels: the realization of novel research projects, the personal development of the scientists involved, and the development of science and humanities as a whole.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). The present version has been revised technically and linguistically by the authors in collaboration with a professional translator.
J. Hass (*), Faculty of Applied Psychology, SRH University Heidelberg, Heidelberg, Germany, e-mail: [email protected]
A. Novokhatko, Department of Classical Philology, Aristotle University, Thessaloniki, Greece, e-mail: [email protected]
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024
M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_1


Interdisciplinary research projects are often initiated following the insight that one cannot make progress with the resources of one’s own discipline alone. There may be a lack of methodology, knowledge or expertise that is, however, available within other disciplines. According to Fruzsina Molnár-Gábor, for example, the need for cooperation between biology, human genetics and law arose from many years of experience in the Heidelberg scientific landscape. This gave rise to her project together with Jan O. Korbel (see chapter by Molnár-Gábor & Korbel). As a further example, Jana Pacyna and Claudia Lauer came into contact with each other while exploring the concepts of “counting” and “narrating” (“Zählen” and “Erzählen” in German), a relationship that had scarcely been illuminated in their respective disciplines, Early German Literary History and Medieval History, especially on a methodological level. This gave rise to their joint project, in which they work on the concepts from both disciplines and draw theoretical and methodological inspiration from each other’s approaches (see chapter by Lauer & Pacyna). Conversely, the encounter between disciplines can also give rise to completely new questions that would not have arisen at all within a single discipline. This contact between disciplines is most likely to succeed through the academic exchange between scientists. However, it can also arise when a scientist studies an unfamiliar subject area out of curiosity. Thus, in several projects carried out by a single individual, methods and theories from diverse disciplines are employed, for example from the fields of engineering, mathematics and medicine (see chapter by Krause), geography, computer science and sociology (see chapter by Höfle), law, economics and political science (see chapter by Valta), econometrics, statistics, mathematics, finance and economics (see chapter by Halbleib), political science, history, economics, sociology and educational science (see chapter by Prutsch), or law and philosophy (see chapter by Thomale). In all these cases, the disciplines involved contribute complementary methodological tools and perspectives for solving the same problem. This expanded repertoire of methods and concepts enables a more comprehensive treatment of the problem and thus also puts the interdisciplinary researcher in a position to solve the problems of a complex world.

Beyond such concrete projects, interdisciplinary cooperation also offers advantages for the scientists involved that cannot be measured in publication figures (see chapter on communication cultures). Those who seriously engage in immersing themselves in the way of thinking and methodology of another discipline expand their scientific competence in a unique way. Not only do they gain profound insights into another discipline, but the new perspective also broadens the view on their own work. In addition, they are forced to present their work in a way that is understandable to colleagues outside their field. Besides training communication skills that are useful for teaching and public relations work, this also often leads to a deepened and altered understanding of their own subjects.


This new way of communicating can also lead to finding a common language with colleagues, to rethinking seemingly self-evident terminology and subsequently using it more consciously (see chapter Conclusion). And last but not least, there is also the possibility of acquiring methods and expertise from the other discipline and using them in one’s own work.

The synergies just described affect the level of individual researchers and individual projects, but they can also lead in the long term to a change in the disciplines themselves and in science as a whole. Already today, numerous new research directions are developing through the “fusion” of traditional disciplines. The most prominent example is probably the entry of digital technologies (and thus the methods of computer science) into many disciplines—in the natural sciences, computer simulation is establishing itself as a third pillar alongside experiment and theory (see chapter by Krause). In cultural studies, too, there is much talk of a digital turn, which places quantitative methods alongside classical text analyses (see chapter by Chronopoulos, Maier & Novokhatko). Another example is the neurosciences, in which scientists from medicine, biology, psychology and, more recently, computer science, mathematics and physics are involved (see chapter by Mier & Hass). In particular, the former strict separation between the humanities and the natural sciences is being increasingly blurred by these developments. This development seems imperative given the current state of research, although there have been good historical reasons for such a split (cf. e.g. the proposition of the two cultures by C. P. Snow1). On the one hand, the natural scientific investigation of the nature of man is gaining increasing importance and influencing the humanities; on the other hand, the interpretative (and inevitably also subjective) methods of the humanities are increasingly being recognized in the natural sciences. In the text-oriented sciences such as the philologies, the application and use of scientific methods of analysis and measurement is of particular importance (see chapters by Lauer & Pacyna, Chronopoulos, Maier & Novokhatko, and Hamann & Vogel). At this point, it should also be noted that the German language supports this symbiosis by retaining the single term “Wissenschaft” for both science and the humanities.2 Many of these new areas of research have had their beginnings in individual interdisciplinary research projects, and it is to be expected that new forms of collaboration will also contribute to this development.

1 Snow, C. P. (1959). Die zwei Kulturen. In H. Kreuzer (Ed.), Die zwei Kulturen. Literarische und naturwissenschaftliche Intelligenz. C. P. Snows These in der Diskussion (19–58). München: dtv 1987, and other chapters in the same volume. For a discussion of the separation of science and humanities, see also Brockman, J. (1996). Die dritte Kultur. Das Weltbild der modernen Naturwissenschaft. München: Goldmann (Engl. The Third Culture, New York 1995).
2 On the intersection of humanities and natural sciences: http://www.bpb.de/apuz/30124/schnittstellenzwischengeistesundnaturwissenschaften?p=all; http://www.zeit.de/studium/unileben/2012-10/schlichtungnaturwissenschaftgeisteswissenschaft


Interdisciplinary collaborations make it possible to develop a new, unique and wide-ranging perspective on research, which should necessarily lead to qualitatively new results. Ultimately, both the natural sciences and the humanities are always situated in historical, political, and cultural contexts that strongly influence the approach, course, and outcomes of research. To understand this context, one needs mutual support from both directions. This much is made clear by the projects described in this book. For example, the project by Mathias J. Krause combines engineering, mathematics and medicine (see chapter by Krause), Bernhard Höfle combines geography, computer science and sociology (see chapter by Höfle), Roxana Halbleib combines econometrics, statistics, mathematics, finance and economics (see chapter by Halbleib), Joachim Hass and Daniela Mier combine neuropsychology, computational neuroscience and physics (see chapter by Mier & Hass), and Peter Bell and Björn Ommer combine art history and computer vision (see chapter by Bell & Ommer).

2 Challenges and Success Factors of Interdisciplinary Research 2.1 Challenges Since the era of the natural philosophers of classical antiquity and the polymaths of early modern times, science and humanities have developed so tremendously that a scientist or humanist today can’t be an expert in more than a tiny slice of the world’s knowledge. Academic disciplines have developed out of this need for specialization, and with them, specific subject cultures and distinctions from related fields arise. These historically evolved characteristics often prove to be an obstacle to interdisciplinary cooperation. A scientist who has been socialized in a certain subject culture must first take a step back from his subject in order to gain access to the world of thought of another subject. For example, Susanne Becker and Marcel Schweiker have combined psychology and architecture in this way (see chapter by Becker & Schweiker). Here, communication and open-mindedness as well as a willingness to invest time and energy in the other’s project play an enormous role in successful collaboration. At the same time, the methods of the other fields should be reflected and analysed. For example, mathematical methods are increasingly being used in parts of the humanities. Examples include literary studies, history, palaeography,



l­ibrarianship and archival research, which are now increasingly combined with computer science.3 Another aspect is the language by which the systems of thought are constructed. In addition to the specific technical language, which must be learned at least in its basic features (in the case of formal sciences, usually also the language of mathematics), difficulties of understanding lurk in terms that are taken from everyday language and appear clear in their meaning, but, in reality, are coined quite differently depending on the subject area. What engineers call a “model”, mathematicians understand as a geometry (see chapter by Krause). Different emphases in the definition of terms can be identified in individual disciplines (e.g., “quantification” and “rationalization,” see chapter Prutsch; “quality” and “quantity,” see chapter Lauer & Pacyna). This book also attempts to help overcome these language barriers by explaining central terms of the various projects in a glossary (see chapter Glossary). Numerous multiple definitions from the perspective of the different disciplines can also be found here. Apart from these substantive difficulties, there are also formal obstacles in the areas of funding, promotion of young researchers, career planning and publications. The majority of research funds are still provided for projects within the boundaries of the disciplines, for example through grants from specialist societies. Similar difficulties arise in the promotion of young researchers and career planning. Due to the traditional organisation of universities into faculties, it is often not easy to ensure a proper doctoral procedure for newly recruited doctoral students with interdisciplinary projects. The same applies to the appointment of professorships, which are usually advertised by a single department and often do not adequately recognize interdisciplinary achievements. Finally, the publication cultures of the various disciplines are so different that interdisciplinary publications are often only found in niche journals (see chapter on communication cultures). Fortunately, there are positive developments in all these areas: For example, the German Research Foundation (DFG) and the Federal Ministry of Education and  Cellier, J. & Cocaud, M. (2001). Techniques informatiques. Rennes: Presses universitaires de Rennes; Cibois, Ph. (2000). L’analyse factorielle: analyse en composantes principales et analyse des correspondances. Paris: Presses universitaires de Paris; Cibois, Ph. (2007). Les méthodes d’analyse d’enquêtes. Paris: Presses universitaires de Paris; Cohen, D. & Rosenzweig, R. (2006). Digital history: a guide to gathering, preserving, and presenting the past on the Web. Philadelphia: University of Pennsylvania Press; Genet, J.-Ph. & Zorzi, A. (Eds.). (2011). Les historiens et l’Informatique: un métier à réinventer. Actes de l’atelier Rome: École française de Rome; Guerreau, A. (2004). Statistiques pour historiens. Document de travail Ecole des Chartes; Saly, P. (1997). Méthodes statistiques descriptives pour les historiens. Paris: Armand Colin. 3

8

J. Hass and A. Novokhatko

Research (BMBF) are providing more and more funding lines that are explicitly designed for interdisciplinary cooperation. As an example, over a long period (2005–2015), the BMBF launched the “Bernstein Centres”,4 a large-scale funding initiative for computational neuroscience, which represents a genuinely interdisciplinary field of research. In some cases, this initiative is still being continued, for example in transnational initiatives between Germany, the USA, Japan, France and Israel (see chapter by Mier & Hass). Graduate schools are being established at universities, where doctoral students from different disciplines, but with a similar thematic orientation, receive joint support. Professorships are also increasingly being advertised by interdisciplinary centres in which several institutes of a university are involved. And publication cultures are also changing; new journals are emerging that focus more on topics than on disciplines, and even long-established editors are increasingly opening up to the fringe areas of their disciplines. We devote a separate chapter in this book to the topic of publication cultures (see chapter on communication cultures). Paradoxically, however, not all of these developments actually lead to increased and improved interdisciplinarity. Particularly in the case of interdisciplinary funding measures, too little emphasis is often placed on exchange between disciplines, so that often there is only a juxtaposition of projects on one topic that are planned anyway, and real synergies remain limited. The “Kolleg für den wissenschaftlichen Nachwuchs” (WIN-Kolleg) of the Heidelberg Academy of Sciences and Humanities is an outstanding example in this respect, as it was designed from the outset to promote genuine exchange between disciplines.

2.2 Success Factors According to the experience of the members of the WIN Kolleg, the success of interdisciplinary research depends decisively on the skills and personality of the researchers involved, as well as their motivation to actually overcome the boundaries of their own discipline, and on open and successful communication between the participants. Here, we formulate some concrete success factor that have helped participants in their projects: • Formulation of a common goal which, despite the interdisciplinarity, corresponds to the requirements of each of the disciplines involved  Information can be found on the website of the Bernstein Network Computational Neuroscience: http://www.nncn.de/. 4

Introduction

9

• Exchange of ideas and methods between disciplines • General openness and curiosity for other disciplines, willingness to engage with fundamentally different ways of thinking • Good communication skills—especially active questioning and listening and the ability to present one’s area of expertise in a comprehensible way • Last but not least, a certain humility, i.e., acknowledgment of the fact that the possibilities of one’s own discipline are not sufficient to deal with a certain question In addition to these more personal factors, it is also necessary to further improve the institutional conditions mentioned in the above section. This includes the promotion of the above-mentioned skills and characteristics among young scientists as well as the creation of occasions where scientists from different disciplines can engage in conversation, e.g., the organisation of symposia on interdisciplinary ­topics.

Brief Presentation of the Contributions
Joachim Hass and Roxana Halbleib

Science and humanities divide the world, but they also unite it. This attitude is the only one that brings progress and has the goal of creating an understanding of the world. It is reflected in our collaboration for this volume. Each project brings its own particular view of the roles that the concepts of “number,” “measurement,” and “pattern” play in understanding the world in each discipline considered. As different as these views are, there are also many similarities. In particular, three basic sets of themes can be identified that connect many of the individual contributions: first, the juxtaposition, the fusion, and the cross-fertilization of qualitative and quantitative approaches; second, the application, the interpretation, and the critical classification of mathematical and computational methods; and third, the quantification of different areas of life and science as a subject of inquiry. Below, we present a brief summary of the contributions to this volume, with a focus on the common themes mentioned above and the different approaches taken by each individual project.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). The authors have subsequently revised the text further in an endeavour to refine the work stylistically and in terms of content.
J. Hass (*), Faculty of Applied Psychology, SRH University Heidelberg, Heidelberg, Germany, e-mail: [email protected]
R. Halbleib, Institute of Economics, Faculty of Economics and Behavioral Sciences, University of Freiburg, Freiburg, Germany, e-mail: [email protected]
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024
M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_2


The interplay of qualitative and quantitative methods is already central in the first chapter, Counting and Narrating: A Methodological Dialogue of Medieval Literature and History, by Claudia Lauer and Jana Pacyna. The chapter begins with the cultural-historical relationship between “counting” and “narration” (“Zählen und Erzählen” in German), which is even more tangible for the culture of the Middle Ages than for us today. The joint project of the two scholars makes use of two prominent approaches and procedures of medieval literary and historical studies, historical narratology and historical network analysis, among others, and thereby explores the scope of and correlations between quantitative and qualitative comprehension of the world. Their central results are a new theoretical-methodological perspective of literary and historical studies under the focus of qualitas and quantitas, the elaboration of an understanding of “measurement” for the humanities, as well as a more precise and differentiated establishment of the humanities, which calls for a clear questioning of the traditional dichotomous separation of the humanities and the natural sciences. In the chapter Does Klio count, too? Measurement and understanding in historical science, Andreas Büttner and Christoph Mauntel describe history as traditionally more of an understanding, interpretive science, which originally also sought to distinguish itself from the natural sciences that were on the rise during the nineteenth century. While Büttner and Mauntel also deal with the significance of quantitative methods in modern history (“cliometrics”), the focus of their project is on the investigation of the historical transformation processes of an increasingly quantitatively oriented comprehension of the world in the Latin-Christian Middle Ages. While Büttner’s project deals with the measurement of sovereign grace using money and thus the monetarization of the political order, Mauntel investigates the significance of empirical quantification in the assessment of the world. In doing so, it becomes clear that from the twelfth/thirteenth century onwards, quantification gained more and more importance within medieval culture—a development that, according to the authors, provided decisive impulses for the further history of Europe. The project of Stylianos Chronopoulos, Felix Maier and Anna Novokhatko takes us even further back into the past. In their chapter Quantitative data and hermeneutic methods in the Digital Classics, the scholars from the fields of ancient history and classical philology present the results of their collaboration with regard to measuring and understanding the world through quantitative investigations of texts and corpora with the help of digital tools. The increased use of digital corpora and tools in both disciplines is changing the way quantitative data is collected and how it is combined with data and arguments obtained through hermeneutic methods. Digital tools allow for a much more detailed and extensive analysis of text features, their relations to each other, and their relations to data outside the text. This requires modeling texts and corpora as well as producing and adjusting algorithms based on the text features and the particular research question, which requires a profound knowledge of the text or corpus under investigation.
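
To give a concrete, if deliberately simplified, impression of what such quantitative text data can look like, the following minimal sketch counts word frequencies and a simple lexical-richness measure over a small collection of texts. It is not taken from the project itself; the toy corpus, the tokenisation rule and the chosen features are invented purely for illustration, and a real Digital Classics workflow would adapt each of these steps to the corpus and the research question.

```python
from collections import Counter
import re

# Toy corpus: in a real setting this would be a curated, critically edited
# digital corpus, not three invented snippets.
corpus = {
    "text_a": "arma virumque cano troiae qui primus ab oris",
    "text_b": "arma arma gravi numero violentaque bella parabam",
    "text_c": "cano cano cano",
}

def tokenize(text: str) -> list[str]:
    # Deliberately crude tokenisation; projects adjust this step to the
    # language, orthography and editorial conventions of their corpus.
    return re.findall(r"[a-z]+", text.lower())

for name, text in corpus.items():
    tokens = tokenize(text)
    counts = Counter(tokens)
    # Type-token ratio as one simple, interpretable quantitative feature.
    ttr = len(counts) / len(tokens) if tokens else 0.0
    print(name, "most frequent:", counts.most_common(2),
          "type-token ratio:", round(ttr, 2))
```

Even such trivial counts only become meaningful once they are related back to the texts by interpretation, which is precisely the interplay the authors describe.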


Thus, not only do the results of quantitative analysis influence the interpretation of a text; conversely, the processes and tools that yield quantitative data are also shaped by the knowledge gained through interpretation. The three researchers argue that this fact has an enormous impact on how the humanities conceive their object of research: not only the text or corpus under study should be central to the research process, but also the tools used to collect data, the nature of those tools, and the characteristics of the data collected. This is especially true given that the texts studied in the ancient sciences often provide only a "fragmentary picture" due to their long transmission history, which must be supplemented by reconstruction hypotheses.

Chris Thomale, in his chapter Metaphors and Models: On the translation of knowledge into understanding, also addresses the use of language, but with a focus on academic language as a medium of research. He intends to make more visible, and to evaluate critically, the role that metaphors and models play in the acquisition, formulation, and communication of knowledge. Moreover, this initially linguistic and philosophy-of-science consideration is explored using concrete research questions, especially from the social sciences. In his chapter, the author presents a dialectic of academic language which, on the one hand, strives for a form that avoids ambivalence, opaque valuation or aestheticization, and, on the other hand, uses more complex, figurative forms of expression such as metaphors or models. He pays particular attention to the significance of the use of metaphors and models in interdisciplinary contexts between the natural sciences and the humanities. Using the example of the ongoing debates about so-called "surrogate motherhood", he also highlights the socio-political relevance of the sciences reassuring themselves about their use of language.

While the interplay of qualitative and quantitative methods in text studies is the subject of the chapters by Chronopoulos, Maier and Novokhatko as well as by Lauer and Pacyna, Hanjo Hamann and Friedemann Vogel also use and question the possibilities of digital text analysis. In their chapter Critical mass: Aspects of a quantitatively oriented hermeneutics using the example of computational legal linguistics, they describe their collaboration in the fields of linguistics, law and computer science to develop a legal reference corpus. A critically edited and digitally processed collection of legal texts from various fields of law serves as the basis for semi-automated studies from legal, linguistic and social science perspectives. This approach leads to elementary problems of the scientific production of meaning and knowledge and, thus, ultimately to the question of the adequate and legitimate procedures of scientific work. In particular, the authors focus on the tension between traditional subjective introspection and more recent corpus and computational linguistic methods of text analysis. Their project attempts to bridge this supposed opposition between qualitas and quantitas by using computer-assisted pattern recognition to help identify higher-level systematics and schematic cross-connections.


The authors argue that such patterns enable researchers to recognize set pieces of legal language behavior in texts and thereby to question this language behavior specifically. In this way, individual cases are integrated into the context of legal textual work and thus become intersubjectively ascertainable and scientifically comprehensible.

As in the chapter by Büttner and Mauntel, Matthias Valta's chapter Quantifying and operationalizing the commensurability of international economic sanctions focuses on quantification as a subject of research. Specifically, he explores to what extent law can use empirical findings from economics and political science to test the commensurability of international economic sanctions more accurately and objectively through quantification. Economic sanctions are often the means of choice to influence the policies of other states that pose dangers or even violate the law. The influence of such sanctions, however, is indirect and associated with negative consequences for the population. Political science and economics theorize about the impact of sanctions and try to substantiate these theories with economic and political data. At the same time, they measure the effects of sanctions using economic indicators. Legal science, on the other hand, examines the legitimacy of economic sanctions and, in particular, their commensurability: is the purpose of the sanction of such a nature and weight that it justifies the consequences of the sanction? Valta's chapter raises the question to what extent quantifications derived from models based on many assumptions entail a gain in objectivity, and whether the formalized argumentative structure of legal action requires such quantifications at all. For political science and economics, the chapter uses an example to show the hurdles that must be overcome, and the possibilities that open up, for such quantitative results to become legally relevant.

Markus J. Prutsch also uses quantification as a research subject. In the chapter Science, numbers and power: contemporary politics between the imperative of rationalization and the dependence on numbers, he deals with the role of science and, in particular, of quantifying methods in contemporary politics. The author argues for a parallel triumph of "rationalization" in (political) science and political practice and lists central reasons and arguments for "scientific" politics. He discusses the growing complexity and diversification of modern politics, as well as the "charm of numbers" promising objectivity and verifiability, which goes hand in hand with demands for transparency and traceability of political action. The risks of a highly quantified politics are also addressed, including the neglect of normative and qualitative aspects and the promotion of a technocratic understanding of politics, which can lead to an even greater distance between citizens and politics. He identifies a fundamental challenge in the fact that politics and (quantifying) science follow different systemic and functional logics that cannot easily be reconciled. The author therefore pleads for a connection between politics and science characterized by a balanced distance that is beneficial to both. Politics should not be pursued in isolation from scientific knowledge, nor should science replace political debates and social discourse.


A sensitive issue in current political and scientific discourse is how to avoid and deal with financial crises, which cause extremely large monetary losses affecting the economy as a whole, but also each individual. These losses show how unrealistic the model assumptions of existing financial risk measures are and how weak the information content of the underlying data used for financial risk estimates and predictions is. The quantitative analysis of such discrepancies is one of the subjects of econometrics, which uses mathematical and probabilistic methods and statistical inference to test economic theories empirically. The chapter Measuring and understanding financial risks: An econometric perspective by Roxana Halbleib first provides a general introduction to the field of econometrics and its related concepts (such as causality and statistical significance) and shows how it differs from related fields such as statistics and data mining. The chapter then focuses both on the quantitative analysis of financial risks and on the limitations of non-quantitative analysis, from the perspective of an investor who needs to decide on both the implementation of risk measures and the direction of investment.

While the projects of Chronopoulos, Maier and Novokhatko and of Hamann and Vogel explore the potential of novel algorithms for the analysis of texts, the computer-assisted evaluation of genetic data has long been standard practice. The cost reduction in genome sequencing is leading to an explosion in the number of decoded human genomes in research. The findings of Fruzsina Molnár-Gábor and Jan O. Korbel described in their chapter Regulation of new challenges in the natural sciences—Data protection and data sharing in translational genetic research, beyond providing insights into molecular commonalities of diseases, may also enable the development of new diagnostic procedures, therapies or preventive measures for serious diseases such as cancer. However, the quantity and diversity of the data sets to be analyzed also lead to qualitatively new challenges for data protection law. The authors show how the introduction of a normative approach in biotechnology can contribute to addressing these new ethical-legal challenges of genetic research. Moreover, the chapter aims to show how the collected data can be protected in the context of new data processing technologies and how the rights and interests of the individuals involved can be respected. Molnár-Gábor and Korbel argue that self-regulatory solutions can only be convincing if they are consistent with existing ethical-legal standards and legal regulations.


The chapter by Daniela Mier and Joachim Hass, Psychology and Physics: A non-invasive approach to the functioning of the human mirror neuron system, also deals with the interplay between life sciences and information technology. In their chapter, the authors describe how their two disciplines complement each other in measuring and understanding the human mirror neuron system. This system is thought to provide the neural basis for interpersonal understanding. However, mirror neurons cannot be measured directly in humans, so indirect measurements such as functional magnetic resonance imaging (fMRI) must be used instead. Mier and Hass argue that this poses a double challenge for understanding: on the one hand, the processes in the mirror neuron system cannot be measured directly and thus elude precise understanding. On the other hand, this limitation also prevents us from understanding how interpersonal understanding occurs in the brain. To resolve this dilemma, the authors combine the knowledge and methods of experimental neuroscientific psychology and computational neuroscience, which applies the mathematical methods of physics to model brain functions. Both disciplines aim to capture the neuronal processes that underlie human thinking, feeling and behavior by means of measurements and quantifications and thus make them accessible to understanding. The combination of the two disciplines makes it possible to represent processes in mathematical equations that have not been measured directly. According to Mier and Hass, a significantly deeper understanding of the human mirror neuron system becomes possible in this way.

In their chapter Reflections and perspectives on the research fields of thermal comfort at work and pain, Susanne Becker and Marcel Schweiker present the results of a cooperation between two disciplines that seems unusual at first glance, namely psychology and architecture. The two researchers share a common interest in quantifying human reactions to different temperatures, similar to the way Mier and Hass's project uses methods from different disciplines to better understand social processes such as empathy. The goal of this collaboration is to quantify the process of adaptation in the context of global thermal discomfort, local pain stimuli, and other confounding factors. This, however, requires methods that allow changes in the perception of temperatures and pain to be expressed in numbers: adaptation as a dynamic process changes the meaning of a number, e.g., a room temperature. This gives the number, and its interpretation as a person's subjective evaluation, a special role. Although the two research fields have a similar approach to counting and measuring, they work with different models and approaches. The project offers the prospect of developing new methods, usable in each of the disciplines and beyond, through the combination of different tools and the targeted manipulation of the influencing factors.


In the chapter Measuring and understanding the world through geoinformatics, Bernhard Höfle uses the example of natural hazard analysis to address the role of digital geodata, which are increasingly being collected by non-experts via the Internet and with smartphones, in the scientific analysis of natural hazards. Geospatial data are digital data such as web content, satellite imagery, or photos taken with a smartphone that have a spatial reference, i.e., that can be placed at a location in the world. The growing amount of such geospatial data has led to a new branch of geography: neogeography. The chapter addresses the question of whether neogeography can be used to capture and measure local, implicit knowledge about natural hazards in order to develop locally adapted precautionary measures that take cultural values and perceptions into account. Another question is how counting, measuring and understanding geographical phenomena in the context of natural hazards change through a stronger integration of non-experts in data collection. Furthermore, it is investigated whether the stronger role of the individual in neogeography gives qualitative-subjective approaches more weight than the dominant quantitative approaches of scientific natural hazard analysis.

The relationship of computer science and art history to measuring and understanding the world, and the importance of numbers in both disciplines, is explored by Peter Bell and Björn Ommer in the chapter Measuring art, counting pixels? The collaboration between art history and computer vision goes beyond recognizing patterns. The field of computer vision within computer science, which quantitatively describes digital images in terms of color and brightness values, is first contrasted with art history, which initially seems far removed from measuring and counting. As Bell and Ommer point out, even within this humanities discipline some examples point to the relevance of numbers, such as the dating of a work of art, the interpretation of images with the help of number symbolism, or the numerical weighting of composition—for example, the proportion that the horizon occupies in an image, or the use of the "golden section". In their project, the two authors merge the disciplines by using the large amount of digital image data available to select the artworks under investigation and by employing computer vision, a tool that facilitates and accelerates the analysis of the images according to similarities or reception. As an example, the project examined approximately 3600 images from the Prometheus Image Archive with the keyword Crucifixion. The automatic image search recognizes the number of duplicates, reproductions or variations of an artwork and thus allows conclusions to be drawn about the popularity and frequency of an image motif. As Bell and Ommer argue, the collaboration of art history and computer vision reveals the potential of a quantitative art history and an interplay of paradigms, even though the quantitative recording of a work of art seems to deviate from the hermeneutic methods of art history.


While several of the previously described chapters use the possibilities of computer science and especially of computer simulation, Mathias J. Krause devotes his chapter Through numerical simulation to scientific knowledge entirely to the importance of numerical simulation in the modern sciences and describes how, in interaction with mathematical model building, it enables scientific knowledge, especially in the natural sciences, but also beyond. Numerical simulations are used for the approximate prediction of facts under strictly defined conditions. They are based on mathematical models and represent causal relationships in the form of algorithms and computer programs. They have become ubiquitous in everyday life, for example in weather or growth forecasting. In politics, they serve as an important tool for decision-making. In the scientific context, however, simulations are much more than a forecasting tool. Similar to experiments, they also serve to build models themselves and thus enable the elucidation of causal relationships. Numerical simulations are now considered an indispensable tool for gaining knowledge in many scientific disciplines. Krause therefore expects that numerical simulation will continue to gain in importance rapidly and will produce further surprising findings and technologies.

The various, in part very different, perspectives on measurement in the disciplines considered here have led to fruitful interdisciplinary research collaborations, as well as to individual research projects with a strong interdisciplinary character, all of which aim at a better understanding of the world. Despite many differences, the disciplines considered also show many similarities in their approach to "counting" and "measurement". These similarities allow for the development of a common language between the different disciplines, which has proven to be extremely important for the successful collaboration on this book.

Main Part

Counting and Narrating: A Methodological Dialogue of Medieval Literature and History

Claudia Lauer and Jana Pacyna

Narration is virtually indispensable for our understanding of the world. Narratives lend structure to what was previously disordered: they capture found or given events and knowledge, place them in logical contexts of order and action, and thus make the world tangible and representable. As a central cultural technique, narration is closely related to numerical knowledge and the act of counting, as shown by the direct conceptual relations between the expressions 'counting' and 'narrating' in German, English and Dutch, but also in French, Italian and Spanish. This not only points to the fact that aspects of quantitative and discursive information-giving are essentially related. 'Counting' and 'narration' also share the idea of world appropriation as a "linguistic and poetic act"1: in both cases, information is dissected, arranged and summarized, but at the same time also selected, referred to and assigned significance. For the culture of the Middle Ages, the cultural-historical relationship between 'counting' and 'narrating' is even more tangible than it is for us today.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

1 Wedell, M. (2011). Zählen. Semantische und praxeologische Studien zum numerischen Wissen im Mittelalter. Göttingen: Vandenhoeck & Ruprecht, p. 13.

C. Lauer (*)
University of Mainz, Mainz, Germany
e-mail: [email protected]

J. Pacyna
Heidelberg Academy of Sciences and Humanities, Heidelberg, Germany

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024
M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_3


This is evidenced not only by the historical semantics of Middle High German zal, zeln and erzeln, as well as by historical notational material from the ninth to the fifteenth centuries. Literary narration and the medieval understanding of history as narrative also show close conceptual proximity to the practice of counting. What is invoked here is a cultural-historical 'peculiarity' that has so far remained largely unexamined in research. At the same time, however, methodological requirements for the indexing and analysis of sources arise for medieval literary and historical studies that go beyond a traditional attribution of qualitative work to the humanities and quantitative work to the formal and natural sciences. Starting (1) from a brief insight into the two disciplinary cultures and their fundamental theoretical-methodological commonalities, the work of our project will show how, in the field of tension between 'counting' and 'narration' (2), not only the respective medieval object of investigation itself, but also two prominent approaches and procedures of the humanities can be given a new perspective. On this basis, and this is what our project would like to illustrate in addition (3), implications, meanings and correlations of quantity and quality can be identified, especially in the interdisciplinary methodological dialogue, which allow us to question the apparent difference between work in the humanities and work in the natural sciences.

1 Theoretical Approach and Methods in Medieval Literary and Historical Studies

Anyone who studies German-language literature and the European history of the Middle Ages is immersed in a world that goes back up to a thousand years. On the one hand, one encounters stories and works that follow their own rules and logic and whose 'otherness' is not immediately apparent. On the other hand, by 'looking back', cultural-historical ideas and ways of acting become tangible that continue to fascinate to this day and that are ultimately "by no means completely foreign".2 The exploration of this medieval world, which is foreign and familiar at the same time, is the goal and task of two academic disciplines, both of which are genuinely based on the 'past narrative' in speech and writing, i.e. primarily on linguistic text sources, and each of which has developed its own subject culture within the framework of the academic constitution of disciplines in the eighteenth/nineteenth century: medieval German studies and medieval history.

2 Kiening, C. (2005). Alterität und Methode. Begründungsmöglichkeiten fachlicher Identität. Mitteilungen des Deutschen Germanistenverbandes 52, pp. 150–166, here p. 162.


1.1 Medieval German Studies

The study of German-language literature of the Middle Ages is the subject of medieval German studies, which sees itself as a branch of German linguistics and literary studies and as the 'study of the Middle Ages' from its beginnings in the eighth to the fifteenth/sixteenth century. The subject can look back on more than 150 years of scholarly history, during which not only have the focal points of the subject changed and shifted again and again, but the subject has also developed a variety of methodological approaches. Starting from narrower linguistic and literary-historical problems and edition-philological work, two broad turns can be identified. From the nineteenth to the twentieth century, under the influence of Wilhelm Dilthey's "Introduction to the Humanities" (1883), there was initially a striking transition away from the positivism adopted from the natural sciences with its almost exclusively formal-analytical way of working. This transition was aimed at an understanding of historically conditioned contexts of meaning and knowledge and introduced a 'hermeneutic turn', which has shaped the self-image of medieval German studies to this day. In the further course of the twentieth century, the subject sharpened its understanding of the 'alterity' of medieval literature and its 'seat in life' by integrating theories and methods from other disciplines, for example from linguistics or sociology, and finally underwent a further fundamental change with the so-called cultural turn of the 1990s by increasingly opening up to the content and methods of cultural studies. In this context, not only have traditional subject areas been given a new perspective, but the subject has also turned, beyond traditional philological procedures, to modern and postmodern methods such as discourse analysis and research into emotions, gender and performativity, which increasingly include non-literary texts and focus on interdisciplinarity. In numerous areas, the application of modern terms and concepts could thus be argued for and contoured more precisely with regard to gradually or contextually different historical phenomena. In this way, epochal interfaces and caesurae that set modernity one-dimensionally against pre-modernity were fundamentally called into question. More recently, with the subject joining the so-called Digital Humanities, a further development has been emerging in the discipline: increasingly, computer-based methods are being used to create digital texts, especially editions, reference works and dictionaries, but also—similar to what is described in the chapter by Chronopoulos, Maier and Novokhatko—to open up questions and knowledge


relevant to literary studies (e.g. on authors, works, genres, style and epochs) anew with the help of innovative information-technological and empirical-statistical methods.

1.2 Medieval Historiography

Since antiquity, historiography has been used to convey past events to the present. It was not until the nineteenth century, however, in the course of the development of so-called 'historicism', that one can speak of a specialist discipline with a concrete methodology oriented towards source criticism and hermeneutics (Leopold von Ranke; Johann Gustav Droysen; Wilhelm Dilthey). In his study of "English History" (1877), von Ranke described the historian's working method programmatically as necessarily impartial and objective: the historian should show, on the basis of concrete individual cases, how things actually 'were'. However, these claims formulated by historicism for historical science were vehemently questioned in the further course of the history of the discipline. Two fundamental directions of criticism can be discerned. Firstly, the one-sided orientation of historicism towards a history of the state and of the individual, which argumentatively aimed at the nation-state of the nineteenth century, was countered at the turn of the twentieth century by social and economic historical approaches. It was no longer the individual person or the state, but social structures, fields of tension, and social and economic conflicts that were understood as the primary impetus for historical processes. And due to the more intensive theoretical-methodological orientation towards other disciplines such as law, politics, economics and the social sciences from the 1960s onwards, empirical-statistical methods of analysis, in particular, came to stand alongside descriptive-hermeneutic event historiography. Secondly, in the course of the so-called linguistic turn, the historical cognitive process of traditional historiography was fundamentally put up for discussion in confrontation with semiotic, discourse-analytic, and literary approaches. In this context, reference was made to the comprehensive textuality of historical work and the inherent danger of a subjectively shaped processing of the past by the historian, and ultimately the construction of history in the process of recognizing and interpreting past 'worlds of images and ideas' was also emphasized.


Currently, in the course of the so-called digital turn, a shift and merging of these two theoretical-methodological fields of criticism is becoming apparent. Digital history, which draws on the methods of the digital humanities, is not only characterized by the use of digital media and computer-supported empirical-statistical analysis procedures in the generation, communication and documentation of research results. Experiments are also being conducted here with new forms of historical source processing and interpretation, which imply a process of abstraction and an increased structural access to the sources, but which also fundamentally attempt to include an awareness of the subjectively shaped nature of reconstruction in the analysis and communication of data results.

1.3 Methodological Commonalities

The brief insight into the subject cultures of medieval German studies and history reveals specific historical developments and turning points in each discipline. At the same time, however, two fundamental commonalities can be identified on a theoretical-methodological meta-level. First, both disciplines are united by Wilhelm Dilthey's attempt to establish the epistemological foundation of the humanities against scientific 'explanation' through experience, experiment and calculation, and by the self-understanding of a culture of interpretation from the standpoint of historical hermeneutics, which is dedicated to the 'understanding' of human action and behaviour. Secondly, both subjects show that despite this basic hermeneutic trait, there is no single medieval literary or historical method. Both disciplines are characterized by a complex diversity and a broad spectrum of methods, ranging from formal-analytical and empirical-statistical to hermeneutically interpretive procedures, thus repeatedly undermining dichotomous attributions that define qualitative work primarily as humanistic and quantitative work programmatically as formal-scientific.

2 How Do We Count and Interpret? Historical Narratology and Historical Network Analysis as an Example

What emerges as common methodological characteristics for medieval German studies and historical studies raises concrete questions: How can work in the two subjects be defined on a theoretical-methodological meta-level?


How do quantitative-numerical and qualitative-interpretative approaches relate to each other? Where and how does one merge with the other? And where do irreconcilable differences remain? This can be exemplified concisely, and this is where our project begins, by looking at the cultural-historical relationship between 'counting' and 'narration'. In the following, we will not only show how medieval literary and historical narration, as an object of investigation in its own right, shows close conceptual proximity to the practice of counting. At the same time, a selected literary and a historical example will be used to demonstrate how the two disciplines count and interpret within the framework of their respective indexing and analysis of sources, and how, not least, two prominent approaches and procedures of medieval German studies and historiography can thus be given a new perspective on a theoretical-methodological meta-level: historical narratology and historical network analysis.

2.1 Literary Counting and Narrating: Historical Narratological Perspectives

Whether Heinrich von Veldeke's "Eneasroman", which takes up Vergil's Roman founding myth, the "Aeneis", and its Old French transposition, the "Roman d'Eneas", the "Rolandslied" of the priest Konrad, which is based on the Old French "La Chanson de Roland", Hartmann von Aue's "Iwein", which brings Chrétien de Troyes's "Yvain" into Middle High German, or the "Song of the Nibelungs", whose subject matter and legendary material go back in part to the time of the migration of peoples—the German-language poets of the Middle Ages tell their stories according to what they have found and what they have been given. Unlike in modern poetry, in which invention, i.e. originality and creative innovation, characterizes the understanding of art, the authors take up written or oral material and texts and process this material according to medieval rhetorical-poetic rules of composition, at the center of which are two doctrines: dilatatio materiae and abbreviatio materiae. Both procedures, in contrast to ancient poetics, have a primarily quantitative function. Thus dilatatio materiae aims at the expansion of the subject matter, for example by using synonyms, paraphrase or comparison to expand the original quantitatively. The abbreviatio materiae, in contrast, focuses on quantitative shortening, for example by avoiding repetitions, by allusion or by syntactic abbreviation.

3 Cf. Worstbrock, F. J. (1999). Wiedererzählen und Übersetzen. In W. Haug (Ed.), Mittelalter und Frühe Neuzeit. Übergänge, Umbrüche und Neuansätze (pp. 128–142). Tübingen: Niemeyer.


The so-called medieval 'retelling'3 thus formalistically relies on procedures that can be defined in the sense of Middle High German mezzen as 'comparing, considering and examining'. At the same time, these procedures show a close relationship to the basic operations of arithmetic. Thus dilatatio materiae reads like the addition of natural numbers in the sense of a summation of individual factors, and abbreviatio materiae like the corresponding inverse operation: a process that, like subtraction, subtracts one number from another and yields a difference as a result. In contrast to mathematics, however, literary 'retelling' does not present calculations based on fixed logical rules and aiming at an objectively valid result. The procedures of processing were used specifically to create meaning. That is, the poets use procedures of measurement and calculation, but in the space of the imaginary they do not stop at summation and subtraction. It is above all a matter of discretion, i.e. the 'correct' mental grasping, comprehension and evaluation of the given potential of meaning, and of the corresponding own calculation in dealing with the material. This calculation is not completely absorbed in mathematical-logical counting; it unfolds meanings, which are emphatically subjective and relative, through a qualitative-discursive re-functioning of quantitative-numerical calculation. Medieval authors, it turns out, work according to patterns in the sense of given models and thereby open up for literary narration a complex field of tension between measurement and discretion as well as calculation and reckoning. At the same time, however, they also work with patterns across genres. They follow a so-called 'schematic narrative' that is virtually "characteristic of pre-literate and semi-literate societies such as the aristocratic culture of the Middle Ages"4: a hero who sets out on dangerous adventures, a king who sets out in search of a wife who is his equal, or a protagonist who tries to gain an advantage by means of cunning, lies and deception. Medieval novels and epics always tell the same story, but within the individual works they always do so in a fascinatingly different way. A close look at (historical) narratology, which emerged as the 'science of narration' at the beginning of the twentieth century and has since found its way into medieval German studies as a central method for understanding medieval narration, shows how the tension between counting and interpreting also shapes literary studies.

4 Schulz, A. (2012). Erzähltheorie in mediävistischer Perspektive. In M. Braun, A. Dunkel & J.-D. Müller (Eds.), Berlin / Boston: De Gruyter, p. 184.


From the perspective of the history of science, the first step is to identify the common structures inherent in narrative texts and to reconstruct them in the sense of a pattern. Influenced by formalist-structuralist theories of narrative, various procedures have emerged in this context that are committed to "a systematic approach to the phenomenon of narrative".5 Starting from a larger number of texts, the experience and behaviour described within narratives is not only considered in a qualitatively highly simplified way from the perspective of functional 'plot units' or plot-bearing elements (events, happenings). Based on qualitative comparisons and the recognition of repetitions, it is also 'broken down', by means of various inductive or deductive structuring and classification methods, into narrative components, which are defined in terms of quantifiable 'action fixed points'6 or 'stations'7 as the paradigmatic equivalent of a concrete element. The result is thus the reconstruction of a "plot or narrative sequence typical of several or even all narrative texts"8: an abstract narrative pattern that goes beyond the concrete individual text and expresses itself in a generally uniform repetition and a fixed, countable sequence of certain modes of thought or behaviour (an illustrative sketch of such a decomposition follows at the end of this section). Within the framework of historical narratology, the scholarly elaboration of patterns is grosso modo connected with two 'cultures of interpretation'. On the one hand, some approaches start from the identity of patterns and interpret them systematically in general dimensions: for example, with regard to fundamental 'deep structures' of narrative, which show not least that medieval narrative can always be traced back to "a limited number of basic narrative types"9; or, especially since the so-called cultural turn, with regard to themes that are repeated in terms of content, such as aventiure, minne, battle and feast, whose semantics are typically culturally shaped.

5 Scheffel, M. (2011). Formalistische und strukturalistische Theorien. In M. Martínez (Ed.), Handbuch Erzählliteratur. Theorie, Analyse, Geschichte (pp. 106–114). Stuttgart / Weimar: Metzler, here p. 106.
6 Cf. e.g. Schmid-Cadalbert, C. (1985). Der ‚Ortnit AW‘ als Brautwerbungsdichtung. Ein Beitrag zum Verständnis mittelhochdeutscher Schemaliteratur. Bern: Francke.
7 Cf. e.g. Pörksen, G. and U. (1980). Die ‚Geburt‘ des Helden in mittelhochdeutschen Epen und epischen Stoffen des Mittelalters. Euphorion 74, pp. 257–286.
8 Martínez, M. (1997). Erzählschema. In K. Weimar et al. (Eds.), Reallexikon der deutschen Literaturwissenschaft. Vol. 1 (pp. 506–509). Berlin / New York: De Gruyter, here p. 506.
9 Müller-Funk, W. (2007). Die Kultur und ihre Narrative. Eine Einführung. 2nd, revised and enlarged ed. Vienna / New York: Springer, p. 45.


On the other hand, some analyses and interpretations focus on the concrete realization, i.e. on the respective content and the linguistic-aesthetic design of the patterns. Here it is not only the individual texts that increasingly become the focus of interest. In addition to commonalities, attention is also directed to deviations and differences in comparison with other realizations of a pattern or, following a cultural-studies understanding of narrative, in relation to extra-literary "cultural patterns"10 that have acquired a certain permanence of cultural routine in the sense of social practices and norms. In contrast to the systematically general interpretations, the particular creative shaping of medieval literary narration becomes tangible here: a tableau of different forms and meanings of narrative building blocks, which can be varied and dynamically combined in the field of tension between increases and reductions in complexity. At the same time, however, the incommensurable, the specific 'individuality' of narrative, which is neither completely divisible nor objectifiable, is also revealed: the concrete literary 'sum' of the individual building blocks is never completely analogous to other texts, nor does their design fully coincide with extra-literary patterns of behaviour and cultural concepts. And so Iwein, the brave hero in Hartmann von Aue's romance, advances to become a celebrated Arthurian knight in spite of violations of the norm on his adventurous journeys, while the bold and strong Siegfried in the "Song of the Nibelungs" dies in spite of his successful exploits. King Rother's cunning courtship, which is crowned with success, also differs from that of King Mark in Gottfried of Strasbourg's "Tristan", which, although all the rules are observed, leads to chaos. And finally, Genelun's wiles, lies and deceptions in the "Song of Roland" by the priest Konrad read differently from Tristan's comparable actions in Gottfried: the one is a negative "dazzling work of evil", the other a positive "triumph of cleverness".11

10 Cf. Fulda, D. et al. (Eds.) (2011). Kulturmuster der Aufklärung. Ein neues Heuristikum in der Diskussion. Göttingen: Wallstein.
11 Semmler, H. (1991). Listmotive in der mittelhochdeutschen Epik: Zum Wandel ethischer Normen im Spiegel der Literatur. Berlin: Erich Schmidt, Preface.
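To make the quantitative side of this pattern extraction more tangible, the following minimal sketch shows, in Python, how texts that have been decomposed by hand into 'stations' could be compared for a shared, countable sequence. The station labels and the three 'texts' are invented placeholders, not results of actual narratological analyses, and the comparison shown (a longest common subsequence) is only one of many conceivable formalizations.

    # Purely illustrative sketch: invented 'station' sequences, not actual analyses.
    from itertools import combinations

    # Hypothetical decomposition of three texts into countable stations.
    texts = {
        "text_A": ["departure", "first_adventure", "crisis", "second_adventure", "reintegration"],
        "text_B": ["departure", "courtship", "crisis", "second_adventure", "reintegration"],
        "text_C": ["departure", "first_adventure", "crisis", "downfall"],
    }

    def longest_common_subsequence(a, b):
        """Return the longest sequence of stations occurring in both texts in the same order."""
        if not a or not b:
            return []
        if a[0] == b[0]:
            return [a[0]] + longest_common_subsequence(a[1:], b[1:])
        left = longest_common_subsequence(a[1:], b)
        right = longest_common_subsequence(a, b[1:])
        return left if len(left) >= len(right) else right

    # Pairwise shared sequences: candidates for an abstract narrative pattern.
    for (name_a, seq_a), (name_b, seq_b) in combinations(texts.items(), 2):
        shared = longest_common_subsequence(seq_a, seq_b)
        print(name_a, name_b, shared, len(shared))

In an actual project, such counted overlaps would only be the starting point for the qualitative interpretation of identity and deviation described above.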


2.2 Counting History: Historical Network Analyses

History in the Middle Ages is, as Isidore, Bishop of Seville, succinctly put it around the year 620, the narration of events through which we learn about what happened in the past.12 Similar to literary 'retelling', medieval history thus not only has a narrator who records orally or in writing, in the form of chronicles, annals or biographies, what has occurred or been handed down. As narratio, history is also meaning-giving and, in a special way, intentional. Only that is probed and combined which is attributed overriding importance and which is worthy of being passed on to posterity: the emergence and development of peoples and countries which create present and future identity, successful wars and battles which transmit the success and fame of a ruling dynasty, or the lives and actions of individual emperors, kings and popes who, with their deeds, have become role models or cautionary examples. Comparable to literary narration, historical narration is thus itself located in significant fields of tension between measurement and discretion as well as calculation and reckoning. A closer look at the example of the English investiture conflicts and the theoretical-methodological procedures of historical network analysis shows how, similar to the case of 'schematic narration' and historical narratology, the scholarly analysis of history, too, can be characterized by counting and interpreting.

The investiture conflicts of the eleventh/twelfth century belong to those events of the European Middle Ages whose historically perceived value has led to a relatively broad narrative tradition. The dispute over the investiture of clerics by secular rulers was not only virulent in the Holy Roman Empire under Henry IV (associated with the proverbial 'Walk to Canossa' in 1077), but also pervaded the reigns of the English kings William I, William II and Henry I.

12 Historia est narratio rei gestae, per quam ea, quae in praeterito facta sunt, dinoscuntur. Isidore of Seville (1911), Etymologiae sive origines libri XX, ed. W.M. Lindsay, 2 vols. Oxford: Clarendon Press, Liber I, 41. Goetz, H.-W. (1985). Die ‚Geschichte‘ im Wissenschaftssystem des Mittelalters. In F.-J. Schmale (Ed.), Funktion und Formen mittelalterlicher Geschichtsschreibung (pp. 165–213). Darmstadt: Wissenschaftliche Buchgesellschaft, here pp. 186 f.


At the same time, in the field of tension between king and pope, as the sources attest, another man gained in importance: Archbishop Anselm of Canterbury (c. 1033–1109), who was consecrated in 1093 and whose ecclesiastical-political activities are recorded in Eadmer's "Vita Anselmi" and "Historia novorum" as well as in Anselm's collection of letters.13 The interpretation of the role and value of Anselm's ecclesiastical policy, however, is hotly disputed in research. The theses range from a heroic exaltation of the archbishop, brilliant not only in theological-philosophical but also in ecclesiastical-political terms and superior to the king, to a comprehensive devaluation of the ecclesiastical-political actions of a primate of the English Church who was completely overtaxed in the Investiture Controversy.14 Due to the strongly isolated concentration on Anselm as an individual on the one hand, and the intentional shaping of the sources in their creation and transmission on the other, this discussion has meanwhile reached an argumentative impasse.

A new way out of this impasse is offered by an approach that has recently gained increasing prominence in historical scholarship: (historical) network analysis. The aim of network analysis is to generate models under certain conditions, on the basis of which structures can be identified and patterns can be worked out and interpreted.

13 Vollrath, H. (2007). Der Investiturstreit begann im Jahr 1100. England und die Päpste in der späten Salierzeit. In B. Schneidmüller & S. Weinfurter (Eds.), Salisches Kaisertum und neues Europa. Die Zeit Heinrichs IV. und Heinrichs V. (pp. 217–244). Darmstadt: Wissenschaftliche Buchgesellschaft, here pp. 219–221. Krüger, T. M. (2002). Persönlichkeitsausdruck und Persönlichkeitswahrnehmung im Zeitalter der Investiturkonflikte. Studien zu den Briefsammlungen des Anselm von Canterbury. Diss. Phil. Fak., Univ. Freiburg (Breisgau), pp. 30–34.
14 Ibid., pp. 71–82, 231–233; Vaughn, S.N. (1987). Anselm of Bec and Robert of Meulan. The Innocence of the Dove and the Wisdom of the Serpent. Berkeley: Univ. of California Press, pp. 132–140, 149 ff., 214 ff.; Southern, R. W. (1990). Saint Anselm. A Portrait in a Landscape. Cambridge: Univ. Press, pp. 233 ff.; Niskanen, S. (2011). The Letter Collections of Anselm of Canterbury. Turnhout: Brepols, pp. 29–30.


The claim is not to represent "reality", but to collect and systematize sufficient data in order to model a network with the help of computer-based procedures, which should enable the determination of, e.g., certain (social) structures and related (behavioral) patterns.15 Based on structural action theory16 (sociology) and graph theory17 (mathematics), the first step is to systematically collect data from the sources on the network actors and on the relationships existing among them, and to organize them, e.g., in Excel tables. If one is dealing with a specific set of sources such as a collection of letters, it makes sense to first record all persons mentioned in the letters as network actors and to mark existing interaction potentials and interactions between them as well as their frequency and intensity. These data are then imported into the network software. The network actors can additionally be described in more detail by entering attributes, for example origin, status, gender, office, location and life dates. The same applies to the relationships between them; these can also be defined in more detail qualitatively, for example as blood relationship, spiritual relationship, feudal relationship, friendship or enmity.18 After populating the so-called matrices with the collected information, the network software provides, based on different algorithms, two kinds of results, which can be analyzed in different ways: (1) numerical values and (2) visualizations in the form of diagrams and graphs. In the algorithmic calculation and visualization of network graphs, not only the relationally conditioned positions of the actors in the network are taken into account (centrality, group membership, role patterns), but also the directionality and intensity of their relationships and other qualities.

15 Lemercier, C. (2012). Formale Methoden der Netzwerkanalyse in den Geschichtswissenschaften: Warum und Wie? Österreichische Zeitschrift für Geschichtswissenschaften 23(1), pp. 16–41, here pp. 30–36; Gramsch, R. (2013). Das Reich als Netzwerk der Fürsten. Politische Strukturen unter dem Doppelkönigtum Friedrichs II. und Heinrichs (VII.) 1225−1235. Ostfildern: Thorbecke, pp. 21–34.
16 Ibid., pp. 21–34. Jansen, D. (2006). Einführung in die Netzwerkanalyse. Grundlagen, Methoden, Forschungsbeispiele. Wiesbaden: Verlag für Sozialwiss., pp. 11–33.
17 Ibid., pp. 91–99. Mathematically, graphs are defined as a set of nodes (e.g. actors, companies, texts, artifacts) and a set of edges (connections between the nodes). It is assumed that the connections between the network elements, the nodes, are neither purely regular nor purely random. Instead, networks are determined by the particular distribution in occurrence of their elements or nodes (degree distribution), by a cluster coefficient (measure of clique formation), and by a particular community structure or distinct hierarchical structure.
18 Cf. Rosé, I. (2011). Reconstitution, représentation graphique et analyse des réseaux de pouvoir au haut Moyen Âge: Approche des pratiques sociales de l’aristocratie à partir de l’exemple d’Odon de Cluny († 942). REDES—Revista hispana para el análisis de redes sociales 21, pp. 199–272, here pp. 206–209.
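To make the workflow described above more tangible, the following minimal sketch shows, in Python, how recorded actors and relations might be turned into numerical values and a visualization. It assumes the freely available libraries networkx and matplotlib; the actor names, relation types and letter counts are invented placeholders for illustration, not data drawn from the Anselm letter collections or any other source.

    # Illustrative sketch only: invented actors and relations, no source data.
    import networkx as nx
    import matplotlib.pyplot as plt

    # (1) Relations as they might be recorded in a table: actor, actor, attributes.
    relations = [
        ("Anselm", "Eadmer", {"type": "spiritual", "letters": 12}),
        ("Anselm", "William II", {"type": "conflict", "letters": 4}),
        ("Eadmer", "William II", {"type": "acquaintance", "letters": 1}),
    ]

    # (2) Build the network and attach attributes to actors and relations.
    G = nx.Graph()
    G.add_nodes_from([
        ("Anselm", {"office": "archbishop"}),
        ("Eadmer", {"office": "monk"}),
        ("William II", {"office": "king"}),
    ])
    for a, b, attrs in relations:
        G.add_edge(a, b, **attrs)

    # (3) Numerical values: e.g. centrality measures per actor.
    print(nx.degree_centrality(G))
    print(nx.betweenness_centrality(G))

    # (4) Visualization as a network graph.
    nx.draw_networkx(G, with_labels=True)
    plt.show()

The two kinds of output produced here, numerical values and a graph drawing, correspond to the two result types mentioned above; dedicated network software provides them in more elaborate form.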


Following the structural theory of action, it is assumed that the interests and resources for action of individuals/actors or groups depend on their embeddedness in society as a social structure or network. The position in the network, and the interests determined by this position, form the 'constraints' of action. Thus, it is not primarily norms, values, motives or goals of individuals/actors and groups that influence their behaviour, but rather their embeddedness in the social network and the resulting social capital, since this generates individual and corporate access to resources and cooperation, but also competitive situations. However, it must be taken into account that the behaviour of individuals/actors and groups can in turn have a modifying effect on the network.19

In network analysis, we are thus dealing with two forms of patterns: (1) a social structure (network), which is generated by counting relations or even concrete interactions between individuals/actors, and (2) possible patterns of behaviour, which can be read from the algorithmically calculated position of the individuals/actors in the network and the resulting options and constraints for action. However, the counting of relational interactions or the algorithmic calculation of structural positions alone is not sufficient to arrive at a result that is actually interesting and meaningful. In addition to quantitative counting, it also requires a qualitative evaluation of the relationships and, in addition to quantitative calculation, a qualitative classification and contextualization of the options for action.

By applying this methodological approach to the events of the English investiture conflicts of the eleventh/twelfth century, Anselm, whose behaviour as a central key figure has so far been interpreted controversially and mostly detached from social structures, can be grasped as part of his network of social ties. This network not only depicts his sphere of influence, but also allows decisive conclusions to be drawn about the determining framework conditions of his actions. With a structural view of Eadmer's "Vita Anselmi" and "Historia novorum" as well as of the controversial Anselm letters, the intentional shaping of the sources inherent in their composition and transmission is partially circumvented, thus opening up a new perspective on a research problem currently caught up in the problem of transmission. This becomes possible because the focus is initially less on the individual source, the individual event or the individual person and their actions, and more on the relational connections between the acting persons and their resulting structural position within the social network, with effects on their options and constraints for action in relation to the events.

19 Jansen, D. (2006). Einführung in die Netzwerkanalyse. Grundlagen, Methoden, Forschungsbeispiele. Wiesbaden: Verlag für Sozialwiss., pp. 11–33.
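The two forms of patterns distinguished here can also be illustrated with a small, self-contained sketch (again in Python with networkx, using an invented example network): the counted relations yield a structural pattern of the network as a whole, while the computed position of a single actor can, with due caution, be read as a hint at options and constraints for action.

    # Illustrative sketch only: an invented example network, no historical data.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # (1) Pattern as social structure: counted relations between actors.
    G = nx.Graph([
        ("A", "B"), ("A", "C"), ("B", "C"),   # a tightly knit group
        ("D", "E"), ("D", "F"), ("E", "F"),   # a second group
        ("C", "D"),                           # a single tie bridging both groups
    ])
    print(dict(G.degree()))                        # number of relations per actor
    print(nx.average_clustering(G))                # degree of clique formation
    print(list(greedy_modularity_communities(G)))  # group structure

    # (2) Pattern as options and constraints for action: positions of single actors.
    # High betweenness marks a broker position between otherwise separate groups;
    # whether this translates into actual room for manoeuvre remains a matter of
    # qualitative interpretation of the sources.
    print(nx.betweenness_centrality(G))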


3 Counting and Narrating in Interdisciplinary Methodological Dialogue: Results

'Counting' and 'narrating'—what is closely related in the cultural history of the Middle Ages, and what can be contoured more precisely for literary and historical narration in the fields of tension between measurement and discretion as well as calculation and reckoning, also determines in a special way the methods of description and interpretation of medieval literary and historical studies. The theoretical-methodological meta-reflection on historical narratology and historical network analysis, using the examples of 'schematic narration' and the role of Anselm of Canterbury in the English Investiture Controversy, reveals significant commonalities here, which can be evaluated synergetically in the interdisciplinary dialogue and formulated as results under three central aspects.

3.1 Structures and Patterns

Both approaches and methods are united by the endeavour to identify structures and to elaborate and interpret patterns. In doing so, historical narratology and historical network analysis rely not only on quantitative-numerical counting and the extraction of specific narrative elements and data from larger sets of texts. In the process of recognizing structures, as well as in reconstructing patterns as aids to understanding and in interpreting them, they also rely on qualitative-discursive methodologies of 'narration'. Both approaches thus show, similar to their objects of investigation, i.e. literary and historical narrative itself, the same necessities of measurement and discretion, calculation and reckoning: one observes, compares, examines and creates meaning not through simple and objective logical causal derivations ('x explains itself from y'), but either through formal abstract substitutions and/or semantic attributions ('x equals y') or contextually by establishing and exhibiting relations ('x is related to y in such and such a way'). Literary and historical work is thus, seen in this exemplary light, neither purely 'quantitative' nor purely 'qualitative' in the usual scientific sense: it logically and systematically inquires into facts and data, which, however, in counting and interpreting are also fundamentally subject to qualitative 'understanding'. What comes into view is not only a close interplay of quantitative and qualitative procedures, which are correlated with each other in gradually different scopes and 'translation processes' depending on the knowledge to be gained.


In the field of tension between logical analysis, semantic attribution and systematic contextualization, an interdisciplinary conceptual-analytical toolkit also crystallizes that undermines traditional scientific 'paradigms'20 of quantitative and qualitative methodologies with activities such as observing, describing, equating, distinguishing and relating.

3.2 Result and Gain of Knowledge

For both historical narratology and historical network analysis, the recognition of structures and the elaboration and interpretation of patterns are equally process-oriented and result-oriented. On the one hand, one works with literary texts and written sources and thus attempts to explore human action in its various forms and dynamics; on the other hand, concrete results are also presented. For both literary and historical studies it holds that the result is always an interpretation: an offer of explanation and understanding that is objectively comprehensible through the logical and precise application of methods, but whose plausibility is subjective. Medieval German studies and historiography thus do not present research results that can be measured and generalized in the 'hard' sense, that can be grasped with numbers and a linear idea of progress. But neither are their findings the products of arbitrary individual discretion. Since both disciplines are committed to their objects of investigation, i.e. to the value of literary and historical narration as a special form of human understanding of the world and creation of meaning, and since they contribute with their various methods of 'counting' and 'narrating' to the "controlled and reflected unfolding of the alterity of their object",21 their results can be justified and verified both subjectively and objectively. In the interdisciplinary dialogue, it is thus not only views of so-called 'soft' humanities results that are 'corroborated'. As a valid gain in knowledge, the differentiating 'understanding' of the past ultimately once again undermines common scientific ideas and definitions of quantitative and qualitative methodologies.

3.3 Humanities vs. Natural Sciences?

20 Cf., for example, Cook, Th. D. & Reichhardt, C. S. (Eds.). (1979). Qualitative and quantitative methods in evaluation research. Beverly Hills et al.: Sage, p. 10.
21 Kiening, C. (2006). Gegenwärtigkeit. Historische Semantik und mittelalterliche Literatur. Scientia Poetica, vol. 10 (pp. 19–46), here p. 22.


torical studies work out elements and data that can be numerically measured and quantified. At the same time, however, they also tell a story: they discourse and interpret the quantitative material qualitatively and, with their interpretations, thus contribute to the creation of individual, social and cultural meaning. In the interdisciplinary methodological dialogue, implications, meanings, and correlations of quantity and quality crystallize here, which, following recent research in the theory of science, allow us to continue to clearly question traditional definitions of quantitative and qualitative methodologies. Two findings can be noted. First, “the relationship between quantity and quality is by no means as simple and antagonistic as it is portrayed in public.”22 In this context, not only literary and historical counting and narrating as an object, but also precisely the comparably close quantitative and qualitative interplay and interplay within the framework of historical narratology and historical network analysis show: qualitas and quantitas, as they go back in terms of the history of thought to Aristotle’s writing on categories, are not in each case independent scientific methods. Rather, they read as epistemological approaches that allow humans to perceive and structure their environment, and which are thereby always categorically connected with ‘relation’. Thus, not only is a conceptually more precise distinction to be made between ‘method’ and so-called ‘modes of perception’. As a second result, attributions that assign qualitative work to the humanities and quantitative work to the formal and natural sciences also become questionable. It becomes apparent that such a separation, as it arises from the increasing mechanization of the sciences in modernity, not only fuels a one-dimensional discussion of science policy that is oriented towards concepts such as ‘objectivity’ and the resulting ‘usefulness’ of science. It is also unjustifiably reductionist with regard to the human capacity for knowledge: it conceals the fact that every scientist, whether from the humanities or the natural sciences, works with quantitative and qualitative categories of perception and development in the theoretical-methodical recording and investigation of his or her subject matter, and indeed must necessarily do so for a comprehensive understanding of the world and reality. The synergetic evaluation of the interdisciplinary dialogue under these three central aspects thus reveals, as can be pointedly reformulated once again in conclusion, the specific threefold gain of our project for the topic of “measuring and understanding the world through the sciences”. First, it makes clear that, in addi Neuenschwander, E. (2003). Einführung. In Ders (Ed.), Wissenschaft zwischen Qualitas und Quantitas (pp. 1–32). Basel et al.: Birkhäuser, here p. 2.


tion to the traditional ‘understanding’ of human action and behavior from the standpoint of historical hermeneutics, ‘measurement’ can also be applied to work in the humanities. At the same time, however, it also becomes clear that this ‘measuring’ is defined differently from the ‘classical’ understanding of the natural sciences, which is formally based on numbers, in that it is always accompanied by a ‘measuring’ in the epistemological process of developing linguistic text sources, i.e. it cannot do without a simultaneous ‘interpreting’. And thus, thirdly, the interdisciplinary methodological dialogue on ‘counting’ and ‘narration’ finally also brings to light the fact that in the “measuring and understanding of the world through the sciences” qualitas and quantitas are not independent scientific methods in each case, but represent different modalities of perception and development, which are correlated with each other in the context of humanities research methodologies in gradually different scopes and ‘translation processes’ and call for the traditional dichotomous separation of work in the natural sciences and the humanities to be viewed in a much more differentiated and discriminating way.

Does Klio Count, Too? Measurement and Understanding in Historical Science

Andreas Büttner and Christoph Mauntel

“History is the narration of events through which we learn about what happened in the past.”1—With these words, Isidore, Bishop of Seville, answered the question of what history is around the year 620. The core of any preoccupation with history is thus a historical interest (of whatever kind); the means of communication is the narrative, which can take place both orally and in writing. Isidore’s definition, which is now almost 1400 years old, has lost none of its relevance. For

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

1 Historia est narratio rei gestae, per quam ea, quae in praeterito facta sunt, dinoscuntur. Isidore of Seville (1911). Etymologiae sive origines libri XX, ed. W. M. Lindsay, 2 vols. Oxford: Clarendon Press, Liber I, 41. Goetz, H.-W. (1985). Die ‚Geschichte‘ im Wissenschaftssystem des Mittelalters. In F. J. Schmale (Ed.), Funktion und Formen mittelalterlicher Geschichtsschreibung (pp. 165–213). Darmstadt: Wissenschaftliche Buchgesellschaft, here pp. 186 f.

A. Büttner (*) Department of History, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany e-mail: [email protected] C. Mauntel Department of Medieval History, Eberhard Karls University Tübingen, Tübingen, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_4


the study of history, this leads to the question to what extent and by what means historical knowledge is possible to obtain, and what kind of knowledge is involved.2 Isidore’s observation that history is ‘told’ seems banal at first, but it has a certain explosive power: history, or our knowledge of history, exists only in language; history needs a narrator. A narrator, however, pursues an intention, decides on the themes of his narrative, on arcs of tension and punchlines. A narrator creates meaning. According to today’s understanding of historical science, however, there is no ‘meaning’ inherently inscribed in history that historians merely have to decipher and uncover. Even past events are not simply ‘there’. Our knowledge of the past is based on a wide variety of sources (texts, images, objects) that need to be interpreted. In contrast to the storyteller, the historian follows methodological guidelines that make his interpretation of historical events comprehensible and verifiable. In contrast to the belief in objectivity and ‘truth’ of older research, the view that historical science cannot offer direct and unambiguous access to the past is increasingly gaining ground. Rather, it develops approaches and perspectives that only ever take shape through the historian’s questions. Current historical scholarship thus oscillates between the insight that there can be no definitive history and the claim to provide reliable knowledge about past times.

2 Historian Jörg Baberowski also begins his book on the history of historiography with a very similar phrase: “History is a narrative of the past that owes itself to the interest of the present” (own translation); Baberowski, J. (2005). Der Sinn der Geschichte. Geschichtstheorien von Hegel bis Foucault. Munich: Beck, p. 9.

1 Methods and Approaches

It was and still is disputed within the science of history in which way the investigation of the past should take place and which methods should be applied. After the science of history had emancipated itself more and more from theology and jurisprudence over the course of the centuries, it saw its scientificity called into question in the nineteenth century by the emerging natural sciences. Not least the impossibility of testing hypotheses by means of experiments, owing to the nature of the object of investigation, led to the attempt to develop a specifically historical scientific methodology. Johann Gustav Droysen contrasted the historical ‘understanding’ with the scientific ‘explaining’, a separation that was further sharpened by Wilhelm Dilthey’s concentration on the self-understanding of the individual. Subsequently, Max Weber and others attempted to combine ‘understanding’ of inner motivation and ‘explaining’ as a causal derivation of external behaviour. Despite all methodological efforts, the scientific foundation of historical scholarship continues to be disputed: in 1973, the US historian and literary


scholar Hayden White provoked new debate with his emphatic reference to the narrative and poetic character of the discipline.3 According to his central thesis that “Clio also writes poetry”, historians in his view changed from being scholars into men of letters, which resulted in controversial debates and counter-designs,4 but often also in an unimpressed ‘business as usual’. More recently, as in other humanities, there has been a shift towards cultural studies. This is expressed in new approaches, questions and subject areas, but also in the renewed emphasis on Max Weber’s “conception of historical knowledge as empirically supported hypothesis-knowledge”, accompanied by a “constant ‘epistemological uncertainty’”.5

3 White, H. (1973). Metahistory. The Historical Imagination in Nineteenth-Century Europe. Baltimore, London: Johns Hopkins University Press. German: (1991). Metahistory. Die historische Einbildungskraft im 19. Jahrhundert in Europa. Frankfurt am Main: Fischer.

4 Evans, R. J. (1997). In Defence of History. London: Granta. German: (1998). Fakten und Fiktionen. Über die Grundlagen historischer Erkenntnis. Frankfurt am Main: Campus; Paravicini, W. (2010). Die Wahrheit der Historiker (Historische Zeitschrift. Beihefte N.F. 53). Munich: Oldenbourg.

5 Cf. Oexle, O. G. (2004). Historische Kulturwissenschaft heute. In R. Habermas (Ed.), Interkultureller Transfer und nationaler Eigensinn. Europäische und anglo-amerikanische Positionen der Kulturwissenschaften (pp. 25–52). Göttingen: Wallstein, here pp. 41, 46.

2 Counting and Measuring in Historical Research

Most memories of the school subject ‘history’ are associated with the anxious feeling that the lessons consisted mainly of memorising dates. The study of history is not, of course, exhausted in the stringing together of dates, even if its close relationship to numbers is evident. Years and dates are the basic framework of history, with which we can determine the chronological sequence of events—provided we know the exact date of an event. But approximate dating also allows us to place the individual phenomenon in a logical and stringently organized overall system that makes temporal coincidences and progressions clear: When Maximilien Robespierre was executed in Paris in 1794, Napoleon Bonaparte was 24 years old. The simple observation that event X followed event Y in time or took place at the same time, however, holds neither explanation nor insight. The fact that Charlemagne was crowned emperor on 25 December 800 explains neither the implications of this event nor the symbolic content of the date. Dating undoubtedly serves as (one) instrument for ordering past events, but the actual cognitive interest of historical scholarship is different: the historian is not concerned with mere chronology for its own sake, but with the diversity of human action, with historical contexts and developments. This self-image and the underlying methodology of historical scholarship were—as described—themselves


subject to change, which in view of the time-bound nature of research can never be complete and must always be open to new approaches. From the fundamental disputes about the meaning and goals of historical scholarship outlined above, it is already evident that there is no such thing as a single method of historical scholarship: Historical work is characterized by methodological pluralism, with different paradigms competing and reacting to each other. A good example of this is social and economic history, as it has developed since 1900 in both Germany and France. The background to its development was a widespread criticism of historicism, which was accused of focusing too much on an event-oriented history of politics and diplomacy, and thus on a few ‘great men’ who were supposed to have ‘made’ history. Instead, the focus was now on longer-term structures, institutions and processes, and the question was asked as to their formative forces. It was no longer the individual in its various manifestations that was to be examined, but the generalizable whole. This was accompanied by an increase in the importance of statistical methods and the study of serial sources, i.e. sources that can be recorded uniformly and in large numbers. Quantitative methods experienced a particular upswing with the increased use of electronic data processing since the 1960s; more recently, there has been a slight overall decline.6

An example of such a quantifying approach is the attempt to calculate historical homicide rates. Homicide rates indicate the number of annual homicides in relation to 100,000 inhabitants: the higher the rate, the more dangerous the location. Quantification provides comparability. For Warwickshire in 1232 a rate of 47 was calculated, whereas in Norfolk only 9 per 100,000 persons were said to have been killed violently. It could therefore be inferred that life in Warwickshire in 1232 was about five times as dangerous as in Norfolk. Such a juxtaposition is captivating in its supposed unambiguity, but only on the assumption that the record for both regions is equally good and accurate. This applies to an even greater extent to a supratemporal comparison, according to which the Warwickshire of 1232 would have been 59 times as dangerous as Germany (rate: 0.8) and about 10 times as dangerous as the USA in 2011 (rate: 4.7).7

6 For a study—characteristically quantitative—of the methods used in (economic) history, see Daudin, G. (2010). Quantitative Methods and Economic History. In F. Ammannati (Ed.), Dove va la storia economica? Metodi e prospettive secc. XIII–XVIII. Where is Economic History going? Methods and Prospects from the 13th to the 18th Centuries. Atti della “Quarantaduesima Settimana di Studi”, 18–22 aprile 2010 (Istituto internazionale di storia economica F. Datini, Prato. Serie 2: Atti delle settimane di studi e altri convegni 42) (pp. 453–472). Florence: Firenze University Press.

7 Cf. Bauer, J. (2011). Schmerzgrenze. Vom Ursprung alltäglicher und globaler Gewalt. Munich: Karl Blessing Verlag, p. 114 f.; slightly different figures in Brown, W. (2011). Violence in medieval Europe (The Medieval World). Harlow: Longman, pp. 3–5.
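The arithmetic behind such rates is simple, but writing it out makes the dependence on the underlying estimates explicit. The following minimal sketch is our own illustration: only the rates of 47, 9, 0.8 and 4.7 are taken from the figures quoted above, while the raw count and population figure at the end are invented placeholders.

```python
def homicide_rate(killings: int, population: int, per: int = 100_000) -> float:
    """Annual killings per `per` inhabitants."""
    return killings / population * per

# Rates quoted above (killings per 100,000 inhabitants per year).
warwickshire_1232 = 47.0
norfolk_1232 = 9.0
germany_2011 = 0.8
usa_2011 = 4.7

print(f"Warwickshire 1232 vs. Norfolk 1232: {warwickshire_1232 / norfolk_1232:.1f}x")   # ~5.2
print(f"Warwickshire 1232 vs. Germany 2011: {warwickshire_1232 / germany_2011:.1f}x")   # ~58.8
print(f"Warwickshire 1232 vs. USA 2011:     {warwickshire_1232 / usa_2011:.1f}x")       # ~10.0

# The same rate of 47 can result from very different absolute figures, e.g. from
# 24 recorded killings among an estimated 51,000 inhabitants; the uncertain
# population estimate therefore dominates the error of the whole comparison.
print(round(homicide_rate(killings=24, population=51_000), 1))  # 47.1
```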


The methodological problem behind the seemingly objective data is enormous: it begins with the necessary but uncertain estimation of historical population figures, leads to the question of how reliably and comprehensively the criminal records of earlier eras report killings, and ultimately also raises the source-critical question of what kind of ‘killings’ were recorded at all. Moreover, in 1232 a wound resulting from a tavern brawl may have been fatal, whereas today it would cause no further health difficulties after a visit to the local hospital. Even if these methodological problems could be eliminated, the comparison of murder rates would still not tell us anything about the respective perception, classification and evaluation of violence in different epochs, which, moreover, may differ depending on the social group. In short, there is not only the question of the security of the data basis, but also that of the validity of historical statistics. Empirical social and economic history (“cliometrics”) as a more quantitative branch of historical science can be paradigmatically contrasted with the approach of “dense description”: This, according to the US anthropologist Clifford Geertz, is concerned with the most comprehensive and detailed description possible of a set of facts, in which the researcher should be aware of his or her own role in the collection, selection and interpretation of the data. A microhistory resulting from this approach does not aim to make generalizable statements, but to be able to record a specific phenomenon as precisely as possible. Emmanuel Le Roy Ladurie’s study of the southern French village of Montaillou can serve as a paradigm for this approach.8 In 1320, this village was targeted by the Inquisition, whose detailed protocols of interrogations have been handed down to us. On the basis of this source material, Le Roy Ladurie succeeded in creating a lively and detailed panorama of village life at the time, which goes right down to the intricacies of the love affair between the attractive noblewoman Béatrice des Planissoles and her lover, the aggressively acting priest Pierre Clergue. But here, too, there are methodological pitfalls: can such inquisition protocols, which came about under the threat or after the execution of torture, be trusted? Furthermore, it has been criticized that the ‘thick description’, the focus on the individual case, raises the question of how representative the events in Montaillou were for southern France or even for fourteenth century France as a whole. Between the two poles of historical statistics and microhistory, which are presented here in a somewhat simplistic manner, there is a great variety of approaches that incorporate quantitative methods to varying degrees. One fundamental ques Le Roy Ladurie, E. (1975). Montaillou, village occitan de 1294 à 1324, Paris: Gallimard. German: (1980). Montaillou. Ein Dorf vor dem Inquisitor. Frankfurt am Main, Berlin, Vienna: Propyläen. 8


tion that ultimately affects almost every study is simply the quantity of sources analyzed. Many studies often build on a large (and sometimes quantifiable) number of sources. But their real goal is often not to examine how many sources support one and the same statement or interpretation. History does not reveal itself through the formation of majorities. Instead, it is often enough the individual case or the deviation from the norm that is more revealing for the historian. The historian Arnold Esch, who read about 30,000 petitions and evaluated about 2400 for his study of petitions to the papal curia in the fifteenth century,9 commented on his decision not to perform a statistical evaluation with the statement that the historian must weigh, not count.10 At the same time, weighting is inconceivable without quantifying contextualization, for otherwise the subjective impression of the historian would remain too determinative and the interpreted source would not be appreciated in its particularity or representativeness: “Quantification is an antidote to impressionism.”11

3 What Is a Scientific Result? How Do We Recognize and Interpret Patterns? If historical impressionism, i.e. representations based purely on individual impressions, is to be avoided, how then is a result of historical scholarship to be grasped? Since historical science deals with past events, it is precluded from working experimentally—every historical situation is unique, irretrievable and unrepeatable. Nor is it suitable for theorizing about future events, so that this way of verifying its theses is also denied to it. The starting point for the verifiability of historiographical analyses is the disclosure of the premises, the material basis and the procedure. Both the sources examined and the discussion of previous research require detailed evidence. One’s own approach, the selection of sources, and the chosen ­methodology must also be disclosed. The plausibility and comprehensibility of the argumentation provide the basis for the evaluation and reception of the theses by colleagues, who as a scientific community decide on the relevance, persuasiveness and con Esch, A. (2014). Die Lebenswelt des europäischen Spätmittelalters. Kleine Schicksale selbst erzählt in Schreiben an den Papst. Munich: Beck, p. 9. 10  Esch. A. (2014) Große Geschichte und kleines Leben. Wie Menschen in historischen Quellen zu Wort kommen. Heidelberg Academy Lecture, 18 November 2014, Heidelberg. 11  Herlihy, D. (1972). Quantification in the Middle Ages. In V. R. Lorwin & J. M. Price (Eds.). The Dimensions of the Past. Materials, Problems and Opportunities for Quantitative Work in History (pp. 13–51). New Haven: Yale University Press, p. 18. 9


nectivity of the researched question. In this respect, specific questions and source corpora are certainly reproducible, even if several analyses of the same phenomenon may well lead to different interpretations. The recognition and interpretation of patterns refers to significant similarities of events, structures or discourses. In this context, however, it is necessary to ask more precisely about the notion of pattern. It is true that statistics, as an evaluation result of large amounts of data, offer a more intersubjectively verifiable result than textual analyses. However, historical scholarship does not assume patterns inherent in the sources, which are merely to be recognized. Results of both historical statistics and source interpretation are conditioned by the research question and the selected material, and thus always reflect the historian’s point of view and interest. The latter is able to elaborate tradition-based and repeatedly reproduced forms of description and to make these the basis of interpretation. ‘Patterns’ in historical sources would in this case not be a hard criterion of knowledge, but descriptive devices of the historian. Often even the exception, the account or structure that deviates from the rule, is more interesting or meaningful to the historian. For example, the vast majority of medieval world maps called mappaemundi place Jerusalem as the centre of the world. This tradition followed religious precepts rather than geographical ones.12 The map of the Venetian Camaldolese monk Fra Mauro (created in 1459) is a significant exception: due to the higher geographical precision, Jerusalem is clearly moved from the centre to the west on his map. The monk probably anticipated the critical inquiries of his contemporaries and justified himself in an inscription: Jerusalem was still the center of the earth—not geographically, but because of the population density: Because Europe was more densely populated than Asia, Jerusalem, which had moved westward, formed the center of humanity.13 With this statement Fra Mauro stands alone in the cartographic tradition of the Middle Ages. For the historian interested in the world views of past epochs, however, his statement is no less relevant than the significantly higher number of maps that also see Jerusalem geographically as the centre. On the contrary: Fra Mauro’s map shows alternative ways of reasoning with which the monk (in a precisely unique way) skilfully sought to reconcile tradition and empiricism. History can also be understood, but not only through statistical averages.

12 Since Jerome, the central passage here is Ez 5:5: “Thus says the Lord God: This is Jerusalem. I have set it in the midst of the nations and the countries round about” (translated).

13 Falchetta, P. (Ed.). (2006). Fra Mauro’s world map. With a commentary and translations of the inscriptions (Terrarum orbis 5). Turnhout: Brepols, p. 381 (no. 1011).


4 Counting and Measuring—Quantification in Politics and Society in Medieval Europe The methodological application of fundamental epistemological considerations, as well as the individual content, is always bound to the facts to be investigated, the sources available and their nature. This concretization of work in the historical sciences will be presented below based on two projects that are united by a common approach: the focus is not so much on the researcher’s own counting and measuring, but rather on the significance of a strongly quantitatively oriented recording of the world in a historical perspective, more precisely in the Latin-Christian Middle Ages. Looking at the twelfth to thirteenth centuries, it can be seen at various levels that quantitative approaches gained in importance. During this period, for example, both coinage (and thus the circulation of coined money) as well as the credit system experienced an upswing. From the second half of the twelfth century, many noble families also began to compile fief records, which listed their possessions and rights in more or less detail. With regard to the recording of knowledge, the thirteenth century stands out for its numerous large encyclopaedic works, which collected, arranged and made the available knowledge newly comprehensible. The technical measurement of time also took on new forms and spread; the temporal hours of different lengths became the equinoctial hours of the same length in every season. These brief examples are situated on quite different levels and ultimately have no obvious connection. Precisely for this reason, however, they suggest how comprehensively the use of countable and quantifiable quantities achieved a new quality from the twelfth century onwards. Our projects examine this independently of one another in two different areas, the monetarization of the political order and the question of the significance of empirical quantification in recording the world. Money and power always exhibited a special proximity to each other. As money became increasingly available in the form of silver bars and minted coins in the course of the High Middle Ages, both its use and its perception in the political context changed. Especially in the Holy Roman Empire, abstract concepts such as the honour or grace of the ruler, but also the loyalty of the subordinate, now became measurable and tradable. If one wanted to regain the lost favor of the ruler, it was often not enough to perform a certain ritual of submission or to purify oneself through an oath, but the (material and/or immaterial) damage had to be made good in cash. Political advantages such as the bestowal of a principality could also be obtained through money. Such payments were taken for granted by contemporaries and must not be understood as corruption in the modern sense. Criticism arose when


the law was disregarded, and it usually took a directly disadvantaged party to bring this up. The amount of money was less relevant; the decisive factor for a moral rejection of such a gift was the unjust behaviour of the ruler caused by it.14 The payment flows were not only directed at the ruler. Thus, military support was no longer provided by the princes per se or in the implicit hope of a later reward; instead, they were expected to pay the costs directly or at least to share in the costs. Especially in situations in which the king found himself in a difficult political situation, he often had to spend large sums of money in order to commit himself to a sufficient princely following or to weaken that of his opponent. In addition, the attainment of the kingship itself increasingly required monetary payments in order to win the favour of the king’s electors. Also, in the political context, the sources no longer spoke primarily of “a lot”, “very much”, or “an incredible amount of money” as in earlier centuries, but increasingly of concrete amounts such as 1000 marks of silver or 4000 pounds of Pisan pfennigs. Comparative analysis of various case studies and source genres (charters, chronicles, letters, etc.) reveals that these figures do not stand as indefinite ciphers for high amounts, but have a strong connection to reality. Comparison reveals a certain “tariff system”: not explicitly formulated or normatively prescribed, but—as can be seen from certain relations—nevertheless present in political practice. Counting and measuring also played an important role in the surveying of the world. What role did the measurement and counting play as a descriptive method and explanatory model in describing the world in medieval Europe? For early medieval authors such as Lactantius († ca. 325) or Ambrosius († 397), it was initially a matter of differentiating Christian knowledge of the world from ancient (and thus pagan) approaches: “What do I care to measure the circumference of the earth, which the geometers have calculated to be 180,000 stadia? [...] Knowledge of the nature of the earth is better than knowledge of its extent”15, said Bishop Ambrose of Milan. However, such a view, directed solely towards qualitative understanding, did not prevail. Already in early medieval exegetical writings, numerical allegorism played an important role, and in practically oriented fields numerical indications were in any case indispensable. On the contrary, their influence seems to have grown. Authors of travel or pilgrimage reports, for example, repeatedly used quan Cf. Kamp, H. (2001). Geld, Politik und Moral im hohen Mittelalter. Frühmittelalterliche Studien, 35, pp. 329–347. 15  Ambrose. (1896). Hexameron. In K. Schenkl (ed.), Sancti Ambrosii Opera. 1. Exameron, de paradiso, de Cain et Abel, de Noe, de Abraham, de Isaac, de bono mortis (Corpus Scriptorum Ecclesiasticorum Latinorum 32/1) (pp. 1–261). Prague, Vienna, Leipzig: Verlag der österreichischen Akademie der Wissenschaften, here p. 208 (VI,2,7). 14


titative data to describe distances travelled or what was seen in a foreign country as vividly as possible. Especially when describing the holy places in Jerusalem, many pilgrims resorted almost excessively to numerical data, measuring for example the Via Dolorosa (1100 steps), the steps to Calvary (18) or the tomb of Jesus (one and a half fathoms long).16 In contrast to mystical approaches to sacred places that rely on inwardness, surveying appears to be a sober, objective form of description—since it is theoretically verifiable. Because of their apparent precision, the numerous travelers to Asia of the late Middle Ages also frequently use numerical data to indicate the number of days’ travel, to record the size of cities, or to describe the splendor and wealth of foreign rulers. Nevertheless, many of these indications remain vague, since units such as ‘daily journeys’ or ‘miles’ were not unambiguous indications as long as there was no uniform system of measurement. At the same time, some indications may well have been symbolic, such as when a city modeled on the heavenly Jerusalem (Rev 21:11–15) with 12 city gates is described. Moreover, already the contemporaries seem to have been aware of the fact that many travel reports were strongly characterized by many and above all seemingly large numbers: The fact that Marco Polo’s famous travelogue is called Milione in many Italian manuscripts can be interpreted as a reaction to its numerical data, often numbering in the thousands, with which the Venetian traveler sought to map the empire of the Mongolian Khan. Whether this was done in mockery or disbelief, however, cannot be said with certainty.17 In general, it can be said that quantification not only of money and the world, but also in many areas of society gained importance in the High Middle Ages. A close look at the accompanying processes and discourses, be they affirmations or contradictions, is able to provide insights into a society that had to weigh up the benefits and limits of quantification for itself. What we observe today on a ­scientific and non-scientific level as a turn towards the increased importance of number and counting was also experienced by the people of the High Middle Ages—less rapidly and comprehensively, but probably no less incisively due to a different starting position.

16 Reichert, F. (2001). Erfahrung der Welt. Reisen und Kulturbegegnung im späten Mittelalter. Stuttgart, Berlin, Köln: Kohlhammer, p. 146 f.

17 Cf. Münkler, M. (2015). Marco Polo. Leben und Legende. Munich: Beck, pp. 89–95, with a concise account of the different interpretations of the title Il Milione.


5 Conclusion

The science of history needs and uses numbers as a fundamental instrument for the treatment of its object of investigation when it is a matter of the chronological ordering of data (in the sense of dates). Nevertheless, it was and is always concerned with a deeper comprehension. The approaches chosen for this purpose can be summarized by the pair of terms ‘understanding’ and ‘explaining’, the opposition of which, however, gave way to a complementary approach. The process of an increased shift towards quantitative methods, which can currently be observed in many disciplines, does not seem to have such a strong impact in historical studies: there is still a wide variety of methods, which allow for classical working methods of close reading as well as statistical surveys or computer-aided semantic analyses.18 In addition, the different approaches have different significance depending on the period: statistical methods require a broad basis of data, which is rarely available for the pre-modern period. In individual cases, therefore, the methods have to be selected and weighted according to the research question and the source tradition. For a comprehensive opening up of the past, it is necessary to combine both currents, the in-depth look at individual sources as well as the collection of statistically usable mass records.

18 Cf. for instance the project on “Computational Historical Semantics” at Goethe University Frankfurt: Retrieved from http://www.comphistsem.org/home.html (31.10.2016), and the Latin Text Archive (lta.bbaw.de) affiliated with the Berlin-Brandenburg Academy of Sciences (01.03.2023).

Quantitative Data and Hermeneutic Methods in the Digital Classics

Stylianos Chronopoulos, Felix K. Maier, and Anna Novokhatko

In an essay published in Digital Humanities Quarterly in 2013, H.  Porsdam argues for the need to find the right balance between qualitative and quantitative research methods in the humanities, which are increasingly shaped by digitization. On the one hand, she starts from the observation that knowledge production and academic research are changing because of the application of digital technologies (§7); on the other hand, she notes that digitization in the humanities is accompanied by increased pressure to consider quantitative results as more “valid” as well as to disregard non-quantifiable aspects of research materials. Porsdam sees this pressure as problematic, especially because it undermines the

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). The present version has been revised technically and linguistically by the authors in collaboration with a professional translator. S. Chronopoulos (*) Seminar for Greek and Latin Philology, University of Ioannina, Ioannina, Greece e-mail: [email protected] F. K. Maier University of Zürich, Zürich, Switzerland e-mail: [email protected] A. Novokhatko Department of Classical Philology, Aristotle University, Thessaloniki, Greece e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_5


opportunity to create a c­ ombination of quantitative and qualitative approaches unique to the humanities in the digital age.1 Within the project “The digital turn in Classical Studies: Perception— Documentation—Reflection”, which has been funded by the Heidelberg Academy of Sciences as part of the WIN-Kolleg, we explored ways to approach and characteristics of this combination of qualitative and quantitative research methods, especially for Classics and Ancient History.2 To this end, we brought together scholars of antiquity and representatives of publishing houses and libraries, who have very different attitudes towards the use and production of digital tools and materials, and fostered an intensive dialogue focused on concrete research products and methods, and on more general questions. The following considerations concern the link between qualitative and quantitative results in ancient history and classical philology in the digital age, and focus on one subfield, namely that research which, through close reading and thick reading as well as distant reading of ancient texts, attempts to answer questions of literary interpretation, as well as to investigate historical questions closely related to the analysis of discourse. In the first part of this chapter, we introduce the three different modes of reading (close reading and thick reading as well as distant reading), the linking of these modes with digital corpora and tools, and the interrelation that arises between these modes when working with digital media. The second part discusses a working process based on corroborating qualitative interpretations with quantitative data, the collection of which has become significantly easier and, above all, more accurate through digital corpora, and then feeding these back into an additional inter Cf. Porsdam, H. (2013). Digital Humanities: On Finding the Proper Balance between Qualitative and Quantitative Ways of Doing Research in the Humanities. Digital Humanities Quarterly 7. Retrieved from http://www.digitalhumanities.org/dhq/vol/7/3/000167/000167.html. On digital resources for classical studies, see Babeu, A. (2011). “Rome Wasn’t Digitized in a Day”: Building a Cyberinfrastructure for Digital Classicists. Washington, DC: Council on Library and Information Resources. 2  See Chronopoulos, S., F. K. Maier and A. Novokhatko (Eds.): Digitale Altertumswissenschaften: Thesen und Debatten zu Methoden und Anwendungen, Heidelberg: Propylaeum, 2020 (Digital Classics Books, Band 4). On the cooperation between computer science and classical studies, see Solomon, J. (ed.), (1993). Accessing Antiquity: the computerization of classical studies. Tucson: University of Arizona Press, and Crane, G. (2004). Classics and the computer: an end of history. In S. Schreibmann, R. Siemens & J. Unsworth (Eds.), A Companion to Digital Humanities (pp.  171–191). Malden, MA: Blackwell; on computer science and history, Genet, J.-P. (1986), Histoire, Informatique, Mesure. Histoire & Mesure 1, 7–18; On the problems and frictions such cooperation can cause, see Rieder, B. & Röhle, T. (2012), Digital methods: five challenges. In D.  M. Berry (Ed.), Understanding digital humanities (pp. 67–84). London: Palgrave Macmillan. 1


pretive process. Awareness of the way in which the digital corpora used have come about and the characteristics and limitations of these corpora also play a significant role. The third part then discusses the role of interpretations obtained through qualitative methods for the production of digital tools and annotated digital corpora, which then enable the collection of quantitative data.

1 What Do We Count?

Linguistic statements can be analysed into individual elements at five levels—phonetic, phonological, morphological, syntactic and pragmatic—using theories and terminologies developed by linguistics. This means that simple counts can be made on linguistic statements, texts and corpora: according to the relative and absolute frequency of appearance of either an element or a combination of elements. Such counts are part of the everyday research practice of scholars working with texts; and basic tools for reading texts—such as dictionaries—rely on such results of counts. The application of statistical techniques and the inclusion of metadata, such as the time of origin of a text, its genre, or the relevant dialect, lead to rich and meaningful quantitative results that are ultimately based on the counting of textual elements in a text or corpus.3

An important part of working with texts is to detect similarities at different linguistic levels between passages in one or more texts. In order for these similarities to be established and expressed through counts and measurements, it is assumed that the elements to be compared should be analyzed through the same system of parameters. Varied linguistic expressions must be made comparable by applying a model to all the texts under study. Attempting, for example, to compare argumentative structures in different texts with each other by producing and applying quantitative data (and not, for example, by comparing some passages considered to be particularly characteristic) requires (a) that the term “argumentative structure” be precisely defined; (b) that, on the basis of this definition, sub-elements/parameters of such a structure and their relationships be determined; (c) that different, diverse linguistic expressions/linguistic realizations of these sub-elements be identified. Only the production of an abstract model on the basis of the existing concrete text material and the concrete research question allows a valid comparison of texts, so that similarities can be expressed quantitatively.

3 For a general introduction to the methods and aims of quantitative linguistics, see Köhler, R. (2005). Gegenstand und Arbeitsweise der Quantitativen Linguistik. In R. Köhler, G. Altmann & R. G. Piotrowski (Eds.), Quantitative Linguistik / Quantitative linguistics (pp. 1–15). Berlin: De Gruyter.
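The kind of count described at the beginning of this section can be sketched in a few lines. The snippet below is a deliberately naive illustration: the three-document corpus, its tokens, dates and genre labels are invented placeholders, not data from any real corpus.

```python
from collections import Counter

# Toy corpus: each document carries the metadata mentioned above (invented values).
corpus = [
    {"date": -430, "genre": "tragedy", "tokens": ["polis", "nomos", "polis", "theos"]},
    {"date": -420, "genre": "comedy",  "tokens": ["polis", "agora", "nomos"]},
    {"date": -380, "genre": "prose",   "tokens": ["nomos", "polis", "polis", "polis"]},
]

def frequencies(docs):
    """Absolute and relative frequency of every token in the given documents."""
    counts = Counter(tok for d in docs for tok in d["tokens"])
    total = sum(counts.values())
    return {tok: (n, n / total) for tok, n in counts.items()}

# Frequencies over the whole corpus ...
print(frequencies(corpus)["polis"])   # (6, 0.5454...)

# ... and over a metadata-defined subcorpus, e.g. only texts dated before 400 BC.
early = [d for d in corpus if d["date"] < -400]
print(frequencies(early)["polis"])    # (3, 0.4285...)
```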


The production of such models, the consideration of metadata in establishing similarities, and, above all, the attempt to establish different degrees of similarity are research practices that are indispensable for the production of textual interpretations.4 It should also be pointed out that the detection of parallels between textual elements (mostly at the syntactic and semantic levels) and their appropriate application is a fundamental method for meaning production in classical philology.5
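How such an abstract model can make diverse texts comparable may be sketched, very crudely, in code. The sub-elements chosen below (claims, supporting examples, counterarguments) and the overlap measure are purely illustrative assumptions, not an established model of argumentative structure.

```python
from dataclasses import dataclass

@dataclass
class ArgumentProfile:
    """A deliberately crude model: three sub-elements, each reduced to a count per text."""
    claims: int
    supporting_examples: int
    counterarguments: int

    def as_vector(self) -> list[int]:
        return [self.claims, self.supporting_examples, self.counterarguments]

def similarity(a: ArgumentProfile, b: ArgumentProfile) -> float:
    """Share of argumentative moves that two texts have in common (0..1)."""
    va, vb = a.as_vector(), b.as_vector()
    shared = sum(min(x, y) for x, y in zip(va, vb))
    total = max(sum(va), sum(vb))
    return shared / total if total else 1.0

# Two invented text profiles; only once both are described through the same
# parameters does a quantitative comparison become meaningful.
text_a = ArgumentProfile(claims=4, supporting_examples=6, counterarguments=1)
text_b = ArgumentProfile(claims=3, supporting_examples=2, counterarguments=4)
print(round(similarity(text_a, text_b), 2))  # 0.55
```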

2 How Do We “Read”? Distant reading, a term coined by literary researcher Franco Moretti, refers to the analysis of large textual datasets through statistical methods, as well as the graphical representations of the resulting quantitative data and the discussion of these representations. It is a way of finding out, illuminating and analysing certain properties of large text corpora on the basis of a research question, without actually reading the texts of these corpora. What is actually read and interpreted are the quantitative data and their graphical representations; at the same time, the statistical methods used and the way the corpus under study is constructed become crucial elements of research based on “distant reading”.6 The method of distant reading can be contrasted with the method that focuses on the analysis of one text or a small corpus of texts and examines them through close reading and thick reading, in which the formal aspects of the text and its context are opened up and taken into account and interconnected in the interpreta Cf. Lowe, W. (2003). The Statistics of Text: New Methods for Content Analysis. Paper Presented at MPSA 2003. Retrieved from http://ssrn.com/abstract=2210808, for Methods of statistical analysis for detecting similarities between text elements in large corpora. 5  On the importance of parallels for interpretation in classical philology, see Gibson, R. K. (2002). A Typology of “Parallels” and the Function of Commentaries on Latin Poetry. In R. K. Gibson & C. S. Kraus (Eds.), The Classical Commentary: Histories, Practices, Theory (pp. 331–357). Leiden: Brill. 6  F. Moretti explains his method and applies it in Moretti, F. (2005). Graphs, Maps, Trees: Abstract Models for a Literary History, London: Verso. See also the essays in the volume Moretti, F. (2013). Distant Reading. London: Verso. For theoretical reflection on the different modes of reading in the digital age, the essay by K. Hayles is very significant: Hayles, N. K. (2010). How We Read: Close, Hyper, Machine. ADE Bulletin 150, pp. 62–79. On recent developments and trends in distant reading methods and digital tools, with a particular emphasis on visualizations, see Jänicke, S., Franzini, G., Cheema, M. F. & Scheuermann G. (2015). On Close and Distant Reading in Digital Humanities: a Survey and Future Challenges. In R. Borgo, F. Ganovelli & I. Viola (Eds.), Eurographics Conference on Visualization. Proceedings of EuroVis-STARs, pp. 83–103. 4


tion process. Close reading refers to the method, practiced and promoted primarily by the New Criticism school, of focusing on the formal aspects of the text and on mechanisms that create ambiguity and making sense of them without taking into account extra-textual elements (e.g., the social and historical context, or the [presumed] intentions of the author).7 Thick reading refers to the research method used primarily by anthropologists of New Historicism; it is characterized by opening up a text (or event) by drawing out as fully as possible the threads that link it to its historical, social, political context.8 The two methods are not necessarily complementary—especially if one is concerned with a radical, ahistorical form of close reading—and were developed primarily to meet the needs of different genres of texts; they are, however, often combined by classical philologists and historians. While distant reading is only made possible through the use of digital tools and digitized text corpora and applies quantitative research procedures, close and thick reading is linked to analog media and qualitative interpretation procedures. However, digital tools and text corpora crucially support those interpretive procedures that rely on close and thick reading, and conversely, quantitative results and visualizations that come about through distant reading are heavily dependent on qualitative interpretive approaches both in their emergence and in their interpretation.9 Perceiving how these interrelationships are shaped and applied in the research practices of classical philologists and ancient historians can help to identify forms of balance between quantitative and qualitative methods.  On the concept of close reading and its practical application, see Brooks, C. (1947). The Well Wrought Urn: Studies in the Structure of Poetry. New York: Reynal & Hitchcock; See also Wellek, R. and Warren, A. (1949). Theory of Literature. New  York: Harcourt Brace; especially pp.  1–37 and pp.  139–284; The critical survey in Wellek, R. (1978). The New Criticism: pros and cons. Critical Inquiry 4, pp. 611–624; Contrasting distant and close reading in Bell, D. F. et al. (2009). Close reading: a preface. SubStance 38.2, pp. 3–7. 8  See especially Geertz, C. (1973). Thick Descriptions. Towards an interpretative theory of culture. In The interpretation of cultures: Selected essays (pp.  3–30). New  York: Basic Books; And the critical examination of the term in Love, H. (2013). Close reading and thin description. Public Culture 25, pp. 401–434. 9  On the juxtaposition between distant and close reading and the observation that the two types of reading can and do function complementarily in historical research practice, see Erlin, M. & Tatlock, L. (2014). Introduction: ‘Distant Reading’ and the Historiography of Nineteenth-Century German Literature. In Erlin, M. & Tatlock, L. (Eds.), Distant Readings: topologies of German culture in the Long Nineteenth Century (pp. 8–10). New York: Camden House; See also Mohr, J. W. et al. (2015). Toward a computational hermeneutics. Big Data & Society 2.2, pp. 1–8, for the connection between close and thick reading with methods of digital content analysis on large data corpora. 7


3 Close Reading Supported by Digital Corpora

The absolute and relative frequency of appearance of linguistic elements and their distribution across different genres, authors and epochs are the questions that scholars of antiquity systematically answer by collecting quantitative data. Linguistic elements concern all linguistic levels (phonetics, morphology, syntax, semantics, pragmatics), and the quantitative recording of their frequency of appearance and distribution is closely related to the precise understanding of particular passages in the text: a rare word, for example, must be recognized as such in order to adequately capture the meaning of a passage, and combinations of rarely and frequently appearing words must be recognized as well. The majority of Greek written sources surviving via medieval manuscripts are retrievable and searchable in their digitized form in the Thesaurus Linguae Graecae (TLG: http://stephanus.tlg.uci.edu); in addition, the texts are provided with important metadata (time of origin, genre, geographical origin of the author). The TLG allows a variety of linguistic analyses, which are an enormous contribution to the study of those texts. For example, if one wants to investigate from when the word demokratia actually appears as a political term in Athenian democracy (this, by the way, is much later than one would assume), a combined query for the nominal and verbal forms arising from the root “demokrat*” and covering this word field helps in two ways: First, this kind of query enables us to obtain information about the first occurrence in the literature that has come down to us; second, the statistically processed search result enables us to narrow down quite precisely the establishment of this term in public discourse. A ‘conventional’ search by reading through the relevant texts would—as goes without saying—mean an incomparably higher time commitment. The same would apply to other cases: If, for example, one wants to check whether Caesar’s proverbial clementia concept—forgiveness towards defeated political enemies—was also propagated by his successor Augustus as proof of his own greatness, the query via the database of Latin texts provided by the Packard Humanities Institute (http://latin.packhum.org) helps in the same way as the TLG does for Greek sources.10 The two examples mentioned represent only an illustrative fraction of the possibilities of collecting quantitative data through digital tools and digitized ancient sources. However, meaning is only created from these data through further interpretation.


To illustrate: if one searches for the frequency of occurrence of Tyche (the ancient goddess of fate) in the historical work of the Greek historian Polybios (200–120 B.C.), one comes across the impressive finding of 141 references. Now, one could conclude from this that Polybios firmly believed in Tyche and, by extension, that Tyche decisively influences the historical course of events. However—when one takes a closer look at the individual passages—it turns out that his use of Tyche has rather a proverbial character, or only reflects the perspective of the actors, but not his own view of the causality of events. While this awareness of the need to interpret quantitative data through close reading is a common view in ancient studies, there is still the problem of the nature of the existing digital corpora. Which text, or rather which variant of the text, underlies the respective databases? Texts from Greek and Roman antiquity have undergone a complicated process of transmission, so that there is no longer any single, definitive text of a work, but different variants. In the digital information system Perseus, which also collects Roman and Greek text sources, for example, for Tacitus the Oxford Classical Texts edition by Fisher (1906) is mentioned, for Livius it is Weißenborn and Müller (Bibliotheca Teubneriana 1911), and for Sallust the edition by Ahlberg (Bibliotheca Teubneriana 1919). All of these are scholarly editions, but by no means up to date, and, above all, in each case only one variant of the text is used, although the technical possibilities would allow parallel witnesses, as Lachmann put it, to be included in an electronic edition. The changes and issues highlighted by the case studies indicate that quantitative data is becoming more important in text-oriented ancient studies through the use of digital corpora and tools, and that there is therefore a greater awareness of the use of counts and of the embedding of the resulting data in interpretations.

10 This example refers to a concrete term that has been politically instrumentalized. In the case of discourse, of course, the search is considerably more complex and must be approached differently.
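The logic of such a stem query can be illustrated with a short sketch. The snippet below does not use the TLG or the Packard Humanities interface; it runs a “demokrat*” search over an invented toy corpus with date metadata, merely to show how first attestation and distribution fall out of a combined query, after which the hits still have to be read and interpreted.

```python
import re

# Invented stand-in for a digitized, dated corpus (transliterated placeholders).
corpus = [
    {"author": "Author A", "date": -430, "text": "ho demos kai he polis ..."},
    {"author": "Author B", "date": -410, "text": "... ten demokratian katelysan ..."},
    {"author": "Author C", "date": -390, "text": "demokratoumenes tes poleos ..."},
]

STEM = re.compile(r"\bdemokrat\w*", re.IGNORECASE)  # nominal and verbal forms of the root

hits = [(doc["date"], doc["author"], match.group(0))
        for doc in corpus for match in STEM.finditer(doc["text"])]
hits.sort()  # chronological order via the date metadata

print("first attestation:", hits[0])
print("attestations per author:",
      {doc["author"]: len(STEM.findall(doc["text"])) for doc in corpus})
```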

4 Quantitative Data Generated Through Interpretation

The impression that emerges from the presentation so far is that there is no profound change in the fundamental relationship between counting and interpretation: Quantitative data remain, even in the digital age, elements that are processed together with others to form an interpretive narrative. This impression, however, is only partially correct. For the production of digital corpora and tools that facilitate or even enable the collection of complex quantitative data is a process that in part also presupposes interpretation. Seen from this perspective, the quantitative data that contribute to the production of interpretations are themselves the result of interpretations.


Two examples will illustrate the circle of interpretation, quantitative data and interpretation in digital corpora that scholars of antiquity work with. The first example concerns the collection of quantitative data to determine the semantic change of specific words/terms in a corpus spanning several centuries. The automatic determination of the exact meaning of terms in specific text passages is based on the discovery and comparison of significant co-occurrences. The results depend crucially on how the terms being searched for have been defined, how the corpus being searched in has been determined, how large/long the “text window” defined as the basic unit within which co-occurrences are being searched for is, and how this text window is defined (e.g. by a specific number of characters or by the use of graphic conventions that divide a text into periods/semi-periods), and how the identified co-occurrences are weighted. For example, it is to be expected that for texts of different genres or styles, it is reasonable to define the text windows differently in each case. It is obvious that different text windows lead to different quantitative results and, as a consequence, to different interpretations, so that in some cases at least it may be useful and significant for the interpretative work to analyse the corpus more than once with different text windows and to compare the results with each other each time and to explain the differences found. In any case, when evaluating the quantitative results, it is particularly important to consider whether the text window has been determined in each case completely independently of the particular characteristics (e.g. genre or style) of the material under study, or whether the assessment about such characteristics has been taken into account in the process. If the latter is the case, then the quantitative data produced depend on what is essentially an interpretive decision, and this must not be ignored in their evaluation. The second example involves the preparation of a corpus of ancient Greek texts so as to allow the collection of quantitative data on the specialized question of the sociolinguistic and pragmatic use of verbs that are equivalent to the German “Bitte” (or the English “please”) in the 1st person singular. The approach chosen is not to search the corpus on the basis of a search model, but to manually annotate the corpus based on a system of interrelated terms (an ontology) and to arrive at quantitative data by searching this annotated corpus. Both the definition of the terms to be studied and the production of the conceptual system to be used in the annotation process require intensive work with the corpus. Both quantitative and qualitative questions will be considered: Under what circumstances do words meaning “please” occur? At what point is one word or the other no longer used in hymns and prayers in reference to the deity, but becomes a colloquial speech act verb? To answer these questions, fixed linguistic categories, special categories established on the basis of the material to be annotated and its


particular features, and metadata on the texts of the corpus are necessary. The collection of quantitative data using existing databases is particularly helpful in this regard. A search in the TLG (corpus: all extant Greek texts handed down via manuscripts (i.e. not via papyri or inscriptions) which are dated before the third century B.C.; no genre restriction; search for the grammatical form: 1st person singular present active of the verb given as a search word in each case) reveals that the first verb of the fixed list, the verb hiketeuō (“I ask, I plead”), occurs 62 times, of which 18 times (29.03%) are in sublime (lyrical and tragic) contexts or in the comedy sections which parody the sublime style, while the second verb, antibolō (“I beg”), which occurs 35 times, is never used lyrically and is typical of colloquial speech such as comedy (28 times, 80%). This does not mean that we do not find the verb hiketeuō in comedy (it occurs 10 times, 24%). However, it is important to ask the broader question in these cases whether the context has tragic/epic/lyrical connotations. In the case of the verb antibolō, the evidence is further examined in terms of sociolinguistic parameters through close reading: Who exactly uses the word: woman/man, child/boy/adult, Greek/non-Greek and others. Linguistic context is of primary importance here. With what other words or morphological and syntactic forms (vocative, imperative, direct accusative, infinitive, etc.) does “please” most often occur in conjunction? The answers to these questions will enable the production of concrete parameters and the annotation of the occurrences of the corresponding verbs in the corpus. It becomes clear that this annotation is based on the results of a complex linkage between quantitative and qualitative findings. This annotation, in turn, allows for more complex queries that can yield quantitative data. Thus, it is clear that the correct assessment of this data requires that the researcher can understand exactly which interpretative decisions are embedded in the annotations.
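A toy version of such an annotated corpus shows how interpretive decisions enter the quantitative result. The occurrences, register and speaker labels below are invented; the point is only that every percentage computed at the end rests on annotation categories that were themselves fixed by interpretation.

```python
from collections import Counter

# Invented miniature annotated corpus: each occurrence of a "please"-verb carries
# annotations (register, speaker) that encode prior interpretive decisions.
occurrences = [
    {"verb": "hiketeuo", "register": "tragic", "speaker": "woman"},
    {"verb": "hiketeuo", "register": "comic",  "speaker": "man"},
    {"verb": "antibolo", "register": "comic",  "speaker": "man"},
    {"verb": "antibolo", "register": "comic",  "speaker": "child"},
    {"verb": "antibolo", "register": "prose",  "speaker": "man"},
]

def register_profile(verb: str) -> dict:
    """Distribution of a verb's occurrences over the annotated registers."""
    hits = [occ for occ in occurrences if occ["verb"] == verb]
    counts = Counter(occ["register"] for occ in hits)
    return {reg: f"{n}/{len(hits)} ({n / len(hits):.0%})" for reg, n in counts.items()}

print("hiketeuo:", register_profile("hiketeuo"))
print("antibolo:", register_profile("antibolo"))
```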

5 Closing Remark When dealing with quantitative data, models and interpretations, sciences that study texts deal with a special feature of their research objects. Texts are artefacts in which linguistic material is structured with regard to a particular communicative situation. The assignment of a text to a particular genre is an indication that this text shares significant similarities with other texts in terms of the structures it employs to organize and represent its material. Furthermore, ancient texts have been transmitted to the present day through a complex process; an example of this is the

60

S. Chronopoulos et al.

transmission of texts through various media (papyrus scroll, codex in majuscules, codex in minuscules, first typographic editions up to 1500, the incunabula). The materiality of the medium of transmission and the conventions that shape it have a significant influence on the shaping of the text: it is enough to think of structuring elements such as punctuation, word division, paragraph division or reference systems, which vary greatly from one medium to another and decisively shape the perception, reading and understanding of the text. In this sense, each text is the combination of, on the one hand, the structure of linguistic material produced on the basis of the author's intentions and the conventions of language and genre, and, on the other hand, the particular way of representing these textual structures that depends on the medium of transmission. In modern text-critical editions, scientific modeling is added as a third level of organization: these editions, which are the basis of scientific work with ancient texts, present a model of the textual tradition and, on this basis, attempt to create a version that is as close as possible to the original text. Such editions are the basis for the digitized corpora from which the quantitative data are collected with which classical philologists and historians of ancient history work. These quantitative data can be understood and interpreted if the influence that the genre, the respective material text carrier, and the model of the scholarly text-critical edition exert on the design of each text is perceived and taken into account.

Metaphors and Models: On the Translation of Knowledge into Understanding

Chris Thomale

This article examines how the use of metaphors and models in scientific discourse can itself be made the subject of scientific discourse. In doing so, it looks in particular at the ethical and jurisprudential treatment of surrogacy and shows the extent to which an implicit, politically coloured use of metaphor takes place there. Finally, the use of the word “pattern” is also identified as a metaphor.

1 Scientific Self-reference via the Metaphors and Models of Scientific Language

At first glance, the title of the WIN College "Measuring and Understanding the World through Science" contains something quite paradoxical. For it calls for a consideration of the observer by the observer, a reference of scientific discourse to itself. Yet a core insight of jurisprudence is that such self-references among people do not succeed very well, that a legislator is not a judge and a judge is not a good one in his own cause. So why is it that today it is the humanities, of all disciplines, and above all the author's own disciplines—philosophy and jurisprudence—that, according to anecdotal evidence, spend half of their existence justifying themselves?

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

C. Thomale (*) Heidelberg Institute for Foreign and International Private and Commercial Law, Heidelberg, Germany
e-mail: [email protected]

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024
M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_6

Certainly, the functionalisation and economisation of the academic world may have played a certain part in this self-justifying zeal in recent times. Thus, the humanities are revealing a growing willingness to mutate in order not to fall victim to modern funding selection: Everywhere, "clustering" is done artfully, "interdisciplinary" doors are pushed open—ironically, often without clarifying whether they are closed at all, or whether one might not rather be in the process of tearing down load-bearing walls in the convoluted, Hundertwasser-esque edifice of the sciences. Such discourses demand self-assurance and self-justification. Thus the biologist, who is supposed to be researching with a lawyer or a historian of philosophy on the topic of "Genetic Engineering—A Challenge of the 21st Century", innocently wants to know what "method" is being used. The vague reference to hermeneutics, the establishment of dogmatic coherence or even deconstruction is unlikely to satisfy the biologist. She wants to know: What do you do? By this she means: what do you count or measure, how do you derive from it what kinds of taxonomies or patterns, and how do such patterns enrich your next count or measurement? The humanities scholar should respond to this smugly and ambiguously, preferably in the new lingua franca of the natural sciences, so as to be understood nonetheless: Understanding does not count any more than it pays. For indeed: knowledge in the humanities, in all its "breadlessness", has nothing to do with measuring, but with understanding. But it is not for nothing that understanding is equated with comprehension, i.e. with getting to grips with the concept. For understanding and grasping stand in a highly intimate relationship to language. Sometimes, especially among didactically unsuccessful university teachers, it is readily assumed that these two moments—understanding and language—can be meaningfully separated from each other. On the other hand, many might agree, after some reflection, that speechless understanding is encountered less frequently than incomprehensible language. Nevertheless, there is a heuristic gap between raw humanistic "knowledge" as such, such as an idea, and linguistically coagulated, formulated humanistic understanding. The question is how exactly the language of science bridges this gap. While conceptualization and conceptual history have already received almost unrestricted attention, especially in philosophy, metaphor has been somewhat lost sight of. Thus, especially in the literature on the history of science, we find individual studies and also a hardly manageable number of linguistic theories of metaphor. However, a translation of these into the theory of science is largely missing. Clearly more present in scientific discourse is the metaphor's big brother, the model, but its relationship to the metaphor is often ignored. This reproach also and especially applies to the natural sciences, whose scientific language often moves in models and uses metaphorical expressions. In the project "Metaphors and Models—On the Translation of Knowledge into Understanding", this aspect of scientific observation is made the subject of scientific consideration.

2 Use of Metaphors and Models as a Problem

Metaphor has a dubious reputation in science, since it is often regarded as a mere rhetorical stylistic device whose vagueness, ambiguity, and irrationality make it at best superfluous, at worst even harmful, for a factual discourse that depends on the precise formulation of one's own thoughts. The credo attributed to George Berkeley, "A metaphoris autem abstinendum philosopho",1 therefore describes not only the prevailing attitude of the humanities, but also and in particular that of the natural sciences. In contrast, models appear to be at the cutting edge of our time and shape scientific discourse. In this context, the fact that the model is actually a child of architecture and originates from the writings of the Renaissance master builder Leon Battista Alberti, in particular "De Pictura" and "De Statua",2 recedes into the background. What is meant is the true-to-scale representation of a building in a design, one could also say: a proportional analogy. However, precisely this also represents a manifestation of metaphor, as already set out in Aristotle's Poetics.3 If metaphors and models have the same illustrative character in common, it must be astonishing that they enjoy such different respect. For the reproach of irrationality and vagueness could be raised against models with even greater force, since they form an inner system that suggests a factual reasonableness under the guise of logical conclusiveness. This reasonableness depends in a circular way solely on the correctness of the model's premises and findings. This is unfortunate because the model is supposed to make direct recourse to them superfluous: If the metaphor is a deception, then the model is a deception raised to the level of a system. The project takes up this current discrepancy in perception by subjecting metaphors and models to an examination in terms of the theory of scientific language. Firstly, there is a dimension of the history of ideas, insofar as the genealogy of metaphor use is examined in the style of Hans Blumenberg, i.e. metaphors are to be analysed and described as a phenomenon, as it were.4

1 In English: Let the philosopher dispense with metaphors.
2 Bätschmann and Schäublin (2011) offer an excellent synoptic translation into German.
3 Aristotle (1994). Poetik: Griechisch/deutsch. Stuttgart: Reclam.

Secondly, there is the heuristic dimension of metaphors and models. This is an optimization problem: metaphors and models are obviously both a curse and a blessing. On the one hand, they allow for the indispensable modeling of scientific knowledge; on the other hand, they often stand in the way of scientific understanding when confronted with data, measurements, and contexts that lie outside their imaginative space.5 An apt example is provided by the wave-particle dualism of light: only when one has come to treat each of the models as a mere summary of certain properties in certain contexts, i.e. when the naturalistic appearance of the model has been stripped away and it is consciously thought of as a visualization, does one attain the mental flexibility to escape the tertium non datur of either-particle-or-wave. In jurisprudence, a similar category error is discussed under the concept of the naturalistic fallacy, according to which one erroneously concludes from a being to an ought and vice versa. For example, it can be argued that the "legal person" has not been seen through to this day, i.e. that it is erroneously taken as a thing, a being.6 In addition, common supposed technical terms such as "nullity" are charged with naturalistic aberrations.7 In the project work, the question is pursued of how metaphors and models are formed and used in the humanities and natural sciences, and in particular under what circumstances old models are replaced by new ones. A secondary aim of this work is to develop a typifying taxonomy for scientific metaphors and models and thus to structure the material. Next, interest is focused on identifiable cases in which metaphors and models shape the perception and exploration of empirical or sociocultural reality. Here, for example, the focus is on experiments that serve to prove a theoretically predicted phenomenon, or on empirical studies whose implicit model premises anticipate the supposedly found result. Finally, on the basis of Popperian and Albertian critical rationalism, the aim is to shed light on the scientific use of metaphors and models from the standpoint of the philosophy of science.

4 See more recently, for an introduction: Konersmann, R. (ed.) (2011). Wörterbuch der philosophischen Metaphern (3rd ed.). Darmstadt: WBG; therein especially the instructive preface by the editor himself, pp. 7 ff., with many further references.
5 Cf. Dawkins, R. (2000). Unweaving the rainbow: Science, delusion and the appetite for wonder. Boston: Houghton Mifflin Harcourt: "dangers of becoming intoxicated by symbolism, by meaningless resemblances" (pp. 180 ff., p. 184).
6 Cf. Gulati, G. M., Klein, W. A., & Zolt, E. M. (2000). Connected contracts. UCLA Law Review, 47, p. 887.
7 An instructive overview of the different levels of meaning of metaphor in law is provided by: Makela, F. (2011). Metaphors and Models in Legal Theory. C. de D., 52, p. 397; Blair, M. M. (2005). On Models, Metaphors, Rhetoric, and the Law. Tulsa L. Rev., 41, p. 513.

A minimum requirement might be not to refrain from using metaphors and models altogether, but to reflect on them critically and to make them transparent. Finally, a third, rhetorical-political meaning of scientific metaphor ties in with this: Metaphors and models, especially in the social sciences, can become vehicles of implicit political preferences of the user. If such metaphors can be established as supposedly unsuspicious scientific terminology, they simultaneously impose these implicit preferences on the general public. In this context, metaphors endanger nothing less than the freedom from domination of social scientific discourse.

3 Contingency and Contextuality

The work on metaphors and models just outlined is extremely laborious and complicated because, first, there is no analytically explicable necessary connection between a particular metaphor or model and the knowledge it conveys. Secondly, in addition to this inherent contingency of metaphors and models, there is their extensive dependence on the individual case. The meaning and effect of a metaphor can only be judged from its interplay with the wider interpretative horizon of a given linguistic community—be it a discipline, a subfield of it, or some other frame of reference. It is highly dependent on time, culture and convention, and thus largely contextual. In light of this contingency and contextuality, the great challenge with regard to the analysis of a given use of metaphor is to gain critical distance from its object. To this end, the supposedly unchanging idea or content that the metaphor is supposed to convey must first be worked out in order to then evaluate it in comparison to other metaphors or forms of expression. At the same time, however, the point of the metaphorical translation of knowledge into understanding is that content and form of expression cannot be entirely separated. Rather, language and the verbalized, expression and thought, form an inseparable amalgam precisely when the linguistic expression is composed metaphorically. For then it does not want to denote and signify in a simple sense of correspondence, but rather to stimulate one's own performative comprehension of the thought. This special problem requires a cautious, careful approach—basically a full-fledged re-examination of thought itself. Thus a science of the use of metaphors and models in science is at the same time directed towards the re-examination and re-enactment of the scientific thought under investigation. Observation and what is observed merge completely. In the following, this will be exemplified by an object whose metaphorical analysis has already been completed: the reproductive method of so-called surrogate motherhood.

4 Example: The Humanistic Treatment of Surrogacy

Contemporary German-language discourse uses the metaphor of "Leihmutterschaft" (literally "loan motherhood", conventionally rendered in English as "surrogacy") to describe the following phenomenon: two people, typically a couple, enter into a contract with a woman. Its main object is that the woman is to carry a child for the couple and hand it over to them as their child after birth. In return, the woman is entitled to payment for her childbearing activity. The ethical questions raised by this process are obvious: is childbearing an activity over which a woman can freely and bindingly dispose by contract, or must her freedom—for her own protection against exploitation, for example—be restricted? Is it in keeping with the dignity of the child to make it the subject of a contract even before it is born? Other questions are specifically legal: Who are the child's legal parents? What applies if different legal systems assess the same facts differently, for example, if the woman giving birth is recognized as the mother in one state, but the contractually designated "mother" in another? All this was reflected upon in order to be able to grasp the function that metaphors play in this discourse on legal ethics. The results of the project work have been documented in a monograph.8 In it, the ethical and jurisprudential work is first carried out in detail. Only this penetration of the object of discourse allowed a metaphorological critique. The point of this critique is the realization that the metaphorical talk, widespread in the social sciences, of intended parents who have a child carried by a "surrogate mother" (Leihmutter, literally a "loan mother") is misleading. For the term "loan" denotes a gratuitous legal relationship. This is also conceivable in the case of surrogacy, for example when a woman carries the child of her infertile sister. This is also referred to as "altruistic" surrogacy. The quantitatively and qualitatively significant cases of surrogacy, on the other hand, are based on a business deal: the surrogate mother is paid for her services. Thus, viewed in the light of day, she is not lending her uterus at all, but renting it out. Interestingly, therefore, the legislators of the twentieth century did not yet speak of "surrogate mothers" for "intended parents", but of "substitute mothers" for "order parents".9 It was thus possible to prove that this change of metaphor did not happen by chance, but was rather an expression of a growing political advocacy of surrogacy. This is precisely no longer to be portrayed as a business or even a (child) trade, but as the altruistic transfer of a child from the mother who does not want it to the actual "parents" who so ardently "desire" it. In the course of this, English itself denies the maternal identity of the surrogate mother and speaks only functionally of a 'surrogate' rather than a 'surrogate mother'.

8 Thomale, C. (2015). Mietmutterschaft: Eine international-privatrechtliche Kritik. Tübingen: Mohr Siebeck.
9 Bundestag Printed Papers 11/4154 of 9 March 1989 and 11/5460 of 25 October 1989.

By contrast, Spanish and Portuguese, for example, capture the typically remunerated character of surrogacy by speaking of a maternidad de alquiler and a maternidade de aluguel respectively. These languages thus use the metaphor of renting, which captures the designated object more realistically than the metaphor of lending common in contemporary German discourse. To draw attention to this fact, the monograph was titled "Mietmutterschaft" ("hired motherhood").10 The identification and critique of a concrete use of metaphors in scientific discourse is, of course, only a first step. How can we influence this discourse so that it revises its metaphors? Should, for example, a book based, among other things, on the insight that surrogate motherhood should actually be called hired motherhood speak of "surrogate motherhood" or of "hired motherhood"?11 Is it legitimate from the point of view of scientific ethics not only to point out that surrogate motherhood arrangements are remunerative and to criticize the widespread disregard for this fact, but also to demand that others use the opposite metaphor? We are in the process of better understanding these and other connections—of course with the help and development of new metaphors and models.

5 Transdisciplinary Research Perspectives: Metaphor, Model and Pattern

Discussions with other WIN fellows, as well as reading the other contributions collected in this volume, reveal a majority focus on the "pattern". It should be noted that this, too, is mostly meant metaphorically. For it is by no means a matter of examining a mostra, i.e. a test piece or sample, which is what the German word for "pattern" (Muster), taken as a concept, literally means.12 Rather, in an analogous sense, it means the mental operation that extrapolates the properties of a specimen to the other objects of the same kind. This typifying generalization of a context of meaning is meant when man and nature seem to follow identifiable patterns of behavior, reasoning and development: an exemplary case of metaphorical thinking, the deep structure of which further collaboration will illuminate in more detail.

10 Thomale, C. (2015). Mietmutterschaft: Eine international-privatrechtliche Kritik. Tübingen: Mohr Siebeck. p. 8.
11 Cf. ibid., p. 11 and fn. 41.
12 Cf. Kluge, F. (2002). Etymologisches Wörterbuch der deutschen Sprache. Berlin: de Gruyter. p. 640.

6 Conclusion

Although we have made substantial progress, our epistemological interest in metaphor and model use in scientific language is far from satisfied. Rather, in the future we would like to engage in empirical psychological, cognitivist research on scientific metaphor and model use. The starting point is the previous analytical result that the common scientific-theoretical and political-critical analysis of metaphors is based on an implicit cognitive dualism of thinking and speaking. The future goal of inquiry is now to use empirical psychology to shed more light on this relationship between thinking and speaking on the basis of cognitivist theories of metaphor.13 Thus, a fundamental change of perspective is planned: from a discursive description and critique of metaphor to its analysis as cause and effect of human cognition. The model for this is probably the internationally most influential conception of metaphor, that of George Lakoff and Mark Johnson.14 In its most recent version it has found a connection to theories of neural networks.15 One of its main claims is that metaphors are not a purely linguistic phenomenon, but a constituent structure of the cognitive system and form the basis of the majority of all linguistic utterances. The simplest analogy relations are successively acquired in the course of life and combined to form more complex metaphors.16 An analogy relation consists of a source domain and a target domain, both of which contain "commonly used" information about different dimensions of the object. The metaphor provides a "map", that is, a description of analogical and non-analogical dimensions, as well as the specific context in which the metaphor is used, which determines its interpretation.17 With this basic system, Lakoff/Johnson's theory has already found fruitful applications in the fields of mathematics18 and philosophy.19

13 Classical examples are the interactional metaphor theory in the sense of Max Black and the pragmatic approach according to John Searle, in addition to the theory of Lakoff/Johnson, which will be discussed in a moment.
14 Lakoff, G. & Johnson, M. (1980). Metaphors We Live By. Chicago: University of Chicago Press; Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its challenge to western thought. New York: Basic Books.
15 Lakoff, G. (2008). The neural theory of metaphor. In R. W. Gibbs Jr. (Ed.), The Cambridge Handbook of Metaphor and Thought, pp. 19–38.
16 Cf. ibid., p. 27.
17 Cf. ibid., p. 28.

We would like to build on this and first investigate empirically the contextual conditions under which specific metaphorical communication influences the experienced and actual success of communication between persons (communication hypothesis). In addition, the crowding-out effect of model-like metaphor use, which is equally important in scientific modelling and in science didactics, will be explored. The choice of the right metaphor or model can also determine which cognitive processes take place in scientists as well as in the popular reception of the respective language use (cognitive hypothesis). As a suitable method, we have developed the "metaphor impact analysis" (MIA), which we will initially test, once again, in the field of surrogacy. The experimental setup consists of a neutral definition, which comes in two versions: one version uses the terms "surrogate motherhood" and "order parents," while the other—with otherwise absolutely identical text—uses the terms "surrogate motherhood" and "intended parents." Participants are assigned one of these definitions, read it and then answer questions about it. Using suitable scales, not only qualitative but also quantitative statements can thus be made about the hypotheses formulated in the previous work. Subsequently, these evaluations can be summarized, but also statistically compared between groups. A comparison is planned between lawyers, other academics and a broader random sample that approximates reception by the general public. This first MIA is intended as a pilot project. It is designed to provide a template for further MIAs within and beyond the legal profession. As such, it will also be processed and published in an open source format.

18 Lakoff, G., & Núñez, R. E. (2000). Where mathematics comes from: How the embodied mind brings mathematics into being. New York: Basic Books.
19 Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its challenge to western thought. New York: Basic Books.
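
How such a between-group comparison could be evaluated can be indicated with a minimal sketch. The ratings below are invented purely for illustration; the seven-point scale, the group labels and the choice of a Mann-Whitney test are assumptions, not the MIA's actual protocol, which is still being developed.

```python
# Invented 1-7 ratings from two randomly assigned groups that read otherwise
# identical definitions differing only in the metaphor used.
from scipy.stats import mannwhitneyu
import statistics

ratings_version_a = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]  # e.g. agreement with a statement about the practice
ratings_version_b = [3, 4, 2, 4, 3, 5, 3, 2, 4, 3]

print("medians:", statistics.median(ratings_version_a), statistics.median(ratings_version_b))
stat, p = mannwhitneyu(ratings_version_a, ratings_version_b, alternative="two-sided")
print("Mann-Whitney U:", stat, "p =", p)
# A comparable analysis could then be run separately for lawyers, other
# academics and the broader random sample mentioned above.
```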

Critical Mass: Aspects of a Quantitatively Oriented Hermeneutics Using the Example of Computational Legal Linguistics

Hanjo Hamann and Friedemann Vogel

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). The present version has been revised technically and linguistically by the editors in collaboration with a professional translator.

H. Hamann (*) EBS University of Business and Law, Wiesbaden, Germany
e-mail: [email protected]
F. Vogel, German Studies, Linguistics, University of Siegen, Siegen, Germany
e-mail: [email protected]

© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024
M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_7

1 Law, Language and Algorithms—From the Micro to the Macro Perspective

The following reflections on the relationship between quantity and quality in the process of interpretation and knowledge in the sciences are based on an encounter between very different and yet related disciplines: Within the framework of the project "Legal Reference Corpus" (JuReko), which has been funded by the Heidelberg Academy of Sciences and Humanities since 2014, we are testing the possibilities and limits of "computer-assisted legal linguistics".1 To this end, we are building a reference corpus of German-language law, i.e. a processed collection of legal texts from particularly
relevant domains (legislation, case law, jurisprudence) and various fields of law (civil, criminal, administrative, commercial law, etc.) as a basis for semi-automatic studies from a legal, linguistic and social science perspective. The corpus now comprises several hundred thousand fully digitized statutes, decisions and jurisprudential essays, which have been stored in a standardized XML format, automatically marked up and enriched with metadata in a database. By building up this database, the project aims to use computer-assisted, quantifying analyses to test hypotheses that have emerged from previous qualitative research, as well as to develop, inductively from the data, new hypotheses about the linguistic-social constitution of the rule of law. This epistemic interest can be made fruitful for practice in court, in legislation and in legal theory: What has the language of law been composed of since the 1970s? How do new expressions develop, and where and when does the meaning of existing expressions change? What is the function of different patterns of language use in law? What are the internal and external legal sources from which German legal discourse is fed? Which sources—and which authors or groups of authors—are particularly frequently consulted? What is the readily and frequently cited "prevailing opinion" in law? Which perspectives dominate, which perspectives are excluded? Which methods, in particular computational linguistic algorithms, can be profitably applied to legal linguistic questions such as the ones above? And how can a quantifying approach that disregards the individual case be reconciled at all with a legal decision-making practice in which every specific detail, every word of precisely the individual case can be of guiding importance for the outcome of a case? Questions like these lead to elementary problems of the scientific production of meaning and knowledge, namely the question of the fit between method and object and thus ultimately also the question of adequate and legitimate procedures of scientific production. In this article, we postulate a cross-disciplinary paradigm in which measurement and understanding do not have to be contradictory but are always interdependent procedures of scientific analysis. Using the example of computer-aided legal linguistics, we would like to make these procedures explicit and to turn the blind spot that they have hitherto constituted into an object of observation.

1 Computer Assisted Legal Linguistics, CAL2. Retrieved from https://www.cal2.eu.
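
The following minimal sketch illustrates, in principle, what querying such a marked-up, metadata-enriched collection might look like; the element and attribute names are invented for illustration and do not reproduce the actual JuReko schema.

```python
import xml.etree.ElementTree as ET

# Invented record; the real corpus uses its own standardized XML schema.
record = """<document domain="case_law" court="BAG" year="1998" field="labour_law">
  <title>Hypothetical decision on the concept of the employee</title>
  <text>Arbeitnehmer ist, wer aufgrund eines privatrechtlichen Vertrags ...</text>
</document>"""

doc = ET.fromstring(record)
# Metadata filters (domain, period) narrow the corpus before any counting begins.
if doc.get("domain") == "case_law" and int(doc.get("year")) >= 1970:
    text = doc.findtext("text")
    print(doc.get("court"), doc.get("year"), len(text.split()), "tokens")
```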

2 Preliminary Definitions

2.1 Recognition and Understanding as a Starting Point

Linguistics and jurisprudence have historically emerged from the same tradition of thought, which essentially makes use of the hermeneutic method—i.e. the meaning-understanding interpretation of humanly produced, linguistic actions and phenomena (artefacts such as words and texts). A fundamental underpinning for the quality
of linguistic or legal interpretation was and is, above all, introspection, i.e. the individual feeling for language when interpreting selected individual pieces of evidence.2 With the emergence of new digital data sources, and thus easily accessible volumes of evidence, as well as the development of informatic, corpus and computational linguistic methods, a fundamental scepticism towards introspection has spread in the humanities and especially in the philological disciplines since the 1970s at the latest. The selected single piece of evidence and its interpretation could now be juxtaposed 'at the click of a mouse' with a multitude of competing pieces of evidence suggesting varying interpretations. Introspection came increasingly under suspicion of allowing only those interpretations that were acceptable to the interpreter by virtue of his own interpretive sovereignty, polemically described as his intellectual "armchair". The computer, on the other hand, promised to produce "incorruptible" (a frequent—and paradoxically anthropomorphizing—metaphor in the literature) empirical findings and to bring the humanities (also in terms of science and funding policy) onto a common methodological path with the natural sciences. A factual debate between these camps—very roughly sketched here—has so far taken place only sporadically, with the "empiricists" claiming argumentative sovereignty for themselves in recent years and qualitative hermeneuticists developing rather defensive justifications. The situation is (still) different in jurisprudence. It, too, is concerned with interpreting texts in a way that makes sense, but attempts to make empiricism and modern data processing fruitful for this purpose have remained sporadic. The ancient legal adage of the judge who does not have to calculate (and is therefore allowed to correct calculation errors in the judgment,3 though this etymological context is seldom mentioned) persists stubbornly. Text comprehension, as is emphasized or implied everywhere, is so essentially qualitative that statistics—itself the subject of an unflattering proverb—cannot make any significant contribution to it. A digital indexing of legal sources and legal texts has therefore so far taken place primarily to the extent that it serves the cost efficiency of legal advisory practice—or, exceptionally, historical source texts threatened by the ravages of time, which then, however, primarily serve legal-historical specialists for (again purely qualitative) processing. Accordingly, a "camp of empiricists" that could strive for argumentative sovereignty has hardly been discernible so far, if one disregards the probably failed march of "sociology at the gates of jurisprudence" in the 1970s,4 which has also stopped there.5

2 Vogel, F., Christensen, R. & Pötters, S. (2015). Richterrecht der Arbeit—empirisch untersucht: Möglichkeiten und Grenzen computergestützter Textanalyse am Beispiel des Arbeitnehmerbegriffs. Berlin: Duncker & Humblot. pp. 80 ff.
3 Liebs, D. & Lehmann, H. (2007). Lateinische Rechtsregeln und Rechtssprichwörter, no. 150. Munich: C. H. Beck.

It is only recently that proposals for "empirical legal research" or "evidence-based jurisprudence" have begun to reappear in Germany,6 and the battle cry for "big data jurisprudence" is even being heard from the USA.7

2.2 Systematizing—Counting—Measuring—Interpreting as Work Steps?

The comparable basic hermeneutic orientation of the two individual disciplines is also accompanied by a similar system of their native working methods. Instead of using the methods of counting and measuring, legal science primarily makes use of systematization. This, however, does not strive for sharp demarcations, but seeks pragmatically useful typologies in a legal system that is known to be vague and is intended to be vague for the sake of its flexibility. In doing so, the generality and timelessness of its laws is emphasized just as much as their predictability in individual cases. This dialectic gives rise to categories that are seldom exclusive, but at best heuristically plausible: new social developments (human dignity, gender equality, consumer protection, etc.) can shift the entire system of categories and force a remeasurement of the entire system from the then consensual point of view. Consequently, if time-consistent category boundaries cannot be drawn, methods of counting and measuring also fail: neither the preconditions and consequences of a particular legal provision can be counted independently of readings of the current zeitgeist, nor even the number of legal provisions—let alone their readings. While a countability of regulations seems possible through the usual paragraph numbering in laws, the very term "numbering" points to the absurdity of this undertaking: just as it makes little sense to add up house numbers or to measure the number of households by the house numbers of a street, the numbers inserted into the legal text for orientation purposes do not allow a meaningful count or measurement.

4 Lautmann, R. (1971). Soziologie vor den Toren der Jurisprudenz. Zur Kooperation der beiden Disziplinen. Stuttgart: Kohlhammer.
5 Dreier, H. & Wenz, E. M. (Eds.) (2000). Rechtssoziologie am Ende des 20. Jahrhunderts: Gedächtnissymposium für Edgar Michael Wenz. Tübingen: Mohr Siebeck. pp. 3 f.
6 In detail with references: Hamann, H. (2014). Evidenzbasierte Jurisprudenz: Methoden empirischer Forschung und ihr Erkenntniswert für das Recht am Beispiel des Gesellschaftsrechts. Tübingen: Mohr Siebeck.
7 Fagan, F. (2016). Big Data Legal Scholarship: Toward a Research Program and Practitioner's Guide. Virginia Journal of Law & Technology 20, pp. 1–81.

The same applies to legal readings: there are indeed a finite number of them, but paradoxically not a countable number. Only the frequent invocation of "prevailing" opinions against those in the "minority" suggests quantitative comparability (commensurability), but this nomenclature merely cloaks the discursive use of power, for no one can give a precise definition of when an opinion "prevails"—it can only be formally asserted through final judgment. The well-known dictum of "two experts, three opinions", which lawyers also claim for themselves, ultimately comes down to the fact that it is not possible to form categories with a high degree of separation. Counting and measuring are thus disavowed as working steps. Linguistics behaves similarly, but is less stabilized (or—depending on the reading—sclerotized) than jurisprudence by normative texts and normative dogmatics. Although the postulate of a quantifying "measurement" of language (e.g. in the field of language statistics) has only become widespread with the establishment of computational and corpus linguistic methods, the analysis of linguistic phenomena has always consisted, at least predominantly, in a modelling of linguistic structures that counts evidence and weights it according to frequency. Quantifying concepts, however, were rarely based on elaborated statistical models. Nor did they manifest themselves (or only less frequently) in the specification of numbers, but rather in quantifiers ("all", "many", "hardly any", etc.). Little has changed in this regard in many areas of linguistic empiricism, even in the age of Big Data and data mining. To this day, the focus of linguistic work—similar to jurisprudence—is on systematisation, i.e. the description of linguistic artefacts as aspects of a coherent system whose coherence can be explained. So what remains of "systematizing, counting, measuring, interpreting" in linguistics and jurisprudence? Above all, the first and last steps—in between lies an accepted or downright intended area of fuzziness. Here, open-ended hermeneutic discourse takes the place of pre-agreed mathematical measurement conventions. In this respect, lawyers have for generations spoken of "balancing", thus again resorting to a commensurability metaphor and thereby unwittingly overlooking the fundamental impossibility of consistent ordinal scaling in law: a ranking of legal goods that is plausible for every context cannot be justified, indeed cannot even be constructed from the provisions of the applicable law (such as the threats of punishment in the StGB, the German penal code). Here, the need for a methodological metatheory of counting and measuring in law and linguistics becomes particularly apparent.

2.3 On the Lack of Methodological Metatheory

A systematic discussion reflecting the historical genesis, the (sub-)disciplinary characteristics and the framing of science policy is still lacking. For the time being, both linguistics and jurisprudence assert their proprium in one-sided methodologies and do not always sufficiently take into account the international discourses of the theory of science and methodology. Of course, the present contribution cannot remedy this deficiency. However, we would like to use the example of our transdisciplinary project to demonstrate a method for bridging the supposed opposition between qualitas and quantitas. Three consequences for knowledge production can then be dialectically derived from this and put up for discussion.

3 Computer-Aided Pattern Recognition as a Method

Law and language are related to individual cases—they do not encounter us in the abstract, but manifest themselves in concrete speech acts or legal acts. The resulting artefacts do not stand next to each other unrelatedly—as in the image of the starry sky, in which each star shines on its own—but interact in a variety of ways and create a meaningful overall system that constitutes their individual meaning—just as the stars, on closer inspection, interact with each other and each individual star is subject to the laws of motion imposed on it by the "common" overall system. This interwoven overall system, in which each artefact can only be interpreted in its interweaving with all the others, can be described—to remain true to the metaphor of the man-made artefact—as a fabric,8 whereby law and language are not only each woven from their individual artefacts, but are also crossed over with one another. Warp and weft need each other, otherwise the textile is not resilient. So that the metaphor does not lead astray, a mental distinction must be made between "language" as a complex emergent phenomenon and the language-constituting signs (letters, words and sentences, insofar as written language is concerned), which form the thread from which both language and law are woven. As in a fabric, the choice of thread according to strength, colour and density produces different patterns in the overall picture, and this without the weaver—or even the totality of weavers—having to be aware of it.

Critical Mass: Aspects of a Quantitatively Oriented Hermeneutics

Law and language are emergent in this way, transcending their context of origin, so that Heine's verse from the Hebrew Melodies (1851) applies to them: "What he weaves, no weaver knows." And yet: whoever recognizes the patterns in the overall picture understands, to a good extent, the weavers. This image paraphrases (and with this the metaphor has served its purpose) the methodology of computational legal linguistics: it requires an overview of the "big picture", i.e. as complete a section as possible of the tapestry of legal language—in this respect it is a demanding editorial project—and it then contributes to the discovery of interesting and relevant patterns. Only this method—and not the intuition of the individual analyst—makes non-existence statements about linguistic phenomena possible, and it also makes existence statements more plausible and assessable. At the same time, however, it cannot decide on its own—without the individual analyst—which patterns are interesting, relevant, or otherwise worthy of investigation. Just as the art critic needs to understand as much about the fabric as the weaver does, the computational linguist needs the legal hermeneutician: quantitas does not replace qualitas but complements it. This is expressed in the attribute of Computer Assisted Legal Linguistics (CAL) mentioned at the outset. The roughly outlined transdisciplinary approach of the project encourages methodological reflection. What can be experienced through numbers, and what eludes them? How do individual cases and introspection relate epistemologically to the quantitatively recorded multiplicity of phenomena? To this end, a dialectical three-step emerges from the project.
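
To make the kind of claim at stake here concrete: the following minimal sketch, with an invented four-text mini-corpus and the phrase "herrschende Meinung" as a placeholder query, shows how an exhaustive count per domain licenses statements such as "the pattern does not occur in the legislative subcorpus", which intuition alone could never establish.

```python
from collections import Counter

# Invented mini-corpus of (domain, text) pairs; real queries would run on the full corpus.
corpus = [
    ("legislation", "der vertrag bedarf der schriftlichen form"),
    ("case_law", "die herrschende meinung haelt den vertrag fuer nichtig"),
    ("case_law", "die herrschende meinung verneint einen solchen anspruch"),
    ("scholarship", "die herrschende meinung wird zunehmend kritisiert"),
]

phrase = "herrschende meinung"
hits = Counter(domain for domain, text in corpus if phrase in text)
for domain in ("legislation", "case_law", "scholarship"):
    print(domain, hits.get(domain, 0))
# A count of zero is meaningful only because every text in the (sub)corpus was
# searched; which patterns are worth counting remains a hermeneutic decision.
```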

4 Three Theses on the Relationship Between Quality and Quantity in Knowledge Production

4.1 Thesis: The Introspective and Single-Case Experience of the World Is Absolute

The focus of a qualitative, interpretative and argumentative analysis is always the individual case, the concrete appearance of a specific artefact, i.e. a human product, situated in time and space (physical, virtual and social space). Its meaning already lies in its mere existence; it alone is a statement pregnant with meaning, because it is the result and at the same time a transcriptive point of contact in the social, semiotically supported discourse.9
The artefact refers in a sign-like manner to social actions that have produced it and, as it were, 'enriched it with meaning'. This 'sense', however, is nothing metaphysical (supratemporal, ahistorical), nor is it something physical that would be inscribed in the artefact and make it grow. Rather, sense in this case means that the artefact appears in the world in this way and not otherwise. The sense is reflected in the context of the artefact: a word, for example, is produced in a certain way that will always remain historically unique, written down, painted or pronounced, accompanied by this and no other facial expression, in the presence of this and not that recipient, selected in the context of an investigation, etc. Recipients perceive these sensually perceptible contextual stimuli; they "understand" the artefact—a word—intuitively. This is not a question of "whether". Understanding means that the artefact is made "meaningful", i.e. contextualised,10 depending on the recipient's knowledge.11 In a cognitive process that is predominantly unconscious and automatic, references are made to immediate environmental perceptions as well as to prior experiences, long-term interests, situational needs, and so on.12 It is these references that constitute the meaning of the artefact; they are meaning. In the scientific field—especially in that of linguistics and jurisprudence—the attempt is made to control the production of meaning, to make it intersubjectively comprehensible. This is the difference between understanding and interpreting. Those who interpret make understanding explicit and try to align their cognitive processes plausibly with those of the institutionalized professional community—and that usually means in the mode of scientific communication. To interpret means to take special account of the contingency of the concrete individual case, to do justice to its role in the web of semiotic relations. In this sense, the introspective experience of an artefact is always absolutely, i.e. unrestrictedly, valid. Qualitative analytics is characterized by the fact that it attempts, starting from the individual case, to fathom all conceivable and intersubjectively comprehensible conditions of existence of the one, specific artefact as part of a coherent system.

9 Hermanns, F. (2007). Diskurshermeneutik. In I. Warnke (Ed.), Diskurslinguistik nach Foucault: Theorie und Gegenstände. Berlin: De Gruyter. pp. 187–210; Jäger, L. (2003). Transkription—zu einem medialen Verfahren an den Schnittstellen des kulturellen Gedächtnisses. Retrieved from http://www.inst.at/trans/15Nr/06_2/jaeger15.htm.
10 Hörmann, H. (1980). Der Vorgang des Verstehens. In W. Kühlwein (Ed.), Sprache und Verstehen. Tübingen: Gunter Narr Verlag. pp. 17–29.
11 Busse, D. (1992). Textinterpretation: Sprachtheoretische Grundlagen einer explikativen Semantik. Opladen: Westdeutscher Verlag. p. 17; Gumperz, J. (1982). Discourse strategies. Cambridge: Cambridge University Press; Auer, P. (1986). Kontextualisierung. Studium Linguistik 19, pp. 22–47.
12 Dijk, T. A. van (2006). Discourse, context and cognition. Discourse Studies 8, pp. 159–177.

The disciplinary art consists in elaborating and proving, on the basis of the individual case, the overall system that is initially only assumed experientially (otherwise one would not even turn to the artefact). This can be well illustrated by the example of judicial interpretation of norm texts: at the outset there is a legal expression (the artefact) which, despite—or precisely because of—the generality of its wording, carries with it not too little but 'too much' meaning. By placing it in different relations, the opposing parties—plaintiff and defendant—give it meaning. Each of these interpretations, these in-relation settings, is initially absolutely valid in itself. The standard of validity is the individual event and its experiencing, classification and processing. The task of the judge is to arrange the references in his decision (the reasons for judgment) in such a way that they are intersubjectively communicable against the background of previous institutionalized experiences of interpretation (dogmatics)—and that means above all also transferable to potentially comparable cases ("in a case like this"). This presupposes that the judge integrates competing interpretations of the normative text, processes them argumentatively, and thus connects, by paraphrasing, to the language use of other subjects. The decision itself, however, always remains case-specific; it decides only this one case, the meaning of the expression in the here and now. A second case (a second artefact) can only be indirectly conducive to this, for it will never be identical with the first. It is therefore only "of significance" insofar as it establishes a point of reference, which in turn must be contextualized in a way that is absolutely valid in itself.

4.2 Antithesis: In the Unity of Number, Individual Experience Becomes Collectively Relevant

The qualitative analytics of hermeneutic practice may describe the individual case; but it fails to recognize it if it does not consider the individual case as a discrete unit in the continuum of multiplicity. In essence, the very subsumption of an artefact as a single case is an artificial isolation and fixation of its fuzzy, fluid edges—a quantifying statement. Recognizing an artefact's character as a case presupposes knowledge of its prototype—just as a captcha image can only be interpreted by those who have abstractly internalised, typologised, the underlying units of meaning. But who determines the contour lines between artefacts in the colourful multiplicity of sensory perceptions?

In purely qualitative hermeneutics, the standard by which an artefact is ennobled as a prototypical unit worthy of consideration remains implicit and often uncontrolled. The measuring instrument designed from the outset for quantification is different: the prior collective agreement (definition) on abstract discrete units gives the individual case a constant contextual framework of variable length that abstracts from the individual case. A classic example is the ruler or metre rule: the line sequences and digits applied to it are based on an international definition (formerly the "original metre", today the constant speed of light). Only the application of a metre rule to an object gives the intuition (the length estimate) a dependable, reliable and as such effectively communicable orientation as to size. In other words: quantification creates knowledge about the individual case only through a stabilisation of the in-relation settings, in short: through standardization and norming, through the transfer of sense objects into known number systems. This is not a negation of the individual case, but rather a particularly stabilized recognition and accentuation of it as an individual case in a relationship of tension with other individual cases, which partly harmonize and partly contrast with it, but in any case interact with it fruitfully.

4.3 Synthesis: Only the Cognitive Contextualization of the Individual Case in the Pattern of Contingent Multiplicity Creates Meaning

On closer examination, thesis and antithesis can only be separated from each other analytically, for heuristic purposes, but not in actual fact. On the contrary, we claim that every science—indeed, in general, every epistemic, i.e. action-generating, production of knowledge—necessarily proceeds in both a qualifying and a quantifying manner. The question is rather how controlled these two sign-mediated work processes are in practice. With a view to the dispute between "empiricists" and "armchair linguists", the cognitive linguist Charles Fillmore already summarized this train of thought in the 1990s in an ironically exaggerated way:

Armchair linguistics does not have a good name in some linguistics circles. A caricature of the armchair linguist is something like this. He sits in a deep soft comfortable armchair, with his eyes closed and his hands clasped behind his head. Once in a while he opens his eyes, sits up abruptly shouting, 'Wow, what a neat fact!', grabs his pencil, and writes something down. Then he paces around for a
few hours in the excitement of having come closer to knowing what language is really like. (There isn’t anybody exactly like this, but there are some approximations.) […]. Corpus linguistics does not have a good name in some linguistics circles. A caricature of the corpus linguist is something like this. He has all of the primary facts that he needs, in the form of a corpus of approximately one zillion running words, and he sees his job as that of deriving secondary facts from his primary facts. At the moment he is busy determining the relative frequencies of the eleven parts of speech as the first word of a sentence versus as the second word of a sentence. (There isn’t anybody exactly like this, but there are some approximations.) […]. These two don’t speak to each other very often, but when they do, the corpus linguist says to the armchair linguist, ‘Why should I think that what you tell me is true?’, and the armchair linguist says to the corpus linguist, ‘Why should I think that what you tell me is interesting?’ […] I have two main observations to make. The first is that I don’t think there can be any corpora, however large, that contain information about all of the areas of English lexicon and grammar that I want to explore; all that I have seen are inadequate. The second observation is that every corpus that I’ve had a chance to examine, however small, has taught me facts that I couldn’t imagine finding out about in any other way. My conclusion is that the two kinds of linguists need each other. Or better, that the two kinds of linguists, wherever possible should exist in the same body.13

This additive, rather than abductive, synthesis, however, is to be conceived more radically: measurement means abstraction, i.e. alienation from the individual case—taking away its particularity and subordinating it to the continuum of standardized (i.e. also: calculable, expectable) multiplicity. In this way, special, unique individual cases are made comparable in a metasystem. Of course, it is questionable to what extent a specific artefact of the world may be standardised to the scale to be applied without completely dissolving it or making it itself a part of the artificial scale. This question is not a quantitative or statistical one, but always a purely qualitative one, for it is a matter of the typification, classification and evaluation of the objects of investigation, and of the purpose of a measurement. The number of words in legal texts depends heavily on what I understand—or, more precisely, want to understand—by a word. In order to impress potential project funders or to outshine approaches competing in the academic market, we might apply a rather broad concept of the word to make our corpus of inquiry appear as voluminous as possible.

13 Fillmore, C. J. (1992). 'Corpus linguistics' vs. 'Computer-aided armchair linguistics'. In J. Svartvik (ed.), Directions in Corpus Linguistics: Proceedings of Nobel Symposium 82, Stockholm, 4–8 August 1991, pp. 35–60, here pp. 35 ff. Berlin: De Gruyter Mouton.

In contrast, when constructing frequency lists as the basis for a dictionary of modern legal language, we chose a rather narrow concept of the word, and so on. Accordingly, measurement accuracy is only a relative quantity, which may well be statistically justifiable (i.e. in turn standardizable), but which, without a qualitative relation to a purpose of measurement, is in itself completely meaningless. The mass must always remain a critical mass, critical in Kant's sense:14 a mass to be questioned with regard to the conditions of its constitution. Thus, the decisive question is not for or against a qualifying or a quantifying analytics, but concerns the controlled handling of the points of contact between these two perspectives: Under which conditions is it legitimate to form groups from structurally valid and categorizable individual cases in an abstract manner (i.e. disregarding details) and to assign quantities to their elements (whether in the form of numbers or of quantity words such as often, rarely, frequently, etc.)? Conversely, how can what is measured, quantified numerically, and modeled as such be legitimately applied as a standard for classifying each new individual case? In short, how can the particular individual case be categorized and meaningfully contextualized as an element of a pattern in such a way that new options for action (ideally also for the social reality of life) can be generated from it? In our opinion, these questions cannot be answered in a general way, but can only be discussed and controlled with regard to the concrete object of investigation. Generalizing postulates—such as that of a scientification of philological work—may be helpful in terms of science policy, but otherwise they serve little purpose.
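
The dependence of the count on the underlying concept of the word can be made tangible with a deliberately trivial sketch; the sample sentence and both token definitions are invented for illustration only and are not those used in the JuReko project.

```python
import re

text = '§ 433 Abs. 1 S. 1 BGB: "Durch den Kaufvertrag wird der Verkäufer verpflichtet."'

# Broad concept of the "word": every whitespace-separated chunk counts,
# including the section sign, numbers and abbreviations.
broad = text.split()

# Narrow concept: only alphabetic word forms, normalized to lower case.
narrow = [t.lower() for t in re.findall(r"[A-Za-zÄÖÜäöüß]+", text)]

print(len(broad), len(narrow))  # the same text yields two different "word" counts
```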

5 The Perspective: Everything Flows

Sciences—and indeed all disciplines without exception—are necessarily dependent on a complementary relationship between measuring and understanding the world. From our legal linguistic perspective, this means: unity, the classification of a singular artefact such as a word or a legal case, always takes place against the background of an imagined multiplicity; and conversely, multiplicity is semantically empty without sufficient knowledge of the individual, of what is imagined as unity. Measurement and understanding are not contradictory; strictly speaking, they do not complement each other either. Rather, they are part of one and the same process of human processing of sense data. As scientists, we strive to control these processes and make them transparent.

14  Kant, I. (1974 [1787]). Kritik der reinen Vernunft. Berlin: Suhrkamp.



In this sense, understanding is like Heraclitus’ river, the ever-flowing sense, unstoppable and untamed. That is why we build measuring stations on the riverbed: water testing stations, landing stages, viewing platforms, points of orientation. But with the stabilizing intervention we change the world at the same time: straightening facilitates navigation and at the same time destroys the beauty and diversity of the original wild growth. Therein lies the potential, but therein also lies the risk. We understand the river only when it is no longer the river we set out to measure. We comprehend language only through language that awaits renewed comprehension. There is no escaping the hermeneutic circle, the infinite semiosis. Those who think they know everything have understood nothing.

Quantifying and Operationalizing the Commensurability of International Economic Sanctions

Matthias Valta

States may pose dangers or commit breaches of international law to which other states must respond in order to protect themselves and their citizens and residents. Military aggression, its preparation, proliferation of weapons of mass destruction, promotion of terrorism, environmental pollution, deprivation of cross-border resources are just a few examples. At the same time, states have become closely interconnected as economic areas through globalization and have become dependent on cross-border trade. This makes economic sanctions the instrument of choice to influence the actions of other states or at least to express disapproval in a powerful way. However, the influence on the political actors is indirect, because state-related sanctions primarily affect the population, which must expect poorer economic conditions, including mass unemployment and serious supply shortages. The consequences of economic sanctions and their effectiveness are the subject of both political science and economics as well as law. Political science and economics theorize about the effects of sanctions and attempt to substantiate them with economic and political data. At the same time, they measure the effects of sanctions using economic indicators.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

M. Valta (*) Chair for Public Law and Tax Law, Faculty of Law, Heinrich Heine University, Düsseldorf, Düsseldorf, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_8




Legal scholarship examines the legal admissibility of economic sanctions and, in particular, their commensurability: does the sanction1 further the sanction's purpose, are there milder, equally effective alternative measures, and is the sanction's purpose of such a nature and weight that it justifies the sanction's consequences in both the sanctioned and the sanctioning state? The present WIN project poses the question of the extent to which the law can take up economic and political science findings not only to record the facts of the case, but also for the subsequent legal assessment. The aim is to make the commensurability test more accurate and objective through quantification. Legal argumentation is expressed in natural language and uses argumentation figures that evaluate the severity of an interference and the importance of the protected legal good. Subjective evaluation is objectified by adherence to legal methodology and by connection to professional discourse. Empirically supported and quantified results promise a gain in rationality and objectivity. Then not only "severe" consequences and "important" legal interests would be related to each other, but also, for example, economic damage of 100 million euros caused by sanctions and damage of 200 million euros threatened in the absence of sanctions, e.g. through breaches of international law up to and including armed conflict. It remains to be clarified, however, to what extent such premise-laden models and the quantifications derived from them actually yield a gain in objectivity. In addition, it must be examined to what extent the formalized argumentation structure of jurisprudence even demands and can process such quantifications. For political science and economics, the project can show, using a limited example, what hurdles and opportunities exist for their results to become legally relevant.

1 How/What Do We Count

In jurisprudence, counting in the sense of determining a number of elements only takes place in the context of determining the facts of a case. The facts of the case are the cutout from the reality of life on which the legal assessment is based. However, the purely formal number of elements is regularly of secondary importance here as well: more important is the evaluation of the legal interest behind it and of its impairment or realisation. In this context, economic values are often used; damages are regularly monetarized and thus quantified.2

1  For the sub-area of the law of countermeasures: ILC Draft Articles on Responsibility of States for Internationally Wrongful Acts, Art. 42 et seq., Art. 49 et seq., Art. 54.
2  Thus, although in the law of damages restitution in kind, i.e. removal of the damage by the tortfeasor himself (§ 249 para. 1 BGB), is the guiding principle, the many exceptions (§ 249 para. 2, §§ 250 et seq. BGB), which give the creditor a claim to money, are more significant in practice.



In legal argumentation, the formal number of elements is in principle irrelevant. The outcome does not depend on the number of arguments or the number of rights or legal interests affected that one side can claim. Nor does the number of literature opinions and court decisions relied upon matter. Although “majority opinions” or “prevailing opinions” and “minority (opinions)” are often designated as such, no proof of authority follows from this.3 Such considerations may be sociologically relevant for the prediction of decisions, but it is primarily the authority of the decisions that matters, e.g. of supreme courts. In common law systems, the authorship and quality of potentially binding precedents also matters more than the sheer number of matching decisions.4 In exceptional cases, however, counts are also used to structure legal valuations. This is particularly the case when the relative comparability of different valuations is in the foreground and the valuation must therefore meet particular requirements of certainty and transparency. Examples include the assignment of grades in education and training,5 but also the valuation of compensation areas in nature conservation law,6 bids in the allocation of scarce resources (e.g. market booths,7 mobile phone licences) and in the award of public contracts,8 as well as cost-benefit calculations in the planning of transport routes and the promotion of their construction.9

3  In lieu of all Tettinger, P.J., & Mann, T. (2015). Einführung in die juristische Arbeitstechnik (5th ed.). para. 219.
4  Cf. Koch, H., Magnus, U., & von Mohrenfels, P.W. (2010). IPR und Rechtsvergleichung (4th ed.). m.no. 11 with regard to England, m.no. 41 et seq. restrictively for the USA.
5  Cf. e.g. on grading in schools in Baden-Württemberg, Germany, § 89 para. 2 nos. 4, 5 SchG in conjunction with § 5 Grade Education Ordinance.
6  § 15 para. 2 BNatSchG; cf. Hofmann, E. (2007). Abwägung im Recht — Chancen und Grenzen numerischer Verfahren im öffentlichen Recht. Tübingen: Mohr Siebeck. pp. 28 et seq.
7  Cf. on the system of scoring points in the allocation of scarce booth space in the Dresden Christmas market "Striezelmarkt" pursuant to Section 70 (3) Gewerbeordnung (Trade, Commerce and Industry Regulation Act) in conjunction with a selection guideline: OVG Bautzen v. 26.11.2013 - 3 B 494/13, Gewerbearchiv 2014, p. 128.
8  Hofmann, E. (2007). Abwägung im Recht — Chancen und Grenzen numerischer Verfahren im öffentlichen Recht. Tübingen: Mohr Siebeck. pp. 16 et seq.
9  Cf. ibid., pp. 31 et seq.



2 How/What Do We Measure?

Measurements in the sense of quantifying observations10 are also primarily used by jurisprudence to determine facts that are legally evaluated in a second step. Depending on the facts and the standard, such quantifications and their standard-compliant determination have great significance.11 In immission control law, for example, the measurement of emissions and immissions is crucial in order to assess and continuously monitor hazards to humans and the environment. However, economic variables are also relevant as input variables for legal evaluation. For example, budgetary law provides for certain absolute and relative limits on public debt, which are expressed as a percentage of gross domestic product (GDP).
A distinction must be made in the legal evaluation of the figures collected. Some legal provisions specify the evaluation of the figures by naming precise threshold or maximum values. For example, immission control law contains maximum limits for pollutants and noise, which can be directly compared with the figures, provided that they are measured in accordance with the regulations. Budgetary law specifies precise percentages of GDP. However, the accompanying reduction of the legal assessment to a comparison with a figure is an exception - albeit not an uncommon one - which requires justification from a legal point of view. This is because a certain arbitrariness is regularly inherent in a numerical limit. Why should 1.00 still be sufficient for a maximum value of 1, but 1.01 or 1.02 no longer suffice? The associated problem of equal treatment is justified pragmatically: if legal certainty, the complexity of recording the facts or enforcement require it, the legislator may typify and provide for lump-sum assessments and may thereby also draw limits expressed in numbers, even if the choice of one number rather than another may appear arbitrary.12
The law on sanctions does not explicitly link sanctions to specific values. However, the consequences of state-related sanctions can be reflected in the change in the gross national income of the target state and also of the sanctioning state. For example, sanctions against Iraq as a result of the attack on Kuwait led to a 54% decline in Iraq's gross national income, equivalent to US$830 per capita.13

10  For definitions of measurement, see the glossary "Measurement".
11  At least nowadays, it is no longer tenable to attribute a relatively subordinate importance to quantifications in this broad sense, as still does Bydlinski, F. (1991). Juristische Methodenlehre und Rechtsbegriff (2nd ed.). Vienna, New York: Springer Verlag. p. 87.
12  Cf. for tax law BVerfG v. 23.6.2004, 1 BvL 3/98, 1 BvL 9/02, 1 BvL 2/03, BVerfGE 111, 115, 137.
13  Database from Hufbauer, G.C. (2007). Economic sanctions reconsidered (3rd ed.). Washington D.C.: Peterson Institute for International Economics. Data adjusted for purchasing power as of 2007.



The average of 2.8% when considering all sanctions episodes of the 20th century indicates the exceptional strength of these sanctions. However, this measurement remains an input variable that must be legally evaluated and thus translated. A commensurability test using the figures directly is not possible. For one thing, monetarisation does not represent reality in its entirety, but only under certain economic paradigms.14 Gross national income is an abstract and limited quantity that does not allow statements about individual hardships. For another, not all legally protected goods can be comprehensively valued in money terms. Thus, the purposes of sanctions, e.g. violations of international law, human rights violations, cannot be monetized in a meaningful way. The value of human life and health cannot be monetized in general15 (and, in principle, cannot be weighed in legal terms16). As an alternative to the limitations of gross national income, other indices can be considered, such as the United Nations Human Development Index. In addition to gross national income, this index also tracks life expectancy and length of schooling.17 Life expectancy and school attendance are supposed to reflect the non-economic capabilities that the state grants its citizens. At the same time, the realisation of essential human rights positions (protection of life, right to education and participation) is recorded in the abstract. Although this index is broader, it is not without controversy.18 Moreover, it also remains very abstract and reflects the individual fundamental rights positions only very indirectly via aggregated data.

14  Führ, M. (2001). Ökonomische Effizienz und juristische Rationalität: Ein Beitrag zu den Grundlagen interdisziplinärer Verständigung. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos Verlag. pp. 157 et seq., pp. 166 et seq.; Sunstein, C.R. (2014). The limits of quantification. California Law Review, 102, pp. 1369 et seq., pp. 1376 et seq.
15  Special monetizations of human life are found, for example, in the insurance industry, but these do not constitute a general measure of the "value" of a human life. Instructive on U.S. behavioral economics approaches and related problems Sunstein, C.R. (2014). The limits of quantification. California Law Review, 102, pp. 1369 et seq., pp. 1373 et seq.
16  On this complex and not completely clarified question, we restrict ourselves to the indication that a weighing is not possible in particular where the person concerned cannot influence the endangerment of his or her life, cf. BVerfG v. 9.11.2015, 1 BvR 357/05, BVerfGE 115, 118, 151 et seq.
17  Retrieved from http://hdr.undp.org/en/content/humandevelopmentindexhdi. (31.1.2016).
18  Review in Kovacevic, M. (2011). Review of HDI Critiques and Potential Improvements. Retrieved from http://www.hdr.undp.org/sites/default/files/hdrp_2010_33.pdf (23.11.2016). See also Wolff, H., Chong, H., & Auffhammer, M. (2011). Classification, Detection and Consequences of Data Error: Evidence from the Human Development Index. The Economic Journal, 121, p. 843.



The individual fundamental rights positions can therefore not be put in relation to each other. If, in the case of gross national income, changes can be attributed to concrete sanctions only by estimation, such attributions are more than doubtful in the case of more complex indices due to data poverty.
The legal concept of discretion must be distinguished from measurements in the sense described above. Discretion is a legal consequence that provides administrative authorities with decision-making leeway, which must be exercised in line with the purpose of the norm, higher-ranking law and the principle of commensurability.19 Etymologically, the German term "Ermessen" derives from the verb "ermessen", whose original meaning "to measure and assess" has been overlaid by a variety of figurative meanings "to consider, estimate, believe, conclude, comprehend, judge".20 Even though this concept of discretion still presupposes a yardstick,21 it is not necessarily quantitative, nor is it tied to actual observations (see also the chapter by Lauer & Pacyna).
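To illustrate the purely auxiliary character of such figures, a minimal sketch follows; the 54% and 2.8% values are those cited above from the Hufbauer data, while the comparison logic and variable names are our own illustrative assumptions.

```python
# Figures cited above (Hufbauer et al. data, purchasing-power adjusted, as of 2007).
iraq_gni_decline = 0.54             # estimated decline of Iraq's gross national income
average_episode_decline = 0.028     # average decline across all 20th-century sanctions episodes

def severity_ratio(decline: float, average: float) -> float:
    """Ratio of one episode's estimated GNI decline to the historical average."""
    return decline / average

print(f"{severity_ratio(iraq_gni_decline, average_episode_decline):.1f}x the average episode")
# Output: roughly 19.3x. The figure is only an input variable: whether such an
# impairment is commensurate remains a legal question that the number cannot answer.
```

The resulting ratio underlines the exceptional severity of the Iraq sanctions, but it says nothing yet about their legal commensurability.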

3 How Do We Recognize and Interpret Patterns?

The concept of "pattern" as such has no essential meaning in jurisprudence.22 Nevertheless, jurisprudence captures the reality of life through patterns defined in the elements of the offence in the law. If all the elements of the offence are present, a certain legal consequence is triggered: conduct can be established as lawful or unlawful, orders of conduct can be set or revoked. One consequence of this complexity-reducing formalisation is that the law can only make certain minimum or framework statements ("ethical minimum") - there remains room for private autonomous or political arrangements which can follow more complex models of information processing. In this way, people can conclude contracts with each other which take into account and balance the respective interests in a privately autonomous manner, however absurd they may appear to a third party. In the political process, political guiding principles, religious views or world views can play a role.

19  Cf. instead of all Maurer, H. (2011). Allgemeines Verwaltungsrecht (18th ed.). Berlin: C. H. Beck. § 7 m.no. 7.
20  Held-Daab, U. (1996). Das freie Ermessen: Von den vorkonstitutionellen Wurzeln zur positivistischen Auflösung der Ermessenslehre. Berlin: Duncker & Humblot. p. 21.
21  Stickelbrock, B. (2002). Inhalt und Grenzen richterlichen Ermessens im Zivilprozeß. Cologne: Dr. Otto Schmidt Verlag. p. 12.
22  In intellectual property law, utility models and design patterns (now: designs) are protected, but these refer to technical or aesthetic models; see § 1 No. 1 DesignG.



Although these can in principle be given legal form, they must in turn leave sufficient reserves of freedom in order to safeguard individual liberties.

4 What Is the Significance of Patterns and Numbers?

4.1 Jurisprudence

Legal science defines law on a sociological basis as legitimate mutual expectations of behaviour.23 (Dogmatic) jurisprudence is thus a decision science that makes formalized statements about expected behavior under limited knowledge.24 The claim to be a decision science is normatively grounded in the right to justice and the right to effective legal protection against public authority. In order to be able to decide in every conceivable circumstance in accordance with the imperatives of safeguarding justice and effective legal protection, pattern recognition about facts is multilevel and interconnected.25 In addition to rules, i.e. norms26 with an unambiguous legal consequence, the law27 also contains principles and principled discretionary powers, which as a legal consequence merely require the most extensive possible realization of certain legally protected interests.28 The constitution and its functional partial equivalents in European and international law contain, as meta-law, comprehensive frameworks of principles which are concretised by laws and individual state acts within the limits drawn by the respective higher levels.29

23  Luhmann, N. (1970). Positivität des Rechts als Voraussetzung einer modernen Gesellschaft. Jahrbuch für Rechtssoziologie und Rechtstheorie, 1, p. 175, p. 179; Habermas, J. (1994). Faktizität und Geltung. Berlin: Suhrkamp. p. 155.
24  Engel, C. (2007). Herrschaftsausübung bei offener Wirklichkeitsdefinition: Das Proprium des Rechts aus der Perspektive des öffentlichen Rechts. In C. Engel & W. Schön (Eds.), Recht - Wissenschaft - Theorie: Vol. 1. Das Proprium der Rechtswissenschaft. Tübingen: Mohr Siebeck. pp. 224 et seq., pp. 236 et seq.
25  On the model character of the legal facts from a mathematical information perspective, Ferrara, M., & Gaglioti, A. (2012). A Mathematical Model for the Quantitative Analysis of Law: Putting Legal Values into Numbers. In M.K. Jha, M. Lazard, A. Zaharim, & S. Kamaruzzaman (Eds.), Applied mathematics in electrical and computer engineering. pp. 201 et seq., p. 202.
26  Ought-sentences (commands, prohibitions, permissions), cf. Alexy, R. (1994). Theorie der Grundrechte (2nd ed.). Berlin: Suhrkamp. p. 72.
27  Ibid., p. 76.
28  Ibid., p. 75.



A concrete state of affairs must therefore be compared with a complex set of facts, which is composed of several levels of norms. Thus, a concrete individual act of the state can be based on a law, which, however, is only decisive to the extent that it does not violate constitutional values and must be interpreted in conformity with the constitution. This interconnected system based on valuations allows statements to be made about all possible facts of life, whereby the practitioner of law has a certain scope for decision-making within the framework of legal methodology and is in this respect also a lawmaker.30 Legal science is not an exact science, and the practice of law is also an exercise of authority.31 The need for an unambiguous decision is resolved procedurally through the order of jurisdiction and legal protection by the courts,32 which in the last instance select the practically authoritative solution from among several justifiable ones. The flexible, principle-guided finding of law finds its expression in the principle of commensurability. With it, several principles can be weighed against each other and optimised in the sense that the legal consequence is neither black nor white, but describes a "grey area" of justifiable decisions appropriate, in the concrete situation, to both the "white" and the "black" principle and the protected interests embodied therein.33
The possibilities and limits can be illustrated using the example of state-related economic sanctions. The protection of human rights is affected both from the perspective of interference and from the perspective of the duty to protect.

29  Bydlinski, F. (1991). Juristische Methodenlehre und Rechtsbegriff (2nd ed.). Vienna, New York: Springer Verlag. p. 11; Seiler, C. (2000). Heidelberger Forum: Vol. 111. Auslegung als Normkonkretisierung. Heidelberg. pp. 47 et seq.; Müller, F. (2013). Juristische Methodik, Band I: Grundlegung für die Arbeitsmethoden der Rechtspraxis (11th ed.). Berlin: Duncker & Humblot. pp. 263 et seq., pp. 303 et seq., pp. 504 et seq.
30  Jestaedt, M. (2007). "Öffentliches Recht" als wissenschaftliche Disziplin. In C. Engel & W. Schön (Eds.), Recht - Wissenschaft - Theorie: Vol. 1. Das Proprium der Rechtswissenschaft. Tübingen: Mohr Siebeck. p. 252; Lieth, O. (2007). Die ökonomische Analyse des Rechts im Spiegelbild klassischer Argumentationsrestriktionen des Rechts und seiner Methodenlehre. Baden-Baden: Nomos. pp. 90 et seq.
31  Engel, C. (2007). Herrschaftsausübung bei offener Wirklichkeitsdefinition: Das Proprium des Rechts aus der Perspektive des öffentlichen Rechts. In C. Engel & W. Schön (Eds.), Recht - Wissenschaft - Theorie: Vol. 1. Das Proprium der Rechtswissenschaft. Tübingen: Mohr Siebeck. pp. 224 et seq., p. 238.
32  Ibid., pp. 224 et seq., p. 238.
33  Alexy, R. (1994). Theorie der Grundrechte (2nd ed.). Berlin: Suhrkamp. pp. 75 et seq., pp. 145 et seq.



At the same time, the conflicting spheres of national sovereignty and the associated allocations of legitimacy and responsibility must be fundamentally preserved within the framework of the prohibition of intervention. This leads to complex relationships of interference and protection. The classical doctrine of the prohibition of intervention and the protection of the equal sovereignty of states,34 as well as the right of countermeasures,35 remains tied to a binary rule structure: if, as a minimum threshold, the sanctioning state can show a certain connection as a regulatory interest (e.g. protection of its national territory, its existence, its inhabitants and nationals, measures against war crimes and other violations of peremptory international law), it is authorized to impose sanctions without further ado, insofar as it has not contractually committed itself to refrain from doing so. This formal rule structure has, however, meant that the steering power of the law is very low and that a wide scope for sanctions has emerged beyond the minimum threshold. However, if one asks about the purpose of sovereignty, it can to a certain extent be linked to preconditions and thus materialized and made weighable as a principle. Thus, according to recent developments in international law, sovereignty is related to the protection of human rights. The freedom of states within the framework of the prohibition of intervention is accompanied by a responsibility to ensure basic human rights protection.36 This is discussed in particular in the context of the responsibility to protect, according to which a state can only defend itself to a limited extent against interventions by the community of states if it blatantly fails to meet its human rights responsibility because it cannot (failed state) or does not want to.37 Even if the responsibility to protect is controversial in detail, especially after the intervention in Libya, it is at least recognized in principle for core crimes of international criminal law such as genocide, crimes against humanity and wars of aggression.

34  More commonly: sovereign equality, cf. Art. 2 para. 1 UN Charter.
35  In particular: Art. 49 para. 1, Art. 54 para. 1, in conjunction with Art. 48 para. 1, ILC Draft Articles on the Responsibility of States; but with an opening to the principle of proportionality and the protection of human rights in Art. 50 and 51, ILC Draft Articles.
36  Peters, A. (2009). Humanity as the A and O of Sovereignty. European Journal of International Law, 20(3), pp. 513 et seq.
37  Deng, F.M. (Ed.). (1996). Sovereignty as responsibility: conflict management in Africa. Washington D.C.: Brookings Institution Press; Evans, G.J., & Sahnoun, M. (2001). The responsibility to protect: Report of the International Commission on Intervention and State Sovereignty; Peters, A. (2009). Humanity as the A and O of Sovereignty. European Journal of International Law, 20(3), pp. 513 et seq., pp. 535 et seq.; Overview in Vashakmadze, M. (2012). Responsibility to protect. In B. Simma, H. Mosler, A. Paulus, & E. Chaitidou (Eds.), The Charter of the United Nations: A commentary. Oxford: Oxford University Press. pp. 1201 et seq.



The concerns about military interventions on the basis of the responsibility to protect do not arise to the same extent in the case of economic sanctions. If one takes this materialization further, the sovereignty of the target state can in principle be weighed against the sanctioning state's interest in sanctions. Both the sanctioning interest and the restrictions imposed on the target state by the consequences of the sanction can be assessed in terms of human rights. Does the sanction bring with it an increase in the realization of human rights which justifies the impairment of human rights in the target state? Instead of crude minimum thresholds for sanctions, a weighing can take place that "optimizes" the sanction within the framework of commensurability, in line with the requirement that the human rights interests for and against the sanction should both flow into the decision to the greatest possible extent and be realized in the result.38 But here, too, a wide margin remains: the human rights test does not promise the one optimal solution, but can only indicate a range of possible solutions. Moreover, the balancing is done through natural-language reasoning, in which the competing concerns are structured by freely chosen attributes such as "minor", "substantial", "major/high", "very major/very high" and "paramount". The persuasive force lies in the argument.39 The correctness of the result can only be secured procedurally, by an open professional discourse de lege artis before the courts and in legal scholarship, which the practitioner of the law must continuously take into account.

5 Political Science and Economics

Research in political science and economics does not deal with the permissibility of sanctions under international law, but sheds light on their actual political occurrence and records their effects, both political and economic. It also provides indications for a rationalized sanctions policy with regard to the political reactions that can be expected under certain paradigms and behavioral models. Research in economics, and increasingly also in political science, makes extensive use of quantification: the economic effects of sanctions can usually be quantified statistically, e.g. through changes in gross domestic product. Political science research attempts to formalize political dynamics through numbers, e.g. simply through the number of changes in government. Quantification has proven to be politically influential, although a closer look raises doubts about the validity of some indicators.



This can be exemplified by the influential study by Hufbauer et al.40 With its statement that 33% of all sanctions are "successful",41 it influenced the foreign policy of the United States of America at the beginning of the 1990s and led to a renaissance of sanctions.42 Looking more closely at the study, it turns out that this figure is based on an assessment of success on a scale of 1–4 (failure/unclear/partial success/success) and of the contribution of sanctions to this success on a scale of 1–4 (negative/small/significant/decisive), with a product of both values greater than 9 being counted as a success.43 One criticism points out that, on the basis of other, equally subjective evaluations of the same data, one can just as well arrive at a figure of only 6–9% for successful sanctions.44
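The scoring rule just described can be restated as a short sketch; the episode values are invented, and the two 1–4 scales and the "product greater than 9" criterion simply follow the description of the Hufbauer et al. coding given here.

```python
# Scales as described above: policy outcome 1-4 (failure / unclear / partial success / success),
# contribution of the sanctions 1-4 (negative / small / significant / decisive).
def counted_as_success(outcome: int, contribution: int) -> bool:
    """Per the description above, an episode counts as a success if the product exceeds 9."""
    return outcome * contribution > 9

# Invented episodes for illustration only.
episodes = {
    "episode A": (4, 3),  # 12 -> counted as success
    "episode B": (3, 3),  #  9 -> not counted
    "episode C": (4, 2),  #  8 -> not counted
}

results = {name: counted_as_success(o, c) for name, (o, c) in episodes.items()}
rate = sum(results.values()) / len(results)
print(results, f"reported success rate: {rate:.0%}")
```

Recoding a single episode in this toy set shifts the reported success rate by a third, which illustrates why a different subjective coding of the same data basis can arrive at 6–9% instead of 33%.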

6 What Is a Scientific Result? What Perspectives Does the WIN Project Offer?

6.1 Dogmatic Jurisprudence and Social Science Approach

In jurisprudence, a distinction can be made between dogmatic jurisprudence and the foundation-oriented study of law, referred to here in shorthand as the social science approach, whereby the reference to foundations is also part of a correctly understood dogmatic jurisprudence. Dogmatic jurisprudence applies the applicable law through interpretation, concretisation and argumentation.45 Thus, on the one hand, it does nothing other than what the practical user of the law and the courts do when it assesses concrete facts. Precisely because of its neutrality, it is an important participant in the professional discourse and a discussion partner for the courts and for lawyers committed to the interests of the parties. On the other hand, it deals with the subject matter of law in the abstract and systematizes it.

40  Current edition: Hufbauer, G.C. (2007). Economic sanctions reconsidered (3rd ed.). Washington D.C.: Peterson Institute for International Economics.
41  Ibid., pp. 158 et seq.
42  Pape, R.A. (1997). Why Economic Sanctions Do Not Work. International Security, 22(2), p. 90, pp. 91 et seq.
43  Hufbauer, G.C. (2007). Economic sanctions reconsidered (3rd ed.). Washington D.C.: Peterson Institute for International Economics. pp. 49 et seq.
44  Pape, R.A. (1997). Why Economic Sanctions Do Not Work. International Security, 22(2), p. 90, p. 106, fn. 34.
45  Bydlinski, F. (1991). Juristische Methodenlehre und Rechtsbegriff (2nd ed.). Vienna, New York: Springer Verlag. p. 16.



The systematization of law and the formation of abstract application results in doctrines that make the complex network of interrelated provisions operationalizable; this facilitates and improves the subsequent application of law.
The foundation-oriented approach makes use of neighbouring disciplines as basic subjects in order to better understand the practice of law. In particular, the social sciences should be mentioned here (e.g. the sociology of law and the economic analysis of law), so that one can also speak of a "social science approach". However, philosophical and ethical approaches are also relevant. Their insights are sources of legal values.46 They are relevant for legal policy and for policy advice guided by jurisprudence. In addition, they also gain dogmatic relevance, so that the arc can be drawn back to dogmatic jurisprudence ("law in context"). In this way, statements about justice, efficiency or social participation are referenced and justified in legal provisions and are to be applied, to the extent opened up in each case, for the interpretation and review of norms of equal and lower rank. Where they are transposed into meta-legal norms, especially principled constitutional norms, they are directly called upon to fill them out; in other respects, they expand the "scope of reasoning for law-sentence-based legal discovery".47
A scientific result of dogmatic jurisprudence therefore always consists in an abstract or concrete interpretation of the law or in an objectified argumentation for a particular application. In keeping with its claim to be a decision science, a normative statement is made. Within the framework of abstract interpretation, the legal material is ordered and systematically related, and ideally doctrines (dogmas) are derived from it. Statements can therefore read, for example: "The legal consequence of norm A applies only insofar as the legal consequence of norm B does not conflict with it"; or: "The application of certain norms must take into account principle X, with the consequence that certain legal interests must be weighed argumentatively in the interpretation of the facts or the determination of the legal consequence within a given scope."

46  Kirchhof, P. (2000). Besteuerung im Verfassungsstaat. Tübingen: Mohr Siebeck, p. 21. Following Alf Ross's broad concept of the source of law as any source of knowledge of law, cf. Ross, A. (1929). Theorie der Rechtsquellen. Aalen: Scientia. Similarly Luhmann, N. (1973). Die juristische Rechtsquellenlehre aus soziologischer Sicht. In G. Albrecht, H. Daheim, & F. Sack (Eds.), Soziologie, Festschrift für René König, pp. 387 et seq., according to which law depends on dogmatically grasping social valuations via the sources and incorporating them into its system in order to recognize the need for change in what is valid.
47  Kirchhof, P. (2000). Besteuerung im Verfassungsstaat. Tübingen: Mohr Siebeck, p. 21 f.; Luhmann, N. (1973). Die juristische Rechtsquellenlehre aus soziologischer Sicht. In G. Albrecht, H. Daheim, & F. Sack (Eds.), Soziologie, Festschrift für René König, p. 387, at p. 390: with regard to the compulsion to make decisions in law, "a surplus of possibilities of justification is produced, so that sufficiently many and also opposing decisions can be justified in case of need." (translated)



In the context of concrete application, individual facts are judged in law. The result may therefore be that a given set of facts entails a certain legal consequence (criminal liability; an individual act being unlawful and to be set aside). A scientific result of the auxiliary sciences is measured according to the methodology of the respective science. For example, an economic result may be that a certain norm or its implementation is inefficient. This result can in turn have an impact on dogmatic jurisprudence in a second step, if a legal provision refers to efficiency and the auxiliary scientific finding thus becomes relevant to interpretation.

6.2 Formalization as a Common Formal Object of Legal and Quantitative Social Science

If one compares the jurisprudential approach and the quantifying approach of political science and economics, a certain formalisation in the recording and processing of the reality of life can be observed in both cases. Legal science processes the reality of life by means of a complex network of legal provisions and legal argumentation, which, however, has the effect of reducing complexity in relation to the abundance of the reality of life. This formal object of dogmatic jurisprudence has an identity-creating effect.48 The quantifying approach of political science and economics contains a complexity-reducing formalization through the use of numbers. This applies to cardinal measurements, since the determination of the measure already contains a formalization, as well as to statistical operations, but especially also to ordinal indicators as in the Hufbauer et al. study. Ordinal indicators are comparable to the legal facts method: Hufbauer's evaluation of sanction success in four levels and of the sanctions' contribution in the same four levels is based on definitions, or defined facts, of what constitutes success or a contribution of a certain level. The comparability of ordinal quantifications and legal methodology is also shown by the fact that the commensurability test can largely be expressed in ordinal numbers by means of Robert Alexy's weight formula. Alexy maps the outcome of the balancing with "bedingte Vorrangrelationen" ("conditional precedence relations"). The weight formula contains numerical approaches by assigning different numerical values to the levels ("quotient formula" 1, 2, 4).

48  Jestaedt, M. (2007). "Öffentliches Recht" als wissenschaftliche Disziplin. In C. Engel & W. Schön (Eds.), Recht - Wissenschaft - Theorie: Vol. 1. Das Proprium der Rechtswissenschaft. Tübingen: Mohr Siebeck. pp. 267 et seq.



(“double triad”)49: intensity of intervention “heavy, medium, light”, abstract weight of the human right affected “high, medium, low”50, to which is added the prognosis factor (0–1) of certainty of knowledge. The weight formula can be supplemented with numerical approaches by assigning different numerical values to the levels (“quotient formula” 1, 2, 3). A certain encroachment (1) of high intensity (3) on the right to physical integrity or life (3), which is to be weighted highly in the abstract, leads, for example, to a rating of 1 × 3 × 3 = 9 for sanctions affecting the food supply. In contrast, an unclear general impairment of the economy, which only leads to loss of income without endangering life and limb, would, for example, be assessed with the prognosis factor 0.5, the abstract weight 1 and the concrete intensity of encroachment 1 with a total of only 0.5. The purpose of the sanction must now be made commensurable as a comparative variable by assessing the improved realization of human rights that it brings about. If the purpose of the sanction is, therefore, the prevention of human rights violations in the target state, the following must be applied: the probability of success of the sanction, which according to Hufbauer/Schott/Eliott/Oegg can be set at 0.33, although the figure is easily disputable and suggests a bogus accuracy. If the human rights violations refer to life and limb or to human rights that are important for self-determination, such as freedom of opinion and freedom of assembly, their abstract weight can also be assessed with 3; in the case of the concrete intensity, an assessment would also have to be made, which in this example is also assessed with “high” and thus 3. As a result, the purpose of the sanction would have to be evaluated in terms of human rights with 0.33 * 3 * 3 = 3. In comparison to the previous examples, this sanction purpose would consequently outweigh the moderately probable slight loss of income with its value of 0.5, but not a certain serious impairment of life and limb with its value of 9. The numbers are not intended to conceal the fact that this is merely a matter of normative valuations that cannot be objectively quantified. Petersen, therefore, doubts the rationalizing power of Alexy’s formula, since, in addition to changes in  Cf. Alexy, R. (2003). Die Gewichtsformel. In J. Jickeli (Ed.), Gedächtnisschrift für Jürgen Sonnenschein. Berlin: De Gruyter, pp. 771 et seq.; See also Riehm, T. (2007). Abwägungsentscheidungen in der praktischen Rechtsanwendung: Argumentation, Beweis, Wertung. Munich: C. H. Beck.; Klatt, M., & Schmidt, J. (2010). Spielräume im öffentlichen Recht. Tübingen: Mohr Siebeck. 50  In doubt Petersen, N. (2015). Verhältnismäßigkeit als Rationalitätskontrolle: Eine rechtsempirische Studie verfassungsgerichtlicher Rechtsprechung zu den Freiheitsgrundrechten. Tübingen. Mohr Siebeck, p. 63, noting that this presupposes commensurability of the various fundamental rights positions. A corresponding comparability of different abstract of different abstract fundamental rights positions does exist in legal argumentation, but it can be only be translated into ordinal numbers by mapping the precedence relations. 49



Petersen therefore doubts the rationalizing power of Alexy's formula, since, in addition to changes in classification, even small changes in the scale lead to different results.51 Indeed, it is debatable whether the specific intensity of encroachment and the abstract weight of the fundamental right should be given equal weight in the formula. Even small changes in the respective factor lead to large deviations due to the multiplication. The multiplication of ordinal numbers is also generally problematic from a mathematical point of view.52 As a result, Alexy's weight formula can, valuably and meritoriously, only explicate legal valuations, but not rationalize them (further). The evaluation thus continues to be guided merely by the rules of the legal art of interpretation and argumentation into a range of justifiable results, from which the final deciding institution (a court, a commission) declares one to be binding. This caveat also applies to quantifications in political science and economics insofar as they represent valuations in ordinal numbers.
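For illustration, the worked example above can be restated compactly; this is a simplified sketch, not Alexy's formula in full (which relates the weights of competing principles as a quotient), and the level values and prognosis factors are simply those used in the example.

```python
def weight(prognosis: float, intensity: int, abstract_weight: int) -> float:
    """Weight of an interference or of a sanction purpose: certainty of the prognosis (0-1)
    times the concrete intensity (1-3) times the abstract weight of the affected right (1-3)."""
    return prognosis * intensity * abstract_weight

# Values taken from the example in the text.
food_supply_interference = weight(1.0, 3, 3)   # certain, high intensity, life and limb      -> 9.0
income_loss_interference = weight(0.5, 1, 1)   # unclear prognosis, light, low weight        -> 0.5
sanction_purpose = weight(0.33, 3, 3)          # success probability 0.33 per Hufbauer et al. -> ~3.0

# The purpose outweighs the moderately probable slight loss of income,
# but not a certain serious impairment of life and limb.
print(sanction_purpose > income_loss_interference)   # True
print(sanction_purpose > food_supply_interference)   # False
```

Changing a single level assignment, say the abstract weight of the economic position from 1 to 2, doubles the corresponding weight, which is exactly the sensitivity to scale and classification that Petersen criticizes.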

6.3 Different Formalizations as an Obstacle to Reception

In questions of commensurability, jurisprudence has difficulties with a (further) formalization through quantification. If one examines the appropriateness of sanctions, it is sufficient in legal terms that they promote the sanction purpose (policy change in the target state) in some way. However, the quantifications of the Hufbauer et al. study are not demanded by the legal test of appropriateness:53 promotion of the purpose is given at 6% just as at 33%.

51  Petersen, N. (2015). Verhältnismäßigkeit als Rationalitätskontrolle: Eine rechtsempirische Studie verfassungsgerichtlicher Rechtsprechung zu den Freiheitsgrundrechten. Tübingen: Mohr Siebeck, p. 64; illustrative Schlink, B. (1976). Abwägung im Verfassungsrecht. Berlin: Duncker und Humblot, pp. 136 et seq.
52  Petersen, N. (2015). Verhältnismäßigkeit als Rationalitätskontrolle: Eine rechtsempirische Studie verfassungsgerichtlicher Rechtsprechung zu den Freiheitsgrundrechten. Tübingen: Mohr Siebeck, p. 60, p. 63, pointing out that ordinal numbers could only be multiplied if only one factor was compared (e.g. the abstract value of fundamental rights); regarding the ordinal scale as sufficient, on the other hand, Klatt, M., & Meister, M. (2012). Verhältnismäßigkeit als universelles Verfassungsprinzip. Der Staat, 51, pp. 159 et seq., p. 175; Borowski, M. (2013). On Apples and Oranges. Comment on Niels Petersen. German Law Journal, 14, p. 1409, pp. 1413 et seq., p. 1415.
53  Cf. generally Führ, M. (2001). Ökonomische Effizienz und juristische Rationalität: Ein Beitrag zu den Grundlagen interdisziplinärer Verständigung. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos, p. 157, p. 180; critical of the low threshold in the face of economic opportunities Meßerschmidt, K. (2001). Ökonomische Effizienz und juristische Verhältnismäßigkeit. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos, pp. 226 et seq.



In the second step of the commensurability test, necessity, the question is asked for a means that is equally effective but less detrimental to the human rights of the citizens concerned.54 The result of the study by Hufbauer et al. that pure financial sanctions (restriction of payment and capital movements) are already successful in 35% of all cases, comprehensive trade and financial sanctions in 40% of all cases,55 is not sufficient for legal necessity. Even if pure financial sanctions are almost as effective as comprehensive sanctions, they are not equally effective. Demonstrating equal effectiveness is generally a practical hurdle to the necessity test.56 For the third step of the examination, commensurability in the narrow sense, it is necessary to weigh the various legal positions, in particular human rights, which are interfered with and which are to be protected by the sanction.57 Here, there is a lack of a meaningful, commensurable quantification of the legal positions.58 It is true that the consequences of sanctions can be quantified in monetary terms, and similar quantifications are also conceivable for the purposes of sanctions (e.g. damage caused by military aggression). However, the significance of such quantifications is doubtful, especially since the narrow focus on monetarization does not, or not adequately, capture many values, especially human rights values.59 A corresponding economic cost-benefit analysis can, of course, be an aid to the more comprehensive legal consideration by capturing the economic aspects.60
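A minimal sketch of the necessity step may clarify why the 35%/40% comparison does not decide it; the decision rule below paraphrases the doctrinal standard described above, and the burden values are placeholder assumptions, not measured quantities.

```python
def displaces_measure(effectiveness_alt: float, effectiveness_measure: float,
                      burden_alt: float, burden_measure: float) -> bool:
    """A milder means defeats the chosen measure only if it is at least equally
    effective and less burdensome for the affected rights."""
    return effectiveness_alt >= effectiveness_measure and burden_alt < burden_measure

# Success rates cited above from the Hufbauer et al. study; the burden values are
# placeholders for a (qualitative) assessment of the human rights interference.
financial_only = {"effectiveness": 0.35, "burden": 1}
comprehensive  = {"effectiveness": 0.40, "burden": 3}

print(displaces_measure(financial_only["effectiveness"], comprehensive["effectiveness"],
                        financial_only["burden"], comprehensive["burden"]))
# False: almost as effective is not equally effective, so on this criterion alone the
# comprehensive sanction is not rendered unnecessary by the financial-only alternative.
```

Because 35% is not at least 40%, the milder financial-only sanction does not displace the comprehensive one under this criterion, whatever its human rights advantages.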

54  Instead of all Sachs, M. (2014). Art. 20. In M. Sachs (ed.), Grundgesetz (7th ed.). m.no. 152.
55  Hufbauer, G.C. (2007). Economic sanctions reconsidered (3rd ed.). Washington D.C.: Peterson Institute for International Economics, pp. 170 et seq.
56  In general Führ, M. (2001). Ökonomische Effizienz und juristische Rationalität: Ein Beitrag zu den Grundlagen interdisziplinärer Verständigung. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos, p. 157, p. 181, p. 184; for accepting somewhat less effectiveness in exchange for greater sparing of rights Sachs, M. (2014). Art. 20. In M. Sachs (ed.), Grundgesetz (7th ed.). m.no. 153.
57  Sachs, M. (2014). Art. 20. In M. Sachs (ed.), Grundgesetz (7th ed.). m.no. 154 et seq.
58  Führ, M. (2001). Ökonomische Effizienz und juristische Rationalität: Ein Beitrag zu den Grundlagen interdisziplinärer Verständigung. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos, pp. 157 et seq.; Petersen, N. (2015). Verhältnismäßigkeit als Rationalitätskontrolle: Eine rechtsempirische Studie verfassungsgerichtlicher Rechtsprechung zu den Freiheitsgrundrechten. Tübingen: Mohr Siebeck, pp. 58 et seq.
59  Meßerschmidt, K. (2001). Ökonomische Effizienz und juristische Verhältnismäßigkeit. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos, pp. 230 et seq.; Petersen, N. (2015). Verhältnismäßigkeit als Rationalitätskontrolle: Eine rechtsempirische Studie verfassungsgerichtlicher Rechtsprechung zu den Freiheitsgrundrechten. Tübingen: Mohr Siebeck, p. 60.
60  Gawel, E. (2001). Ökonomische Effizienzanforderungen und ihre juristische Rezeption. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos, pp. 30 et seq.




Some relief could be provided by the Human Development Index, which, in addition to gross national income, also depicts life expectancy and the level of education.61 However, even this index is not uncontroversial,62 and it cannot adequately depict either the consequences of sanctions or the purposes of sanctions. The aggregate data do not allow for sufficient attribution to specific sanctions. Life expectancy and the measurement of education respond to influences only with a time lag. Data poverty and considerable forecasting uncertainties cast additional doubt on its suitability for decisions under limited knowledge.

7 Conclusion

As a result, quantifications can be useful and necessary as input variables for legal argumentation and discussion. However, they are fundamentally no substitute for legal argumentation techniques. An ordinal structuring of commensurability, as proposed by Alexy with his weight formula, is conceivable.63 It can serve to illustrate legal argumentation, but it does not permit any independent conclusions. The limited receptiveness of the law to formalizing quantifications can at least also be explained by the fact that legal argumentation, as described, already contains its own purpose-oriented formalization.64 The law assumes incommensurability and a lack of data. The (economic) formalization, at least where it works with cardinal numbers, tends by contrast to assume commensurability and sufficient data, which is one reason why the two can be combined only to a limited extent. Consequently, legal considerations of commensurability cannot be further formalised by quantifications. Quantifications that flow into legal argumentation as source material must be legally evaluated within it and thus adapted to the differing formalization of the law.

61  Retrieved from: http://hdr.undp.org/en/content/humandevelopmentindexhdi. (31.1.2016).
62  Review in Kovacevic, M. (2011). Review of HDI Critiques and Potential Improvements. Retrieved from http://www.hdr.undp.org/sites/default/files/hdrp_2010_33.pdf (23.11.2016); see also Wolff, H., Chong, H., & Auffhammer, M. (2011). Classification, Detection and Consequences of Data Error: Evidence from the Human Development Index. The Economic Journal, 121, p. 843.
63  Alexy, R. (2003). Die Gewichtsformel. In J. Jickeli (Ed.), Gedächtnisschrift für Jürgen Sonnenschein. Berlin: De Gruyter, pp. 771 et seq.
64  In this vein Führ, M. (2001). Ökonomische Effizienz und juristische Rationalität: Ein Beitrag zu den Grundlagen interdisziplinärer Verständigung. In E. Gawel (Ed.), Effizienz im Umweltrecht. Baden-Baden: Nomos, p. 157, p. 201, who speaks of independent tests of rationality by which both sciences instrumentally mediate between ends and means.


Science, Numbers and Power: Contemporary Politics Between the Imperative of Rationalization and the Dependence on Numbers

Markus J. Prutsch

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

M. J. Prutsch (*) European Parliament, Bruxelles, Belgium Heidelberg University, Heidelberg, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_9



1 Introduction1

[...] when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.2 (William Thomson, Baron Kelvin, 1891)

This quote by William Thomson, Baron Kelvin, from a speech delivered in 1883 is still widely used and can be cited, in debates surrounding the relationship between qualitative and quantitative approaches in science and their respective value, as an unequivocal plea for the primacy of the latter. Furthermore, if one follows Kelvin's argumentation, quantifiability is to be equated with scientificity. Indeed, a clear tendency towards quantitative methods can be observed in many of today's scientific disciplines, including the social sciences and not least political science.

1  The attempt to reflect on the role of "measuring" and "counting" as well as the potential of interdisciplinarity from the perspective of a specific scientific discipline—in this case, political science—is challenging. Thus, the very idea of the existence of "pure subjects" proves to be problematic, especially since education and research in many academic disciplines, especially the humanities and social sciences, are nowadays deliberately interdisciplinary or multidisciplinary. In addition, there is always a subjective component to the view, which is fed by personal preferences and experiences of the individual with "his"/"her" discipline, and inevitably results in a perspectival view. Moreover, many scientists are anchored in several disciplines, which makes the boundaries between them blurred, so to speak. This also applies to me: my training in both history and political science and later research activities in both disciplines make it difficult to clearly align myself with one or the other. A further increase in complexity arises if the professional activity is not—or not exclusively—located in an academic environment. An activity in the context of practical politics (consulting), for example, as in my specific case, further blurs disciplinary (self-)understandings and boundaries, in that experiences from the "non-scientific" profession in the narrower sense inevitably also have an influence on one's scientific position, the determination of desiderata of "one's" discipline(s), etc. However, a virtue may also arise from this double constraint, insofar as a lack of disciplinary and professional "unambiguity" is not merely perceived as a limitation, but also as a potential opportunity.
2  Thomson, W. (1891). Electrical units of measurement. In Ders. (ed.), Popular Lectures and Addresses. Vol. 1: Constitution of Matter (2nd ed.) (pp. 80–143), London: Macmillan, here pp. 80 f.



It is significant, however, that a "trend towards numbers" can be observed not only for the "science of politics", but also for the object of its interest—politics (policy-making) itself.3 This points to the fact that the strong presence of numbers is by no means limited to the field of science, but represents a possibly general phenomenon of our time that goes beyond it. In this context, it can be assumed that the parallel development towards more quantification in political science and in politics is not, or at least not exclusively, of a coincidental nature, but rather expresses fundamental ideas about what any scientific and political activity today must be based on in equal measure: "empirical knowledge". However, can one go so far as to speak of a "dependence on numbers" in contemporary politics and political science? And do qualitative approaches necessarily appear subordinate to quantitative ones, be it in terms of practical significance, or in terms of ascribed "added value"? Ongoing discussions on this matter aside, even Kelvin gives a much less explicit answer than the above quotation suggests. Indeed, his plea for quantification can serve as a classic example of a quotation taken out of context, since Kelvin—which is almost always omitted in later reproductions—explicitly refers only to physics (dubbed physical science by Kelvin), without necessarily formulating a claim to absoluteness for all sciences, let alone for other spheres of life. Against this backdrop, the following article will:
1. examine the concrete role of "quantification" in today's politics (political science) (Counting, Measuring and Interpreting in Politics (Political Science)),
2. discuss the challenges of a highly number-based politics (political science) (Promise and Temptation of "Scientific Politics"), and
3. present the potential contribution of the Heidelberg WIN-Kolleg research project "Science, Numbers and Power" to the determination and future shaping of the relationship between (quantifying) science and politics (The Contribution of the Research Project "Science, Numbers and Power").

3  The growing role of numbers in politics is also reflected in the relevant specialist literature. The mathematician Claus Peter Ortlieb, for example, is one of several authors in the German-speaking world who deal with this topic.



2 Counting, Measuring and Interpreting in Politics (Political Science)

Political science, in the narrower sense of an independent branch of science, is a relatively recent phenomenon, despite the fact that an intensive—also scientific—preoccupation with politics in all its facets can be identified since antiquity. In the German-speaking world, political science, also known as "Politische Wissenschaft" or "Politologie", established itself as a discipline mainly after the Second World War, under significant US-American influence. As part of the modern social sciences, it deals with the study of political structures (polity), processes (politics) and contents (policies), as well as with the political dimensions and expressions of human coexistence in general. Political science is considered a classical "integration science", which builds on various disciplines and, accordingly, is transdisciplinary. There is a particularly close relationship to sociology and jurisprudence, but also to psychology and history. Although there is no conceptual agreement, the discipline can be divided into three central areas: a) political theory; b) comparative political science, which is devoted to the comparative analysis of political systems; and c) international relations and international politics. These are joined by a number of more specific sub-disciplines, such as political economy or policy analysis.4
The sub-disciplines of political science are characterised by different emphases in terms of methodological approach and central epistemological interest. Nevertheless, three overarching central "schools" are essentially distinguished in the scholarly discussion and regarded as "classical paradigms":5

• normative-ontological political science
• critical-dialectical political science
• empirical-analytical political science

4 For further information on the structure of the subject and its subject areas, please refer to the relevant literature, including, for example: Pelinka, A. & Varwick, J. (2010). Grundzüge der Politikwissenschaft (2nd ed.). Vienna, Cologne and Weimar: Böhlau; Lauth, H.-J. & Wagner, C. (2016). Politikwissenschaft: Eine Einführung (8th ed.). Paderborn: Schöningh (UTB).
5 Alemann, U. von. (1994). Grundlagen der Politikwissenschaft. Opladen: Leske + Budrich, p. 124. At this point, we will not delve into the extent to which this division may be outdated today—especially in view of the loss of importance of the critical-dialectical school oriented towards a Marxian social analysis, which can be ascertained above all since 1990—as this is emphasized by various sides.


The normative-ontological approach is considered to be the oldest concept in the scientific study of politics and is based on the fundamental premise that political action is, and indeed should always be, connected with questions of "right and good", i.e. normative questions of value. Based on the fact that (political) science is not able to evade "the question of the ultimate goals and norms of politics [...], of the good order",6 the existence of a more or less clearly definable truth or morality is assumed. Politics is thus guided—or must be guided—by an ideal state, be it Plato's perfected polis or liberal democracy. In light of this, and since man is understood as part of a comprehensive order of being, political science understood in this way shows great proximity to political philosophy. Its primary task is the elaboration of ethical criteria for the assessment and shaping of politics.7
With recognizable links to the normative-ontological approach, the critical-dialectical school declares the further development of society to be a scientific task, based, however, on the idealistic concept of knowledge of Hegel and on Marxian philosophy. This is to take place by means of a "critique that turns into practice",8 whereby an absolute truth or morality can only be achieved at the last stage of historical-social development, which at the same time implies the abolition of political and social domination.
Common to the normative-ontological and critical-dialectical schools is the focus on qualitative methods. In contrast, quantitative methods predominate in the empirical-analytical school, and the element of "interpretation" takes a back seat to "measurement" and "understanding". Over the second half of the twentieth century, this school grew in importance, and today it embodies the undoubtedly most formative direction in modern political science, even if normative approaches have experienced a certain renaissance since the 1980s. Characteristic of empirical-analytical approaches, despite differences in detail, is the effort to grasp life—in the case of political science,

6 Oberndörfer, D. (1966). Wissenschaftliche Politik (2nd ed.). Freiburg im Breisgau: Rombach, p. 21.
7 The assessment of the political reality is based on how large the "gap" is to the respective ideal state. From this, alternative courses of action can be formulated to close—or reduce as far as possible—the existing "gap" to that ideal state.
8 Naßmacher, H. (1998). Politikwissenschaft (3rd ed.). München und Wien: R. Oldenbourg, p. 465.


namely political—"reality"9 as objectively as possible and, accordingly, the formation or application of theories and methods that are as free of value judgements as possible, whatever these may be in concrete terms. Based on a positivist understanding of science and in close analogy to the modern natural sciences, operationalizable and verifiable explanatory approaches, as well as logically sound statements that are in accordance with the objects of reference, are the focus of research interest. This is done with the declared aim of minimizing the normative dimensions of, for example, political problem-solving approaches as far as possible. In this context, scientific language that is as precise as possible and the measurement of the empirical content of a scientific theory by its verifiability and falsifiability play central roles. The defining understanding of political science as an empirical (social) science corresponds to a methodological focus on questioning, observation, experimentation10 and (quantitative) content analysis, as well as a general attempt to present and justify empirical facts and "patterns",11 but also possible conclusions from the empirical findings, primarily numerically. Last but not least, the means of statistics are used for this purpose.12
9 In accordance with the broadness of the concept of politics, "political reality" can also be understood comprehensively. Consequently, it includes all those actors, organisations and institutions (in the broadest sense of the term) that shape and/or represent "politics", whether actively or passively.
10 In the case of experimental political science, the focus is particularly on the analysis of decision-making and action situations, for example in relation to voting behaviour. The transitions to (social) psychology in particular are fluid. A classic example of an experiment that is also relevant to political science is the Milgram experiment, which was first conducted in 1961.
11 "Patterns" in the political science sense implies above all "patterns of action": patterns of behaviour and routines that can be recognised as regularities by means of empirical observation, recorded as social patterns and generalised by induction or deduction. In a consolidated form—namely as "structures"—patterns of action are central to the stability of a social order.
12 Examples of further literature on (empirical) methods in political science are: Schlichte, K. (2005). Einführung in die Arbeitstechniken der Politikwissenschaft. Wiesbaden: VS Verlag für Sozialwiss.; Behnke, J., Baur, N. & Behnke, N. (2006). Empirische Methoden der Politikwissenschaft. Paderborn: Schöningh (UTB); Landman, T. (2007). Gegenstand und Methoden der Vergleichenden Politikwissenschaft. Eine Einführung. Wiesbaden: VS Verlag für Sozialwiss. On empirical social research in general, cf. among others Diekmann, A. (2007). Empirische Sozialforschung. Grundlagen, Methoden, Anwendungen (13th ed.). Hamburg: Rowohlt.


As previously mentioned, a turn towards empiricism and numbers in particular can not only be observed in political science,13 but also in politics itself. The triumphant advance of social rationalisation, already emphasised by Max Weber,14 has not stopped at politics and policy-making in recent decades, manifesting itself in the progressive "scientification" of political operations, as is evident in the increasing importance of scientific policy advice. This scientification finds its emblematic expression in a steady increase in numbers and their role in politics. Detailed number-based analyses and impact assessments, with the aim of determining the possible effects of a particular decision or legislative project as precisely as possible and expressing them in measurable form, are now part of everyday political business. At the same time, the effort to quantify is not limited to effects alone, but also encompasses the goals and ambitions of certain policies, which are to be made empirically measurable. This is done primarily in the form of fixed quotas, rates of increase or benchmarks, the achievement or non-achievement of which can serve as a yardstick for political success or failure. A striking example at the European level in this respect are the EU convergence criteria to which the member states committed themselves in the Maastricht Treaty of 1992, and which are composed of fiscal and monetary target values. The growing role of figures and quantitative target indicators at both European and national level is by no means limited to the area of economic and financial policy, but is a phenomenon of many, if not all, policy fields, including environmental and education policy. Concrete examples include defining the share of renewable energy sources to be achieved at a certain point in time in environmental policy or fixing the share of university graduates of a certain age cohort in education policy. Thus, a parallel increase in the importance of numbers and quantifying methods in (political) science and politics can be observed. These tendencies—which can be summarized under the term "rationalization imperative"—have repercussions on the respective "communication practice". Thus, in view of the central importance of the empirical-analytical school, it is hardly surprising that in today's political science, above all that which can be justified statistically or by other empirical evidence is regarded as a "scientific result". Accordingly, a majority of specialist publications are determined by numbers and data. The communication practice of today's politics reflects the discernible "trend towards numbers" in such a way that political decisions—at least in Western
13 As a result of its stronger empirical-analytical orientation, the analysis of concrete policy fields (policies) also gained in importance in political science.
14 Cf. Weber, M. (2002). Wissenschaft als Beruf. In Ders. (ed.), Gesammelte Schriften 1894–1922. Ausgewählt und hrsg. von Dirk Kaesler. Stuttgart: Kröner, pp. 474–511, here p. 488.


democratic systems—are (have to be) legitimized, or at least accompanied, by rational, mostly empirical and often quantitative-statistical arguments. This is partly due to the growing complexity and diversification of modern politics, which almost inevitably leads decision-makers to resort to scientific expertise, but especially to the widespread insight that decisions are not primarily to be made out of convictions and "according to the best of one's conscience", but rather "according to the best of one's knowledge" and on the basis of rational-scientific knowledge. Conversely, it is correspondingly more difficult to justify policies that are based solely on normative considerations, be it "conviction" or "ideology". Numbers and statistics in particular are the most important form of expression and communication of a fact- and evidence-based policy,15 and it is not presumptuous to describe "measurability" and "quantifiability" as a conditio sine qua non of contemporary policy-making. The potential of a policy oriented towards numbers and quantitative indicators is obvious: it promises objectivity and verifiability and meets public demands for transparency and traceability of political action. At the same time, however, significant challenges arise from the rationalization imperative of politics and political science.

3 Promise and Temptation of "Scientific Politics"

As convincing as the arguments for highly quantifying politics and political science may be, it is also important to recognize limitations and potential dangers. The most obvious risk is that the focus on "numbers" and "hard facts"16 goes hand in hand with a structural neglect of qualitative aspects that are not—or at least not directly—measurable. In addition, rationalization and quantification can promote a technocratic understanding of politics that threatens to lose sight not only of value issues but also of the individual citizen and his or her needs, in favor of abstract numbers and complex measurement categories. For precisely this reason—as well as because of the specialized knowledge often required to deal with this abstraction and complexity—rationalization and quantification are also likely to increase aversion to politics and abstinence on the part of the general public. If political science can at least console itself with the fact that science, regardless of
15 Especially in English, the buzzwords evidence-based and fact-based policy-making are often used in the context of discussions about the appropriate form of modern policy.
16 With regard to modern societies, Claus Peter Ortlieb speaks of nothing less than a "number fetish": Ders. (2006). Die Zahlen als Medium und Fetisch. In J. Schröter, G. Schwering & U. Stäheli (Eds.), Media Marx. Ein Handbuch. Bielefeld: Transcript, pp. 153–167.


whether it is normatively or empirically oriented, has always been and still is the endeavor of a relatively small minority, the alienation from the (electoral) people caused by "scientification" is an existential problem for politics and its legitimacy.17 This alienation is not only due to the immediate "complication" of politics caused by more numbers, statistics and complex scientific evidence. It is also because politics and science follow different functional logics that cannot be easily reconciled. Whereas politics, by its very nature, focuses on increasing and securing the legitimacy of rule, the focus of science is on increasing and securing systematic knowledge and gaining knowledge.18 This can be sharpened to the opposition "power versus truth"—an opposition that can undoubtedly be relativized, but ultimately not completely abolished. The close interaction between politics and science, as desirable as it may be, proves to be problematic against this backdrop, and calls for a balance conducive to the well-being of both subsystems that takes their respective specifics into account. Just as politics should not be pursued in isolation from scientific knowledge, science cannot and must not take the place of political debate and social discourse. "Factual legalism"—in the sense of scientific knowledge—should help to prepare and underpin (value-based) decisions, but not replace them, as this would ultimately degrade politics to the sole executive arm of (supposed) scientific-technocratic rationality.19 At the same time, science itself is well advised to avoid any suspicion of instrumentalization by politics. Even today, science, and especially scientific policy advice, has to cope with the accusation—whether justified or not—that it is too easily and uncritically placed in the service of politics and that it also places itself in the service of politics. In this context, "scientific expertise" has become a buzzword of visibly ambivalent character, since it is increasingly regarded as uncertain and arbitrary, or at least interchangeable. The widespread perception is that policy-makers do not necessarily seek the study or statistic that is of the highest quality from a scientific point of view, but
17 It is a paradox of "scientific politics" that, on the one hand, numbers are supposed to increase the accessibility of and thus also the trust in politics—which is partly successful—but, on the other hand, more quantification can also promote the distance between politicians and citizens.
18 This assumes the perspective of the respective "professional actors", i.e. politicians on the one hand, scientists on the other. However, the meaning and goal of politics and science can also be determined differently, for example, when it comes to the public perception of these two spheres.
19 On such a "technocratic model" cf. for instance Schelsky, H. (1961). Der Mensch in der wissenschaftlichen Zivilisation. Cologne: Westdeutscher Verlag.


often the one that is most politically acceptable. If an expert opinion does not correspond to the ideas and expectations of the political decision-makers, a counter-expertise is sought, and usually found. The impression of a politicization of science not only diminishes the reputation of scientific policy advice, but also undermines the credibility of science per se. This applies to empirical-analytical (political) science in particular, which, in conscious differentiation from normative-ontological approaches, rejects values as a guiding factor for action, but is only able to uphold its own claim to unconditional neutrality in practice to a limited extent. At the same time, the impression of a merely tactically motivated use or even abuse of "(pseudo-)scientificity" is likely to damage trust in politics. Significantly, it appears that the increase in (actual and perceived) scientific rationality in politics contributes to strengthening and making "irrationality" in the political process acceptable rather than eliminating it. Such irrationality finds expression, for example, in the susceptibility of many voters to populist rhetoric, which is given preference over empirical arguments, regardless of how well grounded the latter may be. This is particularly true in the case of the use of direct democratic instruments such as referendums, in which factual logic increasingly loses out to scaremongering or the desire for mere reckoning with the "establishment", which is accused of arrogance and neglect of the needs of the people. In the process, science—sometimes together with the media—is often seen as at least the silent accomplice of a political caste that does not deserve trust.20 In view of this, maintaining the highest standards in the relationship between science and politics ultimately represents an all the more important challenge for both. Political science, as the discipline closest to politics in terms of content, can play a central bridging role here. Since its beginnings, political science has not only been dedicated to the analysis of politics, but has also performed a potentially competent advisory and steering function for politics. In this context, it would do well to recall its normative-ontological roots more strongly, without immediately renouncing empiricism. More normativity might not only enrich the culture of discussion in the discipline itself, for which the claim to "objectivity" as the sole leitmotif in the dominant empirical-analytical school (a claim that in any case cannot be consistently upheld) tends to be an element that impedes discourse. The rediscovery
20 Such a pattern can be exemplified by the recent Brexit referendum, in which, despite the multitude of expert voices and academic studies that had warned in advance with detailed analyses of the negative consequences—especially economic ones—of a United Kingdom leaving the European Union, these ultimately lost out to a populist campaign warning of excessive alienation and a supposed loss of sovereignty and control.


of normative-ontological approaches could also help to open up the discipline to the broader public by directing the gaze also to fundamental issues that touch on values and do not necessarily presuppose scholarly interest and specialized knowledge, such as the role of citizenship in modern society. At the same time, it seems indispensable to put the role of numbers in politics and the related opportunities and limits up for discussion not exclusively at the level of political science, but as comprehensively as possible. In this context, it is important to involve not only scholars from different disciplines, but also actors from politics and society, and to take their opinions, as well as their practical experiences, into account. Only a holistic approach is capable of providing useful recommendations for a generally beneficial design of the alliance between politics and (quantifying) science. This is a central starting point for the research project "Science, Numbers and Power".

4 The Contribution of the Research Project "Science, Numbers and Power"

Against the backdrop of the above, the research project "Science, Numbers and Power" is interdisciplinary and holistic. Beyond a specific political science discourse, synergies between different relevant scientific disciplines are to be identified and, at the same time, the exchange between representatives of science and "political practice" is to be actively promoted. Accordingly, the international project team includes historians, political scientists, economists, philosophers and educationalists, all of whom represent different professional backgrounds. In addition to researchers from universities and non-university institutions, there are also staff from the scientific services of national and international political institutions, who are particularly active at the interface between science and politics and represent a link between scientific and practical experience. In this way, they also serve as a transmission belt that allows research results to be used directly in political practice. While this allows the existing diversity of perspectives from science and politics on the object of study to be reflected, which also finds its counterpart in different methodological approaches, the challenge is at the same time to enable structured joint work and to arrive at coherent results. The sheer multiplicity and disciplinary heterogeneity of the contributors brings with it a multitude of specialist languages and traditions, definitions of terms and specific research approaches, from which evident "communication problems" can arise. This is countered both by organisational measures—specifically the establishment of an organisational team whose


task is the central coordination of all individual research projects—and, in particular, by measures relating to the structure of the content. This includes, on the one hand, the definition of an overarching epistemological interest, namely: to locate the current relationship between politics and science historically and to subject it to a critical analysis, with particular regard to the "quantification" of politics. This results in several concrete tasks for the project:
• Examination of the historical development of the relationship between (quantifying) science and politics;
• Determination of the role of science, and of quantification in particular, in contemporary politics;
• Elaboration of the chances and problems of a "scientified" political operation.
On the other hand, the coherence of the overall research project is ensured by grouping the individual contributions into three thematic sections, which will be briefly presented below:
I. Historical genesis of the relationship between science and politics
II. Science and contemporary politics
III. Case study—European education policy

4.1 Section I: Historical Genesis of the Relationship Between Science and Politics

The nineteenth century in particular—significantly promoted by the legacy of the Enlightenment and the "Great Revolutions" of the eighteenth century—brought about numerous rationalisation processes, which found expression, among other things, in the founding of statistical offices and specialist societies. At the same time, statistical methods became more sophisticated, making it possible to conduct numerous surveys of the state and the population. In the twentieth century, the triumph of rationalization continued and spread to wider areas of society, including the economy, as well as health and education. In the more recent past, the differentiation of number-based methods gained additional momentum due to the technical possibility of mass data storage (big data). This section examines why number-based methods became a fixture in modern politics from a historical perspective and how the relationship between quantitative and qualitative analyses of the social world evolved over time. A particular focus is


placed on the social, political and economic factors that contributed to the rise of numbers in politics and society, and the impact of quantifying analyses on the understanding of “truth”, “objectivity” and “fairness”.

4.2 Section II: Science and Contemporary Politics

Science and especially numbers represent everyday and quasi-natural companions of contemporary politics. However, the concrete role and significance of scientific results or statistics and figures can vary greatly depending on the respective policy field and concrete application, and assessments of whether the "scientification" of politics, which is primarily characterized by countability and measurability, is more a blessing or a curse, are no less varied. This section evaluates the fundamental advantages and disadvantages of closely linking policy and science. In particular, the importance of quantitative indicators in politics will be systematically analyzed and it will be clarified how such indicators are determined and what the long-term consequences of making number-based decisions are. In this context, we will also assess which official and unofficial sources of knowledge are available to policymakers and what role intermediaries play in translating scientific findings into the language of policymaking.

4.3 Section III: Case Study—European Education Policy

In recent decades, educational issues have increasingly been discussed on a European and international level. At the same time, the growing role of "quantification" can also be observed in education policy debates. The increasing importance of numbers in education policy is not least due to the correlation of education/research on the one hand and general socio-economic performance on the other, which is usually expressed with the catchword "knowledge society". This section examines the significance of scientific "rationality" and quantification using the case study of supranational—specifically European—education policies. Not only will this allow for a more in-depth consideration of the relationship between "numbers" and "policy" in a particular policy area, but it will also shed light on the feedback effects of the specifics of the European political space. This is characterised by complex decision-making structures and processes as well as by the lack of a common language and the existence of pronounced cultural differences. In addition, education policy is a core competence of the nation-states and


the EU only has a coordinating function, which means that joint decisions at the European level tend to be difficult and often controversial. In view of these complexity-increasing factors, it will be examined to what extent there is a special tendency towards quantitative indicators and figures at the European political level.
In the case of all three sections, the approach to the topic is deliberately based on open questions, which means that cooperation in the project is not based on any disciplinary or methodological coherence, but rather on a common research interest. This approach should make it possible to make use of different horizons of experience and to illuminate the relationship between (quantifying) science and politics from different perspectives without losing the necessary focus. Furthermore, this focus is sought by operationalizing the general interest in knowledge by means of the concept of working numbers—emphasizing the active-process character of the creation and use of numbers in politics as well as the dynamic relationship between (quantifying) science and politics. This is done concretely by analyzing the following three, closely interrelated dimensions of that relationship:
(a) Production: How, on what basis and by whom are figures and statistics generated?
(b) Transfer: In what way and by whom are they introduced into the political discourse and made usable for it?
(c) Application: How and to what end are numbers and statistics used in political discourse?21
(a) Production: (numerical) material may or may not have been generated for immediate or later political use. An analysis of the conditions under which it was produced can provide information about this and is likely to reveal underlying (vested) interests and existing dependencies. In this context, it is important to shed light on the data basis and methods used to "create" figures, as well as the underlying premises—including the determining scientific and cultural self-image. Furthermore, a central role in revealing contextual conditions, dependencies and (particular) interests in the production of figures is played by the location of the relevant actors, biographically and institutionally.
(b) Transfer: No less important than the questions concerning the production preconditions of numbers are those about how these numbers enter the political sphere. Who are the driving forces, and what channels of transfer are used? Do certain groups—for example, policy advisors—perform a hinge and filter function
21 It should be noted that not every research contribution necessarily addresses all three of these dimensions equally, but rather focuses on them depending on the specific topic and the technical-methodological approach.


between the actual producers of figures and politicians? What communication strategy and "language" is used to "convey" figures? And to what extent is this "communication" accompanied by a discussion not only of opportunities, but also of risks, such as the possibility of inaccuracies and miscalculations?
(c) Application: Finally, the "how" of the argumentative use of numbers in political discourse is of central interest. What is the concrete use and effect of numbers by and on certain social or political persons and groups? Is the "correctness" of figures a criterion worth mentioning, and if so, to what extent? Are numbers and empirical "evidence" in general capable of actively influencing the political agenda because an intrinsic value or even a direct steering function is attributed to them, or are they rather viewed and used instrumentally, for instance for the mere glossing over and/or ex-post legitimization of already ideologically predetermined positions? And what are the effects of rationalised, number-based politics on citizens and their understanding of politics?
Taken together, these three dimensions—with the actors involved as the central "cross-cutting element"—are intended to ensure that the relationship between "science, numbers and power" is examined in a way that is both problem-oriented and as coherent as possible. But what, ultimately, are the expectations associated with the research project? The aim is to contribute to examining this complex constellation and, beyond the academic sphere, to generate directly usable benefits for "practitioners" of the political business by motivating them to critically (re)reflect on their dealings with numbers and the role of science in politics in general. It remains to be said that numbers can function as a powerful instrument of rationalization, but also of manipulation. They are characterized by an only supposed objectivity, especially since they—like science itself—always contain a pronounced "value" element.

Measuring and Understanding Financial Risks: An Econometric Perspective

Roxana Halbleib

Let neither measurement without theory nor theory without measurement dominate your mind but rather contemplate a two-way interaction between the two which will stimulate your thought processes to attain syntheses beyond a rational expectation! (Arnold Zellner, 1996)1

1 Measurement and Interpretation in Econometrics

The name econometrics is derived from the ancient Greek word combination oikonomia = economy and metron = measure, measurement. Econometrics can, therefore, be described as the measurement of economic activity. Formally, econometrics is a social science that uses economic theory and mathematical methods as well as statistical inference to quantitatively analyze economic processes and phenomena and to empirically test economic theories.
The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.
1 Zellner, A. (1996). Past, present and future of econometrics. Journal of Statistical Planning and Inference, 49, pp. 3–8.

R. Halbleib (*)
Institute of Economics, Faculty of Economics and Behavioral Sciences, University of Freiburg, Freiburg, Germany
e-mail: [email protected]
© Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024
M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_10


In the competition among different economic theoretical hypotheses, econometrics confronts these hypotheses with the reality provided by the data, so that the usefulness of an economic theoretical statement becomes tangible only if it can describe reality at least approximately. An accurate description of reality by an econometric model requires an accurate measurement of the relevant variables and their relationships. In contrast to laboratory data in the natural sciences, the data in economics are mostly not experimental, i.e. the observable explanatory and dependent variables result from the economic process itself, e.g. from the interactions between the economic agents. Consequently, disentangling variables and their causal effects is by no means trivial. Therefore, it is absolutely necessary to perform and interpret econometric estimations only on the basis of well-founded economic theories.
The interdependence of economic variables means that measuring pure statistical correlations can generally provide only limited insight into causal relationships. Statistical correlations describe relationships between two variables. However, one cannot prove causality by measuring a correlation. A correlation can be the result of causality between variables A and B, or it can be purely coincidental, such as the correlation between skirt length in women's fashion and the Dow Jones stock index in the 1960s and 1970s. Even when there is causality behind a correlation, it can be unclear in which direction the causality runs. For example, a negative correlation has been estimated between body temperature and the number of lice on a person's head: the higher the temperature, the fewer the lice, and vice versa.2 In certain regions, this led to the use of lice to lower body temperature and, thus, to the hope of achieving a healing effect. The interdependence of economic variables is ultimately the reason why econometrics, in comparison to statistics, is much more concerned with the question of how causal relationships between variables can be identified and quantified. A common error of interpreting a correlation as causality occurs when the correlation between two variables is caused by a third one: for example, a man's income and the density of hair on his head are negatively correlated; however, the reason behind this correlation is a third variable, age: with age, income increases, but the amount of hair on the head decreases.3
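To make the confounding mechanism behind the income and hair example concrete, the following minimal sketch (in Python, with invented numbers chosen purely for illustration) simulates age as a common cause and shows that the negative correlation between income and hair density disappears once age is controlled for.

```python
# Minimal sketch (invented numbers): age is a common cause of income and hair
# density, producing a negative correlation between the two that is not causal.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.uniform(20, 70, size=n)
income = 1_000 * age + rng.normal(0, 10_000, size=n)   # income rises with age
hair = 100 - 1.2 * age + rng.normal(0, 10, size=n)     # hair density falls with age

print("corr(income, hair):", np.corrcoef(income, hair)[0, 1])  # clearly negative

# "Controlling" for the confounder: correlate the residuals after removing the
# linear effect of age from both variables.
income_resid = income - np.poly1d(np.polyfit(age, income, 1))(age)
hair_resid = hair - np.poly1d(np.polyfit(age, hair, 1))(age)
print("partial corr given age:", np.corrcoef(income_resid, hair_resid)[0, 1])  # close to zero
```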

2 Krämer, W. (2011). Statistik verstehen—Eine Gebrauchsanweisung. München: Piper Verlag.
3 Krämer, W. (2012). Angst der Woche—Warum wir uns vor den falschen Dingen fürchten. München: Piper Verlag.


2 Statistical Significance: The Trojan Horse

Causal structures are most often represented in econometrics by regression models: a dependent economic variable Y is caused by one or more independent variables X. An important part of the regression is given by the regression parameters, which describe the effect of variable X on variable Y under the assumption that all other variables remain constant (the ceteris paribus principle). The strength of causality is measured by the size of the regression parameters estimated from the observations (data). To test whether actual causality exists between the variable X and the variable Y, one tests the (statistical) significance of the estimated model parameters. If a parameter proves to be statistically insignificant, it is often said that there is no causality between the corresponding variable X and the dependent variable Y. This is a problematic statement, however. The correct statement in this case would be: if the parameter is not statistically significant, then based on the data (information) at hand, one cannot prove causality between X and Y. In other words, the data do not provide sufficient information to reject the null hypothesis of no causality between X and Y.
An important concept in econometric analysis is the p-value, which is used to show the statistical significance of results. The p-value is defined as the probability that a result equal to or more extreme than the one actually observed will occur, provided that the null hypothesis of the test is true. In order to evaluate the statistical significance of a result, limits are set for the p-value (the significance levels). The usual significance levels used in economics are set at 1%, 5% and 10%. Any result with a p-value smaller than 1% (5% or 10%) is considered statistically significant at the given significance level. Unfortunately, p-values are very often misapplied and misinterpreted. For example, the p-value is often misinterpreted as the probability that the null hypothesis is true. However, the null hypothesis is not a random variable and has no probability; it can only be true or not true. Furthermore, it is not correct to compare p-values computed from samples of different sizes. For large samples, p-values tend to indicate evidence against the null hypothesis, which means that statistical significance can virtually always be achieved simply by increasing the size of the sample.4 The misuse and misinterpretation of p-values has led the American Statistical Association (ASA) to publish a set of rules for the proper use of the p-value in research, research funding, professional careers, science education, policy, journalism, and law.5
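The sample-size effect just described can be illustrated with a short simulation. The sketch below (Python, hypothetical numbers) fits a simple regression of Y on X with a fixed, economically negligible slope and shows the p-value shrinking as the sample grows.

```python
# Minimal sketch (hypothetical numbers): a tiny, fixed effect of X on Y becomes
# "statistically significant" once the sample is large enough.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
true_slope = 0.02  # economically negligible effect

for n in (100, 1_000, 100_000):
    x = rng.normal(size=n)
    y = true_slope * x + rng.normal(size=n)   # Y = 0.02 * X + noise
    fit = linregress(x, y)
    print(f"n = {n:>7}: estimated slope = {fit.slope:+.4f}, p-value = {fit.pvalue:.4f}")

# Typical outcome: the estimated slope stays close to 0.02 throughout, while the
# p-value drops below the usual 1%/5%/10% thresholds only for the largest sample.
```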

4 Johnstone, D. J. (1986). Tests of significance in theory and practice. The Statistician, 35(5), pp. 491–504.


The ASA recommends that:
1. p-values should only be used as an indication of incompatibility between the underlying data and the proposed models (tested under the null hypothesis);
2. p-values should be interpreted as a statement about the association of the data with the null hypothesis and not as a statement about the null hypothesis itself;
3. scientific conclusions and economic or policy decisions should not be made solely on the basis of whether or not a p-value exceeds a certain threshold, but also on the basis of other factors, such as further evidence for the phenomenon under study or the quality of the measurements;
4. researchers should disclose the full information of their studies, such as the number of hypotheses investigated and all calculated p-values;
5. one should also consider the size of the causal effect in addition to the p-values, which should help to avoid in practice the approach of increasing the significance of a result by increasing the size of the sample;
6. researchers should have sound knowledge of the limitations of the p-value.
The ASA's recommendations also represent a reaction to the increasingly incorrect use of the p-value in the publication culture of scientific results. Statistical significance plays a central role in the publication culture of empirical economic research, because only statistically significant causality is accepted as a scientific result. However, it should not be forgotten that statements about statistical significance and the corresponding causality are based on the randomness of the sample at hand. Unlike in the natural sciences, in economics one cannot replicate the sample of observations. This is especially true in macroeconomics and finance, where the underlying data are time-series observations. A time series is a time-dependent sequence of data points and is unique. This limitation has a clear implication for the informativeness of the sample at hand. As mentioned above, the measurement accuracy of model parameters, and hence their statistical significance, can be increased by increasing the number of observations. While this practice is possible in the natural sciences, it is unthinkable in macroeconomic and financial research due to the lack of data, such as when predicting a
5 Wasserstein, R. L. & Lazar, N. A. (2016). The ASA's statement on p-values: context, process, and purpose. The American Statistician, 70(2), pp. 129–133.


new financial crisis based on old/similar crises. As a result of these limitations, one should follow the ASA’s suggestion and look at the magnitude of the numerical values of the estimated parameters in addition to their statistical significance. Just because the estimated parameters of a financial return prediction model are statistically significant does not mean that stock returns are predictable. In most cases, the predicted stock return is so small in magnitude that it can barely cover the transaction costs of trading. Therefore, the statistical significance and the size should be considered together to make a reliable scientific economic statement. As mentioned above, it is the task of econometrics to empirically test economic theoretical models. However, the assumptions of many theoretical economic models (such as the ceteris paribus principle) lead to limited econometric methods, which in turn can lead to poor empirical results (such as the gross national product forecast, which has to be corrected several times). This problem is nowadays addressed by analyzing a large amount of data (big data analysis) in order to find an appropriate procedure. Formally, data mining is the extraction of knowledge that is valid (in a statistical sense), previously unknown and potentially useful for various applications.6 The discovery of new patterns through data mining is performed using methods such as machine learning or database systems. Unlike econometrics, significance testing does not play a crucial role in data mining. In most cases, the patterns discovered through data mining lead to a better fit of economic theories to reality. Although data mining would be a perfect complement for modern econometrics, the (still small) amount of available data limits its usefulness. This is especially true for macroeconomic and financial economics research. In these subjects, a combination between a priori determined theoretical specification and a data-driven model is the only plausible solution.7

3 Measuring and Understanding Financial Risks

Financial risks play a very important role in our modern world as they can trigger huge economic losses. These losses affect the whole economy, but also each individual. The impact of such risks and losses on our world is most evident during financial crises. During the financial crisis of 2007/2008, many financial institutions
6 Ester, M. & Sander, J. (2000). Knowledge discovery in databases. Techniken und Anwendungen. Berlin: Springer.
7 Feelders, A. (2002). Data mining in economic science. In J. Meij (Ed.), Dealing with the data flood: Mining data, text and multimedia (pp. 165–175). The Hague: STT/Beweton.


suffered extreme losses or even insolvency (such as Lehman Brothers): the losses of American and European banks between 2007 and 2010 amounted to $2.6 trillion, which is roughly the size of the gross domestic product of France in 2012. This was the result of a huge and lasting domino effect of extreme losses and risks across the various financial instruments, markets, institutions and even economies. One reason for these losses was found to be the invalidity of the basic assumptions of modern finance theory, above all the so-called efficient market hypothesis. According to the efficient market hypothesis, the current market prices or values of financial instruments contain all the information available in the market. Market participants are fully rational, that is, they act on the basis of the same information, which is available to everyone and already processed into the price. Technically speaking, this means that the best prediction of a future market price is the current market price, or, equivalently, that price changes are independent of each other and their best prediction is zero. Under this theoretical assumption, the market is basically in equilibrium and cannot contain speculative bubbles. Evidence that these assumptions do not hold in reality can be seen in the speculative bubble in house prices in the run-up to the financial crisis of 2007/2008 or in the dotcom bubble of 2000. This suggests that there is more information in real prices than the information available to all and that price changes (returns) are not independent. It can therefore be said that financial crises are the result of a failure of theory to accurately describe reality.
The development of mathematical and statistical techniques that accurately model and predict financial risks is of particular interest to current research. This topic is addressed by financial econometrics, which is a relatively new area of econometrics. Robert Engle and Clive Granger, two econometricians who were jointly awarded the Nobel Prize in Economics in 2003, have made valuable contributions to the field of financial econometrics, particularly in the areas of cointegration and risk management. Financial risks are usually measured by the volatility of financial returns, known in statistics as the standard deviation (the square root of the variance), or by distribution measures such as the quantiles of the distribution of returns. A very well-known risk measure based on quantiles is the Value at Risk (VaR), which is defined as the loss that will be exceeded only with a given (small) probability p. The popularity of VaR arises from its use by the Basel Committee on Banking Supervision for regulatory purposes, which allows capital requirements to be set for banks to cover potential losses on their risky investments. Banks are permitted to use their own methods (called "internal models") to calculate and estimate VaR, provided that these methods prove to guarantee accurate predictions of losses.
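As a simple illustration of the quantile idea behind VaR, the following sketch (Python; the return series is simulated, not real market data) computes a one-day 1% VaR both from the empirical quantile (historical simulation) and under a normal distribution assumption.

```python
# Minimal sketch (simulated data): one-day 1% Value at Risk as a return quantile.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=2_500) * 0.01   # heavy-tailed daily returns

p = 0.01
var_historical = np.quantile(returns, p)                  # empirical 1% quantile
var_normal = returns.mean() + returns.std() * (-2.326)    # normal 1% quantile (z ~ -2.326)

print(f"1% VaR, historical simulation: {var_historical:.3%}")
print(f"1% VaR, normal assumption:     {var_normal:.3%}")
# Both numbers are negative returns (losses). With heavy-tailed data the
# normal-based VaR is typically less extreme, i.e. it understates the risk.
```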


The Basel Committee checks the quality of the internal models by means of so-called backtesting. This procedure consists of counting the daily violations (days on which the realized return falls below the estimated VaR) and testing whether the frequency of violations is consistent with the probability p of the VaR. The probability p was set by the Basel Committee at 1%. Banks for which the number of violations exceeds the expected value p at a given significance level are required to hold more capital or reserves to cover potential losses. At a significance level of 1%, banks with a total of five violations in a year will be sanctioned by holding higher capital reserves that increase with the number of violations. The imposition of sanctions leads to reputational damage, higher capital costs and possibly the usage of more stringent external models to forecast VaR. Banks with more than nine violations will have to use a standardised methodology to compute VaR and to hold 8% of their investments as reserves.8 A maximum of four violations is considered as evidence of the good quality of the internal risk models. A major drawback of this practice is the lack of significance of a backtest due to the low level of 1%. A possible solution to this problem is to increase either the sample sizes for backtesting or the significance level to 5%.9
Through Robert Engle's 1982 paper on the autoregressive conditional heteroskedastic (ARCH) model, for which he was later awarded the Nobel Prize, the measurement of financial risk through volatilities has received much attention from the research community.10 The introduction of the ARCH model and its application in finance had an immense effect on the econometrics and finance community in terms of measuring volatility and, in general, financial risks. Until that time, it was common to view volatilities as constant over time. The optimal prediction of volatility based on this assumption is determined by the value of the empirical standard deviation calculated using historical data. This measure assigns the same weight to observations from the recent past as to observations that lie far in the past. However, this practice is not realistic, as time series of (daily) returns cluster into periods of high and of low volatility. This provides important evidence that the volatility of financial returns is not constant, but changes over time. The ARCH model is the first mathematical attempt to accurately describe this empirical behavior.
8 Basel Committee on Banking Supervision (1996). Supervisory framework for the use of 'backtesting' in conjunction with the internal models approach to market risk capital requirements. Retrieved from http://www.bis.org/publ/bcbs22.pdf.
9 Jorion, P. (2007). Value at risk. New York: McGraw-Hill.
10 Engle, R. F. (1982). Autoregressive conditional heteroskedasticity with estimates of the variance of UK inflation. Econometrica, 50, pp. 987–1008.
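The backtesting rule described above reduces to counting violations and mapping the count to the Basel zones. A minimal sketch (Python, with simulated returns and an assumed constant VaR forecast) could look as follows.

```python
# Minimal sketch (simulated data): Basel-style backtest of a 1% VaR forecast over
# roughly one trading year. VaR is kept negative, following the chapter's convention.
import numpy as np

def basel_zone(violations: int) -> str:
    if violations <= 4:
        return "green zone: internal model accepted"
    if violations <= 9:
        return "yellow zone: higher capital reserves required"
    return "red zone: standardised methodology imposed"

rng = np.random.default_rng(2)
returns = rng.normal(0.0, 0.01, size=250)       # simulated daily returns
var_forecast = np.full(250, -0.023)             # assumed constant 1% VaR forecast

violations = int(np.sum(returns < var_forecast))  # days with a return below the VaR
print(f"{violations} violations -> {basel_zone(violations)}")
```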


In predicting volatility using an ARCH model, a high information content is assigned to the most recent historical observation, while a lower information content is assigned to observations further in the past. To date, there are hundreds of extensions of the ARCH model that capture, besides clustering, other empirical properties of returns and volatilities, such as asymmetric responses of volatility to negative and positive returns (leverage effects), the time persistence of volatility, or the time-dependent correlation between returns. The usefulness of these models is mirrored in the calculation of VaR by means of the so-called location-scale model. The location-scale model requires accurate estimates and predictions of the mean of financial returns, of their volatilities, and of the quantiles of the standardized (by mean and volatility) returns. The best estimate and prediction of the mean of financial returns, especially at low frequencies (such as daily or monthly), is zero or simply the empirical average of historical returns. The quality of the quantile estimates in the location-scale model depends heavily on the distributional assumption made for financial returns, which is an important issue in existing financial risk measurement.
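To make the weighting idea and the location-scale VaR calculation concrete, here is a minimal sketch (Python) of a GARCH(1,1) filter, one of the best-known extensions of the ARCH model; the parameter values are assumptions chosen for illustration, not estimates.

```python
# Minimal sketch (assumed parameters): GARCH(1,1) variance filter and a
# location-scale VaR forecast. In the recursion
#   sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
# recent squared returns receive weight alpha and older ones geometrically
# decaying weights alpha * beta**k, which is the weighting described above.
import numpy as np

def garch11_variance(returns, omega=1e-6, alpha=0.08, beta=0.90):
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(3)
r = rng.normal(0.0, 0.01, size=1_000)            # placeholder return series
sigma = np.sqrt(garch11_variance(r))

# Location-scale 1% VaR forecast for the next day: mean + volatility * quantile
# (a zero mean and a normal quantile are used here only as simple choices).
var_next = 0.0 + sigma[-1] * (-2.326)
print(f"next-day 1% VaR forecast: {var_next:.3%}")
```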

4 Measuring Financial Risks: Limitations and Challenges

A very popular assumption is the normal distribution of returns, which is widespread in both academic research and practice. This assumption has come under heavy criticism due to the occurrence of financial events (such as financial crises) that drive prices, and correspondingly returns, to extreme values. Although very popular in the natural sciences, the normal distribution assumption is not appropriate for most social sciences, especially for finance. The normal distribution is based on the central limit theorem and asymptotically describes the distribution of averages of random variables with finite and positive variance. That is, if each market price change is independent and identically distributed with finite variance, then the market price change over a period (a day or a week), being the sum of many such changes, is approximately normally distributed.11 However, reality contradicts the market efficiency hypothesis and shows that price changes are not independently distributed and that extreme values of returns are real and accompanied by explosive (infinite) variance. The normal distribution assigns practically a zero probability to extreme events. However, such
11 Fama, E. F. (1963). Mandelbrot and the stable paretian hypothesis. The Journal of Business, 36(4), pp. 420–429.


events occur in reality much more often than the distributional assumption predicts. The consequence of such events is that the normal distribution, which is very popular in practice, remains an inappropriate theoretical assumption. As a result, alternatives such as extreme value theory or the alpha-stable distribution assumption should be used to capture extreme financial price changes. Although these are closer to reality, complex mathematical formulations and computationally intensive implementations make them difficult for practitioners to use in financial risk modelling and prediction.
In order to predict financial risks, one needs accurate and realistic mathematical and statistical assumptions as well as data sets with very high information content. These data sets are mostly time series of returns, prices, interest rates or dividends. As stated above, these time series data are not reproducible and are, therefore, very limited in terms of informative value. Moreover, financial time series data are very complex, as they exhibit properties such as autocorrelation, long persistence, heteroskedasticity, clustering, structural breaks, jumps, thick tails and skewness. The multi-layered nature of these data requires complex, highly parameterized econometric specifications that are typically very difficult to apply in practice. A common approach in finance is to use simple, easy-to-understand models based on strong mathematical assumptions (for example, the normal distribution). The disadvantage of this approach is that the models used are not able to capture the empirical complexity of the underlying data and, thus, do not provide reliable results in terms of causal effects and predictions. One approach that deals with the limitations of the underlying data in finance, but also with the complexity of the appropriate mathematical models, is the Monte Carlo simulation. This method consists of repeatedly generating random samples to obtain numerical results for inference or prediction. To perform a Monte Carlo simulation, one must draw random numbers from a fully specified distribution. The assumed distribution should be able to generate data with properties very close to those of the observed data. Because of the complexity of financial time series, it is not always easy to put Monte Carlo simulations into practice. As an alternative to Monte Carlo simulations, one can bootstrap the original (unique) time series data. Bootstrapping creates random samples by resampling from the original time series with replacement. Thus, one does not need any distributional assumptions for this method, only the time series itself. While classical bootstrapping consists in creating repeated samples randomly from the data with replacement, for time series one should repeatedly create samples of entire blocks of data (block bootstrap) to preserve the empirical dynamic properties of the data.
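A minimal sketch of the (circular) block bootstrap mentioned above is given below; the block length and the simulated return series are assumptions made purely for illustration.

```python
# Minimal sketch (simulated data): circular block bootstrap for a return series.
# Whole blocks of consecutive observations are resampled with replacement so that
# short-run dependence is preserved within each block.
import numpy as np

def block_bootstrap(series, block_length=20, n_samples=1_000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(series)
    n_blocks = int(np.ceil(n / block_length))
    samples = []
    for _ in range(n_samples):
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_length)) % n   # wrap around
        samples.append(series[idx.ravel()][:n])
    return np.asarray(samples)

rng = np.random.default_rng(4)
returns = rng.standard_t(df=5, size=1_000) * 0.01

boot = block_bootstrap(returns)
q01 = np.quantile(boot, 0.01, axis=1)   # bootstrap distribution of the 1% quantile
print(f"1% quantile: mean {q01.mean():.4f}, bootstrap std. error {q01.std():.4f}")
```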


A relatively new approach to address the problem of limited information contained in financial data is the use of high-frequency (tick) data. The availability of high-frequency financial data since the early 2000s, due to the advances in storage capacity and computational power of new computer technology, has opened up new possibilities for measuring and predicting financial risks and other financial measures. Although it seems very tempting to use high-frequency data, it is very problematic in practice. In addition to standard errors (for example, recording errors), disturbances due to the discrepancy of market prices from the fundamental values of a stock, caused by the characteristics of the particular market, must be eliminated before use. This discrepancy is known in the literature as market microstructure noise and captures a variety of frictions that arise in the trading process, such as bid-ask differences, discontinuity of price changes, differences in trading volumes, successive changes in price on block trades or strategic components of the order flow. These frictions lead to inaccurate (biased) estimates of the financial risk. This effect is intensified by increasing the frequency of the data. However, increasing the frequency simultaneously leads to an increase in the information content of the prices and, thus, to more precise risk estimates with lower variation. A common approach is, therefore, to find the optimal frequency of sampling the price data that represents an optimal balance ("mean-variance balance") between precision and accuracy. It should be noted that the optimal frequency of the data depends on the underlying financial risk measure. Another way to address this problem is to mathematically correct the risk measures so that they remain reliable in the presence of market frictions. These corrections are based on mathematical assumptions that describe the relationships between observed market prices, fundamental prices, and market microstructure noise. Although this approach is very appealing, in practice the results are highly dependent on the underlying assumptions on the structure of the market microstructure noise and its relationship to the observed prices.
A simple example from the recent financial crisis illustrates the importance of using informative data and accurate mathematical measures to estimate and predict risks. On September 17, 2008, the Dow Jones Industrial Index realized a loss of 7.8% over the previous day as a consequence of the announcement of the Lehman Brothers bankruptcy. A VaR prediction with 1% probability level for this day, based on the location-scale model with time-constant volatility estimated from low-frequency data, gives a value of about 3.3%.12
12 Note that VaR is a loss measure and thus its values have negative signs.


Replacing the assumption of time-constant volatility with time-varying volatility using ARCH models improves the prediction of VaR to 4.8%. Considering high-frequency information to measure and predict the volatility, the prediction of VaR further increases to 6.9%. Adding the assumption of a return distribution that assigns extreme events a high probability leads to a VaR prediction of over 8%. Based on this forecast, the investor could have covered the realized loss of about 7.8% on her portfolio in the Dow Jones Industrial Index, as her reserves would have needed to be as large as the prediction.
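
A hedged numerical sketch of this kind of comparison: under a location-scale model, the one-day VaR at level alpha is the negated alpha-quantile of the assumed standardized return distribution scaled by the volatility forecast. The volatility values and the Student-t degrees of freedom below are purely illustrative assumptions, not the estimates reported in the text; the point is only that a larger (time-varying) volatility forecast and a heavier-tailed distribution both push the predicted VaR up.

```python
import numpy as np
from scipy import stats

def var_location_scale(mu, sigma, dist, alpha=0.01):
    """One-day Value-at-Risk at level alpha under a location-scale model:
    the negated alpha-quantile of the assumed standardized return distribution."""
    return -(mu + sigma * dist.ppf(alpha))

mu = 0.0
sigma_constant = 1.4            # illustrative long-run daily volatility (percent)
sigma_dynamic = 2.9             # illustrative crisis-day volatility forecast (percent)
t4 = stats.t(df=4)
t4_std = stats.t(df=4, scale=1.0 / np.sqrt(t4.var()))   # Student t rescaled to unit variance

print("constant volatility, normal tails:", var_location_scale(mu, sigma_constant, stats.norm))
print("dynamic volatility,  normal tails:", var_location_scale(mu, sigma_dynamic, stats.norm))
print("dynamic volatility,  heavy tails :", var_location_scale(mu, sigma_dynamic, t4_std))
```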

5 Understanding, Measuring and Predicting Financial Risks: A New Perspective

Two important parts of my current research project are to investigate and minimize the discrepancy between the assumptions of the existing mathematical and statistical financial risk models and reality, and to propose new, more accurate methods that incorporate the empirical properties of real data. The WIN project aims at developing new methods for measuring and predicting extreme events and risks in the financial markets by using high-frequency data (stock data from the Trades and Quotes database of the New York Stock Exchange and currency data from the foreign exchange market). The project has three objectives. The first objective is to understand what causes the extreme losses during financial crises. This will be done based on the analysis of the discrepancy between the theoretical model assumptions and the empirical properties of the financial data. The second objective is to identify and analyze the high-frequency information that is highly valuable for financial risk prediction. Achieving these two goals should lead to newly developed financial risk measures that optimally capture the market microstructure effects. The third goal of my project is to measure the accuracy and the robustness of these financial risk measures on different data sets (stock prices, exchange rates). To achieve the goals presented above, I use several statistical and mathematical methods that have already been applied in other disciplines. One of these methods is matrix regularization (factor analysis, principal component analysis, shrinkage methods), which provides dimension reduction and is already applied in psychology, the social sciences, engineering, physics and biology. A second method is extreme value theory, which is widely used in measuring and predicting extreme climate events (extreme weather or extreme floods), in geology for predicting earthquakes and in the insurance industry. The third method is based on power laws, which measure scale relationships between extreme events and risks measured at daily or lower frequencies and those measured at higher frequencies.


These methods have already found wide application in biology, psychology and engineering. While my project is mainly concerned with the quantitative analysis of financial risks, a non-quantitative analysis from the investors’ perspective would be necessary to create a complete picture of financial risks. This analysis would provide answers to the following questions: Are investors rational? What role do bonuses, job security, or vanity play in their decisions? How do the implementation costs of mathematical models and their complexity affect final decisions? Such questions are very difficult to answer in our academic world due to a general lack of communication between practitioners and the academic community. In order to minimize financial risks and manage their impact on our society, one should be able to reach the optimal synergy between theory, empirics and practice.
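
As an illustration of how extreme value theory, one of the methods listed above, can be used for tail risk, the following sketch fits a generalized Pareto distribution to losses above a threshold (peaks-over-threshold) and converts the fit into a high quantile of the loss distribution. The data, the threshold choice and the quantile level are assumptions made for the example only, not part of the project described here.

```python
import numpy as np
from scipy import stats

def evt_var(losses, threshold, level=0.99):
    """Peaks-over-threshold estimate of the loss quantile at `level`:
    fit a generalized Pareto distribution (GPD) to exceedances over the threshold
    and invert the tail formula VaR = u + (beta/xi)*(((n/Nu)*(1-level))**(-xi) - 1)."""
    exceedances = losses[losses > threshold] - threshold
    xi, _, beta = stats.genpareto.fit(exceedances, floc=0.0)
    n, n_u = len(losses), len(exceedances)
    return threshold + (beta / xi) * (((n / n_u) * (1.0 - level)) ** (-xi) - 1.0)

losses = -np.random.default_rng(3).standard_t(df=3, size=5000) * 0.01   # toy daily losses
u = np.quantile(losses, 0.95)                                           # threshold: empirical 95% quantile
print("EVT-based 99% loss quantile:", evt_var(losses, u, level=0.99))
print("Empirical 99% loss quantile:", np.quantile(losses, 0.99))
```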

6 Conclusion

Measuring economic phenomena is the main objective of econometrics. Measuring and understanding economic reality are very closely related: while measuring alone produces random numbers without meaning (a practice known in the econometric community as wild regressing), understanding without measuring remains a philosophical act. This close relationship is particularly evident in the analysis of financial risks. The enormous losses of previous financial crises have shown that the validity of the very restrictive and unrealistic mathematical assumptions of the existing risk measures needs to be seriously questioned. An important role in measuring and understanding financial risks is played by the underlying data. My WIN-college project aims at improving financial risk measurement and forecasting by developing and applying econometric methods based on high-frequency (tick-by-tick) financial data.

Regulating New Challenges in the Natural Sciences: Data Protection and Data Sharing in Translational Genetic Research

Fruzsina Molnár-Gábor and Jan O. Korbel

1 Recognition and Understanding

In cancer genome research, measurement and understanding are important at three levels: in the collection of patient data, in their scientific analysis, i.e. in the production of scientific results and medical findings, and in the handling of the information obtained. Recent developments in DNA sequencing technology make it possible to decode a complete human genome for less than €1000, if only the cost of the biochemical reagents used in the process is considered.1 This cost reduction is leading

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). The text is in accordance with the state of scientific research and legal regulations as of 2016. The present version has been revised technically and linguistically by the authors in collaboration with a professional translator.

1  Collins, F.S. (2010). Has the revolution arrived?, Nature, 464, pp. 674–675, p. 675.

F. Molnár-Gábor (*) Heidelberg Academy of Sciences and Humanities, Heidelberg, Germany e-mail: [email protected] J. O. Korbel Genome Biology Unit, EMBL, Heidelberg, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_11


to an explosion of sequencing in research and is driving a number of initiatives that are conducting genome analyses of patients and subjects on a large scale. Examples of such initiatives in Europe or under European leadership are the British “100,000 Genomes Project”, the “Pan-Cancer Analysis of Whole Genomes” (PCAWG) project led by the International Cancer Genome Consortium, and the projects of the German Consortium for Translational Cancer Research.2 It is estimated that several hundred thousand patient genomes will be analyzed over the next three to five years as part of these and similar initiatives.3 Novel systems for DNA sequencing (including the Illumina HiSeq X Ten Platform) will enable the sequencing of more than one million patient genomes worldwide within the next five to ten years. Together with other molecular biology data (e.g. transcriptome and microbiome data), this will generate data sets of unprecedented size, which will be used for improved basic research, patient stratification, and in the future for better diagnostics and personalised medicine through integrative analyses. While genome analyses are relevant for many diseases, they are of particular importance in cancer research. This can be attributed to a high level of patient participation and also to a number of successful application examples in which the genome data examined have helped to improve patient management.4 The availability of genetic and clinical data from a large number of patients, and in particular their integrative analysis, increases the likelihood that research can provide an improved understanding of disease patterns with relevance to clinical care. By analysing millions of genetic markers using appropriate filters in bioinformatics and statistical methods for data integration, improved therapies, prevention and screening programmes could increase healthcare efficiency and patient outcomes in the future—provided, of course, that data can be made available and adequately analysed by researchers. In order to enable the introduction of the results of research into patient care, as driven by translational medicine, i.e. the immediate translation of the results of research and clinical trials into medical care, it is necessary to retain the patient data of the data subjects.  100,000 Genomes Project; PCAWG Project. Retrieved from https://dcc.icgc.org/pcawg; German Consortium for Translational Cancer Research. Retrieved from https://www.dkfz. de/de/dktk/. 3  Brors, B., Eberhardt, W., Eils, R., et al. (2015). The Applied and Translational Genomics Cloud (ATGC), White Paper. 4  Retrieved from http://www.applied-translational-genomics-cloud.de/joomla/index.php/en/. Korbel, J.  O., Yakneen, S., Waszak, S.  M., et  al. (2016). Eine globale Initiative zur Erforschung von Krebserkrankungen: Das Pan Cancer Analysis of Whole Genomes (PCAWG) Project, Systembiologie, April 2016, pp. 34–38. 2


This ensures that individuals and patient groups can be matched to results. The results of genome sequencing in diagnostic laboratories can be correlated with clinical course data (such as tumour markers or drug dosages), enabling the identification of affected individuals for improved individualised treatment. Two central questions arise in this approach. Firstly, how can the large amount of data be processed in a meaningful way? At present, neither universities nor research centres worldwide have the necessary infrastructure to carry out analyses with such large data sets and to ensure secure storage of and access to the data. Whether one can store a data set securely is virtually independent of the size of the data set. Moreover, even very small amounts of data—i.e. just under 13 small data fragments (single nucleotide polymorphisms)—are sufficient to make an anonymised person identifiable. Furthermore, results data processed in different institutes are not comparable due to a lack of standardisation of work processes in computational analysis. A remedy here is offered by clouds, which are brought to life through large infrastructure investments. Cloud computing generally means the storage and large-scale as well as scalable (i.e., adaptable in scale depending on the requirement) processing of data by multiple users using a common computing infrastructure. Processing is typically performed using remote access, which is usually established via the Internet (or, in exceptional cases, via an internal network). One of the key technical advantages of cloud computing is the bundling of resources, which accelerates calculations and ensures elasticity (i.e. rapid and dynamic scalability of the computing capacity used).5 The possibility of determining the computing power called up according to requirements (“on-demand self-service”) represents a major advantage for users compared to traditional data centres and can reduce costs for both the procurement and operation of IT infrastructure. Broad network access and standardised data security measures make it possible to process enormous amounts of data in the petabyte range under the same protective measures. Compared to individual data centres in academic institutions, which are usually used with time interruptions, clouds can significantly reduce infrastructure and operational costs due to high utilisation rates and resource sharing.6 Typical cloud service models include Software-as-a-Service (the provision of specialised software as a service by the cloud provider) and Infrastructure-as-a-Service (the provision of specialised IT infrastructure as a service by the cloud provider). Clouds can be accessible to the public (public cloud) or restricted to a specific community of users (community cloud).

5  Mell, P. & Grance, T. (2011). The NIST Definition of Cloud Computing. Recommendations of the National Institute of Standards and Technology, SP 800–145 NIST Special Publication. Retrieved from http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf.
6  Bedner, M. (2013). Cloud Computing—Technik, Sicherheit und rechtliche Gestaltung, Kassel: University Press, p. 23, p. 27.


In addition to clouds from commercial providers such as Amazon Web Services, Google, Microsoft or T-Systems, numerous academic clouds are only accessible to certain user groups or user communities, such as the “Embassy Cloud” at the European Bioinformatics Institute of the European Molecular Biology Laboratory (EMBL-EBI) in the UK.7 The number of mixed clouds operated through a public-private partnership is also increasing.8 In addition to the question of how the large amount of data can be processed in a meaningful way, the second question to be addressed is how the collected data and the persons involved in data processing can be protected. This question is particularly significant because a broad application of this new data processing technology in research and medicine is already emerging at this point in time. The aim of the WIN project “Regulating new challenges in the natural sciences—data protection and data exchange in translational genetic research” was to address these two issues and to establish, in an interdisciplinary manner, a solution model for the use of cloud technology against the background of the ethical-legal requirements of data protection and data subjects’ rights. In its analysis, the WIN project used the PCAWG project organized within the International Cancer Genome Consortium (ICGC) as a use case; in terms of the amount of data processed, it has been the largest bioinformatics project worldwide. Participants in this project are cancer patients who have voluntarily agreed to participate in the ICGC.

2 The Rights and Expectations of Stakeholders in Macro and Micro Perspective—Normative Challenges

2.1 The Volume and Diversity of Data and Its Impact on Data Protection

The volume and diversity of data processed using clouds is growing rapidly. This enables genetic research to establish disease patterns, i.e. correlations between genetic changes and disease patterns, as well as patterns of disease processes with high statistical validity.

7  Amazon Web Services, available at: https://aws.amazon.com/de/; Google Cloud Platform, available at: https://cloud.google.com/. Microsoft Cloud Services, available at: https://www.microsoft.com/de-de/cloud/default.aspx. T-Systems Cloud, available at: http://cloud.t-systems.de/; Embassy Cloud, available at: www.ebi.ac.uk/services.
8  Helix Nebula, available at: http://www.helix-nebula.eu/.

Fig. 1  Quantitative requirements justify increased qualitative challenges to data protection. (Own representation)

Even small sections of genetic material allow conclusions to be drawn about the individual. Although not all collected (genetic) data have a clear personal reference, any form of analysis of patient data also increases the probability that a person can be identified despite encryption of their data. In order to prevent the illegal re-identification of individuals, simple encryption alone is not sufficient in the case of high data diversity. When keeping data in clouds, separation between different categories of data becomes necessary. At the same time, there is a risk that separation between data sets will negate the benefits of keeping them in clouds if this makes it difficult to share resources in a cost-effective way. In addition, DNA data are almost always decrypted first in practice before being processed, so ensuring continued confidentiality in the context of cloud computing is hardly feasible at present.9 The quantity and diversity of the data sets to be analysed not only lead to new insights in genetics, but also to qualitatively new challenges for data protection law (Fig. 1).

2.2 The Individual Patient

A legal as well as ethical basis for processing personal data is provided by the informed consent of the data subject. However, in the context of research projects, it is often not possible to fully determine at the time of data collection for which

 Molnár-Gábor, F. & Korbel, J. O. (2016). Verarbeitung von Patientendaten in der Cloud, Zeitschrift für Datenschutz 6, pp. 274–281. 9


scientific purposes the data are being collected. For this reason, if ethical standards of scientific research are to be observed, it is often only permitted to give consent for a specific area of research. For research fields that are very much in flux, such as translational genomics, it is a benefit if the data collected can be used in a variety of ways without having to contact patients again. The results of the analyses following genetic sequencing are often not known in advance and may be so numerous that the patient cannot provide a sufficient basis for this research with conventional consent.10 Therefore, informed consent should increasingly be understood as a process that is dynamic, with extensive rights granted to the patient to ensure the implementation of his or her will at the different stages of the process, such as the right to be informed about the processing, the right of access to the data, the right of withdrawal, as well as the right to be forgotten. Translational research aims at a high number of participating patients. For this reason, the withdrawal of consent by individual patients does not have a significant impact on research. However, it is currently technically unclear how various patient rights, such as the right to be forgotten or the right to access data, can be implemented when data are processed in a globally operated public cloud. Locating individual data sets within a public cloud causes considerable difficulties, as the providers of such cloud services distribute their data centres in different countries and thus create a large number of copies of diverse data sets. Even if the revocation of individual declarations of consent does not affect large-scale translational research based on statistically proven correlations, the real challenge is to perceive the patient in his or her uniqueness, which is reflected in his or her particular life situation and medical history, and to ensure that the patient can exercise his or her rights in the dynamic process of data processing by designing the research circumstances and the technology used.

2.3 Cooperating Research Centres, Partners and Countries

Commercial partners and partners from third countries are increasingly being involved in data processing in order to ensure the necessary IT infrastructure for establishing clouds. Firstly, their involvement due to research collaborations enables the pooling of large and diverse data sets, and secondly, they usually have good technical equipment. For example, in the initial stage of the PCAWG project,

10  Molnár-Gábor, F. & Weiland, J. (2014). Die Totalsequenzierung des menschlichen Genoms als medizinischer Eingriff—Bewertung und Konsequenzen, Zeitschrift für medizinische Ethik, 2, pp. 135–147.


tumour tissues and healthy tissues from about 1300 mostly North American but also Asian cancer patients were analysed on a cloud platform. This platform is owned by a US company that has enabled the analyses at a reduced price and under improved processing conditions compared to the academic centres involved in the project.11 The international exchange of data is subject to requirements under international and European law.

2.3.1 Transfer of Data from the European Union to Third Countries In 2000, the European Commission found that data recipients in the United States who subscribe to the principles of the so-called Safe Harbor Agreement ensure an adequate level of protection for the personal data transferred.12 Nevertheless, a growing distrust of the companies participating in the Safe Harbor program has been observed for some time. Research has shown that, due to the lack of independent controls, actual compliance with the Safe Harbor Principles has varied widely among companies.13 In addition, international surveillance scandals have damaged trust in the agreement and led to data protection authorities not accepting its validity and denying that it ensures an adequate level of data protection in the US.14 The basis for this ­assessment by data protection authorities was that contractual clauses of cloud providers regularly authorise them to disclose user data if there is a valid court order or a request from law enforcement or executive authorities. Based on mutual legal assistance treaties, executive authorities of one nation may review the private data of citizens of another nation if investigative authorities believe that such information may be relevant to national security.  Stein L. D., Knoppers, B. M, Campbell, P., Getz, G. & Korbel, J. O. (2015). Data analysis: creating a cloud commons, Nature 523, pp. 149–151. 12  2000/520/EC: Commission Decision of 26 July 2000 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection afforded by the Safe Harbor Principles and related Frequently Asked Questions (FAQs), submitted by the United States Department of Commerce, OJ L 215, 25. 8. 2000, pp. 7–47. 13  European Commission, Communication from the Commission to the European Parliament and the Council on the functioning of the Safe Harbour from the Perspective of EU Citizens and companies established in the EU, Brussels, November 27, 2013, COM (2013) 847 final, 6–7. 14  Arbeitskreise Technik und Medien der Konferenz der Datenschutzbeauftragten des Bundes und der Länder sowie der Arbeitsgruppe Internationaler Datenverkehr des Düsseldorfer Kreises, Orientierungshilfe Cloud Computing, 9. Oktober 2014, Version 2.0, 19–22, https:// www.datenschutzbayern.de/technik/orient/oh_cloud.pdf. 11


Legislation in many countries further extends these powers, inter alia, in the interest of law enforcement and counter-terrorism. European data protection authorities have therefore criticised that the Safe Harbor Principles only apply to self-certified US companies in the private sector that receive personal data from the EU and that US public authorities are not required to comply with the Principles. For the access rights of the US authorities, it is not decisive whether the cloud is located inside or outside the USA. It is considered sufficient under US law if the cloud provider at least also conducts business in the USA. In particular, Title 50 USC, Sec. 1881a FISA, allows almost unrestricted government access to data and communication protocols.15 Furthermore, there is no purpose limitation for the further processing of collected data by the US authorities.16 The Commission Decision on Safe Harbor gives priority to the requirements of “national security, public interest or the implementation of laws” over the Safe Harbor Principles. Accordingly, self-certified private US organisations receiving personal data from the Union are also obliged, without any restriction, to leave the Safe Harbor Principles without application if they conflict and are therefore found to be incompatible with the said governmental requirements.17 In response to these developments, the European Court of Justice (ECJ) invalidated the Commission’s decision on the Safe Harbor Agreement in its judgment of 6 October 2015.18 Beyond the concerns of data protection authorities, the ECJ found that the Safe Harbor Agreement does not contain a finding on whether there are state rules in the United States designed to limit any interference with the fundamental rights of individuals whose data are transferred from the Union to the United States.19 In addition, the Agreement does not contain any finding on the existence of an effective judicial remedy against such interference.20 Citing these findings, the ECJ finds that, in particular, a rule allowing the authorities to have general access to the content of electronic communications violates the essence of the fundamental right to respect for private life guaranteed by Article 7 of the Charter of Fundamental Rights of the European Union.21  Foreign Intelligence Surveillance Act of 1978 (FISA Pub.L. 95–511, 92 Stat. 1783, 50 U.S.C. ch. 36). 16  Ibid. 17  ECJ, Judgment of 6 October 2015, Case C-362/14, Maximillian Schrems v Data Protection Commissioner, para. 86. 18  Ibid. 19  ECJ judgment, para 88. 20  ECJ judgment, para 89. 21  ECJ judgment, para 94. 15


Similarly, a system which does not provide for the possibility for citizens to obtain access to personal data concerning them by means of a judicial remedy or to obtain their rectification or erasure infringes the very substance of the fundamental right to effective judicial protection enshrined in Article 47 of the Charter.22 In early 2016, the Commission announced the conclusion of negotiations with the US on a new data transfer mechanism, the “EU-US Privacy Shield”.23 The negotiated results served as a new basis for an adequacy decision by the European Commission, following the opinion of the Article 29 Working Party and the representatives of the Member States.24 However, the new baselines have been severely criticised by data protection experts, who believe it is possible that the new agreement will also soon have to be examined by the ECJ.25

2.3.2 Commercial Cloud Providers Cooperation with commercial cloud providers is associated with additional normative challenges. The applicable law governing the processing of data usually depends on the location of the processing or the company headquarters. In third ­countries, these often offer, as the ECJ has also found, lower protection of patient data in many respects than German laws or European regulations, which cannot be compensated for even by storing the data in specified regions. The specification of a US jurisdiction for the settlement of disputes in the terms of service of the transatlantic cloud providers results on the one hand in a considerable cost risk, not least in view of the provision which stipulates that each party must in principle bear its own costs irrespective of the outcome of the proceedings.  ECJ judgment, para 95.  See Commission Press Release, retrieved from http://europa.eu/rapid/press-release_IP-16216_de.htm. 24  Commission Implementing Decision (EU) 2016/1250 of 12 July 2016 pursuant to Directive 95/46/EC of the European Parliament and of the Council on the adequacy of the protection provided by the EU-US Privacy Shield (notified under document C (2016) 4176) (Text with EEA relevance). 25  Cf. ECJ, Case T-670/16. Weichert, T. (2016). EU-US Privacy Shield—Ist der transatlantische Datentransfer nun grundrechtskonform? Eine erste Bestandsaufnahme. Zeitschrift für Datenschutz, 5, pp. 209–217. BBC News, Data watchdog rejects EU-US Privacy Shield pact. Retrieved from http:// www.bbc.com/news/technology-36414264 (30 May 2016). Windelband, D. (2016). Kritik am EU-US Privacy Shield reißt nicht ab. Abgerufen von https://www.datenschutz-notizen.de/kritik-am-eu-us-privacy-shield-reisst-nichtab-5614373/ (7 April 2016). Schreiber, K. & Kohm, S. (2016). Rechtssicherer Datentransfer unter dem EU-US Privacy Shield? Zeitschrift für Datenschutz, 6, pp. 255–260. 22 23


On the other hand, it results in the need for the involvement of lawyers based in the state in which the action is brought. Challenging the jurisdiction of a U.S. court itself is not easy and is usually very costly. The storage and processing of data are carried out by the cloud provider in different geographical regions, depending on the utilisation and cost-effectiveness of the available computing power. Customers are often allowed to specify the region in which the data is processed and stored for an “additional charge”. Many service providers also offer regions in the European Economic Area and promise to keep the data exclusively in the specified region. Other service providers refuse to disclose the geographic regions in which they operate. Nonetheless, even if the region is specified, promises of retention there are very vague and difficult to monitor. Due to the wording of the service contracts, it is often not clear whether in fact all data, including what is generated as a backup, or data fragments, as well as all data processing, remain in the chosen regions. The involvement of sub-contractors and various temporary personnel may also necessitate data transfers outside the chosen region. The control options on the part of cloud users are very limited and transparent solutions usually leverage the scalability and resource pooling advantages of cloud computing, at least in part. Customers of commercial cloud providers share physical but also logical storage, i.e. the logical addresses that the programs see and which are mapped to physical addresses. However, references in contracts to “reasonable and appropriate measures” to secure data by the service provider are very vaguely worded and do not provide any information about the exact design of the protection. Responsibility for the security of data in clouds thus falls to the customer—if personal data are processed in a cloud, they require special protection. This leads to the question of the actual means of access to the data, including by the service provider itself. In addition to the possibility of disclosing the customer’s content to comply with a request from a government agency or regulatory body, service contracts are usually also very vague in other respects: On the one hand, it is claimed that the provider does not have access, on the other hand, access is nevertheless granted to the provider for the purpose of ­maintaining the service. The maintenance of the service is also defined differently: sometimes specifically to the service provided for the particular customer, sometimes generally to the services of the group. In the case of groups that finance themselves through advertising, these access rights can be defined very broadly. Not only the content of the user can be relevant from this point of view, but also the monitoring of the usage behaviour represents an interesting option for the provider.


2.3.3 The Individual Researcher

The growing outsourcing of the computing infrastructure makes it much more difficult for both the data subjects (patients) and the researchers involved to follow and verify data processing procedures. The new European data protection rules establish a comprehensive responsibility and liability of the controller for its processing of personal data and for processing carried out on its behalf (Art. 22–23, Art. 77 of the General Data Protection Regulation (GDPR)).26 It must not only implement technical and organisational measures to be able to demonstrate compliance with data protection rules,27 but also take appropriate and effective measures taking into account the nature, scope, circumstances and purposes of the processing and the risk for the personal rights and freedoms of the data subjects.28 It must only engage data processors that provide sufficient guarantees for the protection of personal data.29 It is responsible for the implementation of all data processing principles and in particular bears the obligation to implement data subjects’ rights.30 The controller, as well as the processor, are obliged, in all cases where there is no decision by the European Commission on the adequacy of the protection existing in a third country, to use solutions that provide data subjects with enforceable and effective rights in relation to the processing of their personal data after the transfer of such data, so that they can continue to enjoy the fundamental rights and safeguards as in the EU.31 These provisions do not take into account the different relationships between data controllers and data processors with academic and commercial cloud providers. When researchers and data controllers (often in combination) use the cloud services of large industrial companies, it is hardly possible for them to comply with the obligations to ensure and control data protection standards. Even in the case of cross-continental cooperation, difficulties may arise in implementing the obligations.

26  Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance), OJ L 119, 4.5.2016, p. 1–88.
27  Art. 5(1) GDPR.
28  Recital 60 GDPR.
29  Recital 63a, Art. 26 GDPR.
30  Article 5(2) of the GDPR.
31  Art. 14, Art. 17 No. 2a GDPR.


Fig. 2  Correlations and causal relationships: model building in genetics—measuring and understanding at the level of scientific analysis. The mathematical formulation of regression analysis can be represented by the relationship between the independent variables x and the dependent variable y: y = f(x1, x2, …, xn) + e. Here f denotes the function sought or assumed and e the error or residual of the model. (Own representation)

Meeting expectations, even with the goodwill of all research participants, is a major challenge when there are fundamental differences between countries in their views on the protection of personal data and the rights of data subjects (Fig. 2).32

3 Generalisation and Solution Pattern of the Regulation

In addressing these normative challenges, consideration should be given, on the one hand, to how the rapidly changing medical research on extensive data sets, which increasingly crosses borders and forms centres at various locations, can be made possible. On the other hand, the recently tightened responsibilities of researchers should be respected and the extensive rights of data subjects need to be


adequately addressed. A regulation is needed that formulates answers to the challenges of genetic research both at the necessary level of abstraction and in a way that is applicable to the specific context, and that enables its swift implementation. One solution that is becoming increasingly established is the consideration of regulatory approaches developed by the research community itself. Self-regulation is understood as the development of specific self-obligatory norms of behaviour through setting professional standards. Such independent standards regimes are initially not legally binding. Private standard-setting is not democratically legitimised, and scientific organisations or even loose groups of scientists without legal capacity cannot in principle issue legally binding standards as subjects of private law.33 Nonetheless, many self-regulatory measures refer to significant legal principles in the research field concerned—such as, for example, the Max Planck Society’s guidance and rules on the responsible handling of research freedom and research risks.34 Self-regulation allows ways of decision-making and corridors of action to be pointed out; these standards can be understood as interpretative aids for the implementation of general legal regulations, also in the field of data exchange and data processing. The initiation of self-regulatory rules by science can ensure the necessary specificity of standards for the particular regulatory context. Moreover, the specification can be flexible. In contrast to the legislature entrusting natural or legal persons with standard-setting powers (delegation) or to involving experts in the legislative process from the field of technology or ethics, the design of self-regulatory measures—since it initially lies outside the legislative process—is not formalised in terms of process and material. However, self-regulatory measures can have an indirect legal effect in various areas of law.35  As registered associations under private law, scientific organisations may enact statutes in accordance with Section 25 BGB. These are binding for their members. Consequently, they usually have organised decision-making as well as organised administrative structures and often compliance and enforcement mechanisms. 34  Information and Rules of the Max Planck Society on the Responsible Use of Research Freedom and Research Risks, available at: https://www.mpg.de/199426/forschungsfreiheitRisiken.pdf. See also the DFG Guidelines: Ensuring Good Scientific Practice, available at: http:// www.dfg.de/download/pdf/dfg_im_profil/reden_stellungnahmen/download/empfehlung_ wiss_praxis_1310.pdf. 35  Molnár-Gábor, F. (2016). Case Study 2: Self-regulation in research: The EURAT Code of Conduct for Whole Genome Sequencing. In M. Dreyer, J. Erdmann & Ch. Rehmann-Sutter (Eds.), Genetic Transparency? Ethical and Social Implications of Next Generation Human Genomics and Genetic Medicine. Amsterdam: Brill-Rodopi, pp. 216–230. 33


According to Section 276 II BGB (German Civil Code), liability of private persons may arise due to negligence if the required level of care is not observed. According to the prevailing opinion, reasonable care also includes the consideration of specific guidelines and codes of conduct that reflect the standard in a field of research.36 Accordingly, depending on the case, negligence may, for example, be justified by a failure to take account of risk-minimising research measures laid down in a code of conduct. Whether specific self-regulatory measures reflect the standard in the field concerned must also be considered on a case-by-case basis. Labour law measures can be a solution for the heads of institutions to include specific self-­ regulatory requirements in the regulatory framework of their institutions. Depending on the specific inclusion, these solutions achieve different effects. Initially, admissions can be ordered under Section 106 of the Industrial Code and Section 315 of the Civil Code as part of the director’s rights to direct the head of the institution. Although the employer can only specify existing contractual obligations within the scope of these rights, it is possible to tie self-regulatory rules of science, e.g. in the form of a code of conduct, to existing measures for compliance with contractual obligations.37 Another option for incorporating self-­regulatory rules is to include them in employment contracts. This is less problematic for new hires than for changes to existing employment relationships, which usually require the consent of both parties.38 A third option is the inclusion in company ­agreements.39 Once employees are obliged to comply with the self-regulatory rules of science, non-compliance may be classified as a material breach of duty and result in disciplinary measures or sanctions under labour law. Last but not least, self-regulatory measures may acquire legal applicability due to implementation by public law bodies. Universities, as public corporations, may regulate their affairs by statutes, unless the law contains provisions to the contrary. Private norm-setting, however, generally only becomes relevant in state law when the legislation orders compliance with the specific private regulations or by means of general clauses.40

 Damm, R. (2009). Wie wirkt “Nichtrecht”?, Zeitschrift für Rechtssoziologie, 30(1), pp. 3–22, p. 14 et seq. 37  MünchArbR/Blomeyer, § 48, recital no. 32. 38  Mengel, A. (2009). Compliance und Arbeitsrecht. Implementierung, Durchsetzung, Organisation. Munich: C. H. Beck, Rn. 31 et seq. 39  Ibid., para 50. 40  Ibid., p. 12 et seq. 36


In addition to its legal significance, private self-regulation can contribute to the development of the state of the art and science in the field of data processing in a beneficial way. Such regulations can ensure a timely take-up of specific technological and scientific developments, since the creation of such standards usually takes less time than if the legislator—including appropriate delegation—becomes active itself. Experts are involved in the establishment of such standards, which ensures the disciplinary suitability of the regulations and can thereby achieve their appropriate scientification. Beyond the legal aspect, self-regulation has an important influence on the ethics of science and can significantly advance the establishment of ethical standards of conduct (cf. medical professional codes of conduct) for professions that do not have such an established canon (biology, biotechnology, bioinformatics, etc.) but are increasingly responsible for respecting the personal rights of patients, as well as reaffirming ethical standards across different legal systems. Last but not least, the actors involved generally consider the path of self-regulation to be a more effective solution compared to higher levels of generalisation, such as laws, to regulate specific issues in individual scientific fields.41 Self-regulatory solutions can of course only be convincing if they are in line with the applicable law, especially with the constitutional requirements. Furthermore, they can only gain trust and support at professional and societal levels if they adequately implement the established ethical principles of handling sensitive data and respecting the patients concerned. In this sense, both the former Data Protection Directive and the General Data Protection Regulation of the European Union adopted April 2016, encourage the development of codes of conduct for specific areas and sectors of data processing (Art. 27 Data Protection Directive,42 Art. 40 GDPR).

 Sieber, U. & Engelhart, M. (2014). Compliance Programs for the Prevention of Economic Crimes. An Empirical Survey of German Companies. Berlin: Duncker & Humblot, p. 116 et seq. 42  Directive 95/46/EC of the European Parliament and of the Council of October 24, 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281 of 11.23.1995, pp. 31–50. 41


4 Consequences for the Interdisciplinary Approach in the Project and Need for Further Discussion

Interdisciplinary measures of self-regulation allow research to influence the legal and ethical development of technological standards of conduct. In this way, standardisation at the interface of science and technology can be initiated in such a way that it specifies, implements and standardises, in a data protection-immanent manner, the standards set by existing law. This approach goes beyond the claim of merely satisfying legally binding, already established regulations. However, the development of interdisciplinary self-regulatory measures presupposes a constant exchange between the natural sciences and the normative approaches of ethics and law beyond a simple impulse-giving role. An interdisciplinary exchange makes it possible to translate and channel the analytical pathways, quantifications and work processes, i.e. measurement and understanding in biology, into normative challenges. The flexibility and specificity of the informal generation of self-regulatory rules can ensure science-adequate and timely solutions whose integration into state law can proceed more quickly through formal incorporation. While the sequencing of the human genome undoubtedly has enormous potential for basic and biomedical research, the approach is also associated with ethical risks that scientists working in this research area must be aware of.43 Interdisciplinary self-regulatory approaches to cloud computing are of essential interest because of the responsible incorporation of this new technology into the scientific system through normative assessment, and because they provide protection from liability. However, it is not only the law that determines technology—what is technically possible also influences the law. The development of terms such as the concept of personal data already demonstrates the need for dovetailed technical and normative approaches to clarify the legal framework: the possibility of identifying a person from certain data depends on the specific concrete combination and aggregation of data. Through close interdisciplinary cooperation with biologists and bioinformaticians, impulses from the current state of research can be directly taken up for the

43  The participants in this WIN project participated in drafting the Code for Non-Medical Scientists Involved in Whole Genome Sequencing, in Particular of Patient Genomes, and its explanatory notes, in: Project Group “Ethical and Legal Aspects of Whole Human Genome Sequencing”, Cornerstones for a Heidelberg Practice of Genome Sequencing, 2013, pp. 12–30, unchanged reprint of the Code and its explanations with scope for “researchers significantly involved in whole genome sequencing, especially of patient genomes” in the 2nd edition 2015, pp. 18–36.


establishment and further development of the terms relevant for legal regulation. Furthermore, the consideration of these impulses can contribute to the adequate specification of existing standards and legal regulations. It will be vital to ensure that those affected by these regulations as well as biologists are flexibly involved in the standardisation processes. At the same time, it will be possible to ensure that better protection of the affected patient, as a guiding legal principle, is combined with the data-driven requirements typical of biotechnology. Ethical and legal requirements that would restrict scientific research can be incorporated into research methods through innovation-oriented solutions, such as privacy-by-design models. In this way, responsibility for such approaches, characterised by the aim of standardising processing procedures and the goal of protecting personal data in research, is firmly anchored. Thus, the disciplines of biology and law complement each other in the project, and only their dovetailing enables the scientific implementation of the project, i.e. finding adequate regulatory answers to new challenges of translational genomic medicine, especially those related to data protection law. The interdisciplinary nature of the project will therefore make a significant contribution both to the promotion of scientific freedom and of innovative research approaches, and to guaranteeing patients’ rights and treating patients as persons with due regard for their dignity. The development of an interdisciplinary methodology for privacy regulation can bring the approaches of the disciplines involved to measuring and understanding the world closer together and deepen the understanding of the object of study shared by the two disciplines: biotechnology and law. “Can”, conditioned by biotechnological research, and “may”, conditioned by law, are merged into standards of action in the sense of “ought”. This not only results in practical advantages, such as the relief of researchers from liability or the appropriate implementation of patients’ rights, but also contributes to the development of a concept of science that is characterised by freedom, humanity and a culture of trust.

Psychology and Physics: A Non-invasive Approach to the Functioning of the Human Mirror Neuron System

Daniela U. Mier and Joachim Hass

At first glance, psychology and physics seem to have nothing in common. However, both disciplines contain subfields devoted to the study of one of the most complex objects in the universe: the human brain. Within psychology, neuropsychologists and neuroscientists “watch the brain at work” by measuring electrical and magnetic signals while participants perform tasks. Physics comprises the subfield of biophysics and is involved in the interdisciplinary field of computational neuroscience, in which the complex processes in the brain are represented with mathematical models. Furthermore, both disciplines focus on quantitative measurements and present their results in the form of numbers.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). The authors have subsequently revised the text further in an endeavour to refine the work stylistically and in terms of content. D. U. Mier (*) Clinical Psychology and Psychotherapy, University of Konstanz, Konstanz, Germany e-mail: [email protected] J. Hass Faculty of Applied Psychology, SRH University Heidelberg, Heidelberg, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_12


In our joint project, we try to combine approaches from both disciplines to learn more about the function of the human mirror neuron system, which is considered the neural basis of social cognition.1 In various studies, functional magnetic resonance imaging (fMRI) has shown that certain brain areas are activated when observing and imitating the movement of other people.2 Interestingly, these areas correspond exactly to the areas in which the so-called mirror neurons have been found in primates. These mirror neurons are active both when a monkey itself performs a movement and when it observes a comparable movement in others.3 This finding has caused a “hype”, as it has been assumed that the neuronal correlate of empathy and emotion recognition had been found and that these very neurons would be dysfunctional in persons with autism and psychopathy. The problem, however, is a lack of a non-invasive methodology that can be used to study the activity of individual neurons, so direct measurement of mirror neuron activity in healthy humans is virtually impossible.4 To infer cellular activity despite these difficulties, cooperation between experimental neuroscientific psychology and computational neuroscience is required, aiming at theoretical modeling of mirror neurons and cell networks. Expanded knowledge of the human mirror neuron system could make a crucial contribution to the development of therapies for mental illnesses such as autism. In the following, we first give insight into the scientific cultures of the two disciplines and also show their limitations. Subsequently, we discuss how these limitations can be overcome in our project through cooperation between the disciplines.

1 Importance of Number and Measurement

1.1 Psychology

Psychology as a discipline has evolved from the experimental study of human perception. In this respect, psychology is a science based on experimental work.

1  Gallese, V., & Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends in Cognitive Sciences, 2(12), pp. 493–501.
2  Carr, L., Iacoboni, M., Dubeau, M. C., Mazziotta, J. C., & Lenzi, G. L. (2003). Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic areas. Proceedings of the National Academy of Sciences of the United States of America, 100(9), pp. 5497–5502.
3  Rizzolatti, G., & Craighero, L. (2004). The mirror-neuron system. Annual Review of Neuroscience, 27, pp. 169–192.
4  It should be noted that fMRI is based on the detection of changes in oxygenation in active brain areas, quantified via the so-called Blood Oxygen Level Dependent (BOLD) signal. This BOLD signal is triggered by neuronal activity but does not allow any further inference about the underlying firing rate of the neurons.


One contrast to some other human sciences is that in psychology, experimental scientific thinking is emphasized already in undergraduate courses. Psychology students are trained early on to think in a way that generates hypotheses, to test hypotheses by operationalizing relevant mechanisms through suitable experiments or questionnaires, and to apply statistical methods in order to distinguish significant results from chance. This means that in psychology, precise and empirically testable assumptions (=hypotheses) are proposed and suitable experiments are designed to test them, making the previously theoretical constructs measurable, i.e. quantifiable (=operationalization). Regarding the hypotheses, it should be noted that these are set up before an investigation and never afterward. After the investigation, the results can be used to generate new hypotheses for new studies. However, a hypothesis that forms the basis of a study may never be changed afterward, as it would then cease to be a hypothesis. Rather, it would be a false post-hoc representation of the pre-study assumptions based on the results of the study. Such scientific fraud would lead to a deterioration of the scientific way of working and thinking and would significantly limit the gain in knowledge. Further differentiation from other human sciences is based on the fact that in psychology, as in the life sciences, clear connections are made between psychological processes and their biological substrates. Since our brain filters and selects what we become aware of, it seems inevitable that any psychological research is neuroscientific research. At this point, however, it should be noted that not all psychological disciplines are neuroscientific. This is because not every research question is directly related to biology (e.g., studies of group phenomena in social psychology). However, in efforts to understand human feeling and thinking and their constituent mechanisms, the study of human central and peripheral physiology (i.e., the functioning of the brain and autonomic functions) has an important role. To give an example of a psychological issue from our project, take the phenomenon of empathy. It is assumed that the ability to empathize with the thoughts and feelings of other people is not a single skill, but a conglomerate of several skills. Thus, it is assumed and investigated in the joint project whether the process of empathy can be divided into the following three components: compassion (so-called affective empathy), emotion recognition (so-called cognitive empathy), and personal suffering (so-called personal distress). Without elaborating on this, it should be noted that individuals who are poor at recognizing another person’s emotion are also less able to adequately empathize with the other person, and that high personal distress could lead to self-focus and thus lower empathy with the other person. Importantly, by combining fMRI with an experiment on empathy, we can measure a direct neural correlate of compassion. By confronting our subjects with empathy-inducing images and asking them about the extent of their empathy, we can determine which brain areas are involved in the generation of empathy.


Finding an appropriate operationalization for the question of whether personal distress, cognitive empathy, and affective empathy are indeed separable components of the empathic response is not trivial. First, these components must each be operationalized as separate conditions. To this end, in our project we show photographs of people expressing either fear or anger and instruct participants, while viewing the photographs, to rate either how much they feel for the other person, how bad the person depicted feels, or how bad the participant themself feels. After viewing the face, participants indicate their rating on a visual analog scale ranging from 0 to 100 (0 = not at all, 100 = very much). The question of distinct components can now be answered in several ways. First, a statistical test can be used to check whether the level of the ratings differs significantly. Second, correlations between the components can be calculated to check how strong the relationship between the components is. If the correlations were not significantly different from zero, this could indicate that the components are distinct. However, a non-significant result could also indicate that the operationalization of the components was not good enough, or that the group size was too small to get a significant result even for a small effect. A significant difference in the level of scoring between the components is also highly dependent on the visual material. Ratings of personal distress would likely be much higher if, instead of faces, scene photographs were used that showed, for example, sad children or scenes of war. Therefore, an alternative way to test the hypothesis of distinct components is to correlate them with an external criterion (so-called external validity). The result that the different components differ significantly in their correlation with certain questionnaires could be taken as an indication of this notion. Nevertheless, none of these analyses seems to be optimal. Thus, at the behavioral level, evidence on whether the correlations between the components differ significantly is meaningful, yet unsatisfactory. Therefore, one should approach the question of distinct components of empathy differently and assume that the components are significantly correlated (behavioral level), but that the components are associated with both overlapping and distinct brain activation (neural level). Indeed, at the level of brain activation, it is possible to examine significant differences in the activation of different brain regions, depending on the component.
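
A minimal sketch of the two behavioral analyses just described, using simulated ratings: the participant numbers, effect sizes and rating values below are invented for illustration and are not data from the study. The sketch runs a paired test of whether the rating levels differ between two components and computes pairwise correlations to gauge how strongly the components are related.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 40                                                        # hypothetical number of participants
shared = rng.normal(50, 10, n)                                # common per-person responsiveness
affective = np.clip(shared + rng.normal(5, 8, n), 0, 100)     # simulated compassion ratings
cognitive = np.clip(shared + rng.normal(0, 8, n), 0, 100)     # simulated emotion-recognition ratings
distress = np.clip(shared + rng.normal(-8, 12, n), 0, 100)    # simulated personal-distress ratings

# 1) Do the rating levels differ between two components? (paired t-test)
t, p = stats.ttest_rel(affective, cognitive)
print(f"affective vs cognitive level: t = {t:.2f}, p = {p:.3f}")

# 2) How strongly are the components related? (pairwise Pearson correlations)
for name, other in [("cognitive", cognitive), ("distress", distress)]:
    r, p = stats.pearsonr(affective, other)
    print(f"affective vs {name}: r = {r:.2f}, p = {p:.3f}")
```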


In this context, data are collected from groups of individuals to capture common patterns.5 The patterns we are addressing in the project are those of mirror neuron firing rates, i.e., the number of so-called “spikes” per time unit by means of which the neurons communicate with each other. The question for brain activity is whether we find activation (i.e., increased firing rates) in the areas associated with mirror neurons for all three components, and whether there are also differences in the strength of activation in areas belonging to the limbic system and in areas associated with self-referential processes, i.e., mental processes about oneself. The associated hypotheses are that there is increased activation in the limbic areas in both affective empathy and distress compared to cognitive empathy, and reduced activation in regions of the frontal cortex in distress compared to cognitive empathy. The results of the significance tests, in turn, can be interpreted in terms of a pattern. Particularly for brain activation, the results must be interpreted based on existing knowledge about the function of the respective brain areas. Thus, by drawing on published findings and accumulated knowledge about specific brain areas, a qualitative analysis of the results is formed. This interpretation of the patterns can also serve as a basis for the development of a conceptual model that depicts certain psychological processes and their interactions and thus allows predictions that can be investigated in future experiments. So, while we can measure brain activation and quantify and represent the results in numbers, this purely quantitative information is almost “meaningless” because it is the knowledge of the function of the brain areas and the patterns and networks that convey the meaning. However, in neuroscience, knowledge is relative, since a given brain area does not have a single fixed function. The function only arises from the knowledge that a brain area is active in a particular task, such as empathy. The knowledge on which we base the interpretation of our results is thus subject to change and expansion, which may cause the interpretation of the activation of a brain area to differ between studies and over time. Constructing a model on the basis of the results could thus even be called educated guessing. Since the presentation of the results was already subject to interpretation, we first have to check the individual aspects of the model again in experiments, and the overall interaction of the individual aspects of a model is often so complex that direct verification in experiments is not possible.

5  For the definition of the term pattern in psychology we would like to refer to the chapter by Becker & Schweiker.


1.2 Physics and Computational Neuroscience

Physics is perhaps the natural science in which measurement, and, above all, the mathematical formulation of its results are of the greatest importance. It claims to unravel fundamental natural phenomena by very precise measurements and to record their results in mathematical formulas that are as simple and generally valid as possible, so-called “physical laws”. A simple example of such a law is Newton’s law of gravitation:

$F = G \frac{m_1 m_2}{r^2}$

It states that two bodies with masses m1 and m2 attract each other with a force F, which on the one hand increases with these two masses and on the other hand decreases with the distance r between the two bodies. Unlike the two masses, the distance r enters this equation quadratically, so if the distance is doubled, for example, the force is reduced to a quarter. Finally, G is the gravitational constant, a natural constant that can be determined by experimental measurements of all four quantities. The force F is the gravitational force, which acts universally, and cannot be shielded, between all bodies in the universe. At first glance, such a general statement seems to be very difficult to prove—and indeed, every physical law (just like every hypothesis in psychology) starts out as a mere theory, which has to be substantiated or disproved by experiments and observations. If it is convincingly disproved (falsified), the law is invalid and must be replaced by a new one. If, on the other hand, the law is confirmed, it becomes more convincing with each experiment and is eventually accepted as universally valid, as is now the case with the law of gravitation.6 Newton had derived this law at the end of the seventeenth century, together with Robert Hooke, from the study of planetary motions. Since the gravitational force is very small compared to other fundamental forces, it is difficult to measure experimentally. It was not until about a hundred years later that Henry Cavendish succeeded in such an experiment7: a

6  However, this only applies as a limiting case for comparatively small mass densities and velocities. With increasingly precise measurements, small deviations from Newton’s law of gravity emerged, which could only be fully explained with the help of a new theory, Einstein’s general theory of relativity.
7  Cavendish, H. (1798). Experiments to determine the density of the Earth. By Henry Cavendish, Esq. FRS and AS. Philosophical Transactions of the Royal Society of London, 88, pp. 469–526.


small dumbbell is suspended from a thin wire so that the dumbbell can turn and thereby twist the wire. If you now bring two heavy balls close to the two dumbbell weights, an attractive force is created and the small balls “fall” towards the large ones. However, the stiffness of the wire resists the fall—it drives the balls back to their initial position, and the further away they are from it, the more so. As soon as the gravitational force and this so-called torsional force balance each other out, the dumbbell swings back towards its starting position. All in all, this results in a very slow oscillation, which can be used to calculate the gravitational force. In this way, it is also possible to determine the gravitational constant G in the above equation. The gravitational constant is one of the fundamental constants of nature and its exact determination is still the subject of intense research.8

This simple example also illustrates the fundamental distinction between theoretical and experimental physics: theoretical physicists establish physical laws on the basis of existing observations in logical and mathematical work, while experimental physicists test them in experiments. This division of labor has become necessary due to the immense amount of work involved in both theoretical and experimental research, although each physicist still receives extensive training in both areas.

The field of computational neuroscience attempts to apply the methods of theoretical and computational physics as well as computer science to the brain and its information-processing functions.9 For this purpose, mathematical equations are set up, as in physics, that map certain aspects of brain function and help to explain and understand the observations of neuroscientific experiments. Since brain research takes place on many different levels, this also applies to computational neuroscience. The connection to physics is most obvious at the “lower level”, for example in the description of the electrical properties of individual neurons, the mechanical behaviour of muscle proteins or the thermodynamic diffusion of neurotransmitters through the synaptic cleft between two neurons. All these are topics of biophysics, which directly applies known physical laws to complex, but physically well comprehensible biological objects. One example is the modelling of a nerve cell as an electric circuit: the membrane of such a cell separates two areas with different concentrations of charged particles.

8  Speake, C., & Quinn, T. (2014). Newton’s constant. Physics Today, 67(7), p. 27.
9  Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience (Vol. 10). Cambridge, MA: MIT Press.


As a result, the inside of the cell is electrically negatively charged compared to the outside, similar to a battery. On the other hand, the separation of charges is not absolute: certain charged particles can pass through ion channels, small openings in the membrane. Some of these channels are always open, while others only open when a certain electrical voltage is exceeded. This leads to the characteristic peaks in the voltage, the so-called “spikes”, which are used to transmit information from one nerve cell to another. Now, a simple modelling approach is to model the membrane and the permanently open ion channels as electrical components and the spikes as a threshold value for the membrane voltage and to convert this into corresponding formulae. This gives an equation for the time course of the voltage, with the side condition that the voltage is reset to a lower value as soon as it exceeds a threshold. This so-called “integrate and fire” model10 is remarkably successful in replicating the times at which a neuron produces spikes, although the threshold is an obvious simplification of real biological processes.

Such equations can also be used to model local networks of cells by using a separate equation for each cell and coupling them together via the spikes. In this way, supercomputers can be used to simulate the collective dynamics of networks with several hundred thousand neurons. However, given the number of nearly one hundred billion neurons in the human brain, this method is still only useful for relatively localized networks.

If one wants to model the interplay of different brain areas, it makes sense to choose a “lower resolution” and, instead of modelling individual neurons, to combine whole cell assemblies with thousands of neurons in a single equation. Instead of the membrane voltage, the time course of the mean firing rate of these assemblies is then examined, for example. Such models can then be compared with the results of imaging methods such as fMRI, which is also restricted to the measurement of composite signals from many thousands of neurons. And even coarser modelling is possible, for example by considering entire brain regions as functional units and interconnecting them on the basis of macroscopic data. Such so-called information-processing models are suitable for representing complex patterns of behaviour, thought and emotion in equations, as they are investigated in psychological experiments. Such models also exist for the human mirror neuron system.11

10  Burkitt, A. N. (2006). A review of the integrate-and-fire neuron model: I. Homogeneous synaptic input. Biological Cybernetics, 95(1), pp. 1–19.
11  Thill, S., Caligiore, D., Borghi, A. M., Ziemke, T., & Baldassarre, G. (2013). Theories and computational models of affordance and mirror systems: an integrative review. Neuroscience & Biobehavioral Reviews, 37(3), pp. 491–521.
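The integrate-and-fire idea described above can be illustrated with a minimal sketch in Python. All parameter values are illustrative assumptions and are not taken from any particular study or from the models used in the project.

```python
# Leaky integrate-and-fire neuron: the membrane voltage integrates an input
# current and is reset whenever it crosses a threshold, producing "spikes".
import numpy as np

dt, t_max = 0.1, 200.0            # time step and duration (ms)
tau_m, v_rest = 20.0, -65.0       # membrane time constant (ms), resting potential (mV)
v_reset, v_thresh = -70.0, -50.0  # reset and threshold voltage (mV)
r_m, i_input = 10.0, 2.0          # membrane resistance (MOhm), constant input current (nA)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    # leaky integration of the input current
    v += dt / tau_m * (-(v - v_rest) + r_m * i_input)
    if v >= v_thresh:             # threshold crossed: emit a spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {t_max:.0f} ms, first at {spike_times[0]:.1f} ms")
```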


What all these models have in common is that they represent neurobiological processes and are therefore dependent on data collected using methods outside their own discipline, be it behavioural or imaging data in humans or electrophysiological or anatomical data from mice, rats or monkeys. Computational neuroscience is thus dependent on interdisciplinary cooperation. However, the degree of this cooperation varies greatly in individual cases: On the one hand, there are researchers with a strong mathematical orientation who deal with the fundamental principles of the brain and try to derive new principles through elaborate mathematical analyses—similar to the physical laws of theoretical physics. This requires abstracting very strongly from concrete neurobiology, so that the significance of these results for the brain is not always immediately clear. On the other side of the spectrum is a strongly data-driven approach, in which as many parameters of a model as possible are derived directly from experiments and also tested directly in new experiments. Both approaches (as well as all gradations in between) have their place in neuroscience and each promises complementary insights into how the brain works—that is, into the biological mechanisms of brain activity on the one hand and into the algorithmic processes behind complex behavioral patterns on the other. And both approaches have in common that they are fundamentally quantitative methods that require a precise mathematical description and are determined by numbers—the model parameters.

2 Synergies Through Cooperation Between the Two Subjects

In the above description of the two subjects represented in the project, we have tried to show both their possibilities and their limitations: Neuroscientific psychologists “measure” aspects of human behavior and simultaneously record signals of brain activity. However, this approach cannot tell us anything about the precise physiological mechanisms underlying behavior and activity patterns, which would require invasive methodology. Computational neuroscience, on the other hand, can offer complex mechanistic models of brain function at a wide variety of levels of abstraction, but it relies on a wealth of different sources of data to determine the parameters of these models and to validate them, which, as a theoretical science, it cannot collect itself. Another difficulty is linking models of different levels of abstraction. For example, network models with a small number of cells are excellently suited for investigating the modes of action of neurotransmitters such as dopamine. However, as described above, due to their complexity and the limited performance


of computers, these models cannot be easily applied to the entire brain, so that it remains unclear how dopamine influences behavioural performance in a complex neuropsychological task.

In our project investigating the human mirror neuron system, we try to overcome these limitations of our fields by combining their approaches: In the experimental part, indicators of mirror neuron activity (behavioral measures, fMRI and electroencephalography, EEG) are collected for core processes of social cognition. In addition, factors influencing mirror neuron activity (deactivation of brain areas with transcranial magnetic stimulation (TMS), as well as genotyping in relation to the dopaminergic and oxytocinergic neurotransmitter system) are recorded within the same sample of participants. The TMS manipulation and its observed effects on psychological indicators such as error rate and reaction time on the one hand, and the fMRI and EEG signals on the other hand, serve to substantiate the causal relationship between the mirror neuron system and the core processes of social cognition. Finally, the determination of genetic variations allows for the first time drawing conclusions on the role of the neurotransmitters dopamine and oxytocin for mirror neurons.

In the theoretical part, a two-stage model approach is pursued, which makes it possible to use the experimental data to draw conclusions about the physiological properties and temporal dynamics of the involved cell assemblies. Specifically, at the first level, the results of the fMRI measurements are used to adjust the parameters of a global firing rate-based model of the mirror neuron system by statistical optimization (dynamic causal modeling). At the second level, the results of this global model are used to adjust parameters of a local network model that maps the temporal dynamics of individual neurons. These parameters are then directly related to the physiological properties of the neurons and the synapses that connect them. Furthermore, the effects of modulations by dopamine and oxytocin as well as TMS can be investigated directly at the level of the local network, which allows a deeper insight into their mechanisms of action. The model is also validated by testing predictions regarding specific frequencies in the EEG.

3 Conclusion

A common goal of physics and psychology is the most precise possible description of aspects of the world by means of measurements in the form of numbers. Physics approaches this goal by eliminating interfering influences as completely as possible, whereas psychology approaches it by carefully balanced experimental


conditions and elaborate statistical analyses. Both sciences rely on making as many measurements as possible to minimize the influence of random noise. Both sciences must also relate the numerical results of their measurements back to the reality they are trying to describe. In psychology, this referencing is a central part of scientific discourse under the concept of validity, but even the supposedly more unambiguous results of physics often require precise interpretation, as the example of quantum mechanics shows.12 A central working principle of the natural sciences, which is also shared by physics and psychology, is the fruitful interplay between theory and experiment, in which theoretical models provide the basis for new experiments and experimental results in turn inspire new theories. We have now applied this principle to a common object of study, the human mirror neuron system. Computational neuroscience provides us with an interface between psychology, physics and brain research that allows us to investigate this highly complex object of investigation with appropriate methods: a combination of existing neuropsychological measurement methods on the one hand and the linkage of the resulting data by applying a biologically detailed mathematical model on the other hand. This interdisciplinary approach to the interplay of theory and experiment allows for a deeper understanding of the mirror neuron system, which would not be possible through a purely experimental approach due to limited measurability. This is of crucial importance, as such an understanding could have a decisive impact on the therapy of social-cognitive deficits in mental disorders such as autism. The impaired motor simulation process in autism can be improved without exact knowledge of the underlying neuron populations, but this approach lacks precision. Knowledge of the neuronal processes and the associated neurotransmitters, by contrast, would allow direct manipulation of the functioning of neuronal populations and thus provide a starting point for the development of pharmacological agents. The cell network model can pave the way for computer-simulated, i.e., in silico, studies that can not only significantly advance our knowledge of the human mirror neuron system, but also our ability to develop specific psychopharmaceuticals.

12  Reichenbach, H. (2013). Philosophische Grundlagen der Quantenmechanik. Berlin: Springer-Verlag.

Reflections and Perspectives on the Research Fields of Thermal Comfort at Work and Pain

Susanne Becker and Marcel Schweiker

Thermal comfort and thermal pain are two independent phenomena studied by different disciplines. The research field of thermal comfort deals with the adaptation of the perception of thermal conditions at the workplace; the field of thermal pain deals with the sensory and emotional perception of painful heat stimuli and its adaptation. While research on thermal comfort is conducted, for example, by architects, mechanical or civil engineers, depending on the research focus, the topic of pain is investigated by physicians, psychologists and biologists, among others. Both phenomena are often close to each other in terms of time and space, and despite all the differences between comfort and pain research, the methodological approaches are very similar. Starting from the individual disciplines, this chapter describes the role of numbers in measuring thermal comfort and pain and the interpretation of results thus obtained, the added value of bringing both disciplines together in a joint project, as well as successes and perspectives arising from targeted deviations from standard scientific practice.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

S. Becker Heinrich Heine University Düsseldorf, Düsseldorf, Germany e-mail: [email protected] M. Schweiker Healthy Living Spaces lab, Institute for Occupational, Social, and Environmental Medicine, Medical Faculty, RWTH Aachen University, Aachen, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_13


1 Perception and Adaptation

Humans, as homoiothermic (constantly warm) creatures, have developed buildings and technologies throughout history that allow them to maintain core body temperature within the limits necessary for survival without increasing energy expenditure (through movement, shivering) or water loss (through sweating). Nevertheless, even after more than 100 years of research on thermal comfort, complaints about thermal conditions are among the most common reasons for dissatisfaction in the workplace.1 In this context, the goal of this line of research is to minimize these complaints, while at the same time minimizing the energy required for heating and cooling. Thus, this field of research is very closely linked to current issues of climate change and the energy transition.

The focus of research on thermal comfort lies on physical-physiological-psychological processes, i.e. the interplay between the physical and objectifiable parameters of air temperature, surface temperature, air velocity and air humidity together with a person’s degree of clothing and activity, physiological reactions and the resulting subjective evaluation.2 For a long time, the aim here was to determine “optimal” (mostly constant) temperature ranges, e.g. room temperatures between 21 °C and 25 °C. In contrast, there are current discussions that question these constant temperature ranges. Instead, the aim is to quantify advantages of dynamic temperature ranges that can be influenced by individual and context-dependent factors. This includes the extension of the investigation from influencing variables to parameters of the outdoor climate, the interaction between people and buildings (user behaviour), and effects on a person’s health, work performance and motivation.3 In this context, the perception of thermal environments is permanently modulated by behavioural, physiological and psychological adaptation processes.4 Thereby, it is

1  Schakib-Ekbatan, K., Wagner, A., & Lützkendorf, T. (2012). Bewertung von Aspekten der soziokulturellen Nachhaltigkeit im laufenden Gebäudebetrieb auf Basis von Nutzerbefragungen. Stuttgart: Fraunhofer-IRB-Verlag.
2  Fanger, P. O. (1970). Thermal Comfort. Analysis and Applications in Environmental Engineering. New York: McGraw-Hill.
3  Höppe, P. (2002). Different aspects of assessing indoor and outdoor thermal comfort. Energy and Buildings, 34(6), pp. 661–665. Langevin, J., Wen, J., & Gurian, P. L. (2015). Simulating the human-building interaction: Development and validation of an agent-based model of office occupant behaviors. Building and Environment, 88, pp. 27–45. Cui, W., Cao, G., Park, J. H., Ouyang, Q., & Zhu, Y. (2013). Influence of indoor air temperature on human thermal comfort, motivation and performance. Building and Environment, 68, pp. 114–122.
4  De Dear, R. J., Brager, G. S., Reardon, J., & Nicol, F. (1998). Developing an adaptive model of thermal comfort and preference/Discussion. ASHRAE Transactions, 104, p. 145.


necessary to quantify subjective perception to the extent that it can be used within the framework of standards and guidelines for the planning of future buildings.5 The extension of the individual influencing variables that are considered goes along with the realization that the current issues can only be solved in an interdisciplinary manner, even if specific examples of interdisciplinary projects are rare. Findings expected from such interdisciplinary work are, however, relevant for every individual, especially since in Europe today we spend on average 90% of our time indoors.

Pain research is primarily concerned with the neurobiological and psychobiological processing of pain, ranging from molecular processes and higher central nervous processing to the influences of feelings and thoughts. This is set against the background that pain is a central problem in health care. Pain in the musculoskeletal system is one of the most common complaints in medical practice.6 Chronic pain leads to enormous costs in the health care system and to severe suffering of those affected, especially since available therapeutic approaches are often not or only partially effective.7 A central aspect of pain research is the adaptability of pain perception to factors such as stress, situational context, and expectations, as well as the study of the neurophysiological processes underlying this adaptability. Pain processing is a strikingly dynamic and flexible system that allows such adaptation at multiple levels. For example, rapid, short-term (stimulus-induced) and reversible, but also long-term, non-reversible adaptation is possible.8 Pain can thus also be regarded as an aversive disturbing stimulus to which a person reacts either with (psychological) habituation or increased sensitivity, depending on external, situational and internal, individual factors. Such habituation or increased sensitivity may manifest itself in behavioral changes, for example, when a person responds increasingly quickly and/or strongly with avoidance behavior (e.g., by changing body posture) to such a disturbing stimulus. This, too, can be regarded as an adaptation process.

In order to be able to understand the complex interplay of the various factors and thus also of adaptation processes, it is essential to describe them in such a way that comparisons between individuals or across groups are possible, in order to avoid, for example, conclusions being based only

5  DIN EN 15251. (2012). DIN EN 15251: Eingangsparameter für das Raumklima zur Auslegung und Bewertung von Gebäuden – Raumluftqualität, Temperatur, Licht und Akustik. Berlin: Beuth.
6  Khan, A., Khan, A., Harezlak, J., et al. (2003). Somatic symptoms in primary care: etiology and outcome. Psychosomatics, 44, pp. 471–478.
7  Flor, H., & Diers, M. (2007). Limitations of pharmacotherapy: behavioral approaches to chronic pain. Handbook of Experimental Pharmacology, 177, pp. 415–427.
8  Woolf, C. J., & Salter, M. W. (2000). Neuronal plasticity: increasing the gain in pain. Science, 288(5472), pp. 1765–1769.


on individual characteristics. Perception and its adaptation are quantified to describe, explain and predict related emotions, suffering, therapeutic successes and failures, functionality (in patients) as well as underlying neurobiological processes.

Both topics have in common that a subjective, changing perception is to be quantified. What is needed are methods that make it possible to express changes in the perception of temperature and pain in numbers. Adaptation as a dynamic process changes the meaning of a number, for example of a room temperature, whereby such dynamic changes cannot necessarily be represented well on a rigid scale with equal distances between its categories.9 Parallels can be drawn here with the subjective perception of geographical hazards (see Chapter Höfle). The aim of the joint project “Thermal Comfort and Pain: Understanding Human Adaptation to Disturbing Factors by Combining Psychological, Physical, and Physiological Measurements and Measurement Methods” is, therefore, to understand processes of adaptation in the context of global thermal discomfort, local pain stimuli, and other disturbance factors through the innovative combination of different measurement methods and to determine influencing factors through targeted manipulation. Here, numbers and their interpretation as a person’s subjective evaluation play a critical role. In addition to measuring and understanding adaptation in order to gain knowledge of such socio-economically highly relevant processes, the long-term goal of this research project is to provide methods that enable research into measures for optimizing the handling of disturbing factors also outside the realm of thermal comfort and thermal pain, such as acoustic or visual stimuli.

9  Schweiker, M., Fuchs, X., Becker, S., Shukuya, M., Dovjak, M., Hawighorst, M., & Kolarik, J. (2016). Challenging the assumptions for thermal sensation scales. Building Research & Information, pp. 1–18.

2 How/What Do We Count?

Pure counting, defined as determining the number of elements, is rarely used in both comfort and pain research. Almost all researchers in comfort research use a classical quantitative approach. The data collected (see section How/What Do We Measure?) are aggregated and analyzed using statistical methods. Here, the focus is less on the number of a particular result or the frequency of a variable than on the relationships between two or more factors and the influence of one or more independent variables on the dependent variable, e.g. the influence of indoor temperature and degree of clothing on the


perception of comfort. The aim here is to identify not only correlations but also causal influences by means of carefully planned investigations (see also chapter Halbleib). However, despite careful planning, execution and analysis of these studies, it is often impossible to rule out the possibility that the presumed causal influence is in fact only a correlation and that a latent unknown third variable has triggered the apparent causality, due to the complexity of the subject matter. Recently, the consideration of dynamic processes has increasingly come to the fore. Humans are no longer seen only as passive recipients of thermally stationary experiences, but as active, self-adapting designers of transient environmental conditions. In this context, adaptive processes are both of a readily quantifiable and predictable physiological nature and of a more difficult to predict behavioural and psychological nature. This requires the development of new approaches for the analysis of data (counting) as well as for the collection of data.

In pain research, too, the focus lies on classical quantitative approaches with corresponding statistical procedures. As in comfort research, the number of a particular event, i.e. the frequency of a variable, is rarely the focus of interest, but rather associations between different variables and influencing factors. Exceptions to this are epidemiological approaches and questions, which (among others) aim to record how frequently a particular condition, such as musculoskeletal chronic pain, is present in the general population.10 Qualitative methods are used less often in pain research, mostly for clinical questions in therapy research. Often, however, these studies are problematic because the study conduct and data collection do not meet scientific standards (e.g., due to the lack of a suitable control group) and thus no clear conclusions can be drawn. Typical problematic examples of this are studies on homeopathic applications or acupuncture.11 The statistical procedures common in quantitative approaches, through which collected data are aggregated, are a sensible approach, because they enable conclusions about the entire population of interest (patient groups, risk groups, healthy individuals). Such conclusions are fundamental and necessary to understand phenomena such as factors that modulate the perception of acute and chronic pain and to identify their mechanisms. However, limitations of such a quantitative approach

10  Elliott, A. M., Smith, B. H., Penny, K. I., Smith, W. C., & Chambers, W. A. (1999). The epidemiology of chronic pain in the community. Lancet, 354(9186), pp. 1248–1252.
11  Boehm, K., Raak, C., Cramer, H., Lauche, R., & Ostermann, T. (2014). Homeopathy in the treatment of fibromyalgia – A comprehensive literature-review and meta-analysis. Complementary Therapies in Medicine, 22(4), pp. 731–742.


are obvious: statements about individuals are almost impossible. This becomes very clear when, for example, an attempt is made to predict the therapeutic success of a particular treatment for a particular patient with his or her individual characteristics. Increasingly, attempts are made to solve these problems by applying complex statistical procedures such as multivariate approaches or by mathematical modelling of the mechanisms of perception, taking into account different (individual) weights of different variables. Such procedures attempt to infer underlying mechanisms that causally explain changes in perception. Despite the application of sophisticated statistical methods, however, it is usually not possible to conclusively determine cause and effect of such changes in perception. Nonetheless, such methods offer important new alternatives to improve the accuracy of predictions.

In contrast, the joint project investigates methods of counting that currently receive little or no attention in the two research fields. These methods are qualitative approaches, such as text analysis (counting frequencies) as well as the description (narration) of individual cases.
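To give a concrete impression of what counting frequencies in text analysis can look like, the following minimal Python sketch counts words in free-text answers. The answers are invented for illustration and are not material from the project.

```python
# Simple word-frequency count over hypothetical free-text answers,
# as an elementary form of quantitative text analysis.
from collections import Counter
import re

answers = [
    "The room felt slightly warm but still comfortable.",
    "Too warm for concentrated work, I opened the window.",
    "Comfortable, although my hands were a bit cold.",
]

words = Counter()
for text in answers:
    words.update(re.findall(r"[a-zäöüß]+", text.lower()))

print(words.most_common(5))
```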

3 How/What Do We Measure?

Due to the predominant quantitative approach, comfort and pain research is almost inconceivable without measurement. A distinction must be made between the object to be measured, the measured variables, the modulatory variables, the measuring devices and the measuring methods/procedures. The primary object of measurement is the human being. Their state with regard to thermal comfort and pain is the primary measured variable and object of investigation. Concerning pain research, it is worth mentioning that measurement objects, particularly in neuroscientific, biological or physiological research areas, can also be animal models, nerve cells and also single molecules. However, the aim of these research areas is also to understand human pain perception and chronic pain.

In both disciplines, a distinction can also be made in the measurement of perception as to whether implicit or explicit processes are recorded. Implicit processes describe perceptual processes of which a person is not aware, e.g. peripheral physiological processes. Explicit processes are perceived consciously and can be described accordingly. This distinction overlaps with the distinction between objective and subjective measures. Objective measures, similar to implicit processes, are obtained through physiological measures or measures of (not necessarily conscious) behavior, whereas subjective measures are obtained through self-reports. Objective measures, however, need not be implicit. For example, an


individual may be well aware of her skin temperature, especially if it is noticeably cold. However, only subjective measures allow insights into, for example, the reasons for observed behaviour or changes in physiological measures—for example, increased skin temperature may be a sign of discomfort from heat or a sign of increased stress. The reasons for behavior, however, are not always amenable to introspection, i.e., self-observation, and this self-observation and attribution of causes may also be flawed. However, since we have no other means of capturing such reasons, subjective measures remain the means of choice.

Due to the complexity of perception, it is indispensable to measure numerous modulatory variables. In comfort research, these include physical parameters of the indoor climate surrounding the person (e.g. temperature or humidity), the outdoor environment, objectifiable characteristics of the room (window size, wall structures, control options), personality characteristics of the respective person (physiological, such as weight, height, and psychological, such as preferences), as well as subjective evaluations, such as ratings on a scale from cold to hot.12 This is done with sensors, data loggers, building inspections, surveys and questionnaires. The specific point in time of the measurement as a further number is also relevant for the assignment of subjective evaluations to physical measured values and also as a further modulatory variable: the temporal sequences of stimuli influence the perception. A warm room is perceived as warmer when entering it from a cool room compared to entering it from an equally warm room.

The exact definition of the measurement instruments and questions to be used is determined in advance by the objective of the study. In addition to modulatory factors (e.g. temperature), additional disturbing variables (e.g. noise level) are assessed. It also depends on the respective research question whether variables are collected continuously or only once at the time of a survey. When considering user interventions to return to the individual thermal comfort range, such as switching on or turning up the heating, it makes sense, for example, to continuously record thermal room conditions. This enables the observation of the conditions at the time of the user intervention as well as the temporal course prior to the action. The same applies to the analysis of subjective comfort ratings by means of questionnaires, since here too both the conditions at the time of the action and the preceding dynamics play a role.

12  Schweiker, M., Brasche, S., Bischof, W., Hawighorst, M., Voss, K., & Wagner, A. (2012). Development and validation of a methodology to challenge the adaptive comfort model. Building and Environment, 49, pp. 336–347.
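The continuous recording described above can be related to a user intervention in a few lines of code. The following sketch uses pandas; the logger values, timestamps and column names are invented assumptions rather than data from the project.

```python
# Aligning a continuously logged room-temperature series with the time of a
# user intervention, to inspect the conditions at the action and just before it.
import pandas as pd

# hypothetical one-minute logger data
log = pd.DataFrame({
    "time": pd.date_range("2021-07-01 08:00", periods=240, freq="min"),
    "room_temp_c": 24.0,
})
log.loc[120:, "room_temp_c"] = 27.5               # assume the room warms up
intervention = pd.Timestamp("2021-07-01 10:30")   # e.g., the window is opened

at_action = log.loc[log["time"] == intervention, "room_temp_c"].iloc[0]
before = log[(log["time"] > intervention - pd.Timedelta("30min")) &
             (log["time"] <= intervention)]

print(f"temperature at intervention: {at_action:.1f} °C")
print(f"mean over the preceding 30 min: {before['room_temp_c'].mean():.1f} °C")
```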


Similar to comfort research, objective parameters such as neurobiological processes, physical stimulus intensities in experiments and subjective evaluations, e.g. with regard to perceived pain intensity and aversiveness, are collected in pain research. Also, sensors, stimulus devices and measurement systems such as electroencephalography (EEG) or magnetic resonance imaging (MRI), questionnaires and behavioural observations are used. Exactly how measurements are made (e.g., continuously vs. once) is again determined by the research question. The measurement of biological processes usually requires special methods (e.g. EEG, MRI), whereas subjective experiences are recorded by introspective procedures such as questionnaires or interviews. The research question also determines the analyses and thus how the variables of interest are collected, i.e. whether they are collected dichotomously, categorically or continuously, for example.

Methods of measuring are the approaches for data acquisition. In comfort research, a classic distinction is made between field and laboratory studies. The first studies on thermal comfort at the beginning of the twentieth century were laboratory studies in which procedures that are still common today were used.13 A certain number of participants is exposed to mostly static thermal conditions for a certain time and asked about their evaluation of the conditions. This is repeated for different thermal conditions. The climatic chambers used for this type of experiment are usually room-sized, windowless, and thus only artificially lit cubicles inside a larger enclosure. Although these conditions are unrealistic in several respects, these types of experiments are still very popular today. In contrast to the laboratory investigations are the field investigations, in which the measuring instruments are transported to real workplaces and the workers are questioned in situ, i.e. at their real workplace, about their current evaluation of the thermal conditions. In parallel, physical room parameters are measured, and the person’s degree of clothing and activity are queried or estimated by means of observation. The advantage of these field investigations is their high degree of realism. Disadvantages are (1) the poorly controllable conditions (one has to record the conditions that prevail at the moment), (2) uncertainties in the accuracy of the estimates of clothing and activity levels, which have a significant influence on the evaluation,14 and (3) uncertainty about how established relationships can really be attributed to causal influences (see chapter Halbleib).

13  Houghten, F. C., & Yagloglou, C. P. (1923). Determination of the comfort zone. ASHVE Transactions, 29, p. 361.
14  Fanger, P. O. (1970). Thermal comfort. Analysis and applications in environmental engineering. McGraw-Hill, New York.


An alternative approach is offered by experiments in the field laboratory LOBSTER (Laboratory for Occupant Behaviour, Satisfaction, Thermal comfort, and Environmental Research) in Karlsruhe.15 The LOBSTER (see Fig. 1) comprises two realistic office rooms in which the participants can see and relate to the outside space through real windows and—depending on the respective test scenario—open the windows, lower and raise the external blinds and sunshades, change the speed of a ceiling fan, change the heating or cooling setpoint temperature, control the ventilation system and/or change the artificial light. At the same time, the thermal conditions in the interior can be controlled via the existing building technology, so that comparable conditions can be created despite the participants’ possibility to intervene.16

Fig. 1  Participants in one of the offices of the LOBSTER field laboratory. (own representation)

15  Available at: http://lobster-fbta.de.
16  Schweiker, M., & Wagner, A. (2015). The effect of occupancy on perceived control, neutral temperature, and behavioral patterns. Energy and Buildings, 117, pp. 246–259.


In pain research, experimental approaches dominate, although this can vary widely between basic and clinical research. In the field of human research, this ranges from psychophysical investigations, in which various objectively or subjectively detectable reactions of a person to physical stimuli are measured under well-controlled laboratory conditions, to clinical observations, in which, for example, the reaction of patients to certain therapeutic measures is observed. Often, such experiments also investigate how several people influence each other’s perceptions and actions and which neurobiological mechanisms underlie these effects. One example is the mirror neuron system, which is considered to be the basis of social cognition (see chapter Mier & Hass). This system also plays a role in the perception of pain, because this perception is strongly influenced by the reactions of the environment.17 In order for such measurement procedures to allow the description of behaviour and perception by numbers and to allow unambiguous conclusions, adequate control conditions are necessary. For example, in therapy research, a treatment to be tested is usually compared with a placebo condition.

In contrast to the usual methods of the two research fields, quantitative and qualitative methods are combined in the project Thermal Comfort and Pain. Thus, on the one hand, the physical boundary conditions (e.g. room temperature, stimulus temperature) are collected together with the subjective evaluations, as is classically done in both research fields. On the other hand, however, free text answers and interview techniques are used, which originate from qualitative research. The aim here is to develop, apply and validate quantitative and qualitative procedures and methods in order to improve the measurement of adaptation processes and to determine how the measurement instruments (e.g. type of scale) influence the results.

4 How Do We Recognize and Interpret Patterns?

In both fields of research, the word “pattern” is rarely used. Rather, the term model is used and discussed. In order to achieve such models, which are supposed to explain mechanisms and allow predictions, associations and modulatory factors are described. Ultimately, however, such a model compares patterns, namely a

17  Singer, T., Seymour, B., O’Doherty, J., Kaube, H., Dolan, R. J., & Frith, C. D. (2004). Empathy for pain involves the affective but not sensory components of pain. Science, 303(5661), pp. 1157–1162; Flor, H., Kerns, R. D., & Turk, D. C. (1987). The role of spouse reinforcement, perceived pain, and activity levels of chronic pain patients. Journal of Psychosomatic Research, 31(2), pp. 251–259.


hypothetical (model) with an empirical pattern (collected data) and describes how well these patterns match. Typically, extensive data sets comprising numerous variables are collected in both research fields. These variables are related to each other e.g. by regression analyses, path analyses, discriminant analyses or approaches from graph theory. Relating these variables in this way is a method by which a pattern or model is generated. An individual can be characterized by how his or her values and characteristics map onto this pattern. This corresponds to the complex statistical procedures mentioned above, by which an attempt is made to project the findings from mean value analyses back onto the individual. In comfort research, such statistical models are related to physiological processes (e.g., changes in blood flow from the body core to the extremities to regulate heat release via vasoconstriction or vasodilation) and/or physical laws (e.g., evaporative cooling of sweat as a function of ambient air temperature/humidity) to develop a model that represents the human thermoregulatory system. In pain research, patterns—even if they are not called that—are of great importance in diagnostics. Through the occurrence of a certain pattern, i.e. the joint occurrence of certain symptoms, a patient is assigned to a certain diagnostic category (and thus often also to a certain therapeutic approach).18 Even outside the clinical setting, a type of “diagnosis” is made on the basis of occurring patterns. For example, in the context of a statistical analysis, the reactions of test persons are categorized and related to the characteristics of these participants. For example, an attempt is made to categorize subjects according to whether they respond well to a placebo based on how anxious (situation-specific and more broadly as a personality trait) they are. The categorization of comfort groups is also increasingly being addressed in comfort research.
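As a toy example of relating several variables to each other in this way, the following sketch fits a simple linear model by least squares. The data, the chosen predictors and the resulting coefficients are invented for illustration and do not correspond to any validated comfort model.

```python
# Relating two influencing variables to a thermal sensation vote by ordinary
# least squares, i.e. fitting a simple statistical "pattern" to observations.
import numpy as np

rng = np.random.default_rng(1)
n = 200
room_temp = rng.uniform(19, 29, n)        # °C
clothing = rng.uniform(0.3, 1.2, n)       # clo (clothing insulation)
# hypothetical sensation votes on a -3 (cold) ... +3 (hot) scale
sensation = 0.4 * (room_temp - 24) - 1.5 * (clothing - 0.7) + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), room_temp, clothing])
coef, *_ = np.linalg.lstsq(X, sensation, rcond=None)
print(f"intercept = {coef[0]:.2f}, per °C = {coef[1]:.2f}, per clo = {coef[2]:.2f}")
```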

5 What Is the Significance of Patterns and Numbers?

Patterns are used, if at all, for preparation or as a basis for model building. However, as described above, the term “pattern” appears very rarely if at all in both research fields and a clear definition is lacking. If one also counts the matching of hypotheses (models/model-like patterns) with empiricism (collected data/empirical

18  Rief, W., & Martin, A. (2014). How to Use the New DSM-5 Somatic Symptom Disorder Diagnosis in Research and Practice: A Critical Evaluation and a Proposal for Modifications. Annual Review of Clinical Psychology, 10, pp. 339–67.


patterns), then patterns are central to arriving at new insights. This is equally true in clinical diagnostics, but here, too, patterns are rarely mentioned. In contrast, the importance of numbers is very high, because research in both research areas is largely based on quantitative approaches. Both fields of research aim to express the subjective perception of a stimulus (comfort/pain) in numbers and to define cut-off/threshold values on this basis. In comfort research, these cut-off values are incorporated into the standards and guidelines for the planning and operation of buildings. Furthermore, developed models can be used, e.g. within the framework of simulations, to come to conclusions on the influence of different control options (openable windows, ceiling fan that can be switched on/off) and related changes in cut-off values of thermal comfort on the building energy demand. Empirically developed models thus represent the starting point for numerical simulations (see chapter Krause). In pain research, for example, groups of patients are characterized based on threshold values, and therapy measures or dosages of drugs are recommended on an individual level, partly based on ratings on scales. Also, by changing these thresholds or ratings, the effectiveness of a factor in modulating pain perception is expressed as a number or as a change in a number. Based on such descriptions, models of the mechanisms of pain, both for acute and chronic pain stages, are formulated, which on the one hand advance our understanding of the phenomenon of pain, and on the other hand can be used as a basis for therapy recommendations.19
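To make the idea of cut-off values concrete, the following sketch expresses the probability that a room temperature is rated as comfortable with a simple bell-shaped curve and reads off a threshold. The curve, its parameters and the cut-off of 0.8 are purely illustrative assumptions, not values taken from any standard or from the project’s data.

```python
# Hypothetical probability of a "comfortable" vote as a function of room
# temperature, and the temperature range lying above a chosen cut-off value.
import numpy as np

def p_comfortable(temp_c, optimum=23.0, width=2.5):
    """Hypothetical probability of a 'comfortable' vote at a given temperature."""
    return 1.0 / (1.0 + ((temp_c - optimum) / width) ** 2)  # bell-shaped, peaks at the optimum

temps = np.arange(18, 31)
probs = p_comfortable(temps)
cutoff = 0.8                                  # hypothetical threshold
comfortable_range = temps[probs >= cutoff]

print(dict(zip(temps.tolist(), np.round(probs, 2).tolist())))
print("temperatures above the cut-off:", comfortable_range.tolist())
```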

6 What Is a Scientific Result for Us? What Perspectives Does the WIN Project Offer?

Comparable to other disciplines, a scientific result in the two disciplines presented here is accompanied by a gain in knowledge. This gain in knowledge can be based on both a confirmed and a rejected hypothesis; both are important findings, even if a confirmed hypothesis is easier to publish. A gain in knowledge is accompanied by an increased understanding of the relationship between two or more factors, the interplay and reciprocal influence of variables, or a further component that can better explain the complex phenomena of thermal comfort or pain. The basic aim is to verify/falsify hypotheses, but also to formulate new, more advanced hypotheses. The type of gain of knowledge can be manifold. In the field of comfort research, such a gain in knowledge can be, for example, findings with regard to the

19  Arendt-Nielsen, L. (2015). Central sensitization in humans: assessment and pharmacology. Handbook of Experimental Pharmacology, 227, pp. 79–102.


significance and/or the size of an effect of one or more modulatory factors on the subjective perception of thermal comfort. Moreover, this can be findings on the influence of objective variables, such as temperatures, properties of buildings or physiological reactions, or their effect through adaptive processes on subjective perception. In pain research, such a gain in knowledge can be findings on the connection between, for example, personality traits, such as the tendency to react anxiously, and the perception of pain, or about the pain-relieving effects of a particular method and in how many of the cases studied such a pain-relieving effect occurs.

In the project Thermal Comfort and Pain we expect to gain knowledge on the understanding of adaptation mechanisms as well as on methodological aspects of the measurement of thermal comfort and pain. With respect to adaptation mechanisms, our research will provide new insights into the quality and quantity of factors that modulate these mechanisms. To this end, we have developed an approach that attempts to empirically test the question of the psychological equidistance of categories on a rating scale and, in addition, takes into account the context in which the scale value was recorded. In this approach, participants freely arrange words that refer to the intensity of perceived temperature or pain on a continuous line to express how large they psychologically perceive the distance between the words to be. This procedure is accompanied by an interview technique known as the “speak out loud” technique. This means that while arranging, participants freely report why they arranged the concepts in a certain way. In this way, it can be explored how the abstract concept is constructed that participants use when thinking about the terms and their relations under neutral conditions (neutral temperature or no pain). For this purpose, studies were conducted with a total of 62 participants. Figure 2 shows a selection of how the individual subjects positioned words describing thermal sensation on the scale and which of the terms they associated with a comfortable sensation. This illustrates not only large interindividual differences but also that the assumption of equidistance is on average not acceptable.

Fig. 2  Position of the verbal anchors of the thermal sensation scale (1 cold, 2 cool, 3 somewhat cool, 4 neutral, 5 somewhat warm, 6 warm, 7 hot) as drawn by individual participants, together with the anchors each participant rated as comfortable. This contradicts the classical assumption of an equidistant distribution and the classification of the middle three anchors on thermal sensation (slightly cool, neutral, slightly warm) as a comfortable range. (own representation)

After the participants arranged the terms on the scale, they were brought into controlled conditions in the LOBSTER field laboratory (different room temperatures or heat pain stimuli on the arm). The current conditions the participants experienced were rated by the participants on continuous standard scales. With respect to the interview conducted beforehand, it can be analyzed whether the ratings of the conditions fit the concept explored in the interview or whether the evaluation standards change due to the context. As an example of the results from these studies, Fig. 3 shows the probability that a certain room temperature is rated as comfortable. If one analyzes the data with the commonly used standard methods, which ignore individual differences

and contextual influences, a significantly different distribution results than with the methodology developed here. In addition, the standard method does not show significant differences between seasons, while the method developed here shows that the temperatures perceived as comfortable are more widely distributed in summer than in winter.

Fig. 3  Comparison of the probability curves of feeling comfortable as a function of room temperature [°C], based on the classical assumptions of comfort research and on the method developed in this project (personal zone), shown separately for summer and winter. (own representation)

Based on these results, a new field of applied research might emerge. By measuring and understanding adaptation mechanisms and their modulatory factors, it may become possible to accelerate or facilitate heat adaptation and thereby sustainably reduce the stress level induced by warm indoor climates. In the long term, such scientific results could be applied preventively in advance of announced heat waves. For example, particularly endangered groups of people, such as elderly persons with weakened health, could be prepared for a heat wave by specifically triggered adaptation. With regard to patients with chronic pain, the results of this project may be used to increase the perception of comfort in the clinical context and thus provide relief for the patients and maintain their functionality in the work context.

Regarding methodological aspects, the results of this project could extend current methods in both disciplines through the methods applied and evaluated in these studies. This is expected to provide a new starting point and an important extension for future research projects. Furthermore, an important contribution will be made to current discussions in comfort research regarding the applicability of existing methods and models.20

20 Schweiker, M., Fuchs, X., Becker, S. et al. (2017). Challenging the assumptions for thermal sensation scales. Building Research & Information, 45(5), pp. 572–589.


The expected scientific insights mentioned above are only possible due to the present interdisciplinary work: first, through the collaboration of comfort and pain research and, second, through the interdisciplinary exchange in the network of the WIN-Kolleg, especially with disciplines with stronger qualitative approaches. Psychological pain research provides, for example, important knowledge about the assessment of perception and its representation in numbers. Both in the clinic and in research, the use of scales to describe perception and experience is very common. There is thus extensive knowledge about factors that modulate such assessments and hence the effects of different thermal conditions on human physiology and on thermal sensation.

7 Conclusion One focus of the project Thermal Comfort and Pain was the development and application of qualitative methods, which had previously played only a subordinate role in both research fields. The exchange in the WIN-Kolleg resulted in novel insights for both disciplines, which broadened the horizon of their own research field and further allowed directly deriving methodological approaches for the project. It was shown that the jointly developed methodological approach makes it possible to quantify the influence of subjective factors and the context of an investigation on the results of quantitative methods by combining qualitative and quantitative elements. Qualitative methods make it possible to understand quantitative methods and results much better, which is a key insight, especially for the prediction of experiences and behaviour desired in the research fields. Furthermore, this approach allows not only to improve the measurement of the dynamic perception of thermal comfort and pain, but also to better understand and explain the underlying ­processes.

Measuring and Understanding the World Through Geoinformatics Using the Example of Natural Hazards Bernhard Höfle

New digital data is being created at every moment in an unprecedented quantity, such as new web content, satellite images or photos taken with a smartphone. We learn about natural disasters via digital media, especially via websites with user-­ generated content and via “social media”, where information from the scene of the event can be shared by affected persons but also by emergency forces. Geoinformatics considers spatially referenced observations as digital geodata that are linked to a specific location on Earth. This spatial linking of a digital dataset can be done by various methods, such as terrestrial surveying, global navigation satellite systems (e.g. GPS), by specifying a street address or by using the IP address of a digital device. By this we know the “where”—the spatial reference of the information. The term “digital geodata” thus encompasses a very large and heterogeneous spectrum of digital datasets—for example Facebook messages or satellite images—but with the common feature that the data set can be assigned to a location on Earth. This makes it possible to spatially overlay local subjective observations of people with objective (natural) scientific geodata.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

B. Höfle (*) Institute of Geography, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_14


The scientific models currently predominantly used to analyse potential natural hazards are based on quantitative (geo)data and quantifying methods.1 Particularly in the modelling of hazard processes (e.g. floods), scientific-technical approaches are used to find out, for example, which buildings would be affected by a flood wave.2 As is well known, nature knows no catastrophe and the human factor plays a decisive role in the study of natural hazard risk.3 Only by spatially overlaying the potential natural hazard processes with the vulnerability of people can the natural hazard risk be analysed. It is precisely at this point that user-generated geodata, interdisciplinary natural hazard research and geoinformatics come together. This research aims to integrate participatory contributed digital geodata on the risk perception of the local population into the scientific analysis of natural hazards and thus to extend purely quantitative modeling. The risk of flooding in the study area Santiago de Chile is investigated. This results in two innovative aspects that need to be explored from the perspective of geoinformatics. First, how can risk perception and implicit knowledge of the local population, which cannot be measured with technical devices, be formalized and made quantifiable in user-generated geodata? Second, how can the collection of geospatial data in our concept not be done by science alone? Citizens actively participate in the collection (user-generated geospatial data) and are also the subjects of the study. Thus, counting, measuring, capturing and understanding is not done exclusively in a coherent way by academic science. This participatory approach of data collection can be classified as citizen science.4 In this project, the connection between science and citizen science will be made through new methods of geoinformatics, which will be used, among other things, to locate and map risk perception in space.

1 Felgentreff, C., & Glade, T. (Eds.) (2007). Naturrisiken und Sozialkatastrophen. Heidelberg: Springer Spektrum.
2 Veulliet, E., Stötter, J., & Weck-Hannemann, H. (Eds.) (2009). Sustainable natural hazard management in alpine environments. Heidelberg: Springer.
3 Frisch, M. (1981). Der Mensch erscheint im Holozän. Eine Erzählung. Frankfurt am Main: Suhrkamp.
4 See, L., Mooney, P., Foody, G., Bastin, L., Comber, A., Estima, J., Fritz, S., Kerle, N., Jiang, B., Laakso, M., Liu, H.-Y., Milčinski, G., Nikšič, M., Painho, M., Pődör, A., Olteanu-Raimond, A.M., & Rutzinger, M. (2016). Crowdsourcing, Citizen Science or Volunteered Geographic Information? The Current State of Crowdsourced Geographic Information. ISPRS International Journal of Geo-Information, 5(5), p. 55.


1 From Counting and Measuring to Understanding in Geoinformatics 1.1 The Digital Representation of the World—The View of Geoinformatics Counting, measuring and understanding in geoinformatics directly depends on the view of geoinformatics on the world and must therefore be considered in advance. Geoinformatics is a methodological discipline and spatial science that deals with the development of new methods for the acquisition, management, analysis and visualization of digital geodata. Geographic phenomena (humans and the environment) are thus studied in the digital spatial or spatiotemporal representation of the world using both quantitative and qualitative methods that explicitly consider spatial properties (e.g., geometry) and relationships (e.g., spatial distances, spatial statistics). This explicit focus on space and also time is the center of geoinformatics and the difference to other disciplines (e.g. computer science). Until a few years ago, the acquisition and analysis of digital geodata was reserved for experts from science and practice. Through technological progress— such as the Internet and GPS, which are available in smartphones—more and more heterogeneous groups of people (including non-experts) can capture, visualise, share and, if necessary, also analyse digital geodata in a computer-based way. Simple analyses by non-experts can happen by using the captured geodata in an application (e.g. route planner) or by controlling the quality of the captured geodata by non-experts themselves.5 By evaluating the captured data, for example on the risk of forest fires, the danger to one’s own house or possible escape routes can be individually assessed by people with very good local knowledge.6 The sum of digital geodata results in a digital representation of the world in space and time. An example for the two-dimensional representation of the earth by digital geodata is an interactive map in the navigation system of the car, for a three-­ dimensional representation we know virtual digital globes (e.g. Google Earth). In general, both visible, material geographical phenomena and objects (e.g. rivers, buildings) and non-visible, immaterial phenomena (e.g. social structures, risk  OSM Community (2016, 11/11). OSM Tasking Manager/Validating Data. Retrieved from http://www.wiki.openstreetmap.org/wiki/OSM_Tasking_Manager/Validating_data. 6  Ferster, C. J., Coops, N. C., Harshaw, H. W., Kozak, R. A., & Meitner, M. J. (2013). An exploratory assessment of a smartphone application for public participation in forest fuels measurement in the wildland-urban interface. Forests, 4, pp. 1199–1219. 5


perception) can be spatially represented and thus analysed. Two fundamental concepts of spatial representation are distinguished: first, topographic space, in which objects and phenomena can be uniquely placed, for example, with XY coordinates or an address specification, such as in a topographic map. In topographic space, objects are close to each other when having a small spatial distance in between. Besides, secondly, there is the relational (topological) approach, which focuses on the relationships between objects. In the simple case, this may be a network of institutions engaged in the analysis of natural hazards. The proximity between objects is represented here by the closeness in the network, such as the intensity of cooperation (and not by the topographic proximity of the institutions’ buildings in meters). The respective view also determines how to count and measure. What is common to both representations, and this should be emphasized, is that simplifications, selections, generalizations and aggregations take place per se through the digital representation of reality in a model. In geoinformatics, we thus count and measure in digital representation and can also only consider aspects that can be represented digitally at all. We can represent and count the buildings threatened by floods in topographic space in the form of a digital map. However, it becomes more complex to represent subjective local knowledge and experience in dealing with natural hazards digitally and in space without losing important or possibly even essential aspects. This challenge—measurable/non-measurable, objective/subjective, qualitative/ quantitative—is generally solved in geography by multi-method approaches, which use procedures from natural sciences, social sciences and humanities and combine them in a superordinate synthesis. Digital representation is complemented in geography and related disciplines by field campaigns, the physical visit to the study area. This physical presence in space allows researchers to use all their senses for understanding and to overcome the reduction to the digitally representable and represented in geodata sets. Despite big data on everyone’s lips and permanent satellite observation of all parts of the world, the field campaign is and remains an important element of geographical research in order to better understand and explain complex relationships between people and society in context. The possibilities offered by user-generated geodata, which contain quantitative and qualitative information contributed by the local population and thus enable the geographer to conduct a virtual field campaign, are now all the more interesting. Research must now find out to what extent, for example, the quality and reliability of the contributed data, the protection of privacy and the exclusion of groups of people through digital media allow the use of user-generated geodata for scientific analyses. One concrete contribution is this research project.
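A minimal sketch may make the two views of space more concrete. Everything in it (coordinates, institution names, cooperation weights) is invented for illustration; it only contrasts metric proximity in topographic space with closeness in a relational network.

```python
from math import dist

# Topographic space: objects are placed by coordinates; proximity is metric distance.
# (Illustrative latitude/longitude pairs; real analyses would use projected coordinates.)
locations = {
    "HAdW": (49.41, 8.71),
    "building_near_river": (49.42, 8.70),
}
metric_proximity = dist(locations["HAdW"], locations["building_near_river"])

# Relational (topological) space: proximity is closeness in a network, here the
# (made-up) intensity of cooperation between institutions, not a distance in metres.
cooperation = {
    ("civil_protection", "municipality"): 0.9,        # strong tie = "close"
    ("civil_protection", "research_institute"): 0.2,  # weak tie = "distant"
}
```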


1.2 Counting The number plays an important role in geoinformatics. Digital signs and numbers become data through a given syntax, which can be analyzed with computer-based methods. Both spatial and object-related aspects are represented by numbers: One example is coordinates, i.e. numbers that specify a spatial reference of a geo object, such as the building of the Heidelberg Academy of Sciences (HAdW) with the geographical coordinates 49.41 ° North and 8.71 ° East. In addition, properties of geo objects are expressed by numbers, such as the height (in meters) or the year of construction of the HAdW building. Counting in the sense of determining the number of objects as an absolute integer is usually performed by the computer in geoinformatics and can be done at two different levels. First, counting is done when geospatial data is collected and thus the quantity of a particular object at a particular place and time is determined. Examples of this would be counting buildings in a certain area that were destroyed by a tsunami. Second, computer-based methods can be used to count in digital spatial representation. For this purpose, mostly methods of spatial aggregation, spatial statistics and spatial overlaying are used. Aggregation would be the number of buildings for the whole of Heidelberg and a spatial overlay would be counting buildings that are located near the Neckar River. In the field of spatial statistics, the main focus is on interrelationships (correlations), which also depend on the location in space. The derivation of hot or cold spots of a certain phenomenon could be counted among these, such as the street intersections in Heidelberg with the most or the fewest accidents. The computer thus counts the accidents per intersection. This counting happens in the designed algorithm and the geoinformatics thus determining what should be counted how and where by the computer. Usually, the amount of data is so large that manual counting by a person is not possible. The mere act of counting can already allow initial conclusions to be drawn about a geographical phenomenon. However, most analyses use multiple data sources and examine several spatial and non-spatial factors simultaneously in order to understand and explain the how and why of a phenomenon. Why are there more accidents at certain road intersections? This needs to be related to traffic volume, what kind of accidents occurred, who was involved, etc. to better understand the cause. Due to a large amount of data (e.g. all traffic accidents in Germany), ­automatic computer-based methods of spatial statistics are used here, which can, among other things, determine correlations between variables of a phenomenon, taking into account the location. The variables can be available in different scales,


such as ordinal scale (e.g. subjective degree of destruction of a building in 5 classes), nominal scale (e.g. building destroyed or not destroyed) or in metric scale (e.g. the destroyed building area in square meters).
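The two levels of computer-based counting described above can be illustrated with a deliberately tiny, hypothetical dataset; the coordinates, the 100 m threshold and the intersection labels are assumptions made only for this sketch, not real geodata.

```python
from collections import Counter
from math import hypot

# Hypothetical mini-dataset; coordinates in metres in a projected reference system.
buildings = [(120, 80), (300, 400), (650, 90), (980, 30)]
river = [(0, 0), (500, 0), (1000, 0)]        # river approximated by a polyline's vertices
accidents = ["K1", "K1", "K2", "K1", "K3"]   # accident records keyed by intersection id

def min_distance_to_river(pt):
    """Distance from a point to the nearest river vertex (crude stand-in for a real overlay)."""
    return min(hypot(pt[0] - vx, pt[1] - vy) for vx, vy in river)

# Spatial overlay: count buildings within 100 m of the river.
n_near_river = sum(1 for b in buildings if min_distance_to_river(b) <= 100)

# Spatial aggregation: count accidents per intersection.
accidents_per_intersection = Counter(accidents)
print(n_near_river, accidents_per_intersection.most_common(1))
```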

1.3 Measuring Geoinformatics can basically work with all digital geodata whose measured values (and also subjective observations) can be located in topographic or relational space. Thus, the spatial component (e.g. position on the Earth’s surface) and also the object-­describing properties (e.g. water level of a river) are measured and available for computer-based analysis. For example, the position of an observation may be recorded using GPS and the observation itself may be a measurement of the rate of movement of a landslide. The recording of geodata is an expression of measuring and at the same time of counting. The actual focus of measurement in geoinformatics lies in the determination of spatial properties and relationships that are present in the digital geodata sets. Thus, we “measure” in the representation of the geodata and not in reality (e.g., as in a laboratory experiment). Measurement is very diverse and can range from a simple measurement of the height of a building in a 3D model to the measurement of spatial relationships (correlation) between several recorded variables, such as the spatial relationship between flood risk and distance to the river, or between academic rates and the number of single households. Here, the theoretical principle applies that spatially close objects also have more similar properties than objects at greater distances from each other.7 It is thus statistically more likely that the neighbour has a similarly high income than a household at a great distance. This measurement thus also involves a comparison—in concrete terms, a comparison of observations in space. The acquisition of geodata, which includes counting and also measuring, represents a certain selection (selection bias) of objects and object properties of reality and is thus subject to a certain subjectivity of the scientist as to what is acquired and how or what is not acquired. Moreover, the acquisition can only take place in a simplified representation of reality, in a model determined by the scientist (model bias), which specifies which properties characterize objects. In such an object class model, for example, the object “house” can only be described with a few selected

7 Ord, J. K., & Getis, A. (1995). Local spatial autocorrelation statistics: distributional issues and an application. Geographical Analysis, 27(4), pp. 286–306.


properties (e.g. address, roof shape, number of windows) and geometrically represented by a point, which is not very realistic but effective simplification. Capturing thus represents a link to “real” geographical objects and processes. Another important aspect is who counts and who records? Scientists record as objectively as possible and according to a well-defined uniform scheme. However, in the field of user-generated geodata on the web, counting and measuring are sometimes very heterogeneous and subjective on the part of the person recording the data. The inclusion of the capturing person is an additional dimension that needs to be considered in our research.
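The notion of an object-class model with selection and model bias can be made explicit in a few lines. The attribute names and values below are purely illustrative assumptions; the point is only that a "house" is reduced to a handful of chosen properties and a single point geometry.

```python
from dataclasses import dataclass

@dataclass
class House:
    """Deliberately simplified object-class model (model bias): a building is reduced
    to a few selected attributes and one point geometry. All values are illustrative."""
    address: str
    roof_shape: str
    n_windows: int
    x: float  # point geometry in a projected coordinate system (made-up values)
    y: float

example = House("Example Street 1, Heidelberg", "gabled", 24, 477_500.0, 5_473_800.0)
```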

1.4 Understanding In geoinformatics, we consider a method of data acquisition, analysis, management, and visualization of digital geodata to be “understood” and thus a new finding if it has been validated internally, both theoretically and experimentally. In addition, this method must also have been externally validated through its application in real case studies, and the behavior of the methods must have been studied in detail. Geoinformatics also supports the process of understanding spatial phenomena, such as in geography, ecology, sociology. Understanding is thereby merely supported but not achieved through the use of the methods of geoinformatics.

1.5 Interpretation In geoinformatics as a methodological discipline, subjective interpretation is not found as a concept in the theoretical foundation of the discipline. However, subjective interpretations can in turn flow into spatial analyses through a formalization as qualitative geodata. Conversely, maps and 3D visualizations based on digital geodata can also find their way into interpretation processes, especially when other scientific disciplines use geoinformatics methods and data as supporting methods.

1.6 Pattern Spatial patterns are the focus of analyses in geoinformatics. These patterns can be automatically derived, for example, by the geometric arrangement, spatial ­distribution and frequency of values by spatial statistics and spatial clustering algo-


rithms, or visually displayed and interpreted in a map or statistical graph.8 The most important principle here is spatial autocorrelation, in which individual observations in space are related to neighboring observations, which can result in the following "patterns": (a) there is no relationship to neighboring observations, (b) the values are similar (positive correlation), or (c) the spatial neighbors behave in opposite ways (negative correlation). Thus, this also involves comparing in space how values behave spatially relative to each other over different distances. Spatial patterns can be searched for in primarily collected geodata using computer-aided methods, such as by asking which areas in Heidelberg have clustered traffic accidents. Patterns can also be determined in geodata derived through further spatial analysis, such as the delineation of new development areas calculated as potentially at risk from flooding.
1.7 Scientific Results Scientific findings and results of geoinformatics are, on the one hand, new methods (e.g. algorithms) for the investigation of digital geodata (method science) and, on the other hand, the understanding of geographical phenomena (spatial science) by applying these methods to the digital representation of the Earth. For example, a newly developed geoinformatics method can enable the calculation of potential annual solar radiation in three-dimensional space. An analysis of a geographical phenomenon based on this could be to investigate whether sunny properties achieve a higher sales price compared to shaded ones, and how pronounced this increase in value is in certain urban areas.9 However, the same method—the calculation of solar radiation—could also be used to determine melting processes on glacier surfaces in the Arctic, to calculate the energy input of façade-integrated photovoltaic panels or even to improve the determination of decomposition processes of corpses in forensic medicine.10,11,12

1.7 Scientific Results Scientific findings and results of geoinformatics are, on the one hand, new methods (e.g. algorithms) for the investigation of digital geodata (method science) and, on the other hand, the understanding of geographical phenomena (spatial science) by applying these methods to the digital representation of the Earth. For example, a newly developed geoinformatics method can enable the calculation of potential annual solar radiation in three-dimensional space. An analysis of a geographical phenomenon based on this could be to investigate whether shady properties determine a higher sales price compared to shaded ones, and how pronounced this increase in value is in certain urban areas.9 However, the same method—the calculation of solar radiation—could also be used to determine melting processes on glacier surfaces in the Arctic, to calculate the energy input of façade-integrated photovoltaic panels or even to improve the determination of decomposition processes of corpses in forensic medicine.10,11,12  Andrienko, N., & Andrienko, G. (2006). Exploratory analysis of spatial and temporal data: a systematic approach. Heidelberg: Springer. 9  Helbich, M., Jochem, A., Mücke, W., & Höfle, B. (2013). Boosting the Predictive Accuracy of Urban Hedonic House Price Models Through Airborne Laser Scanning. Computers, Environment and Urban Systems, 39, pp. 81–92. 10  Arnold, N. S., Rees, W. G., Hodson, A. J., & Kohler, J. (2006). Topographic controls on the energy balance of a high Arctic glacier. Journal of Geophysical Research, 111, pp. 1–15. 11  Jochem, A., Höfle, B., & Rutzinger, M. (2011): Extraction of Vertical Walls from Mobile Laser Scanning Data for Solar Potential Assessment. Remote Sensing, 3(4), pp. 650–667. 12  Pan, Z., Glennie, C. L., Lynne, A. M., Haarman, D. P., & Hill, J. M. (2014): Terrestrial laser scanning to model sunlight irradiance on cadavers under conditions of natural decomposition. International Journal of Legal Medicine, 128(4), pp. 725–732. 8


This breadth of geoinformatics is also a major reason why geoinformatics is often seen as part of other disciplines such as geography, geodesy, computer science, etc. This fuzzy separation of geoinformatics as a discipline is often interpreted as a deficit in discussions on the philosophy of science compared to traditional and definable disciplines. Geoinformatics is referred to as an auxiliary science because it also provides methods, but does not necessarily use them to gain additional knowledge. Whether this diversity and connectivity of geoinformatics is a strength or weakness, or even irrelevant, will be answered by the future. The importance of the number for achieving scientific results is very high in geoinformatics since geodata consists of numbers. In particular, the large amount of objects motivates the development of automatic computerized methods. Measurement and comparison are based on the metrics assigned to geo objects in the chosen and simplified representation of reality. The derivation of (spatial) patterns and relevant information is based on measurement (e.g. measure of spatial autocorrelation) and comparison (e.g. crossing of a threshold). Qualitative as well as quantitative data can be spatially analyzed and related to non-spatial datasets in geoinformatics. In geoinformatics as a methodological science, the focus is on the process—the method—and less on the content (i.e. the numbers and data) that are “processed”. The application of the methods and thus the interpretation of the numbers and data are usually carried out by other disciplines that use them to test a hypothesis or a model.
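As a sketch of what "measurement (e.g. measure of spatial autocorrelation) and comparison (e.g. crossing of a threshold)" can look like in code, the following toy example classifies a short transect of observations into the three neighbourhood patterns described in Sect. 1.6. The values and the threshold of 0.3 are invented; real analyses would use established statistics such as Moran's I or local indicators of spatial association.

```python
# Crude, illustrative stand-in for a spatial-autocorrelation measure on a 1D transect:
# neighbouring observations are compared, and the result is classified by a threshold.
values = [2.0, 2.2, 2.1, 5.0, 5.3, 5.1]   # hypothetical observations along a transect
mean = sum(values) / len(values)
pairs = zip(values, values[1:])            # immediate spatial neighbours
score = sum((a - mean) * (b - mean) for a, b in pairs) / sum((v - mean) ** 2 for v in values)

if score > 0.3:
    pattern = "neighbours are similar (positive autocorrelation)"
elif score < -0.3:
    pattern = "neighbours behave in opposite ways (negative autocorrelation)"
else:
    pattern = "no clear relationship to neighbouring observations"
print(round(score, 2), pattern)
```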

2 Neogeography of Digital Earth: Geoinformatics as a Methodological Bridge in Interdisciplinary Natural Hazard Analysis (NEOHAZ) 2.1 Project Objective and Object of Investigation Neogeography can be understood in a simplified way as the collection, sharing and also the analysis of digital geographic data by non-experts, especially via the web and smartphones, such as the transmission of GPS positions of possible flood-­ prone areas via a web application. A central question is: Can local, implicit knowledge about natural hazards be captured, measured and understood through neogeography in order to be able to design locally adapted prevention measures? The project focuses on the natural hazard risk, which results from (a) the potentially occurring natural hazard (here floods in Santiago de Chile) and (b) the vulnerability of the population through a spatial overlap (see Fig. 1).


(Figure elements: vulnerability, natural danger, risk, risk perception — knowledge, risk assessment, experience, confidence — participatory mapping, social media, neogeography.)
Fig. 1  Concept of natural hazard analysis extended with the approach of neogeography. (own representation)

The project is located in the temporal phase of preparedness—i.e. before a concrete event occurs—of the four phases of the risk cycle: (1) mitigation, (2) preparation, (3) response and (4) recovery.13 The central methodological element is participatory geodata collection, i.e. the local population, organized groups (e.g. of volunteers) and institutions (NGOs and governmental) contribute digital geodata, thus user-generated geodata. Data collection is not only carried out by a central institution or by scientists, as has been the case in established natural hazard analysis to date. An important aspect here is the risk perception of the population, which to date has hardly been taken into account in the scientific-technical context of natural hazard analysis.14

13 Felgentreff, C., & Glade, T. (Eds.) (2007). Naturrisiken und Sozialkatastrophen. Heidelberg: Springer Spektrum.
14 Klonner, C., Marx, S., Usón, T., Porto de Albuquerque, J., & Höfle, B. (2016). Volunteered Geographic Information in Natural Hazard Analysis: A Systematic Literature Review of Current Approaches with a Focus on Preparedness and Mitigation. ISPRS International Journal of Geo-Information, 5(7), p. 103.
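The spatial overlap of hazard and vulnerability shown in Fig. 1 can be thought of, in its simplest raster form, as a cell-wise combination of two layers. The grids, the multiplicative combination and the 0.3 threshold below are only illustrative assumptions, not the project's actual model.

```python
# Illustrative raster overlay: hazard intensity and vulnerability are combined
# cell by cell into a simple risk layer (values and grid size are made up).
hazard        = [[0.0, 0.2, 0.8],
                 [0.1, 0.5, 0.9],
                 [0.0, 0.3, 0.7]]
vulnerability = [[0.9, 0.4, 0.1],
                 [0.8, 0.6, 0.5],
                 [0.2, 0.7, 0.9]]

risk = [[h * v for h, v in zip(hrow, vrow)]
        for hrow, vrow in zip(hazard, vulnerability)]
affected_cells = sum(cell > 0.3 for row in risk for cell in row)
```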


(Figure panels: risk awareness maps based on OSM Field Papers; participatory 3D hazard mapping of flood elevations.)

Fig. 2  Participatory hazard mapping (by local people) in the Quilicura study area, Santiago de Chile, using floods as an example (May 2015). Left: Roads affected by flooding are marked in yellow in the OSM Field Paper, right: capturing flood levels using a smartphone application (own representation, map data: OpenStreetMap and contributors, CC-BY-SA2.0)

2.2 Counting and Measuring In the research project, a large part of the geodata is collected in a participatory manner and thus the contributing persons count and measure. Both qualitative and quantitative geospatial data as well as data in topographic and relational spatial representation are collected and analyzed. The hypothesis is that by involving local people, local knowledge about natural hazard risk can be captured and formalized. The following aspects are directly covered by the population: 1. Flood level: Using a smartphone app(lication) developed by us, the flood levels of the last extreme event at the place of residence can be recorded and transmitted as geodata using various methods (e.g. estimation or with photogrammetry) (see Fig. 2 right).15 This user-generated information is used to supplement and for comparison with a scientific-technical calculation of the flood hazard (see Fig. 1). 2. Risk perception: With the help of paper maps (OpenStreetMap field papers), which can be automatically digitized, areas with increased perceived risk of

15 Marx, S., Hämmerle, M., Klonner, C., & Höfle, B. (2016). 3D Participatory Sensing with Low-Cost Mobile Devices for Crop Height Assessment—A Comparison with Terrestrial Laser Scanning Data. PLoS ONE, 11(4), pp. 1–22.


flooding are identified by the local population.16 In addition, interviews are conducted with local people to better formalize and interpret the risk perception maps. This dataset is geospatial data with qualitative observations. A spatial overlay of many observations should allow a quantitative statement to be made so that spaces with consistent perceptions of high or low risk can be identified. In addition to the topographic data collection elaborated above, an institutional analysis was carried out by the researchers in the project, exploring the governance system for flood risk management and the possibilities of using participatory geographic approaches for geospatial data collection.17 Through interviews and the information extracted from them, a relational analysis (network analysis) of the institutions involved could be carried out.
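How a "spatial overlay of many observations" can turn qualitative markings into a quantitative statement is sketched below; the grid cells marked by three hypothetical contributors and the threshold of two mentions are invented for illustration only.

```python
from collections import Counter

# Hypothetical participatory input: each contributor marks grid cells (col, row)
# that he or she perceives as flood-prone.
contributions = [
    {(2, 3), (2, 4), (3, 4)},   # person 1
    {(2, 4), (3, 4)},           # person 2
    {(2, 4), (5, 1)},           # person 3
]
mentions = Counter(cell for marked in contributions for cell in marked)

# Cells named by several contributors form areas of consistently high perceived risk.
consistent_high_risk = [cell for cell, n in mentions.items() if n >= 2]
```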

2.3 Pattern The recorded observations are merged and valid statements and comparisons should become possible through deliberately high redundancy and overlapping of the data. The quantitative geodata on the estimated inundation depth can be analysed directly. Here, the quality of the measurements, i.e. the consistency and accuracy in the statements of the contributors, is determined using independent reference data. Spatial patterns are determined by robust methods of geostatistics, which can be used to generate a map of participatory flood levels. Patterns in this map will be visually compared to scientific simulations of flooding. A direct metric comparison between the local observations and the simulation is not useful, as a simulation does not represent a real event. The risk perception data are analysed with spatial statistics (hot/cold-spot detection) and spatial overlays (see Fig. 3), whereby the frequency (number) of mentions of a risk area plays a role. The qualitative analysis of the interviews conducted in parallel between scientists and the local population will allow a further interpre-

16 Klonner, C., Marx, S., Usón, T., & Höfle, B. (2016). Risk Awareness Maps of Urban Flooding via OSM Field Papers—Case Study Santiago de Chile. Proceedings of the ISCRAM 2016 Conference, Rio de Janeiro, Brazil, pp. 1–14.
17 Usón, T., Klonner, C., & Höfle, B. (2016). Using participatory geographic approaches for urban flood risk in Santiago de Chile: Insights from a governance analysis. Environmental Science and Policy, 66, pp. 62–72.


Fig. 3  Example of a map of the derived classification of the risk of flooding in Quilicura, Santiago de Chile. (Klonner, C., Marx, S., Usón, T., & Höfle, B. (2016): Risk Awareness Maps of Urban Flooding via OSM Field Papers—Case Study Santiago de Chile. Proceedings of the ISCRAM 2016 Conference, Rio de Janeiro, Brazil, pp. 1–14) (CC BY-NC-SA 4.0, Map data: OpenStreetMap and contributors, CC-BYSA2.0)

tation of the generated risk perception map. Here, additionally recorded variables (e.g. age, period of residence etc. of the interviewed and contributing persons) are included.

2.4 Scientific Findings and Results The comparison between the participatory collected geodata on risk perception and the conducted interviews is an important result. This will help us learn how to interpret the content of the geodata and whether it is possible to integrate this information into scientific risk analysis.


The institutional analysis of the governance system based on expert interviews connects the methodological and content-related part by examining how participatory geodata collection can be introduced into the existing governance system of Chile in the first place. We can thereby understand how and through which institutions a successful implementation would be conceivable at all. This part of the project leads to the realization of which methodological and content-related needs could be served by neogeography in practical implementation. Through the research project, we gain knowledge about the methods of geoinformatics for the participatory collection of geodata regarding flood risk and the perception of risk in space. Thus, the developed methods can be fundamentally better understood, validated and, if necessary, further developed. Critical points with high relevance for geoinformatics, in general, are especially the data quality, consistency and also the minimum required number of redundant observations to be able to make valid statements.

3 Conclusion Complex human-environment interactions require broad interdisciplinary scientific investigation in order to move from measuring and describing to understanding these geographic phenomena. Natural hazards are a societally highly relevant phenomenon and a suitable example for the research and development of new investigation methods. Geoinformatics methods have an explicit focus on space and time, analyse geographical phenomena (human and environmental) in a wide range of disciplines and work exclusively in the digital spatial representation of the world with both quantitative and qualitative digital data. We have learned in the WIN-Kolleg and in the research project that "measuring and understanding the world through science" is already strongly influenced by two major developments:
1. The progressive digitalization of the world (e.g. internet, social media and earth observation) allows us to observe geographic phenomena that were not measurable before.
2. Through the participation of citizens (citizen science) in the process of scientific measurement and also the interpretation of digital data, new insights and at the same time knowledge transfer can be achieved.
Both developments mentioned can be brought together by digital methods of geoinformatics and contribute significantly to the exploration of our geographic world.

Measuring Art, Counting Pixels? The Collaboration of Art History and Computer Vision Oscillates Between Quantitative and Hermeneutic Methods Peter Bell and Björn Ommer

The project “Artificial and Artistic Vision. Computer Vision and Art History in Practical-Methodical Cooperation” is interdisciplinary by definition and also in its personnel composition and combines the humanities, engineering, and natural sciences. Together, prototypes and methodological approaches to an automatic vision that assists art history are being developed in the form of basic research. With computer science and art history, two subjects cooperate that have a very different relationship to counting, measuring and interpreting patterns. On the one hand, there is computer science, which draws heavily on mathematics; on the other, there is a historical-philosophical subject that initially seems to be distant from counting and measuring. Art history de facto uses primarily qualitative methods, especially a hermeneutic interpretation of images and sources. The different lines of tradition, visual

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

P. Bell (*) Art History and Digital Humanities, Philipps University Marburg, Marburg, Germany e-mail: [email protected] B. Ommer Computer Vision & Learning Group, Ludwig Maximilian University of Munich, Munich, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_15


skills and approaches of the subjects of computer science and art history lead to productive discussions and innovative approaches for both disciplines: In art history, the view for quantitative methods is sharpened and in parts reactivated, computer science gets insight into hermeneutic procedures, iconography and iconology. In the project, this mix of methods is applied to theory building and basic research as well as to a practice-oriented development of prototypes.

1 Measuring, Counting, Recognizing Patterns in Art History To begin his short but famous treatise on the art of painting well understandable, Leon Battista Alberti uses mathematics (Scrivendo de pictura in questi brevissimi comentari, acciò che ‘l nostro dire sia ben chiaro, piglieremo dai matematici quelle cose in prima quale alla nostra materia apartengano.).1 Euclidean geometry is the basis of optics and perspective construction, through which early Renaissance art achieved spatial depth and, in many respects, new form. Many painters, not only of this epoch, were at the same time architects and thus closely acquainted with quantitative methods, with measuring and counting, and applied mathematical calculations in their teachings on proportions. This technical side of art is widely known, but rarely addressed in research. Art history has been able to establish itself as a qualitative hermeneutic subject in the humanities (“Art is not an illustration of intellectual development, but a part of it”2). It is3 this positioning, which is fundamental to the subject, that has resulted in a great distance from technical and scientific questions.4 However, one number that has always had a special significance for art history, because it brings art and history together, is the date, i.e. the year in which a work of art was created or is said to have been created.5 Where the date was not incontestably handed down by the artist or commission context, a measurement of various indicators in the form of stylistic features was postulated.

1 Alberti, L. B. (2002). Della Pittura. Über die Malkunst. O. Bätschmann & S. Gianfreda (Eds.). Darmstadt: WBG, p. 66.
2 Tietze, H. (1924). Geisteswissenschaftliche Kunstgeschichte. In J. Jahn (Ed.). Die Kunstwissenschaft der Gegenwart in Selbstdarstellungen. Leipzig: F. Meiner, p. 191.
3 See also Bätschmann, O. (1984). Einführung in die kunstgeschichtliche Hermeneutik. Darmstadt: WBG.
4 On overlaps and convergences cf. Fiorentini, E. (2003). Naturwissenschaft und Kunst. In U. Pfisterer (Ed.), Metzlers Lexikon der Kunstwissenschaft. Stuttgart: J. B. Metzler, pp. 244–248.
5 Cf. for example Panofsky, E. (1985). Zum Problem der historischen Zeit. In Ders. (Ed.): Aufsätze zu Grundfragen der Kunstwissenschaft. Berlin: Wissenschaftsverlag Spiess, pp. 77–84.


Stylistic criticism is a procedure with which, in addition to dating, attribution to a person, a workshop or a topographical context can be found. Last but not least, stylistic patterns that are peculiar to the artist or group of artists also become visible. The "counting" of artworks in chronological order was particularly important for early art history (beginning with Giorgio Vasari's Vitae), as personal and general lines of development were to be shown in the biographical narrative. The innovation-oriented art history of the early modern period, like Pliny the Elder long before, looked for innovators ("il primo che …") in technical, iconographic, or aesthetic terms, for which a chronology of works and life dates became necessary, as well as an enumeration of teacher-pupil relationships as chains of inspiration and innovation. "In the postwar period, interpretive interpretation in historical context visibly determines the research program of the more advanced art history, while the history of style recedes as a subject."6 One part of the discipline considers dating to be largely reliable or less relevant, while another part criticizes stylistic criticism sweepingly as formalism as well as a less certain and often subjective procedure for arriving at a secure dating. Thus, in many studies, the numerical work of chronology no longer ranks as an object or structuring element, but rather as background information. In addition to dating, there are other relative and concrete figures that are collected in art history. For example, the number of artists in a city and data on their living and production conditions are collected.7 Numbers prove to be extremely telling at this point, although the leap to the interpretation of the individual work is nevertheless a long one. Counting becomes problematic when authors assign artists a small oeuvre or a high productivity, without any known mediocrity and occasionally without reflecting on the history of transmission. But this is a key point: the preservation of artworks over time depends on many factors such as appreciation, provenance, trends in the art market or catastrophes. And even if all works have been preserved, their number may not be ascertainable due to a lack of inventory in museums and private collections. A largely reliable count of works of art can only be guaranteed with some certainty if it is carried out by the artist himself, for example by numbering serial prints. That uncertainty may be one reason why art historians are agnostic about numbers. The formats of works are also to be measured. Especially due to the mass distribution of (digital) reproductions, the size of a work of art is rarely assessed.

194

P. Bell and B. Ommer

­ lthough the dimensions of a work are an essentially obligatory metadatum, the A size or even small scale of a work can be difficult to imagine or even distorted by the size of the reproduction. A miniature then easily looks like a panel painting or even a fresco through a high-resolution digital copy. These shifts in scale, which are often not so important for a comparison of content and iconography, are all the more important for assessing the function and use of the work of art, and often allow conclusions to be drawn about the original (e.g. architectural) context. The transmission of the scales with the reproduction thus creates the virtual reconstruction of the work of art in the first place. Amounts that are also rarely collected in academic art history are commission and purchase sums for works of art. An economic history of patronage and the art market is difficult to compile due to the many local currencies and often too few or corrupted sources. A reconstruction of the economic history would, however, be of interest in order to be able to make statements about the status of an artist, his financial situation, or the taste of a region. The purchase sums of works of art achieved at auctions are becoming increasingly easy to access for the last few decades. Large houses such as Sotheby’s or Christie’s publish on their websites sales results of individual works and total proceeds of auctions held, while Heidelberg University Library digitizes catalogues before 1945. Users of these offerings not only receive ‘numbers’, but also information about the taste of the time or the topicality of an artist. Price developments can be traced by means of comparisons, and art databases such as Artprice bundle and visualize the development of prices to a great extent. Another little-noticed area of counting and measuring is found in the iconographic interpretation of the picture, although this intuitive activity is hardly ever verbalized. The art historian will immediately infer the Last Supper from thirteen persons at the table, and the Emmaus Supper from three. The symbolism of numbers in myth, Judaism and Christianity makes counting in iconography and architecture an indispensable method (three graces, three angels visit Abraham, twelve pillars stand for the apostles, etc.), which as a system of reference should, however, always be critically questioned. The structure and weighting of the composition can also be ascertained numerically; at which points within the picture does a condensation of the persons or objects take place? And what proportion is taken up by the horizon? Not least through the ‘discovery’ of the golden section and the Fibonacci sequence of numbers, which has been known since 1202, and through the use of a perspective construction, works of art have been able to be calculated and reconstructed using mathematical formulae since antiquity and especially during the Renaissance.

Measuring Art, Counting Pixels?

195

Leonardo da Vinci’s famous proportion sketch of the Vitruvian Man (1492) is not least a testimony to and result of a mathematical conception of art. In conclusion, art history, especially in the German-speaking world, tends to focus on qualitative and hermeneutic issues. Quantitative approaches can be observed in museums, on the art market, and in oeuvre catalogues. The connoisseurial view of a work in order to attribute it to an artist, a year or a place always has a quantitative dimension. Connoisseurship is based on a large amount of visual experience and a comparison with this fund in terms of multiple indicators.

2 Measuring, Counting, Pattern Recognition in Computer Science and Computer Vision A certain and rather underestimated necessity of quantitative approaches is thus also present in art history, and it is becoming more and more virulent the more the image data repositories are growing, making qualitative access impossible in many cases. It is usually initially the amount of data that brings art history and computer science together—it is only in the interdisciplinary work that further common research interests become apparent. At this point, there is no need for a comparable analysis of the importance of numbers in computer science. The subject derives in large part from mathematics. Measuring, counting and recognising patterns are accordingly fundamental tasks of algorithms, whereby computers not only revolutionised measurement technology, but also began early on to recognise patterns, for example in texts, and thus to work towards the humanities with quantitative results (e.g. computational linguistics). In a kind of counter-current movement, computer science is now in many areas ready to turn to qualitatively hermeneutic questions, building on the quantitative basis. Image processing has also established itself as a branch of science in its own right, thanks to increased computer power and algorithmic research. The basis for computer vision are digitally captured reproductions, forming an image of reality that is described quantitatively as statistical relations. This description is typically based on the measurement of brightness or color values of individual pixels on a sensor. At this point, two fundamental difficulties of measuring and counting already become apparent. a) the photons reflected by an object can indeed be measured and thus its brightness and colour can be inferred. However, this is a difficult inverse problem, since the perceived color also depends on the light source illuminating the object. As in human perception (color constancy), the attributed colors are thus rela-

196

P. Bell and B. Ommer

tive features that depend on the rest of the scene (illumination source, etc.). b) The measured intensities are local observations that describe small sections of an object, depending on characteristics such as the camera lens, image sensor, or object distance used. However, interesting properties such as the shape of an object cannot be captured with single pixels. Consequently, a meaningful description of a depicted object or scene can only succeed when many locally observed color values (the pixels) are combined into shapes or informative components or entire objects. For, as Max Wertheimer already noted: The whole is different from the sum of its parts.8 Only at this level can patterns and objects then be recognized in digital reproductions. Similar images are compared by measuring distances and by identifying patterns in the form of like transformations. The matrix of pixels seems a basic quantification here, but if informative similarities are needed for image comparison, a greater degree of abstraction is required. What is needed, then, is a representation that can no longer be measured locally, but is the product of an inference process, that is, an indirect derivation that takes into account the whole object or the context of the scene.

3 Collaboration Between Art History and Computer Vision It is the “countless” image data that bring computer vision and art history together. Art history turns to computer science because it sees the mass of its own image stock becoming unmanageable, while the latter recognizes in it only a manageable sub-area of a much larger quantity of visual data (which go far beyond the human living world, e.g. into the macro- and micro-areas). The merging of art history and computer vision not only uncovers commonalities in visual studies but also leads to the fact that both fields have to deal with methodological changes. Traditional methods of art history are supplemented and evaluated by technical possibilities.9 The analysis of formal characteristics of a work of art becomes more effective and meaningful through the inclusion of a considerably larger amount of digital data. Computer vision finds scalable problems within art historical data sets and sometimes very complex semantic  Metzger, W. (1975). Was ist Gestalttheorie? In K.  Guss (Ed.). Gestalttheorie und Erziehung. Darmstadt: Steinkopff, p. 6. 9  The introduction of the computer was critically reflected upon early on: Nagel, T. (1997). Zur Notwendigkeit einer Ideologiekritik der EDV im Museum. In H. Kohle (Ed.). Kunstgeschichte digital. Eine Einführung für Praktiker und Studierende, Berlin: Reimer. 8

Measuring Art, Counting Pixels?

197

c­onstellations that can only be addressed in collaboration with art history. Furthermore, the two visual studies (Bildwissenschaften) also face similar problems methodologically. Gestalt theory could be identified as a common methodological starting point since it has inspired both disciplines and also provides explanatory models for the design of works of art.10 In it, human vision and, in some cases, artistic visual tasks were analyzed, so that many approaches for an artificial vision (of art) are contained therein. Seeing, as art historians and computer vision experts know equally well, requires a lengthy learning process. The project has institutionalized a hermeneutic understanding of pictures in which the dialogue partners are not only the experts from computer science and art history but also, in a certain way, the machine and the artist: the computer, by communicating its conceptions of similarity and empirical overview in order to arrive at better results iteratively with the art historian; the artist, by likewise incorporating conceptions of similarity (for example, to the visible world and to receive works of art) and individual solutions in his works. As a rule, artists represent their pictorial objects in such a way that they can be easily identified (ideal viewer standpoint, appropriate situational context); only sub-areas of modern art break with this convention (Cubism, Surrealism, etc.). Thus we are looking at a subset of the total visual information, a clear semantic, which also has a very concrete or abstract reference to reality, and in many cases what is seen is pointed and interwoven via tradition, choice of subject and style. Without being able to make numerical statements, the relationship of the images to one another and individual patterns in the canon of art history are evident. This volume is further reduced and sharpened in partial investigations. The WIN project examined approximately 3600 images from the prometheus image archive that were tagged with the keyword crucifixion. According to this keyword, almost exclusively a concrete image motif, the crucifixion of Christ, and due to the database only art-historically relevant representations are to be expected.11 In this context, measuring and counting become relevant. By applying the automatic image search, it is possible to recognize how many duplicates, reproductions and variations of the same artwork appear (Fig. 1). Iconographic or compositional features, as well as canonical motifs, can be estimated in their frequency and  See for art history, for example, the adaptation of Arnheim, R. (1978). Kunst und Sehen. Eine Psychologie des schöpferischen Auges. Berlin: De Gruyter. 11  See on algorithm and procedure: Takami M., Bell, P. & Ommer, B. (2014). Offline Learning of Prototypical Negatives for Efficient Online. Proceedings of the IEEE Winter Conference on Applications of Computer Vision, pp. 377–384. 10

198

P. Bell and B. Ommer

Fig. 1  capitals sorted by similarity. The first field at the top left corresponds to the search range. The similarity then decreases line by line to the right. (Own representation)

Fig. 2  Stylistically similar compositions from all reproductions tagged with ‘crucifixion’ from the Prometheus image archive. (Own representation)

s­ ingular innovations can be identified. Especially the visualization of the search results as tiled thumbnails allow counting and recognizing patterns (Fig.  2). However, such a partial dataset is not very representative of art history. The fourdigit quantity of images may reproduce the total holdings in a distorted way due to the research interest of individual chairs or ambitious digitization campaigns of

Measuring Art, Counting Pixels?

199

Fig. 3  Michelangelo’s ‘Ignudi’ from the Sistine Chapel frescoes (column 1), contemporary reproductions (column 2), and visualization of deviations (column 3). (Own representation)

200

P. Bell and B. Ommer

media libraries, museums or libraries. No overview of the cultural heritage can yet be obtained, since no comprehensive and merged digital registration of at least preserved works and image material has yet been achieved. Thus, any quantitative analysis carried out on such material is also to some extent a part of the history of science in the present and has its biases. The results indicate what appears to be worth digitizing at the moment, and the search queries that the user of the algorithm makes are just as subjective. In the context of the WIN project, a procedure was also tested in which originals and copies are compared and the deviations are measured (Fig. 3). To do this, the contours are recorded and the work and its reproduction are superimposed. It is then measured how the line of the copy must be transformed in order to become congruent with the contour of the original. This reveals where the copy deviates significantly from the original or where there are systematic errors due to the reproduction. In Michelangelo’s ‘Ignudi’ (column 1) and later copies (column 2), clear differences can be seen through the automatic visualization (column 3). These deviations are due to the high challenge of drawing the frescoes located on the vaulted ceiling true to proportion. The method now identifies regions of equal transformation, i.e. sections within the contours where deviations with the same geometric basis have occurred. In addition to a very quick overview of the deviations, patterns can also be detected, such as recurring mechanical defects or optical distortions caused by the viewer’s point of view or camera. In addition, a detailed analysis is possible in which the individual variations can be traced on the basis of the regions. At this point, the geometric capabilities of the computer help to provide information that are difficult to recognize by humans, while humans can afterwards interpret the computed findings. In the second phase of the WIN project, the focus is on indexing the data set as a whole. The computer is supposed to recognize recurring structures in the entire image database and come to quantifiable results itself, such as rough dating. On the one hand, the quantitative recording of artistic works seems difficult to establish within a hermeneutic history of art. Categories such as innovation, creativity or aesthetics cannot superficially be measured and represented by numbers. On the other hand, the golden section, the scenes recognizable by numbers of persons, the recurring patterns with which stylistic criticism works and the associated framework of dating as precisely as possible, as well as now the successful use of computer vision, are strong indications that a quantitative history of art offers a large field of activity.

Through Numerical Simulation to Scientific Knowledge
Mathias J. Krause

Numerical simulations are used for the approximate prediction of situations under strictly defined conditions. They are based on mathematical models and represent interdependencies in the form of algorithms and computer programs. In everyday life, they are now ubiquitous for everyone, for example in weather forecasts or economic growth forecasts. In politics, they serve as an important tool for decision-making. In the scientific context, however, simulations are much more than a prediction tool. Similar to experiments, they also serve to build models themselves and thus enable the elucidation of causal relationships. Due to increasingly powerful computers, the importance of numerical simulation in science has grown steadily and rapidly over the last 50 years. In the meantime, it is considered an indispensable tool for gaining knowledge in many scientific disciplines.1 It is to be expected that numerical simulation will continue to grow rapidly in importance in the future and produce further surprising findings and technologies.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

 Bungartz, H.-J., Zimmer, S., Buchholz, M. & Pflüger, D. (2014). Modeling and simulation: an application-oriented introduction. Undergraduate Texts in Mathematics and Technology. Berlin, Heidelberg: Springer. 1

M. J. Krause (*) Institute for Applied and Numerical Mathematics and Institute for Mechanical Process Engineering and Mechanics, Karlsruhe Institute of Technology, Karlsruhe, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_16


The main goal of this article is to show the current importance of numerical simulation in modern science. In addition, it will be shown how, in interaction with mathematical modeling, it enables scientific knowledge, especially in the natural sciences, but also beyond. The WIN project CFD-MRI is an example of this, in which a newly developed combined measurement and simulation method uses the possibilities of modern high-performance computers and thus enables new findings for medical research, among other areas. In the Introduction, some definitions of important basic terms such as model, reality, observation and experiment are given. They are accompanied by basic considerations on the overarching context of the terms. The next section describes in general the rather technical procedure of a simulation process. This is illustrated in the third section by a practical example of modelling and simulation of flows. In the same context, the fourth section shows the importance of numerical simulation and finally discusses the current limitations and challenges.

1 Basic Concepts and Considerations In general, models serve to predict events and, in a scientific context, to gain knowledge, i.e. to understand a cause-and-effect principle. Following Wolfgang Eichhorn,2 a model is a simplified image of reality. The term reality is to be distinguished with regard to the scientific discipline in which the model is applied. Figure 1 shows the results of a breathing simulation in a human nasal cavity. The simulation is based on two models. One describes the geometry of the nasal cavity and is based on computed tomography measurements. The second model describes the airflow. This is discussed in detail in Sect. 3. In the natural sciences, experiments carried out under accurately defined and, as far as possible, identical conditions yield identical results, which thus confirm an event predicted by the model. Here, a clear distinction must be made from observation (cf. Ortlieb,3 Ortlieb4), which does not require idealized simplified assumptions, is usually much more complex, and is often random. By repeating the  Eichhorn, W (1972). Die Begriffe Modell und Theorie in der Wirtschaftswissenschaft. Wirtschaftswissenschaftliches Studium: WiSt; Zeitschrift für Studium und Forschung, 1(7). 3  Ortlieb, CP. (2000). Exakte Naturwissenschaft und Modellbegriff. Hamburger Beiträge zur Modellierung und Simulation. Retrieved from http://www.math.uni-hamburg.de/home/ortlieb/ hb15exactnatmod.pdf (July 15, 2000). 4  Ortlieb, CP. (2001). Mathematische Modelle und Naturerkenntnis. Retrieved from http://www.math.uni-hamburg.de/home/ortlieb/hb16Istron.PDF (May 2001). 2


Fig. 1  Numerical simulation of human respiration in the nasal cavity (Krause, M.J. (2010). Fluid Flow Simulation and Optimisation with Lattice Boltzmann Methods on High Performance Computers: Application to the Human Respiratory System. PhD thesis, Karlsruhe Institute of Technology (KIT), Universität Karlsruhe (TH), Kaiserstraße 12, 76131 Karlsruhe, Germany, July 2010. http://digbib.ubka.uni-karlsruhe.de/volltexte/1000019768). Colored cones reflect the direction and strength of the flow velocity, with red cones representing fast velocities and blue cones representing slow velocities. Experiments have their limits due to technical feasibility but also due to ethical concerns, whereas simulations provide new insights into the mode of action and function of nasal breathing. (Own representation)

experiment and varying the input values under the same conditions, a model is then understood as a simplified representation of reality if no contradiction arises. If this happens over a longer period of time and observations of natural phenomena also fit the model predictions without contradiction, or at least approximately, the model is considered to be highly confirmed. However, it is not possible to identify5 reality and model, because the model is based on simplifications.  Identifying here is to be understood as assigning and recognizing and at most as a weakened equating. 5


In other disciplines, such as medicine or the social sciences, knowledge discovery is often based on the recognition of patterns in collected data sets. As in physical experimentation inspired by observations, accurate and idealized criteria are set for characteristics of a data set under which a certain outcome (another characteristic of the same data set) is likely to occur, i.e. here too a model is assumed to describe a cause-effect relationship. Data sets that meet the specified criteria of the model are selected and averages6 of their predicted characteristic are compared with the prediction of the model. Both processes for gaining knowledge have in common that they are based on mathematical models. In abstract terms, these are based on a system of a few basic assumptions (axioms), from which more complex statements can then be qualitatively or quantitatively inferred. In most cases, variables representing numbers from sets of numbers are explicitly related by relations and/or functions or implicitly by equations and/or inequalities. However, a mathematical model can do without numbers in principle. Elements of abstract sets are connected by relations and functions. They order them and thus serve as a qualitative mathematical model. Furthermore, deterministic and stochastic models can be distinguished, depending on whether the influence of chance is neglected (deterministic) or not (stochastic). Unlike deterministic models, stochastic models can produce different predictions when used repeatedly with identical input values.7 However, these predictions follow a certain pattern, the so-called probability distribution. A further distinction is made between continuous and discrete models. In contrast to continuous models, discrete models are always based on sets with a finite number of elements. Discrete models are needed for numerical simulations, because the common computer technology requires instructions (algorithms) that are completed in a finite number of steps. The task of mathematical modelling is, besides the creation of new models, to simplify them in a controlled way, which is also to be understood in the sense of abstracting, in order to enable statements about their structure and solvability, as well as to enable computability with given technical resources in a practically ­motivated time span. Results are model hierarchies, qualitative and quantitative statements about how much the results of different models differ from each other. Mathematical modelling as an approach does not need a definition of reality at all.  Underlying the averaging procedure is the law of large numbers, which states that the relative frequency of an event approaches the theoretical probability, on the premise of repetition under the same conditions. 7  An example of a stochastic model is a micromodel for describing the movement of pedestrians. This can be used, for example, to predict evacuations at mass events. Here, for example, the change in direction of each individual pedestrian is described by a random variable. The flow models on which weather forecasts are based are usually deterministic in nature. 6


It is only in other disciplines that mathematical models are associated with the objects of reality studied there. Whether, and if so to what extent, a mathematical model, being a simplification, can be considered detached from reality or as part of it is discussed quite controversially.8,9
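The distinction between deterministic and stochastic models drawn above can be made concrete with a small sketch. The stochastic example is inspired by the pedestrian micromodel mentioned in footnote 7; all names and parameter values are purely illustrative and not taken from any of the projects discussed in this volume.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def deterministic_model(x0, v, steps):
    """Deterministic model: identical inputs always yield the identical prediction."""
    return x0 + v * steps

def stochastic_model(start, steps):
    """Toy pedestrian micromodel (cf. footnote 7): each step goes one unit in a
    random direction, so repeated runs with identical inputs give different
    predictions that follow a probability distribution."""
    angles = rng.uniform(0.0, 2.0 * np.pi, size=steps)
    moves = np.column_stack([np.cos(angles), np.sin(angles)])
    return np.asarray(start, dtype=float) + moves.sum(axis=0)

print(deterministic_model(0.0, 1.2, 100))   # always 120.0
print(deterministic_model(0.0, 1.2, 100))   # always 120.0
print(stochastic_model([0.0, 0.0], 100))    # differs from run to run

# averaging many stochastic runs characterizes the predicted distribution
# (law of large numbers, cf. footnote 6)
finals = np.array([stochastic_model([0.0, 0.0], 100) for _ in range(10_000)])
print("mean final position:", finals.mean(axis=0))
print("mean distance from start:", np.linalg.norm(finals, axis=1).mean())
```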

2 Process of a Numerical Simulation The starting point of any numerical simulation is a given mathematical model. This model is a simplified representation of a causal relationship from an application discipline. Depending on the application, this model can be quantitative or qualitative, deterministic or stochastic, discrete or continuous. The resulting mathematical formulation of the model is an equation or several equations (system of equations), possibly also an inequality, a system of inequalities or their combination. Often the solutions sought are not numbers or elements of a set, but functions. In this case, a so-called differential equation system is often obtained. If the system is structurally too complex or if too many calculation steps are necessary to calculate a solution directly “by hand”, a computer can be used. In order to make this possible, further simplifications have to be made depending on the model. Continuous models, such as those formulated in terms of systems of differential equations, usually have to be converted into discrete models (discretization). In the case of flow problems (cf. the simulation of nasal breathing in Fig. 1), this is done by decomposing a flowed-through space into finitely many subspaces (computational lattices). For each of these subspaces the flow velocity and pressure are then calculated. If the partial solutions are reassembled, a velocity and pressure distribution in the entire space is obtained. Depending on the problem class, further simplifications are necessary to ensure computability. For example, nonlinear systems often have to be linearized and for an implicitly defined problem an explicit solution procedure is required, which then iteratively and approximately provides the corresponding solution. Such and similar steps can, like the discretization, be seen as a model simplification. This is usually accompanied by an error, i.e. a deviation of the solution of the original model from the simplified model, which must be kept as small as possible or at least controlled by means of mathematical tools. Furthermore, with each  Ortlieb, CP. (2000). Exakte Naturwissenschaft und Modellbegriff. Hamburger Beiträge zur Modellierung und Simulation. Retrieved from http://www.math.unihamburg.de/home/ortlieb/hb15exaktnatmod.pdf (July 15, 2000). 9  Ortlieb, CP. (2001). Mathematische Modelle und Naturerkenntnis. Retrieved from http:// www.math.uni-hamburg.de/home/ortlieb/hb16Istron.PDF (May 2001). 8


simplification step, care must be taken that the problem remains well-posed (according to Jacques Hadamard), i.e. that a unique solution exists at all and that small changes in the input values cause only relatively small changes in the solution. If this is not the case, well-posedness must be restored by additional modelling. This also creates an error that must be controlled. The discrete problem must now be formulated in the form of an algorithm and implemented in a programming language so that it can be processed on a computer. It should be noted that the resulting discrete computer solution contains another error, the computational error, since numbers are generally represented on computers only with limited accuracy. Depending on the problem under consideration, this error can be significant. Here, too, tools of numerical mathematics help to quantify the error propagation and, if necessary, to control it. In the next step, the discrete solution must be prepared in such a way that it can be compared with measurement results. Further errors are also to be expected during visualization, interpretation and analysis. An example of this is the interpolation of a simulated quantity, for example a velocity or a pressure, so that finally a comparison with a specific measurement or measuring point is possible (validation). This step is also an error-prone model building process. Figure 2 schematically summarizes the flow of the model building and simulation process.

Fig. 2  Sequence of a numerical simulation: reality is captured by measurement and observation; modelling yields an initial value problem; discretization leads to a (linear) system of equations, which is solved on a parallel computer after grid generation; the discrete solution is post-processed, visualized, interpreted and analysed into simulation results, which are validated against the measurements, with an error estimator accompanying every step. The arrows illustrate the time sequence on the one hand, but on the other hand also the fact that an error occurs at each partial step, which must be controlled. When developing new methods for numerical simulation, the comparison of the results with the observation and the error control are indispensable. (Own representation)
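The chain "continuous model, discretization, (linear) system of equations, discrete solution, validation" summarized in Fig. 2 can be illustrated with a deliberately simple continuous model, a one-dimensional boundary value problem. This is only a didactic sketch, not one of the flow solvers discussed in the next section; the equation and its exact solution are chosen so that the discretization error can be checked directly.

```python
import numpy as np

# continuous model: -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0,
# here f(x) = pi^2 * sin(pi * x), whose exact solution is u(x) = sin(pi * x)
n = 100                              # number of interior grid points
h = 1.0 / (n + 1)                    # grid spacing (discretization)
x = np.linspace(h, 1.0 - h, n)
f = np.pi**2 * np.sin(np.pi * x)

# discretization by central differences turns the continuous model
# into a linear system A u = h^2 f with a tridiagonal matrix A
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
u_discrete = np.linalg.solve(A, h**2 * f)

# 'validation' against the exact solution: the discretization error
# shrinks roughly like h^2 when the grid is refined
error = np.max(np.abs(u_discrete - np.sin(np.pi * x)))
print(f"grid points: {n}, maximum discretization error: {error:.2e}")
```

Refining the grid (larger n) makes the reported error shrink, which is exactly the kind of controlled error behaviour demanded of every step in Fig. 2.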


3 Modelling and Simulation of Fluid Flows Fluid flows, i.e. the movements of gases or liquids, can be described and simulated in models in different ways depending on the required level of detail.10 A possible classification can be defined on the basis of the length scale considered. Equally conceivable is a characterization on the basis of the time scale. Probably the most intuitive length scale is the macroscopic one. A fluid in a flow area (for example a glass of water, cf. Fig. 3) is observed “from the outside” with the human eye. If we assume for the sake of simplicity that there are no temperature changes around as well as in the glass and that the fluid we are looking at is a so-called Newtonian fluid (for example water, air, but not blood or toothpaste), the flow can be described by certain conservation equations for mass and momentum, the so-called (incompressible) Navier-Stokes equations. The quantities sought are the location- and time-dependent functions of velocity and pressure. In general, no solutions can be given directly for these partial differential equations. They must be calculated numerically by approximation, as described in the last section. Common solution methods for this are the Finite Element Methods (FEM) or the Finite Volume Methods (FVM). In a macroscopic approach, the fluid is assumed to be continuous, i.e. the fact that a fluid consists of many microscopic particles, in this case water molecules, is neglected. Consequently, the description does not allow to describe effects originating from single particles with sufficient accuracy or at all. An example of this is the surface tension of water, which results from electrical dipole charges of the molecules (hydrogen bonding). A microscopic approach to modeling flows could correctly represent these phenomena. In such models, the trajectories of all water molecules, as well as the interaction and, depending on the model, the charge are recorded. A system of differential equations is obtained, where the particle positions, the velocity and, if applicable, their charge are the quantities sought. Frequently used solution methods are the Molecular Dynamics Method or the Discrete Element Method. However, if one wants to consider a fluid on a macro-

 Krause, M.J. (2010). Fluid Flow Simulation and Optimisation with Lattice Boltzmann Methods on High Performance Computers: Application to the Human Respiratory System. PhD thesis, Karlsruhe Institute of Technology (KIT), Universität Karlsruhe (TH), Kaiserstraße 12, 76131 Karlsruhe, Germany, July 2010). Retrieved from http://digbib.ubka.unikarlsruhe.de/volltexte/1000019768. 10


Fig. 3  Water in a glass can be described by macroscopic, mesoscopic and microscopic models (panel labels: macro, meso, micro). Depending on the problem, a suitable description must be selected, whereby the level of detail and the computational costs decrease from the micro to the macro model. (Own representation)

scopic length scale with this method—thinking again of the glass of water—it is not suitable for this purpose. The problem is the large number of molecules. Even the calculation of only one cubic millimetre of water, in which there are 10²² molecules, exceeds the performance of all currently available computers by many orders of magnitude, which limits the applicability of microscopic models to very small areas. In the last 100 years, a third approach, the mesoscopic approach, has become established for modeling flows in addition to the microscopic and macroscopic approaches. It is based on the Boltzmann equation, which statistically describes a particle distribution as a function of location, time and microscopic velocity. The function sought here gives the probability of encountering a particle with a certain velocity for any location and at any time. Similar to macroscopic models, the behavior of individual particles is neglected here, but microscopic velocities are at least recorded as a distribution. The connection of mesoscopic to macroscopic models for the description of incompressible fluid flows could only be established in the last 30 years. Numerical methods referred to as Lattice Boltzmann Methods (LBM) have contributed to this. A simplified Boltzmann equation is solved with the aim of finally obtaining a solution of the incompressible Navier-Stokes equations. The computational effort for this is much less than for the microscopic approach, but is roughly similar to that of FEM and FVM for solving the macroscopic conservation equations. This is


surprising because the mesoscopic model includes a greater level of detail. In addition, LBMs offer the advantage of being able to make much better use of currently available parallel microprocessor architectures and are algorithmically much less complex in structure. The three model classes described form a hierarchy in the sense that microscopic models are more general than mesoscopic and macroscopic ones. This is usually accompanied—but not always, as the above example of LBM shows—by the fact that the complexity of the numerical schemes increases from macro to micro models.
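To make the stream-and-collide structure of the Lattice Boltzmann Methods mentioned above tangible, the following is a minimal, didactic D2Q9 BGK sketch on a fully periodic domain, without boundary conditions or parallelization. The grid size, relaxation time and initial vortex field are illustrative choices, and the code is in no way the high-performance implementation referred to in this chapter; it merely shows why LBM parallelizes well, namely that streaming only moves data to neighbouring lattice nodes and the collision is entirely local.

```python
import numpy as np

# D2Q9 lattice: discrete velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

nx, ny, tau = 64, 64, 0.8                 # grid size and relaxation time
nu = (tau - 0.5) / 3.0                    # resulting lattice viscosity

def equilibrium(rho, ux, uy):
    cu = 3.0 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    return w[:, None, None] * rho * (1.0 + cu + 0.5 * cu**2 - usq)

# initial condition: a decaying vortex field at unit density
x, y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
ux = 0.05 * np.sin(2 * np.pi * x / nx) * np.cos(2 * np.pi * y / ny)
uy = -0.05 * np.cos(2 * np.pi * x / nx) * np.sin(2 * np.pi * y / ny)
f = equilibrium(np.ones((nx, ny)), ux, uy)

for step in range(500):
    # streaming: shift every population along its lattice velocity (periodic domain)
    for i in range(9):
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)
    # macroscopic moments: density and velocity
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: local relaxation towards the equilibrium distribution
    f -= (f - equilibrium(rho, ux, uy)) / tau

print(f"lattice viscosity: {nu:.3f}, max |u| after 500 steps: "
      f"{np.sqrt(ux**2 + uy**2).max():.4f}")
```

The vortex amplitude decays over the iterations at a rate governed by the lattice viscosity, i.e. the mesoscopic populations reproduce macroscopic viscous flow behaviour.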

4 Importance of Modelling and Numerical Simulation Historically, the idea that scientific knowledge about causal relationships can be gained solely through theoretical considerations has been regarded as outdated since Galileo Galilei's experiments on free fall in 1623, or at the latest since Otto von Guericke's Magdeburg hemisphere experiment in 1663. Since then, experiments have established themselves as an important tool for gaining knowledge. Numerical simulations are just beginning to establish themselves as the third pillar of science alongside theory and experiment.11 Konrad Zuse and John von Neumann are considered the fathers of numerical simulation, as they laid the foundations for today's modern software and hardware technology with their first modern computer designs (Neumann 1945 and Zuse 1941). The process of establishment is far from complete. Gordon Moore's law from 1965 (actually more of an observation than a law) is still valid. It roughly states that technological progress doubles the available computing power per hardware unit about every 18 months. Since hardware technology usually limits the accuracy and thus also the possibilities with respect to the complexity of numerical simulations, there is a direct connection between this technological progress and the progress of simulation. Assuming that this development continues and that the importance of simulation for gaining knowledge increases with the technical possibilities, it follows directly that the importance of numerical simulation will continue to increase rapidly. What this means in quantitative terms is shown by the example of the simulation of two nasal breathing cycles from 2010 (Fig. 1), which at that time required a computing time of about one day on 128 processing cores.  Ehlers, W. (2014). Simulation—die dritte Säule der Wissenschaft. Themenheft Forschung: Simulation Technology, (10), pp. 8–12. Retrieved from http://www.uni-stuttgart.de/hkom/publikationen/themeissue/10/simulation.pdf. 11


The method used is characterized by the fact that it can make very good use of the potential of modern high-performance computers, so that today, about 6 years later, the same simulation is possible in only 1.5 hours, albeit on 2048 processing cores, while the availability and acquisition costs of the hardware required in each case have remained about the same. With the constantly increasing technical possibilities, the fields of application of numerical simulation are also broadening. There is hardly a scientific field in which it is not used. The attractiveness of numerical simulation stems from the fact that it is used to clarify causal relationships in areas where experiments are difficult, impossible or simply too expensive. Two examples of this are ethical concerns in medical research and technical limitations in astrophysics. However, simulations are by no means to be seen as a substitute for experiments, but rather as another way of gaining knowledge. A separate consideration of experiment and simulation in the sense of either-or is also too limiting. Rather, their strength lies in their combined use. If an experiment and a simulation are based on the same model, their results should be the same or at least very similar. A comparison of the results thus makes it possible to find errors, such as measurement and implementation errors. Furthermore, simulations offer convenient possibilities to find model parameters automatically (cf. Fig. 4). In this way, techniques of sensitivity analysis and optimization can be used to trace a cause-effect relationship backwards. This means, on the one hand, that the appropriate cause can be calculated for a given effect with a given model or, on the other hand, that a model can be calculated for a given effect with a given cause. This results in a multitude of new possibilities to gain scientific knowledge. A current example of a combined method is the development of the CFD-MRI method.12 Here, a flow measurement is first carried out with Magnetic Resonance Imaging (MRI) (Fig. 5). The generally noisy measurement results represent temporally and spatially averaged mean values. They are at the same time the solution of a fluid flow problem, which is characterized by a mathematical model with boundary conditions and associated geometry and can be described by the Navier-Stokes equations. The CFD-MRI method makes use of the knowledge of the model in order, on the one hand, to computationally remove the noise by means of numerical flow simulation (CFD, Computational Fluid Dynamics) and, on the other hand, to infer fine structures of the geometry from the averaged data. For this purpose, a parameterized CFD model is first created, in which the parameters describe the underlying  Krause, M.J. (2015). Charakterisierung von durchströmten Gefäßen und der Hämodynamik mittels modell- und simulationsbasierter Fluss-MRI (CFD-MRI). Jahrbuch der Heidelberger Akademie der Wissenschaften für 2014. Heidelberg: Universitätsverlag Winter. 12


Fig. 4  A model is a simplified representation of a causal relationship (diagram elements: cause and effect linked by the model; the experiment leading from an initial state to a final condition; the simulation leading from an initial condition to a simulation result; sensitivity analysis/optimization running in the reverse direction). Experiments and simulations are based on models, but at the same time also serve as a tool for developing a model in many scientific disciplines. Simulations can also be carried out “backwards” using sensitivity analysis and optimization techniques. In this way, it is possible to find suitable initial states or the parameters characterizing a model on the basis of experimental results. The shadow of the hare shows the limit, because it is not always possible to unambiguously determine the cause from the effect, even if the cause-effect relationship is understood. (Own illustration)

geometry and boundary conditions by means of a model. To calculate the parameters, an optimization problem is now solved which minimizes the difference between the measurement and parameter-dependent simulation results, taking into account the averaging during the measurement, and at the same time satisfying the model equations. Thus, a more finely resolved image of the flow velocities with associated geometry is obtained, which corresponds to the measurement results, eliminates measurement artifacts and is meaningful with respect to the fluid flow model. The importance of the CFD-MRI method only becomes clear when considering its field of application. Today, for numerous medical applications accurate knowledge of the flow dynamics (flow velocities, particle trajectories, pressures, wall shear stress, etc.) is a basic prerequisite for gaining knowledge, but also for diagnostics, medication and operation planning. However, neither measurement methods nor simulations alone are currently able to provide this data (Fig. 5).

Fig. 5  The CFD-MRI method schematically, which solves a topology optimization problem by a descent gradient-based method. 1st step: MRI scan g, edge/porosity f0. 2nd step: CFD simulation f(f0). 3rd step: calculate (a) the difference between the CFD and MRI results, J = (f(fi) − g)², (b) its gradient with respect to fi, and (c) from it new parameters for the edge/porosity fi+1. (Own representation)
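The three steps of Fig. 5 form a descent loop: simulate with the current edge/porosity parameters, compare with the MRI data, and update the parameters along the negative gradient of the misfit J. The following sketch only reproduces this loop schematically; the "CFD simulation" is replaced by a trivial stand-in operator and the gradient is approximated by finite differences purely for illustration, so none of this should be read as the actual CFD-MRI implementation.

```python
import numpy as np

def cfd_simulation(porosity):
    """Stand-in for the parameterized CFD forward model f(porosity).
    In the real method this would be a flow simulation; here a simple
    smoothing operator merely mimics 'flow field given a geometry'."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(1.0 - porosity, kernel, mode="same")

def misfit(porosity, g):
    """Step 3a: J = (f(porosity) - g)^2, summed over all voxels."""
    return np.sum((cfd_simulation(porosity) - g) ** 2)

def misfit_gradient(porosity, g, h=1e-6):
    """Step 3b: gradient of J, here crudely approximated by finite differences."""
    grad = np.zeros_like(porosity)
    for i in range(porosity.size):
        p = porosity.copy()
        p[i] += h
        grad[i] = (misfit(p, g) - misfit(porosity, g)) / h
    return grad

# step 1: 'MRI scan' g and an initial edge/porosity guess f0 (synthetic data here)
true_porosity = np.clip(np.linspace(-0.5, 1.5, 50), 0.0, 1.0)
g = cfd_simulation(true_porosity)
porosity = np.full(50, 0.5)

# steps 2 and 3, repeated: simulate, compare with the MRI data,
# and update the parameters by a projected gradient-descent step (step 3c)
for iteration in range(200):
    porosity = np.clip(porosity - 0.1 * misfit_gradient(porosity, g), 0.0, 1.0)

print(f"final misfit J = {misfit(porosity, g):.6f}")
```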

5 Challenges, Criticism and Outlook Challenges and criticisms regarding the further development and use of the concepts of mathematical modelling and numerical simulation are manifold and of scientific, technical as well as societal nature. From a purely technical point of view, one of the main challenges in the development of new numerical simulation methods is probably the dependence on hardware development. On the one hand, there is the rapid growth (cf. Sect. 4), but on the other hand, there is also the fact that most computer buyers do not use the hardware for numerical simulations but for playing computer games. Many new developments, for example powerful graphics cards or processors, are even driven by the computer game market. In practice, therefore, numerical schemes are developed to match the hardware and not vice versa. The above-mentioned example of the nasal breathing simulation, concerning the reduction of computing time due to technological progress, illustrates a current scientific challenge to the fields of mathematical modelling and numerical simulation. It is necessary to use the technological progress of computer development. For about 15 years, this has been


characterized by the fact that an increase in computing power can only be achieved by parallelism of the hardware and software. This has direct effects on mathematical modelling and numerical simulation, because problems have to be decomposed into independent subproblems in order to use the parallelism of the hardware. This becomes clear in the example of flow simulation, since a more complex physical model (Boltzmann formulation13) is better suited as a basic model for simulations on parallel computers than its supposed simplification (Navier-Stokes formulation14). Central to the success of a mathematical model and a numerical simulation methods in the application is a high-quality software development. This is characterized in particular by traceability and a high degree of validity. Considering the traditional publication culture with articles in journals, another challenge arises here, because the models and simulations are difficult to reproduce on paper. Important implementation aspects, which can be crucial for the benefit of the method, for example due to the program duration, are rarely published. In addition, comparability with alternative methods is not directly available. A group researching one alternative would have to implement the other method itself, which can take years of work. If research results were always published as open source software, the development of new models and numerical methods would be significantly accelerated and, as a consequence, the methods themselves would also deliver significantly more valid results, since they would be accessible to the scrutinizing eyes of many third parties. Other positive aspects of developing one’s own software as open source are the transparency and sustainability of research in general, and especially that of the academic training of young researchers. On the one hand, publishing the software promotes the motivation of all participants. On the other hand, it requires a much more comprehensible form of programming and documentation, which significantly simplifies the continuation of model and method development across generations of doctoral students.

 Krause, M.J. (2010). Fluid Flow Simulation and Optimisation with Lattice Boltzmann Methods on High Performance Computers: Application to the Human Respiratory System. PhD thesis, Karlsruhe Institute of Technology (KIT), Universität Karlsruhe (TH), Kaiserstraße 12, 76131 Karlsruhe, Germany, July 2010. Retrieved from http://digbib.ubka.unikarlsruhe.de/volltexte/1000019768, Section 3. 14  Ibid., section 4. 13


Such approaches that combine model, experiment, simulation and application are particularly promising and challenging for scientific progress (cf. CFD-MRI, Sect. 4). Interdisciplinary practical and theoretical expertise is required here, and it is essential that this be taught at universities in an interdisciplinary manner. For this purpose, an opening of the scientific fields towards mathematics and vice versa is necessary, and this far beyond the current situation. The first promising approaches to mastering this pedagogical challenge have emerged in recent years. For example, the podcast Modellansatz,15 launched by Sebastian Ritterbusch and Gudrun Thäter in 2013, is worth mentioning. Here, students, PhD students and university lecturers report on their modelling work. Often, final theses are vividly thematized that were carried out across disciplines. Another example is the school lab mathematics of the Karlsruhe Institute of Technology (KIT). There, interested students have the opportunity to playfully understand complex problems and models. Society often criticizes the inaccuracy or lack of realism of numerical simulations. If one thinks, for example, of everyday weather forecasts based on fluid flow simulations, this criticism is sometimes quite understandable. On the other hand, simulations are nowadays used to make decisions that save many lives, such as hurricane or tsunami warnings. If experiments are not possible, there is no alternative. But even if they are possible, it has to be considered that both, simulation and experiment, are based on models that assume simplifications. Thus, errors are to be expected in both cases. If simulations serve as a basis for decision-making in political processes, such as the simulation for the large-scale project Stuttgart’s new central station, further challenges arise: model assumptions must be presented in a transparent and comprehensible manner, results must be at least partially comprehensible to laypersons, and a minimum level of reliability must be guaranteed. If these and other challenges are overcome, the concerns will certainly become smaller in the future. Looking at the dynamic development of the last few years, numerical simulation is only at the beginning of its possibilities, both, as a prediction tool and as a key to understanding cause-effect relationships.

 Ritterbusch, S. & Thäter. G. (2013). Modellansatz Podcast, 2013. Retrieved from http:// www.modellansatz.de. 15


6 Conclusion Numerical simulations are used for approximate predictions under strictly defined conditions. They are based on mathematical models and represent causal relationships approximately in the form of algorithms and computer programs. In the scientific context, however, simulations are much more than mere prediction tools. Similar to experiments, they serve as building blocks for models themselves and thus enable the elucidation of causal relationships. Due to increasingly powerful computers, the importance of numerical simulation in science has grown steadily and rapidly over the last 50 years. Today, numerical simulations are considered an indispensable tool for gaining knowledge in many scientific disciplines. It is to be expected that they will continue to gain in importance at an increasingly accelerated rate in the future and that they will produce further surprising findings and technologies.

Final Part

Communication Cultures
Marcel Schweiker

The preceding parts presented the path from the desire for knowledge via methods, fundamentals and the inherent use of numbers and patterns to gaining knowledge for various disciplines and projects. This chapter deals with the communication cultures in sciences and humanities. Here, the focus is not only on a scientific publication as the “Endprodukt der Erkenntnisgewinnung”.1 On the one hand, this wider focus is chosen because science communication consists of more aspects than a scientific publication in the narrow sense and, on the other hand, the publication can serve as a starting point for new value chains of knowledge. The first part of this chapter reflects goals and methods of science communication, while the second part deals with current trends in knowledge communication and their consequences, and discusses the necessity, potentials and limitations of interdisciplinary projects.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

 “End product of gaining knowledge” (own translation) Ball, R. (2009). Wissenschaftskommunikation im Wandel—Bibliotheken sind mitten drin. In U. Hohoff & P. Knudsen (Eds.), dwdw (96), pp. 39–54. Frankfurt: Vittorio Klostermann, here p. 43. 1

M. Schweiker (*) Healthy Living Spaces lab, Institute for Occupational, Social, and Environmental Medicine, Medical Faculty, RWTH Aachen University, Aachen, Germany e-mail: [email protected] © Springer Fachmedien Wiesbaden GmbH, part of Springer Nature 2024 M. Schweiker et al. (eds.), Measurement and Understanding in Science and Humanities, https://doi.org/10.1007/978-3-658-36974-3_17


This chapter aims to present the commonalities of different disciplines and, with regard to the basic aim of science and humanities to increase knowledge about the world, to reflect on the concepts of “number” and “pattern”.

1 Reflection on Goals and Methods of Science Communication 1.1 Why Do We Communicate? (Goals) “Sinn und Zweck hat die Arbeit der einzelnen Wissenschaftlerlnnen insofern nur, wenn die Ergebnisse ihrer Arbeit (möglichst vielen) anderen Forschern zugänglich gemacht werden, damit die Resultate ihrer Bemühungen überprüft und kritisiert werden, in andere Forschungen einfließen, mit anderen Einzelanalysen zu Synthesen zusammengefasst, in die gemeinsamen Wissensfonds gespeist werden können etc.”2 (G. Fröhlich, 1994) “At its most basic level science communication can be thought of as those in the know informing those that are not.”3 (S. Illingworth, 2015)

These two quotations show that the goal of science communication as the transfer of knowledge as an ideal and material benefit is a) to inform, b) to contribute to the scientific discourse, c) to increase the overall existing knowledge and d) to make knowledge accessible. The transfer of knowledge is certainly one of the most i­mportant functions, but not the only one. Four functions are mentioned in the literature,4 and the transfer of knowledge can be assigned to the first of these four functions: 1 . Communication of the findings and their permanent availability, 2. Verification tool for priority and originality,  “In this respect, the work of individual researchers has meaning and purpose only if the results of their work are made available to (as many as possible) other researchers, so that the results of their efforts can be reviewed and criticized, can be incorporated into other research, can be combined with other individual analyses into syntheses, can be fed into the common funds of knowledge, etc“(own translation) Fröhlich, G. (1994). Der (Mehr-) Wert der Wissenschaftskommunikation. In W. Rauch, F. Strohmeier & H. Hiller (Eds.). Mehrwert von InformationProfessionalisierung der Informationsarbeit (pp. 84–95). Konstanz: Universitätsverlag, p. 85. 3  Illingworth, S. (2015). A brief history of science communication. Retrieved from http://blogs.egu. eu/geolog/2015/02/06/a-brief-history-of-science-communication/ (August 20, 2016). 4  Schirmbacher, P., & Müller, U. (2009). Das wissenschaftliche Publizieren—Stand und Perspektiven. CMS Journal, 32, pp. 7–12, here p. 8. 2


3 . Reputation and comparison, 4. Royalties. In the following, the first three functions will be discussed in more detail since the fourth point often plays a subordinate role in the scientific community.

1.1.1 Communication and Discussion As mentioned in the introduction, the written publication is only one part of scientists' communication, which is also referred to as the formal part of scientific communication. Publication thus contrasts with the informal part, which consists, for example, of communicating the desire for knowledge or an idea.5 A basic distinction can be made between science communication within a scientific community (scholarly communication) and between the scientific community and the non-scientific circles of society (science communication—see chapter Höfle).6 Thus, scientists not only communicate different aspects of their activity—starting with the desire for knowledge, e.g. in the form of project proposals—but also with different recipients. Regardless of the aspect and recipient, the primary purpose should be to communicate one's findings and discuss them—whether orally or in writing (see Sect. 1.2 below). The added value of the resulting discussions lies in “(a) the stimulation of ideas, source of motivation, encouragement to invest time, energy, one's “reputation” in a certain direction, in other words: orientation, (b) the avoidance of unnecessary repeated and redundant inventions, synergy effects, and at the same time intensification of competition, (c) the promotion of argumentative protection and criticism, or referred to as “evaluation”/quality control/selection”.7 The scientific publication as a special form of communication thereby forms the basis for disseminating and permanently safeguarding knowledge once it has been gained, and serves as a prerequisite for scientific research, as it enables researchers to build on previous work and to relate to one another despite the potential temporal and spatial distribution of the actors involved.8


1.1.2 Verification Tool In addition to the purpose of communication, the written publication, including the patent application, also serves as a recognised medium and means of proof (a) for the claim to have been the first to have had an insight or to have been the first to carry out an experiment, (b) to create transparency about the scientific activity that led to the insight, and (c) to show that the resources used were rightly received. These aspects play an important role for the following point: reputation.

1.1.3 Reputation Regardless of the discipline, it is in particular the written publication of one's own findings that enhances one's reputation and often serves as a means of comparison between individual scientists. Thus, publications play a decisive role in the acquisition of a professorship in many disciplines, even if the type and number of publications varies from discipline to discipline. The (well-known) problem of evaluating the quality of scientific work based on numbers, e.g. through the H-index or impact factor,9 and corresponding counter-movements10 will only be mentioned here, but not discussed further (see section Trends).
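Since the H-index is named here as an example of reducing scientific quality to a number, a minimal sketch of its definition may be useful: a researcher has index h if h of his or her papers have been cited at least h times each. The citation counts below are invented and only illustrate how very different publication records can collapse onto the same single number.

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# two very different publication records collapse onto the same single number
print(h_index([50, 40, 30, 5, 1]))   # -> 4
print(h_index([4, 4, 4, 4, 0]))      # -> 4
```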

1.2 How Is Communication Done? (Methods) The first scientific findings were communicated orally to people present and discussed with them in the same place and at the same time. In ancient Athens, this transfer of knowledge took place in the context of public debates, which are also described as democratizing knowledge.11 Oral communication makes it possible to respond to arguments in discourse, to answer them, and to confirm or refute them; for Plato, these aspects were already the decisive argument for preferring conversation and speech to writing.12 This was especially because he saw the danger that what was put

 Seglen, P. O. (1997). Why the impact factor of journals should not be used for evaluating re search. BMJ: British Medical Journal, 314(7079), p. 498. 10  Warnecke, T., & Burchard, A. (2010). Schluss mit der Salamitaktik. Retrieved from http:// www.zeit.de/wissen/2010-02/dfg-publications-research (September 5, 2016). 11  Illingworth, S. (2015). A brief history of science communication. Retrieved from http:// blogs.egu.eu/geolog/2015/02/06/a-brief-history-of-science-communication/ (August 20, 2016). 12  Kalmbach, G. (1996). Der Dialog im Spannungsfeld von Schriftlichkeit und Mündlichkeit. Berlin: De Gruyter, p. 86, p. 142. 9


down in writing could be misunderstood or misused, depending on the reader's ability and willingness to interpret.13 On the other hand, Aristotle argued that science proves itself in the written and clear fixation, in the possibility of re-reading and reproducing and in the repeatability of what has once been fixed.14 Written dissemination of knowledge, however, made it possible in later years (and to some extent still today) to make knowledge available only to privileged persons and thus to regulate it.15 Until the 17th century, the scientific book was the form of written communication and represented the most important medium for presenting one's life's work. However, it was expensive for the author and slow to produce, so it was of limited use in proving first discoveries. Additionally, it was also expensive for the reader and thus was not widely used.16 In the 17th century, the scientific journals founded in Paris and London expanded the media repertoire to include scientific articles.17 The same century also saw the emergence of the peer-review process, which is still used today, to check the credibility of the work of researchers who were unknown at the time.18 These Transactions, published for example by the Royal Society in England, were multidisciplinary to motivate researchers from different disciplines to communicate their respective findings.19 Another form of written communication is the direct message (formerly the letter) between individuals or groups of scientists. In the 17th and 18th centuries, these letters were often sent to so-called gatekeepers, who forwarded the messages.20

 Ball, R. (2009). Wissenschaftskommunikation im Wandel—Bibliotheken sind mittendrin. In U. Hohoff & P. Knudsen (Eds.), dwdw (96). Frankfurt: Vittorio Klostermann, p. 40. 14  Ibid., p. 41. 15  Illingworth, S. (2015). A brief history of science communication. Retrieved from http:// blogs.egu.eu/geolog/2015/02/06/a-brief-history-of-science-communication/ (August 20, 2016). 16  Fjällbrant, N. (1997). Scholarly Communication—Historical Development and New Possibilities. Proceedings of the IATUL Conferences, p. 7. 17  Schirmbacher, P., & Müller, U. (2009). Das wissenschaftliche Publizieren—Stand und Perspektiven. CMS Journal, 32, pp. 7–12, here p. 7. 18  Fjällbrant, N. (1997). Scholarly Communication—Historical Development and New Possibilities. Proceedings of the IATUL Conferences, p. 11. 19  Das, A. K. (2015). Scholarly communication. United Nations Educational, Scientific, and Cultural Organization. p. 7. 20  Fjällbrant, N. (1997). Scholarly Communication—Historical Development and New Possibilities. Proceedings of the IATUL Conferences, p. 5. 13


While the overarching goal, the why, and the communication by oral and/or written means of conveying one’s own findings can be regarded as independent of a scientist’s discipline, the manifestations, the how of publishing, are very different.21 In this context, it can be observed that the apparent diversity between the disciplines is present in detail, but recedes into the background compared to the commonalities already mentioned. As explained below, they can be justified in part as a result of the subject matter of science. First of all, it must be considered that in knowledge transfer, the knowledge to be conveyed must be staged in such a way “that its way of presentation increases the chance to be received by potential and desired addressees”.22 Therefore, it is essential to present new knowledge in such a way that the knowledge is understood, which in turn presupposes that the presentation corresponds to common patterns, e.g. in terms of how the results are presented to the intended addressees. For example, a recipient from the field of natural sciences currently still tends to expect to have the results presented in the form of figures and graphs and would be surprised by a lyrical presentation. In addition to the patterns mentioned in the previous section of the book, which presented individual projects, a further pattern term is thus added. The existing patterns in the way findings are presented did not arise by chance, let alone be attributable to preferences or abilities of individual scientists. Rather, differences in whether, for example, book publications are more common than technical articles in a discipline can be traced back to differences in the respective object of study and the resulting technical terms.23 Thus, due to the objectifiable object of investigation and the external observer role of the scientist, a physical finding can be described with the existing sharply defined technical terms in relatively few words—however, this description is often only comprehensible to those who know the technical terms. Discussions regarding the interpretation of these terms are rare, so the authors can assume that the relevant term is not interpreted differently by other scientists in the same discipline. In  Alexander von Humboldt Foundation. (Ed.). (2009). Publikationsverhalten in unterschiedlichen wissenschaftlichen Disziplinen. Beiträge zur Beurteilung von Forschungsleistungen (2nd ed.). Bonn: VS Verlag für Sozialwiss. 22  Antos, G., & Weber, T. (2009). Typen von Wissen—Begriffliche Unterscheidung und Ausprägungen in der Praxis des Wissenstransfers. Frankfurt am Main: Peter Lang GmbH Internationaler Verlag der Wissenschaften, p. 1. 23  Jahr, S. (2009). Strukturelle Unterschiede des Wissens zwischen Naturwissenschaften und Geisteswissenschaften und deren Konsequenzen für den Wissenstransfer. In T.  Weber & G. Antos (Eds.), Typen von Wissen—Begriffliche Unterscheidung und Ausprägungen in der Praxis des Wissenstransfers (pp. 76–98). Frankfurt am Main: Peter Lang GmbH Internationaler Verlag der Wissenschaften. 21


addition, the data collected serve as independent evidence of the knowledge conveyed.24 In contrast, an object of study such as human language requires the respective author to ensure that his or her own necessary definitions of a term reach the recipient and are not misunderstood. At the same time, it must be ensured that counter-­theses are included in the argumentation. For this purpose, the monograph appears more suitable than a short article, because it provides the authors with the required space.25 According to the object of investigation, not only the value of a technical article or a monograph differs26 but also the roles of number and pattern in the publication differ, as illustrated below with examples from the authors of this book. In literary studies (see chapter Lauer & Pacyna), numbers generally do not play a central role. When they do occur, then either as a precise contextual indication (e.g. period of origin, year of publication of the work), as a quantity indication in literary transmission relationships (handwritings, manuscripts, etc.), as a literary-­ scientific topic (e.g. the symbolism of numbers), or methodologically as a possible indication of narrative elements (e.g. in the analysis of narrative structures and literary plot programmes). In contrast, patterns and theoretical models take on a more important role. They are subject to the requirement, for example, to clarify or point out narrative principles that can be generalized across genres and thus to be able to illustrate and analyze so-called “deep structures” of a narrative more precisely. In historical science (see chapter Büttner & Mauntel) the importance of the number depends on the methodological approach and the research question. If no statistics are used, the importance is very low, with the exception of year numbers, which can be important for the temporal proximity or the succession of different events. However, temporal proximity does not yet imply causality—causality must be made plausible by the historian by means of argumentation. In this context, historians make increased or primary use of statistics only in certain fields (such as social and economic history). The recourse to superordinate theoretical models is by no means necessary; the historical sources always have a so-called “right of veto” over any theory, that is, they carry the greater weight. The need for theory is therefore assessed differently in historical scholarship: For some, it is indispensable as a source of ideas and a guiding instrument; others believe they can do without it or assign it a subordinate role.  Ibid., p. 79.  Ibid., p. 91. 26  Alexander von Humboldt Foundation. (Ed.). (2009). Publikationsverhalten in unterschiedlichen wissenschaftlichen Disziplinen. Beiträge zur Beurteilung von Forschungsleistungen (2nd ed.). Bonn: VS Verlag für Sozialwiss. 24 25


In ancient studies (see chapter Chronopoulos, Maier & Novokhatko) numbers, patterns and models have a special meaning. All the criteria of the literary studies described above are retained, but additional linguistic views are added. In order to analyze the style of an author, word usage must be evaluated. To do this, one must count how often a particular term/phrase/construction is used. To analyze terminology, one must look at frequency: e.g. how many times is the term “genre” used in the fifth century BC? In the fourth c. BC? in the third c. BC, etc.? (see further argumentation in the above chapter). In law (see chapter Hamann & Vogel and chapter Valta), numbers do not play an essential role. Numbers are sometimes queried by the facts of law. In tax law, the legal consequence can be expressed in a number, the tax liability. In each case, however, it is not the numbers as such that are important, but the legal evaluations behind them, which are not expressed in numbers. If the factual structure of law is understood as a pattern (see chapter Valta), this also has a high significance for publication. Furthermore, there is a great importance of patterns in “dogmatics”, the systematics of legal valuations/statements. For interpretation, there are more often several theses, which are then also often referred to as theory. In political science publications (see chapter Prutsch), numbers play an increasingly important role, both in methodological terms (empirical-analytical school of political science) and as a means of presenting (research) results. Statistics are of particular importance in this respect. The importance of numbers in the publication behaviour of modern political science is also evident in other respects apart from the content-related level: as in many other scientific disciplines, a visibly increasing quantity of specialist publications can be observed, which threatens to marginalise qualitative aspects and complicates specialist discourse due to the sheer number of work items to be taken into account in combination with the need for the most comprehensive possible publication activity of one’s own. Patterns play an important role in publications, especially in the form of theoretical models, for instance concerning the functioning of political systems. This is due to the need in political science for high-level abstraction of complex processes (electoral behaviour, political decision-making processes, etc.). In publications of financial econometrics (see chapter Halbleib), the specification of the significance level in the form of a numerical value (statistical significance), but also the effect size (economic significance) are important. The publication of new models is an important component. However, the new models can only be published if they can be simultaneously compared with already existing models with real data and shown (via above criteria) that they are better at accurately representing the properties of the real data or/and make better predictions. The predictions are evaluated statistically and economically.


In human genetics research (see chapter Molnár-Gábor & Korbel), the specification of significance levels is also an essential part of publications, and it requires the analysis of large amounts of data as well as, often, the sharing of data among different institutions (to make significance levels achievable in the first place). In addition, quantitative challenges in data protection lead to qualitative challenges in regulations. In terms of models and patterns, there is a need to establish generalizable regulations, as new technological developments in data processing and sharing emerge, to ensure patient rights and research freedom. In addition, theoretical models are also of high importance in biomedicine and genetics to represent hypotheses on molecular functioning. In neuroscience (see chapter Mier & Hass), numbers (from experiments or simulations) form the basis of the work. The more closely the modelling is based on real data, the more important statistical data are in the publications. For example, this information is very important when it comes to values from psychological studies or physiological data of humans (e.g. electroencephalography (EEG) or functional magnetic resonance imaging (fMRI)). In the case of more conceptual models (proof of principle), the numbers are, in the extreme case, only illustrative. Here, too, the role of patterns/models is a very central one, since the main work consists in setting up, simulating and validating theoretical models, all of which are based on a pattern (theoretically conceived and/or derived from data). In the field of psychology for the study of pain (see chapter Becker & Schweiker), numbers are the basis of any publication: phenomena are quantified by assigning numbers and statistically evaluated, and results are reported using significances and effect sizes. Theoretical models are important for the derivation of questions and thus also for the statistical testing of hypotheses. In architectural publications, numbers generally play a subordinate role, as comprehensible information about spatial dimensions is usually not presented in the text, but in attached drawings. However, in research on thermal comfort in the workplace (see chapter Becker & Schweiker), numbers are used in a manner comparable to psychology to quantify findings and demonstrate significance. These figures are also used to confirm or falsify model-like assumptions, e.g. about physiological processes or behavioural patterns. In geoinformatics (see chapter Höfle), numbers are important as a data basis for scientific analyses (e.g. measured values) and for describing the results, which is usually done quantitatively using numbers (e.g. descriptive statistics or spatial statistics). In many publications, geoinformatic methods are used to try to find patterns and causalities in sometimes large geodata sets (e.g. with machine learning) in order to be able to explain relationships or to falsify hypotheses or theories. In art historical contributions (see chapter Bell), the number plays no role or, in the form of dates or measurements, only a subordinate role.
For certain questions and tasks, such as catalogues raisonnés, the exact dates are of course very important in order to trace an artistic development. The further significance of the number in art history is discussed in detail in the chapter itself (see chapter Bell). In the publications of mathematics (see chapter Krause), the number logically plays a very large role. On the one hand, it serves to show that one method is better than another, for instance because, with the new method, an error is smaller by x. On the other hand, it is also used for qualitative statements, e.g. about the model being independent of the parameter y. At the same time, patterns are also very important, because the goal is to identify patterns and finally to formalize them (to find a suitable model that explains the pattern). In summary, these examples confirm that the meaning of numbers and patterns depends on the respective object of investigation and can also be differentiated in part within a subject discipline, depending on the type of analysis. Even if the research objects and methods of the respective disciplines lead to the predominance of certain tendencies, it can be stated that no clear separation between the disciplines is possible on these points.

2 Trends in Knowledge Communication and their Consequences

“Das Bindeglied der Wissenschaftskommunikation über die Disziplinen hinweg war jenseits der Unterschiede in Form und Umfang, Frequenz, Sprache oder Autorenschaft die Materialität des wissenschaftlichen Outputs als gedrucktes Buch oder als Beitrag in einer gedruckten Zeitschrift”27 (R. Ball, 2016)

27 “The link of science communication across disciplines, beyond differences in form and scope, frequency, language, or authorship, was the materiality of the scientific output as a printed book or as a contribution to a printed journal.” (own translation) Ball, R. (2016). Digital Disruption—Warum sich Bibliotheken neu positionieren müssen. Forschung & Lehre, pp. 776–777, here p. 776.

2.1 Trends

While the number of publications in book form or as scientific articles continues to rise, science communication is in a state of upheaval, which can be partly explained by the following three developments28:


1. the removal of technical barriers through the Internet,
2. the spread of the computer and the associated possibilities of producing one’s own publications, and
3. summarized under the term beyond paper: the presentation of findings by means of audio, video and other media.

In addition to the existing forms of publication, i.e. scientific articles, conference papers and books, there are new forms of disseminating findings via social media, blogs, grey publications (e.g. self-published publications without any peer review), podcasts, forums, software programs, open source journals with comment functions, platforms for specialist disciplines,29 and even the (mandatory) publication of the data collected. At first glance, the latter provides numerous advantages: for example, the publication of data and associated programs drastically increases the citation rate. In addition, such publication makes freely available and disseminates data that would otherwise only be accessible to a limited number of well-paying or well-networked scientists. In the case of well-paying scientists, this can be data from experiments that can only be collected in special facilities and with special measuring devices (see chapter Becker & Schweiker), data that are very expensive and complex to process, or high-frequency financial data that are not freely available (e.g. one month of data on every single trade of the New York Stock Exchange costs 750 US dollars) (see chapter Halbleib). In addition, there is an increase in availability and reproducibility. At the same time, the publication of data must be viewed critically due to its sensitivity, especially in the area of personal health data (see contribution Molnár-Gábor & Korbel). The role of these new forms of publication and their use varies not only according to the discipline, but also to a large extent according to the extent to which individual scientists engage with them. In addition, there are other forms of communication that are less often referred to as publications, yet reveal the thoughts of scientists to at least one other party: in many fields it is necessary, e.g. for the acquisition of third-party funding, to write a project proposal in which the background, idea, hypothesis, methodology and relevance are explained. At the end of the project, the results are then to be presented in a project report. Another form of early publication of one’s own intentions and methods is the pre-analysis plan required by top medical journals, which describes in detail how the data to be collected will be analysed and evaluated even before data collection.30

28 Schirmbacher, P. & Müller, U. (2009). Das wissenschaftliche Publizieren—Stand und Perspektiven. CMS Journal, 32, pp. 7–12, here p. 11.
29 Degkwitz, A. (2016). Überholtes Geschäftsmodell? Bibliotheken in der digitalen Transformation. Forschung & Lehre 2, pp. 770–772, here p. 772.
30 Glennester, R., & Takavarasha, K. (2013). Running randomized evaluations: A practical guide (STU-Stud). Princeton: Princeton University Press.


In addition to publicly available communications, there are other levels at which communication takes place. These include, for example, correspondence and e-­mails between scientists, the peer review process, online meetings, informal conversations “in the corridor” or “during the coffee break at the conference”, lectures, so-called literature clubs (joint discussions of one’s own and external work) and retreats (joint seminars in a secluded environment that is as free of communication as possible from the outside world) at group or institute level. Finally, the number and importance of international and/or interdisciplinary working groups, associations and scientific societies are increasing in many disciplines. In general, a significant increase in the number of so-called “interdisciplinary working groups” can be observed at universities, whose research is thematically interdisciplinary and interdepartmental. In addition to humanities and natural science societies, which were first founded in the nineteenth century to discuss current research topics, there is also a marked growth in international societies in the humanities, such as the International Arthurian Society (IAS), which brings together scholars from all over the world who are interested in Arthurian literature. With international congresses (every three years), national (section) conferences (between the international congresses) and, for example, the publication of an annual bibliographical bulletin, the IAS not only responds to the wide range of the so-called Matière de Bretagne (legends about King Arthur), but also bundles the diversity of approaches and questions in the field of research into Arthurian literature worldwide. A similar picture can be exemplified for the natural sciences in the so-called IEA EBC Annexes (International Energy Agency, Energy in Buildings and Communities Programme). In these annexes (e.g. Annex 69 on adaptive comfort models in low-energy buildings, which is running in parallel to the joint project Thermal Comfort and Pain (see chapter Becker & Schweiker)), scientists from all over the world and with different professional backgrounds (usually between 30 and 60 people from more than 15 nations) meet twice a year over a period of four years to summarize the current trends and problems in their own field and to produce guidelines for future developments. Similarly, there are some international organizations in geoinformatics that are made up of working groups (WG) on specific research topics. These WGs organize workshops, conferences, meetings, and special issues, among other things, and are partly made up of people from different disciplines.


2.2 Consequences

The trends outlined above make clear that both the opportunities to develop one’s own ideas and methods for gaining knowledge and to publish the knowledge gained, as well as the corresponding discussion platforms, have greatly increased in number and breadth. This prompted some authors to describe the development of science communication from a purely one-way publication system to a two-way communication system as partially completed, also with regard to communication with the non-scientific public.31 Notably, such a development was called for as early as 1994.32 In this context, the trend towards open-access provision of knowledge is leading to a further development in which not only the results are made available to and discussed with the general public, but also the questions to be solved and the knowledge to be gained are discussed—see, for example, Citizen Science (see the Höfle chapter) and crowdfunding.33 Whereas until recently findings were only communicated after they had been gained, there is now an unspoken obligation to communicate one’s own work continuously. This creates discussion forums that force people to think about an issue and react to it, while the interim-balance function of the classic publication recedes into the background. Together with the necessity of project proposals, the increase in the number of publications for reputation, and the ubiquity of forums and e-mail requests, the inherent desire of scientists to communicate can thus turn into either a feeling of compulsion to communicate or a great potential for the dissemination of one’s own ideas, thoughts, and findings. Together with the multitude of communication channels, this increases the probability that published knowledge is not perceived. However, “erst durch das Lesen und unser Mitdenken wird das darin Geschriebene lebendig”.34

31 Trench, B. (2008). Towards an Analytical Framework of Science Communication Models. In D. Cheng, M. Claessens, T. Gascoigne, J. Metcalfe, B. Schiele, & S. Shi (Eds.), Communicating Science in Social Contexts: New models, new practices (pp. 119–135). Dordrecht: Springer Netherlands, p. 4.
32 Fröhlich, G. (1994). Der (Mehr-) Wert der Wissenschaftskommunikation. In W. Rauch, F. Strohmeier, H. Hiller & C. Schögl (Eds.), Mehrwert von Information-Professionalisierung der Informationsarbeit (pp. 84–95). Konstanz: Universitätsverlag, p. 93.
33 Illingworth, S. (2015). A brief history of science communication. Retrieved from http://blogs.egu.eu/geolog/2015/02/06/a-brief-history-of-science-communication/.
34 “it is only through reading and our thinking that what is written comes to life” (own translation) Ballod, M. (2009). Wer weiß was? Eine synkritische Betrachtung. In T. Weber & G. Antos (Eds.), Typen von Wissen—Begriffliche Unterscheidung und Ausprägungen in der Praxis des Wissenstransfers (pp. 23–30). Bern: Peter Lang, p. 26.


For example, less than half of published articles have been cited by another article within five years.35 Along with these trends, the role of the classical gatekeepers, typically the single editor or the one to four reviewers in the peer-review process, is increasingly being replaced by the public perceiving the publication: “publish first, filter later”.36 The goal is to prove to be the first, while leaving the reflection to others. Due to the sheer number of types of communication and published knowledge, however, the individual scientist must also select and develop indicators for the evaluation of the communicated findings. These can be indicators based on figures such as the number of citations, sales figures, or the ranking of the journal. However, it should be noted here that a paper must first be discovered and recognized by other scholars in order to score on many of these indicators. Meanwhile, indicators can also be reviews, subjectively assessed originality, comprehensibility, quality of the knowledge gained, continuation by other scientists, elaborateness of the presentation or the reputation of the publisher/journal. At the same time, the following questions remain to be answered: (1) how does quality control work in the new forms of communication, e.g. the citation of blog entries; (2) to what extent will electronic sources specified in publications still be available in the distant future37 (on the one hand, the author has already come across several examples where references (links) specified in books are no longer available; on the other hand, printed sources are not available without restriction either, because they have been destroyed or because access restrictions can only be circumvented by means of payment); (3) what does this change mean for individual disciplines, not only long-established ones such as philology and philosophy, but also modern, newly emerging disciplines such as geoinformatics; and (4) to what extent do the new forms of publication serve not only to increase one’s own visibility, but also count, for example, in evaluation processes for the awarding of grants, project funding or professorships. It is hoped that in the end it will not necessarily be the quantitatively strongest, loudest or flashiest information and theories disseminated on multiple channels that will prevail, but that ways will be found to reward, recognize and disseminate the highest quality ones.

35 Jacsó, P. (2009). Five-year impact factor data in the Journal Citation Reports. Online Information Review, 33(3), pp. 603–614, here p. 611.
36 Kohle, H. (2015). Publish first-filter later. Archäologische Informationen. Early View, pp. 109–112.
37 On the sustainability of electronic media (Nachhaltigkeit elektronischer Medien), see also Speer, A. (2016). Wovon lebt der Geist? Über Bücher, Bytes und Bibliotheken. Forschung & Lehre, pp. 766–768, here p. 768.


3 Potentials and Limits of Interdisciplinary Communication

“Science today is an enormous repository of disconnected information”38 (R. Refinetti, 1989)

Apart from all the differences between individual disciplines in the way they communicate and publish, interdisciplinary publications have the potential to bring order into the supposed chaos of existing knowledge, to connect unconnected knowledge and thus to make an important, often underestimated, contribution to measuring and understanding the world. It is precisely when disciplines that establish relationships in many directions and disciplines in which specific knowledge is available in a deep hierarchical structure39 are brought together that their respective strengths can be combined and exploited in order to address complex issues. The coordination processes with regard to terms that are used in common, but initially (or perhaps still at the end) have different connotations, represent—if published—a significant treasure to support other scientists in their interdisciplinary communication. This search for and definition of a common language provides, on the one hand, a necessary basis for collaborations and, on the other hand, can lead to the realization that the same terms have different meanings. Thus, through the terms described and defined from different angles in the glossary of this book, such as “number” or “model”, this book aims to contribute to the wider research community. However, prerequisites for each individual project are, on the one hand, the openness of the respective individual to question his or her own terms and methods and to acknowledge the “otherness” of the other discipline and, on the other hand, successful communication between the participants and beyond.

38 Refinetti, R. (1989). Information processing as a central issue in philosophy of science. Information Processing & Management, 25(5), pp. 583–584, here p. 584.
39 Jahr, S. (2009). Strukturelle Unterschiede des Wissens zwischen Naturwissenschaften und Geisteswissenschaften und deren Konsequenzen für den Wissenstransfer. In T. Weber & G. Antos (Eds.), Typen von Wissen—Begriffliche Unterscheidung und Ausprägungen in der Praxis des Wissenstransfers (pp. 76–98). Bern: Peter Lang, p. 82.


At the same time, authors of interdisciplinary work can encounter limitations, especially if, for example, there are people on a selection or review committee who are unfamiliar with the “communication culture” of the respective other disciplines and only use the patterns common in their own disciplines, such as the number of authors per article or the typical number and type of publications, to assess achievement. Thus, depending on the discipline, the value of a qualifying paper, book chapter, or journal article varies from being essentially important to being classified as a “nice accessory.”40 As a result, existing indexes may be meaningful within disciplines, but not comparable across disciplines. This is partly because the purpose and nature of knowledge transfer is different. In addition, there may be formal obstacles in some cases. For example, this book had to be assigned to a single discipline for organisational reasons, since the publishing and distribution industry is not (yet) prepared for interdisciplinary work. In summary, it can be said that the different publication cultures represent both an opportunity and a challenge.41

4 Conclusion

In addition to the classic publication channels, i.e. the book and the scientific article, a variety of ways have developed in recent decades to communicate the findings, methods and ideas obtained within and outside the sciences and humanities and to put them up for discussion. While the communication channels of the individual disciplines are consequently converging, differences remain, which have their purpose and can be justified by the respective object of investigation and goal. These differences should not fall victim to the desire for standardisation, quantifiability and comparability of disciplines, but should be accepted, recognised and made known. At the same time, there is a need for methods and evaluation criteria that prevent the quantitatively strongest, loudest, flashiest information and theories disseminated on multiple channels from prevailing in the end.

40 Alexander von Humboldt Foundation. (Ed.). (2009). Publikationsverhalten in unterschiedlichen wissenschaftlichen Disziplinen. Beiträge zur Beurteilung von Forschungsleistungen (2nd ed.). Bonn: VS Verlag für Sozialwiss.
41 Jahr, S. (2009). Strukturelle Unterschiede des Wissens zwischen Naturwissenschaften und Geisteswissenschaften und deren Konsequenzen für den Wissenstransfer. In T. Weber & G. Antos (Eds.), Typen von Wissen—Begriffliche Unterscheidung und Ausprägungen in der Praxis des Wissenstransfers (pp. 76–98). Bern: Peter Lang, p. 93.


Preferably, procedures will be found to distinguish, recognize, and disseminate the highest quality. This requires the ambition to develop, communicate and recognise something worth communicating, despite the existing pressure to publish and communicate. Finally, there is a need to talk and communicate with each other in order to promote mutual understanding of peculiarities and commonalities, so as to be able to offer something to society not only individually but also collectively, thereby increasing society’s knowledge and understanding.

Conclusion: Measuring and Understanding the World Through Science

Mathias J. Krause and Susanne Becker

United in their motivation to better understand and describe the world, scientists from a wide range of disciplines search for new insights every day. At first glance, the approaches chosen on the way to gaining scientific knowledge seem to be very different. However, terms such as “number”, “pattern” or “model” appear in the communication of results in almost all fields of science. This suggests that nowadays a quantifying approach that focuses on “numbers” and “measures” is of particular importance. In this book, the 23 young scientists of the Heidelberg Academy of Science (HAdW) discussed the use of numbers and measurement methods in a wide variety of disciplines, as well as the opportunities and limitations of quantifying methods to arrive at scientific insights. In particular, differences and similarities in the methodology used, as well as in the communication culture of the various disciplines, should become clear. To this end, the 23 young scientists presented the goals of their research work from the perspective of their field of science and reported on the methods usually used to obtain insights.

The translation was done with the help of artificial intelligence (machine translation by the service DeepL.com). A subsequent human revision was done primarily in terms of content.

M. J. Krause (*), Institute for Applied and Numerical Mathematics and Institute for Mechanical Process Engineering and Mechanics, Karlsruhe Institute of Technology, Karlsruhe, Germany
S. Becker, Heinrich Heine University Düsseldorf, Düsseldorf, Germany


In order to be comprehensible beyond the respective subject boundaries, the reflection on the approach of one’s own discipline was exemplified by the interdisciplinary research project funded by HAdW. Central keywords such as “number”, “pattern” and “model” were analyzed in the individual contributions with regard to the meaning attributed to them in the respective disciplines. These analyses and comparisons clearly showed that communication is a central issue and that finding a common language is a crucial success factor of interdisciplinary work. Therefore, the discussion of differences, similarities, and challenges in communicating scientific results in the various disciplines and within interdisciplinary projects was carried out in a separate chapter (see chapter Communication Cultures).

1 Scientific Disciplines Involved

Social sciences, literature, natural sciences, economics, engineering, political science, law, philology, medicine, physics, architecture, art, psychology, history, biology, bioinformatics, geoinformatics, econometrics, legal linguistics.

2 Considered Objects of Investigation

Currents, human beings, perception of the world and poetry of medieval authors, figures from experiments or simulations on psychological phenomena, politics and power, politics and money, text and interaction of people, human brain, protection and exchange of personal data, models for spaces, financial risks, thermal comfort, pain.

3 Conclusions

In the following, the most important results are summarized and nine conclusions are derived from them.

Curiosity to understand and explain the world is the unifying driving factor of scientists, despite the diversity of questions. Curiosity, the desire to understand the world (better and better), is generally regarded as a meaningful human virtue. For scientists, it is the main driving force. This is also the case for the 23 young scientists at HAdW, who are jointly striving for scientific knowledge in 14 interdisciplinary projects on the topic of “Measuring and understanding the world through science”.


The description of the research activities leaves no doubt about this driving force. The generalization that the pursuit of knowledge unites science is not contradicted by the great diversity of the scientific disciplines considered here, which look at the most diverse objects of investigation.

Past, present, future, realism, abstraction, and generalization—understanding the world needs many aspects. The enormous breadth of the objects of investigation is reflected in the question of temporal relationships and the degree of abstraction. The aspirations of researchers are neither uniformly directed towards the future, nor are events that have already been completed considered exclusively. Nor are abstraction and generalization of cause-effect relationships goals that are pursued unreservedly in all disciplines, even if both are goals of research activities in most disciplines. If one considers, for example, the project of Büttner & Mauntel (see chapter Büttner & Mauntel) on the understanding of medieval authors’ perception of the world, this illustrates a matter of pure comprehension and understanding. Understanding abstract structures in mathematics (see chapter Krause) does not require any reference to reality and leaves a future use open, at least initially. Then again, past texts and the cultural phenomena described in them are studied in order to better understand contemporary literature and culture and thus also to offer a reasonable view of the future (see chapter Lauer & Pacyna). In the natural sciences, and especially in engineering, a newly gained understanding of an issue usually offers starting points for developing better technical devices, more energy-efficient processes or more effective medication (see chapters Krause and Becker & Schweiker). Sometimes an understanding of mathematics developed centuries ago comes into play. For example, many algorithms for encrypting data were known long before the development of the first computer. These examples and the preceding chapters show that scientific progress does not follow a uniform and usually not a predictable path. Looking into the future is just as important as working through what has already been done, just as a reference to reality and abstraction form necessary counterweights.

The borders between quantitative and qualitative approaches are blurry. The projects of the young researchers illustrate that quantitative approaches are omnipresent in many disciplines (see chapters Krause, Mier & Hass, Molnár-Gábor & Korbel). The core of quantitative approaches is to capture a research object by a number, based on specific assignment rules. The repeated application of such assignments (for example, by means of measurement) builds the basis of experiments, whether of classical physical nature or purely virtual through simulations. Using such approaches, a more or less extensive data set is generated.


As a rule, such data is then subjected to statistical analysis or used as the basis for further mathematical modelling, which in turn abstracts and connects objects. In contrast, there are qualitative methods, which are particularly represented in the humanities (see chapters Lauer & Pacyna, Büttner & Mauntel), but also find their application in areas of the social sciences and engineering (see chapter Becker & Schweiker). Typical qualitative methods are interviews, descriptive behavioural observations, individual case analyses and qualitative content analyses. Common to these methods is an open approach that can change during the research process. The aim is to discover previously unknown phenomena or facts and to capture subjective perspectives. Through the qualitative analysis of historical events, for example, the perception of the world by medieval authors or historical cultural phenomena and their significance for the present age are extracted (see chapters Lauer & Pacyna, Büttner & Mauntel). While quantitative approaches often aim at generalization in order to describe and compare a phenomenon across individuals, qualitative methods aim at capturing specific details of a phenomenon in its full depth. However, the interdisciplinary work of the young researchers here shows that the lines between such quantitative and qualitative methods are hard to draw. For example, qualitative methods are viewed on a meta-level with the aim of generalization, or quantified perceptions of humans are supplemented by subjective descriptions. Qualitative descriptions are often in turn put into categories and thus in a certain way transferred into a quantitative view. However, quantitative results only gain their meaning through explanatory additional qualitative descriptions. The projects of the young scientists show that the often-described deep divide between the various scientific fields does not exist in this form, that differences can be overcome and that the disciplines can learn a great deal from each other.

Numbers are an integral part of all disciplines, but their meaning and use vary from auxiliary variables to the complete quantification of the object of research. The common image of science almost inevitably generates notions of extensive use of numbers and constant quantification. While the use of numbers and quantification are a central part of everyday scientific life, especially in the natural sciences and engineering, the projects presented here show that numbers do play a role in almost all disciplines, but that this role can take very different forms. At one extreme, there are, for example, the study of literature and the historical sciences, where numbers are used for contextualization and specification or are even viewed symbolically or suspiciously. At the other extreme is mathematics, a science about and with numbers, which, however, usually views them completely detached from real objects. Fields of research in between use numbers to describe real objects and phenomena and thus make them tangible. Such observations can be, for example, the readings of sensors of various kinds (see chapter Höfle), prices on financial markets (see chapter Halbleib) or the perception and sensation
of humans (see chapter Mier & Hass, Becker & Schweiker). The latter shows an interesting area of tension that often occurs in areas of the social sciences, psychology and medicine. Subjective perceptions, which can hardly be compared between persons, are to be put into numbers, quantified and considered statistically. Such allocations of numbers to perception provide the necessary methodological basis for being able to investigate the mechanisms of phenomena such as empathy, pain, and comfort. At the same time, they raise the question of the extent to which such allocations are comparable between people and thus how meaningful they are. It is undisputed, however, that the use of numbers for quantification was and is central to many important scientific insights in the social sciences and psychology.

Assignment processes, modelling, pattern recognition and classification can be found in most of the research projects considered; they are the core of quantitative approaches to gaining knowledge. Numbers only acquire meaning and relevance through an assignment. The process of assignment, in turn, is an essential part of any modelling, pattern recognition, and classification, the creation of which,1 or at least the understanding of which, is essential for gaining knowledge in almost all of the research projects described. Once the assignment has been made, relationships can be established in models by introducing an arithmetic between numbers, i.e. ultimately between objects. In a further step, this then allows, for example, predictions via simulations (see chapters Krause, Halbleib, Mier & Hass). Similarly, patterns can be recognized more easily and objects can be divided into classes, which goes hand in hand with having found an explanation or understanding of interrelations between objects. In studies of literature, for example, models serve abstraction and thus enable categorization (see chapter Lauer & Pacyna).

Measurement provides quantitative data and is therefore extremely important for the increase of knowledge in many disciplines, but it varies in its nature. The objects explored by the young scientists, which the projects seek to understand and explain, are extremely diverse in their nature and shape. From the perception of the world by medieval authors (see chapter Büttner & Mauntel) to comfort and pain (see chapter Becker & Schweiker), politics and power (see chapter Prutsch) to the modelling of neuronal functions and perfused vessels (see chapters Mier & Hass, Krause), diverse topics are represented in the projects. Despite this enormous diversity, what most research projects have in common is that understanding is preceded by some kind of measurement. However, measurement is not always an integral part of the research.

1 “Creation” here is meant to imply that such number-object mappings do not exist naturally in the first place.


In mathematics, for example, measurement is usually not performed in the sense of number-object mappings; rather, the basis of measurement is provided in the form of models and theoretical considerations. The form that measurement takes varies as much as the research objects themselves. This measurement is not always linked to numbers and quantification, as the term suggests. Measurement can also be the description of the perception of authors by analyzing historical texts (see chapter Büttner & Mauntel), as well as the counting of occurrence frequencies (see chapter Lauer & Pacyna), and also the precise recording of temperatures (see chapter Becker & Schweiker) or the recording of brain activity (see chapter Mier & Hass). These measurements are intended to manifest an object of investigation and thus to enable understanding. Sometimes real references are made to observable objects in our world (e.g. the simulation of blood flow in the aorta, see chapter Krause, or simulations for predicting and estimating financial data and prices, see chapter Halbleib), but observations can also be abstract (see chapter Krause). However, the aim is always to capture the object of investigation and, through this, to make it accessible to others, whether experts or laypersons (see chapter Communication Cultures).

A limitation of quantitative approaches is a loss of complexity. In the assignment of real objects to numbers, a limitation of quantitative approaches comes to light, because a number practically never expresses the entire state of a real object. As already mentioned above, this becomes particularly clear in the attempt to assign numbers to human perception and sensation, which are then in turn put into categories (see chapters Becker & Schweiker, Mier & Hass). While such quantitative approaches allow comparison across individuals and groups of individuals, the complexity of a perception is lost. For example, while the perceived intensity of a pain stimulus can be estimated on a scale commonly used in clinical practice (e.g. from 0 “no sensation” to 10 “strongest pain imaginable”), this says nothing about how unpleasant this stimulus is perceived to be and what a person would do to stop or avoid it. Also, discovered correlations between genotype and phenotype, as well as the quantification of a risk factor of suffering from a serious disease in the future, do not convey what subjective feeling the individual patient associates with this information. Nor does this information convey to what extent an individual patient is willing to release his or her personal data for any kind of data exchange (see chapter Molnár-Gábor & Korbel). This problem is also evident in the evaluation of scientific performance on the basis of the number of publications, which does not necessarily indicate the quality of the knowledge gained, but is common practice in the scientific community (see chapter Communication Cultures). Thus, information is lost. As long as this information is not essential for answering a question, the loss is acceptable.
It becomes problematic when the scientist is not aware of this loss of attribution and confuses his or her numbers, modelling, pattern recognition and classification with reality (numbers as a basis for policy decisions, see chapter Prutsch). However, the scientist must not only be aware of this loss of complexity, but also communicate the related problems in a way that is understandable to other scientists and laypersons (see chapter Communication Cultures).

Communication is decisive for the success of interdisciplinary research projects. The complexity of all the scientific questions considered here requires expertise from several scientific fields in order to answer them (see chapter Introduction). The approach chosen to gain scientific knowledge must therefore be comprehensible for the disciplines involved (see chapter Communication Cultures). Quantitative approaches facilitate communication because the focus is on numbers that are associated with the objects under investigation, thus enabling an extremely compact exchange of information. However, information is lost as a result, as described above. In addition, quantitative approaches are not useful in all areas, e.g. when a phenomenon has to be understood and described in its full depth rather than in its breadth (see chapter Communication Cultures). Finding a beneficial compromise here is a major challenge of interdisciplinary work. This can only be done through successful communication. Often the beginning of such successful communication is finding and defining common terms as a common language. However, the effort to find common terms can also lead precisely to the realization that terms are understood and used very differently. It is precisely this recognition, and its documentation, as has happened here, for example, for the terms “number”, “model” and “pattern”, that can be a necessary and robust basis for working together on scientific issues. However, successful communication goes beyond communication within research projects. Scientific findings must also be communicated to the broader research community and to society, e.g. in order to justify the use of tax funds in research or to bring new research findings into application.

Interdisciplinary work can lead to unexpected synergies and real scientific progress. Interdisciplinarity is currently a much-invoked keyword in the scientific world, both nationally and internationally. However, it is common for interdisciplinary projects to involve collaboration between very close disciplines, such as computer science with mathematics or psychology with medicine. This seems to be at least partly due to the problem of communication in interdisciplinary contexts. While the project partners must already have found a common language at the time of application, reviewers are not necessarily familiar with this use of language, which in the worst case leads to misunderstanding.


The HAdW as a funding institution with the 14 projects presented in this book is an exception here. Within the projects, for example, psychology comes together with engineering (see chapter Becker & Schweiker) and physics (see chapter Mier & Hass) and molecular biology with jurisprudence (see chapter Molnár-Gábor & Korbel). Outside the projects, the young scientists worked out a common understanding and approach to the questions of measuring and understanding the world through science. Among others, the social sciences, studies of literature, natural sciences, economics, engineering, political science, law, and history are represented. Both the individual projects and the joint work illustrate that true interdisciplinary research has to struggle with obstacles such as different specialist languages and research approaches, but—perhaps precisely because of this—provides important new impulses, broadens the horizons of all those involved and thus allows new ideas and perspectives to emerge. It is precisely this “looking beyond one’s own nose” that provides important impulses to go beyond the boundaries of one’s own field. The mathematical modelling of neuronal networks to understand social phenomena such as empathy, for example, is an interesting mutual complement (see chapter Mier & Hass). While approaches from physics are applied to real conditions, the existing methodology of psychology is fundamentally expanded, enriching the explanation and prediction of behavior.

Glossary

In this chapter, terms are defined that are important for the understanding of individual chapters. The definitions were written by the individual authors and deliberately not coordinated among themselves. They also have no claim to universal validity. Thus, in some cases, several definitions of the same term from the perspective of different projects are found. Abstraction  From Latin abstrahere: ‘to subtract, to separate’; a conceptual (cognitive) process of reducing the property variance of a set of objects by selection and prototyping. (Friedemann Vogel, Hanjo Hamann) Adaptation processes  Adaptations of the human body or mind to deal with dynamically changing conditions inside and outside one’s own body. Within thermal comfort research, a distinction is made between physiological, behavioural and cognitive adaptation processes. An example of a physiological adaptation is the more effective sweating of the body after prolonged exposure to warm temperatures. Behavioral adaptation processes are evident in changes in the interaction between humans and their environment, such as changes in the degree of clothing. Finally, cognitive adaptation processes describe a change in perception of thermal conditions due to a change in expectations or a different perception of one’s ability to control. (Marcel Schweiker) Adaptation processes  In the field of pain research, adaptation processes are distinguished on the one hand according to their time frame and reversibility and
on the other hand according to the level of adaptation. For example, short-term, immediately reversible stimulus-induced processes in the peripheral nervous system are distinguished from long-term changes caused by chronic pain in the central nervous system, e.g. at the level of the brain. Cognitive-emotional processes are also distinguished from this, such as successful coping strategies, which can limit the extent of sensitization processes. (Susanne Becker) Annotation  As a process, the term “annotate” refers to the incorporation of data, interpretations, and systematic labels to texts, images, audio, or video material that may pertain to the annotated object as a whole as well as to sub-elements of it that are recognized and determined as entities to be annotated. Annotation aims to enable or facilitate the understanding and reception of the annotated object and/or to enable systematic comparison between different objects or elements and the collection of quantitative data from such comparisons. (Stylianos Chronopoulos, Felix Maier, Anna Novokhatko) Anonymization  Changing personal data in such a way that they can no longer be assigned to a person. (Jan O. Korbel, Fruzsina Molnár-Gábor) Artefact  Material object that has been created or altered by human artistry or labour and thus has a ‘social life’ (working basis of SFB 933 “Materiale Textkulturen”) (Christoph Mauntel) Artefact  Objects, actors, or circumstances of the lifeworld created by human behaviour. (Friedemann Vogel, Hanjo Hamann) Artefact  A feature in a set of measurement data that is not attributable to the observed relationships, but to errors or biases in the collection, measurement, or analysis. Artefacts that arise from human perceptual or cognitive structures are the subject of psychological research. (Joachim Hass) Balancing  Comparing the intended realization of two or more principles in a concrete situation and determining a proportionate balance. (Matthias Valta) Bootstrapping  A procedure that creates random samples by replacing and swapping observations in the original dataset. (Roxana Halbleib) Causality (Statistical)  describes a cause-effect relationship between different random variables. (Roxana Halbleib) Chronology  is the representation of temporal sequences e.g. in the context of yearly counts and calendars or events. (Peter Bell) Cloud computing  Provision of IT infrastructure and IT services such as storage space, computing power or application software as a service via a computer network such as the Internet. (Jan O. Korbel, Fruzsina Molnár-­Gábor) Commensurability  Quantitative Comparability of Two Parallel Prototypes (Friedemann Vogel, Hanjo Hamann) Complexity  In mathematics and computer science, the complexity of an algorithm is understood to be an estimate of the effort (e.g., computational effort, memory
effort) required to solve the problem or to execute the algorithm. To measure the effort, a simple function such as f(n)=n^2 (quadratic complexity) or f(n)=2^n (exponential complexity) is assigned to a number n characterizing the problem size, which then characterizes the growth of the effort. (Mathias J. Krause) Complexity  On the one hand, the complexity of a system is understood as the number of independent key parameters that describe the behavior of this system. On the other hand, complexity can also be used to describe the dynamic behavior of a system, where a high level of complexity stands for a particularly rich repertoire of behavior. Interestingly, even some very simple systems can exhibit very complex behavior. (Joachim Hass) Complexity  In the context of political science, complexity refers in particular to the influence of various factors on the political process (e.g. the design of the political system; the number of actors involved). One example of a high degree of complexity in the decision-making in modern democratic systems, in which not only members of government and parliament participate, but also experts, non-governmental organisations and other social actors. (Markus J. Prutsch) Computer vision  Branch of computer science used to study machine vision and image understanding. (Peter Bell) Cooccurrence (signifier)  In linguistic terminology, the term “cooccurrence” refers to two or more linguistic elements that appear together in a text within a particular “text window”. Different approaches are used to determine the particular text window: Either one is oriented towards the existing structural level of the text (half-period, period, paragraph, chapter) or one specifies a certain number of characters as the “text window”. Cooccurrences are considered “significant” if they reach a frequency value within a corpus which, according to predefined criteria, indicates that the cooccurrence is not random. (Stylianos Chronopoulos, Felix Maier, Anna Novokhatko) Correlation (statistical)  describes a linear relationship between two statistical variables (Roxana Halbleib) Counting and Narrating  Counting has an active and processual character that leads to number-based findings. In scientific, political and social discourses, numbers are embedded in narratives, whether consciously or unconsciously. “Counting” is thus also an essential component of “narration.” (Markus J. Prutsch) Counting  “Counting” is generally based on numerical knowledge and action (numbers, arithmetic laws). In the field of literary studies, in the strict sense, ‘counting’ plays a largely subordinate role, primarily as an indication of significance, e.g. in literary transmission relationships, or ­methodologically in special disciplines such as quantitative literary studies and digital philology (empirical-statistical procedures). In a broader sense, however, “counting” can also be understood as conceptually detached from “number”, which means that both the
literary-scientific object of investigation of the “count” and the “number” can be implicitly interpreted in the same way as “narration” (in German “Erzählen, which includes the word “Zahl” for number) as well as traditional methods and possibilities of indexing (e.g. in the analysis of narrative structures and literary plot programs) as acts of quantitative information distribution. (Claudia Lauer) Data mining  The systematic application of statistical methods to large data sets to discover new relationships. (Roxana Halbleib) Data protection  Collective term about the legal norms aiming to protect the privacy of the individual from unauthorized access from the outside (state, other private parties). (Jan O. Korbel, Fruzsina Molnár-Gábor) Dating  Determining the age of work or finding in as small a period as possible. (Peter Bell) Decoding  Synonym for deciphering (e.g. of information within the genome). (Jan O. Korbel, Fruzsina Molnár-Gábor) Definition  Explicit description of predication of the ideal-typical form ‘X is Y’, aiming at and possibly based on collective conventionalization. (Friedemann Vogel, Hanjo Hamann) Diagnostics  In order to make a diagnosis, e.g. of a psychological problem, information is collected and processed in a targeted manner to determine relevant characteristics and to empirically record their state, course and change over time. Based on the information collected, a decision is made in a rule-guided manner as to whether a person falls into a particular category. An example is the diagnosis of depression as a clinical disorder. To be assigned this diagnosis, a person must exhibit certain behavioral and psychological characteristics as described in standardized diagnostic systems. However, the classification of learning potential or the recording of attitudes, for example, is also a form of diagnosis. The aim of diagnostics is the description and classification of individual characteristics or multidimensional profiles, the explanation e.g. of conspicuous behaviour, the prognosis, both intra- and interindividual, e.g. for the identification of suitable applicants or therapy successes and the evaluation, to describe courses and changes of characteristic features. (Susanne Becker) Diagnostics  Measures for the detection (diagnosis) of a disease. (Jan O. Korbel, Fruzsina Molnár-Gábor) Digitisation  “Digitisation” is the process by which texts, images, video and audio data are processed in such a way that they can be stored by computers and displayed and processed by computers. Depending on the method used, the resulting products (digitized data) are machine-readable to various degrees and -executable (machine-operable). In industry, the term “digitization” denotes something different: the adaptation and transformation of work processes so that they can be controlled by the computer. The general definition in the English-language Wikipedia
(“Digitizing simply means the conversion of analog source material into a numerical format; the decimal or any other number system that can be used instead. […] Though analog data is typically more stable, digital data can more easily be shared and accessed and can, in theory, be propagated indefinitely, without generation loss, provided it is migrated to new, stable formats as needed. This is why it is a favored way of preserving information for many organizations around the world.”1) covers both areas. (Stylianos Chronopoulos, Felix Maier, Anna Novokhatko) Discretion  Legal consequence of state statutory enabling bases, which provides a decision-making discretion of administrative authorities, which must be filled by the purpose of the norm, higher-ranking law and the principle of proportionality. (Matthias Valta) Discretionary scope  See “Discretion” (Matthias Valta) Econometrics  (from oikonomia = economy/economy and metron = measure, measurement) is the science which, on the basis of economic theory and mathematical methods as well as statistical data, quantitatively analyses economic processes and phenomena and empirically tests theoretical economic models. (Roxana Halbleib) Effect size  Characteristic value by which the significance of study results is described. In contrast to significance tests, effect size calculations are independent of the size of the sample, which means that fewer biases occur. Effect size describes the magnitude of an effect, e.g. through experimental manipulation of variables. (Susanne Becker) Empathy  The ability to put oneself in another person’s shoes and experience their feelings, while at the same time being aware that these are the other person’s feelings. Empathy thus forms the basis of social behaviour. (Daniela U. Mier) Empiricism  Cumulative stock of world knowledge, coagulated from findings systematically obtained and interpreted in pre-agreed procedures. (Friedemann Vogel, Hanjo Hamann) Estimate (statistical)  An approximation based on the data of a representative random sample of an initially unknown statistical measure or parameter that is used to describe the associated population. (Roxana Halbleib) Ethics  performs the reflected examination of practiced norms and values, in that its object, morality and the morality of the agent, is as actually lived values. Ethical judgement always takes place in socially and culturally determined contexts. (Fruzsina Molnár-Gábor)

1 Retrieved from https://en.wikipedia.org/wiki/Digitization


Experiment (scientific)  In the natural sciences, experiments are carried out under accurately defined and, as far as possible, identical conditions. They produce (almost) identical results, which thus confirm an event predicted by the model. Here, a clear distinction must be made from observation, which does not require idealized simplified assumptions, is usually much more complex, and is often random. By repeating the experiment and varying the input values under the same conditions, a model is then understood as a simplified representation of reality, if no contradictions arise. (Mathias J. Krause) Experiment  The experiment is characterized by the fact that there are so-­called independent and dependent variables. The independent variables are determined in advance by the investigator. It is examined whether the expression of the dependent variables differs between independent variables. The assignment to the independent variables is random. If the assignment is non-random, it is called a quasi-experiment. For example, when comparing men and women, gender is the independent variable, but it is given and cannot be randomly determined by the investigator. (Daniela U. Mier) Experiment  The repeated and systematic recording of the effects of a variable that is manipulated in a random and controlled manner under conditions that are as constant as possible (the independent variable) on another variable that is to be measured as precisely as possible (the dependent variable). The starting point is the hypothesis about an effect relationship, which describes the expected sequence of events. The object of investigation can be, for example, the behavioural or psychological reaction of persons to changed environmental factors. (Susanne Becker) Failed states  States that can no longer exercise sufficient state power over their territory (e.g. loss of government control in civil wars) and can thus no longer fulfil their responsibility for the territory and its inhabitants. (Matthias Valta) Formalised recording of reality  Reduction of complexity in the recording of reality by describing it only in terms of certain patterns (e.g. numerical values or legal facts). (Matthias Valta) Formalization through quantification  Reduction of complexity of the reality of life through abstraction of a fact to a numerical value. (Matthias Valta) Genome  Totality of biological molecules (DNA) storing the genetic material (Jan O. Korbel, Fruzsina Molnár-Gábor) Golden ratio  A rule of proportion by which many pictorial works of cultural heritage have been composed since antiquity. (Peter Bell) Hermeneutics  Method for understanding and interpreting texts, which itself has a long history and discussion (Andreas Büttner) Historiography  Historiography is the study of the human past and historical images with the help of a methodological toolkit. The basis for this are sources
such as textual traditions and artefacts. History and, thus, the research areas can be divided according to the area of investigation (e.g. local, regional, national history), the period of investigation (e.g. Middle Ages, early modern period, contemporary history) or the object of investigation (e.g. legal, economic, everyday history). The science of history also includes historiography (theory of history) and the didactics of history (teaching of history). (Jana Pacyna) Hypothesis  An assumption based on theory and/or previous studies that the scientist seeks to disprove through appropriate empirical experiments. (Roxana Halbleib) Identification (Re-)  Converting anonymized data back into person-specific data, e.g. by using data matching methods. (Fruzsina Molnár-Gábor, Jan O. Korbel) Interpretation  means the process of recognising or reconstructing the deeper meaning, the contexts of meaning and the significance of (something such as) objects, linguistic statements, actions and events. Especially in relation to texts, “interpretation” as a form of understanding and explaining is an essential methodological building block of hermeneutics. (see ‘Hermeneutics’). (Claudia Lauer, Jana Pacyna) Interpretation  Addressee-oriented explication and reformulation of understanding. (Friedemann Vogel, Hanjo Hamann) Interpretation  Evaluation of the material collected and developed to answer a question. In historiography, an interpretation is to be understood as an interpretation by the historian. As such, it is ultimately always subjective but gains persuasive power by following scientific standards and building plausible arguments. (Andreas Büttner, Christoph Mauntel) Interpretation  Geographical interpretation aims to bring the geographical phenomena recorded and described (e.g. by maps) into a deeper context of meaning. The interpretation is supposed to explain the “what is where, how, when and why in space”. (Bernhard Höfle) Introspective  Based on the individual, intuitive experiential knowledge. (Friedemann Vogel, Hanjo Hamann) Knowledge  The ability to comprehend an object, how it is constituted, as well as the ability to deal with it successfully. Knowledge about a genetic predisposition is characterized by the subjective features of certainty as well as risk assessment. (Fruzsina Molnár-Gábor) Law of gravitation  A physical law that relates the strength of the gravitational force with which two bodies attract each other to the mass and distance of these two bodies. The force increases with the masses of the two bodies and decreases with the (quadratic) distance between them. (Joachim Hass)
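As a compact illustration of the relation described in the preceding entry, the law can be written in the familiar form below; the formula itself is not part of the original glossary definition and is added here only for clarity:

F = G \frac{m_1 m_2}{r^2}

where m_1 and m_2 are the masses of the two bodies, r is the distance between them, and G is the gravitational constant.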

Law  Legitimate mutual expectations of behaviour, the validity of which is set and which thus enable continual confirmation, change and adaptation in a process of law enforcement. (Matthias Valta)
Legally binding law  In the constitutional state, a text put into effect by institutionalized procedure as an institutionally proven, explicit point of reference and orientation for the establishment and negotiation of legal norms as well as, according to the claim, non-legal norms. (Friedemann Vogel, Hanjo Hamann)
Literary studies  are the general term for the scientific study of the creation, reception and impact of literature. It is a generic term for all types, methods and aspects of the scientific study of national or general literature and includes subfields such as the history of literature and its reception, literary theory, aesthetics and edition philology. The specific discipline for German literature is German philology (Germanistik), which is subdivided into two basic subfields: (a) older German literary studies (eighth–fifteenth/sixteenth centuries) and (b) modern German literary studies (sixteenth/seventeenth centuries to the present day). (Claudia Lauer)
Literature  The term "literature" (Latin: literatura = letter writing) includes, in the literal sense, the entire stock of written evidence, i.e. written works of any kind, including non-fictional, factual and scientific texts. In a narrower sense, this means above all the so-called beautiful literature (belles-lettres, fiction), which is not "purpose-bound" and reaches its highest form in poetry through the artistic shaping of language, which can be classically divided into three genres (epic poetry, dramatic poetry, lyric poetry). (Claudia Lauer)
Location-Scale Model  A mathematical-statistical linear model defined by a location parameter (such as mean or median) and a scale parameter (such as standard deviation). (Roxana Halbleib)
Market Efficiency Hypothesis  A mathematical-statistical theory of finance that states that the current market prices or values of a financial instrument contain all the information available in the market, so that no market participant is able to make above-average profits on a sustained basis. (Roxana Halbleib)
Market microstructure noise  describes the discrepancy between the market price and the fundamental value of a financial instrument caused by the characteristics of the particular market and trading process. (Roxana Halbleib)
Mathematical modelling  comprises both the creation of new models and their controlled simplification. The latter is also to be understood in the sense of abstraction, in order to enable statements about the structure and solvability of a model, as well as to enable computability with given technical resources in a practically motivated period of time. Results are model hierarchies as well as qualitative and quantitative statements about how much the results of different models differ from each other. Mathematical modelling as an approach does not need a definition of reality at all. Only in other disciplines are mathematical models associated with the objects of reality studied there. Whether at all and, if so, to what extent a mathematical model can be considered detached from reality or as part of it, since it is simplified, is discussed quite controversially. (Mathias J. Krause)

Measurement  Concrete application of a quantifying method, at the end of which a measurement result is produced in the form of a measurand defined in advance and on which the evaluation procedure is based. Since measurements are carried out on the basis of a quantitative procedure, they are usually accompanied by a promise of objectivity, but at the same time they hide aspects of the object of investigation that are not—or at least not directly—measurable. (Markus J. Prutsch)
Measurement  In a measurement, observable behaviour or physiological processes are collected in a quantifiable way. The measurement should take place under standardised conditions and is therefore theoretically independent of the person taking the measurement. (Daniela U. Mier)
Measurement  Observation of reality and description of the observation in a numerical value. In environmental law, for example, measurements are a sub-case of the general determination of emissions or immissions and thus of the determination of the facts of life, to which the facts of the laws and ordinances are then applied. Another sub-case is environmental effects that can be perceived directly by humans, e.g. stench. (Matthias Valta)
Metaphor  Linguistic expression that is not exhausted in the literal meaning (concept), but rather refers pictorially to an analogous context in a sense that is not simply objectively descriptive. (Chris Thomale)
Method  A defined procedure (algorithm), in geoinformatics usually implemented on a computer, for processing digital data. A geoinformatics method generates, manipulates, or analyzes digital geodata. In geoinformatics, methods are developed and applied for the acquisition, management, analysis and visualization of digital geodata. An example of a method is the calculation of the area of a building footprint based on polygon geometry (see the sketch below). A more complex method is the calculation of the annual potential solar radiation on all building facades of Heidelberg. (Bernhard Höfle)
Method  A systematic procedure for moving from an interest in knowledge to a gain in knowledge. A distinction can be made between qualitative and quantitative methods. While qualitative methods are primarily aimed at capturing a phenomenon in its entirety and "interpreting" it on the basis of a few case studies, quantitative methods aim to "count" and "measure" statistically evaluable aspects of a phenomenon on the basis of a large number of case studies. Whereas qualitative methods draw not least on ethical criteria for assessment and design, it is characteristic of quantitative methods to strive for the most objective and value-judgement-free grasp possible of reality. In practice, however, qualitative and quantitative methods are less clearly distinguishable, since qualitative methods, for example, also draw on counting and quantitative methods can be based on value-based presuppositions. (Markus J. Prutsch)
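The geoinformatics "Method" entry above names the calculation of a building-footprint area from polygon geometry as an example. A minimal Python sketch of such a method, using the standard shoelace formula and invented example coordinates (neither the code nor the coordinates come from the book):

    # Area of a simple (non-self-intersecting) polygon from its vertex
    # coordinates, computed with the shoelace formula.
    def polygon_area(vertices):
        n = len(vertices)
        twice_area = 0.0
        for i in range(n):
            x1, y1 = vertices[i]
            x2, y2 = vertices[(i + 1) % n]  # next vertex, wrapping around
            twice_area += x1 * y2 - x2 * y1
        return abs(twice_area) / 2.0

    # Hypothetical building footprint, coordinates in metres
    footprint = [(0.0, 0.0), (12.0, 0.0), (12.0, 8.0), (0.0, 8.0)]
    print(polygon_area(footprint))  # 96.0 (square metres)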

Method  Systematized way of gaining knowledge. The historical method is classically divided into source acquisition (heuristics), development (criticism) and evaluation. Within the science of history, the methodological approach always depends on the research question—there is a great variety of methods. (Andreas Büttner, Christoph Mauntel)
Model (mathematical)  A mathematical model is a simplified representation of a reality. Here, reality is defined by a few simple rules and basic assumptions (axioms). Mathematical models depict interrelationships, mostly cause-effect relationships, in a simplified way in the form of mathematical equations. (Mathias J. Krause)
Model (regression)  Statistical procedure that models relationships between a dependent and one or more independent random variables. (Roxana Halbleib)
Model parameter  A fixed but arbitrary number that accompanies and describes a model. (Roxana Halbleib)
Model  A model (re)presents reality in a simplified way, which is considered too complex for a comprehensive representation. It can be a representational, mathematical or computer simulation model. An exact reproduction of reality is often not intended, but only the capture of the essential influencing factors that are relevant to the process under investigation. (Jana Pacyna)
Model  A model is a simplified representation of a reality. Here, the term reality is to be distinguished with regard to the scientific discipline in which the model is applied. Models depict interrelationships, mostly cause-effect interrelationships, in a simplified way. (Mathias J. Krause)
Model  On the one hand, a model can be a theoretical or conceptual, graphical or textual representation of the relationships, dependencies or even opposites between two or more variables. An example from the field of thermal comfort is the adaptive comfort model, which describes adaptation processes that change the perception of the same objectively measurable thermal conditions. An example from pain research is the "motivational decision model", which describes decision processes depending on external and internal factors when two opposing motivations in the form of pain and reward meet. On the other hand, a model can represent one or a group of equations which, based either on a theoretical model or on a set of data, describes the relationship between dependent and independent variables. Well-known examples are regression models. In the field of thermal comfort, there are also more complex psychophysiological models that represent the thermoregulatory processes of the human body in response to different thermal conditions and relate them to the perception of these same conditions. The best known examples are Fanger's PMV model or Gagge's SET model. (Susanne Becker, Marcel Schweiker)
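As a generic illustration of the "Model (regression)" and "Model parameter" entries (a textbook formulation, not a model taken from one of the contributions), a simple linear regression model can be written as

    y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad i = 1, \dots, n

where y_i is the dependent variable, x_i the independent variable, \beta_0 and \beta_1 are the model parameters to be estimated from the data, and \varepsilon_i is a random error term.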

Monte Carlo Simulation  A technique from mathematics that repeatedly generates random samples to produce numerical results (see the sketch below). (Roxana Halbleib)
Multiculturalism  (e.g. in the form of linguistic diversity), as it is found especially at the supranational level. (Markus J. Prutsch)
Multiculturalism  The encounter and interaction of different cultures, as opposed to cultures simply living side by side in the same space. Many scientific institutions are prime examples of intercultural exchange, since scientists with different cultural backgrounds work together on the basis of common research interests. (Joachim Hass)
Narration  "Narration" is generally understood to be the oral or written reproduction of found or given events and knowledge in a logical context of order and action (narrative, story). In the literary field, not only categories such as fictional and non-fictional play a special role here. As a product of storytelling, the "narrative" can also be understood in two distinguishable senses: on the one hand, in the broader sense, as a collective term for all epic genres (novel, short story, novella, fairy tale, etc.); on the other hand, in the narrower sense of a genre, i.e. as a separate literary genre characterized, among other things, by shorter length, a smaller number of characters, and a simpler literary structure. (Claudia Lauer)
Norm  Behavioral instructions institutionally enforced by sanctions, either implicitly existing as orientation knowledge or made explicit and exerted by formal procedures. (Friedemann Vogel, Hanjo Hamann)
Norm  Statement of ought (commandment, permission, prohibition). (Matthias Valta)
Normative evaluation  Statement about the expected behaviour in a concrete situation, made on the basis of all relevant principles and rules. (Matthias Valta)
Null hypothesis  An assumption about the population, made in the context of a hypothesis test, whose statement can be tested statistically. The null hypothesis represents an assumption that one wants to reject. (Roxana Halbleib)
Numerical representation  aims to express facts on the basis of numbers and quantitative methods. In politics, numerical representations are used not only for the analysis of effects (impact assessment), but also for the determination or formulation of political objectives. (Markus J. Prutsch)
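A minimal sketch of the "Monte Carlo Simulation" entry referred to above: the area of the unit circle (and hence pi) is approximated from repeated random samples. The example is purely illustrative and not taken from any of the contributions.

    import random

    # Monte Carlo estimate of pi: draw random points in the unit square and
    # count how many fall inside the quarter circle of radius 1.
    def estimate_pi(n_samples, seed=42):
        rng = random.Random(seed)
        inside = 0
        for _ in range(n_samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4.0 * inside / n_samples

    print(estimate_pi(100_000))  # approaches 3.14159... as the sample grows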

Numerical simulation  A numerical simulation aims to follow the time course of a simplified model of an aspect of reality, i.e., to conduct an experiment that examines the dynamic behaviour of a model. A numerical simulation performs such an artificial experiment with a mathematical model. For this purpose, a certain initial state is defined by a set of numbers, and it is investigated how this state changes over time according to the mathematical rules of the model (see the sketch below). Nowadays, computer programs are almost exclusively used to perform numerical simulations. (Joachim Hass)
Numerical simulations  are based on discrete models, which are then processed in the form of algorithms by common computer technology, instruction by instruction. They finally provide an approximate prediction of the modeled reality. (Mathias J. Krause)
Observation  does not require idealized simplifying assumptions, is usually much more complex than an experiment, and is often random. (Mathias J. Krause)
Observation  The systematic collection of observable behaviour. Prior to observation, it is determined whether the observation is participatory, what exactly will be observed and how these observations will be recorded (qualitatively or quantitatively), at what time observations will be made and for how long. (Daniela U. Mier)
Operationalize  Designing an appropriate experiment that maps the critical conditions of a hypothesis into measurable and quantifiable factors to test the hypothesis. (Daniela U. Mier)
Pattern  In genetics, structured genetic information in the genome is often referred to as a pattern. (Jan O. Korbel)
Pattern  The term pattern is not used in our research areas. Relationships and assumptions are described by models. (Susanne Becker, Marcel Schweiker)
Patterns  In literary studies, patterns are primarily "narrative patterns", i.e., literary structures that are repeated in several narrative texts and can be reconstructed and defined by means of empirical observation, qualitative comparisons and inductive or deductive structuring and classification methods as a typical plot or narrative sequence with a fixed, countable sequence of plot units. "Narrative patterns" can refer, in the narrower sense of a plot pattern, to the plot structure of individual texts that are characteristic of certain text groups or genres. In a broader sense, however, they can also stand for typical patterns of narratives and narrative processes as a whole, which include aesthetic and pragmatic aspects. (Claudia Lauer)
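To make the two entries on numerical simulation above more tangible, here is a minimal sketch of such an artificial experiment in Python: a simple decay model dx/dt = -k*x is discretized and advanced step by step from an initial state. The model, decay rate and step size are invented for illustration and are not taken from the book.

    # Explicit Euler simulation of the decay model dx/dt = -k * x:
    # the continuous model is discretized and advanced step by step.
    def simulate_decay(x0, k, dt, n_steps):
        x = x0                      # initial state, defined by a number
        trajectory = [x]
        for _ in range(n_steps):
            x = x + dt * (-k * x)   # update rule derived from the model
            trajectory.append(x)
        return trajectory

    # Start at x0 = 1.0 with decay rate k = 0.5 and time step dt = 0.1
    print(simulate_decay(1.0, 0.5, 0.1, 5))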

Patterns  In the political-social sphere, patterns first and foremost refer to "patterns of action", i.e. behaviours and routines that can be recognised as regularities by means of empirical observation. They can also be generalized by induction or deduction. In a consolidated form—namely as "structures"—patterns of action critically determine the stability of political-social orders. (Markus J. Prutsch)
Patterns  Tradition-based and repeatedly reproduced forms of description. (Andreas Büttner, Christoph Mauntel)
Personalised medicine  A method of investigation, diagnosis, and treatment by which patients are treated taking into account individual circumstances, such as individual hereditary characteristics. (Jan O. Korbel, Fruzsina Molnár-Gábor)
Physical law  Regular relationship between measurable quantities, usually expressed in mathematical form. Newly established laws always have the character of a hypothesis that must be tested by experiments and are usually called physical laws in the narrower sense only when they have been proven by a large number of experiments. An example of this is the law of gravitation. (Joachim Hass)
Political science  Science in general aims at increasing and securing systematic knowledge and at gaining insights. Political science, in particular politology, critically and analytically considers the practical shaping of politics and researches political structures (polity), processes (politics) and contents (policies), as well as the political dimensions and expressions of human coexistence. The discipline of political science is primarily divided into three areas: (a) Political theory; (b) Comparative Political Science; and (c) International Relations or International Politics. These are complemented by numerous specialized subfields, such as political economy. (Markus J. Prutsch)
Politics  Following Max Weber, politics can be defined as "striving for a share of power or for influencing the distribution of power". In this framework, politics primarily aims at increasing and securing the legitimacy of rule. (Markus J. Prutsch)
Principles  Norms that require the realization of a legal good to the greatest extent possible in law and in fact. (Matthias Valta)
Proportionality  In the broader sense: evaluation of an action on the basis of the comparison of the impairment of one principle with the simultaneous achievement of another principle. (Matthias Valta)
Prototype  Exemplar of the lifeworld (be it a concept or a physical entity) designated and described as idealized, claiming to represent an indefinite set of comparable exemplars. (Friedemann Vogel, Hanjo Hamann)

Pseudonymization  Replacing the name and other identifying characteristics of persons with an identifier in order to exclude or at least significantly complicate the identification of these persons. (Jan O. Korbel, Fruzsina Molnár-Gábor)
p-value  The probability that a random result is equal to or more extreme than what is actually observed when the null hypothesis of a test is true (see the formal shorthand below). (Roxana Halbleib)
Qualitative  Scientific methods and approaches in the humanities referred to as "qualitative" are characterized by (a) developing a research question that can be meaningfully answered by describing and contextualizing a reasoned selection of data in a way that is not necessarily standard; (b) analyzing the object of research into significant elements and exploring, through the use of interpretive methods, the consistency of these elements and their relationships to other elements; (c) producing a scientific narrative that presents the results of this exploration. (Stylianos Chronopoulos, Felix Maier, Anna Novokhatko)
Quantitative / Qualitative  Quantitative geographical characteristics can be measured (automatically) using a specific method or measurement technique and expressed as a number or category on a given scale. Qualitative means that a geographical characteristic does not result directly from a measurement as a measured value, but is determined by a description (e.g. purely textual) of the expression of a characteristic. (Bernhard Höfle)
Quantitative / Qualitative  Quantity and quality can be understood as opposites in an ideal-typical way. In politics, the distinction between quality and quantity is expressed in the form of evidence-based decisions on the one hand and value-based decisions on the other. The former are based on scientific—often quantifiable—findings, the latter on political-ideological considerations. In political practice, there is usually a parallelism or blending of quantitative and qualitative aspects, as manifested, for example, in the embedding of statistics in ideologically motivated statements. (Markus J. Prutsch)
Quantitative  Scientific methods and approaches in the humanities that are called "quantitative" are characterized by (a) developing a question for which it is useful to consider results of counts and measurements or which can be fully answered by such results; (b) analyzing and preparing the object of research in such a way that certain elements are measurable by certain methods of measurement and tools; (c) presenting the numbers that are the result of these measurements and discussing their contribution to addressing the question. (Stylianos Chronopoulos, Felix Maier, Anna Novokhatko)
Reality  is to be distinguished with regard to the scientific discipline in which a model is applied. (Mathias J. Krause)
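The "p-value" and "Null hypothesis" entries can be condensed into the usual statistical shorthand (a generic textbook formulation, added here only for orientation): for an observed value t_obs of a test statistic T,

    p = P\left(T \geq t_{\mathrm{obs}} \mid H_0\right)

i.e. the probability, computed under the null hypothesis H_0, of obtaining a result at least as extreme as the one actually observed; H_0 is rejected when p falls below the chosen significance level (e.g. 0.05).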

Reality  Apart from a physically (also in the past) existing world, it is impossible for historical science to reconstruct historical reality. There can only be methodologically guided, plausible attempts at reconstruction and explanation. The science of history does not aim at the depiction of a past world; as a science of probability it is aware of its own limitations. (Andreas Büttner, Christoph Mauntel)
Reception  stands for (1) the cognitive perception and processing and (2) the understanding and adoption of cultural works or larger contexts (e.g., the reception of antiquity). (Peter Bell)
Responsibility and liability  Accountability for the processing of personal data of those who decide on the purposes, means, scope and methods of the processing, and enforcement of a system of sanctions against reprehensible conduct. (Fruzsina Molnár-Gábor)
Responsibility to protect  The state's responsibility under human rights law to protect its territory and its inhabitants. If the state does not fulfil this responsibility, the responsibility of the community of states to take charge is discussed. (Matthias Valta)
Risk perception  "Risk perception and assessment is understood here in a broad sense as the everyday process by which people assess risks without the possibility of recourse to long data series and exact calculation models. Risk perception is the often intuitive or purely experience-based, unstructured perception of possibilities for success and failure and of possible connections between actions and consequences."2 (Bernhard Höfle)
Safe Harbor Program  An agreement reached in 2000 between the European Union and the United States that ensured that personal data could be lawfully transferred to the United States until it was annulled by the European Court of Justice in 2015. The follow-up agreement, the EU-US Privacy Shield, came into force in 2016. (Fruzsina Molnár-Gábor, Jan O. Korbel)
Sanctions/economic sanctions (international law)  Measures of an economic nature—as distinct from diplomatic and military measures—which express a rejection of the target state's trade and are intended to induce it to change its behaviour or even its government. In a narrower understanding of the term, (economic) sanctions are equated with the concept of a countermeasure or reprisal in response to a breach of international law by the target state. (Matthias Valta)
2 Translated from Plapp, S. T. (2003): Wahrnehmung von Risiken aus Naturkatastrophen. Eine empirische Untersuchung in sechs gefährdeten Gebieten Süd- und Westdeutschlands. Dissertation an der Fakultät für Wirtschaftswissenschaften der Universität Fridericiana zu Karlsruhe.

Scientification  describes the increasing importance of numbers and quantitative scientific expertise in social and political life. Accordingly, political decisions are not made exclusively on the basis of value considerations, but increasingly on the basis of scientific evidence. Scientification is reflected, among other things, in number-based analyses and impact assessments, which are carried out with the claim to express the possible effects of a certain (political) decision in measurable form. (Markus J. Prutsch)
Self-regulation  Development of specific self-committed norms of behaviour, mostly in the context of professional standard setting. (Jan O. Korbel, Fruzsina Molnár-Gábor)
Semiotics  Sign-mediated constitution of artefacts of the lifeworld. (Friedemann Vogel, Hanjo Hamann)
Significance (Statistical)  A statistical measure of the degree of "truth" of a result (in the sense of "representative of the population"). (Roxana Halbleib)
Significance  A hypothesis is confirmed if the result of the statistical test falls below an a priori significance level. Thus, significance can be regarded as a decision rule for whether a certain hypothesis is accepted or rejected by a statistical test. (Susanne Becker)
Source (Historical)  In the broadest sense, all testimonies (records) that inform about historical (= past) events (processes, conditions, persons, ways of thinking and behaving). (Hans-Werner Goetz) (Andreas Büttner)
Source (Historical)  In the science of history, all texts, objects and facts from which knowledge about the human past can be gained are referred to as sources. Historical sources are to be distinguished from so-called secondary literature, such as modern historical literature. They can have the characteristics of a remnant source (the past is immediately accessible) and/or a traditional source (the past has been intentionally processed and mediated by someone). (Jana Pacyna)
Sovereignty  From the equality of states follows the right of self-determination of each state in defense against interference of other states. Sovereignty follows from the responsibility of the state for its territory and its inhabitants as a legitimation and accountability instance (cf. state; responsibility to protect). (Matthias Valta)
Standardization and norming  Bundles of definitions with normative, i.e. institutionally proven, application claims. (Friedemann Vogel, Hanjo Hamann)
State  Here: legal unit for the attribution of legitimacy and responsibility of rule on a certain state territory and with regard to a certain state people. (Matthias Valta)
Text-critical edition  The edition of a text, which starts from the analysis of the history of the origin and/or the transmission of this text. On the basis of this analysis, (a) a text is produced which should be closest to that which is defined as the reconstructible ideal form of this text; (b) textual variants which have arisen as a result of the genesis and/or the transmission history of the text are documented and represented by the application of various systems of conventions. (Stylianos Chronopoulos, Felix Maier, Anna Novokhatko)

Time  An evenly proceeding, irreversible physical process; for the historian an indispensable structural element of the past; culturally one may speak of times of accelerated change or the like—but these are already historical interpretations. (Andreas Büttner)
Translational research  Basic research at the interface with applied research that is geared towards concrete application goals or a concrete medical, economic, social or cultural benefit. (Jan O. Korbel, Fruzsina Molnár-Gábor)
Type  is a reference object that contains the essential elements of similar life situations in a generalized manner. In jurisprudence there are two applications. Typifications summarise the essential elements of similar cases in a normative way and determine the legal consequence in the "typical" case. The particularities of the individual case are neglected. For example, the flat-rate income-related expenses allowance for employees in the amount of 1000 euros (§ 9 S. 1 No. 1 a Income Tax Act) is based on the work-related expenses incurred by a "typical", "average" employee. The methodical figure of the type concept must be distinguished from this. Facts can consist of class terms or type terms. While the class concept is conclusively defined and must be determined by the legal practitioner on the basis of the existence of all defining characteristics, type concepts are not considered to be conclusively definable. Although there are type features, these do not have to be present in their entirety and are subject to a conclusive overall assessment. The necessity of type terms is controversial, and the leeway given to the practitioner of the law is not unobjectionable with regard to the separation of powers. (Matthias Valta)
Typology  Collection of prototypes that are treated as distinguishable according to a certain criterion. (Friedemann Vogel, Hanjo Hamann)
Understanding  Intuitive, automatic, and socialization-influenced cognitive process of contextualizing sensory stimuli and experiential knowledge. (Friedemann Vogel, Hanjo Hamann)
Understanding  in the humanities can be understood as an intellectual process that grasps and classifies the object of investigation in its context and connection. "Understanding" is not sensually perceptible and thus consequently neither empirically comprehensible nor objective.3 In order to overcome an artificial separation of "understanding" and "explaining", it is necessary to address both the internal motivation and the causal derivation of external behavior. (Andreas Büttner, Christoph Mauntel)

Validation  Validation or evaluation, whereby both terms can have a different connotation, refers to a check of the validity, applicability or goodness of a model, using the same data (internal validation) or other data (external validation). (Susanne Becker, Marcel Schweiker)
Validity  means that the test "measures what it is supposed to measure". (Daniela U. Mier)
Value at Risk  is a measure of risk in finance defined as a quantile of the distribution of returns (see the sketch below). (Roxana Halbleib)
Verify and falsify  Trial and error: through experiments, hypotheses are either verified or falsified. If the hypothesis is falsified, then the assumed event (e.g. that the mean values of two groups differ) does not occur. Gaining knowledge comes from an experiment (trial) and falsifying the hypothesis (error) and leads to the operationalization of new experiments that test an adapted hypothesis. Theoretically, verifying a hypothesis is hardly possible, since the truth of the hypothesis could be coincidence (just as the non-confirmation of the hypothesis could be coincidence), which is why results have to be replicated. (Daniela U. Mier)
Volatility  In finance, volatility describes the fluctuation in the return of financial instruments such as shares, foreign exchange or interest rates. It is equivalent to the standard deviation in statistics. (Roxana Halbleib)
Vulnerability  "Engineers and many scientists understand vulnerability to be the relative susceptibility of people and assets such as buildings, infrastructure, social and environmental assets to damage, as measured by a scale between 0 (damage resistant) and 1 (highly vulnerable)."4 (Bernhard Höfle)
Word  Expressive unit of language for the realization and collectivization of propositional content or content links. (Friedemann Vogel, Hanjo Hamann)
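As a small illustration of the "Volatility" and "Value at Risk" entries above, both quantities can be computed from a series of returns roughly as follows. The return figures are invented, and the 10% level and the simple empirical quantile are chosen only for the sake of the example.

    import statistics

    # Hypothetical daily returns of a financial instrument (in percent)
    returns = [0.4, -1.2, 0.8, -0.3, 2.1, -2.5, 0.6, -0.9, 1.3, -0.7]

    # Volatility: the standard deviation of the returns
    volatility = statistics.stdev(returns)

    # Value at Risk at the 10% level: (the negative of) an empirical lower
    # quantile of the return distribution, i.e. a loss threshold exceeded
    # in roughly 10% of cases
    var_10 = -sorted(returns)[int(0.10 * len(returns))]

    print(round(volatility, 3), var_10)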

3 Adapted from Thomas Zwenger, "Understanding", retrieved from http://www.philosophiewoerterbuch.de
4 Translated from Felgentreff, C. & Dombrowsky, W. R. (2007): Hazard-, Risiko- und Katastrophenforschung. In C. Felgentreff & T. Glade (Eds.): Naturrisiken und Sozialkatastrophen. Wiesbaden: Springer Spektrum, p. 13.

Author Index

A
Anselm of Canterbury, 30–34
C
Chrétien de Troyes, 26
D
Dilthey, Wilhelm, 23–25, 40
Droysen, Johann Gustav, 24, 40
E
Eadmer, 31, 33
G
Gottfried of Strasbourg, 29
H
Hartmann von Aue, 26
Hegel, Georg Wilhelm Friedrich, 107
Henry I (England), 30
Henry IV (Holy Roman Empire), 30
I
Isidore of Seville, 30, 39
K
Konrad, Monk, 26, 29
M
Marx, Karl, 107
R
Ranke, Franz Leopold, 24
Rizzolatti, Giacomo, 150
T
Thomson, William, 1st Baron Kelvin, 104
V
Veldeke, Heinrich von, 26
Vergil, 26
W
Weber, Max, 40, 41, 109
William I (England), 30
William II (England), 30

Index of Topics

A Ancient history, 52 Architecture, 6, 16, 63, 161–176, 194, 209 B Bioinformatics, 131–147 Biology, 4, 5, 129–147, 149–159, 244

G Geography, 4, 6, 17, 177–190 Geoinformatics, 16, 177–190 German studies, 21–37 H Historical science, 12, 21–37, 39–49, 106, 225, 240 History of art, 6, 17, 191–200, 228

C Classical philology, 51–60 Classical studies, 51–60, 232 Computational neuroscience, 6, 8, 16, 149–159 Computer science, 4–7, 17, 18, 131–147, 149–159, 177–215, 229, 243 Critical-dialectical school, 106 Cultural studies, 5, 23, 28, 41

L Law, 4, 13–15, 61–64, 69, 71–72, 74–77, 85–101, 122, 135–147, 154–157, 171, 209, 226, 238, 244 Legal linguistics, 71–83 Linguistics, 53, 195, 238 Literary studies, 6, 21–37, 40, 225, 226, 241, 244

E Econometrics, 119–130 Economic, 4, 6, 14, 85–101, 119–130, 244 Empirical-analytical school, 107, 226 Engineering, 4, 6, 7, 129, 130, 161–176, 239, 240, 244

M Mathematics, 4–7, 27, 32, 69, 119–130, 149–159, 161–176, 191–215, 228, 239–241, 243 Medicine, 4–6, 132–134, 143, 147, 163, 184, 202, 204, 210, 211, 227, 238, 241–243

N Neuroscience, 5, 16, 149–159, 161–176, 227 personalized, 132 Normative-ontological school, 107 P Philosophy, 4, 61–69, 107, 232 Physics, 5, 6, 16, 105, 129, 149–159, 161–176, 204, 210, 213, 224, 244 Political science, 4, 14, 85–101, 104–117, 226, 244

Psychology, 5, 6, 16, 68, 106, 129, 130, 149–159, 161–176, 227, 241, 243, 244 S Sociology, 4, 6, 23, 32, 74, 96, 106, 183 Statistics, 4, 6, 15, 43, 45, 73, 75, 108, 110, 111, 115, 119–130, 166, 179, 181, 183, 188, 225–227 translational, 132

Subject Index
Note: Page numbers followed by ‘n’ refer to notes.

A (Ab)counting, 193, 195 Abstraction, vii, viii, 25, 81, 110, 143, 157, 196, 226, 239, 241 Adaptation, 16, 161–176 Aeneis (work), 26 Aesthetics, 200 Aggregation, 89, 101, 164, 165, 180, 181 Algorithms, 12, 15, 18, 32, 71, 72, 183–184, 195, 201, 204, 206, 215, 239 Amounts, 47, 194 Analyze, 15, 24, 25, 34, 35, 53, 54, 58, 63, 129, 135, 179, 180, 190, 197, 206, 225, 226, 240, 242 Annotation, 58, 59 Approach, social science, 96 Attribution, 34, 167, 193 Auxiliary science, 185 B Basic data protection regulation, 145 Basic research, 132, 146, 176, 191, 192

Behavioural patterns, 28, 32, 33, 157, 227 Big Data, 74, 75, 114, 123, 180 Bootstrap, 127 Buildings, 162–163, 169, 172–173, 178–181, 230 C Calculating and computing, 27, 30, 34 Cancer research, 132 Cardinal scale, 101 categorical, 168 Cause-and-effect principle, 202 Chanson de Roland (work), 26 Chile, 178, 185, 187, 189, 190 Chronology, 41, 193 Circle, hermeneutic, 83 Citizen science, 178, 190, 231 Clinical data, 132 close reading, 49, 52–60 Cloud computing, 133–143 Codes of conduct, 144 Cognition, 68 Cognition, social, 150, 158, 170 Cognitive, 78

Cognitive faculty collecting, 56 Color value, 17, 195, 196 Comfort, thermal, 16, 161–176, 227, 230, 241 Commercial cloud providers, 133, 136, 139–142 Compare, 53, 58, 121, 182–190, 240 Computational neuroscience, 6, 8, 16, 149, 150, 154–157 Computer-based methods/procedures, 23, 31, 179–190 Computer power, 195 Computer vision, 6, 17, 191–200 Connection, causal, 120, 158 Consent, informed, 135–136 Consideration, 13, 46, 52–54, 61–63, 80, 87, 100, 115, 142–147, 202–210 Contextualize, 33, 34, 44, 77–82, 240 continuous, 168, 204, 205 Control conditions, 170 Coordinates, 180, 181 Correlation, statistical, 120–123 Counts, 53, 57, 62, 74, 75, 87, 193 Creativity, 29, 200 cultural, 29 Cultural patterns, 29 Cultural turn, 23, 28 D Data basis, 43, 95, 116, 227 Data diversity, 135 Data, high frequency, 128–130 Data mining, 15, 75, 123 Data processing, electronic, 15, 42, 73, 134, 136, 140, 141, 143, 145, 227 Data volume, 45, 54, 133, 134, 181, 195, 196, 204, 227 Dating, 17, 41, 193, 200, 227 Delegation, 143 dependent, 164 Digital classics, 12, 51 Digital history, 25 discrete, 204, 205

Subject Index Distance, 154, 164, 173, 196 distant reading, 51–60 DNA sequencing, 131–132 Dogmatics, 62, 75, 79, 91, 93, 95–97, 226 Duty to protect, 93 Dynamic causal modeling, 158 economic, 15 E Economic, 119, 120, 130 Economic history, 42, 43, 194, 225 Economic sanctions, 14, 85, 86, 92 Education policy, 109, 114, 115 Electroencephalography (EEG), 158, 168, 227 Empathy, 16, 150–153, 241, 244 Eneasroman (work), 26 EU convergence criteria, 109 European Court of Justice (ECJ), 138 European Union (EU), 137–138, 145 EU-US Privacy Shield, 139 Evaluation, 15, 36, 44, 45, 58, 168, 179, 242 explicit, 166 F Facts, 88–95, 226 Failed states, 93 Field campaign, 180 Field investigation, 168 Figures, 193 Financial crisis, 15, 119–130 Financial market econometrics, 124 Financial risk, 15, 119–130, 238 Flood, 177–190 formalistic-structuralistic, 28 Formalization through quantification, 99 Format, work format-size, 193 Frequency, 17, 53–57, 75, 125, 164–166, 183, 188, 197, 242

G Genome, human, 15, 131–147 Geodata geographical, 17, 179–190 Geometry, 7, 179, 192, 202, 210, 211 Gestalt theory, 197 Golden section, 17, 194, 200 Governance, 188, 190 Graph theory (mathematics), 32, 171 H Hazard mapping, 187 Health, 43, 89, 114, 132, 162–176, 229 Healthcare system, 163 Hermeneutics, 13, 24, 25, 37, 62, 71–83 Historia novorum (work), 31, 33 historical, 23, 24 Historicism, 24, 42 Humanities, 23, 25, 51, 52 Human rights protection, 92, 93 I Identity, vi, 28, 30, 67, 97 Image data, 17, 195, 196 Imitation, 150 implicit, 17, 166, 185 independent, 164 Indicator, 95, 97, 109–110, 115–116, 158, 192, 195, 232 Individuality, vi, 29, 161–176 Influencing variables, 162, 163, 166, 167 Information system, 57 Innovation, 26, 147, 193, 198, 200 Intensity, 32, 98, 99, 168, 173, 180, 196, 242 Interdisciplinarity, vii, viii, 8, 23, 243 Interfering stimulus, aversive, 163 International Cancer Genome Consortium (ICGC), 134 International law, 85–101, 137 Internet, 16, 133, 179, 190, 229 Intervention ban, 92, 93

Introspection, 13, 73, 77, 167 Investiture conflict, 30, 33 Iwein (work), 26, 29 K Knowledge, v–vii, 4, 6, 12, 13, 15–18, 21, 23, 24, 34–36, 40, 41, 45–47, 51, 61–69, 71, 72, 76, 78–82, 91, 98, 101, 104, 105, 107, 110, 111, 113, 115, 116, 122, 123, 150, 151, 153, 159, 164, 172, 173, 175, 176, 180, 185, 187, 201–215, 219–224, 228–235, 237, 238, 241–243 Knowledge, historical, 24, 40 L Laboratory examination, 168 Level of significance, 121, 125, 226, 227 Liability, 141, 144, 146, 147 Linguistics, 68, 71–83 Literature (poetry), 22, 23 local, 180, 187 Location-scale, 126, 128 M Maastricht, Treaty of, 109 Machine, 197 Magnetic resonance imaging (MRI), functional, 16, 150, 168, 210, 227 Mapping, 178, 187 Maps/cartography, 45, 183, 187, 188 Market efficiency hypothesis, 124, 126 Market microstructure noise, 128 Matrix, 196 Measurability, 110, 115, 159 Measurement and discretion, 27, 30, 34 Measures, 88, 142, 166 Medieval retelling (dilatatio materiae; abbreviatio materiae), 26 Mediocrity, 193

Methodological discipline, 179, 183 Middle Ages, 12, 21–35, 44–48 Mirror neuron system, 16, 149–159, 170 Modality(ies) of perception, 36 Model, vii, 7, 13, 27, 47, 53, 61, 67, 75, 86, 91, 111n19, 120, 133, 149, 162n3, 175, 201, 211, 225, 237 Modelling, mathematical, 17, 201–215 modern and postmodern, 23 molecular, 163 Monetarization, 12, 46, 89, 100 Money, 12, 46–48, 238 Motivation, 40, 162, 213, 221, 237 N Narratology, historical, 12, 25–37 Natural hazard analysis, 16, 17, 185, 186 Natural hazards, 17, 177–190 Network analysis, historical, 12, 25–37 Network model, neuronal, 16, 157–159 Neurobiological, 157, 163–164, 168, 170 Neurotransmitter, 155–159 New Criticism, 55 Nibelungenlied (work), 26, 29 Normal distribution, 126, 127 Number symbolism, 17, 194 O Objectivity, 14, 36, 40, 86, 110–113 Operationalization, 14, 85, 96, 108, 116, 151–152 Optimization, 64, 158, 210–212 Ordinal scale, 75, 182 P Pain, 16, 161–176, 227, 230, 238, 241–242 Pan-Cancer Analysis of Whole Genomes Project (PCAWG), 132, 134, 136 Paradigm, 72 Parameter, 121, 211, 228 Patients, 131–147, 164–175, 227, 242 Patterns, recognize, 17, 192, 195 Perception, subjective, 16, 172, 173

Perspective construction, 192, 194 Phenomenon, 28, 41, 43, 45, 64, 66, 68, 76, 105, 106, 109, 122, 151, 172, 179–181, 190, 240, 243 Pixels, 195, 196 Planning, 7, 87, 163, 165, 172, 211 Pluralism of methods, 42 Policy advice, 96, 109, 111, 112 Polis, 107 political, 214 Political, 90, 112–114 Positivism, 23 Predictability, 74 Price development, 194 Prices, 119–130, 194, 240, 242 Private standard setting, 143 Process, 12, 24, 42, 53, 62, 71, 86, 106, 119, 131, 149, 162, 183, 196, 202, 204–206, 223, 233, 239 Processing, neurobiological, 163 Proportionality, 93n35, 249, 257 Purchase sums, 194 P-value, 121, 122 Q Qualitas, 12, 13, 36, 37, 76, 77 Quantitas, 12, 13, 36, 37, 76, 77 Quantity, 7, 15, 22, 36, 71, 77, 135, 173, 181, 226 R Rationalization, 7, 14, 109, 110 Reading, 49, 52–60, 67, 75, 231n34, 240 Reception, 17, 44, 69, 99 Respond, 62, 85, 101, 152–158, 171, 222 Responsibility, 141, 239 Responsibility to protect, 93, 94 Right to be forgotten, 136 Roman d'Eneas (work), 26 S Safe Harbor Agreement, 137–139 Sales results, 194

Subject Index Sanctions, 14, 85–101, 125, 144 Scientification, 109–113, 145 Search result, 56, 198 Self-observation, 167 Self-regulation, 15, 143–146 Sense-making/-foundation, 55 Significance, 15, 121–123, 125, 153, 172–173, 226–228 Similarity, v, 11, 17, 18, 45, 53, 54, 59, 196–198 Simulation, Monte Carlo, 127 Small scale, 194 Software, 32, 133, 209–213, 229 Song of Roland (work), 26, 29 Sovereignty, 93, 94 Standards, 112, 136, 141–147, 163–165, 172–173 State of the art and science, 145 stochastic, 204 Stimulus intensity, 168 Structure, 14, 21, 24, 28, 31–36, 32n17, 42, 45, 53, 60, 64, 67, 68, 75, 86, 87, 93, 106, 106n4, 108n11, 114, 115, 121, 128, 143n33, 167, 179, 194, 200, 204, 210, 225, 226, 233, 239 Subjectivity, vi, 182 Subset, 197 Surveying, 47, 48, 177 T Temperature, 16, 120, 161–176, 207, 242 Text (-elements), 53, 54 Text interpretation, 54

Theories and approaches, 4, 22–25 thick reading, 52–55 Time series data, 127 Total revenues, 194 Transcranial magnetic stimulation (TMS), 158 Transdisciplinarity, 67, 76, 77, 106 Transformation, 12, 31, 33, 196, 200 Tristan (work), 29 Truth, 40, 107, 111, 115 Turn, 5, 24, 25, 52 V Validation, 157, 170, 206 Valuation, normative, 98 Value at Risk (VaR), 124–129 Variable, 80, 88, 89, 98, 101, 120, 121, 126, 142, 162–168, 171–173, 181, 189, 204, 204n7, 240 Vita Anselmi (work), 30–33 Volatility, 124–129 W Weight formula, 97–101 Work performance, 162 Workplace, 161, 162, 168, 227 Y Yield, 13, 59, 175, 202 Yvain (work), 26